
China and Europe bid for post-LHC collider

Big thinking

The discovery of the Higgs boson at the LHC in the summer of 2012 set particle physics on a new course of exploration. While the LHC experiments have determined many of the properties of the Higgs boson with a precision beyond expectations, and will continue to do so until the mid-2030s, physicists have long planned a successor to the LHC that can further explore the Higgs mechanism and other potential sources of new physics. Several proposals are on the table, the most ambitious and scientifically far-reaching of which involve circular colliders with a circumference of around 100 km.

On 18 December, the Future Circular Collider (FCC) study released its conceptual design report (CDR) for a 100 km collider based at CERN. A month earlier, the Institute of High Energy Physics (IHEP) in China officially presented a CDR for a similar project called the Circular Electron Positron Collider (CEPC). Both studies were launched shortly after the discovery of the Higgs boson (the FCC was a direct response to a request from the 2013 update of the European Strategy for Particle Physics to prepare a post-LHC machine, following preliminary proposals for a circular Higgs factory at CERN in 2011), and both envisage physics programmes extending deep into the 21st century. Documents concerning the FCC and CEPC proposals were also submitted as input to the latest update of the European Strategy for Particle Physics at the end of the year (see Input received for European strategy update).

“If a high-luminosity electron–positron Higgs factory were to drop out of the sky tomorrow, the line of users would be very long, while a very-high-energy hadron collider is a vessel of discovery and will help us study the role of the Higgs boson in taming the high-energy behaviour of longitudinal gauge-boson (WW) scattering,” says theorist Chris Quigg of Fermilab in the US. “It is a very significant validation of the scientific promise opened by a 100 km ring for scientists of different regions to express the same judgment.”

The four-volume FCC report demonstrates the project’s technical feasibility and identifies the physics opportunities offered by the different collider options: a high-luminosity electron–positron collider (FCC-ee) with a centre-of-mass energy ranging from the Z pole (90 GeV) to the tt̅ threshold (365 GeV); a 100 TeV proton–proton collider (FCC-hh); a future lepton–proton collider (FCC-eh); and, finally, a higher-energy hadron collider in the existing tunnel (HE-LHC). The FCC is a global collaboration of more than 140 universities, research institutes and industrial partners. During the past five years, with the support of the European Commission’s Horizon 2020 programme, the FCC collaboration has made significant advances in high-field superconducting magnets, high-efficiency radio-frequency cavities, vacuum systems, large-scale cryogenic refrigeration and other enabling technologies (see A giant leap for physics).

According to the present proposal, an eight-year period for project preparation and administration is required before construction of FCC’s underground areas can begin, potentially allowing the FCC-ee physics programme to start by 2039. The FCC-hh, installed in the same tunnel, could then start operations in the late 2050s. “Though the two machines can be built independently, a combined scenario profits from the extensive reuse of civil engineering and technical systems, and also from the additional time available for high-field magnet breakthroughs,” says deputy leader of the FCC study Frank Zimmermann of CERN. “Timely preparation, early investment and diverse collaborations between researchers and industry are already yielding promising results and confirming the anticipated downward trend in the costs associated with operation.”

Asian ambition

CEPC is a putative 240 GeV circular electron–positron collider, the tunnel for which is foreseen to one day host a super proton–proton collider (SppC) that reaches energies beyond the LHC (CERN Courier June 2018 p21). The two-volume CEPC design report summarises the work accomplished in the past few years by thousands of scientists and engineers in China and abroad. IHEP states that construction of CEPC will begin as soon as 2022 – allowing time to build prototypes of key technical components and establish support for manufacturing – and be completed by 2030. According to the tentative operational plan, CEPC will run for seven years as a Higgs factory, followed by two years as a Z factory and one year at the WW threshold, potentially followed by the installation of the SppC. Although CEPC–SppC is a Chinese-proposed project to be built in China, it has an international advisory committee and more than 20 agreements have been signed with institutes and universities around the world.

“The Beijing Electron Positron Collider will stop running in the 2020s, and China’s government is encouraging Chinese scientists to initiate and work towards large international science projects, so it is possible that CEPC may get a green light soon,” explains deputy leader of the CEPC project Jie Gao of IHEP. “As for the site, many Chinese local governments have shown strong interest in hosting CEPC with the support of the central government.”

Cost is a key factor for both the Chinese and European projects, with the tunnel accounting for a large fraction of the expense. CEPC’s price tag is currently $5 billion and FCC-ee is hovering at around twice this value, while, at present, a hadron collider on either continent would cost significantly more, driven largely by the required superconducting wire. Geoffrey Taylor of the University of Melbourne, who is chair of the International Committee for Future Accelerators, says that CERN has the major benefit of expertise in magnets and in high-energy collider development and operation, in addition to already having the multi-billion-dollar accelerator infrastructure required for the project. “The value of this infrastructure at CERN outweighs the cost of the tunnel; on the other hand, the Chinese proposal has a lower cost of tunnelling but lacks the immense infrastructure and expertise necessary for the hadron collider.”

Taylor says that whilst it is essential that CERN maintains its pre-eminent position, having competition from Asia with the potential for major investment would be beneficial for the field as a whole because Western investment in future machines may well remain at current levels. There are also broader cultural factors to be considered, says Quigg: “CERN has earned an exemplary reputation for inclusiveness and openness, which go hand in hand with scientific excellence. Any region, nation, and institution that aims to host a world-leading instrument must strive for a similar environment.”

For theorist Gerard ’t Hooft, who shared the 1999 Nobel Prize in Physics for elucidating the quantum structure of electroweak interactions, the physics target of a 100 km collider is far more important than its location. It is not obvious, in view of our present theoretical understanding, whether or not a 100 km accelerator will be able to force a breakthrough, he says. “Most theoreticians were hoping that the LHC might open up a new domain of our science, and this does not seem to be happening. I am just not sure whether things will be any different for a 100 km machine. It would be a shame to give up, but the question of whether spectacular new physical phenomena will be opened up and whether this outweighs the costs, I cannot answer. On the other hand, for us theoretical physicists the new machines will be important even if we can’t impress the public with their results.”

Profound discoveries

Experimentalist Joe Incandela of the University of California in Santa Barbara, who was spokesperson of the CMS experiment at the time of the Higgs-boson discovery, believes that a post-LHC collider is needed for closure – even if it does not yield new discoveries. “While such machines are not guaranteed to yield definitive evidence for new physics, they would nevertheless allow us to largely complete our exploration of the weak scale,” he says. “This is important because it is the scale where our observable universe resides, where we live, and it should be fully charted before the energy frontier is shut down. Completing our study of the weak scale would cap a short but extraordinary 150-year period of profound experimental and theoretical discoveries that would stand for millennia among mankind’s greatest achievements.”

Exploring the spin of top-quark pairs

Fig. 1.

One of the most fascinating particles studied at the LHC is the top quark. The heaviest elementary particle known to date, the top quark lives for less than a trillionth of a trillionth of a second (10⁻²⁴ s) and decays long before it can form hadrons. It is therefore the only quark that can be studied as a bare quark. This allows physicists to explore its spin, the quark’s intrinsic quantum angular momentum. The spin of the top quark can be inferred from the particles it decays into: a bottom quark and a W boson, which subsequently decays into leptons or quarks.

The CMS collaboration has analysed proton–proton collisions in which pairs of top quarks and antiquarks are produced. The Standard Model (SM) makes precise predictions for the frequency at which the spin of the top quark is aligned with (or correlated to) the spin of the top antiquark. A measurement of this correlation is thus a highly sensitive test of the SM. If, for example, an exotic heavier Higgs boson were to exist in addition to the one discovered in 2012 at the LHC, it could decay into a pair of top quarks and antiquarks and change their spin correlation significantly. A high-precision measurement of the spin correlation therefore opens a window to explore physics beyond our current knowledge.

The CMS collaboration studied more than one million top-quark–antiquark pairs in dilepton final states recorded in 2016. To probe all the spin and polarisation effects accessible in top-quark–antiquark pair production, nine event quantities sensitive to the top-quark spin correlations and three sensitive to the top-quark polarisation were measured. The measured observables were corrected for experimental effects (“unfolded”) and directly compared to precise theoretical predictions.
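The unfolding step can be pictured with a deliberately minimal sketch (a toy example with invented numbers, not the CMS procedure, which uses regularised unfolding of multi-dimensional distributions): the detector response is encoded in a matrix that smears the particle-level distribution, and the measurement is corrected by inverting that response.

```python
import numpy as np

# Toy unfolding sketch (illustrative only, not CMS analysis code).
# Particle-level ("true") spectrum in three bins of a spin-sensitive observable.
true = np.array([100.0, 300.0, 200.0])

# Response matrix: R[i, j] = probability that an event generated in true
# bin j is reconstructed in measured bin i (bin-to-bin migrations).
R = np.array([
    [0.8, 0.1, 0.0],
    [0.2, 0.8, 0.2],
    [0.0, 0.1, 0.8],
])

measured = R @ true                      # what the detector would record
unfolded = np.linalg.solve(R, measured)  # invert the response ("unfold")

print(measured)   # smeared, detector-level counts
print(unfolded)   # recovers [100., 300., 200.] in this noise-free toy
```

In a real measurement the statistical fluctuations of the data make naive inversion unstable, which is why regularisation and a careful treatment of uncertainties are essential; the toy above only illustrates the idea of correcting back to particle level.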

The observables studied in this analysis show good agreement between data and theory, for example showing no angular dependence for unpolarised top quarks (see figure 1, left). A moderate discrepancy is seen in one of the measured distributions sensitive to spin correlations (the azimuthal opening angle between the two leptons) with respect to one of the Monte Carlo simulations (POWHEGv2+PYTHIA). This discrepancy is consistent with an observation made by the ATLAS collaboration last year, although CMS finds that other simulations (“MG5_aMC@NLO”) and calculations that should give similar results agree with the data within the uncertainties.

In summary, good agreement with the SM prediction is observed in the CMS data, except for one particular but commonly used observable, suggesting that further input from theory calculations is probably needed. The full Run-2 data set already recorded by CMS contains four times more top quarks than were used for this result. This larger sample will allow an even more precise measurement, increasing the chances of a first glimpse of new physics.

Real-time triggering boosts heavy-flavour programme

Fig. 1.

A report from the LHCb collaboration

Throughout LHC Run 2, LHCb has been flooded by b- and c-hadrons due to the large beauty and charm production cross-sections within the experiment’s acceptance. To cope with this abundant flux of signal particles and to fully exploit them for LHCb’s precision flavour-physics programme, the collaboration has recently implemented a unique real-time analysis strategy to select and classify, with high efficiency, a large number of b- and c-hadron decays. Key components of this strategy are a real-time alignment and calibration of the detector, allowing offline-quality event reconstruction within the software trigger, which runs on a dedicated computing farm. In addition, the collaboration took the novel step of only saving to tape interesting physics objects (for example, tracks, vertices and energy deposits), and discarding the rest of the event. Dubbed “selective persistence”, this substantially reduced the average event size written from the online system without any loss in physics performance, thus permitting a higher trigger rate within the same output data rate (bandwidth). This has allowed the LHCb collaboration to maintain, and even expand, its broad programme throughout Run 2, despite limited computing resources.


The two-stage LHCb software trigger is able to select heavy-flavoured hadrons with high purity, leaving event-size reduction as the handle to reduce trigger bandwidth. This is particularly true for the large charm trigger rate, where saving the full raw events would result in a prohibitively high bandwidth. Saving only the physics objects entering the trigger decision reduces the event size by a factor of up to 20, allowing larger statistics to be collected at constant bandwidth. Several measurements of charm production and decay properties have already been made using only this information. The sets of physics objects that must be saved for offline analysis can also be chosen “à la carte”, opening the door to further bandwidth savings for inclusive analyses too.
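A back-of-the-envelope sketch of the bandwidth arithmetic (the numbers below are illustrative placeholders, not LHCb specifications): because the output bandwidth is the product of trigger rate and persisted event size, shrinking the event by a factor of 20 buys roughly 20 times more recorded signal at a fixed bandwidth.

```python
# Toy trigger-bandwidth arithmetic: bandwidth = rate x persisted event size.
# All numbers are illustrative placeholders, not LHCb specifications.
bandwidth_kB_per_s = 0.7e6    # assumed fixed output bandwidth to storage
full_event_kB = 60.0          # assumed size of a fully persisted raw event
reduction_factor = 20         # selective persistence shrinks the event

full_rate_kHz = bandwidth_kB_per_s / full_event_kB / 1e3
slim_rate_kHz = full_rate_kHz * reduction_factor

print(f"full events: ~{full_rate_kHz:.0f} kHz at the fixed bandwidth")
print(f"slim events: ~{slim_rate_kHz:.0f} kHz at the same bandwidth")
```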

For the LHCb upgrade (see LHCb’s momentous metamorphosis), when the instantaneous luminosity increases by a factor of five, these new techniques will become standard. LHCb expects that more than 70% of the physics programme will use the reduced event format. The full software trigger, combined with real-time alignment and calibration, along with the selective persistence pioneered by LHCb, will likely become the standard for very high-luminosity experiments. The collaboration is therefore working hard to implement these new techniques and ensure that the current quality of physics data can be equalled or surpassed in Run 3.

A giant leap for physics

Mind the gap

Particle physics has revolutionised our understanding of the universe. The experimental and theoretical tools developed in the 20th century delivered the Standard Model of particle physics, the particle content of which was completed in 2012 with the discovery of the Higgs boson at the LHC. And, yet, this hugely powerful theory leaves several observations unexplained. In solving mysteries such as the nature of dark matter, the origin of neutrino masses, the dominance of matter over antimatter on cosmological scales, and the low mass of the Higgs boson itself, physicists could open a completely new view of nature. Therefore, it is high time to start planning a new collider that maintains this rich course of exploration throughout the 21st century.

In late 2018 the Future Circular Collider (FCC) collaboration published a conceptual design report (CDR) addressing this need. A similar proposal is also under development in China (CERN Courier June 2018 p21). In more than 1000 pages distributed over four volumes, the FCC CDR covers all aspects of the project, including technologies, detector design, physics goals and civil-engineering considerations. But what changes when we move from a 27 km tunnel to a new 100 km-long one, and what stays the same? The obstacles to new colliders pushing the current energy and intensity frontiers are many, yet the past five years have seen the international FCC study steadily break them down.

Lessons learned

The FCC design report shows that CERN’s existing accelerator chain can serve as the foundation for a 100 km post-LHC machine, while also opening a rich fixed-target programme. The new 100 km infrastructure is indeed enormous, representing a four-fold increase in dimensions compared to the LHC. But, taking history as a guide, it should be possible: this jump in scale is similar to that made in the 1980s to move from the Super Proton Synchrotron (SPS) to the Large Electron–Positron collider (LEP) and eventually to the LHC, allowing the completion of the Standard Model. Jumping to larger and more complex machines always comes with new challenges, but these translate precisely into opportunities for young researchers and industry (CERN Courier September 2018 p51).

FCC-ee

A 100 km tunnel offers three main collider options. The most straightforward in terms of technological readiness is a luminosity-frontier lepton collider (FCC-ee) that will deliver unprecedented collision rates in a clean environment at specific energies corresponding to the Z pole (91 GeV), the WW threshold (161 GeV), Higgs production (240 GeV), and the top quark–antiquark threshold (350 to 365 GeV). By filling the FCC tunnel with new superconducting magnets twice the strength of the LHC’s (16 T as opposed to 8 T), however, a hadron collider called FCC-hh can be built with a collision energy of 100 TeV – an order-of-magnitude higher than the LHC. The FCC study, which was formally launched in early 2014, also explores the option of a proton–electron collider (FCC-he) that could run in parallel with FCC-hh, and a high-energy LHC based on high-field magnets installed in the current LHC tunnel (CERN Courier June 2018 p15).

The cost of future colliders is a major issue, and concerted value-engineering of all aspects from individual components through sustainability to logistics is required. Cost estimates for FCC construction and operation are detailed in the CDR, although the range of collider modes, staging approaches and technology choices make it difficult to place a single figure on each machine. Construction on a site with an existing infrastructure, as offered by CERN, is a major cost advantage in terms of capital investment, sharing of infrastructure and breadth of the overall physics programme. The sequence of FCC-ee and FCC-hh would also resemble the successful staging of LEP and the LHC: a lepton–lepton machine followed by a hadron collider (both for protons and heavy ions). In the case of the FCC, possibly even a future muon collider could then follow as a third stage.

Fig. 1.

FCC-ee is a dream machine for precision measurements, taking the successful LEP scheme into entirely new territory (figure 1). Precise measurements of the properties of the Z, W and Higgs boson and the top quark, together with much improved measurements of other input parameters to the Standard Model such as the electromagnetic and strong coupling constants, would provide sensitivity to new particles with masses in the range 10–70 TeV.

Common lattice

The bulk of FCC-ee will comprise around 8000 normal-conducting low-power and cost-effective twin-aperture dipole magnets, 3000 focusing magnets and between 26 (Z pole) and 161 (tt̅ threshold) four-cavity radio-frequency (RF) cryomodules, to compensate for the energy loss from synchrotron radiation and provide the required accelerating voltage. Currently, two interaction points are planned for high-luminosity FCC-ee operations, though up to four can be accommodated. A common FCC-ee lattice has been designed for all energy stages except for the highest energy tt̅ threshold, where a small rearrangement of the beamline passing through the RF cavities will be needed. The basic cell of the FCC-ee lattice has been chosen for operation at a beam energy of 182.5 GeV and combines four dipole magnets and two main quadrupoles in a 50 m-long section. Moreover, to achieve the required high luminosities, the vertical beta function at the interaction points (called βy*) has to be very small (0.8 mm) at the Z pole, which is 50 times smaller than for LEP but about three times larger than for the SuperKEKB accelerator now being commissioned in Japan. The reduction in βy* is possible because of technological innovations during the past three decades (such as local chromatic correction of the final-quadrupole doublet and use of a crab-waist collision scheme) and thanks to the large size of the ring.

Racetrack coil

Indeed, achieving the unprecedented FCC-ee luminosity of up to 4 × 10³⁶ cm⁻² s⁻¹ (the total for two experiments), while minimising the amount of synchrotron radiation near the detector, called for considerable effort in designing the final-focus system. Combined with a small crossing angle of 30 mrad, the minimum distance from the interaction point to the first quadrupole is 2 m, which is a compromise between beam dynamics and detector constraints. The present optics design has a momentum acceptance of around 2%, which is one of the most critical requirements of the FCC-ee design because it determines the beam lifetime.
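The role of the tiny vertical beta function can be seen from the textbook expression for the luminosity of head-on collisions of Gaussian bunches (a simplified form that neglects the crossing-angle, hourglass and beamstrahlung effects that matter in the detailed FCC-ee design):

\[
\mathcal{L} \;=\; \frac{n_b\,N_1 N_2\,f_{\mathrm{rev}}}{4\pi\,\sigma_x^{*}\,\sigma_y^{*}},
\qquad
\sigma_y^{*} \;=\; \sqrt{\varepsilon_y\,\beta_y^{*}},
\]

where n_b is the number of colliding bunches, N_1 and N_2 the bunch populations and f_rev the revolution frequency. At fixed vertical emittance, the factor-of-50 reduction in βy* relative to LEP shrinks the vertical spot size by a factor of about seven, with a corresponding gain in luminosity.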

A distinct feature of FCC-ee, in contrast to LEP, is the use of separate beam pipes for the two counter-rotating electron and positron beams, based on energy-efficient dual-aperture main magnets (pictured above). The two separate rings allow operation with a large number of bunches – up to around 16,000 at the Z pole – by avoiding parasitic collisions. This approach also allows for a well-centered orbit all around the ring and a nearly perfect mitigation of the energy “sawtooth” at the highest tt̅ energies. A so-called tapering scheme is foreseen, which will enable the strengths of all the magnets to be scaled according to the local energy of the electron and positron beams, taking into account any differences in the energy loss due to synchrotron radiation. Also distinct from LEP, a top-up injection scheme has been designed for FCC-ee to maximise the integrated luminosity, whereby electrons and positrons are injected into the machine by a full-energy booster to maintain a constant high beam current.

Beating the fourth power

When moving to a larger radius and higher energies, one of the key obstacles for colliders is the synchrotron radiation emitted by the accelerated particles because the resulting energy loss increases with the fourth power of a charged particle’s energy. Improving energy efficiency is critical for any future big accelerator, and the development of high-efficiency RF power sources, along with robust higher-gradient superconducting cavities, is at the core of the FCC programme. The cavities can be produced, for example, by applying a thin superconducting film on a copper substrate, as is currently being pursued by CERN in collaboration with global partners (CERN Courier May 2018 p26). To achieve a low power consumption and guarantee sustainable operation, a high conversion efficiency from wall-plug to RF power is critical. The FCC target RF operation efficiency is 65%, profiting from recent innovations in klystron design at CERN.
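The fourth-power scaling can be made concrete with the classical formula for the energy lost per turn by an electron of energy E in a ring of bending radius ρ (a back-of-the-envelope estimate; the bending radius of roughly 10 km assumed below for FCC-ee is approximate):

\[
U_0\,[\mathrm{GeV}] \;\approx\; 8.85\times10^{-5}\;\frac{\bigl(E\,[\mathrm{GeV}]\bigr)^{4}}{\rho\,[\mathrm{m}]},
\]

which gives about 3 GeV per turn for LEP at 100 GeV (ρ ≈ 3.1 km) and roughly 9–10 GeV per turn for FCC-ee at 182.5 GeV despite the far larger ring, which is why high-efficiency RF power sources and cavities are so central to the design.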

For FCC-ee to fulfil its promise of precision electroweak measurements, it is also vital that physicists can accurately determine its centre-of-mass energy, so that the Z mass can be measured with a relative precision of 3 × 10⁻⁵, the total Z width with a precision of 0.1 MeV and the W mass to within 0.5 MeV. A strategy based on the resonant-depolarisation technique, as used at LEP, guarantees precise energy measurements every 15–20 minutes for both the electron and positron beams.
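Resonant depolarisation works because the number of spin precessions per turn (the spin tune) of a stored electron is tied to its energy through the electron’s anomalous magnetic moment a_e, a relation already exploited at LEP:

\[
\nu_s \;=\; a_e\,\gamma \;=\; \frac{E}{0.4406486\ \mathrm{GeV}},
\]

so locating the frequency at which a weak oscillating field destroys the transverse beam polarisation determines ν_s, and hence the average beam energy, with the precision required for the Z and W targets above.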

The design of the FCC-ee detectors is also described in the FCC design report. Because of the beam crossing angle, the detectors’ solenoid field is limited to 2 T to limit its impact on the luminosity via the synchrotron radiation emitted within the solenoid field. Two detector concepts have been optimised for the FCC-ee: CLD, a consolidated option based on the detector developed for CLIC, with a silicon tracker and a highly granular 3D-imaging calorimeter; and IDEA, a bolder, possibly more cost-effective design with a short-drift wire chamber and a dual-readout calorimeter. However, specific detector-technology choices will be made at a later date.

Following the operation of FCC-ee, the same tunnel could host a 100 TeV proton collider, FCC-hh. A very large, circular hadron collider is the only feasible approach to reach significantly higher collision energies than the LHC (13-14 TeV) in the coming decades. A 100 TeV collider would offer access to new particles through direct production in the few-TeV to 30 TeV mass range, far beyond the LHC’s reach. It would also provide much higher rates for phenomena in the sub-TeV mass range and therefore much greater precision on key measurements (CERN Courier May 2017 p34).

Beam screen

Within 25 years of operation, FCC-hh could accumulate an integrated luminosity of around 20 ab⁻¹ in each of the two main experiments. FCC-hh also offers the possibility of colliding heavy ions with protons and heavy ions with heavy ions, adding to its physics opportunities. Reaching the physics goals of such a collider requires a machine availability of about 70%, which is comparable to what has been routinely reached with the LHC. Nevertheless, considering the increased machine complexity and the introduction of an additional machine in the injector chain in the FCC baseline scenario, achieving this target availability poses major challenges.
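For orientation, the integrated-luminosity goal corresponds to a time-averaged luminosity of order 10³⁵ cm⁻² s⁻¹ (a rough estimate; the 10⁷ seconds of useful physics time per year assumed below is a common accelerator rule of thumb, not an FCC specification).

```python
# Rough average luminosity implied by ~20 ab^-1 per experiment over 25 years.
# The physics time per year is an assumed rule of thumb, not an FCC figure.
goal_ab = 20.0                     # integrated-luminosity goal per experiment
years = 25
physics_seconds_per_year = 1.0e7   # assumed useful physics time per year

per_year_fb = goal_ab * 1000 / years                       # ~800 fb^-1 per year
avg_lumi = per_year_fb * 1e39 / physics_seconds_per_year   # 1 fb^-1 = 1e39 cm^-2
print(f"~{per_year_fb:.0f} fb^-1 per year, average luminosity ~{avg_lumi:.1e} cm^-2 s^-1")
```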

FCC-hh is envisioned to lie adjacent to the LHC and SPS, with two injection insertions so that protons can be injected from either the LHC or the SPS tunnel. In the first case, the beam will be injected at an energy of 3.3 TeV from the LHC (which requires, in addition to new transfer lines and extraction systems, some modifications to allow the LHC to be ramped five times faster than today). In the second case, a new superconducting SPS – from which other experiments would also profit – could provide a beam at 1.3 TeV using fast ramping and cost-effective 6 T superconducting magnets. The FCC design report presents a complete lattice for FCC-hh that is consistent with this layout and the required energy reach. The arc lattice consists of around 500 cells, each 200 m long and made up of two short straight sections and 12 cryo-dipoles, each comprising one 14 m-long dipole and one 0.11 m-long sextupole corrector. Integrated studies of the lattice performance are ongoing and will inform the final choice of magnet design, along with considerations of power efficiency and cost.

Reducing costs

The biggest cost in reaching higher energies is that of the magnets. A primary goal of FCC-hh is to build 16 T superconducting magnets that are three to five times more cost-effective per TeV than those of the LHC. Achieving this goal would impact many accelerator applications outside physics, from medical treatments to food-quality monitoring and energy storage and distribution. The FCC study has recently launched a global conductor R&D programme involving collaborators from the US, Russia, Europe, Japan and Korea to improve the performance of niobium–tin conductor and to reduce its cost.

The FCC-hh foresees two high-luminosity experiments, for which a key design challenge is to obtain the target values of βy* at the collision points while protecting the detectors and the magnets from the collision debris. Incredibly, FCC-hh will produce a pile-up of up to 1000 events per bunch crossing, compared to around 200 at the HL-LHC. Another major challenge for FCC-hh is the beam-dump system that protects the machine components. Each of the two rings will have to reliably abort proton beams with stored energies of around 8 GJ, which is more than an order of magnitude higher than for the HL-LHC. Beam extraction at the FCC has to be fast, and the first prototypes of new kicker-generator and superconducting-septum technologies are now being tested.

Synchrotron radiation is also an issue, since the FCC-hh beams will emit about 5 MW at 100 TeV, calling for a novel beam screen held at a temperature of 50 K (compared with 5–20 K at the LHC). The FCC-hh beam screen, a prototype of which is shown left, enables cost-effective heat removal and maintains the high-quality vacuum while shielding the cold bore from the beam’s synchrotron radiation. Finally, cooling the FCC-hh superconducting magnets poses entirely new challenges compared to the LHC. In addition to the higher synchrotron radiation, the cooling system (which, like the LHC’s, will use liquid helium at 1.9 K) will have to cope with the higher heat dissipated inside the cold magnets as well as from the cold bore itself. About 100 MW of total cooling power will be required to remove 5 MW of synchrotron radiation heat (see China and Europe bid for post-LHC collider).
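The factor of roughly 20 between electrical cooling power and extracted heat is what simple thermodynamics suggests for refrigeration at 50 K (a rough estimate, assuming the heat is rejected at about 300 K and that the plant reaches a few tens of per cent of the ideal Carnot efficiency):

\[
P_{\mathrm{elec}} \;\gtrsim\; \frac{\dot{Q}}{\eta}\,\frac{T_{\mathrm{amb}}-T_{\mathrm{cold}}}{T_{\mathrm{cold}}}
\;=\; \frac{5\ \mathrm{MW}}{\eta}\times\frac{300\ \mathrm{K}-50\ \mathrm{K}}{50\ \mathrm{K}}
\;\approx\; \frac{25\ \mathrm{MW}}{\eta},
\]

so an overall efficiency η of around 25% of the Carnot limit lands close to the quoted 100 MW.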

Coordinating the future

For almost 90 years, progress in particle physics has gone hand-in-hand with progress in accelerators. Today, capitalising on the great success of the LHC, the field faces pivotal decisions about what collider to build next. Advancing the enabling technologies for a future circular collider can only be done via a coordinated international effort between universities, research centres and industry. It also calls for smart solutions to ensure reliability and sustainability. The results of these efforts are documented in the four volumes of the FCC conceptual design report, which presents a clear route to a post-LHC machine and also serves as an input to the update of the European Strategy for Particle Physics.

The FCC offers great potential for curiosity-driven research with unimaginable consequences. Discoveries of new particles and forces not only alter our perspective of humankind’s position in the universe, but also, either directly or via the technology that made them possible, lead to radical applications that improve our quality of life. In the present age of political turbulence and rapid change, we are proposing an ambitious future accelerator complex to push the boundaries of knowledge and to optimally prepare future generations for the challenges they are sure to face. 

Report summarises dark-sector exploration

Fig. 1.

A report from the ATLAS experiment

In our current understanding of the energy content of the universe, there are two major unknowns: the nature of a non-luminous component of matter (dark matter) and the origin of the accelerating expansion of the universe (dark energy). Both are supported by astrophysical and cosmological measurements but their nature remains unknown. This has motivated a myriad of theoretical models, most of which assume dark matter to be a weakly interacting massive particle (WIMP).

WIMPs may be produced in high-energy proton collisions at the LHC, and are therefore intensively searched for by the LHC experiments. Since dark matter is not expected to interact with the detectors, its production leaves a signature of missing transverse momentum (ETmiss). It can be detected if the dark-matter particles recoil against a visible particle X, which could be a quark or gluon, a photon, or a W, Z or Higgs boson. These are commonly known as X + ETmiss signatures. To interpret these searches, a variety of simplified models are used that describe dark-matter production kinematics with a minimal number of free parameters. These models introduce new spin-0 or spin-1 mediator particles that propagate the interaction between the visible and the dark sectors. Because the mediators must couple to Standard Model (SM) particles in order to be produced in the proton–proton collisions, the mediators can also be directly searched for through their decays to jets, top-quark pairs and potentially even leptons. For certain model parameters, these direct searches can be more sensitive than the X + ETmiss ones.

However, simplified models are not full theories like, for example, supersymmetry. Recent theoretical work has therefore focused on developing more complete, renormalisable models of dark matter, such as two-Higgs doublet models (2HDM) with an additional mediator particle. These models introduce a larger number of free parameters, allowing for a richer phenomenology.

Fig. 2.

Similarly, for dark energy, effective field theory implementations may introduce a stable and non-interacting scalar field that universally couples to matter. This also leads to a characteristic ETmiss signature at the LHC.

ATLAS has recently released a summary gathering the results from more than 20 experimental searches for dark matter and a first collider search for dark energy. The wide range of analyses gives good coverage of the different dark-matter models studied. For new models, such as the 2HDM with an additional pseudoscalar mediator, multiple regions of the parameter space are explored to probe the interplay between the masses, mixing angles and vacuum expectation values. For the 2HDM with an additional vector mediator, the resulting exclusion limits are further improved by combining the ETmiss + Higgs analyses in which the Higgs boson decays to a pair of photons or b-quarks. For the dark-energy models, two operators in the lowest-order effective Lagrangian allow for interactions between SM particles and the new scalar particles. These operators are proportional to the mass or momenta of the SM particles, making them most sensitive to the ETmiss + top–antitop or the ETmiss + jet final states.

To date, no significant excess over the SM backgrounds has been observed in any of the ATLAS searches for dark matter or dark energy. Limits on the simplified models are set in the plane of mediator mass versus dark-matter mass (figure 1), which can also be compared to those obtained by direct-detection experiments. For the 2HDM with a pseudoscalar mediator, limits are placed on the heavy pseudoscalar versus the mediator masses, highlighting the complementarity of different channels in different regions of the parameter space (figure 2). Finally, collider limits on the scalar dark-energy model (see Colliders join the hunt for dark energy) are also set; for the models studied, these improve on the limits obtained from astronomical observations and laboratory measurements by several orders of magnitude. With the full dataset of LHC collisions collected by ATLAS during Run 2, the sensitivity to these models will continue to improve.

First beam at SLAC plasma facility

First beam

FACET-II, a new facility for accelerator research at SLAC National Accelerator Laboratory in California, has produced its first electrons. FACET-II is an upgrade to the Facility for Advanced Accelerator Experimental Tests (FACET), which operated from 2011 to 2016, and will produce high-quality electron beams to develop plasma-wakefield acceleration techniques. The $26 million project, recently approved by the US Department of Energy (DOE), will also operate as a federally sponsored user facility for advanced accelerator research, open to scientists on a competitive, peer-reviewed basis.

“As a strategically important national user facility, FACET-II will allow us to explore the feasibility and applications of plasma-driven accelerator technology,” said James Siegrist of the DOE Office of Science. “We’re looking forward to seeing the groundbreaking science in this area that FACET-II promises, with the potential for a significant reduction in the size and cost of future accelerators, including free-electron lasers and medical accelerators.”

Whereas conventional accelerators impart energy to charged particles via radiofrequency fields inside metal structures, plasma-wakefield accelerators send a bunch of very energetic particles through a hot ionised gas to create a plasma wake on which a trailing bunch can “surf” and gain energy. This leads to acceleration gradients that are much higher and therefore potentially to smaller machines, but several crucial steps are required before plasma accelerators can become a reality. This is where FACET-II comes in, offering higher-quality beams than FACET, explains project scientist Mark Hogan. “We need to show that we’re able to preserve the quality of the beam as it passes through plasma. High-quality beams are an absolute requirement for future applications in particle and X-ray laser physics.”

Aerial view

SLAC has a rich history in developing such techniques, and the previous FACET facility enabled researchers to demonstrate electron-driven plasma acceleration for both electrons and positrons. FACET-II will use the middle third (corresponding to a length of 1 km) of SLAC’s linear accelerator to generate a 10 GeV electron beam, kitted out with diagnostics and computational tools that will accurately measure and simulate the physics of the new facility’s beams. The FACET-II design also allows for adding the capability to produce and accelerate positrons at a later stage, paving the way for plasma-based electron–positron colliders.

FACET-II has issued its first call for proposals for experiments that will run when the facility goes online in 2020. In mid-October, prospective users of FACET-II presented their ideas for a first round of experiments for evaluation, and the number of proposals is already larger than the number of experiments that can possibly be scheduled for the facility’s first run.

Last year, the AWAKE experiment at CERN demonstrated the first ever acceleration of a beam in a proton-driven plasma (CERN Courier October 2018 p7). Laser-driven plasma-wakefield acceleration is also receiving much attention thanks to advances in high-power lasers (CERN Courier November 2018 p7). “The FACET-II programme is very interesting, with many plasma-wakefield experiments,” says technical coordinator and CERN project leader for AWAKE, Edda Gschwendtner, who is also chair of the FACET-II programme advisory committee.

LHCb’s momentous metamorphosis

Tender loving care

In November 2018 the LHC brilliantly fulfilled its promise to the LHCb experiment, delivering a total integrated proton–proton luminosity of 10 fb⁻¹ from Run 1 and Run 2 combined. This is what LHCb was designed for, and more than 450 physics papers have come from the adventure so far. Having recently finished swallowing these exquisite data, however, the LHCb detector is due some tender loving care.

In fact, during the next 24 months of Long Shutdown 2 (LS2), the 4500 tonne detector will be almost entirely rebuilt. When it emerges from this metamorphosis, LHCb will be able to collect physics events at a rate 10 times higher than today. This will be achieved by installing new detectors capable of sustaining up to five times the instantaneous luminosity seen in Run 2, and by implementing a revolutionary software-only trigger that will enable LHCb to process events in an upgraded CPU farm at the frenetic rate of 40 MHz – a pioneering step among the LHC experiments.

Subdetector structure

LHCb is unique among the LHC experiments in that it is asymmetric, covering only one forward region. That reflects its physics focus: B mesons, which, rather than flying out uniformly in all directions, are preferentially produced at small angles (i.e. close to the beam direction) in the LHC’s proton collisions. The detector stretches for 20 m along the beam pipe, with its sub-detectors stacked behind each other like books on a shelf, from the vertex locator (VELO) to a ring-imaging Cherenkov detector (RICH1), the silicon upstream tracker (UT), the scintillating fibre tracker (SciFi), a second RICH (RICH2), the calorimeters and, finally, the muon detector.

The LHCb upgrade was first outlined in 2008, proposed in 2011 and approved the following year at a cost of about 57 million Swiss francs. The collaboration started dismantling the current detector just before the end of 2018 and the first elements of the upgrade are about to be moved underground.

Physics boost

The LHCb collaboration has so far made numerous important measurements in the heavy-flavour sector, such as the first observation of the rare decay B0s → µ+µ–, precise measurements of quark-mixing parameters and the observation of new baryonic and pentaquark states. However, many crucial measurements are currently statistically limited. The LHCb upgrade will boost the experiment’s physics reach by allowing the software trigger to handle an input rate around 30 times higher than before, bringing greater precision to theoretically clean observables.

Under construction

Flowing at an immense rate of 4 TB/s, data will travel from the cavern, straight from the detector electronics, via some 9000 optical fibres, each around 300 m long, into front-end computers located in a brand-new data centre that is currently nearing completion. There, around 500 powerful custom-made boards will receive the data and transfer them to thousands of processing cores. The current trigger hardware will be removed, and new front-end electronics have been designed for all the experiment’s sub-detectors to cope with the substantially higher readout rates.

For the largest and heaviest LHCb devices, namely the calorimeters and muon stations, the detector elements will remain mostly in place. All the other LHCb detector systems are to be entirely replaced, apart from a few structural frames, the dipole magnet, shielding elements and gas or vacuum enclosures.

Development

Subdetector activities

The VELO at the heart of LHCb, which allows precise measurements of primary and displaced vertices of short-lived particles, is one of the key detectors to be upgraded during LS2. Replacing the current system based on silicon microstrip modules, the new VELO consists of 26 tracking layers made from 55 × 55 µm² pixel technology, which offers better hit resolution and simpler track reconstruction. The new VELO will also be closer to the beam axis, which poses significant design challenges. A new chip, the VELOPIX, capable of collecting signal hits from 256 × 256 pixels and sending data at a rate of up to 15 Gb/s, was developed for this purpose. The pixel modules include a cutting-edge cooling substrate based on an array of microchannels, trenched out of a 260 µm-thick silicon wafer, that carry liquid carbon dioxide to keep the silicon at a temperature of –20 °C. This is vital to prevent thermal runaway, since these sensors will receive the heaviest irradiation of all LHC detectors. Prototype modules have recently been assembled and characterised in tests with high-energy particles at the Super Proton Synchrotron.

The RICH detector will still be composed of two systems: RICH1, which discriminates kaons from pions in the low-momentum range, and RICH2, which performs this task in the high-momentum range. The RICH mirror system, which is required to deflect and focus Cherenkov photons onto photodetector planes, will be replaced with a new one that has been optimised for the much increased particle densities of future LHC runs. RICH detector columns are composed of six photodetector modules (PDMs), each containing four elementary cells hosting the multi-anode photomultiplier tubes. A full PDM was successfully operated during 2018, providing first particle signals.

Mounted just between RICH1 and the dipole magnet, the upstream tracker (UT) consists of four planes of silicon microstrip detectors. To counter the effects of irradiation, the detector is contained in a thermal enclosure and cooled to approximately –5 °C using a CO2 evaporative cooling system. Lightweight staves, with a carbon-foam back-plane and embedded cooling pipe, are dressed with flex cables and instrumented with 14 modules, each composed of a polyimide hybrid circuit, a boron nitride stiffener and a silicon microstrip sensor.

VELO upgrade

Further downstream, nestled between RICH2 and the magnet, will sit the SciFi – a new tracker based on scintillating fibres and silicon photomultiplier (SiPM) arrays, which replaces the drift-straw detectors and silicon microstrip sensors used by the current three tracking stations. The SciFi represents a major challenge for the collaboration, not only due to its complexity, but also because the technology has never been used over such a large area in such a harsh radiation environment. More than 11,000 km of fibre was ordered, meticulously verified and even cured of a few rare, localised imperfections. From this, about 1400 mats of fibre layers were recently fabricated at four institutes and assembled into 140 rigid 5 × 0.5 m² modules. In parallel, SiPMs were assembled on flex cables and joined in groups of 16 with a 3D-printed titanium cooling tube to form sophisticated photodetection units for the modules, which will be operated at about –40 °C.

As this brief overview demonstrates, the LHCb detector is undergoing a complete overhaul during LS2 – with large parts being totally replaced – to allow this unique LHC experiment to deepen and broaden its exploration programme. CERN support teams and the LHCb technical crew are now busily working in the cavern, and many of the 79 institutes involved in the LHCb collaboration from around the world have shifted their focus to this herculean task. The entire installation will have to be ready for the commissioning of the new detector by mid-2020 so that it is ready for the start of Run 3 in 2021.

Standing out from the crowd

Big physics

Advances in particle physics are driven by well-defined innovations in accelerators, instrumentation, electronics, computing and data-analysis techniques. Yet our ability to innovate depends strongly on the talents of individuals, and on how we continue to attract and foster the best people. It is therefore vital that, within today’s ever-growing collaborations, individual researchers feel that their contributions are recognised adequately within the scientific community at large.

Looking back to the time before large accelerators, individual recognition was not an issue in our field. Take Rutherford’s revolutionary work on the nucleus or, more recently, Cowan and Reines’ discovery of the neutrino – there were perhaps a couple of people working in a lab, at most with a technician, yet acknowledgement was on a global scale. There was no need for project management; individual recognition was spot-on and instinctive.

As high-energy physics progressed, the needs of experiments grew. During the 1980s, experiments such as UA1 and UA2 at the Super Proton Synchrotron (SPS) involved institutions from around five to eight countries, setting in motion a “natural evolution” of individual recognition. From those experiments, in which mentoring in family-sized groups played a big role, emerged spontaneous leaders, some of whom went on to head experimental physics groups, departments and laboratories. Moving into the 1990s, project management and individual recognition became even more pertinent. In the experiments at the Large Electron–Positron collider (LEP), the number of physicists, engineers and technicians working together rose by an order of magnitude compared to the SPS days, with up to 30 participating institutions and 20 countries involved in a given experiment.

Today, with the LHC experiments providing an even bigger jump in scale, we must ask ourselves: are we making our immense scientific progress at the expense of individual recognition?

Group goals

Large collaborations have been very successful, and the discovery of the Higgs boson at the LHC had a big impact on our community. Today there are more than 5000 physicists from institutions in more than 40 countries working on the main LHC experiments, and this mammoth scale demands a change in the way we nurture individual recognition and careers. In scientific collaborations with a collective mission, group goals are placed above personal ambition. For example, many of us spend hundreds of hours in the pit or carry out computing and software tasks to make sure our experiments deliver the best data, even though some of this collective work isn’t always “visible”. The challenges are increasing, however, particularly for young scientists who must balance their personal aspirations with these collective goals. Larger collaborations mean there are many more PhD students and postdocs, while the number of permanent jobs has not increased in step; hence we also need to prepare early-career researchers for careers outside academia.

To fully exploit the potential of large collaborations, we need to bring every single person to maximum effectiveness by motivating and stimulating individual recognition and career choices. With this in mind, in spring 2018 the European Committee for Future Accelerators (ECFA) established a working group to investigate what the community thinks about individual recognition in large collaborations. Following an initial survey addressing leaders of several CERN and CERN-recognised experiments, a community-wide survey closed on 26 October with a total of 1347 responses.

Community survey

Participants expressed opinions on several statements related to how they perceive systems of recognition in their collaboration. More than 80% of the participants are involved in LHC experiments, and researchers from most European countries were well represented. Just under half (44%) were permanent staff members at their institute, with the rest comprising around 300 PhD students and 440 postdocs or junior staff. Participants were asked to indicate their level of agreement with a list of statements related to individual recognition. Each answer was quantified and the score distributions were compared between groups of participants, for instance according to career position, experiment, collaboration size, country, age, gender and discipline. Some initial findings are listed below, while the full breakdown of results – comprising hundreds of plots – is available at https://ecfa.web.cern.ch.

Conferences: “The collaboration guidelines for speakers at conferences allow me to be creative and demonstrate my talents.” Overall, participants from the LHCb collaboration agree more with this statement compared to those from CMS and especially ATLAS. For younger participants this sentiment is more pronounced. Respondents affirmed that conference talks are an outstanding opportunity to demonstrate to the broader community their creativity and scientific insight, and are perceived to be one of the most important aspects of verifying the success of a scientist.

Publications: “For me it is important to be included as an author of all collaboration-wide papers.” Although the effect is less pronounced for participants from very large collaborations, respondents value being included as authors on collaboration-wide publications. The alphabetical listing of authors is also supported, at all career stages. Participants had divided opinions when it came to alternatives.

Assigned responsibilities: “I perceive that profiles of positions with responsibility are well known outside the particle-physics community.” The further away from the collaboration, the more challenging it becomes to inform people about the role of a convener, yet selection as a convener is perceived to be very important in verifying the success of a scientist in our field. The majority of the participating early-career researchers are neutral or do not agree with the statement that the process of selecting conveners is sufficiently transparent and accessible.

Technical contributions: “I perceive that my technical contributions get adequate recognition in the particle-physics community.”  Hardware and software technical work is at the core of particle-physics experiments, yet it remains challenging to recognise these contributions inside, but especially outside, the collaboration.

Scientific notes: “Scientific notes on analysis methods, detector and physics simulations, novel algorithms, software developments, etc, would be valuable for me as a new class of open publications to recognise individual contributions.” Although participants have very diverse opinions when it comes to making the internal collaboration notes public, they would value the opportunity to write down their novel and creative technical ideas in a new class of public notes.

Beyond disseminating the results of the survey, ECFA will reflect on how it can help to strengthen the recognition of individual achievements in large collaborations. The LHC experiments and other large collaborations have expressed openness to enter a dialogue on the topic, and will be invited by ECFA to join a pan-collaboration working group. This will help to relate observations from the survey to current practices in the collaborations, with the aim of keeping particle physics fit and healthy towards the next generation of experiments.

Understanding naturalness

Nathaniel Craig

What is “naturalness”?

Colloquially, a theory is natural if its underlying parameters are all of the same size in appropriate units. A more precise definition involves the notion of an effective field theory – the idea that a given quantum field theory might only describe nature at energies below a certain scale, or cutoff. The Standard Model (SM) is an effective field theory because it cannot be valid up to arbitrarily high energies even in the absence of gravity. An effective field theory is natural if all of its parameters are of order unity in units of the cutoff. Without fine-tuning, a parameter can only be much smaller than this if setting it to zero increases the symmetry of the theory. All couplings and scales in a quantum theory are connected by quantum effects unless symmetries distinguish them, making it generic for them to coincide.
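A textbook way to see what “of order unity in units of the cutoff” means is the one-loop correction to the Higgs mass-squared from its coupling to the top quark in a cutoff-regularised Standard Model (an illustration only; the numerical coefficient depends on the regularisation scheme):

\[
\delta m_H^2 \;\sim\; -\,\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 ,
\]

so if the cutoff Λ lies far above the TeV scale, the observed Higgs mass of about 125 GeV can only emerge from a delicate cancellation between this contribution and the bare mass parameter, unless a symmetry or new dynamics intervenes. This is the electroweak hierarchy problem discussed later in the interview.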

When did naturalness become a guiding force in particle physics?

We typically trace it back to Eddington and Dirac, though it had precedents in the cosmologies of the Ancient Greeks. Dirac’s discomfort with large dimensionless ratios in observed parameters – among others, the ratio of the gravitational and electromagnetic forces between protons and electrons, which amounts to the smallness of the proton mass in units of the Planck scale – led him to propose a radical cosmology in which Newton’s constant varied with the age of the universe. Dirac’s proposed solutions were readily falsified, but this was a predecessor of the more refined notion of naturalness that evolved with the development of quantum field theory, which drew on observations by Gell-Mann, ’t Hooft, Veltman, Wilson, Weinberg, Susskind and other greats.

Does the concept appear in other disciplines?

There are notions of naturalness in essentially every scientific discipline, but physics, and particle physics in particular, is somewhat unique. This is perhaps not surprising, since one of the primary goals of particle physics is to infer the laws of nature at increasingly higher energies and shorter distances.

Isn’t naturalness a matter of personal judgement?

One can certainly come up with frameworks in which naturalness is mathematically defined – for example, quantifying the sensitivity of some parameter in the theory to variations of the other parameters. However, what one does with that information is a matter of personal judgement: we don’t know how nature computes fine-tuning (i.e. departure from naturalness), or what amount of fine-tuning is reasonable to expect. This is highlighted by the occasional abandonment of mathematically defined naturalness criteria in favour of the so-called Potter Stewart measure: “I know it when I see it.” The element of judgement makes it unproductive to obsess over minor differences in fine-tuning, but large fine-tunings potentially signal that something is amiss. Also, one can’t help but notice that the degree of fine-tuning that is considered acceptable has changed over time.

What evidence is there that nature is natural?

Dirac’s puzzle, the smallness of the proton mass, is a great example: we understand it now as a consequence of the asymptotic freedom of the strong interaction. A natural (of order-unity) value of the QCD gauge coupling at high energies gives rise to an exponentially smaller mass scale on account of the logarithmic evolution of the gauge coupling. Another excellent example, relevant to the electroweak hierarchy problem, is the mass splitting of the charged and neutral pions. From the perspective of an effective field theorist working at the energies of these pions, their mass splitting is only natural if the cutoff of the theory is around 800 MeV. Lo and behold, going up in energy from the pions, the rho meson appears at 770 MeV, revealing the composite nature of the pions and changing the picture in precisely the right way to render the mass splitting natural.
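That pion estimate can be reproduced in one line: in the effective theory of pions and photons, the electromagnetic contribution to the charged–neutral mass splitting grows quadratically with the cutoff (the standard order-of-magnitude version of the argument; the order-one coefficient is scheme-dependent):

\[
m_{\pi^\pm}^2 - m_{\pi^0}^2 \;\sim\; \frac{3\alpha}{4\pi}\,\Lambda^2
\quad\Longrightarrow\quad
\Lambda \;\sim\; \sqrt{\frac{4\pi}{3\alpha}\,\bigl(1260\ \mathrm{MeV}^2\bigr)} \;\approx\; 850\ \mathrm{MeV},
\]

using the measured splitting of roughly 1260 MeV², remarkably close to the rho-meson mass at which the effective description indeed breaks down.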

Which is the most troublesome observation for naturalness today?

The cosmological-constant (CC) problem, which is the disagreement by 120 orders of magnitude between the observed and expected value of the vacuum energy density. We understand the SM to be a valid effective field theory for many decades above the energy scale of the observed CC, which makes it very hard to believe that the problem is solved in a conventional way without considerable fine-tuning. Contrast that with the SM hierarchy problem, which is a statement about the naturalness of the mass of the Higgs boson. Data so far show that the cutoff of the SM as an effective field theory might not be too far above the Higgs mass, bringing naturalness within reach of experiment. On the other hand, the CC is only a problem in the context of the SM coupled to gravity, so perhaps its resolution lies in yet-to-be-understood features of quantum gravity.

What about the tiny values of the neutrino masses?

Neutrino masses are not remotely troublesome for naturalness. A parameter can be much smaller than the natural expectation if setting it to zero increases the symmetry of the theory (we call such parameters “technically natural”). For the neutrino, as for any SM fermion, there is an enhanced symmetry when neutrino masses are set to zero. This means that your natural expectation for the neutrino masses is zero and, if they are non-zero, quantum corrections to them are proportional to the masses themselves. Although the SM features many numerical hierarchies, the majority of them are technically natural ones that could be explained by physics at inaccessibly high energies. The most urgent problems are the hierarchies that aren’t technically natural, like the CC problem and the electroweak hierarchy problem.

Has applying the naturalness principle led directly to a discovery?

It’s fair to say that Gaillard and Lee predicted the charm-quark mass by applying naturalness arguments to the mass-splitting of neutral kaons. Of course, the same arguments were also used to (incorrectly) predict a wildly different value of the weak scale! This is a reminder that naturalness principles can point to a problem in the existing theory, and a scale at which the theory should change, but they don’t tell you precisely how the problem is resolved. The naturalness of the neutral kaon mass splitting, or the charged-neutral pion mass splitting, suggests to me that it is more useful to refer to naturalness as a strategy, rather than as a principle.
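
As a hedged illustration of the Gaillard–Lee logic, the sketch below inverts the standard short-distance (GIM) estimate of the neutral-kaon mass difference, Δm_K ≈ (G_F²/6π²) f_K² B_K m_K m_c² sin²θ_c cos²θ_c, to extract a ballpark charm mass. The formula, the bag factor B_K = 1 and the input values are assumptions of this sketch rather than a reconstruction of the historical calculation.

```python
import math

# Schematic GIM/box-diagram estimate: the K_L-K_S mass difference is
# quadratically sensitive to the charm mass, so the measured splitting
# "predicts" m_c. Inputs are ballpark values; B_K is set to 1.
G_F = 1.166e-5            # GeV^-2
f_K = 0.16                # GeV
m_K = 0.498               # GeV
delta_mK = 3.48e-15       # GeV, measured K_L - K_S mass difference
theta_c = 0.22            # Cabibbo angle (approx, radians)
B_K = 1.0

cabibbo = (math.sin(theta_c) * math.cos(theta_c)) ** 2
m_c2 = delta_mK * 6 * math.pi**2 / (G_F**2 * f_K**2 * B_K * m_K * cabibbo)
print(f"m_c ~ {math.sqrt(m_c2):.1f} GeV")   # of order 1-2 GeV
```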

Unnatural?

A slightly more flippant example is the observation of neutrinos from Supernova 1987A. This marked the beginning of neutrino astronomy and opened the door to unrelated surprises, yet the large water-Cherenkov detectors that observed these neutrinos were originally constructed to look for the proton decay predicted by grand unified theories (which were themselves motivated by naturalness arguments).

While it would be great if naturalness-based arguments successfully predict new physics, it’s also worthwhile if they ultimately serve only to draw experimental attention to new places.

What has been the impact of the LHC results so far on naturalness?

There have been two huge developments at the LHC. The first is the discovery of the Higgs boson, which sharpens the electroweak hierarchy problem: we seem to have found precisely the sort of particle whose mass, if natural, points to a significant departure from the SM around the TeV scale. The second is the non-observation of new particles predicted by the most popular solutions to the electroweak hierarchy problem, such as supersymmetry. While evidence for these solutions could lie right around the corner, its absence thus far has inspired both a great deal of uncertainty about the naturalness of the weak scale and a lively exploration of new approaches to the problem. The LHC null results teach us only about specific (and historically popular) models that were inspired by naturalness. It is therefore an ideal time to explore naturalness arguments more deeply. The last few years have seen an explosion of original ideas, but we’re really only at the beginning of the process.

The situation is analogous to the search for dark matter, where gravitational evidence is accumulating at an impressive rate despite numerous null results in direct-detection experiments. These null results haven’t ruled out dark matter itself; they’ve only disfavoured certain specific and historically popular models.

How can we settle the naturalness issue once and for all?

The discovery of new particles around the TeV scale whose properties indicate they are related to the top quark would very strongly suggest that nature is more or less natural. In the event of non-discovery, the question becomes thornier – it could be that the SM is unnatural; it could be that naturalness arguments are irrelevant; or it could be that there are signatures of naturalness that we haven’t recognised yet. Kepler’s symmetry-based explanation of the naturalness of planetary orbits in terms of Platonic solids ultimately turned out to be a red herring, but only because we came to realise that the features of specific planetary orbits are not deeply related to fundamental laws.

Without naturalness as a guide, how do theorists go beyond the SM?

Naturalness is but one of many hints at physics beyond the SM. There are some incredibly robust hints based on data – dark matter and neutrino masses, for example. There are also suggestive hints, such as the hierarchical structure of fermion masses, the preponderance of baryons over antibaryons and the apparent unification of gauge couplings. There is also a compelling argument for constructing new-physics models purely motivated by anomalous data. This sort of “ambulance chasing” does not have a stellar reputation, but it’s an honest approach which recognises that the discovery of new physics may well come as another case of “Who ordered that?” rather than the answer to a theoretical problem.

What sociological or psychological aspects are at work?

If theoretical considerations are primarily shaping the advancement of a field, then sociology inevitably plays a central role in deciding what questions are most pressing. The good news is that the scales often tip, and data either clarify the situation or pose new questions. As a field we need to focus on lucidly articulating the case for (and against) naturalness as a guiding principle, and let the newer generations make up their minds for themselves.

ALICE revitalised

ALICE (A Large Ion Collider Experiment) will soon have enhanced physics capabilities thanks to a major upgrade of its detectors, data-taking and data-processing systems. These upgrades will improve the precision of measurements of the high-density, high-temperature phase of strongly interacting matter, the quark–gluon plasma (QGP), and extend the exploration of new phenomena in quantum chromodynamics (QCD). Since the start of the LHC programme, ALICE has participated in all data runs, with the main emphasis on heavy-ion collisions such as lead–lead, proton–lead and xenon–xenon. The collaboration has been making major inroads into the understanding of the dynamics of the QGP – a state of matter that prevailed in the first instants of the universe and is recreated in droplets at the LHC.

To perform precision measurements of strongly interacting matter, ALICE must focus on rare probes – such as heavy-flavour particles, quarkonium states, real and virtual photons, and low-mass dileptons – as well as the study of jet quenching and exotic nuclear states. Observing rare phenomena requires very large data samples, which is why ALICE is looking forward to the increased luminosity provided by the LHC in the coming years. The interaction rate of lead ions during LHC Run 3 is foreseen to reach around 50 kHz, corresponding to an instantaneous luminosity of 6 × 10²⁷ cm⁻² s⁻¹. This will enable ALICE to accumulate 10 times more integrated luminosity (more than 10 nb⁻¹) and a data sample 100 times larger than what has been obtained so far. In addition, the upgraded detector system will have better efficiency for the detection of short-lived particles containing heavy-flavour quarks thanks to the improved precision of the tracking detectors.
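
As a consistency check, the quoted rate and luminosity are related simply by rate = σ × L. The Pb–Pb hadronic cross-section of about 8 b used below is a standard ballpark value, not a number quoted above.

```python
# Consistency check: interaction rate = cross-section x instantaneous luminosity.
sigma_pbpb = 8.0 * 1e-24          # cm^2, assumed Pb-Pb hadronic cross-section (~8 barn)
luminosity = 6e27                 # cm^-2 s^-1, quoted Run-3 target

rate = sigma_pbpb * luminosity
print(f"interaction rate ~ {rate/1e3:.0f} kHz")   # ~50 kHz, as quoted
```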

During Long Shutdown 2 (LS2), several major upgrades to the ALICE detector will take place. These include: a new inner tracking system (ITS) based on a high-resolution, low-material-budget silicon tracker, which extends to forward rapidities with the new muon forward tracker (MFT); an upgraded time projection chamber (TPC) with gas electron multiplier (GEM) readout chambers and a new chip for faster readout; and a new fast interaction trigger (FIT) detector and a forward diffraction detector. New readout electronics will be installed in multiple subdetectors (the muon spectrometer, time-of-flight detector, transition radiation detector, electromagnetic calorimeter, photon spectrometer and zero-degree calorimeter), and an integrated online–offline (O2) computing system will process and store the large data volumes.

Detector upgrades

A new all-pixel silicon inner tracker based on CMOS monolithic active pixel sensor (MAPS) technology will be installed covering the mid-rapidity (|η| < 1.5) region of the ITS as well as the forward rapidity (–3.6 < η < –2.45) of the MFT. In MAPS technology, both the sensor for charge collection and the readout circuit for digitisation are hosted in the same piece of silicon instead of being bump-bonded together. The chip developed by ALICE is called ALPIDE, and uses a 180 nm CMOS process provided by TowerJazz. With this chip, the silicon material budget per layer is reduced by a factor of seven compared to the present ITS. The ALPIDE chip is 15 × 30 mm² in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its low power consumption (< 40 mW/cm²) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.
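
A quick sketch of what these figures imply for the pixel geometry and power budget of a single chip. It naively spreads the pixel matrix over the full 15 × 30 mm² area, so the true pixel pitch is somewhat smaller once the chip's peripheral circuitry is accounted for.

```python
# Back-of-the-envelope numbers for the ALPIDE chip from the figures quoted above.
cols, rows = 1024, 512
chip_x_mm, chip_y_mm = 30.0, 15.0        # chip area 15 x 30 mm^2

pixels = cols * rows                     # 524288, i.e. "more than half a million"
pitch_x_um = chip_x_mm * 1000 / cols     # ~29 um (upper bound, ignores periphery)
pitch_y_um = chip_y_mm * 1000 / rows     # ~29 um
chip_area_cm2 = chip_x_mm * chip_y_mm / 100.0
max_power_mw = 40.0 * chip_area_cm2      # < 40 mW/cm^2  ->  < ~180 mW per chip

print(pixels, round(pitch_x_um, 1), round(pitch_y_um, 1), max_power_mw)
```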

Inner tracker

The ITS consists of seven cylindrical layers of ALPIDE chips, summing up to 12.5 billion pixels and a total area of 10 m². The pixel chips are installed on staves with radial distances 22–400 mm away from the interaction point (IP). The beam pipe has also been redesigned with a smaller outer radius of 19 mm, allowing the first detection layer to be placed closer to the IP at a radius of 22.4 mm compared to 39 mm at present. The brand-new ITS detector will improve the impact parameter resolution by a factor of three in the transverse plane and by a factor of five along the beam axis. It will extend the tracking capabilities to much lower pT, allowing ALICE to perform measurements of heavy-flavour hadrons with unprecedented precision and down to zero pT.
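
Scaling the quoted totals by the per-chip numbers above gives a rough idea of the size of the system. This ignores the MFT chips and any spares, so it is an order-of-magnitude estimate only.

```python
# Roughly how many ALPIDE chips does the new ITS need, given the quoted totals?
pixels_total = 12.5e9
pixels_per_chip = 1024 * 512              # ALPIDE pixel matrix
chip_area_m2 = 0.015 * 0.030              # 15 x 30 mm^2

n_chips = pixels_total / pixels_per_chip  # ~24,000 chips
area_m2 = n_chips * chip_area_m2          # ~10.7 m^2, consistent with the quoted 10 m^2
print(round(n_chips), round(area_m2, 1))
```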

In the forward-rapidity region, ALICE detects muons using the muon spectrometer. The new MFT detector is designed to add vertexing capabilities to the muon spectrometer and will enable a number of new measurements that are currently beyond reach. As an example, it will allow us to distinguish J/ψ mesons that are produced directly in the collision from those that come from decays of mesons that contain a beauty quark. The MFT consists of five disks, each composed of two MAPS detection planes, placed perpendicular to the beam axis between the IP and the hadron absorber of the muon spectrometer.

The TPC is the main device for tracking and charged-particle identification in ALICE. The readout rate of the TPC in its present form is limited by its readout chambers, which are based on multi-wire proportional chambers. To avoid drift-field distortions produced by ions from the amplification region, the present readout chambers use a charge gating scheme that collects back-drifting ions, which limits the readout rate to 3.5 kHz. To overcome this limitation, new readout chambers employing a novel configuration of stacks of four GEMs have been developed during an extensive R&D programme. This arrangement allows for continuous readout at 50 kHz with lead–lead collisions, at no cost to detector performance. The production of the 72 inner (one GEM stack each) and outer (three GEM stacks each) chambers is now practically complete and certified. The replacement of the chambers in the TPC will take place in summer 2019, once the TPC is extracted from the experimental cavern and transported to the surface.

The new fast interaction trigger, FIT, comprises two arrays of Cherenkov radiators with MCP–PMT sensors and a single, large-size scintillator ring. The arrays will be placed on either side of the IP. FIT will be the primary trigger, luminosity and collision-time measurement detector in ALICE, capable of triggering at an interaction rate of 50 kHz with a time resolution better than 30 ps and an efficiency of 99%.
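
To see what such timing buys, the sketch below assumes the usual relation for detectors on either side of the IP, z = c(t_C - t_A)/2, and, illustratively, a 30 ps resolution on the measured time difference; the actual vertex performance of FIT depends on details not given here.

```python
# How a fast timing detector constrains the collision point along the beam:
# with arrays on both sides of the IP, the vertex z follows from the
# arrival-time difference, z = c * (t_C - t_A) / 2, so
#   sigma_z ~ c * sigma(t_C - t_A) / 2.
c = 2.998e8                      # m/s
sigma_dt = 30e-12                # s, assumed resolution on the time difference

sigma_z_mm = c * sigma_dt / 2 * 1000
print(f"vertex resolution along the beam ~ {sigma_z_mm:.1f} mm")   # ~4.5 mm
```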

The newly designed ALICE readout system represents a change in approach: all lead–lead collisions delivered by the accelerator, at a rate of 50 kHz, will be read out in a continuous stream. Triggered readout will still be used by some detectors and for commissioning and calibration runs, and the central trigger processor is being upgraded to accommodate the higher interaction rate. The readout of the TPC and muon chambers will be performed by SAMPA, a newly developed, 32-channel front-end analogue-to-digital converter with an integrated digital signal processor.

Performance boost

The significantly improved ALICE detector will allow the collaboration to collect 100 times more events during LHC Run 3 compared to Runs 1 and 2, which requires the development of a completely new readout and computing system. The O2 system is designed to combine all the computing functionalities needed in the experiment: detector readout, event building, data recording, detector calibration, data reconstruction, physics simulation and analysis. The total data volume produced by the front-end cards of the detectors will increase significantly, reaching a sustained throughput of up to 3 TB/s. To limit the computing resources needed for data processing and storage, the ALICE computing model is designed to reduce the data volume read out from the detectors as early as possible during processing. This is achieved by processing the data online, including detector calibration and event reconstruction in several steps, synchronously with data taking. At its peak, the estimated data throughput to mass storage is 90 GB/s.
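
Taken at face value, the two throughput figures quoted above set the scale of the online data reduction that the O2 system must deliver:

```python
# Implied online data-reduction factor from the quoted throughput figures.
input_rate_tb_s = 3.0            # TB/s out of the detector front-ends
output_rate_gb_s = 90.0          # GB/s to mass storage (peak)

reduction = input_rate_tb_s * 1000 / output_rate_gb_s
print(f"online reduction factor ~ {reduction:.0f}x")   # ~33x
```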

Enhancements

A new computing facility for the O2 system is being installed on the surface, near the experiment. It will have a data-storage system large enough to accommodate a large fraction of a full year’s data taking, and will provide the interface to permanent data storage at the tier-0 Grid computing centre at CERN, as well as to other data centres.

ALICE upgrade activities are proceeding at a frenetic pace. Soon after the machine stopped in December, experts entered the cavern to open the massive doors of the magnet and started dismounting the detector in order to prepare for the upgrade. Detailed planning and organisation of the work are mandatory to stay on schedule, as Arturo Tauro, the deputy technical coordinator of ALICE explains: “Apart from the new detectors, which require dedicated infrastructure and procedures, we have to install a huge number of services (for example, cables and optical fibres) and perform regular maintenance of the existing apparatus. We have an ambitious plan and a tight schedule ahead of us.”

When the ALICE detector emerges revitalised from the two busy and challenging years of work ahead, it will be ready to enter into a new era of high-precision measurements that will expand and deepen our understanding of the physics of hot and dense QCD matter and the quark–gluon plasma. 
