FACET-II, a new facility for accelerator research at SLAC National Accelerator Laboratory in California, has produced its first electrons. FACET-II is an upgrade to the Facility for Advanced Accelerator Experimental Tests (FACET), which operated from 2011 to 2016, and will produce high-quality electron beams to develop plasma-wakefield acceleration techniques. The $26 million project, recently approved by the US Department of Energy (DOE), will also operate as a federally sponsored research facility for advanced accelerator research that is open to scientists on a competitive, peer-reviewed basis.
“As a strategically important national user facility, FACET-II will allow us to explore the feasibility and applications of plasma-driven accelerator technology,” said James Siegrist of the DOE Office of Science. “We’re looking forward to seeing the groundbreaking science in this area that FACET-II promises, with the potential for a significant reduction in the size and cost of future accelerators, including free-electron lasers and medical accelerators.”
Whereas conventional accelerators impart energy to charged particles via radiofrequency fields inside metal structures, plasma-wakefield accelerators send a bunch of very energetic particles through a hot ionised gas, creating a plasma wake on which a trailing bunch can “surf” and gain energy. The resulting acceleration gradients are far higher than those of conventional machines, potentially allowing much more compact accelerators, but several crucial steps are required before plasma accelerators can become a reality. This is where FACET-II comes in, offering higher-quality beams than FACET, explains project scientist Mark Hogan. “We need to show that we’re able to preserve the quality of the beam as it passes through plasma. High-quality beams are an absolute requirement for future applications in particle and X-ray laser physics.”
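To make the “far higher gradients” concrete, here is a rough back-of-the-envelope sketch (not part of the article) of the cold, non-relativistic wave-breaking field E₀ = mₑcω_p/e, which sets the scale of the accelerating field a plasma of density n₀ can sustain:

```python
# Illustrative estimate only: the maximum accelerating field a cold plasma can
# sustain ("wave-breaking" field) scales with the square root of the plasma
# density, which is why plasma wakefields reach tens of GV/m, versus the
# roughly 10-100 MV/m of conventional radiofrequency cavities.
import math

def wave_breaking_field(n_cm3: float) -> float:
    """Cold, non-relativistic wave-breaking field in V/m for density n in cm^-3."""
    e = 1.602e-19        # elementary charge [C]
    m_e = 9.109e-31      # electron mass [kg]
    c = 2.998e8          # speed of light [m/s]
    eps0 = 8.854e-12     # vacuum permittivity [F/m]
    n_m3 = n_cm3 * 1e6   # convert cm^-3 to m^-3
    omega_p = math.sqrt(n_m3 * e**2 / (eps0 * m_e))  # plasma frequency [rad/s]
    return m_e * c * omega_p / e

# Densities typical of plasma-wakefield experiments:
print(f"{wave_breaking_field(1e17)/1e9:.0f} GV/m")  # ~30 GV/m
print(f"{wave_breaking_field(1e18)/1e9:.0f} GV/m")  # ~96 GV/m
```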
SLAC has a rich history in developing such techniques, and the previous FACET facility enabled researchers to demonstrate electron-driven plasma acceleration for both electrons and positrons. FACET-II will use the middle third (a length of 1 km) of SLAC’s linear accelerator to generate a 10 GeV electron beam, and will be kitted out with diagnostics and computational tools to accurately measure and simulate the physics of the new facility’s beams. The FACET-II design also allows the capability to produce and accelerate positrons to be added at a later stage, paving the way for plasma-based electron–positron colliders.
FACET-II has issued its first call for proposals for experiments that will run when the facility goes online in 2020. In mid-October, prospective users of FACET-II presented their ideas for a first round of experiments for evaluation, and the number of proposals is already larger than the number of experiments that can possibly be scheduled for the facility’s first run.
Last year, the AWAKE experiment at CERN demonstrated the first ever acceleration of a beam in a proton-driven plasma (CERN Courier October 2018 p7). Laser-driven plasma-wakefield acceleration is also receiving much attention thanks to advances in high-power lasers (CERN Courier November 2018 p7). “The FACET-II programme is very interesting, with many plasma-wakefield experiments,” says technical coordinator and CERN project leader for AWAKE, Edda Gschwendtner, who is also chair of the FACET-II programme advisory committee.
In November 2018 the LHC brilliantly fulfilled its promise to the LHCb experiment, delivering a total integrated proton–proton luminosity of 10 fb⁻¹ from Run 1 and Run 2 combined. This is what LHCb was designed for, and more than 450 physics papers have come from the adventure so far. Having recently finished swallowing these exquisite data, however, the LHCb detector is due some tender loving care.
In fact, during the next 24 months of Long Shutdown 2 (LS2), the 4500 tonne detector will be almost entirely rebuilt. When it emerges from this metamorphosis, LHCb will be able to collect physics events at a rate 10 times higher than today. This will be achieved by installing new detectors capable of sustaining up to five times the instantaneous luminosity seen at Run 2, and by implementing a revolutionary software-only trigger that will enable LHCb to process signal data in an upgraded CPU farm at the frenetic rate of 40 MHz – a pioneering step among the LHC experiments.
LHCb is unique among the LHC experiments in that it is asymmetric, covering only one forward region. That reflects its physics focus: B mesons, which, rather than flying out uniformly in all directions, are preferentially produced at small angles (i.e. close to the beam direction) in the LHC’s proton collisions. The detector stretches for 20 m along the beam pipe, with its sub-detectors stacked behind each other like books on a shelf, from the vertex locator (VELO) to a ring-imaging Cherenkov detector (RICH1), the silicon upstream tracker (UT), the scintillating fibre tracker (SciFi), a second RICH (RICH2), the calorimeters and, finally, the muon detector.
The LHCb upgrade was first outlined in 2008, proposed in 2011 and approved the following year at a cost of about 57 million Swiss francs. The collaboration started dismantling the current detector just before the end of 2018 and the first elements of the upgrade are about to be moved underground.
Physics boost
The LHCb collaboration has so far made numerous important measurements in the heavy-flavour sector, such as the first observation of the rare decay B⁰s → µ⁺µ⁻, precise measurements of quark-mixing parameters and the observation of new baryonic and pentaquark states. However, many crucial measurements are currently statistically limited. The LHCb upgrade will boost the experiment’s physics reach by allowing the software trigger to handle an input rate around 30 times higher than before, bringing greater precision to theoretically clean observables.
Data will flow at an immense rate of 4 TB/s, travelling from the detector electronics in the cavern via some 9000 optical fibres, each 300 m long, into front-end computers located in a brand-new data centre that is currently nearing completion. There, around 500 powerful custom-made boards will receive the data and transfer it to thousands of processing cores. The current trigger hardware will be removed, and new front-end electronics have been designed for all the experiment’s sub-detectors to cope with the substantially higher readout rates.
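As a rough cross-check of these dataflow figures, here are the average per-fibre and per-board rates implied by the numbers above (simple averages only; the actual link and board specifications are not given in the text):

```python
# Simple averages derived from the figures quoted above; not LHCb specifications.
total_rate_tb_s = 4.0     # TB/s leaving the detector electronics
n_fibres = 9000           # optical fibres from the cavern
n_boards = 500            # custom readout boards in the data centre

per_fibre_gbit_s = total_rate_tb_s * 1000 * 8 / n_fibres   # ~3.6 Gbit/s per fibre
per_board_gb_s = total_rate_tb_s * 1000 / n_boards         # ~8 GB/s per board

print(f"average per fibre: {per_fibre_gbit_s:.1f} Gbit/s")
print(f"average per board: {per_board_gb_s:.1f} GB/s")
```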
For the largest and heaviest LHCb devices, namely the calorimeters and muon stations, the detector elements will remain mostly in place. All the other LHCb detector systems are to be entirely replaced, apart from a few structural frames, the dipole magnet, shielding elements and gas or vacuum enclosures.
The VELO at the heart of LHCb, which allows precise measurements of primary and displaced vertices of short-lived particles, is one of the key detectors to be upgraded during LS2. Replacing the current system based on silicon microstrip modules, the new VELO consists of 26 tracking layers made from 55 × 55 µm² pixel technology, which offers better hit resolution and simpler track reconstruction. The new VELO will also be closer to the beam axis, which poses significant design challenges. A new chip, the VELOPIX, capable of collecting signal hits from 256 × 256 pixels and sending data at a rate of up to 15 Gb/s, was developed for this purpose. Pixel modules include a cutting-edge cooling substrate based on an array of microchannels, trenched out of a 260 µm-thick silicon wafer, that carry liquid carbon dioxide to keep the silicon at a temperature of –20 °C. This is vital to prevent thermal runaway, since these sensors will receive the heaviest irradiation of all LHC detectors. Prototype modules have recently been assembled and characterised in tests with high-energy particles at the Super Proton Synchrotron.
The RICH detector will still be composed of two systems: RICH1, which discriminates kaons from pions in the low-momentum range, and RICH2, which performs this task in the high-momentum range. The RICH mirror system, which is required to deflect and focus Cherenkov photons onto photodetector planes, will be replaced with a new one that has been optimised for the much increased particle densities of future LHC runs. RICH detector columns are composed of six photodetector modules (PDMs), each containing four elementary cells hosting the multi-anode photomultiplier tubes. A full PDM was successfully operated during 2018, providing first particle signals.
Mounted between RICH1 and the dipole magnet, the upstream tracker (UT) consists of four planes of silicon microstrip detectors. To counter the effects of irradiation, the detector is contained in a thermal enclosure and cooled to approximately –5 °C using a CO₂ evaporative cooling system. Lightweight staves, with a carbon-foam backplane and embedded cooling pipe, are dressed with flex cables and instrumented with 14 modules, each composed of a polyimide hybrid circuit, a boron nitride stiffener and a silicon microstrip sensor.
Further downstream, nestled between the magnet and RICH2, will sit the SciFi – a new tracker based on scintillating fibres and silicon photomultiplier (SiPM) arrays, which replaces the drift-straw detectors and silicon microstrip sensors used by the current three tracking stations. The SciFi represents a major challenge for the collaboration, not only because of its complexity, but also because the technology has never been used over such a large area in such a harsh radiation environment. More than 11,000 km of fibre was ordered, meticulously verified and even cured of a few rare, localised imperfections. From this, about 1400 mats of fibre layers were recently fabricated in four institutes and assembled into 140 rigid 5 × 0.5 m² modules. In parallel, SiPMs were assembled on flex cables and joined in groups of 16 with a 3D-printed titanium cooling tube to form sophisticated photodetection units for the modules, which will be operated at about –40 °C.
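A quick tally of the SciFi construction numbers quoted above (simple averages, for illustration only):

```python
# Averages derived from the figures in the text; not SciFi specifications.
total_fibre_km = 11000
n_mats = 1400
n_modules = 140
module_area_m2 = 5 * 0.5

print(f"fibre per mat     ~ {total_fibre_km / n_mats:.1f} km")      # ~7.9 km
print(f"fibre per module  ~ {total_fibre_km / n_modules:.0f} km")   # ~79 km
print(f"total module area ~ {n_modules * module_area_m2:.0f} m^2")  # ~350 m^2
```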
As this brief overview demonstrates, the LHCb detector is undergoing a complete overhaul during LS2 – with large parts being totally replaced – to allow this unique LHC experiment to deepen and broaden its exploration programme. CERN support teams and the LHCb technical crew are now busily working in the cavern, and many of the 79 institutes from around the world involved in the LHCb collaboration have shifted their focus to this herculean task. The entire installation will have to be ready for commissioning of the new detector by mid-2020, in time for the start of Run 3 in 2021.
Advances in particle physics are driven by well-defined innovations in accelerators, instrumentation, electronics, computing and data-analysis techniques. Yet our ability to innovate depends strongly on the talents of individuals, and on how we continue to attract and foster the best people. It is therefore vital that, within today’s ever-growing collaborations, individual researchers feel that their contributions are recognised adequately within the scientific community at large.
Looking back to the time before large accelerators, individual recognition was not an issue in our field. Take Rutherford’s revolutionary work on the nucleus or, more recently, Cowan and Reines’ discovery of the neutrino – there were perhaps a couple of people working in a lab, at most with a technician, yet the acknowledgement was on a global scale. There was no need for project management; individual recognition was spot-on and instinctive.
As high-energy physics progressed, the needs of experiments grew. During the 1980s, experiments such as UA1 and UA2 at the Super Proton Synchrotron (SPS) involved institutions from around five to eight countries, setting in motion a “natural evolution” of individual recognition. From those experiments, in which mentoring in family-sized groups played a big role, emerged spontaneous leaders, some of whom went on to head experimental physics groups, departments and laboratories. Moving into the 1990s, project management and individual recognition became even more pertinent. In the experiments at the Large Electron–Positron collider (LEP), the number of physicists, engineers and technicians working together rose by an order of magnitude compared to the SPS days, with up to 30 participating institutions and 20 countries involved in a given experiment.
Today, with the LHC experiments providing an even bigger jump in scale, we must ask ourselves: are we making our immense scientific progress at the expense of individual recognition?
Group goals
Large collaborations have been very successful, and the discovery of the Higgs boson at the LHC had a big impact on our community. Today there are more than 5000 physicists from institutions in more than 40 countries working on the main LHC experiments, and this mammoth scale demands a change in the way we nurture individual recognition and careers. In scientific collaborations with a collective mission, group goals are placed above personal ambition. For example, many of us spend hundreds of hours in the pit or carry out computing and software tasks to make sure our experiments deliver the best data, even though some of this collective work isn’t always “visible”. The challenges are growing, however, particularly for young scientists, who must balance their personal aspirations against the collective goals of their collaboration. Larger collaborations mean there are many more PhD students and postdocs, while the number of permanent jobs has not increased equivalently; hence we also need to prepare early-career researchers for a non-academic career.
To fully exploit the potential of large collaborations, we need to bring every single person to maximum effectiveness by motivating and stimulating individual recognition and career choices. With this in mind, in spring 2018 the European Committee for Future Accelerators (ECFA) established a working group to investigate what the community thinks about individual recognition in large collaborations. Following an initial survey addressing leaders of several CERN and CERN-recognised experiments, a community-wide survey closed on 26 October with a total of 1347 responses.
Community survey
Participants expressed opinions on several statements related to how they perceive systems of recognition in their collaboration. More than 80% of the participants are involved in LHC experiments, and researchers from most European countries were well represented. Just under half (44%) were permanent staff members at their institute, with the rest comprising around 300 PhD students and 440 postdocs or junior staff. Participants were asked to indicate their level of agreement with a list of statements related to individual recognition. Each answer was quantified and the score distributions were compared between groups of participants, for instance according to career position, experiment, collaboration size, country, age, gender and discipline. Some initial findings are listed below, while the full breakdown of results – comprising hundreds of plots – is available at https://ecfa.web.cern.ch.
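As an aside, here is a minimal sketch of the kind of quantification described above – mapping Likert-style answers onto a numerical scale and comparing the score distributions between respondent groups. The column names and five-point scale are illustrative assumptions, not details of the ECFA survey:

```python
# Illustrative sketch only; not the ECFA analysis code.
import pandas as pd

LIKERT = {"strongly disagree": -2, "disagree": -1, "neutral": 0,
          "agree": 1, "strongly agree": 2}

def score_by_group(df: pd.DataFrame, statement: str, group: str) -> pd.Series:
    """Mean quantified agreement with one statement, per respondent group."""
    scores = df[statement].str.lower().map(LIKERT)
    return scores.groupby(df[group]).mean()

# Example with made-up responses:
df = pd.DataFrame({
    "career_stage": ["PhD", "postdoc", "staff", "PhD"],
    "conference_guidelines": ["agree", "neutral", "strongly agree", "disagree"],
})
print(score_by_group(df, "conference_guidelines", "career_stage"))
```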
Conferences: “The collaboration guidelines for speakers at conferences allow me to be creative and demonstrate my talents.” Overall, participants from the LHCb collaboration agree more with this statement than those from CMS and especially ATLAS, and for younger participants this difference is more pronounced. Respondents affirmed that conference talks are an outstanding opportunity to demonstrate their creativity and scientific insight to the broader community, and are perceived to be one of the most important measures of a scientist’s success.
Publications: “For me it is important to be included as an author of all collaboration-wide papers.” Although the effect is less pronounced for participants from very large collaborations, they value being included as authors on collaboration-wide publications. The alphabetical listing of authors is also supported, at all career stages, while participants had divided opinions when it came to alternatives.
Assigned responsibilities: “I perceive that profiles of positions with responsibility are well known outside the particle-physics community.” The further away from the collaboration, the more challenging it becomes to explain the role of a convener, yet selection as a convener is perceived to be very important in establishing a scientist’s success in our field. The majority of the participating early-career researchers are neutral towards, or do not agree with, the statement that the process of selecting conveners is sufficiently transparent and accessible.
Technical contributions: “I perceive that my technical contributions get adequate recognition in the particle-physics community.” Hardware and software technical work is at the core of particle-physics experiments, yet it remains challenging to recognise these contributions inside, but especially outside, the collaboration.
Scientific notes: “Scientific notes on analysis methods, detector and physics simulations, novel algorithms, software developments, etc, would be valuable for me as a new class of open publications to recognise individual contributions.” Although participants have very diverse opinions when it comes to making the internal collaboration notes public, they would value the opportunity to write down their novel and creative technical ideas in a new class of public notes.
Beyond disseminating the results of the survey, ECFA will reflect on how it can help to strengthen the recognition of individual achievements in large collaborations. The LHC experiments and other large collaborations have expressed openness to enter a dialogue on the topic, and will be invited by ECFA to join a pan-collaboration working group. This will help to relate observations from the survey to current practices in the collaborations, with the aim of keeping particle physics fit and healthy towards the next generation of experiments.
Colloquially, a theory is natural if its underlying parameters are all of the same size in appropriate units. A more precise definition involves the notion of an effective field theory – the idea that a given quantum field theory might only describe nature at energies below a certain scale, or cutoff. The Standard Model (SM) is an effective field theory because it cannot be valid up to arbitrarily high energies even in the absence of gravity. An effective field theory is natural if all of its parameters are of order unity in units of the cutoff. Without fine-tuning, a parameter can only be much smaller than this if setting it to zero increases the symmetry of the theory. All couplings and scales in a quantum theory are connected by quantum effects unless symmetries distinguish them, making it generic for them to coincide.
When did naturalness become a guiding force in particle physics?
We typically trace it back to Eddington and Dirac, though it had precedents in the cosmologies of the Ancient Greeks. Dirac’s discomfort with large dimensionless ratios in observed parameters – among others, the ratio of the gravitational and electromagnetic forces between protons and electrons, which amounts to the smallness of the proton mass in units of the Planck scale – led him to propose a radical cosmology in which Newton’s constant varied with the age of the universe. Dirac’s proposed solutions were readily falsified, but this was a predecessor of the more refined notion of naturalness that evolved with the development of quantum field theory, which drew on observations by Gell-Mann, ’t Hooft, Veltman, Wilson, Weinberg, Susskind and other greats.
Does the concept appear in other disciplines?
There are notions of naturalness in essentially every scientific discipline, but physics, and particle physics in particular, is somewhat unique. This is perhaps not surprising, since one of the primary goals of particle physics is to infer the laws of nature at increasingly higher energies and shorter distances.
Isn’t naturalness a matter of personal judgement?
One can certainly come up with frameworks in which naturalness is mathematically defined – for example, quantifying the sensitivity of some parameter in the theory to variations of the other parameters. However, what one does with that information is a matter of personal judgement: we don’t know how nature computes fine-tuning (i.e. departure from naturalness), or what amount of fine-tuning is reasonable to expect. This is highlighted by the occasional abandonment of mathematically defined naturalness criteria in favour of the so-called Potter Stewart measure: “I know it when I see it.” The element of judgement makes it unproductive to obsess over minor differences in fine-tuning, but large fine-tunings potentially signal that something is amiss. Also, one can’t help but notice that the degree of fine-tuning that is considered acceptable has changed over time.
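One widely used example of such a mathematical definition – offered here only as an illustration, not as the interviewee’s own choice – is the Barbieri–Giudice-style sensitivity measure, which quantifies how strongly an observable 𝒪 (such as the Z or Higgs mass squared) responds to fractional changes in the underlying parameters p_i:

```latex
\Delta \;\equiv\; \max_i \left| \frac{\partial \ln \mathcal{O}}{\partial \ln p_i} \right|
\;=\; \max_i \left| \frac{p_i}{\mathcal{O}}\,\frac{\partial \mathcal{O}}{\partial p_i} \right| ,
\qquad \Delta \gg 1 \;\;\Leftrightarrow\;\; \text{fine-tuning}.
```

How large a value of Δ one is willing to tolerate is exactly the element of personal judgement discussed above.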
What evidence is there that nature is natural?
Dirac’s puzzle, the smallness of the proton mass, is a great example: we understand it now as a consequence of the asymptotic freedom of the strong interaction. A natural (of order-unity) value of the QCD gauge coupling at high energies gives rise to an exponentially smaller mass scale on account of the logarithmic evolution of the gauge coupling. Another excellent example, relevant to the electroweak hierarchy problem, is the mass splitting of the charged and neutral pions. From the perspective of an effective field theorist working at the energies of these pions, their mass splitting is only natural if the cutoff of the theory is around 800 MeV. Lo and behold, going up in energy from the pions, the rho meson appears at 770 MeV, revealing the composite nature of the pions and changing the picture in precisely the right way to render the mass splitting natural.
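The pion example can be made quantitative with the standard effective-field-theory estimate of the electromagnetic mass splitting (the order-one coefficient below is indicative, not a precision statement):

```latex
m_{\pi^\pm}^2 - m_{\pi^0}^2 \;\sim\; \frac{3\alpha}{4\pi}\,\Lambda^2
\quad\Longrightarrow\quad
\Lambda \;\sim\; \sqrt{\frac{4\pi}{3\alpha}\left(m_{\pi^\pm}^2 - m_{\pi^0}^2\right)}
\;\approx\; 850\ \mathrm{MeV},
```

using the measured splitting of about 1260 MeV² – remarkably close to the ρ mass of 770 MeV, at which the effective theory indeed breaks down.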
Which is the most troublesome observation for naturalness today?
The cosmological-constant (CC) problem, which is the disagreement by 120 orders of magnitude between the observed and expected value of the vacuum energy density. We understand the SM to be a valid effective field theory for many decades above the energy scale of the observed CC, which makes it very hard to believe that the problem is solved in a conventional way without considerable fine-tuning. Contrast that with the SM hierarchy problem, which is a statement about the naturalness of the mass of the Higgs boson. Data so far show that the cutoff of the SM as an effective field theory might not be too far above the Higgs mass, bringing naturalness within reach of experiment. On the other hand, the CC is only a problem in the context of the SM coupled to gravity, so perhaps its resolution lies in yet-to-be-understood features of quantum gravity.
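For orientation, the 120 orders of magnitude quoted above follow from comparing the observed vacuum energy density with the naive natural estimate set by the cutoff; a minimal sketch, assuming the reduced Planck mass as that cutoff:

```latex
\rho_\Lambda^{\rm obs} \approx (2.3\ \mathrm{meV})^4, \qquad
\rho_\Lambda^{\rm nat} \sim \bar M_{\rm Pl}^{\,4} \approx \left(2.4\times10^{18}\ \mathrm{GeV}\right)^4, \qquad
\frac{\rho_\Lambda^{\rm nat}}{\rho_\Lambda^{\rm obs}}
\sim \left(\frac{2.4\times10^{27}\ \mathrm{eV}}{2.3\times10^{-3}\ \mathrm{eV}}\right)^{4} \sim 10^{120}.
```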
What about the tiny values of the neutrino masses?
Neutrino masses are not remotely troublesome for naturalness. A parameter can be much smaller than the natural expectation if setting it to zero increases the symmetry of the theory (we call such parameters “technically natural”). For the neutrino, as for any SM fermion, there is an enhanced symmetry when neutrino masses are set to zero. This means that your natural expectation for the neutrino masses is zero, and if they are non-zero, quantum corrections to neutrino masses are proportional to the masses themselves. Although the SM features many numerical hierarchies, the majority of them are technically-natural ones that could be explained by physics at inaccessibly high energies. The most urgent problems are the hierarchies that aren’t technically natural, like the CC problem and the electroweak hierarchy problem.
Has applying the naturalness principle led directly to a discovery?
It’s fair to say that Gaillard and Lee predicted the charm-quark mass by applying naturalness arguments to the mass-splitting of neutral kaons. Of course, the same arguments were also used to (incorrectly) predict a wildly different value of the weak scale! This is a reminder that naturalness principles can point to a problem in the existing theory, and a scale at which the theory should change, but they don’t tell you precisely how the problem is resolved. The naturalness of the neutral kaon mass splitting, or the charged-neutral pion mass splitting, suggests to me that it is more useful to refer to naturalness as a strategy, rather than as a principle.
A slightly more flippant example is the observation of neutrinos from Supernova 1987A. This marked the beginning of neutrino astronomy and opened the door to unrelated surprises, yet the large water-Cherenkov detectors that detected these neutrinos were originally constructed to look for proton decay predicted by grand unified theories (which were themselves motivated by naturalness arguments).
While it would be great if naturalness-based arguments successfully predict new physics, it’s also worthwhile if they ultimately serve only to draw experimental attention to new places.
What has been the impact of the LHC results so far on naturalness?
There have been two huge developments at the LHC. The first is the discovery of the Higgs boson, which sharpens the electroweak hierarchy problem: we seem to have found precisely the sort of particle whose mass, if natural, points to a significant departure from the SM around the TeV scale. The second is the non-observation of new particles predicted by the most popular solutions to the electroweak hierarchy problem, such as supersymmetry. While evidence for these solutions could lie right around the corner, its absence thus far has inspired both a great deal of uncertainty about the naturalness of the weak scale and a lively exploration of new approaches to the problem. The LHC null results teach us only about specific (and historically popular) models that were inspired by naturalness. It is therefore an ideal time to explore naturalness arguments more deeply. The last few years have seen an explosion of original ideas, but we’re really only at the beginning of the process.
The situation is analogous to the search for dark matter, where gravitational evidence is accumulating at an impressive rate despite numerous null results in direct-detection experiments. These null results haven’t ruled out dark matter itself; they’ve only disfavoured certain specific and historically popular models.
How can we settle the naturalness issue once and for all?
The discovery of new particles around the TeV scale whose properties suggest they are related to the top quark would very strongly suggest that nature is more or less natural. In the event of non-discovery, the question becomes thornier – it could be that the SM is unnatural; it could be that naturalness arguments are irrelevant; or it could be that there are signatures of naturalness that we haven’t recognised yet. Kepler’s symmetry-based explanation of the naturalness of planetary orbits in terms of Platonic solids ultimately turned out to be a red herring, but only because we came to realise that the features of specific planetary orbits are not deeply related to fundamental laws.
Without naturalness as a guide, how do theorists go beyond the SM?
Naturalness is but one of many hints at physics beyond the SM. There are some incredibly robust hints based on data – dark matter and neutrino masses, for example. There are also suggestive hints, such as the hierarchical structure of fermion masses, the preponderance of baryons over antibaryons and the apparent unification of gauge couplings. There is also a compelling argument for constructing new-physics models purely motivated by anomalous data. This sort of “ambulance chasing” does not have a stellar reputation, but it’s an honest approach which recognises that the discovery of new physics may well come as another case of “Who ordered that?” rather than the answer to a theoretical problem.
What sociological or psychological aspects are at work?
If theoretical considerations are primarily shaping the advancement of a field, then sociology inevitably plays a central role in deciding what questions are most pressing. The good news is that the scales often tip, and data either clarify the situation or pose new questions. As a field we need to focus on lucidly articulating the case for (and against) naturalness as a guiding principle, and let the newer generations make up their minds for themselves.
ALICE (A Large Ion Collider Experiment) will soon have enhanced physics capabilities thanks to a major upgrade of its detectors, data-taking and data-processing systems. These upgrades will improve the precision of measurements of the high-density, high-temperature phase of strongly interacting matter, the quark–gluon plasma (QGP), and extend the exploration of new phenomena in quantum chromodynamics (QCD). Since the start of the LHC programme, ALICE has participated in all data-taking runs, with the main emphasis on heavy-ion collisions such as lead–lead, proton–lead and xenon–xenon. The collaboration has been making major inroads into the understanding of the dynamics of the QGP – a state of matter that prevailed in the first instants of the universe and is recreated in droplets at the LHC.
To perform precision measurements of strongly interacting matter, ALICE must focus on rare probes – such as heavy-flavour particles, quarkonium states, real and virtual photons, and low-mass dileptons – as well as the study of jet quenching and exotic nuclear states. Observing rare phenomena requires very large data samples, which is why ALICE is looking forward to the increased luminosity provided by the LHC in the coming years. The interaction rate of lead ions during LHC Run 3 is foreseen to reach around 50 kHz, corresponding to an instantaneous luminosity of 6 × 10²⁷ cm⁻² s⁻¹. This will enable ALICE to accumulate 10 times more integrated luminosity (more than 10 nb⁻¹) and a data sample 100 times larger than what has been obtained so far. In addition, the upgraded detector system will have better efficiency for the detection of short-lived particles containing heavy-flavour quarks thanks to the improved precision of the tracking detectors.
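A quick consistency check of the Run 3 numbers quoted above, assuming a hadronic Pb–Pb cross-section of roughly 8 b (an external input, not stated in the article):

```python
# Interaction rate = cross-section x instantaneous luminosity.
sigma_pbpb_b = 8.0                  # hadronic Pb-Pb cross-section [barn], approx.
sigma_cm2 = sigma_pbpb_b * 1e-24    # 1 barn = 1e-24 cm^2
lumi = 6e27                         # instantaneous luminosity [cm^-2 s^-1]

rate_hz = sigma_cm2 * lumi
print(f"interaction rate ~ {rate_hz/1e3:.0f} kHz")   # ~48 kHz, i.e. ~50 kHz
```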
During Long Shutdown 2 (LS2), several major upgrades to the ALICE detector will take place. These include: a new inner tracking system (ITS) with a new high-resolution, low-material-budget silicon tracker, which extends to the forward rapidities with the new muon forward tracker (MFT); an upgraded time projection chamber (TPC) with gas electron multiplier (GEM) detectors, along with a new readout chip for faster readout; a new fast interaction trigger (FIT) detector and forward diffraction detector. New readout electronics will be installed in multiple subdetectors (the muon spectrometer, time-of-flight detector, transition radiation detector, electromagnetic calorimeter, photon spectrometer and zero-degree calorimeter) and an integrated online–offline (O2) computing system will be installed to process and store the large data volumes.
Detector upgrades
A new all-pixel silicon inner tracker based on CMOS monolithic active pixel sensor (MAPS) technology will be installed covering the mid-rapidity (|η| < 1.5) region of the ITS as well as the forward rapidity (–3.6 < η < –2.45) of the MFT. In MAPS technology, both the sensor for charge collection and the readout circuit for digitisation are hosted in the same piece of silicon instead of being bump-bonded together. The chip developed by ALICE is called ALPIDE, and uses a 180 nm CMOS process provided by TowerJazz. With this chip, the silicon material budget per layer is reduced by a factor of seven compared to the present ITS. The ALPIDE chip is 15 × 30 mm² in area and contains more than half a million pixels organised in 1024 columns and 512 rows. Its low power consumption (< 40 mW/cm²) and excellent spatial resolution (~5 μm) are perfect for the inner tracker of ALICE.
The ITS consists of seven cylindrical layers of ALPIDE chips, summing up to 12.5 billion pixels and a total area of 10 m². The pixel chips are installed on staves with radial distances 22–400 mm away from the interaction point (IP). The beam pipe has also been redesigned with a smaller outer radius of 19 mm, allowing the first detection layer to be placed closer to the IP at a radius of 22.4 mm compared to 39 mm at present. The brand-new ITS detector will improve the impact parameter resolution by a factor of three in the transverse plane and by a factor of five along the beam axis. It will extend the tracking capabilities to much lower pT, allowing ALICE to perform measurements of heavy-flavour hadrons with unprecedented precision and down to zero pT.
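A simple cross-check of the ALPIDE and ITS geometry figures quoted above (pure arithmetic on the stated numbers):

```python
import math

# ALPIDE chip figures from the text (approximate active area).
chip_w_mm, chip_h_mm = 30.0, 15.0
cols, rows = 1024, 512

pitch_col_um = chip_w_mm * 1000 / cols   # ~29.3 um
pitch_row_um = chip_h_mm * 1000 / rows   # ~29.3 um (roughly square pixels)

# Whole-ITS figure from the text.
n_pixels_total = 12.5e9
area_m2 = n_pixels_total * (pitch_col_um * 1e-6) * (pitch_row_um * 1e-6)

print(f"pixel pitch ~ {pitch_col_um:.1f} x {pitch_row_um:.1f} um")
print(f"total sensitive area ~ {area_m2:.1f} m^2")              # ~10.7 m^2, consistent with ~10 m^2
print(f"naive binary resolution ~ {pitch_col_um/math.sqrt(12):.1f} um")
# The quoted ~5 um resolution is better than pitch/sqrt(12) thanks to
# charge sharing between neighbouring pixels.
```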
In the forward-rapidity region, ALICE detects muons using the muon spectrometer. The new MFT detector is designed to add vertexing capabilities to the muon spectrometer and will enable a number of new measurements that are currently beyond reach. As an example, it will allow us to distinguish J/ψ mesons that are produced directly in the collision from those that come from decays of mesons that contain a beauty quark. The MFT consists of five disks, each composed of two MAPS detection planes, placed perpendicular to the beam axis between the IP and the hadron absorber of the muon spectrometer.
The TPC is the main device for tracking and charged-particle identification in ALICE. The readout rate of the TPC in its present form is limited by its readout chambers, which are based on multi-wire proportional chambers. To avoid drift-field distortions produced by ions from the amplification region, the present readout chambers use a charge-gating scheme to collect back-drifting ions, which limits the readout rate to 3.5 kHz. To overcome this limitation, new readout chambers employing a novel configuration of stacks of four GEMs have been developed during an extensive R&D programme. This arrangement allows for continuous readout at 50 kHz with lead–lead collisions, at no cost to detector performance. The production of the 72 inner (one GEM stack each) and outer (three GEM stacks each) chambers is now practically completed and certified. The replacement of the chambers in the TPC will take place in summer 2019, once the TPC has been extracted from the experimental cavern and transported to the surface.
The new fast interaction trigger, FIT, comprises two arrays of Cherenkov radiators with MCP–PMT sensors and a single, large-size scintillator ring. The arrays will be placed on both sides of the IP. FIT will be the primary trigger, luminosity and collision-time detector in ALICE, capable of triggering at an interaction rate of 50 kHz with a time resolution better than 30 ps and an efficiency of 99%.
The newly designed ALICE readout system represents a change in approach: all lead–lead collisions produced in the accelerator, at a rate of 50 kHz, will be read out in a continuous stream. Triggered readout will nevertheless still be used by some detectors and for commissioning and calibration runs, and the central trigger processor is being upgraded to accommodate the higher interaction rate. The readout of the TPC and muon chambers will be performed by SAMPA, a newly developed 32-channel front-end analogue-to-digital converter with an integrated digital signal processor.
Performance boost
The significantly improved ALICE detector will allow the collaboration to collect 100 times more events during LHC Run 3 compared to Run 1 and Run 2, which requires the development and implementation of a completely new readout and computing system. The O2 system is designed to combine all the computing functionalities needed in the experiment: detector readout, event building, data recording, detector calibration, data reconstruction, physics simulation and analysis. The total data volume produced by the front-end cards of the detectors will increase significantly, reaching a sustained throughput of up to 3 TB/s. To minimise the computing requirements for data processing and storage, the ALICE computing model is designed to reduce the data volume read out from the detectors as early as possible in the processing chain. This is achieved by online processing of the data, including detector calibration and event reconstruction in several steps, synchronously with data taking. At its peak, the estimated data throughput to mass storage is 90 GB/s.
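Simple arithmetic on the O2 throughput figures quoted above (the per-day volume is an illustrative derived number, assuming sustained peak output):

```python
# Derived from the figures in the text; not O2 specifications.
input_rate_tb_s = 3.0        # from the detector front-ends
output_rate_gb_s = 90.0      # to mass storage after online processing

reduction_factor = input_rate_tb_s * 1000 / output_rate_gb_s   # ~33x
per_day_pb = output_rate_gb_s * 86400 / 1e6                    # ~7.8 PB/day at peak

print(f"online data reduction ~ {reduction_factor:.0f}x")
print(f"peak volume to storage ~ {per_day_pb:.1f} PB/day")
```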
A new computing facility for the O2 system is being installed on the surface, near the experiment. It will have a data-storage system large enough to accommodate a substantial fraction of a full year’s data, and will provide the interface to permanent data storage at the Tier-0 Grid computing centre at CERN, as well as at other data centres.
ALICE upgrade activities are proceeding at a frenetic pace. Soon after the machine stopped in December, experts entered the cavern to open the massive doors of the magnet and started dismounting the detector in order to prepare for the upgrade. Detailed planning and organisation of the work are mandatory to stay on schedule, as Arturo Tauro, the deputy technical coordinator of ALICE explains: “Apart from the new detectors, which require dedicated infrastructure and procedures, we have to install a huge number of services (for example, cables and optical fibres) and perform regular maintenance of the existing apparatus. We have an ambitious plan and a tight schedule ahead of us.”
When the ALICE detector emerges revitalised from the two busy and challenging years of work ahead, it will be ready to enter into a new era of high-precision measurements that will expand and deepen our understanding of the physics of hot and dense QCD matter and the quark–gluon plasma.
In high-energy physics laboratories, experiments use heavy-ion collisions to investigate the properties of matter at extremely high temperature and density, and to study the quark–gluon plasma. This monograph explains the ideas involved in the theoretical analysis of the data produced in such experiments. It comprises three parts, the first two of which are independent but lay the ground for the topics addressed later.
The book starts with an overview of the (vacuum) theory of hadronic interactions at low energy: vacuum propagators for fields of different spins are introduced and then the phenomenon of spontaneous symmetry breaking leading to Goldstone bosons and chiral perturbation theory are discussed.
The second part covers equilibrium thermal field theory, formulated in the real-time formalism. Finally, in the third part, the methods previously developed are applied to the study of different thermal one- and two-point functions in the hadronic phase, using chiral perturbation theory.
Each chapter ends with a selection of exercises, all fully worked out. These are used to provide important side results or to develop calculations without breaking the flow of the main text. Similarly, some of the results mentioned in the text are derived in a few appendices. The book is a useful reference for graduate students interested in relativistic thermal field theory.
A century after its formulation by Einstein, the theory of general relativity is at the core of our interpretation of various astrophysical and cosmological observations – from neutron stars and black-hole formation to the accelerated expansion of the universe. This new advanced textbook on relativity aims to present all the different aspects of this brilliant theory and its applications. It brings together, in a coherent way, classical Newtonian physics, special relativity and general relativity, emphasising common underlying principles.
The book is structured in three parts around these topics. First, the authors provide a modern view of Newtonian theory, focusing on the aspects needed for understanding quantum and relativistic contemporary physics. This is followed by a discussion of special relativity, presenting relativistic dynamics in inertial and accelerated frames, and an overview of Maxwell’s theory of electromagnetism.
In the third part the authors delve into general relativity, developing the geometrical framework in which Einstein’s equations are formulated, and present many relevant applications, such as black holes, gravitational radiation and cosmology.
This book is aimed at undergraduate and graduate students, as well as researchers wishing to acquire a deeper understanding of relativity. But it could also appeal to the curious reader with a scientific background who is interested in discovering the profound implications of relativity and its applications.
Scientific essays well suited to the interested layperson are notoriously difficult to write. It is therefore not surprising that various popular books, articles and websites recycle similar analogies – or even entire discussions – to explain scientific concepts in the same standardised, though very polished, language. CERN theorist Alvaro De Rújula recently challenged this unfortunate and relatively recent trend by proposing a truly original and unconventional essay for agile minds. There is no doubt that this book will be appreciated not only by the public but also by undergraduate students, teachers and active scientists.
Enjoy our Universe consists of 37 short chapters recounting the serendipitous evolution of basic science over the past 150 years, roughly starting with the Faraday–Maxwell unification and concluding with the discoveries of the Higgs boson and of gravitational waves. While going through the “fun” of our universe, the author describes the conceptual and empirical triumphs of classical and quantum field theories without indulging in excessive historical or technical detail. Those who have had the chance to attend lectures or talks given by De Rújula will recognise the “parentheses” (i.e. swift digressions) that he literally opens and closes in his presentations with gigantic brackets on the slides. A rather original glossary is included at the end of the text for the benefit of general readers.
This book is also a collection of opinions, reminiscences and healthy provocations of an active scientist whose contributions undeniably shaped the current paradigm of fundamental interactions. This is a bonus for practitioners of the field (and for curious colleagues), who will often find the essence of long-standing diatribes hidden in a collection of apparently innocent jokes or in the caption of a figure. As the author tries to argue in his introduction, science should always be discussed with that joyful and playful attitude we normally use when talking about sport and other interesting matters not immediately linked to the urgencies of daily life.
One of the most interesting subliminal suggestions of this book is that physics is not a closed logical system. Basic science in general (and physics in particular) can only prosper if the confusion of ideas is tolerated and encouraged, at least within certain reasonable limits.
The text is illustrated with drawings by the author himself – an aspect that, among others, brings to mind George Gamow’s imaginative popular essay Gravity (1962), for which the author drew his own illustrations (unfortunately not in colour) with a talent comparable to De Rújula’s. The book is also reminiscent of Victor Weisskopf’s autobiographical essay The Joy of Insight, written almost 30 years ago, which echoes the enjoyment of the universe and suggests that the true motivation for basic science is the fun of curiosity: all the rest is irrelevant. So, please, enjoy our universe, since you have no other choice!
Enrico Fermi can be considered one of the greatest physicists of all time, thanks to his creative genius in both theoretical and experimental physics. This book describes his prodigious story, as a man and a scientist.
Born in Rome in 1901, Fermi spent the first part of his life in Italy, where he made his brilliant debut in theoretical physics in 1926 by applying statistical mechanics to atomic physics in a quantum framework, thus sealing the birth of what is now known as Fermi–Dirac statistics. In 1933 he postulated the original theory of weak interactions to explain the mysterious results on nuclear β decays. Having soon become a theoretical “superstar”, he then switched to experimental nuclear physics, leading a celebrated team of young physicists at the University of Rome, known as the “boys”. Among them were Edoardo Amaldi, Ettore Majorana, Bruno Pontecorvo, Franco Rasetti and Emilio Segrè. They nicknamed him “the Pope” since he knew and understood everything and was considered to be simply infallible. His discoveries on neutron-induced radioactivity and on the neutron slowing-down effect earned him the Nobel Prize in Physics in 1938.
Those were, however, difficult years for Fermi, owing to Italy’s inconsistent research strategy and the harsh political climate of fascism and antisemitism. Fermi left for the US with his family in December 1938, using the Nobel ceremony as a chance to travel abroad. Initially at Columbia University, Fermi then moved to the “Met Lab” of the University of Chicago, which was the seed of the Manhattan Project. There, he created the first self-sustained nuclear reactor in December 1942. The breakthrough ushered in the nuclear age, leaving a lasting impact on physics, engineering, medicine and energy – not to mention the development of nuclear weapons. In 1944 Fermi moved to the Manhattan Project’s secret laboratory at Los Alamos. Within this project, he collaborated with some of the world’s top scientists, including Hans Bethe, Niels Bohr, Richard Feynman, John von Neumann, Isidor Rabi, Leo Szilard and Edward Teller. These were terrible times of war.
When the Second World War ended, Fermi resumed his research activities with energy and enthusiasm. On the experimental front he focused on nuclear physics, particle accelerators and technology, and early computers. On the theoretical front he concentrated on the origin of extremely high-energy cosmic rays. He also campaigned for the peaceful use of nuclear physics. As in Rome, in Chicago he was also the master of a wonderful school of pupils, several of whom went on to become Nobel laureates. Fermi sadly died prematurely in 1954.
This book is about the epic life of Fermi, mostly known to the general public for the first ever nuclear reactor and the Manhattan Project, but to scientists for his theoretical and experimental discoveries – all diverse and crucial in modern physics – which always resulted in major advances. He remains less well known as a personality and public figure, and his scientific legacy is somewhat underestimated. The merit of this book is therefore to bring Fermi’s genius within everyone’s reach.
Many renowned texts have been dedicated to Fermi, offering various perspectives on his life and his work. On Fermi’s personal life there is Atoms in the Family (1954) by his widow, Laura. Exhaustive information about Fermi’s outstanding work in physics can be found in the volume Enrico Fermi, Physicist (1970) by his friend and colleague Emilio Segrè, Nobel laureate and Gino Segrè’s uncle, and in Enrico Fermi: Collected Papers, two volumes published in the 1960s by the University of Chicago. Also worth mentioning are: Fermi Remembered (2004), edited by Nobel laureate James W Cronin; Enrico Fermi: His Work and Legacy (2001, then 2004), edited by C Bernardini and L Bonolis; and The Lost Notebook of Enrico Fermi by F Guerra and N Robotti (2015, then 2017), the latter two published by the Italian Physical Society–Springer. Finally, published almost at the same time as Segrè and Hoerlin’s book, is another biography of Fermi: The Last Man Who Knew Everything by D N Schwartz, the son of Nobel laureate Melvin Schwartz. In their “four-handed” book, Segrè and Hoerlin highlight with expertise the scientific biography of Fermi and his extraordinary achievements, and describe with emotion the human, social and political aspects of his life.
Readers familiar with Fermi’s story will enjoy this book, which is as scientifically sound as a textbook but at the same time bears the gripping character of a novel.
The High-Luminosity LHC (HL-LHC) has reached its halfway point. The upgrade project was launched eight years ago and is scheduled to start up in 2026, following major interventions to the CERN accelerator complex. From 15 to 18 October, representatives of the institutes contributing to the HL-LHC gathered at CERN for the 8th annual meeting of the HL-LHC to assess progress as the project moves from prototyping to the series-production phase for much of the equipment.
The HL-LHC annual meeting is a chance to conduct a global review of the project. The civil-engineering work has progressed since it began in the spring: excavations have reached a depth of 30 m at Point 1 (ATLAS) and 25 m at Point 5 (CMS). The two 80 m shafts should be fully excavated by the beginning of 2019. As for the accelerator, one of the key tasks is the production of around 100 magnets of 11 different types. Some of these, notably the main superconducting quadrupole magnets that will replace the LHC’s triplets and focus the beams very strongly before they collide, are made from the conductor niobium tin, which is particularly difficult to work with. The short prototype phase is already nearing completion for the quadrupole magnets: the long (7.15 m) quadrupoles are being produced at CERN, while shorter (4.2 m) quadrupoles are being developed in the framework of the US LHC-AUP (LHC Accelerator Upgrade Project) collaboration. Several short prototypes have already reached the required intensities on both sides of the Atlantic.
New dipole magnets at the interaction points, which divert the beams before and after the collision point, are being developed in Japan and Italy. One short model has been successfully tested at the KEK laboratory in Japan and a second is in the process of being tested. INFN in Italy is also assembling a short model. Finally, progress is being made on the development of the corrector magnets at CERN and in Spain (CIEMAT), Italy (INFN) and China (IHEP), with several prototypes already tested. In 2022, a test line will be installed at CERN’s SM18 hall to test the first magnet chains.
One of the major successes of 2018 is the installation in the Super Proton Synchrotron (SPS) of a test bench with an autonomous cryogenic unit. The test bench houses two DQW (double-quarter wave) crab cavities, one of two designs under study (CERN Courier May 2018 p18). The two cavities rotated the proton bunches as soon as the tests began in May, marking a world first. The construction of the DQW cavities will continue while the second architecture, the radiofrequency dipole, is being developed in the US.
Many other developments were presented during the symposium: new collimators have been tested in the LHC; a beam absorber for the injection points from the SPS was tested over the summer and will be installed during the LHC’s second long shutdown; a demonstrator for a magnesium-diboride superconducting link is currently being validated; and studies have been undertaken to test and adjust the remote alignment of all the equipment in the interaction regions.
Over the four days that the meeting took place, some 180 presentations covered a wide range of technologies developed for the HL-LHC and beyond.