In high-energy nucleus–nucleus collisions, heavy-flavour quarks (charm and beauty) are produced on a very short time scale in initial hard-scattering processes and thus they experience the entire evolution of the collision. Such quarks are valuable probes to study the mechanisms of energy loss and hadronisation in the hot and dense matter, the quark–gluon plasma, formed in heavy-ion collisions.
To investigate these effects, proton–proton (pp) and proton–lead (p–Pb) collisions are measured as a reference. While the former allows the study of heavy-flavour production when no medium is formed, the latter gives access to cold nuclear matter effects, namely parton scattering in the initial state and modifications of the parton densities in the nucleus.
The excellent electron identification capabilities and track impact parameter resolution of the ALICE detector enable measurements of electrons from heavy-flavour hadron decays at mid-rapidity. To study the predicted quark mass dependence of the parton energy loss, the contributions of electrons from charm- and beauty-hadron decays are statistically separated using the different impact parameter distributions as a proxy for their decay length and empirical estimations of the background.
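The statistical separation described above can be illustrated with a toy template fit. This is not ALICE's actual analysis: the Laplace-shaped impact-parameter templates, their widths and the 30% beauty fraction are all invented for the sketch, which only shows the general idea of fitting component fractions from distributions of different widths.

```python
# Toy illustration (not the ALICE fit): separate two components whose
# impact-parameter distributions have different widths, using a binned
# maximum-likelihood template fit. All shapes and numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
true_f_beauty = 0.30
# Beauty hadrons have longer decay lengths, hence broader impact parameters.
charm  = rng.laplace(0.0, 60.0,  size=int(n * (1 - true_f_beauty)))  # um
beauty = rng.laplace(0.0, 250.0, size=int(n * true_f_beauty))        # um
data = np.concatenate([charm, beauty])

bins = np.linspace(-1000, 1000, 201)
h_data, _ = np.histogram(data, bins)

# Templates from independent high-statistics samples of each source
t_c, _ = np.histogram(rng.laplace(0.0, 60.0,  size=10**6), bins)
t_b, _ = np.histogram(rng.laplace(0.0, 250.0, size=10**6), bins)
t_c = t_c / t_c.sum()
t_b = t_b / t_b.sum()

def neg_log_like(f):
    """Poisson negative log-likelihood for beauty fraction f."""
    mu = len(data) * (f * t_b + (1 - f) * t_c)
    mask = mu > 0
    return -(h_data[mask] * np.log(mu[mask]) - mu[mask]).sum()

fs = np.linspace(0.01, 0.99, 99)
best_f = fs[np.argmin([neg_log_like(f) for f in fs])]
print(f"fitted beauty fraction: {best_f:.2f} (true {true_f_beauty})")
```

With well-separated template widths, the fit recovers the injected fraction; in the real measurement the templates come from simulation and the background is estimated empirically.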
The measurement of electrons from heavy-flavour hadron decays in p–Pb collisions shows no indication of a modification of the production with respect to pp collisions at high transverse momentum (pT), indicating that cold nuclear matter effects are small. The observed reduction in yield at high pT in central Pb–Pb collisions relative to pp interactions can thus be attributed to the presence of the hot and dense medium formed in Pb–Pb collisions. This implies that beauty quarks interact with the medium.
The larger suppression of electrons from both charm- and beauty-hadron decays compared with the beauty-only measurement is consistent with the ordering of charm and beauty suppression seen previously in the comparison of prompt D mesons (measured by ALICE) and J/ψ from B meson decays (measured by CMS). The larger samples of Pb–Pb collisions in Run 2 will improve the precision of the measurements and will make it possible to determine if beauty quarks participate in the collective expansion of the quark–gluon plasma.
Recently, the CMS collaboration performed an updated search for a neutral Higgs boson decaying into two τ leptons using 13 fb⁻¹ of data recorded during 2016. Although the existence of the Higgs has been established beyond doubt since its debut in the CMS and ATLAS detectors in 2012, the vast majority of Higgs-boson decays recorded so far are into pairs of bosons. Observing the Higgs via its decays into pairs of fermions further tests the predictions of the Standard Model (SM). In particular, τ leptons have played a major role in measuring the Yukawa couplings between the Higgs and fermions, and have thus proved to be an important tool in the search for new physics at the LHC.
CMS first reported evidence for Higgs to ττ decays in 2014. With a lifetime of around 10⁻¹³ s and a mass of 1.776 GeV, τ leptons present a unique but challenging experimental signature at hadron colliders. Their very short lifetime means that τ particles decay inside the LHC beam pipe, before reaching the inner layers of the CMS detector. Approximately 35% of the time, the τ decays into two neutrinos plus a lighter lepton, while 65% of the time it decays into a single neutrino and hadrons. τ decays yield low charged- and neutral-particle multiplicities: more than 95% of the hadronic decays contain just one or three charged hadrons and at most two neutral pions. The primary difficulty when dealing with the τ is the distinction between genuine τ leptons and copiously produced quark and gluon jets that can be misidentified as taus.
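A back-of-envelope calculation makes the "decays inside the beam pipe" point concrete. The sketch below uses the measured τ lifetime (about 2.9 × 10⁻¹³ s, the "around 10⁻¹³ s" quoted above) and mass to estimate the mean lab-frame decay length; the 45 GeV example momentum is an arbitrary choice for illustration.

```python
# Why a tau decays before reaching the detector: mean decay length estimate.
C = 299_792_458.0          # speed of light, m/s
TAU_LIFETIME = 2.903e-13   # s (the "around 1e-13 s" quoted in the text)
TAU_MASS = 1.776           # GeV

def decay_length(p_gev: float) -> float:
    """Mean lab-frame decay length (m) of a tau with momentum p_gev."""
    gamma_beta = p_gev / TAU_MASS   # p/m = beta * gamma
    return gamma_beta * C * TAU_LIFETIME

c_tau = C * TAU_LIFETIME            # proper decay length, ~87 micrometres
print(f"c*tau = {c_tau * 1e6:.0f} um")
print(f"a 45 GeV tau travels ~{decay_length(45.0) * 1e3:.1f} mm on average")
```

Even with a Lorentz boost of order 25, the flight distance is a few millimetres, far inside the centimetre-scale beam-pipe radius, so only the decay products are seen.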
To identify the dominant τ decay modes, CMS has developed a powerful τ reconstruction algorithm, which makes use of the single-particle reconstruction procedure (called particle flow). Charged hadrons are combined with photons from neutral pion decays to reconstruct τ decay modes with one or three charged hadrons and neutral pions (figure 1). The algorithm also pays particular attention to the effects of detector materials in converting photons into electron–positron pairs. The large magnetic field of CMS causes secondary electrons to bend, resulting in broad signatures in the phi (azimuthal) co-ordinate, and “strips” are created by clustering photons and electrons via an iterative process. In a new development for LHC Run 2, the strip size is allowed to vary based on the momentum of the clustered candidates.
Applying the latest τ algorithm, along with numerous other analysis techniques, CMS finds no excess of events in which a Higgs decays into two τ leptons compared to the expectation from the SM. Instead, upper limits were determined for the product of the production cross-section and branching fraction for masses in the region 90–3200 GeV, and the results were also interpreted in the context of the Minimal Supersymmetric SM (MSSM) (figure 2). The LHC is now operating at its highest energy and an increase in instantaneous luminosity is planned. The next few years of operations will therefore be vital for further testing the SM and MSSM using the τ lepton as a tool.
The SNOLAB laboratory in Ontario, Canada, has received a grant of $28.6m to help secure its next three years of operations. The facility is one of 17 research facilities to receive support through Canada’s Major Science Initiative (MSI) fund, which exists to secure state-of-the-art national research facilities.
SNOLAB, which is located in a mine 2 km beneath the surface, specialises in neutrino and dark-matter physics and claims to be the deepest cleanroom facility in the world. Current experiments located there include: PICO and DEAP-3600, which search for dark matter using bubble-chamber and liquid-argon technology, respectively; EXO, which aims to measure the mass and nature of the neutrino; HALO, designed to detect supernovae; and a new neutrino experiment SNO+ based on the existing SNO detector.
The new funds will be used to employ the 96-strong SNOLAB staff and support the operations and maintenance of the lab’s facilities.
Researchers at the GSI Helmholtz Center for Heavy Ion Research in Darmstadt, Germany, have demonstrated the feasibility of using carbon ions to treat cardiac arrhythmia, in which abnormal electrical patterns can lead to sudden heart failure or permanent damage as a result of stroke. Conventional treatments for certain forms of cardiac arrhythmia include drugs or “catheter ablation,” in which catheters are guided through blood vessels to the heart to destroy certain tissue. The GSI team, in conjunction with physicians from Heidelberg University and the Mayo Clinic in the US, have now shown that high-energy carbon ions produced by a particle accelerator can in principle be used to perform such treatments without catheters.
The non-invasive procedure induces specific changes to cardiac tissue that prevent the transmission of electrical signals, permanently interrupting the propagation of disruptive impulses. Following promising results from initial tests on cardiac cell cultures and beating-heart preparations, the researchers carried out an animal study. Further detailed studies are needed, however, before the method can start to benefit patients.
A crucial advantage of the new method is that the ions can penetrate to any desired depth. Irradiating cardiac tissue with carbon ions is therefore a promising, non-invasive alternative to catheters, and ultimately ion-based procedures are expected to take a few minutes rather than a few hours. “It is exciting that the carbon beam could work with surgical precision in particularly sensitive areas of the body,” says Paolo Giubellino, scientific managing director of FAIR and GSI and former spokesperson of the LHC’s ALICE experiment at CERN. “We’re proud that the first steps toward a new therapy have now been taken.”
Although the night sky appears dark between the stars and galaxies that we can see, a strong background emission is present in other regions of the electromagnetic spectrum. At millimetre wavelengths, the cosmic microwave background (CMB) dominates this emission, while a strong X-ray background peaks at sub-nanometre wavelengths. For the past 50 years it has also been known that a diffuse gamma-ray background at picometre wavelengths illuminates the sky away from the strong emission of the Milky Way and known extra-galactic sources.
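The wavelengths quoted above can be translated into photon energies with E = hc/λ, which is how the same backgrounds are usually labelled in energy units. A minimal sketch (the example wavelengths are round numbers chosen for illustration):

```python
# Photon wavelength -> energy for the backgrounds mentioned in the text,
# using E = hc/lambda with hc ~= 1239.84 eV*nm.
HC_EV_NM = 1239.84  # eV * nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nanometres."""
    return HC_EV_NM / wavelength_nm

print(f"CMB   (1 mm)  : {photon_energy_ev(1e6):.2e} eV")      # milli-eV scale
print(f"X-ray (0.5 nm): {photon_energy_ev(0.5):.0f} eV")      # keV scale
print(f"gamma (1 pm)  : {photon_energy_ev(1e-3) / 1e6:.2f} MeV")
```

A picometre photon thus carries about an MeV, and the still shorter wavelengths probed by Fermi correspond to energies from tens of MeV upwards.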
This so-called isotropic gamma-ray background (IGRB) is expected to be uniform on large scales, but can still contain anisotropies on smaller scales. The study of these anisotropies is important for identifying the nature of the unresolved IGRB sources. The best candidates are star-forming galaxies and active galaxies, in particular blazars, which have a relativistic jet pointing towards the Earth. Another possibility to be investigated is whether there is a detectable contribution from the decay or the annihilation of dark-matter particles, as predicted by models of weakly interacting massive particles (WIMPs).
Using NASA’s Fermi Gamma-ray Space Telescope, a team led by Mattia Fornasa from the University of Amsterdam in the Netherlands studied the anisotropies of the IGRB in observations acquired over more than six years. This follows earlier results published in 2012 by the Fermi collaboration and shows that there are two different classes of gamma-ray sources. A specific type of blazar appears to dominate at the highest energies, while at lower energies star-forming galaxies or another class of blazar is thought to imprint a steeper spectral slope in the IGRB. A possible additional contribution from WIMP annihilation could not be identified by Fornasa and collaborators.
The first step in such an analysis is to exclude the sky area most contaminated by the Milky Way and extra-galactic sources, and then to subtract remaining galactic contributions and the uniform emission of the IGRB. The resulting images include only the IGRB anisotropies, which can be characterised by computing the associated angular power spectrum (APS) similarly to what is done for the CMB anisotropies. The authors do this both for a single image (“auto-APS”) and between images recorded in two different energy regions (“cross-APS”).
The derived auto-APS and cross-APS are found to be consistent with a Poisson distribution, which means they are constant on all angular scales. This absence of scale dependence in gamma-ray anisotropies suggests that the main contribution comes from distant active galactic nuclei. On the other hand, the emission by star-forming galaxies and dark-matter structures would be dominated by their local distribution that is less uniform on the sky and thus would lead to enhanced power at characteristic angular scales. This allowed Fornasa and co-workers to derive exclusion limits on the dark-matter parameter space. Although less stringent than the best limits achieved from the average intensity of the IGRB or from the observation of dwarf spheroidal galaxies, they independently confirm the absence, so far, of a gamma-ray signal from dark matter.
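The statement that a Poisson-like signal is "constant on all angular scales" is the familiar white-noise property of uncorrelated point sources. The toy below (not the authors' pipeline, which works on the sphere with a proper angular power spectrum) uses a one-dimensional flat-sky analogue to show that Poisson count fluctuations produce a flat power spectrum:

```python
# Illustration: uncorrelated (Poisson) fluctuations give a flat power
# spectrum. A 1-D flat-sky analogue of the auto-APS, using numpy's FFT.
import numpy as np

rng = np.random.default_rng(0)
n_pix, mean_counts = 4096, 100.0
counts = rng.poisson(mean_counts, size=n_pix).astype(float)
delta = counts - counts.mean()

power = np.abs(np.fft.rfft(delta))**2 / n_pix   # periodogram
low  = power[1:100].mean()    # large "angular" scales
high = power[-100:].mean()    # small "angular" scales
print(f"large-scale power ~ {low:.0f}, small-scale power ~ {high:.0f}")
```

Both averages come out near the Poisson variance, with no scale dependence; clustered sources such as nearby star-forming galaxies or dark-matter structures would instead add excess power at characteristic scales.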
The constraints on dark matter will improve with new data continuously collected by Fermi, but a potentially more promising approach is to complement them at higher gamma-ray energies with data from the future Cherenkov Telescope Array and possibly also with high-energy neutrinos detected by IceCube.
This 11 m-high structure with thick steel walls will soon contain a prototype detector for the Deep Underground Neutrino Experiment (DUNE), a major international project based in the US for studying neutrinos and proton decay. It is being assembled in conjunction with CERN’s Neutrino Platform, which was established in 2014 to support neutrino experiments hosted in Japan and the US (CERN Courier July/August 2016 p21), and is pictured here in December as the roof of the structure was lowered into place. Another almost identical structure is under construction nearby and will house a second prototype detector for DUNE. Both are being built at CERN’s new “EHN1” test facility, which was completed last year in the North Area of the laboratory’s Prévessin site.
DUNE, which is due to start operations in the next decade, will address key outstanding questions about neutrinos. In addition to determining the ordering of the neutrino masses, it will search for leptonic CP violation by precisely measuring differences between the oscillations of muon-type neutrinos and antineutrinos into electron-type neutrinos and antineutrinos, respectively (CERN Courier December 2015 p19). To do so, DUNE will consist of two advanced detectors placed in an intense neutrino beam produced at Fermilab’s Long-Baseline Neutrino Facility (LBNF). One will record particle interactions near the source of the beam before the neutrinos have had time to oscillate, while a second, much larger detector will be installed deep underground at the Sanford Underground Research Laboratory in Lead, South Dakota, 1300 km away.
Insertion of the 3 × 1 × 1 m3 technology demonstrator in the cryostat of the double-phase protoDUNE module. Image credit: M Brice/CERN.
In collaboration with CERN, the DUNE team is testing technology for DUNE’s far detector based on large liquid-argon (LAr) time-projection chambers (TPCs). Two different technologies are being considered – single-phase and double-phase LAr TPCs – and the eventual DUNE detectors will comprise four modules, each with a total LAr mass of 17 kt. The single-phase technique is well established, having been deployed in the ICARUS experiment at Gran Sasso, while the double-phase concept offers potential advantages. Both may be used in the final DUNE far detector. Scaling LAr technology to such industrial levels presents several challenges – in particular the very large cryostats required, which has led the DUNE collaboration to use technological solutions inspired by the liquified-natural-gas (LNG) shipping industry.
The outer structure of the cryostat (red, pictured at top) for the single-phase protoDUNE module is now complete, and an equivalent structure for the double-phase module is taking shape just a few metres away and is expected to be complete by March. In addition, a smaller technology demonstrator for the double-phase protoDUNE detector is complete and is currently being cooled down at a separate facility on the CERN site (image above). The 3 × 1 × 1 m3 module will allow the CERN and DUNE teams to perfect the double-phase concept, in which a region of gaseous argon situated above the usual liquid phase provides additional signal amplification.
The large protoDUNE modules are planned to be ready for test beam by autumn 2018 at the EHN1 facility using dedicated beams from the Super Proton Synchrotron. Given the intensity of the future LBNF beam, for which Fermilab’s Main Injector recently passed an important milestone by generating a 700 kW, 120 GeV proton beam for a period of more than one hour, the rate and volume of data produced by the DUNE detectors will be substantial. Meanwhile, the DUNE collaboration continues to attract new members and discussions are now under way to share responsibilities for the numerous components of the project’s vast far detectors (see “DUNE collaboration meeting comes to CERN” in this month’s Faces & Places).
Inside the IB3 Tech Building at Fermilab on the outskirts of Chicago, a heavy-duty machine several metres long slowly winds a flat superconducting cable. Watching the bespoke coil winder – called the Spirex and manufactured by Italian firm SELVA – in action, and the meticulous attention to detail from the coil’s specialist operators, is mesmerising. Their task is to fabricate the precision coils that will form the core of novel magnets for CERN’s High-Luminosity LHC (HL-LHC) project, scheduled to begin operation in the early 2020s. “It has to make 50 turns in total, 22 on the inner layer and 28 on the outer,” explains Fred Nobrega, of Fermilab’s magnet-systems department. The main challenge is the niobium-tin (Nb3Sn) material, he says. “Bend it and it breaks like spaghetti.”
The HL-LHC magnets will be built from Nb3Sn, a new conductor used for the first time in an accelerator. Unlike copper, however,Nb3Sn is extremely brittle. Winding turns around the ends of the coil is particularly difficult, says Nobrega, and new chemical and heat treatments are being developed in the current R&D phase of the project at Fermilab to address this issue. The aim is to move from the prototype stage directly to the mass production of 45 long coils that are uniform and of high quality. A further 45 coils will be manufactured more than 1000 km away at Brookhaven National Laboratory (BNL).
Fermilab’s Giorgio Apollinari in the former assembly hall of the CDF experiment, where preparations for HL-LHC magnets are under way.
The HL-LHC relies on a number of innovative magnet and accelerating technologies, most of which are not available off-the-shelf. Key to the new accelerator configuration are powerful superconducting dipole and quadrupole magnets with field strengths of 11 and 12 T, respectively (for comparison, the superconducting niobium-titanium dipoles that guide protons around the existing LHC have fields of around 8.3 T). The new quadrupoles will be installed on either side of the LHC collision points to increase the total number of proton–proton collisions by a factor 10, therefore boosting the chances of a discovery. Although the project requires modifications to just 5% of the current LHC configuration (see article on p28), each one of the HL-LHC’s key innovative technologies poses exceptional challenges that involve several institutes around the world.
Magnets of choice
Fermilab has a glorious history in superconductivity. It was here, for example, that the first large accelerator based on superconducting magnets was built. “But more than that, it was shown that [superconducting magnets] could be reliably employed in a collider experiment for hours and hours of stable beams,” says physicist Giorgio Bellettini, who was spokesperson of the CDF experiment at Fermilab’s Tevatron collider during the mid-1990s at the time the top quark was discovered there. “The LHC experience is built upon this previous large endeavour.”
Flat Nb3Sn cables, coloured white after treatment with glass fibre to insulate each turn, are slowly fed from the coil winder to form prototype dipole magnets for the HL-LHC
The plan is to develop and build half of the focusing magnets for the HL-LHC in the US. These have the specific project labels Q1 and Q3, and are a collaboration between three laboratories: Fermilab, BNL and Lawrence Berkeley National Laboratory in California. Nb3Sn technology, whose development has been supported by the US Department of Energy, was not applicable to accelerator magnets until around a decade ago. Now, Nb3Sn magnets are the technology of choice. The prototypes being developed here are 4 m long, and once assembled with the surrounding “cold mass” to keep them below the superconducting operational temperature of Nb3Sn, they will grow to around twice this length.
The innovative feature of these magnets is their very large aperture – 150 mm in diameter – which is necessary to focus the proton beams more tightly at the interaction points. It also allows greater control of the stress on the magnets and the coils induced by the large magnetic field, explains Giorgio Apollinari, who joined Fermilab in the early days and is now director of the US LHC Accelerator Research Program (LARP). No magnet today can achieve fields of 12 T with such a big opening, which is three times larger than that of the existing LHC dipoles. This is a new development introduced by the LARP team, explains Apollinari, and it took several years to go from 70 mm, then 90 and 120, to the 150 mm now required by the HL-LHC. “And then you have to have all the infrastructure necessary to build the magnets, test the magnets, make sure they work, measure the field quality and hopefully send them to CERN for installation in the beamline in 2025.”
The clean room at ANL.
Fermilab and the other LARP laboratories have successfully built 1 m-long short models to demonstrate that the technology meets the technical requirements, and the components are working exactly as expected. Now the teams are building longer prototypes with the correct length, aperture and all other design features. The next step is to build a full prototype with four coils, to complete the quadrupole configuration of the magnets, this coming spring. Similar magnets are being prototyped at CERN with a more ambitious length of 7.5 m. The final product from the US will be a 60 cm-diameter 4 m-long basic magnet containing a hole for the HL-LHC beam pipe. Twenty of these structures will be built in total, 10 in the US and 10 at CERN, of which 16 will be installed and the rest kept as spares. “This is collaboration in physics at its best,” explains Apollinari. “Everybody is trying to go faster, but we are looking at what each other does openly and learning from each other.”
Focus on cavities
Over at Fermilab’s sister laboratory, Argonne National Laboratory (ANL), some 40 km away, the other substantial part of the US contribution to the HL-LHC project is gathering pace. This involves novel “crab”-cavity technology, which is needed both to increase the luminosity and to reduce so-called beam–beam parasitic effects that limit the collision efficiency of the accelerator. Unlike standard radiofrequency cavities, which accelerate charged particles along their direction of motion, crab cavities provide a transverse deflection of the beam which causes it to rotate.
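The geometry behind crabbing can be sketched in a few lines: because the two beams cross at an angle, a bunch is effectively tilted in the crossing plane at the collision point, and a transverse kick that varies linearly along the bunch removes that tilt so the bunches overlap head-on. The numbers below (crossing angle, bunch length) are illustrative HL-LHC-like scales, not design values from this article:

```python
# Sketch of the crabbing idea: a z-dependent transverse kick rotates the
# bunch so it collides head-on despite the crossing angle. Numbers are
# illustrative only.
import numpy as np

rng = np.random.default_rng(2)
theta_c = 590e-6                      # assumed full crossing angle, rad
z = rng.normal(0.0, 0.075, 10_000)    # longitudinal positions, ~7.5 cm bunch

# Without crabbing, a particle at longitudinal position z is displaced
# transversely by z * tan(theta_c / 2) relative to the oncoming bunch:
x_tilted = z * np.tan(theta_c / 2)

# An ideal crab kick cancels that z-dependent displacement exactly:
x_crabbed = x_tilted - z * np.tan(theta_c / 2)

print(f"rms transverse offset: {x_tilted.std()*1e6:.1f} um before, "
      f"{x_crabbed.std()*1e6:.3f} um after crabbing")
```

In a real machine the kick is applied by the cavity's transverse RF field and is only linear near the bunch centre, but the cancellation above captures why crabbing restores the luminosity lost to the crossing angle.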
The view from the clean-room control room at ANL, where researchers are developing pure niobium structures for the HL-LHC’s superconducting crab cavities.
The cavities are made from pure niobium and therefore require strict control of contamination during chemical processing. ANL specialises in superconducting cavities with a wide range of geometries, and a joint facility for the chemical processing of cavities is in place. ANL’s extensive experience with superconducting cavities includes the Argonne Tandem Linac Accelerator System (ATLAS). Built and operated by the physics division, this is the world’s first superconducting linear accelerator for heavy ions, working at energies in the vicinity of the Coulomb barrier to study the properties of the nucleus. It is for this machine that niobium was used for the first time in an accelerator, in 1977, and for which “quarter-wave” superconducting cavities were developed. “We developed superconducting cavities for a whole variety of projects, for the ATLAS accelerator, Fermilab, BNL, SLAC and of course for the HL-LHC at CERN,” says ANL accelerator scientist Michael Kelly. We meet in the lobby of the ANL physics division, next to a piece of the laboratory’s history: Enrico Fermi’s original “chopper”, a mechanical rotating shutter to select neutrons built in 1947 as part of ANL’s original nuclear-physics programme. “Today we process crab cavities for the HL-LHC, trying to achieve the highest possible accelerating or crabbing voltages, by making a very very clean surface on the cavity,” he explains. Chemical processing takes place at a separate Argonne facility.
ANL’s chemical processing facility has recently been enlarged to accommodate new buffer chemical polishing and electro-polishing rooms. Wearing a complete set of clean-room garments as we enter the facility, electronic engineer Brent Stone explains the importance of surface processing. “A feature of niobium is that a damaged layer is formed as it is mined from the ground and goes through all different processes, so when the niobium is transformed into cavities we need to remove a 120–150 μm-thick damaged layer,” he says. “Inside these layers you can have inclusions that may affect their performance and it is critical to remove them.”
Several steps, and journeys, are required to process the cavities. After the application of acids to remove material from the surface, the cavities undergo two cycles in ultrasonic tanks before being rinsed at high pressure and returned to Fermilab to be degassed in vacuum at high temperatures. They are then taken back to ANL for final chemical treatment, cleaning and assembly in the clean room. Finally, the cavities processed at Argonne are sent to BNL, where they are cooled down to liquid-helium temperatures to test whether they meet the crabbing voltage required for the HL-LHC. “One of the cavities processed has just very easily achieved its design goal,” says Kelly proudly, before we take leave of the laboratory.
Next stop CERN
The crab cavities are less advanced than the magnets for the HL-LHC, both at CERN and at Fermilab. But efforts are progressing on schedule on both sides of the Atlantic. Two different designs have been developed for the HL-LHC interaction points: vertical plane for ATLAS and horizontal plane for CMS. Both cavity designs originated from LARP, the LHC accelerator R&D programme created by the DOE in 2005 as the LHC neared completion. “Without that foresight we wouldn’t have the HL-LHC today,” says Apollinari.
The High-Luminosity LHC (HL-LHC) project at CERN is a major upgrade that will extend the LHC’s discovery potential significantly. Approved in June 2014 and due to enter operation in the mid-2020s, the HL-LHC will increase the LHC’s integrated luminosity by a factor 10 beyond its original design value. The complex upgrade, which must be implemented with minimal disruption to LHC operations, demands careful study and will take a decade to achieve.
The HL-LHC relies on several innovative and challenging technologies, in particular: new superconducting dipole magnets with a field of 11 T; highly compact and ultra-precise superconducting “crab” cavities to rotate the beams at the collision points and thus compensate for the larger beam crossing angle; beam-separation and recombination superconducting dipole magnets; beam-focusing superconducting quadrupole magnets; and 80 m-long high-power superconducting links with zero energy dissipation.
These new LHC accelerator components will be mostly integrated at Point 1 and Point 5 of the ring where the two general-purpose detectors ATLAS and CMS are located (see diagram). The new infrastructure and services consist mainly of power transmission, electrical distribution, cooling, ventilation, cryogenics, power converters for superconducting magnets and inductive output tubes for superconducting RF cavities. To house these large elements, civil-engineering structures including buildings, shafts, caverns and underground galleries are required.
Design study complete
The definition of the civil engineering for the HL-LHC began in 2015. Last year, the completion of a concept study allowed CERN to issue a call for tender for two civil-engineering consultant contracts, which were adjudicated in June 2016. These consultants are in charge of the preliminary, tender and construction design phases of the civil-engineering work, in addition to managing the construction and defect-liability phase. At Point 1, which is located in Switzerland just across from the main CERN entrance, the consultant contract involves a consortium of three companies: SETEC TPI (France), which is the consortium leader, together with CSD Engineers (Switzerland) and Rocksoil (Italy). A similar consortium has been appointed at Point 5, in France. Here, the consultant contract is shared between consortium-leader Lombardi (Switzerland), Artelia (France) and Pini Swiss (Switzerland). In November 2016, the two consultant consortia completed the preliminary design phase including cost and construction-schedule estimates for the civil-engineering work.
In parallel with the preliminary design, and with the help of external architects, CERN has submitted building-permit applications to the Swiss and French authorities with a view to start construction work by mid-2018. CERN has also performed geotechnical investigations to better understand the underground conditions (which consist of glacial moraines overlying a local type of soft rock called molasse), and has placed a contract with independent engineers ARUP (UK) and Geoconsult (Austria). These companies will confirm that the consultant designs have been performed with the appropriate skill, care and diligence in accordance with applicable standards. In addition, a panel comprising lawyers, architects and civil engineers is in place to resolve any disputes between parties.
At ground level, the HL-LHC civil engineering consists of five buildings at each of the two LHC points, technical galleries, access roads, concrete slabs and landscaping. At each point, the total surface corresponds to about 20,000 m2 including 3300 m2 of buildings. A cluster of three buildings is located at the head of the shaft and will house the helium-refrigerator cold box (SD building, see images above), water-cooling and ventilation units (SU building) and also the main electrical distribution for high and low voltage (SE building). Completing the inventory at each point are two stand-alone buildings that will house the primary water-cooling towers (SF building) and the warm compressor station of the helium refrigerator (SHM building). Buildings housing noisy equipment (SU, SF, SHM) will be constructed with noise-insulating concrete walls and roofs.
In terms of underground structures, the civil-engineering work consists of a shaft, a service cavern, galleries and vertical cores (see image above left). The total volume to be excavated is around 50,000 m3 per point. The PM shaft (measuring 9.7 m in diameter and 70–80 m deep) will house a secured access lift and staircase as well as the associated services. The service cavern (US/UW, measuring 16 m in diameter and 45 m long) will house cooling and ventilation units, a cryogenic box, an electrical safe room and electrical transformers. The UR gallery (5.8 m diameter, 300 m long) will house the power converters and electrical feed boxes for the superconducting magnets as well as cryogenic and service distribution. Two transverse UA galleries (6.2 m diameter, 50 m long) will house the RF equipment for the powering and controls of the superconducting crab cavities. At the end of the UA galleries, evacuation galleries (UPR) are required for personnel emergency exits. Two transversal UL galleries (3 m diameter, 40 m long) will house the superconducting links to power the magnets and cryogenic distribution system. Finally, the HL-LHC underground galleries are connected to the LHC tunnel via 16 vertical cores measuring 1 m in diameter and approximately 7 m long.
Next milestone
The next important milestone will be the adjudication in March 2018 of the two contracts (one per point) for the civil-engineering construction work. In December 2016, CERN launched a market survey for the construction tender, which will be followed by invitations to tender to qualified firms by June 2017. The main excavation work, which could generate vibrations harmful to the LHC’s accelerator performance, must be performed during the second long shutdown of the LHC accelerator scheduled for 2019–2020. Handover of the final building is scheduled by the end of 2022, while the vertical cores connecting the HL-LHC galleries to the LHC tunnel will be constructed at the start of the third LHC long shutdown beginning in 2024.
Realising the HL-LHC is a major challenge that involves more than 25 institutes from 12 countries, and in addition to civil-engineering work it demands several cutting-edge magnet and other accelerator technologies. The project is the highest priority in the European Strategy for Particle Physics, and will ensure a rich physics programme at the high-energy frontier into the 2030s.
The annual global incidence of cancer is expected to rise from 15 million cases in 2015 to as many as 25 million cases in 2035. Of these, it is estimated that 65–70% will occur in low- and middle-income countries (LMICs), where there is a severe shortfall in radiation treatment capacity. The growing burden of cancer and other non-communicable diseases in these countries has been recognised by the United Nations General Assembly and the World Health Organization.
Radiation therapy is an essential component of effective cancer control, and approximately half of all cancer patients – regardless of geographic location – would benefit from such treatment. The vast majority of modern radiotherapy facilities rely on linear accelerators (linacs) to accelerate electrons, which are either used directly to treat superficial tumours or are directed at targets such as tungsten to produce X-rays for treating deep-seated tumours.
Electron linacs were first used clinically in the 1950s, in the UK and the US. Since then, great advances in photon treatment have been made, thanks to improved imaging, real-time beam shaping and intensity modulation with multileaf collimators, and better knowledge of the radiation doses needed to kill tumours, both alone and in combination with drugs. In addition, because radiotherapy uses particle beams, it directly benefits from knowledge and technology gained in high-energy-physics research.
Meeting global demand
In September 2015, the Global Task Force on Radiotherapy for Cancer Control (GTFRCC) released a comprehensive study of the global demand for radiation therapy. It highlighted the inadequacy of current equipment coverage (image at top) and the resources required, as well as the costs and economic and societal benefits of improving coverage.
Limiting factors to the development and implementation of radiotherapy in lower-resourced nations include the cost of equipment and infrastructure, and the shortage of trained personnel to properly calibrate and maintain the equipment and to deliver high-quality treatment. The GTFRCC report estimated that as many as 12,600 megavolt-class treatment machines will be needed to meet radiotherapy demands in LMICs by 2035. Based on current staffing models, it was estimated that an additional 30,000 radiation oncologists, more than 22,000 medical physicists and almost 80,000 radiation technologists will be required.
Approximately three years ago, with the aim of making cancer treatments accessible to underserved populations, initial discussions took place between CERN and representatives of the US National Cancer Institute and an emerging non-governmental organisation, the International Cancer Expert Corps (ICEC), whose aim is to help LMICs establish in-country cancer-care expertise. The focus of discussions was an “out-of-the-box” concept for global health: the design of a novel, possibly modular, linear accelerator for use in challenging environments (defined as those in which the general infrastructure is poor or lacking, where power outages and water-supply fluctuations can occur, and where climatic conditions might be harsh). Following further activities, CERN hosted a workshop in November 2016 convened by the ICEC, which brought together invited experts from many disciplines including industry (see panel below).
In addition to improving the quality of care for cancer patients globally, linac-based radiotherapy systems also reduce the reliance on less expensive and simpler systems that provide treatment with photons from radionuclide sources such as 60Co and 137Cs. While some of the 60Co units have multileaf collimators for improved beam delivery, they do not have the advanced features of modern linacs. Eliminating radionuclides also reduces the risk of malicious use of medical radioactive materials (see panel below).
Design characteristics
It is important that the newly designed linac retains the advanced capability of the machines now in use, and that through software advances, resource sharing and sustainable partnerships, the treatments in LMICs are of comparable quality to those in upper-income countries. This not only avoids substandard care but is also an incentive for experts to go to and remain in LMICs.
CERN workshop initiates discussions for novel medical linacs
On 7–8 November 2016, CERN hosted a first-of-its-kind workshop to discuss the design characteristics of radiotherapy linacs for low- and middle-income countries (LMICs). Around 75 participants from 15 countries addressed: the role of radiotherapy in treating cancer in challenging environments and the related security of medical radiological materials, especially 60Co and 137Cs; the design requirements of linear accelerators and related technologies for use in challenging environments; the education and training of a sustainable workforce needed to utilise novel radiation treatment systems; and the cost and financing of the project. Leading experts were invited from international organisations, government agencies, research institutes, universities and hospitals, and companies that produce equipment for conventional X-ray and particle therapy.
The ideal radiation-therapy treatment system for LMICs is thought to be as modular as possible, so that it can be easily shipped, assembled in situ, repaired and upgraded as local expertise in patient treatment develops. Another critical issue concerns the sustainability of treatment systems after installation. To minimise the need for local specialised technical staff to maintain and promptly repair facilities, procedures and economic models need to be developed to ensure regional technical expertise and also a regional supply of standard spare parts and simpler (modular) replacement procedures. Difficulties due to remoteness and poor communication also need to be considered.
There are several design considerations when developing a linear accelerator for operation in challenging environments. In addition to ease of operation, repair and upgradability, key factors include reliability, self-diagnostics, insensitivity to power interruptions, low power requirements and reduced heat production. To achieve most of these design considerations relatively quickly requires a system based on current hardware technology and software that fully exploits automation. The latter should include auto-planning and operator monitoring and training, even to the point of having a treatment system that depends on limited on-site human involvement, to allow high-quality treatment to be delivered by an on-site team with less technical expertise.
Current machines can be improved through software upgrades, but substantial advances in capability generally require the purchase of an entire new unit – often costing many millions of dollars. A modular design that allows major upgrades of components on the same base unit could be much less expensive. Major savings would also result from developing new advanced software to expand the capability of the hardware.
Participants in the CERN workshop agreed on the need to develop a treatment machine that delivers state-of-the-art radiation therapy, rather than a linac that is sub-standard in the quality of treatment it can deliver. The latter approach would not only provide lower-quality treatment but would also be a disincentive for the recruitment and retention of high-quality staff. As in virtually all industries, the user interface should be developed through interaction with the users. Improved hardware, such as a power generator combined with energy management, should also be provided to cope with electrical-network fluctuations.
The task ahead
Experience from past and current radiation-therapy initiatives suggests that successful radiotherapy programmes require secure local resources, adequate planning, local commitment and political stability. To make a highly functional radiotherapy treatment system available in the near-term, one could upgrade one or more existing linear accelerators with software optimisations. The design and development of a truly novel radiation treatment system, on the other hand, will require a task force to refine the design criteria and then begin development and production.
With the rise in global terrorism comes the threat of the use of un- or poorly secured radioactive sources that would have enormous health, economic and political consequences. This includes medical sources such as 60Co that are generally not highly protected, many of which are located in relatively under-resourced regions. Interest in developing alternative technologies has brought together medical practitioners who currently use these sources, governmental and global agencies whose mission includes the security of radiological and nuclear material, and organisations dedicated to the non-proliferation of nuclear weapons.
This confluence of expertise resulted in meetings in Brazil and South Africa in 2016, with the realisation that simply removing 60Co would leave people in many regions without cancer care. Removing dangerous sources while establishing a better cancer-care environment would require education, training, mentorship and partnerships to use more complex linear-accelerator-based radiotherapy systems. The austerity of the environment is a challenge that requires new thinking, however.
The ability to offer a state-of-the-art non-isotopic radiation treatment system for challenging environments was emphasised by the Office of Radiological Security of the US National Nuclear Security Administration, which is responsible for reducing the global reliance on radioactive sources as well as protecting those sources from unauthorised access. The benefit of replacing 60Co radiation treatment units with linear accelerators from the point of view of decreasing the risk of malicious use of 60Co by non-state (terrorist) actors was also emphasised in a report from the Center for Nonproliferation Studies that offered the new paradigm “treatment, not terror”.
Following the November workshop, an oversight committee and three task forces have been established. A technology task force will focus on systems solutions and novel technology for a series of radiation-treatment systems that incorporate intelligent software and are modular, rugged and easily operated yet sufficiently sophisticated to also benefit therapy in high-income countries. A second task force will identify education and training requirements for the novel treatment systems, in addition to evaluating the impact of evolving treatment techniques, changes in cancer incidence and the population mix. Finally, a global connectivity and fundraising task force will develop strategies for securing financial support in client countries as well as from governmental, academic and philanthropic organisations and individuals.
The overall aim of this ambitious project is to make excellent near-term and long-term radiation treatment systems, including staffing and physical infrastructure, available for the treatment of cancer patients in LMICs and other geographically underserved regions in the next 5–10 years. The high-energy physics community’s broad expertise in global networking, technology innovation and open-source knowledge for the benefits of health are essential to the progress of this ambitious effort. It is anticipated that an update meeting will take place at the International Conference on Advances in Radiation Oncology (ICARO2) to be held in Vienna in June 2017.
Dark matter is one of the greatest mysteries of our cosmos. More than 80 years after its postulation in modern form by the Swiss–American astronomer Fritz Zwicky, the existence of a new unseen form of matter in our universe is established beyond doubt. Dark matter is not just the gravitational glue that holds together galaxies, galaxy clusters and structures on the largest cosmological scales. Over the past few decades it has become clear that dark matter is also vital to explain the observed fluctuations in cosmic-microwave-background radiation and the growth of structures that began from these primordial density fluctuations in the early universe. Yet despite overwhelming evidence, its existence is inferred only indirectly via its gravitational pull on luminous matter. As of today, we lack the answer to the most fundamental questions: what is dark matter made of and what is its true nature?
DARWIN, the ultimate dark-matter detector using the noble element xenon in liquid form, will be in a unique position to address these fundamental questions. Currently in the design and R&D phase, DARWIN will be constructed at the Gran Sasso National Laboratory (LNGS) in Italy and is scheduled to carry out its first physics runs from 2024. The DARWIN consortium is growing, and currently consists of about 150 scientists from 26 institutions in 11 countries.
Worldwide search
The particles described by the Standard Model of particle physics are unable to account for dark matter. Although neutrinos, the only elementary particles that do not interact with photons, would be ideal candidates, they are much too light and could not have formed the observed large-scale structures. Dark matter could, however, be made of new elementary particles that were born in the young and energetic universe. Such particles would carry no electric or colour charge, would be either stable or very long-lived and, similar to neutrinos, would interact only feebly (if at all) with known matter via new fundamental forces. Theories beyond the Standard Model predict a wealth of viable dark-matter candidates. The most popular class has the generic name of weakly interacting massive particles (WIMPs), while a different class is axions, or more generally axion-like particles (ALPs).
Worldwide, more than a dozen experiments are preparing to observe low-energy nuclear recoils induced by galactic WIMPs in ultra-sensitive, low-background detectors. Since the predicted WIMP masses and scattering cross-sections are model-dependent and essentially unknown, searches must cover a vast parameter space. Among the most promising detectors are those based on liquefied noble-gas targets such as liquid xenon (LXe) or liquid argon (LAr) – a well-established technology that can be scaled up to tonne-scale target masses and take data over periods lasting several years.
DARWIN, which will operate a multi-tonne liquid-xenon time projection chamber (TPC), follows in the footsteps of its predecessors XENON, ZEPLIN, LUX and PandaX. The technology employed by these experiments is very similar and, in addition, the entire XENON collaboration is now a part of the DARWIN collaboration. Since December 2016, an upgraded detector called XENON1T has been recording its first dark-matter data at LNGS using two tonnes of liquid xenon as the WIMP target (the total mass of xenon in the detector is 3.3 tonnes). It will probe WIMP–nucleon cross-sections down to as little as 1.6 × 10⁻⁴⁷ cm² at a mass of 50 GeV/c² (for comparison, the scattering cross-section of low-energy 7Be solar neutrinos on electrons is about 6 × 10⁻⁴⁵ cm²). A further planned upgrade called XENONnT with seven tonnes of LXe will increase the WIMP sensitivity by one order of magnitude.
The goals of DARWIN are even more ambitious, promising an unprecedented sensitivity of 2.5 × 10⁻⁴⁹ cm² at a WIMP mass of 40 GeV/c². Such a reach would allow us to explore the entire experimentally accessible parameter space for WIMPs, to the point where the WIMP signal becomes indistinguishable from the background of coherent neutrino-nucleus scattering events.
Rich physics programme
DARWIN will not only search for WIMP dark matter. Because of its ultra-low background level, it will be sensitive to additional, hypothetical particles that are expected to have non-vanishing couplings to electrons. These include solar axions, galactic ALPs and bosonic super-weakly interacting massive particles called superWIMPs, which have masses at the keV scale and are candidates for warm dark matter. It will also detect, with high statistics, low-energy solar neutrinos produced by proton–proton fusion reactions in the Sun (so-called pp neutrinos), and therefore address one of the remaining observational challenges in the field of solar neutrinos: a precise comparison of the Sun’s neutrino and photon luminosities. With just five years of data, the pp-neutrino flux could be measured to a statistical precision of better than one per cent, providing a stringent test of the solar model as well as of neutrino properties, since non-standard neutrino interactions could modify the survival probability of electron neutrinos at these low energies.
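The quoted sub-per-cent statistical precision follows from simple counting statistics: for a Poisson-limited measurement the relative uncertainty is 1/√N, so the target precision directly sets the number of pp-neutrino events that must be collected. A minimal sketch of this relation (the event counts below are illustrative, not DARWIN projections):

```python
import math

def relative_precision(n_events: int) -> float:
    """Poisson-limited relative statistical uncertainty, 1/sqrt(N)."""
    return 1.0 / math.sqrt(n_events)

def events_needed(target_precision: float) -> int:
    """Smallest event count whose 1/sqrt(N) uncertainty meets the target."""
    return math.ceil(1.0 / target_precision ** 2)

# Sub-per-cent precision requires upwards of 10^4 detected pp neutrinos.
print(events_needed(0.01))        # 10000
print(relative_precision(40000))  # 0.005, i.e. 0.5%
```

The same scaling explains why a large monolithic target helps: quadrupling the event sample only halves the statistical uncertainty.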
The DARWIN observatory will also observe coherent neutrino-nucleus interactions from 8B solar neutrinos and be sensitive to neutrinos of all flavours from core-collapse supernovae: it would see about 800 events, or 20 events/tonne, from a supernova with 27 solar masses at a distance of 10 kpc, for example. By looking at the time evolution of the event rate from a nearby supernova, DARWIN could possibly even distinguish between different supernova models. Finally, DARWIN would search for the neutrinoless double beta (0νββ) decay of 136Xe, which has a natural abundance of 8.9 per cent in xenon. The observation of this ultra-rare nuclear decay would directly prove that neutrinos are Majorana particles, and that lepton number is violated in nature (CERN Courier July/August 2016 p34).
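The supernova numbers quoted above scale in a simple way: the neutrino fluence falls as the inverse square of the distance, and the event count grows linearly with instrumented target mass. A hedged back-of-envelope sketch anchored to the figures in the text (800 events in 40 tonnes at 10 kpc for a 27-solar-mass progenitor); other progenitor masses or models would change the reference count:

```python
def supernova_events(mass_tonnes: float, distance_kpc: float,
                     ref_events: float = 800.0,
                     ref_mass_tonnes: float = 40.0,
                     ref_distance_kpc: float = 10.0) -> float:
    """Scale a reference supernova event count linearly with target mass
    and with the inverse square of the supernova distance."""
    return (ref_events
            * (mass_tonnes / ref_mass_tonnes)
            * (ref_distance_kpc / distance_kpc) ** 2)

print(supernova_events(40.0, 10.0))  # 800.0 -- the quoted benchmark
print(supernova_events(1.0, 10.0))   # 20.0 events per tonne
print(supernova_events(40.0, 5.0))   # 3200.0 for a supernova twice as close
```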
One common feature of these exciting questions in contemporary particle and astroparticle physics is the exceedingly low expected interaction rates in the detector, corresponding to less than one event per tonne of target material and year. In addition, these searches – with the exception of the 0νββ decay – require an energy threshold that is as low as possible (a few keV), while the 0νββ decay, superWIMP and axion searches will profit from the very good energy resolution of the detector. A multi-tonne liquid-xenon observatory such as DARWIN can address the combination of an ultra-low background level, a low-energy threshold and a good energy resolution within a single, large, monolithic detector.
The WIMP landscape
The current best sensitivity to WIMP searches for masses above 6 GeV/c2 is provided by detectors using LXe as a target, and the majority of existing (XENON1T, LUX, PandaX) and planned (LZ, XENONnT) LXe dark-matter detectors employ dual-phase TPCs (figure 1). These detectors maintain xenon at a constant temperature of about –100 °C and detect two distinct signals (the prompt scintillation light and the ionisation electrons) via arrays of photosensors operated in the liquid and vapour phase. The observation of both signals delivers information about the type of interaction and its energy, as well as the 3D position and timing of an event. WIMP collisions and coherent neutrino scatters will produce nuclear recoils, while pp neutrinos, axions, superWIMPs and double beta decays, along with the majority of background events, will cause electronic recoils. Fast neutrons from materials or induced through cosmic-ray muons will also give rise to nuclear recoils, but WIMPs and neutrinos will scatter only once in a given detector, while neutrons can scatter multiple times in large detectors such as DARWIN.
Since the primary intent of DARWIN is to investigate dark-matter interactions, it is vital that background processes are understood. The observatory can exploit the full discovery potential of the liquefied-xenon technique with a 40-tonne LXe TPC, provided all known sources of background are accounted for. These stem from several sources: the residual radioactivity of detector-construction materials (γ radiation, neutrons); β decays of the anthropogenic 85Kr present in the atmosphere due to nuclear-fuel reprocessing, weapons tests and accidents such as that at the Fukushima nuclear plant in Japan; and the progenies of 222Rn in the LXe target. Two-neutrino double beta decays (2νββ) of 136Xe and interactions of low-energy solar neutrinos (pp, 7Be) are another source of background, as are higher-energy neutrino interactions with xenon nuclei via coherent neutrino-nucleus scattering.
In the standard WIMP-scattering scenario, the leading interactions between a dark-matter particle and a nucleon are due to two subtly different processes: spin-dependent couplings and isospin-conserving, spin-independent couplings. Since LXe contains nuclei with and without spin, DARWIN can probe both types of interactions. Assuming an exposure of 200 tonnes × years (500 tonnes × years), a spin-independent WIMP sensitivity of 2.5 × 10⁻⁴⁹ cm² (1.5 × 10⁻⁴⁹ cm²) can be reached at a WIMP mass of 40 GeV/c². For spin-dependent WIMP–neutron couplings and WIMP masses up to about 1 TeV, the searches conducted by DARWIN will be complementary to those of the LHC and High-Luminosity LHC at a centre-of-mass energy of 14 TeV. Natural xenon includes two isotopes with nonzero total nuclear angular momentum, 129Xe and 131Xe, at a combined abundance of about 50%. If the WIMP–nucleus interaction is indeed spin-dependent, DARWIN will also probe inelastic WIMP–nucleus scattering, where these two nuclei are excited into low-lying states at 40 keV and 80 keV, respectively, with subsequent prompt de-excitation. The discovery of such a signature would be a clear indication for an axial-vector coupling of WIMPs to nuclei.
Ultimate detector
Should dark-matter particles be discovered by one of the running (XENON1T, DEAP-3600) or near-future (LZ, XENONnT) detectors, DARWIN would be able to reconstruct the WIMP mass and scattering cross-section from the measured nuclear-recoil spectra. With an exposure of 200 tonnes × years, 152, 224 and 60 events would be observed for three benchmark WIMP masses (figure 2). DARWIN may therefore be the ultimate liquid-xenon dark-matter detector, capable of probing the WIMP paradigm and thus detecting or excluding WIMPs with masses above 6 GeV/c², down to extremely low cross-sections of 1.5 × 10⁻⁴⁹ cm².
Should WIMPs not be observed in the DARWIN detector, the WIMP paradigm would be under very strong pressure. With its large, uniform target mass, low-energy threshold, and ultra-low background level, the observatory will also open up a unique opportunity for other rare event searches such as axions and other weakly interacting light particles. It will address open questions in neutrino physics, which is one of the most promising areas in which to search for physics beyond the Standard Model. At its lowest energies, the DARWIN detector will observe coherent neutrino-nucleus interactions from solar 8B neutrinos, thus precisely testing the standard-solar-model flux prediction, and may detect neutrinos from galactic supernovae.
The DARWIN observatory was approved for an initial funding period, via ASPERA, in 2010. It is included in the European Roadmap for Astroparticle Physics and in various other programmes, including those of the Swiss State Secretariat for Education, Research and Innovation and the Strategic Plan for Astroparticle Physics in the Netherlands. The current phase will culminate with a technical design report in 2019, followed by engineering studies in 2020 and 2021, with construction at LNGS and first physics runs scheduled to start in 2022 and 2024, respectively. The experiment will operate for at least 10 years and may write a new chapter in the exciting story of dark matter.
DARWIN scales LXe technology to new heights
The DARWIN observatory will operate a large amount (50 tonnes) of liquid xenon in a low-background cryostat surrounded by concentric shielding structures (diagram right). The heart of the experiment is the dual-phase TPC, containing 40 tonnes of instrumented xenon (diagram below). The high density of liquid xenon (3 kg/l) results in a short radiation length and allows for a compact detector geometry with efficient self-shielding. A drift field of the order of 0.5 kV cm⁻¹ across the liquid target will cause the electrons to drift away from the interaction vertex towards the liquid–gas interface. Large field-shaping rings made from oxygen-free, high-conductivity copper will ensure the homogeneity of the field. The TPC will mostly be constructed from copper as a conductor and polytetrafluoroethylene as an insulator, the latter being an efficient reflector for vacuum-ultraviolet scintillation light. It will be housed in a double-walled cryostat, and all the detector materials will first be selected for ultra-low intrinsic radioactivity using dedicated high-purity germanium (HPGe) detector screening facilities. In the baseline scenario, the prompt and proportional scintillation signals will be recorded by two arrays of photomultiplier tubes (PMTs) installed above and below the xenon target. These will have a diameter of 3″ or 4″ and feature very low intrinsic radioactivity, a high quantum efficiency of 35% at 178 nm, a gain of around 5 × 10⁶ and a very low dark-count rate at –100 °C. Although PMTs are a proven and reliable technology, they are bulky, expensive and generate a significant fraction of the radioactive background in a dark-matter detector, especially nuclear recoils produced by neutrons from (α, n) reactions. Several alternative light read-out schemes are thus being considered by the collaboration in small R&D set-ups.
Among these are arrays of silicon photomultipliers (with a potential scheme where the TPC is fully surrounded by photosensors), gaseous photomultipliers and hybrid photosensors. A novel concept of liquid-hole multipliers could allow for charge and light read-out in a single-phase TPC, and potentially result in a significant improvement in light yield and thus a lower energy threshold.