In September, following three years of successful operation and growth, CERN announced the continuation of the global SCOAP3 open-access initiative for at least three more years. SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics) is a partnership of more than 3000 libraries, funding agencies and research organisations from 44 countries that has made tens of thousands of high-energy physics articles publicly available at no cost to individual authors. Inspired by the collaborative model of the LHC, SCOAP3 is hosted at CERN under the oversight of international governance. It is primarily funded through the redirection of budgets previously used by libraries to purchase journal subscriptions.
Since 2014, in co-operation with 11 leading scientific publishers and learned societies, SCOAP3 has supported the transition to open access of many long-standing titles in the community. During this time, 20,000 scientists from 100 countries have benefited from the opportunity to publish more than 13,000 open-access articles free of charge.
With strong consensus of the growing SCOAP3 partnership, and supported by the increasing policy requirements for and global commitment to open access in its Member States, CERN has now signed contracts with 10 scientific publishers and learned societies for a three-year extension of the initiative. “With its success, SCOAP3 has shown that its model of global co-operation is sustainable, in the same broad and participative way we build and operate large collaborations in particle physics,” says CERN’s director for research and computing, Eckhard Elsen.
A next-generation dark-matter detector in the US called LUX-ZEPLIN (LZ), which will be at least 100 times more sensitive than its predecessor, is on schedule to begin its deep-underground hunt for WIMPs in 2020. In August, LZ received US Department of Energy approval (“Critical Decision 2 and 3b”) of the project’s overall scope, cost and schedule. This latest approval step sets in motion the construction of major components and the preparation of the experiment’s nearly mile-deep cavern at the Sanford Underground Research Facility (SURF) in Lead, South Dakota.
The experiment, which is supported by a collaboration of more than 30 institutions and about 200 scientists worldwide, is designed to search for dark-matter signals from within a chamber filled with 10 tonnes of purified liquid xenon. LZ is named for the merger of two dark-matter-detection experiments: the Large Underground Xenon experiment (LUX) and the UK-based ZonEd Proportional scintillation in LIquid Noble gases (ZEPLIN) experiment. LUX, a smaller liquid-xenon-based underground experiment at SURF that earlier this year ruled out a significant region of WIMP parameter space, will be dismantled to make way for the new project.
“Nobody looking for dark-matter interactions with matter has so far convincingly seen anything, anywhere, which makes LZ more important than ever,” says LZ project director Murdock Gilchriese of Lawrence Berkeley National Laboratory.
Usually, the motto of the LHC operations team is “maximum luminosity”. For a few days per year, however, this motto is put aside to run the machine at very low luminosity. The aim is to provide data for the broad physics programme of the LHC’s “forward physics” experiments – TOTEM and ATLAS/ALFA. By running the LHC with larger beam sizes at the interaction points, corresponding to a lower luminosity, the dedicated TOTEM and ATLAS/ALFA detectors can probe the proton–proton elastic-scattering regime at small angles.
In elastic scattering, the two protons survive their encounter intact and only change direction by exchanging momentum. TOTEM, which is located in the straight sections of the LHC on either side of CMS at Point 5, and ATLAS/ALFA at Point 1, are not able to study this process during normal operation. To facilitate the special run, which took place in the third week of September, the LHC team developed a special machine configuration that delivers exceptionally large beams at the interaction points (IPs) of ATLAS and CMS. The focusing at the IP is parameterised by β*: the higher the value of β*, the bigger the beams and, importantly, the lower the angular divergence. For this year’s high-β* run, its value had to be raised to 2.5 km, compared with around 1 km during LHC Run 1 at an energy of 8 TeV, because the higher energy of LHC Run 2 causes the two incoming protons to scatter at smaller angles. The measurements were carried out with very low-intensity beams, allowing TOTEM and ALFA to bring their “Roman Pot” detectors remarkably close to the beam.
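The trade-off controlled by β* can be made explicit with standard beam optics. For a beam of transverse emittance ε, the beam size and angular divergence at the interaction point are

```latex
\sigma^{*} = \sqrt{\varepsilon\,\beta^{*}}, \qquad
\sigma'^{\,*} = \sqrt{\varepsilon/\beta^{*}} ,
```

so raising β* enlarges the beam spot while shrinking the angular spread of the incoming protons — precisely the condition needed to resolve very small scattering angles.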
In addition to the precise determination of the total proton–proton interaction probability at 13 TeV, TOTEM will focus on a detailed study of elastic scattering in the low-transferred momentum regime. The experiment will investigate how Coulomb scattering interferes with the nuclear component of the elastic interaction, which can shed light on the internal structure of the protons. TOTEM will also search for special states formed by three gluons.
ATLAS/ALFA also intends to carry out a precision measurement of the proton–proton total cross-section, and will use this to determine the absolute LHC luminosity at Point 1. For ATLAS/ALFA, the interesting part of the spectrum is at low values of transferred momentum, where Coulomb scattering is dominant. Since the Coulomb scattering cross-section is theoretically known, its measurement provides an independent estimate of the absolute luminosity of the LHC. This would provide an important cross-check of the luminosity calibration measurements performed via van der Meer scans during dedicated LHC fills.
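The logic of this luminosity determination can be sketched schematically. The rate of any process is the product of luminosity and cross-section, and at very small momentum transfer |t| the elastic rate is dominated by Coulomb scattering, whose cross-section is calculable in QED:

```latex
\frac{dN}{d|t|}\bigg|_{t\to 0} \;=\; L\,\frac{d\sigma_C}{d|t|},
\qquad
\frac{d\sigma_C}{d|t|} \simeq \frac{4\pi\alpha^{2}}{t^{2}}
\;\;\Longrightarrow\;\;
L \;=\; \left(\frac{d\sigma_C}{d|t|}\right)^{-1}\frac{dN}{d|t|}.
```

This is only the leading idea; in practice the Coulomb–nuclear interference region must also be modelled to extract L.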
The Higgs boson has been observed via its decays to photons, tau leptons, and Z and W bosons, which has allowed ATLAS to glean much information about the particle’s properties. So far, these properties agree with the predictions of the Standard Model (SM). However, there are several aspects of the Higgs boson that are still largely unexplored, most notably the coupling of the Higgs boson to quarks. The two heaviest quarks, the bottom and top, are particularly interesting because they have the largest couplings to the Higgs boson. If these couplings differ from the SM predictions, it could provide a first hint of new physics.
Observing the coupling of the Higgs boson to these two quark flavours is challenging, however. Despite the Higgs decaying to a pair of bottom quarks around 58% of the time, this decay has not yet been observed because such decays manifest themselves as jets in the detector and this signature is overwhelmed by the SM production of multi-jets. As a result, physicists search for this decay by looking for the production of the Higgs in association with a vector boson (W or Z) or a top-quark pair. The additional particles have a more distinctive decay signature, but this comes at the price of a much lower signal-production rate.
The only way to directly measure the coupling of the Higgs boson to the top quark at the LHC is to study events where a Higgs is produced in association with a top-quark pair. As with the bottom quark, this process has not yet been observed. Indeed, even with the more distinctive decays, the background processes that mimic these signals are large, complex and difficult to model. In both the top and bottom production channels, the backgrounds are controlled by using advanced machine-learning techniques to separate signal events from background (see figure).
Both searches have now been carried out by ATLAS with data from LHC Run 2, revealing a sensitivity to the Higgs boson couplings to top and bottom quarks that is competitive with searches at Run 1. However, they are still not precise enough to identify if there are any deviations from SM behaviour. With further improvements to the analyses, better understanding of the backgrounds and the unprecedented performance of the LHC, we should finally observe both of these processes at a high statistical significance later during Run 2. This will tell us if the Higgs boson is indeed responsible for the masses of the quarks as predicted in the SM, or if there is new physics beyond it.
Twenty years after its discovery at the Tevatron collider at Fermilab, interest in studying the top quark at the LHC is higher than ever. This was illustrated by the plethora of new results presented by the CMS collaboration at the ICHEP conference in August and at TOP 2016, which took place in the Czech Republic from 19 to 23 September.
The top quark is the only fermion heavier than the W boson, and consequently the only one whose weak decay does not proceed through a virtual particle. This leads to an unusually short lifetime (about 5 × 10–25 s) for a weak-mediated process, and provides a unique opportunity to probe the properties and couplings of a bare quark. In particular, the width of the top quark (which, like that of all quantum resonances, is inversely proportional to its lifetime) may be easily affected by new-physics processes.
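The width and lifetime quoted here are tied together by the uncertainty relation Γτ = ħ. With ħ = 6.58 × 10⁻²⁵ GeV s and a Standard Model top width of about 1.3 GeV,

```latex
\tau_t \;=\; \frac{\hbar}{\Gamma_t}
\;\approx\; \frac{6.58\times10^{-25}\ \mathrm{GeV\,s}}{1.3\ \mathrm{GeV}}
\;\approx\; 5\times10^{-25}\ \mathrm{s},
```

shorter than the typical hadronisation timescale, which is why the top quark decays before forming bound states.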
In a series of recent publications, the CMS collaboration has explored the width of the top quark in a model-independent way and searched for contributions from extremely rare processes mediated by so-called flavour-changing neutral currents (FCNCs).
The top-quark width is too narrow compared with the experimental resolution of the CMS detector to allow a precision measurement directly from the shape of the top’s invariant-mass distribution. CMS therefore considers alternative observables that provide complementary information on the top’s mass and width.
One of those observables is the invariant-mass distribution of the lepton and b-jet systems produced in top-quark pair decays, which has allowed the collaboration to place new bounds on a Standard Model-like top-quark width of 0.6 ≤ Γt ≤ 2.4 GeV, based on the first 13 fb–1 of data collected in 2016 at a collision energy of 13 TeV. In parallel, based on the LHC Run 1 data set recorded at lower energies, a set of dedicated searches for FCNC processes involving top quarks has been carried out. These analyses focus on the couplings of the top quark to the other up-type quarks (up, charm) and different neutral bosons: the gluon, the photon, the Z boson and the Higgs boson.
Another approach adopted by CMS was to search for the rare production of a single top quark in association with a photon and a Z boson with the 8 TeV data set. These channels exploit the large up-quark density in the proton, and to a lesser extent the charm-quark density, therefore compensating for the smallness of the FCNC couplings. Finally, events with the conventional signature of t-channel production (resulting in a single top-quark decay and a light-quark jet) were used to set constraints on FCNC and other anomalous couplings by simultaneously considering their effects on the production and the decay of the top quark with both the 7 and 8 TeV data sets.
Although no deviation from the background-only expectations has been observed in any of the analyses so far, the CMS collaboration is fast approaching sensitivity to the FCNC signals expected by some models with just Run 1 data (see figure). All the analyses are limited in statistics and therefore will only benefit from more data to start effectively probing beyond-the-Standard-Model effects in the top quark sector.
The LHCb collaboration presented new results at the 8th International Workshop on Charm Physics (Charm 2016), which took place in Bologna on 5 to 9 September. Among various novelties, the collaboration reported the most precise measurements of the asymmetry between the effective lifetime of the D0 meson (a bound state of a charm quark and an up antiquark) and that of its antiparticle, the D̄0 meson, decaying to final states composed of two charged pions or kaons. Such an asymmetry, referred to as AΓ, differs from zero if and only if the effective lifetimes of these particular D0 and D̄0 decays are different, signalling the existence of CP-violating effects.
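In terms of the effective (flavour-tagged) lifetimes τ̂ measured in decays to a CP eigenstate f (here f = π⁺π⁻ or K⁺K⁻), the observable is commonly defined as

```latex
A_{\Gamma} \;=\;
\frac{\hat{\tau}\!\left(\bar{D}^{0}\!\to f\right) - \hat{\tau}\!\left(D^{0}\!\to f\right)}
     {\hat{\tau}\!\left(\bar{D}^{0}\!\to f\right) + \hat{\tau}\!\left(D^{0}\!\to f\right)} ,
```

so AΓ vanishes exactly when the two effective lifetimes coincide, i.e. in the absence of CP violation in these decays.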
The invariant-mass distribution of D0→ K+K– decays from one of the two analyses.
CP violation is still unobserved in the charm-quark sector, and its effects here are predicted to be very tiny by the Standard Model (well below the 10–3 level in this specific case). Thanks to the unprecedented sample sizes that LHCb is accumulating, it is only now that such a level of precision on these CP-violating observables with charm-meson decays is starting to be accessible.
Charm mesons are produced copiously at the LHC, either directly in the proton–proton collisions or in the decays of heavier beauty particles. Only the former production mechanism was used in this analysis. To determine whether the decaying meson is a D0 or a D̄0 (since they cannot be distinguished by the common π+π– or K+K– final state), LHCb reconstructed the decay chains D*+→ D0π+ and D*–→ D̄0π–, so that the sign of the charged pion identifies which D meson was involved in the decay. Two distinct analysis techniques were developed (see figure). The results of the two analyses are in excellent agreement and are consistent with no CP violation within about three parts in 104. These constitute the most precise measurements of CP violation ever made in the charm sector, with the full Run 2 data set expected to reduce the uncertainties even further.
The correlation of vn as a function of centrality, which is related to the amount of overlap of both lead ions at the time of the collision: solid red points are positive, while negative correlations are shown in blue.
One of the key goals in exploring the properties of QCD matter is to pin down the temperature dependence of the shear-viscosity to entropy-density ratio (η/s). In the limit of a weakly interacting gas, kinetic theory indicates that this ratio is proportional to the mean free path. Many different fluids exhibit a similar temperature dependence for η/s around a critical temperature Tc associated with a phase transition.
Heavy-ion collisions at the LHC create a state of hot and dense matter in which quarks and gluons become deconfined (the quark–gluon plasma, QGP). It exists only during the initial instants of the collision; as the system cools, the quarks and gluons recombine into a hadronic gas at Tc. The temperature dependence of η/s is expected to follow the trend of other fluids, with a minimum at Tc. The minimum value of η/s is of particular interest because weakly coupled QCD and AdS/CFT models predict different values.
Measurements of vn versus the pseudorapidity of produced charged particles, with lines indicating hydrodynamical calculations tuned with RHIC data.
The ALICE collaboration has recently released results from anisotropic-flow measurements, which provide new constraints for η/s(T). Anisotropic flow results from spatial anisotropies in the initial state that are converted to momentum anisotropies via pressure gradients during the evolution of the system. The magnitudes of momentum anisotropies are quantified by the so-called vn coefficients, where v2 is generated by initial states with an elliptic shape, v3 a triangular shape, etc. The shape of the initial state fluctuates on an event-by-event basis.
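The vn coefficients are defined through the standard Fourier decomposition of the azimuthal distribution of produced particles,

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_{n}\cos\!\big[n\,(\varphi - \Psi_{n})\big],
```

where Ψn is the symmetry-plane angle of the nth harmonic; v2 thus quantifies elliptic flow, v3 triangular flow, and so on.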
Our results show that the average temperature of the system decreases with the pseudorapidity magnitude (figure, above), which means that measurements at forward rapidities are more sensitive to the hadronic phase. Although the model calculations reproduce the general trends of the data, it is clear that other parameterisations of η/s(T) could be explored to better describe the RHIC and LHC data simultaneously.
We also measured event-by-event correlations of different vn coefficients in lead–lead collisions (figure, left). It is clear that the v2 and v4 correlations are rather sensitive to different parameterisations. By contrast, the correlation between v2 and v3 is not, and is largely sensitive to how the initial state is modelled. Subsequently, it was found that the agreement improved as the number of degrees of freedom in the initial model was increased. Whether the deviations between data and model for the v2 and v4 correlation are due to the η/s(T) parameterisations or the initial-state modelling will be the subject of future study.
The largest all-sky survey of celestial objects has been compiled by ESA’s Gaia mission. On 13 September, 1000 days after the satellite’s launch, the Gaia team published a preliminary catalogue of more than a billion stars, far exceeding the reach of ESA’s Hipparcos mission completed two decades ago.
Astrometry – the science of charting the sky – has undergone tremendous progress over the centuries, from naked-eye observations in antiquity to Gaia’s sophisticated space instrumentation today. The oldest known comprehensive catalogue of stellar positions was compiled by Hipparchus of Nicaea in the 2nd century BC. His work, which was based on even earlier observations by Assyro-Babylonian astronomers, was handed down 300 years later by Ptolemy in his 2nd century treatise known as the Almagest. Although it listed the positions of 850 stars with a precision of less than one degree, which is about twice the diameter of the Moon, this work was significantly surpassed only in 1627 with the publication of a catalogue of about 1000 stars by the Danish astronomer Tycho Brahe, who achieved a precision of about 1 arcminute by using large quadrants and sextants.
The first stellar catalogue compiled with the aid of a telescope was published in 1725 by English astronomer John Flamsteed, listing the positions of almost 3000 stars with a precision of 10–20 arcseconds. The precision increased significantly during the following centuries, with the use of photographic plates by the Yale Trigonometric Parallax Catalogue reaching 0.01 arcsecond in 1995. ESA’s Hipparcos mission, which operated from 1989 to 1993, was the first space telescope devoted to measuring stellar positions. The Hipparcos catalogue, released in 1997, provides the position, parallax and proper motion of 117,955 stars with a precision of 0.001 arcsecond. The “parallax” is the small apparent shift of a star’s position when observed six months apart from opposite sides of Earth’s orbit around the Sun, and it allows the star’s distance to be derived.
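The parallax–distance relation is simple enough to state in a few lines of code. The sketch below is purely illustrative (the function name is ours) and uses the textbook small-angle result d[parsec] = 1/p[arcsec]:

```python
def parallax_to_distance_pc(parallax_arcsec: float) -> float:
    """Distance in parsecs from an annual parallax in arcseconds: d = 1/p."""
    if parallax_arcsec <= 0:
        raise ValueError("parallax must be positive")
    return 1.0 / parallax_arcsec

PC_TO_LIGHT_YEARS = 3.2616  # light-years per parsec

# A 0.1-arcsecond parallax corresponds to 10 pc, i.e. about 32.6 light-years;
# milliarcsecond-level parallaxes (Hipparcos precision) reach out to ~1000 pc.
distance_ly = parallax_to_distance_pc(0.1) * PC_TO_LIGHT_YEARS
```

The smaller the parallax, the larger the distance, which is why improving astrometric precision directly extends a mission's reach into the Galaxy.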
While Hipparcos could probe the stars to distances of about 300 light-years, Gaia’s objective is to extend this to a significant fraction of the size of our Galaxy, which spans about 100,000 light-years. To achieve this, Gaia has an astrometric accuracy about 100 times better than Hipparcos. As a comparison, if Hipparcos could measure the angle that corresponds to the height of an astronaut standing on the Moon, Gaia would be able to measure the astronaut’s thumbnail.
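The astronaut analogy can be checked with small-angle arithmetic. Assuming a 1.8 m astronaut and a 1.5 cm thumbnail at the mean Earth–Moon distance (both sizes are our own illustrative choices):

```python
import math

MOON_DISTANCE_M = 3.844e8                 # mean Earth–Moon distance in metres
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # radians to arcseconds

def angular_size_arcsec(size_m: float, distance_m: float) -> float:
    """Angle (arcsec) subtended by an object, small-angle approximation."""
    return size_m / distance_m * RAD_TO_ARCSEC

astronaut = angular_size_arcsec(1.8, MOON_DISTANCE_M)    # ~1 milliarcsecond
thumbnail = angular_size_arcsec(0.015, MOON_DISTANCE_M)  # ~8 microarcseconds
```

The first number is comparable to Hipparcos's milliarcsecond precision, while the second lands in the tens-of-microarcseconds regime targeted by Gaia, consistent with the factor-of-100 improvement quoted above.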
Gaia was launched on 19 December 2013 towards the Lagrangian point L2, which is a prime location to observe the sky away from disturbances from the Sun, Earth and Moon. Although the first data release already comprises about a billion stars observed during the first 14 months of the mission, there was not enough time to disentangle proper motion from parallax; these quantities could be computed with high precision only for about two million stars previously observed by Hipparcos.
The new catalogue gives an impression of the great capabilities of Gaia. More observations are needed to make a dynamic 3D map of the Milky Way and to find and characterise possible brightness variations of all these stars. Gaia will then be able to provide the parallax distance of many periodic stars such as Cepheids, which are crucial in the accurate determination of the cosmic-distance ladder.
The Crystal Clear (CC) collaboration was approved by CERN’s Detector Research and Development Committee in April 1991 as experiment RD18. Its objective was to develop new inorganic scintillators that would be suitable for electromagnetic calorimeters in future LHC detectors. The main goal was to find dense and radiation-hard scintillating material with a fast light emission that can be produced in large quantities. This challenge required a large multidisciplinary effort involving world experts in different aspects of material sciences – including crystallography, solid-state physics, luminescence and defects in solids.
From 1991 to 1994, the CC collaboration carried out intensive studies to identify the most adequate scintillator material for the LHC experiments. Three candidates were identified and extensively studied: cerium fluoride (CeF3), lead tungstate (PbWO4) and heavy scintillating glass. In 1994, lead tungstate was chosen by the CMS and ALICE experiments as the most cost-effective crystal compliant with the operational conditions at the LHC. Today, 75,848 lead-tungstate crystals are installed in the CMS electromagnetic calorimeter and 17,920 in ALICE. The former contributed to the discovery of the Higgs boson, which was identified in 2012 by CMS and the ATLAS experiment via its decay, among others, into two photons. The CC collaboration’s generic R&D on scintillating materials has brought a deep understanding of cerium ions as scintillation activators and seen the development of lutetium and yttrium aluminium perovskite crystals for both physics and medical applications.
In 1997, the CC collaboration made its expertise in scintillators available to industry and society at large. Among the most promising sectors were medical functional imaging and, in particular, positron emission tomography (PET), due to its growing importance in cancer diagnostics and similarities with the functionality of electromagnetic calorimeters (the principle of detecting gamma rays in a PET scanner is identical to that in high-energy physics detectors).
Following this, CC collaboration members developed and constructed several dedicated PET prototypes. The first, which was later commercialised by Raytest GmbH in Germany under the trademark ClearPET, was a small-animal PET machine used for radiopharmaceutical research. At the turn of the millennium, five ClearPET prototypes characterised by a spatial resolution of 1.5 mm were built by the CC collaboration, which represented a major breakthrough in functional imaging at that time. The same crystal modules were also developed by the CC team at Forschungszentrum Jülich, Germany, to image plants in order to study carbon transport. A modified ClearPET geometry was also combined with X-ray single-photon detectors by CC researchers at CPPM Marseille, offering simultaneous PET and computed-tomography (CT) acquisition, and providing the first PET/CT simultaneous images of a mouse in 2015 (see image above). The simultaneous use of CT and PET allows the excellent position resolution of anatomic imaging (providing detailed images of the structure of tissues) to be combined with functional imaging, which is sensitive to the tissue’s metabolic activity.
After the success of ClearPET, in 2002, CC developed a dedicated PET camera for breast imaging called ClearPEM. This system had a spatial resolution of 1.3 mm and represented the first PET imaging based on avalanche photodiodes, which were initially developed for the CMS electromagnetic calorimeter. The machine was installed in Coimbra, Portugal, where clinical trials were performed. In 2005, a second ClearPEM machine combined with 3D ultrasound and elastography was developed with the aim of providing anatomical and metabolic information to allow better identification of tumours. This machine was installed in Hôpital Nord in Marseille, France, in December 2010 for clinical evaluations of 10 patients, and three years later it was moved to the San Gerardo hospital in Monza, Italy, to undertake larger clinical trials, which are ongoing.
In 2011, a European FP7 project called EndoTOFPET-US, which was a consortium of three hospitals, three companies and six institutes, began the development of a prototype for a novel bi-modal time-of-flight PET and ultrasound endoscope with a spatial resolution better than 1 mm and a time resolution of 200 ps. This was aimed at the detection of early stage pancreatic or prostatic tumours and the development of new biomarkers for pancreatic and prostatic cancers. Two prototypes have been produced (one for pancreatic and one for prostate cancers) and the first tests on a phantom-prostate prototype were performed in spring 2015 at the CERIMED centre in Marseille. Work is now ongoing to improve the two prototypes, in view of preclinical and clinical operation.
In addition to the development of ClearPET detectors, members of the collaboration initiated the development of GATE, a GEANT4-based Monte Carlo simulation package that allows complete PET detector systems to be modelled.
In 1992, the CC collaboration organised the first international conference on inorganic scintillators and their applications, which led to a global scientific community of around 300 people. Today, this community comes together every two years at the SCINT conferences, the next instalment of which will take place in Chamonix, France, from 18 to 22 September 2017.
To this day, the CC collaboration continues its investigations into new scintillators, their underlying scintillation mechanisms and their radiation-hardness characteristics – in addition to the development of detectors. Among its most recent activities is the investigation of key parameters in scintillating detectors that enable very precise timing information for various applications. These include mitigating the effect of “pile-up” caused by the high event rate at particle accelerators operating at high peak luminosities, and also medical applications in time-of-flight PET imaging. This research requires the study of new materials and processes to identify ultrafast scintillation mechanisms such as “hot intraband luminescence” or quantum-confined excitonic emission with sub-picosecond rise time and sub-nanosecond decay time. It also involves investigating the enhancement of scintillator light collection using various surface treatments, such as nano-patterning with photonic crystals. CC recently initiated a European COST Action called Fast Advanced Scintillator Timing (FAST) to bring together European experts from academia and industry, with the ultimate aim of achieving scintillator-based detectors with a time precision better than 100 ps; the action also provides an excellent training opportunity for researchers interested in this domain.
Among other recent activities of the CC collaboration are new crystal-production methods. Micro-pulling-down techniques, which allow inorganic scintillating crystals to be grown in the shape of fibres with diameters ranging from 0.3 to 3 mm, open the way to attractive detector designs for future high-energy physics experiments by replacing a block of crystals with a bundle of fibres. A Horizon 2020 European RISE Marie Skłodowska-Curie project called Intelum has been set up by the CC collaboration to explore the cost-effective production of large quantities of fibres. More recently, the development of new PET crystal modules has been launched by CC collaborators. These make use of new photodetector silicon photomultipliers and have a high spatial resolution (1.5 mm), depth-of-interaction capability (better than 3 mm) and a fast timing resolution (better than 200 ps).
Future directions
For the past 25 years, the CC collaboration has actively carried out R&D on scintillating materials, and investigated their use in novel ionising radiation-detecting devices (including read-out electronics and data acquisition) for use in particle-physics and medical-imaging applications. In addition to significant progress made in the understanding of scintillation mechanisms and radiation hardness of different materials, the choice of lead tungstate for the CMS electromagnetic calorimeter and the realisation of various prototypes for medical imaging are among the CC collaboration’s highlights so far. It is now making important contributions to understanding the key parameters for fast-timing detectors.
The various activities of the CC collaboration, which today has 29 institutional members, have resulted in more than 650 publications and 72 PhD theses. The motivation of CC collaboration members and the momentum generated throughout its many projects open up promising perspectives for the future of inorganic scintillators and their use in HEP and other applications.
• An event to celebrate the 25th anniversary of the CC collaboration will take place at CERN on 24 November.
The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear collider, under development since 1985, for which a conceptual design report was completed in 2012. With the discovery of the Higgs boson in July of that same year, the clear interest of operating CLIC at a lower centre-of-mass energy (380 GeV) became evident. CLIC recently published an update of its staged baseline scenario, placing the emphasis on an optimised first energy stage. This first stage is based on already demonstrated performances of CLIC’s novel acceleration technology, and it will be significantly less expensive than the one presented in the initial conceptual design report.
One of CERN’s main options for a flagship accelerator in the post-LHC era is an electron–positron collider at the high-energy frontier. The Compact Linear Collider (CLIC) is a multi-TeV high-luminosity linear collider that has been under development since 1985 and currently involves 75 institutes around the world. Being linear, such a machine does not suffer energy losses from synchrotron radiation, which increases strongly with the beam energy in circular machines. Another option for CERN is a very high-energy circular proton–proton collider, which is currently being considered as the core of the Future Circular Collider (FCC) programme. So far, CLIC R&D has principally focused on collider technology able to reach collision energies in the multi-TeV range. Based on this technology, a conceptual design report (CDR) including a feasibility study for a 3 TeV collider was completed in 2012.
With the discovery of the Higgs boson in July of that year, and the fact that the particle turned out to be relatively light with a mass of 125 GeV, it became evident that there is a compelling physics case for operating CLIC at a lower centre-of-mass energy. The optimal collision energy is 380 GeV because it simultaneously allows physicists to study two Higgs-production processes in addition to top-quark pair production. Therefore, to fully exploit CLIC’s scientific potential, the collider is foreseen to be constructed in several stages corresponding to different centre-of-mass energies: the first at 380 GeV would be followed by stages at 1.5 and 3 TeV, allowing powerful searches for phenomena beyond the Standard Model (SM).
While a fully optimised collider at 3 TeV was described in the CDR in 2012, the lower-energy stages were not presented at the same level of detail. In August this year, however, the CLIC and CLICdp (CLIC detector and physics study) collaborations published an updated baseline-staging scenario that places emphasis on an optimised first-energy stage compatible with an extension to high energies. The performance, cost and power consumption of the CLIC accelerator as a function of the centre-of-mass energy were addressed, building on experience from technology R&D and system tests. The resulting first-energy stage is based on already demonstrated performances of CLIC’s novel acceleration technology and will be significantly cheaper than the initial CDR design.
CLIC physics
An electron–positron collider provides unique opportunities to make precision measurements of the two heaviest particles in the SM: the Higgs boson (125 GeV) and the top quark (173 GeV). Deviations in the way the Higgs couples to the fermions, the electroweak bosons and itself are predicted in many extensions of the SM, such as supersymmetry or composite Higgs models. Different scenarios lead to specific patterns of deviations, which means that precision measurements of the Higgs couplings can potentially discriminate between different new-physics scenarios. The same is true of the couplings of the top quark to the Z boson and photon. CLIC would offer such measurements as the first step of its physics programme, and full simulations of realistic CLIC detector concepts have been used to evaluate the expected precision and to guide the choice of collision energy.
The principal Higgs production channel, Higgsstrahlung (e+e– → ZH), requires a centre-of-mass energy of at least the sum of the Higgs- and Z-boson masses. For an electron–positron collider such as CLIC, the Higgsstrahlung cross-section reaches its maximum at a centre-of-mass energy of around 240 GeV and decreases as a function of energy. Because the colliding electrons and positrons are elementary particles with a precisely known energy, Higgsstrahlung events can be identified by detecting the Z boson alone as it recoils against the Higgs boson. This can be done without looking at the decay of the Higgs boson, and hence the measurement is completely independent of possible unknown Higgs decays. This is a unique capability of a lepton collider and the reason why the first energy stage of CLIC is so important. The most powerful method with which to measure the Higgsstrahlung cross-section in this way is based on events where the Z boson decays into hadrons, and the best precision is expected at centre-of-mass energies around 350 GeV. (At lower energies it is more difficult to separate signal and background events, while at higher energies the measurement is limited by the smaller signal cross-section and worse recoil-mass resolution.)
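The recoil technique rests on simple four-momentum conservation: with beams of precisely known total energy √s, the mass recoiling against a reconstructed Z of energy E_Z satisfies m_rec² = s + m_Z² – 2√s·E_Z, independently of how the Higgs decays. A minimal numerical sketch of this kinematics (the function name and the illustrative numbers are our own, not from the CLIC studies):

```python
import math

def recoil_mass(sqrt_s, e_z, m_z=91.19):
    """Mass (GeV) recoiling against a Z of energy e_z (GeV) at a lepton
    collider of centre-of-mass energy sqrt_s (GeV):
    m_rec^2 = s + m_Z^2 - 2*sqrt_s*E_Z, from four-momentum conservation."""
    m2 = sqrt_s**2 + m_z**2 - 2.0 * sqrt_s * e_z
    return math.sqrt(m2)

# In e+e- -> ZH at sqrt(s) = 350 GeV the Z energy is fixed by two-body
# kinematics: E_Z = (s + m_Z^2 - m_H^2) / (2*sqrt_s).
sqrt_s, m_h, m_z = 350.0, 125.0, 91.19
e_z = (sqrt_s**2 + m_z**2 - m_h**2) / (2 * sqrt_s)
print(round(recoil_mass(sqrt_s, e_z, m_z), 1))  # 125.0 -> recovers m_H
```

In a real analysis E_Z is smeared by detector resolution, which is why the recoil-mass resolution degrades at higher √s, as noted above.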
The other main Higgs-production channel is WW fusion (e+e– → Hνeν̄e). In contrast to Higgsstrahlung, the cross-section for this process rises quickly with centre-of-mass energy. By measuring the rates for the same Higgs decay, such as H → bb, in both Higgsstrahlung and WW-fusion events, researchers can significantly improve their knowledge of the Higgs decay width – which is a challenging measurement at hadron colliders such as the LHC. A centre-of-mass energy of 380 GeV at the first CLIC energy stage is ideal for achieving a sizable contribution of WW-fusion events.
So far, the energy of electron–positron colliders has not been high enough to allow direct measurements of the top quark. At the first CLIC energy stage, however, properties of the top quark can be obtained via pair-production events (e+e– → tt̄). A small fraction of the collider’s running time would be used to scan the top pair-production cross-section in the threshold region around 350 GeV. This would allow the top-quark mass to be extracted in a theoretically well-defined scheme, which is not possible at hadron colliders. The value of the top-quark mass has an important impact on the stability of the electroweak vacuum at very high energies.
With current knowledge, the achievable precision on the top-quark mass is expected to be of the order of 50 MeV, including systematic and theoretical uncertainties. This is about an order of magnitude better than the precision expected at the High-Luminosity LHC (HL-LHC).
The couplings of the top quark to the Z boson and photon can be probed using the top-production cross-sections and “forward-backward” asymmetries for different electron-beam polarisation configurations available at CLIC. These observables lead to expected precisions on the couplings which are substantially better than those achievable at the HL-LHC. Deviations of these couplings from their SM expectations are predicted in many new physics scenarios, such as composite-Higgs scenarios or extra-dimension models. It was recently shown, using detailed detector simulations, that although higher energies are preferred, this measurement is already feasible at an energy of 380 GeV, provided the theoretical uncertainties improve in the coming years. The expected precisions depend on our ability to reconstruct tt̄ events correctly, which is more challenging at 380 GeV compared to higher energies because both top quarks decay almost isotropically.
Combining all available knowledge therefore led to the choice of 380 GeV for the first-energy stage of the CLIC programme in the new staging baseline. Not only is this close to the optimal value for Higgs physics around 350 GeV but it would also enable substantial measurements of the top quark. An integrated luminosity of 500 fb⁻¹ is required for the Higgs and top-physics programmes, which could take roughly five years. The top cross-section threshold scan, meanwhile, would be feasible with 100 fb⁻¹ collected at several energy points near the production threshold.
Stepping up
After the initial phase of CLIC operation at 380 GeV, the aim is to operate CLIC above 1 TeV at the earliest possible time. In the current baseline, two stages at 1.5 TeV and 3 TeV are planned, although the exact energies of these stages can be revised as new input from the LHC and HL-LHC becomes available. Searches for beyond-the-SM phenomena are the main goal of high-energy CLIC operation. Furthermore, additional unique measurements of Higgs and top properties are possible, including studies of double Higgs production to extract the Higgs self-coupling. This is crucial to probe the Higgs potential experimentally and its measurement is extremely challenging in hadron collisions, even at the HL-LHC. In addition, the full data sample with three million Higgs events would lead to very tight constraints on the Higgs couplings to vector bosons and fermions. In contrast to hadron colliders, all events can be used for physics and there are no QCD backgrounds.
Two fundamentally different approaches are possible to search for phenomena beyond the SM. The first is to search directly for the production of new particles, which in electron–positron collisions can take place almost up to the kinematic limit. Due to the clean experimental conditions and low backgrounds compared to hadron colliders, CLIC is particularly well suited for measuring new and existing weakly interacting states. Because the beam energies are tunable, it is also possible to study the production thresholds of new particles in detail. Searches for dark-matter candidates, meanwhile, can be performed using single-photon events with missing energy. Because lepton colliders probe the coupling of dark-matter particles to leptons, searches at CLIC are complementary to those at hadron colliders, which are sensitive to the couplings to quarks and gluons.
The second analysis approach at CLIC, which is sensitive to even higher mass scales, is to search for unexpected signals in precision measurements of SM observables. For example, measurements of two-fermion processes provide discovery potential for Z′ bosons with masses up to tens of TeV. Another important example is the search for additional resonances or anomalous couplings in vector-boson scattering. For both indirect and direct searches, the discovery reach improves significantly with increasing centre-of-mass energy. If new phenomena are found, beam polarisation might help to constrain the underlying theory through observables such as polarisation asymmetries.
The CLIC concept
CLIC will collide beams of electrons and positrons at a single interaction point, with the main beams generated in a central facility that would fit on the CERN site. To increase the brilliance of the beams, the particles are “cooled” (slowed down and reaccelerated continuously) in damping rings before they are sent to the two high-gradient main linacs, which face each other. Here, the beams are accelerated to the full collision energy in a single pass, and a magnetic telescope consisting of quadrupoles and different multipoles is used to focus the beams to nanometre sizes at the collision point inside the detector. Two additional complexes produce high-current (100 A) electron beams to drive the main linacs – this novel two-beam acceleration technique is unique to CLIC.
The CLIC accelerator R&D is focused on several core challenges. First, strong accelerating fields are required in the main linac to limit its length and cost. Outstanding beam quality is also essential to achieve a high rate of physics events in the detectors. In addition, the power consumption of the CLIC accelerator complex has to be limited to about 500 MW for the highest-energy stage; hence a high efficiency to generate RF power and transfer it into the beams is mandatory. CLIC will use high-frequency (X-band) normal-conducting accelerating structures (copper) to achieve accelerating gradients at the level of 100 MV/m. A centre-of-mass energy of 3 TeV can be reached with a collider of about 50 km length, while 380 GeV for CLIC’s first stage would require a site length of 11 km, which is slightly larger than the diameter of the LHC. The accelerator is operated using 50 RF pulses of 244 ns length per second. During each pulse, a train of 312 bunches is accelerated, which are separated by just 0.5 ns. To generate the accelerating field, each CLIC main-linac accelerating structure needs to be fed with an RF power of 60 MW. With a total of 140,000 structures in the 3 TeV collider, this adds up to more than 8 TW of peak RF power.
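The 8 TW figure is peak power during the short RF pulses, not a continuous draw; a back-of-envelope check using only the numbers quoted above (the variable names are ours) shows how it reconciles with the roughly 500 MW site budget:

```python
# Numbers quoted in the text for the 3 TeV stage.
n_structures = 140_000        # main-linac accelerating structures
peak_per_structure = 60e6     # W of RF power fed to each structure
pulses_per_second = 50
pulse_length = 244e-9         # s

peak_total_tw = n_structures * peak_per_structure / 1e12
duty_cycle = pulses_per_second * pulse_length   # fraction of time with RF on
avg_mw = n_structures * peak_per_structure * duty_cycle / 1e6

print(peak_total_tw)          # 8.4 -> "more than 8 TW" of peak power
print(round(avg_mw, 1))       # ~100 MW of average RF power into the structures
```

The average RF power of roughly 100 MW, divided by the efficiency of generating RF and transferring it to the beams, is what drives the ~500 MW wall-plug limit mentioned above.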
Because it is not possible to generate this peak power at reasonable cost with conventional klystrons (even for the short pulse length of 244 ns), a novel power-production scheme has been developed for CLIC. The idea is to run a drive beam with a current of 100 A parallel to the main beam and decelerate it in power-extraction and transfer structures. In these structures, the beam induces electric fields, thereby losing energy and generating RF power that is transferred to the main-linac accelerating structures. The drive beam is produced as a long (146 μs) train of bunches with a current of 4 A, accelerated to an energy of about 2.4 GeV and then sent into a delay loop and combiner-ring complex, where sets of 24 consecutive sub-pulses are interleaved to form 25 trains of 244 ns length with a current of about 100 A. Each of these bunch-trains is then used to power one of the 25 drive-beam sectors, which means that the initial 146 μs-long pulse is effectively compressed in time by a factor of 600, and therefore its power is increased by the same factor.
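The numbers in this scheme are internally consistent and can be checked with a few lines of arithmetic (an illustrative sketch using only values from the text):

```python
initial_current = 4.0     # A, long 146 us drive-beam pulse
combination_factor = 24   # delay loop x combiner-ring interleaving
n_sectors = 25            # drive-beam sectors, one 244 ns train each
pulse_length = 146e-6     # s

combined_current = initial_current * combination_factor  # current after combination
compression = combination_factor * n_sectors             # overall time compression
train_length_ns = pulse_length / compression * 1e9       # length of each train

print(combined_current)             # 96.0 -> quoted as "about 100 A"
print(compression)                  # 600  -> the quoted compression factor
print(round(train_length_ns, 1))    # ~243 ns, matching the quoted 244 ns pulses
```

Interleaving bunches multiplies the current by 24 while shortening each sub-pulse to 1/24 of its length; doing this for all 25 sectors accounts for the full factor of 600.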
To demonstrate this novel scheme, a test facility (CTF3) has been built up at CERN since 2001, reusing the LEP pre-injector building and components as well as adding many new ones. The facility now consists of a drive-beam accelerator, the delay loop and one combiner ring. CTF3 can produce a drive-beam pulse of about 30 A and accelerate the main beam with a gradient of up to 145 MV/m. A large range of components, feedback systems and operational procedures had to be developed to make the facility a success, and by the end of 2016 it will have fulfilled its mission. Further beam tests at SLAC, KEK and various light sources remain important. The CALIFES electron-beam facility at CERN, which is currently being evaluated for operation from 2017, could provide a testing ground for high-gradient structures and main-beam studies. More prototypes for CLIC’s main-beam and drive-beam components are being developed and characterised in dedicated test facilities at CERN and collaborating institutes. The resulting progress in X-band acceleration technology has also generated significant interest in the free-electron laser (FEL) community, where it may allow for more compact facilities.
To achieve the required luminosities (6 × 10³⁴ cm⁻² s⁻¹ at 3 TeV), nanometre beam sizes are required at CLIC’s interaction point. This is several hundred times smaller than at the SLC, which operated at SLAC in the 1990s and was the first and only operational linear collider, and therefore requires novel hardware and sophisticated beam-based alignment algorithms. A precision pre-alignment system has been developed and tested that can achieve an alignment accuracy in the range of 10 μm, while beam-based tuning algorithms have been successfully tested at SLAC and other facilities. These algorithms use beams of different energies to diagnose and correct the offset of the beam-position monitors, reducing the effective misalignments to a fraction of a micron. Because the motion of the ground due to natural and technical sources can cause the beam-guiding quadrupole magnets to move, knocking the beams out of focus, the magnets will be stabilised with an active feedback system that has been developed by a collaboration of several institutes, and which has already been demonstrated experimentally.
CLIC’s physics potential has been illustrated through the simulation and reconstruction of benchmark physics processes in two dedicated detector concepts. These are based on the SiD and ILD detector concepts developed for the International Linear Collider (ILC), an alternative machine currently under consideration for construction in Japan, and have been adapted to the experimental environment at the higher-energy CLIC. Because the high centre-of-mass energies and CLIC’s accelerator technology lead to relatively high beam-induced background levels for a lepton collider, the CLIC detector design and the event-reconstruction techniques are both optimised to suppress the influence of these backgrounds. A main driver for the ILC and CLIC detector concepts is the required jet-energy resolution. To achieve the required precision, the CLIC detector concepts are based on fine-grained electromagnetic and hadronic calorimeters optimised for particle-flow analysis techniques. A new study, which defines a single optimised CLIC detector for use in future CLIC physics benchmark studies, is almost complete. The work by CLICdp was crucial for the new staging baseline (especially for the choice of 380 GeV) because the physics potential as a function of energy can only be estimated with the required accuracy using detailed simulations of realistic detector concepts.
The new staged design
To optimise the CLIC accelerator, a systematic design approach has been developed and used to explore a large range of configurations for the RF structures of the main linac. For each structure design, the luminosity performance, power consumption and total cost of the CLIC complex are calculated. For the first stage, different accelerating structures operated at a somewhat lower accelerating gradient of 72 MV/m will be used to reach the luminosity goal at a cost and power consumption similar to earlier projects at CERN – while also not inflating the cost of the higher-energy stages. The design should also be flexible enough to take advantage of projected improvements in RF technology during the construction and operation of the first stage.
When upgrading to higher energies, the structures optimised for 380 GeV will be moved to the beginning of the new linear accelerator and the remaining space filled with structures optimised for 3 TeV operation. The RF pulse length of 244 ns is kept the same at all stages to avoid major modifications to the drive-beam generation scheme. Data taking at the three energy stages is expected to last for a period of seven, five and six years, respectively. The stages are interrupted by two upgrade periods each lasting two years, which means that the overall three-stage CLIC programme will last for 22 years from the start of operation. The duration of each stage is derived from integrated luminosity targets of 500 fb⁻¹ at 380 GeV, 1.5 ab⁻¹ at 1.5 TeV and 3 ab⁻¹ at 3 TeV.
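The 22-year programme length follows directly from the stage durations and upgrade breaks quoted above, and the luminosity targets imply the average data-taking rate per stage (a simple bookkeeping sketch; the per-year figures are our own arithmetic, not official CLIC numbers):

```python
# Stage durations and luminosity targets quoted in the text.
stage_years = {380: 7, 1500: 5, 3000: 6}            # GeV -> years of data taking
upgrade_years = 2 * 2                               # two upgrade periods of two years
lumi_targets_ab = {380: 0.5, 1500: 1.5, 3000: 3.0}  # integrated luminosity, ab^-1

total_years = sum(stage_years.values()) + upgrade_years
print(total_years)  # 22 -> overall programme length from the start of operation

for energy, years in stage_years.items():
    # average integrated luminosity per running year implied by the targets
    print(energy, round(lumi_targets_ab[energy] / years, 2))
```

The rising per-year rate at higher stages reflects the increase of instantaneous luminosity with centre-of-mass energy.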
An intense R&D programme is yielding other important improvements. For instance, the CLIC study recently proposed a novel design for klystrons that can increase their efficiency significantly. To reduce the power consumption further, permanent magnets are also being developed that are tunable enough to replace normal-conducting magnets. The goal is to develop a detailed design of both the accelerator and detector in time for the update of the European Strategy for Particle Physics towards the end of the decade.
Mature option
With the discovery of the Higgs boson, a great physics case exists for CLIC at a centre-of-mass energy of 380 GeV. Hence particular emphasis is being placed on the first stage of the accelerator, for which the focus is on reducing costs and power consumption. The new accelerating structure design will be improved and more statistics on the structure performance will be obtained. The detector design will continue to be optimised, driven by the requirements of the physics programme. Technology demonstrators for the most challenging detector elements, including the vertex detector and main tracker, are being developed in parallel.
Common studies with the ILC, which is currently being considered for implementation in Japan, are also important, both for accelerator and detector elements, in particular for the initial stage of CLIC. Both the accelerator and detector parameters and designs, in particular for the second- and third-energy stages, will evolve according to new LHC results and studies as they emerge.
CLIC is the only mature option for a future multi-TeV high-luminosity linear electron–positron collider. The two-beam technology has been demonstrated in large-scale tests and no showstoppers have been identified. CLIC is therefore an attractive option for CERN after the LHC. Once the European particle-physics community decides to move forward, a technical design will be developed and construction could begin in 2025.