Events with a single jet of particles in the final state have traditionally been studied in the context of searches for supersymmetry, for large extra spatial dimensions and for dark-matter candidates. Having searched for new phenomena in monojet final states in the 2011 data, the ATLAS collaboration turned its attention to data collected in 2012, with the first results presented at the Hadron Collider Physics (HCP) symposium in Kyoto in November.
Models with large extra spatial dimensions aim to provide a solution to the mass-hierarchy problem (related to the large difference between the electroweak unification scale at around 10² GeV and the Planck scale around 10¹⁹ GeV) by postulating the presence of n extra dimensions, such that the Planck scale in 4+n dimensions becomes naturally close to the electroweak scale. In these models, gravitons (the particles hypothesized as mediators of the gravitational interaction) are produced in association with a jet of hadrons; the extremely weakly interacting gravitons would escape detection, leading to a monojet signature in the final state.
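For orientation (this relation is not spelled out in the article, but is the standard one in such models), the apparent weakness of 4D gravity is attributed to the dilution of the gravitational field in the volume of the n compact extra dimensions of size R:

```latex
% Standard large-extra-dimension relation between the observed 4D Planck
% scale and the fundamental (4+n)-dimensional scale M_D (illustrative only):
\[
  M_{\mathrm{Pl}}^{2} \;\sim\; M_{D}^{\,n+2}\, R^{\,n}
\]
% For sufficiently large R, M_D can lie near the TeV scale, so that
% graviton + jet production becomes kinematically accessible at the LHC.
```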
Dark-matter particles could also give rise to monojet events. According to the current understanding of cosmology, non-baryonic, non-luminous matter contributes about 23% of the total mass-energy budget of the universe, but the exact nature of this dark matter remains unknown. A commonly accepted hypothesis is that it consists of weakly interacting massive particles (WIMPs) that interact only through gravitational or weak interactions. At the LHC, WIMPs could be produced in pairs that would pass through the detector undetected. Such events could be identified by the presence of an energetic jet from initial-state radiation, leading again to a monojet signature. The LHC experiments have a unique sensitivity to dark-matter candidates with masses below 4 GeV and are therefore complementary to other searches for dark matter.
The study presented at HCP uses 10 fb⁻¹ of proton–proton data collected during 2012, at a centre-of-mass energy of 8 TeV. As with the earlier analysis, the results are still in good agreement with the predictions of the Standard Model (figure 2). The new results have been translated into updated exclusion limits on the presence of large extra spatial dimensions and the production of WIMPs, as well as new limits on the production of gravitinos (the supersymmetric partners of gravitons) that result in the best lower bound to date on the mass of the gravitino.
It has taken decades of hunting but finally the first evidence for one of the rarest particle decays ever seen in nature, the decay of a Bs (composed of a beauty antiquark and a strange quark) into two muons, has been uncovered by the LHCb collaboration.
In the Standard Model, the decay Bs → μμ is calculated to occur only three times in every 1000 million Bs decays. While the Standard Model has been incredibly successful, it leaves many unanswered questions concerning, for example, the origin of the matter–antimatter asymmetry and the nature of dark matter. Extended theories, such as supersymmetry, may resolve some of these issues. These theories allow for new particles and phenomena that can affect measurable quantities. The branching fraction B(Bs → μμ), for example, can be enhanced or reduced with respect to the Standard Model prediction, so the measurement has the potential to reveal hints of new physics. The LHCb experiment is particularly suited for such an indirect search for the effects of new physics, complementary to direct searches for new particles.
The LHCb collaboration performed the search for Bs → μμ (and B⁰ → μμ) by analysing 1.0 fb⁻¹ of proton–proton collisions at a centre-of-mass energy of 7 TeV (from 2011) and 1.1 fb⁻¹ at 8 TeV (2012). The signal selection starts with the search for pairs of oppositely charged muons that form a vertex displaced from the proton–proton interaction vertex (see figure 1). The signal and background are then separated using the dimuon invariant mass together with kinematic and topological information combined in a multivariate classifier. The particular classifier used is a boosted decision-tree (BDT) algorithm, which is calibrated with data for both signal and background events. The latter are dominated by random combinations of two muons from two different B mesons; this contribution is carefully determined from data.
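The article does not describe the classifier in code, but the following minimal sketch (with invented variable names and toy distributions, not LHCb's actual inputs) shows how a boosted decision tree of this kind separates signal-like dimuon candidates from combinatorial background:

```python
# Illustrative sketch only: a boosted decision tree separating "signal-like"
# dimuon candidates from combinatorial background, in the spirit of the
# analysis described above. Features and distributions are invented for the
# example; they are NOT the experiment's actual inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 20000

def make_sample(is_signal, size):
    # Toy kinematic/topological variables: flight-distance significance,
    # muon transverse momentum, isolation and pointing angle.
    shift = 1.0 if is_signal else 0.0
    return np.column_stack([
        rng.gamma(2.0 + 2.0 * shift, 1.0, size),      # vertex displacement significance
        rng.exponential(2.0 + shift, size),           # muon pT (toy scale)
        rng.normal(0.5 + 0.3 * shift, 0.2, size),     # isolation
        rng.normal(0.0, 0.05 / (1.0 + shift), size),  # pointing angle (rad)
    ])

X = np.vstack([make_sample(True, n), make_sample(False, n)])
y = np.concatenate([np.ones(n), np.zeros(n)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)

# In the real analysis the BDT output is used together with the dimuon
# invariant mass in a fit; here we simply check the toy separation power.
print("ROC AUC on toy data:", roc_auc_score(y_test, bdt.predict_proba(X_test)[:, 1]))
```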
The number of B⁰ → μμ candidates that LHCb observes is consistent with the background expectation, giving an upper limit of B(B⁰ → μμ) < 9.4 × 10⁻¹⁰ at 95% confidence level. This is the world’s most stringent upper limit from a single experiment on this branching fraction. However, for Bs → μμ, LHCb sees an excess of candidates with respect to the background expectation (figure 2). A maximum-likelihood fit gives a branching fraction of B(Bs → μμ) = (3.2 +1.5 −1.2) × 10⁻⁹. The probability that the background could produce an excess of this size or larger is 5.3 × 10⁻⁴, corresponding to a signal significance of 3.5σ.
The measured branching fraction for Bs → μμ is close to the Standard Model prediction, albeit with a large uncertainty. This eagerly awaited result was presented at the Hadron Collider Physics Symposium in Kyoto and at a CERN seminar, and is now published. While it does not provide evidence for supersymmetry, it does constrain the parameter space of this and other models of new physics, and is a step further in understanding the universe.
The CMS collaboration has published its first result on proton–lead (pPb) collisions (CMS collaboration 2012), related to the observation of a phenomenon that was seen first in nucleus–nucleus collisions but also detected by CMS in 2010 in the first LHC proton–proton (pp) collisions at a centre-of-mass energy of 7 TeV (V Khachatryan et al. CMS collaboration 2010). The effect is a correlation between pairs of particles formed in high-multiplicity collisions – that is, collisions producing a high number of particles – which manifests as a ridge-like structure.
About once in every 100,000 pp collisions with the highest produced particle multiplicity, CMS observed an enhancement of particle pairs with small relative azimuthal angle Δφ (figure 1a). Such correlations had not been observed before in pp collisions but they were reminiscent of effects seen in nucleus–nucleus collisions first at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) and later in collisions of lead–lead nuclei (PbPb) at the LHC (figure 1b shows peripheral PbPb collisions from CMS).
Nucleus–nucleus collisions produce a hot, dense medium similar to the quark–gluon plasma (QGP) thought to have existed in the first microseconds after the Big Bang. The long-range correlations in PbPb collisions are interpreted as a result of a hydrodynamic expansion of this medium and are used to determine its fluid properties. Remarkably, this matter is found to have low frictional resistance (shear viscosity/entropy density ratio), behaving as a (nearly) perfect liquid. Because a QGP medium was not expected in the small pp system, the CMS results led to a large variety of theoretical models, which attempted to explain the origin of these ridge-like correlations (Wei Li 2012).
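For context (a standard benchmark, not a number quoted in the article), "nearly perfect" is usually judged against the conjectured holographic lower bound on this ratio:

```latex
% Kovtun-Son-Starinets (KSS) bound from gauge/gravity duality:
\[
  \frac{\eta}{s} \;\gtrsim\; \frac{\hbar}{4\pi k_{B}}
\]
% Heavy-ion measurements place the quark-gluon plasma within a few times
% this value, far below that of any ordinary fluid.
```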
In September 2012, the LHC provided a short pilot run of pPb collisions at a nucleon–nucleon centre-of-mass energy of 5 TeV, for just a few hours. CMS collected two million pPb collisions (figure 2) – and now the first correlation analysis of these data has revealed strong long-range correlations, most easily visible as the ridge-like structure highlighted in figure 1c. As was the case for the pp data, the most common simulations of pPb collisions do not show ridge-like correlations, thus indicating a new, still unexplained phenomenon. Surprisingly, the effect in pPb collisions is much stronger than in pp collisions. In fact, it is similar to that seen in PbPb collisions.
The 2013 pPb run should yield at least a 30,000-fold increase in the pPb data sample at the same collision energy. Combined with the surprisingly large magnitude of the observed correlations, this will enable detailed studies and open a new testing ground for basic questions in the physics of strongly interacting systems and the nature of the initial state of nuclear collisions.
More than 20 years ago, the CMS and ATLAS experiments at the LHC embarked on a long road into the unknown and, rather like Christopher Columbus, the two collaborations reached a new land last summer. But did they discover what they expected – the long awaited Higgs boson of the Standard Model – or have they found the first hint of a new unknown world? The only way to find out is to measure the characteristics of the new particle to establish if it is compatible with the expectations of the Standard Model.
The decay of the new boson to two Z bosons and subsequently to four leptons (figure 1) is an especially powerful tool. This decay channel produces four well measured tracks of particles in a low-background environment and contains a rich set of information that no other channel can provide. The CMS collaboration has exploited this information first to boost the significance of signal observed last summer and then to go even further. By using the decay kinematics – understanding how the masses and angles of all of the particles in the process are correlated – they have attempted to determine if the new particle is the Standard Model Higgs boson or a gateway to a new world.
Using the full event information, the analysis assigns to each event the probability that it is a genuine Higgs boson, a more exotic particle or just background. From these probabilities, it is possible to say how likely one model is compared with another. Figure 2 shows the expected likelihood for a genuine scalar Higgs boson (pink) and a pseudo-scalar boson (blue). The two hypotheses differ in the parity of the particle; in effect, the pseudo-scalar boson has a reversed mirror image. The green arrow on the plot is the measurement, showing that the probability of a pure pseudo-scalar boson is small, indicating that this option is largely disfavoured by the data. This observation makes it possible to rule out a set of possible extensions of the Standard Model. A similar test of the hypothesis of a spin-2 particle has also been performed but it requires more data for a conclusive result. These are just the first steps into this new world. Further studies of the new boson will be possible in future as more data become available.
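Schematically (our notation, not necessarily that used by CMS), the comparison of the two hypotheses can be cast as a likelihood-ratio test built from the per-event probabilities described above:

```latex
% Per-event probabilities P(m_{4l}, angles | J^P) are combined into
% likelihoods for the scalar (0+) and pseudoscalar (0-) hypotheses:
\[
  \mathcal{L}(J^{P}) \;=\; \prod_{i\,\in\,\text{events}}
      P\!\left(m_{4\ell,i},\, \vec{\Omega}_{i} \,\middle|\, J^{P}\right),
  \qquad
  q \;=\; -2\,\ln\frac{\mathcal{L}(0^{-})}{\mathcal{L}(0^{+})}
\]
% The observed q is then compared with its expected distributions under
% each hypothesis (the pink and blue curves of figure 2).
```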
A team of scientists from the Paul Scherrer Institute (PSI), CERN’s ISOLDE facility and the Institut Laue-Langevin (ILL) has published results from a preclinical study of new tumour-targeting radiopharmaceuticals based on the element terbium. The results demonstrate the potential of providing a new generation of radioisotopes with excellent properties for the diagnosis and treatment of cancer.
Radiopharmaceuticals in which a radioactive isotope is attached to a carrier that selectively delivers it to tumour cells are used in two main ways, for diagnosis and for treatment. Nuclear imaging for diagnostics involves either β+-emitting radioisotopes for positron-emission tomography (PET) or γ-emitting radioisotopes for use in single-photon-emission computed tomography (SPECT) and in planar imaging with gamma-cameras. By contrast, targeted radionuclide therapy employs the short-range radiation (α-particles and electrons) emitted by radioisotopes to destroy cancer cells.
So-called “matched pairs” of diagnostic and therapeutic radioisotopes of the same chemical element are particularly useful because they allow the preparation of radiopharmaceuticals that are absorbed and distributed in identical ways in the body. Terbium is the only element in the periodic table to offer not just a pair but four clinically interesting radioisotopes with complementary nuclear-decay characteristics covering all of the options for nuclear medicine: 152Tb for PET, 155Tb for SPECT, 149Tb for α-particle therapy and 161Tb for therapy with electrons (β–, conversion and Auger electrons).
The team from the PSI, ILL and CERN has now made the first comprehensive preclinical study of this range of terbium radiopharmaceuticals. The neutron-deficient isotopes 149Tb, 152Tb and 155Tb were produced by 1.4 GeV proton-induced spallation in a tantalum target and separated with the ISOLDE online isotope separator at CERN. 161Tb was produced at the high-flux reactor of ILL and at the spallation neutron source SINQ at PSI. The isotopes were then purified using cation-exchange chromatography at PSI.
For this first in vivo proof-of-principle study the team developed a new delivery agent, which targets folate receptors in the body. These receptors are over-expressed in a variety of aggressive tumours, including ovarian and other gynaecological cancers as well as certain breast, renal, lung, colorectal and brain cancers, while their distribution in normal tissues and organs is highly limited. Folate vitamins have a rapid uptake in the body but they are also rapidly eliminated, so they do not remain long enough to reach all cancer cells. Hence, the team designed a new folate delivery agent called “cm09”, where folic acid is conjugated with an albumin-binding entity to prolong the circulation time in the blood.
For the study, the terbium radioisotopes were combined with cm09 and then administered to tumour-bearing mice. Excellent tumour-to-background ratios 24 hours after injection allowed tumour xenografts in mice to be seen using small-animal PET (152Tb-cm09) and small-animal SPECT (155Tb-cm09 and 161Tb-cm09). In vivo therapy experiments using 149Tb-cm09 (α-therapy) and 161Tb-cm09 (β-therapy) resulted in a marked delay in tumour growth or even complete remission, as well as significantly increased survival in treated animals compared with untreated controls.
Future progress in these promising diagnostic and treatment options depends crucially on the regular availability of the terbium isotopes, in particular of 149Tb. At present ISOLDE at CERN is the world’s only source of this isotope.
Although the Hubble Space Telescope is more than 22 years old, regular upgrades of its instruments have kept its discovery potential intact. Now, its quest to detect the most distant and therefore earliest galaxies in the universe is reaching new frontiers with two candidates at a redshift of around 11. One of them was found in the Hubble Ultra Deep Field, the other using the light amplification of gravitational lensing induced by a cluster of galaxies.
Of all of the scientific satellites, Hubble is the only one that can be upgraded by astronauts. The fourth and final servicing mission was conducted in May 2009 with the space shuttle Atlantis. One of the two new instruments installed was the Wide Field Camera 3 (WFC3), which offers a large field of view and broad wavelength-coverage from ultraviolet to infrared light. These characteristics make it an ideal instrument to find rare and extremely distant galaxies.
Identifying such galaxies requires looking for as long as possible in an apparently empty patch of the sky and searching for the faintest spots of light that show up in the infrared image while being absent in the visible range. A remote galaxy will be observed only in the infrared because the wavelength of its visible radiation has been stretched on its journey by the expansion of the universe. This redshift, z, is a direct measurement of the cosmological distance of a galaxy and it can be determined accurately by measuring the shift of well identified spectral lines. Such a spectroscopic determination is out of reach for current instrumentation because the galaxies are too faint. A less robust alternative is to integrate the light in a series of spectral bands with various filters and to locate the bands on each side of the “Lyman break” – a sharp feature that results from the absorption of light by neutral hydrogen in a star-forming galaxy at wavelengths below 91.2 nm, corresponding to the energy (13.6 eV) needed to ionize the atom.
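As a concrete illustration of why such galaxies appear only in the infrared, the Lyman break of a source at z ≈ 11 is shifted well beyond the visible range:

```latex
% Cosmological redshift of an emitted feature:
\[
  \lambda_{\mathrm{obs}} = (1+z)\,\lambda_{\mathrm{rest}}
  \quad\Rightarrow\quad
  \lambda_{\mathrm{obs}} \approx (1+11)\times 91.2\ \mathrm{nm} \approx 1.1\ \mu\mathrm{m}
\]
% i.e. the break moves from the far ultraviolet into the near infrared,
% within reach of WFC3's infrared channel but invisible in optical filters.
```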
A team of scientists co-led by Richard Ellis of Caltech and Ross McLure of the University of Edinburgh has made new observations of the Hubble Ultra Deep Field (CERN Courier November 2012 p15). The study used one additional filter and undertook much deeper exposures in some filters to improve the reliability of high-redshift determinations. The team identified seven galaxies at redshifts above 8.5 that would represent a previously unseen population of galaxies that formed more than 13 thousand million years ago, when the universe was only about 3–4% of its current age. One of the galaxies, designated UDFj-39546284, was already a candidate for the highest redshift (z around 10) two years ago (CERN Courier March 2011 p10). The new observations suggest that it is even further away, at z = 11.9, unless it is an intense emission-line galaxy at z around 2.4. The latter possibility can only be ruled out with a deep infrared spectrum of the kind that the James Webb Space Telescope will provide after its planned launch in 2018.
Another team, led by Dan Coe of the Space Telescope Science Institute (STScI) in Baltimore, is using a different approach. They look for high-redshift objects around 25 clusters of galaxies observed by Hubble. The clusters are used as magnifying glasses that have the potential to amplify the light of background galaxies by a large factor, thanks to strong gravitational lensing (CERN Courier April 2008 p11). The latest discovery is a galaxy, known as MACS0647-JD, with a redshift of 10.8 ± 0.5. It seems to be a tiny galaxy with a mass not exceeding 1% of the mass of the Milky Way and could be one of many building blocks of a spiral galaxy like ours.
There was a keen sense of anticipation and excitement throughout the ATLAS collaboration as 2012 dawned. The LHC had performed superbly over the previous two years, delivering 5 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of 7 TeV in 2011, thereby allowing ATLAS to embark on a thorough exploration of a new energy regime. This work culminated with the first hints of a potential Higgs-like particle at a mass of about 126 GeV being reported by both the ATLAS and CMS collaborations at the CERN Council meeting in December 2011. With the promise of a much larger data sample at the increased collision energy of 8 TeV in 2012, everyone looked forward to seeing what the new data might bring.
The period leading up to the first collisions in early April 2012 saw intensive activity on the ATLAS detector itself, with the installation of additional sets of chambers to improve the coverage of the muon spectrometer, as well as the regular winter maintenance and consolidation work – essential for making sure that the detector was ready for the long year of data-taking ahead. With the promise of high-luminosity data with up to 40 simultaneous proton–proton collisions (“pile-up”) per bunch crossing – some 2–3 times more than seen in 2011 – experts from the groups responsible for the trigger, offline reconstruction and physics objects worked intensively to ensure that the online and offline software and selections were ready to cope with the influx of data. Careful optimization ensured that the performance of selections for electrons, τ leptons and missing transverse momentum, for example, was made stable against high levels of pile-up, while still keeping within the limits of the computing resources and maintaining – or even exceeding – the efficiencies and purities obtained in the 2011 data.
Meanwhile, the physics-analysis teams worked to finalize their analyses of the 2011 data for presentation at the winter/spring conferences and subsequent publication, while at the same time preparing for analysis of the new data. Members of the Higgs group focused attention on the two high mass-resolution channels H→γγ and H→ZZ(*)→4 leptons (figure 1), where the Higgs signal would appear as a narrow peak above a smoothly varying background. These channels had shown hints in the 2011 data and had the greatest potential to deliver early results in 2012. Using data samples from 2011 and a Monte Carlo simulation of the anticipated new data at 8 TeV, the analyses were re-optimized to maximize sensitivity in the mass region of 120–130 GeV, taking full advantage of the new object-reconstruction algorithms and selections.
The race to Australia
Once data-taking began in early April, the first priority was to calibrate and verify the performance of the detector, trigger and reconstruction, comparing the results with the new 8 TeV Monte Carlo simulation. The modelling of pile-up was particularly important and was checked using a dedicated low-luminosity run of the LHC, where events were recorded with only a single interaction per bunch crossing. Having established the basic conditions for physics analysis, attention then turned to preparations for the International Conference on High-Energy Physics (ICHEP) taking place on 5–11 July in Melbourne, where the particle-physics community and the world’s media would be eagerly awaiting the latest results from the new data.
As ICHEP drew nearer, the LHC began to deliver the goods, with up to 1 fb⁻¹ of data per week. Each new run was recorded, calibrated and processed through the Tier-0 centre of the Worldwide LHC Computing Grid at CERN, before being thoroughly checked and validated by the ATLAS data-quality group and delivered to the physics-analysis teams on a regular weekly schedule. At the same time, the worldwide computing Grid resources available to ATLAS worked round the clock to prepare the corresponding Monte Carlo simulation samples at the new collision energy of 8 TeV. At first, the analysers in the Higgs group restricted their attention to control regions in data, aiming to prove to themselves and the rest of the collaboration that the new data were thoroughly understood. After a series of review meetings, with a few weeks remaining before ICHEP, the go-ahead was given to “un-blind” the data taken so far – a moment of great excitement and not a little anxiety.
At first only hints were visible but as more data were added week by week and combined with the results from an improved analysis of the 2011 data, it rapidly became clear that there was a significant signal in both the γγ and 4-lepton channels. The last few weeks before ICHEP were particularly intense, with exhaustive cross-checks of the results and many discussions on exactly how to present and interpret what was being seen. With the full 5.8 fb⁻¹ sample from LHC data-taking up until 18 June included, ATLAS had signals with significances of 4.5σ in the γγ channel and 3.4σ in 4 leptons, leading to the reporting of the observation of a new particle with a combined significance of 5.0σ at the special seminar at CERN on 4 July and at the ICHEP conference.
Similar signals were seen by CMS and both collaborations submitted papers reporting the discovery of this new Higgs-like resonance at the end of July. As well as the γγ and 4-lepton results reported at ICHEP, the paper by ATLAS also included the analysis of the H→WW(*)→lνlν channel, which revealed a broad excess with a significance of 2.8σ around 125 GeV. The combination of these three channels together with the 2011 data analysis from several other channels established the existence of this new particle at the 5.9σ level (figure 2), ushering in a new era in particle physics.
Searching for the unexpected
As well as following up on the hints of the Higgs seen in the 2011 data, the ATLAS collaboration has continued to conduct intensive searches across the full range of physics scenarios beyond the Standard Model, including those that involve supersymmetry (SUSY) and non-SUSY extensions of the Standard Model. More than 20 papers have been published or submitted on SUSY searches with the complete 2011 data set, with a similar number published on other searches beyond the Standard Model. One particular highlight is the search for the dark matter that is postulated to exist from astronomical observations but which has never been seen in the laboratory. By searching for “unbalanced” events, in which a single photon or jet of particles is produced recoiling against a pair of “invisible” undetected particles, limits can be set on the interaction cross-sections of the dark-matter candidates known as weakly interacting massive particles (WIMPs) with ordinary matter. Using the full 2011 data set, ATLAS was able to set limits on such WIMP-nucleon cross-sections for WIMPs of mass up to around 1 TeV; these limits are complementary to those achieved by direct-detection and gamma-ray observation experiments.
Another highlight is the search for new particles that decay into pairs of top (t) and antitop (t̄) quarks, giving rise to resonances in the tt̄ invariant mass spectrum. The complete 2011 data set gives access to invariant masses well beyond 1 TeV, where the t and t̄ tend to decay in “boosted” topologies with two sets of back-to-back collimated decay products. By reconstructing each top decay as a single “fat” jet and exploiting recently developed techniques to search for distinct objects within the “substructure” of these jets, ATLAS was able to set limits on the production of resonances from the decay of Z’ bosons or Kaluza-Klein gluons in the tera-electron-volt range, even though high levels of pile-up added noise to the jet substructure. Such techniques will become even more important in extending these searches to higher masses with the full 2012 data sample.
The search for SUSY continued apace in 2012, with new results from 8 TeV data presented at both the SUSY 2012 conference in August and the Hadron Collider Physics Symposium in November. By looking for events with several jets and large missing transverse energy, limits on the strong production of squarks and gluinos were pushed beyond 1.5 TeV for equal-mass squarks and gluinos in the framework of minimal supergravity grand unification (mSUGRA) and the constrained minimal supersymmetric extension of the Standard Model (CMSSM). The lack of evidence for “generic” SUSY signatures with masses close to the electroweak and top-quark mass scales – together with the discovery of a light Higgs-like object around 126 GeV – has led to much theoretical interest in scenarios in which only the third-generation SUSY particles (top and bottom squarks and the stau) are relatively light. ATLAS performed a series of dedicated searches for the direct production of bottom and top squarks. The latter in particular give rise to final states that are similar to top-pair production, so searches become particularly challenging if the masses of the top squark and the top quark are similar. Data from 2012 were used to fill much of the “gap” around the mass of the top quark (figure 3).
Precision measurements
The ATLAS search programme described above relies on a thorough understanding of the Standard Model physics processes that form the background to any search, but are also interesting to study in their own right. Fully exploiting the large statistics of the 2011 and 2012 data samples requires an understanding of the efficiencies, energy scales and resolutions for physics objects such as electrons, muons, τ leptons, jets and b-jets to the level of a few per cent or better, which in turn requires a dedicated effort that continued throughout 2012. This effort paid off in a large number of precise measurements involving the production of combinations of W and Z bosons, photons and jets, including those with heavy flavour. In many cases, these results challenge the current precision of QCD-based Monte Carlo calculations and provide important input for improving the ability to describe physics at LHC energy scales. Studies of high-rate jet production and soft QCD processes have also continued, with measurements of event shapes, energy flow and the underlying event contributing to knowledge of the backgrounds that underlie all physics processes at the LHC. The measurements of WW, WZ, ZZ, Wγ and Zγ production have allowed stringent constraints to be placed on anomalous couplings of these bosons at high energies, in addition to being an essential ingredient in understanding the backgrounds to Higgs searches.
The large top-quark samples available in the data from 2011 and now 2012 have opened up a new era in the study of the heaviest known fundamental particle. The cross-sections for the production of both tt̄ pairs and single top quarks have been measured precisely at both 7 TeV and 8 TeV; evidence for the associated production of a W boson and a top quark has also been observed. Limits have been set on the associated production of tt̄ pairs together with W and Z particles, and even Higgs bosons, and these studies will be extended with the full 2012 data set. The asymmetry in tt̄ production has also been measured with the full 7 TeV data set – although, unlike at the Tevatron at Fermilab, no hints of anomalies have been seen. The polarizations of top quarks and W bosons produced in their decays have been measured and spin correlations between decaying t and t̄ quarks observed. Furthermore, ATLAS has begun to characterize the top-quark production processes in detail, looking at kinematic distributions and the production of associated jets – key ingredients in increasing the precision of top-quark measurements, as well as in evaluating top-quark backgrounds in searches for physics beyond the Standard Model.
In addition, ATLAS has continued to exploit the large samples of B hadrons produced at the LHC, in particular those from dimuon final states, which can be recorded even at the highest LHC luminosities. Highlights include the detailed study of CP violation in the decay Bs→J/ψφ, which was found to be in perfect agreement with the expectation from the Standard Model, and the precise measurement of the Λb mass and lifetime.
In late 2011, ATLAS recorded around 20 times more lead–lead collisions than in 2010, allowing the studies of the hot, dense medium produced in such collisions to be expanded to include photons and Z bosons, as well as jets. A new technique was developed to subtract the “underlying event” background in lead–lead collisions, enabling precise measurements of jet energies and the identification of electrons and photons in the electromagnetic calorimeter. Bosons emerge from the nuclear collision region “unscathed”, opening the door to using the energy balance in photon-jet and Z-jet events to study the energy loss suffered by jets. In addition, ATLAS has pursued a broad heavy-ion physics programme, which includes the study of correlations and flow, charged-particle multiplicities and suppression, as well as heavy-flavour production. The collaboration looks forward eagerly to the proton–lead physics run scheduled for early 2013.
What is next?
At the time of writing, ATLAS is on track to record more than 20 fb⁻¹ of proton–proton collision data in 2012 and studies of these data by the various teams are in full swing across the whole range of search and measurement analyses. Building on the discovery announced in July, the next task for the Higgs analysis group is to learn more about the new particle, comparing its properties with those expected for the Standard Model Higgs boson and various alternatives. A first step was presented in September, where the July analyses were interpreted in terms of limits on the coupling strength of the new particle to gauge bosons, leptons and quarks, albeit with limited precision at this stage. It is also important to see if the particle decays directly to fermions, by searching for the decays H→ττ and H→bb.
These analyses are extremely challenging because of the high backgrounds and low invariant-mass resolution but first results using 13 fb⁻¹ of 8 TeV data were presented at the Hadron Collider Physics Symposium in November. These results are not yet conclusive; the full 2012 data sample is needed to make any definite statements. At that point, it should also be possible to probe the spin and CP properties of the new particle and improve the precision on the couplings, bringing the picture of this fascinating new object into sharper focus. At the same time, first results from searches beyond the Standard Model with the complete 2012 data set should be available, further increasing the sensitivity across the full spectrum of new physics models. The analysis of this data set will continue throughout the 2013–2014 shutdown, setting the stage for the start of the 13–14 TeV LHC physics programme in 2015 with an upgraded ATLAS detector.
• This article has only scratched the surface of the ATLAS physics programme in 2012. For more details of the more than 200 papers and 400 preliminary results, please see https://twiki.cern.ch/twiki/bin/view/AtlasPublic.
Some 400 theorists and experimentalists from all around the world convened in Munich on 8–12 October to discuss developments in the theory of strong interactions. They were attending the tenth conference on “Quark Confinement and the Hadron Spectrum” (ConfX) at the Garching Research Campus, hosted by the Physics Department of the Technical University of Munich (TUM), with support from the Excellence Cluster “Origin and Structure of the Universe”. Topics included areas at the boundaries of the field, such as theories beyond the Standard Model with a strongly coupled sector and QCD approaches to nuclear physics and astrophysics.
Inaugurated in 1994 in Como, Italy, this series of conferences has established itself as an important forum in the field, bringing together people working in strong interactions on approaches that range from lattice QCD to perturbative QCD, models of the QCD vacuum to phenomenology and experiments, the mechanism of confinement to deconfinement and heavy-ion physics, and from effective field theories to physics beyond the Standard Model. Taking place at a particularly important time for particle physics, with the observation of a Higgs-like particle at CERN, the tenth conference provided a valuable opportunity not only to reconsider what was done on past occasions but also to discuss the perspectives for strongly coupled theories.
The scientific focus of ConfX was spread across seven main scientific sessions: vacuum structure and confinement; light quarks; heavy quarks; deconfinement; QCD and new physics; nuclear and astroparticle physics; and strongly coupled theories. These subjects are relevant for the physics of B factories (Belle and BaBar), tau-charm experiments (BESIII), LHC experiments (LHCb, CMS, ATLAS), heavy-ion experiments (RHIC, ALICE), future experiments at FAIR-GSI (PANDA, CBM) and in general for many low-energy experiments (such as at Jefferson Lab, COSY, MAMI) and some parts of experimental astrophysics.
It is impossible to summarize here the wealth of results presented at the meeting, the intensity of the discussions and the flow of information. What follows is just a brief selection.
The first plenary session began with recent progress in the theoretical calculations of double parton-scattering at the LHC presented by Aneesh Manohar of the University of California, San Diego. The application of soft collinear effective theory to many collider physics processes was then introduced by Thomas Becher of Bern University and followed by a review of quarkonium production by Kuang-Ta Chao of Peking University. In particular, J/ψ production has now finally been calculated at next-to-leading order in nonrelativistic QCD (NRQCD) and the extraction of colour-octet matrix elements from a combined fit to collider data has become possible for the first time. The current picture hints at the universality of the NRQCD matrix elements and a proof of the NRQCD factorization in the fragmentation approach seems to be close. Predictions for the production of Υ and other quarkonia states at the LHC experiments are now available. The progress in theory together with the new LHC data should soon allow the resolution of the long-standing puzzles about the J/ψ polarization and the production mechanism of quarkonium, both at hadron colliders and at B factories.
Heavy ions and more
The study of quarkonium production and suppression at finite temperature in heavy-ion collisions, as a probe of the quark–gluon plasma, was reviewed by Jacopo Ghiglieri of McGill University in the context of a new effective field-theory approach (potential NRQCD at finite temperature). Here the shift in paradigm from the typical phenomenological description is apparent: quarkonium dissociation is caused by the emergence of a large imaginary part in the quark–antiquark potential, rather than by Debye screening. The effective field-theory approach allows a systematic calculation of the thermal modifications to the energy and width of the Υ(1S) as produced at the LHC in heavy-ion collisions.
There has been great progress in developing the capabilities of the lattice approach to calculate the properties of heavy and light quarks, and also in connection to chiral effective field theories, as Peter Lepage of Cornell University, Laurent Lellouch of the Centre de Physique Théorique, Marseilles, and Zoltan Fodor of the University of Wuppertal reported.
As José Pelaez of the Complutense University of Madrid argued, the long-standing controversy about the existence and nature of light scalars, dating back to the 1950s, has been resolved in recent years by means of better data and more powerful theoretical techniques that include effective Lagrangians and dispersion theory, underlining the interest and relevance of these states.
Highlights in strong physics beyond the Standard Model presented at the conference include: composite dynamics, as put in context by Francesco Sannino of the Centre for Cosmology and Particle Physics Phenomenology, Odense, at the time of the Higgs discovery; gauge–gravity duality; holographic QCD, explained by Shigeki Sugimoto of Tokyo University; and applications of the anti-de Sitter/conformal-field-theory correspondence to heavy-ion collisions, contrasted with proton–proton physics at the LHC now and in the future, including the outstanding LHC results, presented by Günther Dissertori of ETH Zurich. This session culminated in a heated discussion about future strongly coupled scenarios, led by Antonio Pich of Valencia University, in which different views of scenarios beyond the Standard Model were discussed but remained unreconciled among the panel members Estia Eichten of Fermilab, Emanuel Katz of Boston University, Juan Maldacena of the Institute for Advanced Study, Princeton, and Stefan Pokorski of the University of Warsaw.
The plenary session on Wednesday morning was dedicated to the impact of QCD on nuclear and astroparticle physics. Opening the session, Ulrich Wiedner of Ruhr University Bochum presented a comprehensive review of the highlights and future of low-energy experiments in hadron physics. An effective field theory and lattice description of a variety of nuclear bound states and reactions, as well as a review of the low-energy interaction of strange and charm hadrons with nucleons and nuclei, were presented by Evgeny Epelbaum, also of Bochum, and William Detmold of the Massachusetts Institute of Technology. Charles Horowitz of Indiana University spoke about multimessenger observations of neutron-rich matter, describing the Lead Radius Experiment (PREX) at Jefferson Lab, which measures the neutron density of 208Pb using parity-violating electron scattering. This has important implications for neutron-rich matter and neutron stars. He also described X-ray observations of radii of neutron stars, which are possibly model dependent, and their implications for the equation of state. Gravitational-wave observations of merging neutron stars and r-mode oscillations were discussed in terms of the equation of state, mechanical properties and bulk and shear viscosities of neutron-rich matter. This prepared the ground for the roundtable discussion on “What can compact stars really tell us about dense QCD matter?”, chaired by Andreas Schmitt of the Vienna University of Technology.
On Thursday morning, Pich gave an overview of the perturbative determination of αs, in which he presented the final value of 0.1187 ± 0.0007 and discussed the impact of the different types of αs extraction on the final result.
A number of low-energy precision measurements are sensitive to new physics either because the Standard Model prediction for the measured quantity is precisely known – for example, the anomalous magnetic moment of the muon (g-2) – or because the Standard Model “background” is small, as in the case of electric dipole moments (EDMs). Timothy Chupp of the University of Michigan presented several studies that are under way to probe physics beyond the Standard Model, including g-2 and EDMs. He also described the prospects for the precision measurement of the Cabibbo-Kobayashi-Maskawa matrix element, Vud, from neutron decay, i.e. the neutron lifetime and measurement of the axial-vector coupling constant (gA), as well as couplings beyond the Standard Model accessible from neutron decay. The discussion culminated in the roundtable “Resolving physics beyond the Standard Model at low energy” led by Susan Gardner of the University of Kentucky.
The final plenary session on Friday afternoon started with a talk by Mikko Laine of the University of Bern, in which he drew analogies and relationships between hot QCD and cosmology. John Harris of Yale University went on to review the latest heavy-ion data from Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) and the LHC. In particular, the data show how the “soup” of quark–gluon plasma flows easily, with extremely low viscosity – suggesting a near-perfect liquid of quarks and gluons. However, it appears opaque to energetic partons at RHIC and less so to the extremely energetic parton probes available in collisions at the LHC. This review was followed by presentations on the theoretical challenges and perspectives in the exploration of the hot QCD matter, including recent highlights in lattice calculations at finite temperature and finite density as presented by Peter Petreczky of Brookhaven National Laboratory. The session culminated with a roundtable about “Quark Gluon Plasma: what is it and how do we find it out?” chaired by Berndt Mueller of Duke University.
Yiota Foka of GSI and CERN reported on the International Particle Physics Outreach Group, which has developed an educational activity that brings LHC data into the classroom. Each year since 2005, thousands of high-school students in many countries go to nearby universities or research centres for one day to unravel the mysteries of particle physics and to be “scientists for a day”. In 2012, 10,000 students from 130 institutions in 31 countries took part in the popular event over a four-week period.
The conference featured a plenary session and seven sessions running in parallel on the subjects of the seven topical sections, with a total of 250 parallel talks. The sections on vacuum structure and confinement and on deconfinement constituted almost two conferences in themselves, with a total of 54 talks in 17.5 hours and 57 talks in 24 hours, respectively. The conference as a whole ended with a visionary talk by Chris Quigg of Fermilab on “Beyond Confinement”. The extraordinary scientific discussion and exchange that characterized the conference has served as a trigger for a document “Strongly Coupled Physics: challenges, scenarios and perspectives” that is currently in preparation in collaboration with the section conveners.
During the poster session, participants could also enjoy tasting cheese and a variety of wines from all of the countries represented. A ride down the gigantic slide belonging to the Mathematics Department complemented the lively scientific discussions. An evening session on the “Colourful world of quarks and gluons”, with talks by Gerhard Ecker (“The shaping of QCD”) and Thomas Mannel (“The many facets of QCD”), attracted the public from Garching city and from the many campus research institutes, as well as conference participants. Tours of Munich, glimpses of Bavarian culture at the famous Hofbräuhaus and a social dinner at the Hofbräukeller complemented the opportunity to discover the local campus facilities (the TUM Institute of Advanced Studies and the TUM engineering, mathematics and physics departments).
In a recent article, Harald Fritzsch shared his perspective on the history of the understanding of the strong interaction (CERN Courier October 2012 p21). Here, we’d like to supplement that view. Our focus is narrower but also sharper. We will discuss a brief but dramatic period during 1973–1974, when the modern theory of the strong interaction – quantum chromodynamics, or QCD – emerged, essentially in its current form. While we were active participants in that drama, we have not relied solely on memory but have carefully reviewed the contemporary literature.
At the end of 1972 there was no fundamental theory of the strong interaction – and no consensus on how to construct one. Proposals based on S-matrix philosophy, dual-resonance models, phenomenological quark models, current algebras, ideas about “partons” and chiral dynamics – the logical descendant of Hideki Yukawa’s original pion-exchange idea – created a voluminous and rapidly growing literature. None of those competing ideas, however, offered a framework in which uniquely defined calculations leading to sharp, testable predictions could be carried out. It seemed possible that strong-interaction physics would evolve along the lines of nuclear physics: one would gradually accumulate insight experimentally, and acquire command of an ever-larger range of phenomena through models and rules of thumb. An overarching theory worthy to stand beside Maxwell’s electrodynamics or Einstein’s general relativity was no more than a dream – and not a widely shared one.
Within less than two years the situation had transformed radically. We had arrived at a very specific candidate theory of the strong interaction, one based on precise, beautiful equations. And we had specific, quantitative proposals for testing it. The theoretical works [1–5] that were central to this transformation can be identified, we think, with considerable precision.
First clues
Let us briefly recall the key lines of evidence and thought that those works reconciled, synthesized and brought to fruition. They can be summarized under three headings: quarks and colour; scaling and partons; quantum field theory and the renormalization group.
Quarks and colour: A large body of strong-interaction phenomenology, including the particle spectrum and magnetic moments, had been organized using the idea that mesons and baryons are composite particles made from combinations of a small number of more fundamental constituents: quarks. This approach, which had its roots in the ideas of Murray Gell-Mann [6] and George Zweig [7], is reviewed in a nice book by J J J Kokkedee [8]. For the model to work, the quarks were required to have bizarre properties – qualitatively different from the properties of any known particles. Their electric charges had to be fractional. They had to have an extra internal “colour” degree of freedom [9,10]. Above all, they had to be confined. Extensive experimental searches for individual quarks gave negative results. Within the model quark–antiquark pairs made mesons, while quark–quark–quark triplets made baryons; single quarks had to be much heavier than mesons and baryons – if, indeed, they existed at all.
Scaling and partons: The famous electroproduction experiments at SLAC revealed, beginning in the late 1960s, that inclusive cross-sections did not exhibit the “soft” or “form factor” behaviour familiar in exclusive and purely hadronic processes (as explored up to that time). Richard Feynman [11] interpreted these experiments as indicating the existence of more fundamental point-like constituent particles within protons, which he called partons. His approach was intuitive, employing a form of impulse approximation. James Bjorken [12] arrived at related results earlier, using more formal operator methods (local current algebra). Current-algebra sum rules were derived using “quark–gluon” models with Abelian, flavourless gluons. The agreement of these sum rules with experimental results on electron and neutrino deep-inelastic scattering gave strong evidence that charged partons are spin 1/2 particles [13] and that they have baryon number 1/3 [14], i.e. that charged partons are quarks.
Quantum field theory and the renormalization group: Martinus Veltman and Gerardus ’t Hooft [15] brought powerful new tools to the study of perturbative renormalization theory, leading to a more rigorous, quantitative formulation of gauge theories of electroweak interactions. Kenneth Wilson introduced a wealth of new ideas, conveniently though rather obscurely referred to as the renormalization group, into the study of quantum field theory beyond the limits of perturbation theory. He used these ideas with great success to study critical phenomena. Neither of those developments related directly to the strong interaction problem but they formed an important intellectual background and inspiration. They showed that the possibilities for quantum field theory to describe physical behaviour were considerably richer than previously appreciated. Wilson [16] also sketched how his renormalization-group ideas might be used to study short-distance behaviour, with specific reference to problems in the strong interaction.
These various clues appeared to be mutually exclusive, or at least in considerable tension. The parton model is based on neglect of interference terms whose existence, however, is required by basic principles of quantum mechanics. Attempts to identify partons with dynamical quarks [17] were partially successful but ascribed a much more intricate structure to protons than was postulated in the simplistic quark models and unambiguously required additional, non-quark constituents. The confinement of quarks contradicted all previous experience in phenomenology. Furthermore, such behaviour could not be obtained within perturbative quantum field theory. There were numerous technical challenges in combining re-scaling transformations, as used in the renormalization group, with gauge symmetry.
But the most concrete, quantitative tension, and the one whose resolution ultimately broke the whole subject open, was the tension between the scaling behaviour observed experimentally at SLAC and the basic principles of quantum field theory. Several workers [18] expanded Wilson’s somewhat sketchy indications into a precise mapping between calculable properties of quantum field theories and observable aspects of inclusive cross-sections. Specifically, this work made it clear that the scaling behaviour observed at SLAC could be obtained only in quantum field theories with very small anomalous dimensions. (Strict scaling, which is equivalent to vanishing anomalous dimensions, cannot occur in a non-trivial – interacting – quantum field theory [19].) A few theorists realized that approximate scaling could be achieved in an interacting quantum theory if the effective interaction approached zero at short distances. Anthony Zee called such field theories “stagnant” (they are essentially what we now call asymptotically free theories) and he [20], Kurt Symanzik [21] and Giorgio Parisi [22] searched for such theories. However, none found any physically acceptable examples. Indeed, a powerful no-go result [23] demonstrated that no four-dimensional quantum field theory lacking non-Abelian gauge symmetry can be asymptotically free.
Our paper, submitted in April 1973 [1], alludes directly to these motivating issues in its opening: “Non-Abelian theories have received much attention recently as a means of constructing unified and renormalizable theories of the weak and electromagnetic interactions. In this note we report an investigation of the ultraviolet (UV) asymptotic behaviour of such theories. We have found that they possess the remarkable feature, perhaps unique among renormalizable theories, of asymptotically approaching free-field theory. Such asymptotically free theories will exhibit, for matrix elements between on-mass-shell states, Bjorken scaling. We therefore suggest that one should look to a non-Abelian gauge theory of the strong interactions to provide the explanation for Bjorken scaling, which has so far eluded field-theoretic understanding.”
Thus the tension between scaling and quantum field theory might be resolved but only within a special, limited class of theories. The paper surveys those possibilities and concludes: “One particularly appealing model is based on three triplets of fermions, with Gell-Mann’s SU(3)xSU(3) as a global symmetry and an SU(3) “colour” gauge group to provide the strong interactions. That is, the generators of the strong-interaction gauge group commute with ordinary SU(3)xSU(3) currents and mix quarks with the same isospin and hypercharge but different “colour”. In such a model the vector mesons are neutral and the structure of the operator product expansion of electromagnetic or weak currents is (assuming the strong coupling constant is in the domain of attraction of the origin!) essentially that of the free quark model (up to calculable logarithmic corrections).*” This was the first clear formulation of the theory that we know today as QCD. The footnote indicated by * refers to additional work, which became the core of our two subsequent papers [3, 4].
David Politzer’s paper [2] contains calculations of the renormalization group coefficients for non-Abelian gauge theories with fermions, broadly along the same lines as in our first paper quoted above [1]. It does not refer to the problem of understanding scaling in the hadronic strong interaction. (The reference to “strong interactions” in the title is generic.) Politzer emphasized the importance of the converse of asymptotic freedom – that is, that the effective coupling grows at long distances. He remarks that this could lead to surprises regarding the particle content of asymptotically free theories and support dynamical symmetry breaking. Although we arrived at our results independently, we and Politzer learnt of each other’s work before publication, compared results, requested simultaneous publication and referred to one another. The paper by Howard Georgi and Politzer [5] adopts QCD without comment and independently derives predictions for deviations from scaling parallel to the corresponding parts of our papers [3, 4].
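The central one-loop result of these papers can be stated compactly. In our normalization (a standard textbook form), the coupling of an SU(3) gauge theory with n_f quark flavours runs as:

```latex
% One-loop running of the strong coupling in SU(3) with n_f quark flavours,
% the quantity computed in refs. [1, 2]:
\[
  \mu^{2}\,\frac{d\alpha_{s}}{d\mu^{2}}
  \;=\; -\,\frac{b_{0}}{4\pi}\,\alpha_{s}^{2} \;+\; \mathcal{O}(\alpha_{s}^{3}),
  \qquad
  b_{0} \;=\; 11 - \tfrac{2}{3}\,n_{f}
\]
% For n_f <= 16 the coefficient b_0 is positive, so the coupling decreases
% logarithmically at short distances (asymptotic freedom) and grows at long
% distances, the converse emphasized by Politzer.
```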
Further reflections
The preceding account omits several sidelights and near misses, and lots of prehistory. But, although it is incomplete, we do not think it is distorted.
It may be appropriate to mention explicitly contributions by two extremely eminent physicists (with collaborators) that are often cited together with papers 1–5 in ways that can be misleading.
’t Hooft, together with Veltman, had developed effective methods for calculating quantum corrections in non-Abelian gauge theories. They had worked out many examples, specifically including one-loop wave function and vertex divergences [24]. It would not have been very difficult, as a technical matter, to re-assemble pieces of those calculations to construct calculations of renormalization group coefficients. ’t Hooft attests – and Symanzik corroborated – that he announced a negative value of the β function for non-Abelian gauge theories with fermions at a conference in Marseilles in the summer of 1972. Unfortunately, there is no record of this in the workshop proceedings, nor in the contemporary literature, so there is no documentation regarding the exact content of the announcement or its context. It had no influence on papers 1–5. In his contemporary work on the strong interaction, ’t Hooft adopted a completely different perspective from that of Gross-Wilczek and Georgi-Politzer, a perspective from which it would be very difficult to arrive at QCD and its property of asymptotic freedom as we understand them today. Specifically, ’t Hooft’s work considered a spontaneously broken gauge theory with hadrons as the fundamental objects, e.g. ρ mesons as gauge particles. His relevant publications immediately following papers 1–5 supply alternative methods for calculating renormalization group coefficients but do not propose specific physical applications.
Two contributions involving Gell-Mann and collaborators are sometimes cited as sources of QCD. The first is the “Rochester Conference” at Fermilab in the summer of 1972 [25]. It contains two relevant presentations, Gell-Mann’s summary talk and a contributed paper with Fritzsch, entitled “Current Algebra: Quarks and What Else?” In the summary talk, SLAC scaling is mentioned and interpreted in terms of “quarks, treated formally”. The discussion is not rooted in quantum field theory; indeed, most of the discussion of the strong interaction, by far, is given over to S-matrix and dual-resonance ideas. The presentation with Fritzsch briefly mentions the possibility of using colour octet gluons, as one among several possibilities for extending light-cone current algebra (again, not within a quantum field theory).
The second contribution [26] appeared after 1–5 and refers to them. From a historical perspective, what is particularly revealing about it is the comment: “For us, the result that the colour octet field theory model comes closer to asymptotic scaling than the colour singlet model is interesting, but not necessarily conclusive, since we conjecture that there may be a modification at high frequencies that produces true asymptotic scaling.”
As events unfolded, the most profound and most fruitful aspects of QCD and asymptotic freedom proved to be their embodiment in a rigorously defined, quantitatively precise quantum field theory, which could be tested through its prediction of deviations from scaling. Yet just those aspects are what the authors hesitated to accept, even after they had been analysed.
The emergence of a specific, precise quantum field theory for the strong interaction – featuring beautiful equations – marked a watershed. Remarkable progress ensued on several fronts.
The realization that basic strong interaction processes at high energy could be calculated in a practical, controlled and systematically improvable way opened up many applications (figure 1). The subject now called perturbative QCD, which refines and improves parton model ideas, is a direct outgrowth of papers 1–5 but extends their scope almost beyond recognition. Perturbative QCD is the subject of several large textbooks, dozens of conference proceedings, etc. It has become the essential foundation for analysing experimental results from high-energy accelerators including, notably, the LHC. It justifies, in particular, the identification of “jets” with quarks and gluons (figure 2), and allows calculation of their production rates.
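The practical handle behind perturbative QCD is the logarithmically falling coupling. At one loop (a standard textbook expression, quoted here purely as an illustration),

$$\alpha_s(Q^{2}) = \frac{12\pi}{(33 - 2n_f)\,\ln(Q^{2}/\Lambda^{2})},$$

where n_f is the number of active quark flavours and Λ is the QCD scale (a couple of hundred MeV, depending on the scheme and n_f). Because α_s becomes small at large momentum transfer Q, jet cross-sections and scaling violations can be computed as systematically improvable series in α_s.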
The paradoxical heuristics of the quark model, with its juxtaposition of free-particle properties and confinement, became physically plausible and matured into a well-posed mathematical problem [4]. For the growth of the effective coupling with increasing distance, together with the existence of formally massless, colour-charged particles, brought the theory into uncharted territory. Because uncancelled field energy threatens to build up catastrophically, it was plausible that only colour-singlet states might emerge with finite energy. Essentially new mathematical techniques were invented to address this challenge. The most successful of these, based on direct numerical solution of the equations (so-called “lattice gauge theory”), has gone far beyond demonstrating confinement to yield sharp quantitative results for the mass spectrum and for many detailed properties of hadrons.
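To make the phrase “direct numerical solution of the equations” concrete, here is a deliberately minimal sketch – a toy, not anything used in real lattice-QCD work – of the basic ingredients: link variables, the Wilson plaquette action and a Metropolis update. It uses compact U(1) gauge theory in two dimensions purely for brevity; genuine QCD calculations use SU(3) matrices in four dimensions, include quark fields and rely on far more efficient algorithms.

```python
# Toy lattice gauge theory: compact U(1) in two dimensions with the Wilson
# plaquette action, sampled by a plain Metropolis algorithm.  A minimal,
# hypothetical sketch -- real lattice-QCD codes use SU(3) link matrices in
# four dimensions, dynamical quarks and far more sophisticated algorithms.
import numpy as np

L = 16            # lattice extent (L x L sites, periodic boundaries)
beta = 2.0        # lattice coupling, roughly 1/g^2
rng = np.random.default_rng(0)

# U(1) link variables U_mu(x) = exp(i*theta); keep only the angles.
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))   # index: [mu, x, y]

def plaq(th, x, y):
    """Plaquette angle based at site (x, y)."""
    xp, yp = (x + 1) % L, (y + 1) % L
    return th[0, x, y] + th[1, xp, y] - th[0, x, yp] - th[1, x, y]

def local_cos_sum(th, mu, x, y):
    """Sum of cos(plaquette) over the two plaquettes containing link (mu, x, y)."""
    if mu == 0:   # x-link sits in the plaquettes based at (x, y) and (x, y-1)
        return np.cos(plaq(th, x, y)) + np.cos(plaq(th, x, (y - 1) % L))
    else:         # y-link sits in the plaquettes based at (x, y) and (x-1, y)
        return np.cos(plaq(th, x, y)) + np.cos(plaq(th, (x - 1) % L, y))

def sweep(th):
    """One Metropolis sweep: propose a small shift of each link angle in turn."""
    for mu in range(2):
        for x in range(L):
            for y in range(L):
                old = th[mu, x, y]
                s_old = local_cos_sum(th, mu, x, y)
                th[mu, x, y] = old + rng.uniform(-0.5, 0.5)
                s_new = local_cos_sum(th, mu, x, y)
                # Boltzmann weight is exp(+beta * sum of cos(plaquette))
                if rng.random() >= np.exp(beta * (s_new - s_old)):
                    th[mu, x, y] = old   # reject the proposal

for n in range(200):                     # thermalize and measure
    sweep(theta)
avg = np.mean([np.cos(plaq(theta, x, y)) for x in range(L) for y in range(L)])
print(f"average plaquette at beta={beta}: {avg:.3f}")
```

Even this toy shows the essential structure: the gauge field lives on the links of a space-time lattice, the action is built from closed loops (plaquettes) and expectation values are estimated by Monte Carlo sampling.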
More generally, the dramatic success of a fully realized quantum field theory in yielding a wealth of striking physical phenomena that are not evident in a linear approximation – including emergence of a dynamical scale (“mass without mass”), dynamical symmetry breaking, a rich physical spectrum and, of course, confinement – helped catalyse a renewed interest in the deep possibilities of quantum field theory. It continues to surprise us today.
Prior to papers 1–5, the behaviour of matter at ultrahigh temperatures and densities seemed utterly inaccessible to theoretical understanding. After these papers, it was understood instead to be remarkably simple. That circumstance opened up the earliest moments of the Big Bang to scientific analysis. It is the foundation of what has become a large and fruitful field: astroparticle physics.
The equations of QCD are rooted in the same mathematics of gauge symmetry [27] that underlies the modern theory of electroweak interactions. They are worthy to stand beside Maxwell’s equations; one might even say they are an enriched version of those equations. It becomes possible to contemplate still more extensive symmetries, unifying the different forces. The methods used to establish asymptotic freedom – specifically, running couplings – provide quantitative tools for exploring that idea. Intriguing, encouraging results have been obtained along these lines. They suggest, in particular, the possibility of low-energy supersymmetry, such as might be observed at the LHC.
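The quantitative content of that unification argument is the one-loop running of the three inverse couplings (standard textbook coefficients, with the usual GUT normalization of hypercharge, α₁ = (5/3)α_Y):

$$\frac{1}{\alpha_i(Q)} = \frac{1}{\alpha_i(M_Z)} - \frac{b_i}{2\pi}\ln\frac{Q}{M_Z},\qquad (b_1,b_2,b_3)_{\mathrm{SM}} = \Bigl(\tfrac{41}{10},\,-\tfrac{19}{6},\,-7\Bigr),\quad (b_1,b_2,b_3)_{\mathrm{MSSM}} = \Bigl(\tfrac{33}{5},\,1,\,-3\Bigr).$$

With the Standard Model coefficients the three couplings approach one another at very high energies but do not quite meet; with the supersymmetric coefficients they meet to good accuracy near 10^16 GeV – the intriguing, encouraging results referred to above.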
The LHC, the largest scientific instrument ever built, will extend its discovery potential at the beginning of the next decade through a fivefold increase in luminosity beyond the design value, in a new configuration called the High Luminosity LHC (HL-LHC). This extraordinary technical enterprise will rely on a combination of cutting-edge 11–13 T superconducting magnets, compact and ultraprecise superconducting radio-frequency cavities for beam rotation, as well as 300-m-long, high-power superconducting links with zero energy dissipation. In addition, the higher luminosities will make new demands on vacuum, cryogenics and machine protection, and will require new concepts for collimation and diagnostics, as well as advanced modelling for the intense beams.
Now, as the LHC nears the end of its first long run – from March 2010 to March 2013 – preparation work for this major upgrade is gathering speed. The past year has seen major developments in some of the key superconducting technologies, in particular for the new high-field magnets and the high-power links. Meanwhile, important decisions have been taken within the HiLumi LHC Design Study, which was launched just over a year ago. Supported in part by funding from the Seventh Framework Programme (FP7) of the European Commission (EC), this is the first phase of the larger HL-LHC project.
Broad collaboration
Towards the end of 2012, two meetings provided the opportunity for people involved at these accelerator frontiers to review progress and plan future activities, not only within their institutes around the world but also with industrial partners. On 14–16 November, the INFN Frascati National Laboratory was host to the 2nd Joint HiLumi LHC–LARP Annual Meeting. This brought together some 130 experts from Europe, Japan, Russia and the US LHC Accelerator Research Program (LARP). Three weeks later, on 4–5 December, a workshop on “Superconducting technologies for next-generation accelerators” took place at CERN organized by the HiLumi LHC Design Study in conjunction with the Test Infrastructure and Accelerator Research Area (TIARA) project, which is also co-funded by the EC under FP7. The workshop attracted more than 100 specialists, half from industry and half from laboratories and institutes. The aim was to explore the technical challenges emerging from the design of new accelerators and to match them with state-of-the-art industrial solutions.
Superconductivity has been the most important enabling technology in particle accelerators for the past 30 years – since the time of CERN’s Intersecting Storage Rings (the first accelerator to employ superconducting magnets during operation) and Fermilab’s Energy Doubler. The latter, later renamed the Tevatron, was the first large-scale superconducting system and it paved the way for all of the subsequent superconductivity projects, including the HERA collider at DESY, phase II of the Large Electron–Positron collider at CERN, the TRISTAN electron–positron collider at KEK and the Relativistic Heavy-Ion Collider at Brookhaven National Laboratory. Today, superconductivity is the core technology of the LHC, which employs some 1700 large superconducting magnets (dipoles and quadrupoles) and nearly 8000 superconducting corrector magnets, all cooled by more than 100 tonnes of superfluid helium.
The LHC’s main dipoles are 8 T superconducting magnets made from coils of niobium-titanium (NbTi) alloy. To allow the installation of additional collimators to deal with the increased luminosity in the HL-LHC, in 2010 CERN’s Lucio Rossi suggested replacing some of the 8 T dipoles with shorter 11 T magnets based on niobium-tin (Nb3Sn), which remains superconducting at higher magnetic fields and temperatures than NbTi. This idea also interested Fermilab, which has a high-field magnet R&D programme aimed at developing magnets for future machines such as a muon collider. CERN and Fermilab began to collaborate and by the spring of 2012 they had completed a 2-m-long Nb3Sn dipole. In the summer it was tested at 1.9 K in the Fermilab Vertical Test Facility, reaching a current of 11.2 kA and a calculated field of 10.4 T.
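The case for 11 T rests on simple beam optics. The bending angle of a dipole of length L and field B, for a beam of momentum p, is approximately

$$\theta \approx \frac{0.3\,B\,[\mathrm{T}]\;L\,[\mathrm{m}]}{p\,[\mathrm{GeV}/c]},$$

so the integrated field B·L of the replaced section must be preserved. Taking the nominal design field of about 8.3 T for illustration, an 11 T magnet needs only roughly 8.3/11 ≈ 75% of the length to provide the same bending, and the space freed can house a collimator.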
Such developments feed directly into the HiLumi LHC Design Study, which covers six work-packages (WP) of the larger HL-LHC project. The work of the design study is overseen by project management (WP1), which has CERN’s Hermann Schmickler as its new technical co-ordinator. Various committees and bodies, in particular the newly formed HL-LHC Co-ordination Group, ensure the necessary link between the machine-upgrade and the detector-upgrade projects, under the supervision of CERN management. The recent Joint HiLumi LHC–LARP Annual Meeting reviewed their progress as well as the headway that has been made towards a final layout for the accelerator upgrade.
Good progress
The main target for the HL-LHC is to achieve an integrated luminosity of 250 fb–1 a year and a total of 3000 fb–1 over 12 years. A key step in reaching this target lies in reducing the β* function (related to the focal length) at collision. With this in view, the team working on accelerator physics and performance (WP2) has collaborated closely with members of the LHC injector upgrade project as well as the current LHC operation group. As a result, they have defined possible sets of machine optics (in relation to β* and the crossing angle) and beam parameters (emittance, bunch spacing, bunch charge) that can achieve their goal. A further important development in WP2 is the recent, successful test in the LHC of luminosity-levelling by varying β*.
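The role of β* can be seen from the standard expression for the luminosity of two colliding round Gaussian beams (a textbook form, not a formula taken from the project documents):

$$\mathcal{L} = \frac{n_b\,N_b^{2}\,f_{\mathrm{rev}}\,\gamma}{4\pi\,\varepsilon_n\,\beta^{*}}\,R,$$

where n_b is the number of bunches, N_b the bunch population, f_rev the revolution frequency, γ the relativistic factor, ε_n the normalized transverse emittance and R ≤ 1 a geometric reduction factor due to the crossing angle. For fixed R and beam parameters, halving β* doubles the luminosity, which is why the optics of the insertion regions dominate the upgrade.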
A reduced β* in turn requires a redesign of the magnets in the insertion regions (IRs) where the collisions occur, which is the task of WP3. One important decision, taken in July in collaboration with WP2 and WP10 of HL-LHC (energy deposition and absorber), was to opt for the maximum possible aperture for the quadrupoles of the inner triplets: 150 mm of coil-free bore. This choice was based on successful tests within US-LARP of a 4-m-long, 90 mm aperture quadrupole and a more recent 1-m-long structure with a 120 mm aperture, both based on advanced Nb3Sn superconductor. In light of this decision, the teams working on accelerator physics and magnets in US-LARP are adjusting their plans and preparing a construction project for 2015.
While the work of WP3 has focused on providing major input to the choice of the quadrupole aperture, a decision on shielding has been made to use tungsten elements and a beam screen. At the same time, the conceptual design of the new D1 dipoles for the IRs is being steered by the KEK laboratory in Japan, where teams have analysed the performance of three possible apertures. The proposal is to have an 8-m-long magnet operating at 5 T.
To make the decreased β* most effective, the HL-LHC will use superconducting “crab cavities” to rotate particle bunches before they collide. These special radio-frequency cavities, which are the focus of WP4, may also provide levelling of the luminosity during a fill. The conceptual and technical design of three compact cavities (“4-rod”, “double ridge” and “quarter-wave”) has now been completed successfully. The new Crab Cavity Technical Co-ordination Working Group will, after the first long shutdown of the LHC, oversee preparation for the integration of crab cavities in the LHC and the preliminary tests in the Super Proton Synchrotron in 2015. Laboratory tests of a prototype 4-rod crab cavity built from bulk niobium superconductor by Lancaster University and the Cockcroft Institute in the UK began in November at CERN, while a prototype of the double-ridge type is under final preparation by a team from SLAC and Old Dominion University (ODU) at Jefferson Lab in the US, with tests foreseen by the beginning of 2013. A prototype of the quarter-wave type is under manufacture at Brookhaven National Laboratory, also using bulk niobium.
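The loss that the crab cavities recover is quantified by the geometric factor in the luminosity formula above:

$$R = \frac{1}{\sqrt{1+\phi^{2}}},\qquad \phi = \frac{\theta_c\,\sigma_z}{2\,\sigma^{*}},$$

where θ_c is the full crossing angle, σ_z the bunch length and σ* the transverse beam size at the interaction point (φ is known as the Piwinski angle). Tilting the bunches with crab cavities lets them overlap head-on in spite of the crossing angle, pushing R back towards unity; modulating the tilt also offers a handle for luminosity levelling.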
The HL-LHC will require higher beam currents, so new collimators will be necessary to protect the magnets from the 500 MJ of stored energy in each beam. The collimation team (WP5) has made the first steps towards the design of new IR collimation, with close collaboration between teams at CERN and from US-LARP. Tracking simulation tools have been set up to calculate losses by performing multi-turn tracking of the collision products, which can induce significant losses in the matching sections and dispersion suppressors at Point 1, Point 2 (with ions) and Point 5.
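The quoted stored energy is easy to cross-check. With, say, 2800 bunches of about 1.6 × 10^11 protons each (illustrative numbers) at 7 TeV per proton,

$$U \approx n_b\,N_b\,E_p \approx 2800 \times 1.6\times10^{11} \times 7\times10^{12}\,\mathrm{eV} \times 1.6\times10^{-19}\,\mathrm{J/eV} \approx 5\times10^{8}\,\mathrm{J} = 500\,\mathrm{MJ}$$

per beam – comparable to the kinetic energy of a high-speed train at full speed, and more than enough to damage superconducting magnets if lost in an uncontrolled way.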
A further challenge for the HL-LHC project is to relocate equipment such as power convertors away from the tunnel to avoid radiation damage to electronics as well as to ease installation and integration of new equipment near the high-luminosity IRs, which are already crowded. This will require superconducting links that can transport high currents (up to 150 kA DC per line) from power supplies at ambient temperature on the surface to components operating at 1.9 K in the tunnel, some 100 m below ground. Work on this “cold powering” has started well ahead of schedule in WP6, with a study made of possible powering layouts for the new quadrupole magnets in the IRs, based on input concerning features of the optics and magnets agreed with WP2, WP3 and WP7 of HL-LHC (machine protection). Preliminary studies of the integration of the cold-powering system in the LHC machine have also been performed.
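A rough estimate shows why these links must be superconducting. A conventional 100-m copper cable carrying 150 kA would dissipate, even with a very generous (purely illustrative) cross-section of 100 cm²,

$$P = I^{2}R = I^{2}\,\frac{\rho_{\mathrm{Cu}}\,L}{A} \approx (1.5\times10^{5}\,\mathrm{A})^{2}\times\frac{1.7\times10^{-8}\,\Omega\,\mathrm{m}\times100\,\mathrm{m}}{10^{-2}\,\mathrm{m}^{2}} \approx 4\,\mathrm{MW},$$

which is clearly untenable; a superconducting link carries the same current with essentially zero resistive loss, at the price of cryogenic cooling.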
In the LHC, current leads that incorporate a high-temperature superconductor (HTS) supply currents of up to 13 kA to the magnets in the tunnel. For the upgrade, CERN has been working with the Italian company Columbus to develop new superconducting wires based on magnesium diboride (MgB2), which has a lower operating temperature than HTS – the former being superconducting at the operating current at up to 25 K rather than 35–50 K with the latter – but is considerably less expensive. Until now, only flat ribbons of MgB2 have been available but CERN and Columbus have jointly developed round wires that are more suitable for the higher currents required for the HL-LHC. Cables built from tapes of copper and either HTS or MgB2 have been built and tested at CERN, and multi-cable assemblies have also been designed and constructed. Tests of the first 20 kA superconducting link are now taking place at CERN in the new test-station that has been set up in building SM18.
New and more advanced superconducting devices lie at the heart not only of the HL-LHC but also of other large projects, such as the Facility for Antiproton and Ion Research at GSI, the XFEL at DESY and the European Spallation Source (ESS) in Lund. The workshop held in December on superconducting technologies was therefore based on talks about the HiLumi LHC Design Study, TIARA and the ESS, interwoven with presentations by representatives from industry. Companies also had stands and meeting points to provide the opportunity to exchange ideas and information.
One point of discussion was the model for laboratory–industry relations. Both the approach based on “turnkey” contracts (where only the main characteristics of equipment are laid down in a functional specification) and an approach based on “built-to-print” contracts (where industry is responsible for a specific manufacture rather than for the full product) can be effective and yield the best value for money. For equipment that has been fully or even partly developed for previous projects, the turnkey model can probably be used, thus minimizing the human resources required at the laboratory. When R&D is long and based on new types of equipment, such as for the LHC upgrade, the built-to-print model is probably more suitable. However, in both cases, only a close laboratory–industry relationship during construction can avoid misunderstandings and painful extra costs. There were also discussions on how to improve the exchange of information on technologies, processes, materials, facilities, work organization and training of the next generation of engineers and technicians.
In addition to reviewing progress in superconducting technologies for the HL-LHC, the workshop looked ahead to the items that will need to be procured once the project is approved by CERN Council; in principle in June, in the context of the updated European Strategy for Particle Physics. The requirements include: 20 large superconducting Nb3Sn quadrupoles rated for 12–13 T, 10 of which will be supplied by the US; five large 6 T superconducting dipoles (D1) in NbTi from Japan; five large 4–4.5 T superconducting dipoles (D2); five large superconducting twin quadrupoles (Q4) rated for 8 T; six to twenty superconducting 11 T twin dipoles in Nb3Sn; five large superconducting twin quadrupoles (Q7) rated for 7 T; five to six modules each of three superconducting crab cavities; 3 km of superconducting links rated for 50–150 kA; and a number, still to be defined, of corrector-magnet packages. These will all have their own cryostats and will need new cryogenic plants, as well as bringing new vacuum requirements. In addition, there will be new collimators, some equipped with special wire to compensate inter-beam effects near collision, as well as other equipment that is under development.
The workshop was the first step in communicating with industry to find partners for new development and construction, with a goal of maximizing the industrial return and increasing the industrial capability of the EU. The aim is to achieve full funding of the project, including design and prototyping, totalling around SwFr750 million by 2015, with an additional SwFr200–250 million from external collaboration with the US and Japan. Construction and testing would then take place between 2016 and 2020, ready for installation at the end of 2021.
• The next joint HiLumi LHC–LARP Annual Meeting is planned to take place at the Cockcroft Institute, Daresbury Laboratory, on 12–15 November 2013, while in May 2013 the collaboration will meet at the joint LARP–HiLumi LHC Annual Meeting in the US. For more about the workshop on “Superconducting technologies for next-generation accelerators”, see https://indico.cern.ch/conferenceDisplay.py?confId=196164. For more about HL-LHC, see http://cern.ch/hilumilhc.
Letting young scientists shine
A new section at the 2nd Joint HiLumi LHC–LARP meeting was the “Young Scientist Talk”, a session organized to showcase recipients of LARP’s Toohig Fellowship, which is awarded each year to two recent PhD recipients in physics or engineering. Toohig Fellows John Cesaratto and Valentina Previtali attended the meeting. Currently based at member institutions of LARP – Cesaratto at SLAC National Accelerator Laboratory and Previtali at Fermilab – they will also spend time at CERN as part of the fellowship. Cesaratto gave a talk on the control of beam instabilities in CERN’s Super Proton Synchrotron and Previtali presented first results from simulations of the hollow electron lens. They were joined by Meghan McAteer, a Marie Curie Fellow, who talked about optics measurements in the Boosters at Fermilab and CERN.
The section was convened by John Fox of SLAC, a member of LARP. He is chair of LARP’s Toohig Fellowship Committee and is keenly interested in promoting the work of young scientists. He believes that bringing young scientists to the conference is valuable in several ways: it enables the young Toohig Fellows to meet scientists at CERN and, conversely, allows young scientists from CERN to meet members of LARP. Such opportunities to meet and strike up collaborations are important for young scientists at US labs, who get few chances to interact with the broader community.
The Toohig Fellowships are awarded in honour of the late Timothy Toohig, a physicist and Jesuit priest who devoted his life to promoting accelerator science and increasing understanding, communication and collaboration among scientists of all nations and religions. The fellowships are for two years, extendable to three, and are explicitly for postdoctoral research and development regarding the LHC.