By Thomas Kuhr, Springer
Hardback: £117 (€137.10); paperback: £109 (€119.19)
The Tevatron collider, operated by Fermilab near Chicago, was – until the LHC at CERN took over – the most powerful particle accelerator on Earth, colliding protons and antiprotons ultimately at a centre-of-mass energy of almost 2 TeV. Among many interesting results, the key discovery was the observation of the top quark by the CDF and DØ collaborations in 1995. In pp̄ collisions, huge numbers of B and D mesons are also produced, offering sensitive probes for testing the quark-flavour sector of the Standard Model, which is described by the Cabibbo–Kobayashi–Maskawa (CKM) matrix. A closely related topic concerns violation of the charge–parity (CP) symmetry, which can be accommodated through a complex phase in the CKM matrix. Physics beyond the Standard Model may leave footprints in the corresponding observables.
In this branch of particle physics, the key aspect addressed at the upgraded Tevatron (Run II) was the physics potential of B0s mesons, which consist of an anti-bottom quark and a strange quark. Since these mesons and their antiparticles were not produced at the e+e– B factories that operated at the Υ(4S) resonance, they fall in the domain of B-physics experiments at hadron colliders, although the Belle experiment could gain some access to these particles with the KEKB factory running at the Υ(5S) resonance. Since the Tevatron stopped operation in autumn 2011, the experimental exploration of the B0s system has been conducted entirely at the LHC, with its B-decay experiment LHCb.
The CDF and DØ collaborations did pioneering work in B physics, which culminated in the observation of B0s–B̄0s mixing in 2006, the first analyses of CP-violating observables provided by the decay B0s → J/ψφ around 2008, and intriguing measurements of the dimuon charge asymmetry by DØ in 2010, which probe CP violation in B0s–B̄0s oscillations.
The author of this book has been a member of the CDF collaboration for many years and gives the reader a guided tour through the flavour-physics landscape at the Tevatron. It starts with historical remarks and then focuses on the quark-flavour sector of the Standard Model with the CKM matrix and the theoretical description of mixing and CP violation, before discussing the Tevatron collider, its detectors and experimental techniques. After these introductory chapters, the author brings the reader in touch with key results, starting with measurements of lifetimes and branching ratios of weak b-hadron decays and their theoretical treatment, followed by a discussion of flavour oscillations, where B0s–B̄0s mixing is the highlight. An important part of the book deals with various manifestations of CP violation and the corresponding probes offered by the B0s system, where B0s → J/ψφ and the dimuon charge asymmetry are the main actors. Finally, rare decays are discussed, putting the spotlight on the B0s → μ+μ– channel, one of the rarest decay processes that nature has to offer. While the book has a strong focus on the B0s system, it also addresses Λb decays and charm physics.
This well-written book of 161 pages is enjoyable to read and offers a fairly compact way to get an overview of the B-physics programme conducted at the Tevatron in the past decade. A reader familiar with the basic concepts of particle physics should be able to deal easily with the content. It appears suited to experimental PhD students making first contact with this topic, but experienced researchers from other branches of high-energy physics may also find the book interesting and useful. Topics such as the rare decay B0s → μ+μ–, which has recently appeared as a first 3.5σ signal in the data from LHCb, and measurements of CP violation in B0s decays will continue to be hot topics in the LHC physics programme during this decade, complementing the direct searches for new particles at the ATLAS and CMS detectors.
In the last beam period before a two-year shutdown, the LHC began 2013 with a challenge: proton–ion collisions. Following a trial run in September, the machine went into full operation beyond its design specification, producing head-on collisions of protons with lead nuclei from mid-January to mid-February. At 5 TeV per colliding nucleon pair, the gain in collision energy is a factor of 25 above previous collisions of a similar type, making it one of the largest such gains in the history of particle accelerators.
Commissioning this new and almost unprecedented mode of collider operation was a major challenge for both the teams behind the LHC and its injector chain. The LHC configuration had to be modified quickly before and during the short run to achieve a number of physics goals.
Nonetheless, on 11 January, single bunches of protons and lead nuclei were injected into the LHC and successfully ramped to full energy. Over the following night the LHC-operations and beam-physics teams sprang into action to commission and measure the optics through a completely new sequence to squeeze the beams at collision. Interventions on the power and cryogenics systems slowed down the commissioning plan but by 20 January stable beams had been achieved with 13 bunches per beam.
In the next fill of the machine, the first bunch-trains were injected, leading to stable beams with 96 bunches of protons and 120 of ions. This important fill allowed the study of “moving” long-range beam–beam encounters. Stationary long-range encounters occur in proton–proton or lead–lead runs, when bunches in the two beams “see” one another as they travel in the same vacuum chamber on either side of the experiments. The situation becomes more complicated with proton–lead collisions because the long-range encounters move as a result of the different revolution times of the two species – a key feature of proton–lead operation.
At injection energy, lead ions travel more slowly than protons and complete eight fewer turns a minute round the LHC (674,721 turns compared with 674,729 turns for protons). As a result, the two beams – and their RF systems – run independently at different frequencies. Once the energy has been ramped up, the frequency differences become small enough for the RF systems to be locked together in a non-trivial process known as “cogging”.
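The turn-number arithmetic follows from relativistic kinematics: both species share the same ring and magnetic field, so lead, with its smaller charge-to-mass ratio, carries less energy per nucleon and is slightly slower at injection. A back-of-envelope Python sketch (the circumference and mass values are rounded assumptions, so the final digits differ slightly from the figures quoted above):

```python
import math

C = 26659.0           # approximate LHC circumference (m)
c = 299_792_458.0     # speed of light (m/s)

def beta(energy_per_nucleon_GeV, mass_GeV):
    """Velocity (in units of c) from total energy per nucleon."""
    gamma = energy_per_nucleon_GeV / mass_GeV
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Injection: 450 GeV protons; lead follows the same magnetic rigidity,
# so its energy per nucleon scales with Z/A = 82/208.
beta_p  = beta(450.0, 0.93827)                 # proton mass (GeV)
beta_pb = beta(450.0 * 82.0 / 208.0, 0.93149)  # average nucleon mass (GeV)

turns_per_min_p  = beta_p  * c / C * 60.0
turns_per_min_pb = beta_pb * c / C * 60.0
diff = turns_per_min_p - turns_per_min_pb

print(f"protons: {turns_per_min_p:,.0f} turns/min; lead: {turns_per_min_pb:,.0f}; "
      f"difference ≈ {diff:.1f}")
```

At top energy both Lorentz factors are far larger, the velocity difference shrinks, and the two RF systems can be locked together.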
During the first cogging exercises, high beam losses triggered beam dumps. This was later traced to improper synchronization of the two RF frequencies, and careful fine-tuning of the cogging process overcame the problem. After the cogging exercise and throughout the physics fill, the beams ran “off-momentum”, with opposite offsets to their orbits, requiring special corrections of the beam optics.
The full filling-scheme with 338 bunches in both beams was injected and successfully ramped on 21 January. In addition, the teams achieved a record lead-bunch intensity in the LHC thanks to the excellent performance of both the machine and the injectors. From 24 January onwards the machine was running routinely with stable beams of 338 bunches of protons in ring 1 (clockwise) and lead ions in ring 2. On 1 February, the beams were swapped so that ALICE, inherently an asymmetrical detector, could take data in both directions. A number of issues with cogging and squeezing made this beam reversal challenging, with the machine providing collisions between 192 ion bunches and 216 proton bunches for some days before the operators attempted to reach 338 bunches in each beam by the end of the run on 11 February.
Despite the short time-frame of this asymmetrical run, all seven LHC experiments were able to take data. On a good day, fills reached a peak luminosity at the start of collisions of around 10²⁹ cm⁻² s⁻¹ in ALICE, ATLAS and CMS. Integrated luminosity was well above expectations, at around 2 nb⁻¹ a day for each of these experiments. This bodes well for the experimental analysis that will continue to go from strength to strength as the LHC enters its first long shutdown to consolidate and improve this impressive machine.
Every so often the source of the lead ions has to be replaced. A small sliver of solid isotopically pure 208Pb is placed in a ceramic crucible that sits in an “oven” casing at the end of a metal rod. The lead is heated to around 800°C and ionized to form a plasma. Ions are then extracted from the plasma and accelerated. Depending on the beam intensity, in stable running the accelerator chain consumes about 2 mg of lead every hour – a tiny amount, but 10 g costs some SwFr12,000 (approx US$13,000). In this image the position of the oven is being measured inside the source for Linac 3.
In analysing data from last year’s test run with proton–lead collisions in the LHC, the ALICE collaboration, followed almost immediately and independently by the ATLAS collaboration, have announced a surprising observation in the way that particles emerge from the high-energy collisions. Here the two collaborations report on their results.
To prepare for the recent LHC run with collisions of protons and lead ions, the LHC team performed a test run for a few hours last September. During this run the ALICE experiment recorded close to two million events, which have already led to new results (CERN Courier December 2012 p6). Now, after an in-depth analysis, the ALICE collaboration has made the surprising observation of a double “ridge” structure in the correlation of particles emerging from the proton–lead collisions. This follows the observation by the CMS collaboration, using data from the same test run, of a “near-side” ridge-like correlation structure elongated in pseudorapidity – a measure of the angle an emerging particle takes relative to the direction of the beam (CERN Courier January/February 2013 p9).
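For reference, pseudorapidity is a simple function of the polar angle θ measured from the beam axis; a minimal sketch:

```python
import math

def pseudorapidity(theta):
    """η = -ln tan(θ/2), with θ the polar angle from the beam axis (radians)."""
    return -math.log(math.tan(theta / 2.0))

print(pseudorapidity(math.pi / 2))  # perpendicular to the beam: η ≈ 0
print(pseudorapidity(0.1))          # close to the beamline: η ≈ 3.0
```

Particles emitted at 90° to the beam have η = 0, while particles travelling close to the beamline have large |η|, with the sign distinguishing the two beam directions.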
The analysis performed by the ALICE collaboration characterizes two-particle angular correlations as a function of the event activity, which is quantified by the multiplicity measured in a pair of forward scintillator detectors. The correlations are determined by counting the number of associated particles as a function of their difference in azimuth (Δφ) and pseudorapidity (Δη) with respect to a trigger particle, in bins of the trigger particle’s transverse momentum pT,trig and the associated particle’s transverse momentum pT,assoc.
On the “near side” (Δφ ≈ 0), the separation of a Δη-elongated ridge structure from the contribution of a jet to the correlation is straightforward because the jet peak is concentrated around Δη = 0. It is more difficult on the “away side” (Δφ ≈ π) because both structures are elongated in Δη and not easily separable by selecting on Δη. Experimentally, however, the near-side jet peak shows only a weak evolution with event multiplicity. So by subtracting the correlations at different event multiplicities from one another, it is possible to remove the jet-like contribution to the correlation to a large extent and to quantify modifications as a function of event multiplicity.
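The pair-counting and subtraction steps described above can be sketched as follows. This toy fills a (Δφ, Δη) histogram from uniformly distributed particles, so no physical ridge appears; all binning and multiplicity choices are assumptions made purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

def pair_correlation(events, dphi_bins, deta_bins):
    """Per-event (Δφ, Δη) histogram over all ordered particle pairs."""
    hist = np.zeros((len(dphi_bins) - 1, len(deta_bins) - 1))
    for phi, eta in events:
        dphi = phi[:, None] - phi[None, :]
        deta = eta[:, None] - eta[None, :]
        # fold Δφ into (-π/2, 3π/2), the conventional range for ridge plots
        dphi = (dphi + np.pi / 2) % (2 * np.pi) - np.pi / 2
        mask = ~np.eye(len(phi), dtype=bool)   # drop trigger == associated
        h, _, _ = np.histogram2d(dphi[mask], deta[mask],
                                 bins=(dphi_bins, deta_bins))
        hist += h
    return hist / len(events)

def toy_events(n_events, multiplicity):
    return [(rng.uniform(-np.pi, np.pi, multiplicity),
             rng.uniform(-2.4, 2.4, multiplicity)) for _ in range(n_events)]

dphi_bins = np.linspace(-np.pi / 2, 3 * np.pi / 2, 25)
deta_bins = np.linspace(-4.8, 4.8, 25)

high = pair_correlation(toy_events(200, 40), dphi_bins, deta_bins)
low  = pair_correlation(toy_events(200, 10), dphi_bins, deta_bins)
ridge_candidate = high - low  # in data, the jet-like component largely cancels here
```

In the real analysis the two samples are classes of measured events, and any excess surviving the subtraction is what figure 1 displays.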
Figure 1 shows the two-particle correlation of low-multiplicity events subtracted from that of high-multiplicity events. It reveals a distinct excess in the correlation, which forms two ridges along Δη. The ridge on the near side, qualitatively similar to the one observed by CMS, is accompanied by a second ridge of similar magnitude on the away side, which is observed for the first time.
Such double-ridge structures are typically found in collisions of heavy ions and have their origins in collective phenomena occurring in the quark–gluon plasma that is created. However, these phenomena are not generally thought to occur in proton–lead collisions, where the size of the collision region is expected to be too small to allow the development of significant collective effects.
The projection of figure 1 onto Δφ allows the yield and width of the near-side and away-side ridges to be quantified above a constant baseline. Figure 2 presents the ridge yield for different event multiplicities. It is remarkable that the near- and away-side yields always agree within uncertainties for a given sample despite the absolute values changing substantially with event multiplicity and pT interval. Such a tight correlation between the yields suggests a common underlying physical origin for the two ridges. The extracted widths on the near side and the away side agree with each other within 20% and show no significant dependence on pT, which suggests that the observed ridge is not of jet origin.
This intriguing and unexpected result still needs to be explained theoretically. Models that produce almost identical near- and away-side ridges are based on the colour-glass condensate framework or on hydrodynamical calculations that assume collective effects to occur also in proton–lead collisions. Whatever the origin may be, this observation has opened the window on a novel phenomenon. Further analysis of the high-statistics proton–lead data promises to yield exciting results.
Further reading
ALICE collaboration 2013 Phys. Lett. B 719 29.
Studies of two-particle correlations in high-multiplicity proton–proton and proton–lead collisions at the LHC have shown a phenomenon frequently referred to as the “ridge”. The ridge is a result of the correlated production of particles at small relative azimuthal angle (Δφ) over a wide range of relative pseudorapidity (Δη). Using data from the highly successful pilot proton–lead run on 12 September 2012, ATLAS has shown that the ridge has an identical twin, resulting from the correlated production of particles that are back-to-back in azimuth.
To observe this twin, ATLAS had to remove background in the two-particle correlation function arising from hard scattering processes, momentum conservation and low-momentum resonance decays. Two-particle correlations were measured as a function of the proton–lead total transverse energy (ΣET) detected in one of the ATLAS forward calorimeters. The contribution of the background to the two-particle correlations was found to be independent of ΣET. As a result, the background could be measured in low-ΣET proton–lead collisions, which have little contribution from the ridge, and then subtracted from the two-particle correlation function in high-ΣET collisions.
The left and right panels in the figure (above) show the two-particle correlation function before and after background subtraction, respectively. Before subtraction, the correlation function includes: a jet peak near Δφ = 0, Δη = 0; the previously observed ridge; and a broad structure arising from particles recoiling from the jet. The subtraction procedure removes the recoil contribution and nearly all of the jet peak, leaving behind two symmetrical ridges extending over ±5 units of Δη. The strength of the correlation increases with the transverse momentum of the particles over the measured pT range, 0 < pT < 6 GeV.
The presence of such a symmetrical ridge had been predicted by QCD calculations invoking the colour-glass condensate, which describes the gluon content of a high-energy nucleus in the saturation regime. Alternative calculations that model the system formed in proton–lead collisions as a “near perfect fluid” have also predicted a symmetrical ridge arising from final-state collective motion similar to that observed in lead–lead collisions. The data collected during the recent 2013 high-luminosity proton–lead run should provide a way to resolve this theoretical ambiguity. The good news is that either explanation will represent a ground-breaking advance in the understanding of high-energy proton–nucleus collisions.
Further reading
ATLAS collaboration 2012 arXiv:1212.5198 [hep-ex], submitted to Phys. Rev. Lett.
In proton collisions at the LHC, vector boson fusion (VBF) happens when quarks from each of the two colliding protons radiate W or Z bosons that subsequently interact or “fuse”, as in the Feynman diagram shown, where two W bosons fuse to produce a Z boson. Each quark radiating a weak boson exchanges a squared four-momentum, Q², of around m²W or m²Z in the t-channel. In this way, the two quarks scatter away from the beamline, typically inside the acceptance of the detector, where they can be detected as hadronic jets. The distinctive signature of VBF is therefore the presence of two energetic hadronic jets (tagging jets), predominantly in the forward and backward directions with respect to the proton beamline.
The study of VBF production of the Z boson is an important benchmark for establishing the presence of these processes in general and for cross-checking measurements of Higgs VBF, where the radiated bosons fuse to form a Higgs boson. However, the VBF production of Z bosons has some intriguing differences with respect to that of Higgs bosons. In VBF Z-boson production, a large number of other purely electroweak non-VBF processes can lead to an identical final state and play an important role: they yield large negative interferences with the VBF production, which are related to the very foundations of the Standard Model. This situation makes VBF production of Z bosons more complicated but also more interesting.
An additional and peculiar feature of VBF and all other purely electroweak processes is that no QCD colour is exchanged in the processes. This leads to the expectation of a “rapidity gap”, or suppressed hadronic activity between the two tagging jets, which can also be identified in these events.
The CMS collaboration has searched for the purely electroweak production of a Z boson in association with two jets in the 7 TeV proton–proton collision data from 2011. They have analysed both dielectron and dimuon Z decays. The leptons are required to have transverse momenta pT > 20 GeV/c and pseudorapidity |η| < 2.4; in addition, the dilepton invariant mass is required to be consistent with that of the Z boson. The two associated tagging jets are reconstructed with two alternative algorithms (“particle flow” and “jet-plus-track”) and are required to be within |η| < 3.6 and to have pT > 65 GeV/c for the leading jet and pT > 40 GeV/c for the subleading jet.
Selected events are passed to a multivariate boosted decision tree (BDT) that is trained to separate signal events from the large background stemming from Z bosons produced via the Drell–Yan process in association with two jets from additional QCD radiation. The BDT makes use of the full kinematic information of the three-body (Z+2jets) final state and of internal composition properties of the jets, which can discriminate whether a jet originates from a gluon or a light quark. Figure 2a shows output distributions of the BDT for data and different simulated background components, as well as the simulated signal (purple) for selected dimuon Z decays. A fit to the BDT output distribution was used to measure the signal cross-section, σ(EWK Z+2jets) = 154 ± 24 (stat.) ± 46 (exp. syst.) ± 27 (th. syst.) ± 3 (lum.) fb. This is in good agreement with the theoretical expectation of 166 fb calculated at next-to-leading-order precision.
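To illustrate how a boosted decision tree separates two such samples, the sketch below hand-rolls an AdaBoost classifier built from depth-1 trees (decision stumps). It stands in for the CMS BDT only conceptually: the two features and their toy distributions are invented for illustration and bear no relation to the real analysis inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inputs: "signal" tends to larger dijet mass and |Δη| than "background"
# (purely illustrative numbers).
n = 2000
sig = np.column_stack([rng.normal(1000, 300, n), rng.normal(4.0, 1.0, n)])
bkg = np.column_stack([rng.normal(400, 200, n), rng.normal(2.0, 1.0, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), -np.ones(n)])   # +1 signal, -1 background

def train_stumps(X, y, n_rounds=50):
    """Minimal AdaBoost over decision stumps (one threshold on one feature)."""
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(n_rounds):
        best = None
        for f in range(X.shape[1]):
            for thr in np.quantile(X[:, f], np.linspace(0.05, 0.95, 19)):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, f] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, thr, sign)
        err, f, thr, sign = best
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))
        pred = sign * np.where(X[:, f] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified events
        w /= w.sum()
        model.append((alpha, f, thr, sign))
    return model

def bdt_score(model, X):
    return sum(a * s * np.where(X[:, f] > t, 1, -1) for a, f, t, s in model)

model = train_stumps(X, y)
acc = np.mean(np.sign(bdt_score(model, X)) == y)
print(f"training accuracy: {acc:.3f}")
```

In the real analysis the continuous BDT score plays the role of the output distribution in figure 2a, to which the signal cross-section fit is applied.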
The hadronic activity in the rapidity interval between the two tagging jets and the radiation patterns of the selected Z boson events with two forward jets have also been measured and are in good agreement with the expectations.
One of the most interesting discoveries of the past decade is that of an unconventional hadron, the X(3872), by the Belle experiment (Belle 2003). Its decay to J/ψπ+π– indicates that it is charmonium-like, but its narrow width and its mass above the threshold for decay to open charm do not fit any of the predicted cc̄ states. Several experiments have since confirmed this observation, in different production mechanisms and decay modes. In parallel with these experimental investigations, many theoretical interpretations have been put forward, but the fundamental question remains open of whether the X(3872) is a quark–antiquark meson or a more exotic state.
When any new resonance is observed, it is mandatory to determine its quantum numbers. The observation of the decay X(3872) → J/ψγ fixed the charge conjugation: C = +1. However, angular analyses left two possibilities for JPC: 1++ and 2–+ (CDF 2007). Exotic models in which the X(3872) is a DD̄* molecule or a tetraquark state predict JPC = 1++.
The LHCb collaboration has now reported an analysis of the decay chain B+ → X(3872)K+ → J/ψπ+π–K+, with J/ψ → μ+μ–, where they use all five angular variables to maximize the separation power between the hypotheses of 1++ and 2–+. The analysis uses the data sample of 1.0 fb–1 that LHCb collected during 2011, which contains 313 ± 26 B+ → X(3872)K+ decays. As figure 1 shows, the outcome of the multidimensional likelihood fit prefers JPC = 1++ with more than 8σ significance. Compared with previous analyses, the measurement benefits from higher statistics but, importantly, also makes use of the full angular information, which improves the ability to use correlations between angular variables to separate the two hypotheses (figure 2 shows an example).
This result rules out explanations of the X(3872) as the ηc2(1¹D2) state. Instead, it favours more exotic interpretations. However, distinguishing between molecular and tetraquark models will require studies of complementary decay modes. The 2.0 fb–1 data sample that LHCb accumulated during 2012, as well as the larger samples that will be recorded in future LHC runs, will allow the collaboration to keep on the trail of these and other puzzles in heavy-flavour spectroscopy.
Despite first being described over three centuries ago, gravity remains one of the least understood of the fundamental forces. At CERN’s recently completed AEgIS experiment, a team is setting out to examine its effects on something much less familiar: antimatter.
Located in the experimental hall at the Antiproton Decelerator (AD), the AEgIS experiment is designed to make the first direct measurement of Earth’s gravitational effect on antimatter. By sending a beam of antihydrogen atoms through very thin gratings, the experiment will be able to measure how far the antihydrogen atoms fall and in how much time – giving the AEgIS team a measurement of the gravitational coupling. The team finished putting all of the elements of the experiment together by the end of 2012, but they will have to wait for two years for beams to return to the AD hall following the Long Shutdown (LS1), which has just begun.
To make progress in the meantime, the AEgIS team has decided to try out the experiment with hydrogen instead of antihydrogen. By replacing antiprotons with their own proton source, the team will be able to manufacture its own hydrogen beam to use for commissioning and testing the set-up. Surprisingly, carrying out the experiment with hydrogen will be more difficult technically than with antihydrogen. Another challenge will be in the production of the positronium that will be used in creating the hydrogen. The positronium needs to be moving fast enough to ensure that it does not decay before it meets the protons/antiprotons, but not so fast as to pass the protons/antiprotons altogether. The AEgIS team will be carrying out this commissioning during the coming months, opening up their set-up next month to make any necessary adjustments and to install a hydrogen detector and proton source.
Astronomers have detected simultaneous X-ray and radio-mode switches in co-ordinated observations of a pulsar. Pulsed X-ray emission is only present in states of weak radio emission. This indicates a rapid global change in the magnetosphere, which challenges current emission theories.
Pulsars were discovered in 1967 as flickering sources of radio waves and were soon interpreted as being rotating, strongly magnetized neutron stars. The radiation is thought to be emitted by high-energy particles moving along the lines of magnetic field. As the emission is concentrated in two cones emerging from the magnetic poles, the source behaves like a lighthouse. We see a pulse each time that the radiation beam is pointed towards the Earth. This happens at the spin frequency of the neutron star because the rotation and magnetic axes are generally misaligned.
Among the thousands of known pulsars, only a small fraction has been detected in X-rays or gamma-rays (CERN Courier September 2006 p13 and December 2008 p9). The X-ray emission can be steady or pulsed. The steady X-ray emission is high for young neutron stars and decreases as their surface temperature falls. The pulsed emission suggests that X-ray-emitting hot-spots are located at the magnetic poles.
Astronomers know of only a handful of old pulsars that shine in X-rays. One of them is PSR B0943+10, which is five million years old. This source also switches suddenly between a radio-bright and a radio-quiet state at intervals of several hours. It is therefore a prime target to investigate the X-ray behaviour associated with changes of the radio mode. This idea was suggested by a team led by Wim Hermsen of the Netherlands Institute for Space Research (SRON) and the Astronomical Institute “Anton Pannekoek” of the University of Amsterdam. It then took them five years to convince the time-allocation committee to schedule some long periods of observation with ESA’s X-ray Multi-Mirror Mission (XMM-Newton) satellite co-ordinated with radio telescopes.
The satellite performed six observations of six hours each on PSR B0943+10 at the end of 2011. Radio data were gathered at the same time by the Indian Giant Metrewave Radio Telescope (GMRT) and the international Low-Frequency Array (LOFAR) in the Netherlands. The result of the campaign was completely unexpected. The X-ray emission was found not to follow the states of radio brightness. On the contrary, it was observed to be weak when the source was bright in radio emission, and vice versa.
The timing and spectral analysis of the XMM-Newton data offered yet more surprises. The source was found to pulsate in X-rays only during the X-ray-bright phase corresponding to the quiet-radio state. During this phase, the X-ray emission appears to be the sum of two components: a pulsating component consisting of thermal X-rays, which is seen to switch off during the X-ray-quiet phase; and a persistent one consisting of non-thermal X-rays.
The results suggest that the entire magnetosphere around the pulsar is switching from one state to another within a few seconds. The rapidity of this change is puzzling but it is not the only issue. The observed radio and X-ray behaviour is predicted by neither of the leading models for pulsar emission.
Hermsen and his team plan to repeat the same study for another pulsar that has similar radio properties but with a different geometrical configuration. This will allow them to test whether the viewing angle with respect to the magnetic and rotational axes has an effect on the properties of the X-ray emission. In the meantime, theorists will be busy investigating possible physical mechanisms that could cause the observed sudden and drastic changes to the pulsar’s magnetosphere.
The LHC at CERN is a prime example of worldwide collaboration to build a large instrument and pursue frontier science. The discovery there of a particle consistent with the long-sought Higgs boson points to future directions both for the LHC and more broadly for particle physics. Now, the international community is considering machines to complement the LHC and further advance particle physics, including the favoured option: an electron–positron linear collider (LC). Two major global efforts are underway: the International Linear Collider (ILC), which is distributed among many laboratories; and the Compact Linear Collider (CLIC), centred at CERN. Both would collide electrons and positrons at tera-electron-volt energies but have different technologies, energy ranges and timescales. Now, the two efforts are coming closer together and forming a worldwide linear-collider community in the areas of accelerators, detectors and resources.
Last year, the organizers of the 2012 IEEE Nuclear Science Symposium held in Anaheim, California, decided to arrange a Special Linear Collider Event to summarize the accelerator and detector concepts for the ILC and CLIC. Held on 29–30 October, the event also included presentations on the impact of LC technologies for different applications and a discussion forum on LC perspectives. It brought together academic, industry and laboratory-based experts, providing an opportunity to discuss LC progress with the accelerator and instrumentation community at large, and to justify the investments in technology required for future particle accelerators and detectors. Representatives of the US funding agencies were also invited to attend.
CERN’s director-general, Rolf Heuer, introduced the event before Steinar Stapnes, CLIC project leader, and Barry Barish, director of the ILC’s Global Design Effort (GDE), reviewed the two projects. The ILC concept is based on superconducting radio-frequency (SRF) cavities, with a nominal accelerating field of 31.5 MV/m, to provide e+e– collisions at sub-tera-electron-volt energies in the centre-of-mass. The CLIC studies focus on an option for a multi-tera-electron-volt machine using a novel two-beam acceleration scheme, with normal-conducting accelerating structures operating at fields as high as 100 MV/m. In this approach, two beams run parallel to each other: the main beam, to be accelerated; and a drive beam, to provide the RF power for the accelerating structures.
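A back-of-envelope calculation shows what these gradients imply for machine size: the active length of accelerating structure per linac is roughly the beam energy divided by the gradient, while real tunnels are longer once the fill factor, focusing magnets and diagnostics are included.

```python
def active_length_km(beam_energy_GeV, gradient_MV_per_m):
    """Accelerating-structure length per linac, ignoring the fill factor,
    focusing magnets and other overheads of a real tunnel."""
    return beam_energy_GeV * 1e3 / gradient_MV_per_m / 1e3  # GeV -> MV -> km

ilc_km  = active_length_km(250.0, 31.5)    # ILC: 250 GeV per beam at 31.5 MV/m
clic_km = active_length_km(1500.0, 100.0)  # CLIC: 1.5 TeV per beam at 100 MV/m
print(f"ILC ≈ {ilc_km:.1f} km, CLIC ≈ {clic_km:.1f} km of active structure per linac")
```

The factor-of-three advantage in gradient is what makes a multi-tera-electron-volt option compact enough to contemplate.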
Both studies have reached important milestones. The CLIC Conceptual Design Report was released in 2012, with three volumes for physics, detectors and accelerators. The project’s goals for the coming years are well defined, the key challenges being related to system specifications and performance studies for accelerator parts and detectors, technology developments with industry and implementation studies. The aim is to present an implementation plan by 2016, when LHC results at full design energy should become available.
The ILC GDE took a major step towards the final technical design when a draft of the four-volume Technical Design Report (TDR) was presented to the ILC Steering Committee on 15 December 2012 in Tokyo. This describes the successful establishment of the key ILC technologies, as well as advances in the detector R&D and physics studies. Although not released by the time of the NSS meeting, the TDR results served as the basis for the presentations at the special event. The chosen technologies – including SRF cavities with high gradients and state-of-the-art detector concepts – have reached a stage where, should governments decide in favour and a site be chosen, ILC construction could start almost immediately. The ILC TDR, which describes a cost-effective and mature design for an LC in the energy range 200–500 GeV, with a possible upgrade to 1 TeV, is the final deliverable for the GDE mandate.
The newly established Linear Collider Collaboration (LCC), with Lyn Evans as director, will carry out the next steps to integrate the ILC and CLIC efforts under one governance. One highlight in Anaheim was a talk on the physics of the LC by Hitoshi Murayama of the Kavli Institute for the Physics and Mathematics of the Universe (IPMU) and future deputy-director for the LCC. He addressed the broader IEEE audience, reviewing how a “Higgs factory” (a 250 GeV machine) as the first phase of the ILC could elucidate the nature of the Higgs particle – complementary to the LHC. The power of the LC lies in its flexibility. It can be tuned to well defined initial states, allowing model-independent measurements from the Higgs threshold to multi-tera-electron-volt energies, as well as precision studies that could reveal new physics at a higher energy scale.
Detailed technical reviews of the ILC and CLIC accelerator concepts and associated technologies followed the opening session. Nick Walker of DESY presented the benefits of using SRF acceleration with a focus on the “globalization” of the technology and the preparation for a worldwide industrial base for the ILC construction. The ultralow cavity-wall losses allow the use of long RF pulses, greatly simplifying the RF source while facilitating efficient acceleration of high-current beams. In addition, the low RF frequency (1.3 GHz) significantly reduces the impedance of the cavities, leading to reduced beam-dynamics effects and relatively relaxed alignment tolerances. More than two decades of R&D have led to a six-fold increase in the available voltage gradient, which – together with integration into a single cryostat (cryomodule) – has resulted in an affordable and mature technology. One of the most important goals of the GDE was to demonstrate that the SRF cavities can be reliably produced in industry. By the end of 2012, two ambitious goals were achieved: to produce cavities qualified at 35 MV/m and to demonstrate that an average gradient of 31.5 MV/m can be reached for ILC cryomodules. These high-gradient cavities have now been produced by industry with 90% yield, acceptable for ILC mass-production.
CERN’s Daniel Schulte reviewed progress with the CLIC concept, which is based on 12 GHz normal-conducting accelerating structures and a two-beam scheme (rather than klystrons) for a cost-effective machine. Over the past two decades, the study has developed high-gradient, micron-precision accelerating structures that now reach more than 100 MV/m, with a breakdown probability of only 3 × 10⁻⁷ m⁻¹ pulse⁻¹ during high-power tests, and more than 145 MV/m in two-beam acceleration tests at the CTF3 facility at CERN (tolerating higher breakdown rates). The CLIC design is compatible with energy staging from the ILC baseline of 0.5 TeV up to 3 TeV. The ILC and CLIC studies are collaborating closely on a number of technical R&D issues: beam delivery and final-focus systems, beam dynamics and simulations, positron generation, damping rings and civil engineering.
Another area of common effort is the development of novel detector technologies. The ILC and CLIC physics programmes are both based on two complementary detectors with a “push–pull concept” to share the beam time between them. Hitoshi Yamamoto of Tohoku University reviewed the overall detector concepts and engineering challenges: multipurpose detectors for high-precision vertex and main tracking; a highly granular calorimeter inside a large solenoid; and power pulsing of electronics. The two ILC detector concepts (ILD and SiD) formed an excellent starting point for the CLIC studies. They were adapted for the higher-energy CLIC beams and ultra-short (0.5 ns) bunch-spacing by using calorimeters with denser absorbers, modified vertex and forward detector geometries, and precise (a few nanosecond) time-stamping to cope with increased beam-induced background.
Vertex detectors, a key element of the LC physics programme, were presented by Marc Winter of CNRS/IPHC Strasbourg. Their requirements address material budget, granularity and power consumption by calling for new pixel technologies. CCDs with 50 μm final pixels, CMOS pixel sensors (MIMOSA) and depleted field-effect transistor (DEPFET) devices have been developed successfully for many years and are suited to running conditions at the ILC. For CLIC, where much faster read-out is mandatory, the R&D concentrates on multilayer devices, such as vertically integrated 3D sensors comprising interconnected layers thinner than 10 μm, which allow a thin charge-sensitive layer to be combined with several tiers of read-out fabricated in different CMOS processes.
Both ILD and SiD require efficient tracking and highly granular calorimeters, optimized for particle-flow event reconstruction, but the two concepts differ in their central-tracking approach. ILD aims for a large time-projection chamber (TPC) in a 3.5–4 T field, while SiD is designed as a compact, all-silicon tracking detector with a 5 T solenoid. Tim Nelson of SLAC described tracking in the SiD concept, with particular emphasis on power, cooling and minimization of material. Takeshi Matsuda of KEK presented progress on the low material-budget field cage for the TPC and end-plates based on micropattern gas detectors (GEMs, Micromegas or InGrid).
Apart from their dimensions, the electromagnetic calorimeters are similar for the ILC and CLIC concepts, as Jean-Claude Brient of Laboratoire Leprince-Ringuet explained. The ILD and SiD detectors both rely on silicon–tungsten sampling calorimeters, with emphasis on the separation of close electromagnetic showers. The use of small scintillator strips read out by silicon photodiodes operated in Geiger mode (SiPMs) is being considered as an active medium, as well as mixed designs using alternating layers of silicon and scintillator. José Repond of Argonne National Laboratory described progress in hadron calorimetry, which is optimized to provide the best possible separation of energy deposits from neutral and charged hadrons. Two main options are under study: small plastic scintillator tiles with embedded SiPMs, or higher-granularity calorimeters based on gaseous detectors. Simon Kulis of AGH-UST Cracow addressed the importance of precise luminosity measurement at the LC and the challenges of forward calorimetry.
In the accelerator-instrumentation domain, tolerances at CLIC are much tighter because of the higher gradients. Thibaut Lefevre of CERN, Andrea Jeremie of LAPP/CNRS and Daniel Schulte of CERN all discussed beam instrumentation, alignment and module control, including stabilization. Emittance preservation during beam generation, acceleration and focusing is a key feasibility issue for achieving high luminosity at CLIC. Extremely small beam sizes of 40 nm (1 nm) in the horizontal (vertical) plane at the interaction point require beam-based alignment down to a few micrometres over several hundred metres, and stabilization of the quadrupoles along the linac to nanometres, about an order of magnitude below ground vibrations.
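The numbers above can be put in context with the textbook expression for collider luminosity, which scales inversely with the transverse spot sizes at the interaction point (a generic Gaussian-beam formula, not CLIC design arithmetic):

```latex
\mathcal{L} \;=\; \frac{f_{\mathrm{rep}}\, n_b\, N^2}{4\pi\, \sigma_x^{*} \sigma_y^{*}}\; H_D
```

Here f_rep is the train repetition rate, n_b the number of bunches per train, N the particles per bunch, σ*x and σ*y the rms beam sizes at the interaction point and H_D a beam–beam enhancement factor. Every factor of two gained in σ*y doubles the luminosity – hence the nanometre-level stabilization requirements.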
Two sessions were specially organized to discuss potential spin-off from LC detector and accelerator technologies. Marcel Demarteau of Argonne National Laboratory summarized a study report, ILC Detector R&D: Its Impact, which points to the value of sustained support for basic R&D for instrumentation. LC detector R&D has already had an impact in particle physics. For example, the DEPFET technology is chosen as a baseline for the Belle-II vertex detector; an adapted version of the MIMOSA CMOS sensor provides the baseline architecture for the upgrade of the inner tracker in ALICE at the LHC; and construction of the TPC for the T2K experiment has benefited from the ILC TPC R&D programme.
The LC hadron calorimetry collaboration (CALICE) has initiated the large-scale use of SiPMs to read out scintillator stacks (8000 channels) and the medical field has already recognized the potential of these powerful imaging calorimeters for proton computed tomography. Erika Garutti of Hamburg University described another medical imaging technique that could benefit from SiPM technology – positron-emission tomography (PET) assisted by time-of-flight (TOF) measurement, with a coincidence-time resolution of about 300 ps FWHM, which is a factor of two better than devices available commercially. Christophe de La Taille of CNRS presented a number of LC detector applications for volcano studies, astrophysics, nuclear physics and medical imaging.
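To see what a 300 ps FWHM coincidence-time resolution buys in TOF-PET: the arrival-time difference of the two annihilation photons localizes the event along the line of response, with an offset from the midpoint of c·Δt/2. A quick check of the arithmetic (illustrative only, not a detector simulation):

```python
# Localization along the line of response in TOF-PET:
# the annihilation point is offset from the detector midpoint by c * dt / 2,
# so the timing resolution maps directly onto a position resolution.
C = 299_792_458.0  # speed of light, m/s

def tof_localization_fwhm(dt_fwhm_s: float) -> float:
    """FWHM of the position estimate along the line of response, in metres."""
    return C * dt_fwhm_s / 2.0

dx = tof_localization_fwhm(300e-12)  # 300 ps FWHM coincidence resolution
print(f"{dx * 100:.1f} cm")  # → 4.5 cm
```

A 300 ps resolution thus confines each event to roughly 4.5 cm along the line of response, which is what makes TOF information so valuable for improving image signal-to-noise.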
Accelerator R&D is vital for particle physics. A future LC accelerator would use high-technology coupled to nanoscale precision and control on an industrial scale. Marc Ross of SLAC introduced the session, “LC Accelerator Technologies for Industrial Applications”, with an institutional perspective on the future opportunities of LC technologies. The decision in 2004 to develop the ILC based on SRF cavities allowed an unprecedented degree of global focus and participation and a high level of investment, extending the frontiers of this “technology” in terms of performance, reliability and cost.
Particle accelerators are widely used as tools in the service of science with an ever growing number of applications to society. An overview of industrial, medical and security-related uses for accelerators was presented by Stuart Henderson of Fermilab. A variety of industrial applications makes use of low-energy beams of electrons, protons and ions (about 20,000 instruments) and some 9000 medical accelerators are in operation in the world. One example of how to improve co-ordination between basic and applied accelerator science is the creation of the Illinois Accelerator Research Centre (IARC). This partnership between the US Department of Energy and the State of Illinois aims to unite industry, universities and Fermilab to advance applications that are directly relevant to society.
SRF technology has potential in a number of industrial applications, as Antony Favale of Advanced Energy Systems explained. For example, large, high-power systems could benefit significantly from SRF, although the costs of cavities and associated cryomodules are higher than for room-temperature linacs; continuous-wave accelerators operating at reasonably high gradients benefit economically and structurally from SRF technology. Industrial markets for SRF accelerators exist in defence, isotope production and accelerator-driven systems for energy production and nuclear-waste mitigation. Walter Wuensch of CERN described how the development of normal-conducting linacs based on the high-gradient 100 MV/m CLIC accelerating structures may be beneficial for a number of accelerator applications, from X-ray free-electron lasers to industrial and medical linacs. Increased performance of high-gradient accelerating structures, translated into lower cost, potentially broadens the market for such accelerators. In addition, industrial applications increasingly require micron-precision 3D geometries, similar to the CLIC prototype accelerating structures. A number of firms have taken steps to extend their capabilities in this area, working closely with the accelerator community.
Steve Lenci of Communications and Power Industries LLC presented an overview of RF technology that supports linear colliders, such as klystrons and power couplers, and discussed the use of similar technologies elsewhere in research and industry. Marc Ross summarized applications of LC instrumentation, used for beam measurements, component monitoring and control and RF feedback.
The Advanced Accelerator Association Promoting Science & Technology (AAA) aims to facilitate industry-government-academia collaboration and to promote and seek industrial applications of advanced technologies derived from R&D on accelerators, with the ILC as a model case. Founded in Japan in 2008, its membership has grown to comprise 90 companies and 38 academic institutions. As the secretary-general Masanori Matsuoka explained, one of the main goals is a study on how to reach a consensus to implement the ILC in Japan and to inform the public of the significance of advanced accelerators and ILC science through social, political and educational events.
The Special Linear Collider Event ended with a forum that brought together directors of the high-energy-physics laboratories and leading experts in LC technologies, from both the academic research sector and industry. A panel discussion, moderated by Brian Foster of the University of Hamburg/DESY, included Rolf Heuer (CERN), Joachim Mnich (DESY), Atsuto Suzuki (KEK), Stuart Henderson (Fermilab), Hitoshi Murayama (IPMU), Steinar Stapnes (CERN) and Akira Yamamoto (KEK).
The ILC has received considerable recent attention from the Japanese government. The round-table discussion therefore began with Suzuki’s presentation on the discovery of a Higgs-like particle at CERN and the emerging initiative toward hosting an ILC in Japan. The formal government statement, which is expected within the next few years, will provide the opportunity for the early implementation of an ILC, and the recent discovery at CERN is strong motivation for a staged approach. This would begin with a 250 GeV machine (a “Higgs factory”), with the possibility of increasing the energy in the longer term. Suzuki also presented the Japan Policy Council’s recommendation, Creation of Global Cities by Hosting the ILC, which was published in July 2012.
The discussion then focused on three major issues: the ILC Project Implementation Plan; the ILC Technology Roadmap; and the ILC Added Value to Society. While the possibility of implementing CLIC as a project at CERN to follow the LHC was also on the table, there was less urgency for discussion because the ILC effort counts on an earlier start date. The panellists exchanged many views and opinions with the audience on how the ILC international programme could be financed and how regional priorities could be integrated into a consistent worldwide strategy for the LC. Combining extensive host-lab-based expertise together with resources from individual institutes around the world is a mandatory first step for LC construction, which will also require the development of links between projects, institutions, universities and industry in an ongoing and multifaceted approach.
SRF systems – the central technology for the ILC – have many applications, so a worldwide plan for distributing the mass production of the SRF components is necessary, with technology transfer proceeding in parallel, in partnership with funding agencies. Another issue discussed related to the model of global collaboration between host/hub laboratories and industry to build the ILC, where each country shares the costs and human resources. Finally, accelerator and detector developments for the LC have already penetrated many areas of science. The question is how to improve further the transfer of technology from laboratories, so as to develop viable, ongoing businesses that serve as a general benefit to society, as in the successful examples presented in Anaheim, such as the IARC facility and the TOF-PET detector.
Last, but not least, this technology-oriented symposium would have been impossible without the tireless efforts of the “Special LC Event” programme committee: Jim Brau (University of Oregon), Juan Fuster (IFIC Valencia), Ingrid-Maria Gregor (DESY Hamburg), Michael Harrison (BNL), Marc Ross (FNAL), Steinar Stapnes (CERN), Maxim Titov (CEA Saclay), Nick Walker (DESY Hamburg), Akira Yamamoto (KEK) and Hitoshi Yamamoto (Tohoku University). In all, the event was considered a real success: more than 90% of participants who answered the conference questionnaire rated it extremely important.
The IEEE tradition
The 2012 Institute of Electrical and Electronics Engineers (IEEE) Nuclear Science Symposium (NSS) and Medical Imaging Conference (MIC) together with the Workshop on Room-Temperature Semiconductor X-Ray and Gamma-Ray Detectors took place at the Disneyland Hotel, Anaheim, California, on 29 October – 3 November. Having received over 850 NSS abstracts – a record number of NSS submissions for the conferences in North America – the 2012 IEEE NSS/MIC Symposium attracted more than 2200 attendees. The NSS series, which started in 1954, offers an outstanding opportunity for detector physicists and other scientists and engineers interested in the fields of nuclear science, radiation detection, accelerators, high-energy physics, astrophysics and related software. During the past decade the symposium has become the largest annual event in the area of nuclear and particle-physics instrumentation, providing an international forum to discuss the science and technology of large-scale experimental facilities at the frontiers of research.
The historic academic building of Utrecht University provided the setting for the 5th International Workshop on Heavy Quark Production in Heavy-Ion Collisions, offering a unique atmosphere for a lively discussion and interpretation of the current measurements on open and hidden heavy flavour in high-energy heavy-ion collisions. Held on 14–17 November, the workshop attracted some 70 researchers from around the world, a third of the participants being theorists and more than 20% female researchers. The topics for discussion covered recent results, upgrades and future experiments at CERN’s LHC, Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) and the Facility for Antiproton and Ion Research (FAIR) at Darmstadt, as well as theoretical developments. There was a particular focus on the exchange of information and ideas between the experiments on open heavy-flavour reconstruction.
Open and hidden heavy flavour
Representatives from all of the major collaborations summarized recent experimental results and prospects for future measurements. In particular, with the advent of the LHC, an unprecedented wealth of data on the production of heavy quarks and quarkonium in nuclear collisions has become available. One of the more spectacular effects observed at RHIC is the quenching of the transverse-momentum (pT) spectra of light hadrons, related to the energy loss of quarks inside the hot quark–gluon plasma (QGP) phase produced in lead–lead (PbPb) collisions. This has now been studied in detail in the heavy-quark sector for the first time by the ALICE, ATLAS and CMS collaborations.
Among the highlights presented at the workshop, the ALICE collaboration reported a strong suppression (up to a factor of around five) of the production of D mesons in PbPb collisions at a centre-of-mass energy, √sNN, of 2.76 TeV, compared with proton–proton data at the same energy. The CMS experiment has also found a sizeable suppression of the yield of J/ψs coming from the decay of B hadrons. When this effect is compared with the one measured by the same experiments for light hadrons, interesting hints of a hierarchy of suppression are seen, with beauty hadrons being less suppressed than charmed hadrons, and the latter less suppressed than light hadrons. Such an observation may be connected to the so-called dead-cone effect – a reduction of small-angle gluon radiation for heavy compared with light quarks, predicted by QCD – and related to the energy density reached in the medium.
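The “factor of around five” suppression quoted above is conventionally expressed through the nuclear modification factor, which compares the pT spectrum measured in PbPb collisions with a pp reference scaled by the average number of binary nucleon–nucleon collisions (this is the standard definition, not a detail specific to the ALICE analysis):

```latex
R_{AA}(p_T) \;=\; \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\langle N_{\mathrm{coll}} \rangle \, \mathrm{d}N_{pp}/\mathrm{d}p_T}
```

R_AA = 1 would mean that a PbPb collision behaves like an incoherent superposition of pp collisions; the D-meson result corresponds to R_AA ≈ 0.2 at intermediate pT.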
In the quarkonium sector, the ALICE and CMS collaborations showed new and intriguing results on J/ψ and Υ production, respectively. A suppression of charmonium states had previously been observed at CERN’s Super Proton Synchrotron (SPS) and at RHIC, and was explained as an effect of the screening of the binding colour force in a QGP. With data from the LHC, accurate results on the bottomonium states have proved for the first time – beyond any doubt – that the less strongly bound Υ(2S) and Υ(3S) are up to five times more strongly suppressed in a QGP than the tightly bound Υ(1S) state, an observation that is expected in a colour-screening scenario. By contrast, the ALICE collaboration sees a smaller suppression effect for the J/ψ than at RHIC and the SPS, despite the larger energy density reached in nuclear collisions at the LHC. An interesting hypothesis relates this observation to a recombination of cc̄ pairs, which are produced with high multiplicity in each PbPb collision, in the later stages when the system cools down and crosses the transition temperature between the QGP and the ordinary hadronic world.
Theoretical developments
The talks on theory provided quite a comprehensive overview of the vigorous research efforts towards a theoretical understanding of heavy-quark probes in heavy-ion collisions. The experimental findings on open heavy-flavour suppression and elliptic flow have led to many theoretical investigations of heavy-quark diffusion in the strongly coupled QGP. Most models use a relativistic Fokker-Planck-Langevin approach, with drag and diffusion coefficients taken from various microscopic models for the heavy-quark interactions with the hot and dense medium. The microscopic models include estimates from perturbative QCD for elastic- and/or radiative-scattering processes, T-matrix calculations using in-medium lattice potentials (from both the free and the internal thermodynamic potentials) and collision terms in full transport simulations, including 2 ↔ 2 and 2 ↔ 3 processes in perturbative QCD.
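A minimal sketch of the Langevin picture mentioned above, in one dimension with a momentum-independent drag coefficient and a diffusion coefficient fixed by the classical Einstein relation. The mass, temperature and drag values are illustrative placeholders, not parameters from any of the models discussed at the workshop:

```python
import math
import random

def langevin_evolve(p0, drag, temperature, mass, dt, n_steps, rng):
    """One-dimensional Langevin update for a heavy-quark momentum p:

        dp = -drag * p * dt + sqrt(2 * D * dt) * xi,   xi ~ N(0, 1),

    with the diffusion coefficient D = drag * mass * temperature fixed by
    the (classical) Einstein relation, so the quark thermalizes towards
    <p^2> = mass * temperature.  Natural units (GeV) throughout."""
    D = drag * mass * temperature
    kick = math.sqrt(2.0 * D * dt)
    p = p0
    for _ in range(n_steps):
        p += -drag * p * dt + kick * rng.gauss(0.0, 1.0)
    return p

rng = random.Random(42)
# Illustrative numbers: charm-like mass 1.5 GeV, medium temperature 0.3 GeV.
samples = [langevin_evolve(5.0, 0.2, 0.3, 1.5, 0.02, 2000, rng)
           for _ in range(1500)]
p2 = sum(p * p for p in samples) / len(samples)
print(f"<p^2> = {p2:.2f} GeV^2  (thermal expectation {1.5 * 0.3:.2f})")
```

The drag term relaxes the initial momentum towards zero while the random kicks keep feeding momentum back in; after many relaxation times the ensemble variance settles at the thermal value, which is the essence of how these models connect drag and diffusion coefficients to observed spectra and flow.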
First studies of the influence of the hadronic phase on the medium modifications of open heavy flavour were presented at the workshop. Estimates of the viscosity to entropy-density ratio, η/s, from the corresponding partonic and hadronic heavy-quark transport coefficients, lead to values that are not too far from the conjectured anti-de Sitter/conformal field theory lower bound of 1/(4π) in the phase-transition region, showing the characteristic minimum around the critical temperature, Tc. Results from a direct calculation of the heavy-quark transport coefficients via the maximum-entropy method applied to lattice-QCD correlation functions were also reported.
In the field of heavy quarkonia, the notion that quarkonia may be regenerated via qq̄ recombination in the medium – in addition to the dissociation and melting processes that lead to their suppression in the QGP – has in recent years motivated detailed studies of the bound-state properties of heavy quarkonia in the hot medium. Here, the models range from the evaluation of static qq̄ potentials in hard-thermal-loop resummed thermal QCD to generalizations of systematic nonrelativistic-QCD and heavy-quark effective-theory studies from the vacuum to thermal field theory.
These theoretical studies have already led to major progress in understanding the possible microscopic mechanisms behind the coupling of heavy-quark degrees of freedom with the hot and dense medium created in heavy-ion collisions. In future, it might be possible to gain an even better quantitative understanding of fundamental quantities such as the transport coefficients of the QGP (for example η/s) and the dissociation temperatures of heavy quarkonia, which could provide a thermometer for the QGP formed in heavy-ion collisions. Whatever happens, the workshop has provided an excellent framework to discuss this exciting theoretical work and trigger some fruitful ideas for its future development.
The observed signals for the QGP are expected to be even stronger in PbPb collisions at √sNN = 5.1 TeV (foreseen in 2015) and allow the properties of the QGP to be characterized further. Proton–lead data are urgently needed to measure the contribution from the effects in cold nuclear matter, such as nuclear shadowing and Cronin enhancement. The experimental teams at the LHC and at RHIC are working on upgrades of the inner tracking systems of their detectors, aiming for an improved resolution in impact parameter, which will make the measurement of open beauty in heavy-ion collisions feasible in the near future.
• The organizers would like to thank the Lawrence Berkeley National Laboratory and the Foundation for Fundamental Research on Matter (FOM) for financial support.