Dark matter remains dark. The Fermi Gamma-ray Space Telescope could not detect an annihilation signal from dark-matter reservoirs in nearby dwarf galaxies. This nonetheless constrains the properties of candidate dark-matter particles. The results exclude for the first time weakly interacting massive particles (WIMPs) within a specific range of masses and interaction rates.
That 80 per cent of the matter content of the universe is invisible is one of the great challenges of modern physics and astronomy. Looking at different wavelengths does not help because the elusive matter seems neither to emit nor absorb electromagnetic radiation. Yet, its gravitational influence is manifest in the orbital speeds of stars inside galaxies and in shaping the structure of clusters of galaxies. The mystery is compounded by the lack of clues as to what this dark matter actually is. It must be non-baryonic – that is, not made of ordinary matter, essentially protons and neutrons – which suggests a new kind of subatomic particle. A favoured class of dark-matter candidates consists of WIMPs, which are presumed not to interact with normal matter or radiation, except through gravitation, but which could mutually annihilate to produce gamma rays. Such a weak-scale annihilation has the advantage that it accounts naturally for the observed cosmological density of dark matter – and this is the prime motivation for favouring WIMP dark matter.
NASA’s Fermi satellite is well suited to look for a WIMP-annihilation signal. Launched on 11 June 2008, its payload includes the Large Area Telescope (LAT), which scans the whole sky every three hours in the 100 MeV to 300 GeV energy range (CERN Courier November 2008 p13). Prime targets in the search for signs of WIMP annihilation are dwarf spheroidal galaxies. These small galaxies orbit the Milky Way and are characterized by a high ratio of dark to normal matter. Their stellar population is old, making them unlikely to contain supernova remnants or pulsars emitting contaminating gamma rays; they can also be selected at locations in the sky that avoid the gamma-ray-bright Galactic plane.
The Fermi-LAT collaboration has made a study of the gamma-ray emission of dwarf spheroidal galaxies observed over two years and published results based on a joint likelihood analysis that evaluates all of the galaxies at once without merging the data (Ackermann et al. 2012). The study also accounts for uncertainties in the actual distribution of dark matter inside the galaxies. The results yield no evidence of a gamma-ray signal from dark matter and thus can strongly constrain the cross-section for dark-matter particle annihilation.
Specifically, the study strongly disfavours the existence of WIMPs with the most generic velocity-averaged cross-section (3 × 10⁻²⁶ cm³ s⁻¹ for a purely s-wave annihilation cross-section) and masses less than around 30 GeV. A lower cross-section would imply too high a cosmological density of dark matter, whereas a higher cross-section would result in a significant detection of gamma rays. The effective exclusion of the less massive WIMP candidates is confirmed by an independent study reported in the same issue of Physical Review Letters. Authored by Alex Geringer-Sameth and Savvas Koushiappas of Brown University, Rhode Island, the second analysis uses another set of Fermi-LAT data and a different statistical method and treatment of the background.
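The logic of that statement can be made concrete with two standard, textbook-level relations (quoted here as background, not taken from the papers themselves): the thermal-relic abundance falls as the annihilation cross-section rises, while the predicted gamma-ray flux from a dwarf galaxy rises with it.

```latex
% Thermal freeze-out relic abundance (order-of-magnitude textbook estimate):
\Omega_\chi h^2 \;\approx\; \frac{3\times 10^{-27}\,\mathrm{cm^3\,s^{-1}}}{\langle\sigma v\rangle}

% Expected annihilation flux from a dwarf spheroidal (self-conjugate WIMP of mass m_\chi):
\frac{\mathrm{d}\Phi_\gamma}{\mathrm{d}E} \;=\;
  \frac{\langle\sigma v\rangle}{8\pi m_\chi^2}\,\frac{\mathrm{d}N_\gamma}{\mathrm{d}E}
  \;\times\;
  \underbrace{\int_{\Delta\Omega}\!\int_{\mathrm{l.o.s.}}\rho_\chi^{2}\,\mathrm{d}l\,\mathrm{d}\Omega}_{J\text{-factor}}
```

A smaller ⟨σv⟩ would thus leave too much dark matter after freeze-out, while a larger one would have produced a gamma-ray excess that the LAT does not see.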
The limits presented in these two papers are among the strongest dark-matter limits obtained to date and suggest that Fermi-LAT has the potential either to discover the WIMP-annihilation signal from dwarf spheroidal galaxies or to rule out that dark matter is made of WIMPs. The next step is the inclusion of more recent gamma-ray measurements that extend to higher energies with an improved LAT sensitivity. It will be interesting to see whether the forthcoming study leads to a further decisive push towards higher energies of the allowed range of WIMP masses, or to a historic discovery.
The measurements of the electron- and muon-neutrino fluxes published by the Super-Kamiokande collaboration in 1998 marked a turning point in the history of particle physics. This team showed that fewer muon-neutrinos arrive at the surface of the Earth than expected from their production by cosmic-ray interactions in the upper atmosphere (atmospheric neutrinos). This in turn indicated evidence for neutrino oscillations, the phenomenon in which the flavour of the neutrino changes (oscillates) as the neutrino propagates through space and time. Since the publication of Super-Kamiokande’s seminal paper, the phenomenon of neutrino oscillations has been established through further measurements of atmospheric neutrinos, as well as of neutrinos and antineutrinos produced in the Sun, by nuclear reactors and by high-energy particle accelerators. It is arguably the most significant advance in particle physics of the past decade.
Extending the Standard Model
Neutrino oscillations imply that the Standard Model is incomplete and must be extended to include neutrino mass as well as mixing among the three neutrino flavours. The mechanism by which neutrino mass is generated is not known. An intriguing possibility is that the tiny neutrino mass is the result of physics at extremely high energy scales. Such a “see-saw” mechanism might also help to explain why neutrino mixing is so much stronger than the mixing among quarks. Mixing among three massive neutrinos admits the possibility that symmetry between matter and antimatter (CP-symmetry) is violated via the neutrino mixing matrix. Nonzero neutrino mass implies that lepton number must be used to distinguish a neutrino from an antineutrino. If lepton number is not conserved then a neutrino is indistinguishable from an antineutrino, i.e. the neutrino is a Majorana particle – a completely new state of matter. The determination of the properties of the neutrino, therefore, is fundamental to the development of particle physics.
Neutrino oscillations are readily described by extending the Standard Model to include three neutrino-mass eigenstates, ν₁, ν₂ and ν₃, such that the neutrino-flavour eigenstates, νe, νμ and ντ, are quantum-mechanical mixtures of the mass eigenstates (figure 1). Neutrino oscillations arise from the “beating” of the phase of the neutrino-mass eigenstates as a neutrino produced as an eigenstate of flavour propagates through space and time. The matrix by which the mass basis is rotated into the flavour basis is parameterized in terms of three mixing angles (θ₁₂, θ₂₃ and θ₁₃) and one phase parameter (δ). If δ is nonzero (and not equal to π), then CP-violation in the neutrino sector will occur so long as θ₁₃ > 0. Measurements of neutrino oscillations in vacuum can be used to determine the moduli of the mass-squared differences Δm²₃₁ = m₃² – m₁² and Δm²₂₁ = m₂² – m₁² and, with the aid of interactions with matter, also the sign.
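Schematically (a standard two-flavour vacuum approximation, quoted as textbook background rather than from the experiments discussed here), the mixing and the resulting oscillation probability read:

```latex
% Flavour states as mixtures of mass states (3x3 unitary mixing matrix U):
\nu_\alpha \;=\; \sum_{i=1}^{3} U_{\alpha i}\,\nu_i , \qquad \alpha = e,\,\mu,\,\tau

% Two-flavour approximation for the appearance probability in vacuum:
P(\nu_\alpha\to\nu_\beta) \;\simeq\; \sin^2 2\theta\,
  \sin^2\!\left(1.27\,\frac{\Delta m^2\,[\mathrm{eV^2}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
```

In vacuum only |Δm²| enters, which is why matter effects are needed to pin down the sign.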
The bulk of the measurements of neutrino oscillations to date have been collected using the dominant “disappearance” channels νe → νe and νμ → νμ. These data have yielded values for the three mixing angles, as well as for the magnitude of the mass-squared differences Δm²₃₁ and Δm²₂₁, and have shown that m₂ > m₁ (i.e. that Δm²₂₁ > 0). Last year, the T2K, MINOS and Double Chooz experiments presented evidence that θ₁₃ may be greater than zero. Then, in March this year, the Daya Bay collaboration reported that sin²2θ₁₃ = 0.092 ± 0.016 (stat.) ± 0.005 (syst.), i.e. that sin²2θ₁₃ = 0 is excluded at 5.2σ. The announcement was soon followed by the report of a similar result from the RENO experiment. These exciting new measurements imply that it may be possible to observe CP-violation in neutrino oscillations. The challenge for the neutrino community, therefore, is to refine the measurement of θ₁₃, to determine the sign of Δm²₃₁ (the “mass hierarchy”), to discover CP-violation (if, indeed, it does occur) by measuring δ and to improve the accuracy with which θ₂₃ is known.
Over the next few years, several experiments – MINOS, T2K, NOνA, Double Chooz, Daya Bay and RENO – will exploit the νμ → νe and νe → νx channels to improve significantly the precision with which θ₁₃ is known. The NOνA long-baseline experiment might also be able to determine the mass hierarchy. However, it is unlikely that either T2K or NOνA will be able to discover CP-violation, i.e. that δ ≠ 0 or π.
The Neutrino Factory
Neutrino oscillations also have implications well beyond the confines of particle physics. The possibility of CP-violation through the neutrino mixing matrix, combined with the possibility that the neutrino is a Majorana particle, makes it conceivable that the interactions of the neutrino led to the observed domination of matter over antimatter in the universe. The abundance of neutrinos in the universe is second only to that of photons. Even with a tiny mass, the neutrino may make a significant contribution to dark matter and thereby play an important role in determining the structure of the universe.
Such a breadth of impact justifies an ambitious, far-reaching experimental programme. Determining the nature of the neutrino – whether Majorana or Dirac – through the search for neutrinoless double-beta decay (2β0ν) is an important part of this programme. The absolute neutrino mass must also be determined either through observations of 2β0ν decay or from the measurement of the end-point of the electron spectrum in beta decay. Equally important is the accurate determination of the parameters that determine the properties of the neutrino. This requires intense, high-energy neutrino and antineutrino beams – precisely what the Neutrino Factory is designed to produce.
In the Neutrino Factory, beams of νe and ν̄μ (ν̄e and νμ) are produced from the decays of μ⁺ (μ⁻) circulating in a storage ring. High neutrino-energies can readily be achieved because the neutrinos carry away a substantial fraction of the energy of the muon. Time-dilation is beneficial, allowing sufficient time to produce a pure, collimated beam. The table above lists the oscillation channels that are available at the Neutrino Factory. Charged-current interactions induced by νe → νμ oscillations – the “golden channel” – produce muons that are opposite in charge to those produced by the ν̄μ in the beam, so a magnetized detector is required. The additional capability to investigate the “silver” (νe → ντ) and “platinum” (νμ → νe) channels also makes the Neutrino Factory an excellent place to look for oscillation phenomena that are outside the standard three-neutrino mixing paradigm. It would be the ideal facility to serve the precision era of neutrino-oscillation measurements.
In 2011, the International Design Study for the Neutrino Factory (the IDS-NF) collaboration presented two options for the facility in its Interim Design Report (IDR) (Choubey et al. 2011). The first, optimized for discovery reach at small θ₁₃ (sin²2θ₁₃ < 10⁻²), calls for two distant detectors, with baselines of 2500–5000 km and 7000–8000 km, and a stored-muon energy of 25 GeV. The second option, optimized for sensitivity at large θ₁₃, requires a single detector at a distance of around 2000 km and a stored-muon beam with an energy of only 10 GeV. Figure 2 shows the discovery reach of the facility presented in terms of the fraction of all possible values of δ (the “CP fraction”) and plotted as a function of sin²2θ₁₃.
In the past few weeks, the Daya Bay and RENO collaborations have announced the first measurements of sin²2θ₁₃, with a value of around 0.1. Figure 2 shows that at such a large value of θ₁₃ excellent performance can be achieved using the “low-energy” option, with a precision and discovery reach significantly better than those of the realistic alternatives (IDS-NF 2011).
Novel techniques
The IDS-NF baseline accelerator facility sketched in figure 3 provides a total of 10²¹ muon decays per year, split between the two distant neutrino detectors. The process of creating the muon beam begins with the bombardment of a pion-production target with a pulsed proton beam. The pions are captured in a solenoidal channel in which they decay to produce the muon beam. A sequence of accelerators is then used to manipulate and reduce (cool) the muon-beam phase space and to accelerate the muons to their final energy.
The muon’s short lifetime has required novel techniques to be developed to carry out these steps. Ionization cooling, the technique by which it is proposed to cool the muons, involves passing the beam through a material in which it loses energy through ionization and then re-accelerating it in the longitudinal direction to replace the lost energy. Muon acceleration will be carried out in a series of superconducting linear and recirculating linear accelerators. The final stage of acceleration, from 12.6 GeV to the stored-muon energy of 25 GeV, is provided by a fixed-field alternating-gradient (FFAG) accelerator. The baseline neutrino detector is a MINOS-like iron-scintillator sandwich calorimeter with a sampling fraction optimized for the Neutrino Factory beam. The baseline calls for a fiducial mass of 100 kilotonnes to be placed at the intermediate baseline and a detector of 50 kilotonnes at the magic baseline.
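For the ionization-cooling step described above, the evolution of the beam emittance is commonly written as follows (a standard approximation from the muon-cooling literature; the symbols are generic and the expression is quoted as background, not from the IDS-NF baseline documents):

```latex
% Evolution of the normalised transverse emittance with path length s:
\frac{\mathrm{d}\varepsilon_N}{\mathrm{d}s} \;\simeq\;
  -\,\frac{\varepsilon_N}{\beta^2 E_\mu}\left\langle\frac{\mathrm{d}E_\mu}{\mathrm{d}s}\right\rangle
  \;+\; \frac{\beta_\perp\,(13.6\,\mathrm{MeV})^2}{2\,\beta^3 E_\mu\, m_\mu c^2\, X_0}
% First term: cooling by ionisation energy loss; second term: heating by multiple scattering.
% beta = muon velocity/c, beta_perp = betatron function at the absorber, X_0 = radiation length.
```

The balance of the two terms is what motivates low-Z absorbers such as liquid hydrogen, which maximise the energy loss relative to the multiple-scattering heating.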
Much of the Neutrino Factory facility, the accelerator complex and the neutrino detectors exploit state-of-the-art technologies. To achieve the ultimate performance (10²¹ muon decays per year) the IDS-NF baseline calls for: a proton-beam power of 4 MW, delivered at a repetition rate of 50 Hz in short (around 2 ns) bunches; a pion-production target capable of accepting the high proton-beam power; an ionization-cooling channel that increases the useful muon flux by a factor of around 2; and an FFAG to boost the beam energy rapidly to 25 GeV. R&D programmes that address each of these issues are underway. CERN, along with other proton-accelerator laboratories, is actively developing the technologies necessary to deliver multi-megawatt, pulsed proton beams. The principle of a mercury-jet pion-production target was demonstrated by the MERIT experiment in 2008 that ran in the beamline of n_TOF, the neutron time-of-flight facility at CERN. The nonscaling FFAG accelerator EMMA (the Electron Model of Muon Acceleration, also known as the Electron Model of Many Applications) has been commissioned at the Daresbury Laboratory in the UK and used to demonstrate the “serpentine acceleration” characteristic of the nonscaling FFAG. The international Muon Ionization Cooling Experiment (MICE) at the Rutherford Appleton Laboratory will provide the engineering demonstration of the ionization-cooling technique (see box, previous page).
The Neutrino Factory is the facility of choice for the study of neutrino oscillations. It has excellent discovery reach and offers the best precision on the mixing parameters. The ability to vary the stored-muon energy and, perhaps the detector technology, gives the necessary flexibility to respond to developments in understanding neutrino physics and in the discovery of new phenomena. The R&D programme required to make the Neutrino Factory a reality will directly benefit the development of a muon collider and experiments that seek to discover charged lepton-flavour violation. The case for the Neutrino Factory as part of a comprehensive muon-physics programme is compelling indeed.
I gratefully acknowledge the help, advice, and support of my many colleagues within the IDS-NF, EUROnu and MICE collaborations and the Neutrino Factory community who have freely discussed their results with me and from whose work and results I have drawn freely.
MICE is a single-particle experiment in which the position and momentum of each muon are measured before it enters the MICE cooling channel and again after it has left (Gregoire et al. 2003 and 2005). Muons with momenta between 140 MeV/c and 240 MeV/c, and with normalized emittance between 2π and 10π mm·rad, will be provided by a purpose-built beamline at the 800 MeV proton synchrotron, ISIS, at the Rutherford Appleton Laboratory.
The MICE cooling channel, a single lattice cell, comprises three 20-litre volumes of liquid hydrogen and two short linac modules, each consisting of four 201 MHz cavities. Beam transport is achieved by a series of superconducting solenoids: the “focus coils” focus the beam into the liquid-hydrogen absorbers, while a “coupling coil” surrounds each of the linac modules. A particle-identification system, with scintillator time-of-flight (TOF) hodoscopes and threshold Cherenkov counters, upstream of the cooling channel allows a pure muon beam to be selected. Downstream of the cooling channel, a final hodoscope and a calorimeter system allow muon decays to be identified. The calorimeter is composed of a lead-scintillator section, of a similar design to that of the KLOE detector at DAΦNE but with thinner lead foils, followed by a fully active scintillator detector (the electron-muon ranger) in which the muons are brought to rest.
Charged-particle tracking in MICE is provided by two solenoidal spectrometers that together determine the relative change in transverse emittance of the beam, which is expected to be approximately 10%, with a precision of ±1% (i.e. a 0.1% measurement of the change in absolute emittance). The trackers themselves are required to have high track-finding efficiency in the presence of background that is induced by X-rays produced in the RF cavities.
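A minimal sketch of the kind of quantity the spectrometers reconstruct – the normalised transverse emittance of an ensemble of single-muon measurements – might look like the following (illustrative only; the 2D projection, variable names and toy numbers are assumptions, not the MICE analysis code):

```python
import numpy as np

M_MU = 105.66  # muon mass in MeV/c^2


def normalised_emittance_2d(x_mm, px_mev):
    """2D normalised RMS emittance from single-particle (x, px) measurements.

    x_mm   : array of transverse positions in mm
    px_mev : array of transverse momenta in MeV/c
    Returns the emittance in mm (often quoted as pi mm rad).
    """
    cov = np.cov(np.vstack([x_mm, px_mev]))      # 2x2 covariance matrix of the ensemble
    return np.sqrt(np.linalg.det(cov)) / M_MU    # eps_N = sqrt(det Sigma) / (m_mu c)


# Toy usage: compare upstream and downstream ensembles to quote a relative change.
rng = np.random.default_rng(0)
up = rng.multivariate_normal([0, 0], [[36.0, 0], [0, 400.0]], size=100_000)
down = rng.multivariate_normal([0, 0], [[33.0, 0], [0, 370.0]], size=100_000)
eps_up = normalised_emittance_2d(up[:, 0], up[:, 1])
eps_down = normalised_emittance_2d(down[:, 0], down[:, 1])
print(f"relative emittance change: {100 * (eps_down - eps_up) / eps_up:.1f}%")
```

In MICE the full transverse phase space is measured muon by muon, so the statistical precision on such a covariance-based emittance can reach the sub-per-cent level quoted above.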
In the first “step” of the experiment, the muon beam for MICE has been characterized using the beamline instrumentation and the TOF, Cherenkov and lead-scintillator systems (figure 5). The results, which are being prepared for publication, show that the muon beam can provide the range of momentum and emittance required by MICE. The trackers and a prototype of the electron-muon ranger have been tested and shown to perform to specification. The cavities that make up the two short linac sections have been manufactured by Lawrence Berkeley National Laboratory (LBNL). The superconducting magnets required for the cooling channel are all under construction. By the end of 2012, the collaboration will commission the two spectrometer modules and the first liquid-hydrogen absorber and focus-coil module. This will allow preliminary studies of the ionization-cooling effect to be performed. The full MICE cooling cell will be constructed once the initial cooling studies are complete.
This month sees the final touches being made to a detailed, 500-page report on the physics programme, a detector design and the accelerator options for the proposed Large Hadron Electron Collider (LHeC) project. Following invitations by CERN and the European Committee for Future Accelerators (ECFA) and after three annual workshops, a study group of nearly 200 physicists and engineers from 60 institutes has now laid out the motivation and design concepts for a next-generation collider and a detector to explore the tera-electron-volt energy scale. The technical and particle physics aspects of the report have been refereed by more than 20 world experts, who were invited by CERN last year to scrutinize the design. The design process was monitored by ECFA and the Nuclear Physics European Collaboration Committee (NuPECC), as well as by a scientific advisory committee. The potential for electron–ion scattering led NuPECC in 2010 to include the LHeC in its long-range plan for European nuclear physics.
The LHeC project involves extending the capabilities of the LHC with a 60 GeV polarized electron beam, which in collisions with the intense proton (and ion) beams of the LHC would reach luminosities about 100 times larger than at HERA, the world’s first electron–proton collider, which ran at DESY in the years 1991–2007. The aim would be to exceed HERA’s maximum four-momentum-transfer squared, Q², by a factor of 20. This would open up a new chapter in the physics of deep inelastic scattering (DIS), a story that began at SLAC with the discovery in 1968 of quarks as the smallest constituents of the proton. More recently, it led to the discovery at HERA that at small relative parton momenta, x, the structure of the proton is largely determined by gluons, whose interactions also give mass to the visible matter of the universe.
The electron beam for the LHeC could be supplied by a new electron storage-ring mounted on top of the LHC, for which new, lighter, high-quality dipole magnets have been successfully developed both at CERN and at the Budker Institute of Nuclear Physics, Novosibirsk, in accordance with the design report. An alternative is to use an electron linac in a “racetrack” configuration of one-third the circumference of the LHC. This would consist of some 120 accelerating-cavity cryomodules placed in two linacs, each 1 km long and connected by triple return arcs (figure 1). These superconducting cavities operate in a continuous-wave mode at a gradient of about 20 MV/m, similar to the European XFEL project at DESY, and at a frequency that is likely to be 721 MHz. The limitation of the total power consumption to 100 MW and the necessity to achieve maximum luminosity, in excess of 10³³ cm⁻² s⁻¹, led to the linac for the LHeC being designed as an energy-recovery linac. The concept of energy recovery is growing in popularity and with the LHeC, CERN and its partners would develop the highest-energy application. With a linac length of 2 km, the new accelerator is no longer than SLAC’s famous linac; however, the reach in Q² is enlarged by a factor of almost 10⁵ owing to the collider configuration and the high-energy beams of the LHC.
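A quick back-of-the-envelope check of those kinematic gains (a sketch only; the LHeC beam energies are those quoted in the text, while the HERA energies of 27.5 GeV electrons on 920 GeV protons are an assumption on my part):

```python
import math


def cms_energy_squared(e_lepton_gev, e_hadron_gev):
    """s = 4 * E_e * E_p for head-on collisions of ultra-relativistic beams."""
    return 4.0 * e_lepton_gev * e_hadron_gev


s_hera = cms_energy_squared(27.5, 920.0)    # HERA (assumed later-running beam energies)
s_lhec = cms_energy_squared(60.0, 7000.0)   # LHeC: 60 GeV electrons on 7 TeV LHC protons

# The maximum four-momentum transfer squared Q^2 is bounded by s (reached at x = y = 1).
print(f"sqrt(s): HERA ~{math.sqrt(s_hera):.0f} GeV, LHeC ~{math.sqrt(s_lhec):.0f} GeV")
print(f"gain in maximum Q^2: ~{s_lhec / s_hera:.0f}x")
```

With these inputs the gain in maximum Q² comes out at roughly 17, consistent with the factor of about 20 quoted above once the exact HERA running conditions are taken into account.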
The design report describes the machine physics, such as optics and beam–beam dynamics, for both options for the LHeC’s electron beam, as well as schemes to achieve high positron currents in the linac option. It also gives details for the various elements of the accelerator system, such as the warm dipole and cold interaction-region magnets, the cryogenics and the power supply (RF) components.
To achieve the high integrated luminosity at the LHeC, the design envisages that the LHC would operate synchronously with electron–proton and proton–proton collisions. This would turn the LHC into a novel three-beam facility; it also determines the time schedule for building and operating the LHeC (figure 2).
The report also covers a new collider detector, designed for high acceptance – down to 1° to the beam axis – and for the highest precision. Relying on novel technologies, as used in the ATLAS and CMS experiments and being developed for their upgrades, and based on the experience from the H1 and ZEUS detectors at HERA, the detector could be built in the 10 years or so available. Figure 3 shows the main detector, which is complemented by forward devices to tag protons, neutrons and deuterons for diffractive-scattering studies, and by backward electron and photon calorimeters for tagging events at low Q² (photo-production) and for measuring the luminosity with Bethe–Heitler scattering. With the assumption of only one interaction region being available for the LHeC in the 2020s, the report considers only one collider detector, with possibly two analysis collaborations to ensure independent and competing analysis approaches – a novel concept for particle physics.
The design report will provide valuable input to the discussion on the future of European particle physics. The next steps towards the LHeC will be discussed at a workshop at Coppet, near Geneva, on 14–15 June. The project offers the promise of a new multipurpose experiment for particle physics at CERN. It is reminiscent of the time when the Sp̄pS operated while CERN was also the centre of DIS with its muon- and neutrino-scattering experiments such as BCDMS and CDHSW. The LHeC builds on the LHC, enriching its physics harvest substantially and continuing the tradition of DIS as part of the exploration of the energy frontier. The accelerator technology and the experimental prospects are fascinating. By increasing the energy or the positron intensity there is also a bright future for further developments, reaching into the time when the LHC could be replaced by a new high-energy proton–proton collider and when the maximum Q² could approach 10 TeV².
LHeC points of view
The physics chapters of the design report discuss the rich and unique programme of the LHeC. There exist different and complementary points of view on the interest in such a project:
• The LHC point of view sees the LHC as the natural, highest-energy collider for finding physics beyond or complementing the Standard Model. New particles observed in proton–proton collisions may also be produced in electron–proton interactions and their characteristics studied. One example would be the Standard Model scalar boson at 125 GeV, if confirmed; its charge-parity properties and decays to b-quark pairs may be cleanly investigated at the LHeC in the process of WW fusion. If new particles or phenomena are so heavy that they can be seen only at the LHC, the precise understanding of quarks and gluons, mostly at large Bjorken x, could become crucial in distinguishing new observations from instrumental or merely partonic effects.
• The precision-physics point of view recognizes the unique potential related to ultraprecise electron–proton measurements. A far-reaching programme of investigations in experimental DIS physics and in perturbative QCD is linked to the possibility of measuring the strong coupling constant αs(MZ²) with tenfold improved precision (to per mille accuracy) as required in supersymmetric grand-unification scenarios of the electromagnetic, weak and strong interactions.
• The parton-distribution function (PDF) point of view emphasizes that the LHeC, for the first time, provides a complete foundation based not on fits but on data for the determination of the distributions of the two valence and six sea quarks, including the first mapping of those for the strange and top quarks. The LHeC maps the gluon distribution to unprecedented precision in a range from very low x > 10⁻⁶ to x close to 1. The complete set of precision PDFs is crucial for extending the ranges of searches at the LHC or for measuring the mass of the W boson.
• From a QCD point of view, this precision needs to be matched by calculations of a further order of perturbation theory. New theoretical concepts, such as generalized parton distributions (based on scattering amplitudes), unintegrated parton distributions (that take transverse parton momenta into account) and diffractive parton distributions, are in their infancy. Factorization and resummation may be tested decisively in combining data from the LHC and LHeC. The investigation of high-energy electron–proton scattering can also be important for constructing a non-perturbative approach to QCD based on effective string theory in higher dimensions.
• From a neutron point of view, tagging the spectator proton in electron–deuteron collisions leads to a removal of the corrections for Fermi motion. Moreover, the nuclear-shadowing effects may be controlled with diffractive scattering as proposed by Vladimir Gribov. These new methods would put tests of neutron structure, parton-symmetry relations and the evolution of QCD on new, firm ground.
• The heavy-ion point of view notes that the LHeC will extend the kinematic range in electron–ion scattering by almost four orders of magnitude and lead to essential innovations in understanding nuclear parton distributions. This is deeply related to the initial state of the quark–gluon plasma and will allow the black-body limit of deep inelastic electron–ion scattering to be established experimentally. Such a possibility would complete the exciting programme of physics with heavy ions at the LHC.
• From the HERA point of view, there is a large programme to be performed with higher luminosity. Examples include precision measurements of the longitudinal structure function down to low x, or the determination of the up-to-down quark ratio in the limit of high x, with data free of both nuclear and power corrections.
• The photon point of view recognizes that the most elementary boson yet has a quantum mechanical, partonic (gluon, charm etc.) structure, which could be uniquely investigated at the LHeC. It will allow both new phenomena and classic QCD subjects in photoproduction to be studied at much higher energy. The LHeC design with a linear accelerator could also generate a real photon beam, allowing the possibility of the first-ever photon–proton collider.
• The surprise point of view, finally, relies on the greatly extended kinematic range and high luminosity for observing fundamentally new phenomena. HERA discovered the marked rise of parton densities towards low x and that in 10% of the events the proton remained intact despite the violence of the interaction – a fact that remains surprising. Known candidates for discovery are: a three-gluon state, the odderon; a topological QCD phenomenon, the instanton; a currently hypothetical substructure of the top quark or weak bosons; and the exclusion of the saturation phenomenon. Top quarks have never been observed in deep inelastic scattering but they will appear copiously at the LHeC. Steps into an unknown kinematic region have always led to surprises, either through new particles or through their absence.
Take a 27-km, record-breaking machine, with 10,000 scientists from 100 countries and 630 institutions, throw in selected artists and arts specialists, and what do you get? An experiment to bring about head-on collisions between things that are even more elusive than the Higgs boson – creativity, imagination and human ingenuity. Without them, science, art and technology would not exist. The name of this experiment is Arts@CERN, and last year saw the switch-on of this new and rather different collider at CERN.
The start-up has seen CERN collaborate in the world’s most prestigious digital-arts festival, Ars Electronica, in Linz; feature in the keynote event at the Agenda 2016 conference at Moderna Museet, in Stockholm; supply live footage from the LHC to the US film director David Lynch for the Mathematics exhibition at one of the world’s leading contemporary arts museums, Fondation Cartier, in Paris; and have its research into antimatter feature on the centre spread of China’s best-selling design magazine.
Other results of the arts switch-on involve specially curated visits to CERN’s facilities for leading international artists. Recently these included the Swiss video artist Pipilotti Rist, the Polish conceptual artist Goshka Macuga and the master of contemporary dance, the US choreographer William Forsythe, as well as up-and-coming young artists, such as performer Niamh Shaw from Ireland. And to cap it all, this year CERN has two artists in residence on the new, three-year international artists’ residency programme, Collide@CERN, which is funded and supported by external donors and partners.
All of this seems a long way from 2009, when I was given the opportunity to go anywhere in the world after I received the Clore Fellowship – an award for cultural leadership. Instead of taking the opportunity to work in a famous arts organization, I decided to approach CERN to come for three months, supported by the UK government, which funded my award, to carry out a feasibility study for an artists’ residency scheme. Little did I know that I would be hired in the spring of 2010 to build a p(art)icle collider for CERN.
So why should CERN engage with the arts? CERN has a mission to engage science in society. The arts reach areas that science and technology alone cannot reach – touching the public who might otherwise be turned off. By joining forces, arts, technology and science make an unbeatable force for change and innovation in the 21st century, as Eric Schmidt, now executive chair of Google, points out. In the words of CERN’s director-general, Rolf Heuer: “They are expressions of what makes us human in the world.”
This phrase, more than any other, shows what is behind CERN’s high-level engagement with the arts and can be summed up in a simple equation: arts + science + technology = culture. For an organization to be truly cultural and innovative in the 21st century, it has to embrace all factors and facets of human experience, engaging with them on the same level of excellence as its institutional values.
Science and the arts are intimately connected in other ways, too. The British sculptor Antony Gormley is one of several leading international artists who are the patrons of the Collide@CERN artist in residence scheme. He recently donated one of his pieces, Feeling Material, to CERN in acknowledgement of the inspiration of particle physics on his work; it now hangs in the Main Building. Gormley is clear about the connection between art and science: “My whole philosophy is that art and science are better together than apart. We have somehow accepted an absolute division between analysis and intuition but I think actually the structures that they both come up with are an intricate mix of the two.”
The showpiece event that signalled the switch-on of CERN’s arts experiment was the six-day Ars Electronica Festival in Linz in 2011. Being the world’s leading digital-arts festival, it features spectacular performances in and around its state-of-the-art building and museum in addition to digital-arts exhibitions and interventions throughout the city. In 2011, CERN was the major collaborative partner and inspiration for the festival, which was called “Origin” and attracted more than 70,000 visitors from 33 countries. A symposium explored the importance of fundamental research and CERN’s collaborative international organizational structure. Even the logo for the festival was taken from the collisions in the ATLAS detector. CERN’s director of research and innovation, Sergio Bertolucci, and the director-general both spoke at the festival, and researchers from the experiments at the LHC gave the public “walk and talk through” guides to the innards of the detectors, with extraordinary high-resolution images.
That was not all. Ars Electronica and CERN also announced at the festival a landmark, three-year international cultural partnership with the launch of the annual Prix Ars Electronica Collide@CERN award for digital artists. The prize is a residency at both institutions lasting three months – two months at CERN for inspiration and one month at Ars Electronica for production. The first competition attracted 395 artists from 40 countries – from Azerbaijan and Uzbekistan, Brazil and Iceland, as well as from across Europe and the US. The winning artist was the 28-year-old Julius von Bismarck – one of the rising stars of the international arts scene, who is currently studying with the celebrated Icelandic-Danish artist Olafur Eliasson at the Institute of Spatial Experiments in Berlin.
It was only after awarding von Bismarck the prize that the jury discovered that he had wanted to be a physicist, and that both his brother and his grandfather are physicists. This only goes to prove the point at the heart of the Arts@CERN initiative – that scientists and artists are inter-related. He has just completed his residency of two months at CERN, being inspired by the science and the environment and having been “matched” with James Wells, a theorist at CERN, as his partner for scientific inspiration.
During his time at the laboratory, von Bismarck carried out interventions in perception among the CERN community and held many informal discussions. He is now at Ars Electronica’s transdisciplinary innovation and research facility, Futurelab, producing the ideas generated at CERN. He is working with his production mentor Horst Hoertner – one of the co-founders of the Prix Ars Electronica Collide@CERN. He will showcase the work at this year’s Ars Electronica Festival before bringing the piece back to CERN for a lecture on 25 September. However, the ripples of the residency and the ideas will continue long after von Bismarck has left. As he stated after just two weeks at the laboratory: “This experience is changing my life.”
If this arts experiment sounds easy, it isn’t. As with any experiment, it needs expertise and knowledge to make it happen and to build it on sound foundations and structure. So I created for CERN its first arts policy, “Great Arts for Great Science”, putting the arts on the same level of selected excellence as the science to create truly meaningful, high-impact, high-quality engagement, mutual understanding and respect between the arts and science. The first CERN Cultural Board was appointed at this high level of knowledge and excellence – to build expertise in the arts into CERN. The board members, honorary appointments for three years, are recognized leaders in their fields. They include the director-general of the Lyon Opera House, Serge Dorny, and the director of Zurich’s Kunsthalle, Beatrix Ruf, who is acknowledged as one of the most influential figures in contemporary art today. All of the board members donate their time and, crucially, the board also includes a CERN physicist, Michael Doser. Researchers from CERN are also on the juries for all of the artists’ residency awards.
Every year, the board will select at least one major arts project in which CERN officially collaborates, its stamp of approval enabling the project to find external funding. In 2012–2013, the selected project is the cutting-edge, multimedia/dance/opera/film Symmetry, by a truly international team of artists performing across several art forms, including the soprano Claron McFadden and the Nederlands Dans Theater dancer, Lukáš Timulak. The project is the brainchild of the emerging film director, Ruben Van Leer.
So, that is step one of building a p(art)icle collider – create the policy and the structure. The other steps were to: create the flagship Collide@CERN residency scheme; launch a website to make the work, visits and potential involvement with CERN of artists (past, present and future) visible and accessible; and finally give back to the CERN community by advising on home-grown initiatives that have international artistic potential. In 2010, one of my first acts was to carry out a major strategic review of the home-grown, biannual film festival CinéGlobe, created by CERN’s Open Your Eyes film club. The review recommended developing the brand, mission, vision and values, as well as substantial organizational restructuring and planning. I also suggested the slogan “Inspired by Science” to sum up the festival’s mission.
Two years since being hired by CERN, I am still there. It is the positive spirit of fundamental research – the quest to expand human knowledge and understanding for the good of all, engaging with cutting-edge ideas and technologies – that inspires me to work at CERN, as well as being the source of inspiration for artists. After all, landmark moments of science in the 20th century created some of the most significant arts movements of the modern world. My personal belief is that particle physics combines the twin souls of the artist – the theorist who thinks beyond the paradigms and the experimentalist who tests the new and brings them down to Earth. By building a p(art)icle collider, creative collisions between arts and science have truly begun at CERN.
Nuclear fragmentation is the name given to the break-up of nuclei. It can happen when a high-energy hadron hits an intact nucleus. This is the process that is used to produce beams of exotic projectiles, such as radioactive nuclei, at CERN’s ISOLDE facility, which has served a worldwide community for many years. However, nuclear fragmentation also takes place in inelastic peripheral collisions between heavy ions, a process that is now being put to use to generate beams of light ions in the North Area at CERN.
In a heavy-ion collision, the nuclear matter is unstable outside the region where the interacting nuclei overlap – mainly because of the mismatch between shape and surface-energy – and it disintegrates into a mixture of different nuclei. The composition of the fragments produced, in terms of particle mass (A), charge (Z) and momentum, varies considerably from one collision event to another, even for fixed initial conditions in energy and impact parameter. This type of nuclear fragmentation has been studied extensively and found to occur over a range of incident energies, from as low as 20 MeV per nucleon up to highly relativistic energies. For a given collision system (that is, with specific values of A and Z for the projectile and target), the distributions of mass and charge of the nuclei in the final state are, to a good approximation, independent of the incident energy. The same independence is also true for the momenta of the produced ions in the rest frame of the corresponding parent nucleus. In the laboratory frame, however, the fragments experience an energy-dependent boost, which causes a forward-peaked angular distribution.
Fragmentation of beams of heavy nuclei is used at a variety of facilities, including GANIL, RIKEN, GSI and the National Superconducting Cyclotron Laboratory at Michigan State University. However, a different application of nuclear fragmentation was introduced 12 years ago at CERN, when beams of fragments with momenta of 40A GeV/c and 158A GeV/c were produced in a primary carbon target and delivered to the North Area at the Super Proton Synchrotron (SPS).
Figure 1a shows the production cross-section of ion-fragment projectiles as a function of the fragment’s charge that was measured when lead nuclei at an energy of 158 GeV per nucleon collided with the carbon target (Cecchini et al. 2002, Thuillier et al. 2002). The results were in good agreement with model calculations and confirmed that there is a relatively high probability of producing ions with either low or high charge, giving rise to a U-shaped distribution of the kind previously observed at much lower energies (Trautmann et al. 1992, Schüttauf et al. 1996). At the same time a fragmented lead-ion beam was used by the NA49 experiment for physics, in which fragments with A/Z values close to two were transported to the experimental area. Charge measurements of the beam particles allowed “tagging” of the charge states Z = 6 or 14, corresponding to the 12C and 28Si ions whose interactions with the secondary target in NA49 were recorded and analysed. Fragmentation was also used to produce beams of mixed ions, with a large spread of combinations of A and Z, for the calibration of detectors such as the ring imaging Cherenkov counter for the Alpha Magnetic Spectrometer experiment in 2002 (Efthymiopoulos and Buenerd 2003).
The NA61/SHINE collaboration has recently revived this method with the aim of producing light-ion beams with increased purity (NA61/SHINE 2009). The work is part of an effort to study the onset of deconfinement in heavy-ion collisions and search for the critical point of hadronic/partonic matter by scanning systematically both in collision energy and in the size of the colliding nuclei. For the light-ion part of the programme, the collaboration decided to begin with a fragment beam, as primary light ions will become available in the North Area only in 2014. To create the light ions, a primary beam of lead ions from the SPS was directed towards a stationary target in the North Area, where a secondary beamline was tuned to transport projectile fragments with an optimized content of 7Be ions to NA61/SHINE, for studying the reaction 7Be + 9Be.
The selection and transport of a specific ion species from a fragmented heavy-ion (208Pb) beam is not straightforward. The secondary beamlines in the North Area are designed to transport particles emerging from the primary targets to the experiments. They basically consist of two large spectrometers, which can select particles with a range in rigidity (momentum-to-charge ratio, Bρ ≈ 3.31γA/Z) of ±1.5%. The desired ions produced in the fragmentation of the primary beam will be immersed in a variety of other nuclei that have a similar mass-to-charge ratio and, therefore, a rigidity value within the beam acceptance. Moreover, overlaps in rigidity occur not only for ions with the same mass-to-charge ratio but also for neighbouring elements. This is because the momentum of the ions varies as a result of the nuclear Fermi motion of the fragments. Without Fermi motion, the fragments would leave the interaction region almost undisturbed, with the same velocity (or momentum per nucleon) as the incident lead ions. Instead, the Fermi motion, which depends on the masses of the fragment and the projectile, can spread the longitudinal momenta of light nuclear fragments by up to 3–5% – i.e. much more than the beam acceptance.
The 7Be ion was chosen for the beam for the NA61/SHINE experiment because it has no long-lived near-neighbours, thus allowing the production of a light-ion beam with a large proportion of the desired ions. The near neighbours to 7Be are its isotopes 6Be and 8Be and nuclei with a charge-difference of one and a similar mass-to-charge ratio (e.g. 5Li, 9B). Furthermore, 7Be has more protons (Z = 4) than neutrons (N = 3). Such nuclear configurations are disfavoured with increasing nuclear mass because a surplus of protons causes a Coulomb repulsion that cannot be balanced by the attractive potential of the smaller number of neutrons. Figure 1b shows ion rates in the fragment beam delivered to NA61/SHINE. It indicates that 7Be fragments are accompanied mainly by deuterons and helium ions, whose rigidity overlaps with that of the wanted ions because of the Fermi motion. A counter-example for the choice of ion-species would have been a nucleus with a mass-to-charge ratio of two, which would be accompanied by a range of stable or long-lived nuclei from 2D up to 56Ni.
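A back-of-the-envelope illustration of that argument (a sketch with round numbers, not the actual H2 beam-optics settings): to first order the fragments keep the parent beam's momentum per nucleon, so their magnetic rigidity scales as A/Z, and 7Be sits essentially alone at A/Z = 1.75 among long-lived light nuclei.

```python
# Magnetic rigidity B*rho [T m] = p [GeV/c] / (0.2998 * Z).
# For fragments carrying the same momentum per nucleon p_N, B*rho is proportional to A/Z,
# so only species with a similar mass-to-charge ratio fall inside the +-1.5% acceptance;
# others reach it only through the momentum spread discussed in the text.

P_PER_NUCLEON = 13.0  # GeV/c per nucleon, one of the SPS settings mentioned in the text

fragments = {"7Be": (7, 4), "d": (2, 1), "3He": (3, 2), "4He": (4, 2), "6Li": (6, 3)}


def rigidity(a, z, p_n=P_PER_NUCLEON):
    """Rigidity in T m for a fragment of mass number a and charge z."""
    return (a * p_n) / (0.2998 * z)


ref = rigidity(*fragments["7Be"])
for name, (a, z) in fragments.items():
    dev = 100.0 * (rigidity(a, z) - ref) / ref
    print(f"{name:>4}: A/Z = {a / z:.2f}, rigidity offset vs 7Be = {dev:+6.1f}%")
```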
At low energies, the insertion of a “degrader” into the beamline improves the separation of the desired ions (Münzenberg et al. 1992, Geissel et al. 1995), profiting from the double spectrometer configuration of the secondary beamline. The first spectrometer selects ions within a rigidity range that maximizes the proportion of wanted ions produced by the primary fragmentation target; on passing through the degrader, a piece of material introduced at the spectrometer’s focal point, these ions lose energy in a charge-dependent way. The second spectrometer then separates the ions spatially according to their charge so that they can then be selected by using a thin collimator slit.
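The physics behind the degrader is the strong charge dependence of the ionization energy loss; schematically (the leading dependences of the Bethe formula, quoted here as standard background rather than from the cited papers):

```latex
% Mean energy loss per unit path length in the degrader material:
-\left\langle\frac{\mathrm{d}E}{\mathrm{d}x}\right\rangle \;\propto\;
  \frac{Z^2}{\beta^2}\,\ln\!\left(\frac{2 m_e c^2 \beta^2 \gamma^2}{I}\right)
% Ions entering with the same rigidity but different charge Z therefore leave the
% degrader with different rigidities, and the second spectrometer disperses them spatially.
```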
The drawback of this method is a loss of beam intensity, through both the nuclear interactions and the beam blow-up caused by multiple scattering in the material of the degrader, which rises with increasing thickness. So the high separation power is accompanied by a high loss of intensity. Furthermore, for a given degrader thickness both the nuclear cross-section and the energy loss are energy independent to a large extent. This means that the separation power (ΔE/E) increases with decreasing energy.
NA61/SHINE is located on the H2 beamline in the North Area, where lead ions from the SPS are focused onto a primary beryllium fragmentation target, 180 mm long. In passing through the target the lead beam undergoes collisions, mostly peripheral, with the light target-nuclei. Part of the resulting mixture of nuclear fragments is captured by the beamline, which is tuned to a rigidity that maximizes the ratio of the created 7Be to all ions. Figure 2 shows the layout of the H2 beamline with its two-step spectrometer. The optional degrader (a copper plate either 1 cm or 4 cm in thickness) is located between the two spectrometer sections. The composition of the ion beam can be monitored by scintillation counters that measure the charge (Z²) and time-of-flight of the ions. The latter allows the determination of the mass (A) of the ions for momenta lower than 20 GeV/c per nucleon.
Investigations of fragment separation in the H2 beamline took place during test-beam time in 2010, using a 13A GeV/c lead beam incident on the primary target and with the 4 cm degrader in place. Figure 3 shows, for a given rigidity setting, the charge distributions detected with the collimator set to optimize the selection of either 7Be or 11C ions. During running in 2011 the NA61/SHINE collaboration used the configuration without degrader to record a total of 6 × 10⁶ 7Be + 9Be collisions at beam momenta of 158A GeV/c, 80A GeV/c and 40A GeV/c. A typical charge spectrum for a fragment beam selected by the spectrometer is indicated in figure 1b. With an incident beam from the SPS of several 10⁸ lead ions per spill, typical beam intensities at NA61/SHINE were 5000 to 10,000 7Be particles per spill, with 10 to 20 times as many unwanted ions (Efthymiopoulos et al. 2011).
A second period with a 7Be beam is scheduled for autumn this year. It will be devoted to data-taking at beam momenta of 30A GeV/c, 20A GeV/c and 13A GeV/c. The latter is close to the lower limit of what is possible given the characteristics of the SPS accelerator and the external beamlines.
Friday, 31 May 2001, 6 p.m. – Back in my office, I open my notebook and write “My understanding of MD’s ideas” in blue ink. I draw a box and write the words “Open Lab” in the middle of it. I’ve just left the office of Manuel Delfino, the head of CERN’s IT division. His assistant had called to ask me to go and see Manuel at 4 p.m. to talk about “industrial relations”. I’ve been technology-transfer co-ordinator for a few weeks but I had no idea of what he was going to say to me. An hour later, I need to collect my thoughts. Manuel has just set out one of the most amazing plans I’ve ever seen. There’s nothing like it, no model to go on, and yet the ideas are simple and the vision is clear. He’s asked me to take care of it. The CERN openlab adventure is about to begin.
This is how the opening lines of the openlab story could begin if it were ever to be written as a novel. At the start of the millennium, the case was clear for Manuel Delfino: CERN was in the process of developing the computing infrastructure for the LHC; significant research and development was needed; and advanced solutions and technologies had to be evaluated. His idea was that, although CERN had substantial computing resources and a sound R&D tradition, collaborating with industry would make it possible to do more and do it better.
Four basic principles
CERN was no stranger to collaboration with industry, and I pointed out to Manuel that we had always done field tests on the latest systems in conjunction with their developers. He nodded but stressed that here was the difference: what he was proposing was not a random collection of short-term, independent tests governed by various different agreements. Instead, the four basic principles of openlab would be as follows (I jotted them down carefully because Manuel wasn’t using notes): first, openlab should use a common framework for all partnerships, meaning that the same duration and the same level of contribution should apply to everyone; second, openlab should focus on long-term partnerships of up to three years; third, openlab should target the major market players, with the minimum contribution threshold set at a significant level; last, in return CERN would contribute its expertise, evaluation capacity and its unique requirements. Industrial partners would contribute in kind – in the form of equipment and support – and in cash by funding young people working on joint projects. Ten years on, openlab is still governed by these same four principles.
Back to May 2001. After paving the way with extensive political discussions over several months, Manuel had written a formal letter to five large companies, Enterasys, IBM, Intel, Oracle and KPN QWest, inviting them to become the founding members of the Open Lab (renamed “openlab” a few months later). These letters, which were adapted to suit each case, are model sales-pitches worthy of a professional fundraiser. They set out the unprecedented computing challenges associated with the LHC, the unique opportunities of a partnership with CERN in the LHC framework, the potential benefits for each party and proposed clear areas of technical collaboration for each partner. The letters also demanded a rapid response, indicating that replies needed to reach CERN’s director-general just six weeks later, by 15 June. A model application letter was also provided. With the director-general’s approval, Manuel wrote directly to the top management of the companies concerned, i.e. their chairs and vice-chairs. The letters had the desired effect: three companies gave a positive response by the 15 June deadline, while the other two followed suit a few months later – openlab was ready to go.
The first task was to define the common framework. CERN’s legal service was brought in and the guiding principles of openlab, drawn up in the form of a public document and not as a contract, were ready by the end of 2001. The document was designed to serve as the basis for the detailed agreements with individual partners, which now had to be concluded.
Three-year phases
At the start of 2002, after a few months of existence, openlab had three partners: Enterasys, Intel and KPN QWest (which later withdrew when it became a casualty of the bursting of the telecoms and dotcom bubbles). On 11 March, the first meeting of the board of sponsors was held at CERN. Chaired by the then director-general, Luciano Maiani, representatives of the industrial companies were in attendance as well as Manuel, Les Robertson (the head of the LHC Computing Grid project) and me. At the meeting I presented the first openlab annual report, which has since been followed by nine more, each printed in more than 1000 copies. Then, in July, openlab was joined by HP, and subsequently followed by IBM in March 2003 and by Oracle in October 2003.
In the meantime, a steering structure for openlab was set up at CERN in early 2003, headed by the new head of the IT Department, Wolfgang von Rüden, in an ex officio capacity. Sverre Jarp was the chief technical officer, while François Grey was in charge of communication and I was to co-ordinate the overall management. January 2003 was also a good opportunity to resynchronize the partnerships. The concept of three-year “openlab phases” was adopted, the first covering the years 2003–2005. Management practices and the technical focus would be reviewed and adapted through the successive phases.
Thus, Phase I began with an innovative and ambitious technical objective: each partnership was to form a building block of a common structure so that all of the projects would be closely linked. This common construction, which we were all building together, was called “opencluster”. It was an innovative and ambitious idea – but unfortunately too ambitious. The constraints ultimately proved too restrictive – both for the existing projects and for bringing in new partners. So what of a new unifying structure to replace opencluster? The idea was eventually abandoned when it came to openlab-II: although the search for synergies between individual projects was by no means excluded, it was no longer an obligation.
A further adjustment occurred in the meantime, in the shape of a new and complementary type of partnership: the status of “contributor” was created in January 2004, aimed at tactical, shorter-term collaborations focusing on a specific technology. Voltaire was the first company to acquire the new status on 2 April, to provide CERN with the first high-speed network based on Infiniband technology. A further innovation followed in July. François set up the openlab Student Programme, designed to bring students to CERN from around the world to work on openlab projects. With the discontinuation of the opencluster concept, and with the new contributor status and the student programme, openlab had emphatically demonstrated its ability to adapt and progress. The second phase, openlab-II, began in January 2006, with Intel, Oracle and HP as partners and the security-software companies Stonesoft and F-Secure as contributors. They were joined in March 2007 by EDS, a giant of the IT-services industry, which contributed to the monitoring tools needed for the Grid computing system being developed for the LHC.
The year 2007 also saw a technical development that was to prove crucial for the future of openlab. At the instigation of Jean-Michel Jouanigot of the network group, CERN and HP ProCurve pioneered a new joint-research partnership. So far, projects had essentially focused on the evaluation and integration of technologies proposed by the partners from industry. In this case, CERN and HP ProCurve were to undertake joint design and development work. The openlab’s hallmark motto, “You make it, we break it”, was joined by a new slogan, “We make it together”. Another major event followed in September 2008 when Wolfgang’s patient, months-long discussions with Siemens culminated in the company becoming an openlab partner. Thus, by the end of Phase II, openlab had entered the world of control systems.
At the start of openlab-III in 2009, Intel, Oracle and HP were joined by Siemens. EDS also decided to extend its partnership by one year. This third phase was characterized by a marked increase in education and communication efforts. More and more workshops were organized on specific themes – particularly in the framework of collaboration with Intel – and the communication structure was reorganized. The post of openlab communications officer, directly attached to the openlab manager, was created in the summer of 2008. A specific programme was drawn up with each partner and tools for monitoring spin-offs were implemented.
Everything was therefore in place for the next phase, which Wolfgang enthusiastically started to prepare at the end of 2010. In May 2011, in agreement with Frédéric Hemmer, who had taken over as head of the IT Department in 2009, he handed over the reins to Bob Jones. The fourth phase of openlab began in January 2012, not only with HP, Intel and Oracle as partners but also with the Chinese multinational Huawei, whose arrival extended openlab’s technical scope to include storage technologies.
After 10 years of existence, the basic principles of openlab still hold true and its long-standing partners are still present. While I, too, passed on the baton at the start of 2012, the openlab adventure is by no means over.
It all started a year ago over dinner, with a good bottle of wine in front of us. Steve Gourlay of Lawrence Berkeley National Laboratory, Stuart Henderson of Fermilab and I talked about the future of accelerator R&D in the US and what could be done to promote it.
We had no idea that an opportunity would present itself so quickly, that it would require such fast action or that blogging would be a central part of carrying out our mission.
A 2009 symposium called “Accelerators for America’s Future” had laid out some of the issues and obstacles, and in September 2011 the US Senate Committee on Appropriations asked the US Department of Energy (DOE) to submit a strategic plan for accelerator R&D by June 2012.
The DOE asked me to lead a task force to develop ideas about this important matter: what should the DOE do, over the next 10 years, to streamline the transfer of accelerator R&D so that its benefits could spread into society at large?
We were ready to go by October. The task force would have until 1 February 2012 – just four months – to identify research opportunities targeted to applications, estimate their costs and outline the possible impediments to carrying out such a plan. Based on this information, DOE officials would draw up their strategic plan in time for the congressional deadline.
It was a huge job. The 15 members of the task force, who hailed from six DOE national laboratories, industry, universities, DOE headquarters and the National Science Foundation, would need to gather facts, opinions and ideas from a range of people with a stake in this issue – from basic researchers at the national laboratories to university and industry scientists, entrepreneurs, inventors, regulators, industry leaders, defence agencies and owners of businesses both small and large.
We quickly held a workshop in Washington, DC, followed by others at the Argonne and Lawrence Berkeley National Laboratories, where we presented some of the major ideas. And to gather the most feedback from the most people in the shortest amount of time, I did something that I like to do: I started a blog.
Now, anyone who has been around high-energy physics for a while knows that blogs and other forms of cutting-edge social media are nothing new. We particle physicists, after all, started the World Wide Web as a way to share our ideas, and what became known as the arXiv to distribute preprints of our research results. Many physicists are avid bloggers, and a number of laboratories – from CERN to Fermilab and KEK – operate blogs of their own; you can see a sample of these blogs at www.quantumdiaries.org. But it is less usual to incorporate a blog into the work of a task force – although, for the life of me, I don’t know why you would not want to do so.
One of the first things that I did when I came to SLAC two years ago was to start a blog aimed at fostering communication among people in the Accelerator Directorate. A blog is a great way to talk about topics that are burning under our fingernails – although sometimes one needs to overcome a certain amount of cultural resistance to get people talking freely. Rather than filling inboxes with chains of e-mails, an “electronic blackboard” is easy to read and easy to post on, and it even has the added convenience of notifying you when a new post goes up.
In the good old days you could have everyone come to one place and have a panel discussion or an all-hands meeting – an easy, free-flowing exchange of ideas. A blog can be just such a thing: open and inviting.
Our task force invited literally thousands of people to comment on the issues at hand. What can be done to move the fruits of basic accelerator research and development more quickly into medicine, energy development, environmental cleanup, industry, defence and national security? What good could flow from such a movement? What are the barriers – especially between the national laboratories, where most of this research is done, and the industries that could develop it into products – and how can they be overcome?
Not everyone answered, but many did. More than half of the responses that we received came in through the blog rather than as e-mail messages. Within a couple of days it became clear, just from the people who blogged, that the medical community is starved of facilities and infrastructure for developing radiation therapy further, mainly with heavy-ion beams. People talked to us and among themselves. So it is no surprise that our report will describe opportunities for the DOE to make its infrastructure available to researchers who want to pursue this line of work.
Others talked about the difficulties that they had in working with government agencies or national laboratories and how this could be made easier – a worthwhile read during an easy afternoon.
So, blogging is not just fun; it’s a great way to gather information and encourage dialogue. Once our task force finalizes its report, the site will be up for a while, and then, when the next issue arises, the blackboard will get cleaned and I will start a new one.
The first week of the 47th Rencontres de Moriond, devoted to weak interactions and unified theories, came to a close on 10 March, leaving participants not only impressed but also puzzled by the new results presented at the conference held in La Thuile. The focus this year was to look at the results on searches for the Higgs boson, exclusion limits, searches for dark matter, precision measurements, flavour and neutrino physics, and to assess their impact on theoretical models, in particular those based on supersymmetry (SUSY) and extra dimensions.
The first excitement came from new measurements of the branching ratio for the decay Bs→ μμ from the LHCb, CMS and ATLAS experiments at CERN’s LHC. LHCb and CMS have a sensitivity within a factor of around two of the rate expected in the Standard Model for this extremely rare decay, where contributions from new physics could be detected. LHCb is setting the best limit to date, of less than 4.5 × 10–9, barely above the Standard Model prediction of around 3.5 × 10–9. This leaves little room for new physics. However, David Straub, a theorist affiliated with Scuola Normale Superiore and INFN in Pisa, showed that finding a branching ratio smaller than predicted by the Standard Model would also open the door to new physics, something that has previously received little attention but is now becoming possible with the increase in precision at the LHC.
The ATLAS and CMS collaborations showed updates to the results reported in December 2011. These include further analyses of the full 2011 data sample. In the low-mass Higgs region, the ranges not excluded at 95% confidence level (CL) have shrunk a little more. For ATLAS, all possible Higgs masses below 122.5 GeV (except at 118 GeV) are now excluded, together with those from 129 GeV up to 539 GeV; for CMS all masses between 127.5 and 600 GeV are excluded. This leaves only a small range where the Higgs boson could still be found.
The small excesses reported in December are still there, coming mostly from H → γγ for both experiments and also from H → llll for ATLAS. Having analysed the whole 2011 data set and included new decay channels, CMS observes a 2.8 σ deviation at 125 GeV, while ATLAS has a 2.5 σ excess at 126 GeV. When the “look-elsewhere effect” is taken into account over the 110–145 GeV range, the significance of this excess drops to about 2.1 σ.
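To illustrate the kind of correction involved, here is a minimal sketch in Python of the standard trials-factor approximation, in which the chance of a fluctuation appearing somewhere in the search range is estimated from the local p-value and an assumed number of effectively independent mass windows. The value of three windows used here is purely illustrative and is not taken from the experiments’ analyses.

    # Minimal sketch of a look-elsewhere (trials-factor) correction.
    # Assumption: ~3 effectively independent mass windows across 110-145 GeV;
    # this number is illustrative only, not the collaborations' calculation.
    from scipy.stats import norm

    def global_significance(local_sigma, n_trials):
        # Probability of a fluctuation at least this large in one window...
        p_local = norm.sf(local_sigma)
        # ...turned into the probability of seeing one anywhere in the range...
        p_global = 1.0 - (1.0 - p_local) ** n_trials
        # ...and converted back to a Gaussian significance.
        return norm.isf(p_global)

    print(round(global_significance(2.5, 3), 1))  # ~2.1 sigma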
Fermilab’s Tevatron experiments provided a surprise. Having analysed almost all of their data and greatly improved their analyses, the DØ collaboration sees a slight excess of events in the Higgs mass range of 115–145 GeV, while CDF sees one for mH < 150 GeV, coming mostly from the H → bb̄ and H → WW channels. The combined effect corresponds to a 2.2 σ excess above the predicted background. In addition, CDF and DØ have greatly improved the precision on the masses of the W boson and the top quark, both of which play an important role in testing the consistency of the Standard Model. In particular, CDF measures the W mass to be 80.387 ± 0.019 GeV, while DØ finds 80.375 ± 0.023 GeV. These recent measurements now confine the Higgs mass to mH = 94 +29/–24 GeV.
While all four collaborations – ATLAS, CMS, CDF and DØ – insisted that it was too early to jump to conclusions about the Higgs boson, theorists have already begun checking the effects of a Higgs of this mass and find that the currently allowed range puts constraints on SUSY models.
Away from the colliders, the announcement during the conference of the measurement of the neutrino mixing angle θ13 caused excitement (Daya Bay experiment measures θ13). Another highlight concerned the 8 σ annual modulation observed by the DAMA/LIBRA dark-matter experiment, which the collaboration interprets as a signal of dark matter. It has been suggested that the effect could be caused by cosmic muons, but new calculations show that the data are inconsistent with the cosmic muon hypothesis at 99% CL.
Possible signs of a Higgs boson with production cross-sections and branching ratios compatible with the Standard Model, coupled with no signs of new physics despite extremely precise tests, left all of the participants of this first week of “Moriond” rather puzzled. Perhaps it is time to go back to the drawing board.
Running the LHC at 4 TeV per beam in 2012 was a key outcome of this year’s LHC Performance workshop in Chamonix. Announcing this in his concluding statement, Steve Myers, CERN’s Director for Accelerators and Technology, gave the main priorities for the year: delivering enough luminosity to the ATLAS and CMS experiments to allow them independently to discover or exclude the Higgs; the proton–lead-ion run; and a machine-development programme to target operation after the long technical shutdown (LS1) planned for 2013–2014. The 2012 integrated-luminosity target is to achieve more than 15 fb–1, and LHC progress will be monitored carefully with two checkpoints in the year to see if a run extension is needed to meet this target.
These conclusions derived from week-long discussions in Chamonix, which had begun with a critical review of 2011. Looking back on an excellent year for the machine and its experiments, the workshop identified possible improvements to critical systems – such as beam instrumentation and machine protection – to maximize the performance of the 2012 run.
The experiments provided their requirements for 2012, namely at least 15 fb–1, either to discover the Higgs or to exclude it at 95% confidence level down to a mass of 115 GeV. Potential improvements to performance and machine availability include maximizing the time during which the LHC delivers collisions to the experiments, as well as exploiting the injectors’ ability to provide bunches with higher intensities and the smallest possible beam size (which translates directly into higher collision rates).
One of the big successes of 2011 was the “squeeze” – the reduction of the beam size at the interaction point – which was pushed further in the latter part of the year. Squeezing harder in 2012 might be possible in combination with tighter collimator settings. This could give a peak luminosity of around 6 × 1033 cm–2 s–1, compared with a maximum of 3.6 × 1033 cm–2 s–1 in 2011. With a bunch spacing of 50 ns and a total of 1380 bunches (as in 2011), an integrated luminosity of 15 fb–1 seems to be within reach, provided the tighter collimator settings prove operationally robust and the impressive performance of the LHC’s many hardware systems continues.
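As a rough illustration of where such a figure comes from (not the machine group’s own calculation): at fixed bunch intensity and bunch count, the peak luminosity scales roughly as the inverse of β*, the optical function that sets the beam size at the interaction point. The β* values in the sketch below are assumed for illustration only.

    # Rough scaling sketch: peak luminosity ~ 1/beta* at fixed bunch parameters,
    # neglecting changes in the crossing-angle geometric factor.
    L_peak_2011 = 3.6e33   # cm^-2 s^-1, the 2011 maximum quoted in the text
    beta_2011 = 1.0        # m, assumed illustrative value
    beta_2012 = 0.6        # m, assumed tighter squeeze for 2012

    L_peak_2012 = L_peak_2011 * (beta_2011 / beta_2012)
    print(f"estimated 2012 peak: {L_peak_2012:.1e} cm^-2 s^-1")  # ~6e33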
While discussions took place in Chamonix, the full maintenance programme of the winter technical stop was nearing completion. The long operational periods now in place at the LHC allow only a few short technical stops between beam runs. This meant that time was tight for the much needed maintenance and upgrades during this winter stop.
When the 2011 beam run ended on 7 December, the cryogenics team emptied the magnets of helium to work on their full programme of maintaining and improving the already good level of availability. In addition, there were planned interventions to the essential technical-infrastructure systems, such as electricity, cooling and ventilation. An impressive list of maintenance included enhancements to vacuum, power converters, RF, beam instrumentation, safety, collimation, the beam dump and injection. To improve machine performance, measures were taken to mitigate the effects of radiation on equipment in and around the LHC tunnel, including the installation of additional shielding in points 1 and 5, as well as the relocation of radiation-sensitive electronics to less exposed areas.
Additional work was required around Point 5 to repair RF fingers at the connection of two beam-vacuum chambers in CMS. The repair was completed successfully and the sector was then put under vacuum. The cool-down to 1.9 K of all LHC sectors, which had been floating at about 80 K over the Christmas break, took place in February so that powering and cryogenic tests could occur before the machine restart in March. The tests included electrical qualification of the superconducting circuits, to check insulation and instrumentation integrity, followed by powering tests aimed at pushing the performance of all LHC circuits to their operational level. These tests injected current through the superconducting circuits while checking the correct behaviour of the protection mechanisms – an essential element for the safe operation of the machine. Particular attention was needed to power the main dipole and quadrupole circuits at the higher current required for operation at 4 TeV.
Following this impressive progress, the machine is set to run at 4 TeV. After operating at 3.5 TeV per beam for two years, the LHC is now entering another domain at a new energy level.
The ALPHA collaboration has reported the first-ever resonant interaction with the antihydrogen atom, observed in their experiment at the Antiproton Decelerator (AD) at CERN.
ALPHA synthesizes antihydrogen from cryogenic plasmas of antiprotons and positrons. While the charged constituents can easily be confined through their interactions with electric and magnetic fields, confining neutral antihydrogen is much more difficult. It can be held in a highly inhomogeneous magnetic field (a “minimum-B” configuration) because it has a magnetic dipole moment, but the interaction is so weak that, even with superconducting magnets, only atoms with a kinetic energy equivalent to a temperature of 0.5 K or less can be trapped. This is how ALPHA has already held antihydrogen atoms for up to 1000 s (CERN Courier March 2011 p13 and July/August 2011 p6).
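To see why the limit is around 0.5 K, here is a back-of-the-envelope sketch: the trapping potential for a ground-state (anti)hydrogen atom, whose magnetic moment is close to one Bohr magneton, is μB times the increase in field from the trap centre to the wall. The 0.75 T well depth used below is an assumed, illustrative figure rather than a number quoted in the article.

    # Back-of-the-envelope trap depth for ground-state antihydrogen.
    # Assumption: magnetic moment ~ 1 Bohr magneton (dominated by the positron),
    # and a field increase of ~0.75 T from trap centre to wall (illustrative).
    mu_B = 9.274e-24   # J/T, Bohr magneton
    k_B  = 1.381e-23   # J/K, Boltzmann constant

    delta_B = 0.75     # T, assumed well depth in field
    depth_in_kelvin = mu_B * delta_B / k_B
    print(f"trap depth ~ {depth_in_kelvin:.2f} K")  # ~0.5 K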
Assuming that antihydrogen behaves like hydrogen, the 1s ground state will exhibit both hyperfine splitting (through the interaction between the spins of the positron and the antiproton) and splitting in a magnetic field (see figure). In a high magnetic field, these states are characterized by the direction of the spins of the antiproton and positron with respect to the field direction. The “low-field-seeking” states, labelled |c〉 and |d〉, can be trapped because their energy increases with magnetic-field strength. Atoms that end up in the |a〉 and |b〉 states (“high-field-seekers”) are expelled from the trap and annihilate in the surrounding apparatus.
In the latest experiment, a horn antenna directed microwaves into the atom trap so as to flip the spin of the positron in the stored atoms, thus driving the transitions |c〉 → |b〉 and |d〉 → |a〉. The experimental sequence was as follows: produce and trap antihydrogen (of the order of one trapped atom at a time on average); irradiate the trapped atom with microwaves resonant with either the |c〉 → |b〉 or the |d〉 → |a〉 transition (the two are excited alternately for 15 s each over a total of 180 s); then look for evidence of “lost” antihydrogen. As a control, the sequence was repeated either without microwaves or with the microwaves shifted off resonance. Each sequence took about 10 minutes of real time.
The collaboration used two methods to look for evidence of ejected antihydrogen. At the end of each sequence, the atom trap is rapidly de-energized, the fields falling with a time constant of about 9 ms. Any antihydrogen remaining in the trap is released and detected by ALPHA’s three-layer silicon detector over a 30 ms time window. It is then possible to compare the survival rate of anti-atoms for the three cases: no microwaves, resonant microwaves or off-resonant microwaves. The other method involves looking for direct annihilations of ejected antihydrogen during the periods in which resonant microwaves are applied; discrimination against background (primarily cosmic rays) is more challenging here because of the longer observation time.
In both types of measurement, ALPHA finds a strong signal for resonant interaction. For example, in 110 trials with off-resonance microwaves, 23 annihilations were observed when the trap was de-energized; with microwaves on resonance, only 2 annihilations were observed in 103 trials (the detection efficiency is about 50% in both cases). The on- and off-resonance measurements localize the resonance to no better than 100 MHz in about 29 GHz; the collaboration has not yet attempted to scan the lineshape to localize a resonant peak further.
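A naive way to see why this counts as a strong signal (emphatically not the collaboration’s statistical treatment, and ignoring backgrounds and systematics): if the on-resonance trials shared the off-resonance survival rate, around 21 annihilations would be expected in 103 trials, and the chance of seeing 2 or fewer is then vanishingly small.

    # Naive Poisson illustration, not the collaboration's analysis:
    # assume on-resonance trials share the off-resonance survival rate.
    from scipy.stats import poisson

    rate_off = 23 / 110               # annihilations per trial, off resonance
    expected_on = rate_off * 103      # ~21.5 expected in the on-resonance trials
    p_value = poisson.cdf(2, expected_on)  # probability of seeing 2 or fewer
    print(f"expected ~{expected_on:.1f}, P(<=2) ~ {p_value:.0e}")  # ~1e-7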
This measurement marks the beginning of anti-atom spectroscopy and illustrates that it is possible to make measurements on antimatter atoms using only a few atoms. In 2012 the ALPHA apparatus will give way to ALPHA-2, a new device that is further optimized for precision microwave and laser spectroscopy. ALPHA-2 will be commissioned during the upcoming run of the AD, from May to November.