
MoEDAL looks to the discovery horizon

MoEDAL, the “magnificent seventh” LHC experiment, held its first Physics Workshop in CERN’s Globe of Science and Innovation on 20 June. This youngest of the LHC experiments is designed to search for new physics signalled by highly ionizing particles, such as magnetic monopoles and the massive, long-lived electrically charged particles predicted by a number of theoretical scenarios.

Philippe Bloch of CERN opened the meeting, stressing CERN’s support for the MoEDAL programme. He spoke of the key role that smaller, well-motivated “high-risk” experiments such as MoEDAL play in expanding the physics reach of the LHC and reminded the audience that “one cannot predict with certainty where the next discovery will be made”.

Nobel laureate Gerard ’t Hooft began the morning’s theory talks with a reprise of his work on the monopole in grand unified theories (GUTs), elegantly showing how the beautiful monopole mathematics plays an important role in QCD and other fundamental theories. Arttu Rajantie of Imperial College London deftly recounted the story of “Monopoles from the Cosmos and the LHC”, concentrating on more recent theoretical scenarios, such as that of the electroweak “Cho-Maison” monopole, which are detectable at the LHC because they involve particles that are much lighter than the GUT monopole, with masses of around 1 TeV/c².

John Ellis and Nikolaos Mavromatos of King’s College London then changed the emphasis from magnetic to electric charge. Ellis described supersymmetry (SUSY) scenarios with massive stable particles (MSPs), such as sleptons, stops, gluinos and R-hadrons, which should be observable by MoEDAL. Mavromatos characterized the numerous non-SUSY scenarios that could give rise to MSPs, such as D-particles, Q-balls, quirks, doubly charged Higgs etc., all of which MoEDAL could detect.

In the afternoon, Albert de Roeck of CERN and Philippe Mermod of the University of Geneva laid out the significant progress made by CMS and ATLAS, respectively, in the quest for new physics revealed by highly ionizing particles. James Pinfold, of the University of Alberta and MoEDAL spokesperson, made the physics case for MoEDAL. He pointed out how its often-superior sensitivity to monopoles and massive slowly moving charged particles expanded the physics reach of the LHC in a complementary way. The MoEDAL collaboration, with 18 institutes from 10 countries, is still a “David” compared with the LHC “Goliaths” but its potential physics impact is second to none.

No workshop dealing with magnetic monopoles would be complete without an account of the search for cosmic monopoles. The two main experiments in this arena – MACRO, installed underground at the Gran Sasso National Laboratory in Italy, and the SLIM experiment, at the high-altitude Mount Chacaltaya Laboratory in Bolivia – were presented by Zouleikha Sahnoun of the SLIM collaboration. These two experiments still have the world’s best limits for GUT and intermediate-mass monopoles. Returning to Earth, David Milstead of Stockholm University described a project to search for trapped monopoles at the LHC. Importantly, this initiative is complementary to that of both MoEDAL and the main LHC experiments.

Why has the monopole not been seen in previous searches at accelerators? Vicente Vento of the University of Valencia offered an ingenious explanation: monopoles are hiding in monopolium, a bound state of a monopole and an antimonopole, a suggestion that Paul Dirac made in his 1931 paper. Vento went on to describe a couple of ways in which MoEDAL might detect monopolium.

In the last talk of the workshop, John Swain of Northeastern University presented the remarkable speculation that at the LHC the neutral Higgs boson could predominantly decay into a nucleus–antinucleus pair. He sketched, and nimbly defended, a theoretical justification for this surprising suggestion. Certainly, such a decay mode would be easily detectable by MoEDAL.

The clear message of the workshop is that MoEDAL has a potentially revolutionary physics programme aimed exclusively at the search for new physics, with the minimum of theoretical prejudices and the maximum exploitation of experimental search techniques. After all, in the words of J B S Haldane: “… the universe is not only queerer than we suppose, but queerer than we can suppose.”

Dark-matter filament binds galaxy clusters

Numerical simulations of structure formation in the universe reveal how clusters of galaxies form at the intersection of dark-matter filaments. The presence of such a filament connecting the galaxy clusters Abell 222 and Abell 223 has finally been detected through its weak gravitational lensing effect on background galaxies.

With the advent of supercomputers it became possible to simulate the action of gravity over cosmic time starting from a rather uniform distribution of matter in the early universe (CERN Courier September 2007 p11). Time-lapse films based on these simulations show the evolution of structure formation in a large volume of the universe. While the universe expands globally, gravity tends to collapse small initial regions of over-density. Matter is therefore contracting locally while being stretched on large scales. The opposite effects of gravitational collapse and cosmic expansion result in a sponge-like structure with a web of filaments delimiting big voids. The densest regions are at the intersection of filaments and are the formation sites of clusters of galaxies. As time proceeds, matter flows along the filaments to the nearest cluster, making the filaments thinner and thinner.

This sponge-like distribution of matter in the universe has been confirmed over recent decades by mapping the positions of thousands of galaxies in the nearby universe. According to the simulations, it is primarily cold dark matter that collapses to shape the filamentary skeleton of the universe; normal, baryonic matter follows the same route to form galaxies along these filaments. The detection of warm–hot intergalactic gas along walls of galaxy over-densities was another piece of evidence for the validity of this scenario (CERN Courier July/August 2010 p14). What remained to be detected was the actual presence of dark matter in these filaments. This has now been achieved by Jörg P Dietrich of the University of Michigan and collaborators in Germany, the UK and the US, looking at the supercluster formed by the galaxy clusters Abell 222 and Abell 223.

The technique used to map the distribution of dark matter in clusters of galaxies is always the same. It is called weak gravitational lensing and consists of measuring the small distortion of the shape of background galaxies induced by the presence of the invisible matter (CERN Courier January/February 2007 p11). As mass distorts space–time locally, it changes the path of light from remote galaxies and thus alters their shape as observed from Earth. The problem is that the true shape of the individual galaxies is not known, so it is difficult to know how strong their distortion is. However, by analysing tens of thousands of galaxies, a global trend of distortion can emerge with statistical significance.
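
The power of this averaging can be illustrated with a toy calculation – a minimal sketch with invented numbers, not the analysis of Dietrich and colleagues: individual galaxy shapes are dominated by an intrinsic ellipticity scatter of order 0.3, whereas the lensing distortion is of order 1%, so only the mean over tens of thousands of galaxies pins it down.

```python
import numpy as np

# Toy weak-lensing average (illustrative numbers only): the intrinsic
# ellipticities of individual galaxies act as noise, but the mean over
# many galaxies converges on the small lensing-induced distortion.
rng = np.random.default_rng(1)
true_shear = 0.01            # assumed lensing distortion (~1%)
intrinsic_scatter = 0.3      # typical spread of intrinsic ellipticities
n_galaxies = 40_000

observed = true_shear + rng.normal(0.0, intrinsic_scatter, n_galaxies)

estimate = observed.mean()
uncertainty = observed.std(ddof=1) / np.sqrt(n_galaxies)
print(f"shear estimate = {estimate:.4f} +/- {uncertainty:.4f}")  # about 0.010 +/- 0.0015
```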

Dietrich and colleagues find a bridge of matter between Abell 222 and Abell 223 at the 96% confidence level. The derived surface density of this structure is unexpectedly high compared with dark-matter filaments in numerical simulations. This suggests that the filament is not seen from the side but almost along its major axis, thus increasing its projected mass. The red-shift difference between the two galaxy clusters does, indeed, suggest that they are about 60 million light-years apart. The binding filament contributes as much as a complete galaxy cluster to the total mass of the supercluster. It is the site of an over-density of galaxies and includes hot intergalactic gas detected in X-rays. This gas contributes at most about 9% of the total mass of the filament. The remaining mass would essentially be composed of dark matter. This discovery is new evidence that the basic assumptions of numerical simulations are valid; in particular, that cold dark matter is an essential ingredient governing the formation of large-scale structures in the universe.

CMS studies the quark–gluon plasma

When atomic nuclei collide at high energies, they are expected to “melt” into a quark–gluon plasma (QGP) – a hot and dense medium made out of partons (quarks and gluons). At the LHC, many of the observed properties of the produced matter are consistent with this picture, similar to earlier findings by experiments at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and at CERN’s Super Proton Synchrotron. The quantitative characterization of this medium is still far from complete, but with more than an order of magnitude increase in the collision energy, the LHC is providing a tremendous opportunity to extend the studies. In particular, the higher energy collisions create much greater abundances of rare probes of the hot matter – such as jets (groups of high transverse-momentum (pT) particles emitted within a narrow cone), or bound states of heavy quark–antiquark pairs.

With its flawless performance in the heavy-ion run, the LHC exceeded the projected luminosity for lead–lead (PbPb) collisions in 2011, allowing the CMS experiment to record an integrated luminosity of 150 μb⁻¹ at a nucleon–nucleon centre-of-mass energy of √sNN = 2.76 TeV – about 20 times more data than in 2010. This new data set gives the CMS collaboration the opportunity to perform a detailed investigation of the medium using probes that are available for the first time in heavy-ion collisions, in a physics programme that partially overlaps but largely complements and extends the range of heavy-ion research conducted by the ALICE and ATLAS collaborations at the LHC. This article describes some of the heavy-ion results that CMS has obtained so far, with an emphasis on unique findings from the high-luminosity data.

The CMS heavy-ion programme is multifaceted, based on the diverse capabilities of the CMS detector and the broad interests and expertise of the members of the collaboration. The key to its success lies in the careful planning, support, expertise and hard work of the entire CMS collaboration. A well optimized triggering strategy with robust algorithms was in place for the 2011 run, allowing CMS to take maximum advantage of the delivered luminosity. A detailed inspection of each heavy-ion event was performed by the level-1 and high-level trigger systems, and the most interesting events containing rare signals were written to tape.

Properties of the bulk medium

Using the 2010 data, the LHC experiments were able to characterize the bulk properties of the partonic medium. The CMS collaboration performed detailed studies of soft-particle production by measuring the charged-particle multiplicity, transverse energy flow, azimuthal asymmetry in charged-particle and neutral-pion production, and two-particle correlations. The number of produced particles changes by orders of magnitude depending on whether the collision is “head-on” (central) or peripheral. The centrality of the PbPb collisions is characterized by the energy deposited in the forward calorimeters of the CMS detector, covering small polar angles with respect to the beamline (i.e. the pseudorapidity interval 3 < |η| < 5.2), with the most central collisions leaving the largest amount of energy in the detector. The events are then categorized based on this energy into percentile intervals of the total inelastic hadronic PbPb cross-section (the 0–20% centrality class meaning the 20% most central collisions, etc). Quantitatively, the centrality is usually characterized by the number of nucleons participating in the actual collision (i.e. those in the overlap zone of the two nuclei) denoted by Npart.
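
As a rough illustration of this percentile-based classification (a minimal sketch with a made-up forward-energy distribution, not the calibrated CMS procedure), one can rank events by their forward-calorimeter energy and convert the rank into a centrality percentage:

```python
import numpy as np

# Sketch of percentile-based centrality classification. `forward_energy`
# is a stand-in for the per-event energy sum in the forward calorimeters;
# the real procedure also corrects for trigger and selection efficiency.
rng = np.random.default_rng(42)
forward_energy = rng.exponential(scale=1.0, size=100_000)

sorted_energy = np.sort(forward_energy)

def centrality_percent(e_fwd: float) -> float:
    """Percentage of events with larger forward energy (0 = most central)."""
    n_above = len(sorted_energy) - np.searchsorted(sorted_energy, e_fwd, side="right")
    return 100.0 * n_above / len(sorted_energy)

# Example: assign one event to a 20%-wide centrality class.
c = centrality_percent(forward_energy[0])
low = int(c // 20) * 20
print(f"centrality = {c:.1f}%  ->  class {low}-{low + 20}%")
```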

As experiments at RHIC had previously observed, the hot matter produced at the LHC exhibits strong collective-flow behaviour. In off-centre collisions the initial nuclear overlap zone is spatially asymmetrical with an approximately ellipsoidal shape. This asymmetry leads to instantaneous pressure gradients that are more effective in pushing particles out from the collision zone along the minor axis of the ellipse, rather than perpendicular to it. As a result, the matter produced in the collision undergoes anisotropic expansion, which is observed as a collective flow of particles with distinct azimuthal asymmetry. A Fourier analysis of the azimuthal angular distribution of the final-state particles reveals important aspects of the collision dynamics, and provides constraints to the equation of state and the viscosity (resistance to flow) of the medium.
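
A minimal sketch of such a Fourier analysis is shown below (generated angles, not CMS data; real analyses also correct for the event-plane resolution and for non-flow correlations): the second-harmonic event plane Ψ2 is estimated from the Q-vector, and the elliptic-flow coefficient v2 is the average of cos 2(φ − Ψ2).

```python
import numpy as np

# Generate azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*(phi - Psi_2))
# with simple accept-reject, then recover Psi_2 and v2 from the sample.
rng = np.random.default_rng(7)
v2_true, psi2_true, n = 0.08, 0.3, 20_000

phi = []
while len(phi) < n:
    x = rng.uniform(0.0, 2.0 * np.pi)
    if rng.uniform(0.0, 1.0 + 2.0 * v2_true) < 1.0 + 2.0 * v2_true * np.cos(2.0 * (x - psi2_true)):
        phi.append(x)
phi = np.array(phi)

# Second-harmonic event plane from the Q-vector components...
psi2 = 0.5 * np.arctan2(np.sin(2.0 * phi).sum(), np.cos(2.0 * phi).sum())
# ...and the observed elliptic anisotropy with respect to it.
v2_obs = np.cos(2.0 * (phi - psi2)).mean()
print(f"Psi_2 = {psi2:.2f} rad, v2 (uncorrected) = {v2_obs:.3f}")
```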

For head-on PbPb collisions, CMS estimates the energy density per unit volume to be about 14 GeV/fm³ at a time of 1 fm/c after the collision, which is about 100 times larger than the density of normal nuclear matter and 2.6 times greater than that obtained at the highest RHIC energy. A significant increase of the mean transverse energy per particle is similarly observed. Despite this increase, the trends in the collective flow and correlation measurements show relatively modest changes compared with RHIC, indicating that the general properties of the matter produced at the LHC, as observed through the study of soft particles, are consistent with a strongly interacting partonic medium.
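
A common back-of-the-envelope route to such a number is the Bjorken estimate, which relates the initial energy density to the measured transverse energy per unit rapidity under the assumption of boost-invariant longitudinal expansion (given here as a standard textbook formula, not necessarily the exact CMS procedure):

$$\varepsilon(\tau_0) \simeq \frac{1}{A_\perp\,\tau_0}\,\frac{\mathrm{d}E_T}{\mathrm{d}y},$$

where A⊥ is the transverse overlap area of the colliding nuclei, τ0 ≈ 1 fm/c is the formation time and dET/dy is the transverse energy per unit rapidity at mid-rapidity.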

Jet quenching

A key diagnostic tool that provides information about the density and composition of the medium produced in high-energy heavy-ion collisions comes from the measurements of high transverse-momentum jets. These “hard probes” result from relatively rare violent scatterings of the quarks and gluons that comprise the incoming nuclei. Since the production cross-sections of these energetic partons are calculable using the well established techniques of perturbative QCD, they have long been recognized as particularly useful “tomographic” probes of the hot medium.

The majority of the produced jets originate in the scattering of gluons or light quarks (up, down or strange), which are expected to lose energy while propagating through the medium. Less frequently the outgoing parton is a heavy charm or bottom quark that may also interact – although possibly less strongly – with the medium. Of particular interest are the events that produce hard-scattering probes that do not interact strongly, such as prompt photons or weak bosons, as they provide precise constraints on the energy of the recoiling parton and enable a controlled measurement of the parton energy loss in the medium. Multiple complementary measurements involving different probes can be performed using CMS, because of the detector’s high resolution, granularity, large acceptance, high-rate read-out capability and triggering.

The enormous energy loss of the partons propagating through the hot and dense medium became immediately apparent in the online event displays of the first PbPb collisions in the LHC, which revealed strikingly unbalanced dijet events and photon–jet events (figure 1). Subsequently, both ATLAS and CMS published detailed studies of the dijet transverse-momentum asymmetry. CMS expanded on this initial observation with a comprehensive set of measurements aiming not only to quantify the amount of lost energy, but also to answer the question: “Where does the lost energy go?”

The data from jet-track correlations indicate that the large energy lost by the partons is transferred to soft hadrons, which are scattered relatively far away in rapidity from the jet axis. To investigate the possible modifications of the jet structure, measurements of the jet shapes and fragmentation functions are also pursued. The high-luminosity data set collected in 2011 allows for further characterization of the dijet momentum imbalance by studying jets up to unprecedented values of transverse momenta. CMS has recently published a paper on this dijet momentum imbalance, which is found to persist in central collisions up to the highest values of leading-jet transverse momenta studied – even the most energetic jets do not escape the medium unaltered.

Further tests of the jet-quenching hypothesis use control measurements involving probes that do not interact strongly, such as photons, Z and W bosons. The transverse-momentum spectra of charged hadrons and isolated photons are compared with their equivalents in pp collisions. Figure 2 shows the suppression factor, RAA, of the production rates of high transverse-momentum particles, scaled to be unity if nuclear collisions are a simple superposition of pp collisions. As expected, a strong suppression is observed for charged particles (RAA < 1), but the yields of electroweak probes appear unaffected by the medium (RAA ≈ 1). The measurement of J/ψ particles from b-hadron decays shows clearly that b quarks are also strongly suppressed.
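
The suppression factor plotted in figure 2 follows the conventional definition (the standard form; the exact binning and normalization are analysis-specific):

$$R_{AA}(p_T) = \frac{\mathrm{d}N^{\mathrm{PbPb}}/\mathrm{d}p_T}{\langle T_{AA}\rangle\,\mathrm{d}\sigma^{pp}/\mathrm{d}p_T},$$

where ⟨TAA⟩ is the average nuclear overlap function, proportional to the number of binary nucleon–nucleon collisions, so that RAA = 1 would be obtained if a PbPb collision were simply a superposition of independent pp collisions.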

Having seen that isolated photons do not suffer suppression in the medium, CMS took the study to the next level using the high-luminosity data from 2011. The first measurement of a photon–jet imbalance was performed by examining events containing an isolated photon (γ) with pT > 60 GeV/c and an associated jet with pT > 30 GeV/c. The transverse momenta of the jet and the photon are compared by forming the ratio x = pT(jet)/pT(γ). Figure 3 shows the centrality dependence of the average momentum imbalance, as well as the fraction, R, of isolated photons with an associated jet partner. The measurements in PbPb collisions are compared with those in pp collisions at the same energy and with simulations that do not include the jet-quenching effect. A significant decrease in <x> and R compared with the simulation is observed for more central PbPb collisions, indicating a larger parton energy loss in the collisions where the volume of the medium is larger.
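
The two observables can be illustrated with a short sketch over a made-up event list (hypothetical numbers and a simplified selection, not CMS analysis code): x is the jet-to-photon pT ratio for each photon–jet pair, and R is the fraction of selected photons that have a jet partner above threshold.

```python
import numpy as np

# Each entry: (photon pT in GeV/c, list of associated-jet pT in GeV/c).
# The events below are invented purely to illustrate the bookkeeping.
events = [
    (75.0, [62.0]),
    (90.0, [28.0]),        # jet below the 30 GeV/c threshold -> no partner
    (64.0, [45.0, 33.0]),  # take the leading associated jet
    (110.0, []),           # photon without any reconstructed jet partner
]

PHOTON_PT_MIN, JET_PT_MIN = 60.0, 30.0

x_values = []
n_photons = n_with_jet = 0
for pt_gamma, jet_pts in events:
    if pt_gamma < PHOTON_PT_MIN:
        continue
    n_photons += 1
    good_jets = [pt for pt in jet_pts if pt > JET_PT_MIN]
    if good_jets:
        n_with_jet += 1
        x_values.append(max(good_jets) / pt_gamma)

print(f"<x> = {np.mean(x_values):.2f}, R = {n_with_jet / n_photons:.2f}")
```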

While the jet-quenching phenomenon is undoubtedly established from the data, a complete theoretical understanding of the underlying parton energy-loss mechanism is still lacking. The data sample with 150 μb⁻¹ of integrated PbPb luminosity allows the study of the azimuthal anisotropy of charged-particle production up to high pT, providing additional information on the path-length dependence of the in-medium parton energy loss. Since the initial nuclear overlap zone for off-centre collisions is azimuthally asymmetrical with an approximately ellipsoidal shape, partons propagating in the direction of the minor axis of the ellipse are expected to lose less energy than those propagating along the major axis. This leads to a final particle distribution (at any given transverse momentum) that is not cylindrically symmetrical, but has a cosine-shaped modulation as a function of azimuthal angle (that is, the rotation around the beamline). Figure 4 shows the half-amplitude of this cosine modulation at different transverse momenta and collision centralities. Nonzero elliptic anisotropy is observed even at high pT (up to pT ≈ 40 GeV/c), where most charged particles originate from the fragmentation of jets. These measurements are thus indirectly related to the amount of energy loss (and its dependence on the path length) of energetic partons inside the hot QCD medium.

Quarkonium suppression

The ultimate proof for the formation of QGP in heavy-ion collisions would be a measurement that demonstrates the presence of deconfined quarks and gluons. In the plasma state, the quark and gluon colour charges would be neutralized (or screened), similarly to the Debye screening of the electric charges of electrons and ions in an electromagnetic plasma. The colour-charge screening can be studied experimentally through the measurement of quarkonia, which consist of bound heavy quark–antiquark pairs (charm or beauty). In the QGP, the attractive force binding the pair together would be reduced, hindering the formation of the quarkonium states. Thus, observation of suppression in the production rate of these particles in comparison with the production rate in pp collisions is a signature of deconfinement, although other processes may obscure the effect.

CMS has excellent capabilities for muon detection and has measured the production rates of several particles (J/ψ, ψ(2S), ϒ(1S,2S,3S)) that have different radii and probe colour screening at different distance scales. The various quarkonium states are expected to “melt” in the QGP at different temperatures, corresponding to their respective binding energies. The measurement of the suppression pattern of several of these particles is thus needed to constrain the initial temperature in the collision and to demonstrate deconfinement.

The suppression of the excited states of the ϒ family was already seen in the 2010 data, albeit with limited statistical precision. With the high-luminosity data from 2011 the effect has been confirmed and studied in much more detail. Figure 5 shows the dimuon invariant-mass distribution obtained in PbPb collisions compared with the distribution measured in pp collisions, and clearly reveals the strong suppression of the excited ϒ states. To quantify the effect, the yields of the ϒ(2S) or ϒ(3S) states are compared with the yield of the ϒ(1S) state in PbPb and pp collisions by forming a double ratio: [ϒ(nS)/ϒ(1S)]PbPb/[ϒ(nS)/ϒ(1S)]pp. The values of these double ratios are determined to be 0.21 ± 0.07 (stat.) ± 0.02 (syst.) for n = 2 and less than 0.17 at 95% confidence level for n = 3. The individual ϒ(1S) and ϒ(2S) states are suppressed, compared with pp collisions, by factors of about 2 and 10, respectively. CMS thus finds the expected melting pattern, with the suppression ordered according to the binding energy of the respective quarkonium states. The ψ(2S) state at high pT also fits this picture, although a hint of an intriguing opposite trend, with limited statistical significance, was recently observed at low pT. More data are needed to confirm this observation, in particular a larger pp reference data set at the matching energy of 2.76 TeV.

By July 2012, the CMS heavy-ion programme had produced 16 papers submitted to refereed journals, of which 11 have already been published. More analyses are under way, and the collaboration is also preparing for the upcoming proton–lead collisions in the LHC, which will serve as a reference for normal nuclear effects. Such a control measurement, together with short, high-luminosity pp runs at 2.76 TeV and 5 TeV (to match the proton–lead collision energy) requested by CMS will complete the first round of pioneering investigations into the extremes of strongly interacting matter. With these data, the first steps of a journey back to the state of the universe just a few microseconds after the Big Bang are being taken at the LHC, and the CMS experiment is one of the “time-machines” making this exciting journey possible.

Particle identification in ALICE boosts QGP studies

Under extreme conditions of temperature and/or density, hadronic matter “melts” into a plasma of free quarks and gluons – the so-called quark–gluon plasma (QGP). To create these conditions in the laboratory, heavy ions (e.g. lead nuclei) are accelerated and made to collide head on, as was done at the LHC for two dedicated periods in 2010 and 2011. A key design consideration of the ALICE experiment at the LHC is the ability to study QCD and quark (de)confinement under these extreme conditions. This is done by using particles – created inside the hot volume as it expands and cools down – that live long enough to reach the sensitive detector layers located around the interaction region. The physics programme at ALICE relies on being able to identify all of them – i.e. to determine if they are electrons, photons, pions, etc – and to determine their charge. This involves making the most of the (sometimes slightly) different ways that particles interact with matter. This article gives an overview of the methods used for particle identification (PID) and their implementations in ALICE and describes how new technologies were used to push the state of the art.

Penetrating muons

Muons can be identified by the fact that they are the only charged particles able to pass almost undisturbed through any material. This is because muons with momenta below a few hundred GeV/c do not suffer from radiative energy losses and so do not produce electromagnetic showers. Also, being leptons, they are not subject to strong interactions with the nuclei of the material that they traverse. This behaviour is exploited in muon spectrometers in high-energy-physics experiments by installing muon detectors either behind the calorimeter systems or behind thick absorber materials. All other charged particles are completely stopped, producing electromagnetic (and hadronic) showers.

The muon spectrometer in the forward region of ALICE features a thick, complex front absorber and an additional muon filter comprising an iron wall 1.2 m thick. Muon candidates selected from tracks penetrating these absorbers are measured precisely in a dedicated set of tracking detectors. Pairs of muons are used to observe the full spectrum of heavy-quark vector-meson resonances (J/Ψ, …). Their production rates can be analysed as a function of transverse momentum and collision centrality to investigate dissociation arising from colour screening. In addition, muons from the semileptonic decay of open charm and open beauty can also be studied with the muon spectrometer.

Weighing particles

Hadron identification can be crucial for heavy-ion physics. Examples are open charm and open beauty, which allow the investigation of the mechanisms for the production, propagation and hadronization of heavy quarks in the hot and dense medium formed in the heavy-ion collisions. The most promising channel is the process D⁰ → K⁻π⁺, which requires efficient hadron identification owing to the small signal-to-background ratio.

Charged hadrons (in fact, all stable charged particles) are unambiguously identified if their mass and charge are determined. The mass can be deduced from measurements of the momentum and of the velocity. Momentum and the sign of the charge are obtained by measuring the curvature of the particle’s track in a magnetic field. To obtain the particle velocity there are four methods based on measurements of time-of-flight (TOF) and ionization, and on the detection of transition radiation (TR) and Cherenkov radiation. Each method works well in different momentum ranges or for specific types of particle. They are combined in ALICE to measure, for instance, particle spectra. Figure 1, for example, shows the abundance of pions in lead–lead (PbPb) collisions as a function of transverse momentum and collision centrality.
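
In its simplest form (a generic relation rather than a formula specific to any one ALICE detector), the mass follows from the measured momentum and velocity, for instance with the velocity taken from a time-of-flight measurement over a path length L:

$$m = \frac{p}{\gamma\beta c} = \frac{p}{c}\sqrt{\frac{1}{\beta^2}-1}, \qquad \beta = \frac{L}{c\,t_{\mathrm{TOF}}}.$$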

Kicking electrons from atoms

The characteristics of the ionization process caused by fast, charged particles passing through a medium can be used for PID. The velocity dependence of the ionization strength is described by the Bethe–Bloch formula, which gives the average energy loss of charged particles through inelastic Coulomb collisions with the atomic electrons of the medium. Multiwire proportional counters (MWPCs) or solid-state counters are often used as the detection medium because they provide signals with pulse heights that are proportional to the ionization strength. Because energy-loss fluctuations can be considerable, in general many pulse-height measurements are performed along the particle track to optimize the resolution of the ionization measurement.
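
For reference, the formula in its standard form (as quoted, for example, in the Particle Data Group reviews) reads

$$-\left\langle\frac{\mathrm{d}E}{\mathrm{d}x}\right\rangle = K z^2 \frac{Z}{A}\frac{1}{\beta^2}\left[\frac{1}{2}\ln\frac{2 m_e c^2 \beta^2\gamma^2 T_{\mathrm{max}}}{I^2} - \beta^2 - \frac{\delta(\beta\gamma)}{2}\right],$$

where z is the charge of the incident particle, Z and A characterize the medium, Tmax is the maximum energy transfer to a single electron, I is the mean excitation energy, δ is the density-effect correction and K ≈ 0.307 MeV mol⁻¹ cm². The steep 1/β² rise at low velocity is what makes the ionization strength such a powerful discriminator between particle species of the same momentum.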

In ALICE this technique is used for PID in the large time-projection chamber (TPC) and in four layers of the silicon inner tracking system (ITS). A TPC is a large volume filled with a gas as the detection medium. Almost all of this volume is sensitive to the traversing charged particles, while featuring a minimal material budget. The straightforward pattern recognition (continuous tracks) makes TPCs the perfect choice for high-multiplicity environments, such as in heavy-ion collisions, where thousands of particles have to be tracked simultaneously. Inside the ALICE TPC, the ionization strength of all tracks is sampled up to 159 times, resulting in a resolution of the ionization measurement as good as 5%. Figure 2 shows the TPC ionization signal as a function of the particle rigidity for negative particles, indicating the different characteristic bands for various types of particle. A particle is identified when the corresponding point in the diagram can be associated with only one such band within the measurement errors. The method works especially well for particles with low momenta, up to several hundred MeV/c.

Using a stopwatch

TOF measurements yield the velocity of a charged particle by measuring the flight time over a given distance along the track trajectory. Provided the momentum is also known, the mass of the particle can then be derived from these measurements. The ALICE TOF detector is a large-area detector based on multigap resistive plate chambers (MRPCs) that cover a cylindrical surface of 141 m², with an inner radius of 3.7 m. The MRPCs are parallel-plate detectors built of thin sheets of standard window glass to create narrow gas gaps with high electric fields. These plates are separated using fishing lines to provide the desired spacing; 10 gas gaps per MRPC are needed to arrive at a detection efficiency close to 100%.

The simplicity of the construction allows a large system to be built with an overall TOF resolution of 80 ps at a relatively low cost. This performance allows the separation of kaons, pions and protons up to momenta of a few GeV/c. Combining such a measurement with the PID information from the ALICE TPC has proved useful in improving the separation between the different particle types, as figure 3 shows for a particular momentum range.
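
The reason the separation runs out at a few GeV/c can be seen from a rough estimate (taking the inner radius of 3.7 m as the flight path; this is an approximation, not the full reconstruction): for two particle species of the same momentum p, the flight-time difference is approximately

$$\Delta t \simeq \frac{L}{2c}\,\frac{(m_1^2 - m_2^2)c^4}{(pc)^2},$$

which for kaons and pions at p = 2.5 GeV/c over L ≈ 3.7 m gives Δt ≈ 220 ps – still nearly three times the 80 ps resolution, but shrinking as 1/p², so the two species merge at higher momenta.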

Detecting additional photons

The identification of electrons and positrons in ALICE is achieved using a transition radiation detector (TRD). In a similar manner to the muon spectrometer, this system enables detailed studies of the production of vector-meson resonances, but with extended coverage down to the light vector-meson ρ and in a different rapidity region. Below 1 GeV/c, electrons can be identified via a combination of PID measurements in the TPC and TOF. In the momentum range 1–10 GeV/c, the fact that electrons may create TR when travelling through a dedicated “radiator” can be exploited. Inside such a radiator, fast charged particles cross the boundaries between materials with different dielectric constants, which can lead to the emission of TR photons with energies in the X-ray range. The effect is tiny and the radiator has to provide many hundreds of material boundaries to achieve a high enough probability to produce at least one photon. In the ALICE TRD, the TR photons are detected just behind the radiator using MWPCs filled with a xenon-based gas mixture, where they deposit their energy on top of the ionization signals from the particle’s track.

The ALICE TRD was designed to derive a fast trigger for charged particles with high momentum and can significantly enhance the recorded yields of vector mesons. For this purpose, 250,000 CPUs are installed right on the detector to identify candidates for high-momentum tracks and analyse the energy deposition associated with them as quickly as possible (while the signals are still being created in the detector). This information is sent to a global tracking unit, which combines all of the information to search for electron–positron track pairs within only 6 μs.

Measuring an angle

Cherenkov radiation is a shock wave resulting from charged particles moving through a material faster than the velocity of light in that material. The radiation propagates with a characteristic angle with respect to the particle track, which depends on the particle velocity. Cherenkov detectors make use of this effect and in general consist of two main elements: a radiator in which Cherenkov radiation is produced and a photon detector. Ring-imaging Cherenkov (RICH) detectors resolve the ring-shaped image of the focused Cherenkov radiation, enabling a measurement of the Cherenkov angle and thus the particle velocity. This, in turn, is sufficient to determine the mass of the charged particle.
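
The relation at the heart of the technique is the standard Cherenkov condition:

$$\cos\theta_c = \frac{1}{n\beta},$$

so measuring the emission angle θc in a radiator of known refractive index n gives the velocity β directly, which, combined with the momentum from the tracking detectors, yields the mass.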

If a dense medium (large refractive index) is used, only a thin radiator layer of a few centimetres is required to emit a sufficient number of Cherenkov photons. The photon detector is then located at some distance (usually about 10 cm) behind the radiator, allowing the cone of light to expand and form the characteristic ring-shaped image. Such a proximity-focusing RICH is installed in the ALICE experiment. The High-Momentum Particle IDentification (HMPID) detector is a single-arm array that has a reduced geometrical acceptance. Similar to the ALICE TOF, it can identify individual charged hadrons up to momenta of a few GeV/c but with slightly higher precision.

Completing the picture

The ALICE detector also contains other components that can identify particles. A high-resolution electromagnetic calorimeter, the PHOS, which covers a limited acceptance domain at central rapidity, provides data to test the thermal and dynamical properties of the initial phase of the collision by measuring photons emerging directly from the collision. Last, a pre-shower detector, the PMD, studies the multiplicity and spatial distribution of such photons in the forward region.

Each method described in this article provides a different piece of information. However, only by combining them in the analysis of the data produced by ALICE can the particles produced in the collisions be measured in the most complete way possible. In this way they can reveal the whole picture of what happens in the collisions.

The PS Booster hits 40

On 26 May 1972, the PS Booster (PSB) accelerated its first protons to the design energy of 800 MeV. The running-in team, led by Heribert Koziol, had prepared for this event by already sending beam from the 50 MeV Linac1 through the PSB injection line while the geometers were still busy aligning the ring magnets. Just five months later, the team succeeded in accelerating at half of the design intensity. This achievement was a great relief for the entire staff of the then Synchrotron Injector (SI) division, led by Giorgio Brianti and his deputy, Helmut Reich. However, the path to full intensity proved unexpectedly tough.

The concept of the PSB dates from the mid-1960s, when CERN’s 26 GeV Proton Synchrotron (PS) was getting into its stride and new and demanding clients – the Intersecting Storage Rings (ISR) and the Super Proton Synchrotron (SPS) – were on the horizon. By then, ideas to improve the performance of the PS by raising its output beam intensity from 10¹² to 10¹³ protons per pulse (ppp) were already being considered.

Boosting intensity

Particles in synchrotrons suffer from resonances generated by residual imperfections of the magnetic guide (dipole) and focusing (quadrupole) fields, in particular for certain values of the “tunes” QH, QV (the number of horizontal/vertical oscillations per machine turn). Simple relations of the type mQH + nQV = p, with m and n small integers, correspond to harmful “stop-bands” and must be avoided. In addition, high-intensity beams experience space charge, where the repulsive Coulomb force works against the external focusing and leads to a “Laslett tune-spread”, ΔQ. Studies at the PS (and also at the Alternating Gradient Synchrotron at Brookhaven) established that intensities leading to ΔQ > 0.25 (the “space-charge limit”) cannot be digested at injection because they do not keep clear of stop-bands up to order |m| + |n| = 4. At higher (relativistic) energies, the repulsive force becomes weaker while the beam also gets stiffer – so that ΔQ shrinks with rising energy, scaling with 1/βγ². Therefore, increasing the PS injection energy from 50 to 800 MeV would potentially boost its beam intensity by an order of magnitude.
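
Schematically (a back-of-the-envelope check of the scaling quoted above, not the detailed PSB design calculation), the incoherent Laslett tune shift behaves as

$$\Delta Q \propto \frac{N}{\epsilon_n\,\beta\gamma^2},$$

where N is the number of protons per bunch and εn the normalized transverse emittance. For protons, βγ² rises from about 0.35 at 50 MeV to about 2.9 at 800 MeV, so for the same ΔQ the higher injection energy accepts roughly eight times more protons within a given emittance.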

After a study led by the late Werner Hardt of several variants, including a 200 MeV linac and a rapid-cycling synchrotron, a slow-cycling 800 MeV booster synchrotron consisting of four superposed rings was adopted. Its injection energy would still be 50 MeV but each of the four rings could be filled up to the same space-charge limit as the whole PS ring, yielding a factor of 4. In addition, slow cycling would allow for longer bunches, yielding a further factor of 1.5. Hence, the new machine would accommodate – at 50 MeV – 10¹³ ppp rather than the 1.6 × 10¹² ppp accelerated in the PS.

The four rings stacked on top of each other (figure 1), with a radius of 25 m (1/4 of the PS), consist of separate dipoles and quadrupoles (“separated function” magnets) – in contrast to the PS, which has “combined function” dipoles that incorporate gradients. Each of the 32 dipoles and 48 quadrupoles consists of a vertical stack of four magnets with a common yoke, enabling one main power supply to provide the current to all of them in series. An elaborate system of correction loops allows adjustment of the guide and focusing fields in each of the four rings. The 50 MeV beam from Linac1 is distributed vertically and multiturn-injected into each ring. Originally, the RF system for the PSB accelerated five bunches per ring (RF harmonic number h = 5) to an ejection energy of 800 MeV. After being synchronized, the five bunches were horizontally extracted and recombined vertically to form a string of 20 bunches, corresponding to the harmonic number of the PS’s RF system. The transverse optics (“lattice”) of the PSB ring, as well as the injection system and the recombination/transfer line (figure 2), were all designed by the late Claude Bovet.

Construction of the PSB started in 1968, with the centre of the machine lying exactly on the Swiss-French border. Many novel technological challenges had to be addressed, such as: unprecedented requirements on field quality and equality between the superposed magnet gaps; “kicker” magnets with rapid rise/fall times; and stable and reliable power converters operating directly from the grid. The ambitious aims for beam intensity and quality demanded special efforts for mechanical stability, beam diagnostics, vacuum equipment, radiation protection, assuring hands-on maintenance, and general reliability. Moreover, the PSB served as a “guinea pig” for the then innovative computer-control system aimed at monitoring all of the machine parameters.

Design intensity – and beyond

While the quick initial success of the running-in testified to the soundness of the basic choices and the high quality of the construction work, major difficulties later hampered the progress towards design performance. The first was a strong energy-jitter of the beam from Linac1, which was eventually stabilized at the expense of the beam current (50 mA instead of the 100 mA that was specified). With the help of experienced accelerator physicists, Jacques Gareyte and the late Frank Sacherer, the obstacles were addressed one by one. The “working point” (QH, QV) was moved from around (4.8, 4.8) to (4.2, 5.3), mitigating transverse beam blow-up caused by repeated stop-band crossing arising from the synchrotron motion of the protons within the bunch. Furthermore, a fast change of the working point during acceleration proved beneficial, profiting from the shrinking ΔQ (figure 3). Destructive coherent bunch-oscillations were stabilized by “Magnani shaking” and later by a coupled-bunch feedback system. A first pay-off came in 1973, when the search for neutral currents with the Gargamelle bubble chamber benefitted from the increased supply of protons from the PS with the PSB as injector. By 1974 the PS reached 10¹³ ppp – the design performance of the upgrade programme.

However, this is not the end of the story. By 1978, the new Linac2 – still at 50 MeV but with 150 mA beam current – replaced Linac1 and dramatically increased the PSB’s potential, although it took a couple of years to exploit this improvement fully. Installing multipole correctors to eliminate stop-bands allowed intensities with larger tune-spread to be accommodated. The addition of “bunch-flattening cavities” in the PSB, fostered by the late George Nassibian, lowered the peak density of the bunches and thus ΔQ, enabling an intensity some 25% higher to be accepted. Fast feedback systems compensating the unwanted excitation of the cavities by the bunches (“beam loading”), as well as transverse dampers, also proved beneficial, culminating in the PSB’s acceleration of 3 × 10¹³ ppp by 1985. Now the PS was at pains to digest this beam at 800 MeV, in particular the high-density proton bunches for antiproton production, which were obtained by simultaneously ejecting two PSB rings and adding the bunches vertically in the recombination line, almost doubling their line density. To cope with this, the PSB was promoted to a 1 GeV machine after minor hardware modifications, increasing the PS space-charge limit by a further 25%.

Experiments involving light ions became popular in the early 1980s and the old Linac1 was successfully converted to accelerate oxygen and sulphur ions, deuterons and alpha particles. The PSB followed suit. However, the issue now was not too high an intensity but one that was very low (by three orders of magnitude), which together with new acceleration frequencies challenged the beam diagnostics and RF systems. The low-intensity ion cycles had to be added to the supercycle – during which all beam parameters are modified from cycle to cycle to adapt to the requirements of the end-users. With the advent of a dedicated ion accelerator, Linac3, the PSB made its way up the periodic table to reach lead and, later, indium ions.

When CERN’s first accelerator – the venerable 600 MeV Synchrocyclotron, by then feeding the ISOLDE online separator with protons – came to the end of its life after 33 years of meritorious service, ISOLDE looked for a new source. Following the suggestion to use the PSB’s 1 GeV beam on the many spare cycles available, ISOLDE was relocated to the PSB in 1992: an experimental area of its own was the coming-of-age present for the Booster’s 20th anniversary.

Fit for the LHC

It is a CERN tradition that new machines use existing accelerators as injectors, and the LHC is no exception. Clearly, all of the accelerators of the proton-injector chain – Linac2–PSB–PS–SPS – had to undergo major upgrade programmes to be fit for the new machine. Among the requirements, the beam would have to fit into the tiny LHC aperture while having sufficient intensity to ensure high-luminosity operation. However, this implied a beam-brilliance at injection into the PSB that would lead to an unrealistic space-charge detuning of ΔQ up to 1. This could be reduced to a more acceptable 0.5 by two-batch filling of the PS, but only if each batch could be squeezed into one half of its circumference. Accelerating one rather than five bunches in each PSB ring and applying clever timing of the ejection/recombination kickers made this feasible, although the first batch in the PS has to dwell for 1.2 s on the 1 GeV “front porch”, proving vulnerable to space-charge effects.

A further improvement came from increasing the PSB–PS transfer energy to 1.4 GeV, reducing ΔQ in the PS to 0.2, well below the space-charge limit of 0.25, owing to the 1/βγ² scaling. Upgrading a machine built for 800 MeV to 1.4 GeV was no minor task. It involved a new main power supply, increased water cooling and a partial renewal of the magnets and their power supplies in the transfer line to the PS. The rings had to be equipped with new h = 1 (2 MHz) cavities as well as recycled h = 2 (4 MHz) cavities, the former accelerating one bunch in each ring, the latter for bunch flattening or for accelerating two bunches per ring for some users. The lion’s share of this upgrade was provided by Canada as part of its contribution to the LHC. Installation of the new hardware in both the PSB and the PS was completed by early 2000 and after a short running-in period the PS complex demonstrated its capability to supply the beams required by the LHC.

Owing to its unique four-ring structure, the PSB is a versatile machine that can deliver beams of different energy, intensity, density, shape or time-structure to many users, cycle after cycle. These are grouped in supercycles of various lengths that are adapted to the operation programme. Just for the LHC, some 10 different beams were prepared and made ready to work, all of them with their own intensity (over the range 5 × 10⁹–6.5 × 10¹² ppp) and emittance characteristics. Other cycles produce up to 3.7 × 10¹³ protons for ISOLDE, around 2.5 × 10¹³ protons for the CERN Neutrinos to Gran Sasso project, 1.5 × 10¹³ protons with small emittances for the Antiproton Decelerator or intensities as low as 2 × 10¹¹ protons for slow extraction from the PS. In particular, the versatility of the recombination line enables the bunches from the four rings to be furnished with different distances between them to satisfy the requirements of the users.

By around 2003, two modifications to beam optics were proposed and put into operation. First, the optics of the 50 MeV injection line were improved so that the dispersion and its derivative vanish at the injection point. This reduces the beam size and the losses in the last leg of the line (where the acceptance was limited) without perturbing the injection efficiency. Second, the working point of the machine was changed to avoid the systematic resonance 3QV = 16, which limited the performance of the outer rings 1 and 4. The quadrant (QH, QV) = (4.2, 4.3) instead of (4.2, 5.3) proved able to accommodate the enormous tune-spread ΔQ of 0.6 (using the same dynamic tune-change during the cycle) despite its apparent drawbacks: namely, the presence of the “Montague” coupling resonance 2QH – 2QV = 0 and the systematic resonances 4QH,V = 16. As a result, the PSB reached a record of 4.2 × 10¹³ protons accelerated, with the four rings having similar performances (figure 4).

By 2006, when construction of the LHC was in full swing, the operation teams of all of the accelerators moved to a common control room – the CERN Control Centre – to increase the operational efficiency of the LHC and its injector chain. For the PSB team, this meant a change in culture owing to the larger distance between the PS complex and the new control room. However, the merging of the teams proved invaluable for the running-in and operation of the LHC, which uses all of the prepared beams.

The future

The need for beams for the LHC with parameters even more demanding than what is provided today, together with the decision to operate the existing injector complex throughout the lifetime of the LHC, has triggered a major consolidation and upgrade project. As far as the PSB is concerned, the upgrade programme consists of two parts: modification of the injection system for 160 MeV charge-exchange injection from the new Linac4; and an energy upgrade of the PSB rings and extraction/transfer systems to 2 GeV.

The benefits of switching from Linac2 (50 MeV protons) to Linac4 (160 MeV H⁻ ions) are twofold. With the increase of beam energy from 50 to 160 MeV, the relativistic βγ² factor increases by a factor of two – doubling the intensity that can be accumulated within a given emittance and hence the beam brilliance (the ratio of intensity to emittance). The other significant benefit is expected from changing the injection scheme to charge-exchange injection (figure 5). While the current multiturn injection is associated with a beam loss of up to 50% of the intensity on the injection septum, the new scheme will essentially be loss-free (apart from a few per cent owing to the stripping efficiency). Moreover, the new injection scheme will make it possible to tailor emittances by means of “phase-space painting” according to the needs of individual users.

The aim of the further energy upgrade of the PSB to 2 GeV is to reduce space-charge effects at injection into the PS, removing once more this bottleneck in the LHC injector chain. This should increase the beam brilliance throughout the LHC injector chain so that the LHC can reach its ultimate luminosities. The expected gain can be deduced from the values of βγ² at 2.0 GeV and 1.4 GeV; the factor of 1.63 corresponds to an intensity increase of 60% within given emittance values.
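
The numbers quoted for the two upgrades can be checked with a few lines of arithmetic (a sketch that ignores all other machine-dependent factors):

```python
import math

M_P = 938.272  # proton rest energy in MeV

def beta_gamma2(kinetic_mev: float) -> float:
    """Return beta * gamma^2 for a proton of the given kinetic energy."""
    gamma = 1.0 + kinetic_mev / M_P
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    return beta * gamma**2

for label, t in [("Linac2, 50 MeV", 50.0), ("Linac4, 160 MeV", 160.0),
                 ("PSB at 1.4 GeV", 1400.0), ("PSB at 2.0 GeV", 2000.0)]:
    print(f"{label:16s} beta*gamma^2 = {beta_gamma2(t):.2f}")

print("gain 50 -> 160 MeV :", round(beta_gamma2(160) / beta_gamma2(50), 2))    # ~2.0
print("gain 1.4 -> 2.0 GeV:", round(beta_gamma2(2000) / beta_gamma2(1400), 2)) # ~1.63
```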

The upgrade to 2 GeV will be the third energy increase in the history of the PSB, having gone in steps from 800 MeV to 1 GeV and then to the current 1.4 GeV. The most important technical challenges will be the operation of the main magnets at field levels that are 30% higher than those at 1.4 GeV, together with the replacement of the main power supply, as well as the upgrade of the extraction and recombination system. Also, many components need modification or replacement to operate in the new parameter range. Following completion of the upgrade, the PSB will – in many parts – be a new machine, without losing its current versatility.

In its 40-year history, the PSB has undergone several upgrades and is today operating with its highest availability and flexibility, and far beyond its original design specifications. The ongoing consolidation and upgrade programme aims to operate the PSB throughout the lifetime of the LHC. This will ensure that it remains one of CERN’s backbone accelerators for the foreseeable future.

ECLOUD12 sheds light on electron clouds

Electron clouds – abundantly generated in accelerator vacuum chambers by residual-gas ionization, photoemission and secondary emission – can affect the operation and performance of hadron and lepton accelerators in a variety of ways. They can induce increases in vacuum pressure, beam instabilities, beam losses, emittance growth, reductions in the beam lifetime or additional heat loads on a (cold) chamber wall. They have recently regained some prominence: since autumn 2010, all of these effects have been observed during beam commissioning of the LHC.

Electron clouds were recognized as a potential problem for the LHC in the mid-1990s and the first workshop to focus on the phenomenon was held at CERN in 2002. Ten years later, the fifth electron-cloud workshop has taken place, again in Europe. More than 60 physicists and engineers from around the world gathered at La Biodola, Elba, on 5–8 June to discuss the state of the art and review recent electron-cloud experience.

Valuable test beds

Many electron-cloud signatures have been recorded and a great deal of data accumulated, not only at the LHC but also at the CESR Damping Ring Test Accelerator (CesrTA) at Cornell, DAΦNE at Frascati, the Japan Proton Accelerator Research Complex (J-PARC) and PETRA III at DESY. These machines all serve as valuable test beds for simulations of electron-cloud build-up, instabilities and heat load, as well as for new diagnostics methods. The latter include measurements of synchronous phase-shift and cryoeffects at the LHC, as well as microwave transmission, coded-aperture images and time-resolved shielded pick-ups at CesrTA. The impressive resemblance between simulation and measurement suggests that the existing electron-cloud models correctly describe the phenomenon. The workshop also analysed the means of mitigating electron-cloud effects that are proposed for future projects, such as the High-Luminosity LHC, SuperKEKB in Japan, SuperB in Italy, Project-X in the US, the upgrade of the ISIS machine in the UK and the International Linear Collider (ILC).

An international advisory committee had assembled an exceptional programme for ECLOUD12. As a novel feature for the series, members of the spacecraft community participated, including the Val Space consortium based in Valencia, the French aerospace laboratory Onera, Massachusetts Institute of Technology, the Instituto de Ciencia de Materiales de Madrid and the École Polytechnique Fédérale de Lausanne (EPFL). Indeed, satellites in space suffer from problems that greatly resemble the electron cloud in accelerators, which can be modelled and cured by similar countermeasures. These problems include the motion of the satellites through electron clouds in outer space, the relative charging of satellite components under the influence of sunlight and the loss of performance of high-power microwave devices on space satellites. Intriguingly, the “Furman formula” parameterizing the secondary emission yield, which was first introduced around 1996 to analyse electron-cloud build-up for the PEP-II B factory, then under construction at SLAC, is now widely used to describe secondary emission on the surface of space satellites. Common countermeasures for both accelerators and satellites include advanced coatings and both communities use simulation codes such as BI-RME/ECLOUD and FEST3D. A second community to be newly involved in the workshop series included surface scientists, who at this meeting explained the chemistry and secrets of secondary emission, conditioning and photon reflections. Another important first appearance at ECLOUD12 was the use of Gabor lenses, e.g. at the University of Frankfurt, to study incoherent electron-cloud effects in a laboratory set-up.
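
As a flavour of what such a parameterization looks like (a simplified sketch of the true-secondary component only, with illustrative parameter values rather than measured ones for any LHC or satellite surface), the widely used universal curve can be written in a few lines:

```python
import numpy as np

def sey_true_secondaries(energy_ev, delta_max=1.7, e_max_ev=250.0, s=1.35):
    """Universal-curve approximation for the true-secondary yield:
    delta(E) = delta_max * s*x / (s - 1 + x**s), with x = E / E_max."""
    x = np.asarray(energy_ev, dtype=float) / e_max_ev
    return delta_max * s * x / (s - 1.0 + x**s)

# The yield peaks at delta_max for E = E_max and falls off on either side.
energies = np.array([50.0, 150.0, 250.0, 500.0, 1000.0])
for e, d in zip(energies, sey_true_secondaries(energies)):
    print(f"E = {e:6.0f} eV   delta = {d:.2f}")
```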

Several powerful new simulation codes were presented for the first time at ECLOUD12. These novel codes include: SYNRAD3D from Cornell, for photon tracking, modelling surface properties and 3D geometries; OSMOSEE from Onera, to compute the secondary-emission yield, including at low primary energies; PyECLOUD from CERN, to perform improved and faster build-up simulations; the latest version of WARP-POSINST from Lawrence Berkeley National Laboratory, which allows for self-consistent simulations that combine build-up, instability and emittance growth, and is used to study beam-cloud behaviour over hundreds of turns through the Super Proton Synchrotron (SPS); and BI-RME/ECLOUD from a collaborative effort of EPFL and CERN, to study various aspects of the interaction of microwaves with an electron cloud. New codes also mean more work. For example, the advocated transition from ECLOUD to PyECLOUD implies that substantial code development done at Cornell and EPFL for ECLOUD may need to be redone.

Several open questions remain

ECLOUD12 could not solve all of the puzzles, and several open questions remain. Why, for example, does the betatron sideband signal – characterizing the electron-cloud related instability – at CesrTA differ from similar signals at KEKB and PETRA III? Why was the beam-size growth at PEP-II observed in the horizontal plane, while simulations had predicted it to be vertical? How can the complex nature of intricate incoherent effects be described fully? Which ingredients are missing for correctly modelling the electron-cloud behaviour for electron beams, e.g. the existence of a certain fraction of high-energy photoelectrons? How does the secondary-emission yield of the copper coating on the LHC beam-screen decrease as a function of incident electron dose and incident electron energy (looking for the “correct” equation to describe how the primary energy at which the maximum yield is attained, εmax, varies as a function of this maximum yield, δmax, and the concurrent evolution in the reflectivity of low-energy electrons, R)? Does the conditioning of stainless steel differ from that of copper? If it is the same, then why should the SPS’s beam pipe be coated but not the LHC’s? Can the secondary-emission yield change over a timescale of seconds during the accelerator cycle (a suspicion based on evidence from the Main Injector at Fermilab)? Can the surface conditioning be speeded up by the controlled injection of carbon-monoxide gas?

As for the “electron-cloud safety” of future machines, ECLOUD12 concluded that the design mitigations for the ILC and for SuperKEKB appear to be adequate. The LHC and its upgrades (HL-LHC, HE-LHC) should also be safe with regard to electron cloud if the surface conditioning (“scrubbing”) of the chamber wall progresses as expected. The situations for Project-X, the upgrade for the Relativistic Heavy Ion Collider, J-PARC and SuperB are less finalized and perhaps more challenging.

ECLOUD12 was organized jointly and co-sponsored by INFN-Frascati, INFN-Pisa, CERN, EuCARD-AccNet and the Low Emittance Ring (LER) study at CERN. In addition, the SuperB project provided a workshop pen “Made in Italy”. The participants also enjoyed a one-hour football match (another novel feature) between experimental and theoretical electron-cloud experts – the latter clearly outnumbered – as well as post-dinner discussions until well past midnight. The next workshop of the series could be ECLOUD15, which would coincide with the 50th anniversary of the first observation of the electron-cloud phenomenon at a small proton storage-ring in Novosibirsk and its explanation by Gersh Budker.

• All of the presentations at ECLOUD12.

The ECLOUD12 workshop was dedicated to the memory of the late Francesco Ruggiero, former leader of the accelerator physics group at CERN, who launched an important remedial electron-cloud crash programme for the LHC in 1997.

Discovery of a new boson – the ATLAS perspective

By the end of 2011, hopes for the discovery of the Higgs boson during 2012 were riding high on the back of tantalizing hints in the 5 fb⁻¹ data sample. The aim was to quadruple the data set this year, with the added benefit that increasing the centre-of-mass energy from 7 TeV to 8 TeV brings a higher predicted rate of Higgs production. The first planned checkpoint was the ICHEP 2012 conference and in the weeks preceding it the LHC performed better than ever, resulting in a total delivered luminosity of more than 6 fb⁻¹ at 8 TeV. Thanks to the expertise and continued dedication of many people, the ATLAS detector was in great shape, and 90% of the delivered data were recorded and passed the strict quality requirements to go forward for analysis.

The strategy

The ATLAS strategy in preparation for the early ICHEP milestone was to focus first on the most sensitive decay modes: the decay of the Higgs boson to two photons (γγ), to two Z bosons or to two W bosons. The W and Z bosons are identified through their cleanest final states: the two Zs decay to four leptons (llll) – electrons or muons – and the W pair is identified in the mixed-flavour final state with an electron, a muon and two neutrinos, WW→eνμν. The γγ and ZZ→llll modes have excellent mass resolution because the Higgs boson decays entirely into visible, well measured particles. However, they have quite different signal-to-background rates and features, requiring appropriate analysis strategies. By contrast, the presence of two invisible neutrinos means that the WW mode has low mass resolution.

For each final state, the approach was not to look in the signal region of the 2012 data until the analysis procedure was frozen, to avoid any bias in tuning the event selection criteria. The selections were optimized using simulated samples and control regions in the data. These are samples of events with configurations that cannot come from a Higgs signal but which allow salient features of the data to be compared with simulation.

For the γγ final state, the mass distribution of the photon pair in events with two energetic photons is shown in figure 1a. The background to the Higgs signal is dominated by genuine γγ events from known processes, plus events with one or two hadronic jets misidentified as photons. This background forms a smoothly falling spectrum on top of which there is a visible bump around 126 GeV. However, this distribution tells only part of the story. The potential significance of the signal is higher in subsets of the data that have better mass resolution. The resolution depends on whether the photons are in the central or forward parts of the detector, and also on whether one or both photons have “converted” by the process γ→e+e−. Furthermore, the signal-to-background ratio also changes according to the number of additional hadronic jets in the event because this characterizes different Higgs-production mechanisms. The data were divided into 10 subsets, for each of which the background shape was derived by fitting the data themselves. By evaluating the probability that fluctuations of the smooth background could create the bump, the local significance at 126 GeV is found to be equivalent to 4.5 standard deviations (σ).
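
For readers curious about how such a number is quoted, the sketch below (a minimal illustration in Python, not the ATLAS statistical machinery, which uses profile-likelihood test statistics) shows the standard one-sided Gaussian conversion between a local p-value – the probability that background fluctuations alone produce at least as large a bump – and the equivalent number of standard deviations.

```python
# Minimal sketch: one-sided conversion between a local p-value and a
# significance Z in standard deviations. Illustrative only; the experiments
# compute the p-value itself from profile-likelihood test statistics.
from scipy.stats import norm

def p_from_z(z):
    """One-sided Gaussian tail probability for a significance z."""
    return norm.sf(z)

def z_from_p(p):
    """Significance corresponding to a one-sided tail probability p."""
    return norm.isf(p)

print(p_from_z(4.5))      # ~3.4e-6: chance of background faking a 4.5 sigma bump
print(z_from_p(3.4e-6))   # ~4.5
```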

CCatl2_07_12

The situation in figure 1b, where the mass is calculated from the four leptons in ZZ→llll events, is quite different from that of the two-photon sample. The predicted signal to background in the interesting mass range between 120 and 130 GeV is much larger for the ZZ final state, with about half of the background coming from genuine ZZ events and half from other processes. The background shape is more complicated than in the γγ case but the expected features are well reproduced by simulation. The small peak in the distribution at 125 GeV has a local significance of 3.4 σ.

Combining the ZZ→llll result with the γγ result and with all of the channels measured in 2011 brings the local significance to the pivotal threshold of 5.0 σ, as was announced to cheers at the 4 July seminar. Moreover, the signal masses measured in these two high-resolution channels are consistent, with an overall best-fit mass of 126.0 ± 0.4 (stat.) ± 0.4 (syst.) GeV.

The WW→eνμν analysis was ready a few days after the seminar and is included in the publication. Although the mass cannot be calculated, a transverse-mass variable mT can be formed from the measured electron and muon and the missing transverse energy in the event that arises from the unobserved neutrinos. Figure 1c shows the distribution of mT, with the predicted broad signal from a 125 GeV Higgs boson superimposed on the known backgrounds. The visible excess of events over background provides further evidence for the presence of a signal, bringing the overall significance to 5.9 σ, corresponding to a one-in-600-million chance that the known background processes could fluctuate to give such a convincing excess.
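
For orientation, one commonly used definition of the transverse mass in this dilepton-plus-neutrinos topology (given here as a general illustration, not quoted from the ATLAS paper) combines the dilepton system with the missing transverse momentum:

```latex
m_{\mathrm{T}} = \sqrt{\left(E_{\mathrm{T}}^{\ell\ell} + E_{\mathrm{T}}^{\mathrm{miss}}\right)^{2}
                      - \left|\mathbf{p}_{\mathrm{T}}^{\ell\ell} + \mathbf{p}_{\mathrm{T}}^{\mathrm{miss}}\right|^{2}},
\qquad
E_{\mathrm{T}}^{\ell\ell} = \sqrt{\left|\mathbf{p}_{\mathrm{T}}^{\ell\ell}\right|^{2} + m_{\ell\ell}^{2}}.
```

Because the neutrinos escape undetected, mT produces a broad peak below the true boson mass rather than a narrow resonance, which is why this channel strengthens the significance but does not contribute to the mass measurement.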

CCatl4_07_12

It came as something of a shock that the discovery threshold was reached so early in 2012. After more than 20 years of development, the detector has proved that it is capable of measuring leptons, photons, jets and missing energy with excellent precision, and it is operating with remarkable efficiency. This performance has been maintained even though the LHC is delivering higher luminosity than ever, with more proton–proton interactions per bunch crossing than foreseen. The trigger menus have been fine-tuned to select the most interesting events. The intricate process of reconstructing and distributing millions of events across the worldwide LHC Computing Grid in a matter of days runs smoothly; the ability to go from recording the last data to announcing a discovery just a couple of weeks later was incredible. In all aspects of the endeavour, people were prepared to work without sleep to ensure that the next step went without a hitch. The excitement as the data were revealed for the first time was tangible, and the thrill of the announcement on 4 July was shared by the collaboration around the world, from the lucky few in the CERN auditorium to collaborators at their home institutions and the attendees at the ICHEP conference in Melbourne.

The celebratory champagne has been drunk and the next stage of the work is beginning. The question on everyone’s lips now is whether this new particle has the features of the Standard Model Higgs boson. Undoubtedly, it is a brand-new boson, and we look forward to getting to know it better.


4 July 2012: a day to remember

It’s 2 a.m. in Chicago, 9 a.m. in Geneva and 5 p.m. in Melbourne. Around the world, particle physicists in labs, lecture theatres and their homes are full of anticipation. They are all waiting to hear the latest update in the search for the Higgs boson at the LHC, following the tantalizing hints presented on 13 December. Everyone knows that something exciting is in the air. The seminar has been rapidly scheduled to align with the start of the 2012 International Conference on High-Energy Physics (ICHEP) in Melbourne. It will be webcast not only to an audience in Melbourne but to the many teams around the world who have contributed over the years.

The news has its roots in the 1960s. The work of Robert Brout, François Englert, Peter Higgs, Gerald Guralnik, Carl Hagen and Tom Kibble in 1964 was to become a key piece of the Standard Model, giving mass to the W and Z bosons of the electroweak force. From the 1970s, searches for the so-called Higgs boson progressed as particle accelerators grew to provide beams of higher energies, with experiments at Fermilab’s Tevatron and CERN’s Large Electron–Positron collider providing the best limits before the LHC entered the game in 2010.

It was a day that many will remember for years to come. Englert, Higgs, Guralnik and Hagen were all in the audience at CERN to hear the news directly. (Sadly, Brout died last year and Kibble was unable to attend.) The ATLAS and CMS collaborations announced that they had observed clear signs in the LHC’s proton–proton collisions of a new boson consistent with being the Higgs boson, with a mass of around 126 GeV.

The adjoining articles (Discovery of a new boson – the ATLAS perspective and Inside story: the search in CMS for the Higgs boson) give some insight into the analysis procedures behind these latest results from the ATLAS and CMS experiments.


Inside story: the search in CMS for the Higgs boson

CChcm1_07_12

9.35 a.m., 4 July 2012. In front of an expectant crowd packing CERN’s main auditorium, Joseph Incandela shows a slide on behalf of the CMS collaboration; its subject, the combination of the two search channels with the best mass resolution, H→γγ and H→ZZ→4 leptons (llll). The slide shows a clear excess that corresponds to 5 σ above the expected background, signalling the discovery of a new particle. The audience erupts into applause. These decay modes not only give a measure of the mass of the new particle, 125 ± 0.6 GeV, but also reveal that it is, indeed, a boson, meaning a particle with integer spin; the two-photon decay mode further implies that its spin must be different from 1 (figure 1).

The ideas that led to the announcement were seeded more than 20 years ago

The search for the Standard Model Higgs boson, the missing keystone of the current framework for describing elementary particles and forces, has been going on for some 40 years. The ideas that led to the 4 July announcement were seeded more than 20 years ago: in 1990, at the Aachen workshop where the term “Compact Muon Solenoid” was first heard and where Michel Della Negra and Tejinder Virdee – the founding fathers of the CMS collaboration – presented quantitative ideas on how the Higgs boson, if it existed, could be found at the LHC. They aimed to provide coverage down to the low-mass region, which required precision tracking and electromagnetic calorimetry.

The performance of CMS – its hardware, software, distributed computing and analysis systems – and the inventiveness of the people doing the analysis can be gauged by the fact that the discovery of a Higgs-like boson has been made at half of the design energy of the LHC, using one-third of the integrated luminosity and under fiercer “pile-up” conditions than were foreseen in the pre-data-taking estimates for reaching such a significance. This success is a real tribute to the thousands of CMS physicists and several generations of students who have turned CMS from a proposal on paper into a scientific instrument, hors du commun, producing frontier physics.

On 4 July the CMS collaboration presented searches for the Standard Model Higgs boson in five distinct decay modes: γγ, ZZ→llll, WW→lνlν, ττ and bb, the so-called high-priority analyses. The 2012 data-taking campaign and physics analyses had been under preparation since the end of 2011. The CMS collaboration had been pushing to go to 8 TeV collision energy and, assuming that this would happen, started the data simulation at 8 TeV in December. The collaboration identified 21 high-priority analyses, including the ones for the Higgs searches. The reconstruction software was improved and the trigger menus prepared to select with high efficiency the events necessary for the search. The software and computing resources were for the most part dedicated to the high-priority analyses.

The limits on the Higgs boson mass, established by experiments at CERN’s Large Electron–Positron collider and Fermilab’s Tevatron, and by the LHC campaign in 2011, showed that the Standard Model Higgs boson, if it existed, would most likely inhabit the mass range 114.4–127 GeV. Another important strategic decision was to re-optimize and improve the analyses using the expected sensitivity as the driving criterion. Each individual analysis was optimized on the basis of maximizing sensitivity without looking into the above-mentioned mass region – in other words, the analyses were “blind”. This would inevitably lead to a day of high drama when the “unblinding” was to take place, on 15 June.

The unblinding procedure, defined before 2012 data-taking, was to proceed in two steps:

• The performance of the analyses would be evaluated and pre-approved by the collaboration based on the first 3 fb⁻¹ of data that had been collected and fully certified, and on 15 June the results in the blinded region would be shown. When that deadline arrived, all analyses were declared ready by the analysis review committees and, on seeing the results from the high mass-resolution channels, most of the hundreds present at CERN or connected via videoconferencing were astounded: there were the first clear signs that a new particle could be coming into view. The indications seen in the 2011 data not only remained but were strengthened. A day of excitement indeed!

• From 15 June onwards the analyses would be – and were – simply topped up, once the data quality-certification process was completed. They would eventually include all of the data available up until the technical stop of the LHC planned for late June.

CChcm2_07_12

Expectations started to increase, especially when observing the fantastic performance of the LHC, which was delivering collisions at a record rate. At the same time, the considerable increase in sensitivity of all five analyses, compared with those of 2011, meant that a discovery became a real possibility. In particular, the H→ττ channel had improved in sensitivity by more than a factor of two and H→bb was also starting to contribute. All of the analyses had integrated multivariate analysis methods for selection and/or reconstruction to optimize use of the full event information, leading to improved sensitivity. The channels with high mass-resolution, H→γγ and H→ZZ→llll, achieved close-to-design resolutions, e.g. for the best categories of events, 1.1 GeV and <1 GeV for the diphoton and four-lepton states, respectively (figures 2 and 3). The expected significance came out close to 6 σ (median) using 5 fb⁻¹ from each of the 7 TeV and 8 TeV data sets (figure 1). A higher (lower) observed significance would indicate an upwards (downwards) fluctuation of this expectation.
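
As an illustration of what “multivariate analysis methods” means in practice, the following toy sketch (invented variables and data, not CMS code) trains a boosted-decision-tree classifier to separate simulated signal from background using several event features at once, instead of a sequence of one-dimensional cuts.

```python
# Toy multivariate selection: a boosted decision tree trained on invented
# "event features", for illustration only (not the CMS analysis code).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20000
# Hypothetical features: e.g. a mass-like variable, an angular variable, an isolation score
bkg = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(n, 3))
sig = rng.normal(loc=[1.0, 0.5, 0.8], scale=1.0, size=(n, 3))
X = np.vstack([bkg, sig])
y = np.concatenate([np.zeros(n), np.ones(n)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X_train, y_train)

# The classifier output can then be cut on, or used as an input to the final fit
print("test accuracy:", bdt.score(X_test, y_test))
```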

CChcm3_07_12

All of the five high-priority analyses were performed independently at least twice. Furthermore, improvements in the definition and selection of the physics objects were subjected to scrutiny and formal approval before deployment.

As every new batch of certified data was added, the analysts eagerly looked forward to updates. The final word would belong to the team responsible for combining the results from the five high-priority analyses, the combination procedure having been validated before the unblinding.

The combination of these five analyses reveals an excess of events above the expected background, with a maximum local significance of 5.0 σ at a mass of 125.5 GeV. The expected significance for a Standard Model Higgs boson of that mass is 5.8 σ. The signal strength σ/σSM was measured to be 0.87 ± 0.23, where σ/σSM denotes the production cross-section multiplied by the relevant branching fraction, relative to the Standard Model expectation.
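
Such a signal strength is extracted by fitting all channels simultaneously. The toy below (a single invented counting channel with made-up numbers, nothing like the full CMS combination with its many nuisance parameters) merely sketches the idea of scanning a likelihood in μ = σ/σSM and reading off a best-fit value with an approximate ±1 σ interval.

```python
# Toy signal-strength fit for one counting channel with invented numbers.
# Illustrates "scan the likelihood, read off Delta(-lnL) = 0.5" only.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

s_sm, b, n_obs = 50.0, 1000.0, 1044   # assumed SM signal yield, background, observed events

def nll(mu):
    """Negative log-likelihood for expected b + mu*s_sm events."""
    return -poisson.logpmf(n_obs, b + mu * s_sm)

mus = np.linspace(0.0, 3.0, 3001)
scan = np.array([nll(m) for m in mus])
mu_hat = mus[np.argmin(scan)]

def crossing(mu):                      # where -lnL rises by 0.5 above its minimum
    return nll(mu) - scan.min() - 0.5

lo = brentq(crossing, 0.0, mu_hat)
hi = brentq(crossing, mu_hat, 3.0)
print(f"mu = {mu_hat:.2f}  (+{hi - mu_hat:.2f} / -{mu_hat - lo:.2f})")
```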

CChcm4_07_12

Having clearly seen a new particle, considerable attention was then devoted to measuring properties such as mass, spin if possible, and its couplings to bosons and fermions. All in all, the results presented by CMS are consistent, within uncertainties, with expectations for a Standard Model Higgs boson. With the recent decision to extend the 2012 data-taking by 55 days, the collaboration is now eager to accumulate up to three times more data, which should enable a more significant test of this conclusion and an investigation of whether the properties of the new particle imply physics beyond the Standard Model.

This will prove to be the discovery of a particle sans precedent. If it is confirmed to be a fundamental scalar (spin 0) then it is likely to have far-reaching consequences for physicists’ thinking about nature: it would be the first fundamental scalar boson. Fundamental scalar fields are thought to play an important role not only in the presumed inflation in the early instants of the universe but also in the recently observed acceleration of its expansion. There can be no doubt that exciting times lie ahead.

• For more, see the paper on these results from CMS.


Particle-physics fever in Melbourne

CCmel1_07_12

Never before did the International Conference on High-Energy Physics (ICHEP) start with such a bang. Straight after registering on 4 July in Melbourne, participants at ICHEP2012, the 36th conference in the series, were invited to join a seminar at CERN via video link, where they would see the eagerly anticipated presentations of the latest results from ATLAS and CMS. The excitement generated by the evidence for a new boson sparked a wind of optimism that permeated the whole conference. Loud cheers and sustained applause were appropriately followed by the reception to welcome more than 700 participants from around the world, where they could discuss the news over a glass of delicious Australian wine or beer.

As usual, ICHEP consisted of three days of six parallel sessions followed by a rest day and then three days of plenary talks to cover the breadth and depth of particle physics around the world. This article presents only a personal choice of the highlights.

All eyes on the new boson

Talks on the search for the Higgs boson drew huge crowds in the parallel sessions. The discovery of something that looks very much like the Higgs boson raises two pressing questions: what kind of boson was found; and what kind of limits does this discovery impose on the existing models? While the answer to the first question will come only when more data are available, it is already possible to start answering the second question.

CCmel2_07_12

Sara Bolognesi of Johns Hopkins University presented some interesting preliminary work based on a recently published study in which helicity amplitudes are used to reveal the spin and parity of the new boson. These can be measured through the angular correlations in the decay products. For example, in H → WW decays, when both Ws decay into leptons, the angular separation in the transverse plane can help not only to reduce the background but also to distinguish between spin-parity 0+ and 2+. Likewise, the parity of a spin-0 boson can be inferred from the distribution of the decay angles of H → ZZ → llll. Bolognesi and her colleagues developed a Monte Carlo generator that allows the comparison of any hypothesized spin with data and have made the full analytical computation of the angular distributions that describe the decays H → WW, ZZ and γγ. All that is needed is more data – and the nature of the new particle will be revealed.

Another approach to determining the spin of the new boson consists of studying which decay modes are observed. A Standard Model Higgs boson has spin 0 and should couple to both fermions and vector bosons. Spin 1 is already excluded for the new boson because it could not produce two photons (γγ), each with spin 1. A spin-2 boson could decay into bb̄ (with an extra spin-1 gluon on board) but not to two τ leptons. So, it was puzzling to hear from Joshua Swanson of the University of Wisconsin–Madison that CMS does not observe H → ττ after having analysed the 10 fb⁻¹ at hand from 2011 and 2012. The current analysis is consistent with the background-only hypothesis, yielding an exclusion limit of 1.06 times the Standard Model production cross-section for mH = 125 GeV. Needless to say, this will be closely monitored as soon as more data become available.

CCmel3_07_12

Meanwhile, many theorists and experimentalists are already speculating on the possible impact of the discovery on the current theoretical landscape. Several people showed the effect of all known measurements in flavour physics, direct limits and the new boson mass on existing models. Nazila Mahmoudi of CERN and Clermont-Ferrand University reminded the audience that there is more to supersymmetry (SUSY) than the constrained minimal supersymmetric model (CMSSM). She showed that, assuming that the new particle is a Higgs boson, its mass has a huge impact on the allowed parameter space. Already, several constrained models such as mSUGRA, mGMSB, the no-scale model and the cNMSSM are severely limited or even ruled out. This impact is, in fact, complementary to that of direct searches for SUSY. Mahmoudi stressed the importance of going back to unconstrained SUSY models, pointing out that there is still plenty of room in the MSSM.

So many searches, so little luck

In the search for direct detection of new phenomena, both for exotics and for SUSY, the results were humbling despite the numerous attempts. In the parallel sessions, more than 30 talks were given on SUSY alone, sometimes covering up to five different analyses. Andy Parker of the University of Cambridge, who reviewed this field, showed how these searches have already covered all of the most obvious places. However, as he reminded the audience, there are still two big reasons to believe in SUSY. First, it provides a candidate for dark matter that has just the right cross-section to be consistent with today’s relic abundance. Second, a light Higgs particle needs this kind of new physics to stabilize its mass. Parker also pointed out that only the third generation of SUSY particles, namely stops and staus, need to be light, a point that Riccardo Barbieri of Scuola Normale Superiore and INFN also stressed in his conference review. For these particles, the current model-independent limits are still rather low, well below 1 TeV, but should improve rapidly with more data.

SUSY could also be hidden if the mass-splitting between gluinos and neutralinos is rather small. In that case there would be very little missing transverse energy (MET), whereas most analyses have been looking for large MET. This is the idea behind various scenarios with compressed mass spectra. Or it might be that the SUSY particles are so long-lived that they require an adapted trigger strategy because they decay beyond the first layers of the detectors. Searches have been made in all of these directions but without any success so far.

CCmel4_07_12

Nevertheless, with the discovery of a new boson, there is much more optimism than a year ago at the European Physical Society conference on High-Energy Physics (EPS-HEP 2011), where Guido Altarelli had commented that, with no sign of SUSY yet, it was too early for despair but enough for depression. The word of caution that Parker raised, echoing Mahmoudi, provides room for optimism: it is of utmost importance to stay away from the hypotheses of constrained models and to aim instead for the broadest possible scope. SUSY is far from dead and there is plenty of unexplored parameter space, much of it still containing particles of low mass. As Raman Sundrum of the University of Maryland remarked: “We must not only look for what’s left but rather, what’s right.”

Testing the consistency of the Standard Model with the so-called electroweak fit has been a tradition at all major conferences for the past decade or two, and this ICHEP proved no different – except for a major twist. For the first time, the mass of the newly found boson was used to test whether all electroweak measurements (W and Z boson masses, the top-quark mass, single and diboson production cross-sections, lepton universality etc.) fit together. All of these measurements were reviewed by Joao Barreiro Guimaraes da Costa of Harvard University, culminating in the overall electroweak fit in the plane of W mass versus top mass shown in figure 1. This allows testing of how consistently all of these parameters fit together under the hypothesis that the new boson is the Standard Model Higgs boson (thin blue line) or one associated with the MSSM (green band). The blue ellipse shows the current status of the experimental measurements of mt and mW, whereas the black ellipse depicts what will happen if the LHC brings the uncertainty on mW down to 5 MeV – although this will be a great challenge. If the central value remains unchanged, it would bring the Standard Model into difficulty, whereas there would still be plenty of room for the MSSM parameters. One noteworthy result of this global fit is the prediction of the Standard Model Higgs boson mass at 125.2 GeV when all electroweak parameters are taken into account; only the direct exclusion limits from the Large Electron–Positron collider and the Tevatron were included in this determination.

Dark matter, light neutrinos

“No theories, just guesses for dark matter.” These were the words with which Neal Weiner of New York University summarized the situation on the theory front for dark matter. He explained that, unlike for the Higgs boson, there is currently no theory that allows predictions experimentalists could try to verify. The field is faced with a completely open slate.

If weakly interacting massive particles (WIMPs), a generic class of dark-matter candidates, exist with a mass of around 100 GeV, then some 10 million would go through a person’s hand every second, as Lauren Hsu of Fermilab pointed out during her comprehensive review of direct searches for dark matter. Nevertheless, in contrast to the clarity of Hsu’s presentation, the situation remains extremely confusing. She first reminded the audience of the basics. A WIMP could scatter elastically off a nucleus and the scattering cross-section can be broken into two terms: a spin-independent (SI) term, which grows as the square of the atomic mass number A, and a spin-dependent (SD) term that scales with the spin of the nucleus.
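
The A² enhancement follows from coherent scattering off all of the nucleons in the nucleus. A standard way of writing it (a textbook relation assuming equal couplings to protons and neutrons, not tied to any particular experiment shown at ICHEP) expresses the SI WIMP–nucleus cross-section in terms of a WIMP–nucleon cross-section σn and the reduced masses μ:

```latex
\sigma_{\mathrm{SI}} \simeq \sigma_{n}\,\frac{\mu_{A}^{2}}{\mu_{n}^{2}}\,A^{2},
\qquad
\mu_{A} = \frac{m_{\chi} m_{A}}{m_{\chi} + m_{A}},
\qquad
\mu_{n} = \frac{m_{\chi} m_{n}}{m_{\chi} + m_{n}},
```

which is why heavy targets such as xenon are so competitive for SI searches, whereas SD sensitivity relies on nuclei with unpaired nucleon spins.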

Currently, xenon-based and cryogenic germanium experiments dominate the field for the SI measurements, while superheated liquid detectors such as Picasso and COUPP are competitive for SD measurements. The XENON100 collaboration’s results for 2011 exceed the sensitivity of other experiments over a range of WIMP masses (new results with 3.5 times better sensitivity appeared just after the conference). SuperCDMS, a germanium-based detector, started operation in March 2012 but first results have still to be released.

Several inconsistencies remain unexplained. In 2008, the DAMA/LIBRA collaboration first reported an annual modulation in event rate that was consistent with dark matter with a statistical significance that now reaches 8.9 σ. This modulation peaks in summer and is at its lowest in winter, making some people suspect backgrounds that are modulated by seasonal changes. COUPP and KIMS, two experiments that use iodine as DAMA/LIBRA does, have now been running for some time. However, their data are not consistent with elastic scattering of WIMPs off iodine, so the mystery continues in terms of what DAMA/LIBRA is seeing. Finally, DM-ICE is a new effort underway in which about 200 kg of sodium-iodide crystals will be deployed within the IceCube detector at the South Pole. One interesting point is that any background tied to seasonal effects will modulate with a different phase in the southern hemisphere.
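
The annual-modulation signature itself is simple to describe: the event rate is fitted with a cosine of one-year period, and the extracted amplitude and phase are compared with the expectation from the Earth’s motion through the dark-matter halo, which peaks in early June. A purely illustrative fit with invented data might look like the sketch below (not any collaboration’s analysis code).

```python
# Toy annual-modulation fit with invented data: rate(t) = R0 + A*cos(2*pi*(t - t0)/T).
# Real analyses fit binned residual rates with detailed background models.
import numpy as np
from scipy.optimize import curve_fit

T = 365.25  # days

def rate(t, R0, A, t0):
    return R0 + A * np.cos(2 * np.pi * (t - t0) / T)

rng = np.random.default_rng(0)
t_days = np.arange(0, 4 * 365, 10.0)             # four "years" of invented data
truth = rate(t_days, 1.00, 0.02, 152.0)          # phase near 2 June
data = rng.normal(truth, 0.01)                   # invented statistical scatter

popt, pcov = curve_fit(rate, t_days, data, p0=[1.0, 0.01, 100.0])
R0, A, t0 = popt
print(f"amplitude = {A:.3f}, phase = day {t0 % T:.0f} of the year")
```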

This is not the only ongoing discrepancy. Two collaborations, CoGeNT and CRESST-II, announced the observation of an annual modulation in low-energy events, with the CRESST-II excess being at 4.2 σ. This contradicts many other results (from CDMS, XENON100, EDELWEISS, ZEPLIN, etc.) where no modulation is observed in low-energy data. The CoGeNT observation was particularly hard to reconcile with the CDMS results because both CoGeNT and CDMS are germanium-based detectors. However, now that the CoGeNT collaboration has modified its background estimates, the data from these two experiments are no longer in conflict. The CRESST team is working on reducing its background, which could help resolve this discrepancy.

The fact that neutrinos have mass proves that there is physics beyond the Standard Model.

Takashi Kobayashi

Moving on to the field of neutrino physics, Takashi Kobayashi of KEK reminded the audience that the mere fact that neutrinos have mass proves that there is physics beyond the Standard Model. These masses induce mixing – in that the different flavours of neutrinos are linear combinations of mass eigenstates. Neutrino mixing is now described by the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix and until recently it remained to be seen if all three flavours participated in the mixing.

Without a doubt the biggest news in neutrino experiments this year came from the Daya Bay experiment’s measurement of the mixing angle between the first and third neutrino-mass eigenstates, θ13. This is the last ingredient needed to allow future tests of CP violation in the neutrino sector. T2K had reported the first evidence of νe appearance in 2011, which already implied a non-zero value for θ13. Jun Cao of the Institute of High-Energy Physics, Beijing, showed how T2K was then followed by MINOS, Double Chooz, Daya Bay and now RENO, with a first result showing a 4.9 σ deviation from zero for θ13. The Daya Bay group has achieved the best measurement, now with sin²2θ13 = 0.089 ± 0.010, a 7.7 σ deviation from zero.
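
For context, reactor experiments such as Daya Bay and RENO extract θ13 from the deficit of electron antineutrinos at baselines of order a kilometre, where the survival probability is, to a good approximation (the usual short-baseline formula, not the experiments’ exact fit function):

```latex
P(\bar{\nu}_{e} \to \bar{\nu}_{e}) \;\approx\; 1 - \sin^{2}2\theta_{13}\,
\sin^{2}\!\left(\frac{\Delta m_{31}^{2}\,L}{4E}\right),
```

so the measured deficit translates directly into sin²2θ13.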

One big remaining area of questions concerns the neutrino-mass hierarchy. Which mass eigenstate is the lightest? Do we have a normal or an inverted hierarchy (figure 2)? As always, much still remains to be done in neutrino physics but, as in the past, it is bound to bring interesting or even surprising results.

Great theoretical developments

Unnoticed by most experimentalists, there have been tremendous developments in scattering-amplitude theory in the past eight years. “While experimentalists were busy building the LHC experiments, theorists were improving their understanding of perturbative scattering amplitudes,” said Lance Dixon of SLAC in his overview talk. This has allowed them to “break the dam”, leaving impossibly complex Feynman-diagram-based calculations behind when performing computations at next-to-leading order (NLO). The unprecedented precision achieved has led to the description of complex multijet events, such as those observed in collisions at the Tevatron and the LHC.

These new techniques were developed within the context of the maximally supersymmetric Yang–Mills theory (N = 4 SYM), an exotic cousin of QCD. Currently, complete calculations are possible only for events producing W/Z + jets that involve at most 110 one-loop diagrams; the new technique, however, can tackle the equivalent of 256,265 diagrams. Feynman diagrams are still used, but only for the simpler, tree-level processes. One major advantage of the scattering-amplitude methods is that tree-level processes can be recycled into loops, bringing much simplification to the calculations. This is bringing amazing precision into the new calculations, as Dmitry Bandurin of Florida State University revealed: inclusive QCD jet cross-sections now agree with recent theoretical calculations over 8–9 orders of magnitude and up to jet momenta of 2 TeV, as measurements by the CMS collaboration show (figure 3).

CCmel5_07_12

ATLAS showed the first inclusive-jet data at 8 TeV, confirming the expected increase in jet-production rates and reach in transverse momentum. The current level of understanding of jet identification, systematics and the jet-energy scale leads in many cases to experimental uncertainties similar to or lower than the theoretical ones. The sensitivity of these data to parton density functions (PDFs) provides the strongest constraint on the gluon PDF and allows the extraction of the strong coupling constant αs, testing its running up to 400 GeV. The inclusive Z and W results extensively cross-check perturbative QCD calculations, marking a triumph for NLO, matrix-element and parton-shower Monte Carlo predictions. Studies of multiple parton interactions at the Tevatron and the LHC are leading to improved phenomenological models of the nucleon. All of these results are important for searches for new physics at high energies, and the participants witnessed the impressive amount of work accomplished in measuring PDFs, cross-sections, diffractive processes and deep-inelastic scattering – all essential groundwork for discoveries.

Last summer at EPS-HEP 2011, the LHCb and CMS collaborations created a stir when they presented their first precise search for the Bs → μμ decay – a channel that is sensitive to new physics. Now, combining all of the 2011 data from CMS, ATLAS and LHCb, the 95% CL upper limit on this branching fraction is 4.2 × 10⁻⁹, closing in on the Standard Model prediction of (3.2 ± 0.2) × 10⁻⁹. This new LHC result increases the tension with the result from the CDF experiment at the Tevatron of 13⁺⁹₋₇ × 10⁻⁹, which is slightly reduced after including all 10 fb⁻¹ of data.

CCmel6_07_12

The LHCb collaboration reported the first observation of a decay with a b → d transition involving a penguin diagram, which makes B+ → π+μ+μ− the rarest B decay ever observed. In the Standard Model, its rate is 25 times smaller than that of similar decays involving b → s transitions. With 1 fb⁻¹ of collision data, the LHCb experiment obtained 25.3 +6.7/−6.4 signal events – a result that is 5.2 σ above background and consistent with the predictions of the Standard Model. The collaboration also reported on the first measurement of CP violation in charmless decays.

At the Tevatron, both the CDF and DØ experiments still see a significant forward–backward asymmetry in tt̄ production in all channels, with a strong dependence on mtt̄, which conflicts with the Standard Model. No such asymmetry is seen by either ATLAS or CMS at the LHC, where it is defined as the asymmetry in the widths of the t and t̄ rapidity distributions.
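
In terms of a formula, one widely used LHC definition – quoted here for orientation and consistent with the description of the rapidity-distribution widths above – is the charge asymmetry built from the difference of absolute rapidities:

```latex
A_{C} = \frac{N\left(\Delta|y| > 0\right) - N\left(\Delta|y| < 0\right)}
             {N\left(\Delta|y| > 0\right) + N\left(\Delta|y| < 0\right)},
\qquad
\Delta|y| = |y_{t}| - |y_{\bar{t}}|,
```

which probes the same underlying effect as the Tevatron forward–backward asymmetry even though the proton–proton initial state at the LHC is symmetric.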

Looking towards the future

CERN’s director-general, Rolf Heuer, concluded the conference by reviewing the future for high-energy-physics accelerators, stating how the LHC results will guide the way at the energy frontier. The current plans for CERN include a long shutdown in 2013–2014 to increase the centre-of-mass energy, possibly to the design value of 14 TeV. This will be followed by two other shutdowns: one in 2018, for upgrades to the injector and the LHC to go to the ultimate luminosity; and one in 2022 for new focusing magnets and crab cavities for high luminosity with levelling, with the humble goal of accumulating about 3000 fb⁻¹ by 2030.

Numerous other plans are in the air, such as a linear collider, for which Heuer emphasized the importance of the international community joining forces on a single project. “We need to have accelerator laboratories in all regions of the globe planned in an international context, and maintain excellent communication and outreach to show the benefits of basic science to society,” he stressed.

There was not a dull moment at the ICHEP conference in Melbourne, thanks to the efforts of the organizers and their crew. Everyone who joined one of the many possible conference tours on Sunday was treated to views of incredibly beautiful coastlines and native wildlife. The overall experience was well worth the journey.
