

The LHC’s first long run

Performance 2011

Since the first 3.5 TeV collisions in March 2010, the LHC has had three years of improving integrated luminosity. By the time that the first proton physics run ended in December 2012, the total integrated proton–proton luminosity delivered to each of the two general-purpose experiments – ATLAS and CMS – had reached nearly 30 fb⁻¹ and enabled the discovery of a Higgs boson. ALICE, LHCb and TOTEM had also operated successfully and the LHC team was able to fulfil other objectives, including productive lead–lead and proton–lead runs.

Establishing good luminosity depends on several factors but the goal is to have the largest number of particles potentially colliding in the smallest possible area at a given interaction point (IP). Following injection of the two beams into the LHC, there are three main steps to collisions. First, the beam energy is ramped to the required level. Then comes the squeeze. This second step involves decreasing the beam size at the IP using quadrupole magnets on both sides of a given experiment. In the LHC, the squeeze process is usually parameterized by β* (the beam size at the IP is proportional to the square root of β*). The third step is to remove the separation bumps that are formed by local corrector magnets. These bumps keep the beams separated at the IPs during the ramp and squeeze.
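For readers who want the scaling behind these statements, the standard expressions (quoted here in general form, not tied to any particular LHC configuration) relate the transverse beam size at the IP and the luminosity to the quantities discussed above:

\sigma^{*} = \sqrt{\frac{\varepsilon_{n}\,\beta^{*}}{\gamma_{r}}}, \qquad \mathcal{L} = \frac{N_{b}^{2}\, n_{b}\, f_{\mathrm{rev}}\, \gamma_{r}}{4\pi\, \varepsilon_{n}\, \beta^{*}}\, F ,

where N_b is the number of protons per bunch, n_b the number of bunches per beam, f_rev the revolution frequency, γ_r the relativistic factor, ε_n the normalized transverse emittance and F the geometric reduction factor arising from the crossing angle.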

High luminosity translates into having many high-intensity particle bunches, an optimally focused beam size at the interaction point and a small emittance (a measure of the spread of the beam in transverse phase space). The three-year run saw relatively distinct phases in the increase of proton–proton luminosity, starting with basic commissioning then moving on through exploration of the limits to full physics production running in 2012.


The first year in 2010 was devoted to commissioning and establishing confidence in operational procedures and the machine protection system, laying the foundations for what was to follow. Commissioning of the ramp to 3.5 TeV went smoothly and the first (unsqueezed) collisions were established on 30 March. Squeeze commissioning then successfully reduced β* to 2 m in all four IPs.

With June came the decision to go for bunches of nominal intensity, i.e. around 10¹¹ protons per bunch (see table below). This involved an extended commissioning period and subsequent operation with beams of up to 50 or so widely separated bunches. The next step was to increase the number of bunches further. This required the move to bunch trains with 150 ns between bunches and the introduction of well defined beam-crossing angles in the interaction regions to avoid parasitic collisions. There was also a judicious back-off in the squeeze to a β* of 3.5 m. These changes necessitated setting up the tertiary collimators again and recommissioning the process of injection, ramp and squeeze – but provided a good opportunity to bed-in the operational sequence.

A phased increase in total intensity followed, with operational and machine protection validation performed before each step up in the number of bunches. Each increase was followed by a few days of running to check system performance. The proton run for the year finished with beams of 368 bunches of around 1.2 × 10¹¹ protons per bunch and a peak luminosity of 2.1 × 10³² cm⁻² s⁻¹. The total integrated luminosity for both ATLAS and CMS in 2010 was around 0.04 fb⁻¹.

The beam energy remained at 3.5 TeV in 2011 and the year saw exploitation combined with exploration of the LHC’s performance limits. The campaign to increase the number of bunches in the machine continued with tests with a 50 ns bunch spacing. An encouraging performance led to the decision to run with 50 ns. A staged ramp-up in the number of bunches ensued, reaching 1380 – the maximum possible with a bunch spacing of 50 ns – by the end of June. The LHC’s performance was increased further by reducing the emittances of the beams that were delivered by the injectors and by gently increasing the bunch intensity. The result was a peak luminosity of 2.4 × 10³³ cm⁻² s⁻¹ and some healthy delivery rates that topped 90 pb⁻¹ in 24 hours.

The next step up in peak luminosity in 2011 followed a reduction in β* in ATLAS and CMS from 1.5 m to 1 m. Smaller beam size at an IP implies bigger beam sizes in the neighbouring inner triplet magnets. However, careful measurements had revealed a better-than-expected aperture in the interaction regions, opening the way for this further reduction in β*. The lower β* and increases in bunch intensity eventually produced a peak luminosity of 3.7 × 10³³ cm⁻² s⁻¹, beyond expectations at the start of the year. ATLAS and CMS had each received around 5.6 fb⁻¹ by the end of proton–proton running for 2011.

An increase in beam energy to 4 TeV marked the start of operations in 2012 and the decision was made to stay at a 50 ns bunch spacing with around 1380 bunches. The aperture in the interaction regions, together with the use of tight collimator settings, allowed a more aggressive squeeze to β* of 0.6 m. The tighter collimator settings shadow the inner triplet magnets more effectively and allow the measured aperture to be exploited fully. The price to pay was increased sensitivity to orbit movements – particularly in the squeeze – together with increased impedance, which as expected had a clear effect on beam stability.

Beam envelopes

Peak luminosity soon came close to its highest for the year, although there were determined and long-running attempts to further improve performance. These were successful to a certain extent and revealed some interesting issues at high bunch and total beam intensity. Although never debilitating, instabilities were a recurring problem and there were phases when they cut into operational efficiency. Integrated luminosity rates, however, were generally healthy at around 1 fb⁻¹ per week. This allowed a total of about 23 fb⁻¹ to be delivered to each of ATLAS and CMS during a long operational year with the proton–proton run extended until December.

Apart from the delivery of high instantaneous and integrated proton–proton luminosity to ATLAS and CMS, the LHC team also satisfied other physics requirements. These included lead–lead runs in 2010 and 2011, which delivered 9.7 and 166 μb⁻¹, respectively, at an energy of 3.5Z TeV (where Z is the atomic number of lead). Here the clients were ALICE, ATLAS and CMS. A process of luminosity levelling at around 4 × 10³² cm⁻² s⁻¹ via transverse separation with a tilted crossing angle enabled LHCb to collect 1.2 fb⁻¹ and 2.2 fb⁻¹ of proton–proton data in 2011 and 2012, respectively. ALICE enjoyed some sustained proton–proton running in 2012 at around 5 × 10³⁰ cm⁻² s⁻¹, with collisions between enhanced satellite bunches and the main bunches. There was also a successful β* = 1 km run for TOTEM and the ATLAS forward detectors. This allowed the first LHC measurement in the Coulomb-nuclear interference region. Last, the three-year operational period culminated in a successful proton–lead run at the start of 2013, with ALICE, ATLAS, CMS and LHCb all taking data.

One of the main features of operation in 2011 and 2012 was the high bunch intensity and lower-than-nominal emittances offered by the excellent performance of the injector chain of Booster, Proton Synchrotron and Super Proton Synchrotron. The bunch intensity had been up to 150% of nominal with 50 ns bunch spacing, while the normalized emittance going into collisions had been around 2.5 mm mrad, i.e. 67% of nominal. Happily, the LHC proved to be capable of absorbing these brighter beams, notably in terms of beam–beam effects. The cost to the experiments was high pile-up, an issue that was handled successfully.

The table shows the values for the main luminosity-related parameters at peak performance of the LHC from 2010 to 2012 and the design values. It shows that, even though the beam size is naturally larger at lower energy, the LHC has achieved 77% of design luminosity at four-sevenths of the design energy, with a β* of 0.6 m (compared with the design value of 0.55 m) and half of the nominal number of bunches.
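As a rough cross-check of that figure (a back-of-the-envelope estimate using the luminosity formula quoted earlier, with the design values of the LHC design report – 1.15 × 10¹¹ protons per bunch, 2808 bunches, ε_n = 3.75 mm mrad, β* = 0.55 m, 7 TeV – and representative 2012 peak values taken from the text – 1.7 × 10¹¹, 1380 bunches, 2.5 mm mrad, 0.6 m, 4 TeV):

\frac{\mathcal{L}}{\mathcal{L}_{\mathrm{design}}} \sim \left(\frac{1.7}{1.15}\right)^{2}\times\frac{1380}{2808}\times\frac{3.75}{2.5}\times\frac{0.55}{0.6}\times\frac{4}{7} \approx 0.85 ,

before the geometric crossing-angle reduction, which is not included here and brings the estimate down towards the quoted 77%.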

LHC operations in 2010–2013

Operational efficiency has been good, with the record integrated luminosity in a week reaching 1.3 fb⁻¹. This is the result of outstanding system performance combined with fundamental characteristics of the LHC. The machine has a healthy single-beam lifetime before collisions of more than 300 hours and on the whole enjoys good vacuum conditions in both warm and cold regions. With a peak luminosity of around 7 × 10³³ cm⁻² s⁻¹ at the start of a fill, the luminosity lifetime is initially in the range of 6–8 hours, increasing as the fill develops. There is minimal drift in beam overlap during physics data-taking and the beams are generally stable.

At the same time, a profound understanding of the beam physics and a good level of operational control have been established. The magnetic aspects of the machine are well understood thanks to modelling with FiDel (the Field Description for the LHC). A long and thorough magnet-measurement and analysis campaign meant that the deployed settings produced a machine with a linear optics that is close to the nominal model. Measurement and correction of the optics have aligned machine and model to an unprecedented level.

A robust operational cycle is now well established, with the steps of pre-cycle, injection, 450 GeV machine, ramp, squeeze and collide mostly sequencer-driven. A strict pre-cycling regime means that the magnetic machine is remarkably reproducible. Importantly, the resulting orbit stability – or the ability to correct back consistently to a reference – means that the collimator set-up remains good for a year’s run.

Considering the size, complexity and operating principles of the LHC, its availability has generally been good. The 257-day run in 2012 included around 200 days dedicated to proton–proton physics, with 36.5% of the time being spent in stable beams. This is encouraging for a machine that is only three years into its operational lifetime. Of note is the high availability of the critical LHC cryogenics system. In addition, many other systems also have crucial roles in ensuring that the LHC can run safely and efficiently.

In general the LHC beam-dump system (LBDS) worked impeccably, causing no major operational problems or long downtime. Beam-based set-up and checks are performed at the start of the operational year. The downstream protection devices form part of the collimator hierarchy and their proper positioning is verified periodically. The collimation system maintained a high proton-cleaning efficiency and semi-automatic tools have improved collimator set-up times during alignment.

The overall protection of the machine is ensured by rigorous follow-up, qualification and monitoring. The beam drives a subtle interplay of the LBDS, the collimation system and protection devices, which rely on a well defined aperture, orbit and optics for guaranteed safe operation. The beam dump, injection and collimation teams pursued well organized programmes of set-up and validation tests, permitting routine collimation of 140 MJ beams without a single quench of superconducting magnets from stored beams.

The beam instrumentation delivered excellent performance overall. Facilitating a deep understanding of the machine, it paved the way for the impressive improvement in performance during the three-year run. The power converters performed superbly, with good tracking between reference and measured currents and between the converters around the ring. There was good performance from the key RF systems. Software and controls benefited from a coherent approach, early deployment and tests on the injectors and transfer lines.


There have inevitably been issues arising during the exploitation of the LHC. Initially, single-event upsets caused by beam-induced radiation to electronics in the tunnel were a serious cause of inefficiency. This problem had been foreseen and a sustained programme of mitigation measures, which included relocation of equipment, additional shielding and further equipment upgrades, resulted in a reduction of premature beam dumps from 12 per fb⁻¹ to 3 per fb⁻¹ in 2012. By contrast, an unforeseen problem concerned unidentified falling objects (UFOs) – dust particles falling into the beam causing fast, localized beam-loss events. These have now been studied and simulated but might still cause difficulties after the move to higher energy and a bunch spacing of 25 ns following the current long shutdown.

Beam-induced heating has been an issue. Essentially, all cases turned out to be localized and connected with nonconformities, either in design or installation. Design problems have affected the injection-protection devices and the mirror assemblies of the synchrotron-radiation telescopes, while installation problems have occurred in a low number of vacuum assemblies.

Beam instabilities dogged operations during 2012. The problems came with the push in bunch intensity, with the peak going into stable beams reaching around 1.7 × 10¹¹ protons per bunch, i.e. the ultimate bunch intensity. Other contributory factors included increased impedance from the tight collimator settings, smaller-than-nominal emittance and operation with low chromaticity during the first half of the run.

A final beam issue concerns the electron cloud. Here, electrons emitted from the vacuum chamber are accelerated by the electromagnetic fields of the circulating bunches. On impacting the vacuum chamber they cause further emission of one or more electrons and there is a potential avalanche effect. The effect is strongly bunch-spacing dependent and although it has not been a serious issue with the 50 ns beam, there are potential problems with 25 ns.

In summary, the LHC is performing well and a huge amount of experience and understanding has been gained during the past three years. There is good system performance, excellent tools and reasonable availability following targeted consolidation. Good luminosity performance has been achieved by harnessing the beam quality from injectors and fully exploiting the options in the LHC. This overall performance is the result of a remarkable amount of effort from all of the teams involved.

This article is based on “The first years of LHC operation for luminosity production”, which was presented at IPAC13.

Strangely beautiful dimuons

Display of a Bs → μμ

Since its birth, the Standard Model of particle physics has proved to be remarkably successful at describing experimental measurements. Through the prediction and discovery of the W and Z bosons, as well as the gluon, it continues to reign. The recent discovery of a Higgs boson with a mass of 126 GeV by the ATLAS and CMS experiments indicates that the last piece of this jigsaw puzzle has been put into place. Yet, despite its incredible accuracy, the Standard Model must be incomplete: it offers no explanation for the cosmological evidence of dark matter, nor does it account for the dominance of matter over antimatter in the universe. The quest for what might lie beyond the Standard Model forms the core of the LHC physics programme, with ATLAS and CMS systematically searching for the direct production of a plethora of new particles that have been predicted by various proposed extensions to the model.

Complementary methods

As a consequence of its excellent performance – including collisions at much higher energies than previously achieved and record integrated luminosities – the LHC also provides complementary and elegant approaches to finding evidence of physics beyond the Standard Model, namely precision measurements and studies of rare decays. Through Heisenberg’s uncertainty principle, quantum loops can appear in the diagrams that describe Standard Model decays, and these loops can be influenced by particles that appear in neither the initial nor the final state. This experimentally well established general concept opens a window on the effects of undiscovered particles or of other new physics in well known Standard Model processes. Because these effects are predicted to be small, the proposed new-physics extensions remain consistent with existing observations. Now, the high luminosity of the LHC and the unprecedented precision of the experiments are allowing these putative effects to be probed at levels never reached before. Indeed, this is the prime field of study of the LHCb experiment, which is dedicated to the precision measurement of decays involving the heavy beauty (b) and charm (c) quarks. The general-purpose LHC experiments can also compete in these studies, especially where the final states involve muons.


A rare confluence of factors makes the decay of beauty mesons into dimuon (μ⁺μ⁻) final states an ideal place to search for this sort of evidence for physics beyond the Standard Model. The decays of B0 (a beauty antiquark, b̄, and a down quark, d) and Bs (a b̄ and a strange quark, s) to μ⁺μ⁻ are suppressed in the Standard Model yet several proposed extensions predict a significant enhancement (or an even stronger suppression) of their branching fractions. A measurement of the branching fraction for either of these decays that is inconsistent with the Standard Model’s prediction would be a clear sign of new physics – a realization that sparked off a long history of searches. For the past 30 years, a dozen experiments at nearly as many particle colliders have looked for these elusive decays and established limits that have improved by five orders of magnitude as the sensitivities approach the values predicted by the Standard Model (figure 2). Last November, LHCb found the first clear evidence for the decay Bs → μμ, at the 3.5σ level. Now both the CMS and LHCb collaborations have updated their results for these decays.

Behind the seemingly simple decay topology hides a tricky experimental search aimed at finding a few signal events in an overwhelming background: only three out of every thousand million Bs mesons are expected to decay to μμ, with the rate being even lower for the B0. The challenge is therefore to collect a huge data sample while efficiently retaining the signal and filtering out the background.

Several sources contribute to the large background. B hadrons decay semi-leptonically to final states with one genuine muon, a neutrino and additional charged tracks that could be misidentified as muons, therefore mimicking the signal’s topology. Because the emitted neutrino escapes with some energy, these decays create a dimuon peak that is shifted to a lower mass than that of the parent particle. The decays Λb → pμν form a dangerous background of this kind because the Λb is heavier than the B mesons, so these decays can contribute to the signal region. Two-track hadronic decays of B0 or Bs mesons also add to the background if both tracks are mistaken for muons. This “peaking background” – fortunately rare – is tricky because it exhibits a shape that is similar to that which is expected for the signal events. The third major background contribution arises from events with two genuine muons produced by unrelated sources. This “combinatorial” background leads to a continuous dimuon invariant-mass distribution, overlapping with the B0 and Bs mass windows, which is reduced by various means as discussed below.

The first hurdle to cross in finding the rare signal events is to identify potential candidates during the bursts of proton–proton collisions in the detectors. Given the peak luminosities reached in 2012 (up to 8 × 10³³ cm⁻² s⁻¹), the challenge for CMS was to select by fast trigger the most interesting 400 events a second for recording on permanent storage and prompt reconstruction, with around 10 per second reserved for the B → μμ searches. With its smaller event size, LHCb could afford a higher output rate from its trigger, recording several kilohertz with a significant fraction dedicated to dimuon signatures.

Results of Bs → μμ

The events selected by the trigger are then filtered according to the properties of the two reconstructed muons to reject as much background as possible while retaining as many signal events as possible. In particular, hadrons misidentified as muons are suppressed strongly through stringent selection criteria applied on the number of hits recorded in the tracking and muon systems, on the quality of the track fit and on the kinematics of the muons. In LHCb, information from the ring-imaging Cherenkov detectors further suppresses misidentification rates. Additional requirements ensure that the two oppositely charged muons have a common origin that is consistent with being the decay point of a (long-lived) B meson. The events are also required to have candidate tracks that are well isolated from other tracks in the detector, which are likely to have originated from unrelated particles or other proton–proton collisions (pile-up). This selection is made possible by the precise measurements of the momentum and impact parameter provided by the tracking detectors in both experiments. The good dimuon-mass resolution (0.6% at mid-rapidity for CMS and 0.4% for LHCb) limits the amount of combinatorial background that remains under the signal peaks. Figure 1 shows event displays from the two experiments, each including a displaced dimuon compatible with being a B → μμ decay.

The final selection of events in both experiments is made with a multivariate “boosted decision tree” (BDT) algorithm, which discriminates signal events from background by considering many variables. Instead of applying selection criteria independently on the measured value of each variable, the BDT combines the full information, accounting for all of the correlations to maximize the separation of signal from background. CMS applies a loose selection on the BDT discriminant to ensure a powerful background rejection at the expense of a small loss in signal efficiency. Both experiments categorize events in bins of the BDT discriminant. LHCb has a higher overall efficiency, which together with the larger B cross-section in the forward region compensates for the lower integrated luminosity, so the final sensitivity is similar for both experiments.
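For readers unfamiliar with the technique, the short sketch below shows the general idea of a BDT-based selection in Python with scikit-learn. It is purely illustrative: the two input variables and their toy distributions are invented for this example, and the experiments use their own analysis frameworks with many more input variables.

```python
# Illustrative sketch of a boosted-decision-tree selection (not the
# collaborations' actual code): train on labelled "signal" and "background"
# candidates, then cut or bin on the continuous BDT score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 20000
# Two invented toy variables per candidate: an isolation-like quantity and a
# pointing-angle-like quantity; signal is more isolated and points better
# back to the B-decay vertex than combinatorial background.
sig = np.column_stack([rng.normal(0.85, 0.10, n), rng.normal(0.02, 0.01, n)])
bkg = np.column_stack([rng.normal(0.55, 0.20, n), rng.normal(0.06, 0.03, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = signal, 0 = background

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=7)
bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3,
                                 learning_rate=0.1)
bdt.fit(X_train, y_train)

# The BDT output (a score per candidate) is what the experiments bin into
# categories of increasing signal purity before fitting the mass spectra.
scores = bdt.predict_proba(X_test)[:, 1]
print("mean score, signal candidates:    ", scores[y_test == 1].mean())
print("mean score, background candidates:", scores[y_test == 0].mean())
```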

The observable that is sensitive to potential new-physics contributions is the rate at which the B0 or Bs mesons decay to μμ, which requires a knowledge of the total numbers of B0 and Bs mesons that are produced. To minimize measurement uncertainties, these numbers are evaluated by reconstructing events where B mesons decay through the J/ψK channel, with the J/ψ decaying to two muons. This signature has many features in common with the signal being sought but has a much higher and well known branching fraction. The last ingredient required is the fraction of Bs produced relative to B+ or B0 mesons, which LHCb has determined in independent analyses. This procedure provides the necessary “normalization” without using the total integrated luminosity or the beauty production cross-section. LHCb also uses events with the decay B0 → K+π to provide another handle on the normalization.
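Schematically – the article does not spell out the efficiency terms, so this is only an illustrative form – the normalization amounts to a relation of the type

\mathrm{BF}(B_{s}\to\mu\mu) = \mathrm{BF}(B^{+}\to J/\psi K^{+})\,\mathrm{BF}(J/\psi\to\mu\mu)\;\frac{f_{u}}{f_{s}}\;\frac{\varepsilon_{\mathrm{norm}}}{\varepsilon_{\mathrm{sig}}}\;\frac{N_{\mathrm{sig}}}{N_{\mathrm{norm}}} ,

where N_sig and N_norm are the observed signal and normalization yields, ε_sig and ε_norm the corresponding total efficiencies and f_u/f_s the ratio of B+ to Bs production fractions mentioned in the text.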

Results

Both collaborations use unbinned maximum-likelihood fits to the dimuon-mass distribution to measure the branching fractions. The combinatorial background shape in the signal region is evaluated from events observed in the dimuon-mass sidebands, while the shapes of the semileptonic and peaking backgrounds are based on Monte Carlo simulation and are validated with data. The magnitude of the peaking background is constrained from measurements of the fake muon rate using data control samples, while the levels of semileptonic and combinatorial backgrounds are determined from the fit together with the signal yields.
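Schematically, for a single category with one signal and one background component (the real fits have several background components and many categories fitted simultaneously), such an extended unbinned likelihood takes the form

\mathcal{L}(N_{S},N_{B}) = \frac{e^{-(N_{S}+N_{B})}}{N!}\prod_{i=1}^{N}\Big[\,N_{S}\,S(m_{i}) + N_{B}\,B(m_{i})\,\Big] ,

where the m_i are the measured dimuon masses, S and B the normalized signal and background mass shapes, and N_S and N_B the yields to be determined. The significances quoted below then follow, via Wilks’ theorem, from the ratio of the likelihood maximized with a free signal yield to that with the yield fixed to zero, Z ≈ √(2 ln(L_max/L_0)).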

Both collaborations use all good data collected in 2011 and 2012. For CMS, this corresponds to samples of 5 fb⁻¹ and 20 fb⁻¹, respectively, while for LHCb the corresponding luminosities are 1 fb⁻¹ and 2 fb⁻¹. The data are divided into categories based on the BDT discriminant, where the more signal-like categories provide the highest sensitivity. In the fit to the CMS data, events with both muons in the central region of the detector (the “barrel”) are separated from the others (the “forward” regions). Given their excellent dimuon-mass resolution, the barrel samples are particularly sensitive to the signal. All of the resulting mass distributions (12 in total for CMS and eight for LHCb) are then simultaneously fit to measure the B0 → μμ and Bs → μμ branching fractions, yielding the results that are shown in figure 3.

For both experiments, the fits reveal an excess of Bs → μμ events over the background-only expectation, corresponding to a branching fraction BF(Bs → μμ) = (3.0 +1.0/–0.9) × 10⁻⁹ in CMS and (2.9 +1.1/–1.0) × 10⁻⁹ in LHCb, where the uncertainties reflect statistical and systematic effects. These measurements have significances of 4.3σ and 4.0σ, respectively, evaluated from the ratio of the likelihood obtained with a free Bs → μμ branching fraction to that obtained by fixing BF(Bs → μμ) = 0. The results have been combined to give BF(Bs → μμ) = (2.9 ± 0.7) × 10⁻⁹ (CMS+LHCb).

Both CMS and LHCb reported this long-sought observation at the EPS-HEP conference in Stockholm in July and in back-to-back publications submitted to Physical Review Letters (CMS collaboration 2013, LHCb collaboration 2013).

The combined measurement of Bs → μμ by CMS and LHCb is consistent with the Standard Model’s prediction, BF(Bs → μμ) = (3.6 ± 0.3) × 10⁻⁹, showing that the model continues to resist attempts to see through its thick veil. The same fits also measure the B0 → μμ branching fraction. They reveal no significant evidence of this decay and set upper limits at the 95% confidence level of 1.1 × 10⁻⁹ (CMS) and 0.74 × 10⁻⁹ (LHCb). These limits are also consistent with the Standard Model, although the measurement fails to reach the precision required to probe the prediction.

While the observation of a decay that has been sought for so long and by so many experiments is a thrilling discovery, it is also a bittersweet outcome. Much of the appeal of the Bs → μμ decay-channel was in its potential to reveal cracks in the Standard Model – something that the measurement has so far failed to provide. However, the story is far from over. As the LHC continues to provide additional data, the precision with which its experiments can measure these key branching fractions will improve steadily and increased precision means more stringent tests of the Standard Model. While these results show that deviations from the expectations cannot be large, even a small deviation – if measured with sufficient precision – could reveal physics beyond the Standard Model.

Additionally, the next LHC run will provide the increase in sensitivity that the experiments need to measure B0 → μμ rates at the level of the Standard Model’s prediction. New physics could be lurking in that channel. Indeed, the prediction for the ratio of the Bs → μμ and B0 → μμ decay rates is well known, so a precise measurement of this quantity is a long-term goal of the LHC experiments. And even in the scenario where the Standard Model continues its undefeated triumphant path, theories that go beyond it must still describe the existing data. Tighter experimental constraints on these branching fractions would be powerful in limiting the viable extensions to the Standard Model and could point towards what might lie beyond today’s horizon in high-energy physics. With the indisputable observation of Bs → μμ decays, experimental particle physics has reached a major milestone in a 30-year-long journey. This refreshing news motivates the LHC experimental teams to continue forward into the unknown.

Do fast radio bursts signal black-hole formation?

Astronomers using the 64-m Parkes radio telescope in Australia have detected radio transients with a duration of only 4 ms. These fast radio bursts (FRBs) are a recently discovered class of mysterious sources that are found at cosmological distances. Now, two theorists suggest that FRBs are the last signal emitted by neutron stars as they collapse to form black holes.

In 2007, Duncan Lorimer and colleagues reported finding an unexpected burst of radio emission in archival observations of the Parkes telescope (CERN Courier November 2007 p10). The distance to the burst was calculated to be far outside the Galaxy at cosmological distances and hence the inferred luminosity was huge – similar to that of a quasar. This first radio “hyperburst” is now called the Lorimer burst, or FRB 010724.

Now, an international team led by Dan Thornton of the University of Manchester and the Australia Telescope National Facility has identified four additional FRBs. All bursts are found to be at cosmological distances as inferred from their dispersion measure, which is related to the integrated density of free electrons along the line of sight to the source. The free electrons in an ionized medium disperse the radio waves, causing a time delay in the arrival of the burst that increases towards longer wavelengths. The measured delays for the four FRBs suggest a strong contribution from the intergalactic medium and that the sources are several thousand million light-years away, corresponding to cosmological redshifts, z, of between 0.45 and 0.96. This is significantly more than for the Lorimer burst (z ∼ 0.12) and confirms the cosmological origin of these events.
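For reference, the dispersive delay follows the standard radio-astronomy relation

\Delta t \simeq 4.15\,\mathrm{ms}\,\left(\frac{\mathrm{DM}}{\mathrm{pc\,cm^{-3}}}\right)\left(\frac{\nu}{\mathrm{GHz}}\right)^{-2}, \qquad \mathrm{DM}=\int_{0}^{d} n_{e}\,\mathrm{d}l ,

so the burst arrives progressively later at lower frequencies. It is the size of the measured dispersion measure DM, well in excess of the Milky Way’s expected contribution along those lines of sight, that points to an intergalactic path length and hence a cosmological distance.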

The detection of these bursts at a great distance implies a strong instantaneous luminosity. However, because the FRBs last only for milliseconds, the total energy released in radio waves is relatively modest – of the order of 10³¹–10³³ J. While this is about the energy output of the Sun in days to months, it is more than 10 orders of magnitude less than the energy released by a gamma-ray burst (GRB) or a supernova explosion (∼10⁴⁴ J). With four FRBs detected in the same survey it is also possible to estimate the event rate. Thornton and colleagues find a rate of about 10,000 per day for the full sky – about one burst every 10 s. Given the number of galaxies in the probed volume, they find an event rate of one burst per thousand years per galaxy. This is about 10 times less frequent than core-collapse supernovae (CERN Courier January/February 2006 p10) but a thousand times more frequent than GRBs.

With only these characteristics and the fact that there is no known transient detected simultaneously at other wavelengths, it is challenging to speculate on the nature of the objects producing FRBs. The brevity of the emission indicates small objects, typically neutron stars. A possible candidate is a magnetar – a highly magnetized neutron star that can emit powerful gamma-ray flashes (CERN Courier June 2005 p12).

Another interesting scenario has recently been proposed by Heino Falcke and Luciano Rezzolla, affiliated to institutes in the Netherlands and Germany. They claim that there should be a population of neutron stars that are stable against gravitational collapse only because they are spinning quickly. Because of magnetic braking, their spin rate decreases slowly over thousands to millions of years until it reaches a critical value at which the star collapses into a black hole. According to the “no hair” theorem, black holes cannot keep the strong magnetic field of the neutron star. The magnetosphere would be released during the collapse and result in a radio burst. FRBs would therefore be the last “cry” of neutron stars succumbing to their own gravitational pull.

Surprising studies in multiplicity

One of the key ways of looking into what happens when high-energy hadrons collide is to measure the relationship between the number, or multiplicity, of particles produced and their momentum transverse to the direction of the colliding beams. The results cast light on processes ranging from the interactions of individual partons (quarks and gluons) to the collective motion of hot, dense matter containing hundreds of partons. The ALICE experiment is investigating effects across the range of possibilities, using data collected with proton–proton (pp), proton–lead (pPb) and lead–lead collisions (PbPb) in the LHC – and the results are showing some surprises.

A correlation between the average transverse momentum 〈pT〉 and the charged particle multiplicity Nch was first observed at CERN’s Spp̄S collider and has since been measured in pp(p̄) collisions over a range of centre-of-mass energies, culminating recently at the LHC. The strong correlation observed led to a change in paradigm in the modelling of such collisions, with the proposal of mechanisms that go beyond independent parton–parton collisions.

In pp collisions, one way to understand the production of high multiplicities is through multiple parton interactions, but the incoherent superposition of such interactions would lead to the same 〈pT〉 for different values of multiplicity. The observation of a strong correlation thus led to the introduction, within the models of the PYTHIA event simulator, of colour reconnections between hadronizing strings. In this mechanism, which can be interpreted as a collective final-state effect, strings from independent parton interactions do not independently produce hadrons, but fuse before hadronization. This leads to fewer, but more energetic, hadrons. Other models that employ similar mechanisms of collective behaviour also describe the data.


In PbPb collisions, high-multiplicity events are the result of a superposition of (single) parton interactions taking place in a large number of nucleon–nucleon collisions. In this case, substantial rescattering of constituents is thought to lead to a redistribution of the particle spectrum, with most particles being part of a locally thermalized medium that exhibits collective, hydrodynamic-type, behaviour. The moderate increase of 〈pT〉 seen in PbPb collisions (shown in figure 1 for Nch around 10 or larger) is thus usually attributed to collective flow.

Now, the first measurements by ALICE of two-particle correlations in the intermediate system of pPb collisions have sparked an intense debate about the role of initial- and final-state effects. The pPb data on 〈pT〉 indeed exhibit features of both pp and PbPb collisions, at low and high multiplicities, respectively. However, the saturation trend of 〈pT〉 versus Nch is less pronounced in pPb collisions than in PbPb and at high multiplicities leads to a much higher value of 〈pT〉 than in PbPb. Is this nevertheless a fingerprint of collective effects in pPb collisions? Predictions that incorporate collective effects within the hadron interaction model EPOS describe the data well, but alternative explanations, based on initial-state effects (gluon saturation), have also been put forward and are being tested by these data (ALICE collaboration 2013a).

Other recent measurements of particle production in proton–nucleus collisions have shown unexpected behaviour that is reminiscent of quark–gluon plasma (QGP) signatures. But what could cause such behaviour and is a QGP the only possible explanation? To answer this in more detail, it is important to separate particle species, as collective phenomena should follow an ordering in mass. To this end, ALICE has measured the transverse-momentum spectra of identified particles in pPb collisions at √sNN = 5.02 TeV and their dependence on multiplicity (ALICE collaboration 2013b).

The measurements show that the identified particle spectra become progressively harder with multiplicity, just as in PbPb collisions, where the hardening is more pronounced for particles of higher mass. In heavy-ion collisions, this mass ordering is interpreted as a sign of a collective radial expansion of the system. To check if such an idea describes the observations, a blast-wave parameterization can be used. This assumes a locally thermalized medium that undergoes a collective expansion in a common velocity field, followed by an instantaneous common freeze-out.
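For the record, the commonly used form of this parameterization (the Schnedermann–Sollfrank–Heinz blast wave, quoted here only schematically) writes the transverse-mass spectrum of each species as

\frac{\mathrm{d}N}{m_{T}\,\mathrm{d}m_{T}} \propto \int_{0}^{R} r\,\mathrm{d}r\; m_{T}\, I_{0}\!\left(\frac{p_{T}\sinh\rho(r)}{T_{\mathrm{kin}}}\right) K_{1}\!\left(\frac{m_{T}\cosh\rho(r)}{T_{\mathrm{kin}}}\right), \qquad \rho(r)=\tanh^{-1}\beta_{T}(r),

with a common kinetic freeze-out temperature T_kin and transverse-velocity profile β_T(r) fitted simultaneously to all particle species – it is this shared velocity field that produces the mass-dependent hardening discussed above.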


As figure 2 shows, the blast-wave fit describes the spectra well at low pT, where hydrodynamics-like behaviour should dominate. The description fails at higher momenta, however, where non-thermal components should contribute significantly. But are QGP-like interpretations such as this one unique in describing these measurements? The colour-reconnection mechanism present in PYTHIA, discussed above, leads qualitatively to similar features to those observed in the data.

The presence of flow and of a QGP in high multiplicity pPb collisions is thus not ruled out, but since other non-QGP effects could mimic collective phenomena, further investigation is needed. Nevertheless, these results are certainly a crucial step towards a better comprehension not only of pPb collisions but also of high-energy collisions involving nuclei in general.

Charmless baryonic B decays

The LHCb collaboration has made the first sightings of the decay of B mesons into two baryons containing no charm quarks. While the collaboration has previously reported on multibody baryonic B decays, these are its first results on the rare two-body charmless modes and will help to address open questions concerning baryon formation in B decays.


Baryonic decays of B mesons were studied extensively by the BaBar and Belle experiments at SLAC and KEK, respectively. The measured branching fractions are typically in the range 10⁻⁶–10⁻⁴, with charmless modes at the low end of this range and those with charm having larger branching fractions. Decays with double-charm final states have branching fractions up to 10⁻³ in some cases, which is a surprisingly large value. The channel B+ → ppK+ was the first charmless baryonic B-meson decay mode to be seen, in 2002 (Belle collaboration 2002). Soon after, Belle struck gold again with the first observation of a two-body baryonic B decay, B0 → Λcp, which manifestly has charm (Belle collaboration 2003). However, there were no signs of charmless two-body baryonic decays of B mesons until now.

The suppression of low-multiplicity compared with higher-multiplicity decay modes is a striking feature of B decays to baryons that is not replicated by their two-body and three-body decays to mesons. It is also a key to the theoretical understanding of the dynamics behind these types of decays.


The LHCb collaboration used the 1.0 fb⁻¹ data sample collected in 2011 to study the proton–antiproton spectra with or without an extra light meson – a pion or a kaon. Figure 1 shows the invariant mass distribution of ppK+ candidates in the pK+ mass window 1.44–1.585 GeV/c², where a B+ → ppK+ signal is visible. The inset shows the pK+ invariant mass distribution near the threshold for B-signal candidates weighted to remove the non-B+ → ppK+ decay background.

The analysis reveals a clear Λ(1520) resonance, with the branching fraction for the decay chain B+ → pΛ(1520) → ppK+ measured to be close to 4 × 10⁻⁷ (LHCb collaboration 2013a). With a statistical significance exceeding 5σ, the result constitutes the first observation of a two-body charmless baryonic B decay, B+ → pΛ(1520).

Figure 2 shows a fit from a related analysis, searching for the decay B0 → pp (LHCb collaboration 2013b). An excess of B0 → pp candidates with respect to background expectations is observed with a statistical significance of 3.3σ, corresponding to a branching fraction BF(B0 → pp) = (1.47 +0.71/–0.53) × 10⁻⁸. No significant signal is observed for Bs → pp but the current analysis improves the previous bound on the branching fraction by three orders of magnitude.

GERDA sets new limits on neutrinoless double beta decay

The GERDA collaboration has obtained new strong limits for neutrinoless double beta decay, which tests if neutrinos are their own antiparticles.

The GERDA (GERmanium Detector Array) experiment, which is operated at the underground INFN Laboratori Nazionali del Gran Sasso, is looking for double beta decay processes in the germanium isotope ⁷⁶Ge, both with and without the emission of neutrinos. For ⁷⁶Ge, normal beta decay is energetically forbidden, but the simultaneous conversion of two neutrons, with the emission of two electrons and two antineutrinos, is possible. This has been measured by GERDA with unprecedented precision with a half-life of about 2 × 10²¹ years, making it one of the rarest decays ever observed. However, if neutrinos are Majorana particles, neutrinoless double beta decay should also occur, at an even lower rate. In this case, the antineutrino from one beta decay is absorbed as a neutrino by the second beta-decaying neutron, which is possible if the neutrino is its own antiparticle.
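If the neutrinoless mode is indeed mediated by the exchange of light Majorana neutrinos – the standard interpretation, assumed in this sketch – its rate is tied to an effective neutrino mass:

\left(T_{1/2}^{0\nu}\right)^{-1} = G^{0\nu}\,\big|M^{0\nu}\big|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}}, \qquad \langle m_{\beta\beta}\rangle=\Big|\sum_{i}U_{ei}^{2}\,m_{i}\Big| ,

where G⁰ν is a phase-space factor, M⁰ν the nuclear matrix element and U the neutrino mixing matrix – so a lower limit on the half-life translates into an upper limit on ⟨m_ββ⟩.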

In GERDA, germanium crystals are both source and detector. ⁷⁶Ge has an abundance of about 8% in natural germanium and its fraction was therefore enriched more than 10-fold before the special detector crystals were grown. To help to minimize the backgrounds from environmental radioactivity, the GERDA detector crystals and the surrounding detector parts have been carefully selected and processed. In addition, the detectors are located in the centre of a huge vessel filled with extremely clean liquid argon, lined by ultrapure copper, which in turn is surrounded by a 10-m diameter tank filled with high-purity water. Last, but not least, it is all located underground below 1400 m of rock. The combination of all of these techniques has made it possible to reduce the background to unprecedented levels.

Data taking started in autumn 2011 using eight detectors of 2 kg each. Subsequently, five additional detectors were commissioned. Until recently, the signal region was blinded and the researchers focused on optimizing the data-analysis procedures. The experiment has now completed its first phase, with 21 kg-years of accumulated data. The analysis, in which all calibrations and cuts had been defined before the data in the signal region were processed, revealed no signal of neutrinoless double beta decay in ⁷⁶Ge, which leads to the world’s best lower limit for the half-life of 2.1 × 10²⁵ years. Combined with information from other experiments, this result rules out an earlier claim of a signal made by another group.

The next steps for GERDA will be to add new detectors, effectively doubling the amount of ⁷⁶Ge. Data taking will then continue in a second phase after some further improvements are implemented to achieve even stronger background suppression.

• GERDA is a European collaboration with scientists from 19 research institutes or universities in Germany, Italy, Russia, Switzerland, Poland and Belgium.

T2K observes νμ→νe definitively

The first candidate νe event

The international T2K collaboration chose the EPS-HEP2013 meeting in Stockholm as the forum to announce its definitive observation of the transformation of muon-neutrinos to electron-neutrinos, νμ→νe.

In 2011, the collaboration announced the first signs of this process – at the time a new type of neutrino oscillation. Now with 3.5 times more data, T2K has firmly established the transformation at a 7.5σ significance level.

In the T2K experiment, a νμ beam is produced in the Japan Proton Accelerator Research Complex (J-PARC) in Tokai on the east coast of Japan. The beam – monitored by a near detector in Tokai – is aimed at the Super-Kamiokande detector, which lies underground in Kamioka near the west coast, 295 km away. Analysis of the data from Super-Kamiokande reveals that there are more νe (a total of 28 events) than would be expected (4.6 events) without the transformation process.
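In the standard three-flavour picture, the appearance probability at T2K’s baseline is driven – to leading order, neglecting matter effects and the CP-violating phase – by

P(\nu_{\mu}\to\nu_{e}) \simeq \sin^{2}\theta_{23}\,\sin^{2}2\theta_{13}\,\sin^{2}\!\left(\frac{1.27\,\Delta m^{2}_{32}\,[\mathrm{eV^{2}}]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),

and with L = 295 km and a beam energy of around 0.6 GeV the experiment sits close to the first oscillation maximum.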

Observation of this type of neutrino oscillation opens the way to new studies of charge-parity (CP) violation in neutrinos, which may be linked to the domination of matter over antimatter in the present-day universe. The T2K collaboration expects to collect 10 times more data in the near future, including data with an antineutrino beam for studies of CP violation.

In announcing the discovery, the collaboration paid tribute to the unyielding and tireless effort by the J-PARC staff and management to deliver high-quality beam to T2K after the devastating earthquake in eastern Japan in March 2011. The earthquake caused severe damage to the accelerator complex and abruptly halted the data-taking run of the T2K experiment.

• The T2K experiment was constructed and is operated by an international collaboration, which currently consists of more than 400 physicists from 59 institutions in 11 countries: Canada, France, Germany, Italy, Japan, Poland, Russia, Switzerland, Spain, UK and the US.

EPS-HEP2013: these are good times for physics

Stockholm, with its many stretches of water, islands and old town, provided an attractive setting for the 2013 International Europhysics Conference on High-Energy Physics, EPS-HEP2013, on 18–24 July. Hosted by the KTH (Royal Institute of Technology) and Stockholm University, the conference centred on a busy programme of parallel and plenary sessions.

Like particle physics itself, EPS-HEP has a global reach, with people attending from Asia and the Americas, as well as from Europe. This year there were some 750 participants, including many young people who presented results in both parallel and poster sessions. As many as 440 speakers and more than 100 presenters of posters brought news from a host of experiments around the world, ranging from those at particle accelerators and colliders to others deep underground and in space.


Coming just one year after the announcement of the discovery of a new boson at CERN’s LHC, the conference provided a showcase for the latest results from the ATLAS and CMS experiments, as well as from Fermilab’s Tevatron. Together, they confirm the new particle as a Higgs boson, compatible with the Standard Model, and are making progress in pinning down its properties. Other measurements from the LHC and the Tevatron continue to test the Standard Model, as in the search for rare decay modes. The CMS and LHCb collaborations presented results on the decay Bs → μμ, two years after the CDF collaboration reported a first measurement, in slight tension with the Standard Model, at EPS-HEP2011 in Grenoble. CMS and LHCb now observe this decay at more than 4σ, with a branching fraction that is in good agreement with the Standard Model, therefore closing a potential window on new physics (Strangely beautiful dimuons).

All four of the large LHC collaborations – ALICE, ATLAS, CMS and LHCb – presented results in the dedicated sessions on ultrarelativistic heavy ions, which also featured presentations of measurements from the Relativistic Heavy-Ion Collider at Brookhaven. First results from the proton–lead run at the LHC are yielding surprises, including some intriguing similarities with findings in lead–lead collisions (Charmless baryonic B decays).


Beyond the Standard Model, the worldwide search for dark matter has progressed with experiments that are becoming increasingly precise, gaining a factor of 10 in sensitivity every two years. There are also improved results from experiments at the intensity frontier, in the study of neutrinos and in particle astrophysics. Highlights here included the T2K collaboration’s updated measurement with improved background rejection, which now indicates electron-neutrino appearance at a significance of 7σ (T2K observes νμ→νe definitively). Other news included results from the GERDA experiment, which sets a new lower limit on the half-life for neutrinoless double-beta decay of 2.1 × 10²⁵ years.

Other sessions looked to the continuing health of the field, with presentations of studies on novel ideas for future particle accelerators and detection techniques. These topics also featured in the special session for the European Committee for Future Accelerators, which looked at future developments in the context of the update of the European Strategy for Particle Physics.

An important highlight of the conference was the awarding of the European Physical Society High Energy and Particle Physics Prize to the ATLAS and CMS collaborations “for the discovery of a Higgs boson, as predicted by the Brout-Englert-Higgs mechanism”, and to Michel Della Negra, Peter Jenni and Tejinder Virdee, “for their pioneering and outstanding leadership roles in the making of the ATLAS and CMS experiments”. François Englert and Peter Higgs were there in person to present the prizes and to take part in a press conference together with the prizewinners. Spokespersons Dave Charlton and Joe Incandela accepted the prizes on behalf of ATLAS and CMS, respectively.

Wrapping up the conference in a summary talk, Sergio Bertolucci, CERN’s director for research and computing, noted that it had brought together many beautiful experimental results for comparison with precise theoretical predictions. “These are lucky times for physics,” he concluded, with experiments and theory providing an “unprecedented convergence of the extremes of scales around a common set of questions”.

• For details on all the talks see http://eps-hep2013.eu. A longer report will appear in a future edition of the CERN Courier.

ALICE goes to Stockholm and Birmingham

Second Fourier coefficient

The ALICE collaboration had a significant presence at two recent major conferences: the 2013 European Physical Society Conference on High-Energy Physics (EPS-HEP 2013) in Stockholm (EPS-HEP2013: these are good times for physics), and the 14th Topical Conference on Strangeness in Heavy Flavour Production in Heavy-Ion Collisions – Strangeness in Quark Matter 2013 (SQM2013) – which took place on 22–27 July at the University of Birmingham in the UK.

The many contributed talks and plenary presentations, in particular at SQM2013, highlighted new results from the proton–lead (pPb) data recorded in early 2013. While this run was initially intended to provide control data sets, several unexpected, and currently unexplained, results have been observed.


Most intriguingly, both the spectra of identified particles and charged-hadron correlations in high-multiplicity pPb events reveal signals suggestive of collective flow, which are similar to those observed in heavy-ion collisions, as the figure shows. These mass-dependent phenomena do not arise trivially in the colour-glass-condensate (gluon-saturation) framework that describes the initial state of the colliding nuclei at the relevant small values of Bjorken-x.

Also in pPb collisions, ALICE’s measurements of minimum-bias spectra for a variety of hadronic species and jets reveal no strong deviations from the expectations of the scaled number of nucleon–nucleon (binary) collisions. This confirms that the striking suppressions observed so far for all final-state hadrons in lead–lead (PbPb) collisions are a specific feature of quark and/or gluon energy loss via interactions with the quark–gluon plasma (QGP).

Presentations also covered updates on both soft (low pT) and hard (high pT and heavy flavour) probes of PbPb collisions. Higher-precision results on nuclear-modification factors and elliptic flow – including measurements on heavy quarks – as a function of the event centrality gave the most detailed picture to date of partonic interactions with the QGP. The measurements of D mesons also indicate that the initial density and temperature of the QGP are so high that even the heavy charm quarks thermalize with the QGP before hadronization. Interestingly, the J/ψ results reveal much less suppression than at Brookhaven’s Relativistic Heavy-Ion Collider, suggesting that significant late-stage regeneration of these quarkonia states occurs as a result of the initial copious production of charm quarks in heavy-ion collisions at the LHC.

Last, the high-precision soft physics results from the PbPb data underscored the potential significance of a hadronic re-scattering phase at the end of the produced medium’s evolution at the LHC. This phase has not previously been considered important when predicting signatures of the QGP, but it must now be accounted for to model accurately the full dynamics of a heavy-ion collision at the LHC.

There was lively debate at both conferences about the possible interpretations of all of these interesting new results, continuing well after the talks were over. Future studies were proposed that should help to unravel the origin of these intriguing phenomena observed in both pPb and PbPb collisions.

The LHC’s first long run

From the first 3.5 TeV collisions in March 2010 to the start of the first long shutdown in March 2013, the LHC went through three years of improving performance. This led in 2012 to the discovery of a Higgs boson, which made headlines around the world and brought many accolades to CERN, including the 2013 EPS-HEPP prize (EPS-HEP2013: these are good times for physics). This issue takes a look behind the scenes at what underpinned the successful operation of the LHC during this first long run. With thanks to Theresa Harrison, Warwick University, for her editorial work with the authors of these articles. Thanks also to Jesse Karjalainen, IOP Publishing, for his work on the design of what will be his last issue of CERN Courier as he heads for pastures new after six years.
