
Safeguarding the superconducting magnets

The total electromagnetic energy stored in the LHC superconducting magnets is about 10,000 MJ, which is more than an order of magnitude greater than in the nominal stored beams. Any uncontrolled release of this energy presents a danger to the machine. One way in which this can occur is through a magnet quench, so the LHC employs a sophisticated system to detect quenches and protect against their harmful effects.

The magnets of the LHC are superconducting if the temperature, the applied magnetic induction and the current density are below a critical set of interdependent values – the critical surface (figure 1). A quench occurs if the limits of the critical surface are exceeded locally and the affected section of magnet coil changes from a superconducting to a normal conducting state. The resulting drastic increase in electrical resistivity causes Joule heating, further increasing the temperature and spreading the normal conducting zone through the magnet.

An uncontrolled quench poses a number of threats to a superconducting magnet and its surroundings. High temperatures can destroy the insulation material or even result in a meltdown of superconducting cable: the energy stored in one dipole magnet can melt up to 14 kg of cable. The excessive voltages can cause electric discharges that could further destroy the magnet. In addition, high Lorentz forces and temperature gradients can cause large variations in stress and irreversible degradation of the superconducting material, resulting in a permanent reduction of its current-carrying capability.

The LHC main superconducting dipole magnets achieve magnetic fields of more than 8 T. There are 1232 main bending dipole magnets, each 15 m long, that produce the required curvature for proton beams with energies up to 7 TeV. Both the main dipole and the quadrupole magnets in each of the eight sectors of the LHC are powered in series. Each main dipole circuit includes 154 magnets, while the quadrupole circuits consist of 47 or 51 magnets, depending on the sector. All superconducting components, including bus-bars and current leads as well as the magnet coils, are vulnerable to quenching under adverse conditions.

The LHC employs sophisticated magnet protection, the so-called quench-protection system (QPS), both to safeguard the magnetic circuits and to maximize beam availability. The effectiveness of the magnet-protection system is dependent on the timely detection of a quench, followed by a beam dump and rapid disconnection of the power converter and current extraction from the affected magnetic circuit. The current decay rate is determined by the inductance, L, and resistance, R, of the resulting isolated circuit, with a discharge time constant of τ = L/R. For the purposes of magnet protection, reducing the current discharge time can be viewed as equivalent to the extraction and dissipation of stored magnetic energy. This is achieved by increasing the resistance of both the magnet and its associated circuit.
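As a rough illustration of these relationships, the sketch below evaluates E = ½LI² and τ = L/R for a single magnet circuit. The inductance, current and dump-resistor values are assumptions chosen only to give order-of-magnitude numbers, not official LHC parameters.

```python
# Illustrative estimate of the stored energy and discharge time constant
# for a superconducting magnet circuit. All values are assumptions for
# illustration only, not official LHC parameters.

L_circuit = 15.0    # total circuit inductance (H), assumed
I_nominal = 11.0e3  # operating current (A), assumed
R_dump = 0.075      # dump-resistor resistance (ohm), assumed

energy_MJ = 0.5 * L_circuit * I_nominal**2 / 1e6  # E = L*I^2/2
tau_s = L_circuit / R_dump                        # tau = L/R

print(f"Stored energy ~ {energy_MJ:.0f} MJ")
print(f"Discharge time constant ~ {tau_s:.0f} s")
```

With these assumed values the circuit stores several hundred megajoules and discharges with a time constant of a few minutes, consistent with the timescales described below.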

Additional resistance in the magnet is created by using quench heaters to heat up large fractions of the coil and spread the quench over the entire magnet. This dissipates the stored magnetic energy over a larger volume and results in lower hot-spot temperatures. The resistance in the circuit is increased by switching-in a dump resistor, which extracts energy from the circuit (figure 2). As soon as one magnet quenches, the dump resistor is used to extract the current from the chain. The size of the resistor is chosen such that the current does not decrease so quickly as to induce large eddy-current losses, which would cause further magnets in the chain to quench.

Detection and mitigation

A quench in the LHC is detected by monitoring the resistive voltage across the magnet, which rises as the quench appears and propagates. However, the total measured voltage also includes an inductive component, which is driven by the magnet current ramping up or down. The resistive-voltage signal is extracted reliably from the total measured voltage by detection systems with inductive-voltage compensation. In the case of fast-ramping corrector magnets with large inductive voltages, it is more difficult to detect a resistive voltage because of the low signal-to-noise ratio; higher threshold voltages have to be used and a quench is therefore detected later. Following the detection and validation of a quench, the beam is aborted and the power converter is switched off. The time between the start of a quench and quench validation (i.e. activating the beam and powering interlocks) must be independent of the selected method of protection.
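Conceptually, the compensation amounts to subtracting the inductive term L·dI/dt from the measured voltage and comparing the remainder with a threshold. A minimal sketch, with invented numbers rather than real QPS settings, is:

```python
# Minimal sketch of inductive-voltage compensation for quench detection:
# the resistive component is the measured voltage minus the inductive
# part L*dI/dt. All numbers are illustrative assumptions, not actual
# QPS settings.

def resistive_voltage(u_measured, inductance, di_dt):
    """Return the resistive part of the magnet voltage."""
    return u_measured - inductance * di_dt

U_MEAS = 0.35     # measured voltage across the magnet (V), assumed
L_MAG = 0.10      # magnet inductance (H), assumed
DI_DT = 2.0       # current ramp rate (A/s), assumed
THRESHOLD = 0.1   # detection threshold (V), assumed

u_res = resistive_voltage(U_MEAS, L_MAG, DI_DT)
if u_res > THRESHOLD:
    print(f"Quench candidate: U_res = {u_res:.2f} V exceeds the threshold")
```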

Creating a parallel path to the magnet via a diode allows the circuit current to by-pass the quenching magnet (figure 2). As soon as the increasing voltage over the quenched coil reaches the threshold voltage of the diode, the current starts to transfer into the diode. The magnet is by-passed by its diode and discharges independently. The diode must withstand the radiation environment, carry the current of the magnet chain for a sufficient time and provide a turn-on voltage high enough to hold off during the ramp-up of the current. The LHC’s main magnets use cold diodes, which are mounted within the cryostat. These have a significantly larger threshold voltage than diodes that operate at room temperature – but the threshold can be reached sooner if quench heaters are fired.

The sequence of events following quench detection and validation can be summarized as follows:

• 1. The beam is dumped and the power converter turned off.

• 2. The quench-heaters are triggered and the dump-resistor is switched-in.

• 3. The current transfers into the dump resistor and starts to decrease.

• 4. Once the quench heaters take effect, the voltage over the quenched magnet rises and switches on the cold diode.

• 5. The magnet now starts to be by-passed in the chain and discharges through its internal resistance.

• 6. The cold diode heats up and the forward voltage decreases.

• 7. The current decrease induces eddy-current losses in the magnet windings, yielding enhanced quench propagation.

• 8. The current of the quenched magnet transfers fully into the cold diode.

• 9. The magnet chain is completely switched off a few hundred seconds after the quench detection.

QPS in practice

The QPS must perform with high reliability while preserving high LHC beam availability. Satisfying these partly conflicting demands requires careful design to optimize the sensitivity of the system. While failure to detect and control a quench can clearly have a significant impact on the integrity of the accelerator, QPS settings that are too tight may increase the number of false triggers significantly. As well as causing additional downtime of the machine, false triggers – which can result from electromagnetic perturbations, such as network glitches and thunderstorms – can contribute to the deterioration of the magnets and quench heaters by subjecting them to unnecessary spurious quenches and fast de-excitation.

One of the important challenges for the QPS is coping with the conditions experienced during a fast power abort (FPA) following quench validation. Switching off the power converter and activating the energy extraction to the dump resistors causes electromagnetic transients and high voltages. The sensitivity of the QPS to spurious triggers from electromagnetic transients caused a number of multiple-magnet quench events in 2010 (figure 3). Following simulation studies of transient behaviour, a series of modifications were implemented to reduce the transient signals from a FPA. A delay was introduced between switching off the power converter and switching-in the dump resistors, with “snubber” capacitors installed in parallel to the switches to reduce electrical arcing and related transient voltage waves in the circuit (these are not shown in figure 2). These improvements resulted in a radically reduced number of spurious quenches in 2011 – only one such quench was recorded, in a single magnet, and this was probably due to an energetic neutron, a so-called “single-event upset” (SEU). The reduction in falsely triggered quenches between 2010 and 2011 was the most significant improvement in the QPS performance and impacted directly on the decision to increase the beam energy to 4 TeV in 2012.


To date, there have been no beam-induced quenches with circulating beams above injection current. This operational experience shows that the beam-loss monitor thresholds are low enough to cause a beam dump before beam losses cause a quench. However, the QPS had to act on several occasions in the event of real quenches in the bus-bars and current leads, demonstrating real protection in operation. The robustness of the system was evident on 18 August 2011 when the LHC experienced a total loss of power at a critical moment for the magnet circuits. At the time, the machine was ramping up and close to maximum magnet current with high beam intensity: no magnet tripped and no quenches occurred.

A persistent issue for the vast and complex electronics systems used in the QPS is exposure to radiation. In 2012 some of the radiation-to-electronics problems were partly mitigated by the development of electronics more tolerant to radiation. The number of trips per inverse femtobarn owing to SEUs was reduced by about 60% from 2011 to 2012 thanks to additional shielding and firmware upgrades. The downtime from trips is also being addressed by automating the power cycling to reset electronics after a SEU. While most of the radiation-induced faults are transparent to LHC operation, the number of beam dumps caused by false triggers remains an issue. Future LHC operation will require improvements in radiation-tolerant electronics, coupled with a programme of replacement where necessary.

Future operation

During the LHC run in 2010 and 2011 with a beam energy of 3.5 TeV, the normal operational parameters of the dipole magnets were well below the critical surface required for superconductivity. The main dipoles operated at about 6 kA and 4.2 T, while the critical current at this field is about 35 kA, resulting in a safe temperature margin of 4.9 K. However, this value will become 1.4 K for future LHC operation at 7 TeV per beam. The QPS must therefore be prepared for operation with tighter margins. Moreover, at higher beam energy quench events will be considerably larger, involving up to 10 times more magnetic energy. This will result in longer recuperation times for the cryogenic system. There is also a higher likelihood of beam-induced quench events and quenches induced by conditions such as faster ramp rates and FPAs.

The successful implementation of magnet protection depends on a high-performance control and data acquisition system, automated software analysis tools and highly trained personnel for technical interventions. These have all contributed to the very good performance during 2010–2013. The operational experience gained during this first long run will allow the QPS to meet the challenges of the next run.

The challenge of keeping cool

Figure 1: distribution of the cryoplants.

The LHC is one of the coldest places on Earth, with superconducting magnets – the key defining feature – that operate at 1.9 K. While there might be colder places in other laboratories, none compares to the LHC’s scale and complexity. The cryogenic system that provides the cooling for the superconducting magnets, with their total cold mass of 36,000 tonnes, is the largest and most advanced of its kind. It has been running continuously at some level since January 2007, providing stalwart service and achieving an availability equivalent to more than 99% per cryogenic plant.

The task of keeping the 27-km-long collider at 1.9 K is performed by helium that is cooled to its superfluid state in a huge refrigeration system. While the niobium-titanium alloy in the magnet coils would be superconducting if normal liquid helium were used as the coolant, the performance of the magnets is greatly enhanced by lowering their operating temperature and by taking advantage of the unique properties of superfluid helium. At atmospheric pressure, helium gas liquefies at around 4.2 K but on further cooling it undergoes a second phase change at about 2.17 K and becomes a superfluid. Among many remarkable properties, superfluid helium has a high thermal conductivity, which makes it the coolant of choice for the refrigeration and stabilization of large superconducting systems.

The LHC consists of eight 3.3-km-long sectors with sites for access shafts to services on the surface at the ends of each sector. Five of these sites are used to locate the eight separate cryogenic plants, each dedicated to serving one sector (figure 1). An individual cryoplant consists of a pair of refrigeration units: one, the 4.5 K refrigerator, provides a cooling-capacity equivalent to 18 kW at 4.5 K; while the other, the 1.8 K refrigeration unit, provides a further cooling capacity of 2.4 kW at 1.8 K. Therefore, each of the eight cryoplants must distribute and recover kilowatts of refrigeration across a distance of 3.3 km, to be achieved with a temperature change of less than 0.1 K.


Four of the 4.5 K refrigerators were recovered from the second phase of the Large Electron–Positron collider (LEP2), where they were used to cool its superconducting radiofrequency cavities. These “recycled” units have been upgraded to operate on the LHC sectors that have a lower demand for refrigeration. The four high-load sectors are instead cooled by new 4.5 K refrigerators. The refrigeration capacity needed to cool the 4500 tonnes of material in each sector of the LHC is enormous and can be produced only by using liquid nitrogen. Consequently, each 4.5 K refrigerator is equipped with a 600-kW liquid-nitrogen pre-cooler. This is used to cool a flow of helium down to 80 K while the corresponding sector is cooled before being filled with helium – a procedure that takes just under a month. Using only helium in the tunnel considerably reduces the risk of oxygen deficiency in the case of an accidental release.

The 4.5 K refrigeration system works by first compressing the helium gas and then allowing it to expand. During expansion it cools by losing energy through mechanical turbo-expanders that run at up to 140,000 rpm on helium-gas bearings. Each of the refrigerators consists of a helium-compressor station equipped with systems to remove oil and water, as well as a vacuum-insulated cold box (60 tonnes) where the helium is cooled, purified and liquefied. The compressor station supplies compressed helium gas at 20 bar and room temperature. The cold box houses the heat exchangers and turbo-expanders that provide the cooling capacities necessary to liquefy the helium at 4.5 K. The liquid helium then passes to the 1.8 K refrigeration unit, where the cold-compressor train decreases its saturation pressure and consequently its saturation temperature down to 1.8 K. Each cryoplant is equipped with a fully automatic process-control system that manages about 1000 inlets and outlets per plant. The system takes a total electrical input power of 32 MW and reaches an equivalent cooling capacity of 144 kW at 4.5 K – enough to provide almost 40,000 litres of liquid helium per hour.
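A quick thermodynamic cross-check of these figures is instructive. Taking the quoted 144 kW at 4.5 K and 32 MW of input power, and assuming a 300 K ambient temperature (an assumption of this sketch), the plants reach roughly 30% of the ideal Carnot efficiency:

```python
# Rough thermodynamic check of the quoted cryoplant figures.
# The 32 MW input and 144 kW at 4.5 K are from the article; the
# 300 K ambient temperature is an assumption.

T_warm = 300.0   # ambient temperature (K), assumed
T_cold = 4.5     # cold temperature (K)
Q_cold = 144e3   # equivalent cooling capacity (W), from the article
P_input = 32e6   # total electrical input power (W), from the article

# Minimum (Carnot) work needed to extract Q_cold at T_cold
W_carnot = Q_cold * (T_warm - T_cold) / T_cold
fraction_of_carnot = W_carnot / P_input

print(f"Ideal (Carnot) input power: {W_carnot/1e6:.1f} MW")
print(f"Fraction of Carnot efficiency achieved: {fraction_of_carnot:.0%}")
```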

Figure: the compressor unit.

In the LHC tunnel, a cryogenic distribution line runs alongside the machine. It consists of eight continuous cryostats, each about 3.2 km long and housing four (or five) headers to supply and recover helium, with temperatures ranging from 4 K to 75 K. A total of 310 service modules, of 44 different types, feed the machine. These contain sub-cooling heat exchangers, all of the cryogenic control valves for the local cooling loops and 1–2 cold pressure-relief valves that protect the magnet cold masses, as well as monitoring and control instrumentation. Overall, the LHC cryogenic system contains about 60,000 inlets and outlets, which are managed by 120 industrial-process logic controllers that implement more than 4000 PID control loops.
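As an illustration of what one of those 4000 control loops does, here is a generic discrete PID regulator; the gains, set-point and measurement are invented for the example and bear no relation to the real industrial-controller tuning:

```python
# Generic discrete PID controller, of the kind used in the ~4000
# cryogenic control loops. Illustrative sketch only; it does not
# reproduce the actual industrial controller logic or tuning.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a temperature reading towards 1.9 K (gains assumed)
controller = PID(kp=2.0, ki=0.1, kd=0.05, setpoint=1.9, dt=1.0)
valve_command = controller.update(measurement=2.05)
print(f"valve command: {valve_command:.3f}")
```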

Operational aspects

The structure of the group involved with the operation of the LHC’s cryogenics has evolved naturally since the installation phase, so maintaining experience and expertise. Each cryogenically independent sector of the LHC and its pair of refrigerators is managed by its own dedicated team for process control and operational procedures. In addition, there are three support teams for mechanics, electricity-instrumentation controls and metrology instrumentation. A further team handles scheduling, maintenance and logistics, including cryogen distribution. Continuous monitoring and technical support is provided by personnel who are on shift “24/7” in the CERN Control Centre and on standby duties. This constant supervision is necessary because any loss of availability for the cryogenic system impacts directly on the availability of the accelerator. Furthermore, the response to cryogenic failures must be rapid to mitigate the consequences of loss of cooling.

In developing a strategy for operating the LHC it was necessary to define the overall availability criteria. Rather than using every temperature sensor or liquid-helium level as a separate interlock to the magnet powering and therefore the beam permit, it made more sense to organize the information according to the modularity of the magnet-powering system. As a result, each magnet-powering subsector is attributed a pair of cryogenic signals: “cryo-maintain” (CM) and “cryo-start” (CS). The CM signal corresponds to any condition that requires a slow discharge of the magnets concerned, while the CS signal has more stringent conditions to enable powering to take place with sufficient margins for a smooth transition to the CM threshold. A global CM signal is defined as the combination of all of the required conditions for the eight sectors. This determines the overall availability of the LHC cryogenics.
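In logical terms the global signal is simply the conjunction of the per-subsector conditions; a minimal sketch, with hypothetical subsector names and a much-reduced data structure, is:

```python
# Sketch of how per-subsector cryogenic conditions could be combined
# into the global "cryo-maintain" (CM) signal described above. The
# subsector names and data structure are illustrative assumptions.

subsector_conditions = {
    "sector12_arc": {"cryo_maintain": True, "cryo_start": True},
    "sector23_arc": {"cryo_maintain": True, "cryo_start": False},
    # ... one entry per magnet-powering subsector
}

global_cryo_maintain = all(
    s["cryo_maintain"] for s in subsector_conditions.values()
)
print("Global CM:", global_cryo_maintain)
```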

During the first LHC beams in 2009, the system immediately delivered availability of 90% despite there being no means of dealing quickly with identified faults. These were corrected whenever possible during the routine technical stops of the accelerator and the end-of-year stops. The main issues resolved during this phase were the elimination of two air leaks in sub-atmospheric circuits, the consolidation of all 1200 cooling valves for the current leads, and the consolidation of the 1200 electronic cards for temperature sensors that were particularly affected by energetic neutron impacts – so-called single-event upsets (SEUs).

Figure: availability of the cryogenic system, 2010–2012.

Since early operation for physics began in November 2009, the availability has been above 90% for more than 260 days per year. A substantial improvement occurred in 2012–2013 because of progress in the operation of the cryogenic system. The operation team undertook appropriate training that included the evaluation and optimization of operation settings. There were major improvements in handling utilities-induced failures. In particular, in the case of electrical-network glitches, fine-tuning the tolerance thresholds for the helium compressors and cooling-water stations represented half of the gain. A reduction in the time taken to recover nominal cryogenic conditions after failures also led to improved availability. The progress made during the past three years led to a reduction in the number of short stops, i.e. less than eight hours, from 140 to 81 per year. By 2012, the efforts of the operation and support teams had resulted in a global availability of 94.8%, corresponding to an equivalent availability of more than 99.3% for each of the eight cryogenically independent sectors.

In addition, the requirement to undertake an energy-saving programme contributed significantly to the improved availability and efficiency of the cryogenic system – and resulted in a direct saving of SwFr3 million a year. Efforts to improve efficiency have also focused on the consumption of helium. The overall LHC inventory comes to 136 tonnes of helium, with an additional 15 tonnes held as strategic storage to cope with urgent situations during operation. For 2010 and 2011, the overall losses remained high because of increased losses from the newly commissioned storage tanks during the first end-of-year technical stop. However, the operational losses were substantially reduced in 2011. Then, in 2012, the combination of a massive campaign to localize all detectable leaks – combined with the reduced operational losses – led to a dramatic improvement in the overall figure, nearly halving the losses.

Towards the next run

Thanks to the early consolidation work already performed while ramping up the LHC luminosity, no significant changes are being implemented to the cryogenic system during the first long shutdown (LS1) of the LHC. However, because it has been operating continuously since 2007, a full preventive-maintenance plan is taking place. A major overhaul of helium compressors and motors is being undertaken at the manufacturers’ premises. The acquisition of important spares for critical rotating machinery is already completed. Specific electronic units will be upgraded or relocated to cope with future radiation levels. In addition, identified leaks in the system must be repaired. The consolidation of the magnet interconnections – including the interface with the current leads – together with relocation of electronics to limit SEUs, will require a complete re-commissioning effort before cool-down for the next run.

The scheduled consolidation work – together with lessons learnt from the operational experience so far – will be key factors for the cryogenic system to maintain its high level of performance under future conditions at the LHC. The successful systematic approach to operations will continue when the LHC restarts at close to nominal beam energy and intensity. With greater heat loads corresponding to increased beam parameters and magnet currents, expectations are high that the cryogenic system will meet the challenge.

The LHC’s first long run

Figure: LHC performance in 2011.

Since the first 3.5 TeV collisions in March 2010, the LHC has had three years of improving integrated luminosity. By the time that the first proton physics run ended in December 2012, the total integrated proton–proton luminosity delivered to each of the two general-purpose experiments – ATLAS and CMS – had reached nearly 30 fb⁻¹ and enabled the discovery of a Higgs boson. ALICE, LHCb and TOTEM had also operated successfully and the LHC team was able to fulfil other objectives, including productive lead–lead and proton–lead runs.

Establishing good luminosity depends on several factors but the goal is to have the largest number of particles potentially colliding in the smallest possible area at a given interaction point (IP). Following injection of the two beams into the LHC, there are three main steps to collisions. First, the beam energy is ramped to the required level. Then comes the squeeze. This second step involves decreasing the beam size at the IP using quadrupole magnets on both sides of a given experiment. In the LHC, the squeeze process is usually parameterized by β* (the beam size at the IP is proportional to the square root of β*). The third step is to remove the separation bumps that are formed by local corrector magnets. These bumps keep the beams separated at the IPs during the ramp and squeeze.
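As a worked example of the β* dependence, using numbers quoted elsewhere in this article (4 TeV beams, a normalized emittance of about 2.5 μm and β* = 0.6 m) and the relation σ* = √(εβ*) with ε = εN/γ, the beam size at the IP comes out at roughly 19 μm. The arithmetic is sketched below; taking γ = E/m_p is an assumption of the sketch.

```python
import math

# Beam size at the interaction point from beta* and emittance:
# sigma* = sqrt(eps * beta*), with eps = eps_n / gamma.
# Parameter values are taken from numbers quoted in the article
# (4 TeV beams, eps_n ~ 2.5 um, beta* = 0.6 m); gamma = E/m_p is
# an assumption of this sketch.

E_beam_GeV = 4000.0
m_proton_GeV = 0.938
eps_n = 2.5e-6   # normalized emittance (m rad)
beta_star = 0.6  # beta* at the IP (m)

gamma = E_beam_GeV / m_proton_GeV
eps_geom = eps_n / gamma                   # geometric emittance (m rad)
sigma_star = math.sqrt(eps_geom * beta_star)

print(f"gamma = {gamma:.0f}, sigma* = {sigma_star*1e6:.0f} um")
```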

High luminosity translates into having many high-intensity particle bunches, an optimally focused beam size at the interaction point and a small emittance (a measure of the spread of the beam in transverse phase space). The three-year run saw relatively distinct phases in the increase of proton–proton luminosity, starting with basic commissioning then moving on through exploration of the limits to full physics production running in 2012.


The first year in 2010 was devoted to commissioning and establishing confidence in operational procedures and the machine protection system, laying the foundations for what was to follow. Commissioning of the ramp to 3.5 TeV went smoothly and the first (unsqueezed) collisions were established on 30 March. Squeeze commissioning then successfully reduced β* to 2 m in all four IPs.

With June came the decision to go for bunches of nominal intensity, i.e. around 10¹¹ protons per bunch (see table below). This involved an extended commissioning period and subsequent operation with beams of up to 50 or so widely separated bunches. The next step was to increase the number of bunches further. This required the move to bunch trains with 150 ns between bunches and the introduction of well defined beam-crossing angles in the interaction regions to avoid parasitic collisions. There was also a judicious back-off in the squeeze to a β* of 3.5 m. These changes necessitated setting up the tertiary collimators again and recommissioning the process of injection, ramp and squeeze – but provided a good opportunity to bed-in the operational sequence.

A phased increase in total intensity followed, with operational and machine protection validation performed before each step up in the number of bunches. Each increase was followed by a few days of running to check system performance. The proton run for the year finished with beams of 368 bunches of around 1.2 × 10¹¹ protons per bunch and a peak luminosity of 2.1 × 10³² cm⁻² s⁻¹. The total integrated luminosity for both ATLAS and CMS in 2010 was around 0.04 fb⁻¹.

The beam energy remained at 3.5 TeV in 2011 and the year saw exploitation combined with exploration of the LHC’s performance limits. The campaign to increase the number of bunches in the machine continued with tests with a 50 ns bunch spacing. An encouraging performance led to the decision to run with 50 ns. A staged ramp-up in the number of bunches ensued, reaching 1380 – the maximum possible with a bunch spacing of 50 ns – by the end of June. The LHC’s performance was increased further by reducing the emittances of the beams that were delivered by the injectors and by gently increasing the bunch intensity. The result was a peak luminosity of 2.4 × 10³³ cm⁻² s⁻¹ and some healthy delivery rates that topped 90 pb⁻¹ in 24 hours.

The next step up in peak luminosity in 2011 followed a reduction in β* in ATLAS and CMS from 1.5 m to 1 m. Smaller beam size at an IP implies bigger beam sizes in the neighbouring inner triplet magnets. However, careful measurements had revealed a better-than-expected aperture in the interaction regions, opening the way for this further reduction in β*. The lower β* and increases in bunch intensity eventually produced a peak luminosity of 3.7 × 10³³ cm⁻² s⁻¹, beyond expectations at the start of the year. ATLAS and CMS had each received around 5.6 fb⁻¹ by the end of proton–proton running for 2011.

An increase in beam energy to 4 TeV marked the start of operations in 2012 and the decision was made to stay at a 50 ns bunch spacing with around 1380 bunches. The aperture in the interaction regions, together with the use of tight collimator settings, allowed a more aggressive squeeze to β* of 0.6 m. The tighter collimator settings shadow the inner triplet magnets more effectively and allow the measured aperture to be exploited fully. The price to pay was increased sensitivity to orbit movements – particularly in the squeeze – together with increased impedance, which as expected had a clear effect on beam stability.

Figure: beam envelopes.

Peak luminosity soon came close to its highest for the year, although there were determined and long-running attempts to further improve performance. These were successful to a certain extent and revealed some interesting issues at high bunch and total beam intensity. Although never debilitating, instabilities were a recurring problem and there were phases when they cut into operational efficiency. Integrated luminosity rates, however, were generally healthy at around 1 fb⁻¹ per week. This allowed a total of about 23 fb⁻¹ to be delivered to both ATLAS and CMS during a long operational year with the proton–proton run extended until December.

Apart from the delivery of high instantaneous and integrated proton–proton luminosity to ATLAS and CMS, the LHC team also satisfied other physics requirements. These included lead–lead runs in 2010 and 2011, which delivered 9.7 and 166 μb⁻¹, respectively, at an energy of 3.5Z TeV (where Z is the atomic number of lead). Here the clients were ALICE, ATLAS and CMS. A process of luminosity levelling at around 4 × 10³² cm⁻² s⁻¹ via transverse separation with a tilted crossing angle enabled LHCb to collect 1.2 fb⁻¹ and 2.2 fb⁻¹ of proton–proton data in 2011 and 2012, respectively. ALICE enjoyed some sustained proton–proton running in 2012 at around 5 × 10³⁰ cm⁻² s⁻¹, with collisions between enhanced satellite bunches and the main bunches. There was also a successful β* = 1 km run for TOTEM and the ATLAS forward detectors. This allowed the first LHC measurement in the Coulomb-nuclear interference region. Last, the three-year operational period culminated in a successful proton–lead run at the start of 2013, with ALICE, ATLAS, CMS and LHCb all taking data.

One of the main features of operation in 2011 and 2012 was the high bunch intensity and lower-than-nominal emittances offered by the excellent performance of the injector chain of Booster, Proton Synchrotron and Super Proton Synchrotron. The bunch intensity had been up to 150% of nominal with 50 ns bunch spacing, while the normalized emittance going into collisions had been around 2.5 mm mrad, i.e. 67% of nominal. Happily, the LHC proved to be capable of absorbing these brighter beams, notably in terms of beam–beam effects. The cost to the experiments was high pile-up, an issue that was handled successfully.

The table shows the values for the main luminosity-related parameters at peak performance of the LHC from 2010 to 2012 and the design values. It shows that, even though the beam size is naturally larger at lower energy, the LHC has achieved 77% of design luminosity at four-sevenths of the design energy with a β* of 0.6 m (compared with the design value of 0.55 m) with half of the nominal number of bunches.
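For a rough cross-check, the standard round-beam luminosity formula, evaluated with parameter values quoted in this article for 2012 running (the revolution frequency, bunch charge and crossing-angle factor are assumptions of this sketch), reproduces the peak luminosity of around 7 × 10³³ cm⁻² s⁻¹ mentioned below:

```python
import math

# Rough luminosity estimate from the standard round-beam formula
# L = f_rev * n_b * N^2 * gamma * F / (4*pi*eps_n*beta*), using
# parameter values quoted in this article for 2012 running. The
# revolution frequency, bunch charge and crossing-angle factor F
# are assumptions of this sketch.

f_rev = 11245.0         # LHC revolution frequency (Hz), assumed
n_b = 1380              # number of bunches
N = 1.6e11              # protons per bunch, assumed 2012-like value
gamma = 4000.0 / 0.938  # relativistic gamma at 4 TeV
eps_n = 2.5e-6          # normalized emittance (m rad)
beta_star = 0.6         # beta* (m)
F = 0.8                 # geometric reduction from crossing angle, assumed

L = f_rev * n_b * N**2 * gamma * F / (4 * math.pi * eps_n * beta_star)
print(f"L ~ {L/1e4:.1e} cm^-2 s^-1")  # convert m^-2 s^-1 to cm^-2 s^-1
```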

LHC operations in 2010–2013

Operational efficiency has been good, with the record integrated luminosity per week reaching 1.3 fb⁻¹. This is the result of outstanding system performance combined with fundamental characteristics of the LHC. The machine has a healthy single-beam lifetime before collisions of more than 300 hours and on the whole enjoys good vacuum conditions in both warm and cold regions. With a peak luminosity of around 7 × 10³³ cm⁻² s⁻¹ at the start of a fill, the luminosity lifetime is initially in the range of 6–8 hours, increasing as the fill develops. There is minimal drift in beam overlap during physics data-taking and the beams are generally stable.

At the same time, a profound understanding of the beam physics and a good level of operational control have been established. The magnetic aspects of the machine are well understood thanks to modelling with FiDel (the Field Description for the LHC). A long and thorough magnet-measurement and analysis campaign meant that the deployed settings produced a machine with a linear optics that is close to the nominal model. Measurement and correction of the optics has aligned machine and model to an unprecedented level.

A robust operational cycle is now well established, with the steps of pre-cycle, injection, 450 GeV machine, ramp, squeeze and collide mostly sequencer-driven. A strict pre-cycling regime means that the magnetic machine is remarkably reproducible. Importantly, the resulting orbit stability – or the ability to correct back consistently to a reference – means that the collimator set-up remains good for a year’s run.

Considering the size, complexity and operating principles of the LHC, its availability has generally been good. The 257-day run in 2012 included around 200 days dedicated to proton–proton physics, with 36.5% of the time being spent in stable beams. This is encouraging for a machine that is only three years into its operational lifetime. Of note is the high availability of the critical LHC cryogenics system. In addition, many other systems also have crucial roles in ensuring that the LHC can run safely and efficiently.

In general the LHC beam-dump system (LBDS) worked impeccably, causing no major operational problems or long downtime. Beam-based set-up and checks are performed at the start of the operational year. The downstream protection devices form part of the collimator hierarchy and their proper positioning is verified periodically. The collimation system maintained a high proton-cleaning efficiency and semi-automatic tools have improved collimator set-up times during alignment.

The overall protection of the machine is ensured by rigorous follow-up, qualification and monitoring. The beam drives a subtle interplay of the LBDS, the collimation system and protection devices, which rely on a well defined aperture, orbit and optics for guaranteed safe operation. The beam dump, injection and collimation teams pursued well organized programmes of set-up and validation tests, permitting routine collimation of 140 MJ beams without a single quench of superconducting magnets from stored beams.

The beam instrumentation had great performance overall. Facilitating a deep understanding of the machine, it paved the way for the impressive improvement in performance during the three-year run. The power converters performed superbly, with good tracking between reference and measured currents and between the converters around the ring. There was good performance from the key RF systems. Software and controls benefited from a coherent approach, early deployment and tests on the injectors and transfer lines.


There have inevitably been issues arising during the exploitation of the LHC. Initially, single-event upsets caused by beam-induced radiation to electronics in the tunnel were a serious cause of inefficiency. This problem had been foreseen and a sustained programme of mitigation measures, which included relocation of equipment, additional shielding and further equipment upgrades, resulted in a reduction of premature beam dumps from 12 per fb⁻¹ to 3 per fb⁻¹ in 2012. By contrast, an unforeseen problem concerned unidentified falling objects (UFOs) – dust particles falling into the beam causing fast, localized beam-loss events. These have now been studied and simulated but might still cause difficulties after the move to higher energy and a bunch spacing of 25 ns following the current long shutdown.

Beam-induced heating has been an issue. Essentially, all cases turned out to be localized and connected with nonconformities, either in design or installation. Design problems have affected the injection-protection devices and the mirror assemblies of the synchrotron-radiation telescopes, while installation problems have occurred in a low number of vacuum assemblies.

Beam instabilities dogged operations during 2012. The problems came with the push in bunch intensity, with the peak going into stable beams reaching around 1.7 × 10¹¹ protons per bunch, i.e. the ultimate bunch intensity. Other contributory factors included increased impedance from the tight collimator settings, smaller-than-nominal emittance and operation with low chromaticity during the first half of the run.

A final beam issue concerns the electron cloud. Here, electrons emitted from the vacuum chamber are accelerated by the electromagnetic fields of the circulating bunches. On impacting the vacuum chamber they cause further emission of one or more electrons and there is a potential avalanche effect. The effect is strongly bunch-spacing dependent and although it has not been a serious issue with the 50 ns beam, there are potential problems with 25 ns.

In summary, the LHC is performing well and a huge amount of experience and understanding has been gained during the past three years. There is good system performance, excellent tools and reasonable availability following targeted consolidation. Good luminosity performance has been achieved by harnessing the beam quality from injectors and fully exploiting the options in the LHC. This overall performance is the result of a remarkable amount of effort from all of the teams involved.

This article is based on “The first years of LHC operation for luminosity production”, which was presented at IPAC13.

Strangely beautiful dimuons

Figure 1: display of a Bs → μμ candidate event.

Since its birth, the Standard Model of particle physics has proved to be remarkably successful at describing experimental measurements. Through the prediction and discovery of the W and Z bosons, as well as the gluon, it continues to reign. The recent discovery of a Higgs boson with a mass of 126 GeV by the ATLAS and CMS experiments indicates that the last piece of this jigsaw puzzle has been put into place. Yet, despite its incredible accuracy, the Standard Model must be incomplete: it offers no explanation for the cosmological evidence of dark matter, nor does it account for the dominance of matter over antimatter in the universe. The quest for what might lie beyond the Standard Model forms the core of the LHC physics programme, with ATLAS and CMS systematically searching for the direct production of a plethora of new particles that have been predicted by various proposed extensions to the model.

Complementary methods

As a consequence of its excellent performance – including collisions at much higher energies than previously achieved and record integrated luminosities – the LHC also provides complementary and elegant approaches to finding evidence of physics beyond the Standard Model, namely precision measurements and studies of rare decays. Through Heisenberg’s uncertainty principle, quantum loops can appear in the diagrams that describe Standard Model decays, and these loops can be influenced by particles that are absent from both the initial and final states. This experimentally well established concept opens a window onto the effects of undiscovered particles, or of other new physics, in well known Standard Model processes. Because these effects are predicted to be small, the proposed new-physics extensions remain consistent with existing observations. Now, the high luminosity of the LHC and the unprecedented precision of the experiments are allowing these putative effects to be probed at levels never reached in previous measurements. Indeed, this is the prime field of study of the LHCb experiment, which is dedicated to the precision measurement of decays involving heavy quarks, beauty (b) and charm (c). The general-purpose LHC experiments can also compete in these studies, especially where the final states involve muons.


A rare confluence of factors makes the decay of beauty mesons into dimuon (μ⁺μ⁻) final states an ideal place to search for this sort of evidence for physics beyond the Standard Model. The decays of B0 (a beauty antiquark, b̄, and a down quark, d) and Bs (a b̄ and a strange quark, s) to μ⁺μ⁻ are suppressed in the Standard Model, yet several proposed extensions predict a significant enhancement (or an even stronger suppression) of their branching fractions. A measurement of the branching fraction for either of these decays that is inconsistent with the Standard Model’s prediction would be a clear sign of new physics – a realization that sparked off a long history of searches. For the past 30 years, a dozen experiments at nearly as many particle colliders have looked for these elusive decays and established limits that have improved by five orders of magnitude as the sensitivities approach the values predicted by the Standard Model (figure 2). Last November, LHCb found the first clear evidence for the decay Bs → μμ, at the 3.5σ level. Now both the CMS and LHCb collaborations have updated their results for these decays.

Behind the seemingly simple decay topology hides a tricky experimental search aimed at finding a few signal events in an overwhelming background: only three out of every thousand million Bs mesons are expected to decay to μμ, with the rate being even lower for the B0. The challenge is therefore to collect a huge data sample while efficiently retaining the signal and filtering out the background.

Several sources contribute to the large background. B hadrons decay semi-leptonically to final states with one genuine muon, a neutrino and additional charged tracks that could be misidentified as muons, therefore mimicking the signal’s topology. Because the emitted neutrino escapes with some energy, these decays create a dimuon peak that is shifted to a lower mass than that of the parent particle. The decays Λb → pμν form a dangerous background of this kind because the Λb is heavier than the B mesons, so these decays can contribute to the signal region. Two-track hadronic decays of B0 or Bs mesons also add to the background if both tracks are mistaken for muons. This “peaking background” – fortunately rare – is tricky because it exhibits a shape that is similar to that which is expected for the signal events. The third major background contribution arises from events with two genuine muons produced by unrelated sources. This “combinatorial” background leads to a continuous dimuon invariant-mass distribution, overlapping with the B0 and Bs mass windows, which is reduced by various means as discussed below.

The first hurdle to cross in finding the rare signal events is to identify potential candidates during the bursts of proton–proton collisions in the detectors. Given the peak luminosities reached in 2012 (up to 8 × 10³³ cm⁻² s⁻¹), the challenge for CMS was to select by fast trigger the most interesting 400 events a second for recording on permanent storage and prompt reconstruction, with around 10 per second reserved for the B → μμ searches. With its smaller event size, LHCb could afford a higher output rate from its trigger, recording several kilohertz with a significant fraction dedicated to dimuon signatures.

Figure 3: results for Bs → μμ.

The events selected by the trigger are then filtered according to the properties of the two reconstructed muons to reject as much background as possible while retaining as many signal events as possible. In particular, hadrons misidentified as muons are suppressed strongly through stringent selection criteria applied on the number of hits recorded in the tracking and muon systems, on the quality of the track fit and on the kinematics of the muons. In LHCb, information from the ring-imaging Cherenkov detectors further suppresses misidentification rates. Additional requirements ensure that the two oppositely charged muons have a common origin that is consistent with being the decay point of a (long-lived) B meson. The events are also required to have candidate tracks that are well isolated from other tracks in the detector, which are likely to have originated from unrelated particles or other proton–proton collisions (pile-up). This selection is made possible by the precise measurements of the momentum and impact parameter provided by the tracking detectors in both experiments. The good dimuon-mass resolution (0.6% at mid-rapidity for CMS and 0.4% for LHCb) limits the amount of combinatorial background that remains under the signal peaks. Figure 1 shows event displays from the two experiments, each including a displaced dimuon compatible with being a B → μμ decay.
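The dimuon mass on which all of this hinges is simply the invariant mass of the two muon four-momenta, m² = (E₁ + E₂)² − |p₁ + p₂|². The following sketch shows the calculation with made-up momentum values:

```python
import math

# Dimuon invariant mass from the two reconstructed muon momenta:
# m^2 = (E1 + E2)^2 - |p1 + p2|^2. The momentum values below are
# made up purely to illustrate the calculation.

M_MU = 0.105658  # muon mass in GeV

def four_momentum(px, py, pz, mass=M_MU):
    e = math.sqrt(px**2 + py**2 + pz**2 + mass**2)
    return (e, px, py, pz)

def invariant_mass(p1, p2):
    e = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

mu_plus = four_momentum(10.2, -3.1, 25.0)
mu_minus = four_momentum(-8.5, 4.0, 18.3)
print(f"m(mu mu) = {invariant_mass(mu_plus, mu_minus):.3f} GeV")
```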

The final selection of events in both experiments is made with a multivariate “boosted decision tree” (BDT) algorithm, which discriminates signal events from background by considering many variables. Instead of applying selection criteria independently on the measured value of each variable, the BDT combines the full information, accounting for all of the correlations to maximize the separation of signal from background. CMS applies a loose selection on the BDT discriminant to ensure a powerful background rejection at the expense of a small loss in signal efficiency. Both experiments categorize events in bins of the BDT discriminant. LHCb has a higher overall efficiency, which together with the larger B cross-section in the forward region compensates for the lower integrated luminosity, so the final sensitivity is similar for both experiments.
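The BDT technique can be illustrated with a standard machine-learning library. In the sketch below the two features and the toy data are stand-ins for the real discriminating variables (vertex quality, isolation, impact parameter and so on); nothing here reproduces the collaborations' actual training.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy illustration of a BDT-based signal/background discriminant.
# The two features stand in for real analysis variables; the data
# are random numbers, not experimental events.

rng = np.random.default_rng(seed=1)
n = 5000
signal = np.column_stack([rng.normal(1.0, 0.5, n), rng.normal(2.0, 0.7, n)])
background = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.5, n)])

X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
bdt.fit(X, y)

# Events would then be categorized in bins of this discriminant
scores = bdt.predict_proba(X)[:, 1]
print("mean BDT score (signal-like sample):", scores[:n].mean())
```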

The observable that is sensitive to potential new-physics contributions is the rate at which the B0 or Bs mesons decay to μμ, which requires a knowledge of the total numbers of B0 and Bs mesons that are produced. To minimize measurement uncertainties, these numbers are evaluated by reconstructing events where B mesons decay through the J/ψK channel, with the J/ψ decaying to two muons. This signature has many features in common with the signal being sought but has a much higher and well known branching fraction. The last ingredient required is the fraction of Bs produced relative to B+ or B0 mesons, which LHCb has determined in independent analyses. This procedure provides the necessary “normalization” without using the total integrated luminosity or the beauty production cross-section. LHCb also uses events with the decay B0 → K⁺π⁻ to provide another handle on the normalization.
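Schematically – in a generic form rather than the exact expression used by the collaborations – the normalization relates the signal branching fraction to the yield of the reference channel:

```latex
\mathrm{BF}(B_s \to \mu\mu) =
\mathrm{BF}(B^+ \to J/\psi K^+)\;
\frac{N_{B_s \to \mu\mu}}{N_{B^+ \to J/\psi K^+}}\;
\frac{\epsilon_{B^+ \to J/\psi K^+}}{\epsilon_{B_s \to \mu\mu}}\;
\frac{f_u}{f_s}
```

where N are the observed yields, ε the total selection efficiencies and f_u/f_s the ratio of B⁺ to Bs production fractions mentioned above.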

Results

Both collaborations use unbinned maximum-likelihood fits to the dimuon-mass distribution to measure the branching fractions. The combinatorial background shape in the signal region is evaluated from events observed in the dimuon-mass sidebands, while the shapes of the semileptonic and peaking backgrounds are based on Monte Carlo simulation and are validated with data. The magnitude of the peaking background is constrained from measurements of the fake muon rate using data control samples, while the levels of semileptonic and combinatorial backgrounds are determined from the fit together with the signal yields.

Both collaborations use all good data collected in 2011 and 2012. For CMS, this corresponds to samples of 5 fb⁻¹ and 20 fb⁻¹, respectively, while for LHCb the corresponding luminosities are 1 fb⁻¹ and 2 fb⁻¹. The data are divided into categories based on the BDT discriminant, where the more signal-like categories provide the highest sensitivity. In the fit to the CMS data, events with both muons in the central region of the detector (the “barrel”) are separated from the others (the “forward” regions). Given their excellent dimuon-mass resolution, the barrel samples are particularly sensitive to the signal. All of the resulting mass distributions (12 in total for CMS and eight for LHCb) are then simultaneously fit to measure the B0 → μμ and Bs → μμ branching fractions, yielding the results that are shown in figure 3.

For both experiments, the fits reveal an excess of Bs → μμ events over the background-only expectation, corresponding to a branching fraction BF(Bs → μμ) = (3.0 +1.0/−0.9) × 10⁻⁹ in CMS and (2.9 +1.1/−1.0) × 10⁻⁹ in LHCb, where the uncertainties reflect statistical and systematic effects. These measurements have significances of 4.3σ and 4.0σ, respectively, evaluated as the ratio between the likelihood obtained with a free Bs → μμ branching fraction and that obtained by fixing BF(Bs → μμ) = 0. The results have been combined to give BF(Bs → μμ) = (2.9 ± 0.7) × 10⁻⁹ (CMS+LHCb).
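In the usual asymptotic (Wilks) approximation – an assumption here, since the exact statistical treatment is not spelled out above – such a likelihood ratio translates into a significance as

```latex
Z \simeq \sqrt{\,2\,\ln\frac{\mathcal{L}\big(\widehat{\mathrm{BF}}\big)}{\mathcal{L}\big(\mathrm{BF}=0\big)}\,}
```

with the likelihood evaluated at the best-fit branching fraction in the numerator and at zero signal in the denominator.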

Both CMS and LHCb reported this long-sought observation at the EPS-HEP conference in Stockholm in July and in back-to-back publications submitted to Physical Review Letters (CMS collaboration 2013, LHCb collaboration 2013).

The combined measurement of Bs → μμ by CMS and LHCb is consistent with the Standard Model’s prediction, BF(Bs → μμ) = (3.6 ± 0.3) × 10⁻⁹, showing that the model continues to resist attempts to see through its thick veil. The same fits also measure the B0 → μμ branching fraction. They reveal no significant evidence of this decay and set upper limits at the 95% confidence level of 1.1 × 10⁻⁹ (CMS) and 0.74 × 10⁻⁹ (LHCb). These limits are also consistent with the Standard Model, although the measurement fails to reach the precision required to probe the prediction.

While the observation of a decay that has been sought for so long and by so many experiments is a thrilling discovery, it is also a bittersweet outcome. Much of the appeal of the Bs → μμ decay-channel was in its potential to reveal cracks in the Standard Model – something that the measurement has so far failed to provide. However, the story is far from over. As the LHC continues to provide additional data, the precision with which its experiments can measure these key branching fractions will improve steadily and increased precision means more stringent tests of the Standard Model. While these results show that deviations from the expectations cannot be large, even a small deviation – if measured with sufficient precision – could reveal physics beyond the Standard Model.

Additionally, the next LHC run will provide the increase in sensitivity that the experiments need to measure B0 → μμ rates at the level of the Standard Model’s prediction. New physics could be lurking in that channel. Indeed, the prediction for the ratio of the Bs → μμ and B0 → μμ decay rates is well known, so a precise measurement of this quantity is a long-term goal of the LHC experiments. And even in the scenario where the Standard Model continues its undefeated triumphant path, theories that go beyond it must still describe the existing data. Tighter experimental constraints on these branching fractions would be powerful in limiting the viable extensions to the Standard Model and could point towards what might lie beyond today’s horizon in high-energy physics. With the indisputable observation of Bs → μμ decays, experimental particle physics has reached a major milestone in a 30-year-long journey. This refreshing news motivates the LHC experimental teams to continue forward into the unknown.

Do fast radio bursts signal black-hole formation?

Astronomers using the 64-m Parkes radio telescope in Australia have detected radio transients with a duration of only 4 ms. These fast radio bursts (FRBs) are a recently discovered class of mysterious sources that are found at cosmological distances. Now, two theorists suggest that FRBs are the last signal emitted by neutron stars as they collapse to form black holes.

In 2007, Duncan Lorimer and colleagues reported finding an unexpected burst of radio emission in archival observations of the Parkes telescope (CERN Courier November 2007 p10). The distance to the burst was calculated to be far outside the Galaxy at cosmological distances and hence the inferred luminosity was huge – similar to that of a quasar. This first radio “hyperburst” is now called the Lorimer burst, or FRB 010724.

Now, an international team led by Dan Thornton of the University of Manchester and the Australia Telescope National Facility has identified four additional FRBs. All bursts are found to be at cosmological distances as inferred by their dispersion measure, which is related to the integrated density of free electrons along the line of sight to the source. The free electrons in an ionized medium scatter the radio waves and cause a time delay in the arrival of the burst that increases towards longer wavelengths. The measured delays for the four FRBs suggest a strong contribution from the intergalactic medium and that the sources are several thousand-million light-years away, corresponding to cosmological redshifts, z, of between 0.45 and 0.96. This is significantly more than for the Lorimer burst (z ∼ 0.12) and confirms the cosmological origin of these events.
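The delay that defines the dispersion measure scales as the inverse square of the observing frequency. A short sketch using the standard pulsar dispersion constant (a textbook value, not a number quoted in the article) and an invented dispersion measure typical of a cosmological FRB gives delays of order a second across the observing band:

```python
# Dispersion delay of a radio burst: the arrival is delayed by
# dt ~ K_DM * DM / nu^2 relative to infinite frequency, where DM is
# the dispersion measure in pc cm^-3 and nu the observing frequency.
# The dispersion constant and example values are standard textbook
# numbers, not taken from the article.

K_DM = 4.149  # dispersion constant in ms GHz^2 / (pc cm^-3)

def delay_ms(dm, freq_ghz):
    """Arrival delay in milliseconds at frequency freq_ghz (GHz)."""
    return K_DM * dm / freq_ghz**2

dm_example = 700.0  # pc cm^-3, illustrative cosmological-FRB value
for nu in (1.2, 1.5):
    print(f"delay at {nu} GHz: {delay_ms(dm_example, nu)/1e3:.2f} s")
```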

The detection of these bursts at a great distance implies a strong instantaneous luminosity. However, because the FRBs last only for milliseconds, the total energy released in radio waves is relatively modest – of the order of 10³¹–10³³ J. While this is about the energy output of the Sun in days to months, it is more than 10 orders of magnitude less than the energy released by a gamma-ray burst (GRB) or a supernova explosion (∼10⁴⁴ J). With four FRBs detected in the same survey it is also possible to estimate the event rate. Thornton and colleagues find a rate of about 10,000 per day for the full sky – about one burst every 10 s. Given the number of galaxies in the probed volume, they find an event rate of one burst per thousand years per galaxy. This is about 10 times less frequent than core-collapse supernovae (CERN Courier January/February 2006 p10) but a thousand times more frequent than GRBs.

With only these characteristics and the fact that there is no known transient detected simultaneously at other wavelengths, it is challenging to speculate on the nature of the objects producing FRBs. The brevity of the emission indicates small objects, typically neutron stars. A possible candidate is a magnetar – a highly magnetized neutron star that can emit powerful gamma-ray flashes (CERN Courier June 2005 p12).

Another interesting scenario has recently been proposed by Heino Falcke and Luciano Rezzolla, affiliated to institutes in the Netherlands and Germany. They claim that there should be a population of neutron stars that are stable against gravitational collapse only because they are spinning quickly. Because of magnetic braking, their spin rate decreases slowly over several thousand to several million years until it reaches a critical value, at which point the star collapses into a black hole. According to the “no hair” theorem, black holes cannot keep the strong magnetic field of the neutron star. The magnetosphere would be released during the collapse and result in a radio burst. FRBs would therefore be the last “cry” of neutron stars succumbing to their own gravitational pull.

Surprising studies in multiplicity

One of the key ways of looking into what happens when high-energy hadrons collide is to measure the relationship between the number, or multiplicity, of particles produced and their momentum transverse to the direction of the colliding beams. The results cast light on processes ranging from the interactions of individual partons (quarks and gluons) to the collective motion of hot, dense matter containing hundreds of partons. The ALICE experiment is investigating effects across the range of possibilities, using data collected with proton–proton (pp), proton–lead (pPb) and lead–lead collisions (PbPb) in the LHC – and the results are showing some surprises.

A correlation between the average transverse momentum 〈pT〉 and the charged-particle multiplicity Nch was first observed at CERN’s Spp̄S collider and has since been measured in pp(p̄) collisions over a range of centre-of-mass energies, culminating recently at the LHC. The strong correlation observed led to a change in paradigm in the modelling of such collisions, with the proposal of mechanisms that go beyond independent parton–parton collisions.

In pp collisions, one way to understand the production of high multiplicities is through multiple parton interactions, but the incoherent superposition of such interactions would lead to the same 〈pT〉 for different values of multiplicity. The observation of a strong correlation thus led to the introduction, within the models of the PYTHIA event simulator, of colour reconnections between hadronizing strings. In this mechanism, which can be interpreted as a collective final-state effect, strings from independent parton interactions do not independently produce hadrons, but fuse before hadronization. This leads to fewer, but more energetic, hadrons. Other models that employ similar mechanisms of collective behaviour also describe the data.


In PbPb collisions, high-multiplicity events are the result of a superposition of (single) parton interactions taking place in a large number of nucleon–nucleon collisions. In this case, substantial rescattering of constituents is thought to lead to a redistribution of the particle spectrum, with most particles being part of a locally thermalized medium that exhibits collective, hydrodynamic-type, behaviour. The moderate increase of 〈pT〉 seen in PbPb collisions (shown in figure 1 for Nch around 10 or larger) is thus usually attributed to collective flow.

Now, the first measurements by ALICE of two-particle correlations in the intermediate system of pPb collisions have sparked an intense debate about the role of initial- and final-state effects. The pPb data on 〈pT〉 indeed exhibit features of both pp and PbPb collisions, at low and high multiplicities, respectively. However, the saturation of 〈pT〉 with Nch is less pronounced in pPb than in PbPb collisions, leading at high multiplicities to a much higher value of 〈pT〉. Is this nevertheless a fingerprint of collective effects in pPb collisions? Predictions that incorporate collective effects within the hadron-interaction model EPOS describe the data well, but alternative explanations, based on initial-state effects (gluon saturation), have also been put forward and are being tested by these data (ALICE collaboration 2013a).

Other recent measurements of particle production in proton–nucleus collisions have shown unexpected behaviour that is reminiscent of quark–gluon plasma (QGP) signatures. But what could cause such behaviour and is a QGP the only possible explanation? To answer this in more detail, it is important to separate particle species, as collective phenomena should follow an ordering in mass. To this end, ALICE has measured the transverse-momentum spectra of identified particles in pPb collisions at √sNN = 5.02 TeV and their dependence on multiplicity (ALICE collaboration 2013b).

The measurements show that the identified particle spectra become progressively harder with multiplicity, just as in PbPb collisions, where the hardening is more pronounced for particles of higher mass. In heavy-ion collisions, this mass ordering is interpreted as a sign of a collective radial expansion of the system. To check if such an idea describes the observations, a blast-wave parameterization can be used. This assumes a locally thermalized medium that undergoes a collective expansion in a common velocity field, followed by an instantaneous common freeze-out.
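For reference, the blast-wave spectrum in its commonly used Schnedermann–Sollfrank–Heinz form is, schematically (the exact parameterization adopted in the ALICE fit may differ in detail),

\frac{1}{p_T}\frac{\mathrm{d}N}{\mathrm{d}p_T} \;\propto\; \int_0^{R} r\,\mathrm{d}r\; m_T\, I_0\!\left(\frac{p_T\sinh\rho(r)}{T_{\mathrm{kin}}}\right) K_1\!\left(\frac{m_T\cosh\rho(r)}{T_{\mathrm{kin}}}\right), \qquad \rho(r) = \tanh^{-1}\!\left[\beta_s\left(\frac{r}{R}\right)^{n}\right],

where mT = √(pT² + m²) is the transverse mass, Tkin the kinetic freeze-out temperature, βs the expansion velocity at the surface r = R, n the exponent of the velocity profile, and I0 and K1 are modified Bessel functions. The particle mass enters only through mT, which is what produces the mass ordering of the spectral hardening.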


As figure 2 shows, the blast-wave fit describes the spectra well at low pT, where hydrodynamic-like behaviour should dominate. The description fails at higher momenta, however, where non-thermal components should contribute significantly. But are QGP-like interpretations such as this one unique in describing these measurements? The colour-reconnection mechanism in PYTHIA, discussed above, leads to qualitatively similar features to those observed in the data.

The presence of flow and of a QGP in high-multiplicity pPb collisions is thus not ruled out but, since other non-QGP effects could mimic collective phenomena, further investigation is needed. Nevertheless, these results are certainly a crucial step towards a better understanding not only of pPb collisions but also of high-energy collisions involving nuclei in general.

Charmless baryonic B decays

The LHCb collaboration has made the first sightings of decays of B mesons into a baryon–antibaryon pair containing no charm quarks. While the collaboration has previously reported on multibody baryonic B decays, these are its first results on the rare two-body charmless modes and will help to address open questions concerning baryon formation in B decays.


Baryonic decays of B mesons were studied extensively by the BaBar and Belle experiments at SLAC and KEK, respectively. The measured branching fractions are typically in the range 10⁻⁶–10⁻⁴, with charmless modes at the low end of this range and modes with charm having larger branching fractions. Decays with double-charm final states have branching fractions of up to 10⁻³ in some cases, which is surprisingly large. The channel B+ → pp̄K+ was the first charmless baryonic B-meson decay mode to be seen, in 2002 (Belle collaboration 2002). Soon after, Belle struck gold again with the first observation of a two-body baryonic B decay, B̄0 → Λc+p̄, which manifestly has charm (Belle collaboration 2003). However, there were no signs of charmless two-body baryonic decays of B mesons until now.

The suppression of low-multiplicity decay modes relative to higher-multiplicity ones is a striking feature of B decays to baryons that has no counterpart in B-meson decays to two or three mesons. It is also key to the theoretical understanding of the dynamics behind these decays.


The LHCb collaboration used the 1.0 fb⁻¹ data sample collected in 2011 to study the proton–antiproton spectra with or without an extra light meson – a pion or a kaon. Figure 1 shows the invariant-mass distribution of pp̄K+ candidates in the pK+ mass window 1.44–1.585 GeV/c², where a B+ → pp̄K+ signal is visible. The inset shows the pK+ invariant-mass distribution near threshold for B-signal candidates, weighted to remove the background from decays other than B+ → pp̄K+.

The analysis reveals a clear Λ(1520) resonance, with the branching fraction for the decay chain B+ → pΛ(1520) → pp̄K+ measured to be close to 4 × 10⁻⁷ (LHCb collaboration 2013a). With a statistical significance exceeding 5σ, the result constitutes the first observation of a two-body charmless baryonic B decay, B+ → pΛ(1520).

Figure 2 shows a fit from a related analysis searching for the decay B0 → pp̄ (LHCb collaboration 2013b). An excess of B0 → pp̄ candidates over the background expectation is observed with a statistical significance of 3.3σ, corresponding to a measured branching fraction of (1.47 +0.71/−0.53) × 10⁻⁸. No significant signal is observed for B0s → pp̄, but the analysis improves the previous bound on its branching fraction by three orders of magnitude.

GERDA sets new limits on neutrinoless double beta decay

The GERDA collaboration has obtained new stringent limits on neutrinoless double beta decay, a process that can occur only if neutrinos are their own antiparticles.

The GERDA (GERmanium Detector Array) experiment, which is operated at the underground INFN Laboratori Nazionali del Gran Sasso, is looking for double beta decay in the germanium isotope 76Ge, both with and without the emission of neutrinos. For 76Ge, ordinary beta decay is energetically forbidden, but the simultaneous conversion of two neutrons, with the emission of two electrons and two antineutrinos, is allowed. GERDA has measured this process with unprecedented precision, obtaining a half-life of about 2 × 10²¹ years and making it one of the rarest decays ever observed. However, if neutrinos are Majorana particles, neutrinoless double beta decay should also occur, at an even lower rate. In this case, the antineutrino emitted in one beta decay is absorbed as a neutrino by the second beta-decaying neutron, which is possible only if the neutrino is its own antiparticle.
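For reference, double beta decay of 76Ge proceeds to 76Se; the two processes being searched for are

{}^{76}\mathrm{Ge} \;\rightarrow\; {}^{76}\mathrm{Se} + 2e^- + 2\bar{\nu}_e \quad (2\nu\beta\beta), \qquad {}^{76}\mathrm{Ge} \;\rightarrow\; {}^{76}\mathrm{Se} + 2e^- \quad (0\nu\beta\beta).

Only in the neutrinoless mode do the two electrons carry the full decay energy of about 2 MeV, producing the sharp peak in the summed electron energy that GERDA looks for.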

In GERDA, germanium crystals serve as both source and detector. 76Ge has a natural abundance of about 8%, so its fraction was enriched more than 10-fold before the special detector crystals were grown. To help minimize backgrounds from environmental radioactivity, the GERDA detector crystals and the surrounding detector parts were carefully selected and processed. In addition, the detectors sit at the centre of a large vessel filled with extremely clean liquid argon, lined with ultrapure copper, which in turn is surrounded by a tank of high-purity water 10 m in diameter. Last but not least, the whole set-up is located underground, beneath 1400 m of rock. The combination of all of these techniques has reduced the background to unprecedented levels.

Data taking started in autumn 2011 using eight detectors of 2 kg each; five additional detectors were commissioned subsequently. Until recently, the signal region was blinded and the researchers concentrated on optimizing the data-analysis procedures. The experiment has now completed its first phase, with 21 kg·years of accumulated exposure. The analysis, in which all calibrations and cuts had been defined before the data in the signal region were processed, revealed no signal of neutrinoless double beta decay in 76Ge, leading to the world’s best lower limit on the half-life of 2.1 × 10²⁵ years. Combined with information from other experiments, this result rules out an earlier claim of a signal made by another group.
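As a rough illustration of how a half-life limit of this order follows from the exposure – a back-of-the-envelope counting sketch in which the enrichment fraction, detection efficiency and upper limit on signal counts are assumed values for illustration, not numbers from the GERDA analysis – one can write:

import math

N_A = 6.022e23      # Avogadro's number [atoms/mol]
M_GE76 = 76.0       # molar mass of 76Ge [g/mol]

def half_life_limit(exposure_kg_yr, enrichment, efficiency, n_signal_upper):
    """Lower limit on the half-life (in years) from a counting experiment
    that sees no signal excess; all inputs except the exposure are
    illustrative assumptions."""
    atoms_per_kg = enrichment * 1000.0 / M_GE76 * N_A   # 76Ge atoms per kg of detector material
    return math.log(2) * atoms_per_kg * exposure_kg_yr * efficiency / n_signal_upper

# 21 kg-years of exposure with an assumed ~86% enrichment, ~60% efficiency and
# an upper limit of ~3 signal counts gives a limit of order 2 x 10^25 years
print("T1/2 >", half_life_limit(21.0, 0.86, 0.60, 3.0), "years")

The quoted limit of 2.1 × 10²⁵ years comes, of course, from the full statistical analysis with the measured backgrounds and efficiencies.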

The next steps for GERDA will be to add new detectors, effectively doubling the amount of 76Ge. Data taking will then continue in a second phase, after further improvements have been implemented to achieve even stronger background suppression.

• GERDA is a European collaboration with scientists from 19 research institutes or universities in Germany, Italy, Russia, Switzerland, Poland and Belgium.

T2K observes νμ→νe definitively

[Image: the first candidate νe event]

The international T2K collaboration chose the EPS-HEP2013 meeting in Stockholm as the forum to announce its definitive observation of the transformation of muon-neutrinos to electron-neutrinos, νμ→νe.

In 2011, the collaboration announced the first signs of this process – at the time a new type of neutrino oscillation. Now with 3.5 times more data, T2K has firmly established the transformation at a 7.5σ significance level.

In the T2K experiment, a νμ beam is produced at the Japan Proton Accelerator Research Complex (J-PARC) in Tokai on the east coast of Japan. The beam – monitored by a near detector in Tokai – is aimed at the Super-Kamiokande detector, which lies underground in Kamioka near the west coast, 295 km away. Analysis of the data from Super-Kamiokande reveals more νe candidates (a total of 28 events) than would be expected (4.6 events) in the absence of the transformation process.
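As a rough cross-check of the quoted significance – a naive Poisson counting estimate that ignores systematic uncertainties and the details of the T2K likelihood – the standard asymptotic formula for an excess of n observed events over an expected background b gives:

import math

def counting_significance(n_obs, bkg):
    """Asymptotic significance (in sigma) of observing n_obs events over an
    expected background bkg, for a simple Poisson counting experiment with
    no systematic uncertainties."""
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / bkg) - (n_obs - bkg)))

# 28 observed electron-neutrino candidates versus 4.6 expected without the transformation
print(counting_significance(28, 4.6))   # about 7.4, close to the quoted 7.5 sigma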

Observation of this type of neutrino oscillation opens the way to new studies of charge-parity (CP) violation in neutrinos, which may be linked to the domination of matter over antimatter in the present-day universe. The T2K collaboration expects to collect 10 times more data in the near future, including data with an antineutrino beam for studies of CP violation.

In announcing the discovery, the collaboration paid tribute to the unyielding and tireless effort by the J-PARC staff and management to deliver high-quality beam to T2K after the devastating earthquake in eastern Japan in March 2011. The earthquake caused severe damage to the accelerator complex and abruptly halted the data-taking run of the T2K experiment.

• The T2K experiment was constructed and is operated by an international collaboration, which currently consists of more than 400 physicists from 59 institutions in 11 countries: Canada, France, Germany, Italy, Japan, Poland, Russia, Switzerland, Spain, UK and the US.

EPS-HEP2013: these are good times for physics

Stockholm, with its many stretches of water, islands and old town, provided an attractive setting for the 2013 International Europhysics Conference on High-Energy Physics, EPS-HEP2013, on 18–24 July. Hosted by the KTH (Royal Institute of Technology) and Stockholm University, the conference centred on a busy programme of parallel and plenary sessions.

Like particle physics itself, EPS-HEP has a global reach, with people attending from Asia and the Americas, as well as from Europe. This year there were some 750 participants, including many young people who presented results in both parallel and poster sessions. As many as 440 speakers and more than 100 presenters of posters brought news from a host of experiments around the world, ranging from those at particle accelerators and colliders to others deep underground and in space.


Coming just one year after the announcement of the discovery of a new boson at CERN’s LHC, the conference provided a showcase for the latest results from the ATLAS and CMS experiments, as well as from Fermilab’s Tevatron. Together, they confirm the new particle as a Higgs boson, compatible with the Standard Model, and are making progress in pinning down its properties. Other measurements from the LHC and the Tevatron continue to test the Standard Model, as in the search for rare decay modes. The CMS and LHCb collaborations presented results on the decay Bs → μμ, two years after the CDF collaboration reported a first measurement, in slight tension with the Standard Model, at EPS-HEP2011 in Grenoble. CMS and LHCb now observe this decay at more than 4σ, with a branching fraction that is in good agreement with the Standard Model, therefore closing a potential window on new physics (Strangely beautiful dimuons).

All four of the large LHC collaborations – ALICE, ATLAS, CMS and LHCb – presented results in the dedicated sessions on ultrarelativistic heavy ions, which also featured presentations of measurements from the Relativistic Heavy Ion Collider at Brookhaven. First results from the proton–lead run at the LHC are yielding surprises, including some intriguing similarities with findings in lead–lead collisions (Surprising studies in multiplicity).


Beyond the Standard Model, the worldwide search for dark matter has progressed with experiments that are becoming increasingly precise, gaining a factor of 10 in sensitivity every two years. There are also improved results from experiments at the intensity frontier, in the study of neutrinos and in particle astrophysics. Highlights here included the T2K collaboration’s updated measurement with improved background rejection, which now establishes electron-neutrino appearance at a significance of 7.5σ (T2K observes νμ→νe definitively). Other news included results from the GERDA experiment, which set a new lower limit of 2.1 × 10²⁵ years on the half-life for neutrinoless double beta decay.

Other sessions looked to the continuing health of the field, with presentations of studies on novel ideas for future particle accelerators and detection techniques. These topics also featured in the special session for the European Committee for Future Accelerators, which looked at future developments in the context of the update of the European Strategy for Particle Physics.

An important highlight of the conference was the awarding of the European Physical Society High Energy and Particle Physics Prize to the ATLAS and CMS collaborations “for the discovery of a Higgs boson, as predicted by the Brout-Englert-Higgs mechanism”, and to Michel Della Negra, Peter Jenni and Tejinder Virdee, “for their pioneering and outstanding leadership roles in the making of the ATLAS and CMS experiments”. François Englert and Peter Higgs were there in person to present the prizes and to take part in a press conference together with the prizewinners. Spokespersons Dave Charlton and Joe Incandela accepted the prizes on behalf of ATLAS and CMS, respectively.

Wrapping up the conference in a summary talk, Sergio Bertolucci, CERN’s director for research and computing, noted that it had brought together many beautiful experimental results for comparison with precise theoretical predictions. “These are lucky times for physics,” he concluded, with experiments and theory providing an “unprecedented convergence of the extremes of scales around a common set of questions”.

• For details on all the talks see http://eps-hep2013.eu. A longer report will appear in a future edition of the CERN Courier.
