ILC-type cryomodule makes the grade

For the first time, the gradient specification of the International Linear Collider (ILC) design study of 31.5 MV/m has been achieved on average across an entire ILC-type cryomodule made of ILC-grade cavities. A team at Fermilab reached the milestone in early October. The cryomodule, called CM2, was developed to advance superconducting radio-frequency technology and infrastructure at laboratories in the Americas region, and was assembled and installed at Fermilab after initial vertical testing of the cavities at Jefferson Lab. The milestone – an achievement for scientists at Fermilab, Jefferson Lab, and their domestic and international partners in superconducting radio-frequency (SRF) technologies – has been nearly a decade in the making, from when US scientists started participating in ILC research and development in 2006.

Between 2008 and 2010, each of the eight cavities in CM2 was electropolished and individually pushed to gradients above 35 MV/m in vertical tests at Jefferson Lab. The cavities then underwent additional horizontal tests at Fermilab, and were among 60 cavities being evaluated globally for the prospect of reaching the ILC gradient. This evaluation was known as the S0 Global Design Effort, and was a build-up to the S1-Global Experiment, which put to the test the possibility of reaching 31.5 MV/m across an entire cryomodule. The final assembly of the S1 cryomodule set-up took place at KEK in Japan between 2010 and 2011. In S1, seven nine-cell 1.3 GHz niobium cavities strung together inside a cryomodule achieved an average gradient of 26 MV/m. An ILC-type cryomodule consists of eight such cavities.

Over the years, teams in the Americas region have acquired significant expertise in SRF technology, including increasing cavity gradients. Cavities manufactured by companies in the US, for example, have improved in quality: three of the eight cavities that make up CM2 were fabricated locally.

The CM2 group at Fermilab will push the gradients higher to determine the limits of the technology and to continue to understand and advance it. They expect to send an actual electron beam through CM2 in 2015, to understand better how the beam and cryomodule respond together. The aim is to use CM2 in the Advanced Superconducting Test Accelerator currently being commissioned at Fermilab. The SRF technology developed for FLASH at DESY, the European XFEL and now CM2 also has applications for the proposed PIP-II at Fermilab and at light sources such as LCLS-II at SLAC.

The Global Neutrino Network takes off

On 20–21 September, CERN hosted the fifth annual Mediterranean–Antarctic Neutrino Telescope Symposium (MANTS). For the first time, the meeting was organized under the umbrella of the Global Neutrino Network (GNN).

The idea to link more closely the various neutrino telescope projects under both water and ice has been a topic for discussion in the international community of high-energy neutrino astrophysicists for several years. On 15 October 2013, representatives of the ANTARES, BAIKAL, IceCube and KM3NeT collaborations signed a memorandum of understanding for co-operation within a Global Neutrino Network (GNN). GNN aims for extended inter-collaboration exchanges, more coherent strategy planning and exploitation of the resulting synergistic effects.

No doubt, the evidence for extraterrestrial neutrinos recently reported by IceCube at the South Pole (“Cosmic neutrinos and more: IceCube’s first three years”) has given wings to GNN, and is encouraging the KM3NeT (in the Mediterranean Sea) and GVD (Lake Baikal) collaborations in their efforts to achieve appropriate funding to build northern-hemisphere cubic-kilometre detectors. IceCube is also working towards an extension of its present configuration.

One focus of the MANTS meeting was, naturally, on the most recent results from IceCube and ANTARES, and their relevance for future projects. The initial configurations of KM3NeT (with three to four times the sensitivity of ANTARES) and GVD (with sensitivity similar to ANTARES) could provide additional information on the characteristics of the IceCube signals, first because they look at a complementary part of the sky, and second because water has optical properties that are different from ice. Cross-checks with different systematics are of the highest importance for these detectors in natural media. As an example, KM3NeT will measure down-going muons from cosmic-ray interactions in the atmosphere with superb precision. This could help in determining more precisely the flux of atmospheric neutrinos co-generated with those muons, in particular those from the decay of charmed mesons, which are expected to have particularly high energies and therefore could mimic an extraterrestrial signal.

A large part of the meeting was devoted to finding the best “figures of merit” characterizing the physics capabilities of the detectors. These not only allow comparison of the different projects, but also provide an important tool to optimize future detector configurations. Such optimization also concerns the two sub-projects that aim to determine the neutrino mass hierarchy using atmospheric neutrinos. These are both small, high-density versions of the huge kilometre-scale arrays: PINGU at the South Pole and ORCA in the Mediterranean Sea. In this effort a particularly close co-operation has emerged during the past year, down to technical details.

Combining data from different detectors is another aspect of GNN. A recent common analysis of IceCube and ANTARES sky maps has provided the best sensitivity ever for point sources in certain regions of the sky, and will be published soon. Further goals of GNN include the co-ordination of alert and multimessenger policies, exchange and mutual checks of software, creation of a common software pool, development of standards for data representation, cross-checks of results with different systematics, and the organization of schools and other forums for exchanging expertise and experts. Mutual representation in the experiments’ science advisory committees is another way to promote close contact and mutual understanding.

Contingent upon availability of funding, the mid-2020s could see one Global Neutrino Observatory, with instrumented volumes of 5–8 km^3 in each hemisphere. This would, finally, fully raise the curtain just lifted by IceCube, and provide a rich view on the high-energy neutrino sky.

Ultra-luminous X-ray source is ‘just’ a pulsar

Until now, ultra-luminous X-ray sources (ULXs) were thought to be black holes, because their high luminosity implied a mass exceeding by far the maximal mass of a neutron star. The most luminous of them were thought, furthermore, to be of a rare class of intermediate-mass black holes. The surprising discovery of pulsations from one of them now shakes this interpretation, and suggests that at least some neutron stars can become much more luminous than previously thought.

ULXs were discovered in nearby galaxies by the Einstein Observatory in the 1980s. These sources are characterized by X-ray luminosities that are intermediate between normal X-ray binaries and active galactic nuclei (AGN). If luminosity simply scaled with the mass of the accreting compact object, ULXs would have to be intermediate-mass black holes with masses typically 100 to 10,000 times that of the Sun. This is an unusual mass range for black holes, which are more commonly found either with masses of about 10 solar masses, typical of stars, or with millions of solar masses, as in the case of those powering AGNs at the centres of galaxies.

A simple mass–luminosity relation arises naturally from the equilibrium between the inward gravitational force and the outward radiation pressure acting on the accreting matter. Indeed, accretion can only increase as long as the resulting luminosity does not exceed what is known as the Eddington limit, at which the radiation pressure stops accretion and generates an intense outward wind. The Eddington luminosity is linearly proportional to mass and has a value of about 10^31 W for a solar-mass star, which is about 10,000 times the luminosity of the Sun.

Although the Eddington limit holds, strictly, only for isotropic accretion, it serves as an order-of-magnitude upper limit to the luminosity of a source of a given mass. A ULX with a luminosity of 10^33 W should, therefore, indicate the presence of a black hole at least 100 times the mass of the Sun. This argument is now strongly disproved by the detection of pulsed X-ray emission from a ULX in the nearby galaxy Messier 82 (M82), reported by Matteo Bachetti from the University of Toulouse and colleagues.
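
For readers who want the numbers behind this argument, the standard (textbook) Eddington relation and the minimum mass it implies go roughly as follows; this is an order-of-magnitude sketch, not a calculation from the paper itself.

```latex
% Eddington luminosity for spherically symmetric accretion onto a mass M,
% and the minimum mass implied by a ULX luminosity of ~10^33 W
% (textbook relation; values rounded).
L_{\mathrm{Edd}} = \frac{4\pi G M m_p c}{\sigma_T}
  \simeq 1.3\times10^{31}\,\mathrm{W}\,\frac{M}{M_\odot},
\qquad
M_{\mathrm{min}} \simeq \frac{L_{\mathrm{ULX}}}{1.3\times10^{31}\,\mathrm{W}}\,M_\odot
  \sim 10^{2}\,M_\odot \quad \text{for } L_{\mathrm{ULX}} \approx 10^{33}\,\mathrm{W}.
```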

This source, M82 X-2, is the second brightest X-ray source in this star-forming galaxy, and can reach a luminosity exceeding 10^33 W. The clear detection of pulsations with a period of 1.37 s and an orbital modulation of 2.5 days identifies the source as a binary system that is composed of a neutron star accreting gas from a massive companion star. The pulsed emission was observed in the 3–30 keV X-ray range by the Nuclear Spectroscopic Telescope Array, a NASA satellite launched from below an aeroplane on 13 June 2012. Confirmation that the pulsating source is indeed the ULX M82 X-2 came from contemporaneous observations by the Chandra X-ray Observatory and the Swift satellite.

The discovery of pulsations in M82 X-2 was made possible thanks to a long observation campaign in early 2014 of the M82 galaxy triggered by the explosion of the supernova SN 2014J (CERN Courier October 2014 p17). It proves that at least some ULXs can be accreting pulsars, rather than massive black holes. Theorists are now left with the challenge of proposing a model to explain how a pulsar can radiate at about 100 times its Eddington luminosity.

RHIC’s new gold record

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory completed its 14th physics run in July, during which gold-ion beams were brought into collision at both low (7.3 GeV/nucleon) and high (100 GeV/nucleon) energies. The runs at high energy set new records for instantaneous and average-store luminosities, and the latter now stands at 5 × 10^27 cm^-2 s^-1, or 25 times the design value. This stellar performance also allowed the introduction of another combination of species to the study of quark–gluon plasma (QGP). For the first time, a collider brought ions of helium-3 – a rare helium isotope with two protons and a single neutron – into collision with gold nuclei.

For the first three weeks of the 2014 run, RHIC delivered gold–gold collisions at 7.3 GeV/nucleon to complete the first phase of a beam-energy scan. The aim of the energy scan is to find a critical point in the QCD phase diagram that marks the end point of a first-order phase transition from hadronic matter into QGP. The majority of the scan was done in 2010, with five different collision energies. To date, gold ions have collided at beam energies of 3.85, 4.6, 5.75, 9.8, 19.5, 27.9, 31.2, 65.2 and 100 GeV/nucleon. A second phase of the beam-energy scan is now planned for energies below the nominal injection energy of 9.8 GeV/nucleon, with a luminosity increase ranging from a factor of three to 10. The large increase in luminosity requires the implementation of electron cooling. Meanwhile, the low-energy part has already allowed the STAR and PHENIX collaborations to test new components in preparation for the high-energy part of the run: the heavy-flavour tracker for STAR and the vertex detector for PHENIX.

The subsequent 18 weeks of the 2014 run with collisions at a beam energy of 100 GeV/nucleon had the goal of delivering as much luminosity as possible, following a luminosity upgrade. This year marked the end of this upgrade period, which began in 2007 and saw the average store luminosity for gold–gold collisions increase by more than a factor of four. The two main elements of the upgrade have been an increase in the bunch intensity from 1.1 × 10^9 to 1.6 × 10^9 gold ions, and the implementation of three-dimensional stochastic cooling in both of RHIC’s rings (Blaskiewicz et al. 2008). The increase in the bunch intensity was achieved through many small upgrades in all of the machines, from the source to RHIC (figure 1), and led to a 2.5-fold increase in the initial luminosity. In addition, stochastic cooling led to a luminosity lifetime that is now determined by “burn-off”, with more than 90% of the gold ions lost in collisions with the other beam. Without stochastic cooling, the increased initial luminosity would decay so fast as to be of no use.
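
To illustrate what “burn-off-dominated” means in practice, here is a deliberately simplified toy model in which the only beam loss is the luminosity itself times a total cross-section. The cross-section, bunch number and time step are assumed placeholder values rather than RHIC parameters, and the cooling that actually raises the luminosity early in a store is ignored.

```python
# Toy model of burn-off-dominated luminosity decay.  Illustrative only:
# the cross-section and bunch count are assumed placeholders, cooling and
# all non-collision losses are ignored.
sigma_tot = 210e-24      # assumed total Au-Au removal cross-section, cm^2 (rough)
N0 = 1.6e9 * 111         # ions per beam: quoted bunch intensity x assumed 111 bunches
L0 = 5e27                # initial luminosity, cm^-2 s^-1 (the quoted average-store record)

dt, t_end = 60.0, 8 * 3600   # 1-minute steps over an 8-hour store
N, history = N0, []
for _ in range(int(t_end / dt)):
    L = L0 * (N / N0) ** 2   # luminosity scales with the product of beam intensities
    N -= sigma_tot * L * dt  # ions removed only by collisions ("burn-off")
    history.append(L)

print(f"luminosity after 8 h: {history[-1]:.2e} cm^-2 s^-1, "
      f"{100 * (1 - N / N0):.0f}% of the ions burned off")
```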

Figure 2 shows the dramatic effect that the upgrades had on the instantaneous and average-store luminosities, where an operating period of 48 h is shown in 2007 and 2014. The 2014 luminosity starts at a much higher value, but still decays for about half an hour before the cooling takes full effect. The cooling then reduces the beam sizes fast enough that the luminosity begins to increase, and typically exceeds the initial value. It then decays with time as more and more ions are lost in the collision process. The 2014 stores ended with luminosity values that are as high as the initial values in 2007.

With the high-luminosity stores and excellent reliability, the integrated luminosity of the 2014 run exceeds the integrated luminosity of all previous gold–gold runs combined. Figure 3 shows the integrated nucleon-pair luminosity, LNN = A1 × A2 × L, where L is the integrated luminosity, and A1 and A2 are the numbers of nucleons of the ions in the two beams, respectively. The use of LNN allows different ion combinations to be compared.
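
As a worked example of the definition just given (with a made-up integrated luminosity of one unit, purely to show the scaling):

```python
# Nucleon-pair luminosity L_NN = A1 x A2 x L, as defined in the text.
# The integrated-luminosity value is a placeholder; only the A1*A2 scaling matters here.
def nucleon_pair_luminosity(A1, A2, L_integrated):
    """Scale an integrated luminosity by the nucleon numbers of the two beams."""
    return A1 * A2 * L_integrated

print(nucleon_pair_luminosity(197, 197, 1.0))  # gold-gold:       38809 x L
print(nucleon_pair_luminosity(3, 197, 1.0))    # helium-3 - gold:   591 x L
```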

The shutdown period preceding the 2014 run provided the opportunity to upgrade a number of subsystems. Bunch-merging in the Booster and Alternating Gradient Synchrotron (AGS) was improved with a new low-level RF system, leading to an overall increase of about 30% in the maximum extracted beam intensity, compared with the 2012 run, the last time that gold ions were used in RHIC. The RHIC stochastic-cooling system, made fully operational in all three planes for the first time for the 2012 uranium run, featured new longitudinal pick-ups and kickers, for better correction of the spread of particles within the bunches (CERN Courier October 2012 p17). Additionally, a new 56 MHz passive superconducting RF storage cavity was commissioned, to provide larger RF buckets and reduce longitudinal diffusion caused by intra-beam scattering. The cavity is the first superconducting cavity in RHIC, and it reached 300 kV, which is below the 2 MV design voltage because it was limited by quenches in a higher-order mode damper. With a redesign of the damper and the full voltage of the cavity, the average store luminosity is expected to increase even further in the future, by at least 30%.

To minimize the commissioning time of the collider for the 100 GeV gold–gold run, the decision was taken to use the lattice design for the 2012 uranium run. This provides an increased off-momentum dynamic aperture, compared with previous high-energy heavy-ion runs, of up to 5σ for δp/p = 1.8 × 10^-4. The machine performance during the 2012 run with uranium–uranium and copper–gold was such that beam losses were already dominated by burn-off (Luo et al. 2014).

Further highlights

One of the highlights of the 2014 run was the implementation of a dynamic β*-squeeze scheme to increase the integrated luminosity delivered to the STAR and PHENIX experiments. The beam size at any given point in the storage ring is given by √(βε), where β is the β function and ε is the emittance. In this scheme, the β function at the interaction point is reduced while the beams are in collision. The scheme takes advantage of the fact that, owing to stochastic cooling, the emittance ε decreases during the store, so that a larger β function can be accommodated in the final focus triplets. With this, the β function at the interaction point (β*) – and therefore the beam size – is reduced, leading to an increase in luminosity. After about one hour, the transverse emittance ε is reduced by a factor 2.5–3, and eventually to less than 1 μm rms.
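
A back-of-the-envelope scaling, assuming round beams and ignoring hourglass and crossing-angle effects, shows why the squeeze and the cooling both pay off. The emittance values below are illustrative placeholders; only the β* values and the factor ~2.5 emittance reduction come from the text.

```python
# Rough luminosity scaling for the dynamic beta* squeeze: with beam size
# sqrt(beta* x emittance) and round beams, L scales as 1/(beta* x emittance).
# beta* values are from the text; the emittances are illustrative placeholders.
import math

def beam_size(beta_star, emittance):
    return math.sqrt(beta_star * emittance)

beta0, beta_squeezed = 0.70, 0.50      # beta* in metres, before/after the squeeze
eps0, eps_cooled = 2.5e-6, 1.0e-6      # rms emittances in metres (illustrative)

gain_squeeze = (beam_size(beta0, eps_cooled) / beam_size(beta_squeezed, eps_cooled)) ** 2
gain_cooling = (beam_size(beta0, eps0) / beam_size(beta0, eps_cooled)) ** 2

print(f"from the beta* squeeze alone: x{gain_squeeze:.2f}")   # ~1.4
print(f"from the cooling alone:       x{gain_cooling:.2f}")   # ~2.5
```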

The lattice design for this dynamic squeeze relies on the principles of the achromatic telescopic squeeze (ATS) developed at CERN in the context of the LHC upgrade (Fartoukh 2013). The ATS method was adapted for RHIC to match the machine constraints, both in engineering (the magnet power-supplies’ wiring scheme) and in beam dynamics (the location of experimental insertions and phase-advance requirements). Once the linear optics had been corrected – reducing the β beat in the machine from 40% to 10% – it was possible to ramp the lattice dynamically into its new set point, sending the β* from 0.70 m down to 0.50 m. This could only be done reliably with the help of orbit and tune feedbacks. Prior to their operational implementation, each new β* set point was commissioned during dedicated beam-experiment periods.

Figure 4 shows the luminosity at the STAR detector before, during and after a β squeeze, 7 h into a physics store. The dynamic β squeeze became part of the routine operation in the second half of the 2014 run, and will be the main method to level the luminosity in future. To maximize the physics output, RHIC’s detectors might require that the instantaneous luminosity does not exceed a certain value. With cooling and the dynamic β squeeze, ions can be stored and burned off in the collisions in a way that is most useful to the experiments.

With the success of the gold–gold run at 100 GeV/nucleon, the last weeks of the 2014 run were reassigned to allow a new type of collision to be studied: helium-3 on gold ions. Understanding the properties of QGP can be advanced by looking into the emission patterns of the subatomic particles that it generates while cooling down. These patterns are a function of the initial “shape” of the QGP, which is given by the type of collision between the two stored beams. In colliders such as RHIC and the LHC, almost spherical particles (e.g. protons, gold and lead) are sent to collide head on, resulting in showers of subatomic particles that are projected in a circular pattern. The idea behind using helium-3 is that it features a triangular shape, allowing the STAR and PHENIX collaborations to look at different initial forms of QGP.

The biggest challenge for RHIC was to send the helium-3 beam onto the gold-ion beam for head-on collisions. Given the large difference in the charge-to-mass ratio of the two species – 2:3 for helium-3 versus 79:197 for gold – the beam trajectories were such that it became necessary to add crossing angles through the collision points in STAR and PHENIX, and a large horizontal orbit excursion in both interaction regions. The orbit excursions reached 10 mm – for comparison, well-controlled orbits have rms values of 20 μm. In addition, with a circumference of 3833 m, the path length of the helium-3 beam was 10 mm longer than that of the gold-ion beam.

Thanks to a modified bunch-merging mechanism through the RHIC injector chain, the bunch intensity for the helium-3 beam was increased by a factor of four compared with the previous year. With this significant improvement, the successful implementation of the specific beam paths in all of the interaction regions, and a short commissioning time despite the complexity of running with two particle species that are so different, this dedicated run was also a major success, exceeding the luminosity goals for both experiments.

In their own words

CERN appeared a gigantic enterprise to the young people who started to work for the fledgling organization from 1952 onwards, even before its official foundation in 1954. The adventure is traced here via some recollections recorded in interviews carried out by Marilena Streit-Bianchi for the CERN Archives between 1993 and 1997. These edited extracts cover some of the different evolving facets of the organization from the early 1950s to the late 1970s, and pay tribute to some of those who have passed away before the 60th anniversary of the young CERN they describe so vividly. Their enthusiasm and competences brought the organization to the level of excellence that has now become familiar.

A first recruit and the first machine

I was working at Philips Hilversum where I met Professor Bakker, and I became an assistant at the Zeeman Laboratory in Amsterdam. In 1951 I was asked to join his team, which was working on building CERN. I attended the Copenhagen meeting in 1952, when the alternate gradient principles became known – I believe that it was at this very moment that CERN was born. Although many of us, including myself, did not completely understand it, we immediately believed that an interesting new machine could be built, going from weak focusing to strong focusing. UNESCO provided money for our salaries. It was not much that we earned, but it was a terrific experience to arrive at a place where French and English had to be spoken.

Building the Synchrocyclotron (SC)
The alternating gradient machine was in development, and in the meantime it was decided to build a weak-focusing 600 MeV proton synchrocyclotron. I was asked to look after its set-up. We were a nice small team of 12 people sitting in barracks near the airport, whereas the [Proton Synchrotron] PS team, much larger, was staying at the University of Geneva. The construction of the site started, and it took 3–4 years. The six of us that really built the SC machine were working in a free atmosphere. It could look wild from the outside, but among us [there] was a strong discipline based on trust. We developed the RF system and conceived the tuning fork called the vibrating capacitor, made out of parts of very soft aluminium alloy. I made the design…and it was published in Nuclear Instruments and Methods. It worked well. The high-frequency system did not take too much time, whereas the magnet being 3000 tonnes of steel 5 m [in] diameter was not simple. After the war, all of the big countries wanted to contribute to it [France, Germany, Italy, England]. Each of them had some capability but didn’t have them all, so there was a choice to be made, and it was decided to make it based on technical and not on political grounds.

Frank Krienen 1917–2008.

Frank Krienen was one of the first recruits for CERN’s 600 MeV Synchrocyclotron, in 1952. In the 1960s, he turned to developing particle detectors, in particular wire spark chambers using different types of read-out. Later, he worked on the construction and operation of the electric quadrupoles for the muon storage ring, for the third and last g-2 experiment at CERN in the years 1969–1977, followed by the design and development of the electron-cooling apparatus for CERN’s Initial Cooling Experiment ring. 

Accelerating expertise

I went to Imperial College where Professor Dennis Gabor was, because I wanted to study beyond what university had given me. He had excellent courses in advanced particle dynamics, statistical physics, etc. It was an extremely important year for me, although [his] lectures were not on accelerators. The work I did for myself was on accelerators, and more specifically on linear accelerators, and the reason is simple. In Norway…a small country with few accelerators…the idea came up that perhaps it was possible to make accelerators in…the low-energy field. But in my stay with Gabor it was more the general knowledge I gained that was very much useful later in life.

The Proton Synchrotron (PS) Group
I was involved already a bit with Odd Dahl in 1951, then I became part of CERN full time from summer 1952. At that time, Dahl was leading the PS Group, as it was called in those days, from Bergen, [where] a group was working on the study of possible accelerators for what was called CERN. We were sitting in the home institute. It was an interesting experience – we had to communicate by letters and by travelling. There was no computer connection like now. I have never written so much in my life as I did in the first two years, or travelled for meetings as I did during 1952. I must admit that a very good spirit was struck…we were a good group, elected in a very specific way. Senior people like Dahl, [Wolfgang] Gentner, [John] Cockcroft, [Edoardo] Amaldi, etc, selected very good young people among their collaborators and students. We were enthusiastic, and we had the fortunate happening that the [alternating] gradient principle was invented at the beginning of the study. I have ever since admired Dahl for having the courage to switch the whole activity onto this new principle, only weeks after it was, shall we say, invented. It was a tremendous challenge to study if the energy could go to 30 GeV, instead of the 10 GeV we were talking of before.

The Intersecting Storage Rings (ISR)
I never thought of becoming the project head. I thought my ability beautifully fit the study-preparation phase, and then what I would have considered a more practical person could take over and lead the project. I hoped that he would ask me as deputy. So it was a surprise when I was asked to become the head of the ISR project.

A difficult thing to achieve technologically for the machine was to have an RF system that could do the job. The next most difficult big problem was the vacuum system, because we realized that it was tremendously important for the lifetime of the beam, an essential element for having efficient operation. The vacuum was improved continuously, far beyond what we first thought was necessary. There were many aspects related to vacuum that had to be solved. Indeed, to achieve the vacuum that we needed, most improvements were done after we put the machine into operation. I think for 10 years the vacuum was gradually improved, and improved and improved.

Kjell Johnsen 1921–2007.

Kjell Johnsen joined CERN in 1952, and became a world-leading accelerator expert through his work on the design of the PS. He went on to lead the ISR project, CERN’s first hadron collider and forerunner of the LHC.

Cosmic neutrinos and more: IceCube’s first three years

For the past four years, the IceCube Neutrino Observatory, located at the South Pole, has been collecting data on some of the most violent collisions in the universe. Fulfilling its pre-construction aspirations, the detector has observed astrophysical neutrinos with energies above 60 TeV, at the “magic” 5σ significance. The most energetic neutrino observed had an energy of about 2 PeV (2 × 10^15 eV) – 250 times higher than the beam energy of the LHC.

These neutrinos are just one highlight of IceCube’s broad physics programme, which encompasses searches for astrophysical neutrinos, searches for neutrinos from dark matter, studies of neutrino oscillations, cosmic-ray physics, and searches for supernovae and a variety of exotica. All of these studies take advantage of a unique detector at a unique location: the South Pole.

IceCube observes the Cherenkov light emitted by charged particles produced in neutrino interactions in 1 km^3 of transparent Antarctic ice. The detector is the ice itself, and is read out by 5160 optical sensors. Figure 1 shows how the optical sensors are distributed throughout the 1 km^3 of ice, 1.5 km beneath the geographic South Pole. They are deployed 17 m apart, on 86 vertical cables or “strings”. Seventy-eight of the strings are spaced horizontally, 125 m apart in a grid of equilateral triangles forming a hexagonal array across an area of a square kilometre. The remaining eight strings form a more densely instrumented sub-array called DeepCore. In DeepCore, most of the sensors are concentrated in the lower 350 m of the detector.
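
A quick arithmetic cross-check of the quoted geometry (a back-of-the-envelope exercise, not taken from the IceCube papers):

```python
# Consistency check of the detector numbers quoted above.
n_sensors, n_strings, vertical_spacing_m = 5160, 86, 17

doms_per_string = n_sensors // n_strings                 # 60 sensors per string
instrumented_height_m = (doms_per_string - 1) * vertical_spacing_m

print(f"{doms_per_string} DOMs per string, "
      f"~{instrumented_height_m} m of instrumented depth")  # roughly 1 km, as quoted
```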

Each sensor, or digital optical module (DOM), is like a miniature satellite made up of a 10 inch (25 cm) photomultiplier tube together with data-acquisition and control electronics. These include a custom 300 megasample/s waveform digitizer with 14 bits of dynamic range, plus light sources for calibrations, all consuming a power of less than 5 W. The hardware is protected by a centimetre-thick pressure vessel.

The ice in IceCube formed from compacted snow that fell on Antarctica 100,000 years ago. Its properties vary with depth, with layers reflecting the atmospheric conditions when the snow first fell. Measuring the optical properties of this ice has been one of the major challenges of IceCube, involving custom “dust loggers”, studies with LED “flashers” and cosmic-ray muons. During the past decade, the collaboration has found that the ice is layered, that the layers are not perfectly flat and, most recently, that the light scattering is somewhat anisotropic. Each insight has led to a better understanding of the detector and to smaller systematic uncertainties. Fortunately, advances in computing technology have allowed IceCube’s simulations to keep up, more or less, with the increasingly complex models of light propagation in the ice.

The distributed sensors give IceCube strong pattern-recognition capabilities. The three neutrino flavours – νe, νμ and ντ – each leave different signatures in the detector. Charged-current νμ produce high-energy muons, which leave long tracks. All νe interactions, and all neutral-current interactions, produce hadronic or electromagnetic showers. High-energy ντ produce a characteristic “double-bang” signature – one shower when the ντ interacts and a second when the τ decays. More complex topologies have also been studied, including tracks that start in the detector as well as pairs of parallel tracks.

Despite past doubts, IceCube works and works well. More than 98% of the sensors are fully operational, and another 1% are usable – most of the failures occurred during deployment. The post-deployment attrition rate is a few DOMs per year, so IceCube will be able to operate for as long as required. The “live” times are also impressive – in the range of 99%.

IceCube has excellent reconstruction capabilities. For kilometre-long muon tracks, the angular resolution is better than 0.4°, verified by studying the shadow of the Moon cast by cosmic rays. For high-energy contained events, the angular resolution can reach 15°, and at high energies the visible energy can be determined to better than 15%.

Cosmic neutrinos

The detector’s dynamic range covers from 10 GeV to infinity. The higher the neutrino’s energy, the easier it is to detect. Every six minutes, IceCube records an atmospheric neutrino from the decay of pions, kaons and heavier particles produced in cosmic-ray air showers. These 100,000 neutrinos collected every year are interesting in their own right, but they are also the background to any search for cosmic neutrinos. On top of this, the detector records about 3000 atmospheric muons every second. This is a painful background for neutrino searches, but a gold mine for cosmic-ray physics.
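
The quoted rates are easy to sanity-check (rough numbers only; trigger and dead-time effects are ignored):

```python
# Back-of-the-envelope check of the event rates quoted above.
seconds_per_year = 365.25 * 24 * 3600

atm_nu_per_year   = seconds_per_year / (6 * 60)  # one atmospheric neutrino every six minutes
atm_muon_per_year = 3000 * seconds_per_year      # ~3000 atmospheric muons per second

print(f"atmospheric neutrinos per year: ~{atm_nu_per_year:,.0f}")   # ~90,000, i.e. order 100,000
print(f"atmospheric muons per year:     ~{atm_muon_per_year:.0e}")  # ~1e11
```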

Although IceCube has an extremely rich physics programme, the centrepiece is clearly the search for cosmic neutrinos. Many signatures have been proposed for these neutrinos: point source searches, a high-energy diffuse flux, identified ντ, and others. IceCube has looked for all of these.

Point-source searches are the simplest strategy conceptually – just create a sky map showing the arrival directions of all of the detected neutrinos. Figure 2 shows the IceCube sky map containing 400,000 events gathered across four years (Aartsen et al. 2014c). In the southern hemisphere, the large background of downgoing muons is only partially counteracted by selecting high-energy muons, which are less likely to be of atmospheric origin. The 177,544 events in the northern-hemisphere sample are mostly from νμ. So far, there is no statistically significant evidence for any hot spots, even in searches for spatially extended sources. IceCube has also looked for variable sources, whether episodic or periodic, with similar results. These limits constrain theoretical models, especially those involving gamma-ray bursts.

If there are enough weak sources in the cosmos, they should be visible as an aggregate, diffuse flux. This diffuse flux is expected to have a harder energy spectrum than do atmospheric neutrinos. Calculations have indicated that IceCube would be more sensitive to this diffuse flux than to point sources, which is indeed the case. Several early searches, using the partially completed detector, turned up intriguing hints of an excess over the expected atmospheric neutrino flux. Then the search diverged from the anticipated script.

One of the first searches for diffuse neutrinos with the complete detector looked for ultra-high-energy cosmogenic neutrinos – neutrinos produced when ultra-high-energy cosmic-ray protons (E > 4 × 10^19 eV) interact with photons of around 10^-4 eV in the cosmic-microwave background, exciting them to a Δ+ resonance. The decay products of the pion produced in the Δ’s decay include a neutrino with a typical energy of 10^18 eV (1 EeV). The search found two spectacular events, one of which is shown in figure 3. Both events were well contained within the detector – clearly neutrinos. Both had energies around 1 PeV – spectacular, but too low to be produced by cosmic rays interacting with CMB photons. Such events were completely unexpected.

Inspired by these events, the IceCube collaboration instigated a follow-up search that used two powerful techniques (Aartsen et al. 2013). The first was a filter to identify neutrino interactions that originate inside the detector, as distinct from events originating outside it. The filter divides the instrumented volume into an outer-veto shield and a 420 megatonne inner active volume. Figure 4 shows how this veto works: by rejecting events with significant in-time energy deposition in the veto region, neutrino interactions within the detector’s fiducial volume can be separated from backgrounds. For neutrinos that are contained within the instrumented volume of ice, the detector functions as a total absorption calorimeter, measuring energy with 15% resolution. It is flavour-blind, equally sensitive to hadronic or electromagnetic showers and to muon tracks. This veto analysis also used a “tagging” approach to estimate the atmospheric-muon background using the data, rather than relying on simulations. Because of the veto, the analysis could observe neutrinos from all directions in the sky.
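
The veto idea can be caricatured in a few lines of Python. This is emphatically not IceCube’s actual selection: the event structure, the charge threshold and the “earliest significant hit” criterion are all invented here purely to make the logic concrete.

```python
# Schematic sketch of a contained-event veto.  NOT the real IceCube algorithm:
# the data structure and thresholds are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Hit:
    time_ns: float       # hit time
    charge_pe: float     # deposited charge in photoelectrons
    in_veto_layer: bool  # True if the sensor sits in the outer veto region

def is_contained(hits, veto_charge_threshold_pe=3.0):
    """Keep the event only if its earliest significant light is inside the fiducial volume."""
    significant = [h for h in hits if h.charge_pe >= veto_charge_threshold_pe]
    if not significant:
        return False
    first_hit = min(significant, key=lambda h: h.time_ns)
    return not first_hit.in_veto_layer

# Example: light appears deep inside the detector before any veto-layer activity.
event = [Hit(120.0, 5.2, in_veto_layer=False), Hit(180.0, 9.1, in_veto_layer=True)]
print(is_contained(event))  # True -> candidate starting event
```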

The second innovation was to take advantage of the fact that downgoing atmospheric neutrinos should be accompanied by a cosmic-ray air shower depositing one or more muons inside IceCube. In contrast, cosmic neutrinos should be unaccompanied. A very high-energy, isolated downgoing neutrino is highly likely to be cosmic.

The follow-up search found 26 additional events. Although no new events had an energy near 1 PeV, the analysis produced evidence for cosmic neutrinos at the 4σ level. To clinch the case, the collaboration added a third year of data, pushing the significance above the “magic” 5σ level (Aartsen et al. 2014a). One of the new events had an energy above 2 PeV, making it the most energetic neutrino ever seen.

The observation of a flux of cosmic neutrinos was soon confirmed by the independent and more traditional analysis recording the diffuse flux of muon neutrinos penetrating the Earth. Both observations are consistent with a diffuse flux composed equally of the three neutrino flavours. No statistically significant hot spots were seen. The observed flux is consistent with that expected from cosmic accelerators producing equal energies in gamma rays, neutrinos and, possibly, cosmic rays.

Newer studies are shedding more light on these events, extending contained-event studies down to lower energies and adding flavour identification. At energies above 10 TeV, the astrophysical neutrino flux can be fit by a single power-law spectrum that is significantly harder than the background cosmic-ray muon spectrum:
φν(Eν) = 2.06 (+0.4/-0.3) × 10^-18 (Eν/100 TeV)^(-2.46±0.12) GeV^-1 cm^-2 sr^-1 s^-1 (Aartsen et al. 2014d).
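
Evaluating the central values of this fit at a few energies gives a feel for how steeply the flux falls (uncertainties are ignored in this rough sketch):

```python
# The best-fit single power law quoted above, central values only.
def astro_nu_flux(E_GeV):
    """Astrophysical neutrino flux in GeV^-1 cm^-2 sr^-1 s^-1, as parametrized in the text."""
    return 2.06e-18 * (E_GeV / 1.0e5) ** (-2.46)  # pivot energy 100 TeV = 1e5 GeV

for E in (1e4, 1e5, 1e6):  # 10 TeV, 100 TeV, 1 PeV
    print(f"E = {E:.0e} GeV  ->  flux = {astro_nu_flux(E):.2e}")
```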

Within the limited statistics, the flux appears isotropic and consistent with the νe:νμ:ντ ratio of 1:1:1 that is expected for cosmic neutrinos. The majority of the events appear to be extragalactic. Some might originate in the Galaxy, but there is no compelling statistical evidence for that at this point.

Many explanations have been proposed for the IceCube observations, ranging from the relativistic particle jets emitted by active galactic nuclei to gamma-ray bursts, to starburst galaxies to magnetars. IceCube’s dedicated searches do, however, disfavour gamma-ray bursts as the source. A spectral index of –2 (dNν/dE ~ E^-2), predicted by Fermi shock-acceleration models, is also disfavoured, but many other scenarios are possible. Of course, the answer is clear: more data are needed.

Other physics

The 100,000 neutrinos and 85 × 10^9 cosmic-ray events recorded each year provide ample opportunities to search for dark matter and to study cosmic rays as well as neutrinos themselves. IceCube has measured the cosmic-ray spectrum and composition and observed anisotropies in the spectrum at the 10^-4 level that have thus far defied explanation. It has also studied atypical events, such as muon-free showers expected from photons with peta-electron-volt energies, produced in the Galaxy, and investigated isolated muons produced in air showers. The latter have separations that shift from an exponential decrease to a power-law separation spectrum, as predicted by perturbative QCD.

IceCube observes atmospheric neutrinos across an energy range from 10 GeV to 100 TeV – at higher energies, the atmospheric flux is swamped by the flux of cosmic neutrinos. As figure 5 shows, the flux is consistent with expectations across a large energy range. Lower-energy neutrinos are of particular interest because they are sensitive to neutrino oscillations. For neutrinos passing vertically through the Earth, the νμ flux develops a first minimum at 28 GeV.
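
A rough two-flavour vacuum estimate reproduces the ballpark of that minimum; the |Δm^2_32| value and the use of the full Earth diameter are assumptions of this sketch, and matter effects are neglected.

```python
# Two-flavour vacuum estimate of the first nu_mu survival minimum for vertically
# up-going atmospheric neutrinos.  Assumed inputs: |Delta m^2_32| and the Earth's
# diameter; matter effects and the full three-flavour treatment are ignored.
import math

dm2_eV2 = 2.4e-3   # assumed |Delta m^2_32| in eV^2
L_km = 12742.0     # Earth's diameter, i.e. the vertical path length

# Survival probability ~ 1 - sin^2(2 theta_23) * sin^2(1.27 * dm2 * L / E);
# the first minimum sits where 1.27 * dm2 * L / E = pi/2.
E_min_GeV = 1.27 * dm2_eV2 * L_km / (math.pi / 2)
print(f"first oscillation minimum at roughly {E_min_GeV:.0f} GeV")  # ~25 GeV, near the quoted 28 GeV
```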

Figure 6 shows the observed νμ flux, seen in one year of data, using well-reconstructed events contained within DeepCore. The change in flux with distance travelled/energy (L/E) is consistent with neutrino oscillations and inconsistent with a no-oscillation scenario. IceCube constraints on the mixing angle θ23 and |Δm^2_32| are comparable to constraints from other experiments.

IceCube has also searched for neutrinos from dark-matter annihilation. Dark matter can be gravitationally captured by the Earth, the Sun, or in the centre or halo of the Galaxy. It then accumulates and the dark-matter particles annihilate, producing neutrinos. IceCube has searched for signatures of this annihilation, and has set limits. The Sun is a particularly interesting option, producing a characteristic dark-matter signature that cannot be explained by any astrophysical scenario. The Sun is also made mostly of protons, allowing IceCube to set the world’s best limits on the spin-dependent cross-section for the interaction of dark-matter particles with ordinary matter.

The collaboration has also looked for even more exotic signatures, such as magnetic monopoles and pairs of upgoing particles. One particularly spectacular and interesting signature could come from the next supernova in the Galaxy. These explosions produce a blast of neutrinos with 10–50 MeV energy. This energy level is far too low to trigger IceCube directly, but the neutrinos would be visible as a collective increase in the singles rate in the buried IceCube photomultipliers. Moreover, IceCube has a huge effective area, which will allow measurements of the time structure of the supernova-neutrino pulse with millisecond precision.

IceCube is still a novel instrument unlikely to have exhausted its discovery potential. However, at high energies, it might not be big enough. Doing neutrino astronomy could require samples of 1000 or more high-energy neutrino events. In addition, some key physics questions require a detector with a lower energy threshold. These two considerations are driving two different upgrade projects.

DeepCore has demonstrated that IceCube is capable of making precise measurements of neutrino-oscillation parameters. If precision studies can be extended to neutrino energies below 10 GeV, it will be possible to determine the neutrino-mass hierarchy. Neutrinos passing through the Earth interact coherently with matter electrons, modifying the oscillation pattern in a way that differs for normal and inverted hierarchies. In addition to a threshold of a few giga-electron-volts, this measurement requires improved control of systematic uncertainties. An expanded collaboration has come together to pursue the construction of a high-density infill array called Precision In Ice Next-Generation Upgrade, or PINGU (Aartsen et al. 2014b). The present design consists of 40 additional high-sensitivity strings equipped with improved calibration devices. PINGU should be able to determine the mass hierarchy with 3σ significance within about three years, independent of the value of the CP-violation phase.

The IceCube high-energy extension (IceCube-gen2) aims for a detector with a 10-times-larger instrumented volume, albeit with a higher energy threshold. It will explore the observed cosmic neutrino flux and pin down its origin. With a sample of more than 100 cosmic neutrinos per year, it will be possible to observe multiple neutrinos from the same sources, and so do astronomy. The instrument will also have an improved sensitivity to study the ultra-high-energy neutrinos produced in the interactions of cosmic rays with microwave photons.

Of course, IceCube is not the only collaboration studying high-energy neutrinos. Projects on the cubic-kilometre scale are also being prepared in the Mediterranean Sea (KM3NeT) and in Lake Baikal (GVD), with a field of view complementary to that of IceCube. Within KM3NeT, ORCA, a proposed low-threshold detector, would pursue the same physics as PINGU. And the radio-detection experiments ANITA, ARA, GNO and ARIANNA are beginning to explore the neutrino sky at energies above 10^17 eV.

After a decade of construction, the completed IceCube detector came on line in December 2010. It has achieved the outstanding goal of observing cosmic neutrinos and has produced important results in diverse areas: cosmic-ray physics, dark-matter searches and neutrino oscillations, not to mention its contributions to glaciology and solar physics. The observation of cosmic neutrinos at the peta-electron-volt energy scale has attracted enormous attention, with many suggestions about the location of the requisite cosmic accelerators.

Looking ahead, IceCube anticipates two important extensions: PINGU, which will determine the neutrino-mass hierarchy, and IceCube-gen2, which will expand a discovery instrument into an astronomical telescope.

Heavy-ion collisions: where size matters

Recent observations made by the LHC experiments in proton–lead and high-multiplicity proton–proton events are reminiscent of the collective hydrodynamic-like behaviour observed in lead–lead collisions. However, the results have not been conclusive, and can also be explained in terms of the formation of another state of matter in the initial state – the colour glass condensate. Measuring the space–time extent of the final hadronic state created at “freeze-out” in nuclear collisions – when the majority of particles cease interacting – yields unique information about the initial state and its dynamical evolution. This, in turn, offers an additional constraint on the interpretation of the observed collective-like features. In particular, if the collision proceeds with a hydrodynamic-like expansion, then the final hadronic state should extend to a size significantly larger than that of the initial collision system.

The characteristic length scale of freeze-out is femtoscopic (10^-15 m) and cannot be measured directly. However, sizes on this scale can instead be measured indirectly through the quantum interference of identical bosons or fermions. These measurements employ the technique of intensity interferometry that was invented by Robert Hanbury Brown and Richard Twiss in 1956, using the relative arrival time of photons from a distant star. In high-energy particle collisions, instead of the relative arrival time, experiments measure the relative momentum of the emitted particles to learn about the size and structure of the source.

Often, the correlation of two identical charged pions is measured as a function of their relative momentum. In hadron and ion collisions, Bose–Einstein statistics lead to enhanced production of bosons that are close together in phase space, and therefore to an excess of pairs – in this case pions – at low relative momentum. The width of the resulting Bose–Einstein peak at low relative momentum is inversely proportional to the characteristic radius of the source at freeze-out.
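
In the simplest Gaussian-source picture the inverse relation between peak width and radius is essentially an uncertainty-principle statement; the following is a textbook order-of-magnitude estimate, not a number from the ALICE analysis.

```latex
% Two-pion correlation for a Gaussian source of radius R, and the resulting
% order-of-magnitude link between the peak width and the source size.
C_2(q) \sim 1 + \lambda\, e^{-R^2 q^2},
\qquad
\Delta q \sim \frac{\hbar c}{R}
\;\;\Rightarrow\;\;
R \simeq \frac{0.197\ \mathrm{GeV\,fm}}{\Delta q}
\approx 2\ \mathrm{fm}
\quad \text{for } \Delta q \approx 0.1\ \mathrm{GeV}.
```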

In high-multiplicity events such as those produced in lead–lead collisions, all background contributions (e.g. mini-jets) to the correlation function are diluted to a negligible amount. However, in events with lower multiplicity, such as those produced in proton–proton and proton–lead collisions, sizable backgrounds exist, and these can significantly bias the extracted radii. One way to overcome the problem is to consider cumulants of higher-order Bose–Einstein correlations. Three-pion Bose–Einstein cumulant correlations are advantageous here in two ways. First, the construction of the three-pion cumulant explicitly removes all of the two-pion background correlations. Second, the genuine three-pion Bose–Einstein signal is twice as large as the two-pion signal, owing to the increased symmetrization possibilities.

The ALICE collaboration has measured three-pion Bose–Einstein correlations in proton–proton (√s = 7 TeV), proton–lead (√s_NN = 5.02 TeV), and lead–lead (√s_NN = 2.76 TeV) collisions at the LHC. The correlation functions were constructed from three types of measured triplet momentum (p) distributions. The first distribution, N(p1, p2, p3), is measured by sampling all three pions from the same event. The second distribution, N(p1, p2) N(p3), is measured by taking two pions from the same event and the third from a different event. Finally, the third distribution, N(p1) N(p2) N(p3), is measured by taking all three pions from different events.

From the measured distributions, the full three-pion correlation function (C3) can be formed and projected onto the relative-momentum variable Q3 = √(q12^2 + q31^2 + q23^2), as shown in figure 1, where the invariant relative momentum of a pair is defined as qij = √(−(pi − pj)^μ (pi − pj)_μ). The figure shows the cumulant correlation function (c3), which subtracts the second distribution as described above, to remove two-pion correlations. The top panels are for same-charge triplets, while the bottom panels are for mixed-charge triplets.
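
The kinematic variables just defined are straightforward to compute from pion four-momenta; the sketch below does only that. The example momenta are made up, and the event-mixing construction of the correlation functions is not reproduced.

```python
# Pair and triplet relative-momentum variables as defined above, for pion
# four-momenta (E, px, py, pz) in GeV.  Example momenta are invented.
import math

M_PI = 0.13957  # charged-pion mass in GeV

def four_momentum(px, py, pz, m=M_PI):
    return (math.sqrt(m * m + px * px + py * py + pz * pz), px, py, pz)

def q_inv(p1, p2):
    """Invariant relative momentum q_ij = sqrt(-(p_i - p_j)^mu (p_i - p_j)_mu)."""
    dE, dx, dy, dz = (a - b for a, b in zip(p1, p2))
    return math.sqrt(max(dx * dx + dy * dy + dz * dz - dE * dE, 0.0))

def Q3(p1, p2, p3):
    """Triplet variable Q3 = sqrt(q12^2 + q31^2 + q23^2)."""
    return math.sqrt(q_inv(p1, p2) ** 2 + q_inv(p3, p1) ** 2 + q_inv(p2, p3) ** 2)

pions = [four_momentum(0.30, 0.20, 0.28),
         four_momentum(0.32, 0.18, 0.30),
         four_momentum(0.28, 0.22, 0.26)]
print(f"Q3 = {Q3(*pions):.3f} GeV")
```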

Bose–Einstein correlations occur only for same-charge pions, while Coulomb and strong final-state interactions occur for both same- and mixed-charge combinations. The cumulant correlation functions are corrected for these final-state interactions as well as for the dilution from long-lived emitters (resonance decays and secondary contamination). For same-charge triplets, the three-pion cumulant Bose–Einstein correlation is clearly visible, while for mixed-charge triplets the same cumulant correlation function is consistent with unity, as expected when final-state interactions are removed. In addition, for each of the systems measured, the figure shows model calculations that do not take quantum and final-state interactions into account, demonstrating the power of the three-pion cumulants in removing backgrounds.

The extraction of the source radius at freeze-out is done by means of Gaussian, Edgeworth, as well as exponential fits to the same-charge three-pion cumulant correlations. The Edgeworth fit represents a Hermite-polynomial expansion of a Gaussian function, and generally provides a good description of the correlation functions. Figure 2 shows the resulting radii from the Edgeworth fits, as a function of charged-particle multiplicity for each of the three collision systems. For comparison, the radius fit parameters from two-pion correlation functions are shown with hollow points.

The regions of overlapping multiplicity for the lead–lead, proton–lead and proton–proton results provide an interesting comparison of system sizes: the lead–lead radii are 35–55% larger than those in proton–lead at similar multiplicity. This observation points to the importance of the initial state, as the number of participating nucleons and the initial size in a lead–lead collision are clearly different from those in proton–lead and proton–proton collisions. The proton–proton and proton–lead overlap zone suggests that the proton–lead system is only 5–15% larger than the proton–proton system at similar multiplicity.

These quantitative observations in the zones of overlapping multiplicity are well described with initial conditions alone, without the additional expansion from a phase of hydrodynamics. However, the measurements do not rule out the presence of hydrodynamics simultaneously in all three collision systems.

From stars to hadrons

“Correlations between identical particles emitted simultaneously in hadron collisions can be used to determine the dimensions of the region where the [particles] are produced. The method is similar to that used by radio-astronomers to measure the angular dimensions of sources.” So begins a paper by Giuseppe Cocconi at CERN, published in 1974. Twenty years earlier, Hanbury Brown and Twiss in the UK had developed a new type of interferometer that used correlations in the intensities of radio signals to measure the angular sizes of sources. They extended this later to visible light and stars. In particle physics, around the same time, Gerson Goldhaber and colleagues in the US found correlations in identical pions produced in proton–antiproton annihilations. Subsequent work showed that indeed there are similarities between the statistics in the detection of photons (bosons) and those of the detection of pions (also bosons) in hadron collisions. The energetic collision can be likened to a thermal light source, with correlated pion momenta offering a window on the size of the source.

• Further reading

G Cocconi 1974 Phys. Lett. 49B 459.
R Hanbury Brown and R Q Twiss 1956 Nature 177 27.
G Goldhaber et al. 1960 Phys. Rev. 120 300.

ICTP: theorists in the developing world

Fernando Quevedo

Fernando Quevedo, director of ICTP since 2009, came to CERN in September to take part in the colloquium “From physics to daily life”, organized for the launch of two books of the same name, to which he is one of the contributors. His participation in such an initiative is not just a fortunate coincidence, but testimony to his willingness to explain the prominent role that theoretical and fundamental physics play in human development. “Theory is the driving force behind the creation of a culture of science, and this is of paramount importance to developing societies,” he explains. “Abdus Salam founded the ICTP because he believed in this strong potential, which comes at a very low cost to the countries that cannot afford expensive experimental infrastructures.”

Unfortunately, theorists are not usually credited properly for their contributions to the development of society. “The reason is that a lot of time separates the theoretical advancement from the practical application,” says Quevedo. “People and policy makers at some point stop seeing the link, and do not see the primary origin of it anymore.” However, although these links are often lost in the complicated ripples of history, when people are asked to recall the names of famous scientists, those they name are most likely theorists. Examples include Albert Einstein, Richard Feynman, James Clerk Maxwell and, of course, Stephen Hawking. More importantly, theories such as quantum mechanics or relativity have changed not just the way that scientists understand the universe but also, years later, everyday life, with applications that range from lasers and global-positioning systems to quantum computation. For Quevedo, “The example I like best is Dirac’s story. He was a purist. He wanted to see the beauty in the mathematical equations. He predicted the existence of antimatter because it came out of his equations. Today, we use positrons – the first antimatter particle predicted by Dirac – in PET scanners, but people never go back to remember his contribution.”

Theorists often have an impact that is difficult to predict, even by their fellow colleagues. “When I was a student in Texas,” recalls Quevedo, “we were studying supersymmetry and string theory for high-energy physics, and we saw that some colleagues were working on even more theoretical subjects. At that time, we thought that they were not on the right track because they were trying to develop a new interpretation of quantum mechanics. Two decades later, some of those people had become the leaders of quantum-information theory and had given birth to quantum computing. Today, this field is booming!” Perhaps surprisingly, there is also an extremely practical “application” of string theory: the arXiv project. This online repository of electronic preprints of scientific papers was invented by string theorist Paul Ginsparg. Perhaps this will be the only practical application of string theory.

While Quevedo considers it important to credit the role of the theorists in the development of society and in creating the culture of science, at the same time, he recognizes an equivalent need for the theorists to open their research horizon and accept the challenge of the present time to tackle more applied topics. “Theorists are very versatile scientists,” he says. “They are trained to be problem solvers, and their skills can be applied to a variety of fields, not just physics.” This year, ICTP is launching a new Master’s course in high-performance computing, which will use a new cluster of computers. In line with Quevedo’s thinking, during the first year, the students will be trained in general matters related to computing techniques. Then, during the second year, they will have the opportunity to specialize not only in physics but also in other subjects, including climate change, astrophysics, renewable energy and mathematical modelling.

All of these arguments should not be seen as justifications for the need to support theoretical physics. Rather, wondering about the universe and its functioning should be a recognized right for anyone. “I come from Guatemala and have the same rights as Americans and Europeans to address the big questions,” confirms Quevedo. “If you are from a poor country, why should you be limited to do agriculture, health, etc? As human beings, we have the right to dream about becoming scientists and understanding the world around us. We have the right to be curious. After all, politicians decide where to put the money, but the person who is spending his/her life on scientific projects is the scientist.”

ICTP has the specific mandate to focus on supporting scientists from developing countries. Across its long history, the institute has proudly welcomed visitors from 188 countries – that is, almost the entire planet. While CERN’s activities are concentrated mainly in developed countries, the activity map of ICTP spreads across all continents more uniformly, including Africa and the whole of Latin America. “Some countries do not have the right level of development for science to get involved in CERN yet. ICTP can play the role of being an intermediate point to attract the participation of scientists from the least developed countries to then get involved with CERN’s projects,” Quevedo comments.

Quevedo’s relationship with CERN goes beyond his role as ICTP’s director. CERN was his first employer when he was a young postdoc, coming from the University of Texas. He still comes to CERN every year, and thinks of it not only as a model but, more importantly, as a “home away from home” for any scientist. Like two friends, CERN and ICTP have a variety of projects that they are developing together. “CERN’s director-general, Rolf Heuer, and myself recently signed a new memorandum of understanding,” he explains. “ICTP scientists collaborate directly in the ATLAS computing working groups. With CERN we are also involved in the EPLANET project (CERN Courier June 2014 p58), and in the organization of the African School of Physics (CERN Courier November 2014 p37). More recently, we are developing new collaborations in teacher training and the field of medical physics.”

Does Quevedo have a dream about the future of CERN? “Yes, I would like to see more Africans, Asians and Latin Americans here,” he says. “Imagine a more coloured cafeteria, with people really coming from all corners of the planet. This could be the CERN of the future.”

ICTP’s 50th anniversary

In June 1960, the Department of Physics at the University of Trieste organized a seminar on elementary particle physics in the Castelletto in Miramare Park. The notion of creating an institute of theoretical physics open to scientists from around the world was discussed at that meeting. That proposal became a reality in Trieste in 1964. Pakistani-born physicist Abdus Salam, who spearheaded the drive for the creation of ICTP by working through the International Atomic Energy Agency, became the centre’s director, and Paolo Budinich, who worked tirelessly to bring the centre to Trieste, became ICTP’s deputy director.

From 6 to 9 October this year, ICTP celebrated 50 years of success in international scientific co-operation and the promotion of scientific excellence in the developing world. More than 250 distinguished scientists, ministers and others attended the anniversary celebration. In parallel, the programme included exhibitions, lectures and special initiatives for schools and the general public.

• For the whole programme of events with photos and videos, visit www.ictp.it/ictp-50th-anniversary.aspx.

 

Cosmic particles meet the LHC at ISVHECRI

In August this year, CERN hosted the International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI), the 18th meeting in the series that started in 1980 in Nakhodka, Russia, and is supported by the International Union of Pure and Applied Physics. In the early years, the symposia focused mainly on studying hadronic interactions of cosmic rays in the atmosphere and in emulsion chambers, which were the main cosmic-ray detectors at the time. The scope of the series has since widened, and it has become a forum where scientists from the cosmic-ray and high-energy physics communities discuss hadronic interactions as a common research subject of the two fields.

At this year’s symposium, which was organized jointly by high-energy and cosmic-ray physicists – Albert de Roeck, Michelangelo Mangano and Bryan Pattison of CERN, and David Berge of NIKHEF – the participants focused on the latest data on hadron production from CERN’s LHC, and the implications for interpreting cosmic-ray measurements. The LHC is the first collider to provide data at an equivalent proton–nucleon energy that exceeds that of the so-called “knee” – the observed change in the cosmic-ray flux at 3 × 10¹⁵ eV, which is still to be explained. A series of review talks provided a comprehensive, cross-experiment overview of the latest LHC data, ranging from dedicated measurements of hadron production in the forward direction to a multitude of minimum-bias measurements in proton–proton and heavy-ion collisions. In addition, presentations showed how the forward measurements made at the HERA electron–proton collider at DESY have proved to be very useful for cosmic-ray studies. These reviews were complemented by an evening lecture on Higgs physics by John Ellis of King’s College London.
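To put this statement into numbers – a back-of-the-envelope estimate rather than a figure quoted at the symposium – the fixed-target equivalent energy of a symmetric proton–proton collider follows from the squared centre-of-mass energy s via E_lab ≈ s/(2m_pc²). For the 8 TeV collisions of the 2012 run,

\[
E_{\mathrm{lab}} \;\approx\; \frac{s}{2 m_p c^2} \;=\; \frac{(8~\mathrm{TeV})^2}{2 \times 0.938~\mathrm{GeV}} \;\approx\; 3.4 \times 10^{16}~\mathrm{eV},
\]

about an order of magnitude above the knee, rising to roughly 10¹⁷ eV at the design energy of 14 TeV.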

Tanguy Pierog of Karlsruhe Institute of Technology (KIT) and CERN’s Peter Skands reviewed the different approaches chosen for developing hadronic-interaction models for applications in cosmic-ray and high-energy physics. Even though the predictions of the models developed for cosmic-ray interactions turned out to describe the first LHC data rather well, some retuning was necessary, both to improve the description of the LHC measurements and to obtain more reliable high-energy extrapolations. After such tuning, the predictions of the different models converge more closely and lead to a more consistent description of air-shower data.

However, even the latest generation of interaction models does not resolve the discrepancies found for the production of muons in extensive air showers at very high energy. A discrepancy in the number of muons at giga-electron-volt energies is seen, for example, in the data from the Pierre Auger Observatory on inclined showers, whose electromagnetic component is absorbed in the atmosphere before reaching the detectors at the Earth’s surface (figure 1). Furthermore, data from the KASCADE-Grande experiment presented by Juan Carlos Arteaga of Universidad Michoacana, Morelia, indicate a much weaker attenuation of the muonic-shower component than expected from simulations. KIT’s Ralf Ulrich pointed out that, in contrast to the electromagnetic-shower profile, which depends only on neutral-pion production in high-energy interactions, both high- and low-energy interactions are important for understanding the production of muons in air showers. Therefore, measurements from fixed-target experiments such as NA61/SHINE at CERN and the Main Injector Particle Production experiment at Fermilab, which Boris Popov of JINR reviewed, are also important for obtaining a better understanding of muon production in air showers. Alternative scenarios for enhancing this muon production, involving extensions of the Standard Model, were discussed by Glennys Farrar of New York University.

Many talks at the symposium illustrated the importance of multimessenger observations in astroparticle physics, for understanding not only the sources and mass composition of cosmic rays but also a plethora of astrophysical phenomena. Examples included the review by Eli Waxman of the Weizmann Institute on different cosmic-particle accelerators and the discussion by Andrew Taylor of the Dublin Institute for Advanced Studies of the propagation of ultra-high-energy cosmic rays.

One highlight of the meeting was the discussion of the high-energy neutrinos from astrophysical sources recently detected by IceCube (figure 2). Kota Murase of the Institute for Advanced Study, Princeton, reviewed different theoretical scenarios for the production of neutrinos in the tera- to peta-electron-volt energy range (10¹²–10¹⁵ eV). Tom Gaisser of the University of Delaware summarized the knowledge on neutrinos produced in interactions of cosmic rays in the atmosphere, which constitute the dominant background of non-astrophysical origin in the IceCube data. At peta-electron-volt and higher neutrino energies, the atmospheric lepton flux is dominated by the decay of charm particles, and LHC measurements of heavy-flavour production are the only experimental data that reach the relevant equivalent energies. Given the limited acceptance in the forward direction at the LHC, QCD calculations and models remain of central importance for understanding high-energy neutrino production, as Victor Gonzalez of Universidade Federal de Pelotas and others discussed. Similarly, as Ina Sarcevic of the University of Arizona pointed out, calculating the interaction cross-section of neutrinos with energies up to 10¹⁹ eV is a challenge in perturbative QCD because of the need for parton densities at very low x. Anna Stasto of Penn State presented different theoretical approaches to understanding low-x QCD phenomena, concluding that there is no multipurpose framework of general applicability.
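To illustrate the scale of the low-x challenge – a standard order-of-magnitude estimate rather than a result presented at the meeting – the parton momentum fraction probed by a neutrino of energy E_ν scattering on a nucleon at rest is roughly x ∼ M_W²/(2m_NE_ν), because the momentum transfer in the charged-current cross-section is cut off near the W-boson mass. For E_ν = 10¹⁹ eV,

\[
x \;\sim\; \frac{M_W^2}{2 m_N E_\nu} \;\approx\; \frac{(80.4~\mathrm{GeV})^2}{2 \times 0.94~\mathrm{GeV} \times 10^{10}~\mathrm{GeV}} \;\approx\; 3 \times 10^{-7},
\]

far below the region in which parton densities are directly constrained by collider data.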

The remaining uncertainties in predicting hadron production in high-energy interactions were among the central questions discussed at the meeting, and were highlighted by Paolo Lipari of INFN/Roma in his concluding remarks. There was general agreement that, in addition to ongoing theoretical and experimental efforts, the measurement of particle production in LHC collisions of protons with light nuclei, for example oxygen, would be the next step needed to reduce the uncertainties further.

CERN: a forward look

On 1 July, the cycle of events celebrating CERN’s 60th anniversary opened in Paris with an event commemorating the anniversary of the CERN Convention, which was signed at the UNESCO headquarters in 1953 by representatives of the founding members. These 12 signatures are indeed worth commemorating. For more than half a century, the convention has stood the test of time as a masterpiece of simple and minimalistic legal language that focuses wisely on the essential cornerstones of CERN’s institutional basis and governance. At the same time, it provides for the leeway that is necessary to adapt the organization to a changing political environment, and to new scientific and technological challenges. The convention is a testimony to the wisdom and foresight of CERN’s founding fathers, on a par with their vision of rebuilding peace in Europe by establishing a unique focal point that would foster scientific collaboration on an unprecedented scale, between nations that had fought a war against each other only a few years earlier. On the basis of this convention, CERN has served as a model for other successful European science organizations, and most recently for the SESAME synchrotron light source in the Middle East.

Some of the most intriguing aspects of the CERN Convention are in the provisions for membership in the organization. Whereas Article II stipulates that “the Organization shall provide for collaboration among European States in nuclear research of a pure scientific and fundamental character…”, nowhere is it stated explicitly that membership in CERN is restricted to European states. This ambiguity is by no means fortuitous. It reflects the fact that already in the early 1950s, a possible enlargement of membership beyond Europe was a hotly debated issue on which the provisional council could not reach agreement. It agreed, however, on a carefully crafted compromise that left a door open to shaping the membership policy of CERN at a later stage, and to adapting it to an evolving scientific and political landscape.

Indeed, Council has debated a widening of membership on several occasions, and confirmed repeatedly a restrictive interpretation of Article II, whereby membership remained reserved for European countries. Only in 2010 did Council approve the most radical shift of paradigm of CERN’s membership policy to date, embedded in a policy of “geographical enlargement” and opening full membership to non-European states, irrespective of their geographical location. At the same time, Council introduced the new instrument of associate membership to facilitate the accession of new members, including emerging countries outside Europe, which might not command sufficient resources to sustain full membership in the foreseeable future.

CERN’s new membership policy follows a twofold rationale. It reflects the globalization of particle physics, which in turn has become a prominent paradigm for the globalization of science at large, and it prepares CERN for its long-term future. Since 2004, the community of CERN “users” has grown from just above 6000 to almost 11,000 scientists and engineers. This dramatic growth has been driven by non-member states more than by the member states. Whereas the numbers are dominated by North America, in recent years the most important growth rates have been observed in communities from Asia and Latin America, where new players emerge on the field of international science. Particle physics has a strong tradition of defying political and geographical boundaries. CERN’s new membership policy underpins, in part, the global migration of the particle-physics community, which reflects the scientific attractiveness and success of the LHC.

More important, geographical enlargement is a first step in preparing CERN’s membership and governance for the post-LHC future. Whereas the LHC experiments today are truly global operations, the LHC machine was built as a predominantly European project, with a technically and politically important contribution of about 10% from outside Europe, mostly provided in kind. This model is not likely to work for a large next-generation facility in Europe. With the CLIC and FCC studies, CERN is exploring two different, challenging avenues to prepare its future, and the future of the field, after the LHC. No cost estimate exists yet for the various options, but it seems inconceivable that any of them could be approved and built within the same membership, governance and funding structures that worked for the LHC 20 years ago – successfully, but not without labour pains.

With 10 applications for membership or associate membership received during the past four years, from countries of varying size both inside and outside Europe (Brazil, Croatia, Cyprus, Israel, Pakistan, Russia, Serbia, Slovenia, Turkey and Ukraine), the enlargement process has made a promising start. Some of the accession procedures have been completed (Israel has become CERN’s 21st member state), Serbia is an associate member in the pre-stage to membership, and other accession procedures are expected to conclude in the near future. (Romania, which applied for membership before the introduction of the new policy in 2010, has been integrated a posteriori into the same accession procedure as the other, more recent applicant states.) Other countries that would seem natural candidates acknowledge the promise and potential of a continued scientific and technological partnership, but have remained absent so far, or are hesitant on political or financial grounds.

More work, stamina and patience will be needed to enlarge the membership of CERN to a size that is commensurate, in quantity and quality, with its future ambitions. Moreover, not all states that are obvious candidates for a closer scientific and technical partnership may share today the values of an excellence-driven, consensus-oriented governance that has prevailed for most of CERN’s 60-year history. In the long term, broadening the institutional base without sacrificing the traditional values of European co-operation that have been a key ingredient in CERN’s past successes is likely to emerge as the true challenge of the enlargement process.
