
Can heavy-ion collisions cast light on strong CP?

The symmetries of parity (P) and its combination with charge conjugation (C) are known to be broken in the weak interaction. However, in the strong interaction the P and CP invariances are respected – although QCD provides no reason for their conservation. This is the “strong CP problem”, one of the remaining puzzles of the Standard Model.

The possibility of observing parity violation in the hot and dense hadronic matter formed in relativistic heavy-ion collisions has been discussed for many years. Various theoretical approaches suggest that in the vicinity of the deconfinement phase transition, the QCD vacuum could create domains – local in space and time – that could lead to CP-violating effects. These could manifest themselves via a separation of charge along the direction of the system’s angular momentum – or, equivalently, along the direction of the strong, approximately 10¹⁴ T, magnetic field that is created in non-central heavy-ion collisions and perpendicular to the reaction plane (i.e. the plane of symmetry of a collision, defined by the impact-parameter vector and the beam direction). This phenomenon is called the chiral magnetic effect (CME). Fluctuations in the sign of the topological charge of these domains cause the resulting charge separation to be zero when averaged over many events. This makes the observation of the CME possible only via P-even observables, expressed in terms of two- and multi-particle correlations.

The ALICE collaboration has studied the charge-dependent azimuthal particle correlations at mid-rapidity in lead–lead collisions at the centre-of-mass energy per nucleon pair, √sNN = 2.76 TeV. The analysis was performed over the entire event sample recorded with a minimum-bias trigger in 2010 (about 13 million events). A multi-particle correlator was used to probe the magnitude of the potential signal while at the same time suppressing any background correlations unrelated to the reaction plane. This correlator has the form 〈cos(φα + φβ – 2ΨRP)〉, where φ is the azimuthal angle of a particle and the subscripts α and β indicate the charge or the particle type. The orientation of the reaction plane is represented by ΨRP; it is not known experimentally but is instead estimated by constructing the event plane from azimuthal particle distributions.
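As a toy illustration (not the ALICE analysis code), the correlator can be estimated from simulated events; here the event-plane angle is taken from the second-harmonic Q-vector of the same particles, ignoring the resolution and autocorrelation corrections applied in the real analysis:

```python
import numpy as np

def correlator(events):
    """Toy estimate of <cos(phi_a + phi_b - 2*Psi_EP)> over all distinct
    particle pairs in each event, averaged over events.
    `events` is a list of arrays of azimuthal angles (radians)."""
    vals = []
    for phis in events:
        # Event-plane angle from the second-harmonic Q-vector
        psi = 0.5 * np.arctan2(np.sin(2 * phis).sum(), np.cos(2 * phis).sum())
        a, b = np.triu_indices(len(phis), k=1)   # all distinct pair indices
        vals.append(np.cos(phis[a] + phis[b] - 2 * psi).mean())
    return float(np.mean(vals))
```

In the real analysis the pairs are additionally split by charge combination, which is what allows the in-plane and out-of-plane contributions to be separated.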

The figure shows the correlator as a function of the collision centrality compared with model calculations, together with results from the Relativistic Heavy-Ion Collider (RHIC). The points from ALICE, shown as full and open red markers for pairs with the same and opposite charge, respectively, indicate a significant difference not only in the magnitude but also in the sign of the correlations for different charge combinations, which is consistent with the qualitative expectations for the CME. The effect becomes more pronounced moving from central to peripheral collisions, i.e. moving from left to right along the x-axis. The previous measurement of charge separation by the STAR collaboration at RHIC in gold–gold collisions at √sNN = 0.2 TeV, also shown in the figure (blue stars), is in both qualitative and quantitative agreement with the measurement at the LHC.


The thick solid line in the figure shows a prediction for the same-sign correlations caused by the CME at LHC energies, based on a model that makes certain assumptions about the duration and time-evolution of the magnetic field. This model underestimates the observed magnitude of the same-sign correlations seen at the LHC. However, parallel calculations based on arguments related to the initial time at which the magnetic field develops, as well as the same value of the magnetic flux for both energies, suggest that the CME might have the same magnitude at the energies of both colliders. Conventional event generators, such as HIJING, which do not include P-violating effects, show no significant difference between correlations of pairs with the same and opposite charge; the green triangles in the figure represent the average over the two charge combinations.

An alternative explanation to the CME was recently provided by a hydrodynamical calculation, which suggests that the correlator being studied may have a negative (i.e. out-of-plane), charge-independent, dipole-flow contribution originating from fluctuations in the initial conditions of a heavy-ion collision. This would shift the baseline and, when coupled with the well-known effect of local charge conservation in a medium exhibiting strong azimuthal (i.e. elliptic) modulations, could potentially give a quantitative description of the centrality dependence observed by both ALICE and STAR. The results from ALICE for the charge-independent correlations are indicated by the blue band in the figure.

The measurements are supplemented by a differential analysis and will be extended with a study of higher harmonics, which will also investigate the correlations of identified particles. These studies are expected to shed light on one of the remaining fundamental questions of the Standard Model.

Searching for new physics in rare kaon decays

The LHCb experiment was originally conceived to study particles containing the beauty-flavoured b quark. However, there are many other possibilities for interesting measurements that exploit the unique forward acceptance of the detector. For example, the physics programme has already been extended to include the study of particles containing charm quarks, as well as electroweak physics. Now, a new result from LHCb on a search for a rare kaon decay has further increased the breadth of the experiment’s physics goals.

This search is for the decay K0S→μ⁺μ⁻, which is predicted to be greatly suppressed in the Standard Model. The branching ratio is expected to be 5 × 10⁻¹², while the current experimental upper limit (dating from 1973) is 3.2 × 10⁻⁷ at 90% confidence level (CL). Although the dimuon decay of the K0L has been observed, with a branching fraction of the order of 10⁻⁸, searches for the counterpart decay of the K0S meson are well motivated because the K0S decay can be mediated by processes independent of those responsible for the K0L decay.


The analysis is based on the 1.0 fb⁻¹ of data collected by LHCb in 2011. To suppress the background most efficiently, it involves several techniques that were originally developed for the search for B0S→μ⁺μ⁻, for which LHCb has set the best limit in the world. The analysis also benefits from knowledge of K0S production and reconstruction that has been developed in several previous measurements (including LHCb’s first published paper, on the production of K0S mesons in 900 GeV proton–proton collisions).

To extract an upper limit on the branching fraction, the yield is normalized relative to that in the copious K0S→π⁺π⁻ decay mode. The 90% CL upper limit on the branching ratio B(K0S→μ⁺μ⁻) is determined to be less than 9 × 10⁻⁹, a factor of 30 improvement over the previous most restrictive limit. As the figure shows, no significant evidence of the decay is seen.
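The normalization logic can be sketched as follows; the numbers passed in below are purely illustrative (only the K0S→π⁺π⁻ branching fraction of about 69% is a measured value), not LHCb’s actual yields or efficiencies:

```python
# Sketch of a branching-fraction normalisation to a well-known decay mode:
# B(K0S -> mu mu) = (N_mumu / N_pipi) * (eff_pipi / eff_mumu) * B(K0S -> pi pi)
BR_PIPI = 0.692  # measured K0S -> pi+ pi- branching fraction (~69.2%)

def branching_fraction(n_mumu, n_pipi, eff_ratio):
    """n_mumu, n_pipi: signal and normalisation yields;
    eff_ratio = eff_pipi / eff_mumu (total efficiency ratio)."""
    return (n_mumu / n_pipi) * eff_ratio * BR_PIPI
```

Normalizing to an abundant mode with the same parent particle cancels the K0S production rate and much of the reconstruction systematics.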

Although the new limit is still three orders of magnitude above the Standard Model prediction, it starts to approach the level where new physics effects might begin to appear. Moreover, the data collected by LHCb in 2012 already exceed the sample from 2011 and by the end of the year the total data set should have more than trebled. The collaboration is continuing to search for ways to broaden its physics reach further to make the best use of this unprecedented amount of data and to tune the trigger algorithms for future data-taking and for the LHCb upgrade.

The search for ‘big news’ continues

The big news this summer was on the new Higgs-like boson and how the hint of an excess in last year’s 7 TeV data from the LHC became an observation with this year’s 8 TeV data. Yet there were many other search results, first presented at the International Conference on High-Energy Physics (ICHEP) in Melbourne, which benefited greatly from the new higher-energy data. The searches for hypothetical heavy partners of the Standard Model W and Z bosons – the W’ and Z’ – were the CMS collaboration’s priorities for analysis with the 8 TeV data, both because the 7 TeV data included a hint of a high-mass excess and because the 8 TeV data provide a large boost in sensitivity at high mass. Searches for other heavy particles, such as the supersymmetric partners of the gluon and quarks (the gluino and squarks), were likewise priorities that benefited from the increased LHC energy.

Building on last year’s interesting results, the collaboration searched for narrow high-mass Z’ resonances decaying to pairs of electrons or muons in the 8 TeV data collected between April and June this year. At the same time, a search was conducted for a W’, which should decay to a neutrino and a single lepton (electron or muon). Because the Z’ and W’ can be massive, the searches require the identification of highly energetic leptons and a detailed understanding of their behaviour in the detector. The figure shows the spectra for the decay of the Z’ to electron pairs, for the 7 TeV and 8 TeV data combined. It illustrates the importance of understanding the high-mass region – just a few events appearing there may indicate a discovery.


The search for supersymmetric particles also relies on the production of a few events with massive particles, e.g. gluinos or squarks. These typically undergo cascading decays culminating in multi-jet final states with apparent momentum nonconservation in the detector, owing to the production of two neutral, weakly interacting particles at the end of the cascades that escape detection. (These particles would serve as excellent dark-matter candidates). Decays involving multiple b quarks, photons or same-sign dileptons were all priority search modes with the 8 TeV data. Each benefited from last year’s methods to measure backgrounds from control samples in the data. They also benefited from the rarity of Standard Model processes with such high-mass and complex final states. One particularly interesting background that affects the same-sign dilepton search is the production of a W or Z boson in association with top quarks, which leads to spectacular final states. A first measurement of these processes – obtained with the 8 TeV data – was also presented at ICHEP.

These high-mass searches have found the data to be consistent with Standard Model processes and have significantly improved limits on the range of possible masses for these hypothetical particles. The W’ and Z’ searches set 95% CL limits at 2.85 TeV and 2.59 TeV, respectively, and the gluino/squark searches excluded their masses up to 1.0 TeV. These results correspond to large increases in sensitivity, thanks to the LHC’s energy increase and improved analysis of the new data. At CMS, the search for more “big news” continues.

BELLA laser achieves 1 PW at 1 pulse a second


The laser system of the Berkeley Lab Laser Accelerator (BELLA) has achieved a world record for laser performance by delivering 1 PW of power in a 1 Hz pulse only 40 fs long. No other laser system has achieved this peak power at such a rapid pulse rate. Although the laser’s average power is only 42.4 W, it achieves the enormous peak power in part through compression into an extremely short pulse. This laser system will drive the acceleration of electron beams in a metre-long plasma channel, with the aim of reaching 10 GeV for the first time with a laser-driven plasma accelerator.
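The quoted figures are mutually consistent, as a quick check shows; the 42.4 J pulse energy used here is inferred from the quoted average power at the 1 Hz repetition rate, not taken from the original report:

```python
# Peak power = pulse energy / pulse length; average power = energy * rate.
pulse_energy = 42.4      # J (inferred from 42.4 W average power at 1 Hz)
pulse_length = 40e-15    # s (40 fs)
rep_rate = 1.0           # Hz

peak_power = pulse_energy / pulse_length   # ~1.06e15 W, i.e. about 1 PW
avg_power = pulse_energy * rep_rate        # 42.4 W
```

The enormous ratio between the two numbers (about 2.5 × 10¹³) is entirely due to the femtosecond compression of the pulse.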

BELLA, conceived of in 2006 by Wim Leemans, head of the Lasers and Optical Accelerator Systems Integrated Studies programme (LOASIS), is nearing completion at the Lawrence Berkeley National Laboratory (LBNL). The facility builds on previous experiments on laser-driven plasma acceleration by the LOASIS programme. It promises to pave the way for developing compact particle accelerators for high-energy physics, as well as table-top free-electron lasers for investigating materials and biological systems. Experiments to demonstrate the production of 10-GeV electron beams are now beginning.

The atomic nucleus: fissile liquid or molecule of life?


The atomic nucleus is generally described as a drop of quantum liquid. In particular, such liquid-like behaviour explains nuclear fission and applies especially to heavy nuclei such as uranium. The so-called liquid drop mass formula is a typical textbook model in nuclear physics. On the other hand, light nuclei can behave like tiny molecules – or clusters – made up of neutrons and protons within the nucleus. This molecular aspect at the femtometre scale makes it possible to understand the stellar nucleosynthesis of ¹²C and consequently of heavier elements such as oxygen.

So far, both the “molecular nucleus” and the “liquid nucleus” views have co-existed. Now, a team from the Institut de Physique Nucléaire d’Orsay (Université Paris-Sud/CNRS) and the French Atomic Energy Commission (CEA), in collaboration with the University of Zagreb, has proposed a unified view of these two aspects. By using relativistic-energy density functionals, the researchers have demonstrated that, although a light nucleus can show molecule-like behaviour (tending towards the crystalline state), heavier nuclei take on a more liquid-like behaviour.

The team took inspiration from neutron stars – remnants of core-collapse supernovae that are composed mainly of neutrons with a few protons. Inside the crust of a neutron star, matter passes from being a nucleonic crystalline medium to becoming a nuclear-liquid medium. Thanks to this analogy, the team identified a mechanism of transition from the liquid to the crystalline state in the nucleus.

When the interactions between neutrons and protons – through the depth of the confining nuclear potential – are not strong enough to fix them within the nucleus, the latter is in a quantum-liquid-like state where protons and neutrons are delocalized. Conversely, in a crystalline state, neutrons and protons would be fixed at regular intervals within the nucleus. The nuclear molecule is interpreted as being an intermediate state between a quantum liquid and a crystal. In the long term, the aim is to attain a unified understanding of these various states of the nucleus.

ALMA tastes sugar around a Sun-like star

A team of astronomers using the Atacama Large Millimetre/submillimetre Array (ALMA) has identified sugar molecules in the gas surrounding a young Sun-like star. This is the first time that sugar has been found in space around such a star and the discovery suggests that the building blocks of life are available when planets form.

A long list of molecules has already been detected in the interstellar medium. They range from simple diatomic compounds such as O₂ or CO to complex molecules, including alcohol or even fullerenes (C₆₀ or “buckyballs”). The simple form of sugar found by ALMA is glycolaldehyde, H₂COHCHO, a molecule that was first detected in 2000 in a big molecular cloud, Sagittarius B2, near the centre of the Galaxy. It has now been detected in a planet-forming disc around IRAS 16293-2422, a young binary star of about the same mass as the Sun. The star is located some 400 light-years away in the relatively nearby Rho Ophiuchi star-forming region.

Finding molecules in space requires an extremely precise spectrometer in the microwave range of the electromagnetic spectrum. The vibrational and rotational states of a molecule are quantized and so can take only certain fixed energies. As in atomic de-excitation, the transition from one vibrational or rotational level to another with less energy results in the emission of a photon of the corresponding energy. This leads to an excess of photons at given energies, resulting in characteristic emission lines in the spectrum of a source.
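In the simplest rigid-rotor approximation the rotational levels are E_J = hBJ(J+1), so the J → J–1 line sits at frequency 2BJ. A sketch using CO, whose rotational constant of roughly 57.6 GHz puts its lowest lines in ALMA’s band (glycolaldehyde’s actual spectrum is far richer than this two-parameter model):

```python
# Rigid-rotor sketch: E_J = h*B*J*(J+1), so the photon emitted in the
# J -> J-1 transition has frequency nu = 2*B*J.
B_CO = 57.64e9  # rotational constant of CO in Hz (approximate)

def line_frequency(J, B=B_CO):
    """Frequency (Hz) of the J -> J-1 rotational emission line."""
    return 2 * B * J
```

The resulting ladder of lines at 115, 231, 346 GHz and so on is the characteristic fingerprint that millimetre spectrometers search for.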

The ALMA observatory – which saw “first light” a year ago – offers the high sensitivity and spectral resolution needed for such studies (CERN Courier November 2011 p13). The project is a partnership between Europe, North America and East Asia in co-operation with the Republic of Chile. Led by the European Southern Observatory (ESO) on behalf of Europe, it is the largest astronomical project in existence. When completed in 2013, ALMA will be a giant array of 66 antennae that can be moved in different configurations with a maximum extension of 16 km. Located in the harsh environment of the Chajnantor plateau in northern Chile, at 5000 m above sea level, it benefits from a dry and thin high-altitude atmosphere.

The emission lines corresponding to glycolaldehyde were discovered by a team of astronomers led by Jes Jørgensen of the Niels Bohr Institute, University of Copenhagen. The importance of the discovery lies in glycolaldehyde being one of the ingredients in the formation of ribonucleic acid (RNA) – a macromolecule similar to DNA – which is essential for life on Earth. Finding such complex molecules around a star at distances similar to the distance between Uranus and the Sun means that the building blocks of life exist around new-born stars at the time of planet formation.

There are two open issues linked to the finding. First, how do these complex molecules form? Second, can they survive planet formation? Random interactions of atoms floating in space would not be sufficient to explain the amount and complexity of the molecules that have been found there. Dust grains are thought to be a favoured place where atoms could be ionized by cosmic rays and then combined with nearby atoms or molecules by electrostatic attraction. If these space-borne molecules are to be considered as the seeds for life, then a means of protecting them from excessive heat and the ionizing ultraviolet radiation from the star during the chaotic process of planet formation needs to be understood. What is much more certain is that ALMA is keeping its promises and will bring new clues to the origin of life on Earth and possibly elsewhere.

3D cooling for uranium collisions at RHIC

In May this year, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) finished its first run with beams of uranium ions – the heaviest ions ever used in a collider. Heavy ions contain large numbers of protons and neutrons and, when colliding at high energies, they create quark–gluon plasma, the state of matter that probably existed at the dawn of the universe. Not only was this the first time that uranium ions have been used in a particle collider, it was also the first time that the complete bunched-beam stochastic cooling system was used at RHIC, allowing cooling in the longitudinal, vertical and horizontal planes in both of the collider’s interlaced magnet rings.

Uranium ions are now available at RHIC courtesy of the recently commissioned electron-beam ion source (EBIS). Physicists at the STAR and PHENIX experiments are particularly interested in uranium nuclei because of their prolate shape, more like a rugby ball than a sphere. Some of these nuclei will collide along their long axes, creating a quark–gluon plasma denser than the plasma discovered and now routinely created at RHIC in collisions of gold nuclei, which are more spherical. Some nuclei will collide with their long axes parallel, although perpendicular to their directions of motion. This arrangement creates a quark–gluon plasma with an oblong cross-section but without the strong magnetic field generated by grazing incidence collisions of spherical nuclei. Both of these possibilities make uranium–uranium collisions a new tool for studying quark–gluon plasma, adding to the toolbox that is currently available at both RHIC and the LHC.

A hadron-collider ‘first’


The amount of data delivered to the STAR and PHENIX experiments in the three-week exploratory run was increased five-fold by stochastic cooling, a feedback technique that shrinks the ion beams while they are colliding. This technique was developed at RHIC by a team that included Mike Blaskiewicz, Mike Brennan and Kevin Mernick (Blaskiewicz et al. 2010). The cooling is so strong that the beam size is reduced by half after an hour of storage time (figure 1) and the peak luminosity – or collision rate – rises to three times its initial value (figure 2). This has never been achieved in a hadron collider before. With a re-optimized lattice and stochastic cooling, no ions were lost by any mechanism other than through the uranium–uranium collisions themselves, which is also a first for a hadron collider.


In stochastic cooling, invented by Simon van der Meer and first demonstrated at CERN’s Intersecting Storage Rings in 1975, random fluctuations of particle distributions are detected and corrected for. The result is smaller and smaller distributions. The technique involves sending a signal from a pickup at one location to activate a kicker to correct the same bunch at a point further round the ring. While stochastic cooling was and is used in a number of low-energy storage rings, RHIC is the first collider with operational stochastic cooling. The procedure was first demonstrated in 2006 using a low-intensity proton bunch with 10⁹ particles. Operational longitudinal cooling of gold ions in one of RHIC’s two rings was demonstrated the following year. Since then, both the Blue ring (clockwise) and the Yellow ring (anticlockwise) have been fitted with horizontal, vertical and longitudinal cooling, with full 3D cooling now available.

From pickup to kicker

The detection of fluctuations of distributions with high numbers of particles requires bandwidths in the gigahertz range. At RHIC, the ion beams at storage energy are composed of bunches of 5 ns full-width, separated by 107 ns. Cooling times of about 1 hour are obtained with a system bandwidth of 3 GHz and optimal kicker voltages of typically 3 kV. To reduce the microwave power required, a set of kicker cavities with a bandwidth of only 10 MHz has been adopted to take advantage of the bunch spacing. Each kicker consists of 16 cavities. Therefore, with three cooling planes there are 96 cavities in all for the two rings. The systems in the two rings are quite similar, so the following describes only the set-up for the Blue ring.


The longitudinal pickup is located in the 2 o’clock straight section (figure 3). Before the pickup signal is transmitted, it is first put through a transversal filter that repeats the signal 16 times to stretch it, with output S1(t) = S0(t)+S0(t–τ)+ … +S0(t–15τ) and τ = 5.000 ns. The effect of the filter, which is a key feature of the system at RHIC, is to maintain all of the information in the 5-ns-long bunch core while reducing the peak signal. This, in turn, lightens the load on the specially adapted commercial microwave link that is used to send the signal to the longitudinal kicker in the 4 o’clock straight section of the Blue ring. There, a one-turn filter is applied, where S2(t) = S1(t)–S1(t–Trev) with the revolution period Trev accurate to better than 1 ps. This filter ensures that the kick to the beam is proportional to the rate at which the beam is changing, similar to a viscous damping force being proportional to the velocity of a particle, not its position. The transversal filter causes the spectrum of the signal to have peaks of width 10 MHz separated by 200 MHz.
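A minimal numerical sketch of the two filters (the sample rate and discrete-array treatment are illustrative assumptions; the actual system operates on analogue RF signals):

```python
import numpy as np

FS = 40e9                      # assumed sample rate: 40 GS/s
TAU = int(round(5e-9 * FS))    # 5 ns tap spacing, in samples

def transversal_filter(s0, taps=16, sign=1):
    """S1(t) = S0(t) + sign*S0(t-tau) + sign^2*S0(t-2*tau) + ...:
    a 16-tap comb that stretches the 5 ns bunch signal. sign=+1 gives
    the longitudinal filter, sign=-1 the antisymmetric transverse one."""
    s1 = np.zeros(len(s0) + (taps - 1) * TAU)
    for k in range(taps):
        s1[k * TAU : k * TAU + len(s0)] += (sign ** k) * s0
    return s1

def one_turn_filter(s1, trev):
    """S2(t) = S1(t) - S1(t - Trev): the kick is proportional to the
    turn-to-turn change of the beam signal (trev in samples)."""
    s2 = s1.copy()
    s2[trev:] -= s1[:-trev]
    return s2
```

Sixteen taps spaced 5 ns apart give spectral peaks every 1/(5 ns) = 200 MHz, each roughly 1/(16 × 5 ns) ≈ 12 MHz wide – consistent with the 10 MHz kicker-cavity bandwidth.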


The 16 kicker cavities in the longitudinal system operate at frequencies of 6.0 GHz, 6.2 GHz, …, 9.0 GHz. To drive them, the pickup signal is split into 16 channels, corresponding to the individual cavities. Each channel goes through a band-pass filter with a width of 100 MHz centred at its cavity frequency so that a given cavity is driven by a sinusoidal signal whose phase and amplitude change from one bunch to the next. The individual signals are put through analogue linear modulators that adjust the phase and amplitude to obtain optimal cooling. The amplifiers are located in the tunnel close to the kickers and have a peak power of 40 W.

To set up the system, open-loop beam transfer-functions are measured at each cavity frequency. The phase and amplitude are optimized using the signal suppression observed in the pickup spectrum. Signal suppression occurs because the observed signal from the beam is the sum of the Schottky signal and the coherent beam response of the cooling system. When things are tuned correctly the observed signal has 1/4 the power of the signal without cooling. During operation the full aperture of the kicker is only 2 cm, so the cavities are open during injection and acceleration and close only after storage energy is reached.

The vertical and horizontal stochastic cooling systems employ fibre-optic links between the pickups and the kickers, with a net delay of about 2/3 of a turn. The use of fibres, with their reduced signal velocities, is possible because these transverse systems can tolerate the extra delay without compromising performance. Here the Blue cavities operate at frequencies of 4.7 GHz, 4.9 GHz, …, 7.7 GHz, and the Yellow cavities at 4.8 GHz, 5.0 GHz, …, 7.8 GHz. The offset in frequency between the rings is needed to avoid ring-to-ring interference via microwaves propagating from one ring to the other through the common straight sections. The Blue low-level system employs the antisymmetric filter S1(t) = S0(t) – S0(t–τ) + S0(t–2τ) – … – S0(t–15τ), with τ = 5.000 ns, to put the peaks in the signal spectrum at the cavity frequencies. Like their longitudinal counterparts, the transverse cavities are open during injection and acceleration and close once storage energy is reached.

As the beam distribution evolves and components warm up, the optimal loop parameters change. The gain and phase of the system-transfer functions are therefore automatically optimized, approximately every 5 to 15 minutes. This is done one cavity at a time so that cooling is not compromised. The open-loop system-transfer function for each cavity is measured using a network analyser. The measured transfer function is compared with a stored reference function and the phase and amplitude of the low-level gain are adjusted to minimize the mean-square difference between the measured and stored transfer functions. The one-turn delay filters of the longitudinal systems are also corrected automatically by adjusting a piezoelectric delay module in the fibreoptic cable that supplies the delay.
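The phase-and-amplitude adjustment described above amounts to a one-parameter complex least-squares fit, for which a closed form exists. A minimal sketch (the transfer-function arrays here are hypothetical, not RHIC measurements):

```python
import numpy as np

def best_gain(h_meas, h_ref):
    """Complex gain g minimising sum |g*H_meas - H_ref|^2 over the measured
    frequency points; |g| is the amplitude correction and angle(g) the
    phase correction to apply to the low-level gain."""
    return np.vdot(h_meas, h_ref) / np.vdot(h_meas, h_meas)
```

Solving for a single complex scalar makes each per-cavity update fast enough to run every few minutes without interrupting cooling.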

The stochastic cooling system has significantly improved the integrated luminosity. During 2011 vertical and longitudinal cooling was used in both rings with gold ions, while horizontal cooling was achieved using betatron coupling. With all of the other parameters held constant, the cooling system doubled the integrated luminosity per store. After the installation of horizontal cooling systems, RHIC ran with uranium–uranium collisions in 2012. Figure 2 shows collision rates in the STAR and PHENIX detectors. The cooling reduced the beam size to such an extent that the collision rates were increased by almost a factor of 3 and, when compared with no cooling, the integrated luminosity was increased by a factor of 5.

The history of QCD


About 60 years ago, many new particles were discovered, in particular the four Δ resonances, the six hyperons and the four K mesons. The Δ resonances, with a mass of about 1230 MeV, were observed in pion–nucleon collisions at what was then the Radiation Laboratory in Berkeley. The hyperons and K mesons were discovered in cosmic-ray experiments.

Murray Gell-Mann and Yuval Ne’eman succeeded in describing the new particles in a symmetry scheme based on the group SU(3), the group of unitary 3 × 3 matrices with determinant 1 (Gell-Mann 1962, Ne’eman 1961). SU(3)-symmetry is an extension of isospin symmetry, which was introduced in 1932 by Werner Heisenberg and is described by the group SU(2).

The observed hadrons are members of specific representations of SU(3). The baryons are octets and decuplets, the mesons are octets and singlets. The baryon octet contains the two nucleons, the three Σ hyperons, the Λ hyperon and the two Ξ hyperons (see figure 1). The members of the meson octet are the three pions, the η meson, the two K mesons and the two K̄ mesons.

In 1961, nine baryon resonances were known, including the four Δ resonances. These resonances could not be members of an octet. Gell-Mann and Ne’eman suggested that they should be described by an SU(3)-decuplet but one particle was missing. They predicted that this particle, the Ω, should soon be discovered with a mass of around 1680 MeV. It was observed in 1964 at the Brookhaven National Laboratory by Nicholas Samios and his group. Thus the baryon resonances were members of an SU(3) decuplet.

It was not clear at the time why the members of the simplest SU(3) representation, the triplet representation, were not observed in experiments. These particles would have non-integral electric charges: 2/3 or –1/3.

The quark model

In 1964, Gell-Mann and Feynman’s PhD student George Zweig, who was working at CERN, proposed that the baryons and mesons are bound states of the hypothetical triplet particles (Gell-Mann 1964, Zweig 1964). Gell-Mann called the triplet particles “quarks”, using a word that had been introduced by James Joyce in his novel Finnegans Wake.

Since the quarks form an SU(3) triplet, there must be three quarks: a u quark (charge 2/3), a d quark (charge –1/3) and an s quark (charge –1/3). The proton is a bound state of two u quarks and one d quark (uud). Inside the neutron are two d quarks and one u quark (ddu). The Λ hyperon has the internal structure uds. The three Σ hyperons contain one s quark and two u or two d quarks (uus or dds). The Ξ hyperons are the bound states uss and dss. The Ω is a bound state of three s quarks: sss. The eight mesons are bound states of a quark and an antiquark.
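These charge assignments can be verified by simple addition over the quark content; a small illustrative check (not from the original papers):

```python
# Quark charges in units of the elementary charge
CHARGE = {'u': 2/3, 'd': -1/3, 's': -1/3}

def hadron_charge(quarks):
    """Total electric charge of a bound state, e.g. 'uud' for the proton."""
    return sum(CHARGE[q] for q in quarks)
```

The proton (uud) comes out at +1, the neutron (ddu) at 0 and the Ω (sss) at –1, as observed.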

In the quark model, the breaking of the SU(3)-symmetry can be arranged by the mass term for the quarks. The mass of the strange quark is larger than the masses of the two non-strange quarks. This explains the mass differences inside the baryon octet, the baryon decuplet and the meson octet.

Introducing colour

In the summer of 1970, I spent some time at the Aspen Center of Physics, where I met Gell-Mann and we started working together. In the autumn we studied the results from SLAC on the deep-inelastic scattering of electrons off atomic nuclei. The cross-sections depend on the mass of the virtual photon and the energy transfer. However, the experiments at SLAC found that the cross-sections at large energies depend only on the ratio of the photon mass and the energy transfer – they showed a scaling behaviour, which had been predicted by James Bjorken.


In the SLAC experiments, the nucleon matrix-element of the commutator of two electromagnetic currents is measured at nearly light-like distances. Gell-Mann and I assumed that this commutator can be abstracted from the free-quark model and we formulated the light-cone algebra of the currents (Fritzsch and Gell-Mann 1971). Using this algebra, we could understand the scaling behaviour. We obtained the same results as Richard Feynman in his parton model, if the partons are identified with the quarks. It later turned out that the results of the light-cone current algebra are nearly correct in the theory of QCD, owing to the asymptotic freedom of the theory.

The Ω is a bound state of three strange quarks. Since this is the ground state, the spatial wave-function should be symmetric. The three spins of the quarks are aligned to give the spin of the Ω. Thus the wave function of the Ω does not change if two quarks are interchanged. However, the wave function must be antisymmetric according to the Pauli principle. This was a great problem for the quark model.

In 1964, Oscar Greenberg discussed the possibility that the quarks do not obey the Pauli statistics but rather a “parastatistics of rank three”. In this case, there is no problem with the Pauli statistics but it was unclear whether parastatistics makes any sense in a field theory of the quarks.

Two years later, Moo-Young Han and Yoichiro Nambu considered nine quarks instead of three. The electric charges of these quarks were integral. In this model there were three u quarks: two of them had electric charge +1, while the third had charge 0 – so on average the charge was 2/3. The symmetry group was SU(3) × SU(3), which was assumed to be strongly broken. The associated gauge bosons would be massive and would have integral electric charges.

In 1971, Gell-Mann and I found a different solution of the statistics problem (Fritzsch and Gell-Mann 1971). We considered nine quarks, as Han and Nambu had done, but we assumed that the three quarks of the same type had a new conserved quantum number, which we called “colour”. The colour symmetry SU(3) was an exact symmetry. The wave functions of the hadrons were assumed to be singlets of the colour group. The baryon wave-functions are antisymmetric in the colour indices, denoted by red (r), green (g) and blue (b):
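Written out in the standard notation, this antisymmetric colour combination reads (ε is the totally antisymmetric tensor):

```latex
|B\rangle \;\sim\; \frac{1}{\sqrt{6}}\,\varepsilon_{abc}\,|q^{a} q^{b} q^{c}\rangle,
\qquad a, b, c \in \{r, g, b\}
```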

Thus the wave function of a baryon changes sign if two quarks are exchanged, as required by the Pauli principle. Likewise, the wave functions of the mesons are colour singlets:
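In standard notation, a meson colour singlet is the symmetric sum over the three colours of quark–antiquark pairs:

```latex
|M\rangle \;\sim\; \frac{1}{\sqrt{3}}\left(|r\bar{r}\rangle + |g\bar{g}\rangle + |b\bar{b}\rangle\right)
```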

The cross-section for electron–positron annihilation into hadrons at high energies depends on the squares of the electric charges of the quarks and on the number of colours. For three colours this leads to:
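With the three light flavours u, d and s, the standard counting gives:

```latex
R \;=\; \frac{\sigma(e^{+}e^{-} \to \text{hadrons})}{\sigma(e^{+}e^{-} \to \mu^{+}\mu^{-})}
\;=\; N_{c}\sum_{q} e_{q}^{2}
\;=\; 3\left(\frac{4}{9} + \frac{1}{9} + \frac{1}{9}\right) \;=\; 2
```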

Without colours this ratio would be 2/3. The experimental data, however, were in agreement with a ratio of 2.

In 1971–1972, Gell-Mann and I worked at CERN. Together with William Bardeen we investigated the electromagnetic decay of the neutral pion into two photons. It was known that in the quark model the decay rate is about a factor nine less than the measured decay rate – another problem for the quark model.

The decay amplitude is given by a triangle diagram, in which a quark–antiquark pair is created virtually and subsequently annihilates into two photons. We found that after the introduction of colour, the decay amplitude increases by a factor three – each colour contributes to the amplitude with the same strength. For three colours, the result agrees with the experimental value.
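Schematically, each colour contributes coherently to the triangle amplitude, so the amplitude grows linearly with the number of colours and the rate quadratically:

```latex
\mathcal{A}(\pi^{0}\to\gamma\gamma) \;\propto\; N_{c}\,\bigl(e_{u}^{2} - e_{d}^{2}\bigr)
\quad\Longrightarrow\quad
\Gamma \;\propto\; N_{c}^{2}
```

Thus N_c = 3 recovers the missing factor of nine in the decay rate.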

In the spring of 1972, we started to interpret the colour group as a gauge group. The resulting gauge theory is similar to quantum electrodynamics (QED). The interaction of the quarks is generated by an octet of massless colour gauge bosons, which we called gluons (Fritzsch and Gell-Mann 1972). We later introduced the name “quantum chromodynamics”, or QCD. We published details of this theory one year later, together with Heinrich Leutwyler (Fritzsch et al. 1973).

In QCD, the gluons interact not only with the quarks but also with themselves. This direct gluon–gluon interaction is important – it leads to the decrease of the coupling constant with increasing energy, i.e. the theory is asymptotically free, as discovered in 1972 by Gerard ’t Hooft (unpublished) and in 1973 by David Gross, David Politzer and Frank Wilczek. Thus at high energies the quarks and gluons behave almost as free particles, which leads to the approximate “scaling behaviour” of the cross-sections in deep-inelastic lepton–hadron scattering.

The logarithmic decrease of the coupling constant depends on the QCD energy-scale parameter, Λ, which is a free parameter and has to be measured in the experiments. The current experimental value is:

Experiments at SLAC, DESY, CERN’s Large Electron–Positron (LEP) collider and Fermilab’s Tevatron have measured the decrease of the QCD coupling constant (figure 2). With LEP, it was also possible to determine the QCD coupling constant at the mass of the Z boson rather precisely:
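The logarithmic decrease can be sketched numerically with the standard one-loop formula, α_s(Q) = 1/(b₀ ln(Q²/Λ²)) with b₀ = (33 − 2n_f)/12π. The values of Λ and the number of active flavours n_f below are illustrative assumptions, not fitted numbers:

```python
import math

def alpha_s(Q, Lambda_qcd=0.2, nf=5):
    """One-loop running QCD coupling; Q and Lambda_qcd in GeV.

    Illustrative only: a real determination uses higher orders
    and matching at the quark-mass flavour thresholds.
    """
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q**2 / Lambda_qcd**2))

# Asymptotic freedom: the coupling shrinks as the energy scale grows.
for Q in (2.0, 10.0, 91.2):
    print(f"alpha_s({Q:5.1f} GeV) = {alpha_s(Q):.3f}")
```

With these illustrative inputs the one-loop value at the Z mass comes out near 0.13, in the right neighbourhood of the measured coupling; precision determinations require higher-loop running.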

It is useful to consider the theory of QCD with just one heavy quark Q. The ground-state meson in this hypothetical case would be a quark–antiquark bound state. The effective potential between the quark and its antiquark at small distances would be a Coulomb potential proportional to 1/r, where r is the distance between the quark and the antiquark. However, at large distances the self-interaction of the gluons becomes important. The gluonic field lines at large distances do not spread out as in electrodynamics. Instead, they attract each other. Thus the quark and the antiquark are connected by a string of gluonic field lines (figure 3). The force between the quark and the antiquark is constant, i.e. it does not decrease as in electrodynamics. The quarks are confined. It is still an open question whether this applies also to the light quarks.
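This behaviour is commonly summarized by the phenomenological “funnel” (Cornell) potential: a Coulomb-like term at short distances plus a linearly rising confining term, with σ the string tension:

```latex
V(r) \;=\; -\frac{4}{3}\,\frac{\alpha_{s}}{r} \;+\; \sigma r
```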

In electron–positron annihilation, the virtual photon creates a quark and an antiquark, which move away from each other with high speed. Because of the confinement property, mesons – mostly pions – are created, moving roughly in the same direction. The quark and the antiquark “fragment” to produce two jets of particles. The sum of the energies and momenta of the particles in each jet should be equal to the energy of the original quark, which is equal to the energy of each colliding lepton. These quark jets were observed for the first time in 1978 at DESY (figure 4). They had already been predicted in 1975 by Feynman.

If a quark pair is produced in electron–positron annihilation, then QCD predicts that sometimes a high-energy gluon should be emitted from one of the quarks. The gluon would also fragment and produce a jet. So, sometimes three jets should be produced. Such events were observed at DESY in 1979 (figure 4).

The basic quanta of QCD are the quarks and the gluons. Two colour-octet gluons can form a colour singlet. Such a state would be a neutral gluonium meson. The ground state of the gluonium mesons has a mass of about 1.4 GeV. In QCD with only heavy quarks, this state would be stable but in the real world it would mix with neutral quark–antiquark mesons and would decay quickly into pions. Thus far, gluonium mesons have not been identified clearly in experiments.

The simplest colour-singlet hadrons in QCD are the baryons – consisting of three quarks – and the mesons, made of a quark and an antiquark. However, there are other ways to form a colour singlet. Two quarks can be in a colour antitriplet, and together with two antiquarks they can form a colour singlet. The result would be a meson consisting of two quarks and two antiquarks – a tetraquark. Likewise, three quarks can couple to a colour octet, as can a quark–antiquark pair; together they can form a colour-singlet baryon consisting of four quarks and an antiquark – a pentaquark. So far, tetraquark mesons and pentaquark baryons have not been clearly observed in experiments.

The three quark flavours were introduced to describe the symmetry given by the flavour group SU(3). However, we now know that in reality there are six quarks: the three light quarks u, d, s and the three heavy quarks c (charm), b (bottom) and t (top). These six quarks form three doublets of the electroweak symmetry group SU(2):

(u, d), (c, s), (t, b)

The masses of the quarks are arbitrary parameters in QCD, just as the lepton masses are in QED. Since the quarks do not exist as free particles, their masses cannot be measured directly. They can, however, be estimated using the observed hadron masses. In QCD they depend on the energy scale under consideration. Typical values of the quark masses at the energy of 2 GeV are:

The mass of the t quark is large, similar to the mass of a gold atom. Owing to this large mass, the t quark decays by the weak interaction with a lifetime that is less than the time needed to form a meson. Thus there are no hadrons containing a t quark.

The theory of QCD is the correct field theory of the strong interactions and of the nuclear forces. Both hadrons and atomic nuclei are bound states of quarks, antiquarks and gluons. It is remarkable that a simple gauge theory can describe the complicated phenomena of the strong interactions.

ESO and CERN: a tale of two organizations

On 5 October 1962, five nations signed the convention that founded the European Southern Observatory (ESO). Belgium, France, the Federal Republic of Germany, the Netherlands and Sweden were soon followed by Denmark. They were later joined by Switzerland, Italy, Portugal, the UK, Finland, Spain, the Czech Republic and, most recently, Austria in 2009. Brazil, whose membership is pending ratification, will be the 15th member state and the first from outside Europe. The organization’s main mission, laid down in the convention signed in 1962, is to provide state-of-the-art research facilities to astronomers and astrophysicists, allowing them to conduct front-line science in the best conditions. With headquarters in Garching near Munich, ESO operates three observing sites high in the Atacama Desert region of Chile, which are home to a world-leading collection of observing facilities.

ESO’s ruling body is its council, which delegates day-to-day responsibility to the executive under the director-general, while other governing bodies of ESO include the Finance Committee and the Committee of Council. If this sounds familiar, it is probably because the origins of ESO bear more than a passing resemblance to those of CERN. The founding of ESO has its roots in a statement signed on 26 January 1954 by leading astronomers from six countries – the five nations that would later sign the ESO convention, plus the UK (which was to go in a different direction and join ESO only in 2002). The statement pointed to the lack of coverage of the skies of the southern hemisphere – which include interesting regions such as the Magellanic Clouds – by powerful telescopes at that time. It went on to put the case that although no one country had sufficient resources for such a project, it could be possible through international collaboration. Finally, it recommended the establishment of a joint observatory in South Africa that would house a 3 m telescope and a 1.2 m Schmidt telescope with a wide field of view, which would be valuable for surveys. These instruments would complement the 5 m Hale Telescope and the 1.2 m Schmidt that had been observing the skies of the northern hemisphere from the Palomar Observatory in California since 1948.

The idea for a joint European effort had originated the previous spring, when the pioneering Dutch astronomer, Jan Oort, invited Walter Baade, a renowned German working at the Mt Wilson and Palomar Observatories, to stay at Leiden for a couple of months. Oort mobilized a group of leading European astronomers for a meeting with the influential visitor on 21 June 1953, where Baade proposed capitalizing on existing designs for a 3 m telescope being built for the Lick Observatory in California and for the Schmidt telescope at Palomar. Also present at the meeting was Jan Bannier, director of the Dutch national science foundation and president of the provisional CERN Council.

The ESO convention

In November 1954, Bannier and Gösta Funke, director of the Swedish National Research Council and a member of the newly established formal CERN Council, drew up the first draft of a convention for ESO, with key similarities to the CERN convention. ESO would have a council with two delegates (at least one an astronomer) from each member state; each country would have an equal vote; financial contributions would be in proportion to national income up to a fixed limit.

Further progress was slow because the project’s supporters grappled with financial and political difficulties in their countries. Important impetus came with Oort’s successful application in 1959 for a $1 million grant from the Ford Foundation in the US – a fifth of the estimated cost at the time – on condition that at least four of the five potential members sign the convention. It took another three years for further issues to be resolved and for the convention to be signed on 5 October 1962, in the Ministry of Foreign Affairs in Paris. Even then, it was only in early 1964 that real work could begin (and the grant from the Ford Foundation be released), when France became the fourth country to ratify the convention, after the Netherlands, Sweden and the German Federal Republic.

The original idea had been to locate the observatory in South Africa and over the period 1953–1963 searches for suitable places were followed by systematic tests at three sites in the Karoo region. However, in 1959 astronomers in the US began to explore the possibilities in the Chilean Andes, through the Association of Universities for Research in Astronomy (AURA). It soon became clear that the Andes might offer better climatic conditions than South Africa for astronomy and in November 1962 two members of ESO’s site-testing team went to Chile. Their findings indicated a general superiority, in particular longer spells of clear weather and smaller temperature differences during the night (owing, in fact, to the higher altitude).

So, in June 1963 Otto Heckmann, the embryonic organization’s provisional director-general, and others including Oort went to Chile to meet members of AURA and see the mountains chosen by the Americans. Although the ESO convention had still to be ratified by the requisite four countries, in November the ESO Committee opted unanimously for the Andes, a decision that the formal ESO Council approved at its first meeting in 1964. Later that year, ESO decided on a site that was independent of the Americans – a mountaintop at 2400 m that Heckmann proposed naming La Silla (the saddle).

ESO went on to develop La Silla, first installing a number of intermediate-size telescopes that had been foreseen in the convention, as well as some smaller national telescopes. The official inauguration, by the president of the Republic of Chile, Eduardo Frei Montalva, took place on 25 March 1969.

In the meantime, there was mounting concern about the slow progress on the larger telescopes described in the ESO convention and in March 1969 a working group was set up to advise the ESO Council on this and various administrative matters. In particular, it was to look into budget procedures and the project for the 3.6 m telescope. (The proposed size had grown after experience in the US had shown that the observer’s cage for a 3 m instrument raised problems for larger astronomers.) The working group was chaired by Funke and both he and Augustin Alline, the French government ESO Council delegate, were members of CERN Council. Their recommendations led to the introduction at ESO of the “Bannier process”, which had been established at CERN for budgetary matters; and at Alline’s suggestion, ESO also followed CERN’s example in setting up a Committee of Council, whose informal meetings of fewer people could iron out potential difficulties between meetings of Council.

It was at the meeting of CERN’s Committee of Council in November 1969 that CERN’s director-general, Bernard Gregory, reported on discussions with his counterpart at ESO about a possible collaboration between the two organizations – in essence, a rescue plan for the 3.6 m telescope. The project was similar in size and complexity to that of a large bubble chamber and there was also a strong feeling that particle physicists and astronomers could benefit from closer contact. The committee gave Gregory the go-ahead to report to the meeting of CERN Council in December, which in turn authorized him to continue the discussions with ESO. At the meeting, Bannier, who was then president of the ESO Council, pointed out that with its greater experience in building large-scale apparatus and in dealing with industry, CERN would bring valuable expertise to advance the 3.6 m project.

By June 1970, a draft co-operation agreement had been drawn up that foresaw the setting up of ESO’s Telescope Project Division at CERN. CERN would provide administrative, technical and professional services – the latter covering the project management as well as technical and scientific advice. This would be at no cost to CERN because all would be financed by ESO and no additional staff at CERN would be required. The June council meetings at ESO and CERN consented to collaboration between the two organizations and on 16 September the agreement was signed by Gregory and Adriaan Blaauw, ESO’s director-general. Within six months, the nucleus of the Telescope Project (TP) Division had formed at CERN. Led by ESO’s Svend Laustsen, it included his small technical group. The division then grew to comprise some 40 astronomers, engineers and technicians, all involved in the final design, construction and testing of the 3.6 m telescope, while benefiting from CERN’s experience in engineering and the administrative aspects of implementing a large project.

The members of TP interacted mainly with CERN’s Proton Synchrotron Department (particularly Wolfgang Richter and the department head, Kees Zilverschoon), the Technical Services and Building Division (Henri Laporte and E Leroy) and the Data Handling Division (Detmar Wiskott), while the placing of contracts involved working with the Finance Division. The first two years focused on completing the design of the telescope and the building to house it, with a first design report issued in February 1971. A year later, the group was awarding contracts related to the construction of the telescope, the building and a computer system, both to steer the telescope and for data-acquisition and some online analysis.

Further developments

November 1972 saw another development at CERN, with the inauguration of the ESO Sky Atlas Laboratory. To match the atlas of the northern sky made by the 1.2 m Schmidt telescope at Mt Palomar, ESO and the UK were pooling the resources of ESO’s 1 m Schmidt in Chile and the UK’s 1.2 m Schmidt in Australia. A copy of each of the glass plates recorded in Chile was sent to the lab at CERN for further copying onto film. After a first rapid survey, ESO’s Schmidt telescope went on to cover red wavelengths in detail, with the UK’s instrument covering blue. The Sky Atlas Lab was involved in producing 200 copies of the complete atlas, which in full totalled 200 m² of film. One highlight of this work was the discovery of a new comet on 5 November 1975, named after its discoverer, the lab’s head, Danish astronomer Richard West.

In April 1975, the 3.6 m telescope was ready for testing in Europe. One innovation concerned the use of a fully automated control system, which involved some 120 individual computer-controlled motors for steering. The 18 m tall structure was assembled in a hall with a specially constructed pit to accommodate it at the Société Creusot-Loire at St Chamond. There, a van from CERN packed with electronic control-circuitry tested out the control system, determining the optimum configuration for driving the telescope’s two orientation axes. With testing complete, the telescope was dismantled and packed up for its journey to Chile, where it would be fitted with its giant mirror. The mirror blank had been ordered from Corning in the US as early as 1965 but a number of problems meant that its final processing to achieve a surface accuracy of 0.06 μm was not completed by the Recherches et études d’optique et de sciences connexes (REOSC), near Paris, until early 1972.

A year after it arrived in Chile, the telescope finally saw its “first light” on the night of 7–8 November 1976. The links with CERN were not quite over, however. A smaller 1.4 m instrument – the Coudé Auxiliary telescope (CAT) – was later designed by the TP team at CERN. Manufactured mainly by industry, it was assembled in CERN in early 1979 before going to Chile, where it fed the 3.6 m Coudé Echelle Spectrometer through a light tunnel. Fully computer controlled, the CAT was used for many different astronomical observations, including measuring the ages of ancient stars. The 3.6 m itself has since gone on to be highly productive, most recently with the High Accuracy Radial velocity Planet Searcher (HARPS), the world’s foremost hunter of planets beyond the solar system.

Writing in ESO’s journal, The Messenger, in 1981, Charles Fehrenbach, the director of the Haute Provence Observatory, who was involved with ESO for many of the early years, stated: “There is no doubt in my mind that it was the installation in Geneva which saved our organization.” The strong links with CERN certainly helped to set ESO on its way and the older organization can now look on with pleasure at its younger sibling’s many achievements.

Particle and nuclear physics intersect in Florida

The Conferences on the Intersections of Particle and Nuclear Physics (CIPANP) form a triennial series that focuses on topics of interest to particle physicists, nuclear physicists, astrophysicists, cosmologists and accelerator physicists. Since the first conference took place in Steamboat Springs, Colorado, in 1984, the overlap in the interests of these areas has increased markedly. For example, the LHC is exploring both elementary-particle physics and heavy-ion physics, with the ALICE detector designed in particular for studies of lead-ion collisions. Explorations of the neutrino sector have attracted traditional nuclear physicists as well as particle physicists to measurements of solar neutrinos, reactor neutrinos, cosmic neutrinos, long-baseline neutrinos and neutrinoless double-beta decay. Facilities with rare-isotope beams have opened possibilities for innovative studies of questions in fundamental physics. The searches for physics beyond the Standard Model cover the whole range, from table-top experiments to those at the large collider facilities.

CIPANP 2012, the 11th conference in the series, took place at the Renaissance Vinoy Resort and Golf Club in St Petersburg, Florida, on 28 May – 3 June, the venue and dates being chosen according to well established CIPANP criteria. Plenary and parallel sessions were organized following the 14 topics selected for the conference: the high-energy frontier; the low-energy precision frontier; neutrino masses and neutrino mixing; electroweak tests of the Standard Model; the cosmic frontier; dark matter and dark energy; particle and nuclear astrophysics; heavy flavour and the CKM matrix; QCD, hadron spectroscopy and exotics; hadron physics and spin; nucleon structure; nuclear structure; quark matter and high-energy heavy-ion collisions; new facilities and their instrumentation. Each parallel topic was covered in, on average, five two-hour sessions organized by two convenors. There were 29 invited plenary talks and a concluding “vision statement”. This report covers some of the highlights from the many excellent presentations at the meeting.

The ATLAS and CMS collaborations reported on results from the first two years of operation of the LHC, giving tantalizing hints, at 2.5 σ and 2.8 σ respectively, of the much searched-for Higgs boson at a mass of about 125 GeV. Within the Standard Model, the Higgs-boson searches plus electroweak precision data give the combined hints for the Higgs from the LHC and the Tevatron a 3.4 σ significance. (These results have been superseded by those reported at CERN on 4 July.) The CDF collaboration presented a more precise value for the mass of the W boson, with an uncertainty of ±19 MeV, bringing the error on the world average down to ±16 MeV.

Elsewhere, the Alpha Magnetic Spectrometer experiment mounted on board the International Space Station may yield information on ultrarelativistic cosmic particles and their interactions. The MuLan and MuCap collaborations at PSI reported their final determinations of the Fermi constant and the nucleon’s weak induced pseudoscalar coupling-constant, respectively. Current and future heavy-flavour experiments will search for evidence of physics beyond the Standard Model and, if found, characterize its make-up. Understanding hadron properties from lattice QCD calculations is making considerable progress.

Sessions on neutrino physics at CIPANP 2012 addressed a variety of questions. What is the hierarchy of the neutrino masses? Are the neutrinos their own antiparticles? What is their mass scale? Are there more than three neutrino species? The talks also covered CP violation in the neutrino sector, related to the preponderance of matter over antimatter, and the limits on neutrinoless double-beta decay. The highlight in this area was the electron-antineutrino oscillation results from the Daya Bay, RENO and Double Chooz experiments, with Daya Bay measuring sin2(2θ13) = 0.092 ± 0.016, significantly different from zero.

In the sessions on electroweak tests, emphasis was placed on the two-boson corrections in parity-violating electron scattering, which are important for the Qweak experiment at Jefferson Laboratory and the Olympus experiment at DESY. Consensus is slowly emerging on the corrections that need to be applied in the determination of the Weinberg angle by the NuTeV experiment at Fermilab. The size of the proton remains an open question: the newer electron-scattering experiments agree with the earlier ones, but the discrepant atomic-spectroscopy result from PSI still stands.

This was the first time that CIPANP included prominently such topics as the cosmic frontier and the related fields of dark matter and dark energy. Cosmological observations indicate that only 4.5% of the mass/energy of the universe is baryonic matter, with the remaining 95.5% still unknown. Of the total, about 22% is dark matter, which interacts via gravity like ordinary matter. The evidence for this physics beyond the Standard Model is entirely based on cosmological observations, since the many laboratory experiments undertaken so far have not presented any compelling evidence. Searches for dark matter (as well as for neutrinoless double-beta decay) rely on the ultraquiet environment afforded by current and planned deep-underground laboratories, with the depth and volume of the detectors being the most important parameters.

The sensitivity of gravitational-wave detectors is steadily improving with the laser interferometer experiments, Advanced LIGO and Advanced VIRGO. It is possible that at the next Intersections Conference the first results from gravitational-wave astronomy may be presented.

With the baryon-to-photon ratio well determined by the Wilkinson Microwave Anisotropy Probe, standard Big Bang nucleosynthesis no longer has any free parameters. The theoretical predictions for the abundances of 2H, 4He and 7Li can be compared with the observed abundances, indicating an over-prediction of 7Li by a factor of four. Rare isotopes with unusual proton-to-neutron ratios are the stepping-stones to nuclear element synthesis and the generation of nuclear energy in stellar explosions. The ultimate configuration in this context is a neutron star wrapped with a layer of rare isotopes. It is the rare-isotope beam facilities that elucidate the intricacies of these processes.

The nucleon is as complex an object as can be imagined. After a fair measure of scrutiny, its electric and magnetic form factors are now well established. However, a plethora of functions is required to describe the quark and gluon distributions, especially once the longitudinal and transverse spins of the nucleon are included (the Boer–Mulders, Collins and Sivers functions). The overriding question remains: where is the spin of the nucleon hidden?

Understanding nuclear structure and nuclear reactions from first principles, with input from QCD and employing Hamiltonians constructed within chiral effective field theory, has come far. The nuclear interaction comprises two-nucleon, three-nucleon and even four-nucleon components. Questions remain, however, about incorporating relativistic effects.

The quest for super-heavy elements continues. With the recent acceptance of the evidence for elements with Z = 114 and 116, further investigations are focusing on the elements with Z = 118, 113 and 115. The formation of doubly magic nuclei with neutron number N = 184 and the (possibly) matching proton numbers of Z = 120 or 126 may not be too far off in the future.

The utilization of high-energy heavy-ion collisions has allowed the detailed study of the quark–gluon plasma in the laboratory under conditions like those that existed in the first instances of the universe. The recent studies performed at the LHC with the ALICE detector and at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven have enabled a mapping out of the phase diagram of nuclear matter.

For the future

Upgrades for the LHC and the LHCb experiment at CERN were presented at the conference, as well as for RHIC and the PHENIX experiment, and for the 12 GeV Continuous Electron Beam Accelerator Facility at Jefferson Laboratory. An illuminating talk discussed the science and prospects for an electron–ion collider, with proposals from Brookhaven (e-RHIC) and Jefferson Laboratory (EIC) – soon to be amalgamated to become a priority item as part of the US Nuclear Science Advisory Committee’s Long Range Plan for Nuclear Physics – and from CERN (LHeC). The status of the Facility for Antiproton and Ion Research at GSI with its all-encompassing PANDA detector was another topic presented. Also discussed were the planned Facility for Rare Isotope Beams at Michigan State University and TRIUMF’s rare-isotope beam programme with the Isotope Separator and Accelerator facility and the Advanced Rare Isotope Laboratory, as well as Project-X at Fermilab, which has an important future high-intensity frontier research programme.

Ernest J Moniz, from the Massachusetts Institute of Technology (MIT) and its Energy Initiative, gave the traditional public lecture, entitled “Energy and the Future (a Worldwide Perspective)”. The banquet speech was an exposé on the life and art of Salvador Dalí, given by Peter Tush of the Dalí Museum in St Petersburg. CIPANP 2012 ended with a vision statement presented by Richard G Milner, director of the Laboratory for Nuclear Science at MIT.

CIPANP 2012 was organized with the help of TRIUMF and Jefferson Laboratory.
