Baryon oscillation spectra for all

By professional astronomy standards, the 2.5 m telescope at Apache Point Observatory is quite small. More than 50 research telescopes are larger and many are located at much better sites. Apache Point Observatory is also a little too close to city lights – the atmospheric turbulence that limits the sharpness of focus is about twice as bad as at the best sites on Earth – and summer monsoons shut down the observatory for two months each year.

Yet, the Sloan Digital Sky Survey (SDSS), using this telescope, has produced the most highly cited data set in the history of astronomy (Trimble and Ceja 2008; Madrid and Macchetto 2009). Its success is rooted in the combination of high-quality, multipurpose data and open access for everyone: SDSS has obtained 5-filter images of about a quarter of the sky and spectra of 2.4 million objects, and has made them publicly available in yearly releases, even as the survey continues.

SDSS-III launched its ninth data release (DR9) on 31 July. This is the first release to include data from the upgraded spectrographs of the Baryon Oscillation Spectroscopic Survey (BOSS) – the largest of the four subsurveys of SDSS-III. By measuring more distant galaxies, these spectra probe a larger volume of the universe than all previous surveys combined.

BOSS has already published its flagship measurement of baryon acoustic oscillations (BAO) to constrain dark energy using these data (Anderson et al. 2012). BAO are the leftover imprint of primordial matter-density fluctuations that froze out as the universe expanded, leaving correlations in the distances between galaxies. The size scale of these correlations acts as a “standard ruler” to measure the expansion of the universe, complementing the “standard candles” of Type Ia supernovae that led to the discovery of the accelerating expansion of the universe.

Another major BOSS analysis using these data is still in progress. In principle, BAO can also be measured by using bright, distant quasars as backlights and measuring the “Lyman alpha forest” absorption in their spectra as intervening neutral hydrogen absorbs the quasars’ light. The wavelength of the absorption traces the redshift of the hydrogen and the amount of absorption traces its density. Thus, this also measures the structure of matter – including BAO – but at much greater distances than is possible with galaxies. BOSS has the first data set with enough quasars to make this measurement and the collaboration is nearing completion of the analysis. However, the final results are not yet published – and now that the data are public, anyone else can try it too.

Are there any surprises in the results? Not yet. BOSS has the most accurate BAO measurements yet, with distances measured to 1.7%, but the results are consistent with the “ΛCDM” cosmological standard model, which includes a dark-energy cosmological constant (Λ) and cold dark matter (CDM). But DR9 contains only about a third of the full BOSS survey and BOSS has already finished observations for data release 10 (DR10), due to be released in July 2013. DR10 will also include the first data from APOGEE, another SDSS-III subsurvey that probes the dynamical structure and chemical history of the Milky Way.

Illuminating extra dimensions with photons

Photons are a critical tool at the LHC, and the ATLAS detector has been carefully designed to measure photons precisely. In addition to playing a central role in the recent discovery of a new particle resembling the Higgs boson, final states with photons are used both to make sensitive tests of the Standard Model and to search for physics beyond it.

Recent results from the ATLAS experiment using the full 2011 data set are shining new light – in more than one sense – on theoretical models that propose the existence of extra dimensions. In these models, which were originally inspired by string theory, the extra dimensions are “compactified” – finite in extent, they are curled up on themselves and so small that they have not yet been observed. Such models could answer a major mystery in particle physics, namely the weakness of gravity as compared with the other forces. The basic idea is that gravity’s influence could be diluted by the presence of the extra dimensions. Different variants of these models exist, with corresponding differences in how they could be detected experimentally.

Events with two energetic photons provide a good place to search. In the Randall-Sundrum (RS) models of extra dimensions, a new heavy particle could decay to a pair of photons. A plot of the diphoton mass should then reveal a narrow peak above the smooth spectrum expected from Standard Model backgrounds. In Arkani-Hamed-Dimopoulos-Dvali (ADD) models, on the other hand, the influence of extra dimensions should lead to a broad excess of events with large diphoton masses.

The figure shows the diphoton mass spectrum measured by ATLAS. The Standard Model background expectation has been superimposed, as have contributions expected for examples of RS or ADD signals. The data agree well with the background expectation and provide stringent constraints on the extra-dimension models. For instance, the mass of the RS graviton must be larger than 1–2 TeV, depending on the strength of the graviton’s couplings to Standard Model particles.

ADD models can also be probed via the single-photon final state. The ATLAS collaboration has searched for single photons accompanied by a large apparent imbalance in the energy measured in the event, which would result from a particle escaping into the extra dimensions and taking its energy with it. The ATLAS analysis found a total number of such events in agreement with the expectation for the small Standard Model backgrounds. The final result, therefore, was used to establish new constraints on the fundamental scale parameter MD of the so-called ADD Large Extra Dimension (LED) model. The lower limits set on the scale, which improve on previous limits, lie in the range 1.74–1.87 TeV, depending upon the number of extra dimensions.

As expected, photons are proving to be an extremely useful probe for new physics at the LHC, providing important tests of many models. With the higher LHC energy in 2012 and the larger data set being accumulated, photon analyses will continue to provide an ever greater potential for discovery.

Can heavy-ion collisions cast light on strong CP?

The symmetries of parity (P) and its combination with charge conjugation (C) are known to be broken in the weak interaction. However, in the strong interaction the P and CP invariances are respected – although QCD provides no reason for their conservation. This is the “strong CP problem”, one of the remaining puzzles of the Standard Model.

The possibility of observing parity violation in the hot and dense hadronic matter formed in relativistic heavy-ion collisions has been discussed for many years. Various theoretical approaches suggest that in the vicinity of the deconfinement phase transition, the QCD vacuum could create domains – local in space and time – that could lead to CP-violating effects. These could manifest themselves via a separation of charge along the direction of the system’s angular momentum – or, equivalently, along the direction of the strong, approximately 10¹⁴ T, magnetic field that is created in non-central heavy-ion collisions and perpendicular to the reaction plane (i.e. the plane of symmetry of a collision, defined by the impact-parameter vector and the beam direction). This phenomenon is called the chiral magnetic effect (CME). Fluctuations in the sign of the topological charge of these domains cause the resulting charge separation to be zero when averaged over many events. This makes the observation of the CME possible only via P-even observables, expressed in terms of two- and multi-particle correlations.

The ALICE collaboration has studied the charge-dependent azimuthal particle correlations at mid-rapidity in lead–lead collisions at the centre-of-mass energy per nucleon pair, √sNN = 2.76 TeV. The analysis was performed over the entire event sample recorded with a minimum-bias trigger in 2010 (about 13 million events). A multi-particle correlator was used to probe the magnitude of the potential signal while at the same time suppressing any background correlations unrelated to the reaction plane. This correlator has the form 〈cos(φα + φβ – 2ΨRP)〉, where φ is the azimuthal angle of the particles and the subscript indicates the charge or the particle type. The orientation of the reaction plane angle is represented by ΨRP; it is not known experimentally but is instead estimated by constructing the event plane using azimuthal particle distributions.
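The behaviour of this correlator can be illustrated with a small Monte Carlo toy in Python – a sketch for intuition only, not the ALICE analysis: events are generated with elliptic flow, the event plane is estimated from the particles themselves, and cos(φα + φβ – 2Ψ) is averaged over pairs. With no charge separation built in, the correlator averages to approximately zero even though the events carry strong flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_event(n, v2, psi_rp):
    """Draw azimuthal angles with dn/dphi ∝ 1 + 2*v2*cos(2*(phi - psi_rp))
    by accept-reject sampling (elliptic flow only, no charge separation)."""
    phis = []
    while len(phis) < n:
        phi = rng.uniform(0.0, 2.0 * np.pi)
        if rng.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * np.cos(2.0 * (phi - psi_rp)):
            phis.append(phi)
    return np.array(phis)

def event_plane(phis):
    """Second-harmonic event-plane estimate built from the particles themselves."""
    return 0.5 * np.arctan2(np.sin(2.0 * phis).sum(), np.cos(2.0 * phis).sum())

def correlator(phi_a, phi_b, psi):
    """<cos(phi_alpha + phi_beta - 2*Psi)> over all pairs between two samples."""
    return np.cos(phi_a[:, None] + phi_b[None, :] - 2.0 * psi).mean()

vals = []
for _ in range(500):
    psi_rp = rng.uniform(0.0, np.pi)      # true reaction plane (unknown in data)
    phis = sample_event(60, v2=0.1, psi_rp=psi_rp)
    psi_ep = event_plane(phis)            # experimental proxy for psi_rp
    vals.append(correlator(phis[:30], phis[30:], psi_ep))

print(f"correlator = {np.mean(vals):+.4f}")  # consistent with zero: flow alone cancels
```

Adding a genuine charge-dependent sin(φ – ΨRP) component to the angles would drive the same-sign correlator away from zero, which is the qualitative signature sought in the data.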

The figure shows the correlator as a function of the collision centrality compared with model calculations, together with results from the Relativistic Heavy-Ion Collider (RHIC). The points from ALICE, shown as full and open red markers for pairs with the same and opposite charge, respectively, indicate a significant difference not only in the magnitude but also in the sign of the correlations for different charge combinations, which is consistent with the qualitative expectations for the CME. The effect becomes more pronounced moving from central to peripheral collisions, i.e. moving from left to right along the x-axis. The previous measurement of charge separation by the STAR collaboration at RHIC in gold–gold collisions at √sNN = 0.2 TeV, also shown in the figure (blue stars), is in both qualitative and quantitative agreement with the measurement at the LHC.

The thick solid line in the figure shows a prediction for the same-sign correlations caused by the CME at LHC energies, based on a model that makes certain assumptions about the duration and time-evolution of the magnetic field. This model underestimates the magnitude of the same-sign correlations observed at the LHC. However, parallel calculations – based on arguments about the initial time at which the magnetic field develops, together with the same value of the magnetic flux at both energies – suggest that the CME might have the same magnitude at the energies of both colliders. Conventional event-generators, such as HIJING, which do not include P-violating effects, show no significant difference between correlations of pairs with the same and opposite charge (green triangles); the two charge combinations were therefore averaged in the figure.

An alternative explanation to the CME was recently provided by a hydrodynamical calculation, which suggests that the correlator being studied may have a negative (i.e. out-of-plane), charge-independent, dipole-flow contribution originating from fluctuations in the initial conditions of a heavy-ion collision. This would shift the baseline and, coupled with the well known effect whereby local charge conservation in the medium induces strong azimuthal (i.e. elliptic) modulations, could potentially give a quantitative description of the centrality dependence observed by both ALICE and STAR. The results from ALICE for the charge-independent correlations are indicated by the blue band in the figure.

The measurements are supplemented by a differential analysis and will be extended with a study of higher harmonics, which will also investigate the correlations of identified particles. These studies are expected to shed light on one of the remaining fundamental questions of the Standard Model.

Searching for new physics in rare kaon-decays

The LHCb experiment was originally conceived to study particles containing the beauty-flavoured b quark. However, there are many other possibilities for interesting measurements that exploit the unique forward acceptance of the detector. For example, the physics programme has already been extended to include the study of particles containing charm quarks, as well as electroweak physics. Now, a new result from LHCb on a search for a rare kaon-decay has further increased the breadth of the experiment’s physics goals.

This search is for the decay K0S→μ⁺μ⁻, which is predicted to be greatly suppressed in the Standard Model. The branching ratio is expected to be 5 × 10⁻¹², while the current experimental upper limit (dating from 1973) is 3.2 × 10⁻⁷ at 90% confidence level (CL). Although the dimuon decay of the K0L has been observed, with a branching fraction of the order of 10⁻⁸, searches for the counterpart decay of the K0S meson are well motivated because the two decays can be mediated by independent mechanisms.

The analysis is based on the 1.0 fb⁻¹ of data collected by LHCb in 2011. To suppress the background most efficiently, it employs several techniques that were originally developed for the search for B0S → μ⁺μ⁻, for which LHCb has set the world’s best limit. The analysis also benefits from the knowledge of K0S production and reconstruction developed in several previous measurements (including LHCb’s first published paper, on the production of K0S mesons in 900 GeV proton–proton collisions).

To extract an upper limit on the branching fraction, the yield is normalized relative to that in the copious K0S→π⁺π⁻ decay mode. The 90% CL upper limit on the branching ratio B(K0S→μ⁺μ⁻) is determined to be less than 9 × 10⁻⁹, a factor of 30 improvement over the previous most restrictive limit. As the figure shows, no significant evidence of the decay is seen.
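The normalization logic can be sketched in a few lines of Python. The yields and efficiencies below are invented for illustration – they are not LHCb’s numbers – and only the K0S→π⁺π⁻ branching fraction (≈0.692, the PDG value) is real:

```python
# Illustrative only: the yields and efficiencies below are invented, not
# LHCb's numbers; B(K0S -> pi+ pi-) = 0.692 is the real (PDG) value.
BR_PIPI = 0.692

def br_mumu(n_mumu, n_pipi, eff_mumu, eff_pipi):
    """Normalize the mu+mu- yield to the copious pi+pi- mode so that the
    K0S production rate and shared systematics cancel in the ratio."""
    return (n_mumu / n_pipi) * (eff_pipi / eff_mumu) * BR_PIPI

# Hypothetical inputs: 4 candidates against 1e8 reconstructed pi+pi- decays,
# with a heavily prescaled pi+pi- selection (hence its low efficiency).
print(br_mumu(n_mumu=4, n_pipi=1.0e8, eff_mumu=0.5, eff_pipi=0.01))
```

The point of the ratio is that the K0S production cross-section and most reconstruction systematics divide out, leaving only the relative selection efficiencies to be understood.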

Although the new limit is still three orders of magnitude above the Standard Model prediction, it starts to approach the level where new physics effects might begin to appear. Moreover, the data collected by LHCb in 2012 already exceed the sample from 2011 and by the end of the year the total data set should have more than trebled. The collaboration is continuing to search for ways to broaden its physics reach further to make the best use of this unprecedented amount of data and to tune the trigger algorithms for future data-taking and for the LHCb upgrade.

The search for ‘big news’ continues

The big news this summer was on the new Higgs-like boson and how the hint of an excess in last year’s 7 TeV data from the LHC became an observation with this year’s 8 TeV data. Yet there were many other search results, first presented at the International Conference on High-Energy Physics (ICHEP) in Melbourne, which benefited greatly from the new higher-energy data. The searches for hypothetical heavy partners of the Standard Model W and Z bosons – the W’ and Z’ – were the CMS collaboration’s priorities for analysis with the 8 TeV data, both because the 7 TeV data included a hint of a high-mass excess and because the 8 TeV data provide a large boost in sensitivity at high mass. Searches for other heavy particles, such as the supersymmetric partners of the gluon and quarks (the gluino and squarks), were similarly priorities that benefited from the increased LHC energy.

Building on last year’s interesting results, the collaboration searched for narrow high-mass Z’ resonances decaying to pairs of electrons or muons in the 8 TeV data collected between April and June this year. At the same time, a search was conducted for a W’, which should decay to a neutrino and a single lepton (electron or muon). Because the Z’ and W’ can be massive, the searches require the identification of highly energetic leptons and a detailed understanding of their behaviour in the detector. The figure shows the spectra for the decay of the Z’ to electron pairs, for the 7 TeV and 8 TeV data combined. It illustrates the importance of understanding the high-mass region – just a few events appearing there may indicate a discovery.

The search for supersymmetric particles also relies on the production of a few events with massive particles, e.g. gluinos or squarks. These typically undergo cascading decays culminating in multi-jet final states with apparent momentum nonconservation in the detector, owing to the production of two neutral, weakly interacting particles at the end of the cascades that escape detection. (These particles would serve as excellent dark-matter candidates.) Decays involving multiple b quarks, photons or same-sign dileptons were all priority search modes with the 8 TeV data. Each benefited from last year’s methods to measure backgrounds from control samples in the data. They also benefited from the rarity of Standard Model processes with such high-mass and complex final states. One particularly interesting background that affects the same-sign dilepton search is the production of a W or Z boson in association with top quarks, which leads to spectacular final states. A first measurement of these processes – obtained with the 8 TeV data – was also presented at ICHEP.

These high-mass searches have found the data to be consistent with Standard Model processes and have significantly improved limits on the range of possible masses for these hypothetical particles. The W’ and Z’ searches set 95% CL limits at 2.85 TeV and 2.59 TeV, respectively, and the gluino/squark searches excluded their masses up to 1.0 TeV. These results correspond to large increases in sensitivity, thanks to the LHC’s energy increase and improved analysis of the new data. At CMS, the search for more “big news” continues.

The atomic nucleus: fissile liquid or molecule of life?

The atomic nucleus is generally described as a drop of quantum liquid. In particular, such liquid-like behaviour explains nuclear fission and applies especially to heavy nuclei such as uranium. The so-called liquid-drop mass formula is a typical textbook model in nuclear physics. On the other hand, light nuclei can behave like tiny molecules – or clusters – made up of neutrons and protons within the nucleus. This molecular aspect at the femtometre scale makes it possible to understand the stellar nucleosynthesis of 12C and consequently of heavier elements such as oxygen.

So far, both the “molecular nucleus” and the “liquid nucleus” views have co-existed. Now, a team from the Institut de Physique Nucléaire d’Orsay (Université Paris-Sud/CNRS) and the French Atomic Energy Commission (CEA), in collaboration with the University of Zagreb, has proposed a unified view of these two aspects. By using relativistic-energy density functionals, the researchers have demonstrated that, although a light nucleus can show molecule-like behaviour (tending towards the crystalline state), heavier nuclei take on a more liquid-like behaviour.

The team took inspiration from neutron stars – remnants of core-collapse supernovae that are composed mainly of neutrons with a few protons. Inside the crust of a neutron star, matter passes from being a nucleonic crystalline medium to becoming a nuclear-liquid medium. Thanks to this analogy, the team identified a mechanism of transition from the liquid to the crystalline state in the nucleus.

When the interactions between neutrons and protons – through the depth of the confining nuclear potential – are not strong enough to fix them within the nucleus, the latter is in a quantum-liquid-like state where protons and neutrons are delocalized. Conversely, in a crystalline state, neutrons and protons would be fixed at regular intervals within the nucleus. The nuclear molecule is interpreted as being an intermediate state between a quantum liquid and a crystal. In the long term, the aim is to attain a unified understanding of these various states of the nucleus.

3D cooling for uranium collisions at RHIC

In May this year, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) finished its first run with beams of uranium ions – the heaviest ions ever used in a collider. Heavy ions contain large numbers of protons and neutrons and, when colliding at high energies, they create quark–gluon plasma, the state of matter that probably existed at the dawn of the universe. Not only was this the first time that uranium ions have been used in a particle collider, it was also the first time that the complete bunched-beam stochastic cooling system was used at RHIC, allowing cooling in the longitudinal, vertical and horizontal planes in both of the collider’s interlaced magnet rings.

Uranium ions are now available at RHIC courtesy of the recently commissioned electron-beam ion source (EBIS). Physicists at the STAR and PHENIX experiments are particularly interested in uranium nuclei because of their prolate shape, more like a rugby ball than a sphere. Some of these nuclei will collide along their long axes, creating a quark–gluon plasma denser than the plasma discovered and now routinely created at RHIC in collisions of gold nuclei, which are more spherical. Some nuclei will collide with their long axes parallel, although perpendicular to their directions of motion. This arrangement creates a quark–gluon plasma with an oblong cross-section but without the strong magnetic field generated by grazing incidence collisions of spherical nuclei. Both of these possibilities make uranium–uranium collisions a new tool for studying quark–gluon plasma, adding to the toolbox that is currently available at both RHIC and the LHC.

A hadron-collider ‘first’

The amount of data delivered to the STAR and PHENIX experiments in the three-week exploratory run was increased five-fold by stochastic cooling, a feedback technique that shrinks the ion beams while they are colliding. This technique was developed at RHIC by a team that included Mike Blaskiewicz, Mike Brennan and Kevin Mernick (Blaskiewicz et al. 2010). The cooling is so strong that the beam size is reduced by half after an hour of storage time (figure 1) and the peak luminosity – or collision rate – rises to three times its initial value (figure 2). This has never been achieved in a hadron collider before. With a re-optimized lattice and stochastic cooling, no ions were lost by any mechanism other than through the uranium–uranium collisions themselves, which is also a first for a hadron collider.

In stochastic cooling, invented by Simon van der Meer and first demonstrated at CERN’s Intersecting Storage Rings in 1975, random fluctuations of particle distributions are detected and corrected for. The result is smaller and smaller distributions. The technique involves sending a signal from a pickup at one location to activate a kicker to correct the same bunch at a point further round the ring. While stochastic cooling was and is used in a number of low-energy storage rings, RHIC is the first collider with operational stochastic cooling. The procedure was first demonstrated in 2006 using a low-intensity proton bunch with 10⁹ particles. Operational longitudinal cooling of gold ions in one of RHIC’s two rings was demonstrated the following year. Since then, both the Blue ring (clockwise) and the Yellow ring (anticlockwise) have been fitted with horizontal, vertical and longitudinal cooling, with full 3D cooling now available.

From pickup to kicker

The detection of fluctuations of distributions with high numbers of particles requires bandwidths in the gigahertz range. At RHIC, the ion beams at storage energy are composed of bunches of 5 ns full-width, separated by 107 ns. Cooling times of about 1 hour are obtained with a system bandwidth of 3 GHz and optimal kicker voltages of typically 3 kV. To reduce the microwave power required, a set of kicker cavities with a bandwidth of only 10 MHz has been adopted to take advantage of the bunch spacing. Each kicker consists of 16 cavities. Therefore, with three cooling planes there are 96 cavities in all for the two rings. The systems in the two rings are quite similar, so the following describes only the set-up for the Blue ring.

The longitudinal pickup is located in the 2 o’clock straight section (figure 3). Before the pickup signal is transmitted, it is first put through a transversal filter that repeats the signal 16 times to stretch it, with output S1(t) = S0(t) + S0(t–τ) + … + S0(t–15τ) and τ = 5.000 ns. The effect of the filter, which is a key feature of the system at RHIC, is to maintain all of the information in the 5-ns-long bunch core while reducing the peak signal. This, in turn, lightens the load on the specially adapted commercial microwave link that is used to send the signal to the longitudinal kicker in the 4 o’clock straight section of the Blue ring. There, a one-turn filter is applied, where S2(t) = S1(t) – S1(t–Trev) with the revolution period Trev accurate to better than 1 ps. This filter ensures that the kick to the beam is proportional to the rate at which the beam signal is changing, similar to a viscous damping force being proportional to the velocity of a particle, not its position. The transversal filter causes the spectrum of the signal to have peaks of width 10 MHz separated by 200 MHz.
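The two filters are simple to sketch in discrete time. The Python toy below is illustrative only – the sample rate, bunch shape and amplitudes are invented – but it follows the definitions of S1 and S2 given above:

```python
import numpy as np

def transversal_filter(s0, delay, taps=16):
    """S1(t) = S0(t) + S0(t - tau) + ... + S0(t - 15*tau): repeat the bunch
    signal 16 times at a fixed tap spacing (given in samples)."""
    s1 = np.zeros(len(s0) + (taps - 1) * delay)
    for k in range(taps):
        s1[k * delay : k * delay + len(s0)] += s0
    return s1

def one_turn_filter(s1_now, s1_last_turn):
    """S2(t) = S1(t) - S1(t - Trev): the kick is proportional to the
    turn-by-turn change of the beam signal."""
    return s1_now - s1_last_turn

# Toy bunch sampled at 1 sample/ns; tau = 5 ns -> tap spacing of 5 samples.
bunch = np.array([0.0, 1.0, 2.0, 1.0, 0.0])  # invented 5-ns-wide bunch core
stretched = transversal_filter(bunch, delay=5)
print(len(stretched))                          # 80 samples: 5 ns stretched to 80 ns

# A beam that has not changed since the last turn gets no corrective kick:
print(one_turn_filter(stretched, stretched).max())  # 0.0
```

In the real system the stretched signal eases the load on the microwave link while preserving the bunch-core information, and the one-turn subtraction guarantees that a beam that does not change from turn to turn receives no kick.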

The 16 kicker cavities in the longitudinal system operate at frequencies of 6.0 GHz, 6.2 GHz, …, 9.0 GHz. To drive them, the pickup signal is split into 16 channels, corresponding to the individual cavities. Each channel goes through a band-pass filter with a width of 100 MHz centred at its cavity frequency so that a given cavity is driven by a sinusoidal signal whose phase and amplitude change from one bunch to the next. The individual signals are put through analogue linear modulators that adjust the phase and amplitude to obtain optimal cooling. The amplifiers are located in the tunnel close to the kickers and have a peak power of 40 W.

To set up the system, open-loop beam transfer-functions are measured at each cavity frequency. The phase and amplitude are optimized using the signal suppression observed in the pickup spectrum. Signal suppression occurs because the observed signal from the beam is the sum of the Schottky signal and the coherent beam response of the cooling system. When things are tuned correctly, the observed signal has 1/4 of the power of the signal without cooling. During operation the full aperture of the kicker is only 2 cm, so the cavities are open during injection and acceleration and close only after storage energy is reached.

The vertical and horizontal stochastic cooling systems employ fibre-optic links between the pickups and the kickers, with a net delay of about 2/3 of a turn. The use of fibres, with their reduced signal velocities, is possible because these transverse systems can tolerate the extra delay without compromising performance. Here the Blue cavities operate at frequencies of 4.7 GHz, 4.9 GHz, …, 7.7 GHz, the Yellow cavities at 4.8 GHz, 5.0 GHz, …, 7.8 GHz. The offset in frequency between the rings is needed to avoid ring-to-ring interference via microwaves propagating from one ring to the other through the common straight sections. The Blue low-level system employs the antisymmetric filter S1(t) = S0(t) – S0(t–τ) + S0(t–2τ) – … – S0(t–15τ) with τ = 5.000 ns to put the peaks of the signal spectrum at the cavity frequencies. Like their longitudinal counterparts, the transverse cavities are open during injection and acceleration and close once storage energy is reached.
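The effect of the alternating tap signs can be checked numerically from the filter’s frequency response, H(f) = Σk s(k)·exp(–2πi f k τ): with all-plus taps the comb peaks sit at multiples of 1/τ = 200 MHz, while alternating signs shift them by 100 MHz. An illustrative check in Python (not production code):

```python
import numpy as np

tau = 5.0e-9      # 5 ns tap spacing, so 1/tau = 200 MHz
n_taps = 16

def comb_gain(signs, f):
    """|H(f)| of a tapped-delay-line filter with the given tap signs:
    H(f) = sum_k signs[k] * exp(-2j*pi*f*k*tau)."""
    k = np.arange(n_taps)
    return abs(np.sum(signs * np.exp(-2j * np.pi * f * k * tau)))

plus = np.ones(n_taps)             # all-plus taps (longitudinal filter)
alt = (-1.0) ** np.arange(n_taps)  # alternating taps (Blue transverse filter)

print(comb_gain(plus, 200e6), comb_gain(alt, 200e6))  # ~16 and ~0: peaks at n x 200 MHz
print(comb_gain(plus, 100e6), comb_gain(alt, 100e6))  # ~0 and ~16: peaks shifted by 100 MHz
```

The 16 taps also set the roughly 10 MHz width of each comb tooth (about 200 MHz divided by the number of taps), matching the narrow-band kicker cavities.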

As the beam distribution evolves and components warm up, the optimal loop parameters change. The gain and phase of the system-transfer functions are therefore automatically optimized, approximately every 5 to 15 minutes. This is done one cavity at a time so that cooling is not compromised. The open-loop system-transfer function for each cavity is measured using a network analyser. The measured transfer function is compared with a stored reference function and the phase and amplitude of the low-level gain are adjusted to minimize the mean-square difference between the measured and stored transfer functions. The one-turn delay filters of the longitudinal systems are also corrected automatically by adjusting a piezoelectric delay module in the fibre-optic cable that supplies the delay.
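Minimizing the mean-square difference over a single complex (gain and phase) correction has a closed-form solution, g = Σ conj(Hmeas)·Href / Σ |Hmeas|². A minimal Python sketch, assuming the two transfer functions are available as complex arrays (the drifted measurement here is synthetic):

```python
import numpy as np

def optimal_complex_gain(h_meas, h_ref):
    """Complex gain g minimizing sum |g*h_meas - h_ref|^2; its modulus and
    argument are the amplitude and phase corrections to apply."""
    return np.vdot(h_meas, h_ref) / np.vdot(h_meas, h_meas)

# Toy check: a transfer function that has drifted by a known gain and phase.
rng = np.random.default_rng(0)
h_ref = rng.normal(size=64) + 1j * rng.normal(size=64)
drift = 0.8 * np.exp(1j * 0.3)   # invented drift: correcting gain should equal this
h_meas = h_ref / drift           # what the network analyser would see

g = optimal_complex_gain(h_meas, h_ref)
print(abs(g), np.angle(g))       # recovers the 0.8 gain and 0.3 rad phase drift
```

In this noise-free toy the correction is recovered exactly; with measurement noise the same formula gives the least-squares estimate of the gain and phase adjustment.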

The stochastic cooling system has significantly improved the integrated luminosity. During 2011, vertical and longitudinal cooling were used in both rings with gold ions, while horizontal cooling was achieved using betatron coupling. With all of the other parameters held constant, the cooling system doubled the integrated luminosity per store. After the installation of horizontal cooling systems, RHIC ran with uranium–uranium collisions in 2012. Figure 2 shows collision rates in the STAR and PHENIX detectors. The cooling reduced the beam size to such an extent that the collision rates were increased by almost a factor of 3 and, when compared with no cooling, the integrated luminosity was increased by a factor of 5.

The history of QCD

About 60 years ago, many new particles were discovered, in particular the four Δ resonances, the six hyperons and the four K mesons. The Δ resonances, with a mass of about 1230 MeV, were observed in pion–nucleon collisions at what was then the Radiation Laboratory in Berkeley. The hyperons and K mesons were discovered in cosmic-ray experiments.

Murray Gell-Mann and Yuval Ne’eman succeeded in describing the new particles in a symmetry scheme based on the group SU(3), the group of unitary 3 × 3 matrices with determinant 1 (Gell-Mann 1962, Ne’eman 1961). SU(3)-symmetry is an extension of isospin symmetry, which was introduced in 1932 by Werner Heisenberg and is described by the group SU(2).

The observed hadrons are members of specific representations of SU(3). The baryons are octets and decuplets, the mesons are octets and singlets. The baryon octet contains the two nucleons, the three Σ hyperons, the Λ hyperon and the two Ξ hyperons (see figure 1). The members of the meson octet are the three pions, the η meson, the two K mesons and the two K̄ mesons.

In 1961, nine baryon resonances were known, including the four Δ resonances. These resonances could not be members of an octet. Gell-Mann and Ne’eman suggested that they should be described by an SU(3)-decuplet but one particle was missing. They predicted that this particle, the Ω, should soon be discovered with a mass of around 1680 MeV. It was observed in 1964 at the Brookhaven National Laboratory by Nicholas Samios and his group. Thus the baryon resonances were members of an SU(3) decuplet.

It was not clear at the time why the members of the simplest SU(3) representation, the triplet representation, were not observed in experiments. These particles would have non-integral electric charges: 2/3 or –1/3.

The quark model

In 1964, Gell-Mann and, independently, George Zweig – Feynman’s PhD student, who was working at CERN – proposed that the baryons and mesons are bound states of the hypothetical triplet particles (Gell-Mann 1964, Zweig 1964). Gell-Mann called the triplet particles “quarks”, using a word that had been introduced by James Joyce in his novel Finnegans Wake.

Since the quarks form an SU(3) triplet, there must be three quarks: a u quark (charge 2/3), a d quark (charge –1/3) and an s quark (charge –1/3). The proton is a bound state of two u quarks and one d quark (uud). Inside the neutron are two d quarks and one u quark (ddu). The Λ hyperon has the internal structure uds. The three Σ hyperons contain one s quark and two u or two d quarks (uus or dds). The Ξ hyperons are the bound states uss and dss. The Ω is a bound state of three s quarks: sss. The eight mesons are bound states of a quark and an antiquark.

In the quark model, the breaking of the SU(3)-symmetry can be arranged by the mass term for the quarks. The mass of the strange quark is larger than the masses of the two non-strange quarks. This explains the mass differences inside the baryon octet, the baryon decuplet and the meson octet.

Introducing colour

In the summer of 1970, I spent some time at the Aspen Center of Physics, where I met Gell-Mann and we started working together. In the autumn we studied the results from SLAC on the deep-inelastic scattering of electrons off protons and nuclei. The cross-sections depend on the mass of the virtual photon and the energy transfer. However, the experiments at SLAC found that the cross-sections at large energies depend only on the ratio of the photon mass and the energy transfer – they showed a scaling behaviour, which had been predicted by James Bjorken.


In the SLAC experiments, the nucleon matrix-element of the commutator of two electromagnetic currents is measured at nearly light-like distances. Gell-Mann and I assumed that this commutator can be abstracted from the free-quark model and we formulated the light-cone algebra of the currents (Fritzsch and Gell-Mann 1971). Using this algebra, we could understand the scaling behaviour. We obtained the same results as Richard Feynman in his parton model, if the partons are identified with the quarks. It later turned out that the results of the light-cone current algebra are nearly correct in the theory of QCD, owing to the asymptotic freedom of the theory.

The Ω is a bound state of three strange quarks. Since this is the ground state, the space wave-function should be symmetrical. The three quark spins are aligned to give the spin 3/2 of the Ω. Thus the wave function of the Ω does not change if two quarks are interchanged. However, the wave function must be antisymmetric according to the Pauli principle. This was a great problem for the quark model.

In 1964, Oscar Greenberg discussed the possibility that the quarks do not obey the Pauli statistics but rather a “parastatistics of rank three”. In this case, there is no problem with the Pauli statistics but it was unclear whether parastatistics makes any sense in a field theory of the quarks.

Two years later, Moo-Young Han and Yoichiro Nambu considered nine quarks instead of three. The electric charges of these quarks were integral. In this model there were three u quarks: two of them had electric charge of 1, while the third one had charge 0 – so on average the charge was 2/3. The symmetry group was SU(3) × SU(3), which was assumed to be strongly broken. The associated gauge bosons would be massive and would have integral electric charges.

In 1971, Gell-Mann and I found a different solution of the statistics problem (Fritzsch and Gell-Mann 1971). We considered nine quarks, as Han and Nambu had done, but we assumed that the three quarks of the same type had a new conserved quantum number, which we called “colour”. The colour symmetry SU(3) was an exact symmetry. The wave functions of the hadrons were assumed to be singlets of the colour group. The baryon wave-functions are antisymmetric in the colour indices, denoted by red (r), green (g) and blue (b):

$$\left|B\right\rangle = \frac{1}{\sqrt{6}}\,\varepsilon_{abc}\left|q^{a}q^{b}q^{c}\right\rangle,\qquad a,b,c \in \{r,g,b\}$$

Thus the wave function of a baryon changes sign if two quarks are exchanged, as required by the Pauli principle. Likewise, the wave functions of the mesons are colour singlets:

$$\left|M\right\rangle = \frac{1}{\sqrt{3}}\left(\bar q_{r}q_{r} + \bar q_{g}q_{g} + \bar q_{b}q_{b}\right)$$

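The antisymmetry of the baryon colour wave-function can be checked numerically. In this sketch the colour indices r, g, b are numbered 0, 1, 2, and the coefficient of each basis state is the Levi-Civita symbol (up to the overall normalization):

```python
from itertools import permutations

def eps(a, b, c):
    """Levi-Civita symbol for indices 0, 1, 2 (standing for r, g, b)."""
    if (a, b, c) in {(0, 1, 2), (1, 2, 0), (2, 0, 1)}:
        return 1
    if (a, b, c) in {(0, 2, 1), (2, 1, 0), (1, 0, 2)}:
        return -1
    return 0

# Exchanging any two quarks flips the sign of every term in the wave function.
for a, b, c in permutations(range(3)):
    assert eps(b, a, c) == -eps(a, b, c)   # exchange quarks 1 and 2
    assert eps(a, c, b) == -eps(a, b, c)   # exchange quarks 2 and 3
    assert eps(c, b, a) == -eps(a, b, c)   # exchange quarks 1 and 3
print("baryon colour wave-function is totally antisymmetric")
```
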
The cross-section for electron–positron annihilation into hadrons at high energies depends on the squares of the electric charges of the quarks and on the number of colours. For three colours this leads to:

$$R = \frac{\sigma(e^{+}e^{-}\to \mathrm{hadrons})}{\sigma(e^{+}e^{-}\to \mu^{+}\mu^{-})} = 3\left[\left(\tfrac{2}{3}\right)^{2} + \left(\tfrac{1}{3}\right)^{2} + \left(\tfrac{1}{3}\right)^{2}\right] = 2$$

Without colours this ratio would be 2/3. The experimental data, however, were in agreement with a ratio of 2.
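
The counting behind this ratio can be reproduced in a few lines (u, d, s quark charges only, i.e. below the charm threshold):

```python
from fractions import Fraction

# Squared charges of the quarks accessible at these energies: u, d, s.
charges = [Fraction(2, 3), Fraction(-1, 3), Fraction(-1, 3)]

R_without_colour = sum(q * q for q in charges)   # 4/9 + 1/9 + 1/9 = 2/3
R_with_colour = 3 * R_without_colour             # one term per colour

print(R_without_colour, R_with_colour)           # 2/3 2
```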

In 1971–1972, Gell-Mann and I worked at CERN. Together with William Bardeen we investigated the electromagnetic decay of the neutral pion into two photons. It was known that the quark-model prediction for the decay rate is about a factor of nine below the measured value – another problem for the quark model.

The decay amplitude is given by a triangle diagram, in which a quark–antiquark pair is created virtually and subsequently annihilates into two photons. We found that after the introduction of colour, the decay amplitude increases by a factor three – each colour contributes to the amplitude with the same strength. For three colours, the result agrees with the experimental value.


In the spring of 1972, we started to interpret the colour group as a gauge group. The resulting gauge theory is similar to quantum electrodynamics (QED). The interaction of the quarks is generated by an octet of massless colour gauge bosons, which we called gluons (Fritzsch and Gell-Mann 1972). We later introduced the name “quantum chromodynamics”, or QCD. We published details of this theory one year later together with Heinrich Leutwyler (Fritzsch et al. 1973).

In QCD, the gluons interact not only with the quarks but also with themselves. This direct gluon–gluon interaction is important – it leads to the reduction of the coupling constant with increasing energy, i.e. the theory is asymptotically free, as discovered in 1972 by Gerard ’t Hooft (unpublished) and in 1973 by David Gross, David Politzer and Frank Wilczek. Thus at high energies the quarks and gluons behave almost as free particles, which leads to the approximate “scaling behaviour” of the cross-sections in deep-inelastic lepton–hadron scattering.

The logarithmic decrease of the coupling constant depends on the QCD energy-scale parameter, Λ, which is a free parameter and has to be measured in the experiments. The current experimental value is:

$$\Lambda \approx 213\ \mathrm{MeV}$$

Experiments at SLAC, DESY, CERN’s Large Electron–Positron (LEP) collider and Fermilab’s Tevatron have measured the decrease of the QCD coupling-constant (figure 2). With LEP, it was also possible to determine the QCD coupling-constant at the mass of the Z boson rather precisely:

$$\alpha_{s}(M_{Z}) = 0.1184 \pm 0.0007$$

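The logarithmic running can be illustrated with the one-loop formula αs(Q) = 1/(b0 ln(Q²/Λ²)), b0 = (33 − 2nf)/12π. In this sketch the value of Λ and the fixed nf = 5 are illustrative round numbers chosen to land near the measured αs(MZ), not fitted results:

```python
import math

def alpha_s(Q, Lambda=0.09, n_f=5):
    """One-loop QCD running coupling at scale Q (GeV); illustrative parameters."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return 1.0 / (b0 * math.log(Q**2 / Lambda**2))

# The coupling shrinks logarithmically as the energy scale grows
# (asymptotic freedom):
for Q in (5.0, 91.2, 1000.0):
    print(f"alpha_s({Q:6.1f} GeV) = {alpha_s(Q):.3f}")
```
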
It is useful to consider the theory of QCD with just one heavy quark Q. The ground-state meson in this hypothetical case would be a quark–antiquark bound state. The effective potential between the quark and its antiquark at small distances would be a Coulomb potential proportional to 1/r, where r is the distance between the quark and the antiquark. However, at large distances the self-interaction of the gluons becomes important. The gluonic field lines at large distances do not spread out as in electrodynamics. Instead, they attract each other. Thus the quark and the antiquark are connected by a string of gluonic field lines (figure 3). The force between the quark and the antiquark is constant, i.e. it does not decrease as in electrodynamics. The quarks are confined. It is still an open question whether this applies also to the light quarks.
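
This picture is often summarized by a Cornell-type potential: Coulomb-like at short distance plus a linearly rising confining term. A sketch with illustrative round numbers for the coupling and the string tension σ (natural units, r in GeV⁻¹):

```python
def static_potential(r, alpha_s=0.3, sigma=0.18):
    """V(r) = -(4/3) * alpha_s / r + sigma * r  (GeV, with r in GeV^-1)."""
    return -(4.0 / 3.0) * alpha_s / r + sigma * r

# Coulomb-dominated at short distance, string-dominated at large distance,
# where the force -dV/dr tends to the constant string tension sigma:
for r in (0.2, 1.0, 5.0, 20.0):
    print(f"V({r:5.1f}) = {static_potential(r):+8.3f} GeV")
```

The energy stored in the string grows without bound as the pair is separated, which is the confinement property described above.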


In electron–positron annihilation, the virtual photon creates a quark and an antiquark, which move away from each other with high speed. Because of the confinement property, mesons – mostly pions – are created, moving roughly in the same direction. The quark and the antiquark “fragment” to produce two jets of particles. The sum of the energies and momenta of the particles in each jet should be equal to the energy of the original quark, which is equal to the energy of each colliding lepton. These quark jets were observed for the first time in 1978 at DESY (figure 4). They had already been predicted in 1975 by Feynman.


If a quark pair is produced in electron–positron annihilation, then QCD predicts that sometimes a high-energy gluon should be emitted from one of the quarks. The gluon would also fragment and produce a jet. So, sometimes three jets should be produced. Such events were observed at DESY in 1979 (figure 5).


The basic quanta of QCD are the quarks and the gluons. Two colour-octet gluons can form a colour singlet. Such a state would be a neutral gluonium meson. The ground state of the gluonium mesons has a mass of about 1.4 GeV. In QCD with only heavy quarks, this state would be stable but in the real world it would mix with neutral quark–antiquark mesons and would decay quickly into pions. Thus far, gluonium mesons have not been identified clearly in experiments.

The simplest colour-singlet hadrons in QCD are the baryons – consisting of three quarks – and the mesons, made of a quark and an antiquark. However, there are other ways to form a colour singlet. Two quarks can be in an antitriplet, so together with two antiquarks they can form a colour singlet. The result would be a meson consisting of two quarks and two antiquarks – a tetraquark. Likewise, three quarks can form a colour octet, as can a quark–antiquark pair; combined, they can give a colour-singlet hadron consisting of four quarks and one antiquark – a pentaquark. So far, tetraquark mesons and pentaquark baryons have not been clearly observed in experiments.

The three quark flavours were introduced to describe the symmetry given by the flavour group SU(3). However, we now know that in reality there are six quarks: the three light quarks u, d, s and the three heavy quarks c (charm), b (bottom) and t (top). These six quarks form three doublets of the electroweak symmetry group SU(2):

$$\begin{pmatrix}u\\ d\end{pmatrix},\qquad \begin{pmatrix}c\\ s\end{pmatrix},\qquad \begin{pmatrix}t\\ b\end{pmatrix}$$

The masses of the quarks are arbitrary parameters in QCD, just as the lepton masses are in QED. Since the quarks do not exist as free particles, their masses cannot be measured directly. They can, however, be estimated using the observed hadron masses. In QCD they depend on the energy scale under consideration. Typical values of the quark masses at the energy of 2 GeV are:

$$m_{u} \approx 2\ \mathrm{MeV},\quad m_{d} \approx 5\ \mathrm{MeV},\quad m_{s} \approx 95\ \mathrm{MeV},\quad m_{c} \approx 1.3\ \mathrm{GeV},\quad m_{b} \approx 4.2\ \mathrm{GeV},\quad m_{t} \approx 173\ \mathrm{GeV}$$

The mass of the t quark is large, similar to the mass of a gold atom. Owing to this large mass, the t quark decays by the weak interaction with a lifetime that is less than the time needed to form a meson. Thus there are no hadrons containing a t quark.

The theory of QCD is the correct field theory of the strong interactions and of the nuclear forces. Both hadrons and atomic nuclei are bound states of quarks, antiquarks and gluons. It is remarkable that a simple gauge theory can describe the complicated phenomena of the strong interactions.

Particle and nuclear physics intersect in Florida


The Conferences on the Intersections of Particle and Nuclear Physics (CIPANP) form a triennial series that focuses on topics of interest to particle physicists, nuclear physicists, astrophysicists, cosmologists and accelerator physicists. Since the first conference took place in Steamboat Springs, Colorado, in 1984, the overlap in the interests of these areas has increased markedly. For example, the LHC is exploring both elementary-particle physics and heavy-ion physics, with the ALICE detector designed in particular for studies of lead-ion collisions. Explorations of the neutrino sector have attracted traditional nuclear physicists as well as particle physicists to measurements of solar neutrinos, reactor neutrinos, cosmic neutrinos, long-baseline neutrinos and neutrinoless double-beta decay. Facilities with rare-isotope beams have opened possibilities for innovative studies of questions in fundamental physics. The searches for physics beyond the Standard Model cover the whole range, from table-top experiments to those at the large collider facilities.

CIPANP 2012, the 11th conference in the series, took place at the Renaissance Vinoy Resort and Golf Club in St Petersburg, Florida, on 28 May – 3 June, the venue and dates being chosen according to well established CIPANP criteria. Plenary and parallel sessions were organized around the 14 topics selected for the conference: the high-energy frontier; the low-energy precision frontier; neutrino masses and neutrino mixing; electroweak tests of the Standard Model; the cosmic frontier; dark matter and dark energy; particle and nuclear astrophysics; heavy flavour and the CKM matrix; QCD, hadron spectroscopy and exotics; hadron physics and spin; nucleon structure; nuclear structure; quark matter and high-energy heavy-ion collisions; and new facilities and their instrumentation. Each topic was covered in five two-hour parallel sessions on average, under two convenors. There were 29 invited plenary talks and a concluding “vision statement”. This report covers some of the highlights from the many excellent presentations at the meeting.

The ATLAS and CMS collaborations reported on results of the first two years of operation of the LHC, giving tantalizing hints at 2.5 σ and 2.8 σ, respectively, at a mass of about 125 GeV for the much searched-for Higgs boson. Within the Standard Model, the Higgs-boson searches plus electroweak precision data give the combined hints for the Higgs from the LHC and the Tevatron a 3.4 σ significance. (These results have been superseded by those reported at CERN on 4 July.) The CDF collaboration presented a more precise value for the mass of the W boson with an uncertainty of ± 19 MeV, giving the world average an error of ± 16 MeV.

Elsewhere, the Alpha Magnetic Spectrometer experiment mounted on board the International Space Station may yield information on ultrarelativistic cosmic particles and their interactions. The MuLan and MuCap collaborations at PSI reported their final determinations of the Fermi constant and the nucleon’s weak induced pseudoscalar coupling-constant, respectively. Current and future heavy-flavour experiments will search for evidence of physics beyond the Standard Model and, if found, characterize its make-up. Understanding hadron properties from lattice QCD calculations is making considerable progress.

Sessions on neutrino physics at CIPANP 2012 addressed a variety of questions. What is the hierarchy of the neutrino masses? Are the neutrinos their own antiparticles? What is their mass scale? Are there more than three neutrino species? The talks also covered CP violation in the neutrino sector related to the preponderance of matter over antimatter and the limits on neutrinoless double-beta decay. The highlight in this area was the electron-antineutrino oscillation results from the Daya Bay, RENO and Double Chooz experiments, with Daya Bay measuring sin²(2θ13) = 0.092 ± 0.016, significantly different from zero.

In the sessions on electroweak tests, emphasis was placed on the two-boson corrections in parity-violating electron scattering, which are important for the Qweak experiment at Jefferson Laboratory and the Olympus experiment at DESY. Consensus is slowly emerging on the corrections that need to be applied in the determination of the Weinberg angle by the NuTeV experiment at Fermilab. The size of the proton remains an open question: newer electron-scattering experiments agree with the earlier ones, but the discrepant atomic-spectroscopy result from muonic hydrogen at PSI still stands.


This was the first time that CIPANP included prominently such topics as the cosmic frontier and the related fields of dark matter and dark energy. Cosmological observations indicate that only 4.5% of the mass/energy of the universe is baryonic matter, with the remaining 95.5% still unknown. About 22% of the total is dark matter, which interacts via gravity like ordinary matter; the rest is dark energy. The evidence for this physics beyond the Standard Model is entirely based on cosmological observations, since the many laboratory experiments undertaken so far have not presented any compelling evidence. Searches for dark matter (as well as for neutrinoless double-beta decay) rely on the ultraquiet environment afforded by current and planned deep-underground laboratories, with the depth and volume of the detectors being the most important parameters.

The sensitivity of gravitational-wave detectors is steadily improving with the laser interferometer experiments, Advanced LIGO and Advanced VIRGO. It is possible that at the next Intersections Conference the first results from gravitational-wave astronomy may be presented.

With the baryon-to-photon ratio well determined by the Wilkinson Microwave Anisotropy Probe, standard Big Bang nucleosynthesis no longer has any free parameters. The theoretical predictions for the abundances of 2H, 4He and 7Li can be compared with the observed abundances, indicating an over-prediction of 7Li by a factor of four. Rare isotopes with unusual proton-to-neutron ratios are the stepping-stones to nuclear element synthesis and the generation of nuclear energy in stellar explosions. The ultimate configuration in this context is a neutron star wrapped with a layer of rare isotopes. It is the rare-isotope beam facilities that elucidate the intricacies of these processes.


The nucleon is as complex an object as can be imagined. After a fair measure of scrutiny, its electric and magnetic form factors are now well established. There is, however, a plethora of functions required to describe the quark and gluon distributions, especially if the longitudinal and transverse spins of the nucleon are included (the Boer–Mulders, Collins and Sivers functions). The overriding question is: where is the spin of the nucleon hidden?

Understanding nuclear structure and nuclear reactions from first principles, with input from QCD and employing Hamiltonians constructed within chiral effective field theory, has come far. The nuclear interaction comprises two-nucleon, three-nucleon and even four-nucleon components. Questions remain, however, about incorporating relativistic effects.

The quest for super-heavy elements continues. With the recent acceptance of the evidence for elements with Z = 114 and 116, further investigations are focusing on the elements with Z = 118, 113 and 115. The formation of doubly magic nuclei with neutron number N = 184 and the (possibly) matching proton numbers of Z = 120 or 126 may not be too far off in the future.

The utilization of high-energy heavy-ion collisions has allowed the detailed study of the quark–gluon plasma in the laboratory, under conditions like those that existed in the first instants of the universe. The recent studies performed at the LHC with the ALICE detector and at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven have enabled a mapping out of the phase diagram of nuclear matter.

For the future

Upgrades for the LHC and the LHCb experiment at CERN were presented at the conference, as well as for RHIC and the PHENIX experiment, and for the 12 GeV Continuous Electron Beam Accelerator Facility at Jefferson Laboratory. An illuminating talk discussed the science and prospects for an electron–ion collider, with proposals from Brookhaven (e-RHIC) and Jefferson Laboratory (EIC) – soon to be amalgamated to become a priority item as part of the US Nuclear Science Advisory Committee’s Long Range Plan for Nuclear Physics – and from CERN (LHeC). The status of the Facility for Antiproton and Ion Research at GSI with its all-encompassing PANDA detector was another topic presented. Also discussed were the planned Facility for Rare Isotope Beams at Michigan State University and TRIUMF’s rare-isotope beam programme with the Isotope Separator and Accelerator facility and the Advanced Rare Isotope Laboratory, as well as Project-X at Fermilab, which has an important future high-intensity frontier research programme.

Ernest J Moniz, of the Massachusetts Institute of Technology (MIT) and director of the MIT Energy Initiative, gave the traditional public lecture, entitled “Energy and the Future (a Worldwide Perspective)”. The banquet speech was an exposé on the life and art of Salvador Dalí, given by Peter Tush of the Dalí Museum in St Petersburg. CIPANP 2012 ended with a vision statement presented by Richard G Milner, director of the Laboratory for Nuclear Science at MIT.

CIPANP 2012 was organized with the help of TRIUMF and Jefferson Laboratory.

Boosting sensitivity to new physics

The poster for BOOST2012

The LHC is a tremendously powerful tool built to explore physics at the tera-electron-volt scale. This year it is being operated with a centre-of-mass energy of 8 TeV, which is a little over half of the full design energy. This beats by a factor of four the previous world record held by Fermilab’s Tevatron, which shut down a year ago. For the study of ultrahigh-energy collisions, a second figure of merit of the machine – luminosity – is also of crucial importance. Here, the LHC is living up to its promise. In the first eight months of proton–proton operations in 2012, the ATLAS and CMS experiments have registered close to 10 fb⁻¹ at 8 TeV, a data set similar to that collected during the entire 10-year Run II at the Tevatron.

In this uncharted territory at the energy frontier, known particles behave in unfamiliar ways. For the first time, the heaviest known particles – the W and Z bosons, the top quark and the recently discovered new boson – do not seem quite so heavy. Their rest masses (of the order 100 GeV) are small compared with the energy unleashed in the most energetic collisions, which can be up to several tera-electron-volts. Therefore, every so often these massive particles are produced with an enormous surplus of kinetic energy such that they fly through the laboratory at enormous speed.

A serious challenge

The velocity of these massive particles has implications for the way that they are observed in experiments. For particles produced with a large boost, the decay products (leptons or jets of hadrons) are emitted at small angles to the original direction of their parent. The full energy of the massive particle is deposited in a tiny area of the detector. Reconstructing and selecting these highly collimated topologies represents a serious challenge. For a sufficiently large boost, the two jets of particles that appear in hadronic two-body decays (W, Z, H → qq) cannot be resolved by standard reconstruction algorithms.

An approach pioneered by Michael Seymour, now at Manchester University, provides an interesting alternative by simply turning the problem round (Seymour 1994). Instead of trying to resolve two jets and adding up their momenta to reconstruct the parent particle, the technique is to reconstruct a single jet that contains the full energy of the decay. The fat jet containing the decay of a boosted object must then be distinguished from ordinary jets that are produced by the million at the LHC. This is achieved through an analysis of the jet’s internal structure. This alternative appears to be the most promising approach whenever the energy of the massive particle exceeds its rest mass by a factor of three or more. The boosted regime thus starts at an energy a little over 200 GeV for a W boson and at roughly 500 GeV for a top quark.
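
A rough rule of thumb behind these numbers: the opening angle between the two decay products of a particle of mass m and transverse momentum pT scales as ΔR ≈ 2m/pT, to be compared with a typical jet radius of R ≈ 0.4–0.5. A sketch (masses in GeV; the jet radius is an assumed, representative value):

```python
# Rule of thumb: decay products of a particle of mass m produced with
# transverse momentum pT are separated by roughly Delta-R ~ 2*m/pT.
def delta_r(mass, pt):
    return 2.0 * mass / pt

JET_RADIUS = 0.4  # representative radius used by standard jet reconstruction

for name, mass in (("W boson", 80.4), ("top quark", 173.0)):
    for pt in (200.0, 500.0, 1000.0):
        dr = delta_r(mass, pt)
        tag = "merged into one fat jet" if dr < JET_RADIUS else "resolvable"
        print(f"{name:9s} pT = {pt:6.0f} GeV: dR ~ {dr:.2f} ({tag})")
```

The crossover where the decay products merge into a single jet reproduces the onset of the boosted regime described above.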

Dawn of the boosted era

The LHC is the first machine where boosted objects are a crucial part of the physics programme. A more quantitative grasp of exactly how the LHC crosses the threshold of the boosted regime is obtained by comparing the expectation in the Standard Model for the production rate of top quarks – the heaviest known particle – at past, present and future colliders.

Since the discovery of the top quark in 1995, the Tevatron has produced tens of thousands of these particles. A large majority of these were produced approximately at rest; only two dozen or so top quark pairs had a mass exceeding 1 TeV. By contrast, the LHC is a real top factory. In 2012 alone, it has already produced more than 20 times as many top-quark pairs as the Tevatron had in its lifetime. At the LHC, most top-quark pairs are still produced close to threshold but production in the boosted regime increases by several orders of magnitude. Several tens of thousands of top-quark pairs will be produced this year with mtt > 1 TeV.

Expected number of events

Impressive as these numbers may be, these first years mark just the start of a long programme. After a shutdown in 2013–2014, the LHC should emerge in its full glory, with protons colliding close to the design energy of 14 TeV and the experiments collecting tens of inverse femtobarns of data each year. Boosted top quarks will be produced by the millions in the next phase of the LHC and a sizeable sample of top quarks with tera-electron-volt energies is expected.

Over the past few years, much work has been done to address the experimental challenges inherent in the new approach for boosted objects. Using the substructure of jets requires a precise understanding of how they are formed. Sophisticated new algorithms to identify boosted objects – W-taggers, top-taggers, Higgs-taggers – have been put forward and developed further by the LHC experiments.

The potential of these new methods to improve the sensitivity of LHC analyses has been estimated by using Monte Carlo simulations. One obvious area where tools tailored to boosted topologies might make a difference is in searches for signals of physics beyond the Standard Model in the most energetic collisions. Several such cases have been studied in detail. A significant pay-off in physics return is expected in resonance searches in the tt mass spectrum and studies of diboson production at high energy. Boosted techniques may also be applied to the high-energy tails of continuum production in the Standard Model. In what has become a seminal paper, the seemingly hopeless Higgs search in the WH, H → bb channel was resurrected by requiring that the Higgs boson is produced with moderate boost (Butterworth et al. 2008).

BOOST2012

By bringing together key theorists and experimentalists every year, a series of workshops known as BOOST offers a forum for discussion of the progress in this fast-moving field. The first of these, at SLAC (2009) and in Oxford (2010), focused on Monte Carlo studies that laid the foundations for what was to come. At Princeton in 2011, the first measurements of jet substructure on LHC data were shown, as well as candidate events for the world’s first boosted top quarks. The display of one of these was chosen as the logo for the latest workshop, BOOST2012, organized by the Instituto de Física Corpuscular (IFIC) in Valencia. Held near the Mediterranean in late July – soon after the historic announcement at CERN of the discovery of a Higgs-like boson – this latest workshop held the promise of becoming the “hottest” BOOST event so far. The 80 or so participants definitely did not let the organizers down.


A lively debate arose in the session centred on attempts to predict the invariant mass of energetic jets, comparing them with the more sophisticated measurements that have become available this year. Experimentalists and theorists joined efforts to develop new techniques to deal with the impact of the 30 overlapping collisions that occur every time that the LHC bunches cross. The recent discovery at CERN fuelled the discussion on the potential of these techniques to help isolate a Higgs signal in the crucial bb decay channel. However, perhaps the most exciting results were presented in the session on applications of these new ideas to searches for new physics with top quarks at the LHC.

Speakers from the ATLAS and CMS collaborations reviewed their experiments’ searches for top-quark pair production through processes not present in the Standard Model. Some of these use the classical scheme to reconstruct top quarks, where the hadronic top-quark decay (t → Wb → b qq) is reconstructed by looking for three jets and then combining their four-vectors. Other searches adopt the “boosted” approach and reconstruct highly boosted top quarks as a single jet. While all searches have yielded negative results – reconstructed tt mass spectra following the Standard Model template to a frustrating precision – an evaluation of their relative sensitivity yields an encouraging conclusion. In both experiments, searches employing novel techniques specifically designed for boosted top-quark decay topologies are found to be considerably more sensitive than their classical counterparts in the high-mass region. This was expected from Monte Carlo studies, but these analyses show that the systematic uncertainties in the description of jet substructure, as well as the impact of pile-up on the experiments’ performance, are under control. In that sense, seeing these excellent results so early in the LHC programme constitutes a real proof of principle for this new approach.

The LHC produces – for the first time in the laboratory – large numbers of highly boosted heavy Standard Model particles. Results presented at BOOST2012 show that the development of new tools is on track to extract the maximum knowledge from the most energetic collisions. After careful commissioning and with conservative estimates of the uncertainties that affect this new approach, the first analyses employing boosted techniques to search for tt resonances clearly outperform their classical counterparts. These results are a milestone for the people in the field. The boosted paradigm is clearly ready to take on a major role in the LHC physics programme.

• The author would like to thank Gavin Salam for his useful comments in the preparation of this document
