The high-energy frontier (archive)

The principal goal of the experimental programme at the LHC is to make the first direct exploration of a completely new region of energies and distances, to the tera-electron-volt scale and beyond. The main objectives include the search for the Higgs boson and whatever new physics may accompany it, such as supersymmetry or extra dimensions, and also – perhaps above all – to find something that the theorists have not predicted.

The Standard Model of particles and forces summarizes our present knowledge of particle physics. It extends and generalizes the quantum theory of electromagnetism to include the weak nuclear forces responsible for radioactivity in a single unified framework; it also provides an equally successful analogous theory of the strong nuclear forces.

The conceptual basis for the Standard Model was confirmed by the discovery at CERN of the predicted weak neutral-current form of radioactivity and, subsequently, of the quantum particles responsible for the weak and strong forces, at CERN and DESY respectively. Detailed calculations of the properties of these particles, confirmed in particular by experiments at the LEP collider, have since enabled us to establish the complete structure of the Standard Model; data taken at LEP agreed with the calculations at the per mille level.

These successes raise deeper problems, however. The Standard Model does not explain the origin of mass, nor why some particles are very heavy while others have no mass at all; it does not explain why there are so many different types of matter particles in the universe; and it does not offer a unified description of all the fundamental forces. Indeed, the deepest problem in fundamental physics may be how to extend the successes of quantum physics to the force of gravity. It is the search for solutions to these problems that defines the current objectives of particle physics – and the programme for the LHC.

Higgs, hierarchy and extra dimensions

Understanding the origin of mass will unlock some of the basic mysteries of the universe: the mass of the electron determines the sizes of atoms, while radioactivity is weak because the W boson weighs as much as a medium-sized nucleus. Within the Standard Model the key to mass lies with an essential ingredient that has not yet been observed, the Higgs boson; without it the calculations would yield incomprehensible infinite results. The agreement of the data with the calculations implies not only that the Higgs boson (or something equivalent) must exist, but also suggests that its mass should be well within the reach of the LHC.

Experiments at LEP at one time found a hint of the Higgs boson, but the searches ultimately proved inconclusive and told us only that it must weigh at least 114 GeV. At the LHC, the ATLAS and CMS experiments will be looking for the Higgs boson in several ways. The particle is predicted to be unstable, decaying for example to photons, bottom quarks, tau leptons, W or Z bosons (figure 1). It may well be necessary to combine several different decay modes to uncover a convincing signal, but the LHC experiments should be able to find the Higgs boson even if it weighs as much as 1 TeV.

While resolving the Higgs question will set the seal on the Standard Model, there are plenty of reasons to expect other, related new physics, within reach of experiments at the LHC. In particular, the elementary Higgs boson of the Standard Model seems unlikely to exist in isolation. Specifically, difficulties arise in calculating quantum corrections to the mass of the Higgs boson. Not only are these corrections infinite in the Standard Model, but, if the usual procedure is adopted of controlling them by cutting the theory off at some high energy or short distance, the net result depends on the square of the cut-off scale. This implies that, if the Standard Model is embedded in some more complete theory that kicks in at high energy, the mass of the Higgs boson would be very sensitive to the details of this high-energy theory. This would make it difficult to understand why the Higgs boson has a (relatively) low mass and, by extension, why the scale of the weak interactions is so much smaller than that of grand unification, say, or quantum gravity.

This is known as the “hierarchy problem”. One could try to resolve it simply by postulating that the underlying parameters of the theory are tuned very finely, so that the net value of the Higgs boson mass after adding in the quantum corrections is small, owing to some suitable cancellation. However, it would be more satisfactory either to abolish the extreme sensitivity to the quantum corrections, or to cancel them in some systematic manner.

One way to achieve this would be if the Higgs boson is composite and so has a finite size, which would cut the quantum corrections off at a relatively low energy scale. In this case, the LHC might uncover a cornucopia of other new composite particles with masses around this cut-off scale, near 1 TeV.

The alternative, more elegant, and in my opinion more plausible, solution is to cancel the quantum corrections systematically, which is where supersymmetry could come in. Supersymmetry would pair up fermions, such as the quarks and leptons, with bosons, such as the photon, gluon, W and Z, or even the Higgs boson itself. In a supersymmetric theory, the quantum corrections due to the pairs of virtual fermions and bosons cancel each other systematically, and a low-mass Higgs boson no longer appears unnatural. Indeed, supersymmetry predicts a mass for the Higgs boson probably below 130 GeV, in line with the global fit to precision electroweak data.

The fermions and bosons of the Standard Model, however, do not pair up with each other in a neat supersymmetric manner. The theory, therefore, requires that a supersymmetric partner, or sparticle, as yet unseen, accompanies each of the Standard Model particles. Thus, this scenario predicts a “scornucopia” of new particles that should weigh less than about 1 TeV and could be produced by the LHC (figure 3).

Another attraction of supersymmetry is that it facilitates the unification of the fundamental forces. Extrapolating the strengths of the strong, weak and electromagnetic interactions measured at low energies does not give a common value at any energy, in the absence of supersymmetry. However, there would be a common value, at an energy around 10¹⁶ GeV, in the presence of supersymmetry. Moreover, supersymmetry provides a natural candidate, in the form of the lightest supersymmetric particle (LSP), for the cold dark matter required by astrophysicists and cosmologists to explain the amount of matter in the universe and the formation of structures within it, such as galaxies. In this case, the LSP should have neither strong nor electromagnetic interactions, since otherwise it would bind to conventional matter and be detectable. Data from LEP and direct searches have already excluded sneutrinos as LSPs. Nowadays, the “scandidates” most considered are the lightest neutralino and (to a lesser extent) the gravitino.
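
The unification claim can be illustrated with a back-of-the-envelope one-loop running of the inverse couplings, α_i⁻¹(Q) = α_i⁻¹(M_Z) − (b_i/2π) ln(Q/M_Z). The M_Z-scale values and MSSM beta coefficients used below are textbook approximations, so treat this as a sketch, not a fit:

```python
import math

# One-loop running of the inverse gauge couplings, sketched to illustrate
# supersymmetric unification. The MZ-scale inputs and MSSM beta
# coefficients are approximate textbook values, not a precision fit.

MZ = 91.19                          # GeV
ALPHA_INV_MZ = (59.0, 29.6, 8.5)    # GUT-normalised U(1), SU(2), SU(3)
B_MSSM = (33 / 5, 1.0, -3.0)        # one-loop MSSM beta coefficients

def alpha_inv(i, Q):
    """Inverse coupling of group i at scale Q (GeV), one-loop MSSM running."""
    return ALPHA_INV_MZ[i] - B_MSSM[i] / (2 * math.pi) * math.log(Q / MZ)

# Scale at which alpha_1 and alpha_2 meet:
t = 2 * math.pi * (ALPHA_INV_MZ[0] - ALPHA_INV_MZ[1]) / (B_MSSM[0] - B_MSSM[1])
Q_unif = MZ * math.exp(t)
print(f"Q_unif ~ {Q_unif:.2e} GeV")             # around 2e16 GeV
print(f"alpha_3^-1 there: {alpha_inv(2, Q_unif):.2f}")  # close to alpha_1^-1 = alpha_2^-1
```

Running the same exercise with the Standard Model coefficients instead gives three crossings at three different scales, which is the "no common value" statement in the text.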

Assuming that the LSP is the lightest neutralino, the parameter space of the constrained minimal supersymmetric extension of the Standard Model (CMSSM) is restricted by the need to avoid the stau being the LSP, by the measurements of b → sγ decay that agree with the Standard Model, by the range of cold dark-matter density allowed by astrophysical observations, and by the measurement of the anomalous magnetic moment of the muon (gμ–2). These requirements are consistent with relatively large masses for the lightest and next-to-lightest visible supersymmetric particles, as figure 4 indicates. The figure also shows that the LHC can detect most of the models that provide cosmological dark matter (though this is not guaranteed), whereas the astrophysical dark matter itself may be detectable directly for only a smaller fraction of models.

Within the overall range allowed by the experimental constraints, are there any hints at what the supersymmetric mass scale might be? The high-precision measurements of mW tend to favour a relatively small mass scale for sparticles. On the other hand, the rate for b → sγ shows no evidence for light sparticles, and the experimental upper limit on Bs → μ⁺μ⁻ begins to exclude very small masses. The strongest indication for new low-energy physics, for which supersymmetry is just one possibility, is offered by gμ–2. Putting this together with the other precision observables gives a preference for light sparticles.

Other proposals for additional new physics postulate the existence of new dimensions of space, which might also help to deal with the hierarchy problem. Clearly, space is three-dimensional on the distance scales that we know so far, but the suggestion is that there might be additional dimensions curled up so small as to be invisible. This idea, which dates back to the work of Theodor Kaluza and Oskar Klein in the 1920s, has gained currency in recent years with the realization that string theory predicts the existence of extra dimensions and that some of these might be large enough to have consequences observable at the LHC. One possibility that has emerged is that gravity might become strong when these extra dimensions appear, possibly at energies close to 1 TeV. In this case, some variants of string theory predict that microscopic black holes might be produced in the LHC collisions. These would decay rapidly via Hawking radiation, but measurements of this radiation would offer a unique window onto the mysteries of quantum gravity.

If the extra dimensions are curled up on a sufficiently large scale, ATLAS and CMS might be able to see Kaluza–Klein excitations of Standard Model particles, or even the graviton. Indeed, the spectroscopy of some extra-dimensional theories might be as rich as that of supersymmetry while, in some theories, the lightest Kaluza–Klein particle might be stable, rather like the LSP in supersymmetric models.

Back to the beginning

By colliding particles at very high energies we can recreate the conditions that existed a fraction of a second after the Big Bang, which allows us to probe the origins of matter. Experiments at LEP revealed that there are just three “families” of elementary particles: one that makes up normal stable matter, and two heavier unstable families that were revealed in cosmic rays and accelerator experiments. The Standard Model does not explain why there are three and only three families, but it may be that their existence in the early universe was necessary for matter to emerge from the Big Bang, with little or no antimatter.

Andrei Sakharov was the first to point out that particle physics could explain the origin of matter in the universe. One key ingredient is that matter and antimatter have slightly different properties, as discovered in the decays of K and B mesons, which contain strange and bottom quarks, members of the heavier families. These differences are manifest in the phenomenon of CP violation. Present data are in good agreement with the amount of CP violation allowed by the Standard Model, but this would be insufficient to generate the matter seen in the universe.

The Standard Model accounts for CP violation within the context of the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which links the interactions between quarks of different type (or flavour). Experiments at the B-factories at KEK and SLAC have established that the CKM mechanism is dominant, so the question is no longer whether this is “right”. The task is rather to look for additional sources of CP violation that must surely exist, to create the cosmological matter–antimatter asymmetry via baryogenesis in the early universe. If the LHC does observe any new physics, such as the Higgs boson and/or supersymmetry, it will become urgent to understand its flavour and CP properties.

The LHCb experiment will be dedicated to probing the differences between matter and antimatter, notably looking for discrepancies with the Standard Model. The experiment has unique capabilities for probing the decays of mesons containing both bottom and strange quarks. It will be able to measure subtle CP-violating effects in Bs decays, and will also improve measurements of all the angles of the unitarity triangle, which expresses the amount of CP violation in the Standard Model. The LHC will also provide high sensitivity to rare B decays, to which the ATLAS and CMS experiments will contribute, in particular, and which may open another window on CP violation beyond the CKM model.

In addition to the studies of proton–proton collisions, heavy-ion collisions at the LHC will provide a window onto the state of matter that would have existed in the early universe at times before quarks and gluons “condensed” into hadrons, and ultimately the protons and neutrons of the primordial elements. When heavy ions collide at high energies they form for an instant a “fireball” of hot, dense matter. Studies, in particular by the ALICE experiment, may resolve some of the puzzles posed by the data already obtained at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. These data indicate that there is very rapid thermalization in the collisions, after which a fluid with very low viscosity and large transport coefficients seems to be produced. One of the surprises is that the medium produced at RHIC seems to be strongly interacting. The final state exhibits jet quenching and the semblance of cones of energy deposition akin to Mach shock waves or Cherenkov radiation patterns, indicative of very fast particles moving through the medium faster than its speed of sound, or of light within it.

Experiments at the LHC will enter a new range of temperatures and pressures, thought to be far into the quark–gluon plasma regime, which should test the various ideas developed to explain results from RHIC. The experiments will probably not see a real phase transition between the hadronic and quark–gluon descriptions; it is more likely to be a cross-over that may not have a distinctive experimental signature at high energies. However, it may well be possible to see quark–gluon matter in its weakly interacting high temperature phase. The larger kinematic range should also enable ideas about jet quenching and radiation cones to be tested.

First expectations

The first step for the experimenters will be to understand the minimum-bias events and compare measurements of jets with the predictions of QCD. The next Standard Model processes to be measured and understood will be those producing the W- and Z-vector bosons, followed by top-quark physics. Each of these steps will allow the experimental teams to understand and calibrate their detectors, and only after these steps will the search for the Higgs boson start in earnest. The Higgs will not jump out in the same way as did the W and Z bosons, or even the top quark, and the search for it will demand an excellent understanding of the detectors. Around the time that Higgs searches get underway, the first searches for supersymmetry or other new physics beyond the Standard Model will also start.

In practice, the teams will look for generic signatures of new physics that could be due to several different scenarios. For example, missing-energy events could be due to supersymmetry, extra dimensions, black holes or the radiation of gravitons into extra dimensions. The challenge will then be to distinguish between the different scenarios. For example, in the case of distinguishing between supersymmetry and universal extra dimensions, the spectra of higher excitations would be different in the two scenarios, the different spins of particles in cascade decays would yield distinctive spin correlations, and the spectra and asymmetries of, for instance, dileptons, would be distinguishable.

What is the discovery potential of this initial period of LHC running? Figure 5a shows that a Standard Model Higgs boson could be discovered with 5 σ significance with 5 fb⁻¹ of integrated and well-understood luminosity, whereas 1 fb⁻¹ would already suffice to exclude a Standard Model Higgs boson at the 95% confidence level over a large range of possible masses. However, as mentioned above, this Higgs signal would receive contributions from many different decay signatures, so the search for the Higgs boson will require researchers to understand the detectors very well to find each of these signatures with good efficiency and low background. Therefore, announcement of the Higgs discovery may not come the day after the accelerator produces the required integrated luminosity!

Paradoxically, some new physics scenarios such as supersymmetry may be easier to spot, if their mass scale is not too high. For example, figure 5b shows that 0.1 fb⁻¹ of luminosity should be enough to detect the gluino at the 5 σ level if its mass is less than 1.2 TeV, and to exclude its existence below 1.5 TeV at the 95% confidence level. This amount of integrated luminosity could be gathered with an ideal month’s running at 1% of the design instantaneous luminosity.
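
As a cross-check of that last figure (my own arithmetic, not from the article): 1% of the LHC design luminosity of 10³⁴ cm⁻²s⁻¹, integrated over one calendar month, gives a few tenths of fb⁻¹ — the same order as the 0.1 fb⁻¹ quoted, the difference being machine availability:

```python
# Order-of-magnitude check: integrated luminosity from one month at 1% of
# the LHC design instantaneous luminosity. The design value is a known LHC
# parameter; the "ideal month" (100% uptime) is the simplifying assumption.

DESIGN_LUMI = 1e34               # cm^-2 s^-1, LHC design luminosity
FRACTION = 0.01                  # running at 1% of design
SECONDS_PER_MONTH = 30 * 24 * 3600
INV_FB_IN_INV_CM2 = 1e39         # 1 fb^-1 = 1e39 cm^-2 (since 1 fb = 1e-39 cm^2)

integrated = DESIGN_LUMI * FRACTION * SECONDS_PER_MONTH / INV_FB_IN_INV_CM2
print(f"{integrated:.2f} fb^-1")  # 0.26 fb^-1 for an ideal month
```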

We do not know which, if any, of the theories that I have mentioned nature has chosen, but one thing is sure: once the LHC starts delivering data, our hazy view of this new energy scale will begin to clear dramatically.

Based on the concluding talk at Physics at the LHC, Cracow, 3–8 July 2006 (http://arxiv.org/abs/hep-ph/0611237).

LHCb: A question of asymmetry

LHCb experiment

Unlike the general-purpose detectors, LHCb does not cover the full solid angle: the experiment extends along the forward direction with respect to the collision point. For 20 m, a series of detector planes collects information on the particles emerging from the collision point. This design is optimized for the study of B mesons, which, given their relatively small mass compared with the high energy of the LHC collisions, fly mostly in the forward direction.

B mesons have received increasing attention from theorists and experimentalists alike over recent years because their behaviour seems linked to various quantum phenomena that could shed light on new physics. “Today’s Standard Model of particle physics leaves many unanswered questions,” says Andrei Golutvin, spokesperson of the LHCb collaboration. He has recently taken over this role from Tatsuya Nakada who was the first spokesperson and a founder of the experiment. “A lot of physicists expect new physics to be just around the corner and already accessible at the LHC,” he continues. “General-purpose detectors like ATLAS and CMS will look for direct evidence of the existence of new particles. We have a different strategy. We focus on the study of B mesons, where some of their behaviour is very precisely predicted by the Standard Model. However small, a deviation from these predictions would indicate the existence of new phenomena.”

In recent years, two experiments at B-factories – BaBar at SLAC and Belle at KEK – have shown that the B particles are a key element in the process of understanding CP violation – the subtle asymmetry between matter and antimatter within the Standard Model. However, this does not seem to be enough to generate the absence of antimatter in the universe. “We will study with an unprecedented precision how CP violation takes place in the B-system,” explains Golutvin. “The yet undiscovered heavy particles could be a new source of CP violation that could affect the decays of B particles. The Bs mesons seem particularly interesting,” he continues. “Their loop-dominated decays are potentially very sensitive to new particles that could ‘enter’ in the loop virtually and cause observable effects. For example, if we find that the decay rate of the Bs to a particular final state, such as two muons, is higher than predicted by the Standard Model, it could be an indication of a contribution coming from Higgs bosons or supersymmetric particles.”

The LHC, with its high luminosity and high energy, will provide the LHCb collaboration with a particularly rich harvest of beauty particles, hundreds of times more than those made available by other accelerators to previous experiments. “Both BaBar and Belle, as well as CDF and D0 at the Tevatron proton–antiproton collider, made big contributions to flavour physics, the physics of processes that involve the transformation of quark flavours,” says Golutvin. “Now we know that the indirect contribution of new physics in CP violation is not big, certainly below the 10% level for most of the decay modes. Thanks to the LHC’s performance, LHCb will be able to study very rare events and point to possible new avenues for physics.”

In its 15-year history, the LHCb detector underwent one major layout modification. The modification – known as the “LHCb light” option – reduced the amount of material in the layers that the particles cross, thus reducing the background produced by the interaction of primary particles with the material of the detector. “We work out the momentum of charged particles by measuring the bending angle after the dipole magnets. The original idea was to have additional detectors to follow the trajectory of particles inside the magnet, which means of course a more complicated detector,” Golutvin explains. “After an idea by Nakada and with the help of computer simulations, we understood that we could have very robust pattern recognition even without all those chambers.” The result was that about six years ago the LHCb collaboration decided to simplify the detector a little by having no chambers in the magnet. “This minimizes the amount of material along the trajectories of particles and also simplifies the operation of the detector,” says Golutvin. “Besides that, there were a few other minor changes. For example, we decided also to use a beryllium beam pipe, again to minimize the background.”

During normal running of the LHC, one of the most beautiful and delicate subdetectors of LHCb, the VErtex LOcator (VELO), sits only 5 mm away from the beam. Its mission is to identify the vertices where the B mesons are produced and where they decay. Given the number of particles that will be produced close to the beam direction, the VELO will receive a great deal of radiation in a short time. “The current VELO will have to be replaced after 3 to 4 years of nominal operation,” confirms Golutvin. “The work on the replacement VELO modules started in July this year and should be completed by April 2010. As for the rest of the detector, it is designed to withstand the radiation during the initial physics programme.”

LHCb is designed to run at a luminosity of a few times 10³² cm⁻²s⁻¹, much smaller than the nominal LHC luminosity of 10³⁴ cm⁻²s⁻¹. This will be achieved by focusing the beams less at the LHCb collision point. The collaboration is considering the possibility of a major upgrade to work at an order of magnitude higher luminosity, after the initial physics programme is completed in about five to six years from now.
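
"Focusing the beams less" means enlarging the β* optics function at the interaction point. A minimal sketch using the standard round-beam luminosity formula L = f_rev·n_b·N²/(4π·ε·β*) shows why that works; the parameter values below are illustrative assumptions, not LHCb machine settings:

```python
import math

# Round-Gaussian-beam luminosity: L = f_rev * n_b * N^2 / (4*pi*eps*beta*).
# A larger beta* (weaker focusing) means a larger beam spot and hence a
# lower luminosity. All numbers are illustrative, not LHC operating values.

def luminosity(f_rev, n_bunches, protons_per_bunch, emittance_cm, beta_star_cm):
    """Instantaneous luminosity in cm^-2 s^-1 for round Gaussian beams."""
    spot_area = 4 * math.pi * emittance_cm * beta_star_cm   # ~ beam overlap area
    return f_rev * n_bunches * protons_per_bunch**2 / spot_area

base      = luminosity(11245, 2808, 1.15e11, 5e-8, 55)    # tight focus
defocused = luminosity(11245, 2808, 1.15e11, 5e-8, 550)   # beta* enlarged tenfold

print(round(defocused / base, 3))  # 0.1 — ten times beta*, one tenth the luminosity
```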

As with the other experiments at the LHC, the LHCb collaboration will use the first run to understand and calibrate the various parts of the detector. After that, it will start physics analysis at the same time as ATLAS and CMS. So just what does the collaboration expect? “As expressed by many people, the following three possible situations would be very exciting for particle physics,” says Golutvin. “The first one is that ATLAS and CMS see some new physics and we don’t. This will be very exciting for them and maybe not so much for us. Still, the physics community will have to explain why the new physics does not seem to affect the quantum loops, in order to understand the exact nature of the new physics. Then there is the second option: ATLAS and CMS don’t see new physics while we see a clear deviation from the Standard Model. This might happen if the new particles are very heavy. We would see their virtual effects, but they could not be directly produced at LHC energies in the other experiments. Of course, the best case is if all the experiments see new physics effects and a coherent scenario can be built for this new physics.”

Nature alone knows which of these scenarios will eventually occur, but it could be that new physics might emerge quickly in LHCb, so Golutvin and the LHCb collaboration remain very optimistic.

LHC milestones (archive)

What next after LEP?

Work for the LEP electron–positron collider continues to drive ahead; however, LEP is far from being the last word in CERN’s long-term plans. A clue was already in the LEP Design Study: “…by the adoption of a beam height of only 80 cm, there is enough room left (in the tunnel) for the installation of a second machine at a later stage…”

A workshop, organized by ECFA and CERN in March 1984, examined the feasibility of a hadron collider in the LEP tunnel (Lausanne LHC workshop). There the idea emerged for a ring of superconducting magnets, installed above the LEP ring, to collide protons together (or protons with antiprotons) at as high an energy as possible. Since this meeting, considerably more work has been done to firm up ideas.

Using 10 Tesla dipole bending magnets, collision energies of 17 TeV (8500 GeV per beam) could be achieved with a respectable collision rate (luminosity 10³³ cm⁻² s⁻¹). A ‘two-in-one’ aperture solution for the superconducting magnets is recommended for economy and compactness.
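
The quoted beam energy follows from the standard magnetic-rigidity relation p [GeV/c] ≈ 0.3 · B [T] · ρ [m]. A quick sketch, assuming an effective bending radius of about 2.8 km in the roughly 27 km LEP tunnel (the dipole filling factor is my assumption here):

```python
# Magnetic-rigidity estimate: p [GeV/c] ~ 0.3 * B [T] * rho [m].
# B is the dipole field quoted in the text; the effective bending radius
# is an assumed value (~2.8 km after allowing for non-dipole sections
# of the ~27 km tunnel).

B_FIELD = 10.0        # tesla
BEND_RADIUS = 2.8e3   # metres, assumed effective bending radius

beam_energy_gev = 0.3 * B_FIELD * BEND_RADIUS
print(beam_energy_gev)  # 8400.0 — close to the 8500 GeV per beam quoted
```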

It is the relative ease of colliding proton beams (as compared with the difficulties of first making and then handling antiprotons) that promises high collision rates and makes the proton–proton option the preferred solution. Despite the need to provide a large number of bunches (a figure of 3564 has been quoted), the two proton rings in the LEP tunnel could be filled using CERN’s existing 450 GeV SPS machine and its proton supply in only a few minutes. Of course, new injection lines would have to be built.

• July/August 1986 pp5–4 (abridged).

 

Elsewhere

In Europe the news of the initial approval for the US Superconducting Supercollider was received enthusiastically as it showed that the future of high-energy physics is regarded as being of paramount importance at the highest levels. While the US plans gather momentum, the possibility of a hadron ring in the LEP tunnel at CERN is still attractive. Although restricted in energy by the ‘modest’ dimensions of the LEP tunnel compared to the SSC (27 km circumference against 84), the LHC scheme scores points for the magnificent beam injection systems already in place at CERN, a complete tunnel, and several collision options.

• March 1987 p2 (abridged).

 

Superconducting magnet success

Technical preparations for a possible proton–proton collider (LHC) in the LEP tunnel have made substantial progress with the successful testing of the first LHC superconducting high-field 1 m long model magnet. The single aperture niobium-titanium wound dipole was designed by R Perin and his LHC magnet study team, and manufactured by Ansaldo Componenti, Genova.

Operating at 2 K, it reached and passed its 8 Tesla nominal field without any quench, the first three quenches occurring at central fields of 8.55, 8.9 and 9 Tesla respectively. It then attained 9.1 Tesla without quenching and operated at this level for some time.

This is the first time a high field ‘accelerator quality’ superconducting dipole model has been designed and built as a joint venture between a scientific laboratory and industry. CERN provided most of the know-how and the superconductor, while manufacture was taken over by Ansaldo.

• June 1988 p13 (abridged).

 

Magnets: beyond niobium-titanium

The superconducting proton ring being built for the HERA electron–proton collider at DESY has already demonstrated that niobium-titanium technology is mature, even on an industrial scale. The HERA-type design (coils around the beam-pipe, mechanical support collars and cold iron return) has gone on to become widely adopted, but reaches its natural limit for dipole construction using niobium-titanium near 10 Tesla.

This is now well understood and has been demonstrated with several test magnets developed in a collaboration between CERN and Italian supplier Ansaldo. A similar geometry was used with niobium-tin in a collaboration between CERN and Elin (Austria) which reached a record field for this kind of magnet of 9.45 Tesla in September 1989.

CERN’s proposed LHC collider in the LEP tunnel envisages 10 T fields with a double aperture carrying the two beam pipes for the proton beams inside a single cryostat. Four contracts have been placed with European firms for the development of one-metre, double-aperture niobium-titanium magnets with a view to placing further orders for full-scale, 10 m prototype units. Using superfluid helium at 1.8 K instead of conventional 4.2 K cryogenics provides the necessary additional potential.

• Sept/October 1990 pp17–18 (extract).

Neutron-rich nuclei reveal new secrets

Two research teams at Michigan State University’s National Superconducting Cyclotron Laboratory (NSCL) have reported fresh findings about neutron-rich nuclei. In separate experiments, one group measured a critical energy gap in oxygen nuclei, while another achieved a first with a new technique for finding isomers.

One important area of study with these nuclei focuses on the neutron drip line – the limit on the number of neutrons (N) that can bind to a given number of protons. For oxygen, that line was known to lie at 16 neutrons, a fact that indicated a new shell closure at N=16 in neutron-rich nuclei. However, theoretical calculations disagreed on the difference in binding energy between 24O, with a closed shell of 16 neutrons, and 25O, the first isotope beyond the drip line – in other words, the binding energy of the 17th neutron.

Calem Hoffman from Florida State University and colleagues have now pinpointed this quantity. The group used the NSCL’s coupled cyclotrons to accelerate a beam of 26F onto a fixed target, where they observed 25O for the first time. The 25O decays too quickly for direct detection, but the group was able instead to track its decay products: 24O and a single free neutron, measured with the Modular Neutron Array. The team then used the angles, energies and momenta of the decay products to calculate the mass of the 25O, which in turn allowed them to infer the difference in binding energy from 24O, and ultimately the N=16 shell gap, which they find to be 4.86(13) MeV (Hoffman et al. 2008).
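
The reconstruction step described here is the standard invariant-mass technique: the parent's mass squared is the square of the summed energies minus the squared magnitude of the summed momenta. A minimal sketch in natural units — the four-vectors in the example are made-up illustrative values, not NSCL data:

```python
import math

# Invariant-mass reconstruction: M^2 = (sum E)^2 - |sum p|^2 in natural
# units (c = 1). This is the generic technique; the example four-vectors
# below are illustrative, not measured decay products.

def invariant_mass(four_vectors):
    """Invariant mass of a list of (E, px, py, pz) tuples, natural units."""
    E  = sum(v[0] for v in four_vectors)
    px = sum(v[1] for v in four_vectors)
    py = sum(v[2] for v in four_vectors)
    pz = sum(v[3] for v in four_vectors)
    return math.sqrt(E**2 - px**2 - py**2 - pz**2)

# Sanity check: two back-to-back massless products of 1 (energy unit) each
# reconstruct a parent of mass 2, at rest.
print(invariant_mass([(1, 0, 0, 1), (1, 0, 0, -1)]))  # 2.0
```

In the real measurement the inputs are the reconstructed 24O fragment and the free neutron detected in the Modular Neutron Array, and the shell gap follows from comparing the reconstructed 25O mass with the known 24O binding energy.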

The second experiment, conducted by NSCL’s Georg Bollen and colleagues, focused on nuclear isomers, in which neutrons are excited to a higher-energy arrangement for anywhere from fractions of a second to years. The team has discovered a previously unknown isomer of 65Fe, a nucleus that is intriguing for its proximity in terms of proton and neutron numbers to 68Ni, a particularly enigmatic isotope. 68Ni displays some characteristics of doubly magic nuclei, but nuclei with slightly fewer protons and neutrons than 68Ni reveal pronounced changes in structure – which generally is not the case for isotopes near others that are doubly magic. Researchers have little idea what is happening in this nuclear region, and so are keen to make more measurements.

These nuclei are a target for the Low Energy Beam and Ion Trap (LEBIT), which experimenters at NSCL use to collect high-speed products of cyclotron-spawned collisions. After firing a beam of germanium nuclei into a thin target, Bollen’s team captured the products in LEBIT and directed them into a Penning trap, allowing them to make very precise mass measurements of the particles caught. The team measured two distinct masses for 65Fe, indicating nuclei with different energy states – one the ground state and one a novel isomer at an excitation energy of 402(5) keV (Block et al. 2008). This is the first use of Penning trap mass spectrometry of this kind. Previous isomer studies have instead employed gamma-ray spectroscopy.

Neutrino physicists get together down under

In recent years neutrinos have moved onto centre stage in both astrophysics and particle physics, and the latest developments were on show at the XXIII International Conference on Neutrino Physics and Astrophysics on 26–31 May. Supported by the International Union of Pure and Applied Physics, Neutrino 2008 took place in Christchurch, New Zealand, where it was organized by the University of Canterbury and the IceCube collaboration, which uses Christchurch as its staging area and gateway to Antarctica. Conference-goers celebrated the 100th anniversary of the award of the Nobel Prize to a former undergraduate of the University of Canterbury, Ernest Rutherford, whose life was the topic of the opening presentation by Cecilia Jarlskog from Lund.

The question “Where are we?” is beloved of neutrino physicists. Alexei Smirnov of the Abdus Salam International Centre for Theoretical Physics in Trieste noted that a quarter of the papers found on the SPIRES high-energy physics database with this title are in neutrino physics. With the discoveries of neutrino masses and lepton-flavour mixing now established, there is a standard neutrino scenario in which neutrinos have masses in the sub-electron-volt range and there are two large mixings and one small or zero mixing between the three neutrino flavours. Neutrino experiments have moved into an era of precision measurements, motivated by the belief that neutrino mass and mixing are manifestations of physics beyond the Standard Model. However, as Smirnov noted, despite many years of effort and many trials, the physics underlying neutrino mass and mixing remains unidentified.

Roadmap of theoretical possibilities

Understanding neutrinos is a two-step process. The first step is to determine the values of the three mixing angles, the masses of the three mass eigenstates, and the value of the CP-violating phase. It is also necessary to find out whether the neutrino is its own antiparticle, that is whether it is as described by the physics of Paul Dirac or of Ettore Majorana. The second step is to try to understand why the neutrino matrix elements and the neutrino masses are what they are and what they tell us about physics well beyond the Standard Model. Stephen King from Southampton presented a roadmap of theoretical possibilities, including extra dimensions and possible grand unified theories, with each theoretical path linked to future experimental results.

Two of the mixing angles are now well determined: one through the solar-neutrino experiments and the other through the atmospheric- and accelerator-neutrino studies. The third angle, θ13, is much less constrained but is no less important because it determines how close the mixing matrix is to the theoretically interesting, highly symmetric “tribimaximal” configuration. The best limits on θ13 currently come from the CHOOZ reactor experiment. If θ13 is large enough, it may be possible to observe CP violation with neutrinos, and Yosef Nir from the Weizmann Institute explained how a large value for the CP-violating parameter, δ, could explain the observed baryon asymmetry in the universe via the process called leptogenesis.

Speakers from solar-neutrino experiments were the first to present their results, beginning with reports from the Borexino detector located at Gran Sasso National Laboratory in Italy, and from the third and final phase of the Sudbury Neutrino Observatory (SNO) in Canada. SNO’s third phase included 3He proportional counters to measure the rate of neutral-current interactions in the detector’s heavy water. The Borexino experiment has results from 192 days of data taking and, as with earlier solar-neutrino measurements, these are best described by neutrino-flavour oscillation. The electron-neutrino flavour eigenstate, to a good approximation, is a linear combination of two mass eigenstates with masses m1 and m2. Neutrinos from the same energy range but at a much shorter baseline are detected by the KamLAND experiment in Japan, which observes antineutrinos from nuclear reactors. A combined analysis of the solar and KamLAND data now gives precise results for the mixing angle, θ12, and the mass-squared difference, Δm²12, of the two mass eigenstates. A two-flavour analysis gives θ12 = 33.8 +1.4 −1.3 degrees and Δm²12 = 7.94 +0.42 −0.26 × 10⁻⁵ eV².
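In the two-flavour approximation underlying such fits, the survival probability depends only on the quoted mixing angle and mass-squared difference: P = 1 − sin²2θ · sin²(1.27 Δm² L/E), where the factor 1.27 collects ħ, c and unit conversions for Δm² in eV², L in km and E in GeV. A sketch using the central values quoted above; the baseline and energy are merely illustrative of a reactor measurement, not a specific KamLAND data point:

```python
import math

def survival_probability(theta_deg, dm2_ev2, L_km, E_gev):
    """Two-flavour survival probability
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)."""
    s2 = math.sin(2.0 * math.radians(theta_deg)) ** 2
    return 1.0 - s2 * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

theta12, dm2_12 = 33.8, 7.94e-5     # central values from the combined fit
# A few-MeV reactor antineutrino at an illustrative ~180 km baseline:
P = survival_probability(theta12, dm2_12, 180.0, 0.004)
```

Because the oscillation phase grows with L/E, reactor experiments at different baselines probe different parts of this curve, which is how the combined solar-plus-KamLAND fit pins down both parameters.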

The Super-Kamiokande experiment in Japan is now fully recovered from the accident in 2001, which destroyed around half of the original photomultiplier tubes. It has provided a high-precision measurement of neutrino oscillations by detecting atmospheric neutrinos in an energy range from hundreds of millions of electron-volts to a few tera-electron-volts. Jennifer Raaf from Boston gave the results from a combined analysis of the pre-accident and post-accident data taking. These include a mixing angle with sin²2θ23 > 0.94 at 90% confidence, which is the best constraint so far obtained for this parameter. The experiment also places limits on non-oscillation physics, such as neutrino decoherence, which is excluded at 5.0 σ, and neutrino decay, which is excluded at 4.1 σ.

Neutrino beams produced at particle accelerators offer the greatest control over the neutrino sources. They have been used to study the same neutrino oscillations that take place in atmospheric neutrino oscillation. The KEK-to-Kamioka (K2K) experiment was the first long-baseline neutrino experiment to operate, using neutrinos sent from the KEK laboratory to the Super-Kamiokande detector 250 km away. The K2K collaboration has previously reported results consistent with the Super-Kamiokande atmospheric neutrino results using data collected between 1999 and 2004. At the conference Hugh Gallagher from Tufts University presented new results from the Main Injector Neutrino Oscillation Search (MINOS) experiment. This uses a muon–neutrino beam that is produced at Fermilab and observed at two sites: a near detector at Fermilab and a far detector 734 km away at the Soudan Underground Laboratory in Minnesota. MINOS now has the tightest constraint on the mass-squared difference, finding Δm²32 = 2.43 ± 0.13 × 10⁻³ eV² and a result for the mixing angle that is consistent with that from Super-Kamiokande.

The conference also heard reports on future experiments that aim to measure θ13. These include the reactor-neutrino experiments Double Chooz in France, Daya Bay in China and the Reactor Experiment for Neutrino Oscillation at Yonggwang in Korea, as well as the accelerator-neutrino experiments T2K, OPERA at the Gran Sasso National Laboratory, and NOvA at Fermilab.

Many efforts are under way to determine directly the absolute neutrino mass scale in laboratory experiments through nuclear beta-decay or neutrinoless double beta-decay, which is possible if the neutrino is Majorana. Beta-decay experiments can be categorized by the detector type and there were reviews of tracking, solid-state, calorimetric and scintillator detectors, with energy resolution being the crucial common ingredient. The neutrino mass scale can also be probed through cosmology; the relic neutrino density influences the evolution of large-scale structure in the universe. Richard Easther from Yale presented the latest results obtained by combining cosmic microwave background and supernova observations. The best fit constrains the mass sum from all neutrino flavours to be less than 1 eV, with better precision obtainable if the Hubble constant is known independently.

Neutrinos also probe a range of physical processes, from the heat source of the Earth to the location of high-energy cosmic accelerators. Bill McDonough of Maryland discussed how the detection of geoneutrinos can put limits on the amount of heat generated by uranium and thorium inside the Earth. KamLAND has already placed limits on this but is restricted by the background from reactor neutrinos. The next step may be the Hawaii Anti-Neutrino Observatory, HANOHANO – a proposed 10 kilotonne liquid scintillation detector designed to be transportable and deployable in the deep ocean. Its goal is to measure the neutrino flux from the Earth’s mantle for the first time.

Cosmic neutrinos may also unveil the very high-energy, cosmic-ray accelerators. Unlike photons or charged particles, neutrinos can emerge from deep inside their sources and travel across the universe uninterrupted. Julia Becker of Gothenburg University discussed some potential sources of cosmic neutrinos, including some of the most energetic objects in the universe, such as supernova remnants, microquasars and active galactic nuclei. To date, no experiment has observed extraterrestrial high-energy neutrinos, but cubic-kilometre telescopes (e.g. KM3NeT, which is planned for the Mediterranean, and IceCube, under construction at the South Pole) are expected to be large enough to observe these cosmic neutrinos. Spencer Klein from the Lawrence Berkeley National Laboratory gave an update on the IceCube neutrino observatory, which uses the ice at the South Pole as a Cherenkov medium for the detection of high-energy neutrinos. The observatory comprises an in-ice, three-dimensional array of photomultiplier tubes and a surface air-shower array. As of February, half of the detector had been deployed, bringing the instrumented volume to roughly 0.5 km³.

Although the field of neutrino physics has moved into a precision era, many puzzles remain and there is still much to be explained. A number of experiments are anticipating new results in the near future, so we can look forward to the next Neutrino conference, to be held in Athens in 2010.

London welcomes DIS international workshop

This spring the XVI International Workshop on Deep-Inelastic Scattering and Related Subjects (DIS 2008) took place at University College London (UCL), and was jointly organized by the high-energy particle physics groups of the University of Oxford and UCL. Some 300 participants attended the workshop, which was held on 7–11 April and consisted of approximately 270 talks covering a multitude of subjects.

The provost and president of UCL, Malcolm Grant, opened the first day, which consisted mainly of plenary talks, with speakers detailing recent experimental and theoretical highlights, and looking at future developments in the field of deep-inelastic scattering (DIS), QCD and collider physics. The opening plenary speakers greatly helped to set the tone of the meeting with excellent overviews and positive outlooks. In the late afternoon, the workshop split into working groups with specialized talks, with up to six groups in parallel at any one time.

The parallel sessions covered a range of subjects, including structure functions and low-x; diffraction and vector mesons; electroweak measurements and physics beyond the Standard Model; hadronic final states and QCD; heavy flavours; spin physics; and future facilities. There were many excellent presentations, including high-quality results from both experiment and theory, together with extensive discussions. The parallel sessions continued throughout the next two days, culminating in a packed additional session organized by Hannes Jung from DESY on “What HERA can still provide”. That so many people were prepared to forgo an evening meal to participate in an extra session at the end of a busy day demonstrates the unique legacy of HERA, the world’s first and only electron–proton collider, which ceased operation at DESY in June 2007. On the afternoon of the fourth day and the morning of the fifth, the convenors of the working groups reported on the highlights of their sessions. Finally, Brian Foster of the University of Oxford beautifully summarized the whole workshop, again highlighting the vitality of both the field and the workshop.

Work on the structure of the proton – the main subject of the DIS workshop series – has seen tremendous advances recently. The H1 and ZEUS collaborations have made the first measurements of the longitudinal structure function, FL, and have combined data on inclusive DIS cross sections from the HERA I run in a preliminary HERA fit of the parton density functions. The quantity FL is an integral part of the description of the proton’s structure and is directly sensitive to the gluon density and the QCD evolution with momentum transfer. Both collaborations have measured FL using two special low-energy proton runs taken at the end of HERA data taking. While the data are consistent with QCD predictions of the parton densities, which are based on fits to the inclusive measurements of F2, they cannot yet distinguish between different predictions, although significant improvements to the measurements are expected.
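The low-energy-run technique works because the reduced cross section is linear in the kinematic factor y²/Y₊ at fixed x and Q²: σr = F2 − (y²/Y₊)·FL, with Y₊ = 1 + (1−y)². Lowering the proton beam energy changes y at the same (x, Q²), so measurements at two energies separate F2 from FL by a straight-line fit. A toy illustration of that separation, with invented structure-function values rather than HERA data:

```python
# Toy Rosenbluth-style separation of F2 and FL at fixed (x, Q2).
F2_true, FL_true = 1.20, 0.30          # assumed (illustrative) values

def reduced_xsec(y, F2, FL):
    """sigma_r = F2 - (y^2 / Y+) * FL, with Y+ = 1 + (1 - y)^2."""
    y_plus = 1.0 + (1.0 - y) ** 2
    return F2 - (y ** 2 / y_plus) * FL

# Same (x, Q2) reached at nominal and lowered proton energy -> two y values.
y_nominal, y_lowE = 0.3, 0.7
s1 = reduced_xsec(y_nominal, F2_true, FL_true)
s2 = reduced_xsec(y_lowE, F2_true, FL_true)

# Two linear equations in (F2, FL); solve them directly.
a1 = y_nominal ** 2 / (1.0 + (1.0 - y_nominal) ** 2)
a2 = y_lowE ** 2 / (1.0 + (1.0 - y_lowE) ** 2)
FL_fit = (s1 - s2) / (a2 - a1)
F2_fit = s1 + a1 * FL_fit
```

In the real measurement the two points carry statistical and systematic errors, which is why the extracted FL cannot yet discriminate between gluon-density predictions.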

Taking advantage of the different detectors and their systematics, the combination of the F2 measurements from H1 and ZEUS has produced results that are significantly more precise than the simple effect of doubling statistics. The effective “cross calibration” has led to uncertainties of 1–2% over a wide range in Bjorken-x and in photon virtuality, Q2. The combined HERA data alone have in turn been used in a fit of the parton distributions in the proton and this leads to results that are competitive with global fits that use data from many different sources (see figure). Data from the Tevatron at Fermilab are also placing strong constraints on the structure of the proton. Results on the charge asymmetry of the W particle from the CDF experiment have a precision that is significantly better than the uncertainties on the parton distribution functions. Additionally, inclusive-jet cross sections from the D0 experiment yield constraints at the highest scales, up to 600 GeV. They also provide a wonderful verification of QCD predictions across 10 orders of magnitude in the cross section, differential in jet pT and rapidity.

All of the above results are crucial inputs to our understanding of QCD, and in particular the structure of the proton, which is needed as the starting point for most of the physics at CERN’s LHC. Along with the new measurements, theory is keeping pace with a number of advances that are either already made or planned. With the recent development of next-to-next-to-leading-order (NNLO) QCD corrections for F2, groups are working on the implementation of NNLO for general 2 → 2 parton scattering and the extension to the next order for F2. Of course, with every order in the perturbation expansion the number of diagrams increases exponentially, but new approaches using formal mathematics developed for other applications, such as twistors, are helping to reduce the number of diagrams by over an order of magnitude.

Spin physics – fully polarized DIS – attracted many talks. The exquisite experiments of HERMES at HERA, COMPASS at CERN and those at RHIC and Jefferson Lab are matched by exotic new varieties of observables and dreams of reconstructing the proton structure in 3D. Despite all this activity, however, the “spin crisis” remains. The quarks do not carry much of the proton’s spin, and new results show that neither do the gluons. That leaves angular momentum – dubbed “dark angular momentum” by Xiangdong Ji of Maryland during his introductory talk on spin, because it will be so difficult to measure. Much remains to be done to clarify this area at the upgraded Jefferson Lab and/or RHIC.

The workshop programme made room for several social events including a welcome reception, held in the North Cloisters at UCL, and a brilliant concert at the Queen Elizabeth Hall by violinist Jack Liebeck and pianist Katya Apekisheva. The social highlight was the dinner held at Lord’s Cricket Ground – “the home of cricket”. After an excellent dinner, Norman McCubbin from the Science and Technology Facilities Council/Rutherford Appleton Laboratory gave a speech entitled “The scattering of balls: an English obsession”. He explained the delights of this English game, such as its length, the many and complicated options for when tea can be taken and the history of Lord’s. This was all supported by props showing how the game relates to physics and specifically deep-inelastic scattering.

DIS 2008 demonstrated how “DIS and Related Subjects” permeates almost all areas of high-energy physics, from hadron colliders to spin physics, neutrino physics and more. There is still much to be done and learnt in the field. Apart from the immediate excitement of the LHC start-up, another promising development for the future is the LHeC project, discussed on the last day, which would see the introduction of an electron ring in the LHC tunnel, allowing electron–proton collisions.

The European Committee for Future Accelerators has recently approved a conceptual design study and work is rapidly increasing on this project to assess its physics potential and technical realization, with a series of dedicated workshops starting this year. We are now all looking forward to seeing how this flourishing subject will be continued in Madrid at DIS 2009.

• The workshop was generously supported by CERN, DESY, FNAL, Jefferson Lab, STFC, IPPP Durham, UCL Maths and Physical Sciences Faculty, John Adams Institute, Cockcroft Institute, Cambridge University Press and Oxford University Press. As co-chairs we would like to thank all members of the Local Organizing Committee, in particular Christine Johnston, who quietly and efficiently carried most of the administrative burden, and the student helpers who made the conference such a great success.

BaBar gets right to the bottom

The BaBar collaboration, working at SLAC, has observed the ground state of the bottomonium family, the ηb meson. Bottomonium particles are bound states of a bottom quark and its antiquark. The first such state, the Υ(1S), was discovered 30 years ago and revealed the existence of the bottom quark. Physicists have been searching for the lowest-energy state of the system ever since.

The ηb was observed in the energy distribution of the photons produced in the radiative decay of the Υ(3S). The two-body decay, Υ(3S) → γηb, produces a monochromatic line with an energy that can be used to determine the ηb mass. The crucial point of the analysis was to understand the photon backgrounds, especially those that form peaks in the spectrum. These include photons emitted in radiative processes such as e+e− → γΥ(1S), which produces photons with energies close to the expected ηb signal, and transitions to intermediate bottomonium states, χbJ(2P).

The team used more than 100 million Υ(3S) events produced from e+e− collisions recorded with the BaBar detector at the PEP-II accelerator. These data were recorded in the final data-collection run of the experiment in 2008. After the analysis selection, approximately 19,000 ηb candidates were identified as forming a peak in the photon-energy spectrum at 921.2 MeV. The significance of this peak is 10 σ.

The corresponding mass of the ηb is 9388.9 +3.1 −2.3 ± 2.7 MeV/c², giving a hyperfine mass splitting of 71.4 +2.3 −3.1 ± 2.7 MeV/c² between the Υ(1S) and the ηb. This measurement represents the first experimental data on hyperfine mass splittings in the heaviest meson system, and will allow more precise tests of the role of spin–spin interactions in QCD.
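For a two-body radiative decay at rest, energy–momentum conservation fixes the photon line at Eγ = (M² − m²)/2M, so the measured line can be inverted for the daughter mass. A back-of-envelope cross-check using the quoted photon energy and the PDG Υ(3S) and Υ(1S) masses; the published value includes further analysis corrections, so exact agreement is not expected:

```python
import math

m_3S = 10355.2       # Upsilon(3S) mass, MeV (PDG)
m_1S = 9460.3        # Upsilon(1S) mass, MeV (PDG)
E_gamma = 921.2      # measured photon line, MeV

# Invert E_gamma = (M^2 - m^2) / (2M) for the daughter mass m:
m_etab = math.sqrt(m_3S**2 - 2.0 * m_3S * E_gamma)

# Hyperfine splitting relative to the Upsilon(1S):
splitting = m_1S - m_etab
```

The simple inversion lands within a fraction of an MeV of the quoted 9388.9 MeV/c² mass and 71.4 MeV/c² splitting.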

The BaBar collaboration expects to release further results on bottomonium spectroscopy in the near future.

D0 snares last rare boson pair

The D0 collaboration at Fermilab has announced the observation of pairs of Z bosons produced in proton–antiproton collisions. This is the final and rarest state in the series of gauge boson pairs observed and studied by D0 and the CDF experiment at the Tevatron: Wγ, Zγ, WW, WZ and ZZ. Earlier this year CDF published evidence for ZZ production, but the D0 results presented on 25 July showed for the first time sufficient significance to rank as an observation.

D0 observed ZZ production in 2.7 fb⁻¹ of data with a combination of two analyses that look for Z decays into different final states. One analysis looked for a Z decaying into two electrons or two muons, the other for a Z decaying into neutrinos. The neutrino signature is challenging experimentally, but worthwhile to pursue because it occurs relatively frequently, even though it is predicted to occur less than once every 10¹² collisions. The decay of both Zs into electrons or muons is rarer still. In this analysis, three candidate events were observed with an expected background of less than 0.2 events. The statistical significance of the combined analysis is 5.7 σ, which firmly establishes the discovery of ZZ production at the Tevatron.

D0 measured a cross section for ZZ production of 1.5 ± 0.6 pb, which is in excellent agreement with the prediction of the Standard Model. This is important because Z bosons in the Standard Model do not couple directly to one another. A higher rate would have implied anomalous self-couplings.

The observation of ZZ is connected with the search for the Higgs boson in several ways. The next rarest diboson production processes after ZZ are those involving Higgs bosons; seeing ZZ is an essential step in demonstrating the ability of an experiment to see the Higgs. Pairs of Z bosons also constitute one of the backgrounds to Higgs searches. At small values of the Higgs mass, ZZ can mimic the signature for a Higgs boson produced in association with a Z boson. At large values of the Higgs mass, the Higgs can decay into WW or ZZ. In more ways than one, ZZ observation is an essential prelude to finding, or excluding, the Higgs boson at the Tevatron.

Will the LHC reveal the unexpected?

This autumn, commissioning should be in full swing on the LHC at CERN, the world’s largest laboratory for the study of subnuclear physics. So it is entirely appropriate that the 46th Course of the International School of Subnuclear Physics, the oldest of the 123 schools of the Ettore Majorana Foundation and Centre for Scientific Culture (EMFCSC) in Erice, will look at what may come from the LHC – both the expected and the unexpected.

This year’s course, directed by Antonino Zichichi and Gerardus ’t Hooft, is to be held in Erice in September. It will provide the perfect opportunity to focus on the highlights from CERN, and in particular the goals of the LHC. This was also the theme of the 45th in the series, held in 2007, when CERN’s director-general, Robert Aymar, stated that these goals “could determine the future course of high-energy physics and should allow us to go beyond the Standard Model”.

Physics beyond the Standard Model in fact appeared before the Standard Model itself, when Raymond Davis observed neutrinos from the Sun in the 1960s. At Erice last year, Alessandro Bettini from the Galileo Galilei physics department at Padua University pointed out: “From 1962 neutrinos were used to look into the Sun’s core, but their behaviour was totally unexpected.” This led to the case for neutrino oscillations – a phenomenon that the Italian Laboratori Nazionali del Gran Sasso (LNGS) is studying through the CERN Neutrinos to Gran Sasso project, which started in August 2006. “The observation of neutrino oscillations has now established beyond doubt that neutrinos have mass and mix,” claimed Eugenio Coccia, director of LNGS, during his talk. “The existence of neutrino masses is the first solid experimental fact requiring physics beyond the Standard Model.”

The physics of neutrinos is also linked to the unseen matter of the universe. In 1933, Fritz Zwicky, on measuring the mean quadratic velocity of galaxies, proposed the existence of a kind of “invisible matter” – he named it dark matter – that could have neither electromagnetic nor strong nuclear interactions. Neutrinos became the obvious candidates for dark-matter particles, but the study of the evolution of large-scale structures in the universe has unexpectedly shown that the contribution of neutrinos must be extremely small, if it exists at all. Indeed, no Standard Model particle can be considered as the dominant component of dark matter. One new particle candidate is the sterile neutrino, as Lisa Randall from Harvard University explained last year. “This new ‘flavour’ of neutrino could be trapped, like gravitons, in a different brane from the one we live on,” she said. “For this reason we have not observed it directly so far. But the LHC should manage to see many particles that were created during the dawn of the universe and disappeared soon after the Big Bang.”

There are many questions in particle physics that the LHC could help to solve, and the 46th course will again discuss them this year. A key question is whether the expectations from the LHC are predictable.

To answer this, during his talk at the 45th course, Zichichi recalled a front-line scientist of the 20th century, whose birth centenary was celebrated last year at the World Nuclear Physics Conference in Tokyo. In 1935 Hideki Yukawa proposed the existence of a particle with a mass between that of the light electron and the heavy nucleon – the first meson. “No-one was able to predict the ‘gold mine’ hidden in the production, decay and intrinsic structure of the Yukawa particle,” said Zichichi. “This gold mine is still being explored today, and its present frontier is the quark–gluon-coloured world.” Zichichi also pointed out: “It is considered standard wisdom that nuclear physics is based on perfect theoretical predictions, but people forget the impressive series of unexpected events with enormous consequences [UEEC] discovered inside the Yukawa gold mine.”

Such UEEC events are a common feature of the greatest scientific discoveries and the most important historical facts. However, there is a difference. Analysing history on the basis of “what if?” leads historians to conclude that the world would not be as it is if one or any number of “what if?” events had not occurred. This is not the case for science, as Zichichi underlines: “The world would have exactly the same laws and regularities, even if Galileo Galilei or somebody else had not made their discoveries.”

UEEC events will be crucial evidence for understanding the existence of complexity at the elementary level. “No one could predict a UEEC event on the basis of present knowledge,” Zichichi pointed out. “In fact predictions are based on the mathematical description of UEEC events, so they come only after a UEEC event. Moreover, we should be prepared with powerful experimental instruments, technologically at the frontier of our knowledge, to discover all the pieces of the Yukawa gold mine.”

With the advent of the LHC, CERN’s new supercollider will study the properties of a “new world” produced in collisions between heavy nuclei (208Pb82+) at the maximum energy so far available (1150 TeV). This world is the quark–gluon-coloured world, totally different from anything we have dealt with so far.

As Aymar underlined: “If new physics is there, the LHC should find it.” There is nothing left for us but to await the unexpected.

Protons and neutrons certainly prefer each other’s company

Researchers at the Jefferson Lab have found that neutron–proton pairs in the ground state of the carbon-12 nucleus are far more common than proton–proton and neutron–neutron pairs. As many as 18% of the nucleons are involved in proton–neutron short-range correlations (SRCs), a result that could have implications for neutron stars.

In a typical nucleus, nucleons maintain an average distance of 1.7 fm. However, roughly one-fifth of nucleons are involved in short-range correlations, where two nucleons come to within a femtometre of each other. These pairs can create local densities five times that of average nuclear matter, thus providing a glimpse of dense nuclear matter as found in neutron stars.

Now a team working in Jefferson Lab’s Hall A has made the first simultaneous measurement of SRCs and their constituents. The experiment used an incident electron beam of 4.627 GeV and a carbon-12 target. Proton-knockout events were defined by the two High-Resolution Spectrometers (HRS) in Hall A. The left HRS detected scattered electrons and the right HRS detected knock-out protons. A large acceptance spectrometer (BigBite) and a neutron array detected correlated high-momentum recoiling protons and neutrons, respectively.

The experiment selected (e,e’p) events with high missing momentum, greater than 300 MeV/c, and revealed that the missing momentum was balanced almost entirely by a single recoiling nucleon. This nucleon was initially back to back with the knock-out proton. The team found that 90% of these SRCs involved proton–neutron pairs. The remaining 10% were split between proton–proton and neutron–neutron pairs (Subedi et al. 2008). Calculations of this effect in recent theoretical work indicate that the large prevalence of neutron–proton pairs over proton–proton and neutron–neutron pairs is a result of the nucleon–nucleon tensor force (Sargsian et al. 2005 and Schiavilla et al. 2007).
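The "missing momentum" used in this selection is the recoil inferred from the beam, the scattered electron and the knocked-out proton: p_miss = q − p_p, where q = k − k′ is the momentum transferred by the electron. A schematic sketch with illustrative vectors, not the experiment's actual kinematics:

```python
import math

def vec_sub(a, b):
    """Component-wise difference of two 3-vectors."""
    return tuple(x - y for x, y in zip(a, b))

def mag(v):
    """Magnitude of a 3-vector."""
    return math.sqrt(sum(x * x for x in v))

# Illustrative momenta in MeV/c (toy values, beam along z):
k_beam = (0.0, 0.0, 4627.0)     # incident electron
k_scat = (800.0, 0.0, 3000.0)   # scattered electron (left HRS)
p_prot = (-500.0, 0.0, 1400.0)  # knocked-out proton (right HRS)

q = vec_sub(k_beam, k_scat)     # momentum transfer
p_miss = vec_sub(q, p_prot)     # balanced by the recoiling nucleon
passes_cut = mag(p_miss) > 300.0   # the >300 MeV/c selection
```

Events passing this cut are those in which the struck proton had a large initial momentum, which in an SRC pair must be balanced by its back-to-back partner.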

Together with previous work, including cross-section ratio measurements at Jefferson Lab and proton-knockout experiments at Brookhaven National Laboratory, the new result yields a consistent picture of the short-distance structure of nuclear systems, from light nuclei to neutron stars. Most accepted models of neutron stars assume a make-up of 95% neutrons and 5% protons at the core. The presence of strong short-range, neutron–proton pairing could alter assumptions about the protons’ momenta, thus affecting calculations of the density and/or lifetime of neutron stars.
