Two years of LHC physics: ATLAS takes stock

(Figure 1: production cross-section.)

Since the first LHC collisions about two years ago, the ATLAS experiment has performed superbly – collecting quality data with high efficiency, processing the data in a timely way and preparing and publishing many new physics results. During this time the LHC has delivered more than 5 fb–1 of proton–proton collision data at √s = 7 TeV. Using a good fraction of these data, the collaboration has carried out Higgs-boson searches, as well as searches for physics beyond the Standard Model. While no new physics has been observed, ATLAS is setting stringent limits on the production cross-sections of new particles, including – but not limited to – the Higgs boson predicted by the Standard Model. Accurate measurements of Standard Model physics quantities have been performed covering many orders of magnitude in cross-section, with a precision often comparable to or exceeding that of the predictions.

These accomplishments have not been as easy as they may appear. Built on the tremendous success of the LHC, they are also the product of the strength and hard work of the 3000-member ATLAS collaboration. This article recounts the story of the first two years of the ATLAS physics programme, with an eye to some of the special occurrences and accomplishments along the way. However, it represents only the tip of the iceberg as far as the reach of the LHC physics programme is concerned. So far, ATLAS has collected just a small percentage of the total luminosity expected over the lifetime of the LHC.

Major progress

Before any physics analysis can be carried out, many members of the ATLAS collaboration work tirelessly to ready and tune the detectors, data acquisition, and trigger, to collect and reconstruct the data, and to check the quality of the data. Other members work to understand and characterize the various reconstructed objects seen in the detector. These are electrons, muons, τ leptons, photons, jets, missing transverse energy and identified heavy flavour. In two years ATLAS has gone from beginning to understand charged particles in the inner detector to using complex neural networks for flavour tagging. Algorithms have also progressed, for example from simple calorimeter-based definitions of missing transverse energy to more sophisticated definitions correcting for the calibrations and energies of the various objects.

(Figure 2: a candidate Z→μμ event.)

The general complexity of the ATLAS results has followed a similar progression, from counting events for processes with large cross-sections to using advanced analysis techniques to extract small signals from large backgrounds. Some analyses have used complex unfolding techniques to compare measurements in the best way with parton-level predictions, and some have used modern statistical tools that allow the combination of many different channels into a single physics interpretation.

Figure 1 summarizes the production cross-sections for the main Standard Model processes and shows the luminosity used to measure these processes. The W and Z inclusive cross-sections and the Wγ and Zγ cross-sections were measured with the approximately 35 pb–1 from 2010. The tt cross-section is based on a statistical combination of measurements using dilepton final states with 0.70 fb–1 of data and single-lepton final states with 35 pb–1. The other measurements were made with the 2011 dataset. After only two years of running ATLAS can now measure processes with cross-section times branching ratio down to around 10 pb.

One of the by-products of the excellent LHC performance is another increase in complexity: the presence of multiple interactions within the same bunch crossing (pile-up). The 40 pb–1 of data recorded in 2010 had an average of 3 interactions per bunch crossing (<μ>), allowing good quality measurements in a relatively clean environment. The LHC run in 2011 was characterized by a rapid increase in instantaneous luminosity from the beginning of the machine operations in March, reaching <μ> of 10 in August. Figure 2 shows the complexity of an event with a Z→μμ candidate produced in a bunch crossing with 11 reconstructed proton–proton interaction vertices. The time for the integrated luminosity to double at the beginning of the run was less than one week. Since the end of May this year, the machine has regularly delivered more luminosity in one day than the total delivered in 2010.
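
As a rough guide to what an average of ⟨μ⟩ interactions per crossing implies, the number of interactions in a given bunch crossing is, to a good approximation, Poisson-distributed. The sketch below is purely illustrative, not an ATLAS measurement:

```python
import math

def pileup_probability(n, mu):
    """Poisson probability of n proton-proton interactions in one
    bunch crossing, given an average of mu (a standard approximation)."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

# With <mu> = 10, crossings with 11 interactions - like the event
# shown in figure 2 - are entirely typical.
print(f"{pileup_probability(11, 10):.2f}")  # ~0.11
```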

First results

In the beginning, analyses focused mostly on understanding and measuring properties using single detectors. The first ATLAS publication on collisions reported the charged-particle multiplicities and distributions as a function of the transverse momentum pT and pseudo-rapidity η, analysing the data taken in December 2009 (√s=0.9 TeV). This allowed validation of the charged-particle reconstruction, providing important feedback on the modelling of the alignment of the detector as well as on the distribution of the material. These results, and those from a more detailed later study, were also used to tune the parameters of the Monte Carlo modelling of non-perturbative processes, which is now used to model the effect of the multiple interactions for the latest data.

(Figure 3: transverse momentum of 1.3 TeV.)

Using only 17 nb–1 of recorded data, ATLAS measured the production cross-sections of inclusive jets and dijets over a range of jet transverse momenta, up to 0.6 TeV. Figure 3 shows the event with the highest central dijet mass recorded by ATLAS in 2010. These measurements allowed the first accurate tests of QCD at the LHC. The differential cross-sections showed remarkably good agreement with next-to-leading-order (NLO) perturbative QCD (pQCD) calculations, corrected for non-perturbative effects, in this unexplored kinematic regime. Given the good agreement between data and the Standard Model predictions, the study of dijet final states was used to set limits on the mass of new physics objects such as excited quarks Q*, excluding 0.30 < mQ* < 1.26 TeV at 95% confidence level (CL).

The first few hundred inverse nanobarns of data allowed early searches for new physics, looking for quark-contact interactions in dijet final-state events by studying the χ variable associated with a jet pair, where χ = exp(2|y*|) and y* is the jet rapidity evaluated in the dijet centre-of-mass frame. The data were fully consistent with Standard Model expectations and allowed quark-contact interactions with a compositeness scale below Λ = 3.4 TeV to be excluded at 95% CL. Using about 100 times more data than the first jet results, ATLAS studied more complex multi-jet production, with up to six jets per event in the kinematic region pTjet > 60 GeV.
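
For readers who want to see how the angular variable is built, here is a minimal sketch following the definition above; the example rapidities are invented:

```python
import math

def dijet_chi(y1, y2):
    """chi = exp(2|y*|), with y* = (y1 - y2)/2 the rapidity of either
    jet in the dijet centre-of-mass frame. QCD t-channel exchange is
    roughly flat in chi, whereas contact interactions would pile up
    at low chi (central scattering)."""
    y_star = 0.5 * (y1 - y2)
    return math.exp(2.0 * abs(y_star))

print(f"{dijet_chi(1.8, -0.4):.1f}")  # ~9.0 for an invented jet pair
```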

Accurate measurements of Standard Model quantities have been performed covering many orders of magnitude in cross-section

ATLAS was soon able to combine measurements from many detector components and reconstruct more of the Standard Model particles. A study of Standard Model production of gauge bosons (W± and Z0) was performed with the first 330 nb–1 of data, in the lν and ll final states (l = e or μ). Both the measurements of the inclusive production cross-section times branching fraction σW,Z × BR(W,Z→lν, ll) and the ratio of the two agree well with next-to-next-to-leading-order (NNLO) calculations within experimental and theoretical uncertainties.

By combining measurements from the calorimeters and the central inner tracker, ATLAS was able to analyse the production of inclusive high-pT photons and perform extensive QCD studies. Here, additional complexity arises from the need to model and understand significant background contributions. These were estimated for this analysis from data, based on the observed distribution of the transverse isolation energy around the photon candidate. A comparison of the results with predictions from NLO pQCD calculations again showed remarkable agreement.

An initial measurement of the production cross-section for top-quark pairs was already possible with just a small fraction of the 2010 data. Combining information from almost all detector systems, events were selected with either a single lepton produced together with at least four high-pT jets plus large missing transverse energy, or two leptons in association with at least two high-pT jets and large missing transverse energy. In this analysis ATLAS used b-jet identification algorithms for the first time – crucial for the rejection of the large backgrounds that do not contain b-quarks. A total of 37 single-lepton and 9 dilepton events were selected, in good agreement with Standard Model predictions.

The use of all of the 2010 data allowed ATLAS to study more complex quantities and distributions. For example, the W and Z differential cross-sections were measured as functions of the boson transverse momentum, allowing more extensive tests of pQCD. Using the ratio of the difference between the numbers of positive and negative Ws to their sum, ATLAS measured the W charge asymmetry as a function of the boson pseudo-rapidity. These results provided the first input from ATLAS on the u- and d-quark momentum fractions in the proton.
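
The charge asymmetry itself is a simple ratio of counts; a minimal sketch, with invented event numbers, is:

```python
def w_charge_asymmetry(n_plus, n_minus):
    """A = (N(W+) - N(W-)) / (N(W+) + N(W-)), measured in bins of boson
    pseudo-rapidity. It is sensitive to the u/d content of the proton
    because W+ (W-) production preferentially involves u (d) quarks."""
    return (n_plus - n_minus) / (n_plus + n_minus)

print(f"{w_charge_asymmetry(1400, 1000):.3f}")  # 0.167 for invented counts
```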

At this stage, the analysis of W→τν and Z→ττ represented an important step in commissioning the selection of hadronic τ final states that are crucial in searches for new physics as well as for the Higgs boson. More accurate studies of top physics were also performed, from the inclusive cross-section for tt pairs to a preliminary measurement of the top quark’s mass.

New experiences

The full 2010 dataset also offered the first concrete possibility to look for physics signals beyond the Standard Model over a wide spectrum of final states. Events with high-pT jets or leptons and with large missing transverse momentum were studied extensively to search for supersymmetric (SUSY) particles. Here, more complex variables were used, such as the “effective mass” (the sum of the transverse momenta of selected jets and leptons and the missing transverse energy), which is sensitive to the production of new particles. No significant excess of events was found in the data and limits were set on the masses of squarks and gluinos, mq̃ and mg̃, assuming simplified SUSY models. If mq̃ = mg̃ and the lightest stable SUSY particle is massless, then the limit is about 850 GeV at 95% CL. Limits have been placed assuming other SUSY interpretations, such as minimal supergravity grand unification (MSUGRA) and the constrained minimal supersymmetric extension of the Standard Model (CMSSM).
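
A minimal sketch of how such a discriminating variable is assembled from reconstructed objects (the object counts and values below are invented, not the ATLAS selection):

```python
def effective_mass(jet_pts, lepton_pts, met):
    """Scalar sum of the transverse momenta of selected jets and leptons
    plus the missing transverse energy (all in GeV). Heavy squark or
    gluino production would populate the high tail of this variable."""
    return sum(jet_pts) + sum(lepton_pts) + met

# An invented event: four hard jets, one lepton, large missing energy.
print(effective_mass([420.0, 310.0, 95.0, 60.0], [85.0], 240.0))  # 1210.0
```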

(Figure 4: event display.)

The 2010 run ended with a short period in November dedicated to lead-ion collisions with a centre-of-mass energy per nucleon √sNN = 2.76 TeV. This was certainly one of the most amazing experiences for ATLAS during the first two years of collisions at the LHC. As the online event display in the ATLAS control room brought up the first event images, the calorimeter plot showed many events in which a narrow cluster of calorimeter cells with high-energy deposits (a jet) was poorly – or not at all – balanced in the transverse plane by equivalent activity in the back-to-back region (figure 4). The gut feeling was clear: this was the first direct observation of jet-quenching in heavy-ion collisions. A detailed analysis of the early lead-collision data studied the dijet asymmetry – defined as AJ = (ETj1 – ETj2)/(ETj1 + ETj2), where ETji is the transverse energy of jet i calibrated at the hadronic scale – as a function of the event “centrality”. This showed that the transverse energies of dijets in opposite hemispheres become systematically more unbalanced with increasing event centrality, leading to a large number of events that contain highly asymmetric dijets. Such an effect was not observed in proton–proton collisions, pointing to an interpretation in terms of strong jet-energy loss in a hot, dense medium.
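
The asymmetry variable is easy to state in code; a minimal sketch with invented jet energies:

```python
def dijet_asymmetry(et1, et2):
    """A_J = (ET1 - ET2) / (ET1 + ET2) for the two leading jets, with
    transverse energies calibrated at the hadronic scale. A_J ~ 0 for a
    balanced pair; A_J -> 1 when the recoiling jet has lost most of its
    energy in the medium."""
    return (et1 - et2) / (et1 + et2)

print(f"{dijet_asymmetry(100.0, 40.0):.2f}")  # 0.43: a strongly quenched pair
```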

The early part of data-taking in 2011 was an extremely intense period. Already in June, ATLAS presented the first preliminary results on eight analyses using about 300 pb–1 of data, almost 10 times the integrated luminosity of 2010. This allowed more stringent limits to be placed on SUSY particles, heavy bosons W’ and Z’, new particles decaying to tt pairs, and on the production cross-section of a Standard Model Higgs boson decaying to photon-pair final states. It also allowed the first limits on particles with masses above 1 TeV.

Using a similar dataset, ATLAS reported preliminary results on the production of single top quarks at the LHC, looking in particular at the t-channel process, in which a b-quark from the sea scatters with a valence quark. This process is particularly important, as any deviation from QCD predictions may indicate the presence of new physics beyond the Standard Model. The analysis is extremely difficult because the signal is hidden under a large irreducible background from W+jets. It requires complex methods that make use of the full kinematic information of the events, looking simultaneously at many different distributions.

Extending the search with 1 fb–1

While ATLAS was carrying out these analyses, the LHC completed delivery of the first inverse femtobarn, opening up a number of new physics channels. The collaboration quickly released results on a total of 35 physics analyses with these data, most of them having to deal with the increased level of pile-up.

Preliminary WW, WZ and ZZ diboson production cross-section measurements show an overall precision of about 15% (WW and WZ) or 30% (ZZ). These measurements, all consistent with the Standard Model predictions, represent an important foundation for searches for the Standard Model Higgs boson. Triple-gauge couplings have been studied and found to be in agreement with the Standard Model, allowing limits to be placed on the size of anomalous couplings of this kind.

The 2011 data allowed more sensitive searches for SUSY particles, using similar or more complex distributions than in 2010. Once more, the results are in good agreement with the Standard Model expectations and have again been interpreted in the MSUGRA/CMSSM models as well as in simplified models. These studies exclude squarks and gluinos in simplified models with masses less than about 1.08 TeV at 95% CL.

(Figure 5: transverse mass distribution.)

ATLAS has also performed searches for dijet, lepton–neutrino and lepton–lepton resonances. Figure 5 shows the transverse mass distribution for an electron and missing transverse energy. These searches have placed limits on new heavy-quark masses, mQ* > 1.92 TeV, and on the masses of the W’ and Z’ predicted by a number of different models, mW’ > 2.15 TeV and mZ’ > 1.83 TeV, all at the 95% CL.

The 2011 data began to open up the search for the Standard Model Higgs boson. To cover the entire mass range, from about 110 GeV (the limit from the Large Electron–Positron collider is 114.4 GeV at 95% CL) to the highest possible values (around 600 GeV), this exploration was conducted in several final states: H→γγ; ττ; WW(*)→lνlν, lνqq; ZZ(*)→llll, llνν, llqq; as well as H→bb produced in association with a W or Z.

In the analysis dedicated to the search for H→γγ processes, events with pairs of high-pT photons were selected and the photons combined to reconstruct the invariant mass. The accurate measurement of the direction of flight of the photons is crucial for obtaining high mass-resolution and hence a strong rejection of background processes, in particular, QCD diphoton production. This is possible in ATLAS thanks to the longitudinal segmentation of the electromagnetic calorimeter, which in addition allows a strong rejection of fake photons produced by QCD jets. This channel alone allowed exclusion at the level of about three times the cross-section predicted by the Standard Model.

The analysis dedicated to the search for H→WW(*)→lνlν was based on the selection of high-pT lepton pairs, electrons and muons, produced in association with large missing transverse energy. Two independent final-state classes were considered, depending on whether zero or one high-pT jet was reconstructed in the same event. The analysis revealed no excess of events, excluding the production of a Standard Model Higgs boson with mass in the interval 154 < mH < 186 GeV at 95% CL and thereby enlarging the mass region already excluded by the Tevatron.

(Figure 6: Higgs boson production cross-section.)

The golden channel H→ZZ(*)→4l is based on a conceptually simple analysis: the selection of events with isolated dimuon or dielectron pairs associated with the same hard-scattering proton–proton vertex. ATLAS found the rate of four-lepton events to be fully consistent with the expectations from background; the analysis excludes a Higgs boson produced with a cross-section close to that predicted by the Standard Model throughout nearly the entire mass interval from 200 to 400 GeV. No evidence for an excess of events has been found in any of the other analysed channels, allowing 95% CL exclusion limits to be placed for each of them.

Last, ATLAS used complex statistical methods to combine the information from all of these Higgs decay channels into a single limit. While the Standard Model does not predict the mass of the Higgs boson, it does predict the production cross-section and branching ratios once the mass is known. Figure 6 shows, as a function of the Higgs mass, the Higgs boson production cross-section excluded at 95% CL by ATLAS, in terms of the Standard Model cross-section. If the solid black line (the observed limit) dips below 1, then the data exclude the production of the Standard Model Higgs at 95% CL at that mass. If the solid black line is above 1, the production of a Standard Model Higgs cannot be excluded at that mass. As figure 6 shows, the data exclude the Standard Model Higgs boson in the mass range 146<mH<466 GeV at 95% CL, with the exception of the mass intervals 232<mH <256 GeV and 282<mH <296 GeV.
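
The logic of reading such a limit plot can be made concrete with a toy example; the mass points and ratios below are invented, not the ATLAS values:

```python
# Invented map: mass in GeV -> observed 95% CL limit / SM cross-section.
observed_limit_over_sm = {150: 0.8, 200: 0.6, 240: 1.1, 300: 0.7, 470: 1.3}

# A Standard Model Higgs is excluded at the masses where the ratio < 1.
excluded = sorted(m for m, r in observed_limit_over_sm.items() if r < 1.0)
print(excluded)  # [150, 200, 300] in this toy example
```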

This article has been able to present only a few of the ATLAS results using up to the first inverse femtobarn of data. In 2011 the experiment collected more than 5 fb–1, so the collaboration is working hard on the analysis of the new data in time for the presentation of new results at the “winter” conferences early in 2012. The results presented here draw on only a few per cent of the total data ultimately expected from the LHC. We look forward to many more exciting and impressive years.

For information about all these results and more, see https://twiki.cern.ch/twiki/bin/view/AtlasPublic.

ARIS 2011 charts the nuclear landscape

The roots of the first conference on Advances in Radioactive Isotope Science, ARIS 2011, go back to CERN in 1964, when the then director-general Victor Weisskopf called for proposals for on-line experiments to study radioactive nuclei at the 600 MeV synchrocyclotron. Why this should be done – and how – became the subject of a conference held in Lysekil, Sweden, in 1966 and a year later experiments began at ISOLDE, CERN’s Isotope Separator On Line. Following this successful start, in 1970 CERN organized a first meeting on nuclei far from stability in Leysin, Switzerland.

Since then there have been regular conferences within the field, with more specialized meetings arising hand in hand with increasingly sophisticated technical developments (see box). Three years ago the community felt that the time was ripe to streamline the conferences by merging all of the physics into a single meeting held every three years. The result was that at the end of May this year some 300 physicists met in the beautiful medieval town of Leuven in Belgium to attend ARIS 2011. The success of the meeting, with its excellent scientific programme, indicates that this was the perfect decision.

Over the past two decades the experimental possibilities for studying exotic nuclear systems have increased dramatically thanks to impressive technical developments for the production of rare nuclear species, both at rest and as energetic beams. New sophisticated detection methods and data-acquisition techniques with on- and off-line analysis methods have also been developed. The two basic techniques now used at laboratories worldwide are the isotope separator on-line (ISOL) and in-flight production methods, with several variations.

Conference highlights

The conference heard the latest news about plans to make major improvements to existing facilities or to build new facilities, offering new research opportunities. The review of the first results from the new major in-flight facility, the Radioactive Isotope Beam Factory at the RIKEN research institute in Japan, was particularly exciting. The production of 45 new neutron-rich isotopes together with results from the Zero-Degree Spectrometer and the radioactive-ion beam separator, BigRIPS, gave a glimpse of the facility’s quality. Future installations, such as the Facility for Antiproton and Ion Research (FAIR) at GSI, SPIRAL2 at the GANIL laboratory, the High Intensity and Energy ISOLDE at CERN, the Facility for Rare Isotope Beams at Michigan State University (MSU) and the Advanced Rare Isotope Laboratory at TRIUMF were also discussed, together with the advanced plans to build EURISOL, a major new European facility complementary to FAIR.

The nuclear mass is arguably the most basic information to be gained for an isotope. Its measurement has involved various techniques, but a paradigm shift came with the development of mass spectrometers based on Penning traps and such devices are now coupled to the majority of radioactive-beam facilities. This has led to mass-determinations of unprecedented precision for isotopes in all regions of the nuclear chart, making it possible in effect to walk round the mass “landscape” and scrutinize its details (figure 3).
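
The principle behind the precision is simple: an ion of charge q in a magnetic field B circles at the cyclotron frequency νc = qB/(2πm), so comparing the frequency of the ion of interest with that of a well known reference ion in the same trap gives the mass ratio directly, with B cancelling. A minimal sketch with invented numbers:

```python
def mass_from_frequencies(nu_ion, nu_ref, m_ref, q_ion=1, q_ref=1):
    """m_ion = (q_ion/q_ref) * (nu_ref/nu_ion) * m_ref, from cyclotron
    frequencies measured in the same trap; the field B cancels."""
    return (q_ion / q_ref) * (nu_ref / nu_ion) * m_ref

# Invented frequencies (Hz) and a reference mass in atomic mass units.
print(f"{mass_from_frequencies(1.00e6, 1.14e6, 100.0):.2f}")  # 114.00
```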

Recent results from the ISOLTRAP mass spectrometer at CERN, which has been in operation for more than 20 years, have a precision of the order of 10–8 for the masses of isotopes with half-lives down to milliseconds. The first determination of the masses of neutron-rich francium isotopes, of which the mass of 228Fr (T1/2 = 39 s) is a notable example, was presented at ARIS 2011. The JYFLTRAP group, using the IGISOL facility at the physics department of the University of Jyväskylä (JYFL), presented masses for about 40 neutron-rich isotopes in the medium-mass region. The SHIPTRAP spectrometer at GSI has made measurements of masses towards the region of super-heavy elements; 256Lr, produced at a rate of only two atoms a minute, is the heaviest nuclide studied so far. The TITAN spectrometer at TRIUMF has boosted precision by “breeding” isotopes to higher charge-states – for example, in a new measurement of the mass of the super-allowed β-emitter 74Rb, which is relevant to the unitarity of the Cabibbo–Kobayashi–Maskawa (CKM) matrix. Results from isochronous mass-spectroscopy with the CSRe storage ring at the National Laboratory of Heavy Ion Research in Lanzhou were also presented. Ion traps are now routinely used as a key instrument for cooling and bunching radioactive beams. This gives an improvement of several orders of magnitude in the peak-to-background ratio in laser-spectroscopy experiments, or can be used prior to post-acceleration.

The determination of nuclear matter and charge radii has been important to the progress of radioactive-beam physics. The observation of shape co-existence in mercury isotopes at ISOLDE was the starting point for impressive developments, with lasers playing the key role. The most recent results from the same area of the nuclear chart are measurements using the Resonance Ionization Laser Ion Source at ISOLDE of isotope shifts and charge radii for the isotope chain 191–218Po. Another demonstration of the state of the art was shown in the determination of the charge radius of 12Be using collinear laser spectroscopy based on a frequency comb together with a photon-ion coincidence technique. Electron scattering from radioactive beams will be the next step for investigating nuclear shapes; the ELISe project at GSI and SCRIT at RIKEN are examples of such plans.

The determination of matter radii by transmission techniques, pioneered at Berkeley in the mid-1980s, led to the discovery of halo states in nuclei. These are well known today but the main data are limited to the lightest region of the nuclear chart. A step towards heavier cases was presented at ARIS in new data from RIKEN, where results from RIPS and BigRIPS indicate a halo state for 22C and 31Ne and maybe also for 37Mg.

The use of laser spectroscopy to measure charge radii, nuclear spins, magnetic moments and electric quadrupole moments has been extremely successful over the years. New results from the IGISOL facility – mapping the sudden onset of deformation at N = 60 – and from the ISOLDE cooler/buncher ISCOOL – for copper and gallium isotopes – were highlighted at the conference. The two-neutron halo nucleus 11Li continues to attract interest both theoretically and experimentally; a better determination of the ratio of the electric quadrupole moments of the mass-11 and mass-9 isotopes was needed, and a measurement at TRIUMF based on a β-detected nuclear-quadrupole-resonance technique has now yielded a value of Q(11Li)/Q(9Li) = 1.077(1). Here, the cross-fertilization between beam and detector developments has led to laser-resonant ionization becoming an essential ingredient in the production of pure radioactive, sometimes isomeric, beams.

Nuclear-structure studies of exotic nuclei were the topic of many contributions at ARIS. There is progress on the theoretical side, with large-scale shell-model calculations in the vicinity of 78Ni leading to a unified description of neutron-rich nuclei between N = 40 and N = 50. The evolution of collectivity along the N = 40 isotones has provided many interesting experimental results. From the strongly deformed N = Z nucleus 80Zr, collectivity decreases rapidly towards 68Ni, whose high-lying 2+ state at 2.03 MeV suggests a doubly magic character. Going to 64Cr, there is a new deformed region, illustrated by a 2+ state at 470 keV, and research at the National Superconducting Cyclotron Laboratory at MSU has found an enhanced collectivity for 78Sr, with a quadrupole deformation parameter of β2 = 0.44.

Many of the talks at ARIS addressed the “island of inversion”. A recent result from REX-ISOLDE identifies an excited 0+ state in 32Mg, illustrating shape coexistence at the borders of the island. Many new results – a rotational band in 38Mg observed by BigRIPS, isotope shifts for 21–32Mg measured at ISOLDE, β-decay for chromium isotopes from Berkeley and shape-coexistence in 34Si and 44S – add to the understanding of this interesting region of the nuclear chart. A new island of inversion, indicated by data for 80–84Ga from the ALTO facility in Orsay, was also discussed.

Continuing with nuclear structure, data from GANIL and its MUST2 array on d(68Ni, p)69Ni give access to the d5/2 orbital, which is crucial for understanding shell structure and deformation in this mass region. The reaction d(34Si, p)35Si shows a reduction of the spin-orbit splitting that can be traced to a central depletion of the nuclear matter density – a “bubble nucleus” – a topic also discussed in a theory talk.

The doubly magic nucleus 24O has attracted interest for a decade, from both experimental and theoretical viewpoints. At ARIS, the coupled-cluster approach was presented as an ideal compromise between computational cost and numerical accuracy in theoretical models, while the absence of bound oxygen isotopes between 24O and the classically expected doubly magic nucleus 28O presents a theoretical challenge.

Experimentally, there is an impressive series of data – over a wide range of elements – from the MINIBALL array at ISOLDE. One of the highlights here was the observation of shape coexistence in the lead region. A theory talk pointed out that the nuclear energy-density functional approach, both for mean-field and beyond-mean-field applications, is an efficient tool for calculations on medium-mass and heavy nuclei.

Early experiments with radioactive beams revealed exotic decay-modes such as β-delayed particle emission. Today these processes are well understood and used as workhorses to learn about the structure of exotic nuclei. The study of β-delayed three-proton emission from 43Cr and two-proton radioactivity from 48Ni using an Optical Time Projection Chamber at MSU was also presented at ARIS. Here it is clear that in future the study of the most exotic decay modes will use active targets, such as in the Maya detector developed at GANIL and the ACTAR-TPC project being planned by GANIL together with MSU. An interesting new result concerns the observation of β-delayed fission-precursors in the Hg-Tl region, where an unexpected asymmetric fragment-distribution has been observed for the β-delayed fission of 180Tl.

Unbound nuclei or resonance states are sometimes dismissed as “ghosts” without any physical significance. However, developments over the past 5–10 years have provided a huge amount of data, so that most of the previously empty spots on the nuclear chart for the light elements are now filled. The production of 10He and 12,13Li from proton-knockout reactions on 11Li and 14Be, respectively, is a particularly spectacular case. The knockout of a strongly bound proton from the barely bound nucleus 14Be results in an unbound 13Li system, observed as a 11Li nucleus together with two neutrons showing features that can only be attributed to 13Li. Many of these resonance states might be populated in transfer reactions in inverse kinematics, in which the roles of beam and target are reversed: the exotic nuclei form an energetic beam directed at a target of the species that would conventionally have been the beam. The HELIOS spectrometer, which will use neutron-rich beams from the CARIBU injector at the Argonne Tandem Linear Accelerator System, is a model for what might develop at many facilities in the future.

The super-heavy-element community was represented in several talks at ARIS. Having produced all elements up to Z = 118, the next step is to tackle the Z = 120 barrier, an exciting goal that could become a reality with reactions such as 54Cr+248Cm. Nuclear spectroscopy is also climbing towards ever higher mass numbers and elements, as demonstrated by data from JYFL for 254No. One exciting talk concerned the chemical identification of isotopes of element 114 (287,288Uuq), which is found to belong to group 14 (in modern notation) in the periodic table – the group that contains lead, tin, germanium, silicon and carbon.

The acquisition of data pertinent to nuclear astrophysics has grown tremendously thanks to access to nuclei in the relevant regions of the nuclear chart. Results include the study at JYFL and at the Nuclear-physics Accelerator Institute (KVI), Groningen, of the β-decay of 8B for the solar-neutrino problem, and the work at ISOLDE, JYFL and KVI on the β-decays of 12N and 12B, which are important for the production of 12C in astrophysical environments. Data from ISOLDE and JYFL on the neutron-deficient nuclei 31Ar and 23Al, relevant for explosive hydrogen burning, were also discussed, as were results from MSU relating to the hot carbon–nitrogen–oxygen cycle and the αp-, rp- and r-processes in nucleosynthesis. GANIL has results on the reaction d(60Fe,p)61Fe, which is relevant for type II supernovae, while the Radioactive Ion Beam facility in São Paulo, Brazil, has data on the p(8Li,α)5He reaction. Calculations for proton scattering on 7Be in a many-body approach, combining the resonating-group method with the ab initio no-core shell model, were also described at the conference.

Exotic nuclei can also provide information about fundamental symmetries and interactions. The painstaking collection of data over decades has provided an extremely sensitive test of the unitarity of the top row of the CKM matrix. Today there are precise data for 13 super-allowed β-emitters, which give a value of 0.99990(60) for this quantity. In this context, there are plans for measurements with the Magneto Optical Trap at Argonne of β-neutrino correlations for 6He and the electric dipole moment for 225Ra. The high-precision set-ups – WITCH at CERN, LPC Trap at GANIL and WIRED at the Weizmann Institute – were also discussed at the conference. The claim is that this kind of experiment – the high-precision frontier – will to some extent complement the high-energy frontier in understanding the deepest secrets of nature.
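
The unitarity test itself is a one-line sum; here is a sketch using CKM element values roughly of that era, for illustration only:

```python
# Top-row CKM unitarity: |Vud|^2 + |Vus|^2 + |Vub|^2 should equal 1.
# |Vud| is where the super-allowed beta-decay data enter; the values
# below are approximate, illustrative inputs.
Vud, Vus, Vub = 0.97425, 0.2252, 0.00389
row_sum = Vud**2 + Vus**2 + Vub**2
print(f"{row_sum:.5f}")  # ~0.99989, consistent with 0.99990(60)
```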

Finally, a review of the different techniques using radioactive isotopes in solid-state physics presented the current state of the art, together with some recent results. This work was pioneered at CERN and has over the years become an important ingredient at many facilities.

In summary, ARIS 2011 turned out to be a successful merger of the former ENAM and RNB conferences (see box). The talks, supported by an excellent poster session, covered the field perfectly. The talks are available on the conference website and the organizers had the excellent idea of putting the posters there too – a “first” that deserves to be followed in future.

NA61/SHINE: more precision for neutrino beams

Accelerator neutrino beams are currently the object of intense discussion and development. They provide a necessary tool for the detailed study of neutrino oscillations and in particular for the observation of potential CP-violating effects, which arise from the interference of transitions among the three known species of neutrino. Neutrino interaction cross-sections are tiny, so the challenge in studying their properties has been to produce ever-increasing beam intensities. The next challenge in neutrino physics will be to establish precisely the parameters of the oscillations and then compare the oscillations of neutrinos with those of anti-neutrinos (or the oscillation probability as a function of neutrino energy) to search for CP violation. This will require precise measurements of the transitions of neutrinos into each other, which will in turn demand a much better knowledge of the neutrino beams.

At present – and probably for the next decade – neutrino beams are generated by the conventional technique: a beam of multi-giga-electron-volt protons, as powerful as possible (up to around 500 kW beam power), is directed at a target to produce a large number (10¹² or more) of hadrons, mainly pions with a small admixture (5–10%) of kaons. These are then focused in the direction desired for the neutrino beam and they decay – producing neutrinos – in a decay tunnel.
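
The kinematics behind this scheme explains why the neutrino energy tracks the parent pion energy: for π+→μ+ν from a relativistic pion, a neutrino emitted forward carries a fixed fraction of the pion energy. A minimal sketch:

```python
M_PI, M_MU = 0.13957, 0.10566  # charged-pion and muon masses in GeV

def forward_neutrino_energy(e_pi):
    """E_nu ~ (1 - m_mu^2/m_pi^2) * E_pi ~ 0.43 * E_pi for a neutrino
    emitted along the flight direction of a relativistic pion."""
    return (1.0 - (M_MU / M_PI) ** 2) * e_pi

print(f"{forward_neutrino_energy(5.0):.2f} GeV")  # ~2.13 GeV from a 5 GeV pion
```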

In the absence of a good theory of hadronic interactions, a precise prediction of the properties of such neutrino beams requires measurements of particle production at an unprecedented level of precision. The role of the NA61/SHINE experiment at CERN’s Super Proton Synchrotron (SPS) is to perform these hadron production measurements. More specifically, it has taken data for the T2K experiment in Japan, both with a thin carbon target and a full replica of the target used in T2K. These data have already proved important for the extraction of the first results on electron-neutrino appearance and muon-neutrino disappearance in T2K. As statistics increase in T2K, they will become more and more essential.

The collaboration behind the SPS Heavy Ion and Neutrino Experiment (SHINE, approved at CERN as NA61) is an unlikely marriage between aficionados of the heaviest and lightest beams on offer. Ions as heavy as lead nuclei have been accelerated in the SPS, while neutrinos have the lightest mass (now famously non-zero) of all particles apart from photons. So what is the unifying concept between these communities that are a priori so different?

The NA49 detector in CERN’s North Area offers excellent tracking with its immense set of time-projection chambers (TPCs), time-of-flight (TOF) detectors and flexible beamline. To perform systematic measurements at energies at the onset of quark–gluon plasma creation, the heavy-ion physicists were interested in upgrading the detector to allow higher event statistics and lower systematic uncertainties. At the same time, neutrino physicists, attracted by the extensive coverage of the detector, were interested in running it in a simple configuration, but also with high statistics, so as to have the first data ready in time for the start of T2K.

The main upgrades relevant for all of the NA61/SHINE physics programmes concerned the TPC read-out, an extension of the TOF detectors and an upgrade of the trigger and data-acquisition system. Figure 1 shows the upgraded detector. Its acceptance fully covers the kinematic region of interest for T2K.

The NA61/SHINE experiment was approved in April 2007 and took data in a pilot run the following September, with 600,000 triggers on the thin carbon target and 200,000 triggers on the replica (long) T2K target. More extensive data-taking for the T2K physics programme took place in 2009 and 2010, both with thin (6 million triggers in 2009) and long targets (10 million triggers in 2010). In parallel, data were recorded for the NA61/SHINE heavy-ion and cosmic-ray programmes.

As a first priority, the cross-sections for producing charged pions from 30 GeV protons on carbon were measured with the thin-target data taken in 2007 (Abgrall et al. 2011). The systematic errors are typically in the range of 5–10% and smaller than the statistical errors. These data have already been used for an improved prediction of the neutrino flux in T2K (Abe et al. 2011). Furthermore, they also provide important input to improve the hadron-production models needed for the interpretation of air showers initiated by ultra-high-energy cosmic rays.

However, these first NA61/SHINE measurements provide only a part of what is needed to predict the neutrino flux in T2K. A substantial fraction of the high-energy flux, and in particular the electron-neutrino contamination, originates from the decay of kaons. Charged kaons are readily identified in NA61/SHINE by the suite of particle-identification techniques – dE/dx in the TPCs and the TOF in the upgraded detector (see figure 2) – and a first set of cross-sections has already been produced. Neutral kaons can be reconstructed using the V0-like topology of K0S→π+π– decays.
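
The time-of-flight part of the identification reduces to reconstructing the particle mass from momentum and velocity; a minimal sketch with an invented measurement:

```python
import math

def tof_mass(p, beta):
    """m = p * sqrt(1/beta^2 - 1), with momentum p in GeV/c and velocity
    beta from the time-of-flight system; combined with dE/dx in the TPCs
    this separates pions, kaons and protons."""
    return p * math.sqrt(1.0 / beta**2 - 1.0)

# An invented track: p = 3 GeV/c, beta = 0.9868 -> close to the kaon mass.
print(f"{tof_mass(3.0, 0.9868):.2f} GeV")  # ~0.49
```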

A large fraction (up to 40%) of the neutrinos originates from particles produced by re-interactions of secondary particles in the target, which for T2K is 90 cm long. This is difficult to calculate precisely and it motivates a careful analysis of the data taken with the long target. Long-target data are notoriously more difficult to reconstruct and analyse, but they provide much more directly the information needed for extracting the neutrino flux. The NA61/SHINE collaboration presented a pilot analysis at the NUFACT meeting at CERN in early August (Abgrall 2011). The ultimate precision will come from the full analysis of the long-target data taken in 2010. The collaboration is working hard to complete these analyses in time for the high-statistics measurements that will become possible in T2K when the experiment resumes data-taking after recovering from the damage caused by the massive earthquake in north-eastern Japan in March this year.

Trends in isotope discovery

It all started innocently enough with a review article I wrote in 2004 about the nuclear driplines, which described the exploration of the most neutron- and proton-rich isotopes (Thoennessen 2004). The article included tables listing the first observation of each isotope along the proton- and neutron-dripline. The idea to expand this list to cover all isotopes lingered for a few years until in 2007 I mentioned it to an undergraduate student as a possible research project. At the beginning we did not appreciate the magnitude of the project; after all, there are more than 3000 isotopes presently known. However, with the help of many undergraduate students performing elaborate literature searches and carefully judging the merits of the individual papers we continued, even though we extrapolated that the project would take about 10 years to reach completion.

We have described details of the discovery of each isotope in short paragraphs, arranged by elements, which are published in a series of articles in Atomic Data and Nuclear Data Tables. In a summary table, the first author, year, journal, laboratory, country and method of discovery are presented. Now, only four years after we started, the project is almost completed. We finished the initial discovery assignment for all isotopes and are currently finalizing the paragraphs for the last four elements: actinium, thorium, protactinium and uranium.

The master table of all elements is a rich source of interesting information. Along the way it has been fascinating to see how not only the physics and technology changed over time, but also the style of the papers. For example, the average number of authors per article increased from 1.1 in 1930 to 16.4 in 2000.

One piece of information – the number of isotopes discovered by the different laboratories around the world as a function of time – was recently highlighted in a Nature News article and has drawn a lot of attention over the past few weeks (Samuel Reich 2011). The article reveals the labs and individuals that have discovered the largest number of new isotopes. The results show that while Lawrence Berkeley National Laboratory leads by almost a factor of two, other laboratories in Japan and Europe – most notably GSI in Germany – have made most of the new discoveries in the past couple of decades. A graph displaying the number of isotopes discovered per laboratory as a function of time was featured as the “Trendwatch” in a recent issue of Nature (Trendwatch 2011). The graph seems to indicate that the top five laboratories are Berkeley, Cavendish, GSI, RIKEN and JINR in Dubna; however, RIKEN appears there only because of its large number of recent discoveries. In reality, CERN’s ISOLDE has played a pioneering role in the discovery of isotopes, especially with the isotope separator on-line (ISOL) technique, and ranks number five on the list.

Now why is the information contained in the database significant? The discovery of isotopes has a long history beginning with the discovery of radioactivity of uranium (later identified as 238U) by Becquerel in 1896. The discovery of new isotopes is closely linked to developments of new techniques and new accelerators (Thoennessen and Sherrill 2011). Creating and detecting new isotopes is the first prerequisite to being able to study them, automatically putting the laboratories that produce the most exotic isotopes in the best position for doing the most exciting science with these isotopes. The techniques to produce, separate and identify these isotopes are also critical to make and deliver clean beams of less exotic isotopes at higher intensities, which can then be used to explore the properties of these nuclei. The recent conference on Advances in Radioactive Isotope Science, ARIS 2011, highlighted not only the tremendous interest in the field and the most recent advances in physics but also the technical developments making these experiments with exotic isotopes possible (ARIS 2011 charts the nuclear landscape).

The data presented in the Trendwatch indicate that the balance of power pushing the field forward has shifted away from the US. The article did not stress that 2010 was the most productive year yet for the discovery of isotopes: for the first time, more than 100 isotopes were discovered in a single year. This points to a renaissance of the field, driven by the start-up of a new accelerator system at RIKEN in Japan and new technical developments at GSI. During the past 20 years most new isotopes were discovered at projectile-fragmentation facilities, so the next major step will be the new accelerators currently being designed for the Facility for Antiproton and Ion Research (FAIR) at GSI and the Facility for Rare Isotope Beams (FRIB) at Michigan State University in the US. FRIB is absolutely critical for the US to play a leading role in nuclear physics in the future.

Superconductivity and the LHC: the early days

(Figure 1: comparison of dipoles.)

As the 1970s turned into the 1980s, two projects at the technology frontier were battling it out in the US accelerator community: the Energy Doubler, based on Robert Wilson’s vision to double the energy of the Main Ring at Fermilab; and Isabelle (later the Colliding Beam Accelerator) at Brookhaven. The latter was put in question by the difficulty of increasing the magnetic field from 4 T to 5 T – which turned out to be much harder than originally thought – and eventually gave way to Carlo Rubbia’s idea to transform CERN’s Super Proton Synchrotron into a proton–antiproton collider, allowing the first detection of the W and Z particles. Fermilab’s project, however, became a reality. Based on 800 superconducting dipole magnets with a field in excess of 4 T, it involved the first ever mass-production of superconductor and represented a real breakthrough in accelerator technology. For the first time, a circular accelerator had been built to work at a higher energy without increasing its radius.

When the Tevatron began operation at 512 GeV in 1983, Europe was just starting to build HERA at DESY. This electron–proton collider included a 6 km ring of superconducting magnets for the 820 GeV protons and it came into operation in 1989. The 5 T dipoles for HERA were the first to feature cold iron and – unlike the Tevatron magnets, which were built in house – they were produced by external companies, thus marking the industrialization of superconductivity.

Meanwhile the USSR was striving to build a 3 TeV superconducting proton synchrotron (UNK), which was later halted by the collapse of the Soviet Union, while at CERN the idea was emerging to build a Large Hadron Collider in the tunnel constructed for the Large Electron–Positron (LEP) collider (CERN Courier October 2008 p9). However, the US raised the bid with a study for the “definitive machine”. The Superconducting Super Collider (SSC), which was strongly supported by the US Department of Energy and by President Reagan, would accelerate two proton beams to 20 TeV in a ring of 87 km circumference with 6.6 T superconducting dipoles. With this size and magnetic field, the SSC would require decisive advances in superconductors as well as in other technologies. When the then director-general of CERN, Herwig Schopper, attended a high-level official meeting in the US and asked what influence the Europeans could have on the scientific and technical goals by joining the project, he was told “none – either you join the project as it is or you are out”. This ended the possibility of collaboration and the competition began.

To compete with the SSC, the LHC had to fight on two fronts: increase the magnetic field as much as possible so as to reduce the handicap of the relatively small circumference of the LEP tunnel; and increase the luminosity as much as possible to compensate for the inevitable lower energy. In addition, CERN had to cope with a tunnel with a cross-section that was tiny for a hadron collider, which many considered a “poisoned gift” from LEP. However, the interest for young physicists and engineers lay in the “impossible challenges” that the LHC presented.

To begin with, there was the 8–10 T field in a dipole magnet. Such a large step with respect to the Tevatron would require both the use of a large superconducting cable to carry 13 kA in operating conditions of 10 T – almost double the capability of existing technology – and cooling by superfluid helium at 1.8–1.9 K. Never previously used in accelerators, superfluid-helium cooling had been developed for TORE Supra, the tokamak project led by Robert Aymar, although on a smaller scale. Then, to fit the existing LEP tunnel, the magnets would have to be of an innovative “two-in-one” design – first proposed by Brookhaven but discarded by US colleagues for the SSC – in which two magnetic channels are hosted in the iron yoke within a single cold mass and cryostat. In this way, a 1 m diameter cryostat could house two magnets, while the geometry of the SSC (with separate magnets but with 30% lower field than the LHC) simply could not fit in the LHC tunnel. Figure 1 shows the main-dipole cross-sections for the various hadron machines.

A critical milestone

In 1986, R&D on the LHC started under the leadership of Giorgio Brianti, quietly addressing the three issues specific to the LHC (high field, superfluid helium and two-in-one), while relying on the development done for HERA and especially for the SSC for almost all of the other items that needed to be improved. The high field was the critical issue and had to be tested immediately. Led by Romeo Perin and Daniel Leroy, CERN produced the first LHC coil layout and provided the first large superconducting cable to Ansaldo Componenti. This company then manufactured on its own a 1-m long dipole model – single bore, without a cryostat – that was tested at CERN in 1987. Reaching a field of 9 T at 1.8 K, it proved the possibility of reaching the region of 8–10 T (CERN Courier October 2008 p19). This was arguably the most critical milestone of the project, because it gave credibility to the whole plan and began to cast doubt on the strategy for the SSC.

(Figure 2: structure of the superconducting cable.)

These results were obtained with niobium-titanium alloy (Nb-Ti), the workhorse of superconductivity. CERN also had a parallel development with niobium-tin (Nb3Sn), which could have produced a slightly higher field at 4.5 K, with standard helium cooling. This development, pursued with the Austrian company Elin and led by CERN’s Fred Asner, produced a 1-m long 9.8 T magnet and also a 10.1 T coil in mirror configuration, the first accelerator coil to break the 10 T wall. However, in 1990 the development work on Nb3Sn was stopped in favour of the much more advanced and practical Nb-Ti operating at 1.9 K. This was a difficult decision, as Nb3Sn had a greater potential than Nb-Ti and would avoid the difficulty of using superfluid helium, but it was vitally important to concentrate resources and to have a viable project in a short time. The decision was similar to that taken by John Adams in the mid-1970s to abandon the emerging superconducting technology in favour of more robust resistive magnets for CERN’s Super Proton Synchrotron.

For the development of the superconducting cable there were three main issues. First, it should reach a sufficient critical current density with a uniformity of 5–10% over the whole production, which also had to be guaranteed in the ratio between the superconductor and the stabilizing copper matrix, illustrated in figure 2. The critical current was to be optimized at 11 T at 1.9 K, maximizing the gain when passing from 4.2 to 1.9 K. The second issue was to reduce the size of the superconducting filaments to 5 μm without compromising the critical current. This required, among other features, the development of a niobium barrier around Nb-Ti ingots. Third was to control the dynamic (ramping up) effect in a large cable, as some effects vary as the cube of the width.
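
The cube-of-the-width remark refers to interstrand coupling during ramping, whose losses grow steeply with cable width. The toy scaling below only illustrates why a cable wide enough for 13 kA was a worry; it is not a magnet-design formula:

```python
def relative_coupling_effect(width, width_ref=1.0):
    """Toy scaling: some interstrand coupling (ramping) effects grow
    roughly as the cube of the cable width, so a modestly wider cable
    pays a steep dynamic-effects penalty."""
    return (width / width_ref) ** 3

print(relative_coupling_effect(1.5))  # 3.375: 1.5x wider, ~3.4x the effect
```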

Again, the strategy was to concentrate on the specific LHC issues – the large cable and the critical-current optimization at 1.9 K – and to rely on the SSC’s more advanced development for the other issues. There is, indeed, a large debt that CERN owes to the SSC project for the superconductor development. However, when the SSC project was cancelled in 1993, the problem of eliminating the dynamic effect arising from the resistance between the strands composing the cable was still unresolved – and it became urgent in view of the results on the first long magnets in 1994 and after. CERN then carried out intense R&D and found a solution suitable for mass production relatively late, at the end of the 1990s. This involved controlled oxidation, after cable formation, of the thin layer of tin-silver alloy with which all the copper/Nb-Ti strands were coated – a technology that was a step beyond the SSC development.

Returning to the magnet development: after the success of the 1987 model magnet, which was replicated by another single-bore magnet that CERN ordered from Ansaldo, the R&D route split into two branches. One concerned the manufacture of 1-m long full-field LHC dipole magnets to prove the concept of high fields in the two-in-one design, with superconducting cable and coil geometry close to the final ones. A few bare magnets, called Magnet Twin Aperture (MTA), were commissioned from European industry (Ansaldo, the Jeumont-Schneider consortium, Elin and Holec) under the supervision of Leroy at CERN.

The second line of development lay in proving the two-in-one concept in long magnets and a superfluid-helium cryostat. This involved assembling superconducting coils from the HERA dipole production, which had ended in 1988, in a single cold mass and cryostat, the Twin Aperture Prototype (TAP). The magnet, built under the supervision of Jos Vlogaert, with the cryostat and cold mass under Philippe Lebrun, was tested successfully in 1990, reaching 5.7 T at 4.2 K and 7.3–8.2 T at 1.8 K – thus supporting the choices of the two-in-one magnet design, superfluid-helium cooling and the new cryostat design.

At the same time, in the years 1987–1990, the LHC dipole was designed, featuring an extreme variation: the “twin” concept, in which the two coil apertures are fully coupled, i.e. with no iron between the two magnetic channels (figure 3). We now take this design for granted, but at the time there was scepticism within the community (especially across the Atlantic); the design was thought to be much more vulnerable to perturbations because of the coupling, and to present an irresolvable issue with field quality. It is to the great credit of the CERN management, and especially of Perin, who for a long time was head of the magnet group, that they defended this design with great resilience – because among its many advantages it also brought an important saving of about 15% in cost.

(Figure 3: schematic of early options for the LHC dipole.)

The results of the first sets of twin 1-m long magnets came in 1991–1992. Some people were disappointed because they felt that the results fell short of the 10 T field “promised” in the LHC “pink book” of 1990. However, anyone who knows superconductivity greatly appreciated that the first generation of twins went well over 9 T. This was already a high field and only 5–10% less than expected from the so-called “short sample” (the theoretical maximum inferred by measuring the properties of a short 50–70 cm length of the superconducting cable); accelerator magnets normally work at 80%, or less, of the short-sample value. The results of the 1-m LHC models also made it clear that the cable’s mechanical and electrical characteristics and the field quality of the magnet (both during the ramp and at the flat top) were not far from the quality required for the LHC.
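
The arithmetic behind that appreciation is worth making explicit; the field values below are illustrative, not the measured ones:

```python
# Quench field as a fraction of the short-sample (theoretical maximum)
# field of the cable. Illustrative numbers only.
short_sample_field = 10.0  # tesla, inferred from a short cable sample
first_quench_field = 9.2   # tesla, a first-generation twin-model result

fraction = first_quench_field / short_sample_field
# 92%, versus the ~80% at which accelerator magnets normally run.
print(f"{fraction:.0%} of short sample")
```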

A final step would be to combine the two branches of the development work and put together magnets of the twin design with a 10 m cold mass in a 1.8 K cryostat to demonstrate that full-size, LHC dipoles of the final design were feasible. However, the strict deadline imposed by the then director-general, Rubbia, dictated that the LHC should have the same time-scale as the SSC and be ready at the end of 1999. This meant that CERN was forced to launch the programme for the first full-size LHC prototypes in 1988, i.e. well before the end of the previous step, the construction in parallel of 1 m LHC MTA models and the 10-m long TAP.

At this point, CERN was just finishing construction of LEP and beginning work on the industrialization of the components for LEP2; it was a period of shortage of personnel and financial resources (not a new situation). So Brianti and collaborators devised a new strategy: for the first time CERN would seek strong participation from national institutes in the member states in accelerator R&D and construction. In 1987–1988 the president of INFN, Nicola Cabibbo, and CERN’s director-general, Schopper, agreed – with a simple exchange of letters (everything was easier in those days) – that INFN would make an exceptional contribution to the LHC R&D phase. The total value was about SwFr12 million (1990 values), to be spread over eight years.

Towards real prototypes

In 1988 and 1989, INFN and CERN ordered LHC-type superconducting cables for long magnets, and in 1989 INFN ordered two 10-m long twin dipoles from Ansaldo Componenti in Italy, some nine months before CERN had the budget to order three long dipoles, one from Ansaldo and two from Noell, a German company that had been involved in the construction of HERA quadrupoles. A fourth CERN long magnet, without the twin design, was ordered from the newly formed Alstom-Jeumont consortium (even at CERN some people still doubted the effectiveness of the twin design). The effort was decisive: by 1993 the magnets could be qualified by individual tests and then put into a string – dipoles and quadrupoles connected in series to simulate the behaviour of a basic LHC cell.

In parallel with the INFN effort, the French CEA-Saclay institute established a collaboration with CERN and took over the construction of the first two full-size superconducting quadrupoles for the LHC. While CERN provided the specifications and all of the magnet components (including the superconducting cable), CEA carried out the full design and assembly of these quadrupoles, for a value of a few million Swiss francs over the eight years of R&D (CERN Courier January/February 2007 p25). This was the start of a long collaboration; the French also continued to support the project after the initial R&D, throughout the industrialization and construction phases, with an in-kind contribution on quadrupoles, cryostats and cryogenics worth about SwFr50 million (split between CEA and CNRS-IN2P3).

The challenge of the prototyping was hard and covered many aspects. For the dipoles in particular, CERN first had to convince industry to pay enough attention and to invest resources in the LHC; the allure of the SSC, a much larger project (6000 main dipoles of 15 m length, 2000 quadrupoles, etc.), was difficult to ignore. CERN’s project was technically much more challenging, with the required accuracy of the tooling a factor of five or so higher than for the HERA magnets. There was also the usual struggle of a prototyping phase: good results required building expensive tooling for just one or two magnets, with an insufficient budget and no certainty that the project would be approved and the tooling cost thus repaid.

Short straight section

A delay of one year was the price to pay for the many developments and adjustments. Meanwhile, in 1993 the project had to pass a tough review of the cryo-magnet system led by Robert Aymar, who as CERN’s director-general 10 years later would collect the fruits of that review. With the review over and completion of the long-magnet prototypes approaching, the credibility of the LHC project increased. In autumn 1993, the SSC came to a halt – certainly because of its high and increasing cost (more than $12 billion) and the low economic cycle in the US, but also because the LHC now seemed a credible alternative for reaching similar goals at a much lower cost ($2 billion in CERN accounting). Rubbia was by then near the end of his mandate as director-general, the period most critical for the R&D phase, during which he had led the project without rival. In a symbolic coincidence, the demise of the SSC occurred at the same time as leadership of the LHC project passed from Giorgio Brianti, who had led it firmly from its birth through the years of uncertainty, to Lyn Evans, who was to be in charge until completion 15 years later. The end of the SSC and the green light for the LHC were marked by the delivery to CERN of the first INFN dipole magnet in December 1993, just in time for it to be shown to the Council. This was followed four months later by the second INFN magnet and then by the CERN magnets, as well as by the two CEA quadrupoles designed and built by the team of Jacques Perot and, later, Jean-Michel Rifflet (figure 4).

Returning to the first dipole, delivered by INFN at the end of 1993: a crash programme was necessary to overcome an unexpected problem (a short circuit in the busbar system – a problem that, in a different form, would later plague the project), so as to test it in time for a special session of the Council in April 1994. The magnet passed with flying colours, going above the operational field of 8.4 T at the first quench, beyond 9 T within two quenches, and reaching a first quench above 9 T after a thermal cycle, i.e. showing full memory (figure 5). Its better-than-expected performance was actually misleading, giving the idea that construction of the LHC might be easy; in fact, it took six long years before another equally good magnet was on the CERN test bench. However, the other 10-m long magnets performed reasonably well and, with the two very good CEA quadrupoles (3.5 m long), CERN set up the first LHC magnet string, tested it thoroughly and finally obtained approval of the project in December 1994.

The first 10 m LHC dipole prototype

Many other formidable challenges remained to be resolved on the technical, managerial and financial sides. These included: the non-uniformity of quench results and the problem of retraining that plagued the second generation of LHC prototypes; the unresolved question of the inter-strand resistance; the change from aluminium to austenitic steel as the material for the collars, implemented by Carlo Wyss; and the lengthening of the magnets from 10 m to 15 m, with the consequent curvature of the cold mass.

Looking back at the period 1985–1994, when the basis for the LHC was established, it is clear that a big leap forward was accomplished during those years. The vision initiated by Robert Wilson for the Tevatron was brought to a peak, pushing the limits of Nb-Ti to the extreme on a large scale. New superconducting cables, new superconducting magnet architectures and new cooling schemes were put to the test, in a constant search for economic solutions that would later be applicable to large-scale production. This last point is an important heritage that the LHC leaves to the world of superconductivity: the best-performing solution is not always the best choice overall. Economics and large-scale production matter greatly when a magnet is part of a large system and integration is critical. “The best is the enemy of the good” has been the guiding maxim of the LHC project – a lesson from the LHC for the world of superconductivity in this 100th anniversary year.

OPERA reports time-of-flight anomaly

The OPERA experiment in Italy’s INFN Gran Sasso Laboratory has sent ripples round the world with its findings that neutrinos created 730 km away at CERN arrive at the detector slightly earlier than if they were travelling at the speed of light.

The result is based on the observation of more than 15,000 neutrino events measured by the experiment, which observes the beam produced by the CERN Neutrinos to Gran Sasso (CNGS) project. Using high-statistics data taken in 2009, 2010 and 2011, the collaboration has measured the velocity of the muon-neutrinos reaching the detector with much higher accuracy than previous studies conducted using accelerator neutrinos. Upgrades to the CNGS timing system and to the OPERA detector, as well as the use of high-precision geodesy to measure the neutrino baseline, allowed the collaboration to achieve comparable systematic and statistical accuracies.

To perform the study, the OPERA collaboration teamed up with experts in metrology from CERN and other institutions to make a series of high-precision measurements of the distance between the source and the detector, and of the neutrinos’ time of flight. The distance between the origin of the neutrino beam and OPERA was measured with an uncertainty of 20 cm over the 730 km travel path. The neutrinos’ flight time was determined with an accuracy of less than 10 ns by using sophisticated instruments, including advanced GPS systems and atomic clocks. The time responses of all of the elements of the CNGS beamline and of the OPERA detector have also been measured with great precision.
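For scale, a simple piece of arithmetic on the quoted figures (this sketch is ours, not taken from the OPERA analysis) shows that the 20 cm baseline uncertainty is a negligible part of the timing budget:

c = 299792458.0                    # speed of light in m/s
dL = 0.20                          # baseline uncertainty in metres
print(f"{dL / c * 1e9:.2f} ns")    # ~0.67 ns, small next to the <10 ns timing accuracy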

The results indicate that neutrinos from CERN arrive early at Gran Sasso by 60.7 ± 6.9 (stat.) ± 7.4 (syst.) ns compared with the time that would be taken at the speed of light in vacuum. This anomaly corresponds to a relative difference of the muon-neutrino velocity, v, with respect to the speed of light, c, of (v – c)/c = (2.48 ± 0.28 (stat.) ± 0.30 (syst.)) × 10–5.
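As a back-of-envelope cross-check (a sketch using the quoted numbers and a baseline rounded to 730 km; this is not the collaboration’s analysis code), the relative velocity difference follows directly from the early-arrival time and the light travel time over the baseline:

c = 299792458.0            # speed of light in m/s
L = 730.0e3                # CERN-Gran Sasso baseline, rounded, in metres
dt = 60.7e-9               # reported early arrival, in seconds

t_light = L / c            # light travel time, about 2.44 ms
rel = dt / (t_light - dt)  # (v - c)/c for a neutrino arriving dt early
print(f"(v - c)/c = {rel:.2e}")   # ~2.5e-5, consistent with the quoted value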

Given the potential far-reaching consequences of such a result, independent measurements are certainly needed before the effect can either be refuted or firmly established. While OPERA continues to gather more data, the MINOS collaboration in the US is planning to improve its measurement of the neutrino time of flight with the beam from Fermilab to the Soudan Underground Laboratory, about 730 km away.

Gravitational waves: European detectors keep up the pace

For several years the European gravitational-wave detectors GEO600 (a collaboration between Germany and the UK), close to Hanover, and Virgo (a collaboration between Italy, France, the Netherlands, Poland and Hungary), close to Pisa, have been performing data-taking runs together with the LIGO detectors in the US. About a year ago the LIGO collaboration turned off its detectors to start an important upgrade, so this summer the European detectors joined forces to step up their search for gravitational waves in a last three-month data-taking run before Virgo also shuts down for its own upgrade.

GEO600 and Virgo had the good fortune to be operating, with an impressive 82% duty cycle, at the time of the recent supernova explosion. Unfortunately, the event of 24 August, although nearby by astronomical standards, was too distant and of Type Ia, a class that releases only a small amount of energy in gravitational waves. Analysis is nevertheless continuing at full speed.

These detectors are kilometre-scale Michelson laser-interferometers that work by measuring tiny changes caused by a passing gravitational wave in the lengths of their orthogonal arms. Laser beams sent down the arms are reflected from mirrors, suspended under vacuum at the ends of the arms, to a central photodetector. The periodic stretching and shrinking of the arms is then recorded as varying interference patterns.
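To convey the scale of that measurement, here is a rough estimate using textbook numbers (the 3 km arm length is Virgo’s; the strain value is a representative target, not a figure from this article): a wave of dimensionless strain h changes the arm-length difference by roughly h times the arm length.

L_arm = 3.0e3        # Virgo arm length in metres
h = 1.0e-21          # representative gravitational-wave strain amplitude
dL = h * L_arm       # induced change in arm length
print(f"arm-length change ~ {dL:.0e} m")   # ~3e-18 m, far smaller than a proton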

The worldwide detector upgrades that are just starting will be a fundamental step forward. With current sensitivities, the probability of detecting a gravitational-wave burst in one full year of data-taking is estimated to be of the order of a few per cent. The upgrades aim to improve the sensitivities by a factor of 10 with respect to the present values, which should extend the “listening” distance by the same factor. This will increase the volume of the universe explored, and hence the detection probability, by a factor of 1000, offering the “certainty” of catching several gravitational-wave events a year.
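The quoted factors follow from simple geometry, as this sketch shows (it assumes sources spread roughly uniformly in space and takes “a few per cent” to mean 3%, purely for illustration):

sens_gain = 10                     # planned improvement in sensitivity
dist_gain = sens_gain              # signal amplitude falls as 1/distance
rate_gain = dist_gain ** 3         # accessible volume, hence rate, grows as distance cubed
p_per_year = 0.03                  # current yearly detection probability, taken as 3%
print(rate_gain, p_per_year * rate_gain)   # 1000, ~30 expected events per year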

The non-detection of gravitational waves so far has nevertheless allowed several important scientific results to be derived. For example, important limits have been established on the production of gravitational waves of cosmological origin and by known pulsars. Improving on the spin-down limits of the Crab and Vela pulsars should constrain the ellipticity of the stellar mass-distributions, which is expected to be related to magnetic asymmetries in these systems.

“Multimessenger” astrophysics has meanwhile begun, looking for coincidences of candidate gravitational-wave signals with gamma-ray bursts and with signals from space-borne cosmic-ray detectors, as well as from neutrino and optical telescopes. Such clues will be of paramount importance in studying the sources once genuine gravitational-wave detection becomes routine after 2015, when the detector upgrades are expected to be complete.

Kaonic hydrogen casts new light on strong dynamics


Hadronic bound systems with strange quarks, such as kaonic hydrogen, are well suited to testing chiral dynamics, especially in view of the interplay between spontaneous and explicit symmetry breaking. Effective field theories with coupled channels, based on chiral meson–baryon Lagrangians, have become well established as a framework for describing K–nucleon interactions at threshold, including the much-disputed Λ(1405) resonance and deeply bound antikaonic nuclear clusters lying just below the respective thresholds.

A recent precision measurement at the Laboratori Nazionali di Frascati of the strong-interaction-induced shift and width of the 1s level in kaonic hydrogen sheds new light on these basic problems in strong-interaction binding and dynamics. Kaonic hydrogen, in which a K– replaces the electron, is produced by the capture of stopped K– from the decay of φ mesons in hydrogen gas. The φ mesons are generated nearly at rest at the DAΦNE e+e– collider, operating in a new, high-luminosity collision mode.

The shift and width of the kaonic 1s state are deduced from precision X-ray spectroscopy of the K-series transitions in kaonic hydrogen. The emitted K-series X-rays, with energies of 6–9 keV, were detected by the recently developed SIDDHARTA (Silicon Drift Detector for Hadronic Atom Research by Timing Application) experiment, which performs X-ray–kaon coincidence spectroscopy using microsecond timing and the excellent energy resolution, of about 180 eV FWHM at 6 keV, of 144 large-area (1 cm²) silicon drift detectors that surround the hydrogen target cell. This method reduces the large X-ray background from beam losses by orders of magnitude. It has led to the most precise values yet for the 1s level shift, ε1s = –283 ± 36 (stat.) ± 6 (syst.) eV, and width, Γ1s = 541 ± 89 (stat.) ± 22 (syst.) eV, of kaonic hydrogen (Bazzi et al. 2011).

A recent study using next-to-leading-order chiral dynamics calculations of the shift and the width has shown excellent agreement with these measurements (Ikeda et al. 2011). Further measurements with similar accuracy are planned for the K-series X-rays from kaonic deuterium, using an improved SIDDHARTA-2 set-up to disentangle the isoscalar and isovector scattering lengths.
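As a rough guide to how such atomic-level measurements connect to scattering lengths, the textbook leading-order Deser-type relation (quoted here schematically; it is not the next-to-leading-order treatment used in the papers above, and sign conventions for the shift vary in the literature) reads ε1s – (i/2)Γ1s ≈ 2α³μ²aK–p, where α is the fine-structure constant, μ the K–p reduced mass and aK–p the K–p scattering length. Inserting the measured shift and width gives |aK–p| of order 1 fm, the natural hadronic scale.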

ATLAS looks at vector bosons plus jets…


While searches with 2011 LHC data for the Higgs and new physics caught the headlines over the summer, detailed studies of 2010 data continue to yield high-precision physics. For example, the ATLAS collaboration has published a number of results on the production of vector bosons (W and Z) based on the full 2010 dataset of 37 pb–1, including measurements that require additional jets in the final state.

The challenge for precision measurements of Standard Model vector-boson production is to understand and control the systematic uncertainties; this contrasts with many analyses that are still dominated by statistical uncertainties and can thus “simply” wait for more data. The challenge will grow in analyses of the larger 2011 dataset, where ATLAS will probe higher jet multiplicities and higher jet transverse momenta. In addition to enabling precise measurements of electroweak parameters, the study of W and Z bosons at the LHC tests perturbative QCD (pQCD) and constrains the distributions of partons (quarks and gluons) inside the proton. W and Z bosons are also studied as backgrounds to other Standard Model signals and in searches for new physics.

Two recent ATLAS results have focused on the production of a vector boson together with jets from b-quarks. The Z measurement is still statistically limited, while the W measurement is dominated by systematic uncertainties. The cross-section for inclusive Z + b-jet production agrees with next-to-leading-order pQCD calculations. For the production cross-section of a W with one or two b-jets, the results are again consistent within uncertainties, although the observed value is slightly higher than predicted (Fig. 1). These measurements with b-jets not only test pQCD for heavy quarks, they also pin down a significant background to searches, for example for associated Higgs production with H→bb.


Considering the ratio of cross-sections rather than their absolute values has the advantage that many sources of systematic uncertainty cancel. ATLAS has recently published a measurement of the ratio of the W and Z cross-sections with exactly one associated jet, complementing measurements of the individual channels. The ratio is measured as a function of the jet transverse momentum. The systematic and statistical uncertainties are of comparable size, providing the basis for a precision test of the Standard Model (Fig. 2). The results are in reasonably good agreement with a number of Monte Carlo predictions.
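A toy calculation illustrates why a ratio can be more precise than either individual measurement (the numbers below are hypothetical, chosen only to show the mechanism: a fully correlated uncertainty, such as the luminosity, cancels in the ratio):

import math

stat_W, stat_Z = 0.02, 0.03   # hypothetical relative statistical uncertainties
lumi = 0.034                  # hypothetical relative luminosity uncertainty, common to both

# An individual cross-section carries the luminosity uncertainty...
err_W = math.hypot(stat_W, lumi)
# ...but in the ratio R = sigma_W / sigma_Z it cancels exactly,
# leaving only the uncorrelated parts, added in quadrature.
err_R = math.hypot(stat_W, stat_Z)
print(f"W alone: {err_W:.1%}, ratio: {err_R:.1%}")   # 3.9% vs 3.6%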

…and measures suppression of single jets in heavy-ion collisions


While primarily designed for proton–proton collisions, the ATLAS detector is also an excellent tool to perform measurements in the hot, dense environment of heavy-ion collisions, where temperatures reach tera-kelvin scales. So far, results include detailed measurements of collective properties of the system, such as “elliptic flow”, as well as of “hard probes”, such as jets, quarkonia and vector bosons.

Using the initial 2010 heavy-ion collision data from the LHC, the ATLAS collaboration published the first direct evidence that jets lose energy as they pass through the hot, dense medium – a process called jet “quenching” – which leads to event-by-event asymmetries in the energies of the two jets in a dijet pair. To characterize the effects of quenching from a different perspective, the next major jet measurement in lead–lead collisions undertaken by ATLAS was to establish the overall reduction in the rate of jets in more “central” collisions, where the two nuclei overlap more completely.

For the Quark Matter 2011 conference, ATLAS compared the rates for central events with those in more peripheral events that consist primarily of a few simultaneous nucleon–nucleon collisions. One surprising result is that, for jets above 100 GeV, the measured jet-suppression factor is independent of the measured jet energy. An even more surprising finding is that this result is the same for jets reconstructed with different “cone” radii, implying that the suppression is not accompanied by a substantial modification of the distribution of energy within a jet. By contrast, an ATLAS measurement of W boson yields using single muons showed no suppression at all.

This comparison, shown in the figure, was quantified using the variable RCP, the ratio of yields measured in central and peripheral collisions, each yield normalized by the relevant number of binary collisions. This quantity is unity if jets are produced in proportion to the number of binary collisions, but falls below one if the yields are suppressed in more central collisions.
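In code form, the construction reads as follows (a minimal sketch with assumed variable names and hypothetical numbers, not the ATLAS analysis):

def r_cp(yield_cent, ncoll_cent, yield_periph, ncoll_periph):
    """R_CP: per-binary-collision jet yield in central events divided by
    the same quantity in peripheral events. Equal to 1 if jet production
    scales with the number of binary collisions; below 1 if suppressed."""
    return (yield_cent / ncoll_cent) / (yield_periph / ncoll_periph)

# Illustrative values only: a central yield of 500 jets with N_coll = 1500
# against a peripheral yield of 20 jets with N_coll = 30 gives R_CP = 0.5.
print(r_cp(500.0, 1500.0, 20.0, 30.0))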

The higher luminosities expected in 2011 will provide increased jet statistics, allowing the measurement of jets with even higher energies. At the same time, a more precise understanding of the fluctuations of soft particles, mainly from a rich spectrum of collective modes, will allow the measurement of lower-energy jets, which in preliminary results from the Relativistic Heavy Ion Collider show stronger modification from passage through the medium.
