Strangeness and heavy flavours in Krakow

The 13th international conference on Strangeness in Quark Matter (SQM 2011) took place in Krakow on 18–24 September. Organized by the Polish Academy of Arts and Sciences (Polska Akademia Umiejętności, PAU), it attracted more than 160 participants from 20 countries. The emphasis was on new data on the production of strangeness and heavy flavours in heavy-ion and hadronic collisions, in particular the new results from the LHC at CERN and the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. With the new high-quality data on identified particles, SQM 2011 in a sense supplemented the Quark Matter conference that was held in Annecy in May.

Summary talks during the first two morning sessions introduced the experimental highlights for the main heavy-ion experiments currently in operation. They included data at energies ranging from the Heavy Ion Synchrotron (SIS) at GSI (the HADES and FOPI experiments), through the Super Proton Synchrotron at CERN (NA49 and NA61) and RHIC (PHENIX and STAR), up to the LHC (ALICE, ATLAS and CMS), as well as prospects for new or future facilities, such as the Facility for Antiproton and Ion Research (FAIR) and the Nuclotron-based Ion Collider Facility (NICA).

In this report we can cover only a small selection of the impressive wealth of new results and information presented at the conference. The following highlights illustrate some of the most recent measurements in nucleus–nucleus collisions at the LHC and in the Beam Energy Scan programme at RHIC. All of them focus on results obtained in the sectors of strange and heavy quarks, which traditionally form the major part of the discussions at SQM conferences.

Experimental results

Maria Nicassio of the University and INFN Bari presented, for the ALICE collaboration, preliminary results on the production of the charged multi-strange hadrons Ξ and Ω and their antiparticles, from peripheral to the most central lead–lead collisions at the current maximum centre-of-mass energy of 2.76 TeV per nucleon–nucleon pair. The enhancement of these particle yields, normalized to the number of nucleons participating in the collision and compared with proton–proton (pp) production, was shown for the first time (figure 1). As already found in heavy-ion collisions at the SPS and RHIC, the yields at the LHC cannot be achieved in a purely hadronic phase, but require a fast equilibration and a large correlation volume. For these reasons, the enhanced production of multi-strange baryons is regarded as one of the signals for the phase transition from ordinary hadronic matter to the quark–gluon plasma (QGP). It was also stressed that, although the absolute production of hyperons increases with energy from RHIC to the LHC, both in heavy-ion and in pp collisions, the relative enhancement decreases as a result of a significant increase in the pp yields at the LHC.

Using the Beam Energy Scan at RHIC, the STAR collaboration has made significant progress in tracking the evolution of the collective effects observed in heavy-ion (Au–Au) collisions between √sNN = 7.7 GeV and 62.4 GeV, as Shusu Shi of Central China Normal University showed. While studying the excitation function of the second harmonic v2 in the azimuthal distribution of particle (π, K, p and Λ) production, the collaboration identified a significant difference in the behaviour of particles and antiparticles for 0–80% central Au–Au reactions (figure 2). The increasing deviation in v2 between particles and antiparticles, observed with decreasing √sNN, is more pronounced for baryons, such as protons and Λs, than for mesons (charged π and K). However, it must be noted that above 39 GeV the difference in v2 between particles and antiparticles remains almost constant, at about 5–10%, up to higher energies. The large difference between particle and antiparticle v2 at lower energies could thus be related to an increased number of quarks transported to mid-rapidity, or it could indicate that hadronic interactions become dominant below 11.5 GeV. In the latter case, the difference in v2 could be attributed to different interaction cross-sections of particles and antiparticles in hadronic matter of high baryon density.
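
For reference, v2 is the second coefficient in the standard Fourier decomposition of the azimuthal particle distribution with respect to the reaction plane:

```latex
\frac{\mathrm{d}N}{\mathrm{d}\varphi} \;\propto\; 1 + \sum_{n\geq 1} 2v_n \cos\!\big[n(\varphi - \Psi_{\mathrm{RP}})\big],
\qquad
v_2 = \big\langle \cos\!\big[2(\varphi - \Psi_{\mathrm{RP}})\big] \big\rangle ,
```

where φ is the particle's azimuthal angle, ΨRP is the reaction-plane angle and the average runs over particles and events.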

The ALICE collaboration also presented preliminary results on the azimuthal anisotropy (v2) of charm production in non-central lead–lead collisions at the LHC, in a talk by Chiara Bianchin of the University and INFN Padova. Such a measurement was highly anticipated after the observation of a large suppression of charmed-meson yields in nucleus–nucleus collisions, which implies strong quenching of charm quarks in the dense QGP. The study of charm anisotropy provides insight into the degree of thermalization of the quenched charm quarks within the QGP. Figure 3 shows the v2 parameter of D0 mesons, reconstructed in the K−π+ channel, as a function of transverse momentum (in red), compared with that of charged hadrons (in black). This measurement, though statistically limited, hints at a non-zero charm v2 at low momentum and bodes well for the continuation of the study with the higher-luminosity lead–lead run in 2011.

Theoretical discussions

The conference witnessed a lively debate on theoretical issues. In the theoretical summary talk, Giorgio Torrieri of the Goethe University, Frankfurt, pointed to various differences in the interpretation of heavy-ion data (e.g. equilibrated vs non-equilibrated hadron gas; statistical vs non-statistical production in small systems). Probably everyone connected with the SQM conferences is enthusiastic about the fact that statistical models do an excellent job in describing hadron production in heavy-ion and hadronic collisions, with the key role being played by the fast strange-quark thermalization. Perhaps, this attitude just defines the SQM community. On the other hand, there exist differences in the approaches and interpretations that should be resolved if the community is to gain a better understanding of hadron production processes. The relatively low proton-to-pion production ratio measured recently by ALICE, presented by Alexander Kalweit of the Technische Universität Darmstadt, will trigger such attempts.

The analysis of the data has led to a physical picture that may be regarded as a kind of standard model of relativistic heavy-ion collisions. This model is based on the application of relativistic hydrodynamics combined with the modelling of the initial state on one side, supplemented by the kinetic simulations of freeze-out on the other side. From the theoretical point of view, it is not completely clear how strange particles may be accommodated into this picture, both at RHIC and at the LHC. The results obtained from 2+1 dissipative hydrodynamics, presented by Piotr Bozek of the Institute of Nuclear Physics, Krakow, indicate that the multi-strange particle spectra measured at the LHC cannot be simply reproduced in hydrodynamic calculations that are constructed to describe ordinary hadrons, such as pions, kaons and protons. The new LHC measurement of the elliptic flow of D0 mesons shown in figure 3 will be another important input for hydrodynamic and energy-loss models. As Christoph Blume of the University of Heidelberg indicated, the general concept that strange particles are emitted much earlier than other more abundant hadrons may be challenged in attempts to achieve a uniform description of several observables simultaneously.

Another theoretical activity presented at SQM 2011 was triggered by low-energy experiments aimed at finding the critical point of QCD (RHIC Beam Energy Scan, NA61, FAIR, NICA). This critical point marks the end of the alleged first-order phase transition in the QCD phase diagram. Its position is suggested by the effective models of QCD and lattice QCD simulations. These two approaches suffer from fundamental problems but, nevertheless, deliver useful physical insights. For example, as Christian Schmidt of the Frankfurt Institute for Advanced Studies showed, the lattice QCD calculations suggest that the curvature of the chiral phase-transition line is smaller than that of the freeze-out curve. Moreover, the lattice results are in agreement with the STAR data on net-proton fluctuations. As Krzysztof Redlich of the University of Wroclaw pointed out, theoretical probability distributions of conserved charges may be compared directly with the distributions measured by STAR to probe the critical behaviour.

The last day of the meeting was the occasion for more experimental highlights, presented in the summary talk by Karel Safarik of CERN. The conference ended with a presentation by Orlando Villalobos-Baillie of the next SQM meeting, which will be held in Birmingham, UK, in 2013.

Andrzej Bialas, the founder and the leader of the high-energy physics theoretical group in Krakow, who is currently the president of PAU, was the honorary chair of the conference. The organization chairs were Wojtek Broniowski and Wojtek Florkowski of Jan Kochanowski University, Kielce, and the Institute of Nuclear Physics, Krakow.

Two years of LHC physics: ATLAS takes stock

Figure 1: Production cross-sections for the main Standard Model processes.

Since the first LHC collisions about two years ago, the ATLAS experiment has performed superbly – collecting quality data with high efficiency, processing the data in a timely way and preparing and publishing many new physics results. During this time the LHC has delivered more than 5 fb–1 of proton–proton collision data at √s=7 TeV. Using a good fraction of these data, the collaboration has carried out Higgs-boson searches, as well as searches beyond the Standard Model. While no new physics has been observed, ATLAS is setting stringent limits on the production cross-sections of new particles, including – but not limited to – the Higgs boson predicted by the Standard Model. Accurate measurements of Standard Model physics quantities have been performed covering many orders of magnitude in cross-section, with a precision often comparable to or exceeding that of the predictions.

These accomplishments have not been as easy as they may appear. Built on the tremendous success of the LHC, they are also the product of the strength and hard work of the 3000-member ATLAS collaboration. This article recounts the story of the first two years of the ATLAS physics programme, with an eye to some of the special occurrences and accomplishments along the way. However, it represents only the tip of the iceberg as far as the reach of the LHC physics programme is concerned. So far, ATLAS has collected just a small percentage of the total luminosity expected over the lifetime of the LHC.

Major progress

Before any physics analysis can be carried out, many members of the ATLAS collaboration work tirelessly to ready and tune the detectors, data acquisition, and trigger, to collect and reconstruct the data, and to check the quality of the data. Other members work to understand and characterize the various reconstructed objects seen in the detector. These are electrons, muons, τ leptons, photons, jets, missing transverse energy and identified heavy flavour. In two years ATLAS has gone from beginning to understand charged particles in the inner detector to using complex neural networks for flavour tagging. Algorithms have also progressed, for example from simple calorimeter-based definitions of missing transverse energy to more sophisticated definitions correcting for the calibrations and energies of the various objects.

Figure 2: A candidate Z→μμ event.

The general complexity of the ATLAS results has followed a similar progression, from counting events for processes with large cross-sections to using advanced analysis techniques to extract small signals from large backgrounds. Some analyses used complex unfolding techniques to compare measurements in the best way with parton-level predictions, and some used modern statistical tools that allow the combination of many different channels into a single physics interpretation.

Figure 1 summarizes the production cross-sections for the main Standard Model processes and shows the luminosity used to measure these processes. The W and Z inclusive cross-sections and the Wγ and Zγ cross-sections were measured with the approximately 35 pb–1 from 2010. The tt cross-section is based on a statistical combination of measurements using dilepton final states with 0.70 fb–1 of data and single-lepton final states with 35 pb–1. The other measurements were made with the 2011 dataset. After only two years of running ATLAS can now measure processes with cross-section times branching ratio down to around 10 pb.

One of the by-products of the excellent LHC performance is another increase in complexity: the presence of multiple interactions within the same bunch crossing (pile-up). The 40 pb–1 of data recorded in 2010 had an average of 3 interactions per bunch crossing (<μ>), allowing good quality measurements in a relatively clean environment. The LHC run in 2011 was characterized by a rapid increase in instantaneous luminosity from the beginning of the machine operations in March, reaching <μ> of 10 in August. Figure 2 shows the complexity of an event with a Z→μμ candidate produced in a bunch crossing with 11 reconstructed proton–proton interaction vertices. The time for the integrated luminosity to double at the beginning of the run was less than one week. Since the end of May this year, the machine has regularly delivered more luminosity in one day than the total delivered in 2010.
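
For orientation, the mean number of interactions per bunch crossing follows from simple bookkeeping; the numbers below (an inelastic pp cross-section of roughly 70 mb at √s = 7 TeV and the LHC revolution frequency of about 11.2 kHz) are approximate values quoted only for illustration:

```latex
\langle \mu \rangle \;=\; \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_{b}\, f_{\mathrm{rev}}} ,
```

where L is the instantaneous luminosity and nb the number of colliding bunch pairs. With roughly a thousand colliding bunch pairs, ⟨μ⟩ ≈ 10 corresponds to a peak luminosity of order 2 × 10^33 cm–2 s–1.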

First results

In the beginning, analyses focused mostly on understanding and measuring properties using single detectors. The first ATLAS publication on collisions reported the charged-particle multiplicities and distributions as a function of the transverse momentum pT and pseudo-rapidity η, analysing the data taken in December 2009 (√s=0.9 TeV). This allowed validation of the charged-particle reconstruction, providing important feedback on the modelling of the alignment of the detector as well as on the distribution of the material. These results, and those from a more detailed later study, were also used to tune the parameters of the Monte Carlo modelling of non-perturbative processes, which is now used to model the effect of the multiple interactions for the latest data.

Figure 3: Dijet event with transverse momentum of 1.3 TeV.

Using only 17 nb–1 of recorded data, the production cross-sections of inclusive jets and dijets were measured over a range of jet transverse momenta, up to 0.6 TeV. Figure 3 shows the event with the highest central dijet mass recorded by ATLAS in 2010. These measurements allowed the first accurate tests of QCD at the LHC. The differential cross-sections showed remarkably good agreement with next-to-leading-order (NLO) perturbative QCD (pQCD) calculations, corrected for non-perturbative effects, in this unexplored kinematic regime. Given the good agreement between the data and the Standard Model predictions, the study of dijet final states was used to set limits on the mass of new physics objects such as excited quarks Q*, excluding 0.30 < mQ* < 1.26 TeV at 95% confidence level (CL).

The first few hundred inverse nanobarns of data allowed early searches for new physics, looking for quark contact interactions in dijet final-state events by studying the χ variable associated with a jet pair, where χ = exp(2|y*|) and ±y* are the jet rapidities evaluated in the dijet centre-of-mass frame. The data were fully consistent with Standard Model expectations and allowed quark contact interactions to be excluded at 95% CL for compositeness scales Λ up to 3.4 TeV. Using about 100 times more data than the first jet results, ATLAS studied more complex multi-jet production, with up to six jets per event in the kinematic region pTjet > 60 GeV.
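
Written out in the convention usually adopted for dijet angular analyses, with y1 and y2 the rapidities of the two leading jets:

```latex
y^{*} = \tfrac{1}{2}\,(y_1 - y_2), \qquad
\chi = e^{\,2|y^{*}|} = e^{\,|y_1 - y_2|} .
```

QCD t-channel (Rutherford-like) scattering gives a distribution that is nearly flat in χ, whereas contact interactions produce more central scattering and therefore an excess at small χ.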

ATLAS was soon able to combine measurements from many detector components and reconstruct more of the Standard Model particles. A study of Standard Model production of gauge bosons (W± and Z0) was performed with the first 330 nb–1 of data, in the lν and ll final states (l = e or μ). Both the measurements of the inclusive production cross-section times branching fraction, σW,Z × BR(W,Z → lν, ll), and the ratio of the two agree well with next-to-next-to-leading-order (NNLO) calculations within experimental and theoretical uncertainties.

By combining measurements from the calorimeters and the central inner tracker, ATLAS was able to analyse the production of inclusive high-pT photons and perform extensive QCD studies. For this, additional complexity resides in the need to model and understand significant background contributions. These were estimated for this analysis from data, based on the observed distribution of the transverse isolation energy around the photon candidate. A comparison of results to predictions from NLO pQCD calculations again showed remarkable agreement.

An initial measurement of the production cross-section for top-quark pairs was already possible with just a small fraction of the 2010 data. Combining information from almost all detector systems, events were selected with either a single lepton produced together with at least four high-pT jets plus large transverse missing energy, or two leptons in association with at least two high-pT jets and large transverse missing energy. In this analysis ATLAS used b-jet identification algorithms for the first time – crucial for the rejection of the large backgrounds that do not contain b-quarks. A total of 37 single-lepton and 9 dilepton events were selected, in good agreement with Standard Model predictions.

The use of all of the 2010 data allowed ATLAS to study more complex quantities and distributions. For example, the W and Z differential cross-sections were measured as functions of the boson transverse momentum, allowing more extensive tests of pQCD. Using the ratio of the difference between the numbers of positive and negative Ws to their sum, ATLAS measured the W charge asymmetry as a function of the boson pseudo-rapidity. These results provided the first ATLAS input on the u and d quark momentum fractions in the proton.

At this stage, the analysis of W→τν and Z→ττ represented an important step in commissioning the selection of hadronic τ final states that are crucial in searches for new physics as well as for the Higgs boson. More accurate studies of top physics were also performed, from the inclusive cross-section for tt pairs to a preliminary measurement of the top quark’s mass.

New experiences

The full 2010 dataset also offered the first concrete possibility to look for physics signals beyond the Standard Model over a wide spectrum of final states. Events with high-pT jets or leptons and with large missing transverse momentum were studied extensively to search for supersymmetric (SUSY) particles. Here, more complex variables were used, such as the “effective mass” (the sum of the transverse momentum of selected jets, leptons and missing transverse energy), which is sensitive to the production of new particles. No significant excess of events was found in the data and limits were set on the mass of squarks and gluinos, mq̃ and mg̃, assuming simplified SUSY models. If mq̃ = mg̃ and the mass of the lightest stable SUSY particle is mχ01=0, then the limit is about 850 GeV at 95% CL. Limits have been placed assuming other SUSY interpretations, such as minimal supergravity grand unification (MSUGRA) and the constrained minimal supersymmetric extension of the Standard Model (CMSSM).

Figure 4: Event display from a lead–lead collision.

The 2010 run ended with a short period in November dedicated to lead-ion collisions with a centre-of-mass energy per nucleon pair of √sNN = 2.76 TeV. This was certainly one of the most amazing experiences for ATLAS during the first two years of collisions at the LHC. As the online event display in the ATLAS control room brought up the first event images, the calorimeter plot showed many events in which a narrow cluster of calorimeter cells with high-energy deposits (a jet) was poorly – or not at all – balanced in the transverse plane by equivalent activity in the back-to-back region (figure 4). The gut feeling was clear: this was the first direct observation of jet-quenching in heavy-ion collisions. A detailed analysis of the early lead-collision data studied the dijet asymmetry – defined as AJ = (ETj1 – ETj2)/(ETj1 + ETj2), where ETji is the transverse energy of jet i calibrated at the hadronic scale – as a function of the event “centrality”. This showed that the transverse energies of dijets in opposite hemispheres become systematically more unbalanced with increasing centrality, leading to a large number of events that contain highly asymmetric dijets. Such an effect was not observed in proton–proton collisions, pointing to an interpretation in terms of strong jet-energy loss in a hot, dense medium.
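
Written explicitly, with ETj1 ≥ ETj2 the calibrated transverse energies of the leading and sub-leading jets:

```latex
A_{J} \;=\; \frac{E_{T}^{\,j1} - E_{T}^{\,j2}}{E_{T}^{\,j1} + E_{T}^{\,j2}}, \qquad 0 \leq A_{J} < 1 ,
```

so a balanced dijet gives AJ ≈ 0, while a recoil jet that has lost most of its transverse energy in the medium pushes AJ towards 1.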

The early part of data-taking in 2011 was an extremely intense period. Already in June, ATLAS presented the first preliminary results on eight analyses using about 300 pb–1 of data, almost 10 times the integrated luminosity of 2010. This allowed more stringent limits to be placed on SUSY particles, heavy bosons W’ and Z’, new particles decaying to tt pairs, and on the production cross-section of a Standard Model Higgs boson decaying to photon-pair final states. It also allowed the first limits on particles with masses above 1 TeV.

Using a similar dataset, ATLAS reported preliminary results on the production of single top quarks at the LHC, looking in particular at the t-channel process, in which a b-quark from the sea scatters with a valence quark. This process is particularly important because any deviation from the Standard Model prediction may indicate the presence of new physics. The analysis is extremely difficult because the signal is hidden under a large irreducible background from W+jets. It requires complex methods that make use of the full kinematic information of the events, looking simultaneously at many different distributions.

Extending the search with 1 fb–1

While ATLAS was carrying out these analyses, the LHC completed delivery of the first inverse femtobarn, opening up a number of new physics channels. The collaboration quickly released results on a total of 35 physics analyses with these data, most of them having to deal with the increased level of pile-up.

Preliminary WW, WZ and ZZ diboson production cross-section measurements show an overall precision of about 15% (WW and WZ) or 30% (ZZ). These measurements, all consistent with the Standard Model predictions, represent an important foundation for searches for the Standard Model Higgs boson. Triple-gauge couplings have been studied and found to be in agreement with the Standard Model, allowing limits to be placed on the size of anomalous couplings of this kind.

The 2011 data allowed more sensitive searches for SUSY particles, using similar or more complex distributions than in 2010. Once more, the results are in good agreement with the Standard Model expectations and have again been interpreted in the MSUGRA/CMSSM models as well as in simplified models. These studies exclude squarks and gluinos in simplified models with masses less than about 1.08 TeV at 95% CL.

Figure 5: Transverse mass distribution.

ATLAS has also performed searches for dijet, lepton–neutrino and lepton–lepton resonances. Figure 5 shows the transverse-mass distribution of an electron and the missing transverse energy. These searches have placed limits on the mass of excited quarks, mQ* > 1.92 TeV, and on the masses of the W’ and Z’ bosons predicted by a number of different models, mW’ > 2.15 TeV and mZ’ > 1.83 TeV, all at 95% CL.

The 2011 data began to open up the search for the Standard Model Higgs boson. To cover the entire mass range, from about 110 GeV (the limit from the Large Electron Positron collider is 114.4 GeV at 95% CL) to the highest possible values (around 600 GeV), this exploration was conducted in several final states: H→ γγ; ττ; WW(*)→lνlν, lνqq; ZZ(*)→llll, llνν, llqq as well as H→bb produced in association with a W or Z.

In the analysis dedicated to the search for H→γγ processes, events with pairs of high-pT photons were selected and the photons combined to reconstruct the invariant mass. The accurate measurement of the direction of flight of the photons is crucial for obtaining high mass-resolution and hence a strong rejection of background processes, in particular, QCD diphoton production. This is possible in ATLAS thanks to the longitudinal segmentation of the electromagnetic calorimeter, which in addition allows a strong rejection of fake photons produced by QCD jets. This channel alone allowed exclusion at the level of about three times the cross-section predicted by the Standard Model.

The analysis dedicated to the search for H→WW*→ lνlν was based on the selection of high-pT lepton pairs, electrons and muons, produced in association with large transverse missing energy. Two independent final-state classes were considered, depending on whether 0 or 1 high-pT jets were reconstructed in the same event. The analysis revealed no excess events, excluding the production of a Standard Model Higgs boson with mass in the interval 154<mH<186 GeV at 95% CL and thereby enlarging the mass region already excluded by the Tevatron.

Figure 6: Higgs boson production cross-section excluded at 95% CL.

The golden channel H→ZZ(*)→4l is based on a conceptually simple analysis: the selection of events with isolated dimuon or di-electron pairs, associated to the same hard-scattering proton–proton vertex. ATLAS found the rate of 4-lepton events to be fully consistent with the expectations from background; the analysis excludes a Higgs boson produced with a cross-section close to that predicted by the Standard Model throughout nearly the entire mass interval from 200 to 400 GeV. No evidence for an excess of events has been found in all other analysed channels, allowing 95% CL exclusion limits to be placed for each of them.

Last, ATLAS used complex statistical methods to combine the information from all of these Higgs decay channels into a single limit. While the Standard Model does not predict the mass of the Higgs boson, it does predict the production cross-section and branching ratios once the mass is known. Figure 6 shows, as a function of the Higgs mass, the Higgs boson production cross-section excluded at 95% CL by ATLAS, in terms of the Standard Model cross-section. If the solid black line (the observed limit) dips below 1, then the data exclude the production of the Standard Model Higgs at 95% CL at that mass. If the solid black line is above 1, the production of a Standard Model Higgs cannot be excluded at that mass. As figure 6 shows, the data exclude the Standard Model Higgs boson in the mass range 146 < mH < 466 GeV at 95% CL, with the exception of the mass intervals 232 < mH < 256 GeV and 282 < mH < 296 GeV.
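
As an illustration of how such a limit plot is read – a minimal sketch with made-up placeholder numbers, not ATLAS data or analysis code – the excluded regions are simply the mass ranges where the observed 95% CL limit on σ/σSM falls below 1:

```python
# Minimal sketch: read excluded mass intervals off a limit curve.
# The (mass, limit) points below are illustrative placeholders, not ATLAS data.

masses = [110, 130, 146, 200, 232, 256, 282, 296, 400, 466, 500, 600]          # GeV
obs_limit = [2.5, 1.4, 0.9, 0.6, 1.1, 1.2, 1.05, 0.95, 0.7, 0.98, 1.3, 2.0]    # sigma / sigma_SM

def excluded_intervals(masses, limits, threshold=1.0):
    """Group consecutive scan points with limit < threshold into (m_low, m_high) ranges."""
    intervals, current = [], []
    for m, r in zip(masses, limits):
        if r < threshold:
            current.append(m)          # this mass point is excluded
        elif current:
            intervals.append((current[0], current[-1]))
            current = []
    if current:
        intervals.append((current[0], current[-1]))
    return intervals

print(excluded_intervals(masses, obs_limit))   # e.g. [(146, 200), (296, 466)]
```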

This article has been able to present only a few of the ATLAS results using up to the first inverse femtobarn of data. In 2011 the experiment collected more than 5 fb–1, so the collaboration is working hard on the analysis of the new data in time for presentation of new results at the “winter” conferences early in 2012. The first results presented here represent only a few per cent of the total data ultimately expected from the LHC. We look forward to many more exciting and impressive years.

For information about all these results and more, see https://twiki.cern.ch/twiki/bin/view/AtlasPublic.

Zehui He: following a different road

This year marks the 100th anniversary of the first International Women’s Day and, appropriately, the awarding in December 1911 of a second Nobel prize to Marie Curie. No other woman physicist has achieved such worldwide acclaim, and although there have been a number of high-flyers they remain relatively unknown. One such person is Zehui He (Zah-Wei Ho), who worked at the Curie Institute in Paris in the 1940s before becoming a leading figure in nuclear physics in her own country of China.

Zehui was born in 1914 in Suzhou, on the lower reaches of the Yangtze River, into a family of eight children where culture and learning were as important for the girls as for the boys. After studying at a school for girls (which had been established by her maternal grandmother) and succeeding in a national competition, she was admitted to the physics department of the Tsinghua University, Peking (now Beijing), in 1932. In a class of 28 there were 10 women, and the head of the department strongly discouraged all of them from pursuing a career in physics, a common habit at the time (not only in China). However, he did not succeed with Zehui, and she came out top of the 10 students – including two other women – who graduated in 1936.

Some of the professors, Zehui later recalled, did appreciate her talent and pushed her towards a stimulating final research project on “A voltage stabilizer of electric current used in laboratories”. Yet, afterwards, like the other female graduates she was offered no support when looking for somewhere to continue her studies or to work. It was thanks again to her persistence, as well as to a fund granted by her native Shanxi Province, that she was able to go to Germany to pursue a doctoral degree at the Technical Physics Department of the Technische Hochschule in Berlin. Sanqiang Qian (San-Tsiang Tsien), who was also at the top of Zehui’s class in 1936, after working in the institute of physics in Peking for a year, went to Paris in 1937 to the laboratory of Irène Curie (Marie’s daughter) and her husband Frédéric Joliot. He obtained his doctorate there in 1940 with a thesis on “Étude des collisions des particules α avec les noyaux d’hydrogène” supervised by the Joliot-Curies.

In Berlin, meanwhile, Zehui pursued a doctorate in experimental ballistics with a thesis on “A new precise and simple method of measuring the speed of flying bullets”. By then it was 1940 and the Second World War had begun. Zehui was stuck in Germany, but she found work in Berlin with the Siemens Company and did research on magnetic materials from 1940 to 1942. However, during her studies Zehui had stayed at the home of Friedrich Paschen, well known for spectroscopy and the eponymous hydrogen series and line-splitting in a strong magnetic field. The Paschen family loved Zehui as one of their own, and Paschen introduced her to his friend Walther Bothe, director of the Physics Institute of the Kaiser Wilhelm Institute for Medical Research in Heidelberg. There, Zehui converted to basic research in nuclear physics.

Given the time (1943) and the place (soon to be not far from the war’s front line), it was an improbable scenario. Bothe, one of the principals of the German Uranium Project, had returned to basic research in Heidelberg, where in December 1943 the 10 MeV cyclotron came into operation – the first in Germany. While Bothe used counters and electronics to study cosmic rays and radioactive nuclei, his colleague Heinz Maier-Leibnitz built a cloud chamber and, together with Bothe and Wolfgang Gentner, in 1940 published the Atlas of Typical Cloud Chamber Images – a reference for identifying scattered particles.

Zehui worked with Maier-Leibnitz on building a second cloud chamber to study positron–electron collisions, using positrons from the decays of artificially produced radioactive isotopes, with a view to checking the validity of Homi Bhabha’s and Bothe’s calculations based on Paul Dirac’s theory. The advantage with respect to electron–electron elastic collisions was the lack of ambiguity between recoil and scattered particles, allowing a separation between events of large and small energy-exchange. The experiment, which used positrons from a source of 52Mn, also allowed a cross-check of Hans Bethe’s calculation of the ratio of annihilation to elastic cross-sections.

Breakthrough

The first ever picture of a positron–electron scatter was shown at the cosmic-ray conference in Bristol in September 1945 and mentioned in a report on the meeting published in November (Nature 1945). In all, Zehui measured 178 elastic collisions from 2774 positrons and found that: “In the first approximation, there is a general agreement between the theoretical and the experimental curves [for the number of collisions]. But it seems that in the case of strong energy exchange (A> 0.6), for which the measurements are more precise, the experimental values are higher than the theoretical ones” (Ho 1947). She also observed three annihilation events, as expected from Bethe’s calculations.

The results were widely disseminated. On 5 April 1946, a paper on the measurements was read by R W Pohl in Göttingen, then on 15 April by Joliot in Paris. In July 1946, Sanqiang Qian presented the work at the International Conference on Fundamental Particles and Low Temperatures in Cambridge; and a letter, sent at around the same time to Physical Review, was published in August that year (Ho 1946).

Meanwhile, Zehui had moved from Heidelberg to Paris, where she rejoined her classmate Sanqiang, marrying him in the spring of 1946. From 1946 to 1948 she worked at the Nuclear Chemistry Laboratory of the Collège de France and the Curie Laboratory of the Institut du Radium. Continuing the research she had started in Germany and using a cloud chamber with a long sensitive time, as developed by Joliot, Zehui measured the spectrum of positrons and gammas from the decays of 34Cl and 18F, and also confirmed her previous result on positron–electron collisions. However, the discrepancy with theory at large energy transfer was not observed by others.

Working with Sanqiang and two PhD students – R Chastel and L Vigneron – Zehui went on to study the fission processes induced by slow neutrons, using nuclear emulsions loaded with uranium. After the discovery of fission in 1938 it was generally believed that the nucleus of a heavy atom splits into two lighter nuclei. However, with these experiments Sanqiang and Zehui proved the existence of ternary fission from the measurement of fission traces; they also explained the mechanism of such a reaction and predicted the mass spectrum of the fragments (Tsien et al. 1947). Zehui also made the first observation of quaternary fission in November 1946. Ternary fission was not understood by the physics community until the late 1960s, and multifission not verified until the 1970s.

In May 1948 Zehui returned to China with her husband and their six-month-old daughter. (A second daughter was born in 1949 and a son in 1951.) The couple’s involvement in science became deeply intertwined with the history of their country, echoing the farewell advice of the Joliot-Curies that they should “serve science, but science must serve the people”.

Zehui was immediately recruited as the only full-time research fellow in the Atomic Research Institute of the National Peking Research Academy. After the founding of the People’s Republic of China in 1949 she became a research fellow (1950–1958) at the Modern Physics Institute of the Chinese Academy of Sciences (CAS) and then research fellow (1958–1973) and deputy director (1963–1973) of the Atomic Energy Institute. Following the establishment of the Institute of High-Energy Physics (IHEP) at the CAS in 1973, she moved there as a research fellow and deputy director (1973–1984). She was elected a member of the academy in the Mathematics and Physics Division in 1980 and was also a standing member of the Chinese Space-Science Society.

Focusing on nuclear research

In all of her administrative positions, Zehui’s constant preoccupation was to develop her country’s nuclear research, building it almost from scratch to its current achievements. In 1956, for example, her group succeeded in making nuclear emulsions of a quality comparable to the most advanced in the world, particularly emulsions sensitive to protons, alpha particles and fission fragments.

An important change took place in 1955 when the Chinese government decided to move into nuclear energy. Sanqiang took on major responsibilities in setting up a nuclear industry and by 1958, with help from the Soviet Union, the first Chinese nuclear reactor and a cyclotron had started operation. Zehui led the Neutron Physics Research Division of the Modern Physics Institute (later renamed Atomic Energy Institute) and made important contributions to the establishment of basic laboratory infrastructure, the design and manufacture of measuring instruments, and the development of various types of equipment.

Around 1966, Zehui disappeared from public view as a result of the Cultural Revolution. This was over by 1978, when for the first time in more than 30 years she visited Germany as a member of a government delegation. Around the same time, Sanqiang led a Chinese delegation to visit CERN – where the Super Proton Synchrotron had recently become operational – and later to the US and many other countries, working hard to promote international scientific collaboration.

In the wake of that effort, the Beijing Electron–Positron Collider was initiated, achieving its first collisions on 16 October 1988. Meanwhile, Zehui, in charge of the Cosmic Ray and Astrophysics Division of IHEP, promoted research in these fields. On her initiative and with her fostering, the former cosmic-ray research division of IHEP built – through domestic and international collaborations – nuclear emulsion chambers installed at the highest altitude in the world (5500 m) on Kam-Pala mountain in Tibet. Also, starting from scratch, the division launched scientific balloons of increasing size near Beijing. In parallel, following the launch of the first Chinese satellite in 1970, the technology was developed to detect hard X-rays in space. As before, under Zehui’s direction and influence, generations of young researchers rapidly grew up to become key figures in nuclear and space science in China.

Zehui He died in June 2011, nearly 20 years after Sanqiang Qian (1913–1992). She had continued to work full time until late in life, maintaining the high standards that she had always cherished. She loved her country and science; to both she is now an icon.

ARIS 2011 charts the nuclear landscape

The roots of the first conference on Advances in Radioactive Isotope Science, ARIS 2011, go back to CERN in 1964, when the then director-general Victor Weisskopf called for proposals for on-line experiments to study radioactive nuclei at the 600 MeV synchrocyclotron. Why this should be done – and how – became the subject of a conference held in Lysekil, Sweden, in 1966 and a year later experiments began at ISOLDE, CERN’s Isotope Separator On Line. Following this successful start, in 1970 CERN organized a first meeting on nuclei far from stability in Leysin, Switzerland.

Since then there have been regular conferences within the field, with more specialized meetings arising hand in hand with increasingly sophisticated technical developments (see box). Three years ago the community felt that the time was ripe to streamline the conferences by merging all of the physics into a single meeting held every three years. The result was that at the end of May this year some 300 physicists met in the beautiful medieval town of Leuven in Belgium to attend ARIS 2011. The success of the meeting, with its excellent scientific programme, indicates that this was the perfect decision.

Over the past two decades the experimental possibilities for studying exotic nuclear systems have increased dramatically thanks to impressive technical developments for the production of rare nuclear species, both at rest and as energetic beams. New sophisticated detection methods and data-acquisition techniques with on- and off-line analysis methods have also been developed. The two basic techniques now used at laboratories worldwide are the isotope separator on-line (ISOL) and in-flight production methods, with several variations.

Conference highlights

The conference heard the latest news about plans to make major improvements to existing facilities or to build new facilities, offering new research opportunities. The review of the first results from the new major in-flight facility, the Radioactive Isotope Beam Factory at the RIKEN research institute in Japan, was particularly exciting. The production of 45 new neutron-rich isotopes together with results from the Zero-Degree Spectrometer and the radioactive-ion beam separator, BigRIPS, gave a glimpse of the facility’s quality. Future installations, such as the Facility for Antiproton and Ion Research (FAIR) at GSI, SPIRAL2 at the GANIL laboratory, the High Intensity and Energy ISOLDE at CERN, the Facility for Rare Isotope Beams at Michigan State University (MSU) and the Advanced Rare Isotope Laboratory at TRIUMF were also discussed, together with the advanced plans to build EURISOL, a major new European facility complementary to FAIR.

The nuclear mass is arguably the most basic information to be gained for an isotope. Its measurement has involved various techniques, but a paradigm shift came with the development of mass spectrometers based on Penning traps and such devices are now coupled to the majority of radioactive-beam facilities. This has led to mass-determinations of unprecedented precision for isotopes in all regions of the nuclear chart, making it possible in effect to walk round the mass “landscape” and scrutinize its details (figure 3).
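
The principle behind such trap measurements is straightforward: in the trap's magnetic field B, an ion of charge q and mass m circulates at the cyclotron frequency

```latex
\nu_{c} \;=\; \frac{qB}{2\pi m} ,
```

so comparing the frequency measured for the ion of interest with that of a well known reference ion in the same field gives the mass ratio directly, and the fractional mass precision is essentially that of the frequency determination.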

Recent results from the ISOLTRAP mass spectrometer at CERN, which has been in operation for more than 20 years, have a precision of the order of 10–8 for the masses of isotopes with half-lives down to milliseconds. The first determinations of the masses of neutron-rich francium isotopes – the mass of 228Fr (T1/2 = 39 s) is a notable example – were presented at ARIS 2011. The JYFLTRAP group, using the IGISOL facility at the physics department of the University of Jyväskylä (JYFL), presented masses for about 40 neutron-rich isotopes in the medium-mass region. The SHIPTRAP spectrometer at GSI has made measurements of masses towards the region of super-heavy elements; 256Lr, produced at a rate of only two atoms a minute, is the heaviest nuclide studied so far. The TITAN spectrometer at TRIUMF has boosted precision by “breeding” isotopes to higher charge-states – for example, in a new measurement of the mass of the super-allowed β-emitter 74Rb, which is relevant to the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. Results from isochronous mass-spectroscopy with the CSRe storage ring at the National Laboratory of Heavy Ion Research in Lanzhou were also presented. Ion traps are now routinely used as key instruments for cooling and bunching the radioactive beams. This gives an improvement of several orders of magnitude in the peak-to-background ratio in laser spectroscopy experiments, or can be used prior to post-acceleration.

The determination of nuclear matter and charge radii has been important to the progress of radioactive-beam physics. The observation of shape co-existence in mercury isotopes at ISOLDE was the starting point for impressive developments, with lasers playing the key role. The most recent results from the same area of the nuclear chart are measurements using the Resonance Ionization Laser Ion Source at ISOLDE of isotope shifts and charge radii for the isotope chain 191–218Po. Another demonstration of the state of the art was shown in the determination of the charge radius of 12Be using collinear laser spectroscopy based on a frequency comb together with a photon-ion coincidence technique. Electron scattering from radioactive beams will be the next step for investigating nuclear shapes; the ELISe project at GSI and SCRIT at RIKEN are examples of such plans.

The determination of matter radii by transmission techniques, pioneered at Berkeley in the mid-1980s, led to the discovery of halo states in nuclei. These are well known today but the main data are limited to the lightest region of the nuclear chart. A step towards heavier cases was presented at ARIS in new data from RIKEN, where results from RIPS and BigRIPS indicate a halo state for 22C and 31Ne and maybe also for 37Mg.

The use of laser spectroscopy in measuring charge radii, nuclear spins, magnetic moments and electric quadrupole moments has been extremely successful over the years. New results from the IGISOL facility – mapping the sudden onset of deformation at N = 60 – and from the ISOLDE cooler/buncher ISCOOL – for copper and gallium isotopes – were highlighted at the conference. The two-neutron halo nucleus 11Li continues to attract interest both theoretically and experimentally, where a better determination of the ratio of the electric quadrupole moments between mass 11 and 9 was needed. Now a measurement at TRIUMF based on a β-detected nuclear-quadrupole resonance technique has yielded a value of Q(11Li)/Q(9Li) = 1.077(1). Here, the cross-fertilization between beam and detector developments has led to laser-resonant ionization becoming an essential ingredient in the production cycle of pure radioactive, sometimes isomeric, beams.

Nuclear-structure studies of exotic nuclei were the topic of many contributions at ARIS. There is progress on the theoretical side, with large-scale shell-model calculations in the vicinity of 78Ni leading to a unified description of neutron-rich nuclei between N = 40 and N = 50. The evolution of collectivity along the N = 40 isotones has provided many interesting experimental results. Moving from the strongly deformed N = Z nucleus 80Zr, collectivity decreases rapidly towards 68Ni, which has a high-lying 2+ state at 2.03 MeV, suggesting a doubly magic character. Going to 64Cr, there is a new deformed region, illustrated by a 2+ state at 470 keV, and research at the National Superconducting Cyclotron Laboratory at MSU has found an enhanced collectivity for 78Sr, with a quadrupole deformation parameter of β2 = 0.44.

Many of the talks at ARIS addressed the “island of inversion”. A recent result from REX-ISOLDE identifies an excited 0+ state in 32Mg, illustrating shape coexistence at the borders of the island. Many new results – a rotational band in 38Mg observed by BigRIPS, isotope shifts for 21–32Mg measured at ISOLDE, β-decay for chromium isotopes from Berkeley and shape-coexistence in 34Si and 44S – add to the understanding of this interesting region of the nuclear chart. A new island of inversion, indicated by data for 80–84Ga from the ALTO facility in Orsay, was also discussed.

Continuing with nuclear structure, data from GANIL and its MUST 2 array on d(68Ni, p)69Ni give access to the d5/2 orbital, which is crucial for understanding shell structure and deformation in this mass region. The reaction d(34Si, p)35Si shows a reduced spin–orbit splitting that, given the density dependence of the spin–orbit interaction, points to a central depletion of the nuclear matter density – a “bubble nucleus” – a topic also discussed in a theory talk.

The doubly magic nucleus 24O has attracted interest for a decade, from both experimental and theoretical viewpoints. At ARIS, the coupled-cluster approach was presented as an ideal compromise between computational cost and numerical accuracy in theoretical models, while the absence of bound oxygen isotopes between 24O and the classically expected doubly magic nucleus 28O presents a theoretical challenge.

Experimentally, there is an impressive series of data – over a wide range of elements – from the MINIBALL array at ISOLDE. One of the highlights here was the observation of shape coexistence in the lead region. A theory talk pointed out that the nuclear energy-density functional approach, both for mean-field and beyond-mean-field applications, is an efficient tool for calculations on medium-mass and heavy nuclei.

Early experiments with radioactive beams revealed exotic decay-modes such as β-delayed particle emission. Today these processes are well understood and used as workhorses to learn about the structure of exotic nuclei. The study of β-delayed three-proton emission from 43Cr and two-proton radioactivity from 48Ni using an Optical Time Projection Chamber at MSU was also presented at ARIS. Here it is clear that in future the study of the most exotic decay modes will use active targets, such as in the Maya detector developed at GANIL and the ACTAR-TPC project being planned by GANIL together with MSU. An interesting new result concerns the observation of β-delayed fission-precursors in the Hg-Tl region, where an unexpected asymmetric fragment-distribution has been observed for the β-delayed fission of 180Tl.

Unbound nuclei or resonance states are sometimes dismissed as “ghosts” without any physical significance. However, developments over the past 5–10 years have provided a huge amount of data, so that most of the previously empty spots on the nuclear chart for the light elements are now filled. The production of 10He and 12,13Li in proton-knockout reactions from 11Li and 14Be, respectively, is a particularly spectacular case. The knockout of a strongly bound proton from the almost unbound nucleus 14Be results in a 11Li nucleus that, together with two neutrons, shows features that can only be attributed to an unbound 13Li nucleus. Many of these resonance states might be populated in transfer reactions in inverse kinematics, in which the roles of beam and target are reversed: the exotic nucleus forms the energetic beam and is directed at a target of the light species that would conventionally be the beam. The HELIOS spectrometer, which will use neutron-rich beams from the CARIBU injector at the Argonne Tandem Linear Accelerator System, is a model for what might develop at many facilities in the future.

The super-heavy-element community was represented in several talks at ARIS. Having produced all elements up to Z = 118, the next step is to tackle the Z = 120 barrier, an exciting goal that could become a reality with reactions such as 54Cr+248Cm. Nuclear spectroscopy is also climbing towards ever higher mass numbers and elements, as demonstrated by data from JYFL for 254No. One exciting talk concerned the chemical identification of isotopes of element 114 (287,288Uuq), which is found to belong to group 14 (in modern notation) in the periodic table – the group that contains lead, tin, germanium, silicon and carbon.

The acquisition of data pertinent to nuclear astrophysics has grown tremendously thanks to the access to nuclei in the relevant regions of the nuclear chart. Results include the study at JYFL and at the Nuclear Physics Accelerator Institute (KVI), Groningen, of the β-decay of 8B, relevant to the solar-neutrino problem, and the work at ISOLDE, JYFL and KVI on the β-decays of 12N and 12B, which are important for the production of 12C in astrophysical environments. Data from ISOLDE and JYFL on the neutron-deficient nuclei 31Ar and 23Al, relevant for explosive hydrogen burning, were also discussed, as were results from MSU relating to the hot carbon–nitrogen–oxygen cycle and the αp-, rp- and r-processes in nucleosynthesis. GANIL has results on the reaction d(60Fe,p)61Fe, which is relevant for type II supernovae, while the Radioactive Ion Beam facility in São Paulo, Brazil, has data on the p(8Li,α)5He reaction. Calculations for proton scattering on 7Be in a many-body approach, combining the resonating-group method with the ab initio no-core shell model, were also described at the conference.

Exotic nuclei can also provide information about fundamental symmetries and interactions. The painstaking collection of data over decades has provided an extremely sensitive test of the unitarity of the top row of the CKM matrix. Today there are precise data for 13 super-allowed β-emitters, which give a value of 0.99990(60) for this quantity. In this context, there are plans for measurements with the Magneto Optical Trap at Argonne of β-neutrino correlations for 6He and the electric dipole moment for 225Ra. The high-precision set-ups – WITCH at CERN, LPC Trap at GANIL and WIRED at the Weizmann Institute – were also discussed at the conference. The claim is that this kind of experiment – the high-precision frontier – will to some extent complement the high-energy frontier in understanding the deepest secrets of nature.
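
The quantity quoted is the unitarity sum over the top row of the CKM matrix, which should equal exactly 1 if the matrix is unitary:

```latex
|V_{ud}|^{2} + |V_{us}|^{2} + |V_{ub}|^{2} \;=\; 0.99990(60) ,
```

with |Vud| dominated by the super-allowed 0+→0+ β-decay data mentioned above.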

Finally, a review of the different techniques using radioactive isotopes in solid-state physics presented the current state of the art, together with some recent results. This work was pioneered at CERN and has over the years become an important ingredient at many facilities.

In summary, ARIS 2011 turned out to be a successful merger of the former ENAM and RNB conferences (see box). The talks, supported by an excellent poster session, covered the field comprehensively. The talks are available on the conference website and the organizers had the excellent idea of putting the posters there too – a first, to be followed at future meetings.

Hadron therapy: collaborating for the future

In 1946, accelerator pioneer Robert Wilson laid the foundation for hadron therapy with his article in Radiology about the therapeutic interest of protons for treating cancer (CERN Courier December 2006 p24). Sixty-five years later, proton therapy has grown into a mainstream clinical modality. More than 60,000 patients worldwide have been treated since the establishment of the first hospital-based treatment centre in Loma Linda, California, in 1990 and various companies are now offering turn-key solutions for medical centres. Moreover, encouraging studies with other types of hadrons have resulted in the creation and planning of various dedicated facilities.

Hadron therapy is the epitome of a multidisciplinary and transnational venture: its full development requires the competences of physicists, physicians, radiobiologists, engineers and IT experts, as well as collaboration between research and industrial partners. The translational aspects are extremely relevant because the communities involved are traditionally separate and they have to learn to speak the same “language”. Ions that are considered “light” by physicists, such as carbon, are “heavy” for radiobiologists – and this is just one of many examples.

Although state-of-the-art techniques borrowed from particle accelerators and detectors are increasingly being used in the medical field for the early diagnosis and treatment of tumours and other diseases, medical doctors and physicists lack occasions to get together and discuss global strategies. The first Physics for Health (PHE) workshop was organized at CERN in 2010 precisely to develop synergies between these diverse communities. Preparations are now underway for a follow-up workshop, which will join forces with the International Conference on Translational Research in Radiation Oncology (ICTR). The ICTR-PHE 2012 conference will be held in Geneva from 27 February to 2 March 2012. The aim is to catalyse and enhance further exchanges and interactions between experts in this multidisciplinary field where medicine, biology and physics intersect.

The advantages of hadron therapy

The clinical interest in hadron therapy resides in the fact that it delivers precision treatment of tumours, exploiting the characteristic shape of the Bragg curve for hadrons, i.e. the dose deposition as a function of the depth of matter traversed. While X-rays lose energy slowly and mainly exponentially as they penetrate tissue, hadrons deposit almost all of their energy in a sharp peak – the Bragg peak – at the very end of their path.

The Bragg peak makes it possible to target a well defined cancerous region at a depth in the body that can be tuned by adjusting the energy of the incident particle beam, with reduced damage to the surrounding healthy tissues. The dose deposition is so sharp that new techniques had to be developed to treat the whole target. These fall under the categories of passive scattering, where one or more scatterers are used to spread the beam, and spot scanning, where a thin, pencil-like beam covers the target volume in 3D under the control of sweeping magnets coupled to energy variations.
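
To make the energy–depth tuning concrete, here is a minimal sketch – not code from any treatment-planning system – based on the approximate Bragg–Kleeman range–energy relation for protons in water, R ≈ αE^p; the constants α ≈ 0.0022 cm·MeV^–p and p ≈ 1.77 are rough literature values assumed for illustration:

```python
# Minimal sketch: estimate the proton energy that places the Bragg peak at a given
# depth in water, using the approximate Bragg-Kleeman rule R = ALPHA * E**P.
# ALPHA and P are rough literature values for protons in water, not clinical data.

ALPHA = 0.0022   # cm / MeV**P
P = 1.77

def range_cm(energy_mev: float) -> float:
    """Approximate proton range in water (cm) for a given kinetic energy (MeV)."""
    return ALPHA * energy_mev ** P

def energy_for_depth(depth_cm: float) -> float:
    """Invert the range-energy relation: beam energy (MeV) placing the peak at depth_cm."""
    return (depth_cm / ALPHA) ** (1.0 / P)

# Example: energy steps for a spot-scanning sweep covering a target from 10 cm to 15 cm depth.
for depth in (10.0, 12.5, 15.0):
    print(f"depth {depth:4.1f} cm  ->  ~{energy_for_depth(depth):5.1f} MeV")
```

Varying the beam energy in small steps shifts the Bragg peak in depth, which is how a spread-out peak covering the whole tumour volume is built up.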

While the advantages of protons over photons are quantitative in terms of the amount and distribution of the delivered dose, several studies show evidence that carbon ions damage cancer cells in a way that the cells cannot repair themselves. Carbon therapy may be the optimal choice to tackle radio-resistant tumours; other light ions, such as helium, are also being investigated.

Although hadron therapy has largely shown its potential scientifically, the relative complexity of the required infrastructure limits its exploitation. “Hadron therapy is not a replacement for conventional radiotherapy or surgery, but is an additional tool in the toolbox of the oncologists,” confirms Robert Miller of the Mayo Clinic in the US, which has just embarked on the construction of two proton-therapy facilities. Indeed, hadron therapy is mostly used for treating tumours that are located close to vital organs that would be unacceptably damaged by X-rays, or in paediatric oncology, where quality of life and late side effects are a major concern.

At present, the world map of hadron therapy is divided into three distinct regions: Asia (mainly Japan), the US and Europe. In addition, three proton-therapy facilities are operational in Russia and one in South Africa.

Japan is the uncontested leader in treatment and clinical studies with carbon ions (CERN Courier June 2010 p22). By the end of 2010, its two major facilities – the Heavy-Ion Medical Accelerator in Chiba (HIMAC) and the Hyogo Ion Beam Medical Center – had treated more than 90% of the roughly 6600 patients irradiated with carbon ions worldwide. Clinical experience in the Japanese centres has not only demonstrated that carbon therapy is more effective than conventional photon radiotherapy on certain types of tumours but also that, with respect to both protons and photons, a significant reduction of the overall treatment time and the number of irradiation sessions can be achieved. In addition to the existing facilities, Japan is planning the construction of two more centres for carbon-ion therapy and two more for proton therapy. Following this lead, China and other countries in Asia have constructed or are planning several carbon-ion and proton-therapy facilities.

In the US alone more than 30,000 patients have already been treated with protons over the past 20 years, half of them at Loma Linda. There are currently six active proton facilities, three more under construction and a number of centres announced or planned for the near future. When hadron therapy was still confined to facilities operating within particle-physics laboratories, the US pioneered the use not only of protons but also of other ions: between 1957 and 1992, the Bevalac in Berkeley treated about 2500 cancer patients with particles including neon, carbon, silicon and argon. Today, however, there is no therapy centre in the US delivering ions other than protons. Plans for the future include only an R&D facility in the San Francisco Bay area called SPARC, which would be a joint effort between Stanford/SLAC and Lawrence Berkeley National Laboratory/University of California San Francisco, and a carbon and helium facility at the Mayo Clinic.

Europe has 10 active proton facilities, with five more planned or under construction. Capitalizing on the experience gained from the carbon-therapy programme at GSI in Darmstadt and at Heidelberg, Europe is now witnessing the birth of “dual” centres that are capable of delivering beams of both protons and carbon ions. Two major centres were recently completed: Heidelberg Ion Therapy Centre (HIT), which started treatments at the end of 2009 and has irradiated about 500 patients with carbon ions to date; and the Centro Nazionale di Adroterapia Oncologica (CNAO) in Pavia, which started treating the first patient with protons in September and will launch the preclinical phase with carbon ions in the coming months. The MedAustron dual facility in Wiener Neustadt is currently under construction (CERN Courier October 2011 p33) and more centres of a similar nature are at different stages in planning and implementation in France and Germany.

HIT is the first facility in the world to be equipped with a gantry for carbon ions, i.e. a structure to rotate the particle beam and guide it to the patient at a chosen angle. Using the gantry, radio-oncologists can select the optimal beam direction to minimize the amount of healthy tissue traversed by the hadrons before reaching the tumour. They can also irradiate the target from multiple angles – a technique that, thanks to the overlapping beams, delivers to the target a total dose that is much higher than in the surrounding normal tissues. CNAO relies on an accelerator design implemented by the Terapia con Radiazioni Adroniche (TERA) Foundation, based on the results of the Proton Ion Medical Machine Study (PIMMS) hosted at CERN from 1996 to 1999. The CNAO facility will deliver horizontal and vertical beams and a gantry will be added at a later stage.

Co-ordination and training

With the blossoming of carbon therapy in Europe, the European Network for Light Ion Therapy (ENLIGHT) considered that the time was right to leverage the experience at the various facilities, as well as the wealth of advances in beam delivery for conventional radiation therapy, and improve the technology with the aim of more effective and affordable cancer treatments with particles. While developing and optimizing the next-generation facilities remains the community’s primary goal, it is also of paramount importance that the existing centres collaborate intensively and that researchers, clinicians and patients have protocols to access these structures. Within this framework, the Union of Light Ion Centres in Europe (ULICE) project was launched in September 2009, funded by the European Commission.

ULICE is a collaboration of 20 partners led by Roberto Orecchia, scientific director of CNAO. The project involves all of the existing and planned European carbon-therapy facilities, as well as the two leading European companies in the hadron-therapy sector, IBA and Siemens. The participation of private companies ensures that specific issues related to possible future industrial production are addressed. IBA has designed and installed the majority of clinically operating proton-therapy facilities in the world and is developing innovative and more affordable single-room proton systems, as well as superconducting cyclotron solutions for carbon. Siemens Healthcare is one of the world’s largest providers of medical solutions and was the first company outside of Asia to enter the carbon-ion therapy market. The company delivered the complete patient environment at HIT and the treatment-planning system at CNAO.

ULICE is a four-year project built around three pillars: Joint Research Activities, which focus on the development of instruments and protocols; Networking, to increase co-operation between facilities and research communities wanting to work with the research infrastructure; and Transnational Access, which aims to allow researchers to use the facilities and to enable radiobiological and physics experiments to take place.

At the recent mid-term review meeting in Marburg, Richard Pötter, a radiation oncologist at the University of Vienna and co-ordinator of the Joint Research Activities of ULICE, confirmed that the first achievements of the research work are extremely encouraging. The existing clinical-study protocols worldwide have been reviewed to start defining common guidelines for patient selection. Specific studies have focused on setting up appropriate structures for a comprehensive and prospective multicentre clinical-research programme and the development of a dosimetry protocol. Important steps forward have also been made in defining uniform methods and concepts for irradiation doses and tumour volumes in radio-oncology, to create a common language not only within the consortium but across all of the communities involved in different forms of radiotherapy. The ULICE consortium is working hard to develop new concepts for more compact and affordable gantries: the HIT gantry is a steel giant 25 m long and 13 m high, and alternative designs are clearly needed.

Within the activities of Transnational Access, the ULICE partners are examining the complex task of setting up a structure to allow access to the existing European facilities for patients, clinical and experimental research, as well as for clinical training and education. Japan is once again an example to follow, with the International Open Laboratory (IOL) programme of the National Institute of Radiological Sciences launched in 2008 to grant beam time at HIMAC to external researchers. There are currently four active IOLs with Columbia University, Colorado State University, the University of Sussex, Karolinska Institutet and GSI. As of summer 2011, researchers from eligible countries can apply to take part in research activities or submit experimental proposals in the clinical, radiobiological and physics fields at the University Hospital of Heidelberg and at CNAO. In the words of Jürgen Debus, medical director of the Department of Radiation Oncology and Radiation Therapy of Heidelberg University Hospital and co-ordinator of the ULICE Transnational Access: “A technology has worth in the medical field only if it is spread and if everyone can participate in its evolution with their experience and feedback.” Applications for participation in the Transnational Access programme will be reviewed by a multicentre scientific committee and successful applicants will be granted free access thanks to the European Union Transnational Access funding. In the same framework, ULICE is also developing an international web-based documentation and data-management system, which will be an essential tool for transnational and multicentre clinical studies in particle therapy.

In the coming years, the project will focus on expanding and consolidating the transnational access and on developing innovative gantry designs. The support of ENLIGHT will be instrumental in dissemination, communication and networking, helping the project reach out to the widest possible community.

ENLIGHT also actively supports the creation of the next generation of the necessary highly specialized experts through the Particle Training Network for European Radiotherapy (PARTNER), funded by the European Commission under the Marie Curie Initial Training Network programme (CERN Courier March 2010 p27). Both ENLIGHT and PARTNER are co-ordinated by Manjit Dosanjh at CERN. PARTNER is offering research and training opportunities in leading European institutions and companies to 25 young researchers, most of whom are also working towards PhDs. At the recent annual meeting in Marburg, the presentations of the individual projects clearly displayed the variety of topics being addressed and the quality of the research. PARTNER is now in its fourth and final year, and in a few months it will be time to review the results that have been achieved.

• For more about ULICE, see http://cern.ch/ULICE.

NA61/SHINE: more precision for neutrino beams

CCshi1_10_11

Accelerator neutrino beams are currently the object of intense discussion and development. They provide a necessary tool for the detailed study of neutrino oscillations and in particular the observation of potential CP-violating effects that arise from the interference of transitions among the three known species of neutrino. Neutrino interaction cross-sections are tiny, so the challenge in studying their properties has been to produce ever-increasing beam intensities. The next challenge in neutrino physics will be to establish precisely the parameters of the oscillations and then compare the oscillations of neutrinos with anti-neutrinos (or the oscillation probability as a function of neutrino energy) to search for CP-violation. This will require precise measurements of the transitions of neutrinos into each other, which will in turn demand a much better knowledge of the neutrino beams.

At present – and probably for the next decade – neutrino beams are generated by the conventional technique: a beam of multi-giga-electron-volt protons, as powerful as possible (up to around 500 kW beam power), is directed at a target to produce a large number (10¹² or more) of hadrons, mainly pions with a small admixture (5–10%) of kaons. These are then focused in the direction desired for the neutrino beam and they decay – producing neutrinos – in a decay tunnel.
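The energy of each neutrino is fixed by the simple two-body kinematics of the dominant π → μν decay, which is why the pion spectrum off the target translates so directly into the neutrino spectrum. The following sketch uses only standard relativistic kinematics; the pion energy and angles in the example are illustrative assumptions, not numbers taken from this article.

```python
import math

M_PI = 0.13957  # charged-pion mass [GeV]
M_MU = 0.10566  # muon mass [GeV]

def nu_energy(e_pi, theta):
    """Neutrino energy [GeV] from pi -> mu nu decay of a relativistic pion of
    energy e_pi [GeV], observed at a small lab angle theta [rad] to the pion direction."""
    gamma = e_pi / M_PI
    return (1.0 - (M_MU / M_PI) ** 2) * e_pi / (1.0 + (gamma * theta) ** 2)

# On axis the neutrino carries ~43% of the pion energy; a few degrees off axis
# the spectrum peaks at a lower, narrower energy - the principle of off-axis beams.
print(nu_energy(2.0, 0.0))     # ~0.85 GeV on axis
print(nu_energy(2.0, 0.044))   # ~0.6 GeV at roughly 2.5 degrees off axis
```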

In the absence of a good theory of hadronic interactions, a precise prediction of the properties of such neutrino beams requires measurements of particle production at an unprecedented level of precision. The role of the NA61/SHINE experiment at CERN’s Super Proton Synchrotron (SPS) is to perform these hadron production measurements. More specifically, it has taken data for the T2K experiment in Japan, both with a thin carbon target and a full replica of the target used in T2K. These data have already proved important for the extraction of the first results on electron-neutrino appearance and muon-neutrino disappearance in T2K. As statistics increase in T2K, they will become more and more essential.

CCshi2_10_11

The collaboration behind the SPS Heavy Ion and Neutrino Experiment (SHINE, approved at CERN as NA61) is an unlikely marriage between aficionados of the heaviest and lightest beams on offer. Ions as heavy as lead nuclei have been accelerated in the SPS, while neutrinos have the lightest mass (now famously non-zero) of all particles apart from photons. So what is the unifying concept between these communities that are a priori so different?

The NA49 detector in CERN’s North Area offers excellent tracking with its immense set of time-projection chambers (TPCs), time-of-flight (TOF) detectors and flexible beamline. To perform systematic measurements at energies at the onset of quark–gluon plasma creation, the heavy-ion physicists were interested in upgrading the detector to allow higher event statistics and lower systematic uncertainties. At the same time, neutrino physicists, attracted by the extensive coverage of the detector, were interested in running it in a simple configuration, but also with high statistics, so as to have the first data ready in time for the start of T2K.

The main upgrades relevant for all of the NA61/SHINE physics programmes concerned the TPC read-out, an extension of the TOF detectors and an upgrade of the trigger and data-acquisition system. Figure 1 shows the upgraded detector. Its acceptance fully covers the kinematic region of interest for T2K.

The NA61/SHINE experiment was approved in April 2007 and took data in a pilot run the following September, with 600,000 triggers on the thin carbon target and 200,000 triggers on the replica (long) T2K target. More extensive data-taking for the T2K physics programme took place in 2009 and 2010, both with thin (6 million triggers in 2009) and long targets (10 million triggers in 2010). In parallel, data were recorded for the NA61/SHINE heavy-ion and cosmic-ray programmes.

As a first priority, the cross-sections for producing charged pions from 30 GeV protons on carbon were measured with the thin-target data taken in 2007 (Abgrall et al. 2011). The systematic errors are typically in the range of 5–10% and smaller than the statistical errors. These data have already been used for an improved prediction of the neutrino flux in T2K (Abe et al. 2011). Furthermore, they also provide important input to improve the hadron-production models needed for the interpretation of air showers initiated by ultra-high-energy cosmic rays.

CCshi3_10_11

However, these first NA61/SHINE measurements provide only a part of what is needed to predict the neutrino flux in T2K. A substantial fraction of the high-energy flux, and in particular the electron-neutrino contamination, originates from the decay of kaons. Charged kaons are readily identified in NA61/SHINE from the suite of particle-identification techniques – dE/dx in the TPC and the TOF in the upgraded detector (see figure 2) – and a first set of cross-sections has been produced already. Neutral kaons can be reconstructed using the V0-like topology of K0S → π+π− decays.
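In essence, the V0 reconstruction combines an oppositely charged track pair under the pion-mass hypothesis and selects pairs whose invariant mass lies near the K0S mass. The snippet below is a minimal, self-contained illustration with hypothetical momenta; it is not NA61/SHINE reconstruction code.

```python
import math

M_PI = 0.13957  # charged-pion mass [GeV]

def invariant_mass(p1, p2, m=M_PI):
    """Invariant mass [GeV] of two tracks with 3-momenta p1, p2 [GeV],
    both assumed to be charged pions, as in a K0S -> pi+ pi- candidate."""
    e1 = math.sqrt(m ** 2 + sum(c * c for c in p1))
    e2 = math.sqrt(m ** 2 + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt((e1 + e2) ** 2 - (px ** 2 + py ** 2 + pz ** 2))

# Back-to-back 206 MeV pions reproduce the K0S mass of about 0.498 GeV.
print(invariant_mass((0.206, 0.0, 0.0), (-0.206, 0.0, 0.0)))
```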

A large fraction (up to 40%) of the neutrinos originates from particles produced by re-interactions of secondary particles in the target, which for T2K is 90 cm long. This is difficult to calculate precisely and it motivates a careful analysis of the data taken with the long target. Long-target data are notoriously more difficult to reconstruct and analyse but they provide much more directly the information needed for extracting the neutrino flux. The NA61/SHINE collaboration presented a pilot analysis at the NUFACT meeting at CERN in early August (Abgrall 2011). The ultimate precision will come from the full analysis of the long-target data taken in 2010. The collaboration is working hard to complete these analyses in time for the high-statistics measurements that will become possible in T2K when the experiment resumes data-taking after recovering from the damage caused by the massive earthquake in north-eastern Japan in March this year.

Trends in isotope discovery

CCvie1_10_11

It all started innocently enough with a review article I wrote in 2004 about the nuclear driplines, which described the exploration of the most neutron- and proton-rich isotopes (Thoennessen 2004). The article included tables listing the first observation of each isotope along the proton- and neutron-dripline. The idea to expand this list to cover all isotopes lingered for a few years until in 2007 I mentioned it to an undergraduate student as a possible research project. At the beginning we did not appreciate the magnitude of the project; after all, there are more than 3000 isotopes presently known. However, with the help of many undergraduate students performing elaborate literature searches and carefully judging the merits of the individual papers we continued, even though we extrapolated that the project would take about 10 years to reach completion.

We have described details of the discovery of each isotope in short paragraphs, arranged by elements, which are published in a series of articles in Atomic Data and Nuclear Data Tables. In a summary table, the first author, year, journal, laboratory, country and method of discovery are presented. Now, only four years after we started, the project is almost completed. We finished the initial discovery assignment for all isotopes and are currently finalizing the paragraphs for the last four elements: actinium, thorium, protactinium and uranium.

The master table of all elements is a rich source of interesting information. Along the way it has been fascinating to see how not only the physics and technology changed over time, but also the style of the papers. For example, the average number of authors per article increased from 1.1 in 1930 to 16.4 in 2000.

One piece of information – the number of isotopes discovered by the different laboratories around the world as a function of time – was recently highlighted by a Nature News article and has drawn a lot of attention over the past few weeks (Samuel Reich 2011). The article reveals the labs and individuals that have discovered the largest number of new isotopes. The results show that while Lawrence Berkeley National Laboratory leads by almost a factor of two, other laboratories in Japan and Europe – most notably GSI in Germany – have made most of the new discoveries in the past couple of decades. A graph displaying the number of isotopes discovered per laboratory as a function of time was featured as the “Trendwatch” in a recent issue of Nature (Trendwatch 2011). The graph seems to indicate that the top five laboratories are Berkeley, Cavendish, GSI, RIKEN and JINR in Dubna; however, RIKEN appears there only because of its large number of recent discoveries. In reality, CERN’s ISOLDE has played a pioneering role in the discovery of isotopes, especially with the isotope separation on-line (ISOL) technique, and ranks number five on the list.

Now why is the information contained in the database significant? The discovery of isotopes has a long history beginning with the discovery of the radioactivity of uranium (later identified as ²³⁸U) by Becquerel in 1896. The discovery of new isotopes is closely linked to developments of new techniques and new accelerators (Thoennessen and Sherrill 2011). Creating and detecting new isotopes is the first prerequisite to being able to study them, automatically putting the laboratories that produce the most exotic isotopes in the best position for doing the most exciting science with these isotopes. The techniques to produce, separate and identify these isotopes are also critical to make and deliver clean beams of less exotic isotopes at higher intensities, which can then be used to explore the properties of these nuclei. The recent conference on Advances in Radioactive Isotope Science, ARIS 2011, highlighted not only the tremendous interest in the field and the most recent advances in physics but also the technical developments making these experiments with exotic isotopes possible (ARIS 2011 charts the nuclear landscape).

The data presented in the Trendwatch indicate that the balance of power pushing the field forward has shifted away from the US. The article did not stress that 2010 was the most productive year yet for the discovery of isotopes: for the first time, more than 100 isotopes were discovered in a single year. This points to a renaissance of the field, which is driven by the start of a new accelerator system in RIKEN, Japan, and new technical developments at GSI. During the past 20 years, most new isotopes were discovered at projectile-fragmentation facilities, so the next major step will be the new accelerators currently being designed at the Facility for Antiproton and Ion Research (FAIR) at GSI and the Facility for Rare Isotope Beams (FRIB) at Michigan State University in the US. FRIB is absolutely critical for the US to play a leading role in nuclear physics in the future.

Superconductivity and the LHC: the early days

Comparison of dipoles

As the 1970s turned into the 1980s, two projects at the technology frontier were battling it out in the US accelerator community: the Energy Doubler, based on Robert Wilson’s vision to double the energy of the Main Ring at Fermilab; and Isabelle (later the Colliding Beam Accelerator) at Brookhaven. The latter was put in question by the difficulty in increasing the magnetic field from 4 T to 5 T – which turned out to be much harder than originally thought – and eventually gave way to Carlo Rubbia’s idea to transform CERN’s Super Proton Synchrotron into a proton–antiproton collider, allowing the first detection of W and Z particles. Fermilab’s project, however, became a reality. Based on 800 superconducting dipole magnets with a field in excess of 4 T, it involved the first-ever mass production of superconductor and represented a real breakthrough in accelerator technology. For the first time, a circular accelerator had been built to work at a higher energy without increasing its radius.

When the Tevatron began operation in 1983, initially accelerating protons to just over 500 GeV, Europe was just starting to build HERA at DESY. This electron–proton collider included a 6 km ring of superconducting magnets for the 820 GeV protons and it came into operation in 1989. The 5 T dipoles for HERA were the first to feature cold iron and – unlike the Tevatron magnets, which were built in house – they were produced by external companies, thus marking the industrialization of superconductivity.

Meanwhile the USSR was striving to build a 3 TeV superconducting proton synchrotron (UNK), which was later halted by the collapse of the Soviet Union, while at CERN the idea was emerging to build a Large Hadron Collider in the tunnel constructed for the Large Electron–Positron (LEP) collider (CERN Courier October 2008 p9). However, the US raised the bid with a study for the “definitive machine”. The Superconducting Super Collider (SSC), which was strongly supported by the US Department of Energy and by President Reagan, would accelerate two proton beams to 20 TeV in a ring of 87 km circumference with 6.6 T superconducting dipoles. With this size and magnetic field, the SSC would require decisive advances in superconductors as well as in other technologies. When the then director-general of CERN, Herwig Schopper, attended a high-level official meeting in the US and asked what influence on the scientific and technical goals the Europeans could have by joining the project, he was told “none, either you join the project as it is or you are out”. This ended the possibility of collaboration and the competition began.

To compete with the SSC, the LHC had to fight on two fronts: increase the magnetic field as much as possible so as to reduce the handicap of the relatively small circumference of the LEP tunnel; and increase the luminosity as much as possible to compensate for the inevitable lower energy. In addition, CERN had to cope with a tunnel with a cross-section that was tiny for a hadron collider, which many considered a “poisoned gift” from LEP. However, the interest for young physicists and engineers lay in the “impossible challenges” that the LHC presented.

To begin with, there was the 8–10 T field in a dipole magnet. Such a large step with respect to the Tevatron would require both the use of a large superconducting cable to carry 13 kA in operating conditions of 10 T – almost double the capability of existing technology – and cooling by superfluid helium at 1.8–1.9 K. Never previously used in accelerators, superfluid helium cooling had been developed for TORE Supra, the tokamak project led by Robert Aymar, although on a smaller scale. Then, to fit the existing LEP tunnel, the magnets would have to be of an innovative “two-in-one” design – first proposed by Brookhaven but discarded by US colleagues for the SSC – where two magnetic channels are hosted in the iron yoke within a single cold mass and cryostat. In this way, a 1 m diameter cryostat could house two magnets, while the geometry of the SSC (with separate magnets but with 30% lower field than the LHC) simply could not fit in the LHC tunnel. Figure 1 compares the main-dipole cross-sections of the various hadron machines.
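To get a feel for why such a cable and such a low temperature were needed, a standard back-of-the-envelope estimate for a cosθ ("sector") dipole coil relates the bore field to the overall current density and the coil width. The numbers below are purely illustrative assumptions, not figures quoted in this article, but they show that a field in the 8 T region already demands an overall current density of several hundred A/mm² across a coil only a few centimetres wide.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability [T*m/A]

def sector_dipole_field(j_overall, coil_width_m, half_angle_deg=60.0):
    """Bore field [T] of an ideal sector coil (textbook approximation):
    B = (2/pi) * mu0 * J * w * sin(phi), with J the overall current density
    [A/m^2], w the radial coil width [m] and phi the sector half-angle."""
    return (2.0 / math.pi) * MU0 * j_overall * coil_width_m * math.sin(math.radians(half_angle_deg))

# Illustrative numbers: ~400 A/mm^2 overall over a ~30 mm coil gives ~8.3 T,
# roughly the regime the LHC dipoles had to reach.
print(sector_dipole_field(400e6, 0.030))
```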

A critical milestone

In 1986 R&D on the LHC started under the leadership of Giorgio Brianti, quietly addressing the three issues specific to the LHC (high field, superfluid helium and two-in-one), while relying on the development done for HERA and especially for the SSC for almost all of the other items that needed to be improved. The high field was the critical issue and had to be tested immediately. Led by Romeo Perin and Daniel Leroy, CERN produced the first LHC coil layout and provided the first large superconducting cable to Ansaldo Componenti. This company then manufactured on its own a 1-m long dipole model – single bore, without a cryostat – that was tested at CERN in 1987. Reaching a field of 9 T at 1.8 K, it proved the possibility of reaching the region of 8–10 T (CERN Courier October 2008 p19). This was arguably the most critical milestone of the project because it gave credibility to the whole plan and began to cast doubt on the strategy for the SSC.

Structure of the superconducting cable

These results were obtained with niobium-titanium alloy (Nb-Ti), the workhorse of superconductivity. CERN also had a parallel development with niobium-tin (Nb3Sn) that could have produced a slightly higher field at 4.5 K, with standard helium cooling. This development, pursued with the Austrian company Elin and led by CERN’s Fred Asner, produced a 1-m long 9.8 T magnet and also a 10.1 T coil in mirror configuration, the first accelerator coil to break the 10 T wall. However, in 1990 the development work on Nb3Sn was stopped in favour of the much more advanced and practical Nb-Ti operating at 1.9 K. This was a difficult decision, as Nb3Sn had a greater potential than Nb-Ti and would avoid the difficulty of using superfluid helium, but it was vitally important to concentrate resources and to have a viable project in a short time. The decision was similar to that taken by John Adams in the mid-1970s to abandon the emerging superconducting technology in favour of more robust resistive magnets for CERN’s Super Proton Synchrotron.

For the development of the superconducting cable there were three main issues. First, it should reach a sufficient critical current density with a uniformity of 5–10% over the whole production, which also had to be guaranteed in the ratio between the superconductor and the stabilizing copper matrix, illustrated in figure 2. The critical current was to be optimized at 11 T at 1.9 K, maximizing the gain when passing from 4.2 to 1.9 K. The second issue was to reduce the size of the superconducting filaments to 5 μm without compromising the critical current. This required, among other features, the development of a niobium barrier around Nb-Ti ingots. Third was to control the dynamic (ramping up) effect in a large cable, as some effects vary as the cube of the width.

Again, the strategy was to concentrate on specific LHC issues – the large cable, the critical-current optimization at 1.9 K – and rely on the SSC’s more advanced development for the other issues. There is, indeed, a large debt that CERN owes to the SSC project for the superconductor development. However, when the SSC project was cancelled in 1993, the problem of eliminating the dynamic effect arising from the resistance between the strands composing the cable was still unresolved – and it became urgent in view of the results on the first long magnets in 1994 and after. CERN carried out intense R&D and found a solution suitable for mass production only relatively late, at the end of the 1990s. This involved controlled oxidation, after cable formation, of the thin layer of tin-silver alloy with which all the copper/Nb-Ti strands were coated – a technology that was a step beyond the SSC development.

Returning to the magnet development, after the success of the 1987 model magnet, which was replicated by another single-bore magnet that CERN ordered from Ansaldo, the R&D route split into two branches. One concerned the manufacture of 1-m long full-field LHC dipole magnets to prove the concept of high fields in the two-in-one design, with superconducting cable and coil geometry close to the final ones. A few bare magnets, called Magnet Twin Aperture (MTA), were commissioned from European industry (Ansaldo, the Jeumont-Schneider consortium, Elin, Holec) under the supervision of Leroy at CERN.

The second line of development lay in proving the two-in-one concept in long magnets and a superfluid-helium cryostat. This involved assembling superconducting coils from the HERA dipole production, which had ended in 1988, in a single cold mass and cryostat, the Twin Aperture Prototype (TAP). The magnet, built under the supervision of Jos Vlogaert, with the cryostat and cold mass under Philippe Lebrun, was tested successfully in 1990, reaching 5.7 T at 4.2 K and 7.3–8.2 T at 1.8 K – thus supporting the choices of the two-in-one magnet design, of the superfluid helium cooling and of the new cryostat design.

At the same time, the LHC dipole was designed in the years 1987–1990, featuring an extreme variation: the “twin” concept, where the two coil apertures are fully coupled, i.e. with no iron between the two magnetic channels (figure 3). We now take this design for granted, but at the time there was scepticism within the community (especially across the Atlantic); it was supposed to be much more vulnerable to perturbations because of the coupling and to present an irresolvable issue with field quality. It is to the great credit of the CERN management and especially Perin, who for a long time was head of the magnet group, that they defended this design with great resilience – because among many advantages it also brought an important 15% saving in cost.

Schematic of early options for the LHC dipole

The results of the first sets of twin 1-m long magnets came in 1991–1992. Some people were disappointed because they felt that the results fell short of the 10 T field “promised” in the LHC “pink book” of 1990. However, anyone who knows superconductivity greatly appreciated that the first generation of twins went well over 9 T. This was already a high field and only 5–10% less than expected from the so-called “short sample” (the theoretical maximum inferred by measuring the properties of a short 50–70 cm length of the superconducting cable); accelerator magnets normally work at 80%, or less, of the short-sample value. The results of the 1-m LHC models also made it clear that the cable’s mechanical and electrical characteristics and the field quality of the magnet (both during ramp and at the flat top) were not far from the quality required for the LHC.

A final step would be to combine the two branches of the development work and put together magnets of the twin design with a 10 m cold mass in a 1.8 K cryostat, to demonstrate that full-size LHC dipoles of the final design were feasible. However, the strict deadline imposed by the then director-general, Rubbia, dictated that the LHC should have the same time-scale as the SSC and be ready at the end of 1999. This meant that CERN was forced to launch the programme for the first full-size LHC prototypes in 1988, i.e. well before the end of the previous step, the construction in parallel of 1-m LHC MTA models and the 10-m long TAP.

At this point, CERN was just finishing construction of LEP and beginning work on industrialization of the components for LEP2; it was a period of shortage of personnel and financial resources (not a new situation). So Brianti and collaborators devised a new strategy: for the first time CERN would seek strong participation from national institutes in member states in the accelerator R&D and construction. In 1987–1988 the president of INFN, Nicola Cabibbo, and CERN’s director-general, Schopper, agreed – with a simple exchange of letters (everything was easier in those days) – that INFN would make an exceptional contribution to the LHC R&D phase. The total value was about SwFr12 million (1990 values) to be spread over eight years.

Towards real prototypes

In 1988 and 1989, INFN and CERN ordered LHC-type superconducting cables for long magnets and in 1989 INFN ordered two 10-m long twin dipoles from Ansaldo Componenti in Italy, some nine months before CERN had the budget to order three long dipoles, one from Ansaldo and two from Noell, a German company that had been involved in the construction of HERA quadrupoles. A fourth CERN long magnet, without the twin design, was ordered from the newly formed Alstom-Jeumont consortium (even at CERN some people still doubted the effectiveness of the twin design). The effort was decisive in being able by 1993 to have the magnets qualified by individual tests and then put into a string, consisting of dipoles and quadrupoles connected in series to simulate the behaviour of a basic LHC cell.

Parallel to the INFN effort, the French CEA-Saclay institute established a collaboration with CERN and took over the construction of the first two full-size superconducting quadrupoles for the LHC. While CERN provided specifications and all of the magnet components (including superconducting cable), CEA did the full design and assembly of these quadrupoles, for a value of a few million Swiss francs over the eight years of R&D (CERN Courier January/February 2007 p25). This was the start of a long collaboration; the French also continued to support the project after the initial R&D, throughout the industrialization and construction phases, with an in-kind contribution on quadrupoles, cryostats and cryogenics worth about SwFr50 million (split between CEA and CNRS-IN2P3).

The challenge of the prototyping was hard and covered many aspects. In particular for the dipoles, CERN first had to convince industry to pay enough attention and to invest resources in the LHC; the allure of the SSC, a much larger project (6000 main dipoles of 15 m length, 2000 quadrupoles, etc), was difficult to ignore. CERN’s project was much more challenging technically, with the required accuracy of the tooling a factor of five or so higher than for the HERA magnets. There was also the usual fight in a prototyping phase: good results required building expensive tooling for one or two magnets, with insufficient budget and no certainty that the project would be approved and the tooling cost thus paid for.

Short straight section

A delay of one year was the price to pay for the many developments and adjustments. Meanwhile, in 1993 the project had to pass a tough review devoted to the cryo-magnet system led by Robert Aymar, who as CERN’s director-general 10 years later would collect the fruit of the review. With the review over and completion of the long magnet prototypes approaching, the credibility of the LHC project increased. In autumn 1993, the SSC came to a halt – certainly because of high and increasing cost (more than $12 billion) and the low economic cycle in the US, but also because the LHC now seemed a credible alternative to reach similar goals at a much lower cost ($2 billion in CERN accounting). Rubbia, near the end of his mandate as director-general – the period most critical for the R&D phase – led the project without rival. In a symbolic coincidence, the demise of the SSC occurred at the same time as leadership of the LHC project passed from Giorgio Brianti, who had led the project firmly from its birth through the years of uncertainty, to Lyn Evans, who was to be in charge until completion 15 years later. The end of the SSC and the green light for the LHC was marked by the delivery to CERN of the first INFN dipole magnet in December 1993, just in time to be shown to the Council. This was followed four months later by the second INFN magnet and then by the CERN magnets, as well as by the two CEA quadrupoles designed and built by the team of Jacques Perot and later Jean-Michel Rifflet (figure 4).

Returning to the first dipole, which had been delivered from INFN at the end of 1993, a crash programme was necessary to overcome an unexpected problem (a short circuit in the busbar system – a problem that in a different form would later plague the project), so as to test it in time for a special April session of the Council in 1994. The magnet passed with flying colours, going above the operational field of 8.4 T at the first quench, beyond 9 T in two quenches, and reaching a first quench above 9 T after a thermal cycle, i.e. full memory (figure 5). Its better-than-expected performance was actually misleading, giving the idea that construction of the LHC might be easy; in fact, it took a long six years before another equally good magnet was again on the CERN test bench. However, the other 10-m long magnets performed reasonably well and, with the two very good CEA quadrupoles (3.5 m long), CERN set up the first LHC magnet string, to test it thoroughly and finally receive the approval of the project in December 1994.

The first 10 m LHC dipole prototype

Many other formidable challenges were still to be resolved on the technical, managerial and financial sides. These included: the non-uniformity of quench results and the problem of retraining that plagued the second generation of LHC prototypes; the unresolved question of the inter-strand resistance; the change from aluminium to austenitic steel as the material for the collars, implemented by Carlo Wyss; and the lengthening of the magnets from 10 m to 15 m with the consequent curvature of the cold mass, etc.

Looking back at the period 1985–1994, when the base for the LHC was established, it is clear that a big leap forward was accomplished during those years. The vision initiated by Robert Wilson for the Tevatron was brought to a peak, pushing the limit of Nb-Ti to its extreme on a large scale. New superconducting cables, new superconducting magnet architectures and new cooling schemes were put to the test, in the constant search for economic solutions that would be applicable later to large-scale production. This last point is an important heritage that the LHC leaves to the world of superconductivity: the best-performing solution is not always really the best. Economics and large-scale production are very important when a magnet is part of a large system and integration is critical. “The best is the enemy of the good” has been the guiding maxim of the LHC project – a lesson from the LHC for the world of superconductivity in this 100th anniversary year.

Gravitation: Foundations and Frontiers

By T Padmanabhan
Cambridge University Press
Hardback: £50 $85
E-book: $68

CCboo3_09_11

The general theory of relativity – the foundation of gravitation and cosmology – may be as widely known today as Newton’s laws were before Einstein proposed their geometric interpretation. That was around 100 years ago, yet many unanswered questions and issues are being revisited from the current perspective, such as: why is gravity described by geometry and why is the cosmological constant so extraordinarily fine-tuned in comparison with the scale of elementary particles?

In an active research field – where the universe at large meets the discoveries in particle physics – there is much need for textbooks based on research that address gravity in depth. Thanu Padmanabhan’s book fills this need well and in a unique way. Within minutes of opening the rich, heavy, full, yet succinctly written 728 pages I realized that this is a new and personal view on general relativity, which leads beyond many excellent standard textbooks and offers a challenging training ground for students with its original exercises and study topics.

In the first 340 pages, the book presents the fundamentals of relativity in an approachable style. Yet, even in this “standard” part the text goes far beyond the conventional framework in preparing the reader in depth for mastering the “frontiers”. The titles of the following chapters speak for themselves: “Black Holes”, “Gravitational Waves”, “Relativistic Cosmology” and “Evolution of Cosmological Perturbations”, all of which address key domains in present-day research. Then, on page 591, the book turns to the quantum frontier and extensions of general relativity to extra dimensions, and to efforts to view it as an effective “emergent” theory.

This research-oriented volume is written in a format that is suitable for a primary text in a year-long graduate class on general relativity, although the lecturer is likely to leave a few of the chapters to self-study. “Padmanabhan” complements the somewhat older offerings of this type, such as “The Big Black Book” (Gravitation by Charles Misner, Kip Thorne and John Wheeler, W H Freeman 1973) or “Weinberg” (Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, Wiley 1972).

Naturally, this publication differs greatly from “text and no research” offerings, such as Ta-Pei Cheng’s Relativity, Gravitation and Cosmology: A Basic Introduction (OUP 2009) or Ray d’Inverno’s Introducing Einstein’s Relativity (OUP 1992). Any lecturer using these should consider adding “Padmanabhan” as an optional text to offer a wider view to students on what is happening in research today. In comparison with “Hartle” (Gravity: An Introduction to Einstein’s General Relativity, Addison-Wesley 2003), one cannot but admire that “Padmanabhan” does not send the reader to other texts to handle details of computations; what is mentioned is also derived and explained in depth. Of course, “Hartle” is often used in a “first” course on gravity but frankly how often is there a “second” course?

“Padmanabhan” is, as noted earlier, voluminous, making it excellent value for money because it contains the material of three contemporary books for the price of one. So who should own a copy? Certainly for any good library covering physics, the question is really not whether to buy but how many copies. I also highly recommend it to anyone interested in general relativity and related fields because it offers a modern update. Students who have already had a “first” course in the subject and are considering taking up research in this field will find in “Padmanabhan” a self-study text to deepen their understanding. If you are a bookworm like me, you must have it, because it is a great read from start to finish.

The Pursuit of Quantum Gravity: Memoirs of Bryce DeWitt from 1946 to 2004 and Bryce DeWitt’s Lectures on Gravitation

The Pursuit of Quantum Gravity: Memoirs of Bryce DeWitt from 1946 to 2004

By Cécile DeWitt-Morette

Springer 2011
Hardback: £31.99 €36.87 $49.95

Bryce DeWitt’s Lectures on Gravitation

By Bryce DeWitt (ed. Steven M Christensen)
Springer 2011
Paperback: £62.99 €73.80 $89.95

CCboo1_09_11

Bryce DeWitt made many deep contributions to quantum field theory, general relativity and quantum gravity. He generalized Richard Feynman’s original approach to quantum gravity at the one-loop level to a fully fledged, all-order quantization of non-abelian gauge theories, including ghosts. The formalism that he developed also transformed the way that we think about quantum field theory, although it took some time before his ideas percolated through the community.

The Pursuit of Quantum Gravity is a charming and remarkable book put together by Cécile Morette, who became his wife and was to share his life for more than 50 years. Here we meet the man and his science. It is a remarkable story of vision, passion, independence and determination that led this scientist along such a difficult road, against all odds.

CCboo2_09_11

The material in the book is difficult to find elsewhere and it is not only highly informative but also a pleasure to read. One example is the account of how he organized an expedition to Mauritania to check the deflection of light by the Sun and thus verify the results obtained from the 1919 eclipse by Arthur Eddington et al. There are also documents that are not easily accessible elsewhere, such as the essay that won him the first prize of the Gravity Research Foundation in 1953. It is quite remarkable how many aspects of the vision laid out in that paper he was able to accomplish.

This book makes us aware of how much we owe Bryce DeWitt, and how deep and broad his influence has been. It pays homage to a truly great man – through the words of the person who knew and understood him best.

Back in 1971, he delivered a series of lectures on gravitation at Stanford University, before moving to the University of Texas at Austin. It has taken 40 years for them to become available to the physics community, but finally they are here as Bryce DeWitt’s Lectures on Gravitation, thanks to the efforts of his former student Steven M Christensen. Anyone who has seen the original notes realizes how grateful we should be to the editor for the large amount of work required in carrying out this task.

These lectures do not represent a standard introduction to the subject but rather DeWitt’s unique way of presenting it. Along with standard topics that include special relativity, continuous groups and Riemannian manifolds, one finds a remarkable treatment of the study of asymptotic fields, the energy–momentum of the gravitational field, and above all the dynamics of the production and propagation of gravitational waves.

Many of the results found here cannot be found in other books or review articles on the subject, despite the number of years that have elapsed since they were presented. Take, for example, the treatment of the angular momentum carried by gravitational waves, where a cursory look at the relevant chapters shows why this book is different. The complexity of the algebra involved requires a combination of tenacity, wizardry and understanding that is difficult to find in any other master of general relativity. DeWitt’s head-on, uncompromising approach is unique.

The book also has high historical value, showing how this maverick maven thought of the subject. It is a great tribute to his scientific legacy.
