FLASH breaks more records and reaches the water window

Using the Free-electron LASer in Hamburg (FLASH), DESY has established a new world record, generating pulses of laser light at wavelengths between 13.5 and 13.8 nm with an average power of 10 mW and energies of up to 170 µJ per pulse, all at repetition rates of 150 times a second. The pulses have a duration of only around 10 fs, so the peak power per pulse can reach 10 GW, greater than is currently available at the biggest plasma X-ray laser facilities. In addition, a specific part of the radiation at 2.7 nm – the fifth harmonic – enables FLASH to reach deep into the “water window”, a wavelength range that is crucially important for the investigation of biological samples. The range around 13.5 nm is also important because laser radiation of this wavelength is required by the semiconductor industry to produce the next generation of microprocessors using extreme ultraviolet lithography.

FLASH is currently the only laser facility that can deliver ultra-short high-power X-ray laser pulses with a very high repetition rate. It currently generates laser radiation with fundamental wavelengths between 13.1 and 40 nm. Future development will see the repetition rate reach the multi-kilohertz range and the average power increase to more than 100 mW. FLASH has also produced coherent radiation at the third and fifth harmonics of the 13.7 nm fundamental wavelength, that is, at around 4.6 and 2.7 nm, with a pulse duration of less than 10 fs. The corresponding energies approach 1 µJ and 10 nJ per pulse for the third and fifth harmonics, respectively.
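As a rough cross-check of the figures quoted above (a back-of-the-envelope sketch, not part of the DESY analysis), the peak power, the harmonic wavelengths and the photon energy in the water window follow directly from the quoted numbers:

```python
# Back-of-the-envelope check of the FLASH figures quoted above (illustrative only).
H_C_EV_NM = 1239.84              # h*c in eV nm (standard value)

pulse_energy = 170e-6            # J, maximum quoted pulse energy
pulse_duration = 10e-15          # s, quoted pulse duration
peak_power = pulse_energy / pulse_duration
print(f"peak power of the most energetic pulses ~ {peak_power/1e9:.0f} GW")

fundamental = 13.7               # nm
for n in (3, 5):
    harmonic = fundamental / n                   # ~4.6 nm and ~2.7 nm
    photon_energy = H_C_EV_NM / harmonic         # eV
    print(f"harmonic {n}: {harmonic:.1f} nm, photon energy {photon_energy:.0f} eV")

# The water window spans roughly the carbon K-edge (~4.4 nm, ~280 eV) to the
# oxygen K-edge (~2.3 nm, ~540 eV), so the ~2.7 nm fifth harmonic (~450 eV)
# falls inside it, as stated above.
```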

In 2007, the facility will be upgraded to allow it to generate radiation with a fundamental wavelength that is continuously tunable between 6 and 60 nm. At the higher harmonics, FLASH will thus provide ultra-short laser pulses with microjoule energies for which the wavelengths will be tunable within and across the edges of the water window. This will create unprecedented opportunities for high-resolution in vitro 2D and 3D imaging and spectroscopy of biological systems.

The record performance was achieved by the DESY FLASH team in collaboration with international partners, the characterization of the photon beams being performed in collaboration with researchers from the Laboratoire d’interaction du rayonnement X avec la matière (CNRS/Université Paris-Sud), the International Research Centre in Experimental Physics at Queen’s University Belfast, and the National Centre for Plasma Science and Technology, Dublin City University.

US team finds direct proof for dark matter

The idea of dark matter in the universe dates back to the 1930s, with the observation that the gravitational force on the visible matter in clusters of galaxies could not fully account for their behaviour, implying some alteration to gravity, or the existence of non-luminous, invisible matter. Now a team in the US has used a combination of astronomical images to analyse gravitational lensing in a region where two clusters are merging. The researchers find that their observations cannot be explained by modified gravity.

While dark matter has become the focus of a range of research, from cosmology to particle physics, it has proved difficult to rule out the alternative scenario in which gravity is slightly altered from the standard 1/r² force law. The new study, however, has discovered a system in which the inferred dark matter is not coincident with the observable matter, and the difference in position is too great to be accounted for by modifying gravity. This, the team says, provides direct empirical proof for dark matter.

The team from the universities of Arizona and Florida, the Kavli Institute for Particle Astrophysics and Cosmology, and the Harvard-Smithsonian Center for Astrophysics has combined observations from various telescopes to build a picture of what is happening in the galaxy cluster 1E0657-558. This cluster is particularly interesting because it shows evidence that a smaller cluster has at some stage ripped through a larger cluster, creating a bow-shaped shock wave.

Using images from the Hubble Space Telescope, the European Southern Observatory’s Very Large Telescope and the Magellan telescope to provide information on gravitational lensing of more-distant galaxies, the team has created a map of the gravitational potential across the cluster 1E0657-558. This reveals two regions in which the mass is concentrated.

The team has also observed the cluster with NASA’s Chandra X-ray Observatory to measure the positions of the two clouds of hot gas that are associated with the merging galaxies. It finds that these two clouds of X-ray emitting plasma of normal baryonic matter are not coincident with the two central locations of the gravitational mass, which in fact are further apart. This suggests that the plasma clouds have slowed as they passed through each other and interacted, while dark matter in the two clusters has not interacted.

Metallic water becomes even more accessible

Water is well known for its astonishing range of unusual properties, and now Thomas Mattsson and Michael Desjarlais of Sandia National Laboratories in New Mexico have suggested yet another one. They found that water should have a metallic phase at temperatures of 4000 K and pressures of 100 GPa, which are a good deal more accessible than earlier calculations had indicated.

The two researchers used density functional theory to calculate from first principles the ionic and electronic conductivity of water across a temperature range of 2000–70,000 K and a density range of 1–3.7 g/cm³. Their calculations showed that as the pressure increases, molecular water turns into an ionic liquid, which at higher temperatures is electronically conducting, in particular above 4000 K and 100 GPa. This is in contrast to previous studies that indicated a transition to a metallic fluid above 7000 K and 250 GPa. Interestingly, this metallic phase is predicted to lie just next to insulating “superionic” ice, in which the oxygen atoms are locked into place but all the hydrogen atoms are free to move around.

Suitable conditions for metallic water should exist on the giant gas planets. In particular, the line of constant entropy (isentrope) on the planet Neptune is expected to lie in the region of temperature and pressure suggested by these studies for the metallic liquid phase.

Supernova follows X-ray flash

Astronomers have reached another milestone in the understanding of gamma-ray bursts by observing a supernova at the location of an X-ray flash that was detected by NASA’s Swift spacecraft. The supernova was caught from the start of the explosion thanks to the X-ray flash precursor, an observation that sheds light on the link between X-ray flashes, gamma-ray bursts and supernovae.

Swift, the first spacecraft dedicated to observing gamma-ray bursts, is again making the headlines. Being able to turn its X-ray and optical-ultraviolet telescopes within a minute or so to the location of a new gamma-ray burst, it provides key information on the position of the burst and on the evolution of its early afterglow (see CERN Courier October 2005 p11). This has led to the identification of the host galaxies of a few short gamma-ray bursts, supporting the idea that short gamma-ray bursts arise from the merger of two neutron stars (see CERN Courier December 2005 p20).

The recent observation of a supernova associated with the X-ray flash detected by Swift on 18 February is the subject of a series of four papers published in Nature. S Campana and colleagues link features of the X-ray flash to the shock wave from supernova SN 2006aj, while E Pian and collaborators report on the optical discovery of the supernova and its association with the X-ray flash. SN 2006aj is the fourth supernova to be clearly linked to an X-ray or gamma-ray burst event. While the previous associations established the link between long gamma-ray bursts and especially luminous and powerful type Ibc supernovae (see CERN Courier September 2003 p15 and CERN Courier September 2004 p13), this new observation shows that X-ray flashes are also produced in similar core-collapse explosions of stars stripped of their outer hydrogen and helium envelopes. The papers in Nature analyse the physical properties of the X-ray flash XRF 060218 and its associated supernova, and compare them with those of the three other similar stellar explosions observed so far.

The emission of XRF 060218 was seen to peak in the X-ray region, at a lower energy than for typical gamma-ray bursts. Moreover, although the event lasted for as long as around 2000 seconds, it was about 100 times less energetic than typical gamma-ray bursts. The supernova associated with this X-ray flash was also much less powerful than those associated with the three gamma-ray bursts. The inferred initial mass of the dying star is only about 20 solar masses, which is believed to be too small to form a black hole through core collapse. P A Mazzali and collaborators therefore suggest that a magnetar – a strongly magnetized neutron star – is at the origin of the X-ray flash on 18 February. This peculiarity would explain the sub-energetic properties of this event compared with other gamma-ray bursts thought to be powered by black holes.

The observation thus extends gamma-ray burst phenomenology towards smaller stellar masses than expected. The supernova associated with XRF 060218 has properties between those of a normal supernova of type Ibc that does not produce a precursor event and those powering a gamma-ray burst. This should allow a better understanding of why only some dying stars produce an X-ray flash or a gamma-ray burst. The detection of this X-ray flash was only possible because it was relatively nearby. According to A M Soderberg and colleagues, X-ray flashes should be about 10 times more common than long gamma-ray bursts, but most of them are too far away to be detected by current instruments.

Further reading

S Campana et al. 2006 Nature 442 1008.
P A Mazzali et al. 2006 Nature 442 1018.
E Pian et al. 2006 Nature 442 1011.
A M Soderberg et al. 2006 Nature 442 1014.

Trieste focuses on hadrons

The fifth Perspectives in Hadronic Physics conference was held on 22–26 May at the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste. The latest in a series organized every second year by the ICTP and the Italian Istituto Nazionale di Fisica Nucleare (INFN), this year’s conference was also sponsored by the Consorzio per la Fisica, Trieste, and the Department of Physics, University of Perugia. It brought together around 100 theorists and experimentalists for more than 60 plenary talks, focusing on present and future theoretical and experimental activities in hadronic physics and relativistic particle–nucleus and nucleus–nucleus scattering.

A major success of the conference was the joint participation of the hadronic and heavy-ion communities. This was reflected in the wide range of topics, from the structure of the hadron at low virtuality, to the investigation of the states of matter under extreme conditions and the possible formation of quark–gluon plasma in high-energy heavy-ion collisions. This article presents a summary of the broad range covered by the speakers.

Hadrons, in vacuo and in the medium

The first part of the conference focused on the study through quantum chromodynamics (QCD) of free hadrons and the properties of hadrons in the medium – that is, in nuclear matter. It covered a broad spectrum of theoretical approaches and experimental investigations, including hadron structure and the quantities that describe it, namely form factors, structure functions and generalized parton distributions (GPDs). In this context there was much emphasis on the appreciable amount of experimental work undertaken at Jefferson Lab and at the Mainz Microtron (MAMI), for example, casting important light on the role of strange quarks in the nucleon.

MAMI has also obtained values of the masses and widths of mesons in the medium, which appear to differ appreciably from the free case. The possibility of an ω-nucleus bound state was suggested. Scalar and axial vector mesons can be generated dynamically within a chiral dynamics approach, which was presented in detail.

Information from Jefferson Lab on the nucleon-spin structure function from almost real photon scattering to the deep-inelastic scattering (DIS) region was reviewed at the meeting, and recent results were reported on the use of semi-inclusive DIS on the proton and the deuteron as a tool for investigating the up and down quark densities. Quark–hadron duality was discussed both in its theoretical and experimental aspects, illustrating how recent data from Jefferson Lab can be used to extract the higher twist contributions to the moments of parton distribution functions, which are sensitive to the quark–gluon correlations in the nucleon. Several talks presented recent results on the calculations of hadron form factors and cross-sections in terms of relativized quark models. These included the consideration of higher Fock components in the hadron wave functions.

Exclusive hard processes were discussed in terms of a new nonperturbative quantity that describes the hadron-to-photon transition, for instance in virtual Compton scattering in the backward region. The usefulness of this approach was illustrated in forward exclusive meson-pair production in γγ* scattering.

GPDs were the subject of detailed discussion at the meeting, with a report on the impressive experimental results from Hall A and the Deeply Virtual Compton Scattering Collaboration at Jefferson Lab. These experiments have accessed the twist-2 term in the proton, which is a linear combination of GPDs, and they find almost no dependence on momentum-transfer squared, Q². This is in good agreement with the theoretical expectation where the process described by the so-called “handbag” diagram dominates.

Turning to the calculation of GPDs, a meson-cloud model allows their computation at the hadronic scale, while GPDs for a nuclear target have been calculated in a constituent quark model, which shows that nuclear effects prove to be much larger than expected. Theoretical results for Compton scattering and two-photon annihilation into pairs of hadrons within the handbag approach were compared with data from Jefferson Lab and from the Belle experiment at the KEKB facility. The meeting presented indications for the presence and role of two different non-perturbative scales in hadronic structure, while showing that complementary information on the 3D parton structure of the hadron was accessible by measuring multiple parton distributions in hadron–nucleus reactions.

Cold nuclear matter figured in several talks, which presented new experimental and theoretical results. These clearly demonstrated that our knowledge of nuclei has now reached the stage of quantitative access to nucleon–nucleon correlations.

Mechanisms for quark hadronization and hadron formation in the medium were another important topic. The HERMES collaboration at DESY reported a clear attenuation of various leading hadrons in heavy targets compared with the DIS process on deuterium. Theoretical interpretations of these nuclear effects are based either in terms of inelastic hadron interactions or in terms of quark energy loss. Successful interpretations of the data provide information on the time needed to produce a colour-neutral precursor, which eventually fragments into a leading hadron. Preliminary data at a lower photon energy from the CEBAF Large Acceptance Spectrometer at Jefferson Lab, particularly on the transverse-momentum broadening, are expected to shed new light on these effects owing to the finite formation time of hadrons.

Several presentations showed that eventually it will be possible to explore QCD where the strength of nonlinearities is substantially higher than at DESY’s HERA electron–proton collider. The meeting also discussed ultraperipheral collisions, which will allow the study of structure functions at low Q2, and diffraction at very high energies.

From hadrons to quark–gluon plasma

The heavy-ion part of the conference began with an overview of saturation physics from the Relativistic Heavy Ion Collider (RHIC) to the Large Hadron Collider (LHC). Here the emphasis was on experimental signals for the so-called colour glass condensate, followed by the theoretical aspects of saturation and shadowing physics at small values of the Bjorken variable x.

The modification of the jet shapes in the jet-quenching phenomenon at RHIC seems to provide an efficient tool for probing the soft gluon radiation induced by the produced medium. At both RHIC and the LHC, the ratios of heavy to light mesons at large transverse momentum offer solid possibilities for checking the formalisms for energy loss. Photon-tagged correlations have also proved to be efficient tools for extracting both the vacuum and the medium-modified fragmentation functions, in proton–proton and nucleus–nucleus scattering. The strength of the jet-quenching process depends on the medium transport coefficient q̂, but this dependence is weakened by the geometrical bias that favours the production of the hard parton at the periphery of the medium. Nonetheless, its precise value is of considerable importance for interpreting the present RHIC data and for foreseeing the amount of quenching in lead–lead collisions at the LHC. A recent non-perturbative estimate of this quantity, using the anti-de Sitter space/conformal field theory correspondence, was presented.

Recent measurements of J/ψ production in deuteron–gold and gold–gold collisions by the PHENIX collaboration at RHIC seem to be consistent with a weak shadowing effect together with the possible inelastic interaction of the J/ψ meson in cold nuclear matter. In the heavy-ion collisions, the J/ψ suppression at RHIC is remarkably similar to that observed at CERN’s Super Proton Synchrotron (SPS), despite the much larger energy density reached at RHIC. The reason for this is not yet clear and could be due to the formation of J/ψ states from the statistical recombination of charm quarks in the medium.

On the soft-physics side, recent numerical calculations of plasma instabilities were reported, which attempt to determine the behaviour of an anisotropic non-abelian medium on long time scales. Observables measured in two-pion (Hanbury Brown–Twiss) interferometry and in pion spectra at RHIC are consistent with the emission of pions from a system with restored chiral symmetry. The recent preliminary data from the NA60 experiment on ρ-meson production in indium–indium collisions at SPS energies were discussed. While the measurements compare well with expectations for the broadening of the ρ width, these data tend to exclude the drop in mass expected from Brown–Rho scaling, which predicts the in-medium mass to be proportional to the q̄q condensate. More detailed presentations complemented overviews of heavy-ion collisions at intermediate energies and the physics programme for the ALICE experiment at the LHC.

The conference then focused on spectroscopic studies and the production of exotic states. The new exotic states discovered at around 4 GeV by Belle and the BaBar experiment at SLAC can be understood as diquark–antidiquark (qq–q̄q̄) states. The meeting also covered various problems related to dense hadronic matter – in particular, the high-temperature phase of QCD, bifurcations in the physics of strong gluon fields, the topological structure of dense hadronic matter, and the possibility of measuring the production of “strangelets” at the LHC using the Centauro And STrange Object Research (CASTOR) detector at CMS.

The conference also discussed the formation and properties of ultra-dense quark matter in stars, from several different aspects. These included the colour-superconducting quark matter that can form in the core of compact stars, the various many-body approaches to the equation of state of nuclear matter at baryon densities several times that of normal nuclei, and the quark-deconfinement model of gamma-ray bursts.

The last part of the meeting focused on the presentation of the most relevant plans for future experimental facilities. These included overviews on the future Facility for Antiproton and Ion Research (FAIR) at GSI and the broad programme of physics at the Japan Proton Accelerator Research Complex (J-PARC), as well as physics at Jefferson Lab with the CLAS detector. The possibilities offered by a high-luminosity electron–ion collider were also discussed.

The conference closed with a talk from the 2005 physics Nobel prize winner, Roy Glauber from Harvard, who described his pioneering work on quantum optics and its relationship to heavy-ion physics. The hadronic and heavy-ion communities are now looking forward to the sixth Perspectives in Hadronic Physics ICTP conference.

EPAC’06 showcases the latest in accelerators

EPAC’06: a showcase for accelerators

The beautiful historic city of Edinburgh, Scotland, provided the setting for the 10th European Particle Accelerator Conference (EPAC’06), which was held in June and brought together more than 1000 participants from 33 countries. The scientific programme covered all aspects of accelerators and related technologies, from the very high energies of the Large Hadron Collider and the proposed International Linear Collider down to the finest details of beam control and the technical constraints of hadron therapy. In particular, the event showed how promising the future of high-energy physics is, thanks to physicists who are proposing a multitude of innovative ideas, which are being studied in international collaborations.

The 10th European Particle Accelerator Conference, EPAC’06, took place in Edinburgh on 26–29 June. Attended by more than 1000 participants from 33 countries on six continents, it offered a wide scientific programme, covering all aspects of accelerators and their technology. In particular, the meeting showed that the future of high-energy physics looks bright, thanks to a community that is generating innovative ideas, which are being studied in worldwide collaborations. All in all, EPAC’06 was a bumper edition, and this article reports only briefly on the highlights.

The packed programme included some special sessions, during which the three European Physical Society Accelerator Prizes for 2006 were awarded. These went to Axel Winter of DESY and Hamburg University, to Lutz Lilje of DESY, and to Vladimir Teplyakov of IHEP, Protvino, who was unable to attend. The conference also featured a talk by Roger Penrose from Oxford, with the intriguing title “Big Bang: An Outrageous New Perspective, and its Implications for Particle Physics”, and Stefano Chiocchio from Garching, a leading member of the ITER fusion project team, gave an inspiring closing presentation on “ITER and International Scientific Collaboration”.

The high-energy frontier

The imminent operation of the Large Hadron Collider (LHC) at CERN was one of the major issues at the conference and was covered during the sessions devoted to circular colliders. Speakers described the challenges of construction, installation, the first test beams and the future upgrades, which are already foreseen. A good proportion of the talks and posters about the LHC were the work of an enlarged world community, reflecting the trend of increased co-operation among the particle-physics laboratories.

Existing colliders, the Tevatron at Fermilab and HERA at DESY, will pass the baton for the high-energy frontier to the LHC. Talks described the recent successful runs at the Tevatron and the technical challenges during operation of HERA, in both cases emphasizing possible lessons for the LHC.

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven is currently in a very active phase. In six years of operation, there have been collisions between various ion species; now a long future is envisaged, with enhanced-luminosity runs, new ion species and e-RHIC, which will become HERA’s heir with a programme of electron–proton collisions. The high-luminosity frontier at colliders featured in presentations on the success of the programmes at the B and φ factories, for which there are new and innovative ideas for increasing luminosity.

With the LHC on the horizon, the accelerator community is already preparing the next steps to push the high-energy frontier even further, in work on linear colliders, lepton acceleration and new acceleration techniques. Innovative ideas and novel techniques are being developed, which require technologies beyond the state of the art, and ambitious and challenging R&D programmes are being pursued, often by fruitful worldwide collaborations.

The importance of these developments was highlighted in the opening presentation at EPAC’06 by Barry Barish, director of the Global Design Effort (GDE) for an International Linear Collider (ILC). He emphasized the worldwide consensus that has emerged for an electron–positron linear collider as the favoured next high-energy physics facility, to complement the LHC and to study the properties of the particles that the LHC should discover. Barish presented the technological challenges of the ILC, which is based on superconducting accelerating cavities, and described the GDE, a worldwide collaboration set up to optimize the design, carry out the R&D and prepare the technology transfer to industry. The aim is to have a 500 GeV linear collider ready to build from 2010, possibly extendable to 1 TeV.

Ambitious test facilities are planned in the US and Japan to develop superconducting radio-frequency (RF) technology beyond the state of the art as developed by the DESY-based TESLA collaboration. The Compact Linear Collider (CLIC) study is pushing linear-collider technology even further into the collision energy range of multi-tera-electron-volts by using a novel scheme of two-beam acceleration at high frequency. The ambitious CLIC Test Facility is being developed at CERN to demonstrate the feasibility of the concept in a multi-lateral collaboration.

The key to high luminosity in linear colliders is very small beam dimensions (a few nanometres) at the collision point. This requires beams that are cooled to extremely small emittances in damping rings and very strong focusing in beam delivery systems. Large collaborations are developing these techniques and setting up test facilities.
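To see why such small spot sizes matter, it helps to recall the standard expression for the luminosity of a linear collider (a textbook formula, not taken from any particular EPAC’06 talk),

$$\mathcal{L}=\frac{N^{2}\,n_{b}\,f_{\mathrm{rep}}}{4\pi\,\sigma_{x}^{*}\sigma_{y}^{*}}\,H_{D},$$

where N is the number of particles per bunch, n_b the number of bunches per pulse, f_rep the pulse repetition rate, σ*_x and σ*_y the horizontal and vertical beam sizes at the collision point, and H_D an enhancement factor from the beam–beam interaction. For a given beam power, squeezing σ*_y from micrometres down to nanometres is the most direct route to luminosities of the order of 10³⁴ cm⁻² s⁻¹.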

More exotic technologies could extend the high-energy frontier even further in the future. The conference heard about the concept of acceleration using plasma wake-fields induced by lasers, able to produce accelerating fields of several tera-electron-volts a metre. A recent demonstration showed quasi-mono-energetic beam acceleration, thus opening the door to multi-giga-electron-volt applications in a variety of domains by taking advantage of progress in lasers. The meeting also reported impressive post-acceleration in a plasma excited by a particle beam.

Another avenue for continuing the quest for higher energies could involve muons, which have all the advantages of electrons without the intrinsic limitations owing to synchrotron radiation. They could be ideal for future high-energy research if successful cooling can be achieved. A Muon Ionisation Cooling Experiment is being built at the Rutherford Appleton Laboratory (RAL) in the UK to demonstrate the feasibility of the novel technique of ionization cooling, with a first beam expected in October 2007. Research with neutrinos, which are produced naturally in the decay of muons, could also benefit from this technique. The conference heard of various methods being studied for a possible neutrino factory. They range from using muon decay in storage rings to the innovative techniques of “beta beams”, involving the decay of unstable ions produced by a high-power proton beam hitting a target or ionization in a dedicated ring.

Light sources and hadron rings

Covering storage rings, linacs and energy-recovery linac-based sources, the presentations on synchrotron light sources and free-electron lasers (FELs) began with a flash rather than a bang. FLASH, the FEL facility at DESY, was reported to be delivering 13 nm radiation and to have achieved a peak brilliance that exceeds all other sources by orders of magnitude. There were several reports of R&D programmes in the US, the UK and Japan into injector and superconducting RF systems that would meet the challenges of future advanced energy-recovery sources such as the fourth-generation light source (4GLS) proposed for the UK.

Turning to new third-generation storage rings, the old adage that you wait for one and then three come along together has proved true. The Australian Light Source has just announced first beam in the storage ring, and the SOLEIL and Diamond projects in France and the UK, respectively, have both seen beam commissioning activities this year. Talks on the two European projects illustrated the power of modern digitally based diagnostic systems in beam commissioning and, coincidentally, reported delays caused by cooling water – not a glamorous system but a vital component of any large accelerator.

The growing role for conventional laser systems within single-pass FEL facilities was reported in a talk that reviewed a diversity of applications. One such application, highlighted separately, was synchronization, a critical aspect in achieving and exploiting short-pulse sources. Other highlights included the successful lasing of the SPring-8 Compact SASE Source test accelerator in Japan, a report on injector systems for FELs, and a review of single-pass FELs, which reported that these devices are now relatively mature drivers of user facilities and also discussed the challenges of extending this technique to shorter wavelengths.

The session on hadron accelerators featured several commissioning reports, as well as new projects and developments. The Spallation Neutron Source (SNS) at Oak Ridge has recently gone through its initial commissioning stage, with the first high-energy beam pulses on the target producing spallation neutrons. Norbert Holtkamp, the SNS director of Accelerator Systems, who is soon to take up a position as principal deputy director-general of ITER, presented some of the highlights from the commissioning.

The very large Facility for Antiproton and Ion Research being prepared at GSI will provide antiprotons and ions of all charge states with intensities that are orders of magnitude higher than are available today. Presentations on other high-intensity machines and their operation included status reports on the accelerators of the Japan Proton Accelerator Research Complex, the upgrade to 1.8 MW of the proton facility at PSI, and the upgrades to the ISIS pulsed neutron source at RAL.

More novel topics included the idea of an Energy Recovery Linac to be used as a high-energy electron cooler for RHIC; the use of crystals and channelling of beams in accelerator extraction, deflection and collimation; and the ideas behind, and first tests of, a circular RF quadrupole. At CERN, the Low Energy Antiproton Ring – a cooler storage ring – has recently been rejuvenated as the Low Energy Ion Ring for use as a lead-ion cooler and accumulator ring for the LHC.

Beam dynamics and control

As at previous conferences, beam dynamics and electromagnetic fields received the largest number of contributions. Almost 300 papers were presented in this category, underlining the extremely high level of activity and continuous interest that the accelerator community has in this area. Eleven talks and the poster sessions covered a broad spectrum, ranging from beam optics and single-particle dynamics, through collective effects and instabilities, to developments in computer code and simulation studies.

Speakers reviewed the well-developed art of electromagnetic field computation, stressing the benefits of an interdisciplinary approach involving computer science and applied mathematics along with accelerator physics. One impressive example concerned a complex 3D model of a complete superconducting accelerator module of the type developed by the TESLA collaboration at DESY. Another overview focused on modelling the effects of space charge and coherent synchrotron radiation in bunch-compressor systems – a topic of the highest relevance for FEL projects operating with high peak-current electron bunches. The electron-cloud instability, a potential performance limitation in both positron and proton storage rings, remains the subject of intense experimental and simulation studies. A great deal of effort has gone into refining computer modelling of the effect, for example to include aspects of surface science and the magnetic fields of quadrupole and wiggler magnets.

Further presentations concerned space-charge driven resonances and halo formation in high-intensity hadron beams; improvement of collimation systems by non-linear optics insertions; single-particle dynamics near the half-integer resonance in the KEKB facility in Japan and in the presence of betatron coupling in RHIC; suppression of longitudinal coupled bunch instabilities by phase modulation; local bunch shortening by strong RF focusing in the DAFNE machine at Frascati; and analysis of a fast beam-ion instability occurring in a small gap undulator at the Pohang Light Source.

The session on beam instrumentation and feedback featured numerous contributions on R&D for the ILC and FEL facilities. For the ILC these included new developments in cavity and re-entrant-cavity beam-position monitors with sub-micrometre resolution, high-resolution beam-size monitoring systems based on laser wires and Fresnel zone plates, and fast beam-based feedback. Higher-order-mode (HOM) signals induced in superconducting cavities have been used to measure the position of the cavity centres, the beam phase relative to the phase of the accelerating frequency, the beam position, and in a HOM-based feedback to minimize the HOM power in a module.

Talks related to FELs presented new developments and challenges in measuring ultrashort longitudinal bunch profiles, including the intrabunch structure, together with recent experimental data. New applications with ultrafast laser diagnostics, which achieve resolutions approaching 10 fs, were also described, as well as a bunch arrival-time monitor system that has yielded a precision of about 30 fs and could be applied to measure the beam position with an error of only about 3 µm. The future developments discussed included, for example, the combination of ultrafast lasers and light emitted in FELs.

Other highlights in this session included recent measurements of the transverse profiles of the counter-rotating beams at the Tevatron, which have been achieved using ionization profile monitors with new fast electronics. There was also a report on progress on developments for a test of the CPT theorem, whereby the depolarization frequency of two electron bunches was measured with record accuracy.

Technology and applications

The session on accelerator technology covered three aspects at the forefront of developments in different areas of accelerators: RF systems for linear and circular machines, insertion devices for synchrotron radiation facilities, and gantry designs for the latest accelerators for proton and carbon cancer therapy. In addition, there was a report on the unique experience of building a 27 km cryogenic installation at CERN for the LHC.

Superconducting cavities are now the preferred choice for RF for new accelerators, with various designs and applications. There are different options for producing the RF power to feed the cavities: klystrons, inductive output tubes and solid-state amplifiers all have their place, depending on the final power that is needed. To control and regulate these systems, digital electronics is now the choice everywhere.

Synchrotron-radiation facilities depend on the quality of the synchrotron light that they produce; better quality comes from installing insertion devices. The conference introduced the latest developments in this area: superconducting and in-vacuum undulators and wigglers, cryocooled magnetic structures, and so on.

As usual, a good proportion of the posters and talks on the applications of accelerators was devoted to medicine, in particular proton and light-ion therapy. Hadron accelerators have been used experimentally for cancer treatment for more than a decade, and new projects are now being built to provide general and regular treatment. One of the most demanding aspects in terms of fulfilling the strict safety regulations is the stability and precision of the gantry that delivers the particle beam. There are now designs for gantries weighing 650 tonnes, with a precision of 0.5 mm.

While more than a dozen hospital-based facilities for proton therapy are in operation or under construction around the world, Japan has the only two dedicated centres for cancer therapy with light ions, namely, carbon ions. However, two new centres are being built in Europe, and these will offer beams of both protons and ions. One is located in Heidelberg and the other at the Centro Nazionale Adroterapia Oncologica in Italy. Several others are in advanced stages of planning, and in Japan, the construction of a third carbon-therapy facility has started at Gunma University.

This session also heard about fixed-field alternating-gradient accelerators and their potential use, for example, in accelerator-driven systems (subcritical nuclear reactors), muon acceleration or the production of neutrons for boron neutron capture therapy. In the latter case, neutrons would be produced by 10 MeV protons in an internal beryllium target, and the energy loss and emittance increase of the stored beam caused by scattering in the target would be counteracted by a process similar to the one that is used in ionization cooling.

• The next conference will take place in Genoa, Italy, in June 2008.

Further reading

The proceedings, published less than a month after the conference, are available open-access from the Joint Accelerator Conferences Website (JACoW) at www.jacow.org.

Can experiment access Planck-scale physics?

Physics on the large scale is based on Einstein’s theory of general relativity, which interprets gravity as the curvature of space–time. Despite its tremendous success as an isolated theory of gravity, general relativity has proved problematic in its integration with physics as a whole, and in particular with the physics of the very small, which is governed by quantum mechanics. There can be no unification of physics that does not include both general relativity and quantum mechanics. Superstring theory, with its recent extension to the more general theory of branes, is a popular candidate for a unified theory, but the links with experiment are very tenuous. The approach known as loop quantum gravity attempts to quantize general relativity without unification, and has so far received no obvious experimental verification. The lack of experimental guidance has made the issue extremely hard to pin down.

One hundred years ago, when Max Planck introduced the constant named after him, he also introduced the Planck scales, which combined his constant with the velocity of light and Isaac Newton’s gravitational constant to give the fundamental Planck time around 10⁻⁴³ s, the Planck length around 10⁻³⁵ m and the Planck mass around 10⁻⁸ kg. Experiments on quantum gravity require access to these scales, but direct access using accelerators would require machines that reach an energy of 10¹⁹ GeV, well beyond the reach of any experiments currently conceivable.
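For reference, in the modern convention (with ħ = h/2π) these combinations are

$$t_{\mathrm{P}}=\sqrt{\frac{\hbar G}{c^{5}}}\approx5.4\times10^{-44}\ \mathrm{s},\qquad l_{\mathrm{P}}=c\,t_{\mathrm{P}}\approx1.6\times10^{-35}\ \mathrm{m},\qquad m_{\mathrm{P}}=\sqrt{\frac{\hbar c}{G}}\approx2.2\times10^{-8}\ \mathrm{kg},$$

corresponding to a Planck energy m_P c² ≈ 1.2 × 10¹⁹ GeV – the energy quoted above for a hypothetical accelerator able to probe these scales directly.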

For almost a century it has been widely perceived that the lack of experimental evidence for quantum gravity presents a major barrier to a breakthrough. One possible way of investigating physics at the Planck scale, however, is to use the kind of approach developed by Albert Einstein in his study of thermal fluctuations of small particles through Brownian motion, where he showed that the visible motion provided a window onto the invisible world of molecules and atoms. The idea is to access the Planck scale by observing decoherence in matter waves caused by quantum fluctuations, as first proposed using neutrons more than 20 years ago by CERN’s John Ellis and colleagues (Ellis et al. 1984). Since then, ultra-cold atom technologies have advanced considerably, and armed with the sensitivity of modern atomic matter-wave interferometry we are now in a position to consider using “macroscopic” instruments to access the Planck scales, a possibility that William Power and Ian Percival outlined more recently (Power and Percival 2000).

Our recent work represents a new approach to gravitationally produced decoherence near the Planck scale (Wang et al. 2006). It has been made possible by the recent discovery by one of us of the conformal structure – the scaling property of geometry – of canonical gravity, one of the earliest important approaches to quantum gravity. This leads to a theoretical framework in which the conformal field interacts with gravity waves at zero-point energy using a conformally decomposed Hamiltonian formulation of general relativity (Wang 2005). Working in this framework, we have found that the effects of ground-state gravitons on the geometry of space–time can lead to observable effects by causing quantum matter waves to lose coherence.

The basic scenario is that near the Planck scale, ground-state gravitons constantly stretch and squash the geometry of space–time causing conformal fluctuations in space–time. This process is analogous to the Brownian motion of a pollen particle interacting with ambient molecules of much smaller sizes. It means that information on gravitons near the Planck scale can be extracted by observing the conformal fluctuations of space–time, which can be done by analysing their blurring effects on coherent matter waves.

The curvature of space–time produces changes in proper time, the time measured by moving clocks. For sufficiently short time intervals, near the Planck time, proper time fluctuates strongly owing to quantum fluctuations. For longer time intervals, proper time is dominated by a steady drift due to smooth space–time. Proper time is therefore made up of the quantum fluctuations plus the steady drift. The boundary separating the shorter time-scale fluctuations from the longer time-scale drifts is marked by a cut-off time, τ_cut-off, which defines the borderline between semi-classical and fully quantum regimes of gravity. It is given by τ_cut-off = λT_Planck for quantum-gravity theories, where T_Planck is the Planck time, and λ is a theory-dependent parameter determined by the amplitude of zero-point gravitational fluctuations. A lower limit on λ is given by noting that the quantum-to-classical transition should occur at length scales λL_Planck that are greater than the Planck length L_Planck by a few orders of magnitude, so we can expect λ > 10².
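Numerically, taking the lower limit λ ≈ 10² together with the Planck values quoted above, this places the borderline at roughly

$$\tau_{\text{cut-off}} = \lambda\,T_{\mathrm{Planck}} \gtrsim 10^{2}\times5.4\times10^{-44}\ \mathrm{s} \approx 5\times10^{-42}\ \mathrm{s}, \qquad \lambda\,L_{\mathrm{Planck}} \gtrsim 2\times10^{-33}\ \mathrm{m}.$$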

A matter-wave interferometer can be used to measure quantum decoherence due to fluctuations in space–time, and hence provide experimental guidance on the value of λ. In an atom interferometer an atomic wavepacket is split into two wavepackets, which follow different paths before recombining (see “Atom interferometer”). The phase change of each wavepacket is proportional to the proper time along its path, resulting in an interference pattern when the wavepackets recombine. The detection of decoherence due to space–time fluctuations on the Planck scale would provide experimental access to quantum-gravity effects, analogous to the access to atomic scales provided by Brownian motion.
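In the standard description of matter-wave interferometry (a textbook relation, not specific to this analysis), the phase accumulated along each arm is the proper time along that path multiplied by the particle’s Compton frequency,

$$\phi_{i} = \frac{Mc^{2}}{\hbar}\,\tau_{i}, \qquad \Delta\phi = \frac{Mc^{2}}{\hbar}\,(\tau_{1}-\tau_{2}),$$

so the interference pattern directly records proper-time differences. For a caesium atom Mc²/ħ ≈ 2 × 10²⁶ rad/s, which is why even minute fluctuations in proper time can wash out the fringes and appear as a loss of contrast.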

In our analysis we found an equation that gives λ (See “equation”).

CCEcan2_10-06

M is the mass of the quantum particle; T is the separation time before the two wavepackets recombine; and Δ denotes the loss of contrast of the matter wave and is a measure of the decoherence (Wang et al. 2006). Existing matter-wave experiments set limits on the size of λ, their sensitivity depending on both Δ and M. Results from caesium atom interferometers (Chu et al. 1997), and also from a fullerene C₇₀ molecule interferometer (Hackermueller et al. 2004) with its larger value of M, both set a lower bound for λ of the order of 10⁴, well within the theoretical limit of λ > 10². This suggests that the sensitivities of advanced matter-wave interferometers may well be approaching the fundamental level due to quantum space–time fluctuations. Investigating Planck-scale physics using matter-wave interferometry may therefore become a reality in the near future.

Further improved measurements will confirm and refine this bound on λ, pushing it to higher values. An atom interferometer in space, such as the proposed HYPER mission, could provide such improvements. However, the lower bound of λ calculated using current experimental data is already within the expected range. This is a very good sign and strongly suggests that the measured decoherence effects are converging towards the fundamental decoherence due to quantum gravity. Therefore, a space mission flying an atom-wave interferometer with significantly improved accuracy will be able to investigate Planck-scale physics.

As well as causing quantum matter waves to lose coherence at small scales, the conformal gravitational field is responsible for cosmic acceleration linked to inflation and the problem of the cosmological constant. Our formula, which relates the measured decoherence of matter waves to space–time fluctuations, is a “minimum” estimate in the sense that ground-state matter fields have not been taken into account. Their inclusion may further increase the estimated conformal fluctuations and result in an improved “form factor” in our formula. In this sense, the implications go beyond quantum gravity to more generic physics at the Planck scale. Furthermore, it opens up new perspectives on the interplay between the conformal dynamics of space–time and vacuum energy due to gravitons, as well as elementary particles. (A well known example of vacuum energy is provided by the Casimir effect.) These may have important consequences for cosmological problems such as inflation and dark energy.

The longest journey: the LHC dipoles arrive on time

Nearly three years ago we celebrated the lift-off of the industrial production of the superconducting dipole magnets for CERN’s Large Hadron Collider (LHC), marked by the delivery of the first octant on 3 December 2003 (see CERN Courier January/February 2004 p30). It had taken about 10 years of R&D, models and prototyping to launch the first large call for tender, and a further five years of model refinement, industrialization and pre-series construction (the first 90 dipoles) to surpass a production rate of 10 dipoles a month in summer 2003. In contrast, it took only three years at the maximum rate to produce the remaining 90% of the dipoles. Indeed, since December 2005, the production has begun to slow and will finish well before the end of 2006, in perfect time for installation in the LHC ring.

Only four years ago success was far from certain. The preparation time for the largest and most complex hi-tech production ever tried in particle physics had been so long that scepticism was palpable. Moreover, the scheme devised by the LHC management – that CERN would supply the magnet manufacturers with all of the main components – seemed to many to be doomed to fail. One component or another would cause delays and/or technical problems, which would be charged to CERN. Now this story looks set to have a happy ending, and we should know for sure by November 2006.

The route to success

There have been five key ingredients for this success. First, we ensured adequate preparation and training of the companies during the industrialization phase. This was set up through pilot orders and technology transfer in the CERN Magnet Assembly Facility in Building 181, where about 25 dipole “cold masses” went through the final stages of assembly. It was preceded by a fairly long period of careful preparation and prototyping.

The second ingredient was that this first stage allowed detailed technical specifications to be produced. The dipoles were “build to print” and “build to process”, with only minor degrees of freedom left to the companies in certain areas, notably in the coil winding and pole assembly. This reflected a change in strategy from the start of the project, when CERN management was inclined to buy an almost turnkey product, where the supplier takes responsibility for how production is implemented. In the end, CERN retained a kind of intellectual property in the dipole project.

A third important point was that CERN procured all the main components for the magnet assembly (see figure 1 and “The incredible supply chain”). This put an additional burden on CERN’s shoulders, involving managing and taking responsibility for the interfaces; liability for delays, such that the famous just-in-time curves of the LHC “dashboard” might have been (and almost were in a few cases) a nightmare; additional workloads that were difficult to cope with; and transport, storage and logistics: we moved 120,000 tonnes of material around Europe, with five international road transport operations a day for more than four years.

Despite these difficulties, this procurement policy has proved to be to CERN’s advantage, and has probably been the cornerstone of our success. It meant that technical homogeneity was assured with contracts issued by one client. Quality assurance (QA) was guaranteed, with follow-up by the final client, that is, by the body most interested in the final product and the only one qualified to balance quality against realism as dictated by the schedule. The procurement policy also enabled economies of scale, through large contracts and the possibility of placing them ahead of the magnet-assembly contracts, which was critical for items requiring long lead times and technical uniformity, such as the superconducting cables and the low-carbon steel for the yoke. In addition, the procurement by CERN of many of the components has helped to achieve a balanced industrial return, a goal set by the CERN Council that has been met thanks to the Purchasing Service.

A fourth ingredient for success has been the continuous supervision by a team of CERN physicists, engineers and technicians, reinforced by external professional QA inspectors, who have been resident on the manufacturing sites. Many people in CERN’s Accelerator Technology (AT) department, as well as in the Accelerator Beams (AB) and Technical Support (TS) departments, have played key roles in measurement, analysis and validation of the work undertaken by industry.

Lastly, most of the major tooling for construction and monitoring was supplied under CERN’s responsibility. This allowed tooling development and procurement to carry on in parallel with development of the optimal assembly procedure for the dipoles. This constituted a risk, especially in the case of the big welding presses. However, this strategy, devised as early as 1998, allowed us to gain at least two years, allowing the LHC schedule to be met.

Industrial organization

CERN awarded the contract for manufacturing the LHC dipole magnets – or more precisely, what we call the cold masses, before they are installed in their cryostats at CERN and completed with other items, such as the beam screen – to three firms or consortia. These suppliers were the French Alstom MSA–Jeumont Areva consortium, the Italian company AS-G (previously Ansaldo Superconduttori Genova) and the German firm Babcock Noell (previously BNN). The tender process was difficult and lasted about three years (1999–2001). Eventually, the companies agreed to lower substantially the tendered price, with no compromise on the technical quality. In exchange, CERN had to take most of the risk and responsibility, except for manufacturing errors. The price reduction meant that there was no margin for big errors or for delays, which inevitably incur extra costs.

The three suppliers structured the work-flow, the production process and the logistics in different ways. These all worked well and led to no major difference in production flow.

The French consortium has taken care of cable insulation, coil fabrication and pole assembly on one site, Jeumont Areva, the pride of the French nuclear industry near the Belgian border. The poles are then shipped about 500 km south to Belfort in the historical Alstom factory where the French TGV high-speed train was born. Here, where Alstom also successfully produced about 40% of the precious LHC cable, the poles are put together in dipole assemblies and collared. The collared coils then have to pass critical tests based on HV electric integrity and magnetic measurements. The assembly of the cold mass is eventually completed on the same site with yoking, welding of the shells and curvature formation, corrector magnet mounting, finishing of the extremities with electrical connections among coils and bus bars, closure with the endcap, “collarette” and various flange welding, and final assessment of the curvature and geometry at the extremities. Once the final vacuum test has been passed, the cold mass is transported by special lorries – developed by TCT, winner of the 2002 “truck of the year” in France – along the 250 km route to CERN.

With the Italian company Ansaldo Superconduttori, cable insulation is done by a subcontractor, and then everything else is produced on site at Genoa. This is where the huge superconducting coils of the barrel toroid magnet for the ATLAS detector and the giant solenoid for the CMS detector were also manufactured. For the LHC cold masses, every step takes place from coil winding to the last geometrical laser track control, before the cold mass is shipped to CERN, about 400 km away. In total half of the magnetic energy that will be stored in the LHC tunnel and in the detectors will come from magnets constructed in Genoa.

The German company Babcock Noell split production between two sites. The first stage, up to the production of the collared coil and magnetic measurements, took place in Würzburg, near the headquarters, while the cold-mass assembly was done some 300 km east in Zeitz, near Leipzig, in a renovated building previously used for tank repair and maintenance for the Soviet Union army in the former East Germany. The finished cold masses travel 1000 km across Germany and part of France to CERN.

Industrial production

It is interesting to analyse the industrial approach to managing the production, in particular as applied by Babcock Noell, which had already finished its supply of cold masses in November 2005. This company, in principle less experienced than the other two suppliers in such production techniques, first conducted a careful study, in collaboration with the University of Hanover, to evaluate the whole sequence of operations that are necessary to manufacture a dipole cold mass. The study indicated the number of assembly lines and people needed for each operation (or post), on the basis of two shifts a day, five days a week, as a function of the production rate. This work allowed the sensitivity of each working post to be assessed with respect to tooling failure or the necessity to ramp up beyond the nominal rate to recover from a stoppage. Not surprisingly, it singled out the winding stage as the ultimate bottleneck in the overall production.
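The kind of line-capacity estimate described above can be sketched as follows; the cycle times and rates here are purely illustrative placeholders, not the values from the Babcock Noell/University of Hanover study:

```python
import math

SHIFTS_PER_DAY = 2
HOURS_PER_SHIFT = 8
DAYS_PER_WEEK = 5
AVAILABLE_HOURS = SHIFTS_PER_DAY * HOURS_PER_SHIFT * DAYS_PER_WEEK  # hours/week per post

# Hypothetical cycle times per cold mass for a few operations (hours).
CYCLE_TIME = {
    "coil winding and curing": 120,
    "collaring": 40,
    "yoking and shell welding": 60,
}

def posts_needed(dipoles_per_week):
    """Parallel work posts required at each operation to sustain a given rate."""
    return {op: math.ceil(t * dipoles_per_week / AVAILABLE_HOURS)
            for op, t in CYCLE_TIME.items()}

for rate in (2, 3, 4):
    print(f"{rate} dipoles/week -> {posts_needed(rate)}")
```

With numbers of this kind the winding stage dominates the post count, which is consistent with the study’s conclusion that it is the ultimate bottleneck.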

Taking advantage of the fact that in Germany workers can be hired for projects for up to five years, while avoiding high social payments on laying them off after the project, Babcock Noell also decided to take on about 30–40% more employees than the other two firms, which allowed them to carry out the production in less time than the other suppliers. The company has consistently delivered 3.5–4 dipoles a week, while a rate of 3–3.3 would have been enough to meet the contractual obligation. In this way the firm lowered its general expenses and used tooling and manpower more intensively. From CERN’s point of view this helped to compensate for a general delay in the pre-series production. However, we could not have sustained such an advanced schedule from all three producers; we would have become short of components, with the serious risk of having to pay extra costs for work stoppages: in 2005, at the maximum production rate, we had more than 400 people working at the three suppliers.

Figure 2 shows the "learning" curves for the production at Babcock Noell. It is remarkable how close the actual production rate was to the forecast. The small margin gained with respect to the target was used to correct the inevitable errors generated by manufacturing or by faulty components. Figure 2a highlights the most critical part, the coil winding and curing for the pre-series of 30 magnets; the singularities are clearly visible there, while they are hardly visible in the global production plot in figure 2b. Reaching the goal is satisfying for the companies and also for CERN. It confirms that such a technically complex object can be made ready for industrial production.

Figure 3 shows the learning curves of another producer, Jeumont, in coil winding, curing and assembly. In this case the company continued to learn throughout the production, and each injection of new personnel triggered a renewed learning phase. For all three suppliers, although to different degrees, the earliest part of the production process proved to be the most critical and the most subject to mistakes and unexpected events.

CERN has also carried out a study to compare the dipole production with other projects in terms of industrial learning curves. Based on well-known industrial production theory, the data can be fitted with standard models (typically based on power curves) to determine the learning percentage, as in figure 4 for the dipole production. Compared with other production processes, the learning percentage for the LHC dipole production lies, not surprisingly, between those of shipbuilding and aerospace production.
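
For readers unfamiliar with the technique, the fit is essentially Wright's power-law learning curve: the hours spent on the n-th unit are modelled as T_n = T_1·n^b, and the "learning percentage" is 2^b, the fraction of the time remaining after each doubling of cumulative output. A minimal sketch, with purely illustrative numbers rather than the actual dipole data:

import numpy as np

# Hours to complete the n-th unit - illustrative values, not the LHC figures
unit_number = np.array([1, 2, 4, 8, 16, 32, 64, 128])
unit_hours  = np.array([1000, 870, 760, 660, 575, 500, 435, 380])

# Wright's model T_n = T_1 * n**b is linear in log-log space
b, log_T1 = np.polyfit(np.log(unit_number), np.log(unit_hours), 1)

learning_percentage = 2**b   # time remaining after each doubling of output
print(f"b = {b:.3f}, learning percentage = {100*learning_percentage:.1f}%")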

Production and QA

In considering QA, we can take the example of the collared coils and the detection of hidden defects, in which magnetic measurements at the suppliers at room temperature (with a current of only 8 A, that is, at a field of about 5 mT) played a primary role. CERN provided the tooling and the know-how and, once the pre-series was completed, passed the job of measuring to industry for the series production. Each measurement has been analysed almost online at CERN, with a commitment to give an answer on whether the magnet is good, normally within two hours and exceptionally within a day. If an anomaly is detected, a thorough analysis is carried out to establish whether the magnet can proceed or must be disassembled, a decision that is costly and painful because of the interruption to the tight production cycle.
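
The low measuring field is consistent with the dipole's transfer function. A back-of-envelope check, assuming the widely quoted design values of about 8.33 T at roughly 11,850 A (figures assumed here, not taken from this article):

# Nominal design values assumed for illustration: ~8.33 T at ~11,850 A
transfer_function = 8.33 / 11850.0      # tesla per ampere, ~0.7 mT/A
print(f"{8 * transfer_function * 1e3:.1f} mT at 8 A")   # ~5.6 mT, of the order quoted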

Figure 5 shows the total number of collared coils that have been disassembled versus the magnet fabrication number. At the start of production, as expected, there were a number of rejects. Then, after a period of good production, a defective series appeared when production was almost stable. After investigation, these defects were traced to small details and procedures that did not have sufficient margin to accommodate small mistakes once the slow and strictly monitored early production gave way to mass production, which also involved extensive recruitment and training of new staff. Indeed, in the mass production the experienced technicians, trained over years during the prototyping period, became the supervisors of newly trained colleagues; the reality is that in series production the best workers are no longer the active players.

A project like this requires pragmatism and a focus on the main objective: producing a magnet of sufficient quality in the given time, rather than improving it continuously. As stated in the first external review of superconducting cable and magnet production, instituted in autumn 2001 and composed of a panel of international scientists and engineers, "the best is the enemy of the good". We made this the guiding star of the whole production process. Following other good advice from the review, a thorough audit of QA was carried out at the manufacturing sites; the very low rate of mistakes in the later production (see figure 5) is also a result of these high-quality audits.

Delivery and performance

It is worth remembering that at the call for tender for the series in 2001 – 386 dipole cold masses from each of the three producers – the bidders offered their minimum price, reduced by more than 20%, for delivery by November 2006, even though the contract demanded delivery by June 2006 to provide some contingency. Even then, November 2006 was already considered the latest date compatible with the LHC schedule. This goal has not shifted, showing that the project, despite the problems encountered in other areas, has not been delayed significantly since the end of 2001. Babcock Noell completed production seven months early and the other two suppliers are projected to have delivered the dipoles for the tunnel (1232 in total) by October 2006, in line with the LHC schedule.

The performance of the magnets can be qualified to a good extent by three parameters. The first is quench behaviour, that is, the irreversible loss of the superconducting state. Figure 6 shows the number of quenches needed to pass the nominal field of 8.3 T at the second thermal cycle. (The magnets will see at least three thermal cycles before they see particle beams.) The results are very encouraging; based on them and on the results for a few weak magnets that were tested three or four times, we anticipate that dipole quenches will take up only around 10–15 days during the hardware commissioning, and a negligible time during beam operation.

Regarding the magnetic field, the most important quantity to monitor is the uniformity of bending strength between dipoles, as all of the magnets in a sector of the LHC will be powered in series. Figure 7 shows the exceptionally satisfactory results for the coils already measured (95% of the whole production). Based on these results, magnets from different manufacturers can be mixed, allowing us to drop a constraint that would have made installation even more difficult than it is today.

A third important check concerns the geometry of the magnets. The dipoles are bent to follow the beam trajectory and so minimize coil aperture and cost. Each dipole cold mass has a sagitta of 9 mm over its 15 m length and must be very precise at the extremities, where corrector magnets are positioned. This is not easy on equipment that weighs 30 tonnes, has a laminated construction, and whose shape is determined by friction and welding shrinkage. It required a great deal of supervision in industry and at CERN (where the cold masses are inserted into their cryostats and many other operations performed), and good collaboration among different teams in the AT, AB and TS departments. Figure 8 shows the actual positions of the sextupole corrector magnets.
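
The quoted sagitta follows from simple arc geometry. A back-of-envelope check, assuming the commonly quoted LHC dipole magnetic length of about 14.3 m and bending radius of about 2804 m (neither figure is given in this article):

import math

R = 2803.95   # m, assumed bending radius of the LHC dipoles
L = 14.3      # m, assumed dipole magnetic length

theta = L / R                                   # bending angle of one dipole
sagitta_exact  = R * (1 - math.cos(theta / 2))  # exact circular-arc sagitta
sagitta_approx = L**2 / (8 * R)                 # small-angle approximation
print(f"{sagitta_exact*1e3:.1f} mm vs {sagitta_approx*1e3:.1f} mm")   # both ~9.1 mm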

Acknowledgements

The production of the LHC dipoles involved hundreds of people at CERN and about 1200 people in industry across Europe and worldwide. It has been a high-tech achievement that has shown the capacity of high-energy physics, and of CERN in particular, to deliver large and difficult industrial projects on time.

Precision pins down the electron’s magnetism


The electron’s magnetic moment has recently been measured to an accuracy of 7.6 parts in 10¹³ (Odom et al. 2006). As figure 1a indicates, this is a six-fold improvement on the last measurement of this moment made nearly 20 years ago (Van Dyck et al. 1987). The new measurement and the theory of quantum electrodynamics (QED) together determine the fine structure constant to 0.70 parts per billion (Gabrielse et al. 2006). This is nearly 10 times more accurate than has so far been possible with any rival method (figure 1b). Higher accuracies are expected, based upon convergence of many new techniques – the subject of a half-dozen Harvard PhD theses during the past 20 years. A one-electron quantum cyclotron, cavity-inhibited spontaneous emission, a self-excited oscillator and a cylindrical Penning trap contribute to the extremely small uncertainty. For the first time, researchers have achieved spectroscopy with the lowest cyclotron and spin levels of a single electron fully resolved via quantum non-demolition measurements, and a cavity shift of g has been directly observed.


Unusual features

A circular storage ring is the key to these greatly improved measurements, but the storage ring is unusual compared with those at CERN, for example. To begin with, it uses only one electron, stored and reused for months at a time. The radius of the storage ring is much less than 0.1 µm, and the electron energy is so low that we use temperature units to describe it – 100 mK. Furthermore, the electron does not orbit in a familiar circular orbit even though it is in a magnetic field; instead, it makes quantum jumps between only the ground state and the first excited states of its cyclotron motion – non-orbiting stationary states. It also makes quantum jumps between spin-up and spin-down states. Blackbody photons stimulate transitions between the lowest cyclotron states until we cool our storage ring to 100 mK to essentially eliminate them. The spontaneous emission of synchrotron radiation is suppressed because of its low energy and by locating the electron at the centre of a microwave cavity. The damping time is typically about 10 seconds, about 10²⁴ times slower than for a 104 GeV electron in the Large Electron–Positron collider (LEP). To confine the electron weakly we add an electrostatic quadrupole potential to the magnetic field by applying appropriate potentials to the surrounding electrodes of a Penning trap, which is also a microwave cavity (figure 2a).
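
To see why 100 mK is cold enough, one can estimate the mean number of blackbody photons in the cyclotron mode. The sketch below assumes a trap field of about 5.4 T (a typical value for this kind of experiment; the actual field is not quoted here), for which the cyclotron frequency is roughly 150 GHz:

import math

B = 5.4                        # tesla, assumed trap field
e, m = 1.602e-19, 9.109e-31    # electron charge (C) and mass (kg)
h, k_B = 6.626e-34, 1.381e-23  # Planck and Boltzmann constants (SI)

nu_c = e * B / (2 * math.pi * m)   # cyclotron frequency, ~1.5e11 Hz
for T in (4.2, 0.1):               # liquid-helium temperature vs 100 mK
    n_mean = 1 / (math.exp(h * nu_c / (k_B * T)) - 1)   # Planck occupation of the mode
    print(f"T = {T} K: mean blackbody photon number ~ {n_mean:.1e}")
# At 4.2 K the mode still holds about 0.2 photon on average, enough to keep
# exciting the electron; at 100 mK the occupation is of order 1e-32, so
# blackbody-stimulated quantum jumps essentially stop.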


The lowest cyclotron and spin energy levels for an electron in a magnetic field are shown in figure 2b. (Very small changes to these levels from the electrostatic quadrupole and special relativity are well understood and measured, though they cannot be described in this short report.) Microwave photons introduced into our trap cavity stimulate cyclotron transitions from the ground state to the first excited state. The long cyclotron lifetime allows us to turn on a detector to count the number of quantum jumps for each attempt as a function of cyclotron frequency νc (figure 3d). A similar quantum jump spectroscopy is carried out as a function of the frequency of a radiofrequency drive at a frequency νa = νs – νc, which stimulates a simultaneous spin flip and cyclotron excitation, where νs is the spin precession frequency (figure 3c). The lineshapes are understood theoretically. One-quantum cyclotron transitions (figure 3b) and spin flips (figure 3a) are detected with good signal-to-noise from the small shifts that they cause to an orthogonal, classical electron oscillation that is self-excited.


The dimensionless electron magnetic moment is the magnetic moment in units of the Bohr magneton, eħ/2m, where the electron has charge –e and mass m. The value of g is determined by a ratio of the frequencies that we measure, g/2 = 1 + νa/νc, with the result that g/2 = 1.00115965218085(76) [0.76 ppt]. The uncertainty is nearly six times smaller than in the past, and g is shifted downwards by 1.7 standard deviations (Odom et al. 2006).
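
As a numerical illustration of this ratio (the frequencies below are only rough values of the kind a ~5 T trap would give, not the measured ones):

nu_c = 149.0e9     # Hz, illustrative cyclotron frequency
nu_a = 172.8e6     # Hz, illustrative anomaly frequency nu_s - nu_c

g_over_2 = 1 + nu_a / nu_c
print(f"g/2 ~ {g_over_2:.7f}")   # ~1.0011597, i.e. the measured value at this crude precision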


What can be learned from the more accurate electron g? The first result beyond g itself is the fine structure constant, α = e²/4πε₀ħc – the fundamental measure of the strength of the electromagnetic interaction, and also a crucial ingredient in our system of fundamental constants. A Dirac point particle has g = 2. QED predicts that vacuum fluctuations and polarization slightly increase this value. The result is an asymptotic series that relates g and α:

(Eq. 1)

g/2 = 1 + C₂(α/π) + C₄(α/π)² + C₆(α/π)³ + C₈(α/π)⁴ + … + a_µτ + a_hadronic + a_weak

According to the Standard Model, the hadronic and weak contributions are very small and believed to be well understood at the accuracy needed. Impressive QED calculations give exact values for C₂, C₄ and C₆, a numerical value and uncertainty for C₈, and a small a_µτ. Using the newly measured g in equation 1 gives α⁻¹ = 137.035999710(96) [0.70 ppb] (Gabrielse et al. 2006). The total uncertainty of 0.70 ppb is 10 times smaller than for the next most precise methods (figure 1b), which determine α from measured mass ratios and optical frequencies, together with rubidium (Rb) or caesium (Cs) recoil velocities.
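
In outline, α is obtained by inverting this series numerically. The sketch below uses coefficient values rounded to a few digits (and a rough lump for the muon/tau, hadronic and weak terms), so it reproduces the quoted α⁻¹ only approximately; the published extraction uses the full calculated values and their uncertainties:

import math

a_e = 1.00115965218085 - 1        # measured g/2 minus 1 (the electron anomaly)
C2, C4, C6, C8 = 0.5, -0.328479, 1.181241, -1.73   # QED coefficients, rounded for illustration
small = 4.4e-12                   # rough combined size of the muon/tau, hadronic and weak terms

def f(x):       # x stands for alpha/pi
    return C2*x + C4*x**2 + C6*x**3 + C8*x**4 + small - a_e

def fprime(x):
    return C2 + 2*C4*x + 3*C6*x**2 + 4*C8*x**3

x = 2 * a_e                       # lowest-order starting guess, alpha/pi ~ 2 a_e
for _ in range(6):                # Newton's method converges in a few iterations
    x -= f(x) / fprime(x)

print(f"1/alpha ~ {math.pi / x:.4f}")   # ~137.0360; the full analysis gives 137.035999710(96)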


The second use of the newly measured electron g is in testing QED. The most stringent test of QED – which is one of the most demanding comparisons of any calculation and experiment – continues to come from comparing measured and calculated g-values, the latter using an independently measured α as an input. The new g, compared with equation 1 with α(Cs) or α(Rb), gives a difference δg/2 < 15 × 10⁻¹² (see Gabrielse 2006 for details and a discussion). The small uncertainties in g/2 will allow a 10 times more demanding test if ever the large uncertainties in the independent α values can be reduced. The prototype of modern physics theories is thus tested far more stringently than its inventors ever envisioned – as Freeman Dyson remarks in his letter at the beginning of the article – with better tests to come.


The third use of the measured g is in probing the internal structure of the electron – limiting the electron to constituents with a mass m* > m/√(δg/2) = 130 GeV/c², corresponding to an electron radius R < 1 × 10⁻¹⁸ m. If this test were limited only by our experimental uncertainty in g, then we could set a limit m* > 600 GeV. This is not as stringent as the related limit set by LEP, which probes for a contact interaction at 10.3 TeV. However, the limit is obtained quite differently, and is somewhat remarkable for an experiment carried out at 100 mK.
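
The quoted numbers follow from this simple scaling. A sketch of the arithmetic, using the electron mass of 0.511 MeV/c² and ħc ≈ 197 MeV·fm to convert the mass scale into a length (the exact conversion is model-dependent, so this only reproduces the order of magnitude):

import math

m_e = 0.511e-3       # electron mass in GeV/c^2
hbar_c = 0.1973e-15  # GeV * m  (hbar*c ~ 197 MeV*fm)

# Present g/theory comparison, delta(g/2) < 15e-12:
m_star = m_e / math.sqrt(15e-12)
print(f"m* > {m_star:.0f} GeV/c^2, R < {hbar_c / m_star:.1e} m")   # ~130 GeV/c^2 and ~1.5e-18 m

# If only the experimental uncertainty of 7.6 parts in 10^13 limited the test:
print(f"m* > {m_e / math.sqrt(7.6e-13):.0f} GeV/c^2")   # ~590 GeV/c^2, consistent with the ~600 GeV quoted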

The fourth use of the new electron g concerns measurements of the muon g – 2 as a way to search for physics beyond the Standard Model. Even though the muon g values have nearly 1000 times larger uncertainties than the new electron g, heavy particles – possibly unknown in the Standard Model – are expected to make a contribution that is much larger for the muon. However, this contribution would still be very small compared with the calculated QED contribution, which depends on α and must be subtracted out. The electron g provides both the α and the confidence-building test of QED that are needed for this large subtraction.

CERN has long embraced particle physics at whatever energy scales are most appropriate for learning about fundamental reality. It is impressive that CERN is replacing the highest-energy electron–positron collider, LEP, with the world’s highest-energy proton collider, the Large Hadron Collider. At CERN, however, the lowest-energy antiproton storage rings are also in operation. One antiproton cooled to 4.2 K was used to show that the magnitudes of q/m for the proton and antiproton are the same to better than nine parts in 10¹¹ – the most stringent test of CPT invariance with a baryon system.

Now, these low-energy antiproton techniques are being used to make the coldest possible antihydrogen atoms, to be used for higher-precision tests of fundamental symmetries. It is fitting that the new measurements of the electron magnetic moment and the fine structure constant were carried out in the lab of a long-time CERN researcher, since they illustrate the power of the low-energy techniques that we are applying to antihydrogen studies at CERN’s Antiproton Decelerator facility, the unique source of low-energy antiprotons.

Particle physics and the press

These are exciting times for particle physics, and the world’s press are taking notice. As the Large Hadron Collider prepares to begin operations, as the International Linear Collider becomes an ever more clearly defined project, as programmes for neutrino physics and astrophysics flourish, and most of all as long-awaited discoveries reveal the secrets of the universe, our friends in the media will share the adventure. Their stories and articles, TV programmes, blogs and podcasts will inform and inspire others with the spirit of excitement that particle physicists are feeling at the start of the 21st century.


The journalists who tell our story will have wildly varying backgrounds, skills and points of view. Their pieces will cover the spectrum of science journalism. They will define and describe; compare and contrast; make judgements and express opinions; and praise and criticize. Writing in language that is accessible to their readers, they will at times seem wanting in their grasp of scientific subtleties. Sometimes they will appear to lack appreciation for something that we care deeply about; occasionally they may even give more credit than we deserve.

It is accepted wisdom that the press almost always get it wrong. Actually, in our experience, ultimately they get it just about right. In the months and years ahead, the majority of journalists who tell the story of 21st-century particle physics will do an excellent job. From time to time, inevitably, they will get it wrong – at least as we see it. A true test of our character as a field is how we react to this level of media coverage.

At a time of extraordinary scientific opportunity in particle physics, we must keep our eyes on the science and enjoy the privilege of taking part in discovering how the universe works. We should equally enjoy the opportunity afforded by the media’s interest.

In the past, there have been occasions when our field has devolved into warring camps, reading each new press article with suspicion, quick to take offence at every real or imagined slight or bias. It’s time to change this model. Do we want to be seen as a fractious, contentious community beset by invidiousness, or as a unified community of committed scientists confronting a golden age of discovery? We have the choice. We can set a tone of respect and admiration for all projects and experiments that lead to discovery – or one that begrudges every word of praise for others’ work. Without fail, the media will pick up on our tone. So will our colleagues, our students, scientists in other disciplines and we ourselves. It will be part of what defines the kind of field that we are.

Competition will always exist, and this is a good thing. People care passionately about their work. Of course they want to see it recognized, and defend it if it is unfairly criticized. But we have everything to gain by maintaining perspective. There will be hundreds of stories during the years ahead. Today’s lukewarm review will be tomorrow’s encomium – and vice versa. We should take them all in our stride, because we are in this together for the long haul. We all want to discover how the universe works. It’s a big universe with room, and credit, enough for everyone.

• This article is being published simultaneously in the October issues of CERN Courier and symmetry (see www.symmetrymag.org).
Members of InterAction, a collaboration of particle-physics communicators from laboratories around the world (www.interactions.org): Roberta Antolini, INFN Gran Sasso; Peter Barratt, PPARC; Natalie Bealing, CCLRC/RAL; Stefano Bianco, INFN Frascati; Karsten Buesser, DESY; Neil Calder, SLAC; Elizabeth Clements, ILC; Reid Edwards, Lawrence Berkeley National Laboratory; Suraiya Farukhi, Argonne National Laboratory; James Gillies, CERN; Judith Jackson, Fermilab; Marge Lynch, Brookhaven National Laboratory; Youhei Morita, KEK, ILC; Christian Mrotzek, DESY; Perrine Royole-Degieux, IN2P3, ILC; Yves Sacquin, DAPNIA CEA; Ahren Sadoff, Cornell University LEPP; Maury Tigner, Cornell University LEPP; and Barbara Warmbein, ILC.
