HERA leaves a rich legacy of knowledge

The HERA facility at DESY was unique: it was the only accelerator in the world to collide electrons (or positrons) with protons, at centre-of-mass energies of 240–320 GeV. In collisions such as these, the point-like electron “probes” the interior of the proton via the electroweak force, while acting as a neutral observer with regard to the strong force. This made HERA a precision machine for QCD – a “super electron microscope” designed to measure precisely the structure of the proton and the forces within it, particularly the strong interaction. HERA’s point-like probes also gave it an advantage over proton colliders such as the LHC: while protons can have a much higher energy, they are composite particles dominated by the strong force, which makes it much more difficult to use them to resolve the proton’s structure. The results from HERA, many of which are already part of textbook knowledge, promise to remain valid and unchallenged for quite some time.

Into the depths of the proton

The proton’s structure can be described using various structure functions, each of which covers different aspects of the electron–proton interaction. HERA was the world’s only accelerator where physicists could study the three structure functions of the proton in detail. During the first phase of operation (HERA I), the colliding-beam experiments H1 and ZEUS already provided surprising new insights into F2, which describes the distribution of the quarks and antiquarks as a function of the momentum transfer (Q2) and of the fraction (x) of the proton’s momentum carried by the struck quark. When HERA started up in 1992, physicists already knew that the quarks in the proton emit gluons, which in turn give rise to other gluons and to quark–antiquark pairs in the virtual “sea”. However, the general assumption was that, apart from the three valence quarks, there were only very few quark–antiquark pairs and gluons in the proton.
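
For readers who want the standard notation behind these quantities (it is not spelled out in the article itself), the kinematics of deep-inelastic electron–proton scattering and the leading-order parton-model picture of F2 are usually written as

$$Q^2 = -q^2 = -(k-k')^2, \qquad x = \frac{Q^2}{2\,P\cdot q}, \qquad F_2(x,Q^2) \simeq x \sum_q e_q^2 \left[ q(x,Q^2) + \bar{q}(x,Q^2) \right],$$

where k and k' are the incoming and scattered lepton four-momenta, q = k − k' is the four-momentum carried by the exchanged boson, P is the proton four-momentum, and q(x) and q̄(x) are the quark and antiquark densities. Larger Q² corresponds to shorter resolved distances, and smaller x to softer constituents of the proton.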

Thanks to HERA’s high centre-of-mass energy, H1 and ZEUS pushed forward to increasingly shorter distances and smaller momentum fractions, and measured F2 over a range that spans four orders of magnitude of x and Q2 – two to three orders of magnitude more than were accessible with earlier experiments (figure 1). What the two experiments discovered came as a great surprise: the smaller the momentum fraction, the greater the number of quark–antiquark pairs and gluons that appear in the proton (figure 2). The interior of the proton therefore looks much like a thick, bubbling soup in which gluons and quark–antiquark pairs are continuously emitted and annihilated. This high density of gluons and sea quarks, which increases at small x, represented a completely new state of the strong interaction – which had never been investigated until then.

The proton sea, however, comprises not only up, down and strange quarks. Thanks to the high luminosity achieved during HERA’s second operating phase (HERA II), the experiments for the first time revealed charm and bottom quarks in the proton, with charm quarks accounting for 20–30% of F2 in some kinematic regions, and bottom quarks for 0.3–1.5%. It appears that all quark flavours are produced democratically at extremely high momentum transfers, where even the mass of the heavy quarks becomes irrelevant. The analysis of the remaining data will further enhance the precision and lead to a better understanding of the production of heavy quarks, which is particularly important for physics at the LHC.

During HERA II, H1 and ZEUS also used longitudinally polarized electrons and positrons. This boosted the experiments’ sensitivity to the structure function xF3, which describes the interference effects between the electromagnetic and weak interactions within the proton. These effects are normally difficult to measure, but their intensity increases with the polarization of the particles, making them clearly visible.

Shortly before HERA’s time came to an end, the accelerator ran at reduced proton energies for several months (460 and 575 GeV, instead of 920 GeV). Measurements at different energies, but under otherwise identical kinematic conditions, single out the third structure function FL, which provides information on the behaviour of the gluons at small x. These measurements are without parallel and are particularly important for the understanding of QCD.
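
As a schematic illustration of how running at several proton energies isolates FL (a standard textbook form, neglecting the small xF3 term, and not quoted from the experiments), the reduced neutral-current cross-section can be written as

$$\sigma_r(x,Q^2,y) \simeq F_2(x,Q^2) - \frac{y^2}{1+(1-y)^2}\,F_L(x,Q^2), \qquad y \simeq \frac{Q^2}{s\,x}.$$

Because the inelasticity y depends on the squared centre-of-mass energy s, measurements at several proton energies sample several values of y at the same x and Q², and the slope of σr as a function of y²/[1+(1−y)²] gives FL directly.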

HERA provided another surprise soon after it went into operation. In events at the highest Q2, a quark is violently knocked out of the proton. In 10–15% of such cases, instead of breaking up into many new particles, the proton remains completely intact. This is about as surprising as if 15% of all head-on car crashes left no scratch on the cars. Such phenomena were familiar at low energies, and were generally described using the concepts of diffractive physics, which involve the pomeron, a hypothetical neutral particle with the quantum numbers of the vacuum. However, early HERA measurements showed that this concept did not hold up, failing completely in the hard diffraction range.

To conform with QCD, at least two gluons must be involved in a diffractive interaction to make it colour-neutral. Could hard diffraction therefore be related to the high gluon density at small x? The H1 and ZEUS results were clear: the colour-neutral exchange is indeed dominated by gluons. These observations at HERA led to the development of an entire industry devoted to describing hard diffraction, and the analyses and attempts at interpretation continue unabated. There have been some successes, but the results are not yet completely understood. It is therefore important to analyse the HERA data from all conceivable points of view to assess all theoretical interpretations appropriately.

The fundamental forces of nature

A special characteristic of the strong interaction is its unusual behaviour with respect to distance. While the electromagnetic interaction becomes weaker with increasing distance, the opposite is true for the strong force. It is only when the quarks are close together that the force between them is weak (asymptotic freedom); the force becomes stronger at greater distances, thus more or less confining the quarks within the proton. While other experiments have also determined the strong coupling constant αs as a function of energy, H1 and ZEUS for the first time demonstrated the characteristic running of αs over a broad range of energies in a single experiment (figure 3). Thus, the HERA results impressively confirmed the special behaviour of the strong force that David Gross, David Politzer and Frank Wilczek predicted more than 30 years earlier – a prediction for which they won the Nobel Prize for Physics in 2004.
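
For orientation, the leading-order form of this running (a textbook approximation rather than the full analysis used by H1 and ZEUS) is

$$\alpha_s(Q^2) \approx \frac{12\pi}{(33 - 2 n_f)\,\ln(Q^2/\Lambda^2)},$$

where n_f is the number of active quark flavours and Λ is the QCD scale of a few hundred MeV. The logarithmic fall of αs with increasing Q² is precisely the asymptotic freedom mapped out in figure 3.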

Although the collaborations used HERA mostly for QCD studies, the aim of studying the electroweak interaction was part of the proposal for the machine. For instance, H1 and ZEUS measured the cross-sections of neutral- and charged-current reactions as a function of Q2. At low momentum transfers, i.e. large distances, the electromagnetic processes occur significantly more often than the weak ones because the electromagnetic force acts much more strongly than the weak force. At higher Q2, and thus smaller distances, both reactions occur at about the same rate, i.e. both forces are equally strong. H1 and ZEUS thus directly observed the effects of electroweak unification, which is the first step towards the grand unification of the forces of nature.
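
A rough way to see why the two rates converge (schematic propagator factors only, with couplings to the quarks and the structure functions omitted) is

$$\frac{d\sigma_{NC}}{dQ^2} \sim \frac{\alpha^2}{Q^4}, \qquad \frac{d\sigma_{CC}}{dQ^2} \sim \frac{G_F^2}{2\pi}\left(\frac{M_W^2}{M_W^2 + Q^2}\right)^{2}.$$

At low Q² the photon’s 1/Q⁴ pole makes the neutral-current rate far larger, while for Q² of the order of M_W,Z² ≈ 10⁴ GeV² the two expressions become comparable – the convergence that H1 and ZEUS observed.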

The longitudinal polarization of the electrons in HERA II also opened up new possibilities for studying the electroweak force. For example, theory predicts that because only left-handed neutrinos exist in nature, the transformation of a right-handed electron into a right-handed neutrino via the weak interaction should be impossible. H1 and ZEUS measured the charged currents as a function of the various polarization states, and proved that there are indeed no right-handed currents in nature, even at the high energies of HERA (figure 4).
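
The test can be stated in one line (the standard expectation for a purely left-handed weak interaction): the charged-current cross-section depends linearly on the degree of longitudinal polarization Pe of the lepton beam,

$$\sigma_{CC}^{e^{\mp}p}(P_e) = (1 \mp P_e)\,\sigma_{CC}^{e^{\mp}p}(P_e = 0),$$

so it must extrapolate to zero for fully right-handed electrons (Pe = +1) or fully left-handed positrons (Pe = −1). The linear behaviour and the vanishing intercepts are what establish the absence of right-handed charged currents.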

Particle collisions at the highest Q2 are comparatively rare. Yet it is here, at the known limits of the Standard Model, that any new effects should appear. Thanks to the higher luminosity of HERA II, the collaborations can study this realm with enhanced precision. To date, they have not observed any significant deviations from the Standard Model. The results from HERA thus substantially broaden the Standard Model’s range of validity and restrict the possible phase space for new phenomena, extending tests of the model all the way up to the highest momentum transfers.

The nucleon-spin puzzle

A further important contribution to our understanding of the proton comes from another HERA experiment, HERMES, which was designed to study the origin of nucleon spin. In the mid-1980s, experiments at CERN and SLAC discovered that the spins of the quarks account for only around a third of the total nucleon spin. Starting in 1995, the HERMES collaboration aimed to find out where the other two-thirds come from, by sending the longitudinally polarized electrons or positrons from HERA through a target cell filled with polarized gases.

During HERA I, HERMES completed its first task, which was to determine the individual quark contributions to the nucleon spin. Using measurements on longitudinally polarized gases, the HERMES collaboration provided the world’s first model-independent determination of the separate contributions made to the nucleon spin by the up, down and strange quarks (figure 5). The results revealed that the largest contribution to the nucleon spin comes from the valence quarks, with the up quarks making a positive contribution, the down quarks a negative one. The polarizations of the sea quarks are all consistent with zero. The HERMES measurements therefore proved that the spin of the quarks generates less than half of the spin of the nucleon, and that the quark spins that do contribute come almost exclusively from the valence quarks – a decisive step toward the solution of the spin puzzle.

The HERMES team then turned its attention to gluon spin, making one of the first measurements to give a direct indication that the gluons make a small but positive contribution to the overall spin. The analysis of the latest data will yield more detailed information. Until recently, it was impossible to investigate the orbital angular momentum of the quarks experimentally. Now, using deeply virtual Compton scattering (DVCS) on a transversely polarized target, the HERMES team has made the first model-dependent extraction of the total orbital angular momentum of the up quarks. Analysis of the data taken with a new recoil detector in 2006–2007 will refine the knowledge of DVCS and enable HERMES to make a key contribution to improving the models of generalized parton distributions, in the hope of soon identifying the total orbital angular momentum of the up quarks.
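
A compact way to organize these contributions (one common decomposition, given here for orientation rather than taken from HERMES) is the nucleon spin sum rule

$$\frac{1}{2} = \frac{1}{2}\,\Delta\Sigma + \Delta G + L_q + L_g,$$

where ΔΣ is the net quark-spin contribution found to be roughly a third, ΔG is the gluon-spin contribution, and Lq and Lg are the quark and gluon orbital angular momenta. The HERMES programme described above addresses, in turn, ΔΣ, ΔG and Lq.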

Parton distribution functions characterize the nucleon by describing how often the partons – quarks and gluons – will be found in a certain state. There are three fundamental quark distributions: the quark number density, which the H1 and ZEUS experiments have measured with high precision; the helicity distribution, which was the main result of measurements by HERMES with longitudinally polarized gases; and the transversity distribution, which describes the difference in the probabilities of finding quarks in a transversely polarized nucleon with their spin aligned to the nucleon spin, and quarks with their spin anti-aligned. Using data on transversely polarized hydrogen, the HERMES collaboration can now determine this transversity distribution for the first time. The measurements also provide access to the Sivers function, which describes the distribution of unpolarized quarks in a transversely polarized nucleon. As the Sivers function should vanish in the absence of quark orbital angular momentum, its measurement marks an additional important step in the study of orbital angular momentum in the nucleon. Analysis of the initial data shows that the Sivers function seems to be significantly positive, which indicates that the quarks in the nucleon do in fact possess a non-vanishing orbital angular momentum.

Although HERMES focuses on nucleon spin, the physics programme for the experiment extends much further, including, for example, studies of quark propagation in nuclear matter and quark fragmentation, tests of factorization and searches for pentaquark exotic baryon states. Analysis of the data collected up until the shutdown in June 2007 will provide unique insights here as well.

The LHC and beyond

In 2008, the LHC will start colliding protons at centre-of-mass energies about 50 times higher than those at HERA. The results provided by HERA are essential for the interpretation of the LHC data: the proton–proton collisions at the LHC are difficult to describe, involving composite particles rather than point-like ones. It is therefore crucial to have the most exact understanding possible of the collisions’ initial state. This comes from HERA, for example, in the form of precise parton distribution functions of the up, down and strange quarks, and also the charm and bottom quarks (figure 6). An accurate knowledge of these distributions is vitally important, particularly for predictions of Higgs particle production at the LHC.
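
The reason the HERA parton distributions feed so directly into LHC predictions is the factorization of hadronic cross-sections; schematically (a standard leading-order form, with scale dependences suppressed),

$$\sigma(pp \to X) = \sum_{a,b} \int_0^1 dx_1 \int_0^1 dx_2 \; f_a(x_1,\mu)\, f_b(x_2,\mu)\; \hat{\sigma}_{ab \to X}(x_1 x_2 s, \mu),$$

so a prediction such as Higgs production via gluon fusion is only as accurate as the parton densities f_a measured at HERA and evolved to LHC scales.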

Many of these LHC-relevant measurements could only be carried out at HERA. To support the transfer of knowledge and create a long-term connection that takes account of the overlapping physics interest at HERA and the LHC, DESY and CERN have intensified their co-operation in this area. Many researchers from HERA, along with many students and PhD candidates, are already participating in the LHC experiments.

Over the past 15 years, HERA has enabled us to uncover a wealth of different – and partly unexpected – aspects of the proton and the fundamental forces. The analysis of the data recorded up until HERA’s closure in June 2007 is expected to last well into the next decade. The HERA collaborations will be melding these aspects into a vast and cohesive whole – a comprehensive description of strongly interacting matter at small distance scales and short time scales. Given HERA’s unique nature, this picture will endure for a long time and define for years, and possibly decades, our understanding of the dynamics of the strong interaction.

With their results, the HERA teams are now handing the baton over to the LHC collaborations, and also to the theorists. From the outset, the results from HERA have stimulated a large amount of theoretical work, particularly in the field of QCD, where an intensive and fruitful collaboration between theory and experiments has arisen. Thus, the knowledge of the proton and the fundamental forces gained from HERA forms the basis, not only for future experiments, but also for many current developments in theoretical particle physics – a rich legacy indeed.

Vertex 2007 prepares for the radiation challenge

The use of precision position information at the level of a few micrometres has become an increasingly important part of high-energy physics experiments. The purpose of the annual International Workshop on Vertex Detectors is to review progress on silicon-based vertex detectors, investigate the possibilities of new materials and design structures, and discuss applications to medical and other fields. More than 70 physicists participated in the 16th meeting in the series, which was held at the Crowne Plaza Hotel in Lake Placid, New York, on 23–28 September and hosted by the high-energy physics group of Syracuse University. Lake Placid provided a splendid venue (at the height of the autumn tree colours) and created an inspiring atmosphere for excellent talks and discussions. The workshop also included a new poster session to showcase the work of bright young researchers.

The programme included extensive reviews of the almost completed systems for the major LHC experiments – ALICE, ATLAS, CMS and LHCb. The talks and informal discussions showed the great progress there has been in commissioning these impressive systems with test beams and cosmic rays. As the experiments gear up for data-taking, the teams are validating and refining the tracking, alignment and vertex-reconstruction software tools. There are, however, concerns about placing these detectors frighteningly close to the beams of an accelerator that has not yet run. There were presentations of plans for experiment protection at the LHC, but the information provided did not quell the debate.

While everyone is eagerly awaiting data-taking at the LHC, there are already plans for upgrades and new facilities. The world community is poised to meet the challenges of vertex detectors for the proposed International Linear Collider, high-luminosity upgrades of the LHC – the Super LHC (SLHC) – and flavour physics experiments (a Super B-factory and an upgrade of the LHCb experiment). Many upgrade paths require vertex detectors with improved radiation hardness and higher segmentation to cope with the higher multiplicities and higher event rates. In addition, several considerations, not the least of which is the amount of material, motivate the effort towards detector thinning.

Upgraded experiments will also require upgraded analysis tools. Many important new particles, such as Higgs bosons, are likely to decay into B particles. These leave displaced vertices, so algorithms that can supply this vertex information, especially in the early stages of the trigger processor, will be necessary. This is a paramount consideration in the LHCb upgrade and provides a strong motivation to pursue a pixel-based vertex detector.

Novel devices

A strong focus of this workshop was the evaluation of new devices developed to address the variety of challenges posed by future projects. Radiation hardness, for example, is a critical consideration for SLHC upgrades. This is the motivation behind RD50, a large R&D effort based at CERN that involves scientists from all over the world. After years of R&D on a variety of technologies and structures, this group is now reaching important conclusions. In particular, devices using “n+” electrodes (pixels or strips) implanted on p-type substrates appear to be one of the most effective options to cope with increased radiation fluence. Speakers presented recent results showing that microstrip detectors can still be operated after being irradiated to fluences of up to 10¹⁶ neutrons/cm², as required in the innermost layers at SLHC luminosities (figure 1). One plane of a strip detector implemented on p-type substrates has been installed in LHCb, “the first full-scale SLHC silicon plane”. Although the traditional emphasis of this conference is on silicon-based technology, discussions also covered the naturally radiation-hard diamond detectors, in particular the promising single-crystal diamond devices.

The reduced collection distance achievable at high levels of irradiation has helped to inspire renewed interest in thinned silicon. The workshop learned that, after sufficient irradiation, thinned silicon performs just as well as “thick” 300-μm detectors, because the charge deposited deep in a thick sensor can no longer be collected. In addition, examples were shown of detectors thinned down to 10 μm that are functional and mechanically stable. Other novel detector concepts included three-dimensional (3D) detectors; monolithic devices, where the readout chip is made on the same silicon substrate as the sensor; and DEPFET, where each pixel is a p-channel FET on a completely depleted bulk. Another interesting development is so-called “3D integration”, in which the sensor is stacked with several layers of readout electronics, an approach facilitated by the strong push towards miniaturization in the computer industry.

Silicon micropattern detectors are central to precision imaging in several areas of research, from medicine to biology, to astrophysics and astroparticle physics. The field is in rapid evolution and several interesting talks highlighted a broad spectrum of applications. Examples of imaging geared towards medical or biological applications included the MEDIPIX chip, ³H imaging, NANO-CT scanning, and the PILATUS system – a pixelated hybrid silicon X-ray detector developed for protein crystallography at the Swiss Light Source. Astrophysics applications included PAMELA, now taking data in space, and the more futuristic EXIST, a proposed large-area telescope for X-ray astronomy.

• The next meeting in the series will be run by Richard Brenner and held near Stockholm in the summer.

CPT ’07 goes in quest of Lorentz violation

Lorentz symmetry and the closely related CPT symmetry, which combines charge conjugation (C), parity reversal (P) and time reversal (T), are well-tested properties of nature. Nevertheless, efforts to find experimental evidence of Lorentz and CPT violation have increased in number, motivated in part by the quest for a theory to unite quantum mechanics and gravity. Further impetus has come from the introduction of a framework for Lorentz and CPT violation known as the Standard Model Extension (SME), which encompasses the full panorama of possible Lorentz- and CPT-violating effects. Since its development by Alan Kostelecký and co-workers at Indiana University in the 1990s, the SME has been used widely to guide experimental efforts and allow comparisons of results from different experiments.

This field is the topic of a series of meetings that have run triennially since 1998 at the Indiana University physics department, bringing together researchers to share results and ideas. In 2007, the fourth meeting on Lorentz and CPT Symmetry (CPT ’07) was held on 8–11 August, with contributed and invited talks, and a poster session during the conference reception.

The meeting opened with a welcome from Bennett Bertenthal, dean of the university’s College of Arts and Sciences. Ron Walsworth of Harvard University and the Harvard–Smithsonian Center for Astrophysics gave the first scientific talk, in which he reflected on the progress in experimental studies of Lorentz violation since 1997, when the SME coefficients first appeared in their current form. He also discussed his group’s current work to upgrade its noble-gas maser, with the aim of improving the sensitivity to a variety of SME coefficients.

Accelerator-based tests

The SME has opened a rich variety of possibilities for Lorentz violation in the context of neutrino oscillations. Recent work has shown that some, or perhaps even all, of the oscillation effects seen in existing data may be attributable to Lorentz violation. Talks included a presentation by Rex Tayloe of the MiniBooNE collaboration at Fermilab, who provided an overview of the recent data and considered their relation to earlier results from the Liquid Scintillator Neutrino Detector experiment at Los Alamos. He also discussed the three-parameter “tandem” model based on SME coefficients.

The physics of neutral-meson oscillations provides an abundance of theoretical possibilities for Lorentz and CPT violation that can be tested in current and planned experiments. Antonio Di Domenico of the KLOE collaboration showed the first results of a search for CPT-violating effects using K mesons at the DAΦNE collider in Frascati. New results also came from David Stoker of the University of California, Irvine, who presented the first constraints on all four coefficients for CPT violation in the Bd system, based on results from the BaBar experiment at SLAC.

Possible signals of Lorentz violation in the muon system hinge on variations in the anomaly frequency, which could be detected by performing instantaneous frequency comparisons and sidereal-variation searches. The results described by Lee Roberts from Boston University and the Muon (g-2) Collaboration at Brookhaven represent the highest-sensitivity test of Lorentz and CPT violation for leptons, and improve previous results with muonium, electrons and positrons by about an order of magnitude.

Lorentz and CPT violation may be detectable using antihydrogen spectroscopy. Such tests would involve looking for sidereal changes in the spectra, or looking for direct differences between the spectra of antihydrogen and conventional hydrogen. Theoretical considerations have shown that the hyperfine transitions are of particular interest in these tests. Ryugo Hayano, spokesperson for the ASACUSA collaboration at CERN, provided details of his group’s progress towards tests of this type (see “ASACUSA moves towards new antihydrogen experiments”). Niels Madsen of Swansea gave an overview of the status of the ALPHA experiment at CERN, which has the potential to test Lorentz and CPT symmetry in a variety of ways using trapped antihydrogen.

Gravitational and astrophysical effects

There have been extensive studies of signals of Lorentz violation in the gravitational sector during the past few years, and the results include the identification of 20 coefficients for Lorentz violation in the pure-gravity sector of the minimal SME. The meeting featured presentations of the first ever measurements of SME coefficients in the gravitational sector by two experimental groups. Holger Müller of Stanford University announced measurements of seven such coefficients for Lorentz violation, based on work with Mach–Zehnder atom interferometry. Other first results were unveiled by James Battat of Harvard University, placing limits on six gravitational-sector SME coefficients from the analysis of more than three decades of archival lunar-laser-ranging data from the McDonald Observatory in Texas and the Observatoire de la Côte d’Azur in France. The Apache Point Observatory in New Mexico should achieve further improvements in lunar-laser ranging, down to a sensitivity level of 1 mm, as Tom Murphy of the University of California at San Diego explained.

The fundamental nature of Lorentz symmetry means that there are subtle conceptual issues to be addressed. Roman Jackiw of MIT considered one example, and showed that symmetry breaking may be a mask for co-ordinate choice in a diffeomorphism invariant theory. Robert Bluhm of Colby College in Maine provided a comprehensive discussion of the Nambu–Goldstone and massive fluctuation modes about the vacuum in gravitational theories. Other theoretical topics included approaches to deriving the Dirac equation in theories that violate Lorentz symmetry, presented by Claus Lämmerzahl of Bremen University, and Chern–Simons electromagnetism, which Ralf Lehnert of MIT described.

Satellites offer unique opportunities to probe Lorentz symmetry in a low-gravity environment. Tim Sumner of Imperial College gave an overview of approaches to space-based experiments with high-sensitivity instruments, and looked at ESA’s upcoming plans. James Overduin of Stanford University likewise reviewed the ongoing analysis of data from the Gravity Probe B satellite.

Cosmological and astrophysical sources also provide a number of intriguing possibilities for testing the fundamental laws and symmetries of nature. Matt Mewes of Marquette University presented recent work using the cosmic microwave background to place limits on Lorentz-violation coefficients in the renormalizable and non-renormalizable sectors of the SME. The Pioneer anomaly – apparent deviations in the paths of spacecraft in the outer solar system, such as the Pioneer 10 and 11 – provides a possibility for new physics, as Michael Nieto of Los Alamos described, giving perspectives on the underlying physics that may be responsible for the observations. Synchrotron radiation and inverse Compton scattering from high-energy astrophysical sources may also show sensitivity to a variety of SME coefficients, as Brett Altschul of the University of South Carolina explained.

Atomic-physics tests

There have been tests of the electromagnetic sector of the SME for several years using low-energy experiments that include optical and microwave cavity oscillators, torsion pendulums, atomic clocks, and interferometric techniques. Experimental innovations have led to steadily improving resolutions and to better access to the geometrical components of the SME coefficient space.

Achim Peters of the Humboldt University in Berlin announced improvements by a factor of 30 on certain photon-sector SME coefficients using a cryogenic precision optical resonator on a rotating turntable. Another test in the photon sector has been performed by Michael Tobar of the University of Western Australia, who has used a Mach–Zehnder interferometer to improve the sensitivity to one particular coefficient by six orders of magnitude. The team at Princeton University has recently developed innovations for a second-generation comagnetometer. Sylvia Smullin and group leader Mike Romalis described this work, and also presented the results from the experiment’s first-generation predecessor.

The Eöt-Wash torsion pendulum group at the University of Washington in Seattle has made major contributions to the search for Lorentz violation, including several of the tightest constraints on SME coefficients in the electron sector, which they recently generated using a spin-polarized torsion pendulum with a macroscopic intrinsic spin. Group member Blayne Heckel described preliminary results achieving yet greater sensitivity to a number of electron coefficients using a further refined version of the apparatus.

In all, the 2007 meeting on CPT and Lorentz symmetry highlighted the intense efforts of the physics community in testing Lorentz symmetry and other fundamental properties of nature. Should even minuscule traces of Lorentz violation be found, it would be a paradigm-changing event, leading to profound alterations of our current theories of the forces of nature.

From BCS to the LHC

It was a little odd for me, a physicist whose work has been mainly on the theory of elementary particles, to be invited to speak at a meeting of condensed-matter physicists celebrating a great achievement in their field. It is not only that there is a difference in the subjects that we explore. There are deep differences in our aims, in the kinds of satisfaction that we hope to get from our work.

Condensed-matter physicists are often motivated to deal with phenomena because the phenomena themselves are intrinsically so interesting. Who would not be fascinated by weird things, such as superconductivity, superfluidity, or the quantum Hall effect? On the other hand, I don’t think that elementary-particle physicists are generally very excited by the phenomena they study. The particles themselves are practically featureless, every electron looking tediously just like every other electron.

Another aim of condensed-matter physics is to make discoveries that are useful. In contrast, although elementary-particle physicists like to point to the technological spin-offs from elementary-particle experimentation, and these are real, this is not the reason that we want these experiments to be done, and the knowledge gained by these experiments has no foreseeable practical applications.

Most of us do elementary-particle physics neither because of the intrinsic interestingness of the phenomena that we study, nor because of the practical importance of what we learn, but because we are pursuing a reductionist vision. All of the properties of ordinary matter are what they are because of the principles of atomic and nuclear physics, which are what they are because of the rules of the Standard Model of elementary particles, which are what they are because…well, we don’t know, this is the reductionist frontier, which we are currently exploring.

I think that the single most important thing accomplished by the theory of John Bardeen, Leon Cooper, and Robert Schrieffer (BCS) was to show that superconductivity is not part of the reductionist frontier (Bardeen et al. 1957). Before BCS this was not so clear. For instance, in 1933 Walter Meissner raised the question of whether electric currents in superconductors are carried by the known charged particles, electrons and ions. The great thing that Bardeen, Cooper, and Schrieffer showed was that no new particles or forces had to be introduced to understand superconductivity. According to a book on superconductivity that Cooper showed me, many physicists were even disappointed that “superconductivity should, on the atomistic scale, be revealed as nothing more than a footling small interaction between electrons and lattice vibrations”. (Mendelssohn 1966).

The claim of elementary-particle physicists to be leading the exploration of the reductionist frontier has at times produced resentment among condensed-matter physicists. (This was not helped by a distinguished particle theorist, who was fond of referring to condensed-matter physics as “squalid state physics”.) This resentment surfaced during the debate over the funding of the Superconducting Super Collider (SSC). I remember that Phil Anderson and I testified in the same Senate committee hearing on the issue, he against the SSC and I for it. His testimony was so scrupulously honest that I think it helped the SSC more than it hurt it. What really did hurt was a statement opposing the SSC by a condensed-matter physicist who happened at the time to be the president of the American Physical Society. As everyone knows, the SSC project was cancelled, and now we are waiting for the LHC at CERN to get us moving ahead again in elementary-particle physics.

During the SSC debate, Anderson and other condensed-matter physicists repeatedly made the point that the knowledge gained in elementary-particle physics would be unlikely to help them to understand emergent phenomena like superconductivity. This is certainly true, but I think beside the point, because that is not why we are studying elementary particles; our aim is to push back the reductive frontier, to get closer to whatever simple and general theory accounts for everything in nature. It could be said equally that the knowledge gained by condensed-matter physics is unlikely to give us any direct help in constructing more fundamental theories of nature.

So what business does a particle physicist like me have at a celebration of the BCS theory? (I have written just one paper about superconductivity, a paper of monumental unimportance, which was treated by the condensed-matter community with the indifference it deserved.) Condensed-matter physics and particle physics are relevant to each other, despite everything I have said. This is because, although the knowledge gained in elementary-particle physics is not likely to be useful to condensed-matter physicists, or vice versa, experience shows that the ideas developed in one field can prove very useful in the other. Sometimes these ideas become transformed in translation, so that they even pick up a renewed value to the field in which they were first conceived.

The example that concerns me is an idea that elementary-particle physicists learnt from condensed-matter theory – specifically from the BCS theory. It is the idea of spontaneous symmetry breaking.

Spontaneous symmetry breaking

In particle physics we are particularly interested in the symmetries of the laws of nature. One of these symmetries is invariance of the laws of nature under the symmetry group of three-dimensional rotations, or in other words, invariance of the laws that we discover under changes in the orientation of our measuring apparatus.

When a physical system does not exhibit all the symmetries of the laws by which it is governed, we say that these symmetries are spontaneously broken. A very familiar example is spontaneous magnetization. The laws governing the atoms in a magnet are perfectly invariant under three-dimensional rotations, but at temperatures below a critical value, the spins of these atoms spontaneously line up in some direction, producing a magnetic field. In this case, and as often happens, a subgroup is left invariant: the two-dimensional group of rotations around the direction of magnetization.

Now to the point. A superconductor of any kind is nothing more or less than a material in which a particular symmetry of the laws of nature, electromagnetic gauge invariance, is spontaneously broken. This is true of high-temperature superconductors, as well as the more familiar superconductors studied by BCS. The symmetry group here is the group of two-dimensional rotations. These rotations act on a two-dimensional vector, whose two components are the real and imaginary parts of the electron field, the quantum mechanical operator that in quantum field theories of matter destroys electrons. The rotation angle of the broken symmetry group can vary with location in the superconductor, and then the symmetry transformations also affect the electromagnetic potentials, a point to which I will return.
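
In equations (a minimal sketch in one common convention, not taken from the talk): writing the electron field as ψ = ψ1 + iψ2, the symmetry acts as

$$\psi(x) \to e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) - \frac{1}{e}\,\partial_\mu \alpha(x),$$

that is, a rotation by the angle α in the (ψ1, ψ2) plane which, when α varies from point to point, must be accompanied by a shift of the electromagnetic potential Aμ.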

The symmetry breaking in a superconductor leaves unbroken a rotation by 180°, which simply changes the sign of the electron field. In consequence of this spontaneous symmetry breaking, products of any even number of electron fields have non-vanishing expectation values in a superconductor, though a single electron field does not. All of the dramatic exact properties of superconductors – zero electrical resistance, the expelling of magnetic fields from superconductors known as the Meissner effect, the quantization of magnetic flux through a thick superconducting ring, and the Josephson formula for the frequency of the AC current at a junction between two superconductors with different voltages – follow from the assumption that electromagnetic gauge invariance is broken in this way, with no need to inquire into the mechanism by which the symmetry is broken.
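
Two of these exact consequences can each be written in a single line (standard results, quoted for reference): the magnetic flux through a thick superconducting ring is quantized, and the Josephson frequency is fixed by the voltage and fundamental constants alone,

$$\Phi = n\,\frac{h}{2e}, \qquad \nu = \frac{2eV}{h},$$

the characteristic factor of 2e reflecting the fact that it is products of pairs of electron fields, not single fields, that acquire expectation values.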

Condensed-matter physicists often trace these phenomena to the appearance of an “order parameter”, the non-vanishing mean value of the product of two electron fields, but I think this is misleading. There is nothing special about two electron fields; one might just as well take the order parameter as the product of three electron fields and the complex conjugate of another electron field. The important thing is the broken symmetry, and the unbroken subgroup.

It may then come as a surprise that spontaneous symmetry breaking is mentioned nowhere in the seminal paper of Bardeen, Cooper and Schrieffer. Their paper describes a mechanism by which electromagnetic gauge invariance is in fact broken, but they derived the properties of superconductors from their dynamical model, not from the mere fact of broken symmetry. I am not saying that Bardeen, Cooper, and Schrieffer did not know of this spontaneous symmetry breaking. Indeed, there was already a large literature on the apparent violation of gauge invariance in phenomenological theories of superconductivity, the fact that the electric current produced by an electromagnetic field in a superconductor depends on a quantity known as the vector potential, which is not gauge invariant. But their attention was focused on the details of the dynamics rather than the symmetry breaking.

This is not just a matter of style. As BCS themselves made clear, their dynamical model was based on an approximation, that a pair of electrons interact only when the magnitude of their momenta is very close to a certain value, known as the Fermi surface. This leaves a question: How can you understand the exact properties of superconductors, like exactly zero resistance and exact flux quantization, on the basis of an approximate dynamical theory? It is only the argument from exact symmetry principles that can fully explain the remarkable exact properties of superconductors.

Though spontaneous symmetry breaking was not emphasized in the BCS paper, the recognition of this phenomenon produced a revolution in elementary-particle physics. The reason is that (with certain qualifications, to which I will return), whenever a symmetry is spontaneously broken, there must exist excitations of the system with a frequency that vanishes in the limit of large wavelength. In elementary-particle physics, this means a particle of zero mass.
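
A minimal toy illustration (a standard textbook model, in conventions not spelled out in the talk): take a complex field φ with the potential

$$V(\phi) = \frac{\lambda}{4}\left(|\phi|^2 - v^2\right)^2, \qquad \phi = (v + h)\,e^{i\theta} \;\;\Rightarrow\;\; m_h^2 = 2\lambda v^2, \quad m_\theta = 0.$$

The radial mode h is massive, but the angular mode θ merely moves the system around the circle of equivalent vacua and so costs no energy in the long-wavelength limit; θ is the massless Goldstone excitation.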

The first clue to this general result was a remark in a 1960 paper by Yoichiro Nambu, that just such collective excitations in superconductors play a crucial role in reconciling the apparent failure of gauge invariance in a superconductor with the exact gauge invariance of the underlying theory governing matter and electromagnetism. Nambu speculated that these collective excitations are a necessary consequence of this exact gauge invariance.

A little later, Nambu put this idea to good use in particle physics. In nuclear beta decay an electron and neutrino (or their antiparticles) are created by currents of two different kinds flowing in the nucleus, known as vector and axial vector currents. It was known that the vector current was conserved, in the same sense as the ordinary electric current. Could the axial current also be conserved?

The conservation of a current is usually a symptom of some symmetry of the underlying theory, and holds whether or not the symmetry is spontaneously broken. For the ordinary electric current, this symmetry is electromagnetic gauge invariance. Likewise, the vector current in beta decay is conserved because of the isotopic spin symmetry of nuclear physics. One could easily imagine several different symmetries, of a sort known as chiral symmetries, that would entail a conserved axial vector current. However, it seemed that any such chiral symmetries would imply either that the nucleon mass is zero, which is certainly not true, or that there must exist a triplet of massless strongly interacting particles of zero spin and negative parity, which isn’t true either. These two possibilities simply correspond to the two possibilities that the symmetry, whatever it is, either is not, or is, spontaneously broken, not just in some material like a superconductor, but even in empty space.

Nambu proposed that there is indeed such a symmetry, and it is spontaneously broken in empty space, but the symmetry in addition to being spontaneously broken is not exact to begin with, so the particle of zero spin and negative parity required by the symmetry breaking is not massless, only much lighter than other strongly interacting particles. This light particle, he recognized, is nothing but the pion, the lightest and first discovered of all the mesons. In a subsequent paper with Giovanni Jona-Lasinio, Nambu presented an illustrative theory in which, with some drastic approximations, a suitable chiral symmetry was found to be spontaneously broken, and in consequence the light pion appeared as a bound state of a nucleon and an antinucleon.

So far, there was no proof that broken exact symmetries always entail exactly massless particles, just a number of examples of approximate calculations in specific theories. In 1961 Jeffrey Goldstone gave some more examples of this sort, and a hand-waving proof that this was a general result. Such massless particles are today known as Goldstone bosons, or Nambu–Goldstone bosons. Soon after, Goldstone, Abdus Salam and I made this into a rigorous and apparently quite general theorem.

Cosmological fluctuations

This theorem has applications in many branches of physics. One is cosmology. You may know that today observations of fluctuations in the cosmic microwave background are being used to set constraints on the nature of the exponential expansion, known as inflation, that is widely believed to have preceded the radiation-dominated Big Bang. But there is a problem here. In between the end of inflation and the time that the microwave background that we observe was emitted, there intervened a number of events that are not at all understood: the heating of the universe after inflation, the production of baryons, the decoupling of cold dark matter, and so on. So how is it possible to learn anything about inflation by studying radiation that was emitted long after inflation, when we don’t understand what happened in between? The reason that we can get away with this is that the cosmological fluctuations now being studied are of a type, known as adiabatic, that can be regarded as the Goldstone excitations required by a symmetry, related to general co-ordinate invariance, that is spontaneously broken by the space–time geometry. The physical wavelengths of these cosmological fluctuations were stretched out by inflation so much that they were very large during the epochs when things were happening that we don’t understand. They therefore had essentially zero frequency then, which means that their amplitude was not changing, so the value of the amplitude relatively close to the present tells us what it was during inflation.
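
In the usual language (a standard statement of this property, not spelled out in the talk), the adiabatic mode is described by a curvature perturbation ζ that is conserved outside the horizon,

$$\dot{\zeta} \simeq 0 \quad \text{for} \quad k \ll aH,$$

so once a fluctuation’s wavelength is stretched far beyond the horizon its amplitude freezes, whatever the unknown intervening physics, and its value imprinted on the microwave background still records conditions during inflation.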

But in particle physics, this theorem was at first seen as a disappointing result. There was a crazy idea going around, which I have to admit that at first I shared, that somehow the phenomenon of spontaneous symmetry breaking would explain why the symmetries being discovered in strong-interaction physics were not exact. Werner Heisenberg continued to believe this into the 1970s, when everyone else had learned better.

The prediction of new massless particles, which were ruled out experimentally, seemed in the early 1960s to close off this hope. But it was a false hope anyway. Except under special circumstances, a spontaneously broken symmetry does not look at all like an approximate unbroken symmetry; it manifests itself in the masslessness of spin-zero bosons, and in details of their interactions. Today we understand approximate symmetries such as isospin and chiral invariance as consequences of the fact that some quark masses, for some unknown reason, happen to be relatively small.

Though based on a false hope, this disappointment had an important consequence. Peter Higgs, Robert Brout and François Englert, and Gerald Guralnik, Dick Hagen and Tom Kibble were all led to look for, and then found, an exception to the theorem of Goldstone, Salam and me. The exception applies to theories in which the underlying physics is invariant under local symmetries, symmetries whose transformations, like electromagnetic gauge transformations, can vary from place to place in space and time. (This is in contrast with the chiral symmetry associated with the axial vector current of beta decay, which applies only when the symmetry transformations are the same throughout space–time.) For each local symmetry there must exist a vector field, like the electromagnetic field, whose quanta would be massless if the symmetry was not spontaneously broken. The quanta of each such field are particles with helicity (the component of angular momentum in the direction of motion) equal in natural units to +1 or –1. But if the symmetry is spontaneously broken, these two helicity states join up with the helicity-zero state of the Goldstone boson to form the three helicity states of a massive particle of spin one. Thus, as shown by Higgs, Brout and Englert, and Guralnik, Hagen and Kibble, when a local symmetry is spontaneously broken, neither the vector particles with which the symmetry is associated nor the Nambu–Goldstone particles produced by the symmetry breaking have zero mass.

This was actually argued earlier by Anderson, on the basis of the example provided by the BCS theory. But the BCS theory is non-relativistic, and the Lorentz invariance that is characteristic of special relativity had played a crucial role in the theorem of Goldstone, Salam and me, so Anderson’s argument was generally ignored by particle theorists. In fact, Anderson was right: the reason for the exception noted by Higgs et al. is that it is not possible to quantize a theory with a local symmetry in a way that preserves both manifest Lorentz invariance and the usual rules of quantum mechanics, including the requirement that probabilities be positive. In fact, there are two ways to quantize theories with local symmetries: one way that preserves positive probabilities but loses manifest Lorentz invariance, and another that preserves manifest Lorentz invariance but seems to lose positive probabilities, so in fact these theories actually do respect both Lorentz invariance and positive probabilities; they just don’t respect our theorem.

Effective field theories

The appearance of mass for the quanta of the vector bosons in a theory with local symmetry re-opened an old proposal of Chen Ning Yang and Robert Mills, that the strong interactions might be produced by the vector bosons associated with some sort of local symmetry, more complicated than the familiar electromagnetic gauge invariance. This possibility was specially emphasized by Brout and Englert. It took a few years for this idea to mature into a specific theory, which then turned out not to be a theory of strong interactions.

Perhaps the delay was because the earlier idea of Nambu, that the pion was the nearly massless boson associated with an approximate chiral symmetry that is not a local symmetry, was looking better and better. I was very much involved in this work, and would love to go into the details, but that would take me too far from BCS. I’ll just say that, from the effort to understand processes involving any number of low-energy pions beyond the lowest order of perturbation theory, we became comfortable with the use of effective field theories in particle physics. The mathematical techniques developed in this work in particle physics were then used by Joseph Polchinski and others to justify the approximations made by BCS in their work on superconductivity.

The story of the physical application of spontaneously broken local symmetries has often been told, by me and others, and I don’t want to take much time on it here, but I can’t leave it out altogether because I want to make a point about it that will take me back to the BCS theory. Briefly, in 1967 I went back to the idea of a theory of strong interactions based on a spontaneously broken local symmetry group, and right away, I ran into a problem: the subgroup consisting of ordinary isospin transformations is not spontaneously broken, so there would be a massless vector particle associated with these transformations with the spin and charges of the ρ meson. This, of course, was in gross disagreement with observation; the ρ meson is neither massless nor particularly light.

Then it occurred to me that I was working on the wrong problem. What I should have been working on were the weak nuclear interactions, like beta decay. There was just one natural choice for an appropriate local symmetry, and when I looked back at the literature I found that the symmetry group I had decided on was one that had already been proposed in 1961 by Sheldon Glashow, though not in the context of an exact spontaneously broken local symmetry. (I found later that the same group had also been considered by Salam and John Ward.) Even though it was now exact, the symmetry when spontaneously broken would yield massive vector particles, the charged W particles that had been the subject of theoretical speculation for decades, and a neutral particle, which I called the Z particle, to mediate a “neutral current” weak interaction, which had not yet been observed. The same symmetry breaking also gives mass to the electron and other leptons, and in a simple extension of the theory, to the quarks. This symmetry group contained electromagnetic gauge invariance, and since this subgroup is clearly not spontaneously broken (except in superconductors), the theory requires a massless vector particle, but it is not the ρ meson, it is the photon, the quantum of light. This theory, which became known as the electroweak theory, was also proposed independently in 1968 by Salam.
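
In the now-standard notation (given here for orientation; the talk itself quotes no formulas), the symmetry-breaking scale v ≈ 246 GeV fixes the vector-boson masses:

$$m_W = \tfrac{1}{2}\,g\,v, \qquad m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\;v = \frac{m_W}{\cos\theta_W}, \qquad m_\gamma = 0,$$

where g and g' are the SU(2) and U(1) couplings and θW is the weak mixing angle; the unbroken electromagnetic subgroup is what keeps the photon massless.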

The mathematical consistency of the theory, which Salam and I had suggested but not proved, was shown in 1971 by Gerard ‘t Hooft; neutral current weak interactions were found in 1973; and the W and Z particles were discovered at CERN a decade later. Their detailed properties are just those expected according to the electroweak theory.

There was (and still is) one outstanding issue: just how is the local electroweak symmetry broken? In the BCS theory, the spontaneous breakdown of electromagnetic gauge invariance arises because of attractive forces between electrons near the Fermi surface. These forces don’t have to be strong; the symmetry is broken however weak these forces may be. But this feature occurs only because of the existence of a Fermi surface, so in this respect the BCS theory is a misleading guide for particle physics. In the absence of a Fermi surface, dynamical spontaneous symmetry breakdown requires the action of strong forces. There are no forces acting on the known quarks and leptons that are anywhere strong enough to produce the observed breakdown of the local electroweak symmetry dynamically, so Salam and I did not assume a dynamical symmetry breakdown; instead we introduced elementary scalar fields into the theory, whose vacuum expectation values in the classical approximation would break the symmetry.

This has an important consequence. The only elementary scalar quanta in the theory that are eliminated by spontaneous symmetry breaking are those that become the helicity-zero states of the W and Z vector particles. The other elementary scalars appear as physical particles, now generically known as Higgs bosons. It is the Higgs boson predicted by the electroweak theory of Salam and me that will be the primary target of the new LHC accelerator, to be completed at CERN sometime in 2008.

But there is another possibility, suggested independently in the late 1970s by Leonard Susskind and me. The electroweak symmetry might be broken dynamically after all, as in the BCS theory. For this to be possible, it is necessary to introduce new extra-strong forces, known as technicolour forces, that act on new particles, other than the known quarks and leptons. With these assumptions, it is easy to get the right masses for the W and Z particles and large masses for all the new particles, but there are serious difficulties in giving masses to the ordinary quarks and leptons. Still, it is possible that experiments at the LHC will not find Higgs bosons, but instead will find a great variety of heavy new particles associated with technicolour. Either way, the LHC is likely to settle the question of how the electroweak symmetry is broken.

It would have been nice if we could have settled this question by calculation alone, without the need for the LHC, in the way that Bardeen, Cooper and Schrieffer were able to find how electromagnetic gauge invariance is broken in a superconductor by applying the known principles of electromagnetism. But that is just the price we in particle physics have to pay for working in a field whose underlying principles are not yet known.

• This article is based on the talk given by Steven Weinberg at BCS@50, held on 10–13 October 2007 at the University of Illinois at Urbana–Champaign to celebrate the 50th anniversary of the BCS paper. For more about the conference see www.conferences.uiuc.edu/bcs50/.

Electronics for LHC-era experiments and beyond

The Topical Workshop on Electronics for Particle Physics (TWEPP ’07) recently brought together more than 160 participants from the international high-energy physics community, specialized technical institutes and industry. Held in Prague on 3–7 September 2007, the workshop was organized by Charles University, the Czech Technical University, the Institute of Physics and the Nuclear Physics Institute of the Czech Academy of Sciences. It represented both a continuation and a significant broadening of the scope of the series of annual Workshops on Electronics for LHC Experiments initiated in 1994.

This series of workshops began within the framework of the R&D programme supervised initially by CERN’s Detector Research and Development Committee and later by the LHC Committee. The goal was to promote collaboration and dissemination of relevant expertise within the LHC community, harness specialized knowledge from industry and technical institutes and encourage common approaches and the adoption of standards. The proceedings of the previous 12 workshops show that the programme met these aims. Overall progress has often been spectacular, from the initial R&D phase to the installation and commissioning of the large-scale and complex high-technology electronics systems for LHC experiments. Despite the successful resolution of the many initial R&D challenges, several practical electrical engineering aspects have recently proved to cause some of the biggest headaches in assembling the full LHC detector systems.

With the LHC experiments now well into their commissioning phase, the meeting in Prague was a timely occasion to review lessons learned from more than a decade of design, production and installation of detector electronics. It was also a time to look forward to the challenges of developments in electronics for potential experimental facilities beyond the LHC, such as the Super-LHC (SLHC), the International Linear Collider (ILC) and the Compact Linear Collider study, as well as neutrino and fixed-target experiments. The workshop featured 89 submitted presentations, nine invited talks, topical sessions on supply and distribution of power in detectors, working groups on microelectronics and optoelectronics, and an optional tutorial on robust ASIC designs for hostile environments. While the majority of contributions (58%) described electronics for LHC experiments, 9% of the papers addressed an SLHC upgrade issue and 33% concerned the ILC or other experiments. Some 16% of participants were from non-European institutes.


Some lessons learnt

Approximately 40% of the workshop contributions were on electronics systems, installation and commissioning. This is no surprise given the advanced state of the LHC experiments. Speakers reported on significant progress in integrating the sub-detectors in the LHC experiments and in commissioning tests with cosmic rays. In general, the performance of the front-end and back-end electronics, and of the associated software and firmware for controls, monitoring and readout, agrees well with expectations. This major achievement is largely a result of the tremendous effort that the community has made to deliver complex and functional electronics systems to the experiments. However, installation and verification of the complicated services for the front-end electronics (power, cooling, cables etc.) often turned out to be much more difficult than anticipated. One particular point of concern relates to the supply and distribution of power to the experiments. In the current LHC detectors, typically only around 30–40% of the power supplied actually reaches the front-end circuits, the remainder being lost in long power cables and through conversion inefficiencies in the power supplies.

A more efficient power-distribution system would have reduced the amount of material required in the form of power cabling and cooling infrastructure to remove the heat; this in turn would have allowed improved tracking detectors. The development of such power supply and distribution systems will be critical for the successful construction of future detectors. In a possible SLHC luminosity upgrade, for example, a 10-fold increase in luminosity will require detectors with higher granularity and hence an increased number of electronic channels. The use of advanced front-end ASIC technology holds the promise of reduced power dissipation per channel, and therefore should help to contain any increase in the overall power dissipation of the front-end electronics systems. Nevertheless, these advanced IC technologies operate at lower voltages than those employed in the LHC detectors today, so delivering the same power requires a higher current; since ohmic losses grow with the square of the current, the fraction of power dissipated in the power cabling of SLHC detectors is at risk of increasing.
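To see why, consider a rough, back-of-the-envelope estimate (the figures below are purely illustrative and not taken from the workshop): for a fixed power drawn by the front-end chips, a lower supply voltage means a higher current, and the ohmic loss in a cable of fixed resistance grows with the square of that current. A minimal Python sketch:

def cable_loss_fraction(p_frontend_w, v_supply, r_cable_ohm):
    # Fraction of the total delivered power dissipated in the cable.
    i = p_frontend_w / v_supply        # current drawn by the front-end (A)
    p_cable = i ** 2 * r_cable_ohm     # ohmic loss in the cable (W)
    return p_cable / (p_frontend_w + p_cable)

R_CABLE = 0.05   # ohm, assumed round-trip cable resistance
P_FE = 10.0      # W, assumed power needed by one group of front-end chips

for v in (2.5, 1.5, 1.2):              # representative CMOS supply voltages
    frac = cable_loss_fraction(P_FE, v, R_CABLE)
    print(f"V = {v} V: {frac:.1%} of the delivered power is lost in the cable")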

To review the present situation and discuss future orientations, the workshop devoted a day to topical sessions on power management and distribution in large detector systems, with presentations and discussions about several new approaches. At the ILC for instance, the time structure with bunch trains of around 1 ms interspersed with 200 ms of idle time offers the possibility of placing the electronics in quiescent mode during the idle periods, which could lead to a 99% reduction in the average power consumed by the front-end electronics. This power cycling technique cannot be used at the SLHC, but local DC–DC conversion and serial powering are strong alternative options. The first of these alternative approaches delivers power to the front-end modules at high voltage (say, 24 V) and then uses a local DC–DC converter to step down to the required ASIC supply voltage (1.5 V for 130 nm CMOS). In the serial-powering approach the floating modules are powered in series and fed with a constant current. Each module is equipped with a voltage regulator and a current shunt in order to maintain the required drop in supply voltage, regardless of load variations or possible module failure.
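The arithmetic behind these schemes is straightforward; the short sketch below (illustrative figures only, ignoring converter inefficiencies and regulator overheads) shows the duty factor implied by the ILC time structure and the reduction in cable current obtained by distributing power at 24 V with local conversion down to 1.5 V.

# ILC-style power cycling: ~1 ms of activity per ~200 ms of idle time.
t_active, t_idle = 1.0, 200.0               # ms
duty = t_active / (t_active + t_idle)
print(f"duty factor ~ {duty:.2%}, i.e. average front-end power cut by ~{1 - duty:.1%}")

# Local DC-DC conversion: the same module power carried at 24 V instead of 1.5 V.
v_bus, v_asic, p_module = 24.0, 1.5, 10.0   # V, V, W (module power is an assumed figure)
i_direct = p_module / v_asic                # cable current if fed directly at 1.5 V
i_bus = p_module / v_bus                    # cable current on the 24 V bus
ratio = i_direct / i_bus
print(f"cable current reduced by a factor of {ratio:.0f}, ohmic cable loss by {ratio ** 2:.0f}")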

The topical sessions concluded with general agreement on the need to adopt a coordinated approach to the supply, management and distribution of power for large experiments in order to avoid a posteriori systems engineering. A working group will be established to assess power-related issues, including lessons learnt from LHC detectors; power management developments required for future upgrades and experiments; and methodologies for the quality control and qualification of power systems.

Front-end to back-end

The second largest session of the workshop focused on ASIC developments. In view of the considerable challenge presented by electronics for the future SLHC or ILC detectors, clear signs of vigorous development activity are excellent news. The ASIC session covered a rich set of applications, including front-end circuits for pixel and micro-strip detectors for tracking, front-end electronics for calorimetry at the LHC, SLHC and ILC, and generic functions, such as single-event upset-tolerant programmable logic and optical transfer of data, clock and trigger signals at multi-gigabit rates. ASIC projects presented at the workshop employed a range of standard CMOS technologies (with minimum feature sizes of 350 nm, 250 nm, 180 nm and 130 nm), as well as other technologies chosen to meet the specific requirements of different detectors. The latter included silicon-germanium processes to handle signals with a wide dynamic range, high-voltage processes for DC–DC converter developments, and silicon-on-insulator technology for the development of monolithic integrated pixel detectors.

A large fraction of the contributions on ASICs were related to the ILC detectors, where a low material budget within the detector and low-power front-end electronics are particularly important. Developments addressing these requirements include monolithic pixel systems, ASICs to read out CCD arrays and ASICs to read out silicon microstrips in advanced 130 nm CMOS technology with built-in support for power cycling.

The ASICs being developed for particle detector readout are now becoming real “systems on a chip”, and their increasing complexity requires ever more expertise from larger and larger development teams, as well as an approach that takes system aspects into account from an early stage of development. The appropriate choice of technology will depend strongly on the specific development timescale of the different projects, as well as the global cost of accessing such technologies, including qualification and the design support environment. The use of a common technology base would allow sharing of building blocks and a reduction in the overall effort needed for radiation hardness qualification.


A Microelectronics Users’ Group meeting directly followed the ASIC session to spread information about progress in making deep sub-micrometre technologies available to the particle-physics community. CERN has negotiated access to 130 nm and 90 nm CMOS technologies following a similar model to that used for the 250 nm technology employed in many of the developments for LHC experiments. A design kit and a commercial library facilitating digital and mixed-signal ASIC developments in 130 nm CMOS are already available for the SLHC, ILC and other future projects.

The transmission of signals between the front-end ASICs and the readout, trigger, timing and control crates in the counting rooms of the LHC experiments has in nearly all cases been implemented with radiation-resistant high-bandwidth optical links. The production, assembly, integration and commissioning of these optical links involved large-scale quality-assurance programmes. Contributors to the workshop presented various quality-control tools for integration of optical link systems, commissioning and in-field fault diagnosis. Despite initial fears about their fragility, the quality of the systems installed so far has proved to be very high, with the fraction of unrecoverable faulty connections in the per mille range. Recently, efforts have begun to investigate the possibility of using similar optical systems at the SLHC. Although the rapid evolution of technology is making optical links with sufficient bandwidth available, work on the selection and radiation hardness qualification of optical fibres, lasers and PIN photodiodes is just starting. Results were presented at the workshop on radiation tests of optical fibres and vertical-cavity surface-emitting lasers (VCSELs) operating at 850 nm wavelength. A working group met to coordinate work on future optical systems with the aim of promoting common development, testing and qualification paths.

In parallel to the highly customized front-end electronics, impressive progress is also being made in commissioning the trigger and data-acquisition interface electronics for LHC experiments. The back-end electronics in the counting rooms typically employ large, high-density boards housing optical transceivers and several field-programmable gate arrays (FPGAs). Manufacturing problems with the high-density circuit boards have been largely overcome through close co-operation with the manufacturers and ongoing attention to detail. The use of FPGAs provides complex data-processing functionality in a reduced board area, and their reconfigurability is ideal for the flexible implementation and evolution of trigger algorithms. A downside of this flexibility is the potential proliferation of firmware versions and variants across a large number of board designs and different types of FPGA. The maintenance of the firmware and the software will present a considerable support challenge over the lifetime of the experiments.

The TWEPP ’07 workshop confirmed that most electronics systems for LHC experiments are ready and functioning according to specifications. In addition, it took a further step towards extending the original goals of the earlier Workshops on Electronics for LHC Experiments to the wider community of particle physicists engaged in developing future experimental facilities. It provided an excellent forum to exchange novel ideas, technical know-how and practical experience between different sectors of the international particle-physics community. In a context where electronics is an essential enabler for future experiments, such a forum will certainly contribute to improving the quality and reliability of the systems built. It will also lead to the formation of new collaborations and the preparation of common projects.

Hot plasma fills the Orion nebula

Observations with the XMM-Newton satellite have revealed soft X-ray emission from an extended region in the Orion nebula. The most massive stars in the heart of the nebula are probably at the origin of this million-degree plasma flowing through it.

The Orion nebula (Messier 42) is more than a thousand light-years away, but is visible to the naked eye. It is the most spectacular star-forming region in the northern sky. The nebula hosts the Trapezium group of four recently formed, very massive stars – seen by eye as a single star, θ¹ Orionis – which illuminate and ionize the surrounding gas.

A team of astronomers led by Manuel Güdel from the Paul Scherrer Institute in Switzerland has discovered that a hot plasma pervades the nebula. The extended plasma emission, which reaches a temperature of about 2 million degrees, was observed with the relatively wide-field camera of the European X-ray Multi-Mirror satellite (XMM-Newton).

In the absence of a supernova bubble in this very young nebula, which is about 3 million years old, the only source of energy available to heat the gas is the fast wind from the Trapezium stars. The brightest star in the group is about 40 times more massive than the Sun and generates a wind of plasma with a speed up to 1650 km/s. The violent collision between this wind and the surrounding dense gas heats the plasma to millions of degrees, but only about a ten-thousandth of the wind’s kinetic energy is needed to account for the X-ray luminosity of the hot plasma.

Further calculations by Güdel and colleagues show that the hot X-ray plasma is approximately in pressure equilibrium with the ambient ionized gas: although the latter is much cooler, it compensates by being much denser. This equilibrium could explain the presence of the hot gas in a cavity in the nebula: the hot gas would be channelled by the cooler, denser structures and slowly flow into the cavity.
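The underlying argument is just the ideal-gas relation P = nkT: at equal pressure, the density ratio is the inverse of the temperature ratio. A minimal sketch, taking the roughly 2 million degrees reported for the hot plasma and an assumed, typical value of about 10,000 K for the photoionized nebular gas:

T_hot = 2.0e6    # K, temperature of the X-ray plasma quoted above
T_cool = 1.0e4   # K, assumed typical temperature of the ambient photoionized gas
# Equal pressures (P = n*k*T) imply n_cool / n_hot = T_hot / T_cool.
print(f"The cooler gas must be ~{T_hot / T_cool:.0f} times denser to balance the pressure.")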

This is not yet the end of the journey, however. Güdel and colleagues further speculate that the hot gas could continue its flow out of the cavity into the nearby Eridanus superbubble, like a river flowing out of a lake into the sea. This giant bubble is 400 light-years wide and extends over 20 degrees of the sky. The wind-shocked gas from the Trapezium stars would thus slowly replenish the Eridanus superbubble, which was formed by supernova explosions from previous generations of massive stars. The team plans further observations to test this scenario, which could also explain how the observed radioactive aluminium-26 could have migrated from the Orion nebula into this superbubble (CERN Courier January/February 2006 p10).

If the proposed scenario is correct, it means that the gas in our galaxy is not only enriched by heavy elements – such as carbon, oxygen, nitrogen or iron – from sudden supernova explosions. It could also be gently enriched over millions of years by the continuous stellar wind of massive stars that leaks out of star-forming regions into the interstellar medium.

ASACUSA moves towards new antihydrogen experiments

Antihydrogen experiments under way at CERN’s Antiproton Decelerator have so far aimed at making high-precision measurements of the frequency of optical transitions, such as that between antihydrogen’s 1S ground state and its first excited, 2S state, near 2466 THz. Comparing this with the same frequency for ordinary hydrogen constitutes a highly sensitive test of CPT symmetry, which involves simultaneous inversions of charge (C), parity (P) and time (T) (see “CPT ’07 goes in quest of Lorentz violation”).

Recently, the Japanese–European ASACUSA group made the first steps towards producing a low-velocity antihydrogen beam, which may be used to measure the hyperfine transition frequency between the two spin substates of antihydrogen’s ground state. Its value for ordinary hydrogen is near 1420 MHz.

Before this can be done, antiprotons must be confined and cooled in an evacuated container in which magnetic and/or electric fields produce restoring forces that stop the antiprotons drifting to the container walls, where they would annihilate. To do this, the MUSASHI group of the ASACUSA collaboration has introduced a novel variant of the familiar Helmholtz coils. The MUSASHI coils differ from the usual configuration by having antiparallel rather than parallel excitation currents. This produces a magnetic quadrupole field, symmetric about the coil axis, rather than the nearly uniform field of the standard configuration. If a suitable electrostatic multipole field is added to this so-called “magnetic cusp” field, all of the restoring forces needed to confine both positive and negative charges are present within the container.

This “cusp trap” can thus also hold positrons, with which the antiprotons recombine to create the antihydrogen, as well as electrons. The latter can be used to cool the antiprotons to the extremely low temperature at which recombination occurs. In the recent tests, some 3 million antiprotons were stored in the trap and cooled with electrons.

A well-known obstacle to CPT tests with antihydrogen is that both the hyperfine and the 1S–2S frequency measurements must be performed on ground-state atoms, while it appears that positron–antiproton recombination produces them in very highly excited states. One great advantage of the cusp trap is that, if these neutral atoms are cold enough, its quadrupole field pulls on their large magnetic moment, causing them to seek the field minimum at the trap centre. They remain confined there until they reach the ground state. However, since their magnetic moment falls as they de-excite, the pull weakens. This means that in the ground state, only antihydrogen atoms in one of the two possible spin states are pulled to the centre, while those in the other state are expelled along the trap axis, emerging as a spin-polarized, ground-state antihydrogen beam.

This is ideal for the classical type of slow atomic beam experiment in which a microwave cavity induces spin flips when tuned to the correct hyperfine frequency (see figure). The resonant frequency can then be detected using a sextupole magnet which focuses flipped atoms onto a detector but defocuses unflipped ones. Comparison with the well measured hydrogen frequency then gives a stringent test of CPT symmetry.

Although much of this remains to be done, the recent successes are so encouraging that further steps along the road to a slow antihydrogen beam are now planned.

Camera captures image of two-proton decay

In work that harks back to the early days of nuclear physics, an international team of researchers at Michigan State University’s National Superconducting Cyclotron Laboratory (NSCL) has used a novel detector incorporating a CCD camera to record optically the tracks of charged particles emitted in the two-proton decay of iron-45 (45Fe). The technique has allowed the first measurement of correlations between the two protons, demonstrating that the process is indeed a three-body decay. Besides shedding light on a novel form of radioactive decay, the technique could lead to additional discoveries about short-lived rare isotopes, which may hold the key to understanding processes inside neutron stars and determining the limits of nuclear existence.


Although it is more than 100 years since Henri Becquerel opened the door to nuclear physics with his discovery of radioactivity, there are still open questions that continue to nag experimentalists. One such example is the mechanism underlying the two-proton emission of neutron-deficient nuclei, first observed in the 1980s.

Now Krzysztof Miernik and colleagues from Poland, Russia and the US have taken several steps towards an answer, by peering closely at the radioactive decay of a rare iron isotope at the edge of the known nuclear map (Miernik et al. 2007). The researchers set out to obtain a better understanding of two-proton emission from 45Fe, which has a nucleus of 26 protons and 19 neutrons; in comparison, the stable form of iron most abundant on Earth has 30 neutrons. One possibility was that the neutron-deficient 45Fe might occasionally release a diproton – an energetically correlated pair of protons. It was also possible that the two protons, whether emitted in quick succession or simultaneously, were unlinked.

The experiment’s key device was the novel imaging detector built by Marek Pfützner and colleagues from Warsaw University – the Optical Time Projection Chamber (OTPC). This consists of a front-end gas chamber that accepts and slows down rare isotopes in a beam from the NSCL Coupled Cyclotron Facility. Electrons from the ionized tracks drift in a uniform electric field to a double amplification structure, where UV emission occurs. A luminescent foil converts these photons to optical wavelengths for detection by a CCD camera. In this way, the camera records the projection of the particle tracks on the luminescent foil. A photomultiplier tube also detects the photons from the foil to provide information on the drift time of the electrons, and hence the third dimension, normal to the plane of the CCD.
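In essence the reconstruction is simple: the CCD supplies the (x, y) projection on the foil, and the drift time supplies the third coordinate, z = v_drift × t. A minimal sketch, using an assumed drift velocity rather than the experiment’s actual calibration:

DRIFT_VELOCITY = 1.0    # cm per microsecond, assumed electron drift velocity in the gas

def track_point(x_cm, y_cm, drift_time_us):
    # Combine a CCD pixel position (the 2D projection on the foil) with the
    # photomultiplier drift-time measurement to recover the full 3D coordinate.
    z_cm = DRIFT_VELOCITY * drift_time_us
    return (x_cm, y_cm, z_cm)

print(track_point(2.3, 0.7, 4.5))   # e.g. a hit at (2.3, 0.7) cm seen 4.5 microseconds after the trigger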

Analysis of these images ruled out the proposed diproton emission and indicated that the correlations between the emitted protons are best described by a three-body decay. A theory of this process has been described by Leonid Grigorenko, a physicist at JINR and a co-author of the paper.

The experiment itself recalls the early days of experimental nuclear physics in which visual information served as the raw data, with tracks recorded in photographic emulsion. Indeed, this was the process that lay behind Becquerel’s discovery of radioactivity. The new result may represent the first time in modern nuclear physics that fundamental information about radioactive decays has been captured in a camera image and in a digital format. Usually, nuclear physics experiments provide digitized data and numerical information of various types, but not images.

Lead ions knock at the LHC’s door

There was jubilation in the CERN Control Centre late in the afternoon on 12 November. Only a few hours before the annual winter shutdown of the accelerators, monitoring screens showed that a beam of lead ions dispatched from the SPS had reached the threshold of the LHC. For the first time the beam had been extracted close to the LHC along the TT60 transfer line. It marked another milestone towards the final target of circulating lead ions in the LHC to produce collisions.

Since the installation of the Low Energy Ion Ring (LEIR) in 2005, the team working on I-LHC, the project to deliver heavy ions to the LHC, has focused on the injector chain in order to supply ion beams to the LHC in optimal conditions. A year previously, ions that had been accumulated in LEIR and sent to the PS were ejected at the threshold of the SPS for the first time.

In 2007, ions were successfully injected into the SPS from the beginning of September. After many adjustments and studies, the beam had been accelerated with a view to its extraction into one of the two transfer lines linking the SPS and the LHC. But technical problems had arisen, including a vacuum leak detected in the PS at the beginning of November. By increasing ion losses, this leak had resulted in a reduction in the intensity of the ion beam, placing the success of the operation in jeopardy. However, at approximately 5.00 p.m. on 12 November, thanks to an increase in beam intensity to 20 million ions per bunch, the long-awaited beam finally made its appearance on the screens in the control room.

The next stage will be to refine and optimize the beam to reach the nominal intensity for the LHC of 100 million ions per bunch – this will be five times higher than that recently obtained.

Belle Collaboration discovers new meson

The Belle collaboration at KEK has recently announced the discovery of an exotic new particle with non-zero electric charge. This particle, which the researchers have named the Z(4430), does not fit into the usual scheme of mesons.

The Z(4430) particle has appeared in the decay products of B mesons (containing a bottom quark), which are produced in large numbers at KEKB, the B-factory at the KEK laboratory in Japan. While investigating various decays of B mesons in a data sample containing nearly 660 million pairs of B and anti-B mesons, the Belle team observed 120 B mesons that decay into a Z(4430) and a K meson. The Z(4430) then instantly decays into a Ψ’ and a π meson. The team found that the new particle has negative charge and a mass about 4.7 times that of the proton.

Both Belle and the BaBar experiment at SLAC have found a number of peculiar new particles during the past few years, such as the X(3872), Y(4260), X(3940) and Y(3940). These all have masses in the region of 4–4.5 times the proton’s mass, and decay into J/Ψ or Ψ’ particles and π mesons. A simple explanation for these particles would be that they are examples of charmonium, the family of bound states of a charm quark (c) and antiquark (c̅) that includes the J/Ψ and Ψ’. However, their masses and decay properties do not match theoretical expectations for charmonium, so theorists have proposed other explanations.

One possibility is that some of the new particles are multiquark states containing a c and c̅ together with another, lighter quark and antiquark, for example an up quark (u) and antiquark (u̅) or a down quark (d) and antiquark (d̅). However, because the particles previously discovered are electrically neutral, it has not been possible experimentally to rule out entirely that they are unusual charmonium states.

The newly discovered Z(4430), on the other hand, has non-zero electric charge, a characteristic that clearly distinguishes it from charmonium. This raises the possibility that it could indeed be a multiquark state, containing a c and c̅ together with a light quark and a different light antiquark, for example cu̅c̅d.
