Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University (MSU) have measured the half-lives of 100Sn and 96Cd, two nuclei with equal numbers of protons and neutrons that are close to the proton drip line – the proton-rich limit of stability. The result for 100Sn narrows the error range of previous half-life measurements, while the half-life of 96Cd, measured here for the first time, sheds light on the isotope’s role in the rapid proton-capture (rp) process – a key part of heavy-element synthesis in the cosmos. The result for 96Cd also implies a new, as-yet-unknown origin for 96Ru in the solar system, where its abundance has long remained unexplained.
Daniel Bazin and colleagues at MSU used the same fast-beam fragmentation scheme to create both species. Using the facility’s coupled cyclotrons, the team generated a primary beam of 120 MeV/nucleon 112Sn and fragmented it on a beryllium target. The resulting radioactive beam was filtered through the A1900 Fragment Separator and the newly commissioned RF Fragment Selector. Finally, the filtered secondary beam was implanted in NSCL’s Beta Counting System, a series of silicon beta-particle detectors flanked by detectors of the laboratory’s segmented germanium array. To track the beta decay of implanted nuclei, Bazin’s team monitored decay events at the impact site and neighbouring pixels in the detector for 10 s after implantation.
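As a rough illustration of how a half-life follows from such time-correlated decay events, the sketch below simulates exponential decays inside a 10 s correlation window and extracts the half-life with a simple maximum-likelihood estimate. The sample size and the simple estimator are assumptions for illustration, not the collaboration’s actual likelihood analysis:

```python
import math
import random

random.seed(1)
T_HALF = 1.03                  # assumed true half-life in seconds (illustrative)
LAM = math.log(2) / T_HALF     # corresponding decay constant
WINDOW = 10.0                  # correlation window after implantation, as in the experiment

# Simulate decay times and keep those that fall inside the window
times = [random.expovariate(LAM) for _ in range(5000)]
observed = [t for t in times if t <= WINDOW]

# With a window of roughly ten half-lives, almost no decays are lost, so the
# simple estimator (inverse of the mean observed decay time) is nearly unbiased
lam_hat = len(observed) / sum(observed)
t_half_hat = math.log(2) / lam_hat
print(round(t_half_hat, 2))    # close to the assumed 1.03 s
```

In the real measurement the background and the finite window enter the likelihood explicitly; this sketch only shows why a 10 s window is comfortably long for half-lives of order 1 s.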
For 100Sn, the team observed a half-life of 0.55 +0.70/–0.31 s. This result is consistent with previous measurements made at GSI and yields a combined average of 0.86 +0.37/–0.20 s. The increased precision may bolster understanding of this isotope, which is one of the few “doubly magic” nuclei close to the proton drip line. Its protons and neutrons both form a closed-shell configuration, which affords extra stability to the nucleus.
The measured half-life of 96Cd, previously unknown, is 1.03 +0.24/–0.21 s. This is within the range of several theoretical predictions, but it is too short to make 96Cd a critical “waiting point” in the rp process. This process, along with slow neutron capture and rapid neutron capture, probably accounts for many of the universe’s heavy elements. It occurs in supernovae, X-ray bursts and perhaps other astrophysical environments where seed nuclei join with free protons to form nuclei of increasing atomic number. Build-up stalls at specific stages when the binding of another proton is energetically unfavourable. Nuclei accumulate at these so-called waiting points, generating a spike in the observed isotope abundance. Such a spike exists at 96Ru, the product of beta-decay from 96Cd, which suggests a waiting point at 96Cd.
With the result for 96Cd, the half-lives of all expected waiting points along the proton drip line, up to the rp-process’s predicted endpoint, are now known experimentally. However, the half-life that Bazin and collaborators have measured is approximately a tenth of the value required to account for the observed abundance of 96Ru. There must be a different explanation – perhaps an unexplored astrophysical process.
Evidence that dark energy behaves as a cosmological constant is gathering strength. With independent constraints on dark energy added from X-ray observations of clusters of galaxies by the Chandra spacecraft, it now seems unlikely that the universe will end in a Big Rip.
Dark energy manifests itself by accelerating the expansion of the universe, an effect that was first noticed in 1998 by studying distant supernovae of type Ia (CERN Courier September 2003 p23). Additional evidence that dark energy currently constitutes more than 70% of the matter-energy content of the universe came from the Wilkinson Microwave Anisotropy Probe (WMAP) spacecraft through the analysis of the cosmic microwave background fluctuations (CERN Courier April 2003 p11; May 2006 p12; May 2008 p8).
A first step in the characterization of dark energy is the determination of its equation of state, which describes the relation between pressure, P, and energy density, u, through the parameter w: P = wu. Unlike matter, dark energy is characterized by a negative value of w, leading to a negative pressure that acts as anti-gravity in the equations of general relativity. An acceleration of the expansion rate of the universe is possible if w is less than –1/3. If w is exactly –1, dark energy has the properties of the cosmological constant, with the acceleration continuing forever at a constant rate. For lower values of w, the acceleration would keep increasing until a dramatic Big Rip occurred, tearing everything apart, from galaxies down to atoms and nuclei (CERN Courier May 2003 p13).
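The role of w can be seen from the standard scaling of a fluid’s energy density with the cosmic scale factor a, namely rho proportional to a to the power –3(1+w) – a textbook Friedmann–Robertson–Walker result, sketched here with illustrative values:

```python
def density_ratio(w, a):
    """Energy density rho(a)/rho(a=1) for a fluid with constant w: a**(-3*(1+w))."""
    return a ** (-3.0 * (1.0 + w))

# Compare components when the universe doubles in size (a = 2)
for w in (0.0, -1.0, -1.2):    # matter, cosmological constant, phantom energy
    print(w, density_ratio(w, 2.0))

# Matter dilutes (ratio 0.125), a cosmological constant stays fixed (ratio 1),
# and a phantom fluid (w < -1) grows as the universe expands - the runaway
# behaviour that drives the Big Rip.
```

The example values of w are illustrative; the point is that only w below –1 gives a density that grows with the expansion.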
The study of clusters of galaxies provides an independent characterization of dark energy. The approach is based on measurements by the Chandra satellite of X-ray emission from hot gas in the clusters. The first results appeared four years ago (CERN Courier July/August 2004 p12). By increasing the sample to 37 distant clusters, with an average redshift of z = 0.55, and comparing them with a sample of 49 nearby ones, the team led by A Vikhlinin from the Harvard-Smithsonian Center for Astrophysics has now significantly improved the constraints on dark energy. They find an evolution of the cluster-mass function implying the existence of dark energy with an equation-of-state parameter of w = –1.14 ± 0.21, assuming it to be constant and the universe to be flat. In combination with the latest constraints from type-Ia supernovae, baryonic acoustic oscillations and WMAP, the team obtains w = –0.991 with statistical and systematic uncertainties each of only about 0.04. This combined analysis also puts an upper limit of 0.33 eV on the masses of light neutrinos.
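Statistical and systematic uncertainties of this kind are conventionally combined in quadrature when they can be treated as independent – a standard convention assumed here, not something stated in the analysis itself:

```python
import math

w = -0.991
stat, syst = 0.04, 0.04            # approximate uncertainties quoted above

# Total uncertainty in quadrature, assuming the two sources are independent
total = math.hypot(stat, syst)
print(f"w = {w} +/- {total:.3f}")  # about +/- 0.057
```

Even after this combination, the result sits within one standard deviation of w = –1, the cosmological-constant value.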
Only 10 years after identifying the effect of dark energy, astronomers can now combine different measurements that are consistent with each other if dark energy has the properties of the cosmological constant introduced by Einstein to counteract self-gravity in a static universe. There is still some freedom within the uncertainties for a more exotic dark energy, but the alternatives are clearly disfavoured by simplicity arguments (Occam’s razor). This means that dark energy is most likely vacuum energy, but the mystery of its small but non-zero energy density remains: why is it 120 orders of magnitude below the quantum expectation?
The European Particle Accelerator Conference (EPAC) has been a regular feature on the conference calendar since the first meeting in Rome 20 years ago. It brings together accelerator specialists working in areas ranging from fundamental physics through applications in material science, biology and medicine to commercialization in industry. EPAC ’08, the 11th in the series, took place in Genoa, the Italian port city with links to Christopher Columbus and Marco Polo – explorers who established some of the first routes from Europe to the Americas and the Far East. It was a fitting location, because this was the final conference in the series. EPAC will merge with the related American and Asian conferences into the International Particle Accelerator Conference (IPAC), which will rotate between America, Asia and Europe on a three-year cycle.
High energies and high intensities continue to be the major goals in the field, with a common thread of superconducting technology. The drive for high energies comes principally from high-energy particle physics, where CERN’s LHC is poised to lead the field. The machine was entering the final stages of hardware commissioning at the time of the conference. It is often pointed out that the LHC is its own prototype, breaking new ground in many ways in terms of scale and complexity – from the world’s largest vacuum system to the quench-protection systems and interlocks. Problems have been inevitable; the latest (Mobilizing for the LHC) will surely be overcome as previous ones were. One lesson that can be learned for future projects of this scale is not to forget the infrastructure while focusing on the more challenging aspects.
High aspirations
For now, the Tevatron at Fermilab and RHIC at Brookhaven continue to provide the high-energy frontier in hadron collisions. Thanks to a number of improvements, the Tevatron reached a peak luminosity of 3.15 × 1032 cm–2s–1 in 2008, exceeding the goals of the upgrade for Run II. Beam–beam interactions remain a major limitation, but work on compensation using two electron lenses installed in the ring is providing promising results in increasing the lifetime of the proton beam. Similarly RHIC has exceeded its design parameters, not only with gold–gold collisions but also in operating as a high-luminosity polarized-proton collider. Runs have reached a peak luminosity of about 35 × 1030 cm–2s–1 and a polarization of around 60% for a 100 GeV proton beam. To reach higher luminosities, electron cooling of the heavy-ion beams is being investigated, and a 20 MeV energy-recovery linac (ERL) is under construction for tests. This could also be used to study the new idea of coherent electron cooling in which density variations induced in the electron beam by the hadron beam are amplified by a free-electron laser (FEL) and fed back to correct the hadron beam – a variation on stochastic cooling. In addition, the ERL can test design ideas for the proposed electron–ion collider, e-RHIC.
For some years the international particle-physics community has been pursuing options for a future linear electron–positron collider to complement the high-energy hadron collisions at the LHC. In 2004 the International Technology Recommendation Panel decided that a future International Linear Collider operating in the 0.5 TeV centre-of-mass region should be based on 1.3 GHz superconducting RF technology. The Reference Design Report released in 2007 specified two 11 km linacs with an accelerating gradient of 31.5 MV/m and a peak luminosity of 2 × 1034 cm–2s–1.
A major goal of the first stage of the technical-design phase is to demonstrate by mid-2010 the feasibility of an accelerating gradient of 35 MV/m in 9-cell cavities, thereby allowing an operating margin of 10%. Global R&D on cavity shapes, fabrication and surface preparation is under way to meet this challenge. While tests have achieved field gradients as high as 41 MV/m, average results worldwide are still 15–20% short of reliably meeting the design requirement. A further challenge is the beam-delivery system, and particularly the issue of chromaticity, which is being investigated by a large international collaboration, with dedicated test facilities existing and under construction at SLAC and at KEK. The positron source is another challenging area, because the design luminosity demands a source that delivers 1000 times as many particles per pulse as previous sources, together with the added complication of a possible upgrade to high (60%) polarization. Tests at SLAC have shown that a superconducting-undulator solution is feasible and development work is now in progress in the UK.
In parallel, an increasing international effort in the Compact Linear Collider (CLIC) study is investigating an innovative two-beam accelerator concept, which aims at a centre-of-mass energy of 3 TeV and a luminosity of 2 × 1034 cm–2s–1. Tests are under way at CERN in the CLIC Test Facility with a view to the production of a conceptual design report by 2010. Accelerating structures have already been tested to the required field gradient of 100 MV/m.
More futuristic is the attempt to harness electric fields in plasmas to accelerate particles (e.g. in the plasma-wakefield approach). Studies at the Accelerator Test Facility at Brookhaven are using multiple electron bunches only 5.5 ps long to generate the wakefields in a potentially more efficient manner. New results demonstrate a maximum wakefield of around 22 MV/m near the tail of the bunched beam. At Lawrence Berkeley National Laboratory, tests with laser wakefields reached 1 GeV in a distance of only 3 m in 2006. The goal now is to develop this principle into a laser accelerator to drive a short-wavelength FEL.
Current plans for the LHC foresee a series of luminosity upgrades by 2017, with a new injection system that could easily be upgraded to provide a multimegawatt beam. Another future option, which is still in the embryonic stage, would be to build an electron ring in the LHC tunnel to allow electron–proton collisions. The aim is for a luminosity of 1.1 × 1033 cm–2s–1, some 10 times as high as at DESY’s HERA collider, which ceased operations in 2007. Another option would be for a 140 GeV linac to supply the electrons. In either case, ERL technology should prove useful.
Intense and brilliant
In other areas of particle physics, the emphasis is on high intensities, because physics beyond the Standard Model may well lie in processes that are rare and/or difficult to observe. By the time it ceased operations in April 2008, the PEP-II B factory at SLAC had surpassed its design luminosity by a factor of four, reaching 1.2 × 1034 cm–2s–1, thanks to successive improvements, including continuous injection. The machine also provided valuable lessons to be learned for the design of future machines, such as in feedback. Elsewhere, commissioning is in full swing on the upgrade of the Beijing Electron–Positron Collider, BEPCII, to increase the collision rate by a factor of 100. At the Budker Institute of Nuclear Physics in Novosibirsk the aim is to achieve higher luminosity in the VEPP-2000 collider by means of innovative ideas, using existing injectors in a restricted area.
KEKB, the asymmetric e+e– collider for the study of B mesons at KEK, has operated with crab cavities for more than a year – the first installation to do so, following 13 years of R&D. The scheme, which uses a single cavity per ring, gives a somewhat lower luminosity than operation without crab crossing, but also at a lower beam current; further study is needed to understand the effects that limit the luminosity at higher currents. A similar R&D programme at the DAΦNE e+e– collider at Frascati is investigating the use of a “crab waist” scheme, with sextupoles that shrink the beam by a factor of three in the vertical plane as well as giving a narrower crossing. Put into operation in early 2008, the scheme reached a peak luminosity 50% higher than the previous record, but at lower currents and two-thirds of the power.
Elsewhere, high-intensity facilities offer the possibility of a range of research. At the Japan Proton Accelerator Complex (J-PARC), the main ring is a 50 GeV synchrotron designed to deliver a 0.75 MW beam. This will serve kaon-rare decay studies, for example, and provide the first-ever megawatt-class fast-extracted beam to create neutrinos for the Tokai to Kamioka experiment. It will be fed by a 3 GeV 1 MW rapid-cycling synchrotron (RCS), which will also serve muon and neutron production targets in the Materials and Life Science Facility. The RCS already operates at design energy in conditions corresponding to a beam power of about 130 kW.
The Facility for Antiproton and Ion Research (FAIR), which is planned for Darmstadt, is set to be the largest science project funded in Europe in the next decade. It will provide beams of antiprotons and heavy ions at intensities 100 times as great as present, with additional capabilities for fragment-separation and plasma physics using ion bunches and petawatt lasers. Technical challenges surround the high-current beams, control of the dynamic-vacuum pressure and the design of rapid-cycling superconducting magnets. With two linacs and eight rings (including four storage rings), the FAIR complex will involve interesting beam manipulations with implications for RF synchronization. Experiments are expected in 2013.
The techniques of in-flight separation and isotope separation online (ISOL) are currently bringing research using radioactive beams to a turning point. Superconducting linacs are the key to providing beams of heavy ions, while cyclotrons and synchrotrons are needed to reach high energies at in-flight facilities such as FAIR and the Radioisotope Beam Factory at the RIKEN institute in Japan. Studies in Europe, meanwhile, are leading towards the European ISOL facility, EURISOL, with a planned beam power of 5 MW. In the US both the National Superconducting Cyclotron Laboratory at Michigan State University and the Argonne National Laboratory have proposals for facilities that include the possibility of reaccelerating rare isotopes. TRIUMF has a slightly different proposal to use a megawatt-class electron linac for the photofission of a uranium target to produce neutron-rich rare isotopes (such MW linacs could also supply medical isotopes).
Neutron sources also demand high incident-beam intensities for neutron production. The Spallation Neutron Source at Oak Ridge National Laboratory has a design beam power of 1.4 MW, which is to be achieved after three years of operation. Since start-up in October 2006 the facility has reached 0.52 MW, making it the world’s most powerful spallation neutron source. There have been problems, however, with the low-energy beam transport and the superconducting 1 GeV proton linac. An intense neutron source is also a key element of the International Fusion Materials Irradiation Facility, which will study the responses of materials to the high flux of neutrons (1018 n/m2s) that would be emitted in a future fusion reactor. Design ideas for the necessary low-energy deuteron accelerator are already being pursued in the context of an international agreement between Euratom and Japan.
For facilities using FELs to provide short-wavelength photon beams for a variety of science, the key word is “brilliance”. While synchrotron sources can provide high energies (and hence short wavelengths), FELs offer the route to increased brilliance. The Free-electron Laser in Hamburg (FLASH) at DESY has been operating successfully for more than a year, delivering pulses at 6.5 nm. For the future, there is ongoing accelerator R&D for an X-ray FEL (XFEL) based on a 17.5 GeV electron linac, compared with the 1 GeV linac in FLASH. At SLAC, meanwhile, the Linac Coherent Light Source is under construction to operate at X-ray wavelengths (0.15–1.5 nm). This will use a new 135 MeV injector and the downstream third of the famous 3 km linac to reach a final energy of 14 GeV. It is on course to provide X-rays for the first experiments in summer 2009.
Beyond the laboratory
The main use of particle accelerators is outside research, particularly in X-ray machines in medicine. Now an increasing number of machines are being designed and built explicitly for hadron therapy using protons and carbon ions. In April 2007, PSI started the treatment of patients with a proton-scanning system based on a commercially supplied 250 MeV superconducting cyclotron. The Heidelberg Ion Therapy Facility is set to be Europe’s first dedicated proton- and carbon-therapy facility, with a synchrotron to provide the particles. Commissioning for three fixed beams was finished in April 2008, and commissioning of the gantry for scanning has begun. In Italy, the Centro Nazionale di Adroterapia Oncologica in Pavia is under construction, and commissioning the sources and the low-energy beam transfer is also under way.
As hadron therapy moves out of research laboratories and into hospitals, there is a growing market that industry can serve by providing not only the basic accelerators but also other items and services related to the treatment of patients. In this area, as well as in others, collaborations between industry and researchers provide an important means of transferring from a project to a product.
Indeed, working closely with industry is an increasingly important part of the accelerator scene. The LHC has led the way for “mega projects”, with industrialization of the magnet construction. One problem, however, is that the duration of such big projects can be longer than the lifetime of some companies. Nevertheless, co-operation with industry is essential from an early stage. The European XFEL project has already begun to work with industry in the production of the 100 cryomodules – each with eight 9-cell superconducting RF cavities and superconducting magnets. At the same time, research institutes that require linacs can benefit from being able to acquire customized systems direct from industry, particularly from the company ACCEL Instruments. There are many other specialized areas where partnerships between research and industry have proved mutually beneficial.
The accelerator scene continues to evolve and grow, not only in underlying technology, but also in its relations with other areas of science and industry. It is therefore fitting that the conference scene should reflect this evolution. The last EPAC provided a worthy ending to a successful series and the community now looks forward to the first IPAC, in 2010 in Kyoto, following the last in the Particle Accelerator Conference series, in Vancouver in May this year.
• The organizing committee for EPAC ’08, chaired by Caterina Biscari of INFN, and the scientific programme committee, chaired by Oliver Bruning of CERN, were formed by the board elected by the European Physical Society Accelerator Group, plus representatives of APAC and PAC, while the local organizing committee, chaired by Paolo Pierini of INFN, included a dozen members from INFN (Genoa, LASA, LNF), SINCROTRONE Trieste and CERN.
EPAC ’08: a hard act to follow
EPAC ’08 was a resounding success: the fruit of 20 years of experience in organizing these events with exciting scientific programmes reflecting the state of the art. More than 1000 delegates from 38 countries converged on EPAC ’08, the last in the biennial series as the conference joins Asia and North America to propose an International Particle Accelerator Conference on a three-year cycle.
As is customary for EPAC, the four-and-a-half-day programme consisted of plenary opening, closing and prize sessions, with no more than two oral sessions running in parallel at other times, followed each afternoon by plenary poster sessions, allowing delegates to derive maximum benefit from the scientific programme. The meeting was augmented by the regular industrial exhibition, which for EPAC ’08 was the largest ever, with 90 companies participating. Many conference delegates also attended the traditional session for industry.
The 2008 prizes for the European Physical Society Accelerator Group were awarded during the prize ceremony, with the Rolf Wideröe prize going to Alex Chao of SLAC; the Gersh Budker prize to Norbert Holtkamp now at ITER and formerly of the Oak Ridge National Laboratory (ORNL) and Spallation Neutron Source (SNS); and the Frank Sacherer prize to Viatcheslav Danilov, also of ORNL/SNS.
With the continuing financial support of European laboratories, 66 students from around the world attended the conference. They had an extra opportunity to present their work in a special student-poster session. A prize was awarded in two categories: for a young physicist or engineer for quality of work and promise for the future; and for best posters. The prize for the best student poster went to Rocco Paparella of INFN–LASA.
Thanks to the team effort of the Joint Accelerator Conferences website (JACoW) collaboration editors, the proceedings were published “prepress” on the last day of the conference, and in record time at the JACoW open-access site three weeks later.
The structure of nuclei is determined by the nature of the strong force: strong repulsion at short distances and strong attraction at moderate distances. This force, which binds the nucleons together while also keeping the structure from collapsing, makes the nucleus a fairly dilute system. This has allowed for calculations that treat the nucleus as a collection of hard objects in an average or mean field to describe many of the properties of nuclear matter. Of course, this simple picture of the nucleus is inaccurate – the nucleons should really be thought of as waves that can strongly overlap for short periods of time. Indeed, recent experiments have shown that about 20% of all nucleons in carbon are in such a state at any given time.
These states of strongly overlapping wave functions are commonly referred to as nucleon–nucleon short-range correlations (SRCs). Calculations indicate that, for short periods, these correlations lead to local densities in the nucleus that are several times as high as the average nuclear density of 0.17 nucleons/fm3. Such densities are comparable to those predicted in the core of neutron stars, making SRCs relevant to nuclear systems whether extremely small (such as helium nuclei) or extremely large (such as neutron stars).
The distinctive experimental features of two-nucleon SRCs are the large back-to-back relative momentum and small centre-of-mass momentum of the correlated pair, where large and small are relative to the typical Fermi momentum of about 250 MeV/c. This is shown in figure 1, where a virtual photon is absorbed by one nucleon in a correlated pair, causing both nucleons to be emitted from the nucleus. The large strength of the nucleon–nucleon interaction at short distances means that the relative motion in the pair should be the same in all nuclei, although the absolute probability of a correlation grows with density – with the probability of a nucleon being part of a pair reaching 25% for iron and heavier nuclei.
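This kinematic signature can be made concrete with a toy momentum balance for a pair; the momentum values below are invented for illustration:

```python
# Nucleon momenta in MeV/c (hypothetical values for a correlated pair)
p1 = (300.0, 0.0, 50.0)
p2 = (-280.0, 10.0, -40.0)

cm  = tuple(a + b for a, b in zip(p1, p2))        # centre-of-mass momentum of the pair
rel = tuple((a - b) / 2 for a, b in zip(p1, p2))  # relative momentum of the pair

def mag(v):
    return sum(x * x for x in v) ** 0.5

K_FERMI = 250.0  # typical Fermi momentum in MeV/c
# SRC signature: relative momentum above k_F, centre-of-mass momentum below it
print(mag(rel) > K_FERMI, mag(cm) < K_FERMI)      # True True
```

Two nearly back-to-back momenta largely cancel in the sum but add in the difference, which is exactly the pattern the experiments select.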
Scaling effects
Isolating the signal of the SRC initial state has been difficult at low and medium energies because other processes (such as final-state interactions and meson-exchange currents) mimic this effect. Nevertheless, there has recently been progress using modern accelerators with high luminosity and high momentum transfer – as well as with kinematics where competing mechanisms are suppressed. For electron scattering, this corresponds to luminosities of 1037 cm–2s–1; a four-momentum transfer, Q2, greater than 1.4 (GeV/c)2; and focusing on kinematics where Bjorken-x, defined as Q2/2mν, is greater than 1, where ν is the energy transfer – the beam energy minus the energy of the scattered electron. For elastic scattering from a free proton, Bjorken-x is exactly 1. At least two nucleons must be involved to have x > 1; x > 2 requires a system with at least three nucleons.
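These kinematic cuts are easy to check numerically; the helper below simply restates the definition of Bjorken-x from the text, with illustrative input values:

```python
M_P = 0.938  # proton mass in GeV/c^2

def bjorken_x(q2, nu, m=M_P):
    """Bjorken-x = Q^2 / (2 m nu), with Q^2 in (GeV/c)^2 and nu in GeV."""
    return q2 / (2.0 * m * nu)

# Elastic scattering off a free proton satisfies Q^2 = 2 m nu, so x = 1 exactly
nu = 1.0
print(bjorken_x(2.0 * M_P * nu, nu))   # 1.0

# The same Q^2 range with a smaller energy transfer pushes x above 1, a region
# that requires at least two nucleons to share the momentum
print(round(bjorken_x(1.8, 0.6), 2))   # 1.6
```

The second case shows why x > 1 selects multi-nucleon physics: a single free nucleon cannot absorb that combination of momentum and energy.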
One of the new results has come from inclusive data at high momentum transfer, Q2 > 1.4 (GeV/c)2, and x > 1 from the Hall B CEBAF Large Acceptance Spectrometer at the US Department of Energy’s Jefferson Laboratory (K S Egiyan et al. 2006). The measurement was made to check the predicted universality of SRCs by measuring the ratio of the inclusive cross-sections off heavy nuclei to those of light nuclei at sufficiently large Q2 and x, where the scattering off slow nucleons in the nucleus does not contribute. The signal predicted to indicate dominance of such correlations is the scaling of the ratios – a weak dependence on x and Q2 for 1 < x < 2 – which is clearly observed in the data. Continuing this line of reasoning would suggest that a second scaling region arising from three-nucleon correlations should be observed for x > 2. Indeed, a second scaling region does seem to be present, although the statistics are limited (figure 2). These results reflect the dominance of few-nucleon correlations in the high-momentum component of the nucleus.
While the inclusive data clearly suggest strong local correlations, it has taken exclusive data to confirm that the inclusive scaling arises from SRCs, as well as to measure directly what fraction of nucleon-pair types are involved. In exclusive experiments, using a high-momentum probe to remove one fast nucleon from the nucleus effectively breaks a pair and releases the second nucleon of the correlation. Brookhaven National Laboratory and Jefferson Lab have conducted such tests on the carbon nucleus with a hadronic and electromagnetic probe, respectively. They measured momentum transfers of greater than 1.5 GeV/c and a missing momentum greater than the Fermi momentum of 250 MeV/c.
Both experiments have shown that recoiling nucleons with a momentum above the Fermi level in the nucleus are part of a correlated pair, and both observed the same strength of proton–neutron correlations (Piasetzky et al. 2006; Subedi et al. 2008). This confirms that the process is accessing a universal property of nuclei, unrelated to the probe. The Jefferson Lab experiment also observed proton–proton pairs and used matched-acceptance detectors to determine the ratio of neutron–proton to proton–proton pairs to be nearly 20, as figure 3 shows. Calculations explain the magnitude of this neutron–proton to proton–proton ratio as arising from the short-range tensor part – the nucleon–nucleon spin-dependent part – of the nucleon–nucleon force (Sargsian et al. 2005; Schiavilla et al. 2007; Alvioli et al. 2008).
Isolating the signatures of SRCs opens new avenues for the exploration of nucleon–nucleon interactions at short distances, particularly in addressing the long-standing question of how close nucleons must approach before their constituent quarks reveal themselves and nucleon degrees of freedom can no longer be used to describe the system.
These studies can also influence calculations of extremely massive objects. Without SRCs, a large object, such as a neutron star, could be well approximated as a Fermi gas predominantly of neutrons with a small fraction of protons acting as a separate Fermi gas. With SRCs the protons and neutrons interact, strongly enhancing the high-momentum component of the proton momentum distribution and leading to changes in the physical properties of the system (figure 4).
In the future, inclusive short-range-correlation experiments will improve the statistics of the x > 2 data to show definitively whether or not there is indeed a second scaling. These will use targets such as 40Ca and 48Ca to measure the dependence on the initial-state proton–neutron ratio. The future exclusive experiments will focus on 4He (a nucleus where both full and mean-field calculations can come together) and push the limits of the recoil momentum to extend our understanding of the repulsive part of the nucleon–nucleon potential.
The Pauli exclusion principle (PEP), and more generally the spin-statistics connection, plays a pivotal role in our understanding of countless physical and chemical phenomena, ranging from the periodic table of the elements to the dynamics of white dwarfs and neutron stars. It has defied all attempts to produce a simple and intuitive proof, despite being spectacularly confirmed by the number and accuracy of its predictions, because its foundation lies deep in the structure of quantum theory. Wolfgang Pauli remarked in his Nobel Prize lecture (13 December 1946): “Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had the feeling, and I still have it today, that this is a deficiency. The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems, to me, unavoidable.” Pauli’s conclusion remains basically true today.
The PEP was the major theme of SpinStat 2008, the workshop “Theoretical and experimental aspects of the spin-statistics connection and related symmetries”, held in Trieste on 21–25 October at the Stazione Marittima conference centre. Some 60 theoretical and experimental physicists attended, as well as a number of philosophers of science. The aim was to survey recent work that challenges traditional views and to put forward possible new experimental tests, including new theoretical frameworks.
A single framework for discussion
On the theoretical side, several researchers are currently exploring theories that may allow a tiny violation of PEP, such as quon theory, the existence of hidden dimensions, geometric quantization and a new spin-statistics connection in the framework of quantum gravity. Others have done several experiments over the past few years to search for possible small violations of the spin-statistics connection, for both fermions and photons. Thus scientists have recently obtained new limits for the validity of PEP for nuclei, nucleons and electrons, as well as for the validity of Bose–Einstein statistics for photons. These results were presented during the workshop and discussed for the first time in a single framework together with theoretical implications and future perspectives. The aim was to accomplish a “constructive interference” between theorists and experimentalists that could lead towards possible new ideas for nuclear and particle-physics tests of the PEP’s validity, including the interpretation of existing results.
The workshop benefited from the presence of researchers who have devoted a life’s work to the thorough examination of the structure of the spin-statistics connection in the context of quantum mechanics and field theory. In addition, young scientists put forward suggestions and experimental results that may pave the way to interesting future developments.
Oscar W Greenberg of the University of Maryland opened the workshop with a review talk on theoretical developments, with special emphasis on quon theory – which characterizes particles by a parameter q, where q spans the range from –1 to +1 and thus interpolates between fermion and boson – in an effort to develop more general statistics. Greenberg originated this concept and remains a major contributor to its theoretical development. Robert Hilborn of the University of Texas reviewed past experimental attempts to find a violation. Other theoretical speakers included distinguished scientists such as Stephen Adler, Michael Berry, Aiyalam P Balachandran, Sergio Doplicher, Giancarlo Ghirardi, Nikolaos Mavromatos and Allan Solomon.
The experimental reports included presentations on spectroscopic tests of Bose–Einstein statistics of photons by Dmitry Budker’s group at the University of California and the Lawrence Berkeley National Laboratory, and studies of spin-statistics effects in nuclear decays by Paul Kienle’s group at the Stefan Mayer Institute for Subatomic Physics in Vienna. Other talks included results from the Borexino neutrino experiment and the DAMA/LIBRA dark-matter detector in the Gran Sasso laboratory, the KLOE experiment at Frascati, the NEMO-2 detector in the Fréjus underground laboratory and the dedicated VIP (Violation of the Pauli exclusion principle) experiment in the Gran Sasso laboratory. Each talk was followed by lively discussion of the interpretation of the results. Michela Massimi of University College London closed the workshop with an excellent talk on historical and philosophical issues.
Another highlight was the event held for the general public: a reading of selected parts of the book by George Gamow and Russell Stannard, The New World of Mr Tompkins, where the professor depicted in Gamow’s book was played by a witty Michael Berry from the University of Bristol. This event was a success, especially among the young students who participated so enthusiastically.
Overall, the workshop showed that the field is full of new and interesting ideas. Although nobody expects gross violations of the spin-statistics connection, there could be subtle effects that may point to new physics in a context quite different from that of the LHC.
The workshop was sponsored jointly by the INFN and the University of Trieste. It received generous contributions from the Consorzio per la Fisica, the Hadron Physics initiative (Sixth Framework Programme of the EU) and Regione Friuli–Venezia Giulia.
Terrestrial Neutron-Induced Soft Errors in Advanced Memory Devices, by Takashi Nakamura, Mamoru Baba, Eishi Ibe, Yasuo Yahagi and Hideaki Kameyama, World Scientific. Hardback ISBN 9789812778819 £56 ($98).
Terrestrial neutron-induced soft errors in semiconductor memory devices are currently a major reliability concern. Understanding their mechanism and quantifying soft-error rates are crucial for the design and quality assurance of such devices. This book covers relevant up-to-date topics in terrestrial neutron-induced soft errors and aims to provide succinct knowledge of them through several valuable and unique features. It should be of interest to students and researchers in radiation effects, nuclear and accelerator physics and cosmic-ray physics, as well as to engineers involved in reliability and the design and quality assurance of semiconductor devices and IT systems.
An Introduction to the Physics of Particle Accelerators (2nd edition), by Mario Conte and William W MacKay, World Scientific. Hardback ISBN 9789812779601 £86 ($46). Paperback ISBN 9789812779618 £29 ($55).
This text offers a concise and coherent introduction to the physics of particle accelerators, with attention paid to the design of an accelerator for use as an experimental tool. This edition adds new chapters on the spin dynamics of polarized beams, as well as on instrumentation and measurements, with a discussion of frequency spectra and Schottky signals. The additional material also covers quadratic Lie groups and integration, highlighting new techniques using Cayley transforms, detailed estimation of collider luminosities and new problems. Graduates and advanced undergraduates in physics will find this book a useful resource.
A Primer on the Physics of the Cosmic Microwave Background, by Massimo Giovannini, World Scientific. Hardback ISBN 9789812791429 £48 ($89).
In the past 15 years, various areas of high-energy physics, astrophysics and theoretical physics have converged on the study of cosmology. Today, therefore, any graduate student in these disciplines needs a reasonably self-contained introduction to the cosmic microwave background (CMB). This book presents the essential theoretical tools necessary to acquire a modern working knowledge of CMB physics. The style, falling somewhere between a monograph and a set of lecture notes, is pedagogical and the author uses the typical approach of theoretical physics to explain the main problems in detail, while also touching on the main assumptions and derivations of a fascinating subject.
The Physics of Warm Nuclei: With Analogies to Mesoscopic Systems, by Helmut Hofmann, OUP Series: Oxford Studies in Nuclear Physics, Volume 25. Hardback ISBN 9780198504016 £65 ($130).
This book offers a comprehensive survey of the basic elements of nuclear dynamics at low energies and discusses their similarities to mesoscopic systems. It addresses systems with finite excitations of their internal degrees of freedom, so that their collective motion exhibits features typical of transport processes in small and isolated systems. The importance of quantum aspects is examined with respect to both the microscopic damping mechanism and the nature of the transport equations, and a critical discussion of the use of thermal concepts is included. The book is largely self-contained and presents existing models, theories and theoretical tools, from both nuclear physics and other fields, that are relevant to an understanding of the observed physical phenomena.
Order, Disorder and Criticality: Advanced Problems of Phase Transition Theory, Volume 2, edited by Yurij Holovatch, World Scientific. Hardback ISBN 9789812707673 £46 ($85).
This is the second volume of review papers on advanced problems of phase transitions and critical phenomena, following the success of the first volume in 2004. Broadly, it aims to demonstrate that there is still a good deal of work to be done in phase-transition theory, both at the fundamental level and with respect to applications. Topics include critical behaviour as explained by the non-perturbative renormalization group; critical dynamics; a space–time approach to phase transitions; self-organized criticality; and exactly solvable models of phase transitions in strongly correlated systems. Like the first volume, this one is based on the review lectures given in Lviv (Ukraine) at the Ising Lectures – a traditional annual workshop on phase transitions and critical phenomena that brings together scientists working in the field with university students and others interested in the subject.