By Helge S Kragh, Oxford University Press. Hardback ISBN 9780199209163 £35 ($100).
This is a historical account of how natural philosophers and scientists have endeavoured to understand the universe at large, first in a mythical and later in a scientific context. Starting with the creation stories of ancient Egypt and Mesopotamia, the book covers all of the major events in theoretical and observational cosmology, from Aristotle’s cosmos through the Copernican revolution to the discovery of the accelerating universe in the late 1990s. It presents cosmology as a subject with both scientific and non-scientific dimensions, and tells the story of how it developed into a true science of the heavens. It also offers an integrated account, with emphasis on the modern Einsteinian and post-Einsteinian period. The book is suitable for students and professionals in astronomy, physics and the history of science.
By W N Cottingham and D A Greenwood, Cambridge University Press. Hardback ISBN 9780521852494 £30 ($65).
The new edition of this introductory graduate textbook provides a concise but accessible introduction to the Standard Model. It has been updated to account for the successes of the theory of strong interactions and the observations on matter–antimatter asymmetry, and includes a coherent presentation of the phenomena of neutrinos with mass and the theory that describes them. The book clearly develops theoretical concepts, from the electromagnetic and weak interactions of leptons and quarks to the strong interactions of quarks. The mathematical treatments are suitable for graduates in physics, and the text and appendices develop more sophisticated mathematical ideas.
It is said that Galileo always kept 10% of his purse in reserve for his lens grinder, so that he could look forward to peering further into the heavens through the next telescope in line. He also collaborated actively with the telescope builder and shared his joy at discovering each new star. Instrument builders and their users have often been the same community, and sharing the enterprise was the norm. Such was also the case with early sub-atomic physicists and chemists in the late 19th and early 20th centuries, with Ernest Rutherford, John Cockcroft, Ernest Walton, James Chadwick, and so on. The “chasm” began to open in the mid- to late 20th century with the emergence of the culture of large-scale experimental science, carried out by teams of specialists in accelerators, detectors, data processing, theory, etc. Today, however, the demands of the next frontier in particle physics are sufficiently daunting that the gap is being forced to close again. Witness the emergence of self-organized communities around the world that are working together to move the field forward.
Engines of Discovery is written by two well-respected practitioners of accelerator science, recognized for their contributions to the field. Andrew Sessler and Edmund Wilson both began their careers at a time when the “chasm” had started to take root, and they continue in their trade today, when it is beginning to heal again – a golden era in a history of development dominated by the use of large particle accelerators.
Sessler received a classical and advanced education at Harvard College and Columbia University in the middle of the last century, at a time when the US was a scientific hotbed attracting great pre- and post-war scientists from around the world. Exposed to the greatest minds of the times, Sessler contributed to the very beginnings of the field through his work at MURA, the Wisconsin-based Midwestern Universities Research Association. This group pioneered the concept of the fixed-field alternating-gradient (FFAG) synchrotron – a concept that has been resurrected, with a prototype for electrons now being built at the Cockcroft Institute and Daresbury Laboratory in the UK. Sessler went on to lead the great laboratory in California created by one of the early pioneers, Ernest Lawrence. Through his eminent stature in the community of scientists and humanitarians, he brings a substantive and unique perspective that is hard to match. He is known for his many contributions to theoretical accelerator physics, including collective beam instabilities, non-linear dynamics, muon colliders and free-electron lasers. Joining him is Edmund Wilson, a veteran of the world-renowned accelerator-based CERN laboratory. Educated at Oxford, and having had the rare experience of tutelage under and work alongside John Adams, the architect of many of CERN’s accelerators, Wilson brings decades of research experience in operating accelerators, together with formidable skills in pedagogy, composition and the art of story-telling, to complete this fascinating saga.
It is indeed a masterly tale of the emergence and growth of a field, told from a unique personal perspective by two working scientists in the field. Understandably, the book is rich, dense and selective as it starts with the heritage of atomic, nuclear and particle physics and continues through to the end of the 20th century. The field eventually diversified into other basic sciences, such as those driven by synchrotron radiation sources, free-electron lasers, laser–plasma interaction, high-field physics, etc, which have spawned much of the innovation and creativity of recent years. The field has also become immensely global during the past few decades.
The book may appear relatively lean in its coverage of the diverse sciences and characters of these recently emerging fields. This unevenness reflects the historical footprint of the authors themselves and is only to be expected in a book of this scope. I would be remiss if I did not point out the brilliance, genius and creativity of the generation of bright emerging international scientists and technologists from Europe, the Americas, Asia and Africa who are transforming the field today. The authors only hint at this, through colleagues such as Katsunobu Oide and Chan Joshi, but today one will find many others at institutions around the world.
This is not a book to look at through the lens of a precise historian – or with the obsession of a perfectionist – demanding a complete lexicon, chronology, historical credit, etc. It is above all a book of inspiration. Nevertheless, the book does achieve a natural sense of historical progress, made all the more exciting by the anecdotal and factual bits and pieces assembled about some of the players – more in their order of appearance on the scene than in any other sense. For every player who is mentioned and adds flair to the book, there are many who are not, including the authors themselves, whose contributions have been substantive.
Above all, this book uplifts one’s spirit; one reads it with zest, admiration and awe. The power of sheer dedication, brilliance, creativity, humility and humanity of the whole enterprise expressed in the pages of the book is sure to inspire and motivate generations to come.
Speaking as an individual in the wake of a personal transition from the US to the UK, and taking stock of shifting priorities in the field, I must thank the authors for providing a contextual basis for carrying our work forward with the noble mission of the ultimate quest for the ways of nature and life.
This year is the centenary of the birth of Hideki Yukawa who, in 1935, proposed the existence of the particle now known as the π meson, or pion. To celebrate, the International Nuclear Physics Conference took place in Japan on 3–8 June – 30 years after the previous one in 1977 – with an opening ceremony honoured by the presence of the Emperor and Empress. In his speech, the Emperor noted that Yukawa – the first Nobel Laureate in Japanese science – is an icon not only for young Japanese scientists but for everybody in Japan. Yukawa’s particle led to a gold mine in physics linked to the pion’s production, decay and intrinsic structure. This gold mine continues to be explored today. The present frontier is the quark–gluon coloured world (QGCW), whose properties could open new horizons in understanding nature’s logic.
Production
In his 1935 paper, Yukawa proposed searching for a particle with a mass between the light electron and the heavy nucleon (proton or neutron). He deduced this intermediate value – the origin of the name “mesotron”, later abbreviated to meson – from the range of the nuclear forces (Yukawa 1935). The search for cosmic-ray particles with masses between those of the electron and the nucleon became a hot topic during the 1930s thanks to this work.
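Yukawa’s range argument can be made concrete with one line of arithmetic: a force mediated by a quantum of mass m has range r ≈ ħ/(mc), so the observed range of the nuclear force fixes mc² ≈ ħc/r. Here is a minimal sketch in Python, assuming an illustrative textbook range of 1.4 fm (a value not quoted in this article):

```python
# Yukawa's range argument: a force mediated by a particle of mass m has
# range r ~ hbar/(m*c), so m*c^2 ~ hbar*c / r.
HBAR_C_MEV_FM = 197.327   # hbar*c in MeV*fm
ELECTRON_MASS_MEV = 0.511
NUCLEON_MASS_MEV = 938.3

range_fm = 1.4                        # assumed range of the nuclear force (fm)
mass_mev = HBAR_C_MEV_FM / range_fm   # predicted meson mass-energy

print(f"predicted meson mass: {mass_mev:.0f} MeV")                       # ~141 MeV
print(f"in electron masses:   {mass_mev / ELECTRON_MASS_MEV:.0f} m_e")   # ~276 m_e
# The result lands between the electron (0.511 MeV) and the nucleon
# (~938 MeV) - the "intermediate" mass that gave the mesotron its name.
```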
On 30 March 1937, Seth Neddermeyer and Carl Anderson reported the first experimental evidence, in cosmic radiation, for the existence of positively and negatively charged particles heavier and with more penetrating power than electrons, but much less massive than protons (Neddermeyer and Anderson 1937). Then, at the meeting of the American Physical Society on 29 April, J C Street and E C Stevenson presented the results of an experiment that gave, for the first time, a mass value of 130 electron masses (mₑ) with 25% uncertainty (Street and Stevenson 1937). Four months later, on 28 August, Y Nishina, M Takeuchi and T Ichimiya submitted to Physical Review their experimental evidence for a positively charged particle with mass between 180 mₑ and 260 mₑ (Nishina et al. 1937).
The following year, on 16 June, Neddermeyer and Anderson reported the observation of a positively charged particle with a mass of about 240 mₑ (Neddermeyer and Anderson 1938), and on 31 January 1939, Nishina and colleagues presented the discovery of a negative particle with mass (170 ± 9) mₑ (Nishina et al. 1939). In this paper, the authors improved the mass measurement of their previous (positively charged) particle and concluded that the result obtained, m = (180 ± 20) mₑ, was in good agreement with the value for the negative particle. Yukawa’s meson theory of the strong nuclear forces thus appeared to have excellent experimental confirmation. His idea sparked an enormous interest in the properties of cosmic rays in this “intermediate” range; it was here that a gold mine was to be found.
In Italy, a group of young physicists – Marcello Conversi, Ettore Pancini and Oreste Piccioni – decided to study how the negative mesotrons were captured by nuclear matter. Using a strong magnetic field to separate clearly the negative from the positive rays, they discovered that the negative mesotrons were not strongly coupled to nuclear matter (Conversi et al. 1947). Enrico Fermi, Edward Teller and Victor Weisskopf pointed out that the decay time of these negative particles in matter was 12 powers of 10 longer than the time needed for Yukawa’s particle to be captured by a nucleus via the nuclear forces (Fermi et al. 1947). They introduced the symbol μ, for mesotron, to specify the nature of the negative cosmic-ray particle being investigated.
In addition to Conversi, Pancini, Piccioni and Fermi, another Italian, Giuseppe Occhialini, was instrumental in understanding the gold mine that Yukawa had opened. This further step required the technology of photographic emulsion, in which Occhialini was the world expert. With Cesare Lattes, Hugh Muirhead and Cecil Powell, Occhialini discovered that the negative μ mesons were the decay products of another mesotron, the “primary” one – the origin of the symbol π (Lattes et al. 1947). This is in fact the particle produced by the nuclear forces, as Yukawa had proposed, and its discovery finally provided the nuclear “glue”. However this was not the end of the gold mine.
The decay-chain
The discovery by Lattes, Muirhead, Occhialini and Powell allowed the observation of the complete decay-chain π → μ → e, and this became the basis for understanding the real nature of the cosmic-ray particles observed during 1937–1939, which Conversi, Pancini and Piccioni had proved to have no nuclear coupling with matter. The gold mine not only contained the π meson but also the μ meson. This opened a completely unexpected new field, the world of particles known as leptons, the first member being the electron. The second member, the muon (μ), is no longer called a meson, but is now correctly called a lepton. The muon has the same electromagnetic properties as the electron, but a mass about 200 times greater and no nuclear charge. This incredible property prompted Isidor Rabi to make the famous statement “Who ordered that?”, as reported by T D Lee (Barnabei et al. 1998).
In the 1960s, it became clear that there would not be so many muons were it not for the π meson. Indeed, if another meson like the π existed in the “heavy” mass region, a third lepton – heavier than the muon – would not have been so easily produced in the decays of the heavy meson, because this meson would decay strongly into many π mesons. The remarkable π–μ case was unique. So the absence of a third lepton among the many final states produced in high-energy interactions at proton accelerators at CERN and other laboratories was not to be considered a fundamental absence, but a consequence of the fact that a third lepton could only be produced via electromagnetic processes, for example via time-like photons in p̄p or e⁺e⁻ annihilation. The uniqueness of the π–μ case therefore sparked the idea of searching for a third lepton in the appropriate production processes (Barnabei et al. 1998).
Once again, this was still not the end of the gold mine, as understanding the decay of Yukawa’s particle led to the field of weak forces. The discovery of the leptonic world opened the problem of the universal Fermi interactions, which became a central focus of the physics community in the late 1940s. Lee, M Rosenbluth and C N Yang proposed the existence of an intermediate boson, called W, as the quantum of the weak forces. This particle later proved to be the source of the breaking of parity (P) and charge-conjugation (C) symmetries in weak interactions.
In addition, George Rochester and Clifford Butler in Patrick Blackett’s laboratory in Manchester discovered another meson – later dubbed “strange” – in 1947, the same year as the π meson discovery. This meson, called θ, decayed into two pions. It took nearly 10 years to find out that the θ and another meson called τ, with equal mass and lifetime but decaying into three pions, were not two different mesons but two different decay modes of the same particle, the K meson. Lee and Yang solved the famous θ–τ puzzle in 1956, when they proved that no experimental evidence existed to establish the validity of P and C invariance in weak interactions; the experimental proof of their violation came immediately afterwards.
The violation of P and C generated the problem of PC conservation, and therefore that of time-reversal (T) invariance (through the PCT theorem). This invariance law was proposed by Lev Landau, while Lee, Reinhard Oehme and Yang remarked on the lack of experimental evidence for it. Proof that they were on the right experimental track came in 1964, when James Christenson, James Cronin, Val Fitch and René Turlay discovered that the meson called K⁰₂ also decayed into two Yukawa mesons. Rabi’s famous statement became “Who ordered all that?” – “all” being the rich contents of the Yukawa gold mine.
A final comment on the “decay seam” of the gold mine concerns the decay of the neutral Yukawa meson, π⁰ → γγ. This generated the ABJ anomaly, the celebrated chiral anomaly named after Stephen L Adler, John Bell and Roman Jackiw, which had remarkable consequences in the area of non-Abelian forces. One of these is the “anomaly-free condition”, an important ingredient in theoretical model building, which explains why the number of quark flavours among the fundamental fermions must equal the number of leptons. This allowed the theoretical prediction of the heaviest quark – the top quark, t – in addition to the b quark in the third family of elementary fermions.
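The anomaly-free condition can be illustrated with a standard textbook check (not spelled out in this article): within one complete family of fundamental fermions, the electric charges must sum to zero once each quark is counted three times for colour. A minimal sketch:

```python
# Anomaly-free condition per fermion family: the electric charges of all
# fermions must sum to zero, with each quark counted once per colour.
from fractions import Fraction as F

quark_charges = [F(2, 3), F(-1, 3)]   # up-type, down-type
lepton_charges = [F(0, 1), F(-1, 1)]  # neutrino, charged lepton
N_COLOURS = 3

total = N_COLOURS * sum(quark_charges) + sum(lepton_charges)
print(total)  # 0 -> the anomaly cancels only within a complete family,
              # which is why quarks and leptons must come in matched sets
```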
Intrinsic structure
The Yukawa particle is made of a pair of the lightest, nearly massless, elementary fermions: the up and down quarks. This allows us to understand why chirality-invariance – a global symmetry property – should exist in strong interactions. It is the spontaneous breaking of this global symmetry that generates the Nambu–Goldstone boson. The intrinsic structure of the Yukawa particle needs the existence of a non-Abelian fundamental force – the QCD force – acting between the constituents of the π meson (quarks and gluons) and originating in a gauge principle. Thanks to this principle, the QCD quantum is a vector and does not destroy chirality-invariance.
To understand the non-zero mass of the Yukawa meson, another feature of the non-Abelian force of QCD had to exist: instantons. Thanks to instantons, chirality-invariance can also be broken in a non-spontaneous way. If this were not the case, the π could not be as “heavy” as it is; it would have to be nearly massless. So, can a pseudoscalar meson exist with a mass as large as that of the nucleon? The answer is “yes”: its name is η’ and it represents the final point in the gold mine started with the π meson. Its mass is not intermediate, but is nearly the same as the nucleon mass.
The η’ is a pseudoscalar meson, like the π, and was originally called X⁰. Very few believed that it could be a pseudoscalar because its mass and width were too big and there was no sign of its 2γ decay mode. This missing decay mode initially prevented the X⁰ from being considered the ninth singlet member of the pseudoscalar SU(3) uds flavour multiplet of Murray Gell-Mann and Yuval Ne’eman. However, the eventual discovery of the 2γ decay mode strongly supported the pseudoscalar nature of the X⁰, and once this was established, its gluon content was theoretically predicted through QCD instantons.
If the η’ has an important gluon component, we should expect to see a typical QCD non-perturbative effect: leading production in gluon-induced jets. This is exactly the effect that has been observed in the production of the η’ mesons in gluon-induced jets, while it is not present in η production (figure 1).
The interesting point here is that the η’ appears to be the lowest pseudoscalar state with the most important contribution from the quanta of the QCD force. The η’ is thus the particle most directly linked with the original idea of Yukawa, who advocated the existence of a quantum of the nuclear force field; the η’ is the Yukawa particle of the QCD era. Seventy-two years after Yukawa’s original idea, we have found that his meson, the π, has given rise to a fantastic development in our thinking, the last step being the η’ meson.
The quark–gluon coloured world
There is still a further lesson from Yukawa’s gold mine: the impressive series of totally unexpected discoveries. Let me quote just three of them, starting with the experimental evidence for a cosmic-ray particle that was believed to be Yukawa’s meson, which turned out to be a lepton: the muon. Then, the decay-chain π → μ → e was found to break the symmetry laws of parity and charge conjugation. Third, the intrinsic structure of the Yukawa particle was found to be governed by a new fundamental force of nature, the strong force described by QCD.
This is perfectly consistent with the great steps in physics: all totally unexpected. Such totally unexpected events, which historians call Sarajevo-type effects, characterize “complexity”. A detailed analysis shows that the experimentally observable quantities that characterize complexity in a given field exist in physics; the Yukawa gold mine is proof. This means that complexity exists at the fundamental level, and that totally unexpected effects should show up in physics – effects that are impossible to predict on the basis of present knowledge. Where these effects are most likely to be, no one knows. All we are sure of is that new experimental facilities are needed, like those already under construction around the world.
With the advent of the LHC at CERN, it will be possible to study the properties of the quark–gluon coloured world (QGCW). This is totally different from our world made of QCD vacuum with colourless baryons and mesons because the QGCW contains all the states allowed by the SU(3)c colour group. To investigate this world, Yukawa would tell us to search for specific effects arising from the fact that the colourless condition is avoided. Since the colourless condition is not needed, the number of possible states in the QGCW is far greater than the number of colourless baryons and mesons that have been built so far in all our laboratories.
So, a first question is: what are the consequences for the properties of the QGCW? A second question concerns light quarks versus heavy quarks. Are the coloured quark masses in the QGCW the same as the values we derive from the fact that baryons and mesons need to be colourless? It could be that all six quark flavours are associated with nearly “massless” states, similar to those of the u and d quarks. In other words, the reason why the top quark appears to be so heavy (around 200 GeV) could be the result of some so far unknown condition related to the fact that the final state must be QCD-colourless. We know that confinement produces masses of the order of a giga-electron-volt; therefore, according to our present understanding, the QCD colourless condition cannot explain the heavy quark masses. However, since the origin of the quark masses is still not known, it cannot be excluded that in a QCD coloured world the six quarks are all nearly massless and that the colourless condition is “flavour” dependent. If this were the case, QCD would not be “flavour-blind”, and this would be the reason why the masses we measure are heavier than the effective coloured quark masses. All possible states generated by the “heavy” quarks would then be produced in the QGCW at a much lower temperature than is needed in our world of baryons and mesons, i.e. QCD colourless states. Here again, we should try to see if new effects could be detected owing to the existence, at relatively low temperatures in QGCW physics, of all flavours – including those that might exist in addition to the six so far detected.
A third question concerns effects on the thermodynamic properties of the QGCW. Will these properties follow “extensive” or “non-extensive” conditions? With the enormous number of QCD open-colour states allowed in the QGCW, many different phase transitions could take place and a vast variety of complex systems should show up. The properties of this “new world” should open unprecedented horizons in understanding the ways of nature’s logic.
A fourth, related problem would be to derive the equivalent of the Stefan–Boltzmann radiation law for the QGCW. In classical thermodynamics, the relation between the energy density at emission, U, and the temperature of the source, T, is U = sT⁴, where s is a constant. In the QGCW, the correspondence should be U → pT, the transverse momentum, and T → E, the average energy in the centre-of-momentum system. The production of “heavy” flavours in the QGCW could then be studied as a function of pT and E, with the expectation that pT = cE⁴, where c is a constant; any deviation would be extremely important. The study of the properties of the QGCW should produce the correct mathematical structure to describe it. The same formalism should allow us to go from the QGCW to the physics of baryons and mesons, and from there to a more restricted domain, nuclear physics, where all properties of the nuclei should finally find a complete description.
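To make the proposed correspondence concrete, here is a minimal sketch, with invented numbers purely for illustration, of how the expected pT = cE⁴ scaling could be tested by fitting the exponent on a log–log plot:

```python
# Sketch of the proposed Stefan-Boltzmann-like test in the QGCW: with the
# correspondence U -> pT and T -> E, the classical U = s*T^4 becomes
# pT = c*E^4, so the slope of log(pT) vs log(E) should be 4.
# The "data" below are fabricated just to illustrate the fit.
import numpy as np

rng = np.random.default_rng(1)
E = np.linspace(1.0, 5.0, 20)                    # average c.m. energy (a.u.)
pT = 0.1 * E**4 * (1 + 0.02 * rng.normal(size=E.size))

slope, intercept = np.polyfit(np.log(E), np.log(pT), 1)
print(f"fitted exponent: {slope:.2f}")           # ~4; a deviation would be
                                                 # the interesting physics
```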
With the advent of the LHC, the development of a new technology should make it possible to implement collisions between different particle states (p, n, π, K, μ, e, γ, ν) and the QGCW in order to study the properties of this new world. Figure 2 gives an example of how to study the QGCW using beams of known particles. A special set of detectors measures the properties of the outgoing particles. The QGCW is produced in a collision between heavy ions (²⁰⁸Pb⁸²⁺) at the maximum energy available, i.e. 1150 TeV, and a design luminosity of 10²⁷ cm⁻² s⁻¹. For this to be achieved, CERN needs to upgrade the ion injector chain comprising Linac3, the Low Energy Ion Ring (LEIR), the PS and the SPS. Once lead–lead collisions are available, the problem will be to synchronize the “proton” beam with the QGCW produced. This problem is being studied at present. The detector technology is also under intense R&D, since the synchronization needed is at a very high level of precision.
Totally unexpected effects should show up if nature follows complexity at the fundamental level. However, as with Yukawa’s gold mine, first opened 72 years ago, new discoveries will only be made if the experimental technology is at the forefront of our knowledge. Cloud chambers, photographic emulsions, high-power magnetic fields, and powerful particle accelerators and associated detectors were needed for all the unexpected discoveries linked to Yukawa’s particle. This means that we must be prepared with the most advanced technology for the discovery of totally unexpected events. This is Yukawa’s last lesson for us.
This article is based on a plenary lecture given at the opening session of the Symposium for the Centennial Celebration of Hideki Yukawa at the International Nuclear Physics Conference in Tokyo, 3–8 June 2007.
The Institut Laue-Langevin was founded on 19 January 1967 with the signing of an agreement between the governments of the French Republic and the Federal Republic of Germany. In recognition of this dual nationality it was named jointly after the two physicists Max von Laue of Germany and the Frenchman Paul Langevin. The aim was to create an intense source of neutrons devoted entirely to civil fundamental research. The facility was given the status of “service institute” and was the first of its kind in the world. It was to be the world’s leading facility in neutron science and technology, offering the scientific community a combination of unrivalled performance levels and unique scientific instrumentation in the form of a very large cold neutron source equipped with 10 neutron guides – each capable of providing three or four instruments with a very high-intensity neutron flux.
The construction of the institute and its high-flux reactor in Grenoble represented an overall investment of 335 million French francs (1967 prices, equivalent to about €370 million today) and was jointly undertaken by France and Germany. The reactor went critical in August 1971 and reached its full power of 57 MW in December that same year. Since then, the high-flux reactor has successfully maintained its position as the most powerful neutron source for scientific research in the world.
The UK joined the two founding countries in 1973, becoming the institute’s third full associate member. Over the following years, international co-operation was gradually extended, with a succession of “scientific membership” agreements with Spain (1987), Switzerland (1988), Austria (1990), Russia (1996), Italy (1997), the Czech Republic (1999), Sweden and Hungary (2005), and Belgium and Poland (2006).
The ILL is operated jointly by France, Germany and the UK, and has an annual budget of around €74 million. It currently employs around 450 staff, including 70 scientists, about 20 PhD students, more than 200 technicians, 60 reactor operations and safety specialists, and around 50 administrative staff; the breakdown by nationality is 65% French, 12% German and 12% British.
The ILL is a unique research tool for the international scientific community, giving scientists access to a constantly upgraded suite of high-performance instruments arranged around a powerful neutron source. More than 1500 European scientists come to the institute each year and conduct an average of 750 experiments. Once their experiment proposal has been accepted by the Scientific Council, scientists are invited to the institute for an average period of one week.
The fields of research are primarily focused on fundamental science and are extremely varied, including condensed matter physics, chemistry, biology, nuclear and particle physics and materials science. Thanks to the combination of an intense neutron flux and the availability of a wide range of energies (from 10⁻⁷ eV to 10⁵ eV), researchers can examine a whole range of subjects. The samples that are studied can weigh anything from a tenth of a milligram to several tonnes. Most of the experiments use neutrons as a probe to study various physical or biological systems, while others examine the properties of the neutrons themselves. It is here that there is the most overlap with the physics of elementary particles and where low-energy physics can help to tackle and solve problems usually associated with high-energy physics experiments, as the following selected highlights indicate.
Neutron basics
The neutron is unstable and decays into a proton together with an electron and an anti-neutrino. Its lifetime, τₙ, is one of the key quantities in primordial nucleosynthesis. It determines how many neutrons were available about three minutes after the Big Bang, when the universe had sufficiently cooled down for light nuclei to form from protons and neutrons. It therefore has a strong influence on the abundance of the primordial chemical elements (essentially ¹H, ⁴He, ²H, ³He and ⁷Li).
Twenty years ago, in an experiment at ILL, Walter Mampe and colleagues achieved a new level of precision in measuring τₙ by storing ultracold neutrons in a fluid-walled bottle and counting the number of neutrons remaining in the bottle after various storage times. The experiment used a hydrogen-free oil to coat the walls in order to minimize the loss of neutrons in reflections at the walls. The final result of τₙ = 887.6 ± 3 s was the first to reach a precision well below 1% (Mampe et al. 1989). This value has found a number of applications, in particular making it possible to derive the number, N, of light neutrino types from a comparison of the observed element abundances with the predicted ones. The argument depends on the relationship between N and the expansion rate of the universe during nucleosynthesis: the more light neutrinos that contribute to the energy density, the faster the expansion, leading to different element abundances. The result from ILL led in turn to a value of N = 2.6 ± 0.3, which made a fourth light-neutrino generation extremely unlikely (Schramm and Kawano 1989). The stringent direct test that N is indeed equal to 3 came soon after, with a precision measurement of the width of the Z boson at the LEP collider at CERN.
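For readers unfamiliar with the bottle technique, the underlying analysis can be sketched in a few lines: survivor counts at several storage times are fitted to N(t) = N₀ exp(−t/τₙ). The counts and times below are invented for illustration and are not data from the Mampe et al. experiment:

```python
# Bottle method for the neutron lifetime: store ultracold neutrons, count
# survivors after different storage times, fit N(t) = N0*exp(-t/tau).
import numpy as np

rng = np.random.default_rng(0)
TAU_TRUE = 887.6                  # s, used only to generate fake counts
N0 = 100_000

storage_times = np.array([100.0, 300.0, 600.0, 900.0, 1200.0])   # s
counts = rng.poisson(N0 * np.exp(-storage_times / TAU_TRUE))

# A straight-line fit to log(counts) vs t gives slope = -1/tau.
slope, intercept = np.polyfit(storage_times, np.log(counts), 1)
print(f"fitted lifetime: {-1.0 / slope:.1f} s")
# The real analysis must also extrapolate away wall losses, which is why
# the hydrogen-free oil coating of the bottle mattered so much.
```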
The neutron lifetime also feeds into the Standard Model of particle physics through the ratio of the weak vector and axial-vector coupling constants of the nucleon. Together with the Fermi coupling constant determined in muon decay, it can be used to determine the matrix element Vud of the Cabibbo–Kobayashi–Maskawa (CKM) matrix. This, in turn, provides a possibility for testing the Standard Model at the low-energy frontier, and is one of the continuing motivations to improve still further the measurements of the neutron lifetime and of decay asymmetries in experiments at the ILL and elsewhere.
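As a hedged illustration of how the pieces combine, here is a sketch using a PDG-style master formula quoted from memory; the 4908.6 s constant, the lifetime and the ratio λ = gA/gV are assumptions for illustration, not values from this article:

```python
# Extracting |Vud| from neutron decay (master formula quoted from memory;
# treat the 4908.6 s constant as an assumption):
#   |Vud|^2 = 4908.6 s / ( tau_n * (1 + 3*lambda^2) )
# where lambda = gA/gV comes from decay-asymmetry measurements.
tau_n = 879.4      # s, an illustrative modern lifetime value
lam = -1.2754      # gA/gV, illustrative

Vud = (4908.6 / (tau_n * (1 + 3 * lam**2))) ** 0.5
print(f"|Vud| ~ {Vud:.4f}")   # ~0.974, consistent with CKM unitarity
```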
Another key property of the neutron for particle physics is the hypothetical electric-dipole moment (EDM), which is of high potential importance for physics beyond the Standard Model. The existence of an EDM in the neutron would violate time-reversal (T) symmetry and hence – through the CPT theorem – CP symmetry. The Standard Model predicts an immeasurably small neutron EDM, but most theories that attempt to incorporate stronger CP violation beyond the CKM mechanism predict values that are many orders of magnitude larger. An accurate measurement of the EDM therefore provides strong constraints on such theories, and a positive signal would constitute a major breakthrough in particle physics.
In 2006, a collaboration between the University of Sussex, the Rutherford Appleton Laboratory and the ILL announced a new, tighter limit on the neutron’s EDM. Based on measurements using ultracold neutrons produced at the ILL, the upper limit on the absolute value was improved to 2.9 × 10⁻²⁶ e cm (Baker et al. 2006). The experiment stored neutrons in a trap permeated by uniform electric and magnetic fields and measured the ratio of neutron-to-mercury-atom precession frequencies; shifts in this ratio proportional to the applied electric field may in principle be interpreted as EDM signals.
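To give a feel for the precision involved, here is a back-of-the-envelope sketch of the frequency shift to which the quoted limit corresponds; the field strength and formula conventions are assumptions, not taken from the Baker et al. paper:

```python
# In parallel E and B fields the neutron precession frequency satisfies
# h*nu = 2*mu_n*B + 2*d_n*E, so reversing E shifts it by 4*d_n*E/h.
H_PLANCK = 6.626e-34   # J s
E_CHARGE = 1.602e-19   # C

d_n = 2.9e-26 * E_CHARGE * 1e-2   # the 2006 limit, 2.9e-26 e*cm, in C*m
E_field = 1.0e6                   # V/m, i.e. 10 kV/cm (assumed field)

delta_nu = 4.0 * d_n * E_field / H_PLANCK
print(f"frequency shift at the EDM limit: {delta_nu:.1e} Hz")   # ~3e-7 Hz
# Resolving shifts this small is why the neutron frequency is compared
# with that of co-stored mercury atoms, cancelling magnetic-field drifts.
```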
E really does equal mc²
Particle physics makes daily use of the relationship between mass and energy expressed in Albert Einstein’s famous equation. An experiment at the ILL in 2005, combined with one at the Massachusetts Institute of Technology (MIT), made the most precise direct test of the equation to date, with researchers at ILL measuring energy, E, while the team at MIT measured the related mass, m. The test was based on the fact that when a nucleus captures a neutron, the resulting isotope with mass number A+1 is lighter than the sum of the masses of the original nucleus with mass number A and the free neutron. The energy equivalent to this mass difference is emitted as gamma-rays, the wavelength of which can be measured accurately by Bragg diffraction from perfect crystals.
The team at MIT used a novel experimental technique to measure the masses of pairs of isotopes by comparing the cyclotron frequencies of two different ions confined together in a Penning trap. They performed two separate experiments, one with ²⁸Si and ²⁹Si, the other with ³²S and ³³S, leading to mass differences with a relative uncertainty of about 7 × 10⁻⁸ in both cases. At the ILL, a team from the National Institute of Standards and Technology measured the energies of the gamma-rays emitted after neutron capture by both ²⁸Si and ³²S using the world’s highest-resolution double-crystal gamma-ray spectrometer, GAMS4. The combination of the high neutron flux available at the ILL reactor and the energy accuracy of the GAMS4 instrument allowed the team to determine the gamma-ray energies to better than 5 parts in 10⁷. By combining the mass differences measured in America with the energy measurements made in Europe, it was possible to test Einstein’s equation, with the result 1 − Δmc²/E = (–1.4 ± 4.7) × 10⁻⁷ – which is 55 times more accurate than the previous best measurements (Rainville et al. 2005).
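The logic of the comparison can be sketched in a few lines. The numbers below are rounded, illustrative values (an approximate 8.47 MeV neutron-separation energy for ²⁹Si, quoted from memory), not the published measurements:

```python
# E = mc^2 test via neutron capture on 28Si: a Penning trap gives the mass
# difference dm = m(28Si) + m(n) - m(29Si); gamma-ray spectroscopy gives
# the total energy E of the capture cascade. The test is 1 - dm*c^2/E.
U_TO_MEV = 931.494            # 1 atomic mass unit in MeV (c = 1 units)

dm_u = 8.47 / U_TO_MEV        # mass deficit in u, ~9.1e-3 u (illustrative)
E_gamma_mev = 8.47            # summed cascade energy in MeV (illustrative)

test = 1.0 - (dm_u * U_TO_MEV) / E_gamma_mev
print(f"1 - dm*c^2/E = {test:.1e}")   # 0 by construction here; the real
# experiment measured the two sides independently to a few parts in 1e7.
```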
The German physicist Max von Laue (1879–1960) received the Nobel Prize in Physics in 1914, in Stockholm, for demonstrating the diffraction of X-rays by crystals. This discovery revealed the wave nature of X-rays, enabling the first measurements of their wavelength, and showed the organization of atoms in a crystal. It is the origin of all analysis methods based on diffraction, whether of X-rays, synchrotron light, electrons or neutrons.
Paul Langevin (1879–1946) was an eminent physicist from the pioneering French team of atomic researchers, which included Pierre and Marie Curie. A specialist in magnetism, ultrasonics and relativity, he also dedicated 40 years of his life to his responsibilities as director of Paris’s Ecole de Physique et de Chimie. His study of the moderation of rapid neutrons, i.e. how they are slowed by collisions with atoms, was invaluable for the design of the first research reactors.
These three examples give only a tiny glimpse into one aspect of the science that is undertaken at the ILL each year, illustrating the possibilities for testing theories in particle physics and cosmology. The institute looks forward to the next 40 years of fruitful investigations and important results in these fields as well as across many other areas of science.
The Astroparticle Physics European Coordination (ApPEC) consortium and the AStroParticle European Research Area (ASPERA) network have together published a roadmap giving an overview of the status and perspectives of astroparticle physics in Europe. This important step for astroparticle physics outlines the leading role that Europe plays in this new discipline – which is emerging at the intersection of particle physics, astronomy, and cosmology.
Grouped together in ApPEC and ASPERA, European astroparticle physicists and their research agencies are defining a common strategic plan in order to gain international consensus on what future facilities will be needed. This rapidly developing field has already led to new types of infrastructure that employ new detection methods, including underground laboratories and specially designed telescopes and satellite experiments, to observe a wide range of cosmic particles, from neutrinos and gamma-rays to dark-matter particles.
Over the past few years, ApPEC and ASPERA have launched an important effort to organize the discipline and ensure a leading position for Europe in this field, engaging the whole astroparticle-physics community. The roadmap is a result of this process and, though still in its first phase, has started to identify a common policy.
In the process, ApPEC has reviewed several proposals and has recommended engaging in design studies for four large new infrastructures: the Cherenkov telescope array, a new-generation European observatory for high-energy gamma-rays; EURECA, a tonne-scale cryogenic (bolometric) detector for dark-matter searches; LAGUNA, a very large detector for proton decay and neutrino astronomy; and the Einstein telescope, a next-generation gravitational-wave antenna. ApPEC has also reiterated its strong support for the high-energy neutrino telescope KM3 in the Mediterranean region.
These projects – as well as proposals for tonne-scale detectors for the measurement of neutrino mass, dark-matter detectors and high-energy cosmic-ray observatories – will be discussed and prioritized further in a workshop in Amsterdam on 21–22 September. During the workshop, which 300 European physicists are expected to attend, Europe’s priorities for astroparticle physics will be compared with those in other parts of the world.
The US National Science Foundation (NSF) has selected a proposal to produce a technical design for a deep underground science and engineering laboratory (DUSEL) at the former Homestake gold mine near Lead, South Dakota, site of the pioneering solar-neutrino experiment by Raymond Davis. A 22-member panel of external experts reviewed proposals from four teams and unanimously determined that the Homestake proposal offered the greatest potential for developing a DUSEL.
The selection of the Homestake proposal, which was submitted through the University of California (UC) at Berkeley by a team from various institutes, only provides funding for design work. The team, led by Kevin Lesko from UC Berkeley and the Lawrence Berkeley National Laboratory, could receive up to $5 million a year for up to three years. Any decision to construct and operate a DUSEL, however, will entail a sequence of approvals by the NSF and the National Science Board. Funding would ultimately have to be approved by the US Congress. If eventually built as envisioned by its supporters, a Homestake DUSEL would be the largest and deepest facility of its kind in the world.
The concept of DUSEL grew out of the need for an interdisciplinary “deep science” laboratory that would allow researchers to probe some of the most compelling mysteries in modern science, from the nature of dark matter and dark energy to the characteristics of microorganisms at great depth. Such topics can only be investigated at depths where hundreds of metres of rock can shield ultra-sensitive physics experiments from background activity, and where geoscientists, biologists and engineers can have direct access to geological structures, tectonic processes and life forms that cannot be studied fully in any other way. Several countries, including Canada, Italy and Japan, have extensive deep-science programmes, but the US has no existing facilities below a depth of 1 km. In September 2006, the NSF solicited proposals to produce technical designs for a dedicated DUSEL site; by the January 2007 deadline, four teams had submitted proposals, each for a different location.
The review panel included outside experts from relevant science and engineering communities and from supporting fields such as human and environmental safety, underground construction and operations, large project management, and education and outreach.
Scientists from Japan, Italy, the UK and Canada also served on the panel. The review process included site visits by panellists to all four locations, with two meetings to review the information, debate and vote on which, if any, of the proposals would be recommended for funding.
By Iain Nicolson, Canopus. Hardback ISBN 0954984633, £19.95.
If you are a particle physicist interested in cosmology, this book is for you. It gives a broad, clear and precise overview of our current understanding of dark matter and dark energy – the invisible actors governing the fate of the universe.
It is a challenge to try to make these apparently obscure concepts familiar to any motivated reader without a scientific background. But the author, Iain Nicolson, has been entirely successful in his enterprise. With a pleasant balance between text and colourful illustrations, he guides the reader through a fascinating, invisible and mysterious world that manifests its presence by shaping galaxies and the universe itself.
The book starts with an introduction to key concepts in astrophysics and the development of classical cosmology. It then describes the observational evidence for dark matter in galaxies and clusters of galaxies, showing that massive but extremely dim celestial bodies cannot account for the missing mass. Particle physics is not neglected, with a description of our understanding of ordinary “baryonic” matter and the quest to detect exotic weakly interacting massive particles (WIMPs). An entire chapter is also devoted to the idea that modified Newtonian dynamics (MOND) could be an alternative to the existence of dark matter. The second half of the book is devoted to cosmological observations and arguments that suggest the existence of dark energy – an even more mysterious ingredient of the universe. The pieces assemble through these chapters to reveal a universe that was flattened by inflation and that is essentially made of cold dark matter, with dark energy acting as a cosmological constant.
This new cosmology is generally accepted as the standard model and gives the full measure of the dark side of the universe. The visible matter studied by astronomers appears to be just the tip of the iceberg (less than 1% of the mass–energy content of the universe), and even the baryonic matter familiar to physicists amounts to only about 5%. The remaining 95% is unknown territory, which the book invites us to explore using all the techniques available. This will be the major challenge for physics in the 21st century.
Physicists in Germany will soon be able to strengthen their role in the international quest to understand the fundamental laws of nature. On 15 May, the Senate of the Helmholtz Association of German Research Centres announced that it will grant €25 million in funding over the next five years to support the Helmholtz Alliance “Physics at the Terascale”, following a proposal led by the DESY research centre. In this alliance, DESY – together with Forschungszentrum Karlsruhe, 17 universities and the Max Planck Institute for Physics in Munich – will bring together existing competencies in Germany in the study of elementary particles and forces.
At the same time, the Helmholtz Alliance will provide the basis to drive technological advancement in a much more focused way. This new initiative comes at a time when German particle physicists are making large contributions to international collaborations at particle accelerators, such as the LHC at CERN, and a future International Linear Collider.
Alliance funds will finance more than 50 new positions for scientists, engineers and technicians during the initial five-year period. Junior scientists, in particular, will be given the opportunity to lead research groups with options for tenure positions. This is intended to open up attractive new perspectives for a future career in particle physics. Joint junior positions at all partner institutes, coordinated recruitment and teaching substitutes for researchers who are abroad will together provide a framework where it is possible for scientists to work away from their home institutes at large-scale international research centres without interfering with teaching provision.
The new network will enhance collaboration between universities and research institutes in data-analysis fields and the development of new technologies. Particular support will be given to the design of new IT structures, as well as detector and accelerator technologies that are of central importance for the sustainable development of particle physics in the future.
As a member of the alliance, DESY will offer its facilities for testing and development of detector and accelerator technology, and 10 Helmholtz Alliance positions will be opened at the laboratory. An analysis centre for LHC data will also be established at DESY.
In another major development, the CERN Council approved a programme of additional activities, together with the associated budget resources. This decision follows the definition of the European Strategy for Particle Physics, adopted by Council last year, and makes it possible to start implementing the strategy as presented by CERN management last autumn. The approved resources amount to an extra SwFr240 million for 2008–2011. The host states, France and Switzerland, have committed to providing half of these additional funds.
The extra resources are essential to ensure full exploitation of the discovery potential of the LHC and to prepare CERN’s future. The programme consists of four priority themes: an increase in the resources dedicated to the experiments and to reliable operation of the LHC at its nominal luminosity; renovation of the injector complex; a minimum R&D programme on detector components and focusing magnets in preparation for an increase in the LHC luminosity and for enhancement of the qualifying programme for the Compact Linear Collider study; and activities of scientific importance for which contributions from other European organizations will be essential.