One of the things we do well in particle physics is collaborate with each other, and internationally, to carry out our science. Of course, individual and small-group collaborations are a hallmark of modern scientific research, and many of us have formed lifelong friendships with colleagues from different cultures through our scientific interactions. Even in times when political circumstances have been constraining, scientific contacts have been maintained that have helped to break down those barriers. For example, at the height of the Cold War, when personal interactions were greatly hampered, scientific bonds persevered and some of those connections provided crucial ongoing contacts between Western and Soviet societies.
When I was a student at the University of California, Berkeley, I did my PhD research at the “Rad Lab”, now called Lawrence Berkeley National Laboratory. I recall being immediately surprised both by the number of foreign or foreign-born scientists at the lab and by how little it seemed to matter. For me, having grown up as a local Californian boy, this was an eye-opening experience and a terrific opportunity to learn about other cultures, customs and views of the world. After a while, I pretty much took it for granted that we scientists accept and relate to each other in ways that are essentially independent of our backgrounds. However, it is worth reminding ourselves that this is not the case in most of society and that we are the exception, not the rule.
I have often wondered what it is that unifies scientists. How can we work together so easily, when cultural, political and societal barriers inhibit that for most of society? After all, hostilities between countries and cultures seem to continue as an almost accepted part of our modern existence. I won’t theorize here on what enables scientists to work together and become colleagues and friends without regard to our backgrounds. Instead, I would like to briefly explore whether the nature of how we collaborate will, or should, change.
Particle physics is increasingly focused on the programmes at our big laboratories that house large accelerators, detectors and support facilities. These laboratories have essentially come to represent a distributed set of centres for high-energy physics, from which the intellectual and technical activities emanate and where physicists go to interact with their colleagues. Fermi National Accelerator Laboratory (Fermilab), Stanford Linear Accelerator Center (SLAC), the High Energy Accelerator Research Organization (KEK) and Deutsches Elektronen-Synchrotron (DESY) are examples of national laboratories that play this role in the US, Japan and Germany. CERN is a different example of a successful regional laboratory that has provided Europe with what is arguably the leading laboratory in the world for particle physics and with a meeting place for physicists from Europe and beyond.
One essential ingredient in the success of particle physics is that the accelerator facilities at the large laboratories have been made open to experimentalists worldwide without charge. This principle was espoused by the International Committee on Future Accelerators (ICFA) and, I believe, it has been crucial to widening participation.
It is interesting to contemplate how international collaboration might evolve as we go beyond the regional concept to a global one, like the International Linear Collider (ILC). The organizational principles for building and operating the ILC are not yet defined, but the idea is to form a partnership between Asian, American and European countries. Such an arrangement is already in place for the accelerator and detector R&D efforts. The general idea is to site the ILC near an existing host laboratory, to take advantage of the support facilities. However, the project itself will be under shared management by the international stakeholders. The experiments are expected to consist of large collider detectors similar to those at present colliders, but with some technical challenges that will require significant detector R&D over the next few years.
As we plan for the ILC, we want to ensure that we create a facility that will be available to the world scientific community. What needs to be done to ensure that we maintain the strong collaborative nature of our research and how do we create a true centre for the intellectual activities of our community? What should we require of the host country to assure openness to the laboratory and its facilities? How can we best include the broad community in the decision-making that will affect the facilities that are to be built? Is it time to consider new forms of detector collaboration and/or should we contemplate making the data from the detectors available to the broader community after an initial period (as in astronomy)?
I raise such questions only as examples, not to imply that we should change the way we do business, but to encourage us to think hard about how we can create an exciting international facility that will best serve our entire community and enable productive and broad collaboration to continue in science.
With this issue, CERN Courier takes on a slightly different look, with the inclusion of articles in French. This is because CERN has, with regret, taken the decision to cease the publication of separate French and English editions. In the new bilingual edition, articles submitted in English will remain in English while French articles will be published in French. There will, however, be summaries of feature articles in the other language, and the news items, as here, will be listed in French.
During its meeting in Geneva on 17 June, the CERN Council agreed to take on the role of defining the strategy and direction of European particle-physics research, a task already foreseen in the organization’s founding convention.
A strategic planning team is to be established in support of this role, consisting of the chair of the European Committee for Future Accelerators, the chair of CERN’s Scientific Policy Committee, CERN’s director-general, one member nominated by each of CERN’s member-state delegations, and representatives of the major European national laboratories. In spring 2006 the team will present a status report to the CERN Council at a meeting in Berlin, with a full report to follow later that year.
At the same meeting, the Council also heard from the Large Hadron Collider (LHC) project leader, Lyn Evans, who told attendees that every effort is being made to ensure that the LHC will be ready for commissioning in the summer of 2007. CERN’s chief scientific officer, Jos Engelen, also reported that all the LHC experiments expect to be in a position to take data in 2007, and that the LHC computing grid is progressing according to plan.
In his presentation to the Council, CERN’s director-general, Robert Aymar, applauded the progress that is being made towards the LHC. However, while the laboratory is on course for LHC start-up in 2007, current expenditure profiles indicate that CERN’s budget could be entirely committed to paying for the project right through to the next decade. This subject will be discussed at the Council’s meeting in September.
The DEAR (DAFNE Exotic Atoms Research) experiment at the DAFNE φ factory at Frascati has performed the most accurate determination of the effect of the strong interaction on the binding energy of kaonic hydrogen.
Kaonic hydrogen is an exotic atom in which the electron is replaced by a K–, and it turns out to be an excellent laboratory for studies of quantum chromodynamics. Of particular interest is the strangeness content of the nucleon, which has traditionally been extracted from low-energy kaon-nucleon scattering amplitudes. A significantly more accurate approach has now been demonstrated: measuring the ground-state X-ray transitions in kaonic hydrogen atoms.
The DEAR collaboration took advantage of the low-energy monoenergetic kaons from the decay of φ mesons resonantly produced by e+e– collisions at one of the two interaction points at DAFNE. The kaons travelled through the thin beam pipe of DEAR and stopped in a gaseous hydrogen target. CCD detectors with a pixel size of 22.5 × 22.5 μm², cooled to 165 K, detected the X-rays emitted.
The DEAR experiment follows in the footsteps of the KpX experiment at KEK in Japan, which first measured the ground-state X-ray peak of kaonic hydrogen. DEAR’s values are about a factor of two more accurate, and roughly 40% lower than those of the Japanese collaboration. The ground-state shift was measured to be ε1s = −193 ± 37 (stat.) ± 6 (syst.) eV, with a 1s strong-interaction width of Γ1s = 249 ± 111 (stat.) ± 30 (syst.) eV.
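For orientation (this relation is not quoted in the article), the measured shift and width are usually connected to the complex K–p s-wave scattering length through a lowest-order Deser-type formula, written here in one common sign convention with μc the reduced mass of the K–p system:

\[ \varepsilon_{1s} + \tfrac{i}{2}\,\Gamma_{1s} \;\simeq\; 2\,\alpha^{3}\,\mu_c^{2}\,a_{K^-p} \]

In this picture the shift fixes the real part and the width the imaginary part of the scattering length, which is where the strong-interaction physics enters.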
DEAR has also become the first experiment to observe transitions from different excited states, clearly identifying Kα, Kβ and Kγ lines.
On 24 May, Jonathan Dorfan, director of the Stanford Linear Accelerator Center (SLAC), announced a complete reorganization of the structure and senior management of the laboratory, which Stanford University has operated for more than 40 years for the US Department of Energy. The new organizational structure is built around four divisions: Photon Science, Particle and Particle Astrophysics, Linac Coherent Light Source (LCLS) Construction, and Operations.
“One thing that is recurrent in world-class scientific research is change,” Dorfan said. “Recognizing new science goals and discovery opportunities, and adapting rapidly to exploit them efficiently, cost-effectively and safely is the mark of a great laboratory. Thanks to the support of the Department of Energy’s Office of Science and Stanford University, SLAC is ideally placed to make important breakthroughs over a wide spectrum of discovery in photon science and particle and particle astrophysics. These fields are evolving rapidly, and we are remodelling the management structure to mobilize SLAC’s exceptional staff to better serve its large user community. The new structure is adapted to allow them to get on with what they do best – making major discoveries.”
Two of the new divisions – Photon Science, and Particle and Particle Astrophysics – encompass SLAC’s major research directions. As director of the Photon Science Division, Keith Hodgson has responsibility for the Stanford Synchrotron Radiation Laboratory, the science and instrument programme for the LCLS (the world’s first X-ray free-electron laser) and the new Ultrafast Science Center. Persis Drell, director of the Particle and Particle Astrophysics Division, oversees the B-Factory (an international collaboration studying matter and antimatter), the Kavli Institute for Particle Astrophysics and Cosmology, the International Linear Collider effort, accelerator research and non-accelerator particle-physics programmes.
Construction of the $379 million LCLS, a key element in the future of accelerator-based science at SLAC, started this fiscal year. A significant part of the laboratory’s resources and manpower is being devoted to building the LCLS, with completion of the project scheduled for 2009. Commissioning will begin in 2008 and science experiments are planned for 2009. John Galayda serves as director of the LCLS Construction division.
To reinforce SLAC’s administrative and operational efficiency, and to stress the importance of strong and effective line management at the laboratory, a new position of chief operating officer has been created, filled by John Cornuelle. This fourth division, Operations, has broad responsibilities for operational support and R&D efforts that are central to the science divisions. Included in Operations will be environmental safety and health, scientific computing and computing services, mechanical and electrical support departments, business services, central facilities and maintenance.
The E158 experiment at the Stanford Linear Accelerator Center has made a landmark observation: the strength of the weak force acting on two electrons lessens when the electrons are far apart. The results will be published in Physical Review Letters.
Because there is an asymmetry in how the weak force acts, there is a difference between how often left- and right-handed electrons scatter via a Z particle (the neutral carrier of the weak force). Two years ago, the team made the first observation of this parity-violation effect in electron-electron interactions.
For the new results, E158 used its improved precision asymmetry measurement to calculate the long-distance (low momentum transfer, Q) weak charge of the electron, which determines the strength of the weak force between two electrons. The result is the world’s best determination of the weak mixing angle at low energy: sin²θW(eff) = 0.2397 ± 0.0010 (stat.) ± 0.0008 (syst.), evaluated at Q² = 0.026 GeV².
Previous experiments at SLAC and CERN measured the electron’s weak charge at high momentum transfer (short distances). E158’s long-distance measurement observes this weak charge to be half the size of the charge at short distances. Comparing the short-distance measurements with the long-distance results demonstrates (with 6σ significance) the variation of the strength of the weak force with distance, and confirms an important aspect of Standard Model theory. Using the result for sin²θW(eff), E158 finds the electron’s weak charge to be −0.041 ± 0.006 – half the value expected if there were no variation.
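As a simple consistency check (not spelled out in the article), at tree level the electron’s weak charge is tied to the effective weak mixing angle by

\[ Q_W^{e} \;\simeq\; -\bigl(1 - 4\sin^2\theta_W^{\rm eff}\bigr) \;=\; -(1 - 4 \times 0.2397) \;\approx\; -0.041, \]

which reproduces the quoted value; the running of sin²θW(eff) between low and high momentum transfer is what makes the weak charge roughly twice as large at short distances.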
E158 was also sensitive to indirect signals from hypothetical Z′ particles; the absence of any deviation implies that such particles must be at least 10 times as massive as the Z.
A potentially cost-saving and performance-enhancing new approach to fabricating superconducting radiofrequency (SRF) accelerating cavities has been demonstrated by the Institute for Superconducting Radiofrequency Science & Technology (ISRFST) at Jefferson Lab in Newport News, Virginia.
Several single-cell niobium cavities were made from material sliced from large-grain niobium ingots – rather than fine-grain material melted from ingots and formed into sheets by the traditional process of forging, annealing, rolling and chemical etching.
In tests carried out by ISRFST, these cavities performed extremely well. If multi-cell cavities are also successful, the method could have a substantial impact on the economics of high-performance RF superconductivity.
The work aimed to provide a deeper understanding of the influence of grain boundaries on the often-observed drop in Q (the cavity-performance quality factor) at accelerating gradients above 20 MV/m.
“Q-drop” is not well understood, but it may be linked to contaminants and grain boundaries in the niobium.
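For reference (a standard definition rather than something stated in the article), the quality factor compares the energy stored in the cavity with the power dissipated in its walls at angular frequency ω:

\[ Q \;=\; \frac{\omega\,U_{\rm stored}}{P_{\rm dissipated}}, \]

so a “Q-drop” at high gradient means that wall losses grow much faster than the stored field energy.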
The researchers used single-crystal niobium sheets for forming into half-cells, omitting expensive processing steps and producing cavities with few or no grain boundaries. Reference Metals Company Inc of Bridgeville, Pennsylvania, provided the niobium in a research collaboration with JLab.
This proof-of-principle work could have wide repercussions. Most notably, it could lead to more reliable production and reduced costs.
The research also has important implications for the forthcoming International Linear Collider (ILC), a 500 GeV machine that will need some 17,000 SRF cavities performing above 28 MV/m. Using a scaled version of a low-loss design proposed for the ILC, a test cavity supported an accelerating gradient of 45 MV/m. This figure is very close to both Cornell’s current world record and the theoretical limit.
The existence of powerful quasars at high redshift raises the question of the rapid formation of supermassive black holes. How could early black holes accrete 1 billion solar masses in less than 1 billion years after the Big Bang? Was there an epoch with conditions particularly favourable for such a rapid growth of black holes?
These questions have been tackled by cosmologists Marta Volonteri and Martin Rees at Cambridge University. They attempt to identify what was different in the early universe that allowed black holes to grow as quickly as suggested by the existence of fully mature quasars in the Sloan Digital Sky Survey (SDSS) dataset at a redshift of about 6.
The first problem is the origin of the seed black holes. They could be low-mass (less than 1.5 solar masses) primordial black holes, formed during the first seconds of the Big Bang, but Volonteri and Rees do not want to rely on such a hypothesis. They propose that the seeds are intermediate-mass black holes, formed by the core collapse of massive stars of the very first generation.
Such Population III stars, which are composed only of primordial hydrogen and helium, can have masses up to 1000 times that of the Sun, thus exceeding by an order of magnitude the most massive metal-rich stars existing today. When exploding as supernovae at the end of their lives they would form black holes with masses approximately 20-600 times that of the Sun.
These pregalactic seed black holes would form in gas halos, collapsing at a redshift of 20-30 at the peaks of the primordial density field. Before too many heavy nuclei were released by stars, the newborn black holes would benefit from unique conditions to accrete gas at a very high rate. The calculations of Volonteri and Rees indeed suggest that there was a brief window of rapid black-hole growth during the “dark ages”, before the stellar radiation fully re-ionized the intergalactic gas (CERN Courier October 2003 p13).
Their calculations show that the absence of heavy nuclei and the effective cooling by hydrogen atoms via line emission allow the formation of a “fat” disc of relatively cold (approximately 5000-10,000 K) gas around the seed black hole at the centre of the halo. The thickness of the disc allows quasi-spherical accretion into the black hole’s event horizon at a much higher rate than for thin discs. However, the growth of the black hole increases the radius of the inner disc, which becomes increasingly thin with respect to the size of the black hole. This super-accretion will therefore no longer be sustained once the black hole has reached a significant mass.
Although the details of the process are still quite uncertain, the absence of heavy nuclei seems to play a critical role in the early growth of massive black holes. Super-accretion with a very low luminosity compared with the amount of matter falling into the black hole was likely to have stopped once the universe had been enriched by metals at a redshift of 6-10. But by then, a population of supermassive black holes would already have formed, which became fully mature quasars by turning matter into light much more efficiently than before.
Problems in theories such as quantum chromodynamics (QCD) that involve strong coupling are among the most intractable in physics. The difficulties of accurate calculations are particularly vexing when it comes to studying charge-parity (CP) violation – a necessary ingredient for explaining the absence of the antimatter produced in the Big Bang, and a vital topic in particle physics. Progress in making accurate QCD calculations in this sector of the Standard Model could have far-reaching consequences, because the larger theory in which the Standard Model is embedded, even if not strongly coupled ab initio, will almost certainly have strongly coupled sectors.
The amount of CP violation that is consistent with our current understanding of the Standard Model is not enough to account for the disappearance of the antimatter produced along with the matter. The decay of B mesons is the most promising arena in which to search for other sources of CP violation, and the B-factory experiments, BaBar and Belle, have been spectacularly successful in observing and studying CP violation in those decays.
However, the search for CP violation beyond the Standard Model involves comparison of the angles of the “unitarity triangle” measured in CP violation experiments, with the lengths of the sides determined from more conventional measurements. These lengths are determined from elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which describes the relative strengths of the weak decays of quarks. The problem is that quarks do not appear alone, but in strongly interacting combinations such as mesons and baryons. So, there is always a strong-interaction parameter that relates decay measurements to the underlying CKM matrix element. So far, the uncertainties in these parameters severely limit the precision of measurements of the CKM matrix elements.
One example of such a strong-interaction parameter, called fB, is required to extract the CKM matrix element called Vtd from measurements of B0B0bar mixing. This parameter is a measure of the separation of the b quark and the anti-d quark in a B meson, and could, in principle, be measured in B+ → μ+ν decay. However, this decay is so slow that an accurate measurement is impossible, even with the enormous data samples that the BaBar and Belle collaborations have accumulated. This is one of the parameters that lattice QCD (LQCD) theorists can calculate, but accuracy has been limited. Recent progress in LQCD holds the promise of precise calculations of the parameters, including fB, that are required to determine CKM matrix elements, but until now there has been no effective direct experimental test of the precision of these calculations.
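Schematically (a standard relation, not quoted in the article), the measured B0B0bar oscillation frequency depends on fB together with a related “bag” parameter BB:

\[ \Delta m_d \;\propto\; f_B^{2}\, B_B\, |V_{td}|^{2}, \]

so the uncertainty on the lattice values of fB and BB feeds directly into the precision with which Vtd can be extracted.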
Five years ago, some LQCD theorists and members of the CLEO collaboration at the Cornell Laboratory for Elementary-Particle Physics realized that comparing measurements of D meson decays with LQCD calculations could test the LQCD calculations needed for extracting CKM matrix elements. In particular, in the decay D+ → μ+ν there is a parameter called fD+ that is analogous to fB. The rate for D+ → μ+ν decay is larger than the corresponding B meson decay rate, so it can be measured accurately. If good agreement is found between the experimental and LQCD values for fD+, that would inspire confidence in LQCD calculations of fB. It is also generally believed that the ratio fB:fD+ can be calculated more accurately than either one, so fB could be determined from an experimental measurement of fD+ and a LQCD calculation of the ratio. This caused the CLEO collaboration and a group of LQCD theorists to embark on D meson decay experiments and the corresponding LQCD calculations, with goals of accuracies of a few per cent. However, they had technical difficulties to overcome.
Although there have been LQCD calculations since the 1970s, their accuracy has been limited to 20% because of a simplification – the “quenched approximation”. Predictions from “unquenched” LQCD calculations that match experimental results to a few per cent are needed to demonstrate that the calculations can be done at the desired level of accuracy. The LQCD theorists were the first to reach the goal of a few per cent in their calculations of important parameters (not fD+ or fB) in the b and c quark systems. However, their results were “postdictions” not predictions, because the corresponding experimental results had already been published. Still, this success motivated the group of LQCD theorists to embark on their calculations of the strong-interaction parameters needed to determine CKM matrix elements.
Measurement of fD+ is one of the main goals of the CLEO-c physics programme at the Cornell Electron Storage Ring (CESR). However, obtaining high luminosity was a major challenge for the CESR accelerator group. Although CESR had made steady gains in luminosity since its first operation in 1979, much of that gain would be lost in reducing the energy from the b quark threshold region, near 5 GeV per beam, to the c quark threshold region near 2 GeV per beam, where the measurement could be made most readily. As the energy of the beams is decreased, the synchrotron-radiation “damping” required for high luminosity is substantially reduced. The solution was the installation of 12 “wiggler” magnets to increase the damping at the lower energies. These magnets were installed in 2003 and 2004 in CESR-c, and the CLEO-c programme then began in earnest, funded by a five-year grant from the National Science Foundation.
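The scaling is steep: at fixed bending radius the radiation-damping rate falls roughly as the cube of the beam energy, so (as a rough illustrative estimate, not a figure from the article) going from about 5 GeV to about 2 GeV per beam weakens the natural damping by more than an order of magnitude:

\[ \frac{1}{\tau_{\rm damp}} \;\propto\; \frac{E^{3}}{\rho} \quad\Rightarrow\quad \left(\frac{2}{5}\right)^{3} \approx \frac{1}{16}. \]

The wigglers restore that lost damping by adding synchrotron-radiation loss without lowering the luminosity-relevant beam parameters further.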
The first engineering run of CESR-c with six wigglers yielded an integrated luminosity of 60 pb⁻¹ in e+e– collisions at a total energy of 3.77 GeV, the peak of the ψ(3770) resonance. This is substantially more than the luminosities available to either the MARK III or the BES II collaborations, at SLAC and the Institute of High Energy Physics in Beijing respectively, which previously took data at the same energy. Subsequent runs with 12 wigglers brought the total integrated luminosity to 281 pb⁻¹. This is the most desirable energy for measuring D meson decays because the ψ(3770) decays only to D+D– or D0D0bar pairs, making very clean “tagged” measurements possible. In tagged measurements pioneered by the MARK III collaboration, if one D meson, D– for example, is reconstructed in an event, then the rest of the particles in that event must be from the decay of a D+ meson. Coupled with the excellent resolution and large acceptance of the CLEO-c detector, tagging provides a very clean sample of D+ meson decays, which is an ideal arena for searching for rare decays such as D+ → μ+ν.
The CLEO collaboration found eight D+ → μ+ν events (with an estimated background of one event) in the first 60 pb⁻¹ of CLEO-c data. This provided a rough measurement of fD+ to an accuracy of 20%, which has now been published (Bonvicini 2004). The larger data sample and improvements in selection criteria produced a yield of 50 candidates for D+ → μ+ν decay with an estimated background of 2.9 ± 0.5, enough candidates to yield an error in fD+ below 10%.
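For context (the standard expression, not quoted in the article), the leptonic width from which fD+ is extracted is

\[ \Gamma(D^+ \to \mu^+ \nu) \;=\; \frac{G_F^{2}}{8\pi}\, f_{D^+}^{2}\, m_\mu^{2}\, m_{D^+} \left(1 - \frac{m_\mu^{2}}{m_{D^+}^{2}}\right)^{2} |V_{cd}|^{2}, \]

so the measured branching fraction and the known D+ lifetime give the width, and fD+ follows once |Vcd| is taken from elsewhere in the CKM fit.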
While the CLEO collaboration, with the help of their CESR colleagues, was accumulating and analysing these data, a group of LQCD theorists was also hard at work calculating fD+. It became clear that both groups could have substantial results just in time for the Lepton-Photon Symposium in Uppsala at the end of June. Since both communities felt that it was very important for the LQCD result to be a real prediction, they agreed to embargo both of their results until the conference. On the second day of the symposium, Marina Artuso of Syracuse University reported the preliminary CLEO-c result fD+ = 223 ± 16 (stat.) +7/−9 (syst.) MeV (CLEO-c 2005), and Iain Stewart of MIT reported the LQCD result from the Fermilab, MIMD Lattice Computation (MILC) and High Precision QCD (HPQCD) collaborations, fD+ = 201 ± 3 ± 17 MeV (Aubin et al. 2005), where the errors are statistical and systematic, respectively. The two results agree well within the errors of about 8% for each.
The agreement between the results motivates both communities to continue comparing LQCD calculations with experiments. On the LQCD side, important next steps include improvements in algorithms that can reduce systematic errors and precision calculations of the “form factors” involved in semileptonic decays of D and B mesons. The CLEO collaboration plans to utilize its data sample to measure form factors in semileptonic D decay and take more data to reduce errors. The LQCD theorists and the CLEO collaboration both aim to reduce errors to below 5%. The CLEO collaboration is also planning to explore the threshold region for DsDsbar production to search for an energy at which the tagging techniques can be applied to make the first accurate measurements of Ds meson decay, including fDs and Ds semileptonic decay form factors. The Fermilab, MILC and HPQCD group has already predicted the value of fDs.
Fred Hoyle, the great cosmologist, nuclear astrophysicist and controversialist, was born 90 years ago in the beautiful county of Yorkshire in the north of England. Hoyle’s first science teacher was his father, who supplied the boy with books and apparatus for chemistry experiments. By the age of 15 he was making highly toxic phosphine (PH3) in his mother’s kitchen, and terrifying his young sister with explosions. In high school he excelled in mathematics, chemistry and physics, and in 1933 won a place at Cambridge to study physics.
On arrival at Cambridge he immediately demonstrated his fierce independence by telling his astonished tutor that he was switching from physics to applied mathematics. The future nuclear astrophysicist foresaw that Cambridge mathematics rather than laboratory physics would give him the right start as a theorist. The country boy displayed an astonishing talent at mathematics, even by the highest standards of the university. He skipped the second year completely, yet graduated with the highest marks in his year. That soaring achievement won him a position as a research student in the Cavendish Laboratory, where Ernest Rutherford held the chair of experimental physics. By the 1930s Rutherford had created for Cambridge the greatest nuclear physics laboratory in the world.
Hoyle identified Rudolf Peierls as a supervisor. Peierls, a German citizen and son of a Jewish banker, had studied quantum theory under the pioneer Werner Heisenberg. In 1933 Peierls and his young wife had escaped the anti-Jewish practices of the Nazi regime; they arrived in Cambridge via Stalin’s Russia. Peierls won a one-year fellowship from the Rockefeller Foundation, and by the time Hoyle tracked him down he had just returned from spending six months in Rome with Enrico Fermi. Peierls immediately set Hoyle the task of improving Fermi’s theory of beta decay, published in 1934. This led, in 1937, to Hoyle’s first research paper, “The generalised Fermi interaction”.
In 1938 Paul Dirac, who had won the Nobel prize in 1933, became Hoyle’s supervisor because Peierls had left Cambridge for a permanent position in nuclear physics at the University of Birmingham. Just one year under Dirac’s silent tutelage enabled Hoyle to produce two papers in quantum electrodynamics, both of them masterpieces.
The impending 1939-1945 war curtailed Hoyle’s career as a theoretical nuclear physicist. In January 1939 he read of Irène and Frédéric Joliot-Curie’s discovery that the fission of uranium by neutron bombardment produced a fresh flood of neutrons. The nuclear physicists in the Cavendish Laboratory immediately realized that a chain reaction could be used to create a nuclear bomb. Hoyle foresaw that war research would drain the UK universities of scientists and mathematicians. Wishing to avoid being drafted for weapons research, he changed his research interest to theoretical astronomy and offered his services to the nation as a weather forecaster. The British authorities declined this suggestion. Instead he found himself engaged in radar countermeasures for the war at sea, a field in which he worked with great distinction, though his contribution was never publicly recognized.
In late 1944 the US Navy convened a secret meeting in Washington for the US and UK to share knowledge of radar research. Hoyle was one of two UK delegates. Outside of the meeting he used his time productively. He flew out to the US west coast to meet Walter Baade, one of the greatest observational astrophysicists of the 20th century. Baade introduced Hoyle to papers that he had missed during his war work, about the extremely high temperatures in supernovae. Baade taught him that a supernova is a nuclear explosion triggered by stellar collapse: “Maybe a star is like a nuclear weapon!” was how Baade put it.
Hoyle returned to England via Montreal, his itinerary allowing him to visit the Chalk River Laboratories. This Canadian facility had played host to British research on nuclear weapons since 1942. From a former Cavendish Laboratory contact, Hoyle learned some of Britain’s nuclear secrets first-hand. What particularly amazed him was how far the measurements of energy levels in nuclei had progressed. Hoyle now made an important connection: what would the nuclear chain reactions look like in an exploding star?
The post-war period
At the conclusion of the war in Europe, Hoyle walked out of his job as a radar scientist (he could have continued if he had wished), and returned to Cambridge as a lecturer. He immediately turned his mind to nuclear reactions in massive stars with central temperatures of around 3 × 10⁹ K. Astrophysicists had long known of the two stellar reactions that synthesize helium from hydrogen. Hoyle, following suggestions by his hero, Arthur Eddington, now asked himself if helium could be processed to the heavier elements via chain reactions. He studied tables giving the natural abundances of the chemical elements, picking up an important clue from the marked increase in abundances around iron, the so-called iron peak. From his solid grasp of nuclear physics and statistical mechanics he convinced himself that the iron nuclei were synthesized in stars at very high temperatures. He set himself the task of working out how stars do it.
He quickly became frustrated at the lack of data on nuclear masses and energy levels. Then, one afternoon in the spring of 1946, he bumped into Otto Frisch. Frisch had recently returned to the UK from Los Alamos, where he had worked on nuclear fission aspects of the Manhattan Project. Frisch directed Hoyle to a declassified compilation on nuclear masses that the British authorities had found in occupied Berlin. Drawing on these data from the wartime atomic-weapons programmes, he now worked alone in St John’s College (rather than the Cavendish Laboratory), searching for the answers to the origin of the elements from beryllium to iron.
A single page in a notebook he had first started in 1945 captures the moment when Hoyle cracked this problem. The notebook has a series of reactions, commencing with 12C capturing 4He, and concluding with Fe. Hoyle treated the problem as one of statistical equilibrium. For example, he wrote down a chain reaction connecting 16O and 20Ne in the following manner: 16O + 4He ⇌ 19F + 1H, 19F + 1H ⇌ 20Ne + hν. The double-headed arrows indicate that these reactions occur in equilibrium, proceeding from right to left as well as from left to right.
Using statistical mechanics, Hoyle calculated the proportions of each isotope that would arise under equilibrium conditions. This is not explosive nucleosynthesis, but a more mundane steady-state alchemy in the cores of red giant stars. Hoyle assumed, correctly as it turned out, that rotational instability and stellar explosions would release the heavier elements inside the star back to the interstellar medium. His scheme neatly matched reality, as its predicted distribution of the different elements corresponded well with their abundance in the natural environment. But there was barely a ripple of interest from the scientific community when Hoyle published his findings in 1946. At that stage he was far ahead of his time in applying nuclear physics to stellar interiors. In the 10 years following publication the paper received just three citations.
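For a flavour of what such an equilibrium calculation involves (a sketch of the standard Saha-type relation, not Hoyle’s own notation), the abundance of a nuclide C formed by radiative capture A + B ⇌ C + γ at temperature T is governed by

\[ \frac{n_C}{n_A\, n_B} \;\propto\; \left(\frac{2\pi\hbar^{2}}{\mu k T}\right)^{3/2} \frac{g_C}{g_A\, g_B}\, \exp\!\left(\frac{Q}{kT}\right), \]

where μ is the reduced mass of A and B, the g’s are statistical weights and Q is the energy released in the capture; iterating such relations along a reaction chain yields the equilibrium proportions of each isotope.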
One spring afternoon in 1953 a young postdoc, Geoffrey Burbidge, gave a seminar in Cambridge that changed nuclear astrophysics forever. He described the proportions of chemical elements in a peculiar star (γ Gem) that his wife Margaret had just observed. The composition of this star seemed bizarrely different from that of the Sun. The rare earths, from 57La to 71Lu, were spectacularly over-represented: in γ Gem 57La has an abundance 830 times greater than in the Sun. The Burbidges appeared to have discovered a star with nuclear reactions taking place on the surface.
The results greatly excited Willy Fowler of Caltech, who was in Cambridge as a Fulbright professor. He already knew of Hoyle’s work on synthesis through the iron peak. Now he introduced himself to the Burbidges, saying that his particle accelerator in the Kellogg Radiation Lab could accelerate protons to the energies found in solar flares. He exclaimed: “Geoff, the four of us should attack the problems of nucleosynthesis together.”
Soon after the seminar Fowler and Hoyle joined up again at Caltech. Hoyle had a problem on his mind. His synthesis through to the iron peak started with 12C. Where had the 12C come from? Not from the Big Bang – that made only hydrogen and helium. The synthesis of elements with atomic masses 5 and 8 in stellar interiors was already known to be impossible because there are no stable isotopes with those masses. In the absence of these light isotopes to form a stepping-stone, how could three 4He become 12C? Calculations suggested that anything synthesized from three alpha particles (4He) would be absurdly short-lived. And there the matter rested until Hoyle goaded a reluctant Fowler into action.
One of Fowler’s associates, Edwin Salpeter, had found an enhanced energy level in 8Be that lasted just long enough to react with an alpha particle and make the prized 12C. However, when Hoyle looked at the nuclear physics more closely, he realized that the 12C resulting from Salpeter’s scheme would immediately react further to form 16O. Then, in a flash of inspiration, Hoyle tried to make Salpeter’s triple-alpha process work with an enhanced level in 12C. To his amazement he found that if the newly made 12C had a resonance at 7.65 MeV the reaction would proceed at just the correct rate.
Hoyle crashed into Fowler’s office without so much as a “by your leave” and urged him to measure the resonance levels in carbon. The experimentalist wasn’t going to embark on a quest that would take many weeks just because an exotic theorist from England asked him to. But Hoyle persisted and Fowler eventually relented. Hoyle had already returned to his teaching in Cambridge by the time Fowler’s group completed the experiment. They did find the resonance at 7.65 MeV, a discovery that Fowler found absolutely amazing. “From that moment we took Hoyle very seriously indeed,” he later said, because Hoyle had predicted a nuclear-energy level entirely on the basis of an anthropic argument.
The Burbidges, Fowler and Hoyle – “B2FH” – now embarked on an enormous research programme to account for the origin of the elements in the entire Periodic Table. The Burbidges brought the observations to the collaboration, Fowler the nuclear data, and Hoyle and Geoff Burbidge many of the calculations (on hand-cranked machines). Their encyclopaedic paper, always referred to as B2FH, ran to 108 pages, appearing in Reviews of Modern Physics in 1957 (B2FH 1957). It has received 1400 citations, which is very high for a paper in astrophysics. It remains a key paper, which set out the physics of several different mechanisms of nucleosynthesis, including the explosive pathways in which supernovae build the elements beyond the iron peak. The paper led to a lifelong friendship between Fowler and Hoyle, both of whom made many further contributions to nucleosynthesis. Fowler was deeply disturbed and disappointed when Hoyle did not get a share of the 1983 Nobel prize, which went to Chandrasekhar and Fowler.
Fowler strongly supported Hoyle’s plans for an Institute of Theoretical Astronomy in Cambridge. This opened in 1968, with nuclear astrophysics at the heart of the programme. Hoyle used the institute as a platform to re-establish British expertise in all branches of theoretical astronomy. By example he pulled the subject out of the doldrums, inspiring a string of distinguished visitors and a legion of graduate students. His research papers (there are more than 500) show he was wrong more often than he was right. That did not trouble him at all. Among the papers in the “right” class, those on nuclear astrophysics still stand as a towering achievement of central importance to astrophysics.