Yukawa’s gold mine

This year is the centenary of the birth of Hideki Yukawa who, in 1935, proposed the existence of the particle now known as the π meson, or pion. To celebrate, the International Nuclear Physics Conference took place in Japan on 3–8 June – 30 years after the previous one in 1977 – with an opening ceremony honoured by the presence of the Emperor and Empress. In his speech, the Emperor noted that Yukawa – the first Nobel Laureate in Japanese science – is an icon not only for young Japanese scientists but for everybody in Japan. Yukawa’s particle led to a gold mine in physics linked to the pion’s production, decay and intrinsic structure. This gold mine continues to be explored today. The present frontier is the quark–gluon coloured world (QGCW), whose properties could open new horizons in understanding nature’s logic.

Production

In his 1935 paper, Yukawa proposed searching for a particle with a mass between the light electron and the heavy nucleon (proton or neutron). He deduced this intermediate value – the origin of the name “mesotron”, later abbreviated to meson – from the range of the nuclear forces (Yukawa 1935). The search for cosmic-ray particles with masses between those of the electron and the nucleon became a hot topic during the 1930s thanks to this work.

On 30 March 1937, Seth Neddermeyer and Carl Anderson reported the first experimental evidence, in cosmic radiation, for the existence of positively and negatively charged particles heavier and with more penetrating power than electrons, but much less massive than protons (Neddermeyer and Anderson 1937). Then, at the meeting of the American Physical Society on 29 April, J C Street and E C Stevenson presented the results of an experiment that gave, for the first time, a mass value of 130 electron masses (mₑ) with 25% uncertainty (Street and Stevenson 1937). Four months later, on 28 August, Y Nishina, M Takeuchi and T Ichimiya submitted to Physical Review their experimental evidence for a positively charged particle with mass between 180 mₑ and 260 mₑ (Nishina et al. 1937).

The following year, on 16 June, Neddermeyer and Anderson reported the observation of a positively charged particle with a mass of about 240 mₑ (Neddermeyer and Anderson 1938), and on 31 January 1939, Nishina and colleagues presented the discovery of a negative particle with mass (170 ± 9) mₑ (Nishina et al. 1939). In this paper, the authors improved the mass measurement of their previous particle (with positive charge) and concluded that the result obtained, m = (180 ± 20) mₑ, was in good agreement with the value for the negative particle. Yukawa’s meson theory of the strong nuclear forces thus appeared to have excellent experimental confirmation. His idea sparked an enormous interest in the properties of cosmic rays in this “intermediate” range; it was here that a gold mine was to be found.

In Italy, a group of young physicists – Marcello Conversi, Ettore Pancini and Oreste Piccioni – decided to study how the negative mesotrons were captured by nuclear matter. Using a strong magnetic field to separate clearly the negative from the positive rays, they discovered that the negative mesotrons were not strongly coupled to nuclear matter (Conversi et al. 1947). Enrico Fermi, Edward Teller and Victor Weisskopf pointed out that the decay time of these negative particles in matter was 12 powers of 10 longer than the time needed for Yukawa’s particle to be captured by a nucleus via the nuclear forces (Fermi et al. 1947). They introduced the symbol μ, for mesotron, to specify the nature of the negative cosmic-ray particle being investigated.

In addition to Conversi, Pancini, Piccioni and Fermi, another Italian, Giuseppe Occhialini, was instrumental in understanding the gold mine that Yukawa had opened. This further step required the technology of photographic emulsion, in which Occhialini was the world expert. With Cesare Lattes, Hugh Muirhead and Cecil Powell, Occhialini discovered that the negative μ mesons were the decay products of another mesotron, the “primary” one – the origin of the symbol π (Lattes et al. 1947). This is in fact the particle produced by the nuclear forces, as Yukawa had proposed, and its discovery finally provided the nuclear “glue”. However, this was not the end of the gold mine.

The decay-chain

The discovery by Lattes, Muirhead, Occhialini and Powell allowed the observation of the complete decay-chain π → μ → e, and this became the basis for understanding the real nature of the cosmic-ray particles observed during 1937–1939, which Conversi, Pancini and Piccioni had proved to have no nuclear coupling with matter. The gold mine not only contained the π meson but also the μ meson. This opened a completely unexpected new field, the world of particles known as leptons, the first member being the electron. The second member, the muon (μ), is no longer called a meson, but is now correctly called a lepton. The muon has the same electromagnetic properties as the electron, but a mass about 200 times greater and no nuclear charge. This incredible property prompted Isidor Rabi to make the famous statement “Who ordered that?”, as reported by T D Lee (Barnabei et al. 1998).

In the 1960s, it became clear that there would not be so many muons were it not for the π meson. Indeed, if another meson like the π existed in the “heavy” mass region, a third lepton – heavier than the muon – would not have been so easily produced in the decays of the heavy meson because this meson would decay strongly into many π mesons. The remarkable π–μ case was unique. So, the absence of a third lepton in the many final states produced in high-energy interactions at proton accelerators at CERN and other laboratories was not to be considered a fundamental absence, but a consequence of the fact that a third lepton could only be produced via electromagnetic processes, as for example via time-like photons in pp̄ or e⁺e⁻ annihilation. The uniqueness of the π–μ case therefore sparked the idea of searching for a third lepton in the appropriate production processes (Barnabei et al. 1998).

Once again, this was still not the end of the gold mine, as understanding the decay of Yukawa’s particle led to the field of weak forces. The discovery of the leptonic world opened the problem of the universal Fermi interactions, which became a central focus of the physics community in the late 1940s. Lee, M Rosenbluth and C N Yang proposed the existence of an intermediate boson, called W as it was the quantum of the weak forces. This particle later proved to be the source of the breaking of parity (P) and charge conjugation (C) symmetries in weak interactions.

In addition, George Rochester and Clifford Butler in Patrick Blackett’s laboratory in Manchester discovered another meson – later dubbed “strange” – in 1947, the same year as the discovery of the π meson. This meson, called θ, decayed into two pions. It took nearly 10 years to find out that the θ and another meson called τ, with equal mass and lifetime but decaying into three pions, were not two different mesons but two different decay modes of the same particle, the K meson. Lee and Yang solved the famous θ–τ puzzle in 1956, when they proved that no experimental evidence existed to establish the validity of P and C invariance in weak interactions; the experimental evidence came immediately after.

The violation of P and C generated the problem of PC conservation, and therefore that of time-reversal (T) invariance (through the PCT theorem). This invariance law was proposed by Lev Landau, while Lee, Reinhard Oehme and Yang remarked on the lack of experimental evidence for it. Proof that they were on the right experimental track came in 1964, when James Christenson, James Cronin, Val Fitch and René Turlay discovered that the meson called K⁰₂ also decayed into two Yukawa mesons. Rabi’s famous statement became “Who ordered all that?” – “all” being the rich contents of the Yukawa gold mine.

A final comment on the “decay seam” of the gold mine concerns the decay of the neutral Yukawa meson, π⁰ → γγ. This generated the ABJ anomaly, the celebrated chiral anomaly named after Stephen L Adler, John Bell and Roman Jackiw, which had remarkable consequences in the area of non-Abelian forces. One of these is the “anomaly-free condition”, an important ingredient in theoretical model building, which explains why the number of quarks in the fundamental fermions must equal the number of leptons. This allowed the theoretical prediction of the heaviest quark – the top quark, t – in addition to the b quark in the third family of elementary fermions.

Intrinsic structure

The Yukawa particle is made of a pair of the lightest, nearly massless, elementary fermions: the up and down quarks. This allows us to understand why chirality-invariance – a global symmetry property – should exist in strong interactions. It is the spontaneous breaking of this global symmetry that generates the Nambu–Goldstone boson. The intrinsic structure of the Yukawa particle needs the existence of a non-Abelian fundamental force – the QCD force – acting between the constituents of the π meson (quarks and gluons) and originating in a gauge principle. Thanks to this principle, the QCD quantum is a vector and does not destroy chirality-invariance.

To understand the non-zero mass of the Yukawa meson, another feature of the non-Abelian force of QCD had to exist: instantons. Thanks to instantons, chirality-invariance can also be broken in a non-spontaneous way. If this were not the case, the π could not be as “heavy” as it is; it would have to be nearly massless. So, can a pseudoscalar meson exist with a mass as large as that of the nucleon? The answer is “yes”: its name is η’ and it represents the final point in the gold mine started with the π meson. Its mass is not intermediate, but is nearly the same as the nucleon mass.

The η’ is a pseudoscalar meson, like the π, and was originally called X⁰. Very few believed that it could be a pseudoscalar because its mass and width were too big and there was no sign of its 2γ decay mode. This missing decay mode initially prevented the X⁰ from being considered the ninth singlet member of the pseudoscalar SU(3) uds flavour multiplet of Murray Gell-Mann and Yuval Ne’eman. However, the eventual discovery of the 2γ decay mode strongly supported the pseudoscalar nature of the X⁰, and once this was established, its gluon content was predicted theoretically through QCD instantons.

If the η’ has an important gluon component, we should expect to see a typical QCD non-perturbative effect: leading production in gluon-induced jets. This is exactly the effect that has been observed in the production of the η’ mesons in gluon-induced jets, while it is not present in η production (figure 1).

The interesting point here is that it appears that the η’ is the lowest pseudoscalar state having the most important contribution from the quanta of the QCD force. The η’ is thus the particle most directly linked with the original idea of Yukawa, who was advocating the existence of a quantum of the nuclear force field; the η’ is the Yukawa particle of the QCD era. Seventy-two years after Yukawa’s original idea we have found that his meson, the π, has given rise to a fantastic development in our thinking, the last step being the η’ meson.

The quark–gluon-coloured world

There is still a further lesson from Yukawa’s gold mine: the impressive series of totally unexpected discoveries. Let me quote just three of them, starting with the experimental evidence for a cosmic-ray particle that was believed to be Yukawa’s meson, which turned out to be a lepton: the muon. Then, the decay-chain π → μ → e was found to break the symmetry laws of parity and charge conjugation. Third, the intrinsic structure of the Yukawa particle was found to be governed by a new fundamental force of nature, the strong force described by QCD.

This is perfectly consistent with the great steps in physics: all totally unexpected. Such totally unexpected events, which historians call Sarajevo-type effects, characterize “complexity”. A detailed analysis shows that the experimentally observable quantities that characterize complexity in a given field exist in physics; the Yukawa gold mine is proof of this. This means that complexity exists at the fundamental level, and that totally unexpected effects should show up in physics – effects that are impossible to predict on the basis of present knowledge. Where these effects are most likely to be, no one knows. All we are sure of is that new experimental facilities are needed, like those that are already under construction around the world.

With the advent of the LHC at CERN, it will be possible to study the properties of the quark–gluon coloured world (QGCW). This is totally different from our world made of QCD vacuum with colourless baryons and mesons because the QGCW contains all the states allowed by the SU(3)c colour group. To investigate this world, Yukawa would tell us to search for specific effects arising from the fact that the colourless condition is avoided. Since the colourless condition is not needed, the number of possible states in the QGCW is far greater than the number of colourless baryons and mesons that have been built so far in all our laboratories.

So, a first question is: what are the consequences for the properties of the QGCW? A second question concerns light quarks versus heavy quarks. Are the coloured quark masses in the QGCW the same as the values we derive from the fact that baryons and mesons need to be colourless? It could be that all six quark flavours are associated with nearly “massless” states, similar to those of the u and d quarks. In other words, the reason why the top quark appears to be so heavy (around 200 GeV) could be the result of some as-yet-unknown condition related to the fact that the final state must be QCD-colourless. We know that confinement produces masses of the order of a giga-electron-volt. Therefore, according to our present understanding, the QCD colourless condition cannot explain the heavy quark mass. However, since the origin of the quark masses is still not known, it cannot be excluded that in a QCD coloured world, the six quarks are all nearly massless and that the colourless condition is “flavour” dependent. If this were the case, QCD would not be “flavour-blind” and this would be the reason why the masses we measure are heavier than the effective coloured quark masses. In this case, all possible states generated by the “heavy” quarks would be produced in the QGCW at a much lower temperature than is needed in our world made of baryons and mesons, i.e. QCD colourless states. Here again, we should try to see if new effects could be detected due to the existence, at relatively low temperatures in QGCW physics, of all flavours, including those that might exist in addition to the six so far detected.

A third question concerns effects on the thermodynamic properties of the QGCW. Will these properties follow “extensive” or “non-extensive” thermodynamics? With the enormous number of QCD open-colour states allowed in the QGCW, many different phase transitions could take place and a vast variety of complex systems should show up. The properties of this “new world” should open unprecedented horizons in understanding the ways of nature’s logic.

A fourth, related problem would be to derive the equivalent of the Stefan–Boltzmann radiation law for the QGCW. In classical thermodynamics, the relation between the energy density at emission, U, and the temperature of the source, T, is U = sT⁴, where s is a constant. In the QGCW the corresponding quantities should be the transverse momentum, pT, in place of U, and the average energy in the centre-of-momentum system, E, in place of T. The production of “heavy” flavours in the QGCW could then be studied as a function of pT and E. The expectation is that pT = cE⁴, where c is a constant, and any deviation would be extremely important. The study of the properties of the QGCW should produce the correct mathematical structure to describe the QGCW. The same mathematical formalism should allow us to go from the QGCW to the physics of baryons and mesons and from there to a more restricted component, namely nuclear physics, where all properties of the nuclei should finally find a complete description.
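
The expectation can be phrased as a simple scaling test. The sketch below (with invented numbers, and assuming the correspondence U → pT and T → E described above) fits the exponent of a power law to hypothetical heavy-flavour production data, so that any deviation from the Stefan–Boltzmann-like value of 4 could be quantified:

# Hypothetical sketch: test the proposed QGCW analogue of the
# Stefan-Boltzmann law, pT = c * E**4.  The data points are invented
# placeholders, not measurements.
import numpy as np

E  = np.array([1.0, 2.0, 3.0, 5.0, 8.0])          # average CM energy (arbitrary units)
pT = np.array([0.9, 15.0, 83.0, 640.0, 4000.0])   # mean transverse momentum (same units)

# Fit log(pT) = log(c) + n*log(E); the proposed law predicts n = 4.
n, logc = np.polyfit(np.log(E), np.log(pT), 1)
print(f"fitted exponent n = {n:.2f}, c = {np.exp(logc):.2f}")
print("deviation from the Stefan-Boltzmann-like exponent 4:", round(n - 4.0, 2))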

The last lesson from Yukawa

With the advent of the LHC, the development of a new technology should be able to implement collisions between different particle states (p, n, π, K, μ, e, γ, ν) and the QGCW in order to study the properties of this new world. Figure 2 gives an example of how to study the QGCW, using beams of known particles. A special set of detectors measures the properties of the outgoing particles. The QGCW is produced in a collision between heavy ions (²⁰⁸Pb⁸²⁺) at the maximum energy available, i.e. 1150 TeV and a design luminosity of 10²⁷ cm⁻² s⁻¹. For this to be achieved, CERN needs to upgrade the ion injector chain comprising Linac3, the Low Energy Ion Ring (LEIR), the PS and the SPS. Once the lead–lead collisions are available, the problem will be to synchronize the “proton” beam with the QGCW produced. This problem is being studied at the present time. The detector technology is also under intense R&D since the synchronization needed is at a very high level of precision.

Totally unexpected effects should show up if nature follows complexity at the fundamental level. However, as with Yukawa’s gold mine that was first opened 72 years ago, new discoveries will only be made if the experimental technology is at the forefront of our knowledge. Cloud chambers, photographic emulsions, high-power magnetic fields, and powerful particle accelerators and associated detectors were needed for all the unexpected discoveries linked to Yukawa’s particle. This means that we must be prepared with the most advanced technology for the discovery of totally unexpected events. This is Yukawa’s last lesson for us.

This article is based on a plenary lecture given at the opening session of the Symposium for the Centennial Celebration of Hideki Yukawa at the International Nuclear Physics Conference in Tokyo, 3–8 June 2007.

PHYSTAT-LHC combines statistics and discovery

With the LHC due to start running next year, the PHYSTAT-LHC workshop on Statistical Issues for LHC Physics provided a timely opportunity to discuss the statistical techniques to be used in the various LHC analyses. The meeting, held at CERN on 27–29 June, attracted more than 200 participants, almost entirely from the LHC experiments.

The PHYSTAT series of meetings began at CERN in January 2000 and has addressed various statistical topics that arise in the analysis of particle-physics experiments. The meetings at CERN and Fermilab in 2000 were devoted to the subject of upper limits, which are relevant when an experiment fails to observe an effect and the team attempts to quantify the maximum size the effect could have been, given that no signal was observed. By contrast, with exciting results expected at the LHC, the recent meeting at CERN focused on statistical problems associated with quantifying claims of discoveries.

The invited statisticians, though few in number, were all-important participants – a tradition started at the SLAC meeting in 2003. Sir David Cox of Oxford, Jim Berger of Duke University and the Statistical and Applied Mathematical Sciences Institute, and Nancy Reid of Toronto University all spoke at the meeting, and Radford Neal, also from Toronto, gave his talk remotely. All of these experts are veterans of previous PHYSTAT meetings and are familiar with the language of particle physics. The meeting was also an opportunity for statisticians to visit the ATLAS detector and understand better why particle physicists are so keen to extract the maximum possible information from their data – we devote much time and effort to building and running our detectors and accelerators.

The presence of statisticians greatly enhanced the meeting, not only because their talks were relevant, but also because they were available for informal discussions and patiently explained statistical techniques. They gently pointed out that some of the “discoveries” of statistical procedures by particle physicists were merely “re-inventions of the wheel” and that some of our wheels resemble “triangles”, instead of the already familiar “circular” ones.

The meeting commenced with Cox’s keynote address, The Previous 50 Years of Statistics: What Might Be Relevant For Particle Physics. In particular, he discussed multiple testing and the false discovery rate – particularly relevant in general-purpose searches, where there are many possibilities for a statistical fluctuation to be confused with a discovery of some exciting new physics. Cox reminded the audience that it is more important to ask the correct question than to perform a complicated analysis, and that, when combining data, one should first check that they are not inconsistent.

One approach to searching for signs of new physics is to look for deviations from the Standard Model. This is usually quantified by calculating the p-value, which gives the probability – assuming the Standard Model is true – of finding data at least as discrepant as that observed. Thus a small p-value implies an inconsistency between the data and the Standard Model prediction. This approach is useful in looking for any sort of discrepancy. The alternative is to compare the predictions of the Standard Model with some specific alternative, such as a particular version of supersymmetry. This is a more powerful way of looking for this particular form of new physics, but is likely to be insensitive to other possibilities.
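
As a concrete illustration (a minimal sketch with invented numbers, not taken from any LHC analysis), the p-value for a simple counting experiment is the Poisson probability of observing at least as many events as seen when only the Standard Model background is expected, and it can be converted into an equivalent Gaussian significance:

# Illustrative p-value for a counting experiment: expected background b,
# observed n events; p = P(N >= n | background only).
from scipy import stats

b, n = 3.2, 11                        # hypothetical expected background and observed count
p_value = stats.poisson.sf(n - 1, b)  # P(N >= n) = survival function evaluated at n-1
z = stats.norm.isf(p_value)           # equivalent one-sided Gaussian significance
print(f"p-value = {p_value:.2e}, significance = {z:.1f} sigma")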

Luc Demortier of Rockefeller University gave an introduction to the subject of p-values, and went on to discuss ways of incorporating systematic uncertainties in their calculation. He also mentioned that, in fitting a mass spectrum to a background-only hypothesis and to a background plus a 3-parameter peak, it is a common misconception that in the absence of any signal, the difference in χ² of the two fits behaves as a χ² with 3 degrees of freedom. The talk by Kyle Cranmer of Brookhaven National Laboratory dealt with the practical issues of looking for discoveries at the LHC. He too described various methods of incorporating systematics to see whether a claim of 5σ statistical significance really does correspond to such a low probability. The contributed talk by Jordan Tucker from University of California, Los Angeles (UCLA) also explored this.
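
Demortier’s point can be checked by simulation. The toy sketch below (with invented parameters, not from the talk) fits a background-only and a background-plus-Gaussian-peak model to many background-only pseudo-experiments and compares the tail of the Δχ² distribution with that of a χ² with three degrees of freedom; the two generally disagree because the peak position and width are undefined under the background-only hypothesis.

# Toy check of the Delta-chi^2 caveat, using invented numbers.
import numpy as np
from scipy.optimize import curve_fit
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 40)        # bin centres of a "mass" spectrum
b_true, sigma_y = 50.0, 7.0           # flat background level and per-bin Gaussian error

def bkg_peak(x, b, a, mu, w):
    return b + a * np.exp(-0.5 * ((x - mu) / w) ** 2)

delta_chi2 = []
for _ in range(500):
    y = rng.normal(b_true, sigma_y, size=x.size)
    chi2_b = np.sum((y - y.mean()) ** 2) / sigma_y**2   # best flat fit is the mean
    best = chi2_b
    for mu0 in (2.5, 5.0, 7.5):                         # crude scan over peak positions
        try:
            p, _ = curve_fit(bkg_peak, x, y, p0=[y.mean(), 5.0, mu0, 0.5])
            best = min(best, np.sum((y - bkg_peak(x, *p)) ** 2) / sigma_y**2)
        except RuntimeError:
            pass
    delta_chi2.append(chi2_b - best)

q = 7.81                                                # 95% point of chi^2(3 dof)
print("fraction of toys with Delta-chi^2 > 7.81:", np.mean(np.array(delta_chi2) > q))
print("naive chi^2(3 dof) expectation:", 1 - stats.chi2.cdf(q, 3))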

While the LHC is being completed, the CDF and DØ experiments are taking data and analysing it at the Fermilab collider, which currently provides the highest accelerator energies. Fermilab’s Wade Fisher summarized the experience gained from searches for new phenomena. Later, speakers from the major LHC experiments discussed their “statistical wish-lists” – topics on which they would like advice from statistics experts. Jouri Belikov of CERN spoke for the ALICE experiment, Yuehong Xie of Edinburgh University for LHCb and Eilam Gross of the Weizmann Institute for ATLAS and CMS. It was particularly pleasing to see the co-operation between the big general-purpose experiments ATLAS and CMS on this and other statistical issues. The Statistics Committees of both work in co-operation and both experiments will use the statistical tools that are being developed (see below). Furthermore, it will be desirable to avoid the situation where experiments make claims of different significance for their potential discoveries, not because their data are substantially different, but simply because they are not using comparable statistical techniques. Perhaps PHYSTAT can claim a little of the credit for encouraging this collaboration.

When making predictions for the expected rate at which any particle will be produced at the LHC, it is crucial to know the way that the momentum of a fast-moving proton is shared among its constituent quarks and gluons (known collectively as partons). This is because the fundamental interaction in which new particles are produced is between partons in the colliding protons. This information is quantified in the parton distribution functions (PDFs), which are determined from a host of particle-physics data. Robert Thorne of University College London, who has been active in determining PDFs, explained the uncertainties associated with these distributions and the effect that they have on the predictions. He stressed that other effects, such as higher-order corrections, also resulted in uncertainties in the predicted rates.

Statisticians have invested much effort on “experimental design”. A typical example might be how to plan a series of tests investigating the various factors that might affect the efficiency of some production process; the aim would be to determine which factors are the most critical and to find their optimal settings. One application for particle physicists is to decide how to set the values of parameters associated with systematic effects in Monte Carlo simulations; the aim here is to achieve the best accuracy of the estimate of systematic effects with a minimum of computing. Since a vast amount of computing time is used for these calculations, the potential savings could be very useful. The talks by statisticians Reid and Neal, and by physicist Jim Linnemann of Michigan State University, addressed this important topic. Plans are underway to set up a working group to look into this further, with the aim of producing recommendations on this issue.

Two very different approaches to statistics are provided by the Bayesian and frequentist approaches (see box). Berger gave a summary of the way in which the Bayesian method could be helpful. One particular case that he discussed was model selection, where Bayesianism provides an easy recipe for using data to choose between two or more competing theories.
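
For a flavour of what this means in practice, here is a minimal sketch (with invented inputs) of Bayesian model selection for a counting experiment: the Bayes factor is the ratio of the marginal likelihoods of the two hypotheses, obtained by integrating the likelihood over the prior for any free parameter.

# Schematic Bayes factor: H0 = background only (b known),
# H1 = background + signal s with a flat prior on s in [0, s_max].
# All numbers are invented for illustration.
from scipy import stats, integrate

n_obs, b, s_max = 9, 3.0, 20.0

m0 = stats.poisson.pmf(n_obs, b)   # marginal likelihood of H0
m1, _ = integrate.quad(lambda s: stats.poisson.pmf(n_obs, b + s) / s_max, 0.0, s_max)

print(f"Bayes factor B10 = {m1 / m0:.1f} in favour of the signal hypothesis")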

With the required software for statistical analyses becoming increasingly complicated, Wouter Verkerke from NIKHEF gave an overview of what is currently available. A more specific talk by Lorenzo Moneta of CERN described the statistical tools within the widely used ROOT package developed by CERN’s René Brun and his team. An important topic in almost any analysis in particle physics is the use of multivariate techniques for separating the wanted signal from undesirable background. There is a vast array of techniques for doing this, ranging from simple cuts via various forms of discriminant analysis to neural networks, support vector machines and so on. Two largely complementary packages are the Toolkit for Multivariate Data Analysis and StatPatternRecognition. These implement a variety of methods that facilitate comparison of performance, and were described by Frederik Tegenfeldt of Iowa State University and Ilya Narsky of California Institute of Technology, respectively. It was reassuring to see such a range of software available for general use by all experiments.

Although the theme of the meeting was the exciting discovery possibilities at the LHC, Joel Heinrich of University of Pennsylvania returned to the theme of upper limits. At the 2006 meeting in Banff, Heinrich had set up what became known as the “Banff challenge”. This consisted of providing data with which anyone could try out their favourite method for setting limits. The problem included some systematic effects, such as background uncertainties, which were constrained by subsidiary measurements. Several groups took up the challenge and provided their upper limit values to Heinrich. He then extracted performance figures for the various methods. It seemed that the “profile likelihood” did well. A talk by Paul Baines of Harvard University described the “matching prior” Bayesian approach.
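
For readers unfamiliar with the term, a profile likelihood treats the signal strength as the parameter of interest and re-minimizes over nuisance parameters (here the background) at each trial value. A minimal sketch for a counting experiment with a background constrained by a subsidiary measurement (all inputs invented, and using the asymptotic 2.71 threshold for a one-sided 95% CL) might look like this:

# Sketch of a profile-likelihood upper limit: n ~ Poisson(s + b),
# with b constrained by a subsidiary count m ~ Poisson(tau * b).
import numpy as np
from scipy import stats, optimize

n, m, tau = 5, 12, 4.0      # invented main count, subsidiary count, scale factor

def nll(s, b):
    return -(stats.poisson.logpmf(n, s + b) + stats.poisson.logpmf(m, tau * b))

def profiled_nll(s):
    res = optimize.minimize_scalar(lambda b: nll(s, b), bounds=(1e-6, 50.0), method="bounded")
    return res.fun

s_grid = np.linspace(0.0, 25.0, 500)
curve = np.array([profiled_nll(s) for s in s_grid])
nll_min, s_best = curve.min(), s_grid[curve.argmin()]

above = s_grid[(2.0 * (curve - nll_min) > 2.71) & (s_grid > s_best)]
print("approximate 95% CL upper limit on s:", above[0] if above.size else "not reached")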

Unlike the talks that sometimes conclude meetings, the summary talk by Bob Cousins of UCLA was really a review of the talks presented at the workshop. He had put an enormous amount of work into reading through all of the available presentations and then giving his own comments on them, usefully putting the talks of this workshop in the context of those presented at earlier PHYSTAT meetings.

Overall, the quality of the invited talks at PHYSTAT-LHC was impressive. Speakers went to great lengths to make their talks readily intelligible: physicists concentrated on identifying statistical issues that need clarification; statisticians presented ideas that can lead to improved analysis. There was also plenty of vigorous discussion between sessions, leading to the feeling that meetings such as these really do lead to an enhanced understanding of statistical issues by the particle-physics community. Gross coined the word “phystatistician” for particle physicists who could explain the difference between the probability of A, given that B had occurred, compared with the probability of B, given that A had occurred. When the LHC starts up in 2008, it will be time to put all of this into practice. The International Committee for PHYSTAT concluded that the workshop was successful enough that it was worth considering a further meeting at CERN in summer 2009, when real LHC data should be available.

ILL celebrates 40 years in the service of science

The Institut Laue-Langevin was founded on 19 January 1967 with the signing of an agreement between the governments of the French Republic and the Federal Republic of Germany. In recognition of this dual nationality it was named jointly after the two physicists Max von Laue of Germany and the Frenchman Paul Langevin. The aim was to create an intense source of neutrons devoted entirely to civil fundamental research. The facility was given the status of “service institute” and was the first of its kind in the world. It was to be the world’s leading facility in neutron science and technology, offering the scientific community a combination of unrivalled performance levels and unique scientific instrumentation in the form of a very large cold neutron source equipped with 10 neutron guides – each capable of providing three or four instruments with a very high-intensity neutron flux.

The construction of the institute and its high-flux reactor in Grenoble represented an overall investment of 335 million French francs (1967 prices, equivalent to about €370 million today) and was jointly undertaken by France and Germany. The reactor went critical in August 1971 and reached its full power of 57 MW in December that same year. Since then, the high-flux reactor has successfully maintained its position as the most powerful neutron source for scientific research in the world.

The UK joined the two founding countries in 1973, becoming the institute’s third full associate member. Over the following years, the level of international co-operation has gradually been extended, with a succession of “scientific membership” agreements with Spain (1987), Switzerland (1988), Austria (1990), Russia (1996), Italy (1997), the Czech Republic (1999), Sweden and Hungary (2005), and Belgium and Poland (2006).

The ILL is operated jointly by France, Germany and the UK, and has an annual budget of around €74 million. It currently employs around 450 staff, including 70 scientists, about 20 PhD students, more than 200 technicians, 60 reactor operations and safety specialists, and around 50 administrative staff; the breakdown by nationality is 65% French, 12% German and 12% British.

The ILL is a unique research tool for the international scientific community, giving scientists access to a constantly upgraded suite of high-performance instruments arranged around a powerful neutron source. More than 1500 European scientists come to the institute each year and conduct an average of 750 experiments. Once their experiment proposal has been accepted by the Scientific Council, scientists are invited to the institute for an average period of one week.

The fields of research are primarily focused on fundamental science and are extremely varied, including condensed matter physics, chemistry, biology, nuclear and particle physics and materials science. Thanks to the combination of an intense neutron flux and the availability of a wide range of energies (from 10⁻⁷ eV to 10⁵ eV), researchers can examine a whole range of subjects. The samples that are studied can weigh anything from a tenth of a milligram to several tonnes. Most of the experiments use neutrons as a probe to study various physical or biological systems, while others examine the properties of the neutrons themselves. It is here that there is the most overlap with the physics of elementary particles and where low-energy physics can help to tackle and solve problems usually associated with high-energy physics experiments, as the following selected highlights indicate.

Neutron basics

The neutron is unstable and decays into a proton together with an electron and an anti-neutrino. Its lifetime, τₙ, is one of the key quantities in primordial nucleosynthesis. It determines how many neutrons were available about three minutes after the Big Bang, when the universe had sufficiently cooled down for light nuclei to form from protons and neutrons. It therefore has a strong influence on the abundance of the primordial chemical elements (essentially ¹H, ⁴He, ²H, ³He and ⁷Li).

Twenty years ago, in an experiment at ILL, Walter Mampe and colleagues achieved a new level of precision in measuring τₙ by storing ultracold neutrons in a fluid-walled bottle and counting the number of neutrons remaining in the bottle for various storage times. The experiment used a hydrogen-free oil to coat the walls in order to minimize the loss of neutrons in reflections at the walls. The final result of τₙ = 887.6 ± 3 s was the first to reach a precision well below 1% (Mampe et al. 1989). This value has found a number of applications, in particular making it possible to derive the number, N, of light neutrino types from a comparison of the observed element abundances with the predicted ones. The argument depends on the relationship between N and the expansion rate of the universe during nucleosynthesis: the more light neutrinos that contribute to the energy density, the faster the expansion, leading to different element abundances. The result from ILL led in turn to a value of N = 2.6 ± 0.3, which made a fourth light neutrino generation extremely unlikely (Schramm and Kawano 1989). The stringent direct test that N is indeed equal to 3 came soon after with a precision measurement of the width of the Z boson at the LEP collider at CERN.
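
The principle of such a bottle measurement is simple enough to show in a toy fit (the storage times and counts below are invented, not the 1989 data): the surviving neutrons follow N(t) = N₀ exp(−t/τₙ), so the lifetime comes from an exponential fit to the counts at different storage times. In the real experiment the residual wall losses must also be measured and corrected for, which is what the hydrogen-free oil coating was designed to minimize.

# Toy "bottle" lifetime fit: N(t) = N0 * exp(-t / tau), invented counts.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([100.0, 300.0, 600.0, 900.0, 1500.0])   # storage times (s)
N = np.array([44400, 35500, 25200, 18000, 9200])     # surviving neutrons (counts)

def model(t, N0, tau):
    return N0 * np.exp(-t / tau)

popt, pcov = curve_fit(model, t, N, p0=[50000.0, 900.0], sigma=np.sqrt(N), absolute_sigma=True)
print(f"fitted lifetime: tau = {popt[1]:.0f} +/- {np.sqrt(pcov[1, 1]):.0f} s")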

The neutron lifetime also feeds into the Standard Model of particle physics through the ratio of the weak vector and axial-vector coupling constants of the nucleon. Together with the Fermi coupling constant determined in muon decay, it can be used to determine the matrix element Vud of the Cabibbo–Kobayashi–Maskawa matrix. This, in turn, provides a possibility for testing the Standard Model at the low-energy frontier and is one of the continuing motivations to improve still further the measurements of the neutron lifetime and of decay asymmetries in experiments at the ILL and elsewhere.
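
Schematically, and suppressing the phase-space and radiative-correction factors, the relation being used here has the structure

\[
\frac{1}{\tau_n} \;\propto\; G_F^{2}\,|V_{ud}|^{2}\left(1 + 3\lambda^{2}\right),
\qquad \lambda \equiv \frac{g_A}{g_V},
\]

so a measured τₙ, combined with λ from the decay asymmetries, yields |Vud|.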

Another key property of the neutron for particle physics is the hypothetical electric-dipole moment (EDM), which has a high potential importance for physics beyond the Standard Model. The existence of an EDM in the neutron would violate time-reversal, T, and hence – through the CPT theorem – CP symmetry. The Standard Model predicts an immeasurably small neutron EDM, but most theories that attempt to incorporate stronger CP violation beyond the CKM mechanism predict values that are many orders of magnitude larger. An accurate measurement of the EDM provides strong constraints on such theories. A positive signal would constitute a major breakthrough in particle physics.

In 2006, a collaboration between the University of Sussex, the Rutherford Appleton Laboratory and the ILL announced a new tighter limit on the neutron’s EDM. Based on measurements using ultracold neutrons produced at the ILL, the upper limit on the absolute value was improved to 2.9 × 10⁻²⁶ e cm (Baker et al. 2006). The experiment stored neutrons in a trap permeated by uniform electric and magnetic fields and measured the ratios of neutron-to-mercury-atom precession frequencies; shifts in this ratio proportional to the applied electric field may in principle be interpreted as EDM signals.
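
Schematically, and leaving aside the mercury co-magnetometer correction, the EDM is extracted from the change in the spin-precession frequency when the electric field is reversed relative to the magnetic field:

\[
h\nu_{\pm} = 2\mu_n B \pm 2 d_n E
\quad\Longrightarrow\quad
d_n = \frac{h\,(\nu_{+} - \nu_{-})}{4E},
\]

so a frequency shift proportional to the applied electric field translates directly into a value of, or a limit on, dn.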

E really does equal mc²

Particle physics makes daily use of the relationship between mass and energy, expressed in Albert Einstein’s famous equation. An experiment at the ILL in 2005, combined with one at the Massachusetts Institute of Technology (MIT), made the most precise direct test of the equation to date, with researchers at ILL measuring energy, E, while the team at MIT measured the related mass, m. The test was based on the fact that when a nucleus captures a neutron, the resulting isotope with mass number A+1 is lighter than the sum of the masses of the original nucleus with mass number A and the free neutron. The energy equivalent to this mass difference is emitted as gamma-rays, the wavelength of which can be accurately measured by Bragg diffraction from perfect crystals.

The team at MIT used a novel experimental technique to measure the masses of pairs of isotopes by comparing the cyclotron frequencies of the two different ions confined together in a Penning trap. They performed two separate experiments, one with ²⁸Si and ²⁹Si, the other with ³²S and ³³S, leading to mass differences with a relative uncertainty of about 7 × 10⁻⁸ in both cases. At the ILL, a team from the National Institute of Standards and Technology measured the energies of the gamma-rays emitted after neutron capture by both ²⁸Si and ³²S using the world’s highest-resolution double-crystal gamma-ray spectrometer, GAMS4. The combination of the high neutron flux available at the ILL reactor and the energy accuracy of the GAMS4 instrument allowed the team to determine the gamma-ray energies with a precision of better than 5 parts in 10,000,000. By combining the mass differences measured in America and the energy measurements in Europe, it was possible to test Einstein’s equation, with a result of 1 − Δmc²/E = (–1.4 ± 4.7) × 10⁻⁷ – which is 55 times more accurate than the previous best measurements (Rainville et al. 2005).
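
The structure of the comparison can be sketched in a few lines (the numerical inputs below are placeholders chosen only to show the arithmetic, not the published Si and S data): the Penning-trap side supplies the mass difference Δm, the GAMS4 side supplies the γ-ray wavelengths, and the test is the quantity 1 − Δmc²/E.

# Sketch of how the E = mc^2 comparison is assembled.
# The numerical inputs are placeholders, NOT the published Si/S measurements.
import scipy.constants as const

u_MeV = const.physical_constants["atomic mass constant energy equivalent in MeV"][0]

delta_m_u = 9.10e-3                     # hypothetical (M_A + m_n - M_{A+1}) in atomic mass units
wavelengths_m = [3.500e-13, 2.515e-13]  # hypothetical gamma-ray wavelengths from Bragg diffraction

E_mass = delta_m_u * u_MeV * 1e6 * const.e                       # mass-difference energy (J)
E_gamma = sum(const.h * const.c / lam for lam in wavelengths_m)  # summed gamma-ray energies (J)

print("1 - (delta m) c^2 / E =", 1.0 - E_mass / E_gamma)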

The German physicist Max von Laue (1879–1960) received the Nobel Prize in Physics in 1914, in Stockholm, for demonstrating the diffraction of X-rays by crystals. This discovery revealed the wave nature of X-rays by enabling the first measurements of wavelength and showing the organization of atoms in a crystal. It is at the origin of all analysis methodology based on diffraction using X-rays, synchrotron light, electrons or neutrons.

Paul Langevin (1879–1946) was an eminent physicist from the pioneering French team of atomic researchers, which included Pierre and Marie Curie. A specialist in magnetism, ultrasonics and relativity, he also dedicated 40 years of his life to his responsibilities as director of Paris’s Ecole de Physique et de Chimie. His study of the moderation of rapid neutrons, i.e. how they are slowed by collisions with atoms, was invaluable for the design of the first research reactors.

These three examples give only a tiny glimpse into one aspect of the science that is undertaken at the ILL each year, illustrating the possibilities for testing theories in particle physics and cosmology. The institute looks forward to the next 40 years of fruitful investigations and important results in these fields as well as across many other areas of science.

Polarized hyperons probe dynamics of quark spin

A continuing mystery in nuclear and particle physics is the large polarization observed in the production of Λ hyperons in high-energy, proton–proton interactions. These effects were first reported in the 1970s in reactions at incident proton momenta of several hundred GeV/c, where experiments measured surprisingly strong hyperon polarizations of around 30% (Heller 1997). Although the phenomenology of these reactions is now well known, the inability to distinguish between various competing theoretical models has hampered the field (Zuo-Tang and Boros 2000).

Two new measurements from the US Department of Energy’s Jefferson Lab in Virginia are now challenging existing ideas on quark spin dynamics through studies of beam-recoil spin transfer in the electro- and photoproduction of K⁺Λ final states from an unpolarized proton target. Analyses of the two experiments in Hall B at Jefferson Lab using the CLAS spectrometer (figure 1) have provided extensive results of spin transfer from the polarized incident photon (real or virtual) to the final-state Λ hyperon.

The results indicate that the Λ polarization is predominantly in the direction of the spin of the incoming photon, independent of the centre-of-mass energy or the production angle of the K⁺. Moreover, the photoproduction data show that, even where the transferred Λ polarization component along the photon direction is less than unity, the total magnitude of the polarization vector is equal to unity. Since these observations are not required by the kinematics of the reaction (except at extreme forward and backward angles), there must be some underlying dynamical origin.

Both analyses have proposed simple quark-based models to explain the phenomenology; however, they differ fundamentally in their description of the spin-transfer mechanism. In the electroproduction analysis, a simple model has been proposed from data using a 2.567 GeV longitudinally polarized electron beam (Carman et al. 2003). In this case a circularly polarized virtual photon (emitted by the polarized electron) strikes an oppositely polarized u quark inside the proton (figure 2a). The spin of the struck quark flips in direction according to helicity conservation and recoils from its neighbours, stretching a flux-tube of gluonic matter between them. When the stored energy in the flux-tube is sufficient, the tube is “broken” by the production of a strange quark–antiquark pair (the hadronization process).

In this simple model, the observed direction of the Λ polarization can be explained if it is assumed that the quark pair is produced with its two spins in opposite directions – anti-aligned – with the spin of the s quark aligned opposite to the final u quark spin. The resulting Λ spin, which is essentially the same as the s quark spin, is predominantly in the direction of the spin of the incident virtual photon. The spin anti-alignment of the ss̄ pair is unexpected, because according to the popular ³P₀ model, the quark–antiquark pair should be produced with vacuum quantum numbers (J = 0, S = 1, L = 1, i.e. spin-parity 0⁺), which means that their spins should be aligned two-thirds of the time (Barnes 2002). This could imply that this model for hadronization may not be as widely applicable as previously thought.
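
The “two-thirds” figure is just angular-momentum bookkeeping: coupling the pair’s spin S = 1 with L = 1 to J = 0 gives equal Clebsch–Gordan weight to each spin projection, as sketched here,

\[
|J{=}0,M{=}0\rangle \;=\; \sum_{m=-1}^{+1}\langle 1\,m;\,1\,{-m}\,|\,0\,0\rangle\,
|S{=}1,S_z{=}m\rangle\,|L{=}1,L_z{=}{-m}\rangle,
\qquad \bigl|\langle 1\,m;\,1\,{-m}\,|\,0\,0\rangle\bigr|^{2} = \tfrac{1}{3},
\]

so the spin-aligned components Sz = ±1 together carry a probability of 2/3.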

The new photoproduction analysis, with data using a circularly polarized real photon beam in the 0.5–2.5 GeV range, introduces a different model that can also explain the Λ polarization data. In this hypothesis, shown in figure 2b, the strange quark–antiquark pair is created in a ³S₁ configuration (J = 1, S = 1, L = 0, i.e. spin-parity 1⁻). Here, following the principle of vector-meson dominance, the real photon fluctuates into a virtual φ meson that carries the polarization of the incident photon. Therefore, the quark spins are in the direction of the spin of the photon before the hadronization interaction.

The s quark of the pair merges with the unpolarized di-quark within the target proton to form the Λ baryon. The s̄ quark merges with the remnant u quark of the proton to form a spinless K⁺ meson. In this model, the strong force, which rearranges the s and s̄ quarks into the Λ and K⁺, respectively, can precess the spin of the s quark away from the beam direction, but the s quark, and therefore the Λ, remains 100% polarized. This provides a natural explanation for the observed unit magnitude of the Λ polarization vector seen for the first time in the measurements by CLAS.

The model interpretations presented from the two viewpoints do not necessarily contradict each other. Both assume that the mechanism of spin transfer to the Λ hyperon involves a spectator spin-parity 0⁺ di-quark system. The difference is in the role of the third quark. Neither model specifies a dynamical mechanism for the process, namely the detailed mechanism for quark-pair creation in the first case or for quark-spin precession in the second. If we take the gluonic degrees of freedom into consideration, the model proposed in the electroproduction paper (Carman et al. 2003) can be realized in terms of a possible mechanism in which a colourless spin-parity 0⁻ two-gluon subsystem is emitted from the spectator di-quark system and produces the ss̄ pair (figure 2a). This is in conflict with the ³P₀ model, which requires a spin-parity 0⁺ exchange. To the same order of gluon coupling, the model interpretation proposed by the photoproduction analysis (Schumacher 2007) is the quark-exchange mechanism, which is again mediated by a two-gluon current. The amplitudes corresponding to these models may both be present in the production, in principle, and contribute at different levels depending on the reaction kinematics.

Extending these studies to the K*⁺Λ exclusive final state should be revealing. In the electroproduction model, the spin of the u quark is unchanged when switching from a pseudoscalar K⁺ to a vector K*⁺. If the ss̄ quark pair is produced with anti-aligned spins, the spin direction of the Λ should flip. On the other hand, in the photoproduction model the u quark in the kaon is only a spectator. Changing its spin direction – changing the K⁺ to a K*⁺ – should not change the Λ spin direction. Thus, there are ways to disentangle the relative contributions and to understand better the reaction mechanism and dynamics underlying the associated strangeness-production reaction. Analyses at CLAS are underway to extract the polarization transfer to the hyperon in the K*⁺Λ final state.

Beyond the studies of hyperon production, understanding the dynamics in a process of this sort can shed light on quark–gluon dynamics in a domain thought to be dominated by traditional meson and baryon degrees of freedom. These issues are relevant for a better understanding of strong interactions and hadroproduction in general, owing to the non-perturbative nature of QCD at these energies. We eagerly await further experimental studies and new theoretical efforts to understand which multi-gluonic degrees of freedom dominate in quark pair creation and their role in strangeness production, as well as the appropriate mechanism(s) for the dynamics of spin transfer in hyperon production.

Understanding flavour is the key to new physics

Résumé

Understanding flavour to access new physics

The physics of the different types (or flavours) of quarks has a strong impact on particle physics, and it could prove crucial for accessing physics beyond the Standard Model. The workshop "Flavour Physics in the LHC Era" held five sessions at CERN between November 2005 and March 2007 to examine how the field will evolve in the future. The aim was to outline a programme for flavour physics over the next decade, to examine new experimental proposals, and to address the relationship between the LHC and the flavour "factories" in terms of discovery potential and the exploration of new physics.

For many years, the interactions between quarks of different flavour and the phenomenon of CP violation – the non-invariance of weak interactions under combined charge-conjugation and parity transformations – have played an important role in particle physics. In 1963, a year before the observation of CP violation in KL → π⁺π⁻ decays, Nicola Cabibbo introduced the concept of flavour mixing. Ten years later, Makoto Kobayashi and Toshihide Maskawa discovered that quark-flavour mixing allows the accommodation of CP violation in the framework of the Standard Model, provided that there are at least three different replicas – or generations – of the fermion content of this theory. Sheldon Glashow, John Iliopoulos and Luciano Maiani had already introduced the charm quark in 1970 to suppress the flavour-changing neutral currents, and Mary K Gaillard and Benjamin W Lee in 1974 estimated the mass of that quark with the help of the K⁰–K̄⁰ oscillation frequency. Then, in the 1980s, the large value of the top-quark mass was first suggested by the large B⁰d–B̄⁰d mixing seen in the ARGUS and UA1 experiments at DESY and CERN, respectively.

Flavour physics has since continued to progress, and flavour-changing neutral-current processes and CP-violating phenomena are still key targets for research because they may be sensitive to physics lying beyond the Standard Model. Experiments have revealed the particles of all three generations, and have established non-vanishing neutrino masses, leading to a rich flavour phenomenology in the lepton sector, pointing towards new physics. Studies on how the field will continue to progress formed the basis for the five meetings in a series of workshops on Flavour Physics in the LHC Era, which were held at CERN between November 2005 and March 2007.

The rise of the B meson

The kaon system dominated the exploration of the quark-flavour sector for more than 30 years. For the past decade, however, the B-meson system has been the key player. Thanks to the B-factories based on e⁺e⁻ colliders at SLAC and KEK, with their detectors BaBar and Belle, respectively, CP violation is now also definitely seen in B decays, where the "golden" decay B⁰d → J/ψKS shows CP-violating effects at the 70% level. These effects can be translated into the angle β of the unitarity triangle, which characterizes the Kobayashi–Maskawa mechanism of CP violation. Several strategies to determine the other angles of the triangle, α and γ, have been proposed and successfully applied to the data from the B-factories. After important first steps in experiments at LEP and at the SLAC Large Detector, the CDF and DØ collaborations at Fermilab’s Tevatron collider last year eventually measured the B⁰s–B̄⁰s oscillation frequency ΔMs. This spring, the B-factories reported evidence for D⁰–D̄⁰ mixing – the last missing meson–anti-meson mixing phenomenon.
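
For reference, the angle β quoted here is one of the angles of the triangle drawn from the CKM unitarity relation; in the usual conventions (a standard textbook summary, not something specific to the experiments),

\[
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0,
\qquad
\beta \equiv \arg\!\left(-\,\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right),
\]

and the time-dependent CP asymmetry of the golden mode B⁰d → J/ψKS is governed, to a good approximation, by sin2β sin(ΔMd t).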

So far, these results – together with intensive theoretical work – have shown that the Kobayashi-Maskawa mechanism of CP violation works remarkably well. This complements the precision tests of the gauge sector of the Standard Model and therefore highly constrains any scenario for new physics beyond the Standard Model. On the other hand, neutrino oscillations and the baryon asymmetry of the universe require sources of flavour mixing and CP violation beyond what is present in the Standard Model. This demands the continued exploration of flavour phenomena, improving the current accuracy and probing new observables.

When the LHC at CERN starts up next year, these efforts will be boosted because B-decay studies will be the main theme of the LHCb experiment (CERN Courier July/August 2007 p30). The ATLAS and CMS experiments will mostly focus on the properties of the top quark and on the direct search for new particles, which could themselves be the mediators of new flavour- and CP-violating interactions. The Bs-meson system is the new territory of the B-physics landscape that can be fully explored at the LHC; this was not accessible at the e⁺e⁻ B-factories operating at the Υ(4S) resonance. The experimental value of ΔMs is consistent with the Standard Model prediction, although this suffers from lattice-QCD uncertainties and still leaves much room for CP-violating new-physics contributions to B⁰s–B̄⁰s mixing, which could be detected at the LHC with the help of the B⁰s → J/ψφ decay. Bs-physics will also open new ways to determine the angle γ of the unitarity triangle.

These methods make use of pure "tree" decays (e.g. B⁰s → D±sK∓), on the one hand, and of decays with penguin contributions (e.g. B⁰s → K⁺K⁻) on the other. Moreover, the B⁰s → φφ channel will shed more light on possible new-physics contributions to the CP asymmetries of various b → s penguin modes, which may be indicated by the current B-factory data for B⁰d → π⁰KS, B⁰d → φKS and similar modes. Another key aspect of the LHC B-physics programme will be studies of strongly suppressed rare decays, such as Bs → μ⁺μ⁻, which could be highly enhanced through the impact of physics beyond the Standard Model.

Investigations of the extremely rare decays K⁺ → π⁺νν̄ and KL → π⁰νν̄ will complement the studies of Bs-physics. These are very clean from a theoretical point of view, but unfortunately hard to measure. Nevertheless, there is a proposal to take up this challenge and to measure the former channel at CERN’s SPS, and there are efforts to explore the latter – even more difficult decay – at the Japan Proton Accelerator Research Complex (J-PARC).

There are many other fascinating aspects of flavour; the D-meson system is an interesting example. The recently observed D⁰–D̄⁰ mixing can be accommodated in the Standard Model, but suffers from large theoretical uncertainties. New physics may actually be hiding there and could be unambiguously detected through CP-violating effects. Other important flavour probes are offered by the physics of top and by flavour violation in the neutrino and charged-lepton sectors. For the latter, an investigation of the lepton-flavour-violating decay μ → eγ is about to start this year with the MEG experiment at the Paul Scherrer Institute (CERN Courier July/August 2004 p21). In addition, studies of μ → e conversion are proposed at Fermilab and J-PARC. Further studies using τ decays at the LHC and at a possible future super-B-factory will be important in this area. Finally, continued searches for electric dipole moments and measurements of the anomalous magnetic moment of the muon are essential parts of the future experimental programme.

Towards the LHC era

As the start of the LHC rapidly approaches, there is one burning question: what is the synergy between the plentiful information following from analyses of the flavour sector and the high-Q² programme of the ATLAS and CMS experiments? The extended workshop at CERN focused on this topic and received remarkable interest from the particle-physics community, attracting more than 200 participants from around the world. The workshop followed the standard CERN format with three working groups devoted to the flavour aspects of high-Q² collider physics; the physics of the B, K and D meson systems; and flavour physics in the lepton sector. This framework allowed many new studies to be performed. The goals of the workshop were to outline and document a programme for flavour physics for the next decade, to discuss new experimental proposals and to address the complementarity and synergy between the LHC and the flavour factories with respect to the potential for discovery and the exploration of new physics. In this context, detailed discussions took place on two proposals for an e⁺e⁻ super-B-factory, one at KEK and one near Frascati. Such a "super-flavour factory" would allow for precision experiments in quark and lepton flavour physics by accessing the B, the τ and the charm sectors. The final meeting complemented this discussion with a review of upgrade plans for LHCb.

The workshop confirmed that flavour physics is an essential element in the further testing of the Standard Model, which should reveal inconsistencies in an unambiguous manner. Should particles associated with new physics be produced at the LHC, studies of flavour physics will play a key role, helping to uncover the underlying new physics, to study the properties of the new-physics particles and to detect or exclude new sources of CP violation and flavour structures.

The detailed results and conclusions of the workshop will be published as a CERN report (in preparation). For further information, see the workshop homepage at http://cern.ch/flavlhc.

Electron cooling: challenges at RHIC

Discoveries at RHIC, at the Brookhaven National Laboratory (BNL), have captured worldwide attention. These findings have raised compelling new questions about the theory that describes the interactions of the smallest known components of the atomic nucleus. To address these questions at RHIC, we need to study rare processes. To do this, we must increase the collider’s luminosity, which is the rate at which ions collide inside the accelerator. BNL’s Collider-Accelerator Department (C-AD) is therefore investigating various upgrades, including the possibility of a luminosity upgrade through a process of electron cooling.

The electron-cooled RHIC, known as RHIC-II, would use low-emittance (“cool”), energetic and high-charge bunches of electrons to cool the ion bunches. This would increase the density of the ion bunches and lead to a higher luminosity. Achieving the necessary characteristics for the electron bunches will require using advanced accelerator techniques such as a high-brightness, high-current energy-recovery linac (ERL). A linac of this type may have other applications, including in an eRHIC (energetic electron ion collider at RHIC) and future light sources.

As RHIC operates, its luminosity goes down because of intra-beam scattering (IBS), which causes the bunches of gold ions to grow in both longitudinal and transverse emittance. This means that the bunches “heat up” and become more diffuse. A variety of other mechanisms can also induce emittance growth, independently of IBS. These include instabilities of the ions’ motion, mechanical vibration of the magnets and the collisions themselves. Whatever the cause, more diffuse beams produce lower luminosity and fewer collisions. So to improve luminosity, accelerator physicists at RHIC aim to eliminate, or at least reduce, the build-up of heat within the bunches by using electron cooling.
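To see quantitatively why emittance growth hurts, recall the textbook luminosity formula for head-on collisions of round Gaussian beams. The sketch below is purely illustrative – the parameter values are made up rather than taken from the RHIC design – but it shows the key point: luminosity falls in inverse proportion to the transverse emittance.

```python
# Illustrative only (made-up numbers, not RHIC design parameters): for round
# Gaussian beams colliding head-on, L = f * N1 * N2 / (4 * pi * sigma^2) with
# sigma^2 = eps * beta_star, so L scales as 1/eps and any emittance growth
# from IBS translates directly into lost luminosity.
import math

def luminosity(f_coll, n1, n2, eps_rms, beta_star):
    """f_coll in Hz, n1/n2 ions per bunch, eps_rms in m.rad, beta_star in m."""
    sigma_sq = eps_rms * beta_star            # rms beam size squared at the IP
    return f_coll * n1 * n2 / (4 * math.pi * sigma_sq)

L_before = luminosity(9.4e6, 1e9, 1e9, eps_rms=2.5e-9, beta_star=1.0)
L_after = luminosity(9.4e6, 1e9, 1e9, eps_rms=5.0e-9, beta_star=1.0)
print(L_before / L_after)   # -> 2.0: doubling the emittance halves the luminosity
```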

In 1966 Gersh Budker, of what is now the Budker Institute of Nuclear Physics (BINP) in Novosibirsk, invented electron cooling, which has since been applied at numerous storage rings around the world. The idea is very intuitive: bring cold electrons into contact with the ions so that heat can flow from the warmer ions to the colder electrons. Cold electrons are produced by an electron source and are then accelerated to match precisely the speed of the ions in a straight section of the ring. Here, the two beams overlap and have a chance to exchange heat. The electrons are discarded after one pass and replaced by fresh electrons to continue the cooling process. At RHIC, which has two 3800 m rings, this straight section will be more than 100 m long. There are other differences between RHIC and previous electron-cooled rings: RHIC will be the first collider to be cooled during collisions and the first cooler to use bunched electron beams.

To gain confidence in the calculated performance of the RHIC electron-cooler, the team at BNL has striven to develop dependable simulation techniques and benchmark them in experiments. Many institutes have helped in this challenge: BINP, JINR, Tech-X Corporation, Jefferson Laboratory, Fermilab, and the Svedberg Laboratory. The last two laboratories also helped in benchmarking experiments on their electron coolers.

One of the challenges of cooling RHIC lies with the machine’s high energy, which is around 10 times higher than that of any previous electron cooler (54 MeV electron energy for RHIC’s gold ions at 100 GeV per nucleon). This slows electron cooling because the cooling time is approximately proportional to the cube of the energy. The cooler therefore requires an electron beam with both high energy and high current. It must also cool over a long straight section, which means that a conventional DC electron accelerator cannot be used for cooling RHIC. For this reason, BNL adopted an ERL electron accelerator to produce electron bunches with a high charge (about 5 nC), a low emittance (under 3 μm normalized rms) and a high energy of 54 MeV. Another challenge is matching the electrons precisely to the ions in position, speed and angular deviation.
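A quick back-of-the-envelope check – our own arithmetic, not a figure quoted from BNL – shows where the 54 MeV comes from: matching the electron and ion speeds means matching their Lorentz factors, and the cube law then explains why cooling at this energy is so much harder than in earlier, lower-energy coolers.

```python
# Velocity matching means equal Lorentz factors: gamma_e = gamma_ion,
# so the electron energy is E_e = gamma * m_e * c^2. Our own estimate,
# using ~0.931 GeV as the rest energy per nucleon of a gold ion.
M_PER_NUCLEON_GEV = 0.931     # approximate rest energy per nucleon of Au
E_PER_NUCLEON_GEV = 100.0     # RHIC gold-beam energy per nucleon
M_ELECTRON_MEV = 0.511        # electron rest energy

gamma = E_PER_NUCLEON_GEV / M_PER_NUCLEON_GEV    # ~107
e_energy_mev = gamma * M_ELECTRON_MEV            # ~55 MeV, i.e. the quoted ~54 MeV

# Cooling time ~ energy^3 (the scaling quoted above), so a 10x higher
# energy makes cooling roughly 1000 times slower.
slowdown = 10 ** 3
print(round(gamma, 1), round(e_energy_mev, 1), slowdown)
```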

CCrhi1_07_07

Figure 1 shows a possible layout of an electron cooler at RHIC. The cooling will take place in a 100 m straight section in the RHIC tunnel between two superconducting RHIC quadrupoles. The electron beam, generated by a 54 MeV superconducting RF ERL, will first travel with the beam in the anticlockwise ring and then loop back and travel with the beam in the clockwise ring. In doing so, the electron beam cools both rings.

The task of producing the necessary low-emittance and high-charge (high-brightness) electron bunches is even more difficult. The BNL team is currently working on a laser-photocathode superconducting radiofrequency source for the continuous production of a high-brightness electron beam, capable of delivering about 0.1 A; the design aims for a continuous average current of 0.5 A. To make the ERL work without beam breakup, a superconducting accelerating cavity capable of carrying more than 3 A has been developed, together with other technologies for accelerating a very high current efficiently.

CCrhi2_07_07

Following several years of intensive R&D, our calculations give us confidence that these techniques will increase the luminosity at RHIC and allow more sensitive, precision studies of the substructure of matter.

CCrhi3_07_07

Figure 2 shows an ERL superconducting cavity and figure 3 gives the results of a cooling simulation. The five-cell cavity, developed by the C-AD and built by local industry (Advanced Energy Systems), is the first dedicated ERL cavity to be developed. After chemical processing and testing at Jefferson Laboratory, it demonstrated 20 MeV acceleration with a low power investment.

The accelerator technologies that we are developing may also have applications at BNL beyond the RHIC-II upgrade. For example, the eRHIC upgrade would add electrons from an ERL to collide with the ion beams of RHIC. Another possible application could be at future “light source” facilities, using very high brightness X-rays to study the properties of materials and biological samples.

The ATLAS detector walks another mile

CClhc2_07_07

In the time since the vast underground cavern that houses the ATLAS experiment for the LHC was completed in 2003, it has gradually filled with the many different components that make up the largest-volume collider detector ever built (figure 1). Installation is due to be complete in early 2008. Meanwhile, commissioning the various subdetectors underground has been progressing in parallel. June saw the successful completion of the third “milestone week” (M3) in the global commissioning process. For the first time, the tests involved components from the tracking system as well as the calorimetry and muon detection systems, and each detector was operated from within the ATLAS control room.

CClhc1_07_07

The milestone weeks are dedicated to operating the experiment as a whole, from subdetector to permanent data storage, with an increasing number of subdetector systems involved at each stage. Even if a particular subdetector is not fully installed, these tests can still incorporate parts that are ready in order to exercise as many steps in the detection and data collection chain as possible. Keeping a sub-system that is still being installed in stable running conditions for many hours over many days is no small challenge. Multiply this by the 12 subdetectors involved, and add the computing, power and cooling infrastructure, detector control systems (DCSs) and safety systems, and it might seem questionable that it can work at all. But work it did during the M3 period, which in fact ran over two weeks from 4 to 18 June.

The first week of M3 was dedicated to the stable running of systems that had been integrated in previous exercises, with the emphasis this time on monitoring and exercising the formal procedures for running shifts when the experiment begins full operation in 2008. The subdetectors involved in this first week were the liquid-argon and tile calorimeters, together with part of the muon spectrometer (barrel and endcap). Each detector was initialized and monitored from a dedicated desk in the ATLAS control room, with the overall running controlled from the run-control desk.

The tile calorimeter, which consists essentially of a steel–scintillator sandwich, is designed to measure the energy of hadrons emerging at angles greater than 25° to the beam. For hadron calorimetry between 25° and 5° in the endcaps, liquid argon and copper take over, with a variant based on a tungsten absorber in the forward direction (less than 5°). Liquid argon also figures in the electromagnetic calorimeter, which is optimized for electrons and photons; in this case, however, lead (rather than copper) is used to initiate particle showers.
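For readers more used to pseudorapidity than polar angle, the quoted boundaries translate roughly as follows (our own conversion using η = −ln tan(θ/2); ATLAS normally specifies its calorimeter coverage directly in η).

```python
# Convert the polar angles quoted above into pseudorapidity, eta = -ln(tan(theta/2)).
# Illustrative conversion only; the official ATLAS coverage is defined in eta.
import math

def eta(theta_deg):
    return -math.log(math.tan(math.radians(theta_deg) / 2))

print(f"25 deg -> |eta| ~ {eta(25):.1f}")   # ~1.5, edge of the tile-barrel coverage
print(f" 5 deg -> |eta| ~ {eta(5):.1f}")    # ~3.1, endcap/forward transition
```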

For the M3 tests, around 75% of the tile calorimeter and 50% of the liquid-argon calorimeter were powered with high voltage and included in the final digital read-out. The tile calorimeter will provide a fast read-out for triggering when finally operational at the LHC, adding together calorimeter cells into trigger towers that point to the interaction point. In the M3 set-up, 500 trigger towers (around 25% of the final number) were used to provide a first-level trigger on cosmic muons, supplying signals to special trigger electronics for commissioning, which in turn delivered a trigger signal to the central trigger processor. This yielded a couple of cosmic events per minute that were read out by the central data acquisition (DAQ). During the run, a dozen or so non-expert “shifters” looked after routine operations such as data and hardware monitoring, thereby testing the procedures as well as the detector systems.

The muon system for ATLAS is based on the huge toroid magnet system, with several different kinds of detector to register and track muons as they pass beyond the layers of calorimetry. Monitored drift tubes (MDTs) provide the precision measurements in the bending region of the magnetic field in both the barrel and the endcap region of the detector. They are complemented by trigger chambers – resistive plate chambers (RPCs) in the barrel and thin gap chambers (TGCs) in the endcap regions – which provide fast signals for the muon trigger and the second co-ordinate for the measurement of the muon momentum.

For the barrel muon detectors, sections of both the MDTs and the RPCs took part in the M3 tests, using the final set-up for the high-voltage, low-voltage and gas systems, all monitored by the central DCS. Some 27,000 drift tubes were read out during the run, which is more than the barrel muon read-out of the LEP experiments (e.g. ALEPH had approximately 25,000 muon channels) but is less than 10% of the final total for ATLAS. Two sectors of RPCs were used to provide a trigger on cosmic muons.

The integration of new components into the global system formed the main goal of week two, which saw the addition of detectors from the inner tracking system and more trigger equipment. The inner detector uses three different systems for tracking particles within the volume of a superconducting solenoid magnet inside the calorimeters. The tracking systems form three layers, the outermost being the transition radiation tracker (TRT) based on “straws” (4 mm diameter drift tubes). Silicon strip detectors in the semiconductor tracker (SCT) are used at radii closer to the beam pipe, while silicon pixel detectors form the innermost layer.

CClhc3_07_07

The barrel TRT was successfully installed and tested in October 2006. Since then, a number of stand-alone tests and combined system tests with the SCT have taken place to characterize the detector. For M3, six TRT barrel modules – altogether 20,000 channels, or 19% of the TRT barrel – were connected to the back-end read-out electronics and successfully integrated into the central DAQ. Steps were also taken towards the integration of the TRT DCS into the ATLAS DCS, and cosmic-ray data were collected in combination with other detectors by the end of the M3 period (figure 2).

Cooling for the SCT was not available during the M3 run, so this detector could not take part fully. However, its DAQ was nonetheless successfully integrated using some test modules installed adjacent to the SCT read-out driver (ROD) crates. Despite using only a small number of modules, M3 provided a valuable opportunity to exercise the final DAQ infrastructure and the functionality of the trigger timing in preparation for running with the full SCT.

In the second week of the run, the first-level calorimeter trigger (L1Calo) also joined the data collection, taking part for the first time in a milestone run, although not yet providing real triggers. For this initial test, the system consisted of one-eighth of the final set of preprocessor modules and one ROD. The preprocessor modules perform the analogue-to-digital conversion for L1Calo and will also identify the bunch-crossing that the signals have come from when there is beam in the LHC. Running this system was smooth and provided valuable experience of stable running with parts of the final trigger hardware integrated with the other ATLAS subsystems.

CClhc4_07_07

For the muon system, elements of the endcap wheels were brought into the trigger during the second week. The TGCs, which provide the level-1 trigger for the muon endcaps, had already been integrated into the central trigger and DAQ system, but on 13 June some of them were used for the first time to provide a cosmic-ray trigger to other subdetectors, in particular the endcap monitored drift tube chambers. This involved one of the 72 sectors of TGCs, using final chambers, electronics equipment and DCS (figure 3). The alignment of the TGCs was sufficiently well known that triggers from cosmic rays were produced with good efficiency at a rate of 3 Hz.

The region of interest builder (RoIB) was another component of the final trigger hardware that was successfully integrated during M3. Although the first-level trigger decision is based on the multiplicity of objects, and not on their position, the first-level trigger processors do remember where they encountered objects passing their thresholds, and, for events accepted by the first-level trigger, they pass this information on to level 2. The role of the custom-built electronics forming the RoIB is to receive these region of interest fragments from the first-level muon and calorimeter trigger subsystems and the central trigger processor, and then to combine them into a single record that is passed on to the level-2 trigger supervisor. The initial hardware for the high-level trigger (level-2 and Event Filter) was also successfully integrated. This consisted of 20 level-2 nodes running cosmic algorithms (but not rejecting events) and 10 event filter nodes (without algorithm processing), which passed data to one of six subfarm output units (SFOs) in the final system. The SFO was able to write events to disk at a rate of up to 70 MB/s and subsequently transferred these files to CASTOR, the data storage on the CERN site, at a rate of around 35 MB/s.
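As a rough picture of what the RoIB does – reduced here to a few lines of Python with invented class and field names, standing in for what is really custom hardware and the ATLAS data format – the fragments from the level-1 sources for one accepted event are collected and merged into a single record for the level-2 supervisor.

```python
# Simplified sketch of the RoIB's bookkeeping (class and field names are our own,
# not the ATLAS event format): collect the region-of-interest fragments produced
# by the level-1 muon trigger, the level-1 calorimeter trigger and the central
# trigger processor for one accepted event, and merge them into a single record.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class RoIFragment:
    source: str                            # e.g. "L1Muon", "L1Calo" or "CTP"
    event_id: int                          # level-1 event identifier
    rois: List[Tuple[float, float, int]]   # (eta, phi, threshold bit) per RoI

@dataclass
class RoIRecord:
    event_id: int
    fragments: Dict[str, RoIFragment] = field(default_factory=dict)

def build_record(event_id: int, fragments: List[RoIFragment],
                 expected=("L1Muon", "L1Calo", "CTP")) -> RoIRecord:
    """Merge the fragments of one level-1-accepted event into a single record."""
    record = RoIRecord(event_id)
    for frag in fragments:
        if frag.event_id != event_id:
            raise ValueError("fragment belongs to a different event")
        record.fragments[frag.source] = frag
    missing = [s for s in expected if s not in record.fragments]
    if missing:
        raise RuntimeError(f"event {event_id} incomplete: missing {missing}")
    return record    # handed on to the level-2 trigger supervisor
```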

CClhc5_07_07

M3 provided the first opportunity for Tier-0 processing to take part in a real data-taking exercise. The existing Tier-0 infrastructure, so far only used in large-scale tests decoupled from the on-line world, was adapted to the needs of M3 and run during almost the whole data-taking period. Its tasks were to pick up the data files written to CASTOR by the DAQ and to run the offline reconstruction. For the first time, the complete offline software chain could reconstruct cosmic-ray events from data in the calorimeters, the inner detector and part of the muon system (figure 4).

The full monitoring chain was also running, taking the different reconstructed objects as input and producing the relevant monitoring histograms for each subdetector in order to check its correct performance. In a subsequent processing step, monitoring histograms produced as outputs of the individual reconstruction jobs were also merged to allow data quality monitoring over longer periods of time.

Progress during M3 – the third “mile” – has demonstrated that almost all of the subsystems of ATLAS can work together in at least a fraction of their final size and, in the case of the calorimeters, a large fraction. There are still a few more miles to go. The challenge will be to increase the system in size as commissioning continues while keeping the running efficiency high and the failure rate low.

Black holes appear to play a role in the evolution of the universe

Are black holes just a curiosity of nature or do they play a role in the formation and shaping of galaxies? The results of the most accomplished simulation to date show that the evolution of galaxies is influenced by the supermassive black holes at their centres.

Supercomputer simulations result in fascinating movies of the evolution of a portion of the universe over billions of years. They show how a uniform distribution of matter in the early universe progressively forms galaxies and clusters of galaxies – predominantly along a filamentary, sponge-like structure. This characteristic structure arises naturally from the mutual gravitational pull of matter making dense regions denser and voids emptier. However, a realistic simulation down to scales of single galaxies should include astrophysical processes that are related to the formation and evolution of stars and supermassive black holes.

The results of the first simulation to incorporate black-hole physics have now been published by Tiziana Di Matteo of Carnegie Mellon University in Pittsburgh and collaborators. This gives the best picture to date of how the cosmos formed. Called BHCosmo, the simulation tracks 230 million hydrodynamic particles in a cube about 100 million light-years on a side and used the 2000 processors of the Cray XT3 system at the Pittsburgh Supercomputing Center for more than four weeks of run time.

The BHCosmo simulation starts from initial conditions corresponding to the standard ΛCDM model of cosmology and self-consistently incorporates dark-matter dynamics, radiative gas cooling and star formation, as well as black-hole growth and the associated feedback processes. It is not well understood how supermassive black holes form, so Di Matteo and colleagues circumvented this problem by inserting a black hole of 10⁵ solar masses at the centre of each dark-matter halo reaching a critical mass. These “seed” black holes can form as early as 300 million years after the Big Bang, and their growth – by gas accretion and mergers with other black holes – depends very much on their environment. Strongly accreting black holes manifest themselves as quasars by emitting intense radiation that heats the surrounding gas. In the BHCosmo simulation, this heating is found to be an important feedback process that expels the galaxy’s interstellar gas into intergalactic space and at the same time extinguishes the quasar activity. This mechanism helps to explain why most elliptical galaxies are left with a low gas content and can therefore no longer form new stars. Past quasar activity of the supermassive black holes in their centres would thus be responsible for their old and reddish stellar populations. The BHCosmo simulation also confirms the observed relationship between the mass of the central black hole and the mass of stars in a galaxy. This suggests that black holes regulate galaxy formation and growth, but how this happens is not yet understood.
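The prescription described above can be caricatured in a few lines of code. The sketch below uses our own names and placeholder numbers – only the 10⁵ solar-mass seed is taken from the article; the critical halo mass, accretion law and feedback efficiency are stand-ins, and black-hole mergers are omitted – so it should be read as a cartoon of the algorithm rather than the actual simulation code.

```python
# Cartoon of the BHCosmo black-hole prescription (our own names and placeholder
# numbers; only the 1e5 solar-mass seed is taken from the article). Each halo
# crossing a critical mass is given a seed black hole, which grows by accreting
# gas; part of the accreted energy is returned as heat, which in the full
# simulation can expel the gas and quench the quasar phase.
from dataclasses import dataclass

SEED_MASS = 1e5        # seed black-hole mass [solar masses], as in BHCosmo
M_CRIT = 1e10          # placeholder halo-mass threshold [solar masses]
EPS_FEEDBACK = 0.005   # placeholder fraction of accreted energy fed back as heat

@dataclass
class Halo:
    mass: float            # total halo mass [solar masses]
    gas_mass: float        # gas available for accretion [solar masses]
    bh_mass: float = 0.0   # central black-hole mass [solar masses]
    gas_heat: float = 0.0  # injected feedback energy (arbitrary units)

def update_black_hole(halo: Halo, dt: float) -> None:
    """Advance the black hole in one halo by a timestep dt (arbitrary units)."""
    # Seed a black hole in every sufficiently massive halo that lacks one.
    if halo.mass > M_CRIT and halo.bh_mass == 0.0:
        halo.bh_mass = SEED_MASS
    if halo.bh_mass > 0.0 and halo.gas_mass > 0.0:
        # Toy accretion law: growth proportional to black-hole mass and limited
        # by the available gas (the real simulation uses a physical accretion rate).
        accreted = min(1e-3 * halo.bh_mass * dt, halo.gas_mass)
        halo.bh_mass += accreted
        halo.gas_mass -= accreted
        # Feedback: heat the remaining gas, which can ultimately expel it
        # and shut off both further accretion and star formation.
        halo.gas_heat += EPS_FEEDBACK * accreted
```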

This work shows how the formation and evolution of about a million galaxies can be simulated in a representative portion of the universe with relatively simple assumptions. The good match between the simulated and observed properties of galaxies allows the prediction of what sort of galaxies will be seen at very high redshift by next-generation telescopes.

Europe’s astroparticle physicists publish roadmap to the stars

The Astroparticle Physics European Coordination (ApPEC) consortium and the AStroParticle European Research Area (ASPERA) network have together published a roadmap giving an overview of the status and perspectives of astroparticle physics in Europe. This important step for astroparticle physics outlines the leading role that Europe plays in this new discipline – which is emerging at the intersection of particle physics, astronomy, and cosmology.

Grouped together in ApPEC and ASPERA, European astroparticle physicists and their research agencies are defining a common strategic plan in order to gain international consensus on what future facilities will be needed. This rapidly developing field has already led to new types of infrastructure that employ new detection methods, including underground laboratories and specially designed telescopes and satellite experiments to observe a wide range of cosmic particles, from neutrinos and gamma rays to dark-matter particles.

Over the past few years, ApPEC and ASPERA have launched an important effort to organize the discipline and ensure a leading position for Europe in this field, engaging the whole astroparticle-physics community. The roadmap, though still in its first phase, is a result of this process and has started to identify a common policy.

In the process, ApPEC has reviewed several proposals and has recommended engaging in design studies for four large new infrastructures: the Cherenkov telescope array, a new-generation European observatory for high-energy gamma rays; EURECA, a tonne-scale cryogenic (bolometric) detector for dark-matter searches; LAGUNA, a very large detector for proton decay and neutrino astronomy; and the Einstein telescope, a next-generation gravitational-wave antenna. ApPEC has also reiterated its strong support for the high-energy neutrino telescope KM3 in the Mediterranean Sea.

These projects – as well as proposals for tonne-scale detectors for the measurement of neutrino mass, dark-matter detectors and high-energy cosmic-ray observatories – will be discussed and prioritized further in a workshop in Amsterdam on 21–22 September. During the workshop, which 300 European physicists are expected to attend, Europe’s priorities for astroparticle physics will be compared with those in other parts of the world.

NSF selects Homestake for deep lab site

The US National Science Foundation (NSF) has selected a proposal to produce a technical design for a deep underground science and engineering laboratory (DUSEL) at the former Homestake gold mine near Lead, South Dakota, site of the pioneering solar-neutrino experiment by Raymond Davis. A 22-member panel of external experts reviewed proposals from four teams and unanimously determined that the Homestake proposal offered the greatest potential for developing a DUSEL.

CCnew10_07_07

The selection of the Homestake proposal, which was submitted through the University of California (UC) at Berkeley by a team from various institutes, provides funding only for design work. The team, led by Kevin Lesko from UC Berkeley and the Lawrence Berkeley National Laboratory, could receive up to $5 million a year for up to three years. Any decision to construct and operate a DUSEL, however, will entail a sequence of approvals by the NSF and the National Science Board. Funding would ultimately have to be approved by the US Congress. If eventually built as envisioned by its supporters, a Homestake DUSEL would be the largest and deepest facility of its kind in the world.

The concept of DUSEL grew out of the need for an interdisciplinary “deep science” laboratory that would allow researchers to probe some of the most compelling mysteries in modern science, from the nature of dark matter and dark energy to the characteristics of microorganisms at great depth. Such topics can only be investigated at depths where hundreds of metres of rock can shield ultra-sensitive physics experiments from background activity, and where geoscientists, biologists and engineers can have direct access to geological structures, tectonic processes and life forms that cannot be studied fully in any other way. Several countries, including Canada, Italy and Japan, have extensive deep-science programmes, but the US has no existing facilities below a depth of 1 km. In September 2006, the NSF solicited proposals to produce technical designs for a dedicated DUSEL site. Four teams, each proposing a different location, had submitted proposals by the January 2007 deadline.

The review panel included outside experts from relevant science and engineering communities and from supporting fields such as human and environmental safety, underground construction and operations, large project management, and education and outreach.

Scientists from Japan, Italy, the UK and Canada also served on the panel. The review process included site visits by panellists to all four locations, with two meetings to review the information, debate and vote on which, if any, of the proposals would be recommended for funding.
