
Physics in the multiverse

Is our entire universe a tiny island within an infinitely vast and infinitely diversified meta-world? This could be either one of the most important revolutions in the history of cosmogonies or merely a misleading statement that reflects our lack of understanding of the most fundamental laws of physics.

A self-reproducing universe. This computer-generated simulation shows exponentially large domains, each with different laws of physics (associated with different colours). Peaks are new “Big Bangs”, with heights corresponding to the energy density.
Image credit: simulations by Andrei and Dimitri Linde.

The idea in itself is far from new: from Anaximander to David Lewis, philosophers have exhaustively considered this eventuality. What is especially interesting today is that it emerges, almost naturally, from some of our best – but often most speculative – physical theories. The multiverse is no longer a model; it is a consequence of our models. It offers an obvious understanding of the strangeness of the physical state of our universe. The proposal is attractive and credible, but it requires a profound rethinking of current physics.

At first glance, the multiverse seems to lie outside of science because it cannot be observed. How, following the prescription of Karl Popper, can a theory be falsifiable if we cannot observe its predictions? This way of thinking is not really correct for the multiverse for several reasons. First, predictions can be made in the multiverse: it leads only to statistical results, but this is also true for any physical theory within our universe, owing both to fundamental quantum fluctuations and to measurement uncertainties. Secondly, it has never been necessary to check all of the predictions of a theory to consider it as legitimate science. General relativity, for example, has been extensively tested in the visible world and this allows us to use it within black holes even though it is not possible to go there to check. Finally, the critical rationalism of Popper is not the final word in the philosophy of science. Sociologists, aestheticians and epistemologists have shown that there are other demarcation criteria to consider. History reminds us that the definition of science can only come from within and from the praxis: no active area of intellectual creation can be strictly delimited from outside. If scientists need to change the borders of their own field of research, it would be hard to justify a philosophical prescription preventing them from doing so. It is the same with art: nearly all artistic innovations of the 20th century have transgressed the definition of art as would have been given by a 19th-century aesthetician. Just as with science and scientists, art is internally defined by artists.

For all of these reasons, it is worth considering seriously the possibility that we live in a multiverse. This could allow an understanding of the two problems of complexity and naturalness. The fact that the laws and couplings of physics appear to be fine-tuned to such an extent that life can exist, and that most fundamental quantities assume extremely “improbable” values, would appear obvious if our entire universe were just a tiny part of a huge multiverse where different regions exhibit different laws. In this view, we are living in one of the “anthropically favoured” regions. This anthropic selection has strictly no teleological or theological dimension and absolutely no link with any kind of “intelligent design”. It is nothing other than the obvious generalization of the selection effect that already has to be taken into account within our own universe. When dealing with a sample, it is impossible to avoid wondering if it accurately represents the full set, and this question must of course be asked when considering our universe within the multiverse.

The multiverse is not a theory. It appears as a consequence of some theories, and these have other predictions that can be tested within our own universe. There are many different kinds of possible multiverses, depending on the particular theories, some of them even being possibly interwoven.

The most elementary multiverse is simply the infinite space predicted by general relativity – at least for flat and hyperbolic geometries. An infinite number of Hubble volumes should fill this meta-world. In such a situation, everything that is possible (i.e. compatible with the laws of physics as we know them) should occur. This is true because an event with a non-vanishing probability has to happen somewhere if space is infinite. The structure of the laws of physics and the values of fundamental parameters cannot be explained by this multiverse, but many specific circumstances can be understood by anthropic selections. Some places are, for example, less homogeneous than our Hubble volume, so we cannot live there because they are less life-friendly than our universe, where the primordial fluctuations are perfectly adapted as the seeds for structure formation.
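
The reasoning can be made concrete with a few lines of arithmetic (a minimal sketch; the probability value p is arbitrary): an event of any fixed non-vanishing probability becomes near-certain somewhere once enough independent Hubble volumes are available.

```python
# Chance that an event of probability p occurs at least once among N
# independent Hubble volumes: 1 - (1 - p)^N, which tends to 1 as N grows.
p = 1e-12                       # any non-vanishing probability (illustrative)
for N in (10**6, 10**12, 10**15):
    print(f"N = {N:.0e}: P(at least once) = {1 - (1 - p)**N:.3g}")
```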

General relativity also faces the multiverse issue when dealing with black holes. The maximal analytic extension of the Schwarzschild geometry, as exhibited by conformal Penrose–Carter diagrams, shows that another universe could be seen from within a black hole. This interesting feature is well known to disappear when the collapse is considered dynamically. The situation is, however, more interesting for charged or rotating black holes, where an infinite set of universes with attractive and repulsive gravity appear in the conformal diagram. The wormholes that possibly connect these universes are extremely unstable, but this does not alter the fact that this solution reveals other universes (or other parts of our own universe, depending on the topology), whether accessible or not. This multiverse is, however, extremely speculative as it could be just a mathematical ghost. Furthermore, nothing allows us to understand explicitly how it formed.

A much more interesting pluriverse is associated with the interior of black holes when quantum corrections to general relativity are taken into account. Bounces should replace singularities in most quantum gravity approaches, and this leads to an expanding region of space–time inside the black hole that can be considered as a universe. In this model, our own universe would have been created by such a process and should also have a large number of child universes, thanks to its numerous stellar and supermassive black holes. This could lead to a kind of cosmological natural selection in which the laws of physics tend to maximize the number of black holes (just because such universes generate more universes of the same kind). It also allows for several possible observational tests that could refute the theory and does not rely on the use of any anthropic argument. However, it is not clear how the constants of physics could be inherited from the parent universe by the child universe with small random variations, and the detailed model associated with this scenario does not yet exist.

One of the richest multiverses is associated with the fascinating meeting of inflationary cosmology and string theory. On the one hand, eternal inflation can be understood by considering a massive scalar field. The field will have quantum fluctuations, which will, in half of the regions, increase its value; in the other half, the fluctuations will decrease the value of the field. In the half where the field jumps up, the extra energy density will cause the universe to expand faster than in the half where the field jumps down. After some time, more than half of the regions will have the higher value of the field simply because they expand faster than the low-field regions. The volume-averaged value of the field will therefore rise and there will always be regions in which the field is high: the inflation becomes eternal. The regions in which the scalar field fluctuates downward will branch off from the eternally inflating tree and exit inflation.
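
This volume effect is easy to caricature numerically. The sketch below is a toy model only – the jump size, the number of regions and the growth factors are invented – but it shows the volume-averaged field rising even though individual regions jump up or down with equal probability.

```python
import random

# Toy eternal inflation: every region's field jumps up or down with equal
# probability, but up-jumping (higher-energy) regions expand faster.
regions = [(1.0, 1.0)] * 1000                # (field value, comoving volume)
for step in range(1, 21):
    updated = []
    for phi, vol in regions:
        jump = random.choice((+0.1, -0.1))
        growth = 8.0 if jump > 0 else 2.0    # invented expansion factors
        updated.append((max(phi + jump, 0.0), vol * growth))  # clamp at zero
    regions = updated
    mean = sum(p * v for p, v in regions) / sum(v for _, v in regions)
    print(f"step {step:2d}: volume-averaged field = {mean:.2f}")
```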

A three-dimensional representation of a four-dimensional Calabi–Yau manifold. This describes the geometry of the extra “internal” dimensions of M-theory and relates to one particular (string-inspired) multiverse scenario.
Image credit: simulation by Jean-François Colonna, CMAP/École Polytechnique.

On the other hand, string theory has recently faced a third change of paradigm. After the revolutions of supersymmetry and duality, we now have the “landscape”. This metaphoric word refers to the large number (maybe 10⁵⁰⁰) of possible false vacua of the theory. The known laws of physics would just correspond to a specific island among many others. The huge number of possibilities arises from different choices of Calabi–Yau manifolds and different values of generalized magnetic fluxes over different homology cycles. Among other enigmas, the incredibly strange value of the cosmological constant (why are the first 119 decimals of the “natural” value exactly compensated by some mysterious phenomena, but not the 120th?) would simply appear as an anthropic selection effect within a multiverse where nearly every possible value is realized somewhere. At this stage, every bubble-universe is associated with one realization of the laws of physics and itself contains an infinite space where all contingent phenomena take place somewhere. Because the bubbles are causally disconnected forever (owing to the fast “space creation” by inflation) it will not be possible to travel and discover new laws of physics.

This multiverse – if true – would force a profound change of our deep understanding of physics. The laws reappear as kinds of phenomena; the ontological primacy of our universe would have to be abandoned. At other places in the multiverse, there would be other laws, other constants, other numbers of dimensions; our world would be just a tiny sample. It could be, following Copernicus, Darwin and Freud, the fourth narcissistic injury.

Quantum mechanics was probably among the first branches of physics leading to the idea of a multiverse. In some situations, it inevitably predicts superposition. To avoid the existence of macroscopic Schrödinger cats simultaneously living and dying, Bohr introduced a reduction postulate. This has two considerable drawbacks: first, it leads to an extremely intricate philosophical interpretation where the correspondence between the mathematics underlying the physical theory and the real world is no longer isomorphic (at least not at any time), and, second, it violates unitarity. No known physical phenomenon – not even the evaporation of black holes in its modern descriptions – does this.

These are good reasons for considering seriously the many-worlds interpretation of Hugh Everett. Every possible outcome of every event exists in its own history or universe, via quantum decoherence instead of wave-function collapse. In other words, there is a world where the cat is dead and another one where it is alive. This is simply a way of strictly trusting the fundamental equations of quantum mechanics. The worlds are not spatially separated, but exist more as kinds of “parallel” universes. This tantalizing interpretation solves some paradoxes of quantum mechanics but remains vague about how to determine when the splitting of universes happens. This multiverse is complex and, depending on the very quantum nature of the phenomena leading to other kinds of multiverses, it could lead to higher or lower levels of diversity.

More speculative multiverses can also be imagined, associated with a kind of platonic mathematical democracy or with nominalist relativism. In any case, it is important to underline that the multiverse is not a hypothesis invented to answer a specific question. It is simply a consequence of a theory usually built for another purpose. Interestingly, this consequence also solves many complexity and naturalness problems. In most cases, it even seems that the existence of many worlds is closer to Ockham’s razor (the principle of simplicity) than the ad hoc assumptions that would have to be added to models to avoid the existence of other universes.

Given a model, for example the string-inflation paradigm, is it possible to make predictions in the multiverse? In principle, it is, at least in a Bayesian approach. The probability of observing vacuum i (and the associated laws of physics) is simply P_i = P_i^prior × f_i, where P_i^prior is determined by the geography of the landscape of string theory and the dynamics of eternal inflation, and the selection factor f_i characterizes the chances for an observer to evolve in vacuum i. This distribution gives the probability for a randomly selected observer to be in a given vacuum. Clearly, predictions can only be made probabilistically, but this is already true in standard physics. The fact that we can observe only one sample (our own universe) does not change the method qualitatively and still allows the refuting of models at given confidence levels. The key points here are the well known peculiarities of cosmology, even with only one universe: the observer is embedded within the system described; the initial conditions are critical; the experiment is “locally” irreproducible; the energies involved have not been experimentally probed on Earth; and the arrow of time must be conceptually reversed.
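
In a toy landscape the recipe reads as follows (illustrative numbers only – the real priors and selection factors are exactly what is not known):

```python
# Toy version of P_i = P_i^prior * f_i over four vacua.
priors = [0.70, 0.20, 0.08, 0.02]   # from landscape + eternal inflation (invented)
f      = [0.00, 0.01, 0.50, 1.00]   # chance of observers evolving (invented)

weights = [p * fi for p, fi in zip(priors, f)]
total = sum(weights)
for i, w in enumerate(weights):
    print(f"vacuum {i}: P = {w / total:.3f}")
# A vacuum with a large prior but f = 0 is never observed, while rare
# but hospitable vacua can dominate what observers actually see.
```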

However, this statistical approach to testing the multiverse suffers from severe technical shortcomings. First, while it seems natural to identify the prior probability with the fraction of volume occupied by a given vacuum, the result depends sensitively on the choice of the space-like hypersurface on which the distribution is evaluated. This is the so-called “measure problem” of the multiverse. Second, it is impossible to give any sensible estimate of f_i. This would require an understanding of what life is – and even of what consciousness is – and that simply remains out of reach for the time being. Except in some favourable cases – for example, when all the universes of the multiverse present a given characteristic that is incompatible with our universe – it is hard to refute explicitly a model in the multiverse. But difficult in practice does not mean intrinsically impossible. The multiverse remains within the realm of Popperian science. It is not qualitatively different from other proposals associated with usual ways of doing physics. Clearly, new mathematical tools and far more accurate predictions in the landscape (which is basically totally unknown) are needed for falsifiability to be more than an abstract principle in this context. Moreover, falsifiability is just one criterion among many possible ones, and it should probably not be given excessive weight.

When facing the question of the incredible fine-tuning required for the fundamental parameters of physics to allow the emergence of complexity, there are a few possible ways of thinking. If one does not want to use God or rely on an unbelievable luck that led to extremely specific initial conditions, there are mainly two remaining possible hypotheses. The first would be to consider that since complexity – and in particular, life – is an adaptive process, it would have emerged in nearly any kind of universe. This is a tantalizing answer, but our own universe shows that life requires extremely specific conditions to exist. It is hard to imagine life in a universe without chemistry, maybe without bound states or with other numbers of dimensions. The second idea is to accept the existence of many universes with different laws, where we naturally find ourselves in one of those compatible with complexity. The multiverse was not imagined to answer this specific question but appears “spontaneously” in serious physical theories, so it can be considered as the simplest explanation to the puzzling issue of naturalness. This of course does not prove the model to be correct, but it should be emphasized that there is absolutely no “pre-Copernican” anthropocentrism in this thought process.

It could well be that the whole idea of multiple universes is misleading. It could well be that the discovery of the most fundamental laws of physics will make those parallel worlds totally obsolete in a few years. It could well be that with the multiverse, science is just entering a “no through road”. Prudence is mandatory when physics tells us about invisible spaces. But it could also very well be that we are facing a deep change of paradigm that revolutionizes our understanding of nature and opens new fields of possible scientific thought. Because they lie on the border of science, these models are dangerous, but they offer the extraordinary possibility of constructive interference with other kinds of human knowledge. The multiverse is a risky thought – but, then again, let’s not forget that discovering new worlds has always been risky.

Pierre Auger Observatory pinpoints source of mysterious highest-energy cosmic rays

The Pierre Auger Collaboration has discovered that active galactic nuclei are the most likely candidate for the source of the ultra-high-energy (UHE) cosmic rays arriving on Earth. Using the world’s largest cosmic-ray observatory, the Pierre Auger Observatory (PAO) in Argentina, the team of 370 scientists from 17 countries has found that the sources of the highest-energy particles are not distributed uniformly across the sky. Instead, the results link the origins of these mysterious particles to the locations of nearby galaxies that have active nuclei at their centres (Pierre Auger Collaboration 2007).

Low-energy charged cosmic rays (by far the majority) lose their initial direction when travelling through galactic or intergalactic magnetic fields, and therefore cannot reveal their point of origin when detected on Earth. UHE particles, by contrast, with energies of more than 40 EeV (4 × 10¹⁹ eV), are only slightly deflected, so they come almost straight from their sources. These are the particles that the Auger Observatory was built to detect.
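
A back-of-the-envelope estimate shows why 40 EeV is the relevant scale (a sketch with typical galactic-field values; none of this is the collaboration's detailed propagation modelling):

```python
import math

def larmor_radius_kpc(E_EeV, Z=1, B_microgauss=3.0):
    """r_L ~ 1.1 kpc * (E / 1 EeV) / (Z * B / 1 uG) for a relativistic particle."""
    return 1.1 * E_EeV / (Z * B_microgauss)

def deflection_deg(E_EeV, path_kpc, Z=1, B_microgauss=3.0):
    """Small-angle bending after a path through a coherent field."""
    return math.degrees(path_kpc / larmor_radius_kpc(E_EeV, Z, B_microgauss))

# A 40 EeV proton crossing ~2 kpc of the galactic disc (B ~ 3 uG):
print(f"40 EeV: ~{deflection_deg(40, 2.0):.0f} deg")      # a few degrees
# At 1 EeV the nominal 'deflection' exceeds a radian: the small-angle
# formula breaks down and the arrival direction is essentially scrambled.
print(f" 1 EeV: ~{deflection_deg(1, 2.0):.0f} deg (not small!)")
```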

When UHE cosmic rays hit nuclei in the upper atmosphere, they create cascades of secondary particles that can spread across an area of around 30 km² as they arrive at the Earth’s surface. The PAO records these extensive air showers using an array of 1600 particle detectors placed 1.5 km apart in a grid spread across 3000 km². A group of 24 specially designed telescopes records the emission of fluorescence light from excitation of the atmospheric nitrogen by the air shower, and water tanks record shower particles arriving at the Earth’s surface by detecting Cherenkov radiation. The combination of particle detectors and fluorescence telescopes provides an exceptionally powerful instrument for determining the energy and direction of the primary UHE cosmic ray.

While the observatory has recorded almost a million cosmic-ray showers, the Auger team can link only the rare, highest-energy cosmic rays to their sources with sufficient precision. The observatory has so far recorded 81 cosmic rays with energy of more than 40 EeV – the largest number of cosmic rays at these energies ever recorded. At these ultra-high energies, there is only a degree or so of uncertainty in the direction from which the cosmic ray arrived, allowing the team to determine the location of the particle’s source.

The Auger Collaboration discovered that the 27 highest-energy events, with energy of more than 57 EeV, do not come from all directions equally. Comparing the clustering of these events with the known locations of 381 active galactic nuclei (AGNs), the collaboration found that most of these events correlated well with the locations of AGNs in some nearby galaxies, such as Centaurus A. Astrophysicists believe that AGNs are powered by supermassive black holes that are devouring large amounts of matter. They have long been considered sites where high-energy particle production might take place, but the exact mechanism of how AGNs can accelerate particles to such high energies is still a mystery.
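
The weight of evidence behind such a statement can be gauged with a simple isotropy test, sketched below. This is schematic only: the 3.1° window, the invented numbers of matches and the uniform sky (real exposure is not uniform) are assumptions, not the collaboration's published analysis.

```python
import math, random

def random_direction():
    """Isotropic unit vector on the sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    s = math.sqrt(1.0 - z * z)
    return (s * math.cos(phi), s * math.sin(phi), z)

def matches(events, catalogue, max_angle_deg):
    cos_max = math.cos(math.radians(max_angle_deg))
    return sum(
        any(e[0]*a[0] + e[1]*a[1] + e[2]*a[2] > cos_max for a in catalogue)
        for e in events
    )

random.seed(1)
catalogue = [random_direction() for _ in range(381)]  # stand-in AGN list
observed = 20            # pretend 20 of the 27 events fell near an AGN
as_extreme, trials = 0, 500
for _ in range(trials):
    iso = [random_direction() for _ in range(27)]     # isotropic sky
    if matches(iso, catalogue, 3.1) >= observed:
        as_extreme += 1
print(f"chance probability under isotropy ~ {as_extreme / trials:.3f}")
```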

These UHE events are rare and, even with its large size, the PAO can record only about 30 of them each year. The collaboration is already developing plans for the construction of a second, larger installation in Colorado. This will extend coverage to the entire sky while substantially increasing the number of high-energy events recorded; there are, it turns out, even more nearby AGNs in the northern sky than in the southern sky visible from Argentina.

The LHC: a new high energy photon collider

Photon-induced interactions have traditionally been studied with electron beams in fixed-target experiments and colliders, LEP (electron–positron) and HERA (electron–proton) in particular. However, photon–hadron and photon–photon interactions also occur when the electron beams are replaced by ultra-relativistic beams of other charged particles such as protons or heavy nuclei. In these cases, the maximum photon energies are restricted by the form factor of the projectile, but at the extremely high energies of the LHC they will be higher than at any other existing accelerator: up to a photon energy of around 4 TeV in the photon–proton centre-of-mass frame. Furthermore, since the intensity of the electromagnetic field – the number of photons in the “cloud” surrounding the charge of the beam particle – is proportional to the square of the particle’s charge Z, photonic interactions are enhanced by up to a factor of Z², or around 10⁴ for heavy ions. Indeed, the fields from heavy ions are strong enough that multiple photons may be exchanged in a single event. Figure 1 shows a schematic view of such an electromagnetic (or ultra-peripheral) nucleus–nucleus collision.
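
Two quick estimates illustrate the scales involved (a rough sketch; radii and beam energies are round numbers, and E_max ~ γħc/R is only an order-of-magnitude rule for the lab-frame photon energy):

```python
HBARC_MEV_FM = 197.3   # hbar * c in MeV fm

def max_photon_energy_gev(gamma, radius_fm):
    """Form-factor cutoff on the equivalent-photon spectrum: E ~ gamma*hbar*c/R."""
    return gamma * HBARC_MEV_FM / radius_fm / 1000.0

# LHC protons: 7 TeV -> gamma ~ 7500; proton 'radius' ~ 0.7 fm
print(f"p : E_max ~ {max_photon_energy_gev(7000 / 0.938, 0.7):.0f} GeV")
# LHC lead: 2.76 TeV per nucleon -> gamma ~ 2900; Pb radius ~ 7 fm
print(f"Pb: E_max ~ {max_photon_energy_gev(2760 / 0.938, 7.0):.0f} GeV")

# Photon flux scales as the squared projectile charge:
print(f"Pb flux enhancement: Z^2 = {82**2}")   # ~7000, i.e. 'around 10^4'
```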

The study of photon-induced interactions at the LHC, as well as at existing hadron colliders such as RHIC at Brookhaven or the Tevatron at Fermilab, is challenging despite the high photon energies and fluxes. The interaction is always electromagnetic with an electron beam and the small contribution from the weak interaction can usually be neglected or easily separated. By contrast, the photonic interactions at hadron colliders must be separated from a dominant QCD background. The low multiplicity and mostly longitudinal kinematics of electromagnetic processes result in an event topology that is different from hadronic interactions. In particular, event triggering is a critical issue that depends much on instrumentation in the very forward direction, close to the beam line. The workshop on Photoproduction at collider energies: from RHIC and HERA to the LHC (held at ECT*-Trento in January), looked at how these issues have been addressed and solved in previous experiments, and considered the perspectives at the LHC. The workshop gathered around 40 physicists, equally divided between theorists and experimentalists.

Much of the workshop focused on the latest advances in the study of low-x parton densities in protons and nuclei probed by photons. Ultra-peripheral collisions at the LHC can probe the physics of parton saturation at Bjorken-x values as low as 10⁻⁵. Talks by SLAC’s Stan Brodsky, Mark Strikman of Pennsylvania and Leonid Frankfurt of Tel Aviv highlighted these theoretical aspects. HERA saw its last collisions at the end of June and has been an important machine for the field. Michael Klasen from Grenoble and DESY’s Sergey Levonian gave theoretical and experimental overviews, respectively, of the HERA results. At the Tevatron, the CDF collaboration has recently published its first analysis of two-photon interactions in proton–antiproton collisions. Andrew Hamilton of Geneva presented the results at the workshop. At RHIC, the STAR and PHENIX collaborations have studied ultra-peripheral gold–gold collisions. Yury Gorbunov of Creighton and David Silvermyr from Oak Ridge showed the latest results on vector meson photoproduction.

Looking to the future, Krzysztof Piotrzkowski from UC Louvain presented his group’s comprehensive study of various photon-induced electroweak and beyond-Standard Model processes that can be studied in proton–proton collisions at the LHC. These include associated W–Higgs and single-top photoproduction, as well as two-photon production of W boson pairs. To conclude the series of talks at the workshop, Otto Nachtmann of Heidelberg and Ute Dreyer of Basel covered the theory of anomalous gauge-boson couplings in γ–γ, γ–p and γ–A interactions.

The physics of photon–nucleus interactions in ultra-peripheral collisions is also the focus of a CERN Yellow Report, completed in June. This 230-page document, the joint effort of more than 20 contributors, summarizes results from the SPS at CERN and from RHIC. It examines planning for ultra-peripheral collisions at the ALICE, ATLAS, and CMS experiments at the LHC. The vitality of this research field was also evident in the number of contributions at the Photon 2007 conference held in Paris in July.

The conclusion is that the LHC has much to offer as a photon collider. Photon–hadron and photon–photon processes will reach energies an order of magnitude larger than at previous colliders. They will not only provide valuable information on the strong interaction – in particular on low-x parton densities and non-linear QCD phenomena – but will also open new windows on electroweak processes and physics beyond the Standard Model, complementing the mainstream studies in proton–proton and nucleus–nucleus collisions.

Father of the shell model

Hans Jensen (1907–1973) is the only theorist among Heidelberg University’s three winners of the Nobel Prize for Physics. He shared the award with Maria Goeppert-Mayer in 1963 for the development of the nuclear shell model, which they published independently in 1949. The model offered the first coherent explanation for the variety of properties and structures of atomic nuclei. In particular, the “magic numbers” of protons and neutrons, which had been determined experimentally from the stability properties and observed abundances of chemical elements, found a natural explanation in terms of the spin-orbit coupling of the nucleons. These numbers play a decisive role in the synthesis of the elements in stars, as well as in the artificial synthesis of the heaviest elements at the borderline of the periodic table of elements.

Hans Jensen was born in Hamburg on 25 June 1907. He studied physics, mathematics, chemistry and philosophy in Hamburg and Freiburg, obtaining his PhD in 1932. After a short period in the German army’s weather service, he became professor of theoretical physics in Hannover in 1940. Jensen then accepted a new chair for theoretical physics in Heidelberg in 1949 on the initiative of Walther Bothe, who received the Nobel prize in 1954 for the development of the coincidence method. Apart from his work in nuclear and particle physics, Jensen became the driving force behind the rebuilding of physics research in Heidelberg after the Second World War. The Institute for Theoretical Physics obtained new chairs, particularly in theoretical particle physics. Together with Bothe, he expanded the experimental-physics department and convinced well-known experimentalists to come to Heidelberg, including his collaborator in the development of the shell model, Otto Haxel, in 1950 and Hans Kopfermann, a specialist on nuclear moments and hyperfine interactions, three years later.

The shell model past and present

To celebrate the centenary of Jensen’s birth, the Heidelberg Physics Faculty and the Institute for Theoretical Physics organized a symposium on Fundamental Physics and the Shell Model. A series of talks looked at Jensen’s life plus the role of the shell model in astrophysics and nuclear physics today. In keeping with Jensen’s interest in music, performances by the Heidelberg Canonical Ensemble complemented the talks. In the introductory talk on The Shell Model: Past and Present, the former director at the Heidelberg Max Planck Institute, Hans Weidenmüller, gave an overall view of Jensen’s Nobel-prizewinning contribution to nuclear physics. The paper on the shell model by Haxel, Jensen and Hans Suess appeared in the same 1949 edition of Physical Review as Goeppert-Mayer’s work (Haxel, Jensen and Suess 1949 and Goeppert-Mayer 1949). It proved to be a surprising solution to the problem of nuclear energy levels. Based on the picture of independent particle motion of protons and neutrons with strong spin-orbit coupling, the model yields the correct sequence of energy levels and explains the magic numbers in terms of energy gaps above full levels.
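
The bookkeeping behind those energy gaps is simple enough to reproduce: fill the spin-orbit-split single-particle levels in their standard textbook order (each level of angular momentum j holds 2j + 1 identical nucleons) and read off the running totals at the large gaps. A minimal sketch:

```python
# Standard single-particle levels (with spin-orbit splitting), grouped by
# the large energy gaps of the shell model.
shells = [
    ["1s1/2"],
    ["1p3/2", "1p1/2"],
    ["1d5/2", "2s1/2", "1d3/2"],
    ["1f7/2"],
    ["2p3/2", "1f5/2", "2p1/2", "1g9/2"],
    ["1g7/2", "2d5/2", "2d3/2", "3s1/2", "1h11/2"],
    ["1h9/2", "2f7/2", "2f5/2", "3p3/2", "3p1/2", "1i13/2"],
]

def capacity(level):
    num, den = level[2:].split("/")        # e.g. "1g9/2" -> j = 9/2
    j = int(num) / int(den)
    return int(2 * j + 1)                  # each level holds 2j + 1 nucleons

total = 0
for group in shells:
    total += sum(capacity(l) for l in group)
    print(total, end=" ")                  # 2 8 20 28 50 82 126
print()
```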

The apparent contradictions with the collective properties of nucleons in nuclei (evident from the rotational spectra) as well as with the chaotic properties of nuclei (evident in Niels Bohr’s compound nucleus picture) only found their explanations much later. Today, shell-model calculations in large configuration spaces can indeed explain rotational spectra, and within individual shells consistency with the chaotic nuclear properties emerges once the residual interaction is considered. However, a derivation of the shell model from the basic nucleon–nucleon interaction is still missing.

Berthold Stech, Jensen’s former colleague and long-time director of the Heidelberg theory institute, presented his recollections of Jensen with photographs and anecdotes. As a student representative after the war, Stech contributed to Jensen’s move to Heidelberg by writing a letter to the publisher of the local newspaper, who then went to the state government to ensure that the offer was made to Jensen. He talked about Jensen’s vital contributions to making Heidelberg a famous physics centre. With private rooms in the institute, Jensen often invited students and colleagues for discussions and to listen to music. Stech also quoted from a recent letter by Aage Bohr and Ben Mottelson, who emphasized Jensen’s inspiring personality.

Wolfgang Hillebrandt, director at the Max Planck Institute for Astrophysics in Munich-Garching, spoke about supernovae and the shell model. This active field of research represents a synthesis of astrophysics and nuclear physics. Type Ia supernovae all produce a high and almost identical fraction of nickel-56. Even though this is a doubly magic nucleus, it is not stable (its half-life is six days) and its decay through cobalt-56 to iron-56 is what makes these supernovae shine. Hence, the brightness of the supernova is proportional to the mass of nickel-56 produced. For progenitor stars that are similar, this allows for very precise determination of distances, which since 1998 have been used to infer the accelerated expansion of the universe. Many physicists consider this to be the consequence of dark energy. Its origins are currently under investigation in many institutes, for example, at the Bonn–Heidelberg–Munich research centre “The Dark Universe”.
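
The link between nickel mass and brightness follows from simple decay bookkeeping. The sketch below evaluates the radioactive power of the ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe chain with the standard Bateman solution; the energies per decay are approximate values, and real light curves also involve photon trapping and escape.

```python
import math

T_NI, T_CO = 6.1, 77.2                    # half-lives in days
L1, L2 = math.log(2) / T_NI, math.log(2) / T_CO
Q_NI, Q_CO = 1.7, 3.7                     # MeV per decay (approximate)

def decay_power(t, n0=1.0):
    """Radioactive power of the chain at time t (days), arbitrary units."""
    n_ni = n0 * math.exp(-L1 * t)
    n_co = n0 * (L1 / (L2 - L1)) * (math.exp(-L1 * t) - math.exp(-L2 * t))
    return L1 * n_ni * Q_NI + L2 * n_co * Q_CO

for t in (1, 10, 20, 50, 100):
    print(f"t = {t:3d} d: power = {decay_power(t):.4f}")
# Doubling n0 (the synthesized nickel mass) doubles the power at every
# epoch, which is why brightness tracks the 56Ni mass.
```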

Core-collapse supernovae (Type II), such as SN1987A in the Large Magellanic Cloud, where a blue supergiant exploded in several seconds, allow the direct test of ideas about the synthesis of heavy elements. For example, observations of the characteristic gamma rays indicate the presence of the corresponding isotopes synthesized in the particular star or during the explosion. Elements beyond iron are, in particular, produced in a sequence of rapid neutron captures known as the r-process. It turns out that the element abundances are mainly determined by nuclear structure, and hence, by the shell model; the subtleties of the astrophysical processes prove to be comparatively unimportant.

In the final talk of the symposium, Peter Armbruster of the GSI in Darmstadt explained the synthesis of the heaviest elements using cold fusion (only one neutron emitted) up to and beyond roentgenium, symbol Rg and atomic number Z = 111. The relative stability of these elements, with mean lifetimes of the order of milliseconds to seconds, is a consequence of the Goeppert–Jensen shell effects. Without these they would not exist. The element Z = 112, synthesized at GSI in 1996, is still unnamed. Meanwhile, Yuri Oganessian’s group at the Flerov Laboratory at JINR, Dubna, used radioactive targets in hot-fusion reactions, with the emission of up to five neutrons, to synthesize the elements 114, 116 and 118. Kosuke Morita and co-workers at RIKEN in Japan made element 113 in 2004.

Relativistic mean-field calculations indicate that the next closed shell should occur at Z = 120 (the number of protons), with the magic neutron number of 184, as had appeared in the book by Jensen and Goeppert-Mayer on the shell model (Goeppert-Mayer and Jensen 1955). This means that this doubly magic superheavy nucleus should have 304 nucleons. It will, however, be extremely difficult to synthesize since its relatively low density of energy levels above the ground state favours fission over neutron emission, as Armbruster emphasized. This would lead to a drastic reduction of the survival probability.

As a lasting tribute to Jensen, starting next year, the Jensen Guest Professorship will be created with the financial support of the Klaus Tschira Foundation, Heidelberg. During a five-year period, internationally renowned physicists will visit the Institute for Theoretical Physics in Heidelberg to conduct research, give seminars and one public lecture a year.

Exotic lead nuclei get into shape at ISOLDE

In nature, relatively few nuclei have a spherical shape in their ground state. Examples are 16O, 40Ca, 48Ca and 208Pb, which are “doubly magic”, with numbers of both protons and neutrons corresponding to closed shells in the nuclear shell model. By moving away from the closed shells and increasing the number of valence nucleons, both protons and neutrons, these nuclei can eventually acquire a permanent deformation in their ground state. Experiments reveal that sometimes – due to the complex interplay of single-particle and collective degrees of freedom – both a spherical and deformed shape occur in the same nucleus at low excitation energies. In the region around lead, for example, physicists in the 1970s first observed this “shape co-existence”, using optical spectroscopy at the ISOLDE facility at CERN (Bonn et al. 1972 and Dabkiewicz et al. 1979). Since then, an extensive amount of data has been collected throughout the chart of nuclei (Wood et al. 1992 and Julin et al. 2001).

Some of the best-known examples of shape co-existence are found in neutron-deficient lead nuclei (atomic number or number of protons, Z = 82). The uniqueness of this region is mainly due to three effects. First, the energy gap of 3.9 MeV above the Z = 82 closed proton shell forces the nuclei to adopt a spherical shape in their ground state. However, the energy difference is small enough for a second effect to occur: the creation of “extra” valence proton particles and holes as a result of proton-pair excitation across the gap. Third, a very large neutron valence space between the shell closures with the number of neutrons N = 82 and 126 results in a large number of possible valence neutrons as nuclei approach the neutron mid-shell at N = 104. The strong deformation-driving interaction between the “extra” valence protons and the valence neutrons produces unusually low-lying, deformed oblate (disc-like) and prolate (cigar-like) states in the vicinity of N = 104, where the number of valence neutrons is maximal (Wood et al. 1992). In some cases, the deformation-driving effect is so strong that the deformed state becomes the ground state, as happens near N = 104 in the light isotopes of mercury (Z = 80) and platinum (Z = 78).

Atomic spectroscopy provides direct and model-independent information on the properties of nuclear ground and isomeric states via a determination of hyperfine structure and the isotope shift. These are small effects on atomic energy levels due to the nuclear moments, masses, sizes and shapes of nuclear isotopes, allowing the spins, moments and changes in charge-radii of nuclei to be deduced. In particular, the changes in charge radii determined from the isotope shifts by optical spectroscopy in long isotopic chains have revealed collective nuclear properties clearly.
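
Schematically, going from a measured shift to a charge-radius change works as below (a sketch: the electronic factor F and mass-shift constant K must come from atomic-structure calculations, and the numbers used here are invented):

```python
# Isotope shift = mass shift + field shift:
#   delta_nu = K * (m2 - m1) / (m1 * m2) + F * delta<r^2>
def delta_r2_fm2(delta_nu_mhz, m1_amu, m2_amu, F_mhz_per_fm2, K_mhz_amu):
    mass_shift = K_mhz_amu * (m2_amu - m1_amu) / (m1_amu * m2_amu)
    return (delta_nu_mhz - mass_shift) / F_mhz_per_fm2

# Invented numbers for a heavy pair of isotopes; for heavy elements the
# mass shift is tiny and the field shift dominates the observed shift.
print(f"delta<r2> = {delta_r2_fm2(-20000.0, 182.0, 188.0, -15000.0, 800.0):.2f} fm^2")
```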

Figure 1 shows changes of mean-square charge radii (δ⟨r²⟩) of lead, mercury and platinum isotopes as a function of the number of neutrons. All the data for the nuclides furthest from stability were determined at ISOLDE by a variety of techniques (Otten 1989 and Kluge and Nörtershäuser 2003). In the 1970s, nuclear-radiation-detected optical pumping and laser fluorescence spectroscopy were used, collinear spectroscopy in the 1980s and resonance ionization mass spectroscopy from the late 1980s onwards. Now laser spectroscopy in the laser ion source is used, as described below.

Figure 1 shows how the measured δ⟨r²⟩ for platinum isotopes develop a distinct deviation from the smoothly decreasing trend expected from the spherical-droplet model. For mercury, a sudden and dramatic change in δ⟨r²⟩, known as “shape staggering”, occurs between 187Hg and 185Hg (N = 107 and 105 respectively). A similar change occurs between the isomeric (I = 13/2) and ground (I = 1/2) states in 185Hg, in this case “shape isomerism” or “shape co-existence” (Bonn et al. 1972 and Dabkiewicz et al. 1979). These effects are interpreted as a change from weakly deformed oblate to strongly deformed prolate shapes. The neutron-deficient lead isotopes are a particularly interesting example of shape co-existence. Theoretical calculations have long suggested the co-existence in these nuclei of three different shapes: spherical, prolate and oblate – hence triple co-existence. Recent particle (α, β) and in-beam studies have found strong evidence for this phenomenon in some of the isotopes from 182Pb to 208Pb.

One of the most spectacular examples is the mid-shell nucleus 186Pb, as indicated in figure 2. Here, studies of the α-decay of the parent nucleus 190Po have revealed a triplet of low-lying (E* < 650 keV) 0+ states (Andreyev et al. 2000). These were assigned to co-existing spherical, oblate and prolate shapes, with the spherical state being the ground state. Subsequent in-beam studies identified excited bands built on top of these states. An important question arises, however, concerning the degree of mixing between different configurations. As the excited 0+ states decrease in energy when approaching N = 104 (186Pb), their mixing with the 0+ ground state could increase substantially, an effect that could possibly be seen in the value of the charge radii.

Therefore, the aim of experiment IS483 at ISOLDE was to measure for the first time the isotope shifts in the atomic spectra of the very neutron-deficient nuclei in the region 182Pb to 190Pb, deducing the mean-square charge radii in order to probe the ground state directly (De Witte et al. 2007 and Andreyev et al. 2002). However, the expected production rates were far too low (e.g. 1 ion/s for 182Pb) for the laser spectroscopy techniques used previously at ISOLDE. Instead, an extremely sensitive spectroscopic technique was employed: resonance ionization spectroscopy in the ion source, first developed at the Petersburg Nuclear Physics Institute in Gatchina for the investigation of rare-earth isotopes (Alkhazov et al. 1992).

The radioactive lead isotopes are produced at ISOLDE in a proton-induced spallation reaction, using protons at 1.4 GeV on a thick (50 g/cm²) target of uranium carbide (UCx). The reaction products diffuse out of the target toward the ionizer tube, which is heated to around 2050 °C. In the tube, a three-step laser ionization process selectively ionizes the lead isotopes. To determine the isotope shift of the appropriate optical spectral line, the laser for the first excitation step is set to a narrow linewidth of 1.2 GHz and its frequency is scanned over the resonance. After ionization and extraction, the radioactive ions are accelerated to 60 keV, mass separated and subsequently implanted in a carbon foil mounted on a rotating wheel at the focal plane of ISOLDE. A circular silicon detector (150 mm² × 300 μm) placed behind the foil measures the α-radiation during a fixed implantation time, after which the laser frequency is changed and the implantation-measurement cycle repeated. The implanted lead ions are counted via their characteristic α-decay.
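
The implantation-measurement cycle lends itself to a compact simulation (a toy sketch: the Lorentzian lineshape, rates and dwell times are invented, and this is in no way the ISOLDE control code):

```python
import random

def poisson(mean):
    """Sample a Poisson count by accumulating unit-rate waiting times."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > mean:
            return n
        n += 1

def scan(frequencies_ghz, f0=0.0, fwhm=1.2, peak_rate=0.5, dwell_s=60):
    """One step per laser frequency: ionize, implant, count alpha decays."""
    results = {}
    for f in frequencies_ghz:
        rate = peak_rate / (1.0 + ((f - f0) / (fwhm / 2.0)) ** 2)  # Lorentzian
        results[f] = poisson(rate * dwell_s)
    return results

for f, n in scan([x * 0.5 for x in range(-6, 7)]).items():
    print(f"offset {f:+.1f} GHz: {n:3d} alpha counts")
```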

Figure 3 shows the intensity of the α-lines as a function of laser frequency for a sequence of nuclei (with even N) from 188Pb to 182Pb. This reveals the optical isotope shift, which allows us to deduce the values of δ⟨r²⟩ shown in figure 1. Similarly, the experiment also measured isotopes with an odd number of neutrons, 183,185,187Pb, all of them produced in the ground and isomeric states. Note that the “isomer separation” could be obtained by tuning the laser frequency to some specific values at which only one of the isomers is selectively ionized in the cavity and subsequently extracted and analysed.

Figure 1 compares the deduced values of δ⟨r²⟩ with the predictions of the spherical-droplet model. The deviation from these predictions increases when moving away from the Z = 82 closed proton shell of lead. The large deviation observed for the ground state of the odd-mass mercury isotopes and the odd- and even-mass platinum isotopes around N = 104 has been interpreted as a result of the onset of strong prolate deformation. In the case of lead, from 190Pb downwards, the δ⟨r²⟩ data show a distinct deviation from the spherical-droplet model. This suggests modest ground-state deformation, but comparisons of the data with model calculations show that δ⟨r²⟩ is sensitive to correlations in the ground-state wave functions and that the lead isotopes essentially stay spherical in their ground state at – and even beyond – the N = 104 mid-shell region.

This experiment has shown that the extreme sensitivity of the combined in-source laser spectroscopy and α-detection allows us to explore the heavy-mass regions far from stability with isotopes produced at a rate of only a few ions a second (182Pb). An important development would be to use the isomer shift in the case of odd-mass-number isotopes to ionize nuclei selectively in their ground or isomeric state; to post-accelerate these with the REX-ISOLDE facility; and to use the isomerically pure beams of the 13/2+ and 3/2– isomers to investigate, for example, the influence of different spin states of the same incident particle on the reaction mechanism.

Strangeness, charm and beauty come to Slovakia

The International Conference on Strangeness in Quark Matter, SQM 2007, took place on 24–29 June in the charming old town of Levoča, located in Spiš in north-eastern Slovakia. Organized by the Institute of Experimental Physics of the Slovak Academy of Sciences, Košice, it was the 12th in a well-established series of topical conferences that bring together experts working in particle physics, nuclear physics and cosmology. More than 100 scientists from 20 countries took part this year, and the contributions covered a wide range of issues, from the bulk properties of the partonic matter created in nucleus–nucleus collisions, to the energy loss of fast partons traversing the medium, with a particular emphasis on the perspectives for the future.

The SQM series is currently dedicated to understanding what the production of strange – and also charm and beauty – particles can reveal about the hot and dense partonic matter formed in a high-energy nucleus–nucleus collision. It could perhaps more appropriately be called Strangeness, Charm and Beauty in Quark Matter. However, because of tradition, the original name has stuck. The extension to flavours heavier than strangeness has occurred naturally over the years as the high energies available at RHIC (and expected at the LHC) have turned charm- and beauty-flavoured particles into practical and promising probes for exploring QCD matter. On the experimental side, the challenge of detecting strange, charm and beauty particles is similar – although more difficult with charm and beauty – as the complete identification of all of these types of particle relies on identifying their decay products and decay vertices. Hence the need for similar techniques with the three flavours, both for the apparatus (high-granularity vertex detectors) and for the analysis. The SQM conferences therefore provide an excellent forum for researchers in this field to exchange not only physics results, but also information on experimental techniques and analysis methods.

There were more than 70 theoretical and experimental contributions this year, including review talks and reports from all of the active experiments at Brookhaven’s RHIC (BRAHMS, PHENIX, PHOBOS and STAR), at CERN’s SPS (CERES, NA49, NA57 and NA60) and at GSI’s heavy-ion synchrotron, SIS (FOPI). As the start-up of the LHC is just around the corner, more contributions than ever illustrated the plans for physics at future facilities. There were presentations on ALICE, the LHC experiment dedicated to heavy-ion physics, on the heavy-ion programmes of ATLAS and CMS, and on the compressed baryonic matter (CBM) experiment planned at the Facility for Antiproton and Ion Research at GSI.

The first day was devoted to a symposium where graduate students and postdoctoral researchers had the opportunity to present their research results. Before the summary talks on the last day, a brief commemoration took place in honour of Maurice Jacob. He was a leader in the theory of high-energy hadron physics, a strong supporter of heavy-ion physics and a friend to many of us. He passed away on 2 May and we are all sorry that he did not live to enjoy the LHC’s results.

Hadronization and fragmentation

The bulk of the observed hadrons with low transverse momenta (pT < 2 GeV/c) are produced from matter that seems to be well-equilibrated by the time it dresses up into hadrons. In other words, statistical hadronization models reproduce hadron yields and ratios well in terms of only a few fitted parameters, such as temperature and chemical potentials. A robust collective flow accompanies this equilibration. In non-central collisions, the spatial azimuthal asymmetry of the initial state transfers very efficiently to a momentum asymmetry of the final state. In a hydrodynamical description, an “elliptic flow” of this kind – generated at the early stages of the expansion – gives access to the equation of state of partonic matter. The combination of hydrodynamics and statistical hadronization leads to a reasonable parameterization of the low-pT hadronic spectra and elliptic flow.
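
The “few fitted parameters” enter through Boltzmann-weighted thermal densities, as in the sketch below (requires scipy; a bare-bones caricature with typical freeze-out values T ≈ 165 MeV and μ_B ≈ 25 MeV, ignoring resonance feed-down and strangeness chemistry):

```python
import math
from scipy.special import kn   # modified Bessel function of the second kind

T, MU_B = 0.165, 0.025         # GeV: temperature and baryon chemical potential

def thermal_density(mass_gev, g, baryons=0):
    """Boltzmann density: n ~ g * m^2 * T * K2(m/T) * exp(mu/T)."""
    return (g * mass_gev**2 * T * kn(2, mass_gev / T)
            * math.exp(baryons * MU_B / T))

pion   = thermal_density(0.140, g=1)            # pi+
kaon   = thermal_density(0.494, g=1)            # K+
proton = thermal_density(0.938, g=2, baryons=1)
print(f"K+/pi+ ~ {kaon / pion:.2f}")
print(f"p/pi+  ~ {proton / pion:.2f}")
```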

Many of the theory presentations dealt with the understanding of relativistic hydrodynamics and of the quark matter equation of state. Among several new results on the experimental side, we note that the RHIC data on copper–copper collisions at 200 GeV show enhancements of the Λ, Ξ, anti-Λ, anti-Ξ and φ meson with respect to proton–proton collisions. These enhancements are similar to those found (at a given number of participant nucleons) in gold–gold collisions at the same energy and in lead–lead collisions at the energies of CERN’s SPS.

The presence of the medium appears to modify fragmentation functions, which describe the dressing up of partons into final state particles. At high pT, the fragmentation of the parent parton is the dominant process. At intermediate pT (2 < pT < 6 GeV/c), however, valence-quark recombination or coalescence seems to play an important role. As a result, hadron production cannot be considered to be either thermal or perturbative, since the medium interferes with the hadronization process. For example, if hadrons are formed by recombination, the features of the parton spectrum are shifted to higher pT in the hadron spectrum – and in a different way for mesons than for baryons.

In this context, interesting new results on K* production were presented. The azimuthal asymmetry of these particles corresponds to that expected from the recombination of two valence quarks. This would occur if coalescence of a valence quark–antiquark pair forms the K*. This is in contrast to what would happen if the K* were produced in the hadronic phase by combining a K and a π, each formed from a valence quark–antiquark pair, therefore requiring the recombination of four valence quarks (figure 1).
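
The discriminating power of this measurement comes from quark-number scaling of the elliptic flow: a hadron formed by coalescence of n valence quarks inherits v2(pT) ≈ n × v2,quark(pT/n). The sketch below uses an invented quark-level flow curve purely to show how the two production scenarios separate.

```python
def v2_quark(pt):
    """Invented, saturating quark-level flow curve (illustration only)."""
    return 0.08 * pt / (1.0 + pt)

def v2_hadron(pt, n_quarks):
    """Quark-number scaling: v2_h(pT) ~ n * v2_q(pT / n)."""
    return n_quarks * v2_quark(pt / n_quarks)

for pt in (1.0, 2.0, 3.0):
    print(f"pT = {pt:.0f} GeV/c:  "
          f"K* via 2 quarks -> v2 = {v2_hadron(pt, 2):.3f},  "
          f"K* via K+pi (4 quarks) -> v2 = {v2_hadron(pt, 4):.3f}")
```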

Fast parton energy loss

Strong quenching of hadrons with large transverse momentum (pT > 6 GeV/c) is another striking phenomenon, first observed at RHIC. The high-pT partons generated in hard scatterings at the initial stages of the nucleus–nucleus collisions do not fly away and hadronize freely. Instead, the nearby matter seems to largely absorb them. High-pT photons instead remain essentially unaffected, leading to a picture of a dense medium that is opaque to partonic, coloured projectiles but relatively transparent to photons.
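
The standard way to quantify this quenching is the nuclear modification factor, R_AA (the definition is standard; the yields and N_coll below are invented placeholders):

```python
# R_AA(pT) = (yield in AA) / (N_coll * yield in pp).
# R_AA = 1 means AA behaves as a superposition of independent
# nucleon-nucleon collisions; R_AA << 1 signals strong quenching.
def r_aa(yield_aa, yield_pp, n_coll):
    return yield_aa / (n_coll * yield_pp)

# Invented numbers for one high-pT bin in central collisions:
print(f"hadrons: R_AA ~ {r_aa(200.0, 1.0, 1000):.2f}")   # suppressed
print(f"photons: R_AA ~ {r_aa(980.0, 1.0, 1000):.2f}")   # ~unaffected
```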

Vigorous theoretical and experimental efforts are under way to understand parton energy loss in terms of perturbative QCD (pQCD). Various groups have described the suppression of light hadrons in terms of radiative energy loss by gluon bremsstrahlung. According to such calculations, charm and beauty quarks should be absorbed significantly less than light quarks and gluons. However, data from the PHENIX and STAR experiments, which compare the production in nucleus–nucleus and proton–proton collisions of high-pT “non-photonic” electrons (thought to originate mainly in heavy-flavour decays), seem to indicate that heavy quarks lose as much energy as light quarks do.

There were many contributions devoted to this puzzle at SQM 2007. Attempts to reduce the disagreement by including elastic-scattering losses in addition to the radiative ones are being considered. On the experimental side, participants stressed the need to separate out the fraction of electrons coming from the decay of beauty hadrons, since b quarks are expected to lose even less energy than c quarks. Another important experimental caveat concerns the distribution of heavy quarks among the different heavy-flavour hadron species. This could change when going from proton–proton to nucleus–nucleus collisions, leading to pT-dependent variations of the semi-electronic branching ratios. Such an effect should obviously be kept under control when comparing electron production in nucleus–nucleus and proton–proton collisions. Some groups are making useful attempts in these directions by identifying the charmed meson D0 from the reconstruction of its decay. However, vertex detectors such as those of the LHC experiments are necessary for pursuing these studies further.

The fate of the energy deposited by the partons along their path also turns out to be non-trivial. It appears as though the partons’ propagation gives rise to some collective hydrodynamical motion. Among the contributions on this subject, there was an interesting study of the response of the medium to energy loss, by analysing two- and three-particle correlations. The results seem to indicate a peak in particle production on a cone at an angle of about one radian from the direction of the propagating parton. A possible explanation would be the generation of a shock wave in the medium. The answer to this and many other questions will probably have to wait for the LHC data. We hope that there will be some to discuss at the next two conferences in this series being held in Beijing (2008) and Rio de Janeiro (2009).

Fast fragmentation produces double firsts for exotic nuclei

Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have made new observations in different regions of the isotopic landscape by examining the nuclear structure of 64Ge and 36Mg.

Krzysztof Starosta

Nuclei with equal proton (Z) and neutron (N) numbers are important in unravelling nuclear structure, in particular in the context of the shell model. Between 56Ni and 100Sn they exhibit a variety of shapes, evolving from spherical to prolate (cigar-shaped) to oblate (pancake-shaped) as the mass increases. Studies of transition rates between excited states and ground states in these nuclei provide important information to test shell-model predictions.

One such experiment at NSCL has studied 64Ge (N = Z = 32), making use of the recoil distance method (RDM) to measure the lifetime of two excited states (Starosta et al. 2007). This was only the second measurement of this kind conducted in this region of isotopes, and the first to use the RDM at a fast-fragmentation facility. The beam speed at NSCL, 10 times higher than in previous RDM studies, allows for greater precision and gives access to a range of previously unattainable isotopes.

The experiment used a variety of state-of-the-art techniques, including a plunger device developed at the University of Cologne for use with the RDM. The plunger device produced the 64Ge nuclei in reactions where a single neutron was knocked out of incident 65Ge nuclei in a beam that contained a mixture of rare isotopes. The RDM used high-resolution gamma-ray spectroscopy and the Doppler effect to determine the lifetime of the excited states. The results agree well with large-scale shell-model calculations for the two excited states studied, and show the promise of the techniques used.
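
The principle of the RDM reduces to one exponential. A nucleus recoiling at velocity v decays with lifetime τ; gamma rays emitted before and after a degrader foil at distance d carry different Doppler shifts, so the peak-intensity ratio measures exp(−d/vτ). A minimal sketch (the velocity and lifetime are illustrative, chosen to show why fast beams reach short lifetimes):

```python
import math

C = 3.0e8    # m/s

def fraction_beyond_degrader(d_m, beta, tau_s):
    """Fraction of nuclei still undecayed after flying a distance d."""
    return math.exp(-d_m / (beta * C * tau_s))

# v ~ 0.4c (fast fragmentation at NSCL) and tau ~ 5 ps: target-degrader
# distances of a few hundred micrometres probe this lifetime range.
for d_um in (100, 300, 1000):
    f = fraction_beyond_degrader(d_um * 1e-6, beta=0.4, tau_s=5e-12)
    print(f"d = {d_um:4d} um: undecayed fraction = {f:.2f}")
```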

Exotic nuclei far from N = Z, with too many neutrons, offer other possibilities for testing shell-model predictions. One area of interest is the “island of inversion” where around a dozen neutron-rich isotopes should exhibit shell orderings that differ from standard theoretical predictions.

Studies of magnesium isotopes have already placed 31–34Mg (Z = 12, N = 19–22) in the island. Now, for the first time, an experiment at NSCL has examined the shell structure of 36Mg which has as many as 24 neutrons (Gade et al. 2007). In this case, a secondary beam of 38Si collided with a beryllium target to create 36Mg on rare occasions: only 1 in 400,000 38Si nuclei yielded the desired 36Mg. Spectroscopic measurements of the first excited state confirmed shell-model predictions, placing 36Mg in the island of inversion as expected.

Polarized hyperons probe dynamics of quark spin

A continuing mystery in nuclear and particle physics is the large polarization observed in the production of Λ hyperons in high-energy, proton–proton interactions. These effects were first reported in the 1970s in reactions at incident proton momenta of several hundred GeV/c, where experiments measured surprisingly strong hyperon polarizations of around 30% (Heller 1997). Although the phenomenology of these reactions is now well known, the inability to distinguish between various competing theoretical models has hampered the field (Zuo-Tang and Boros 2000).

Two new measurements from the US Department of Energy’s Jefferson Lab in Virginia are now challenging existing ideas on quark spin dynamics through studies of beam-recoil spin transfer in the electro- and photoproduction of K+Λ final states from an unpolarized proton target. Analyses of the two experiments in Hall B at Jefferson Lab using the CLAS spectrometer (figure 1) have provided extensive results of spin transfer from the polarized incident photon (real or virtual) to the final state Λ hyperon.

The results indicate that the Λ polarization is predominantly in the direction of the spin of the incoming photon, independent of the centre-of-mass energy or the production angle of the K+. Moreover, the photoproduction data show that, even where the transferred Λ polarization component along the photon direction is less than unity, the total magnitude of the polarization vector is equal to unity. Since these observations are not required by the kinematics of the reaction (except at extreme forward and backward angles) there must be some underlying dynamical origin.

Both analyses have proposed simple quark-based models to explain the phenomenology; however, they differ fundamentally in their description of the spin-transfer mechanism. In the electroproduction analysis, a simple model has been proposed from data using a 2.567 GeV longitudinally polarized electron beam (Carman et al. 2003). In this case a circularly polarized virtual photon (emitted by the polarized electron) strikes an oppositely polarized u quark inside the proton (figure 2a). The spin of the struck quark flips in direction according to helicity conservation and recoils from its neighbours, stretching a flux-tube of gluonic matter between them. When the stored energy in the flux-tube is sufficient, the tube is “broken” by the production of a strange quark–antiquark pair (the hadronization process).

In this simple model, the observed direction of the Λ polarization can be explained if it is assumed that the quark pair is produced with its two spins in opposite directions – anti-aligned – with the spin of the s quark aligned opposite to the final u quark spin. The resulting Λ spin, which is essentially the same as the s quark spin, is predominantly in the direction of the spin of the incident virtual photon. The spin anti-alignment of the ss̄ pair is unexpected because, according to the popular 3P0 model, the quark–antiquark pair should be produced with vacuum quantum numbers (J = 0, S = 1, L = 1, i.e. Jπ = 0+), which means that their spins should be aligned two-thirds of the time (Barnes 2002). This could imply that this model of hadronization is not as widely applicable as previously thought.
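That two-thirds figure follows from elementary spin counting (a back-of-envelope check, not taken from the cited papers): a quark–antiquark pair with total spin S = 1 has three magnetic substates, and if hadronization populates them equally, the two spins point in the same direction in two of the three:

```latex
|1,+1\rangle = |\uparrow\uparrow\rangle,\qquad
|1,0\rangle  = \tfrac{1}{\sqrt{2}}\left(|\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle\right),\qquad
|1,-1\rangle = |\downarrow\downarrow\rangle
```

so the aligned states |1,±1⟩ account for 2/3 of an unpolarized triplet.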

The new photoproduction analysis, based on data taken with a circularly polarized real-photon beam in the 0.5–2.5 GeV range, introduces a different model that can also explain the Λ polarization data. In this hypothesis, shown in figure 2b, the strange quark–antiquark pair is created in a 3S1 configuration (J = 1, S = 1, L = 0, i.e. Jπ = 1−). Here, following the principle of vector-meson dominance, the real photon fluctuates into a virtual φ meson that carries the polarization of the incident photon. The quark spins are therefore in the direction of the spin of the photon before the hadronization interaction.

The s quark of the pair merges with the unpolarized di-quark within the target proton to form the Λ baryon. The s̄ quark merges with the remnant u quark of the proton to form a spinless K+ meson. In this model, the strong force, which rearranges the s and s̄ quarks into the Λ and K+, respectively, can precess the spin of the s quark away from the beam direction, but the s quark, and therefore the Λ, remains 100% polarized. This provides a natural explanation for the unit magnitude of the Λ polarization vector seen for the first time in the measurements by CLAS.

The model interpretations presented from the two viewpoints do not necessarily contradict each other. Both assume that the mechanism of spin transfer to the Λ hyperon involves a spectator Jπ = 0+ di-quark system. The difference is in the role of the third quark. Neither model specifies a dynamical mechanism for the process, namely the detailed mechanism for quark-pair creation in the first case or for quark-spin precession in the second. If we take the gluonic degrees of freedom into consideration, the model proposed in the electroproduction paper (Carman et al. 2003) can be realized through a mechanism in which a colourless Jπ = 0− two-gluon subsystem is emitted from the spectator di-quark system and produces the ss̄ pair (figure 2a). This is in conflict with the 3P0 model, which requires a Jπ = 0+ exchange. To the same order of gluon coupling, the interpretation proposed by the photoproduction analysis (Schumacher 2007) is the quark-exchange mechanism, which is again mediated by a two-gluon current. In principle, the amplitudes corresponding to these models may both be present in the production and contribute at different levels depending on the reaction kinematics.

Extending these studies to the K*+Λ exclusive final state should be revealing. In the electroproduction model, the spin of the u quark is unchanged when switching from a pseudoscalar K+ to a vector K*+. If the ss̄ quark pair is produced with anti-aligned spins, the spin direction of the Λ should flip. In the photoproduction model, on the other hand, the u quark in the kaon is only a spectator, and changing its spin direction – turning the K+ into a K*+ – should not change the Λ spin direction. There are thus ways to disentangle the relative contributions and to better understand the reaction mechanism and the dynamics underlying associated strangeness production. Analyses at CLAS are under way to extract the polarization transfer to the hyperon in the K*+Λ final state.

Beyond the studies of hyperon production, understanding the dynamics in a process of this sort can shed light on quark–gluon dynamics in a domain thought to be dominated by traditional meson and baryon degrees of freedom. These issues are relevant for a better understanding of strong interactions and hadroproduction in general, owing to the non-perturbative nature of QCD at these energies. We eagerly await further experimental studies and new theoretical efforts to understand which multi-gluonic degrees of freedom dominate in quark pair creation and their role in strangeness production, as well as the appropriate mechanism(s) for the dynamics of spin transfer in hyperon production.

NSCL discovers the heaviest known silicon isotope to date

Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have produced the heaviest silicon isotope ever observed. The recent identification of 44Si expands the chart of known isotopes and lays the groundwork for the future study of rare, neutron-rich nuclei.

CCnew5_07_07

Beyond a certain range of combinations of protons and neutrons, nuclei cannot form at all: with no binding energy to hold them, any additional nucleons immediately leave the nucleus. Pursuit of this limit, known as the drip line, has proved to be a scientific and technical challenge – particularly when it comes to neutron-rich nuclei. While the proton drip line has been mapped out for much of the chart of nuclei, the neutron drip line is known only up to oxygen (Z = 8). Producing isotopes at or near the neutron drip line remains a long-standing goal in experimental nuclear physics. For example, 43Si was detected for the first time at Japan’s Institute of Physical and Chemical Research (RIKEN) in 2002 (Notani et al. 2002). That same year, researchers at the GANIL laboratory in France detected the neutron-rich isotopes 34Ne and 37Na (Lukyanov et al. 2002).

In the 44Si experiment conducted at the NSCL Coupled Cyclotron Facility in January, a primary beam of 48Ca was accelerated to 142 MeV/u and directed at a tungsten target. Downstream from the target, the beam was filtered through NSCL’s A1900 fragment separator. Eventually, some 20 different isotopes (including three nuclei of 44Si) hit a set of detectors that could identify each ion as it arrived (Tarasov et al. 2007).

CCnew6_07_07

The study was intended to document the yields of isotopes containing 28 neutrons that lie between 48Ca (the beam nuclei) and 40Mg, in order to extrapolate the expected yields in this region. 40Mg has yet to be observed and, according to some theories, should sit on the drip line. These isotopes could be created by knocking out only protons from 48Ca, although this is a difficult feat given the large number of neutrons that must be carried along from the beam nucleus. The production of 44Si is an even greater feat, since the collision must also transfer two neutrons from the tungsten target to the beam nucleus as it speeds past. The observation of 44Si stretches the A1900 fragment separator to the limits of single-stage separation: the excessive number of particles that accompany the rare nuclei can swamp the detectors used to identify the beam in the separator. The next-generation technique will use two-stage separation, delivering fewer particles to the detectors as more are filtered out along the beamline.

Researchers are developing new two-stage separators that could run experiments with higher initial beam intensities, which offer a better chance of generating the sought-after, near-dripline nuclei. Preliminary testing on a new two-stage separator at NSCL has delivered promising results. Also, a new device has just been constructed at RIKEN in Japan, and one is planned for GSI in Germany. Nuclear scientists at NSCL hope that two-stage separation will help uncover the next generation of rare isotopes.

LHCb prepares for a RICH harvest of rare beauty

CCbea1_07_07

When the LHC starts up at CERN, it will provide proton collisions at higher energies than any previous accelerator and at high collision rates. While these conditions should reveal new high-energy phenomena, such as the Higgs mechanism and supersymmetry, they will at the same time open a different window onto new physics through the study of rare processes among existing particles in the Standard Model. This is the territory that the Large Hadron Collider beauty (LHCb) experiment will explore.

By undertaking precision studies of the decays of particles that contain heavy flavours of quarks (charm and beauty), LHCb will stringently test our knowledge of the Standard Model. In addition, these studies will search for new particles beyond the Standard Model through their virtual effects – just as the mass of the top quark was known well before it was directly observed. The results will provide a profound understanding of the physics of flavour and will cast more light on the subtle difference between matter and antimatter that is manifest in CP violation.

Good particle identification is a fundamental requirement

The LHCb detector looks very different from the average hadron-collider detector – indeed, it looks more like a fixed-target detector (figure 1) – because of its focus on heavy-flavour particles. This choice of detector geometry is motivated by the fact that, at high energies, B(D) hadrons and their antiparticles are both produced predominantly at low angles, in the same “forward” cone. The detector geometry is optimized to detect these forward events efficiently.

LHCb’s physics programme depends on being able to distinguish between the particle species produced, so good particle identification is a fundamental requirement. The LHCb detector contains calorimeters and muon chambers to identify electrons, photons and muons. To separate pions, kaons and protons in selected decays, however, a different and powerful technique comes into play: the ring-imaging Cherenkov (RICH) detector, first proposed at CERN in 1977 by Jacques Séguinot and Tom Ypsilantis, who was a member of the LHCb collaboration until his death in 2000.

The basic idea is that when a charged particle passes through a medium faster than the speed of light in that medium, it emits Cherenkov radiation (named after the 1958 Nobel prize winner Pavel Cherenkov, who was the first to characterize the radiation rigorously). The effect is a shock wave of light, analogous to the sonic boom of an aircraft travelling faster than the speed of sound.

This radiation is emitted at an angle to the direction of motion of the particle, forming a cone of light around the particle’s track. The angle of emission, θ, depends on the velocity of the particle but not on its mass, with cos θ = 1/(nβ), where n is the refractive index of the medium and β is the velocity relative to the velocity of light in free space, c. Combining this velocity information with a measurement of the momentum of the particle (using tracking detectors and a known magnetic field) yields the mass of the particle and therefore its identity.
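In code, the reconstruction amounts to inverting that relation (a minimal sketch, not LHCb software; the refractive index and measured values below are invented for illustration):

```python
import math

def particle_mass(p_gev, theta_rad, n):
    """Infer a particle's mass (GeV/c^2) from momentum and Cherenkov angle.

    Inverts cos(theta) = 1/(n*beta) for beta, then uses
    p = m*beta/sqrt(1 - beta^2)  =>  m = p*sqrt(1 - beta^2)/beta.
    """
    beta = 1.0 / (n * math.cos(theta_rad))
    if beta >= 1.0:
        raise ValueError("unphysical angle for this refractive index")
    return p_gev * math.sqrt(1.0 - beta**2) / beta

# A 10 GeV/c track radiating at 19.1 mrad in a gas with n = 1.0014
# (a typical value for C4F10) comes out at ~0.494 GeV/c^2: a kaon.
print(particle_mass(10.0, 0.0191, 1.0014))
```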

The simplest Cherenkov detectors are threshold devices that only produce a signal if the velocity of a charged particle exceeds the minimum necessary to produce Cherenkov radiation in a particular medium, or “radiator”. Taken together with a momentum measurement, this allows particles that are heavier than a certain mass to be separated from lighter ones. Such detectors have been employed in many experiments since the 1950s, for example in the classic detection of the antiproton at Berkeley – an experiment in which the young Ypsilantis participated.
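The threshold logic follows from the same relation: light is emitted only once β > 1/n, i.e. above a momentum p = m/√(n² − 1) for each species. A quick illustration (again with an assumed gas index, not a number from the article):

```python
import math

def threshold_momentum(mass_gev, n):
    """Minimum momentum (GeV/c) at which a particle radiates: beta > 1/n."""
    return mass_gev / math.sqrt(n**2 - 1.0)

# In a gas with n = 1.0014, pions light up above ~2.6 GeV/c while kaons
# stay dark until ~9.3 GeV/c, so a threshold counter separates the two
# species anywhere in between.
for name, mass in [("pi", 0.1396), ("K", 0.4937), ("p", 0.9383)]:
    print(name, round(threshold_momentum(mass, 1.0014), 1), "GeV/c")
```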

Rings and radiators

CCbea2_07_07

The RICH detector is a far more sophisticated development. In a RICH device, the cone of Cherenkov light emitted in the radiator is detected on a position-sensitive photon detector. This allows the reconstruction of a ring or disc, the radius of which depends on the emission angle, θ, and hence on the velocity of the particle. In the RICH detectors used by LHCb, the photons are collected by a spherical mirror and focused onto an array of photon detectors at the focal plane (figure 2 shows the principle for LHCb’s RICH1 detector). Because the radiation is focused, the photons form a ring with a radius that depends on the emission angle, θ, but not on where the light is emitted along the particle’s track.
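For small angles the ring radius is set by the mirror’s focal length, r ≈ fθ with f = R/2. As a rough illustration (taking the 2700 mm mirror radius quoted for RICH1 later in this article, and an assumed C4F10 index of n ≈ 1.0014, for which the saturated angle is arccos(1/n) ≈ 53 mrad):

```latex
r \approx \frac{R}{2}\,\theta \approx \frac{2700\ \text{mm}}{2} \times 0.053\ \text{rad} \approx 72\ \text{mm}
```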

The choice of which radiator to use is crucial, as every medium has a restricted velocity range over which it can usefully identify particles. Too low a velocity, and the particle will produce no light; too high, and the Cherenkov angle for all particle species will saturate to a common value, making identification impossible. It was therefore important for LHCb to choose a medium, or combination of different media, that would be effective over the full momentum range of interest – from around 1 GeV/c up to and beyond 100 GeV/c. To achieve this coverage, the experiment uses a combination of three radiators – aerogel, perfluoro-n-butane (C4F10) and carbon tetrafluoride (CF4).

Silica aerogel is a colloidal form of solid quartz, but with an extremely low density and a relatively high refractive index (1.01–1.10), which makes it ideal for the lowest-momentum particles (of order a few GeV/c). One of the key design issues for LHCb was the use of aerogel in ring-imaging mode. This was a new idea, inspired by the development of much higher-quality, very clear aerogel (figure 3); previously, the material had only been used in threshold counters. To cover the regions of medium and high momentum, LHCb uses a combination of C4F10 and CF4 radiators for momenta from around 10 GeV/c to around 65 GeV/c, and from around 15 GeV/c to more than 100 GeV/c, respectively.
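The complementarity of the three radiators can be seen by comparing the pion and kaon Cherenkov angles in each medium as the momentum rises; once the two angles saturate towards a common value, that radiator has run out of discriminating power. A sketch under assumed refractive indices (n ≈ 1.03 for aerogel, within the range above; n ≈ 1.0014 for C4F10 and n ≈ 1.0005 for CF4 are typical literature values, not quoted in this article):

```python
import math

def cherenkov_angle_mrad(p_gev, mass_gev, n):
    """Cherenkov angle in mrad, or None if the particle is below threshold."""
    beta = p_gev / math.hypot(p_gev, mass_gev)  # beta = p/E
    return 1000.0 * math.acos(1.0 / (n * beta)) if n * beta > 1.0 else None

def fmt(angle):
    return "no light" if angle is None else f"{angle:.1f} mrad"

radiators = {"aerogel": 1.03, "C4F10": 1.0014, "CF4": 1.0005}
for p in (2.0, 20.0, 100.0):
    for name, n in radiators.items():
        pion = cherenkov_angle_mrad(p, 0.1396, n)
        kaon = cherenkov_angle_mrad(p, 0.4937, n)
        print(f"{p:5.1f} GeV/c  {name:8s}  pi: {fmt(pion):10s}  K: {fmt(kaon)}")
# At 2 GeV/c only aerogel radiates (and the kaon is still below threshold);
# by 100 GeV/c the pion and kaon angles have saturated towards a common
# value in aerogel, while CF4 still shows a small, usable difference.
```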

CCbea3_07_07

The early design of the system had three separate detectors, one for each radiator, but for a variety of reasons it proved more practical to combine the aerogel and C4F10 radiators into a single device with wide acceptance. This is the RICH1 detector, which is located upstream to detect the low-momentum particles (figure 1). The CF4 radiator is housed in RICH2, downstream of the tracking system and the LHCb magnet. This has an acceptance that is limited to the low-angle region where there are mostly high-momentum particles.

One challenge in both cases was to minimize the amount of material within the detector acceptance. The designs were therefore changed at an early stage to tilt the focusing mirrors slightly and to introduce secondary flat mirrors that bring the Cherenkov radiation right out of the detector acceptance. This allows for a smaller photon-detector area and a more compact system.

A more radical redesign took place later, as the engineering designs for the various subdetectors became more realistic and it became clear that LHCb contained too much material. A further challenge was to improve the trigger performance by increasing the precision of the momentum measurement, which required increasing the magnetic field in the region of the VErtex LOcator (VELO) and the trigger tracker (TT), between RICH1 and the dipole magnet (see figure 1).

While RICH2 remained relatively unaffected, RICH1 underwent a major redesign. To protect the sensitive photon detectors from the greatly increased magnetic field, extremely heavy iron shielding had to be added to the apparatus. Accommodating these shields in the very congested region of LHCb’s experimental area near RICH1 was a major challenge.

Seeing the light

Particles produced in the collisions in LHCb will travel through the mirrors of RICH1 prior to reaching measurement components further downstream. To reduce the amount of scattering, RICH1 uses special lightweight spherical mirrors constructed from a carbon-fibre reinforced polymer (CFRP), rather than glass. There are four of these mirrors, each made from two CFRP sheets moulded into a spherical surface with a radius of 2700 mm and separated by a reinforcing matrix of CFRP cylinders. The overall structure contributes about 1.5% of a radiation length to the material budget of RICH1. As RICH2 is located downstream of the tracking system and magnet, glass could be used for its spherical mirrors, which in this case are composed of hexagonal elements (see cover).

Perhaps surprisingly, the “flat” secondary mirrors in the RICH detectors are not truly flat. Producing completely flat, but thin, mirrors is a difficult technological challenge because it is hard to maintain their rigidity over a long period of time. Instead, giving the mirrors a small amount of curvature (a radius of curvature greater than 600 m in RICH1 and around 80 m in RICH2) increases their structural integrity. The small distortions that this curvature introduces to the images of the Cherenkov rings can be corrected in software during data analysis, and therefore do not degrade the final performance of the system.

The experiment requires 484 tubes in total

Both RICH detectors use hybrid photon detectors (HPDs) to measure the positions of the emitted Cherenkov photons. The HPD is a vacuum photon detector in which a photoelectron, released when an incident photon converts within a photocathode, is accelerated through a high voltage of typically 10–20 kV. The tube focuses the photoelectron electrostatically – with a demagnification factor of around five – onto a small, reverse-biased silicon detector array.

The LHCb collaboration has developed a novel, dedicated pixel HPD for the RICH detectors, working in close co-operation with industry. Here, the silicon detector is segmented into 1024 “super” pixels, each 500 μm × 500 μm in area and arranged as a matrix of 32 rows and 32 columns. When a photoelectron loses energy in silicon, it creates electron-hole pairs, with an average yield of one for every 3.6 eV of deposited energy. The nominal operating voltage of LHCb’s HPDs is –20 kV, corresponding to around 5000 electron-hole pairs released in the silicon. Careful design of the read-out electronics and of the interconnects to the silicon detector results in a high efficiency for detecting single photoelectrons. The experiment requires 484 tubes in total – 196 for RICH1 and 288 for RICH2 – to cover the four detection surfaces.
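That signal estimate is straightforward arithmetic (a back-of-envelope check using only the numbers in the text):

```python
# A photoelectron accelerated through 20 kV deposits ~20 keV in the silicon;
# at one electron-hole pair per 3.6 eV this sets the expected signal size.
accel_voltage_ev = 20_000
ev_per_pair = 3.6
print(round(accel_voltage_ev / ev_per_pair))  # 5556 pairs, i.e. "around 5000"
```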

Testing times

To verify the quality of the HPDs and of the associated components in the low-level data acquisition (DAQ), the LHCb collaboration has conducted a series of RICH test-beam exercises, most recently during September 2006 in the North Area at CERN’s Prévessin site. In the test beam, the apparatus consisted of a gas vessel filled with either nitrogen (N2) or C4F10 as the radiator medium, together with a housing for the photon detectors that was separated from the gas enclosure by a transparent quartz window. The test beam from the SPS consisted mainly of pions, with small contributions from electrons, kaons and protons, and had a 25 ns bunch structure, the same as the LHC will provide.

CCbea4_07_07

Columns of 16 HPDs observed the Cherenkov radiation emitted by the particles as they traversed the gas enclosure. The ring of Cherenkov light illuminated either a single HPD, when using N2 as the radiator, or up to four neighbouring HPDs with the C4F10 radiator (figure 4). The resulting data were recorded using final versions of the DAQ electronics and pre-production releases of the LHCb online software environment. An early version of the LHCb online-monitoring software kept a check on the status of the test set-up and on the quality of the recorded data.

The analysis of the recorded test-beam data using the full LHCb reconstruction and analysis software involved a significant effort, but the results made it worthwhile. The tests verified the design specifications of the HPDs in a “real life” environment, with the measurement of properties such as the photoelectron yield and the resolution of the Cherenkov angle reconstructed from the data. Using the official LHCb software framework for the analyses also allowed the quality of the software to be verified with real data, so the team could spot any issues not seen in earlier simulation studies. The evaluation of the beam-tests indicates so far that all the hardware and software components involved in the tests match – or exceed – expectations, successfully passing an important milestone on the way to the start-up of the LHCb experiment.

CCbea5_07_07

The full LHCb detector has been extensively modelled in a detailed simulation, based on the Geant4 software package, taking into account all important aspects of the geometry and materials, together with a full description of the optics of the RICH detectors. This has provided a platform for the development of sophisticated analysis software to reconstruct the events and provide excellent particle identification. Figure 5 shows an example of the complex event environment that LHCb will face in collisions at the LHC. To disentangle such an event, the analysis performs a combined likelihood fit to all known tracks in the event. By considering all tracks and radiators in a single fit, the algorithm naturally accounts for the dominant background to a given ring, namely the neighbouring rings.
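The flavour of the approach can be conveyed with a toy version (an illustrative sketch, not the LHCb algorithm: it scores one track at a time, assumes a Gaussian photon-angle resolution and an invented gas index, whereas the real fit treats all tracks and radiators simultaneously):

```python
import math

MASSES = {"pi": 0.1396, "K": 0.4937, "p": 0.9383}  # GeV/c^2

def expected_angle(p_gev, mass_gev, n):
    """Predicted Cherenkov angle (rad) for a mass hypothesis; None below threshold."""
    beta = p_gev / math.hypot(p_gev, mass_gev)
    return math.acos(1.0 / (n * beta)) if n * beta > 1.0 else None

def log_likelihood(hit_angles, p_gev, mass_gev, n, sigma=0.0015):
    """Gaussian log-likelihood of the observed photon angles under one hypothesis."""
    theta = expected_angle(p_gev, mass_gev, n)
    if theta is None:
        return -math.inf  # hypothesis predicts no light, yet photons were seen
    return sum(-0.5 * ((t - theta) / sigma) ** 2 for t in hit_angles)

def identify(hit_angles, p_gev, n=1.0014):
    """Return the mass hypothesis that best explains the observed ring."""
    return max(MASSES, key=lambda h: log_likelihood(hit_angles, p_gev, MASSES[h], n))

# Photons clustered near 19 mrad on a 10 GeV/c track: identified as a kaon.
print(identify([0.0190, 0.0193, 0.0188, 0.0192], 10.0))
```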

CCbea6_07_07

Figure 6 illustrates just how powerful this technique is. Here, using the detailed Geant4 simulations, the mass peak for the decay Bs → KK is shown, together with the background contributions from other two-body decays. Without the kaon-identification capabilities provided by the RICH detectors, the Bs signal is swamped by background. Such efficient hadron identification will be a crucial component in the successful analysis of LHCb data.

Currently, the RICH group is fully focused on commissioning the RICH detectors at the experimental area at Point 8 on the LHC ring. The RICH2 detector is completely installed, and its HPDs and readout systems are being commissioned. The magnetic shielding and radiator enclosure for RICH1 are in place, and the installation of all HPDs and optics will be completed later this year. Commissioning of the detector control and safety systems, together with the readout and DAQ systems, is also progressing at full speed. Everything is on track to have the system fully functional and ready for first data in 2008.
