
40 MHz ball reveals sticky fingers

The LHC, with its “two in one” magnet structure cooled by superfluid helium for operation at 1.9 K, is its own prototype. It is therefore no surprise that problems arise that demand ingenious solutions, such as a newly invented diagnostic tool. Slightly larger than a ping-pong ball, it contains a tiny 40 MHz transmitter and fits just inside the beam pipe. Its purpose is to check interconnections within a sector (an eighth) of the machine without the need to open it up.

The need for such a device came to light when teams detected a fault in one of the interconnections during the warm-up of sector 7-8, the first to have been cooled to 1.9 K. One of the “plug-in” modules responsible for the continuity of the electrical circuit in each of the LHC’s two vacuum chambers was damaged as the sector warmed up.


The plug-in modules ensure that mirror currents produced by the beams in the walls of the vacuum chambers can circulate freely. Any impedance would create hot spots and reduce the intensity of the beam. The modules consist of copper “fingers” that slide along a cylinder and allow for the contraction and expansion of the LHC’s components during cooling and warming. Each module expands or shrinks by about 40 mm, but the fingers always remain in contact with the cylinder in which they are sliding. In the faulty unit, the fingers failed to slide properly when the vacuum pipes returned to their original length, buckling into the space where the beam would normally pass.


It is difficult and time-consuming to open the magnet cryostats to check interconnections; it takes three weeks to open a sector and five weeks to close it again. X-ray studies revealed four more faulty modules in sector 7-8, and it was clear that a device that could check the space inside the beam pipes would be extremely useful. The solution is a ball 34 mm in diameter, which transmits at 40 MHz – the frequency of beam bunches in the LHC. A pumping system propels it through the vacuum pipe and beam-position monitors located every 50 m pick up the emitted signals. As the ball is a fraction smaller than the 36 mm beam screen, any obstacles will stop its progress and there will be no signal in the next monitor. This information allows the team to concentrate on the small number of interconnections between the two beam-position monitors concerned.
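
The localization logic is simple enough to sketch in a few lines of Python. This is purely an illustrative toy, not the actual LHC diagnostics software, and the monitor readings are invented:

```python
# Toy version of the localization logic: one boolean per monitor,
# True if that monitor picked up the ball's 40 MHz signal.

MONITOR_SPACING_M = 50  # beam-position monitors sit every 50 m

def locate_blockage(ball_seen):
    """The ball stops at the first obstacle, so the fault lies between
    the last monitor that heard it and the next one downstream."""
    last_heard = -1
    for i, seen in enumerate(ball_seen):
        if seen:
            last_heard = i
        else:
            break
    if last_heard == len(ball_seen) - 1:
        return None  # ball reached the last monitor: no obstacle found
    start_m = last_heard * MONITOR_SPACING_M
    return (start_m, start_m + MONITOR_SPACING_M)

# Ball heard by the first 16 monitors, then silence:
print(locate_blockage([True] * 16 + [False] * 4))  # -> (750, 800)
```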

A first test on 13 September proved successful as the ball travelled 800 m through one vacuum pipe, detecting a sixth faulty module in the process. Altogether, only 6 out of 366 modules have proved to be damaged as sector 7-8 warmed up, and repairs are now in progress. An extra benefit is that the device allows the team to inspect the beam pick-ups around the ring.

Elsewhere around the LHC, by the end of October teams had cooled a second sector to 80 K and begun pressure testing on a third. Vacuum leaks found during testing have been isolated and are being repaired. Cooling of further sectors should begin in November.

In addition, all of the inner triplet magnet assemblies have been repaired and are in position in the tunnel. Three of them had passed their pressure tests by the beginning of October. The cryogenic electrical distribution feedboxes (DFBX), which form part of the triplet assembly, have also undergone repairs. Only the triplet that was damaged during the spring test, plus one DFBX, have been removed from the tunnel; the others have been repaired in situ.

The LHC: a new high energy photon collider


Photon-induced interactions have traditionally been studied with electron beams in fixed-target experiments and colliders, LEP (electron–positron) and HERA (electron–proton) in particular. However, photon–hadron and photon–photon interactions also occur when the electron beams are replaced by ultra-relativistic beams of other charged particles such as protons or heavy nuclei. In these cases, the maximum photon energies are restricted by the form factor of the projectile, but at the extremely high energies of the LHC they will be higher than at any other existing accelerator: up to a photon energy of around 4 TeV in the photon–proton centre-of-mass frame. Furthermore, since the intensity of the electromagnetic field – the number of photons in the “cloud” surrounding the charge of the beam particle – is proportional to the square of the particle’s charge Z, photonic interactions are enhanced by up to a factor of Z², or around 10⁴ for heavy ions. Indeed, the fields from heavy ions are strong enough that multiple photons may be exchanged in a single event. Figure 1 shows a schematic view of such an electromagnetic (or ultra-peripheral) nucleus–nucleus collision.
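
A quick sanity check of the arithmetic behind the quoted factor, for lead ions with Z = 82 (a toy calculation, nothing more):

```python
# Photon flux scales with the square of the projectile charge Z,
# so heavy ions gain a large enhancement over singly charged beams.
Z = 82
print(Z ** 2)  # 6724 - close to the factor of 10^4 quoted above
```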

The study of photon-induced interactions at the LHC, as well as at existing hadron colliders such as RHIC at Brookhaven or the Tevatron at Fermilab, is challenging despite the high photon energies and fluxes. With an electron beam the interaction is always electromagnetic, and the small contribution from the weak interaction can usually be neglected or easily separated. By contrast, photonic interactions at hadron colliders must be separated from a dominant QCD background. The low multiplicity and mostly longitudinal kinematics of electromagnetic processes result in an event topology that differs from that of hadronic interactions. In particular, event triggering is a critical issue that depends largely on instrumentation in the very forward direction, close to the beam line. The workshop on Photoproduction at collider energies: from RHIC and HERA to the LHC, held at ECT*-Trento in January, looked at how these issues have been addressed and solved in previous experiments, and considered the perspectives at the LHC. The workshop gathered around 40 physicists, equally divided between theorists and experimentalists.

Much of the workshop focused on the latest advances in the study of low-x parton densities in protons and nuclei probed by photons. Ultra-peripheral collisions at the LHC can probe the physics of parton saturation at Bjorken-x values as low as 10⁻⁵. Talks by SLAC’s Stan Brodsky, Mark Strikman of Penn State and Leonid Frankfurt of Tel Aviv highlighted these theoretical aspects. HERA, which saw its last collisions at the end of June, has been an important machine for the field; Michael Klasen from Grenoble and DESY’s Sergey Levonian gave theoretical and experimental overviews, respectively, of the HERA results. At the Tevatron, the CDF collaboration has recently published its first analysis of two-photon interactions in proton–antiproton collisions, and Andrew Hamilton of Geneva presented the results at the workshop. At RHIC, the STAR and PHENIX collaborations have studied ultra-peripheral gold–gold collisions; Yury Gorbunov of Creighton and David Silvermyr from Oak Ridge showed the latest results on vector meson photoproduction.

Looking to the future, Krzysztof Piotrzkowski from UC Louvain presented his group’s comprehensive study of various photon-induced electroweak and beyond-Standard Model processes that can be studied in proton–proton collisions at the LHC. These include associated W–Higgs and single-top photoproduction, as well as two-photon production of W boson pairs. To conclude the series of talks at the workshop, Otto Nachtmann of Heidelberg and Ute Dreyer of Basel covered the theory of anomalous gauge-boson couplings in γ–γ, γ–p and γ–A interactions.

The physics of photon–nucleus interactions in ultra-peripheral collisions is also the focus of a CERN Yellow Report, completed in June. This 230-page document, the joint effort of more than 20 contributors, summarizes results from the SPS at CERN and from RHIC, and examines the planning for ultra-peripheral collisions at the ALICE, ATLAS and CMS experiments at the LHC. The vitality of this research field was also evident in the number of contributions at the Photon 2007 conference held in Paris in July.

The conclusion is that the LHC has much to offer as a photon collider. Photon–hadron and photon–photon processes will reach energies an order of magnitude larger than at previous colliders. They will not only provide valuable information on the strong interaction – in particular on low-x parton densities and non-linear QCD phenomena – but will also open new windows on electroweak processes and physics beyond the Standard Model, complementing the mainstream studies in proton–proton and nucleus–nucleus collisions.

Father of the shell model

Hans Jensen (1907–1973) is the only theorist among Heidelberg University’s three winners of the Nobel Prize for Physics. He shared the award with Maria Goeppert-Mayer in 1963 for the development of the nuclear shell model, which they published independently in 1949. The model offered the first coherent explanation for the variety of properties and structures of atomic nuclei. In particular, the “magic numbers” of protons and neutrons, which had been determined experimentally from the stability properties and observed abundances of the chemical elements, found a natural explanation in terms of the spin-orbit coupling of the nucleons. These numbers play a decisive role in the synthesis of the elements in stars, as well as in the artificial synthesis of the heaviest elements at the borderline of the periodic table.


Hans Jensen was born in Hamburg on 25 June 1907. He studied physics, mathematics, chemistry and philosophy in Hamburg and Freiburg, obtaining his PhD in 1932. After a short period in the German army’s weather service, he became professor of theoretical physics in Hannover in 1940. Jensen then accepted a new chair for theoretical physics in Heidelberg in 1949 on the initiative of Walther Bothe, who received the Nobel prize in 1954 for the development of the coincidence method. Apart from his work in nuclear and particle physics, Jensen became the driving force behind the rebuilding of physics research in Heidelberg after the Second World War. The Institute for Theoretical Physics obtained new chairs, particularly in theoretical particle physics. Together with Bothe, he expanded the experimental-physics department and convinced well-known experimentalists to come to Heidelberg, including his collaborator in the development of the shell model, Otto Haxel, in 1950 and Hans Kopfermann, a specialist on nuclear moments and hyperfine interactions, three years later.

The shell model past and present

To celebrate the centenary of Jensen’s birth, the Heidelberg Physics Faculty and the Institute for Theoretical Physics organized a symposium on Fundamental Physics and the Shell Model. A series of talks looked at Jensen’s life and the role of the shell model in astrophysics and nuclear physics today. In keeping with Jensen’s interest in music, performances by the Heidelberg Canonical Ensemble complemented the talks. In the introductory talk, on The Shell Model: Past and Present, Hans Weidenmüller, former director at the Heidelberg Max Planck Institute, gave an overall view of Jensen’s Nobel-prizewinning contribution to nuclear physics. The paper on the shell model by Haxel, Jensen and Hans Suess appeared in the same 1949 issue of Physical Review as Goeppert-Mayer’s work (Haxel, Jensen and Suess 1949 and Goeppert-Mayer 1949). It proved to be a surprising solution to the problem of nuclear energy levels. Based on the picture of independent particle motion of protons and neutrons with strong spin-orbit coupling, the model yields the correct sequence of energy levels and explains the magic numbers in terms of energy gaps above filled levels.

The apparent contradictions with the collective properties of nucleons in nuclei (evident from rotational spectra), as well as with the chaotic properties of nuclei (evident in Niels Bohr’s compound-nucleus picture), found their explanations only much later. Today, shell-model calculations in large configuration spaces can indeed explain rotational spectra, and within individual shells consistency with the chaotic nuclear properties emerges once the residual interaction is taken into account. However, a derivation of the shell model from the basic nucleon–nucleon interaction is still missing.

Berthold Stech, Jensen’s former colleague and long-time director of the Heidelberg theory institute, presented his recollections of Jensen with photographs and anecdotes. As a student representative after the war, Stech contributed to Jensen’s move to Heidelberg by writing a letter to the publisher of the local newspaper, who then went to the state government to ensure that the offer was made to Jensen. He talked about Jensen’s vital contributions to making Heidelberg a famous physics centre. With private rooms in the institute, Jensen often invited students and colleagues for discussions and to listen to music. Stech also quoted from a recent letter by Aage Bohr and Ben Mottelson, who emphasized Jensen’s inspiring personality.


Wolfgang Hillebrandt, director at the Max Planck Institute for Astrophysics in Munich-Garching, spoke about supernovae and the shell model. This active field of research represents a synthesis of astrophysics and nuclear physics. Type Ia supernovae produce a large and almost identical fraction of nickel-56. Even though this is a doubly magic nucleus, it is not stable (its half-life is six days), and its decay through cobalt-56 to iron-56 is what makes these supernovae shine. Hence, the brightness of the supernova is proportional to the mass of nickel-56 produced. For progenitor stars that are similar, this allows a very precise determination of distances, which since 1998 has been used to infer the accelerated expansion of the universe. Many physicists consider this to be the consequence of dark energy, whose origins are currently under investigation in many institutes, for example at the Bonn–Heidelberg–Munich research centre “The Dark Universe”.
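
Since the light curve is powered by the decay chain ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe, the brightness at any moment tracks decay rates that are each proportional to the initial ⁵⁶Ni mass. A minimal sketch of that chain, using textbook half-lives (6.1 and 77.2 days; the article quotes only the first):

```python
import math

# Chain Ni-56 -> Co-56 -> Fe-56 that powers a Type Ia light curve.
L_NI = math.log(2) / 6.1    # Ni-56 decay constant, 1/day ("six days")
L_CO = math.log(2) / 77.2   # Co-56 decay constant, 1/day (textbook value)

def decay_rates(t_days, n_ni0=1.0):
    """Instantaneous Ni-56 and Co-56 decay rates per initial Ni-56
    nucleus (Bateman solution). The deposited energy, and hence the
    brightness, scales with these rates and therefore with n_ni0."""
    n_ni = n_ni0 * math.exp(-L_NI * t_days)
    n_co = n_ni0 * L_NI / (L_CO - L_NI) * (
        math.exp(-L_NI * t_days) - math.exp(-L_CO * t_days))
    return L_NI * n_ni, L_CO * n_co

for t in (10, 30, 60):
    print(t, "days:", decay_rates(t))
```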

Core-collapse supernovae (Type II), such as SN1987A in the Large Magellanic Cloud, where a blue supergiant exploded in a matter of seconds, allow direct tests of ideas about the synthesis of heavy elements. For example, observations of the characteristic gamma rays indicate the presence of the corresponding isotopes synthesized in the particular star or during the explosion. Elements beyond iron are produced mainly in a sequence of rapid neutron captures known as the r-process. It turns out that the element abundances are mainly determined by nuclear structure, and hence by the shell model; the subtleties of the astrophysical processes prove to be comparatively unimportant.

In the final talk of the symposium, Peter Armbruster of GSI in Darmstadt explained the synthesis of the heaviest elements using cold fusion (only one neutron emitted) up to and beyond roentgenium, symbol Rg and atomic number Z = 111. The relative stability of these elements, with mean lifetimes of the order of milliseconds to seconds, is a consequence of the Goeppert–Jensen shell effects; without these they would not exist. The element with Z = 112, synthesized at GSI in 1996, is still unnamed. Meanwhile, Yuri Oganessian’s group at the Flerov Laboratory at JINR, Dubna, has used radioactive targets in hot-fusion reactions, with the emission of up to five neutrons, to synthesize elements 114, 116 and 118. Kosuke Morita and co-workers at RIKEN in Japan made element 113 in 2004.

Relativistic mean-field calculations indicate that the next closed shell should occur at Z = 120 protons, with the magic neutron number of 184, as had appeared in the book on the shell model by Goeppert-Mayer and Jensen (Goeppert-Mayer and Jensen 1955). This means that this doubly magic superheavy nucleus should have 304 nucleons. It will, however, be extremely difficult to synthesize, since its relatively low density of energy levels above the ground state favours fission over neutron emission, as Armbruster emphasized, leading to a drastic reduction of the survival probability.

As a lasting tribute to Jensen, starting next year, the Jensen Guest Professorship will be created with the financial support of the Klaus Tschira Foundation, Heidelberg. During a five-year period, internationally renowned physicists will visit the Institute for Theoretical Physics in Heidelberg to conduct research, give seminars and one public lecture a year.

Postcards from the LHC


March: Precision is the name of the game as, once in position in the tunnel, the LHC’s magnets are carefully aligned.


March: The Train Inspection Monorail, affectionately referred to as “TIM,” will allow teams to view the LHC tunnel and take measurements remotely when it is inaccessible to humans.


The last of 1746 superconducting magnets is lowered into the LHC tunnel via a specially constructed pit at 12.00 on 26 April. This 15 m long dipole magnet is one of 1232 dipoles that will guide the two proton beams in opposite directions around the 27 km circumference.


Gently does it: In January, the lorry transporting the time projection chamber for the ALICE experiment took an hour to travel the 200 m from the assembly hall to the access shaft for the underground cavern.


The first half of the CMS barrel hadron calorimeter cylinder was lowered into the underground cavern in February. It weighs almost 600 tonnes.


In July the CMS forward pixel detector, which was built at Fermilab, underwent an installation test. The photo shows the central opening of the silicon strip tracker where the beam pipe and pixel detector will be located.


January: The CMS tracker outer barrel is inside the tracker support tube, fully cabled. The golden rectangles are digital optohybrid modules for distributing clock and trigger signals.


ALICE’s inner tracking system (ITS) was installed into the heart of the experiment in March. It was a delicate task to fit the ITS within the time projection chamber.


The 42nd and final module for LHCb’s vertex locator arrived from Liverpool in March, marking the culmination of 10 years of development. The detector will be placed just 5 mm from the beam line.


The outer layers of ALICE’s ITS, seen prior to installation in March, contain almost 5 m² of double-sided silicon strip detectors.


The first inner detector endcap for the ATLAS experiment is fully inserted into the liquid-argon cryostat in May.


March: End view of the heat shield and cryostat of one of the ATLAS endcap toroids while still in the assembly hall before the mounting of detectors.


Lowering the second ATLAS endcap toroid magnet into the cavern in July.

 

ALMA: a guided tour with Massimo Tarenghi


Massimo Tarenghi has been described as “an excellent scientist and an energetic manager” by physics Nobel laureate Riccardo Giacconi, his colleague and fellow pupil of Beppo Occhialini. Tarenghi had wanted to be an astronomer since childhood. He graduated from the University of Milan with a degree in theoretical astrophysics and a thesis on gamma radiation from the galactic core. In Arizona he took part in the first research on large-scale galaxy distribution. He then returned to Europe to help lay the foundations of the European Southern Observatory (ESO) at CERN, where ESO had its first offices.


Like Giacconi, Tarenghi is a pioneer of the first large telescopes. He put forward the idea of building ESO’s Very Large Telescope (VLT) and directed the project from 1986 to 2002, commuting from ESO’s eventual premises in Garching, Germany, to the telescope’s site at Paranal in Chile. In 2003 he was appointed director of the Atacama Large Millimeter/submillimeter Array (ALMA), which is under construction in northern Chile on the Chajnantor plateau of the Atacama desert, the highest desert in the world. ALMA is a radio telescope made up of 64 antennas spread over an area of 25 km² at an altitude of 5100 m.

As we climb by jeep from the Operations Support Facility (OSF), ALMA’s base camp at 2900 m, to the construction site at 5100 m, Tarenghi tells us: “When ALMA is ready in 2010, it will be to astronomers the equivalent of the LHC to particle physicists.” ALMA will be operated from the OSF, which will also house offices and laboratories. The ALMA assembly hall and the control room for operating the telescopes remotely are under construction. The circular structure of the buildings echoes Atacameño architecture, honouring the 20,000-strong indigenous population that has lived in this extreme environment for 10,000 years; they will be given free access and job opportunities on the ALMA site.

By late August, three of the antennas had been shipped to Chile from Japan. Assembly and adjustment are taking place in the assembly hall, with the first of the three expected to be installed on the Chajnantor site before the end of the year. “Most of the work after commissioning will also be done here, just below 3000 m, which is surely more comfortable than above 5000 m, where there is 50% less oxygen in the air. It is also more convenient from a legal point of view,” says Tarenghi.

Before leaving the OSF, we went through medical screening to check blood pressure and oxygen levels, and collected oxygen bottles for the trip. With the magnificent backdrop of the snow-capped Licancabur and Lascar volcanoes, the road to Chajnantor winds up through Atacameño archaeological sites and past examples of Echinopsis atacamensis, a rare protected species of centennial cactus that grows up to 9 m tall and only at altitudes of 3200–3800 m. The exceptionally flat access road is 20 km long and 12 m wide; it was built specially to enable the smooth transportation of the 64 antennas from the base camp to the Chajnantor site. The builders had to go around the cacti and archaeological remains to leave them untouched.

After stopping a few times to adjust to the altitude, we reach the ALMA site at 5100 m and admire the view, which is literally breathtaking. The site was chosen for its ideal conditions for radio astronomy. Its isolation guarantees the absence of interfering radio signals from human communications, and the lack of humidity, which would otherwise absorb the millimetre and submillimetre emissions that ALMA is designed to catch, is equally important. Tarenghi explains: “Most of the energy of the universe is in the millimetric and submillimetric radio waves that ALMA is specialized in. In this region of the electromagnetic spectrum, half of the stars in the universe are formed inside interstellar dust, which makes them invisible to optical telescopes. Here interesting astronomical phenomena take place, such as the birth of new stars and galaxies immediately after the Big Bang. ALMA will be able to tell 90% of the history of the universe, which we still do not know. Moreover, in the submillimetric radio waves organic molecules are found, such as carbon compounds and sugars. They are at the origin of life in space, far from the Earth.”


On Chajnantor we reach the Atacama Pathfinder Experiment (APEX), the first antenna installed on the ALMA site, in 2004. It has a dish 12 m in diameter and weighs 120 tonnes. “APEX already obtained an important result in August 2006, when it detected a fluorine-bearing molecule, the first to be found in the interstellar dust of the Orion Nebula – a nice start that shows the scientific richness of this area of the spectrum. We have all the more reason to expect spectacular discoveries after 2010,” Tarenghi tells us.

In 2010 the site will be covered by 64 antennas similar to APEX, made in Europe, Japan and the US. Their signals will be combined by interferometry, creating a radio telescope effectively as big as the largest distance between two antennas. The site will have 197 concrete platforms so that astronomers can lay out the 64 antennas according to their needs. The “compact” configuration, with a 150 m diameter around the centre of the array (COFA), will be used to observe a slice of the sky at the maximum resolution (20 μm); the large array, with a 3 km diameter, will enlarge the visual range exactly like the zoom of a camera. (An array more than 10 km in diameter is also under construction.) Tarenghi adds: “ALMA will be like Hubble on Earth. It’s a unique effort by Europe, the US and Japan. Like the VLT, ALMA differs from other observatories not just in the size and number of telescopes in the array, but because it was designed from scratch as an astronomical research machine with all the telescopes being part of a large unit – like the accelerator complex at CERN, in a way.”
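
The zoom analogy can be made quantitative with the standard diffraction estimate θ ≈ λ/B, where B is the longest baseline. A toy comparison of the two configurations mentioned above (the 1 mm observing wavelength is chosen purely for illustration):

```python
import math

def resolution_arcsec(wavelength_m, baseline_m):
    """Diffraction-limited resolution theta ~ lambda / B, in arcsec."""
    return math.degrees(wavelength_m / baseline_m) * 3600.0

# 1 mm wavelength with the 150 m compact configuration versus
# the 3 km extended array:
for baseline_m in (150.0, 3000.0):
    print(f"{baseline_m:.0f} m baseline -> "
          f"{resolution_arcsec(1e-3, baseline_m):.2f} arcsec")
```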

As former director of the VLT, Tarenghi is uniquely placed to explain how the two projects differ. “VLT looks at the hot universe; ALMA is specialized in the cold universe. The difference is enormous. ALMA will explore areas that are not accessible to optical telescopes. In the millimetric radio waves, luminosity decreases and in the cold areas you have clouds, dust, disks where entire planetary systems are formed around stars. ALMA will be able to see the first galaxies in the universe as they were born around 14 billion [thousand million] years ago,” says Tarenghi.

He also explains why ALMA will be able to see the origin of life in space. The submillimetre region that ALMA specializes in is also where organic molecules are born and where planets form around other stars. ALMA will detect emission from the atmospheres of other planets and will be able to look for the presence of life. “Through the physical and chemical analysis of the atmosphere, ALMA will detect the presence of water, find out when dust grains formed and reconstruct the molecular history of the universe,” he says. “It will map the presence of water to the extreme limits of the universe and stand the highest chance, compared with any other instrument, of finding life on other planets.”

The real innovation that ALMA will bring about is a radical change in the way astronomers work. “ALMA will be an all-rounder, an observatory open to all astronomers, irrespective of their specialization,” says Tarenghi. “Instead of sharing observation time, astronomers will have access to ALMA’s data. Like LEP or the LHC, ALMA will provide access to scientific data that can be used by the entire community, including theoreticians who want to test a theory. This is a huge new step in astronomy.”

The goals of ALMA reflect the challenges in astronomy today. Tarenghi tells us: “We are ignorant of the way planetary systems are formed; we do not know how the first objects were formed and what they looked like, or what the birth rate of stars in the universe is. We know the first galaxies were made of just hydrogen and the second generation of heavier elements, but the sequence of birth and death of stars that gave rise to the formation of planets is not known with accuracy. Only by going to large distances with telescopes that can perform both physical and chemical analysis will we be able to understand the mechanism that formed stars and reconstruct the history of the stars’ birth rate.” Investigating dark matter and dark energy is also a challenge for ALMA, one shared with experiments at the LHC. “These phenomena require a detailed knowledge of the large-scale structures of the universe. Only instruments like ALMA and telescopes like the VLT, which can reach the limits of the warm universe, will give us an idea, as they can provide more data from different observation sources,” concludes Tarenghi. It seems that ALMA, like the LHC, is set to give us a much clearer view of the nature of the universe.

Exotic lead nuclei get into shape at ISOLDE


In nature, relatively few nuclei have a spherical shape in their ground state. Examples are ¹⁶O, ⁴⁰Ca, ⁴⁸Ca and ²⁰⁸Pb, which are “doubly magic”, with numbers of both protons and neutrons corresponding to closed shells in the nuclear shell model. By moving away from the closed shells and increasing the number of valence nucleons, both protons and neutrons, these nuclei can eventually acquire a permanent deformation in their ground state. Experiments reveal that sometimes – due to the complex interplay of single-particle and collective degrees of freedom – both a spherical and a deformed shape occur in the same nucleus at low excitation energies. In the region around lead, for example, physicists in the 1970s first observed this “shape co-existence”, using optical spectroscopy at the ISOLDE facility at CERN (Bonn et al. 1972 and Dabkiewicz et al. 1979). Since then, an extensive amount of data has been collected throughout the chart of nuclei (Wood et al. 1992 and Julin et al. 2001).

Some of the best-known examples of shape co-existence are found in neutron-deficient lead nuclei (atomic number or number of protons, Z = 82). The uniqueness of this region is mainly due to three effects. First, the energy gap of 3.9 MeV above the Z = 82 closed proton shell forces the nuclei to adopt a spherical shape in their ground state. However, the energy difference is small enough for a second effect to occur: the creation of “extra” valence proton particles and holes as a result of proton-pair excitation across the gap. Third, a very large neutron valence space between the shell closures with the number of neutrons N = 82 and 126 results in a large number of possible valence neutrons as nuclei approach the neutron mid-shell at N = 104. The strong deformation-driving interaction between the “extra” valence protons and the valence neutrons produces unusually low-lying, deformed oblate (disc-like) and prolate (cigar-like) states in the vicinity of N = 104, where the number of valence neutrons is maximal (Wood et al. 1992). In some cases, the deformation-driving effect is so strong that the deformed state becomes the ground state, as happens near N = 104 in the light isotopes of mercury (Z = 80) and platinum (Z = 78).

Atomic spectroscopy provides direct and model-independent information on the properties of nuclear ground and isomeric states via a determination of hyperfine structure and the isotope shift. These are small effects on atomic energy levels due to the nuclear moments, masses, sizes and shapes of nuclear isotopes, allowing the spins, moments and changes in charge-radii of nuclei to be deduced. In particular, the changes in charge radii determined from the isotope shifts by optical spectroscopy in long isotopic chains have revealed collective nuclear properties clearly.
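
In outline – a standard decomposition quoted here for orientation, not spelled out in the article – the shift of an optical line between isotopes with mass numbers A and A′ separates into a mass shift and a field shift, the latter carrying the nuclear-size information:

```latex
\delta\nu^{AA'} \;=\; \underbrace{K\,\frac{m_{A'} - m_{A}}{m_{A}\, m_{A'}}}_{\text{mass shift}}
\;+\; \underbrace{F\,\delta\langle r^{2}\rangle^{AA'}}_{\text{field shift}} ,
```

where K and F are electronic factors specific to the transition, so a measured shift δν, together with calculated or calibrated K and F, yields δ⟨r²⟩.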

Figure 1 shows the changes in mean-square charge radii (δ⟨r²⟩) of lead, mercury and platinum isotopes as a function of the number of neutrons. All the data for the nuclides furthest from stability were determined at ISOLDE by a variety of techniques (Otten 1989 and Kluge and Nörtershäuser 2003): nuclear-radiation-detected optical pumping and laser-fluorescence spectroscopy in the 1970s, collinear spectroscopy in the 1980s, and resonance-ionization mass spectroscopy from the late 1980s onwards. Now laser spectroscopy in the laser ion source is used, as described below.

Figure 1 shows how the measured δ⟨r²⟩ values for platinum isotopes develop a distinct deviation from the smoothly decreasing trend expected from the spherical-droplet model. For mercury, a sudden and dramatic change in δ⟨r²⟩, known as “shape staggering”, occurs between ¹⁸⁷Hg and ¹⁸⁵Hg (N = 107 and 105, respectively). A similar change occurs between the isomeric (I = 13/2) and ground (I = 1/2) states in ¹⁸⁵Hg – in this case “shape isomerism” or “shape co-existence” (Bonn et al. 1972 and Dabkiewicz et al. 1979). These effects are interpreted as a change from weakly deformed oblate to strongly deformed prolate shapes. The neutron-deficient lead isotopes are a particularly interesting example of shape co-existence: theoretical calculations have long suggested the co-existence in these nuclei of three different shapes – spherical, prolate and oblate – hence triple co-existence. Recent particle (α, β) and in-beam studies have found strong evidence for this phenomenon in some of the isotopes from ¹⁸²Pb to ²⁰⁸Pb.

One of the most spectacular examples is the mid-shell nucleus ¹⁸⁶Pb, as indicated in figure 2. Here, studies of the α-decay of the parent nucleus ¹⁹⁰Po have revealed a triplet of low-lying (E* < 650 keV) 0⁺ states (Andreyev et al. 2000). These were assigned to co-existing spherical, oblate and prolate shapes, with the spherical state being the ground state. Subsequent in-beam studies identified excited bands built on top of these states. An important question arises, however, concerning the degree of mixing between the different configurations. As the excited 0⁺ states decrease in energy when approaching N = 104 (¹⁸⁶Pb), their mixing with the 0⁺ ground state could increase substantially – an effect that could possibly be seen in the values of the charge radii.


The aim of experiment IS483 at ISOLDE was therefore to measure for the first time the isotope shifts in the atomic spectra of the very neutron-deficient nuclei in the region ¹⁸²Pb to ¹⁹⁰Pb, deducing the mean-square charge radii in order to probe the ground state directly (De Witte et al. 2007 and Andreyev et al. 2002). However, the expected production rates were far too low (e.g. 1 ion/s for ¹⁸²Pb) for the laser spectroscopy techniques used previously at ISOLDE. Instead, an extremely sensitive technique was employed: resonance ionization spectroscopy in the ion source, first developed at the Petersburg Nuclear Physics Institute in Gatchina for the investigation of rare-earth isotopes (Alkhazov et al. 1992).

The radioactive lead isotopes are produced at ISOLDE in a proton-induced spallation reaction, using protons at 1.4 GeV on a thick (50 g/cm²) target of uranium carbide (UCₓ). The reaction products diffuse out of the target towards the ionizer tube, which is heated to around 2050 °C. In the tube, a three-step laser ionization process selectively ionizes the lead isotopes. To determine the isotope shift of the appropriate optical spectral line, the laser for the first excitation step is set to a narrow linewidth of 1.2 GHz and its frequency is scanned over the resonance. After ionization and extraction, the radioactive ions are accelerated to 60 keV, mass separated and subsequently implanted in a carbon foil mounted on a rotating wheel at the focal plane of ISOLDE. A circular silicon detector (150 mm² × 300 μm) placed behind the foil measures the α-radiation during a fixed implantation time, after which the laser frequency is changed and the implantation-measurement cycle repeated. The implanted lead ions are counted via their characteristic α-decay.
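
The implantation-measurement cycle amounts to a simple control loop. The sketch below is purely schematic: the detector response is replaced by a toy resonance with the quoted 1.2 GHz width, and none of the names correspond to the real ISOLDE controls:

```python
import math

def toy_alpha_counts(freq_offset_ghz, implant_time_s):
    """Stand-in for the alpha count from the silicon detector:
    a toy Gaussian resonance with the 1.2 GHz laser linewidth."""
    rate = 5.0 * math.exp(-(freq_offset_ghz / 1.2) ** 2)  # counts per second
    return int(rate * implant_time_s)

def scan(frequency_offsets_ghz, implant_time_s=60):
    """One point per laser setting: tune the first-step laser, implant
    for a fixed time, count alphas, then step the frequency. Comparing
    resonance centroids between isotopes gives the isotope shift."""
    return {f: toy_alpha_counts(f, implant_time_s)
            for f in frequency_offsets_ghz}

print(scan([-3, -2, -1, 0, 1, 2, 3]))
```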


Figure 3 shows the intensity of the α-lines as a function of laser frequency for a sequence of nuclei (with even N) from ¹⁸⁸Pb to ¹⁸²Pb. This reveals the optical isotope shift, which allows the values of δ⟨r²⟩ shown in figure 1 to be deduced. Similarly, the experiment also measured isotopes with an odd number of neutrons, ¹⁸³,¹⁸⁵,¹⁸⁷Pb, all of them produced in both the ground and isomeric states. Note that “isomer separation” can be achieved by tuning the laser frequency to specific values at which only one of the isomers is selectively ionized in the cavity and subsequently extracted and analysed.

Figure 1 compares the deduced values of δ⟨r²⟩ with the predictions of the spherical-droplet model. The deviation from these predictions increases when moving away from the Z = 82 closed proton shell of lead. The large deviation observed for the ground state of the odd-mass mercury isotopes and the odd- and even-mass platinum isotopes around N = 104 has been interpreted as a result of the onset of strong prolate deformation. In the case of lead, from ¹⁹⁰Pb downwards, the δ⟨r²⟩ data show a distinct deviation from the spherical-droplet model. This suggests modest ground-state deformation, but comparisons of the data with model calculations show that δ⟨r²⟩ is sensitive to correlations in the ground-state wave functions and that the lead isotopes essentially stay spherical in their ground state at – and even beyond – the N = 104 mid-shell region.

This experiment has shown that the extreme sensitivity of combined in-source laser spectroscopy and α-detection allows the heavy-mass regions far from stability to be explored with isotopes produced at a rate of only a few ions a second (¹⁸²Pb). An important next step would be to use the isomer shift in odd-mass isotopes to ionize nuclei selectively in their ground or isomeric state, to post-accelerate these with the REX-ISOLDE facility, and to use the resulting isomerically pure beams of the 13/2⁺ and 3/2⁻ isomers to investigate, for example, the influence of different spin states of the same incident particle on the reaction mechanism.

Strangeness, charm and beauty come to Slovakia

The International Conference on Strangeness in Quark Matter, SQM 2007, took place on 24–29 June in the charming old town of Levoča, in the Spiš region of north-eastern Slovakia. Organized by the Institute of Experimental Physics of the Slovak Academy of Sciences, Košice, it was the 12th in a well-established series of topical conferences that bring together experts working in particle physics, nuclear physics and cosmology. More than 100 scientists from 20 countries took part this year, and the contributions covered a wide range of issues, from the bulk properties of the partonic matter created in nucleus–nucleus collisions to the energy loss of fast partons traversing the medium, with a particular emphasis on the perspectives for the future.


The SQM series is currently dedicated to understanding what the production of strange – and also charm and beauty – particles can reveal about the hot and dense partonic matter formed in a high-energy nucleus–nucleus collision. It could perhaps more appropriately be called Strangeness, Charm and Beauty in Quark Matter. However, because of tradition, the original name has stuck. The extension to flavours heavier than strangeness has occurred naturally over the years as the high energies available at RHIC (and expected at the LHC) have turned charm- and beauty-flavoured particles into practical and promising probes for exploring QCD matter. On the experimental side, the challenge of detecting strange, charm and beauty particles is similar – although more difficult with charm and beauty – as the complete identification of all of these types of particle relies on identifying their decay products and decay vertices. Hence the need for similar techniques with the three flavours, both for the apparatus (high-granularity vertex detectors) and for the analysis. The SQM conferences therefore provide an excellent forum for researchers in this field to exchange not only physics results, but also information on experimental techniques and analysis methods.

There were more than 70 theoretical and experimental contributions this year, including review talks and reports from all of the active experiments at Brookhaven’s RHIC (BRAHMS, PHENIX, PHOBOS and STAR), at CERN’s SPS (CERES, NA49, NA57 and NA60) and at GSI’s heavy-ion synchrotron, SIS (FOPI). As the start-up of the LHC is just around the corner, more contributions than ever illustrated the plans for physics at future facilities. There were presentations on ALICE, the LHC experiment dedicated to heavy-ion physics, on the heavy-ion programmes of ATLAS and CMS, and on the Compressed Baryonic Matter (CBM) experiment planned at the Facility for Antiproton and Ion Research at GSI.

The first day was devoted to a symposium where graduate students and postdocs had the opportunity to present their research results. Before the summary talks on the last day, a brief commemoration took place in honour of Maurice Jacob, a leader in the theory of high-energy hadron physics, a strong supporter of heavy-ion physics and a friend to many of us. He passed away on 2 May, and we are all sorry that he did not live to enjoy the LHC’s results.

Hadronization and fragmentation

The bulk of the observed hadrons with low transverse momenta (pT < 2 GeV/c) are produced from matter that seems to be well equilibrated by the time it dresses up into hadrons. In other words, statistical hadronization models reproduce hadron yields and ratios well in terms of only a few fitted parameters, such as temperature and chemical potentials. A robust collective flow accompanies this equilibration. In non-central collisions, the spatial azimuthal asymmetry of the initial state transfers very efficiently to a momentum asymmetry of the final state. In a hydrodynamical description, an “elliptic flow” of this kind – generated at the early stages of the expansion – gives access to the equation of state of partonic matter. The combination of hydrodynamics and statistical hadronization leads to a reasonable parameterization of the low-pT hadronic spectra and elliptic flow.
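
For reference, “elliptic flow” is conventionally quantified as the second coefficient v₂ in the Fourier expansion of the azimuthal particle distribution about the reaction plane Ψ (a standard definition, not given explicitly in the article):

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + \sum_{n \geq 1} 2 v_n \cos n(\varphi - \Psi),
\qquad
v_2 = \left\langle \cos 2(\varphi - \Psi) \right\rangle ,
```

so an almond-shaped overlap zone that pushes particles preferentially into the reaction plane shows up as v₂ > 0.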


Many of the theory presentations dealt with the understanding of relativistic hydrodynamics and of the quark-matter equation of state. Among several new results on the experimental side, we note that the RHIC data on copper–copper collisions at 200 GeV show enhancements of the Λ, Ξ, anti-Λ, anti-Ξ and φ-meson yields with respect to proton–proton collisions. These enhancements are similar to those found (at a given number of participant nucleons) in gold–gold collisions at the same energy and in lead–lead collisions at the energies of CERN’s SPS.

The presence of the medium appears to modify fragmentation functions, which describe the dressing up of partons into final-state particles. At high pT, the fragmentation of the parent parton is the dominant process. At intermediate pT (2 < pT < 6 GeV/c), however, valence-quark recombination or coalescence seems to play an important role. As a result, hadron production cannot be considered to be either thermal or perturbative, since the medium interferes with the hadronization process. For example, if hadrons are formed by recombination, the features of the parton spectrum are shifted to higher pT in the hadron spectrum – and in a different way for mesons than for baryons.

In this context interesting new results on K* production were presented. The azimuthal asymmetry of these particles corresponds to that expected from the recombination of two valence quarks. This would occur if coalescence of a valence quark–antiquark pair forms the K*. This is in contrast to what would happen if the K* were produced in the hadronic phase by combining a K and a π, each formed from a valence quark–antiquark pair, therefore requiring the recombination of four valence quarks (figure 1).

Fast parton energy loss

Strong quenching of hadrons with large transverse momentum (pT > 6 GeV/c) is another striking phenomenon, first observed at RHIC. The high-pT partons generated in hard scatterings at the initial stages of the nucleus–nucleus collisions do not fly away and hadronize freely; instead, the nearby matter seems largely to absorb them. High-pT photons, by contrast, remain essentially unaffected, leading to a picture of a dense medium that is opaque to partonic, coloured projectiles but relatively transparent to photons.

Vigorous theoretical and experimental efforts are under way to understand parton energy loss in terms of perturbative QCD (pQCD). Various groups have described the suppression of light hadrons in terms of radiative energy loss by gluon bremsstrahlung. According to such calculations, charm and beauty quarks should be absorbed significantly less than light quarks and gluons. However, data from the PHENIX and STAR experiments, which compare the production in nucleus–nucleus and proton–proton collisions of high-pT “non-photonic” electrons (thought to originate mainly in heavy-flavour decays), seem to indicate that heavy quarks lose as much energy as light quarks do.


Many contributions at SQM 2007 were devoted to this puzzle. Attempts to reduce the disagreement by including elastic-scattering losses in addition to the radiative ones are being considered. On the experimental side, participants stressed the need to separate out the fraction of electrons coming from the decay of beauty hadrons, since b quarks are expected to lose even less energy than c quarks. Another important experimental caveat concerns the distribution of heavy quarks among the different heavy-flavour hadron species. This could change when going from proton–proton to nucleus–nucleus collisions, leading to pT-dependent variations of the semi-electronic branching ratios. Such an effect should obviously be kept under control when comparing electron production in nucleus–nucleus and proton–proton collisions. Some groups are making useful attempts in these directions by identifying the charmed meson D⁰ through the reconstruction of its decay. However, vertex detectors such as those of the LHC experiments are necessary for pursuing these studies further.

The fate of the energy deposited by the partons along their path also turns out to be non-trivial. It appears as though the partons’ propagation gives rise to some collective hydrodynamical motion. Among the contributions on this subject was an interesting study of the response of the medium to energy loss, analysing two- and three-particle correlations. The results seem to indicate a peak in particle production on a cone at an angle of about one radian from the direction of the propagating parton. A possible explanation would be the generation of a shock wave in the medium. The answer to this and many other questions will probably have to wait for the LHC data. We hope that there will be some to discuss at the next two conferences in this series, to be held in Beijing (2008) and Rio de Janeiro (2009).

WMAP’s cold spot shows giant void in space

An enormous void, nearly a thousand million light-years across, seems to be at the origin of a cold spot that the Wilkinson Microwave Anisotropy Probe (WMAP) has found in the cosmic microwave background (CMB). This region, largely empty of galaxies and dark matter, is much larger than any void previously observed or predicted by computer simulations.

The map of temperature fluctuations in the CMB as observed by WMAP shows a distinct feature known as the “cold spot”. Some attempts to explain this peculiar feature have invoked non-Gaussian processes to alter the radiation when it was emitted, about 400,000 years after the Big Bang. Alternatively, the observed radiation could have been modified on its journey of thousands of millions of years from the outer regions of the universe to the Earth.

For instance, a CMB photon can gain energy by “falling” into the potential well of a dense region, such as a cluster of galaxies. Normally, it should lose the same amount of energy again when moving out of this area. However, under the effect of dark energy the potential well becomes shallower with cosmic time, so the photon’s energy loss does not completely balance its earlier gain. This subtle effect – known as the “late integrated Sachs–Wolfe effect” – would make a hot spot in the CMB map in the line of sight of a galaxy cluster. In the opposite case of an extended void in space, the net effect on the CMB map would produce a cold spot.
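
Schematically – the standard linear-theory expression, quoted here for orientation – the net temperature shift accumulated along the line of sight is an integral over the time variation of the gravitational potential Φ:

```latex
\left.\frac{\Delta T}{T}\right|_{\mathrm{ISW}} \;=\; \frac{2}{c^{2}} \int \frac{\partial \Phi}{\partial t}\, \mathrm{d}t ,
```

so a well that becomes shallower while the photon crosses it yields a net energy gain (a hot spot), while a decaying underdensity yields a cold spot.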

Is there something special about the distribution of galaxies in the direction of WMAP’s cold spot? This is what Lawrence Rudnick and colleagues from the University of Minnesota wondered. Their investigation is based on the NVSS – the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey. The NVSS programme observed 82% of the sky visible from the VLA site in New Mexico from 1990 to 1997 and produced a catalogue of more than 1.8 million individual radio sources. By smoothing these observations to a resolution of a few degrees, Rudnick found a clear dip exactly at the position of the WMAP cold spot: both the radio intensity and the number of radio sources are lower in this region of the river constellation Eridanus.

As radio sources are good tracers of the presence of galaxies – and thus of mass in the universe – it makes sense to assume that the WMAP cold spot arises from this dip in the projected distribution of galaxies through the late integrated Sachs–Wolfe effect. Rudnick and colleagues estimate the size of the volume that would need to be almost empty of matter to explain the WMAP cold spot through this effect. The result is a void almost a thousand million light-years across, which should be located in the relatively nearby universe – at most at a redshift of z ≈ 1 – where the effect of dark energy starts to dominate the expansion rate of the universe (CERN Courier September 2003 p23). Such a big void exceeds by far the size of known regions of empty space, as well as the expectations from computer simulations of large-scale structure (CERN Courier September 2007 p11). The WMAP cold spot therefore remains a puzzle – no longer as a peculiarity of the very early universe, but as an oddity of the era of structure formation.

Statistical Methods in Experimental Physics (2nd edition)

By Frederick James, World Scientific Publishing. Hardback ISBN 9789812567956 £33 ($58). Paperback ISBN 9789812705273 £17 ($30).


In this second edition many chapters now include considerable new material, especially in areas concerning the theory and practice of confidence intervals, including the important Feldman–Cousins method. Both frequentist and Bayesian methodologies are presented, with a strong emphasis on techniques that are useful to physicists and other scientists in the interpretation of experimental data and comparison with scientific theories. This textbook is suitable for advanced graduate students in the physical sciences, as well as a reference for active researchers.

Fast fragmentation produces double firsts for exotic nuclei

Researchers at the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University have made new observations in different regions of the isotopic landscape by examining the nuclear structure of ⁶⁴Ge and ³⁶Mg.


Nuclei with equal proton (Z) and neutron (N) numbers are important in unravelling nuclear structure, in particular in the context of the shell model. Between ⁵⁶Ni and ¹⁰⁰Sn they exhibit a variety of shapes, evolving from spherical to prolate (cigar-shaped) to oblate (pancake-shaped) as the mass increases. Studies of transition rates between excited states and ground states in these nuclei provide important information to test shell-model predictions.


One such experiment at NSCL has studied ⁶⁴Ge (N = Z = 32), making use of the recoil distance method (RDM) to measure the lifetimes of two excited states (Starosta et al. 2007). This was only the second measurement of this kind in this region of isotopes, and the first to use the RDM at a fast-fragmentation facility. The beam speed at NSCL, 10 times higher than in previous RDM studies, allows for greater precision and gives access to a range of previously unattainable isotopes.

The experiment used a variety of state-of-the-art techniques, including a plunger device developed at the University of Cologne for use with the RDM. The ⁶⁴Ge nuclei were produced in reactions in which a single neutron was knocked out of incident ⁶⁵Ge nuclei in a beam containing a mixture of rare isotopes. The RDM used high-resolution gamma-ray spectroscopy and the Doppler effect to determine the lifetimes of the excited states. The results agree well with large-scale shell-model calculations for the two excited states studied, and show the promise of the techniques used.
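
In the recoil distance method, the lifetime follows from how the intensities of the fully Doppler-shifted and degraded-velocity gamma-ray peaks vary with the target-degrader distance. A toy version of the underlying exponential (non-relativistic limit, with invented numbers):

```python
import math

def unshifted_fraction(distance_um, v_um_per_ps, tau_ps):
    """Fraction of excited nuclei that survive the flight from target
    to degrader and so decay downstream at the degraded velocity,
    feeding the 'slow' peak of the gamma-ray line."""
    return math.exp(-distance_um / (v_um_per_ps * tau_ps))

# Toy numbers: v = 100 um/ps (a fast-fragmentation beam) and a 5 ps
# level lifetime; varying the plunger distance maps out the decay.
for d in (100, 500, 1000):
    print(d, "um:", round(unshifted_fraction(d, 100.0, 5.0), 3))
```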

Exotic nuclei far from N = Z, with too many neutrons, offer other possibilities for testing shell-model predictions. One area of interest is the “island of inversion” where around a dozen neutron-rich isotopes should exhibit shell orderings that differ from standard theoretical predictions.

Studies of magnesium isotopes have already placed ³¹⁻³⁴Mg (Z = 12, N = 19–22) in the island. Now, for the first time, an experiment at NSCL has examined the shell structure of ³⁶Mg, which has as many as 24 neutrons (Gade et al. 2007). In this case, a secondary beam of ³⁸Si collided with a beryllium target to create ³⁶Mg on rare occasions: only 1 in 400,000 ³⁸Si nuclei yielded the desired ³⁶Mg. Spectroscopic measurements of the first excited state confirmed shell-model predictions, placing ³⁶Mg in the island of inversion as expected.
