
Cornell gets funding for brighter X-rays

The US National Science Foundation (NSF) has awarded Cornell University $18 million to begin developing a high-brilliance, high-current Energy Recovery Linac (ERL) synchrotron radiation X-ray source.


All existing hard X-ray synchrotron radiation facilities are based on storage rings. Equilibrium emittance considerations limit the X-ray brilliance that is practically attainable and the ability to make short intense X-ray pulses. In an ERL the electron bunches are not stored; rather, electron bunches with very low emittance are created then accelerated by a superconducting linac.

After one circuit around a transport loop, where the X-rays are produced, the electron energy is extracted back into the radio-frequency (RF) field of the linac and used to accelerate new bunches. The energy-depleted bunches are dumped.
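
To get a feel for why recovering the beam energy matters, the sketch below compares the RF power a linac would have to supply with and without energy recovery. The beam energy, current and recovery efficiency are illustrative assumptions in the spirit of an ERL design, not figures quoted in this article.

```python
# Toy energy bookkeeping for an energy-recovery linac (ERL).
# All numbers are assumed, illustrative values (not from the article).
beam_energy_GeV = 5.0        # assumed final electron energy
beam_current_A = 0.1         # assumed average beam current (100 mA)
recovery_efficiency = 0.999  # assumed fraction of beam energy returned to the RF

beam_power_MW = beam_energy_GeV * 1e9 * beam_current_A / 1e6   # P = E * I

# Without recovery the RF system must replace the full beam power that is dumped;
# with recovery it only has to make up the fraction that is not returned.
rf_power_no_recovery_MW = beam_power_MW
rf_power_with_recovery_MW = beam_power_MW * (1 - recovery_efficiency)

print(f"Beam power:                {beam_power_MW:.0f} MW")
print(f"RF power without recovery: {rf_power_no_recovery_MW:.0f} MW")
print(f"RF power with recovery:    {rf_power_with_recovery_MW:.1f} MW")
```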

The X-ray beams from ERLs are predicted to surpass those from today's sources by around a factor of 1000 in brightness, coherence and pulse duration. They will enable investigations that are impossible to perform with existing X-ray sources.

The ERL is based on accelerator physics and superconducting microwave technology in which Cornell’s Laboratory of Elementary Particle Physics is a world leader. The NSF award to Cornell will fund the prototyping of critical components of the machine. The design team, led by Cornell’s professors Sol Gruner and Maury Tigner, has already almost completed the prototype design; scientists from Jefferson Laboratory worked with Cornell on the initial design. Prototype construction and testing should finish in 2008. Cornell then will seek funding for a full-scale ERL facility as an upgrade of the present synchrotron radiation facility, the Cornell High Energy Synchrotron Source (CHESS), which is based on the Cornell Electron Storage Ring (CESR).

Radio astronomers observe a possible dark-matter galaxy

An international team of astronomers has discovered what appears to be an invisible galaxy made almost entirely of dark matter – the first ever detected. The mystery galaxy, VIRGOHI21, lies in the Virgo cluster of galaxies, some 50 million light-years from Earth.

Astronomers know that visible galaxies contain more mass than the luminous matter observed: their rotation velocities indicate the presence of haloes containing large amounts of dark matter. However, simulations of cold dark matter in the universe predict more dark haloes than galaxies, leading to the idea that there are dark haloes without stars: dark galaxies.

The team, from the UK, France, Italy and Australia, has been searching for dark galaxies by studying the distribution of hydrogen atoms throughout the universe via their emission at radio wavelengths, in particular using the 21 cm line of atomic hydrogen. VIRGOHI21, a huge cloud of neutral hydrogen with a mass 100 million times that of the Sun, was first seen with the University of Manchester’s Lovell Telescope at Jodrell Bank, UK. The sighting was later confirmed with the Arecibo telescope in Puerto Rico.

The speed at which it spins indicates that there is more to VIRGOHI21 than hydrogen. The rotation velocity implies a mass 1000 times greater than the amount of hydrogen, and at the distance of the Virgo cluster this should be in the form of a galaxy shining at 12th magnitude. However, when the team studied the area in question using the Isaac Newton Telescope in La Palma, Canary Islands, they found no visible trace of an optical counterpart for VIRGOHI21.
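
As a rough cross-check of the factor-of-1000 claim, a rotation velocity and disc radius translate into a dynamical mass through M ≈ v²R/G. The velocity and radius below are illustrative assumptions, not values quoted in this article; only the order of magnitude of the ratio matters.

```python
# Rough dynamical-mass estimate from rotation, M ~ v^2 * R / G.
# The rotation velocity and radius are assumed, illustrative values.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30         # solar mass, kg

v = 150e3                # assumed rotation velocity, m/s
R = 20 * 3.086e19        # assumed radius of the hydrogen disc: 20 kpc in metres

M_dyn = v**2 * R / G     # enclosed dynamical mass, kg
M_HI = 1e8 * M_sun       # hydrogen mass quoted in the article

print(f"Dynamical mass   ~ {M_dyn / M_sun:.1e} solar masses")
print(f"Ratio to HI mass ~ {M_dyn / M_HI:.0f}")
```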

Dark galaxies are thought to form when the density of matter in a galaxy is too low to create the conditions for star formation. The observations of VIRGOHI21 may have other explanations, but they are consistent with the hydrogen being in a flat disc of rotating material – which is what is seen in ordinary spiral galaxies. Similar objects that have previously been discovered have since turned out to contain stars when studied with high-powered optical telescopes. Others have been found to be the remnants of two galaxies colliding, but in this case there is no evidence for such an encounter.

The team first observed the dark object in 2000, but it has taken almost five years to rule out the other possible explanations.

Further reading

R Minchin et al. www.arxiv.org/abs/astro-ph/0502312.

Rewriting the rules on proton acceleration

For half a century, the synchrotron has been the workhorse of high-energy particle physics, from its first use with external beams to the modern particle colliders. The basic principle is to use the electric field in a radio-frequency (RF) wave to accelerate charged particles, the frequency varying to keep in time with particles on a constant trajectory through a ring of guiding magnets.
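
To make “the frequency varying to keep in time” concrete, the snippet below sketches how the RF frequency must track the revolution frequency of protons on a fixed orbit as they gain energy. The circumference and harmonic number are illustrative assumptions, not the parameters of any particular machine.

```python
import math

# Sketch: RF frequency tracking in a proton synchrotron.
# Circumference and harmonic number are assumed, illustrative values.
c = 2.998e8             # speed of light, m/s
m_p = 0.938272          # proton rest energy, GeV
circumference = 340.0   # assumed ring circumference, m
harmonic = 9            # assumed RF harmonic number

def rf_frequency(kinetic_energy_GeV):
    """RF frequency (Hz) that keeps in step with a proton of this kinetic energy."""
    gamma = 1.0 + kinetic_energy_GeV / m_p
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    f_rev = beta * c / circumference    # revolution frequency on the fixed orbit
    return harmonic * f_rev             # RF stays an integer multiple of f_rev

for T in (0.5, 2.0, 8.0):               # kinetic energies in GeV
    print(f"T = {T:4.1f} GeV  ->  f_RF = {rf_frequency(T) / 1e6:.2f} MHz")
```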


Now a team has demonstrated a different way of accelerating protons in tests at the proton synchrotron (PS) at KEK, the Japanese High Energy Accelerator Research Organization in Tsukuba. For the first time, a bunch of protons in the synchrotron has been accelerated by an induction method (K Takayama et al. 2004). The technique may overcome certain effects that normally limit the intensity achieved in a synchrotron beam, and could prove to be an important advance for future proton colliders.

The concept of an “induction synchrotron” was first proposed about five years ago by the author and Jun-ichi Kishiro of KEK and the Japan Atomic Energy Research Institute (Takayama and Kishiro 2000). The idea was to overcome shortcomings of the RF synchrotron, in particular the limited longitudinal phase-space available for the acceleration of charged particles – in other words the distribution in energy and position around the ring of the particles being accelerated. In a conventional synchrotron, the particles are accelerated when they pass through an RF cavity, a device that contains the oscillating radio wave. The electric field naturally concentrates the particles into bunches in the direction of motion (i.e. longitudinally).

In the induction synchrotron, however, the accelerating devices are replaced with induction devices, in which a changing magnetic field produces the electric field to accelerate the particles. The basic device is a ferromagnetic ring, or core, through which the particles pass. A pulsed voltage sets up a magnetic field, and the changing magnetic flux in turn induces an electric field along the axis of the core. The induction-acceleration technique was first developed in the late 1960s and has a range of applications in linear accelerators, but the recent KEK experiment was the first time it was applied in a circular machine.
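
A minimal sketch of the induction principle: by Faraday’s law the voltage per core is V = A·dB/dt, so the core cross-section, the usable flux swing and the pulse length set the volt-seconds available. The numbers below are assumed for illustration and are not the parameters of the KEK cells.

```python
# Induction-cell volt-second budget, V = A * dB/dt (Faraday's law).
# Core area, flux swing and pulse length are assumed, illustrative values.
core_area_m2 = 2e-3        # assumed magnetic core cross-section, m^2
delta_B_T = 2.0            # assumed usable flux swing of the core material, T
pulse_length_s = 500e-9    # assumed flat-top length of the voltage pulse, s

volt_seconds = core_area_m2 * delta_B_T     # available volt-seconds per core
voltage_V = volt_seconds / pulse_length_s   # flat-top voltage one core can sustain

print(f"Volt-seconds per core: {volt_seconds * 1e3:.1f} mV*s")
print(f"Sustainable voltage for a {pulse_length_s * 1e9:.0f} ns pulse: {voltage_V / 1e3:.0f} kV")
```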


The system consists basically of an induction cavity with three cells driven by a pulse modulator as shown in figure 1. The cells developed for the experiment, which are rather like one-to-one transformers, use a nanocrystalline alloy as the magnetic-core material. The pulse modulator is connected to the acceleration cavity through a 40 m long transmission cable to keep the modulator far from the accelerator, where its solid-state switching elements would be exposed to high radiation. A matching resistance at the driver end reduces reflections. The pulse modulator can be operated in various modes from burst to 1 MHz continuous-wave via a system controlled by a digital signal processor (DSP).

In July 2004 the system was demonstrated to be capable of generating a step-pulse of 2 kV and a peak current of 18 A at 1 MHz with a duty cycle of 50%. It was then installed in the KEK PS in September, ready to test induction acceleration.

For the experiment a single bunch of 6 × 10¹¹ protons was injected into the main ring at 500 MeV, trapped in an RF bucket and accelerated up to 8 GeV. The aim was that the RF would simply capture the beam bunch while the induction voltage provided the acceleration. The timing of the master trigger for the pulse modulator was adjusted via the DSP so that the signal from the bunch stayed around the centre of the induction voltage pulse for the entire accelerating period. Figure 2 shows typical waveforms of the induction voltage signals for the three cells, plus the bunch signals.


To confirm the induction acceleration, the relative phase difference Δφ between the RF and the bunch centre was measured for three cases: with the RF voltage alone; with both the RF and positive induction voltages for acceleration; and with the RF and negative induction voltages. With both the RF and induction voltages, the centre of the bunch receives an effective voltage per turn of V = Vrf sinφ + Vind, where Vrf and Vind are the RF and the induction voltages respectively, and φ is the position of the bunch in the RF phase. A value for V of 4.8 kV is required for the RF bunch to follow the linearly ramping bending field of the synchrotron magnets.

Figure 3 shows the temporal evolution of the measured phase for the three cases. The results are in close agreement with the predictions from the equation for the voltage per turn, namely φ = 5.7°, -1.0° and 12.4° for the three cases, with Vrf = 48 kV and an induction voltage of magnitude 5.6 kV. The position of the proton bunch relative to the RF voltage for each case is also shown schematically on the right in the same figure. The plots indicate the successful acceleration of the bunch beyond the transition energy. This is the critical energy, characteristic of a strong-focusing synchrotron, at which the particles’ revolution period becomes almost independent of energy and the stable phase position switches from one side of the RF pulse to the other, as indicated in figure 3.
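
The quoted phases can be checked directly from V = Vrf sinφ + Vind with the 4.8 kV per turn required for the ramp; this short calculation uses only numbers given in the text.

```python
import math

# Consistency check of the stable phase for the three cases in the text,
# from V = Vrf * sin(phi) + Vind with V = 4.8 kV per turn.
V_required = 4.8   # kV, voltage per turn needed to follow the ramping bending field
V_rf = 48.0        # kV, amplitude of the RF voltage

cases = (("RF only",                  0.0),
         ("RF + positive induction",  5.6),
         ("RF + negative induction", -5.6))

for label, V_ind in cases:
    phi = math.degrees(math.asin((V_required - V_ind) / V_rf))
    print(f"{label:24s}: phi = {phi:+5.1f} deg")
```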


These results are the first step in demonstrating the feasibility of an induction synchrotron, which could have important implications for future machines. A significant advantage of the induction technique is that the functions of acceleration and longitudinal focusing are achieved separately. This is not the case in the RF synchrotron where the gradient in the electric field provides the longitudinal confinement. In an induction machine voltage pulses of opposite sign separated by some time period can provide the longitudinal focusing forces. A pair of barrier-voltage pulses should work in a similar way to the RF barrier, which has been demonstrated at Fermilab, Brookhaven National Laboratory and CERN.

Separating the acceleration and focusing functions in the longitudinal direction brings about a significant freedom of beam-handling compared with conventional RF synchrotrons. In particular, it offers a means of forming a “superbunch”: an extremely long beam bunch with a uniform density that would be most attractive in future hadron colliders and proton drivers for neutrino physics. In addition, crossing the transition energy without any longitudinal focusing seems to be feasible, and this could substantially mitigate undesired phenomena, such as bunch-shortening from non-adiabatic motion and microwave instabilities. The next step at KEK will be to test the barrier-voltage concept, proceeding further towards the formation of a superbunch in an induction synchrotron.

LHC upgrade takes shape with CARE and attention

CERN’s Large Hadron Collider (LHC), first seriously discussed more than 20 years ago, is scheduled to begin operating in 2007. The possibility of upgrading the machine is, however, already being seriously studied. By about 2014, the quadrupole magnets in the interaction regions will be nearing the end of their expected radiation lifetime, having absorbed much of the power of the debris from the collisions. There will also be a need to reduce the statistical errors in the experimental data, which will require higher collision rates and hence an increase in the intensity of the colliding beams – in other words, in the machine’s luminosity.


This twofold motivation for an upgrade in luminosity is illustrated in figure 1, which shows two possible scenarios compatible with the baseline design: one in which the luminosity stays constant from 2011 and one in which it reaches its ultimate value in 2016. An improved luminosity will also increase the physics potential, extending the reach of electroweak physics as well as the search for new modes in supersymmetric theories and new massive particles, some of which could be manifestations of extra dimensions.

A timescale of about 10 years from now for the upgrade turns out to be just right for the development, prototyping and production of new superconducting magnets for the interaction regions and of other equipment, provided that an adequate R&D effort starts now. It is against this background that the European Community has supported the High-Energy High-Intensity Hadron-Beams (HHH) Networking Activity, which started in March 2004 as part of the Coordinated Accelerator Research in Europe (CARE) project. HHH has three objectives:
• to establish a roadmap for upgrading the European hadron accelerator infrastructure (at CERN with the LHC and also at Gesellschaft für Schwerionenforschung [GSI], the heavy-ion laboratory in Darmstadt);
• to assemble a community capable of sustaining the technical realization and scientific exploitation of these facilities;
• to propose the necessary accelerator R&D and experimental studies to achieve these goals.
The HHH activity is structured into three work packages. These are named Advancements in Accelerator Magnet Technology, Novel Methods for Accelerator Beam Instrumentation, and Accelerator Physics and Synchrotron Design.


The first workshop of the Accelerator Physics and Synchrotron Design work package, HHH-2004, was held at CERN on 8-11 November 2004. Entitled “Beam Dynamics in Future Hadron Colliders and Rapidly Cycling High-Intensity Synchrotrons”, it was attended by around 100 accelerator and particle physicists, mostly from Europe, but also from the US and Japan. With the subjects covered and the range of participants, the workshop was also able to reinforce vital links and co-operative approaches between high-energy and nuclear physicists and between accelerator-designers and experimenters.

The first session provided overviews of the main goals. Robert Aymar, director-general of CERN, reviewed the priorities of the laboratory until 2010, mentioning among them the development of technical solutions for a luminosity upgrade for the LHC to be commissioned around 2012-2015. The upgrade would be based on a new linac, Linac 4, to provide more intense proton beams, together with new high-field quadrupole magnets in the LHC interaction regions to allow for smaller beam sizes at the collision-points – even with the higher-intensity circulating beams. It would also include rebuilt tracking detectors for the ATLAS and CMS experiments. Jos Engelen, CERN’s chief scientific officer, encouraged the audience to consider the upgrade of the LHC and its injector chain as a unique opportunity for extending the physics reach of the laboratory in the areas of neutrino studies and rare hadron decays, without forgetting the requirements of future neutrino factories.


For the GSI laboratory, the director Walter Henning described the status of the Facility for Antiproton and Ion Research project (FAIR), and its scientific goals for nuclear physics. He pointed to the need for wide international collaboration to launch this accelerator project and to complete the required R&D.

Further talks in the session looked in more detail at the issues involved in an upgrade of the LHC. Frank Zimmermann and Walter Scandale from CERN presented overviews of the accelerator physics and the technological challenges, addressing possible new insertion layouts and scenarios for upgrading the injector-complex. The role of the US community through the LHC Accelerator Research Program (LARP) was described by Steve Peggs from the Brookhaven National Laboratory, who proposed closer coordination with the HHH activity. Finally, Daniel Denegri of CERN and the CMS experiment addressed the challenges to be faced if the LHC detectors are to make full use of a substantial increase in luminosity. He also reviewed the benefits expected for the various physics studies.

The five subsequent sessions were devoted to technical presentations and panel discussions on more specific topics, ranging from the challenges of high-intensity beam dynamics and fast-cycling injectors, to the development of simulation software. A poster session with a wide range of contributions provided a welcome opportunity to find out about further details, and a summary session closed the workshop.

The luminosity challenge

The basic proposal for the LHC upgrade is, after seven years of operation, to increase the luminosity by up to a factor of 10, from the current nominal value of 10³⁴ cm⁻² s⁻¹ to 10³⁵ cm⁻² s⁻¹. The table compares nominal and ultimate LHC parameters with those for three upgrade paths examined at the workshop.

The upgrade currently under discussion will include building essentially new interaction regions, with stronger or larger-aperture “low-beta” quadrupoles in order to reduce the spot size at the collision-point and to provide space for greater crossing angles. Moderate modifications of several subsystems, such as the beam dump, machine protection or collimation, will also be required because of the higher beam current. The choice between possible layouts for the new interaction regions is closely linked to both magnet design and beam dynamics; different approaches could accommodate smaller or larger crossing angles, possibly in combination with an electromagnetic compensation of long-range beam-beam collisions or “crab” cavities (as described below), respectively. A more challenging possibility also envisions the upgrade of the LHC injector chain, employing concepts similar to those being developed for the FAIR project at GSI.

The workshop addressed a broad range of accelerator-physics issues. These included the generation of long and short bunches, the effects of space charge and the electron cloud, beam-beam effects, vacuum stability and conventional beam instabilities.

A key outcome is the elimination of the “superbunch” scheme for the LHC upgrade, in which each proton beam is concentrated into only one or a few long bunches, with much larger local charge density. Speakers at the workshop underlined that this option would pose unsolvable problems for the detectors, the beam dump and the collimator system.

For the other upgrade schemes considered, straightforward methods exist to decrease or increase the bunch length in the LHC by a factor of two or more, possibly with a larger bunch intensity. Joachim Tuckmantel and Heiko Damerau of CERN proposed adding conventional radio-frequency (RF) systems operating at higher-harmonic frequencies to vary the bunch length and in some cases also the longitudinal emittance of the beam, while Ken Takayama promoted a novel scheme based on induction acceleration.

Experiments at CERN and GSI, reported by Giuliano Franchetti of GSI, have clarified mechanisms of beam loss and beam-halo generation, both of which occur as a result of synchrotron motion and space-charge effects due to the natural electrical repulsion between the beam particles. These mechanisms have been confirmed in computer simulations. Studies of the beam-beam interaction – the electromagnetic force exerted on a particle in one beam by all the particles in the other beam – are in progress for the Tevatron at Fermilab, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven, and the LHC.

Tanaji Sen of Fermilab showed that sophisticated simulations can reproduce the lifetimes observed for beam in the Tevatron, and Werner Herr of CERN presented self-consistent 3D simulations for beam-beam interactions in the LHC. Kazuhito Ohmi of KEK conjectured on the origin in hadron colliders of the beam-beam limit – the current threshold above which the size of colliding beams increases with increasing beam intensity. If the limit in the LHC arises from diffusion related to the crossing angle, then RF “crab” cavities, which tilt the particle bunches during the collision process, thus effectively providing head-on collisions despite the crossing angle of the bunch centroids, could raise the luminosity beyond the purely geometrical gain in making the beams collide head-on.
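
The “purely geometrical gain” refers to the luminosity reduction caused by the crossing angle, often written as F = 1/√(1 + (θcσz/2σ*)²). The sketch below evaluates it for parameters of roughly the nominal LHC scale; the values are illustrative assumptions, not numbers taken from this article.

```python
import math

# Geometric luminosity reduction from a crossing angle:
#   F = 1 / sqrt(1 + (theta_c * sigma_z / (2 * sigma_star))**2)
# Parameter values are assumed, roughly at the nominal-LHC scale.
theta_c = 285e-6       # assumed full crossing angle, rad
sigma_z = 7.6e-2       # assumed RMS bunch length, m
sigma_star = 16.7e-6   # assumed transverse RMS beam size at the collision point, m

piwinski = theta_c * sigma_z / (2 * sigma_star)
F = 1.0 / math.sqrt(1.0 + piwinski**2)

print(f"Piwinski angle:                      {piwinski:.2f}")
print(f"Luminosity reduction factor F:       {F:.2f}")
print(f"Gain from head-on (crab) collisions: {1/F:.2f}x")
```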

Another effect to consider in the LHC is the electron cloud, created initially when synchrotron radiation from the proton beam releases photoelectrons at the beam-screen wall. The photoelectrons are pulled toward the positively charged proton bunch and in turn generate secondary electrons when they hit the opposite wall. Jie Wei of Brookhaven presented observations made at RHIC, which demonstrate that the electron cloud becomes more severe for shorter intervals between bunches. This may complicate an LHC upgrade based on shorter bunch-spacing. Oswald Gröbner of CERN also pointed out that secondary ionization of the residual gas by electrons from the electron cloud could compromise the stability of the vacuum.

The wake field generated by an electron cloud requires a modified description compared with a conventional wake field from a vacuum chamber, as Giovanni Rumolo of GSI discussed. His simulations for FAIR suggest that instabilities in a barrier RF system, with a flat bunch profile, qualitatively differ from those for a standard Gaussian bunch with sinusoidal RF. Elias Metral of CERN surveyed conventional beam instabilities and presented a number of countermeasures.

The simulation challenge

In the sessions on simulation tools, a combination of overview talks and panel discussions revisited existing tools and determined the future direction for software codes in the different areas of simulation. The tools available range from well-established commercial codes for impedance calculations to the rapidly evolving codes being developed to simulate the effects of the electron cloud. Benchmarking of codes to increase confidence in their predictive power is essential. Examples discussed included beam-beam simulations and experiments at the Tevatron, RHIC and the Super Proton Synchrotron at CERN; impedance calculations and bench measurements (e.g. for the LHC kicker magnets and collimators); observed and predicted impedance effects (at the Accelerator Test Facility at Brookhaven, DAFNE at Frascati and the Stanford Linear Collider at SLAC); single-particle optics calculations for HERA at DESY, SPEAR-3 at the Stanford Linear Accelerator Center, and the Advanced Light Source at Berkeley; and electron-cloud simulations.

Giulia Bellodi of the Rutherford Appleton Laboratory, Miguel Furman of Lawrence Berkeley National Laboratory, and Daniel Schulte of CERN suggested creating an experimental data bank and a set of standard models, for example for vacuum-chamber surface properties, which would ease future comparisons of different codes. New computing issues, such as parallelization, modern algorithms, numerical collisions, round-off errors and dispersion on a computing Grid were also discussed.

The simulation codes being developed should support all stages of an accelerator project that has shifting requirements; communication with other specialized codes is also often required. The workshop therefore recommended that codes should have toolkits and a modular structure as well as a standard input format, for example in the style of the Methodical Accelerator Design (MAD) software developed at CERN.

Frank Schmidt and Oliver Bruning of CERN stressed that the MAD-X program features a modular structure and a new style of code management. For most applications, complete, self-consistent, 3D descriptions of systems have to co-exist with the tendency towards fast, simplified, few-parameter models – conflicting aspects that can in fact be reconciled by a modular code structure.

The workshop established a list of priorities and future tasks for the various simulation needs and, in view of the rapidly growing computing power available, participants sketched the prospect of an ultimate universal code, as illustrated in figure 2.

• The HHH Networking Activity is supported by the European Community-Research Infrastructure Activity under the European Union’s Sixth Framework Programme “Structuring the European Research Area” (CARE, contract number RII3-CT-2003-506395).

Exploiting the synergy between great and small

There are a number of astrophysical phenomena, notably in connection with cosmology and ultrahigh-energy cosmic rays, that open a new window onto particle physics and lead to a better microscopic understanding of matter, space and time. On the other hand, particle physics is often exploited in great depth for an ultimate understanding of astrophysical phenomena, in particular the structure and evolution of the universe. These frontier-physics issues attracted a record number of 188 participants to Hamburg for the latest annual DESY Theory Workshop, held on 28 September – 1 October 2004 and organized by Georg Raffelt.


The workshop started with the traditional day of introductory lectures aimed at young physicists, which covered the main topics of the later plenary sessions. Most of the participants jumped at this opportunity. At the end of the day, they had learned much about Big Bang cosmology, including the thermal history of the universe; about the evolution of small fluctuations in the early universe, and their imprints on the cosmic microwave background (CMB) radiation and the large-scale distribution of matter; and about how these initial fluctuations may emerge during an inflationary era of the universe. They were also up to date in ultrahigh-energy cosmic-ray physics. Thus the ground was laid for the workshop proper.

Highlighting the dark

In recent years, significant advances have been made in observational cosmology, as several plenary talks emphasized. Observations of large-scale gravity, deep-field galaxy counts and Type Ia supernovae favour a universe that is currently about 70% dark energy – accounting for the observed accelerating expansion of the universe – and about 30% dark matter. The position of the first Doppler peak in recent measurements of the CMB radiation, by for example the Wilkinson Microwave Anisotropy Probe (WMAP) satellite, strongly suggests that the universe is spatially flat. These values for the cosmological parameters, together with today’s Hubble expansion rate, are collectively known as the “concordance” model of cosmology, for they fit a wide assortment of cosmological data. Indeed, we have entered the era of precision cosmology, with the precision set to continue increasing in the coming decade as a result of further observational efforts. It is now the turn of theoretical particle physicists to explain these cosmological findings, in particular why the dominant contribution to the energy density of the present universe is dark and what it is made of microscopically.

Dark matter

Successful Big Bang nucleosynthesis requires that about 5% of the energy content of the universe is in the form of ordinary baryonic matter. But what about the remaining non-baryonic dark matter?


This 25% cannot be accounted for in the Standard Model of particle physics: the only Standard Model candidates for dark matter, the light neutrinos, were relativistic at the time of recombination and therefore cannot explain structure formation on small galactic scales. Studies of the formation of structure – as observed today by the Sloan Digital Sky Survey, for example – from primordial density perturbations measured in the CMB radiation yield an upper bound of about 2% on the energy fraction in massive neutrinos. This translates into an upper bound of around 1 eV for the sum of the neutrino masses. Observations, by means of the forthcoming Planck satellite, of distortions in the temperature and polarization of the CMB will improve the sensitivity in the sum of neutrino masses by an order of magnitude to 0.1 eV. This is comparable to the sensitivity of the future Karlsruhe Tritium Neutrino Experiment (KATRIN), which measures the neutrino mass via the tritium beta-decay endpoint spectrum, and of the planned second-generation experiments on neutrinoless double beta-decay.
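
The step from a 2% energy fraction to a bound of about 1 eV follows from the standard relation Ων h² ≈ Σmν/93 eV. A quick check, assuming a dimensionless Hubble parameter h ≈ 0.7 (an assumption, not a number stated in the text):

```python
# From the neutrino energy fraction to a bound on the sum of neutrino masses,
# using the standard relation Omega_nu * h^2 ~ sum(m_nu) / 93.1 eV.
h = 0.7               # assumed dimensionless Hubble parameter
omega_nu_max = 0.02   # upper bound on the neutrino energy fraction (from the text)

sum_m_nu_max_eV = 93.1 * omega_nu_max * h**2

print(f"Upper bound on the sum of neutrino masses: ~{sum_m_nu_max_eV:.1f} eV")
```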

In theories beyond the Standard Model, there is no lack of candidates for the dominant component of dark matter. Notable viable candidates are the lightest supersymmetric partners of the known elementary particles, which arise in supersymmetric extensions of the Standard Model: the neutralinos, which are spin-½ partners of the photon, the Z-boson and the neutral Higgs boson, and the gravitinos, which are spin-3/2 partners of the graviton. Showing that one of these particles accounts for the bulk of dark matter would not only answer a key question in cosmology, but would also shed new light on the fundamental forces and particles of nature.


While ongoing astronomical observations will measure the quantity and location of dark matter to greater accuracy, the ultimate determination of its nature will almost certainly rely on the direct detection of dark-matter particles through their interactions in detectors on Earth. Second-generation experiments such as the Cryogenic Dark Matter Search II (CDMS II) and the Cryogenic Rare Event Search with Superconducting Thermometers II (CRESST II), which are currently being assembled, will provide a serious probe of the neutralino as a dark-matter candidate.

Complementary, but indirect, information can be obtained from searches for neutrinos and gamma rays from neutralino annihilation, coming from the direction of particularly dense regions of dark matter, for example in the central regions of our galaxy, the Sun or the Earth. Ultimately, however, the proof of the existence of dark matter and the determination of its particle nature will have to come from searches at accelerators, notably CERN’s Large Hadron Collider (LHC). Even the gravitino, which is quite resistant to detection in direct and indirect dark-matter searches because it interacts only very feebly through the gravitational force, can be probed at the LHC.

Dark energy

In contrast with dark matter, dark energy has so far no explanation in particle physics. Apart from the observed accelerated expansion, the fact that we seem to be living at a special time in cosmic history, when dark energy appears only recently to have begun to dominate dark and other forms of matter, is also puzzling. Explanations put forth for dark energy range from the energy of the quantum vacuum to the influence of unseen space dimensions. Popular explanations invoke an evolving scalar field, often called “quintessence”, with an energy density varying in time in such a way that it is relevant today. Such an evolution may also be linked to a time variation of fundamental constants – a hot topic in view of recent indications of shifts in the frequencies of atomic transitions in quasar absorption systems, which hint that the electromagnetic fine-structure constant was smaller 7-11 billion years ago than it is today.

Depending on the nature of dark energy, the universe could continue to accelerate, begin to slow down or even recollapse. If this cosmic speed-up continues, the sky will become essentially devoid of visible galaxies in only 150 billion years. Until we understand dark energy, we cannot comprehend the destiny of the universe. Determining its nature may well lead to important progress in our understanding of space, time and matter.

The first order of business is to establish further evidence for dark energy and to discern its properties. The gravitational effects of dark energy are determined by its equation of state, i.e. the ratio of its pressure to its energy density. The more negative its pressure, the more repulsive the gravity of the dark energy. The dark energy influences the expansion rate of the universe, which in turn governs the rate at which structure grows, and the correlation between redshift and distance. Over the next two decades, high-redshift supernovae, counts of galaxy clusters, weak-gravitational lensing and the microwave background will all provide complementary information about the existence and properties of dark energy.
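
How the equation of state feeds into the expansion history can be sketched with the Friedmann equation for a flat universe, in which the dark-energy density scales as (1+z) to the power 3(1+w). The cosmological parameters below are illustrative assumptions consistent with the concordance values quoted earlier.

```python
import math

# Expansion rate H(z) for different dark-energy equations of state w, flat universe:
#   H(z)^2 = H0^2 * [ Om*(1+z)^3 + Ode*(1+z)^(3*(1+w)) ]
H0 = 70.0            # assumed Hubble constant, km/s/Mpc
Om, Ode = 0.3, 0.7   # matter and dark-energy fractions (concordance values)

def H(z, w):
    return H0 * math.sqrt(Om * (1 + z)**3 + Ode * (1 + z)**(3 * (1 + w)))

for w in (-1.0, -0.8, -1.2):   # w = -1 corresponds to a cosmological constant
    print(f"w = {w:+.1f}:  H(z=0.5) = {H(0.5, w):.1f} km/s/Mpc")
```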

Inflationary ideas

The inflationary paradigm that the very early universe underwent a huge and rapid expansion is a bold attempt to extend the Big Bang model back to the first moments of the universe. It uses some of the most fundamental ideas in particle physics, in particular the notion of a vacuum energy, to answer many of the basic questions of cosmology, such as “Why is the observed universe spatially flat?” and “What is the origin of the tiny fluctuations seen in the CMB?”.

The exact cause of inflation is still unknown. Thermalization at the end of the inflationary epoch leads to a loss of details about the initial conditions. There is, however, a notable exception: inflation leaves a telltale signature of gravitational waves, which can be used to test the theory and distinguish between different models of inflation. The strength of the gravitational-wave signal is a direct indicator of what caused inflation. Direct detection of the gravitational radiation from inflation might be possible in the future with very-long-baseline, space-based, laser-interferometer gravitational-wave detectors. A promising shorter-term approach is to search for the signature of these gravitational waves in the polarized radiation from the CMB.

Matter Matters

The ordinary baryonic matter of which we are made is the tiny residue of the annihilation of matter and antimatter that emerged from the earliest universe in not-quite-equal amounts. This tiny imbalance may arise dynamically from a symmetric initial state if baryon number is not conserved and the relevant interactions violate both C (charge conjugation) and the combination CP (P = parity), producing more baryons than antibaryons in an expanding universe.

There are a few dozen viable scenarios for baryogenesis, all of which invoke more or less physics beyond the Standard Model. A particularly attractive scenario is leptogenesis, according to which neutrinos play a central role in the origin of the baryon asymmetry. Leptogenesis predicts that the out-of-equilibrium, lepton-number violating decays of heavy Majorana neutrinos, whose exchange is responsible for the smallness of the masses of the known light neutrinos, generate a lepton asymmetry in the early universe that is transferred into a baryon asymmetry by means of non-perturbative electroweak baryon- and lepton-number violating processes. Leptogenesis works nicely within the currently allowed window for the masses of the known light neutrinos.

Heavenly accelerators

The Earth’s atmosphere is continuously bombarded by cosmic particles. Ground-based observatories have measured them in the form of extensive air showers with energies up to 3 × 10²⁰ eV, corresponding to centre-of-mass energies of 750 TeV, far beyond the reach of any accelerator here on Earth. We do not yet know the sources of these particles and thus cannot understand how they are produced.
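
The quoted 750 TeV follows from the fixed-target relation √s ≈ √(2E·mpc²) for a cosmic-ray proton striking a nucleon at rest; a quick check using the numbers in the text:

```python
import math

# Centre-of-mass energy for an ultrahigh-energy proton hitting a nucleon at rest:
#   sqrt(s) ~ sqrt(2 * E * m_p * c^2)   (fixed-target kinematics, rest masses << E)
E_eV = 3e20        # cosmic-ray energy, eV (from the text)
m_p_eV = 0.938e9   # proton rest energy, eV

sqrt_s_eV = math.sqrt(2 * E_eV * m_p_eV)
print(f"sqrt(s) ~ {sqrt_s_eV / 1e12:.0f} TeV")   # ~750 TeV
```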

Astrophysical candidates for high-energy sources include active galaxies and gamma-ray bursts. Alternatively, a completely new constituent of the universe could be involved, such as a topological defect or a long-lived superheavy dark-matter particle, both associated with the physics of grand unification. Only by observing many more of these particles, including the associated gamma rays, neutrinos and perhaps gravitational waves, will we be able to distinguish these possibilities.

Identifying the sources of ultrahigh-energy cosmic rays requires several kinds of large-scale experiments, such as the Pierre Auger Observatory, currently under construction, to collect large enough data samples and determine the particle directions and energies precisely. Dedicated neutrino telescopes of cubic-kilometre size in deep water or ice, such as IceCube at the South Pole, can be used to search for cosmic sources of high-energy neutrinos. An extension of their sensitivity to the ultrahigh-energy regime above 10¹⁷ eV will offer possibilities to infer information about physics in neutrino-nucleon scattering beyond the reach of the LHC.

STAR has silicon at its core

STAR silicon vertex tracker.

An important milestone has been reached in the STAR experiment at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) with the integration of the silicon strip detector (SSD). The installation completes the STAR ensemble, which is dedicated to tracking the thousands of charged particles that emerge at large angles to the colliding beams. The SSD has been fully commissioned and is now collecting data.

The completion of the STAR SSD is the result of a multi-year French research and development effort that began shortly after STAR started taking data in 2000, and which was led by the Laboratoire de Physique Subatomique et des Technologies Associées (Subatech) in Nantes, and the Institut de Recherche Subatomique (IreS) in Strasbourg. The detector incorporates state-of-the-art bonding technology as well as front-end electronics and control chips designed and developed by the Laboratoire d’Electronique et de Physique des Systèmes Instrumentaux (LEPSI) in Strasbourg.

The SSD makes extensive use of double-sided silicon microstrip sensors and has a total sensitive area of about 1 square metre. The detector consists of 320 detector modules arranged on 20 ladders (see figures 1 and 2). These form a barrel at a radius of 23 cm from the beam, inserted between the silicon vertex tracker (SVT) and the time projection chamber (TPC).

A section of the silicon strip detector.

The detector enhances the tracking capabilities of the STAR experiment in this region by providing information on the positions of hits and on the ionization energy loss of charged particles. Specifically, the SSD improves the extrapolation of tracks in the TPC to the hits found in the SVT. This increases the average number of space points measured near the collision vertex, significantly improving the detection efficiency for long-lived meta-stable particles such as those found in hyperon decays. Moreover, the SSD will further enhance the SVT tracking capabilities for particles with very low momentum, which do not reach the TPC.

The SSD was based on an early proposal for the Inner Tracking System of the ALICE experiment at CERN’s Large Hadron Collider; however, the design of the detector has evolved and matured considerably after several years of research, development and prototyping. To fulfil the constraints of the STAR environment, innovative solutions were required for electronics, connections and mechanics.

The detector module comprises one double-sided silicon microstrip sensor and floating electronics on two hybrid circuits of very low mass (see figure 3). The silicon sensor contains 1536 analogue channels (768 × 2) and has a resolution of 17 μm in azimuth (Rφ) and 700 μm in the beam direction (z). Each of the hybrid circuits is dedicated to one side of the sensor and hosts six A128C front-end chips and a Costar chip for control purposes. The hybrids are connected to the outer boards via a low-mass Kapton-aluminium bus, manufactured at CERN.
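
A quick tally of the figures given here and in the earlier paragraph: 320 modules of 1536 channels each account for the “half million channels” mentioned below, with 16 modules on each of the 20 ladders.

```python
# Channel and module bookkeeping for the STAR SSD, using figures from the text.
modules = 320
ladders = 20
channels_per_module = 1536    # 768 strips per side of a double-sided sensor

modules_per_ladder = modules // ladders
total_channels = modules * channels_per_module

print(f"Modules per ladder:     {modules_per_ladder}")    # 16
print(f"Total readout channels: {total_channels:,}")      # 491,520 ~ half a million
```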

Double-sided silicon microstrip sensor.

The A128C front-end chip, developed in a collaboration between LEPSI and IReS, shows an extended input range corresponding to ±13 MIPs (minimum ionizing particles) and an extra-low power consumption of less than 350 μW per channel. A dedicated multipurpose application-specific integrated circuit is in charge of the hybrid controls and temperature measurements.

Kapton-copper microcables and state-of-the-art tape automated bonding (TAB) technology connect the silicon readout strips to the input channels of the front-end electronics chip, and the chips to their hybrids. TAB enables a flexible connection, which acts as an adapter between the different pitches of the detectors and the chips. The technology is also testable and it provided a good yield during production. It was essential to make the detector modules small enough to be integrated into STAR.


Another unique feature of the SSD is its air-cooling system. The carbon-fibre-based structure on the ladder that supports the detector modules, analogue-to-digital converters, and control boards is wrapped with a Mylar foil and defines a path to guide the flow of air induced by transvector airflow amplifiers. This design avoids the use of liquid coolant, cooling pipes and heat bridges, and provides a material budget with a total radiation length very close to 1%.

A high level of serialization has been reached by incorporating the analogue-to-digital converters and control boards close to the detector modules. This enables the data from the half million channels of the SSD to be transported to the STAR data acquisition system using only four giga-link optical fibres. In the future, additional parallelization of the readout will enable the readout speed of the SSD to be increased to match the trigger and data-taking rates foreseen for STAR in the high-luminosity era of RHIC II.

The SSD project has been funded by the IN2P3/CNRS, the Ecole des Mines de Nantes, the metropolitan district of Nantes, the Loire-Atlantique department, and the regions of Alsace and Pays de la Loire. Financial support has also been provided by the US Department of Energy through the STAR collaboration.

In need of the human touch

I have led software projects since 1987 and have never known one, including my own, that was not in a crisis. After thinking and reading about it and after much discussion I have become convinced that most of us write software each day for a number of reasons but without ever penetrating its innermost nature.


A software project is primarily a programming effort, and this is done with a programming language. Now this is already an oxymoron. Programming is, literally, writing in advance; it entails predicting or dictating the behaviour of something or someone. A language, on the other hand, is the vehicle of communication that in some ways carries its own negation, because it is a way of expressing concepts that are inevitably reinterpreted at the receiver’s end. How many times have you raged “Why does this stupid computer do what I tell it [or him or her according to your momentary mood toward one of the genders], and not what I want!?” A language is in fact a set of tools that have been developed through evolution not to “program” but to “interact”.

Moreover every programmer has his own “language” beyond the “programming language”. Many times on opening a program file and looking at the code, I have been able to recognize the author at once and feel sympathy (“Oh, this is my old pal…”) or its opposite (“Here he goes again with his distorted mind…”), as if opening a letter.

Now if only it were that simple. If several people are working on a project, you not only have to develop the program for the project but you also have to manage communication between its members and its customers via human and programming language.

This is where our friends the engineers say to us “Why don’t you build it like a bridge?” However, software engineering is one more oxymoron cast upon us. We could never build software like a bridge, any more than engineers could remove an obsolete bridge with the stroke of a key without leaving tons of scrap metal behind. Software engineering’s dream of “employing solid engineering processes on software development” is more a definition than a real target. We all know exactly why it has little chance of working in this way, but we cannot put it into words when we have coffee with our engineer friends. Again, language leaves us wanting.

Attempts to apply engineering to software have filled books with explanations of why it did not work and of how to do it right, which means that a solution is not at hand. The elements for success are known: planning, user-developer interaction, communication, and communication again. The problem is how to combine them into a winning strategy.

Then along came Linux and the open source community. Can an operating system be built without buying the land, building the offices, hiring hundreds of programmers and making a master plan for which there is no printer large enough? Can a few people in a garage outwit, outperform and eventually out-market the big ones? Obviously the answer is yes, and this is why Linux, “the glorified video game” to quote a colleague of mine, has carried a subversive message. I think we have not yet drawn all the lessons. I still hear survivors from recent software wrecks say: “If only we had been more disciplined in following The Plan…”

Is software engineering catching up? Agile technologies put the planning activity at the core of the process while minimizing the importance of “The Plan”, and emphasize the communication between developers and customers.

Have the “rules of the garage” finally been written? Not quite. Open source goes far beyond agile technologies by successfully bonding people who are collaborating on a single large project into a distributed community that communicates essentially by e-mail. Is constraining the communication to one single channel part of the secret? Maybe. What is certain is that in open source the market forces are left to act, and new features emerge and evolve in a Darwinian environment where the fittest survives. But this alone would not be enough for a successful software project.

A good idea that has not matured enough can be burned forever if it is exposed too early to the customers. Here judicious planning is necessary, and the determination and vision of the developer are still factors in deciding when and how to inject his “creature” into the game. I am afraid (or rather I should say delighted) we are not close to seeing the human factor disappear from software development.

Very High Energy Cosmic Gamma Radiation: A Crucial Window on the Extreme Universe

by Felix A Aharonian, World Scientific. Hardback ISBN 9810245734, £65 ($107).

Astronomy – the study of all kinds of cosmic radiation – meets particle physics at the highest gamma-ray energies. This book offers the opportunity for particle physicists to cross the bridge between the two disciplines. They will discover the nature and properties of the extreme sources in the universe able to emit photons at energies higher than 10 GeV.


Very-high-energy astrophysics is entering a new era with the recent achievement by the High Energy Stereoscopic System (HESS) of the first spatially resolved high-energy gamma-ray image of an astronomical object, the supernova remnant RX J1713.7-3946. This image confirms that supernova remnants are at the origin of cosmic rays.

The lead author of the paper in Nature that described the HESS results was Felix Aharonian, the author of this book. Here he uses his expertise to provide a broad and comprehensive overview of the study of cosmic gamma rays, from energies of about 10 GeV to 10 TeV. In nearly 500 pages, he covers all aspects of the field, including the theoretical basis of gamma-ray emission and absorption mechanisms, as well as the status of detection facilities. The main part of the book is, however, devoted to the phenomenology of the various sources of very-high-energy gamma rays.

With more figures than equations, the author guides us through the world of supernova remnants, pulsars, jets of quasars and microquasars, and clusters of galaxies. He even discusses the implications for cosmology, as derived from the interaction of very-high-energy gamma rays with the diffuse extragalactic background radiation. As complete as this book sets out to be, however, I am a little surprised to find notable omissions, including gamma-ray bursts and the possible annihilation radiation of weakly interacting massive particles (WIMPs), which are mentioned but not discussed.

Nevertheless, this book with its extensive list of references is a very valuable introduction to the astrophysics of high-energy gamma-ray radiation. Well structured and with its more mathematical parts left for the appendix, it is also suitable for a quick search for a specific topic. It can therefore be used as a reference book for this fascinating “last electromagnetic window” on the cosmos, a topic destined to evolve very rapidly in the coming years.

Debunked! ESP, Telekinesis and Other Pseudoscience

by Georges Charpak and Henri Broch, translated by Bart K Holland, Johns Hopkins University Press. Hardback ISBN 0801878675, $25.


Georges Charpak will, as they say, need no introduction to most readers of the CERN Courier. Henri Broch, author of Au Coeur de l’Extraordinaire and a contributor to the American magazine Skeptical Inquirer, is perhaps less familiar to English-speaking readers. Now, their short book Devenez Sorciers, Devenez Savants has been translated into English by Bart Holland, with the title Debunked! ESP, Telekinesis and Other Pseudoscience.

Pseudoscientific mumbo-jumbo has been engulfing the US long enough for an extensive sceptical literature to have grown up around it. Stories about firewalking, dowsing and spoon-benders have already been dealt with by James Randi in Flim-Flam!, Martin Gardner in Science: Good, Bad and Bogus, and, less originally in my opinion, by Victor Stenger in Physics and Psychics. Charpak and Broch treat all these matters with new insight and humour, but include many new examples to show that even France, home of the Cartesian philosophy of doubt and scepticism, is now apparently ready to believe almost anything, provided it is vouched for by fashionable figures in show business or the media.

Thus, in 1982, Broch found that among undergraduate science students at Nice, 52% believed relativistic time dilatation to be pure theoretical speculation, while 68% thought that paranormal spoon-bending was scientifically proven. More recently, Elizabeth Teissier, astrological adviser to millions (including, she would have us believe, François Mitterrand), was awarded a PhD by the Sorbonne for a thinly disguised PR job vaunting her craft.

I cannot resist mentioning two of my own favourites here: Paco Rabanne, the famous fashion designer, ran away from Paris before the 1999 eclipse because he was afraid the sky might fall on his head; and the failed rock musician and racing-car writer Claude Vorilhon, a.k.a. Rael, recently got word about particle physics from the Elohim – the “extraterrestrial guardians”, he says, “of peace, non-violence and harmony at all levels of infinity”. Vorilhon e-mailed many physicists to pass on the message not to mess with the universe by constructing super-colliders; science is good and should be unlimited as long as it fuses elements, it would seem, but it should never be used when breaking or cracking infinitely small particles. As Charpak and Broch point out, the more vague, hollow and absurd the claim, the deeper the truth drawn from it – a phenomenon they term the “Well Effect”.

In his introduction, Bart Holland explains that he has tried to be true to the French original. The result will sometimes be quite confusing to English-speaking readers unfamiliar with what he calls the “glorious Gallic rhetorical style”. In addition, he has not always followed his own rule of keeping sections dealing with popular French culture and public figures intact, but has supplemented them with explanatory footnotes. In several cases, I had to turn to the original version to put arguments into context.

In their final chapter, Charpak and Broch strongly criticize the media, which they see as the natural ally of science and reason, for often (unwittingly or not) promoting the bogus claim that all ideas are of equal value, under the guise of journalistic even-handedness. The authors also differ from their English-language counterparts in that they see wider dangers in pseudoscience, such as its threat to democracy and the emergence of a multinational big business to market it. The authors’ parting advice to the reader is that critical faculties should be allied with human ones. This was more or less the position taken by Sir Walter Raleigh, who once wrote, “The skeptick doth neither affirm nor deny any position but doubteth of it, and applyeth his Reason against that which is affirmed, or denied, to justify his non-consenting.” He was beheaded shortly afterwards.

CMS cavern ready for its detector

On 1 February 2005, the cavern for the CMS detector at CERN was inaugurated in a ceremony attended by many guests, including the Spanish and Italian ambassadors to the United Nations and representatives of the construction companies.


The hand-over of this cavern, a gigantic structure 100 m underground near the village of Cessy in France, marks the end of the large-scale civil-engineering work for the Large Hadron Collider (LHC) at CERN.
The second of the new caverns for the LHC experiments, the CMS cavern is the result of several years of work by a consortium of Italian, Spanish, British, Austrian and Swiss civil-engineering companies. Problems arising from the local geology made it a spectacular feat of engineering.

The new structures built for CMS in fact comprise two caverns, together with two access shafts, for which 250,000 m3 of spoil had to be removed. The cavern for the detector is 53 m long, 27 m wide and 24 m high. The second cavern, housing the technical services, is adjacent. Unlike the strategy for the ATLAS detector, the various components for the CMS detector are being assembled and tested in a surface building before being lowered into the cavern, starting next January.

Work began six-and-a-half years ago with the excavation of the two access shafts. This was not an easy task given the 50 m deep stratum of extremely unstable moraine that also contains two water tables. To excavate this loose, wet earth, a ground-freezing technique was used, which involved circulating a brine solution at a temperature of -23 °C, followed by liquid nitrogen.

The molasse between the two caverns, which was too weak to withstand the high levels of stress exerted on it, presented a further difficulty and had to be replaced by a huge pillar of reinforced concrete.

To control the environmental impact of the project, special attention was paid to water treatment and to minimizing dust and noise levels. Moreover, the tonnes of spoil were deposited in the immediate vicinity, avoiding noise and disruption on the roads of the nearby villages. These storage areas are being landscaped and will be planted with vegetation between now and June 2005.
