The enigmatic Sun: a crucible for new physics

The Sun, a typical middle-aged star, is the most important astronomical body for life on Earth, and since ancient times its phenomena have had a key role in revealing new physics. Answering the question of why the Sun moves across the sky led to the heliocentric planetary model, replacing the ancient geocentric system and foreshadowing the laws of gravity. In 1783 a sun-like star led the Revd John Michell to the idea of the black hole, and in 1919 the bending of starlight by the Sun was a triumphant demonstration of general relativity. The Sun even provides a laboratory for subatomic physics. The understanding that it shines by nuclear fusion grew out of the nuclear physics of the 1930s; more recently the solution to the solar neutrino “deficit” problem has implied new physics.

This progress in science, triggered by the seemingly pedestrian Sun, seems set to continue, as a variety of solar phenomena still defy theoretical understanding. It may be that one answer lies in astroparticle physics and the curious hypothetical particle known as the axion. Neutral, light, and very weakly interacting, this particle was proposed more than 25 years ago to explain the absence of charge-parity (CP) symmetry violation in the strong interaction.

So what are the problems with the Sun? These lie, perhaps surprisingly, with the more visible, outermost layers, which have been observed for hundreds, if not thousands, of years.

First, why is the corona – the Sun’s atmosphere, with a density of only a few nanograms per cubic metre – so hot, with a temperature of millions of degrees? This question has challenged astronomers since Walter Grotrian, of the Astrophysikalisches Observatorium in Potsdam, established the corona’s extreme temperature in the 1930s. Within a few hundred kilometres, the temperature rises to about 500 times that of the underlying chromosphere, instead of continuing to fall towards the temperature of empty space (2.7 K). While the flux of extreme ultraviolet photons and X-rays from the higher layers is some five orders of magnitude less than the flux from the photosphere (the visible surface), it is nevertheless surprisingly high and inconsistent with the spectrum of a black body at the temperature of the photosphere (figure 1). Thus, some unconventional physics must be at work, since heat cannot flow spontaneously from cooler to hotter places. In short, everything above the photosphere should not be there at all.
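
As a rough numerical check of this reasoning (not taken from the article; the energy and temperatures below are illustrative assumptions), a short Python sketch compares the Planck-law photon occupation factor, 1/(exp(E/kT) − 1), at a soft X-ray energy for the ~5800 K photosphere and for a ~2 MK corona:

```python
import math

# Rough check: Planck photon occupation factor 1/(exp(E/kT) - 1) at a
# soft X-ray energy, for the photosphere (~5800 K) and a ~2 MK corona.
# Energy and temperatures are illustrative assumptions.

K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV/K

def planck_occupation(energy_ev, temperature_k):
    x = energy_ev / (K_B_EV_PER_K * temperature_k)
    return math.exp(-x) if x > 700 else 1.0 / math.expm1(x)  # overflow guard

E = 100.0  # eV, soft X-ray
print(f"photosphere (5800 K): {planck_occupation(E, 5.8e3):.3e}")  # ~1e-87
print(f"corona (2 MK):        {planck_occupation(E, 2.0e6):.3e}")  # ~1.3
```

A 5800 K black body is suppressed by almost 90 orders of magnitude at 100 eV, so any appreciable X-ray flux demands plasma far hotter than the visible surface.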

Another question is how the corona continuously accelerates the solar wind – some million tonnes of gas per second – to speeds as high as 800 km/s. The same puzzle holds for the transient but dramatic coronal mass ejections (CMEs). How and where is the required energy stored, and how are the ejections triggered? This question is probably related to the mystery of coronal heating. And what is it that triggers solar flares, which heat the solar atmosphere locally to about 10 to 30 million degrees, comparable with the high temperature of the core, some 700,000 km beneath? These unpredictable events appear to be violent “explosions” occurring near sunspots in the lower corona. This suggests magnetic energy as their main energy source, but how is the energy stored and how is it released so rapidly and efficiently within seconds? Even though many details are known, new observations call into question the 40-year-old standard model for solar flares, which, 150 years after their discovery, still remain a major enigma.

On the Sun’s surface, what is it that causes the 11-year solar cycle of sunspots and solar activity? This seems to be the biggest of all solar mysteries, since it involves the oscillation of huge “magnets” of a few kilogauss on the face of the Sun, ranging from 300 to 100,000 km in size. The origin of sunspots has been one of the great puzzles of astrophysics since Galileo Galilei first observed them through a telescope in the early 1600s. Their rhythmic comings and goings, first tracked systematically by the apothecary Samuel Heinrich Schwabe from 1826, could be the key to understanding the unpredictable Sun, since everything in the solar atmosphere varies in step with this magnetic cycle.

Beneath the Sun’s surface, the contradiction between solar spectroscopy and the refined solar-interior models provided by helioseismology has revived the question of the heavy-element composition of the Sun, with new abundances some 25 to 35% lower than before. Abundances vary from place to place and from time to time in the Sun, and are enhanced near flares, showing an intriguing dependence on the square of the magnetic intensity in these regions. The so-called “solar oxygen crisis” or “solar model problem” thus points to some non-standard physical process or processes that occur only in the solar atmosphere, with some built-in magnetic sensor.

These are just some of the most striking solar mysteries, each crying out for an explanation. So can astroparticle physics help? The answer could be “yes”, using a scenario in which axions, or particles like axions, are created and converted to photons in regions of high magnetic fields or by their spontaneous decay.

The expectation from particle physics is that axions should couple to electromagnetic fields, just as neutral pions do in the Primakoff effect known since 1951, which regards the production of pions by high-energy photons as the reverse of the decay into two photons. Interestingly, axions could even couple coherently to macroscopic magnetic fields, giving rise to axion–photon oscillation, as the axions produce photons and vice versa. The process is further enhanced in a suitably dense plasma, which can increase the coherence length. This means that the huge solar magnetic fields could provide regions for efficient axion–photon mutation, leading to the sudden appearance of photons from axions streaming out from the Sun’s interior. The photosphere and solar atmosphere near sunspots are the most likely magnetic regions for this process to become “visible”, as the material above is transparent to emerging photons.

According to this scenario, the Sun should be emitting axions, or axion-like particles, with energies reflecting the temperature of the source. Thus one or more extended sources of new low-energy particles (below around 1 keV), and the ubiquitous solar magnetic fields of strengths varying from around 0.5 T, as measured at the surface, up to 100 T or much more in the interior, might together give rise to the apparently enigmatic behaviour of a star like the Sun.

Conventional solar axion models, inspired by QCD, have one small source of particles in the solar core, with an energy spectrum that peaks at 4 to 5 keV. They therefore exclude the low energies where the solar mysteries predominantly occur. This immediately suggests an extended axion “horizon”. Experiments to detect solar axions – axion helioscopes such as the CERN Solar Axion Telescope (CAST) – should widen their dynamic range towards lower energies, in order to enter this new territory.

The revised solar axion scenario must also accommodate two components of photon emission: a continuous inward emission and, occasionally, an outward radiation pressure. Massive and light axion-like particles, both of which have been proposed, can provide these thermodynamically unexpected inward and outward photons respectively. They offer an exotic but still simple solution, given the Sun’s complexity.

The emerging picture is that the transition region (TR) between the chromosphere and the corona – only about 100 km thick and lying some 2000 km above the solar surface – is the manifestation of a space- and time-dependent balance between the two photon emissions. However, the almost equally probable disappearance of photons into axion-like particles in a magnetic environment must also be taken into account in understanding the solar puzzles. The TR could be the most spectacular place in the Sun, since it is where the mysterious temperature inversion appears, while flares, CMEs and other violent phenomena originate near the TR.

Astrophysicists generally consider the ubiquitous solar magnetism to be the key to understanding the Sun. The magnetic field appears to play a crucial role in heating up the corona, but the process by which it is converted into heat and other forms of energy remains an unsolved problem. In the new scenario, the generally accepted properties of the radiative decay of particles like axions and their coupling to magnetic fields provide the device that resolves the problem – in effect, a real “ἀπὸ μηχανῆς θεός” (the deus ex machina of Greek tragedy). The magnetic field is no longer the energy source, but is just the catalyst for the axions to become photons, and vice versa.

The precise mechanism for enhancing axion–photon mutation in the Sun that this picture requires remains elusive and challenging. One aim is to reproduce it in axion experiments. CAST, for example, seeks to detect photons created by the conversion of solar axions in the 9 T field of a prototype superconducting LHC dipole. However, the process depends on the unknown mass of the axion. Every day the CAST experiment changes the density of the gas inside the two tubes in the magnet in an attempt to match the velocity of the solar axion with that of the emerging photon propagating in the refractive gas.

It is reasonable to assume that fine tuning of this kind in relation to the axion mass might also occur in the restless magnetic Sun. If the energy corresponding to the plasma frequency equals the axion rest mass, the coherent axion-to-photon interaction increases steeply, as the square of the product of the coherence length and the transverse magnetic field strength. Since solar plasma densities and/or magnetic fields change continuously, such a “resonance crossing” could result in an otherwise unexpected photon excess or deficit, manifesting itself in a variety of ways – for example, locally as a hot or cold plasma. Only a quantum electrodynamics that incorporates an axion-like field can accommodate such transient brightening as well as dimming (among many other unexpected observations).
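
A toy calculation makes the steepness of this resonance concrete. In the usual coherent-conversion formula the probability is P = (gBL/2)² sinc²(qL/2), with momentum transfer q = |m_a² − ω_pl²|/2E, so the oscillatory suppression disappears when ω_pl = m_a. The Python sketch below uses invented natural-unit numbers, not solar parameters:

```python
import math

# Toy sketch of coherent axion-photon conversion (natural units):
#   P = (g*B*L/2)^2 * [sin(qL/2)/(qL/2)]^2,  q = |m_a^2 - w_pl^2| / (2E).
# On resonance (w_pl = m_a) q -> 0 and P grows as (B*L)^2.
# All numbers below are invented for illustration.

def conversion_probability(g, B, L, m_a, w_pl, E):
    q = abs(m_a**2 - w_pl**2) / (2.0 * E)
    x = q * L / 2.0
    sinc = 1.0 if x == 0.0 else math.sin(x) / x
    return (g * B * L / 2.0) ** 2 * sinc ** 2

g, B, L, E, m_a = 1e-10, 1.0, 1e4, 1.0, 0.1
for w_pl in (0.0, 0.05, 0.1):  # sweep the plasma frequency to resonance
    p = conversion_probability(g, B, L, m_a, w_pl, E)
    print(f"w_pl = {w_pl:4.2f}: P = {p:.3e}")
```

With these numbers the probability jumps by several orders of magnitude as ω_pl crosses m_a, which is the kind of transient excess or deficit invoked above.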

These ideas also have implications for the better tuning not only of CAST, but also of orbiting telescopes such as the Japanese satellite Hinode (formerly Solar-B), NASA’s Reuven Ramaty High Energy Solar Spectroscopic Imager and the NASA–ESA Solar and Heliospheric Observatory, which have recently been transformed into promising axion helioscopes, following suggestions by CERN’s Luigi di Lella among others. The joint Japan–US–UK mission Yohkoh has also joined the axion hunt, even though it ceased operation in 2001, by making its data freely available.

The revised axion scenario therefore seems to fit as an explanation for most (if not all) of the solar mysteries. Such effects can provide signatures for new physics as direct and as significant as those from laboratory experiments, even though they are generally considered indirect; the history of solar neutrinos is the best example of this kind.

Following these ideas and others on millicharged particles, paraphotons or any other weakly interacting sub-electron-volt particles, axion-like exotica would mean that the Sun’s visible surface – and probably not its core – holds the key to its secrets. As in neutrino physics, the multifaceted Sun, from its deep interior to the outer corona and the solar wind, could be the best laboratory for axion physics and the like. The Sun, the most powerful accelerator in the solar system, whose working principle is not yet understood, has not been as active as it is now for some 11,000 years. Is this an opportunity not to be missed?

Milagro maps out gamma-ray frontier

Milagro – Spanish for miracle – was the first of a new generation of extensive air shower (EAS) detectors. Traditionally, EAS arrays have been composed of a discrete set of small detectors spread over large areas. Typically active over only around 1% of the enclosed area, they were sensitive to cosmic gamma rays with energies of around 100 TeV and above. The combination of steeply falling source spectra and the absorption in flight of these high-energy gamma rays via interactions with the cosmic microwave background radiation meant that this first generation of instruments did not succeed in detecting any astrophysical sources. In contrast, imaging atmospheric Cherenkov telescopes (IACTs), pioneered by Trevor Weekes at Mount Hopkins, led to the discovery of several tera-electron-volt gamma-ray sources, the first of which was the Crab Nebula, the remnant of a supernova that occurred in 1054 (Weekes et al. 1989). More recently an array of such telescopes, the HESS array in Namibia, has demonstrated the richness of the tera-electron-volt sky.

Despite these difficulties, the advantages of EAS arrays, with their large instantaneous field of view (around 2 sr) and continuous operation, provided strong motivation to improve the technique. The key to success was to lower the energy threshold and simultaneously improve the ability to reject the abundant cosmic-ray background. Water Cherenkov technology, developed for underground proton-decay physics experiments such as the Irvine Michigan Brookhaven and Kamiokande detectors, led the way to this success.

When employed above ground as an EAS array, water Cherenkov technology enables the construction of an array that is sensitive over its entire area. The Cherenkov angle in water is 41°, so an array of photomultiplier tubes (PMTs) placed at a depth comparable to their spacing can detect the Cherenkov light emitted from any electromagnetic particle entering the water volume. Moreover, the composition of an EAS at ground level is predominantly photons (which are around six times as numerous as electrons and positrons), and, as the depth of water above the PMTs is sufficient to convert these gamma rays to charged particles, these photons can also be detected by the PMTs.
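
The quoted angle follows directly from the Cherenkov condition cos θ = 1/(nβ); here is a one-line check in Python, assuming the textbook refractive index of water:

```python
import math

# Cherenkov condition: cos(theta) = 1/(n*beta); beta ~ 1 for relativistic
# shower particles, n ~ 1.33 for water (assumed typical values).
n, beta = 1.33, 1.0
print(f"{math.degrees(math.acos(1.0 / (n * beta))):.0f} deg")  # ~41 deg
```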

The Milagro detector is located in the Jemez Mountains of northern New Mexico. It is operated by the Los Alamos National Laboratory in partnership with the National Science Foundation and the US Department of Energy Office of Science. Milagro uses a covered water reservoir that contains 2.5 × 10⁷ litres of water and measures 80 m × 60 m, with a depth of 8 m. The reservoir is instrumented with 750 PMTs deployed in two layers. The top layer of 450 PMTs is beneath 1.5 m of water with a spacing of 2.8 m. This layer is used to reconstruct the direction of the primary gamma ray or cosmic ray by measuring the relative arrival time of the shower front to around 0.5 ns. The second layer of PMTs, beneath 6 m of water, is used to detect the penetrating component of any EAS initiated by hadronic cosmic rays. An array of 175 water tanks surrounds the central water reservoir. Each is 1 m high and 3 m in diameter and is lined with reflective Tyvek. A single 8-inch PMT mounted at the top of each tank looks down into the water volume.
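
The direction reconstruction mentioned here amounts to fitting a plane wavefront to the PMT hit times. The sketch below is not Milagro’s actual code – the geometry, noise level and names are invented for illustration – but it shows the least-squares idea:

```python
import numpy as np

# Fit a plane shower front to PMT hits: c*t_i ~ c*t0 + l*x_i + m*y_i,
# where (l, m) are direction cosines of the shower axis.
C = 0.2998  # speed of light in m/ns

def fit_shower_direction(x, y, t):
    A = np.column_stack([x, y, np.ones_like(x)])
    (l, m, _), *rest = np.linalg.lstsq(A, C * np.asarray(t), rcond=None)
    return l, m, np.degrees(np.arcsin(np.hypot(l, m)))  # zenith angle

# Toy event: front arriving 20 degrees from zenith, 0.5 ns timing jitter.
rng = np.random.default_rng(1)
x, y = rng.uniform(0, 80, 200), rng.uniform(0, 60, 200)
t = np.sin(np.radians(20.0)) * x / C + rng.normal(0.0, 0.5, 200)
print(fit_shower_direction(x, y, t))  # recovers l ~ 0.34, zenith ~ 20 deg
```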

After seven years of operation, four of which included the array of outrigger water tanks, Milagro ceased operation in April this year. Its results have been impressive and ushered in a new era for ground-based gamma-ray astrophysics at tera-electron-volt energies, where the role of the EAS arrays is now clearly established.

The figure above shows a region around the galactic plane as observed by Milagro, where the median energy of the detected gamma rays is 20 TeV (Abdo et al. 2007b). It contains several noteworthy features. The objects marked MGRO JXXXX+YY, where XXXX and YY give the right ascension and declination respectively, are three new sources that Milagro discovered. MGRO J2031+41 and MGRO J2019+37 lie within the Cygnus region of the galaxy. This direction points into our spiral arm and is rich with possible cosmic-ray acceleration sites, such as Wolf–Rayet stars, OB associations (a sign of star formation) and supernova remnants. The locations of these two sources are coincident with sources of giga-electron-volt gamma rays discovered by the Energetic Gamma Ray Emission Telescope (EGRET) on NASA’s Compton Gamma Ray Observatory (the squares mark the locations of gamma-ray sources of more than 100 MeV reported in the 3rd EGRET catalogue). However, the true nature of the sources is still to be determined.

The third new source shown in the figure above is MGRO J1908+06. This was subsequently observed by HESS, which measured a “hard” energy spectrum, falling more or less with the square of the energy. Preliminary analysis of Milagro data indicates that this source may be emitting gamma rays with energies in excess of 100 TeV, which would make it the highest-energy gamma-ray source detected to date and a likely site of cosmic-ray acceleration.

In addition to these three sources, there are four other regions in Milagro’s view of the galaxy that are likely to be sources of tera-electron-volt gamma rays. The image above shows three of these regions: C1, C3 and C4. C2, which is not indicated, lies just above C1.

The source candidate C4 is coincident with the Boomerang pulsar wind nebula, and the shape seen in tera-electron-volt gamma rays is similar to that observed at 100 MeV. C3 is coincident with the Geminga pulsar (although no pulsed emission is observed at tera-electron-volt energies), which, at a distance of 180 pc, is the closest pulsar to the Earth and the brightest source of giga-electron-volt gamma rays visible in the northern sky. Finally, C1 has no giga-electron-volt source in the vicinity and its nature is at present completely unknown. The air shower array operating at Yangbajing cosmic-ray observatory in Tibet has confirmed this source, in addition to the two others that lie in the Cygnus region. One interesting feature of these is that they appear to be extended, with diameters ranging from 0.25° to more than 1°. Large sources are difficult for IACTs to detect, possibly explaining why they have eluded detection until now, despite the fact that these regions had been examined by past IACT arrays, such as the Whipple Observatory and the High Energy Gamma Ray Astronomy experiment.

The second image also shows a diffuse glow visible around the galactic plane, especially in the Cygnus region and at lower galactic longitude. This arises from the interaction of hadronic cosmic rays and high-energy cosmic-ray electrons with matter and radiation in the galaxy. The interaction of cosmic-ray protons with matter leads to the production of neutral pions that subsequently decay into gamma rays. The high-energy electrons interact with low-energy (optical, infrared and cosmic microwave background) photons through Compton scattering to produce high-energy gamma rays. Prior to Milagro’s measurements, EGRET observed this galactic diffuse radiation up to about 30 GeV and discovered an excess of diffuse emission over predictions based on the known matter density in the galaxy and the cosmic-ray rate and spectrum measured at the Earth. The explanation for this excess is still a matter of debate, with possible solutions including the annihilation of dark matter. A much greater intensity of high-energy electrons throughout the galaxy than is measured at the Earth, and a miscalibration of the EGRET response at high energies, are also possible explanations.

The third image shows Milagro’s measurement of the diffuse emission at 12 TeV in the Cygnus region (Abdo et al. 2007a). This measurement indicates that at tera-electron-volt energies the excess over expectations is even larger than it is at giga-electron-volt energies. While the cause of this excess is a matter of debate, possible explanations include cosmic-ray acceleration sites in the region, unresolved sources of tera-electron-volt gamma rays in the region, and the presence of very-high-energy electrons in the region. The resolution of this puzzle will require more detailed observations. Whatever the final explanation, it is clear that gamma-ray astronomy is an important tool in answering the nearly century-old problem of the origin of cosmic radiation.

While observations with Milagro have drawn to a close, plans for a new instrument are proceeding. A joint US–Mexico collaboration has proposed the High Altitude Water Cherenkov (HAWC) telescope, to be located at Volcán Sierra Negra (Tliltepetl) near the site of the Large Millimeter Telescope in Mexico. At 4100 m above sea level (compared with 2600 m for Milagro) and with a dense sampling detector that encloses around 22,000 m², HAWC is expected to be about 15 times as sensitive as Milagro and to have an energy threshold of less than 1 TeV. Unlike Milagro, it will comprise 900 individual water tanks. Each tank will be 5 m in diameter and 4.6 m tall – much larger than those used by Milagro or the Pierre Auger Observatory in Argentina – with a PMT at the bottom looking up into the water volume. If built, the complete array will have an unprecedented level of sensitivity to the highest-energy particle accelerators in our galaxy, as well as the sensitivity needed to detect short flares from active galaxies and the ability to make a detailed map of the diffuse gamma-ray emission in our galaxy.

DAMA strengthens claim of annual modulation with new intriguing evidence

Nine years ago the DAMA collaboration announced intriguing evidence for an annual modulation in the signals in its detectors, which could be evidence of dark-matter particles in the galactic halo. Now, with results presented first at a conference in Venice in April, the team claims the observation of a similar signal with a larger detector, measuring more flashes in June than in December.

Such a modulation would be a consequence of the Earth’s revolution around the Sun. There would be different detection rates for dark-matter particles when the Earth moves in the same direction as the flux from the galactic halo compared with when it moves against the flux, six months later.
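
In the standard halo picture the rate follows a simple cosine, S(t) = S₀ + Sₘ cos[ω(t − t₀)], with a one-year period and a maximum around 2 June. Below is a minimal sketch; the mean rate and few-per-cent amplitude are assumed for illustration, not DAMA’s measured values:

```python
import math

# Annual-modulation model: S(t) = S0 + Sm*cos(w*(t - t0)), period 1 yr,
# maximum near day 152.5 (~2 June). S0 and Sm are illustrative numbers.
S0, SM, T0 = 1.00, 0.02, 152.5
OMEGA = 2.0 * math.pi / 365.25

def rate(day):
    return S0 + SM * math.cos(OMEGA * (day - T0))

print(f"2 June:     {rate(152.5):.4f}")  # maximum
print(f"2 December: {rate(335.5):.4f}")  # minimum, six months later
```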

The current experiment, DAMA/LIBRA, has been taking data at the Gran Sasso National Laboratory in Italy since March 2003. Located almost 1 km underground, so as to be shielded from the cosmic-ray background, the experiment uses 25 crystals of sodium iodide, each with a mass of 9.7 kg and extremely high radiopurity. If a dark-matter particle collides in one of these, it should produce a faint flash of light, which is measured.

Taking the new data together with the previous results gives a total exposure of 0.82 tonne-years, and a result that suggests the presence of dark-matter particles in the galactic halo at a confidence level of 8.2 σ (Bernabei et al. 2008). The effect observed is independent of the various theoretical models of dark matter, such as weakly interacting massive particles or axions. As yet, no other dark-matter experiment has detected the modulation, and so the hunt continues.

Reflections offer new way to bend particles

Channelling of particles by the arrangement of atoms in crystals has been known for many decades. The effect is nowadays used in accelerators to steer high-energy beams, which are guided by the strong coherent electric field arising from the nuclear charges in bent crystals. Some 20 years ago, Alexander Taratin and S A Vorobiev predicted that the coherent field of a bent crystal could also reflect particles through small angles (Taratin and Vorobiev 1987). It was only in 2006, however, that experiments with 1 GeV and 70 GeV protons made the first observation and measurement of this “volume reflection” effect (Ivanov et al. 2006). A year later, a team at CERN’s SPS reported a nice demonstration of the effect with 400 GeV protons.

These studies have found that the range of entrance angles over which ions undergo volume reflection can be much greater than the critical angle of channelling. Furthermore, the experiment at the SPS showed that the probability of reflection far exceeds that of channelling. It is still less than 100%, however, because some particles “stick” to the atomic planes instead of bouncing back: incoherent scattering (volume capture) traps them into channelled states.

A single volume reflection at the energy of the SPS deflects particles through an angle of the order of 14 μrad. It is possible to obtain greater deflection by reflecting the particles from several bent-crystal layers, as figure 1 indicates. This leads to a multiple-volume-reflection (MVR) angle that increases in proportion to the number of layers (Taratin and Scandale 2007, Breese and Biryukov 2007). One experimental limitation is that some particles are volume captured at every reflection, thereby reducing the number of reflected particles roughly linearly with the number of reflections, N.
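
As a back-of-envelope illustration of this trade-off (with an assumed few-per-cent capture probability, not a measured value), the total deflection grows as N·θ_VR while the surviving fraction falls roughly as (1 − p)^N:

```python
# Multiple volume reflection: deflection ~ N * theta_VR, efficiency
# ~ (1 - p)^N for a capture probability p per layer. Numbers are
# assumptions for illustration only.
THETA_VR = 14e-6  # single-reflection angle at SPS energy, rad
P_CAPTURE = 0.02  # assumed volume-capture probability per layer

for n in (1, 5, 10, 20):
    print(f"N = {n:2d}: deflection ~ {n * THETA_VR * 1e6:5.1f} urad, "
          f"efficiency ~ {(1.0 - P_CAPTURE) ** n:.1%}")
```

The two remedies described next – re-reflecting the captured tail and suppressing capture by varying the curvature – attack exactly this (1 − p)^N factor.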

Computer simulations have shown two ways to overcome this limitation and increase the reflection efficiency to remarkably high values (Biryukov and Breese 2007). One way is to arrange each subsequent bent layer to reflect the complete distribution of particles passing through the layer above, including the tail of volume-captured particles. Simulations show that, in this case, the MVR angle grows linearly with N, while the efficiency remains constant, limited mainly by the volume capture in the last layer. The second way to increase reflection efficiency is to suppress the volume-capture process itself. The volume-captured particles occupy the top of the potential well and are easily affected by variations in the crystal curvature – an effect already known from experiments at 70 GeV and from theory. To suppress fully the few per cent probability of volume capture observed in single-volume reflection, the curvature should vary significantly over the length of the crystal so that it quickly releases most of the volume-captured particles.

Computer studies of 7 TeV protons show that the rate of volume capture is suppressed by a factor of 20 in a silicon crystal in which the curvature varies by 40% along its length, compared with the same crystal with a constant curvature. Figure 2 shows a 7 TeV proton beam bent through an angle of about 40 μrad, which could serve well for collimation purposes at the LHC. Here, a structure comprising 20 (110) layers of silicon, each bent through 65 μrad with a radius of 50 m, has deflected 7 TeV protons with an efficiency of 99.95%. This efficiency by far outperforms the capabilities of channelling in crystals, and the angular acceptance of the structure is 65 μrad, around 20 times greater than the acceptance of bent-crystal channelling. Such near-perfect deflection efficiency over a broad angular acceptance makes MVR ideal for collimation.

The near-100% deflection efficiency obtained in the single encounter of a particle with the multilayered structure may be important in many types of accelerators, including linear machines (such as a future International Linear Collider), machines with a short beam lifetime (such as muon or short-cycle machines) and in high-intensity beams with a fast-developing instability. The possibility of an efficiency of close to 100% makes MVR attractive for high-intensity beam applications, where beam losses usually rule out the use of bent-crystal channelling.

At an energy of 7 TeV, the crystal material along the beam direction in this example amounts to some 6.5 cm. Some protons undergo inelastic nuclear interactions in the crystal layers. Figure 2 shows the deflection angle for these protons at the moment of nuclear interaction. On average they are bent by half the bending angle of non-interacting protons, with a bending efficiency of a remarkably high 95%. This behaviour differs from both an amorphous target and bent-crystal channelling, where the products of nuclear interactions move in the forward direction. When using MVR for collimation, not only are the primary particles bent towards an absorber, but the debris of the particles that have interacted with the crystal nuclei is also bent towards the absorber with high efficiency.

MVR also provides an attractive mechanism for a space shield that can deflect ions with energies of mega- or giga-electron-volts per nucleon. Here, highly efficient deflection over a range of entrance angles at high energies is of paramount importance for the design of a space shield for radiation protection that is based on curved crystals. Such a bent-crystal shield was recently proposed for deflecting cosmic-radiation ions of all atomic numbers away from spacecraft (Breese 2007). A team at the National University of Singapore fabricated a bent-crystal shield with a surface area of 1 × 1 cm² that is capable of deflecting ions with energies of up to 100 GeV/nucleon. Figure 3 shows the simulated results of the crystal shield protecting a spacecraft from high-energy ions approaching from a single direction. This adds yet another link between the microcosm of a particle-physics laboratory and the macrocosm of space travel.

Terascale Alliance takes off in Germany

The next big advances in particle physics are expected to happen at the “terascale”. The tremendous complexity and size of experiments at the LHC and the proposed International Linear Collider (ILC) challenge the way that physicists have traditionally worked in high-energy physics. The German project Physics at the Terascale – a Helmholtz Alliance that will receive €25 m over five years from Germany’s largest organization of research centres, the Helmholtz Association – will address these challenges.

The Alliance bundles and enhances resources at 17 German universities, two Helmholtz Centres (the Forschungszentrum Karlsruhe and DESY) and the Max Planck Institute for Physics in Munich. It focuses on the creation of a first-class research infrastructure and complements the existing funding mechanisms in Germany at local and federal level. With the help of the new project, central infrastructures are being developed and shared among all Alliance members. The Alliance will fund many of these measures for the first few years. From the beginning, a central point of the proposal has been that the long-term future of these activities is guaranteed by the universities and the research centres beyond the running period of the Alliance funds.

The Alliance supports four well-defined research topics (physics analysis, Grid computing, detector science and accelerators) and a number of central “backbone” activities, such as fellowships, interim professorships, communication and management.

Close-knit infrastructure

What is new about this common infrastructure? Previously, each of these institutes developed its infrastructure and expertise for its own purposes. Now, triggered by the Alliance, different institutes share their resources. Common infrastructures are developed and made available to all physicists in Germany working on terascale physics. For example, this means that if PhD student Jenny Boek of Wuppertal wants to develop a chip for slow controls, she can now use the infrastructure and take advantage of the expertise in chip design in Bonn.

These central infrastructures can be concrete installations – like a chip development and design laboratory at a specific location – or virtual ones, like the National Analysis Facility, which will help all LHC groups in Germany to participate more efficiently in the analysis of data from the LHC. Common to all of them is that they are open to all members of the Alliance and are initially funded through it.

An important goal of the Alliance is to organize interactions between the different experimental groups and between the experiment and theory communities on all topics of interest for physics analysis at the terascale. This includes meetings and the formation of working groups with members from all interested communities, the organization of schools and other common activities. It can also mean basic services, such as the design and maintenance of Monte Carlo generators, or include exchanges on the underlying theoretical models. In all of these studies, while the focus is initially on the LHC, the role of the ILC will also feature as a future facility of key importance in the field.

In the same spirit, Alliance funds are being used to improve the Grid infrastructure in Germany significantly, to serve the global computing needs of the LHC as well as the specific requirements of German physicists contributing to the data analysis. Funds are provided to supplement the existing Tier-2 structure in Germany by building up Tier-2 centres at several universities, and to support the National Analysis Facility at DESY. Additional money is provided to allow for significant contributions to improving Grid technologies, with the emphasis on making the Grid easier to use.

The third research topic, detector development, involves plans for the future beyond the immediate data flow from the LHC. Institutes are already developing next-generation detectors for the ILC and for LHC upgrades. A Virtual Laboratory for Detector Development will provide central infrastructures to support the different groups for these projects. A number of universities and DESY are setting up infrastructures with special emphasis on chip design, irradiation facilities, test beams and engineering support. Again, although these facilities are at specific locations, they serve the whole community.

Fostering young talent

The Alliance also wants to increase the involvement of universities in accelerator research in Germany. Through a number of programmes – for example, a school for students on accelerator science or lectures at universities – it hopes to sustain this involvement over the long term. Rolf Heuer of DESY, one of the initiators of the project, explains the motivation: “Germany led the way to the TESLA technology collaboration and its success, and we want to stay at the forefront of accelerator development. Without it, progress in many areas of science will not be possible.”

A substantial part of the Alliance’s funding goes into the creation of more than 50 positions for young scientists and engineers all over Germany. The five Alliance Young Investigator groups and the Alliance fellowships play a special role: they are intended to attract young physics talent from all over the world to Germany and to the terascale. Many of these positions are tenure-track, something quite rare in Germany. In addition, positions are created to support the infrastructure activities, to set up the central tasks and to support the work of the Alliance. More than 250 people have applied for the new positions over the last eight months.

A significant fraction of the accepted applications are from women, in accordance with the Alliance’s aim to enhance the role of women in physics. One way to attract smart and ambitious young people to the German research landscape is the dual-career option – the Alliance pays half a salary for the partner to work at the same institution. So when Karina Williams, now in the final year of her particle-physics phenomenology PhD at Durham University in the UK, applied for postdoctoral positions, she made sure that the places where she applied would also have a job for her partner. It worked out at Bonn University, where she and her partner start later this year. “I think it’s wonderful that schemes like this exist,” she says. “I know so many people who have either had to put up with very long-distance relationships or left the subject because their partner could not get a job nearby. When I first started applying for jobs, I was told that long-distance relationships were just part of the postdoc life.”

Centralized community

DESY plays a special role within the Alliance. It provides unique and basic infrastructures for accelerator research, as well as large-scale engineering support for detector research. This is a tradition that goes back to when DESY ran accelerators for high-energy physics. A new role for DESY is to host central services for the German physics community to support physics analysis in Germany. One of these services is the Analysis Centre, where research will focus on areas of general interest that are often emphasized less at universities. Examples of these topics are statistical tools and parton distribution functions, where the Alliance will profit from the outstanding expertise at DESY from HERA. Of course, it is not only R&D that researchers at the Analysis Centre will pursue; another purpose is to form a kind of helpdesk to answer questions and to help in organizing topical workshops. Expanding on its role as an LHC Tier-2 centre, DESY is also setting up the National Analysis Facility, a large-scale computer installation to support the user analysis of LHC data. The first processors are already installed in DESY’s computing centre, providing fast computing power for efficient analyses by German LHC groups.

Another example of “central services” – like Alliance fellowships, equal-opportunity measures or dual-career options – is a “scientist replacement” programme. The goal of scientist replacement is to enable senior professors to take up roles of responsibility at the LHC experiments by sponsoring junior professors to replace them at university. Karl Jakobs is physics coordinator at ATLAS and a part-time bachelor. His home and family are in Freiburg in southern Germany, but he has had a flat in Saint-Genis-Pouilly near CERN since October last year and a great deal of long-term responsibility within the experiment – something that would have been impossible less than a year ago. Now the Alliance is funding his replacement in Freiburg. In this way, German particle physicists can play leading roles in current and future experiments more easily. This may sound like a trivial thing – but all German professors are obliged both to do research and to teach, binding them to their university and only releasing them during breaks and the occasional half-year sabbatical. Jakobs’ classes are currently being taught, until the end of the summer, by Philip Bechtle from DESY. Another example is Ian Brock, scientific manager of the Alliance, whose replacement during his leave of absence from Bonn University is paid for and provided by the Alliance.

The Alliance was officially approved in May last year, funding started in July, and it is already a prominent part of the German landscape of particle physics. It had an impressive start and most of the structures of the Alliance have begun working intensively. A major event was the “kick-off” workshop at DESY in December. With 354 registered participants (many of them undergraduate, graduate and PhD students), a large part of the German high-energy physics community was there. The workshop proved a great opportunity for young particle physicists to get to know each other and exchange ideas: Terascale gives them a backbone structure that they will now fill with content.

The Alliance is already changing the way particle physics is done in Germany. The main idea is to establish cooperation among the different pillars of German research in particle physics. Expertise, which is scattered around many different places, is being combined to become more efficient. As Heuer explains: “The Alliance strengthens R&D on LHC physics in Germany, pushes for accelerator physics and prepares for the ILC. It is our hope that this helps in the worldwide effort to unravel the basic structure of matter and to understand how the universe has developed.”

Jakobs, meanwhile, is happy to benefit from the arrangement at CERN. “Everything is happening here. You cannot be physics coordinator and not be stationed at CERN. There are regular meetings, you talk to people all the time, watch their progress and coordinate to optimize.” As physics coordinator he has to make sure that all ATLAS people who work on Higgs analysis and other special topics work together in a coherent way. There is a complicated sub-group structure, and all simulations and data have to be perfectly understood. “The good thing is that after my job here, I will be able to return to Freiburg with a clear conscience and spend a lot of time analysing the data I helped to prepare,” he explains. “Administration, teaching, funding proposals, forms and management – all that takes time at home. It is a great luxury to be able to concentrate on one thing only here: pure physics.”

• For more information about Physics at the Terascale, see www.terascale.de.

• For further information about the Helmholtz research association, see www.helmholtz.de/en

Particle physics proves that arsenic didn’t kill Napoleon

A meticulous new examination performed at the INFN laboratories in Milano-Bicocca and Pavia in Italy has shown that arsenic poisoning did not kill Napoleon. The researchers demonstrated that there is no evidence of a significant increase in the levels of arsenic in the emperor’s hair during the final period of his life.

Physicists working on the Cryogenic Underground Observatory for Rare Events (Cuore) experiment performed the study using a small nuclear reactor located at the university in Pavia. Currently in development at the INFN’s National Laboratories in Gran Sasso, the completed Cuore facility will be the most advanced experiment for studying the rare phenomenon of neutrinoless double-beta decay and for measuring neutrino mass.

To examine Napoleon’s hair, the team used the technique of neutron activation, which has two important advantages: it does not destroy the sample and it provides extremely precise results, even from samples of small mass. The researchers placed Napoleon’s hair in the core of the nuclear reactor in Pavia and used neutron activation to establish that all of the hair samples contained traces of arsenic. They chose to test for arsenic in particular because various historians have hypothesized that guards poisoned Napoleon during his imprisonment on Saint Helena. A diverse sample of hairs from different periods of Napoleon’s life was examined, along with hair samples from people living today, to compare arsenic levels.

The examination produced some surprising results. First, the level of arsenic in all of the hair samples from 200 years ago is 100 times as great as the average level detected in samples from people living today. In other words, people at the beginning of the 19th century evidently ingested arsenic from the environment in quantities that are today considered dangerous. The other surprise is that there was no significant difference in arsenic levels between the hair from Napoleon’s boyhood and that from his final days on Saint Helena. According to the toxicologists who participated in the study, this provides evidence that this was not a case of poisoning, but rather the result of a lifetime’s absorption of arsenic.

Under control: keeping the LHC beams on track

The scale and complexity of the Large Hadron Collider (LHC) under construction at CERN are unprecedented in the field of particle accelerators. It has the largest number of components and the widest diversity of systems of any accelerator in the world. As many as 500 objects around the 27 km ring, from passive valves to complex experimental detectors, could in principle move into the beam path in either the LHC ring or the transfer lines. Operation of the machine will be extremely complicated for a number of reasons, including critical technical subsystems, a large parameter space, real-time feedback loops and the need for online magnetic and beam measurements. In addition, the LHC is the first superconducting accelerator built at CERN and will use four large-scale cryoplants with 1.8 K refrigeration capability.

The complexity means that repairs of any damaged equipment will take a long time. For example, it will take about 30 days to change a superconducting magnet. Then there is the question of damage if systems go wrong. The energy stored in the beams and magnets is more than twice the levels of other machines. That accumulated in the beam could, for example, melt 500 kg of copper. All of this means that the LHC machine must be protected at all costs. If an incident occurs during operation, it is critical that it is possible to determine what has happened and trace the cause. Moreover, operation should not resume if the machine is not back in a good working state.

The accelerator controls group at CERN has spent the past four years developing a new software and hardware control system architecture based on the many years of experience in controlling the particle injector chain at CERN. The resulting LHC controls infrastructure is based on a classic three-tier architecture: a basic resource tier that gathers all of the controls equipment located close to the accelerators; a middle tier of servers; and a top tier that interfaces with the operators (figure 1).

Complex architecture

The LHC Software Application (LSA) system covers all of the most important aspects of accelerator controls: optics (Twiss parameters, machine layout), parameter space, settings generation and management (generation of functions based on optics, functions and scalar values for all parameters), trim (coherent modifications of settings, translation from physics to hardware parameters), operational exploitation, and hardware exploitation (equipment control, measurements and beam-based measurements). The software architecture is based on three main principles (figure 2). It is modular (each module has high cohesion, providing a clear application program interface to its functionality), layered (with three isolated logical layers: database and hardware access, business logic, and user applications) and distributed (when deployed in the three-tier configuration). It provides homogeneous application software to operate the SPS accelerator, its transfer lines and the LHC, and it was already used successfully in 2005 and 2006 to operate the Low Energy Ion Ring (LEIR) accelerator, the SPS and the LHC transfer lines.
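
To make the “trim” idea concrete, here is a deliberately simplified Python sketch of a layered settings change: a physics-level parameter is translated into a hardware setting in a business layer and recorded in a settings database. All names and the physics-to-hardware rule are invented for illustration; the real LSA, a Java system, is far richer than this:

```python
# Simplified three-layer "trim": physics parameter -> hardware setting,
# with the change recorded for traceability. Invented names throughout.

class SettingsDB:
    """Database layer: keeps a history of parameter settings."""
    def __init__(self):
        self.history = []
    def store(self, name, value):
        self.history.append((name, value))

class TrimService:
    """Business layer: maps a physics change to hardware parameters."""
    def __init__(self, db):
        self.db = db
    def trim(self, tune_shift):
        current_delta = 35.0 * tune_shift  # toy rule: amps per unit tune
        self.db.store("QF.current.delta", current_delta)
        return current_delta

db = SettingsDB()
TrimService(db).trim(0.01)  # a coherent, logged change of settings
print(db.history)           # [('QF.current.delta', 0.35)]
```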

The front-end hardware of the resource tier consists of 250 VME64x sub-racks and 120 industrial PCs distributed in the surface buildings around the 27 km ring of the LHC. The mission of these systems is to perform direct real-time measurements and data acquisition close to the machine, and to deliver this information to the application software running in the upper levels of the control system. These embedded systems use home-made hardware and commercial off-the-shelf technology modules, and they serve as managers for various types of fieldbus, such as WorldFIP, a deterministic bus used for the real-time control of the LHC power converters and the quench-protection system. All front ends in the LHC have a built-in timing receiver that guarantees synchronization to within 1 μs, which is required for time tagging of post-mortem data. The tier also covers programmable logic controllers, which drive various kinds of industrial actuator and sensor for systems such as the LHC cryogenics and vacuum systems.

The middle tier of the LHC controls system is mostly located in the Central Computer Room, close to the CERN Control Centre (CCC). This tier consists of various servers: application servers, which host the software required to operate the LHC beams and run the supervisory control and data acquisition (SCADA) systems; data servers that contain the LHC layout and the controls configuration, as well as all of the machine settings needed to operate the machine or to diagnose machine behaviours; and file servers containing the operational applications. More than 100 servers provide all of these services. The middle tier also includes the central timing that provides the information for cycling the whole complex of machines involved in the production of the LHC beam, from the linacs onwards.

At the top level – the presentation tier – consoles in the CCC run GUIs that will allow machine operators to control and optimize the LHC beams and supervise the state of key systems. Dedicated displays provide real-time summaries of key machine parameters. The CCC is divided into four “islands”, each devoted to a specific task: CERN’s PS complex; the SPS; technical services; and the LHC. Each island is made of five operational consoles and a typical LHC console is composed of five computers (figure 3). These are PCs running interactive applications, fixed displays and video displays, and they include a dedicated PC connected only to the public network. This can be used for general office activities such as e-mail and web browsing, leaving the LHC control system isolated from exterior networks.

Failsafe mechanisms

In building the infrastructure for the LHC controls, the controls groups developed a number of technical solutions to the many challenges facing them. Security was of paramount concern: the LHC control system must be protected, not only from external hackers, but also from inadvertent errors by operators and failures in the system. The Computing and Network Infrastructure for Controls is a CERN-wide working group set up in 2004 to define a security policy for all of CERN, including networking aspects, operating systems configuration (Windows and Linux), services and support (Lüders 2007). One of the group’s major outcomes is the formal separation of the general-purpose network and the technical network, where connection to the latter requires the appropriate authorization.

Another solution has been to deploy, in close collaboration with Fermilab, “role-based” access control (RBAC) for equipment in the communication infrastructure. The main motivation for having RBAC in a control system is to prevent unauthorized access and to provide an inexpensive way to protect the accelerator: a user is prevented from entering the wrong settings – or from even logging into the application at all. RBAC works by giving people roles and assigning permissions to those roles to make settings. An RBAC token – containing information about the user, the application, the location, the role and so on – is obtained during the authentication phase (figure 4). This token is then attached to any subsequent access to equipment. Depending on the action requested, who is making the call, from where, and when it is executed, access is either granted or denied. This allows for filtering, control and traceability of modifications to the equipment.
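
The token-and-permissions logic can be sketched in a few lines of Python; the roles, device classes and rules below are invented for illustration and bear no relation to the deployed CERN–Fermilab implementation:

```python
from dataclasses import dataclass

# Toy RBAC check: a token carries user/application/location/role, and a
# role -> permitted (device, action) map decides access. All names and
# rules here are invented for illustration.

@dataclass(frozen=True)
class Token:
    user: str
    application: str
    location: str
    role: str

PERMISSIONS = {
    "operator": {("PowerConverter", "set"), ("PowerConverter", "read")},
    "observer": {("PowerConverter", "read")},
}

def check_access(token, device_class, action):
    return (device_class, action) in PERMISSIONS.get(token.role, set())

token = Token("jsmith", "lsa-trim", "CCC", "observer")
print(check_access(token, "PowerConverter", "read"))  # True
print(check_access(token, "PowerConverter", "set"))   # False: denied
```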

An alarm service for the operation of all of the CERN accelerator chain and technical infrastructure exists in the form of the LHC Alarm SERvice (LASER). This is used operationally for the transfer lines, the SPS, the CERN Neutrinos to Gran Sasso (CNGS) project, the experiments and the LHC, and it has recently been adapted for the PS Complex (Sigerud et al. 2005). LASER provides the collection, analysis, distribution, definition and archiving of information about abnormal situations – fault states – either for dedicated alarm consoles, running mainly in the control rooms, or for specialized applications.

LASER does not actually detect the fault states. This is done by user surveillance programs, which run either on distributed front-end computers or on central servers. The service processes about 180,000 alarm events each day and currently has more than 120,000 definitions. It is relatively simple for equipment specialists to define and send alarms, so one challenge has been to keep the number of events and definitions to a practical limit for human operations, according to recommended best practice.

The controls infrastructure of the LHC and its whole injector chain spans large distances and is based on a diversity of equipment, all of which needs to be constantly monitored. When a problem is detected, the CCC is notified and an appropriate repair has to be proposed. The purpose of the diagnostics and monitoring (DIAMON) project is to provide the operators and equipment groups with tools to monitor the accelerator and beam controls infrastructure with easy-to-use first-line diagnostics, as well as to solve problems or help to decide on responsibilities for the first line of intervention.

The scope of DIAMON covers some 3000 “agents”. These are pieces of code, each of which monitors a part of the infrastructure, from the fieldbuses and front ends to the hardware of the control-room consoles. It uses LASER and works in two main parts: the monitoring part constantly checks all items of the controls infrastructure and reports on problems, while the diagnostic part displays the overall status of the controls infrastructure and proposes support for repairs.

The front end of the controls system has its own dedicated real-time front-end software architecture (FESA). This framework offers a complete environment for equipment specialists to design, develop, deploy and test equipment software. Despite the diversity of devices – such as beam-loss monitors, power converters, kickers, cryogenic systems and pick-ups – FESA has successfully standardized a high-level language and an object-oriented framework for describing and developing portable equipment software, at least across CERN’s accelerators. This reduces the time spent developing and maintaining equipment software and brings consistency to the equipment software deployed across all accelerators at CERN.

This article illustrates only some of the technical solutions that have been studied, developed and deployed in the controls infrastructure in the effort to cope with the stringent and demanding challenges of the LHC. This infrastructure has now been tested almost completely on machines and facilities that are already operational, from LEIR to the SPS and CNGS, and in LHC hardware commissioning. The estimated collective effort amounts to some 300 person-years and a cost of SFr21 m. Part of this enormous human resource comes from international collaborations, whose valuable contributions are hugely appreciated. Now the accelerator controls group is confident that it can meet the challenges of the LHC.

• This article is based on the author’s presentation at ICALEPCS ’07 (Control systems for big physics reach maturity).

QCD: string theory meets collider physics

With the title Quantum Chromodynamics – String Theory meets Collider Physics, the 2007 DESY theory workshop brought together a distinguished list of speakers to present and discuss recent advances and novel ideas in both fields. Among them was Juan Maldacena from the Institute for Advanced Study, Princeton, pioneer of the interrelationship between gauge theory and string theory, who also gave the Heinrich Hertz lecture for the general public.

From a dynamical point of view, quantum chromodynamics (QCD), the theory of strong interactions, represents the most difficult sector of the Standard Model. Mastering the complexities of strong interactions is essential for a successful search for new physics at the LHC. In addition, the relevance of the QCD phase transition for the early evolution of our universe has ignited an intense interest in heavy-ion collisions, both at RHIC in Brookhaven and at the LHC at CERN. The QCD community is thus deeply engaged in investigations to further our understanding of QCD, to reach the highest accuracy in its theoretical predictions and to advance existing computational tools.

String theory, initially considered a promising theoretical model for strong interactions, was long believed incapable of capturing, in detail, the correct high-energy behaviour. In 1997, however, Maldacena overcame a prominent obstacle for applications of string theory to gauge physics. He proposed describing strongly coupled four-dimensional (supersymmetric) gauge theories through closed strings in a carefully chosen five-dimensional background. In fact, equivalences (dualities in modern parlance) between gauge and string theories emerge, provided that the strings propagate in a five-dimensional space of constant negative curvature. Such a geometry is called an anti-de Sitter (AdS) space, and the duality involving strings in an AdS background became known as the AdS/CFT correspondence, where CFT denotes conformal field theory. If the duality turns out to be true, string-theory techniques can give access to strongly coupled gauge physics, a regime that only lattice gauge theory has so far been able to access. Though a string theory dual to real QCD has still to be found, AdS/CFT dualities are beginning to bring string theory closer to the “real world” of particle physics.

With the duality conjecture as its focus, the DESY workshop covered the full spectrum of research topics that have entered this interdisciplinary endeavour. Topics ranged from the role of QCD in the evaluation of experimental data and in Monte Carlo simulations to string theory calculations in AdS spaces.

To begin with the more practical side, QCD clearly dominates the daily analysis of data from RHIC, HERA at DESY, and Fermilab’s Tevatron. Tom LeCompte of Argonne presented results from the Tevatron, and Uta Klein of Liverpool looked at what we have learned from HERA. The results relating to parton densities will be of utmost importance for measurements at the LHC, not least in the kinematic region of small x, which was among the highlights of HERA physics. Diffraction – one of the puzzles for the HERA community – continues to demand attention at the LHC, in particular as a clean channel for the discovery of new physics, as Brian Cox of the University of Manchester explained.

Monte Carlo simulations represent an indispensable tool for analysing experimental data, and existing models need steady improvement as we approach the new energy regime at the LHC. Gösta Gustafson of Lund and Stefan Gieseke of Karlsruhe described the progress that is being made in this respect. Topics of particular current interest include a careful treatment of multiple parton interactions and the implementation of next-to-leading-order (NLO) QCD matrix elements in Monte Carlo programs.

At present, lattice calculations still offer the most reliable framework for studies of QCD beyond the weak coupling limit. Among other issues, the workshop addressed the calculation of low-energy parameters such as hadron masses and decay constants. In this context, Federico Farchioni of Münster noted that the limit of small quark masses calls for careful attention, and Philipp Hägler of Technische Universität München discussed developments in calculating hadron structure on the lattice. Another important direction concerns the QCD phase structure and, in particular, accurate estimates of the phase-transition temperature, Tc, as Akira Ukawa of Tsukuba explained. Lattice gauge theories also allow the investigation of connections with string theory. Michael Teper of Oxford showed that, once the dependence of gauge theory on the number of colours, Nc, is sufficiently well controlled, it may become possible to determine the energy spectrum of closed strings in the limit of large ‘t Hooft coupling.
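
The limit in question is the ‘t Hooft limit – a standard definition, added here for orientation:
\[
N_{c} \to \infty \quad \text{with} \quad \lambda \equiv g^{2} N_{c} \ \text{fixed},
\]
in which, for the pure gauge theory, corrections are organized in powers of 1/N_c², mirroring the genus expansion of a string theory.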

QCD perturbation theory

NLO and next-to-NLO calculations in QCD perturbation theory are needed to derive precise expressions for cross-sections – they are crucial for describing experimental data at existing colliders and indispensable input for discriminating new physics from mere QCD background at the LHC. The necessary computations require a detailed understanding of perturbative QCD, as Werner Vogelsang from Brookhaven National Laboratory discussed. For example, the theoretical foundation of kt-factorization and of unintegrated parton densities, along with their use in hadron–hadron collisions, is attracting much attention. For higher-order QCD calculations, Alexander Mitov of DESY, Zeuthen, described how advanced algorithms are being developed and applied.
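
Schematically, kt-factorization replaces the usual collinear parton densities by unintegrated ones carrying explicit transverse momentum. A minimal sketch of the structure (conventions differ between groups) is
\[
\sigma \;\sim\; \int \frac{dx}{x}\, d^{2}k_{T}\; \mathcal{F}(x, k_{T}^{2})\; \hat{\sigma}(x, k_{T}^{2}),
\]
where \(\mathcal{F}\) is the unintegrated gluon density and \(\hat{\sigma}\) the transverse-momentum-dependent partonic cross-section.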

Higher-order computations in QCD are becoming one of the most prominent examples of an extremely profitable bridge between gauge and string theories. Multiparton final states at the LHC have sparked interest in perturbative gauge theory computations of scattering amplitudes that involve a large number of incoming and/or outgoing partons. At the same time there is an urgent need for higher-loop results, which, in view of the rapidly growing number of Feynman diagrams, seem to be out of reach for more conventional approaches. Recent investigations in this direction have unravelled new structures, such as in the perturbative expansion of multigluon amplitudes.

In a few special cases, such as four-gluon amplitudes in N = 4 supersymmetric Yang–Mills theory, these investigations have led to highly non-trivial conjectures for all-loop expressions. This was the topic of talks by David Dunbar of Swansea and Lance Dixon of Stanford. According to the AdS/CFT duality, the strong coupling behaviour of these amplitudes should be calculable within string theory. Indeed, Maldacena described how the relevant string-theory computation of four-gluon amplitudes has been performed, yielding results that agree with the gauge-theory prediction. On the gauge-theory side, a conjecture for a larger number of gluons has also been formulated; Maldacena noted that this is currently contested both by string-theoretic arguments and by more refined gauge-theory calculations.

The expressions for four-gluon amplitudes contain a certain universal function, the so-called cusp anomalous dimension, which can again be computed at weak (gauge theory) and strong (supergravity) coupling. Gleb Arutyunov of Utrecht showed how this particular quantity is also being investigated using modern techniques of integrable systems. Remarkably, as Niklas Beisert of the Albert Einstein Institute in Golm explained, a formula for the cusp anomalous dimension in N = 4 super-Yang–Mills theory has recently been proposed that interpolates correctly between the known weak and strong coupling expansions. In addition, Vladimir Braun of Regensburg and Lev Lipatov of Hamburg and St Petersburg described how integrability features in the high-energy regime of QCD, both in the short-distance and the small-x limit. The integrable structures have immediate applications to data analysis. Yuri Kovchegov of Ohio also pointed out that low-x physics in QCD, with all the complexities appearing in the NLO corrections, may possess close connections with the supersymmetric relatives of QCD. The higher-order generalization of the Balitsky–Fadin–Kuraev–Lipatov pomeron, which is expected to correspond to the graviton, is of particular interest. In this way, studies of the high-energy regime seem to carry the seeds for new relations to string theory.
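
Returning to the cusp anomalous dimension: in the conventions commonly used, the scaling function interpolates between the limits (standard results, quoted here for illustration)
\[
f(\lambda) \;\simeq\; \frac{\lambda}{2\pi^{2}} \quad (\lambda \to 0), \qquad
f(\lambda) \;\simeq\; \frac{\sqrt{\lambda}}{\pi} - \frac{3\ln 2}{\pi} + \ldots \quad (\lambda \to \infty),
\]
with the weak-coupling side computed in gauge theory and the strong-coupling side from the dual string.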

Another close contact between string theory and QCD appears at temperatures near and above the QCD phase transition. Heavy-ion experiments that probe this kinematic region are currently taking place at RHIC and will soon be carried out at the LHC. CERN’s Urs Wiedemann introduced the topic, and John Harris of Yale presented results and discussed their interpretation. The analysis of RHIC data requires somewhat unusual theoretical concepts, including QCD hydrodynamics. As in any other fluid system, viscosity is an important parameter characterizing the quark–gluon plasma, but its measured value cannot be explained by perturbative QCD. This suggests that the quark–gluon plasma at RHIC is strongly coupled, so string theory should be able to predict properties such as the plasma’s viscosity through the AdS/CFT correspondence. David Mateos of Santa Barbara and Hong Liu of Boston showed that the string-theoretic computation of the viscosity and other quantities is indeed possible, based on investigations of gravity in a thermal black-hole background. It leads to values that are intriguingly close to experimental data.
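
The benchmark value usually quoted from such black-hole computations – a standard result, given here for orientation – is the ratio of shear viscosity to entropy density,
\[
\frac{\eta}{s} = \frac{\hbar}{4\pi k_{B}} \approx 0.08\, \frac{\hbar}{k_{B}},
\]
roughly an order of magnitude below weak-coupling QCD estimates and remarkably close to the values favoured by hydrodynamic fits to RHIC data.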

String theory is often perceived as an abstract theoretical framework, far away from the physics of the real world and experimental verification. When considered as a theory of strongly coupled gauge physics, however, it is beginning to slip into a new role – one that offers novel views of qualitative features of gauge theory and, in some cases, even quantitative predictions. The QCD community, on the other hand, is beginning to realize that its own tremendous efforts may profit from the novel alliance with string theory. The participants of the 2007 DESY Theory workshop witnessed this recent shift, through lively discussions and numerous excellent talks that successfully bridged the two communities.

Symmetries and hadron dynamics go on the MENU

Hadron physics investigates one of the open frontiers of the Standard Model: the strong interaction for large gauge couplings. Experimentally, there are currently two major strategies. Precision experiments study symmetries and their violations with the aim of extracting fundamental quantities of QCD, such as the quark masses. Studies of the excited states and their decays, on the other hand, try to establish the ordering principles of the hadronic spectra to shed light on the problem of the confinement of the quarks.

The common aspects of the charm sector and the light-quark sector were the major reason to bring together 350 experts from high-energy and nuclear physics at the 11th International Conference on Meson–Nucleon Physics and the Structure of the Nucleon (MENU 2007), which took place on 10–14 September 2007 at the Research Centre Jülich. The plenary sessions provided a broad review of the field, while invited and contributed talks covered special topics – such as spin physics, meson and baryon spectroscopy, lattice calculations and in-medium physics – in five parallel sessions.

The light quark sector

Jürg Gasser, of the University of Bern, opened the conference with a review talk on chiral effective field theory, the standard tool for hadron physics in the threshold region. Lattice calculations have come into contact with chiral perturbation theory (χPT) by obtaining values for the low-energy constants l3 and l4. The DIRAC and NA48 experiments at CERN have tested the predictions of χPT by studying the level shifts of pionium and the decay of charged kaons into three pions. Rainer Wanke, of Mainz University, reported on the recent high-statistics data from NA48/2. The data have allowed the extraction of the S-wave pion–pion scattering length with great precision from studies of the Wigner cusp in the two-pion subsystem, as Ulf-G Meissner and collaborators predicted in 1997. The results agreed with the χPT predictions after inclusion of isospin-breaking effects.
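
In outline – the standard treatment of the cusp, sketched here for orientation – the decay K± → π±π0π0 receives a contribution from π+π− → π0π0 rescattering, which produces a square-root singularity in the π0π0 invariant-mass distribution at the π+π− threshold,
\[
M_{\pi^{0}\pi^{0}}^{2} = 4\, m_{\pi^{+}}^{2},
\]
with a strength proportional to the combination a0 − a2 of S-wave ππ scattering lengths. This is why the high-statistics NA48/2 data translate into a precise scattering-length measurement.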

Johan Bijnens of Lund University emphasized in his review of η physics that the decays of both the η and the η’ mesons are good laboratories for studying non-dominant strong-interaction effects. The slope parameter α in the neutral three-pion decay of the η is a puzzling challenge, as χPT fails to reproduce its sign even when pushed to next-to-next-to-leading order, while non-perturbative approaches succeed. Magnus Wolke of the Research Centre Jülich showed Dalitz plots for the decay of the η into three neutral pions, which the WASA-at-COSY Collaboration obtained in the first production run in April/May 2007. The WASA detector was transferred from Uppsala to Jülich in 2005. Cesare Bini of the Sapienza Università di Roma reviewed recent KLOE results featuring the η mass, the measurement of η–η’ mixing, the slope parameter of the η decay and results on the scalar mesons f0(980) and a0(980) seen in φ decay. Patrick Achenbach of Mainz showed the first results on η’ decays into η and two neutral pions from the CB-TAPS experiment at the MAMI-C electron accelerator at Mainz. Catalina Curceanu presented the recent progress on kaonic hydrogen by the SIDDHARTA collaboration at the DAΦNE facility, which will allow physicists to obtain the antikaon–nucleon scattering lengths.

Effective field theory is beginning to make an impact on traditional nuclear physics with a consistent treatment of two-nucleon and three-nucleon interactions. Theorists have for many decades considered three-nucleon forces as a possible explanation for the unsolved problem of the saturation properties of nuclear matter. Kimiko Sekiguchi of RIKEN, Stanislaw Kistryn of the Jagiellonian University Krakow, and Daniel Phillips of Ohio University showed how studies of polarized proton–deuteron reactions provide direct experimental access to the three-nucleon force. In addition, the progress in applying lattice methods to study hadrons, hadron–hadron interactions and eventually nuclei figured in the talks by Silas R Beane of the University of New Hampshire, Uwe-Jens Wiese of the University of Bern and Andreas Schäfer of the University of Regensburg.

The charm sector

The decay of heavy mesons produced at the present generation of electron–positron colliders sheds new light on the light-meson sector, because the scalar mesons f0(980) and a0(980) appear in the decay products – for example in the reaction D0 → K0π+π− – as Michael Pennington of Durham University stressed in his talk. Joseph Schechter of Syracuse University presented effective-Lagrangian methods for the light scalar-meson sector. The new charmed mesons Ds(2317) and Ds(2460), together with the charmonium-like states X, Y and Z, can be considered unexpected contributions from the B-factories, as Ruslan Chistov of ITEP Moscow pointed out in his overview of results from the Belle experiment at KEK. B-decays suppress the background contributions and offer large branching fractions, thus allowing an angular analysis to determine quantum numbers. Walter Toki of Colorado State University discussed the recent results on the X(3872) and Y(3940) mesons from the BaBar experiment at SLAC, and on the Z(4430) discovered by Belle, which apparently do not fit into the conventional quark–antiquark model for mesons. The Z(4430) may be a hadronic molecule made of a D*(2010) and a D1(2420), or a tetraquark; if confirmed, it would be exciting as the first charged hidden-charm state.

Matthias Lutz from Gesellschaft für Schwerionenphysik, Darmstadt, and Craig Roberts of Argonne discussed various aspects of hadron spectroscopy, while Ulrich Mosel of the University of Giessen highlighted recent theoretical progress in modelling the medium-dependence of nucleon resonances. Ulrike Thoma of the University of Bonn reported on evidence for two new baryons – a D15(2070) and a D33(1940) – seen in η production on the nucleon at the ELSA facility in Bonn. Bing-Song Zou, of the Chinese Academy of Sciences, Beijing, observed that J/ψ decay is an ideal isospin filter for studying baryons, allowing the identification of the elusive Roper resonance as a visible bump, quite in contrast to pion–nucleon scattering. The Roper resonance is the first excited state of the nucleon with the quantum numbers of the nucleon. Results from the Beijing Spectrometer experiment show a surprisingly small Roper mass of 1360 MeV.

Haiyan Gao of Duke University showed how quark–hadron duality studies in charged pion photoproduction can be used to obtain information about resonances in the energy region above 2 GeV. Kai Brinkmann of Technical University Dresden reviewed results from the cooler synchrotron COSY at Research Centre Jülich, in particular the negative result of the search for pentaquarks, while Takashi Nakano reported on the recent status of experiments at the SPring-8 synchrotron radiation facility in Japan. Mikhail Voloshin, of Minnesota, reviewed the decay of charmed hadrons and pointed out the open opportunities to improve our knowledge of the Kobayashi–Maskawa matrix element Vub. Ikaros Bigi, of Notre Dame du Lac, focused on D0 oscillations, which open a unique window on flavour dynamics.

The future will see exciting new machine developments. Naohito Saito discussed progress at the new Japan Proton Accelerator Research Complex, while the European project for the Facility for Antiproton and Ion Research was covered by Paolo Lenisa from the Università di Ferrara, Mauro Anselmino of INFN Torino and Johan Messchendorp of KVI Groningen. Anthony Thomas presented the 12 GeV upgrade for the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab, and Günther Rosner of the University of Glasgow gave an overview of physics with the CEBAF Large Acceptance Spectrometer.

Willem Van Oers of the University of Manitoba gave a lively address as the representative of the International Union of Pure and Applied Physics, which, together with Forschungszentrum Jülich, the Deutsche Forschungsgemeinschaft, Jefferson Lab and the Hadron Physics I3 FP6 European Community Programme, made this conference possible.

• The next MENU conference will be held in Newport News, Virginia, in 2010.

New symposium links the vacuum and the universe

The first Austria–France–Italy (AFI) symposium, From the Vacuum to the Universe, took place on 19–20 October at the University of Innsbruck. Inspired by developments in particle and astrophysics, it explored the physics of the vacuum, its manifestations in the subatomic world and its consequences for the large-scale structure of the universe. Studies of quark confinement; searches for the Higgs boson and other LHC physics; neutrinos; cosmic rays; and astrophysical probes of dark matter – all promise to reveal vital information about the structure of the universe, from the scale of QCD to tera-electron-volts.

The physical world is built from spin-1/2 fermions interacting through the exchange of gauge bosons – massless spin-1 photons and gluons, and massive W and Z bosons – together with gravitational interactions. The Pauli exclusion principle (PEP), which says that two identical fermions cannot exist in the same quantum state, is responsible for the stability of the physical world and is a pillar of chemistry. Further ingredients are needed to allow the formation of large-scale structures on the galactic scale and to explain the accelerating expansion of the universe: these are the mysterious dark matter and dark energy, respectively. Current observations point to an energy budget of the universe in which just 4% is composed of atoms, 23% involves dark matter (possibly made of new elementary particles) and 73% is dark energy (the energy density of the vacuum perceived by gravitational interactions).

The AFI meeting, with a mix of colloquium talks and discussion sessions, deliberated the interplay of this physics and possible synergies between different methods to learn about the physics of the vacuum. It also considered the use of particle physics to understand problems in astrophysics and the large-scale structure of the universe.

The vacuum is associated with various condensates. The QCD scale associated with quark and gluon confinement is around 1 GeV, while the electroweak mass scale associated with the W and Z boson masses is around 100 GeV. These scales are many orders of magnitude less than the Planck-mass scale of around 10¹⁹ GeV, where gravitational interactions are expected to become sensitive to quantum effects. The vacuum energy density associated with dark energy is characterized by a scale of around 0.002 eV – typical of the range of possible light neutrino masses – and a cosmological constant that is some 54 orders of magnitude less than the value expected from the Higgs condensate alone, with no extra new physics. Finally, the mass scale associated with dark matter remains to be determined. The physics of confinement, the origin of electroweak symmetry breaking, the nature of dark matter, and the question of why the dark-energy scale is finite yet so much smaller than the electroweak and QCD scales are fundamental issues for subatomic physics and its consequences for the macroscopic world.
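
The mismatch can be seen with back-of-envelope arithmetic, using the round numbers quoted above (the precise exponent depends on the conventions chosen):
\[
\frac{\rho_{\Lambda}}{\rho_{\rm EW}} \sim \left(\frac{2\times10^{-3}\ \text{eV}}{100\ \text{GeV}}\right)^{4} \sim 10^{-55},
\]
i.e. the observed vacuum energy density sits more than 50 orders of magnitude below the naive electroweak expectation, consistent with the figure quoted above.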

For fermions, the VIP Collaboration at Frascati and Gran Sasso is performing precise new tests of the PEP for electrons, as Johann Marton of the Austrian Academy of Sciences described. These experiments look for anomalous 2p → 1s X-ray transitions in copper. Recent results have tightened the limit on the probability of a violation of the PEP by two orders of magnitude, and a further improvement of two orders of magnitude is expected shortly. The parameter characterizing possible PEP violation is currently constrained to β²/2 < 6 × 10⁻²⁹.

The origin of mass is a fundamental problem in QCD and electroweak physics. In QCD, the coupling constant that describes the strength of quark–gluon (and gluon–gluon) interactions grows in the infrared. It becomes so large that quarks and gluons are confined: particles carrying the colour quantum number cannot propagate in isolation over distances greater than around 1 fm. Reinhard Alkofer of Karl-Franzens University, Graz, explained that recent studies suggest that confinement works differently in the pure gluon theory and in QCD with light quarks; ghost loops seem to be important. The physical confinement mechanism is associated with the dynamical breaking of the chiral symmetry between left- and right-handed quarks; some 98% of the proton’s mass is generated by the binding energy of the quarks.
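
The arithmetic behind that last statement is simple. Taking current-quark masses of a few mega-electron-volts (illustrative values),
\[
\frac{m_{p} - (2m_{u} + m_{d})}{m_{p}} \approx \frac{938 - 10\ \text{MeV}}{938\ \text{MeV}} \approx 0.99,
\]
so all but a percent or two of the proton’s mass arises from QCD dynamics rather than from the Higgs-generated quark masses.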

The subtle role of spin-1/2 quarks in the proton is further highlighted by the proton-spin problem, as Fabienne Kunne of CEA/Dapnia described. Polarized deep inelastic scattering experiments at CERN, DESY and SLAC have revealed that only about 30% of the spin of the proton comes from the intrinsic spin of the quarks that it contains. Where is the “missing” spin and why is the quark contribution so small? Possibilities include a topological effect where the spin becomes in part delocalized in the proton, or sea quarks polarized against the direction of the spin of the proton. The COMPASS experiment at CERN, as well as spin experiments at RHIC and Jefferson Lab, are currently investigating these issues.

QCD and electroweak interactions are governed by Yang–Mills fields – the gluons, and the W and Z bosons, respectively. The interactions appear fundamentally different because of the large masses of the W and Z bosons. These give the electroweak force a short range, of around 0.01 fm, which prevents the electroweak coupling from growing large enough in the infrared to produce confinement: electrons and neutrinos are not confined. Electroweak interactions are also characterized by parity violation and CP violation. Furthermore, only neutrinos with left-handed chirality are observed.
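
The quoted range follows from the Yukawa estimate – a back-of-envelope check, using ħc ≈ 197 MeV fm:
\[
R \sim \frac{\hbar c}{m_{W} c^{2}} \approx \frac{197\ \text{MeV fm}}{80\,400\ \text{MeV}} \approx 2.5\times10^{-3}\ \text{fm},
\]
of order 0.01 fm or below, far shorter than the roughly 1 fm range associated with confinement.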

The origin of the W and Z boson masses is believed to be associated with the Higgs mechanism, a major target for LHC physics. The LHC’s 14 TeV collisions will eventually cover the entire allowed Higgs mass range, with an integrated luminosity of around 30 fb⁻¹. Joachim Mnich of DESY, Hamburg, presented the status of the collider and early expectations. The LHC experiments will also look for new physics such as the lightest supersymmetric particle (LSP), a candidate for dark matter; possible extra dimensions; and strong WW scattering, if the Higgs mechanism proves to be an electroweak dynamical effect – topics described by Caroline Collard of the Laboratoire de l’Accélérateur Linéaire, Orsay. LHC physics and its interface with gravitational interactions pose many challenges. The Higgs mechanism required to explain the W and Z boson masses with no additional physics yields a cosmological constant larger than the observed value by a factor of around 10⁵⁴.

Silvia Pascoli of Durham University talked about the neutrino sector, where evidence from solar, atmospheric and reactor experiments points to oscillations with a mixing pattern different from that of the quarks. Oscillations between different neutrino species require small but finite neutrino masses. Open questions for future experiments include possible CP violation for neutrinos, the ordering of the masses (is the flavour hierarchy the same as for quarks?), the determination of the absolute mass scale, and whether neutrinos are their own antiparticles.
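
The need for finite masses follows from the standard two-flavour oscillation formula – a textbook expression, added here for orientation:
\[
P(\nu_{\alpha} \to \nu_{\beta}) = \sin^{2} 2\theta \; \sin^{2}\!\left( \frac{1.27\, \Delta m^{2}[\text{eV}^{2}]\; L[\text{km}]}{E[\text{GeV}]} \right),
\]
which vanishes identically unless the mass-squared difference Δm² between the two mass eigenstates is non-zero.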

The origin of cosmic radiation has been a mystery since its discovery by Victor Hess in 1912. Neutrinos have no electromagnetic interaction and are not deflected by magnetic fields in space, so neutrino telescopes that look for point sources of neutrinos are probing the origin of cosmic rays, complementing studies at the Pierre Auger Observatory. Such telescopes use kilometre-scale detectors in the sea or in ice, which act as transparent media. Mieke Bouwhuis of Nikhef and Carlos de los Heros of Uppsala University presented the status of, and plans for, ANTARES in the Mediterranean and IceCube at the South Pole, respectively.

These experiments, as well as those at the LHC, will look for new particles that could help to explain the mysterious dark matter, described by Antonaldo Diaferio of Torino, which is needed to account for structure formation in galaxies and the large-scale structure of the universe. Galaxy rotation curves reveal that the velocity, v, of stars varies little with the distance, r, from the centre of the galaxy – the curves are approximately flat – rather than v² falling off as 1/r, as it would if gravity coupled only to the visible matter. Extra mass must be present, and to explain it either extra matter or some modification of gravity over large distances is required. It remains a mystery whether this dark matter is made of fermions, bosons or both. Possible candidates include weakly interacting massive particles with no electromagnetic interactions, which behave almost like collisionless particles and yield cold dark matter in the outer halos of galaxies. Celine Boehm of the Laboratoire d’Annecy-le-Vieux de Physique Théorique described how, for dark matter at the tera-electron-volt scale, LHC collisions might produce and reveal the conjectured fermionic LSP. If the dark matter is bosonic, new particles of lighter mass are possible; the 511 keV positron-annihilation radiation observed from the centre of the galaxy could be evidence for such light-mass dark matter.
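
The rotation-curve argument is a one-line application of Newtonian dynamics, sketched here for orientation. For a circular orbit,
\[
\frac{v^{2}}{r} = \frac{G M(r)}{r^{2}} \quad \Longrightarrow \quad v(r) = \sqrt{\frac{G M(r)}{r}},
\]
so a flat curve, v ≈ constant, requires the enclosed mass to grow as M(r) ∝ r well beyond the visible disc – the signature of a dark halo with density falling roughly as 1/r².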

The nature of the missing galaxy mass and its connection to possible new physics is undoubtedly an open question. While the masses of the known fermions may depend on the same mechanism of electroweak symmetry breaking that produces the W and Z boson masses, the origin of dark-matter mass will involve new physics. The connections between particle physics and gravitation, taking us from the very small to the very large, promise to inspire much experimental and theoretical investigation in the decades ahead.

• The AFI symposium was organized in collaboration with the Frankreich Schwerpunkt and Italien Zentrum of the University of Innsbruck whose mandates are to develop and promote scientific and cultural relations between the West Austrian University and French and Italian experts and institutes. It was further supported by the BMWF, the Austrian Science Fund FWF and the University of Innsbruck. For more information, see www.uibk.ac.at/italienzentrum/italienzentrum/afi-meeting.html.
