NSRRC operates in top-up mode

On 12 October the National Synchrotron Radiation Research Center (NSRRC), Taiwan, became the fourth synchrotron facility in the world to operate fully in top-up mode, joining the Swiss Light Source (SLS), the Advanced Photon Source (APS) in the US, and SPring-8 in Japan. While the SLS and APS were originally designed to operate in top-up mode, the NSRRC is an example of how a third-generation synchrotron accelerator that previously operated in decay mode can successfully advance to full top-up operation.

In top-up mode, the storage ring is kept full by frequent injections of beam, in contrast with decay mode, where the stored beam is allowed to decay to some level before refilling occurs. Top-up operation has the advantage for light-source users that the photon intensity produced is essentially stable. This provides valuable gains in usable beamtime for experiments, and significantly shortens the time for optical components in beamlines to achieve thermal equilibrium.

The upgrade to top-up mode at the NSRRC, which started in 2003, included improvements to the kickers, the addition of various diagnostic instruments, a redesign of the radiation-safety shielding, modification of the control software, and a revised operation strategy for the injector and booster. In parallel, a more powerful superconducting radio-frequency cavity was installed and commissioned in November 2004 as part of a five-year programme. This has prepared the NSRRC to serve its users in biology and genomic medicine.

The injection chain at the NSRRC consists of a 140 keV electron gun, a 50 MeV linac and a 1.5 GeV booster that sends the beam into the storage ring at a rate of 10 Hz. With the upgrade, the time interval between two injections is now set to 2 min, whereas previously, in decay mode, refilling occurred every 6 h. The stored beam current has initially been maintained at 200 mA, with approximately 0.6 mA per current bin and photon stability in the range of 10⁻³ to 10⁻⁴. As experience is gained, the current will gradually be increased up to the 400 mA maximum allowed by the new superconducting RF cavity in the storage ring.
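
As a rough consistency check (not part of the original report), the Python sketch below estimates the current lost between 2 min top-ups. The ~10 h beam lifetime is an assumed, illustrative figure, since the article does not quote one.

```python
import math

# Only the 200 mA stored current and the 2 min top-up interval come from the
# article; the 10 h 1/e beam lifetime is an assumption for illustration.
I_stored_mA = 200.0       # stored beam current
topup_interval_min = 2.0  # time between injections in top-up mode
lifetime_h = 10.0         # assumed beam lifetime (not stated in the article)

lifetime_min = lifetime_h * 60.0
# Exponential beam decay over one top-up interval
I_after = I_stored_mA * math.exp(-topup_interval_min / lifetime_min)
delta_I = I_stored_mA - I_after

print(f"current lost between top-ups: {delta_I:.2f} mA")
print(f"relative variation: {delta_I / I_stored_mA:.1e}")
# With these assumptions roughly 0.7 mA is replaced every 2 min, the same
# order as the quoted 0.6 mA per current bin, and the relative ripple is a
# few times 1e-3, in line with the quoted 1e-3 to 1e-4 photon stability.
```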

As a user-driven facility, NSRRC chose the fixed time interval injection mode rather than fixed current bin to reduce interference with data-acquisition processes. Since early 2005, the operational division has informed beamline managers of the new characteristics of the beam’s time cycle, injection perturbations and top-up status. Users thus have access to enough information to conduct their experiments successfully.

During the transition period, special attention was paid to finding a reproducible filling pattern by optimizing and fine-tuning a variety of parameters. Other tasks included mastering the timing jitter of the injection components and the launch position and angle, as well as understanding the horizontal acceptance of the ring. These are some of the key determinants of injection efficiency.

The overall programme, led by NSRRC director Chien-Te Chen, now allows students from more than 60 universities access to beamtime allocated on one of 27 beamlines. These include two at SPring-8 Japan that are owned by the NSRRC. The NSRRC itself supported more than 3000 user-runs in 2005, 20% more than in 2004.

SNS reaches major milestone on journey to completion next June

The Spallation Neutron Source (SNS) at the Oak Ridge National Laboratory (ORNL) of the US Department of Energy (DOE) has met a crucial milestone on its way to completion in June 2006 – operation of the superconducting section of the linear accelerator. The SNS will produce neutrons by accelerating a pulsed beam of high-energy H⁻ ions down a 300 m linac, compressing each pulse to high intensity, and delivering them to a liquid mercury target where neutrons are produced in a process known as spallation.

The SNS linac is the world’s first high-energy, high-power linac to apply superconducting technology to the acceleration of protons. It has two sections: a room-temperature section, for which beam commissioning was completed last January, and a superconducting section, which operates at 2 K (or recently as high as 4.2 K). The cold linac provides the bulk of the acceleration and has already achieved a beam energy of 912 MeV, or 91% of the linac’s design energy of 1 GeV.

Although the superconducting cavities are designed to operate at 2 K, much of the beam commissioning was performed at 4.2 K, with minimal loss in cavity performance – an unexpected outcome. Compared with the design intensity of 1.6 × 10¹⁴ H⁻ ions per pulse, beam pulses as high as 8 × 10¹³ ions per pulse were accelerated at repetition rates of up to 1 Hz (compared with the 60 Hz design), limited by the power capability of the 7.5 kW commissioning beam dump. All basic beam parameters were verified without any major surprises, and transverse beam profiles were measured using a newly developed laser-profile measurement system that is non-invasive and unique to this H⁻ ion linac.
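
As a back-of-the-envelope check (an illustration added here, not a figure from ORNL), the average beam power is simply the number of ions per pulse times the energy per ion times the repetition rate. The sketch below applies this to the design numbers quoted above.

```python
EV_TO_J = 1.602e-19  # joules per electron-volt

def beam_power_kw(ions_per_pulse, energy_gev, rep_rate_hz):
    """Average beam power in kW for a pulsed linac."""
    joules_per_pulse = ions_per_pulse * energy_gev * 1e9 * EV_TO_J
    return joules_per_pulse * rep_rate_hz / 1e3

# Design parameters quoted in the article: 1.6e14 ions per pulse, 1 GeV, 60 Hz.
print(f"design beam power ~ {beam_power_kw(1.6e14, 1.0, 60):.0f} kW")  # roughly 1.5 MW

# Commissioning pulses of 8e13 ions at 912 MeV each carry about 12 kJ, so the
# 7.5 kW rating of the commissioning dump is what bounds the sustained
# combination of pulse intensity and repetition rate.
pulse_kj = 8e13 * 0.912e9 * EV_TO_J / 1e3
print(f"commissioning pulse energy ~ {pulse_kj:.0f} kJ")
```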

Six DOE national laboratories are collaborating on this DOE Office of Science project. Thomas Jefferson National Accelerator Facility in Virginia was responsible for the superconducting linac and its refrigeration system while Los Alamos National Laboratory in New Mexico provided the radio-frequency systems that drive the linac. The other laboratories are Argonne, Berkeley and Brookhaven.

During its first two years of operation, the SNS will increase the intensity of pulsed neutrons available to researchers nearly tenfold compared with existing facilities, providing higher-quality images of molecular structure and motion. Together, ORNL’s High Flux Isotope Reactor and the SNS will represent the world’s foremost facilities for neutron scattering, a technique the laboratory pioneered shortly after the Second World War.

LEIR gets ions on course for the LHC

On 10 October, at the very first attempt, a beam travelled round the Low Energy Ion Ring (LEIR) at CERN. LEIR is a central part of the injector chain to supply lead ions to the Large Hadron Collider (LHC) from 2008. It will transform long pulses from Linac 3 into short and dense bunches for the LHC.

The following day, after only 1 h of tuning, the beam circulated for about 500 ms per injection. The RF cavities were not yet in operation, so the beam was lost at the end of the injection plateau. The beam used consisted of O⁴⁺ ions, which have a longer lifetime than lead ions; work with lead ions will begin at a later stage.

A new ion source, built by a team from the Low Temperatures Department of the French Atomic Energy Commission (CEA/DRFMC/SBT) in Grenoble, was installed at the beginning of 2005, and final work on installing LEIR took place in the summer. Now the aim is to improve understanding of the accelerator’s behaviour and to optimize the ion beam. In addition, the new electron-cooling system, developed and manufactured in collaboration with the Budker Institute of Nuclear Physics in Novosibirsk, is to be commissioned. This should reduce the beam dimensions, making it possible to accumulate several pulses from Linac 3.

Spitzer sees light from the very first stars

Deep observations with NASA’s Spitzer Space Telescope have revealed infrared background anisotropies that can be attributed to emission from the very first stars in the universe. The diffuse glow detected by Spitzer would have been emitted more than 13 billion years ago by the first generation of stars, which are believed to have been more than a hundred times more massive than the Sun and to have lived for only a few million years before exploding as the first supernovae.

Stars of the very first generation in the universe – made only of primordial hydrogen and helium – are called Population III stars. The absence of heavy chemical elements allows them to reach masses and sizes well beyond those of the metal-rich Population I stars – like our Sun – and the older, metal-poor Population II stars. These early giant stars, with masses of several hundred solar masses, have never been seen in galaxies observable today. Even the Hubble Ultra Deep Field has revealed only galaxies with stars already enriched in heavy elements such as carbon and oxygen, and there are now doubts about the true nature and distance of the candidate Population III galaxy that was thought to be the farthest known galaxy, at a redshift of 10 (see CERN Courier May 2004 p13). It therefore seems that the detection of the very remote first stars and galaxies emerging from the “dark ages” at redshifts of about 10 to 30 will require next-generation facilities such as the James Webb Space Telescope, which should replace the Hubble Space Telescope in 2013.

Now, rather unexpectedly, A Kashlinsky and collaborators claim that they have already had a glimpse of those unknown territories with the modest 85 cm Spitzer Space Telescope. After removing the emission of all foreground stars and galaxies from deep infrared observations, they identified fluctuations of the background radiation on larger scales that could be attributed to the diffuse light from the Population III era. In an article published in Nature, they discuss all other possible sources of these infrared background fluctuations: from instrumental artefacts to distant unresolved galaxies or clusters, including solar-system zodiacal light and galactic emission from interstellar clouds. They conclude that the angular size of the anisotropies is significantly different from what is expected from sources in the solar system and in our galaxy, and that their amplitude is too large to be due to unresolved faint galaxies. The fluctuations are also rather independent of the parameters used to remove foreground objects.

The relatively large amplitude (of the order of 10%) of the anisotropies could result from the fact that the era of Population III stars lasted only about 300 million years and that the very bright stars formed at the peaks of the density field, which had been strongly amplified since the time of the cosmic microwave background emission by the gravitational pull of dark matter. A more detailed comparison of the results with theoretical models predicting the level of fluctuations during the Population III era will have to await forthcoming papers, which are likely to be based on better results extracted from longer-exposure images. It is nevertheless exciting to think that an 85 cm telescope was able to detect the light emitted, only a few hundred million years after the big bang, by the very first stars lighting up the universe.

Further reading

A Kashlinsky et al. 2005 Nature 438 45.

A magnetic memorial to decades of experiments

This is the simple story of a magnet, albeit a rather special one, which is celebrating its 45th birthday at CERN this year. It is somewhat surprising that it has survived! It lives out a peaceful retirement at the far end of the site, as befits a senior magnet that can claim to have fathered a family sharing the same aim.

The magnet came to CERN as the heart of the first g-2 experiment, the aim of which was to measure accurately the anomalous magnetic moment, or g-factor, of the muon. This experiment was one of CERN’s outstanding contributions to physics, and for many years was unique to the laboratory. Indeed, three generations of the experiment were performed at CERN during its first 25 years.

At present the best determined value of g for the muon is 2.0023318416 (Bennett et al. 2004). Clearly one is trying to measure to very high precision a number that is very close to two. The elegance of the experimental method, which uses physics to measure g-2 directly through a determination of frequency (hence facilitating precision measurement), has attracted experimentalists for more than five decades. In addition this parameter has, with considerable reason, fascinated theorists over the same period and continues to be a rare target where experiment can test theory to the limit of its precision.

The purchase of this first 6 m-long g-2 magnet was agreed by the CERN Finance Committee on 14 November 1959, and the magnet was delivered by Oerlikon of Switzerland on 11 July 1960 (figure 1). But was this really the first g-2 magnet, and why was it of this form?

Before 1960 there were a number of experiments, and a list of outstanding names, each of which contributed their piece of the puzzle. If one piece is to be singled out, it must be the establishment of parity non-conservation in the pion-muon-electron decay sequence by the experiments of Richard Garwin, Leon Lederman and Marcel Weinrich, and by Jerome Friedman and Valentine Telegdi (Garwin et al. 1957; Friedman and Telegdi 1957). In this way, two fundamental, enabling “gifts of nature” became known: the muons are born 100% polarized in the pion rest frame, and the asymmetry of the angular distribution of the electrons emitted in their subsequent decay enables the polarization of the muon sample to be traced as a function of time (Combley et al. 1981).

The stage was thus set for a direct attack on the magnetic-moment anomaly for muons. The team that assembled was more than noteworthy with, in alphabetic order, Georges Charpak, Francis Farley, Richard Garwin, Theo Muller, Johannes Sens and Antonino Zichichi (figure 2). The design of their experiment fully exploited the initial muon polarization and final decay electron asymmetry through the idea that it should be possible to store muons in a conventional bending magnet that provided an approximately uniform vertical field.

The magnet was installed in a longitudinally polarized beam of positive muons, arising from the decay of pions produced by CERN’s 600 MeV synchrocyclotron (SC). The magnetic field was arranged in such a way that the muons, introduced at one end of the magnet, were stored in circular orbits that moved along the magnet until they exited at the far end into the analyser; there they decayed, and the emitted electrons revealed their polarization. Figure 3 shows how these orbits were suitably spaced (2 cm/turn) for capture upon entry to the magnet; then bunched closely together (0.4 cm/turn) in the centre for maximum storage times; and lastly spread out (11 cm/turn) at the end to eject the muons into the analyser. Some clever work was needed to add carefully calculated shims in order to create the very special magnetic field. Figure 4 illustrates the work of shimming the magnet and preparation in the halls of the SC.

In the subsequent experiment much thought and care went into reducing systematic errors, with a result of g = 2 × (1.001165 ± 0.000005) sent for publication only six months after the magnet was delivered (Charpak et al. 1961). The result agreed rather well with the theoretical value current at the time, g = 2 × 1.001165.

In such a short article there is no intention to make a comprehensive review of g-2 physics or experiments. This has been done exceedingly well by others, notably in the recent review article by Francis Farley and Yannis Semertzidis (Farley and Semertzidis 2004). The two subsequent generations of g-2 experiments at CERN were both real storage rings and allowed for higher muon energies and longer lifetimes. They permitted measurements of g-2 over many more frequency cycles, which increased the precision considerably.

One name among many in these two generations of experiments is that of Emilio Picasso, who became interested in g-2 in 1963, when he was at Bristol and Cecil Powell urged him to work with Farley on theoretical calculations of the g-factor. (My own interest in g-2 was also triggered by Powell.) Picasso went on to lead the third-generation experiment and later the construction of a much bigger storage ring, the Large Electron-Positron Collider. The g-2 experiments moved to the US in 1983 and have continued the battle at Brookhaven. Of the original pioneers at CERN, Farley still continues to be involved.

The first g-2 magnet at CERN – the focus of this article – can still be found at the far end of the Meyrin site. It is partially disassembled, a little battered and those clever shims have disappeared, but fundamentally it still looks the same as in the pictures of 1960. Luckily no over-enthusiastic administrator has seen fit to scrap this monument to CERN history; perhaps there was a wise guardian angel who knew the magnet’s value. The physics principles of the g-2 experiments are of a rare elegance and the essential parts could be explained to visitors on one panel. Is it not time to give a new lease of life to this 45-year-old magnet as the focus of a new historical exhibit at CERN?

Setting the record straight

Dan Brown’s novel Angels and Demons has been enormously popular. A secret brotherhood murders a physicist who managed to produce the first antimatter on Earth. You have surely heard about the book?

I have even read it. Indeed the author has me killed at the very beginning.

Correct. You die and the antimatter stolen from CERN is used to blackmail the Vatican. CERN does produce antimatter, and the contact of antimatter with ordinary matter results in annihilation, in which large amounts of energy are released. Aren’t you scared that one day Brown’s scenario may become real?

No, since there is no way to produce and store a large quantity of antimatter.

What does “a large quantity” mean? Are we talking about kilograms?

No, not even about nanograms. I am talking about single atoms. We are not able to produce and store amounts of antimatter that would cause damage of any kind, e.g. that could be used as an explosive, as in the book.

You mean we are not able to now – or ever? Is that a problem of technology or perhaps a result of the laws of physics?

Both. Let us start with the technological reasons, which are probably less convincing. Even if somebody could produce lots of antimatter, their main headache would be how to store it. First, they must place it in a vacuum – in contact with any other “container” it would immediately annihilate, that is, disappear! So antimatter must be kept in the very middle of a vacuum by a magnetic field. This is possible – we hope to do it at CERN – but only for a few, or at most a few tens of, atoms.

The vacuum must be of the best quality. What we call a vacuum in daily life is far from ideal. An electric light bulb is not empty but contains a very, very dilute gas. In the CERN “antimatter trap” the gas pressure is 10⁻¹⁷ mbar. This means that on average there are a few tens of thousands of atoms per cubic metre. So even here we have annihilation with “stray” atoms. While it is possible to protect a few hundred antimatter atoms, protecting, say, 1 mg of antimatter from annihilation is practically impossible. And every act of annihilation frees a certain amount of energy and degrades the vacuum. This is a chain reaction.

The technical limit is not a real one. What is impossible today may very well be possible tomorrow. Surely we will learn how to get a better and better vacuum?

I agree that technical arguments may not be convincing. However, in one day CERN produces about 10¹² antiprotons. Renovations, equipment maintenance and upgrading, holidays and other interruptions limit the antiproton production to about 200 days a year. In 50 years of operation at CERN about 10¹⁶ antiprotons would be produced. Even if all of them made anti-atoms, we would arrive at about one millionth of a milligram of antihydrogen – I repeat, in 50 years!

I must add that in the process of antihydrogen production only a tiny percentage of antiprotons make anti-atoms. Once, I calculated that even if all the natural energy resources of our planet – coal, petrol, gas – were used to produce antimatter, it would be enough to drive about 15,000 km by car. This is physics. It does not depend on our technological development.
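
The energy side of this argument can be illustrated with a short calculation. The fuel-energy figures below are assumptions for illustration, and this is a separate estimate from the 15,000 km figure quoted above, which refers to converting the planet's energy resources into antimatter.

```python
M_P_MEV = 938.3         # proton (and antiproton) rest energy in MeV
MEV_TO_J = 1.602e-13    # joules per MeV

n_antiprotons = 1e16    # the 50-year CERN yield quoted in the interview

# Each annihilation converts the rest energy of one antiproton plus one
# proton into radiation and pions, i.e. about 2 * 938 MeV.
energy_j = n_antiprotons * 2 * M_P_MEV * MEV_TO_J
print(f"total annihilation energy ~ {energy_j / 1e6:.1f} MJ")   # about 3 MJ

# Assumed comparison values: petrol holds ~32 MJ per litre, so a car burning
# 8 l/100 km uses ~2.6 MJ of fuel energy per kilometre.
print(f"equivalent driving range ~ {energy_j / 2.6e6:.1f} km")
```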

So we can forget about antimatter as a future energy source?

Of course! Until we find “natural resources” of antimatter (and I would not count on that), the production of antimatter on Earth for energy or, as in the book, for terrorism will never pay off. Much more energy would be used for its production than we could ever get back from annihilation.

Would you agree that more people learned about CERN from Angels and Demons than from reading scientific information? Is CERN correctly described in Brown’s book?

This question is a trap so my answer will be diplomatic. In my opinion antimatter, and thus CERN as the only place where we are able to produce it, came into the book by accident. They were just a background. An atomic bomb at the Vatican would have done as well. I do not want to speak on behalf of the author but I have the impression that he wanted to touch on the conflict between science and faith. History shows that sometimes such a conflict has indeed been seen by the church and the scientific community.

And what about CERN? Is CERN really working on a proof that God does not exist – that scientific knowledge is the real god?

A difficult question. For sure there are many people working at CERN who believe in God and their work actually confirms their convictions. There are also those who do not believe in God but believe in science. For them every discovery may be proof that God does not exist, but it is not true that we are working to prove that.

“Soon all gods will be proven to be false idols. Science has now provided answers to almost every question man can ask.” This statement is made in the book by Maximilian Kohler, the [fictional] director-general of CERN. Do you agree?

No, I do not agree. I am sure that science does not contradict faith. One person may say that he studies the laws of nature, another that he wishes to understand how God initiated or created our world. In my opinion it is the same thing. Both are doing the same work, even if they believe in different things. The point of view represented by the head of CERN in the book was very popular in the 1950s and 60s. It was not the first time that people believed science was close to completion and that technology would save the world. It seemed that building a sufficiently large number of nuclear reactors would solve the energy problem on Earth, and so all other problems would disappear. However, people have not become happier and the old problems are still here. We continue to be dependent on nature, which dictates the conditions. I believe that despite more and more knowledge, ultimately it is nature that wins.

Production of the first antimatter atom on Earth brought you great recognition…

Antihydrogen production indeed led to extraordinary publicity. That is probably the reason that this work is considered to be one of 16 very important discoveries made at CERN. In my opinion, and from the scientific point of view, producing the first antihydrogen atom does not deserve such honour; the very production of antihydrogen is not a revolution in physics. It did not bring anything new, and we do not care about the production itself but about studies of the antihydrogen atom. This is not at all simple. The first atoms produced moved at almost the speed of light. Indeed, one has to be fast to study such an object. Antihydrogen thus has to be cooled down and locked in a bottle; the slower it is, the better we can watch it. So the real goal is not the production of, but studies of, antimatter. I am sure that at some time physicists will manage to measure its gravitational interactions. That would really be something.

Why are antimatter studies so interesting to the public? Usually it is difficult to sell what physicists do in their large laboratories.

This is not completely true. There are at least a few problems that may be sold easily and in an interesting way even when drinking a good wine at a garden party. One example is relativistic physics – everybody is interested in the fact that the faster you move the younger you are. Another subject is astrophysics or the surrounding universe. It is fascinating to many, probably because we can make certain observations ourselves on a cloudless night. Besides, the astrophysical photographs are so impressive they are printed on the front pages of the daily papers. The curvature of time and space is also an extremely interesting problem. The shortest path between two points is not at all a straight line.

And what about antimatter?

When it comes to particle physics, the problem is complicated. People do not know what we really do. Antimatter is an exception, which is surely due to science-fiction films, where antimatter is very often a subject. Serials gather an audience. If every Monday evening we watch the adventures of the same heroes who conquer the universe in space vessels powered by antimatter, then television characters are quickly treated as one’s own family. In this way antimatter has become a family member.

Is your interest in antimatter also a result of those films?

No. I must admit that I have not seen many of them. Discussions of antimatter began long before the first episode of Star Trek was produced. The ancient Greeks had already discussed it – albeit under different names. One can read about it in the writings of Aristotle or Plato – writings that are rather philosophical according to our modern views. But 19th-century physicists also wrote about it, not yet knowing about the existence of its components.

I was always fascinated by the idea of symmetry, especially between the world and the antiworld. Does it exist at all? I think that studies of antimatter are so interesting because even in our everyday life we like symmetry. Just have a look at an ancient Greek temple or a medieval church. But not only buildings – look at a Persian carpet. Only those that are factory-made display full symmetry; the really expensive ones are handmade. Most appreciated are the very small breaks in symmetry, the subtle “faults” of the carpet weaver.

It is said that as a young man you considered becoming an actor. Would you accept the role of Leonard Vetra, the creator of antimatter at CERN, if a film based on Angels and Demons were produced?

Yes, but only on the condition that they do not take my eye or burn “Illuminati” across my chest with a hot iron. I think that from the acting point of view I would manage – after all Vetra is murdered on the first page of the novel. Does he say anything at all?

Oh yes, but only a little. Exactly four sentences.

Then I am sure I would manage.

Do gamma rays reveal our galaxy’s dark matter?

It is well known that visible matter in the form of stars and galaxies makes up only a small fraction of the total energy in our universe. The latest evidence is that 5% is made from particles we know about, while 95% is in a form we know nothing about. The large non-visible, “dark” fraction is known to exist from its gravitational effects and comes in two forms: dark matter, constituting 23% of the total energy, provides the familiar gravitational pull, thus slowing down the expansion of the universe; the remainder, the dominant 72% of the total energy, causes antigravity, i.e. it accelerates the expansion of the universe.

Dark matter was so named by the Swiss scientist Fritz Zwicky. In studying the movements of the galaxies in the Coma cluster in the 1930s, he discovered that there must be much more matter than is visible. Later, the rotation speeds of gases and stars in spiral galaxies revealed that practically every galaxy has a halo of dark matter surrounding it. This dark matter must be much more widely distributed than the visible matter, since the rotation speeds do not fall off like 1⁄√r, as expected from the visible matter in the centre, but stay more or less constant.
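
The argument in the last sentence can be made explicit: for a circular orbit, v(r) = √(GM(r)/r), so a central visible mass gives a Keplerian 1/√r fall-off, while a flat curve requires the enclosed mass to grow as M(r) ∝ r. Below is a minimal sketch with a purely illustrative central mass.

```python
import math

G = 6.674e-11             # m^3 kg^-1 s^-2
M_SUN = 1.989e30          # kg
KPC = 3.086e19            # m

M_visible = 1e11 * M_SUN  # assumed central (visible) mass, illustrative only

def v_keplerian(r_kpc):
    """Circular speed if all the mass sat in the centre: v falls like 1/sqrt(r)."""
    r = r_kpc * KPC
    return math.sqrt(G * M_visible / r) / 1e3   # km/s

def v_flat(r_kpc, v0=220.0):
    """A flat rotation curve, which requires enclosed mass M(r) growing like r."""
    return v0

for r in (2, 4, 8, 16, 32):
    print(f"r = {r:2d} kpc: Keplerian {v_keplerian(r):6.1f} km/s, "
          f"flat (observed-like) {v_flat(r):.0f} km/s")
```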

The fact that the dark matter is distributed over large distances implies that it undergoes little energy loss, so any interactions it has must be weak. Dark-matter particles are therefore generically called WIMPs, for weakly interacting massive particles. These WIMPs must, however, be able to annihilate if they were produced in thermal equilibrium with all other particles in the early universe. At that time the number densities of the different particle species were all of the same order of magnitude. Just as the baryon/photon ratio was reduced by 10 orders of magnitude by baryon annihilation, so the WIMP number density, which is of the same order of magnitude as the baryon number density, can only have been reduced by annihilation, assuming the WIMPs are stable. (If they are not stable they must have a lifetime of the order of the lifetime of the universe, otherwise they would no longer exist.)
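
The article does not quote it, but the standard thermal freeze-out estimate makes this reasoning quantitative: the relic abundance left over after annihilation scales inversely with the annihilation cross-section, roughly Ωχh² ≈ 3 × 10⁻²⁷ cm³ s⁻¹ / ⟨σv⟩. A minimal numerical sketch:

```python
# Standard thermal-relic rule of thumb (a textbook approximation, not a
# number taken from the article): omega h^2 ~ 3e-27 cm^3/s divided by the
# thermally averaged annihilation cross-section <sigma v>.
def relic_density(sigma_v_cm3_s):
    return 3e-27 / sigma_v_cm3_s

# A weak-scale cross-section of ~3e-26 cm^3/s gives omega h^2 ~ 0.1,
# close to the measured dark-matter density.
print(f"omega_chi h^2 ~ {relic_density(3e-26):.2f}")
```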

If WIMPs in our galaxy collide and annihilate into quark pairs, these in turn will produce stable particles including gamma rays. The gamma rays play a very special role as they point straight back to the source, in contrast to charged particles, which change their direction in galactic magnetic fields; moreover, as they hardly interact they can easily be observed from across the galaxy. Gamma rays therefore offer a perfect means of reconstructing the distribution, or halo profile, of dark matter through observations in different sky directions.

Of course this assumes that gamma rays from dark-matter annihilation can be differentiated from the background. This is indeed possible, since the spectral shapes are very different, as can be understood as follows. WIMPs have almost no kinetic energy, so after their annihilation into quark pairs the WIMP mass is converted into the energy of the quarks. The gamma rays produced in the fragmentation of such mono-energetic quarks have been well studied at CERN’s Large Electron-Positron collider; they originate mainly from the decay of the copiously produced π⁰ mesons. The background, on the other hand, originates predominantly from the decay of π⁰ mesons produced by cosmic rays (mainly protons) scattering inelastically on the gas of the galactic disc, and so corresponds to the spectrum of gamma rays produced in fixed-target experiments with proton-proton collisions. In this case the gamma-ray spectrum can be calculated from the known cosmic-ray spectrum.

Clearly the steep power-law spectrum of cosmic rays will yield a spectrum of gamma rays that differs from that of the mono-energetic quarks produced in dark-matter annihilation. These different shapes can therefore be fitted to the data with free normalization factors, which then determine the relative contributions from dark-matter annihilation and background. Fitting the shapes has the advantage that the amount of background is determined from the data itself in each sky direction, so there is no need to rely on complicated galactic propagation models to obtain absolute background fluxes.
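
Schematically, such a shape fit is a linear least-squares fit of two fixed spectral templates with free normalizations. The toy sketch below uses invented template shapes and pseudo-data purely to illustrate the method; it is not the actual EGRET analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.logspace(-1, 1, 20)   # toy grid of gamma-ray energies in GeV

# Toy spectral templates (shapes only; the normalizations are the fit parameters).
bkg_shape = E**-2.7                                          # steep power law, stand-in for the cosmic-ray background
sig_shape = np.exp(-((np.log(E) - np.log(3.0))**2) / 0.8)    # broad bump, stand-in for the annihilation signal

# Pseudo-data: a "true" mixture of the two shapes with 5% noise.
true_bkg, true_sig = 5.0, 2.0
data = (true_bkg * bkg_shape + true_sig * sig_shape) * rng.normal(1.0, 0.05, size=E.size)

# Linear least squares for the two normalization factors.
A = np.column_stack([bkg_shape, sig_shape])
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
print(f"fitted background norm = {coeffs[0]:.2f}, signal norm = {coeffs[1]:.2f}")
```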

So what can be seen in the gamma-ray sky? A very detailed gamma-ray distribution over the whole sky was obtained by the Energetic Gamma Ray Emission Telescope (EGRET) on NASA’s Compton Gamma Ray Observatory, which collected data from 1991 to 2000. The EGRET telescope was carefully calibrated at SLAC in a quasi-monochromatic photon beam in the energy range 0.02 to 10 GeV. In 1997 the EGRET collaboration published their findings on a diffuse component of the gamma rays that cannot be described by the background: they observed an excess as large as a factor of two above the background for gamma-ray energies above 1 GeV. Recently, at the University of Karlsruhe, we have shown that this apparent excess traces the distribution of dark matter, since knowing the distribution of both the visible and dark matter allows us to reconstruct the rotation curve of our galaxy, especially its peculiar non-flat shape, which can be explained by the EGRET excess.

Mapping the flux

Figure 1 shows the excess for the flux from the galactic centre. The curve through the data points corresponds to the two-parameter fit, where the parameters are the normalization factors for the two known spectral shapes of signal and background, as discussed above; the red and yellow areas indicate the contributions from the dark-matter annihilation signal and the background, respectively. The WIMP mass was taken to be 60 GeV, which gives an excellent fit, although WIMP masses between 50 and 100 GeV remain possible if extreme background shapes are permitted. The fit was repeated for 180 independent sky directions. In every direction the excess was observed and an excellent fit could be obtained for a WIMP mass of 60 GeV, provided the contribution from the extragalactic background was also taken into account towards the galactic poles.

Such a detailed mapping of the flux from dark-matter annihilation in the sky allows a reconstruction of the distribution of dark matter in our galaxy. The result is surprising: it yields a pseudo-isothermal profile, as observed from the rotation curves of many galaxies, but with a substructure in the galactic plane in the form of doughnut-shaped rings at radii of 4 and 14 kpc. Our solar system, at a distance of 8 kpc from the centre, lies between the inner and outer rings. The enhanced gamma radiation at 14 kpc was also discussed in the original paper by Hunter et al. in 1997 and called the “cosmic enhancement factor”.

The ring structures in the dark-matter halo are expected to have a significant influence on stellar orbits. A star inside the outer ring feels an inward gravitational force from the galactic centre and an outward force from the outer ring, so the total gravitational force on it is reduced. Circular orbits inside the outer ring therefore require lower speeds, producing a minimum in the rotation curve at radii just within the ring. Outside the ring the gravitational forces from the centre and the ring add together, producing a maximum in the rotation curve. These effects are indeed observed, as shown in figure 2, indicating that the EGRET excess really does trace the dark matter in our galaxy.
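
This cancellation can be checked numerically by approximating the ring as many point masses; the masses and radii in the sketch below are arbitrary illustrative values, not fitted galactic parameters.

```python
import numpy as np

G = 1.0          # work in arbitrary units
M_centre = 1.0   # central mass
M_ring = 0.2     # ring mass (illustrative)
R_ring = 14.0    # ring radius, in the spirit of the 14 kpc ring

# Approximate the ring by N point masses in the plane.
N = 2000
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ring_xy = R_ring * np.column_stack([np.cos(phi), np.sin(phi)])
m_seg = M_ring / N

def radial_accel(r):
    """Net inward radial acceleration at (r, 0) from the centre plus the ring."""
    pos = np.array([r, 0.0])
    a_centre = G * M_centre / r**2              # always inward
    d = ring_xy - pos                           # vectors from the star to each ring segment
    dist = np.linalg.norm(d, axis=1)
    a_vec = (G * m_seg * d / dist[:, None]**3).sum(axis=0)
    a_ring_inward = -a_vec[0]                   # +x points outward, so inward = -x component
    return a_centre + a_ring_inward

for r in (8.0, 20.0):
    bare = G * M_centre / r**2
    where = "inside" if r < R_ring else "outside"
    print(f"r = {r:4.1f} ({where} ring): net/central pull = {radial_accel(r) / bare:.2f}")
    # Inside the ring the ratio is below 1 (ring pulls outward);
    # outside it is above 1 (ring and centre pull together).
```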

The origin of these substructures in the dark-matter distribution is thought to be the hierarchical clustering of dark matter into galaxies: small clumps of dark matter grow from the quantum fluctuations appearing after inflation in the early universe, and these clumps combine to form galaxies. That the outer ring originates from the infall of a dwarf galaxy is supported by the fact that hundreds of millions of old, mostly burned-out stars have recently been discovered in this region (Newberg et al. 2002, Ibata et al. 2003 and Crane et al. 2003). The small velocity dispersion of these stars and their large scale height perpendicular to the galactic disc prove that they cannot be part of the disc.

The position and shape of the inner ring coincide with a ring of molecular hydrogen. Molecules form from atomic hydrogen in the presence of dust or heavy nuclei, so a ring of molecular hydrogen suggests an attractive gravitational potential well in which the dust can settle. The significant contribution of the inner ring to the rotation curve is also indicated in figure 2.

The perfect WIMP

The conclusion that the EGRET excess traces dark matter makes no assumption about the nature of the dark matter, except that its annihilation produces hard gamma rays consistent with the fragmentation of mono-energetic quarks between 50 and 100 GeV. Supersymmetry, which presupposes a symmetry between particles with integer spin (bosons) and half-integer spin (fermions), provides a good WIMP candidate. This symmetry requires a doubling of the particle species in the Standard Model: each boson obtains a fermion as superpartner and vice versa. These superpartners are still to be found, but the lightest is expected to be stable, neutral, massive and barely interacting with normal matter, i.e. it is the perfect WIMP.

Although the present data cannot prove the supersymmetric nature of dark matter, it is intriguing that the WIMP mass and WIMP annihilation cross-section (which can be calculated from the present WIMP density) are perfectly compatible with supersymmetry, including all constraints from electroweak precision experiments and limits from direct searches for Higgs bosons and supersymmetric particles, at least if the spin-0 superpartners are in the tera-electron-volt range. Figure 3 shows the allowed range of masses for spin-0 and spin-½ superpartners, assuming mass unification at the grand unification scale, i.e. common masses m0 and m1/2 for the spin-0 and spin-½ supersymmetric particles, respectively.

The allowed region in figure 3 is within reach of the Large Hadron Collider, so finding the predicted spectrum of light spin-½ and heavy spin-0 superpartners would prove the supersymmetric nature of the WIMP, especially if the lightest superpartner is stable and has the same mass as the WIMP mass deduced from the EGRET data. The lightest superpartner has properties akin to a spin-½ photon for the allowed region of figure 3, in which case the dark matter could be considered the supersymmetric partner of the cosmic microwave background, if supersymmetry is discovered. It is interesting to note that this region of parameter space yields perfect unification of the gauge couplings without any free parameters. In our first analysis in 1991, the scale of the supersymmetric masses had to be treated as a free parameter (Amaldi et al. 1991).

The statistical significance of the EGRET excess is at least 10σ and alternative models without dark matter do not yield good fits if all sky directions are considered. Furthermore, alternative models do not explain the peculiar shape of the rotation curve, or the occurrence of the hydrogen rings at 4 and 14 kpc and the high density of old stars at 14 kpc. Therefore, we conclude that the EGRET excess provides an intriguing hint that dark matter is not so dark, but is visible by flashes of typically 30-40 gamma rays for each annihilation.

Gamma-ray bursts: a look behind the headlines

Gamma-ray bursts (GRBs) – intense but brief flashes of gamma rays – were first discovered accidentally by US military satellites in 1967, and have since become a major puzzle for astrophysics. By 1992, however, observations mainly with the Burst and Transient Source Experiment (BATSE) on-board NASA’s Compton Gamma Ray Observatory had provided compelling evidence that GRBs originate mostly at large cosmological distances and, moreover, divide into two distinct classes: short hard-spectrum bursts (SHBs) with a typical duration of less than one second, and long soft bursts that typically last longer than two seconds. However, the nature of the GRBs remained a mystery.

A significant breakthrough came when the Italian and Dutch space agencies put BeppoSAX into orbit in 1996. This X-ray satellite localized GRBs in its field of view with arcminute precision and led to the discovery of X-ray, optical and radio afterglows for long-duration GRBs. These afterglows faded relatively slowly and enabled subarcsecond localization of the long GRBs, as well as measurement of their cosmological redshifts and absolute brightness, identification of their star-forming galaxies and finally their progenitors – ultrarelativistic jets ejected from supernova explosions due to the core collapse of massive stars (see CERN Courier June 2003 p5 and p12). Yet despite this impressive progress, many important questions regarding long GRBs remained unanswered. What type of core-collapse supernova produces them? What sort of remnant is left over? What is the true production mechanism? Moreover, despite extensive searches no afterglow was detected for the SHBs, and their redshifts, intrinsic brightness, host galaxies and progenitors remained unknown.

This situation has changed dramatically in the past few months after the successful launch in November 2004 of Swift, NASA’s multi-wavelength observatory dedicated to the study of GRBs. Its main missions are to detect GRBs, measure their properties, localize their sky positions with sufficient precision shortly after detection, and communicate these positions automatically to other space- and ground-based telescopes in order to discover and follow up the afterglows in a broad range of wavelengths, soon after the beginning of the bursts. By the end of September 2005, Swift had detected and localized 70 GRBs. Three of these – 050509B, 050724 and 050813 – were SHBs and follow-up observations have discovered elliptical host galaxies at redshifts 0.225, 0.258 and 0.722, respectively. Shortly after Swift’s detection and localization of SHB 050509B, NASA’s High Energy Transient Explorer satellite, HETE-2, which had been launched in 2000, detected and localized another SHB, 050709, on 9 July 2005 (Gehrels et al. 2005 and Villasenor et al. 2005). Follow-up measurements have found and measured its X-ray and optical afterglows, which led in turn to the discovery of its host – a star-forming young galaxy at redshift 0.16 (Hjorth et al. 2005 and Fox et al. 2005).

The observed brightness and energy fluence, and the measured redshifts of the SHBs imply that their intrinsic brightness is smaller than that of typical long GRBs by two to three orders of magnitude. Moreover, their inferred total emitted radiation, assuming isotropic emission, is smaller by four to five orders of magnitude. So it is quite possible that SHBs are seen at relatively small redshifts because they are intrinsically faint and cannot be seen from large cosmological distances. However, it is not clear why around 20% of the bursts observed by BATSE are SHBs, but only 5% of those seen by Swift.
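
The step from fluence and redshift to total emitted energy (assuming isotropic emission) is E_iso = 4πd_L²S/(1+z). The sketch below evaluates this in a flat ΛCDM cosmology; the cosmological parameters and the 10⁻⁷ erg cm⁻² fluence are assumptions chosen for illustration, not values taken from the article.

```python
import math

C_KM_S = 2.998e5      # speed of light, km/s
H0 = 70.0             # Hubble constant, km/s/Mpc (assumed)
OM, OL = 0.3, 0.7     # flat LambdaCDM parameters (assumed)
MPC_CM = 3.086e24     # centimetres per megaparsec

def lum_dist_cm(z, steps=10000):
    """Luminosity distance in flat LambdaCDM via simple numerical integration."""
    dz = z / steps
    integral = sum(dz / math.sqrt(OM * (1 + (i + 0.5) * dz)**3 + OL)
                   for i in range(steps))
    d_c_mpc = C_KM_S / H0 * integral        # comoving distance in Mpc
    return (1 + z) * d_c_mpc * MPC_CM

def e_iso_erg(fluence_erg_cm2, z):
    """Isotropic-equivalent emitted energy from fluence and redshift."""
    d_l = lum_dist_cm(z)
    return 4 * math.pi * d_l**2 * fluence_erg_cm2 / (1 + z)

# Purely illustrative fluence of 1e-7 erg/cm^2 at the redshift of SHB 050709.
print(f"E_iso ~ {e_iso_erg(1e-7, 0.16):.2e} erg")
```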

Has the mystery been solved?

These observations have led to recent press releases by NASA and some prestigious universities, and to the publication of articles in astrophysical journals and in Nature, Science and Scientific American, which claim that “the 35-year-old mystery of GRBs” has finally been solved and that SHBs have been proven to be produced by the merger of neutron stars, or of a neutron star and a stellar black hole, in close binary systems. But is this so?

The relatively small redshifts of SHBs and their association with both star-forming spiral galaxies and elliptical galaxies containing mainly old stars appears consistent with their origin in the merger of neutron stars in binary systems, as the merger usually takes place a long time after the formation of the neutron stars. The idea is that a large number of the neutrinos and antineutrinos that are emitted in the merger collide with each other outside the merging stars and annihilate into electron-positron pairs, which form a fast expanding fireball that produces the GRB (Goodman et al. 1987). Later, it was suggested that instead of spherical fireballs, mergers produce highly relativistic jets along the rotation axis, which can produce shorter and brighter GRBs through, for example, inverse Compton scattering of ambient light around the merging stars.

At first sight the merger scenario seems consistent with the observations, but a more careful examination raises serious doubts. The cosmic rate of such mergers as a function of redshift can be calculated from general relativity using the observed properties of galactic neutron-star binaries and their production rate, which must be proportional to the measured star formation rate. Despite the small statistics the redshift distribution of the SHBs detected by Swift and HETE-2 appears inconsistent with the theoretical expectations from the merger model.

A second problem concerns an X-ray flare observed by the Chandra X-ray Observatory in the afterglow of SHB 050709 on day 16 after the burst. In the fireball models of GRBs, X-ray flares in the afterglow are interpreted as due to “re-energization” of the afterglow by the central engine (Zhang and Meszaros 2004). The final merger in a neutron-star binary due to gravitational wave emission, however, takes place in less than a millisecond and produces a black hole. It is hard to imagine that the remnant can “re-energize” the X-ray afterglow after 16 days, a time scale one billion times larger than a millisecond. On the other hand, in the alternative “cannonball” model of GRBs, X-ray flares are produced when the highly relativistic jets from the central engine (in this case mass accretion on a compact object) encounter density changes in the interstellar medium (Dado et al. 2002). Indeed, SHB 050709 took place not far from the centre of a galaxy where star formation produces strong winds and density irregularities.

Other scenarios for SHB production have been dismissed as unfavoured by the observations, but this may have been premature. Accretion-induced collapse of neutron stars in compact neutron-star/white-dwarf binaries is consistent with all the observations. Origin in a supernova collapse was ruled out for 050509B and 050709 by follow-up measurements with powerful optical telescopes, but only for these SHBs; much larger statistics are needed to conclude that SHB production in a type Ia supernova is unlikely. Origin in soft gamma-ray repeaters (SGRs), which are anomalous pulsars that occasionally produce GRBs, was ruled out by the claim that they are too faint to be observed at the measured redshifts of SHBs. Consider, however, the burst emitted on 27 December 2004 by the galactic SGR 1806-20. It was the brightest GRB ever recorded from any astronomical object, beginning with a short 0.2 second spike that was followed by a much longer and dimmer tail modulated by the 7.55 second period of the pulsar. Had it taken place in a distant galaxy, the spike, if detected, would have been classified as an SHB.

The maximum distance from which such a spike could be detected depends on the uncertain distance of SGR 1806-20 and, if it was relativistically beamed into a small solid angle like ordinary long GRBs, on its viewing angle. A viewing angle three to four times smaller than that for the spike presumably beamed from the superburst of SGR 1806-20 would be sufficient to make it look like a normal SHB at a redshift of z ∼ 0.25 (Dar 2005). Moreover, SGRs may be born not only in core collapse supernova explosions but, for example, in the accretion-induced collapse of white dwarfs, as suggested by the fact that, despite their young age, only one of the four known SGRs is located inside a supernova remnant. This may explain why SHBs are produced both in elliptical galaxies with old stellar populations and in star-forming spiral galaxies with young stellar populations.

Unsurprising behaviour

Because of its higher sensitivity, Swift can see deeper into space than any previous gamma-ray satellite. Indeed, the 14 long GRBs localized by Swift for which a redshift has been reported have a mean redshift of ⟨z⟩ = 2.8. This is twice the mean of ⟨z⟩ = 1.4 for the 43 GRBs with a known redshift that have been localized by BeppoSAX, HETE-2 and the interplanetary network over the past seven years.

Swift record redshift so far is z = 6.29, for GRB 050904, which looks like an ordinary burst with an ordinary afterglow (Haislip et al. 2005). This redshift is comparable to that of the most distant quasar measured to date, and in the standard cosmological model it corresponds to a look-back time of nearly 14 billion years, to when the universe was only one billion years old. Thus this single GRB already indicates that star formation and core-collapse supernova explosions took place at this early cosmic time, and together with previous measurements shows that the rate of star formation has not declined between z = 1.4 and z = 6.29. It also demonstrates that long GRBs and their optical afterglows, which are more luminous than any known astronomical object by many orders of magnitude, can be used as excellent tools for studying the history of star formation, galaxies, and intergalactic space since the time of the early universe.

The X-ray telescope on-board Swift also recorded, in fine detail, the evolution of X-ray afterglows of a dozen or so GRBs, from right after the burst until they became undetectable. Most of these X-ray afterglows demonstrate a universal behaviour: an initial fast fall-off within the first few minutes followed by a shallow decline over the next few hours, which afterwards steepens gradually into a much faster power-law decline (see figure 1). In some cases Swift has also detected X-ray flares superimposed on this universal behaviour. These observations were presented in Nature and Science as complete surprises that cannot easily be explained by current theoretical models of GRBs. This may be true for the popular fireball models of GRBs, but it is not true for the cannonball model, which correctly predicted these effects long ago.

The fast initial fall-off and the gradual roll-over of the shallow decline into a later power-law decline were in fact already indicated by observations in 1998. Figure 2 shows the comparison of these observations with the universal behaviour predicted by the cannonball model in 2001. Moreover, an X-ray flare had already been seen in 1997 by BeppoSAX in the afterglow of GRB 970508 (Pian et al. 2001) and can also be explained in the cannonball model.

In the cannonball model, the early X-ray afterglow originates in thin bremsstrahlung from a rapidly expanding plasmoid – the cannonball – which stops expanding within a few observer minutes after ejection. Synchrotron emission from the ionized interstellar electrons, which are swept into the decelerating cannonball, then takes over. The shallow decline followed by the roll-over into a power-law decline is a simple effect of off-axis viewing of decelerating jets in the interstellar medium, which has been observed in many optical afterglows but misinterpreted by fireball models. The flares are caused by collisions of the jet with density jumps in the interstellar medium produced by stellar winds and supernova explosions.

In conclusion, it seems that the localization of SHBs by Swift and HETE-2, which led to the discovery of their afterglows, the identification of their host galaxies and the measurements of their redshifts, have been over-interpreted. While these are undoubtedly observational breakthroughs, the origin of SHBs is still an unsolved mystery. Nevertheless, the small redshifts of SHBs are good news for gravitational wave detectors such as LIGO and LISA, in particular, if SHBs are produced mainly by mergers of neutron stars or a neutron star and a black hole in binaries, as first suggested in 1987. Moreover, the observed behaviour of the early X-ray afterglows of long GRBs and the X-ray flares – both claimed to be a complete surprise and unexpected in the fireball models – were predicted correctly long ago, like many other features of long gamma-ray bursts, by the cannonball model.

ILC comes to Snowmass

In August 2004 the Executive Committee of the American Linear Collider Physics Group (ALCPG), galvanized by the technology choice for the future International Linear Collider (ILC), decided to host an extended international summer workshop to further the detector designs and advance the physics arguments. Subsequently, the International Linear Collider Steering Committee (ILCSC) elected to hold their Second ILC Accelerator Workshop in conjunction with the Physics and Detector Workshop. Ed Berger of Argonne and Uriel Nauenberg of Colorado were selected to co-chair the organizing committee for this joint workshop, which was held at Snowmass, Colorado, US, for two weeks in August. ALCPG co-chairs Jim Brau of Oregon and Mark Oreglia of Chicago, along with accelerator community representatives Shekhar Mishra from Fermilab and Nan Phinney from SLAC, rounded out the committee. While hosted by the North American community, the workshops were planned with worldwide participation in all the advisory committees and in the scientific programme committees for the accelerator, detector, physics and outreach activities.

As Berger described in the opening address, the primary accelerator goals at Snowmass were to define an ILC Baseline Configuration Document – to be completed by the end of 2005 – and to identify critical R&D topics and timelines. On the detector front, the goal was to develop detector design studies with a firm understanding of the technical details and physics performance of the three major detector concepts, the required future R&D, test-beam plans, machine-detector interface issues, beamline instrumentation and cost estimates. The physics goals were to advance and sharpen ILC physics studies, including precise higher-order calculations, synergy with the physics programme of CERN’s Large Hadron Collider (LHC), connections to cosmology, and, very importantly, relationships to the detector designs. A crucial fourth goal was to facilitate and strengthen the broad participation of the scientific and engineering communities in ILC physics, detectors and accelerators, and to engage the greater public in this exciting work.

A rich new world

Over the past few years, prestigious panels in Europe (the European Committee for Future Accelerators – ECFA), Asia (the Asian Committee for Future Accelerators – ACFA) and the US (the High Energy Physics Advisory Panel – HEPAP) have reached an unprecedented consensus that the next major accelerator for world particle physics should be a 500 GeV electron-positron linear collider with the capability of extension to higher energies. This machine would be ideal for exploiting the anticipated discoveries at the LHC and would also have its own unique discovery capabilities. The ability to control the collision energy, polarize one or both beams, and measure cleanly the particles produced will allow the linear collider to zero in on the crucial features of a rich new world that Peter Zerwas of DESY described on the first day of the workshop, which might include Higgs bosons, supersymmetric particles and evidence of extra spatial dimensions.

This physics programme dictates specific requirements for the detectors and for the accelerator design. As the ILC community turns increasingly to design and engineering, there was considerable activity in the physics groups to formulate these requirements concretely. Early in the workshop, an international panel set up this spring presented a proposed list of benchmark processes to be used in optimizing the ILC detector designs. This brought a new flavour to the physics discussions – one that will continue in future work on physics at the ILC.

This influence was felt most strongly in the working groups on Higgs physics and supersymmetry. Precision electroweak data predict that the neutral Higgs boson will be observed within the initial energy reach of the ILC, which will provide a microscope to study the whole range of possible Higgs boson decays and measure coupling strengths to the percent level. To accomplish this goal, the ILC detectors must have significantly better performance in several respects than those at CERN’s Large Electron-Positron collider (LEP). In contrast with the quite specific implications of Higgs boson physics, the idea of supersymmetry encompasses various models with diverse implications. Some of the signatures of supersymmetry will be studied at the LHC, but the problem of understanding the exact nature of any new physics will be a difficult one. Through the study of a diverse set of specific parameter sets for supersymmetry, work done at Snowmass showed that the ILC experiments could address this problem robustly, and the necessary detector performances were specified.

Precision is crucial

The precision of the ILC experiments should be supported by equally precise theoretical calculations. Among those discussed at the workshop were Standard Model analyses, including higher-order contributions in quantum chromodynamics, calculations of radiative corrections to the key Higgs boson production processes, and precision calculations within models of new physics. The Supersymmetry Parameter Analysis project, presented at Snowmass, proposes a convention for the parameters of supersymmetry models from which observables can be computed to the part-per-mille level for unambiguous comparison of theory and experiment. The fourth in the series of LoopFest conferences on higher-order calculations took place during the Snowmass workshop, the highlight this year being a presentation of new twistor-space methods for computing amplitudes for the emission of very large numbers of gluons and other massless particles. New calculations of the process e⁺e⁻ → tt̄h showed that higher-order corrections enhance this process by a factor of two near threshold, making it possible for the 500 GeV ILC to obtain a precise measurement of the top quark Yukawa coupling.

CCEilc4_12-05

The capabilities of the ILC will make it possible to explore new models, which include Higgs sectors with CP violation (for which the ILC offers specific probes of quantum numbers), and models with a “warped” extra dimension, which predict anomalies in the top quark couplings that can be seen in tt̄ production just above threshold.

Many of the discussions of new physics highlighted the connections to current problems of cosmology. Supersymmetry and many other models of new physics contain particles that could make up (at least part of) the cosmic dark matter. If these models are correct, dark-matter candidates will be produced in the laboratory at the LHC. Studies at Snowmass showed how precise measurements at the ILC could be used to verify whether these particles have the properties required to account for the densities and cross-sections of astrophysical dark matter. Here all the strands of ILC physics – exotic models, precision calculations and incisive experimental capabilities – could combine to provide physical insight that can be obtained in no other way.

The accelerator design effort

In August 2004 the International Technology Recommendation Panel concluded that the ILC should be based on superconducting radio-frequency accelerating structures. This recommendation has been universally adopted as the basis for the ILC project, now being coordinated via the Global Design Effort (GDE), led by Barry Barish from Caltech. At Snowmass, the accelerator experts picked up the baton from the successful launch of the ILC design effort at the first ILC workshop, held at KEK in Japan in November 2004. Snowmass also provided the forum for the first official meeting of the GDE. The working groups established for that first workshop formed the basis of the organizing units through Snowmass. In addition, six global groups were formed to work towards a realistic reference design: Parameters, Controls & Instrumentation, Operations & Availability, Civil & Siting, Cost & Engineering, and Options.

Sources of electrons and positrons are the starting points of the accelerator chain. The successful production of intense beams of polarized electrons at the SLAC Linear Collider (SLC) between 1992 and 1998 demonstrated the mechanism of choice for producing the ILC’s electrons. When polarized laser light is fired at special cathode materials, electrons are produced with their spin vectors aligned, and polarizations of up to 90% have been achieved in the laboratory. The ability to select the “handedness” of the beam is an incisive capability that will allow probes of the left- or right-handed nature of the couplings of new particles, such as those in supersymmetric models.

As well as the positron production systems used previously, other approaches are being studied to achieve polarized beams. One involves passing the high-energy electron beam through the periodic magnetic field provided by an “undulator”, similar to those used at synchrotron light sources. The intense photon beams radiated by the undulating electrons can be converted in a thin target into electron-positron pairs. A second method involves boosting the energies of photons produced in laser beams by Compton back-scattering them from electrons, and then similarly converting the boosted photons to yield positrons. If the intermediate photons are polarized, both of these methods allow polarized positron production.
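
To see why such a high-energy drive beam is needed, it helps to evaluate the on-axis fundamental of a helical undulator, whose wavelength is roughly λ_u(1 + K²)/2γ². The short Python sketch below does this for purely illustrative parameters (a 150 GeV electron beam, a 1 cm undulator period and a deflection parameter K = 1, none of which are figures from the workshop); the resulting photons of order 10 MeV lie comfortably above the 1.02 MeV threshold for electron-positron pair production in the conversion target.

import math

# Rough photon energy from a helical undulator positron source.
# Beam energy, undulator period and K below are illustrative assumptions,
# not parameters quoted in this article.
M_E_EV = 0.511e6        # electron rest energy [eV]
HC_EV_M = 1.23984e-6    # h*c [eV m]

def undulator_photon_energy(beam_energy_ev, period_m, K):
    """On-axis fundamental photon energy of a helical undulator [eV]."""
    gamma = beam_energy_ev / M_E_EV
    wavelength = period_m * (1.0 + K**2) / (2.0 * gamma**2)
    return HC_EV_M / wavelength

e_gamma = undulator_photon_energy(150e9, 0.01, 1.0)
print(f"fundamental photon energy ~ {e_gamma / 1e6:.0f} MeV")     # about 11 MeV
print(f"pair-production threshold = {2 * M_E_EV / 1e6:.2f} MeV")  # 1.02 MeV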

The electron and positron beams produced must be “cooled” in so-called damping rings, in which their transverse size is reduced via synchrotron radiation during several hundred circuits. A few different designs are being studied for these rings. Challenges include precise component alignment and the high degree of stability required for low emittance, while minimizing collective effects that can blow up the beams.

Most of the length of the linear collider, some 20 km or so, will be devoted to accelerating the electron and positron beams in two opposing linacs. The debate at Snowmass centred on critical issues, such as the choice of operating gradient for the superconducting niobium cavities and the advanced technologies needed to power them. The details of the shape and surface preparation of the cavities are among the issues that affect the gradient that can be supported. Larger radii of curvature of the cavity lobes are desirable to reduce peak surface electric fields that can induce breakdown. Also, advanced surface preparation techniques such as electropolishing are being refined, and cavities are being produced and tested by strong international teams at regional test facilities. Based on experience to date, a draft recommendation was reached for a mean initial operating gradient of around 31 MV per metre. Each linac would then need to be just over 10 km long to reach the initial target centre-of-mass energy of 500 GeV.
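
The relation between gradient and linac length is simple arithmetic: each beam must reach roughly 250 GeV for 500 GeV collisions, so the active cavity length per linac is about 250 GeV divided by the gradient. The Python sketch below makes this explicit; the fill factor (the fraction of the tunnel actually occupied by accelerating cavities) is an illustrative assumption rather than a number quoted at Snowmass.

# Back-of-the-envelope linac length for the 500 GeV ILC baseline.
# The fill factor is an assumed value; 31 MV/m is the draft
# recommendation mentioned in the text.
beam_energy_gev = 250.0     # energy per beam for 500 GeV centre of mass
gradient_mv_per_m = 31.0    # mean operating gradient in the cavities
fill_factor = 0.75          # assumed fraction of tunnel length that is active cavity

active_length_km = beam_energy_gev * 1000.0 / gradient_mv_per_m / 1000.0
total_length_km = active_length_km / fill_factor

print(f"active cavity length per linac: {active_length_km:.1f} km")  # ~8.1 km
print(f"linac length with fill factor:  {total_length_km:.1f} km")   # just over 10 km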

Similar expert attention was devoted to the modulators, klystrons and distribution systems that convert “wall-plug” power into the high-power (10 MW) millisecond-long pulses applied to the cavities. Industrial companies in Europe, Asia and the US have developed prototype klystrons for this purpose. These are in use at the TESLA Test Facility at DESY, which provides a working prototype linac system. Several innovative ideas for solid-state modulators or more compact klystrons are also being explored with industry.

Once at their final energy, the beams must be carefully focused and steered into collision. The collision point lies between the ends of the two linacs, in an interaction region that houses the detector(s). The working recommendation, defined at the workshop at KEK, is to consider two interaction regions, each with one detector. Many important ramifications were discussed at Snowmass. For example, the current plan calls for the beams to be brought into the interaction region with a small horizontal crossing angle of either 2 or 20 mrad. In either case the final-focus magnets must be carefully designed to be compact and stable with respect to vibrations that could be transferred to beam motion. A detailed engineering design is being prepared, which will also include beam-steering feedback systems to maintain the beams in collision and optimize the luminosity. Intermediate values for the crossing angle, such as 14 mrad, are also under study.

Of no less importance is the need to remove the spent beams safely from the interaction region and transport them to the beam dumps. As each beam carries several megawatts of power, the extraction lines must provide the necessary clearances and allow the beams to be dumped safely in the event of equipment failure. The “machine protection” system and beam dumps remain subjects for active R&D. Many crucial diagnostic systems for measuring the beam energy, polarization and luminosity will be based in the extraction lines, and excellent progress was made in defining the locations and configurations of the necessary instrumentation.

The GDE will build on the consensus reached at Snowmass and produce an accelerator Baseline Configuration Document (BCD) by the end of 2005. As Nick Walker from DESY summarized at the end of the workshop, the BCD will define the most important layout and technology choices for the accelerator. For each subsystem a baseline technology will be specified, along with possible alternatives that, with further R&D, promise to reduce the cost, minimize the risk or further optimize the performance of the ILC. The engineering details of the baseline design will then be refined and costed. A Reference Design Report will follow at the end of 2006. This will represent a first “blueprint” for the ILC, paving the way for a subsequent effort to achieve a fully engineered technical design.

Detector concepts

The Snowmass workshop was an important opportunity for proponents of the three major detector-concept studies to work together on their detector designs. They are planning to draft detector outline documents before the next Linear Collider Workshop (LCWS06) in Bangalore in March 2006. Detector capabilities are challenged by the precision physics planned at the ILC. The environment is relatively clean, but the detector performance must be two to ten times better than at LEP and the SLAC Linear Collider. Details of tracking, vertexing, calorimetry, software algorithms and other aspects of the detectors were discussed vigorously.

The three major international detector concepts rely on a “particle flow” approach in which the energy of jets is measured by reconstructing individual particles. This technique can be much more precise than the purely calorimetric approach employed at hadron colliders such as the LHC. In a typical jet, about 70% of the energy is carried by hadrons, which are measured with only moderate resolution in the hadron calorimeter, while about 30% is carried by photons, which are measured with much better precision in the electromagnetic calorimeter. Of the total jet energy, typically 60% is carried by charged particles, which can be measured precisely with the tracking system, so the hadron calorimeter is relied on only for the roughly 10% carried by neutral hadrons. For the particle-flow approach it is necessary to separate the charged and neutral particles in the calorimeters, where the showers overlap or are often very close to each other. Separation of the showers is accomplished differently in each of the detector concepts, trading off detector radius, magnetic-field strength and granularity of the calorimeter.
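
As a rough illustration of why this reweighting pays off, the single-component resolutions can be combined in quadrature according to the energy fractions above. The Python sketch below does this with generic assumed stochastic terms (about 15%/√E for the electromagnetic calorimeter and 55%/√E for the hadron calorimeter, with the tracker treated as essentially perfect at these energies); these are textbook-style values rather than numbers from the concept studies, and the “confusion” from imperfectly separated showers, which dominates in practice, is deliberately ignored.

import math

def pflow_sigma(e_jet_gev, f_photon=0.30, f_neutral_had=0.10,
                ecal_stoch=0.15, hcal_stoch=0.55):
    """Naive particle-flow jet energy resolution [GeV], ignoring confusion."""
    e_ph = f_photon * e_jet_gev        # photons measured in the ECAL
    e_nh = f_neutral_had * e_jet_gev   # neutral hadrons measured in the HCAL
    sigma_ph = ecal_stoch * math.sqrt(e_ph)
    sigma_nh = hcal_stoch * math.sqrt(e_nh)
    # the ~60% carried by charged particles comes from the tracker,
    # whose contribution to the resolution is taken as negligible here
    return math.sqrt(sigma_ph**2 + sigma_nh**2)

for e in (45.0, 100.0):
    s = pflow_sigma(e)
    print(f"E_jet = {e:5.1f} GeV: sigma ~ {s:.1f} GeV (~{100 * s / math.sqrt(e):.0f}%/sqrt(E))")

With these assumptions the estimate comes out near 20%/√E, noticeably better than the 30%/√E goal quoted below, a reminder that the real limitation is the confusion term left out of this naive calculation.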

A specialized group worked on the development of the particle-flow algorithms. A conventional shower-reconstruction algorithm tends to combine the showers of different hadrons, but more sophisticated software should be able to separate them based on the substructure of the showers. At present the energy resolution of jets is still limited by confusion in the reconstruction, but significant progress was achieved at Snowmass, with optimism that a resolution of 30%/√E can be reached.

In the Silicon Detector Concept (SiD) the goal is a calorimeter with the best possible granularity, consisting of a tungsten absorber and silicon detectors. To make this detector affordable, a relatively small inner calorimeter radius of 1.3 m is chosen. Shower separation and good momentum resolution are achieved with a 5 T magnetic field and very precise silicon detectors for charged particle tracking. The fast timing of the silicon tracker makes SiD a robust detector with respect to backgrounds.

The Large Detector Concept (LDC), derived from the detector described in the technical design report for TESLA, uses a somewhat larger radius of 1.7 m. It also plans a silicon-tungsten calorimeter, possibly with a somewhat coarser granularity. For charged particle tracking, a large time-projection chamber (TPC) is planned to allow efficient and redundant particle reconstruction. The larger radius is needed to achieve the required momentum resolution.

The GLD concept chooses a larger radius of 2.1 m to take advantage of a separation of showers just by distance. It uses a calorimeter with even coarser segmentation and gaseous tracking similar to the LDC. Progress at Snowmass on the GLD, LDC and SiD concepts was summarized at the end of the workshop by Yasuhiro Sugimoto of KEK, Henri Videau of Ecole Polytechnique and Harry Weerts of Argonne, respectively. A fourth concept was introduced at Snowmass, one not relying on the particle-flow approach.

A common challenge for all detector concepts is the microvertex detector. Physical processes to be studied at the ILC require tagging of bottom and charm quarks with unprecedented efficiency and purity, as well as of tau leptons. This task is complicated by backgrounds from the interacting beams and the long bunch trains during which readout is difficult. The detectors must be extremely precise, and also extremely thin, to avoid deflection of low-momentum particles and deterioration of interesting information. Several technologies are under discussion, all employing a “pixel” structure based on the excellent experience of the SLD vertex detector at SLAC, ranging from charge-coupled devices (CCDs) and complementary metal-oxide semiconductor (CMOS) sensors used in digital cameras, to technologies that improve on the ones already used for the LHC.

In the gaseous tracking groups much discussion centred on methods to increase the number of points in the TPC, compared with LEP experiments. A possibility is to use gas electron-multiplier foils, a technology that was developed at CERN for the LHC detectors. Micromesh gaseous structure detectors, or “micromegas”, are another option, pursued mainly in France. The availability of test beams is crucial for advancing detector designs. TPC tests have been performed at KEK, and a prototype of the electromagnetic calorimeter has been tested at DESY. Further tests are also planned at the test-beam facilities at CERN and Fermilab.

As befits a workshop with both detector and accelerator experts present in force, discussion of the machine-detector interface issues played a big role. The layout of the accelerator influences detectors in many ways – for example, beam parameters determine the backgrounds, the possible crossing angle of the beams affects the layout of the forward detectors, and the position of the final-focus magnets dictates the position of important detector elements. All of these parameters have to be optimized by accelerator and detector experts working in concert. At a well attended plenary “town meeting” one afternoon, several speakers debated “The Case for Two Detectors”. Issues included complementary physics capabilities, cross-checking of results, total project cost and two interaction regions versus one.

Outreach, communication and education

A special evening forum on 23 August addressed “Challenges for Realizing the ILC: Funding, Regionalism and International Collaboration”. Eight distinguished speakers, representing committees and funding agencies with direct responsibility for the ILC, shared their wisdom and perspectives: Jonathan Dorfan (chairman of the International Committee for Future Accelerators, ICFA), Fred Gilman (HEPAP), Pat Looney (formerly of the US Office of Science and Technology Policy), Robin Staffin (US Department of Energy), Michael Turner (US National Science Foundation), Shin-ichi Kurokawa (ACFA chair and incoming ILCSC chair), Roberto Petronzio (Funding Agencies for the Linear Collider) and Albrecht Wagner (incoming ICFA chair). The brief presentations were followed by animated questions and comments from many in the audience.

Educational activities played a prominent role in the Snowmass workshop. Reaching out to particle experimenters and theorists, the accelerator community ran a series of eight lunchtime accelerator tutorials. Of broader interest to the general public were a dark-matter café and quantum-universe exhibit in the Snowmass Mall, a Workshop on Dark Matter and Cosmic-Ray Showers for high-school teachers, and a cosmic-ray-shower study in the Aspen Mall. Two evening public lectures attracted many residents and tourists, with Young-Kee Kim talking on “E = mc²”, and Hitoshi Murayama on “Seeing the Invisibles”. A physics fiesta took place one Sunday in a secondary school in Carbondale, where physicists and teachers from the workshop engaged children in hands-on activities.

Communication was also on the Snowmass agenda, and the communications working group defined a strategic communication plan. During the workshop a new website, www.linearcollider.org, was launched together with ILC NewsLine, a new weekly online newsletter open to all subscribers.

• Proceedings of the Snowmass Workshop will appear on the SLAC Electronic Conference Proceedings Archive, eConf.

How CERN keeps its cool

Cryogenics at CERN has now reached an unprecedented scale. When the Large Hadron Collider (LHC) starts up it will operate the largest 1.8 K helium refrigeration and distribution systems in the world, and the two biggest experiments, ATLAS and CMS, will deploy an impressive range of cryogenic techniques. However, the use of cryogenics at CERN, first in detection techniques and later in applications for accelerators, dates back to some of the earliest experiments.

The need for cryogenics at CERN began in the 1960s with the demand for track-sensitive targets – bubble chambers – that contained up to 35 m3 of liquid hydrogen, deuterium or neon/hydrogen mixtures. These devices required cryogenic systems on an industrial scale to cool down to a temperature of 20 K. For more than a decade they were a major part of CERN’s experimental physics programme. At the same time, cryogenic non-sensitive targets were used in other experiments. Over the past 30 years some 120 such targets have been constructed, ranging in size from a few cubic centimetres to about 30 m3 and usually filled with liquid hydrogen or deuterium, again requiring cooling to 20 K.

Cool targets, cool detectors

At the smallest scale, the demand from the fixed-target programme for polarized targets at very low temperatures led to the development of dilution refrigerators at CERN in the 1970s (figure 1). Going below the range of helium-3 evaporating systems, these require small-scale but highly sophisticated cryogenic techniques.

CCEhow1_12-05

Polarized targets remain very much part of the current physics programme at CERN, where the COMPASS experiment uses solid targets made of ammonia or lithium deuteride. The basic method for obtaining a high polarization of the nuclear spins in the targets is the dynamic nuclear polarization process. This uses microwave irradiation to transfer to the nuclei the almost complete polarization of electrons that occurs at low temperatures (less than 1 K) and in a high magnetic field (2.5 T), generated by a superconducting solenoid.
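
The size of the effect can be checked with a one-line estimate: the thermal-equilibrium electron polarization in a field B at temperature T is P = tanh(μ_B B / k_B T). The Python sketch below evaluates this for the 2.5 T field mentioned above at two illustrative temperatures (the actual operating temperature of the COMPASS target is not taken from this article and is only an assumption).

import math

# Thermal-equilibrium electron polarization, P = tanh(mu_B * B / (k_B * T)),
# the starting point for dynamic nuclear polarization. The temperatures are
# illustrative; the text only says "less than 1 K".
MU_B = 9.274e-24   # Bohr magneton [J/T]
K_B = 1.381e-23    # Boltzmann constant [J/K]

def electron_polarization(b_tesla, t_kelvin):
    return math.tanh(MU_B * b_tesla / (K_B * t_kelvin))

for t in (1.0, 0.3):
    print(f"B = 2.5 T, T = {t} K: P_e ~ {electron_polarization(2.5, t):.4f}")
# roughly 0.93 at 1 K and better than 0.9999 at 0.3 K: the electron spins are
# essentially fully polarized, and the microwaves transfer this order to the nuclei.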

On a larger scale, in detector technology the development in the 1970s of sampling ionization chambers – calorimeters – broadened the demand for low temperatures at CERN. Using liquid argon to measure the energy of ionizing particles, these detectors required cryogenic systems to cool down to 80 K. Several calorimeters, with typical volumes of 2-4 m3, were built in this period, both for fixed-target experiments and use at CERN’s first collider, the Intersecting Storage Rings (ISR) – which was also the world’s first proton collider.

Two decades later, in 1997, the NA48 experiment extended the technique from argon to krypton. With its very high density, liquid krypton not only provides the “read out” through the ionization of the liquid by charged particles, but also acts as a passive particle absorber, so avoiding the use of a material such as lead or uranium. The cooling fluid for this detector is saturated liquid nitrogen, and the heat is extracted by re-condensing the evaporated krypton via an intermediate bath of liquid argon, which in turn feeds the 10 m3 liquid-krypton cryostat by gravity.

Around the same time as the development of the first liquid-argon calorimeters, experiments began to require helium cryogenics, mainly at 4.5 K, for superconducting magnets. These were used to analyse particle momenta in magnetic spectrometers. The largest built for the fixed-target programme at CERN was the superconducting solenoid constructed for the Big European Bubble Chamber (BEBC) in the 1970s (figure 2). This had an internal diameter of 4.7 m and produced a field of 3.5 T. The associated combined He/H2 refrigeration system had a cooling capacity of 6.7 kW at 4.5 K.

CCEhow2_12-05

With the advent of the Large Electron-Positron (LEP) collider at the end of the 1980s, collider experiments took on a much greater role at CERN. Two of the LEP experiments, ALEPH and DELPHI, opted for large superconducting solenoids for momentum analysis – the choice between superconducting and normal (resistive) magnets depending on considerations related to “transparency” (to particles) and/or economy. Each of these solenoids required a helium cooling system of 800 W at 4.5 K.

A novel current application of a superconducting magnet is found in the CAST experiment, located on the surface above the cavern where the DELPHI experiment for LEP was installed. This uses a 10 m, 9.5 T prototype LHC superconducting dipole and also makes use of the DELPHI refrigerator to cool the superfluid helium cryogenic system for the magnet. The aim of the experiment is to detect axions, a possible candidate particle for dark matter that could be emitted by the Sun, through their production of photons in the dipole’s magnetic field.

Now, however, the major effort at CERN is focused on the LHC, with four big experiments: ALICE, ATLAS, CMS and LHCb. Basic design criteria led the two largest experiments, ATLAS and CMS, to construct superconducting spectrometers of unprecedented size, while ALICE and LHCb opted for resistive magnets.

CCEhow3_12-05

ATLAS has several components for its magnetic spectrometry. A “slim” central solenoid (with a length of 5.3 m, a 2.4 m inner diameter and a 2 T field) is surrounded by a toroid consisting of three separate parts – a barrel and two end-caps. The overall length of the toroid is 26 m, with an external diameter of 20 m (figure 3). It is powered up to 20 kA and has a stored energy of 1.7 GJ. CMS, by contrast, is built around a single large solenoid, 13 m long, with an inner diameter of 5.9 m and a uniform field of 4 T (figure 4). When powered up to 20 kA it has a stored energy of 2.6 GJ.

CCEhow4_12-05

ATLAS also has a cryogenic electromagnetic calorimeter, with the largest liquid-argon ionization detector in the world to measure the energy of electrons and photons. This consists of a cylindrical structure made of a barrel and two end-caps, with a length of 13 m and an external diameter of 9 m. Altogether, the cryostats for the three sections contain 83 m3 of liquid argon and operate at 87 K.

Both ATLAS and CMS have refrigerating plants that are independent from the system required to cool the LHC to 1.8 K (see below). ATLAS will use two helium refrigerators and one nitrogen refrigerator, while CMS will have a single helium refrigerator. These will provide cooling for current leads and thermal shields, as well as for the refrigeration at 4.5 K for the spectrometer magnets, and in the case of ATLAS also at 84 K for the electromagnetic calorimeter.

Cool accelerators

The use of helium cryogenics was extended to accelerator technology at CERN during the 1970s, when superconducting radiofrequency beam separators were constructed for the Super Proton Synchrotron, and superconducting high-luminosity insertion quadrupoles were built for use at the ISR. These required cooling of 300 W at 1.8 K and 1.2 kW at 4.5 K, respectively. The 1990s saw the larger scale use of cryogenics for accelerators with the upgrade of LEP to higher energies.

LEP was built initially with conventional copper accelerating cavities, but with the successful development of 350 MHz superconducting cavities in the 1980s, its energy could be more than doubled. As many as 288 superconducting cavities were eventually installed, increasing the energy from 45 to 104 GeV per beam (figure 5). This involved the installation of the first very large capacity helium refrigerating plant at CERN, with four units each with a capacity of 12 kW at 4.5 K, later upgraded to 18 kW, supplying helium to eight 250 m long strings of superconducting cavities, and a total helium inventory of 9.6 tonnes.

CCEhow5_12-05

LEP was closed down at the end of 2000 to make way for the construction of the LHC in the same tunnel. This liberated most of the existing cryogenic infrastructure from LEP for further use and upgrading for the LHC, which will require the largest 1.8 K refrigeration and distribution system in the world to cool some 1800 superconducting magnet systems distributed around the 27 km long tunnel. A total cold mass of 37,500 tonnes has to be cooled to 1.9 K, requiring about 96 tonnes of helium, two-thirds of which is used for filling the magnets.

Although normal liquid helium at 4.5 K would be able to cool the magnets so that they become superconducting, the LHC will use superfluid helium at the lower temperature of 1.8 K to improve the performance of the magnets. The magnets are cooled by making use of the very efficient heat-transfer properties of superfluid helium, and kilowatts of refrigeration power are transported over more than 3 km with a temperature difference of less than 0.1 K.

The LHC is divided into eight sectors, and each will be cooled by a two-stage cryoplant consisting of a 4.5 K refrigerator coupled to a 1.8 K refrigeration unit. The transport of the refrigeration capacity along each sector is made by a cryogenic distribution line, which feeds the machine every 107 m. A cryogenic interconnection box will link the 4.5 K and 1.8 K refrigerators and the distribution line. Together the refrigerators will provide a total cooling power of 144 kW at 4.5 K and 20 kW at 1.8 K. The 4.5 K refrigerators are equipped with a 600 kW liquid-nitrogen precooler, which will be used to cool down the corresponding LHC sector to 80 K in less than 10 days.
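
Dividing the quoted totals by the eight sectors gives the scale of each cryoplant, and an ideal (Carnot) estimate shows why refrigeration at 1.8 K is so power-hungry. The Python sketch below uses only the figures quoted above plus an assumed 300 K ambient temperature; real refrigerators reach only a fraction of the Carnot efficiency, so the actual compressor power is several times the ideal figure.

# Per-sector cooling capacity and the Carnot lower bound on the electrical
# power needed to provide it. The 300 K ambient temperature is an assumption.
N_SECTORS = 8
T_AMBIENT = 300.0  # K

def carnot_work_per_watt(t_cold, t_hot=T_AMBIENT):
    """Minimum work needed to extract 1 W of heat at t_cold [W per W]."""
    return (t_hot - t_cold) / t_cold

for t_cold, total_kw in ((4.5, 144.0), (1.8, 20.0)):
    per_sector_kw = total_kw / N_SECTORS
    ideal_mw = total_kw * carnot_work_per_watt(t_cold) / 1000.0
    print(f"{t_cold:>4} K: {per_sector_kw:.1f} kW per sector, "
          f">= {ideal_mw:.1f} MW ideal compressor power in total")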

Four new 4.5 K refrigerators built by two industrial companies have been in place since the end of 2003, and four 4.5 K refrigerators recovered from LEP are being upgraded for use at the LHC. In addition, eight 1.8 K refrigerator units procured from industry provide the final stage of cooling (figures 6 and 7). Four 1.8 K units built by one company have already been installed; the other four units, made by the other company, are currently being installed and will be tested in 2006.

CCEhow6_12-05

For the next 15 years or so, CERN will need to continue to provide strong support in cryogenics for its unique accelerator facilities, including the final consolidation and operation of the LHC. Further long-term perspectives will depend a great deal on the next generation of accelerators. Detectors, on the other hand, have proved quantitatively less demanding for cryogenics in comparison with the accelerators; however, over the years their cryogenic needs have generated a variety of different applications, with a temperature range from 130 K (liquid-krypton calorimeters) down to a few tens of millikelvin for polarized targets. Innovation in detector technology has often in the past led to the application of cryogenics – a trend that will no doubt continue into the future.

CCEhow7_12-05

• This article is based on: G Passardi and L Tavian 2002 Cryogenics at CERN Proceedings of the 19th International Cryogenic Engineering Conference (ICEC 19); L Tavian 2005 Latest developments in cryogenics at CERN Proceedings of the 20th National Symposium on Cryogenics, Mumbai (TNSC 20).
