Looking beyond the LHC

The LHC at CERN is about to start the direct exploration of physics at the tera-electron-volt energy scale. Early ground-breaking discoveries may be possible, with profound implications for our understanding of the fundamental forces and constituents of the universe, and for the future of the field of particle physics as a whole. These first results at the LHC will set the agenda for further possible colliders, which will be needed to study physics at the tera-electron-volt scale in closer detail.

Once the first inverse femtobarns of experimental data from the LHC have been analysed, the worldwide particle-physics community will need to converge on a strategy for shaping the field over the years to come. Given that the size and complexity of possible accelerator experiments will require a long construction time, the decision on when and how to go ahead with a future major facility needs to be taken in a timely fashion. Several projects for future colliders are currently being developed, and soon it may be necessary to set priorities among these options, informed by whatever the LHC reveals at the tera-electron-volt scale.

The CERN Theory Institute “From the LHC to a Future Collider” reviewed the physics goals, capabilities and possible results coming from the LHC and studied how these relate to possible future collider programmes. Participants discussed recent physics developments and the near-term capabilities of the Tevatron, the LHC and other experiments, as well as the most effective ways to prepare for providing scientific input to plans for the future direction of the field. To achieve these goals, the programme of the institute centred on a number of questions. What have we learnt from data collected up to this point? What may we expect to know about the emerging new physics during the initial phase of LHC operation? What do we need to know from the LHC to plan future accelerators? What scientific strategies will be needed to advance from the planned LHC running to a future collider facility? To answer the last two questions, the participants looked at what to expect from the LHC with a specific early luminosity, namely 10 fb⁻¹, for different scenarios for physics at the tera-electron-volt scale and investigated which strategy for future colliders would be appropriate in each of these scenarios. Figure 1 looks further ahead and indicates a possible luminosity profile for the LHC and its sensitivity to new physics scenarios to come.

Present and future

The institute’s efforts were organized into four broad working groups on signatures that might appear in the early LHC data. Their key considerations were the scientific benefits of various upgrades of the LHC compared with the feasibility and timing of other possible future colliders. Hence, the programme also included a series of presentations on present and future projects, with one on each possible accelerator followed by a talk on its strong physics points. These included the Tevatron at Fermilab, the (s)LHC, the International Linear Collider (ILC), the LHeC, the Compact Linear Collider (CLIC) concept and a muon collider.

Working Group 1, which was charged with studying scenarios for the production of a Higgs boson, assessed the implications of the detection of a state with properties that are compatible with a Higgs boson, whether Standard Model (SM)-like or not. If nature has chosen an SM-like Higgs, then ATLAS and CMS are well placed to discover it with 10 fb⁻¹ (assuming √s = 14 TeV, otherwise more luminosity may be needed) and measure its mass. However, measuring other characteristics (such as decay width, spin, CP properties, branching ratios, couplings) with an accuracy better than 20–30% would require another facility.

The ILC would provide e+e− collisions with an energy of √s = 500 GeV (with an upgrade path to √s = 1 TeV). It would allow precise measurements of all of the quantum numbers and many couplings of the Higgs boson, in addition to precise determinations of its mass and width – thereby giving an almost complete profile of the particle. CLIC would allow e+e− collisions at higher energies, with √s = 1–3 TeV, and if the Higgs boson is relatively light it could give access to more of the rare decay modes. CLIC could also measure the Higgs self-couplings over a large range of the Higgs mass and study directly any resonance up to 2.5 TeV in mass in WW scattering.

Working Group 2 considered scenarios in which the first 10 fb⁻¹ of LHC data fail to reveal a state with properties that are compatible with a Higgs boson. It reviewed complementary physics scenarios such as gauge-boson self-couplings, longitudinal vector-boson scattering, exotic Higgs scenarios and scenarios with invisible Higgs decays. Two generic scenarios need to be considered in this context: those in which a Higgs exists but is difficult to see and those in which no Higgs exists at all. With higher LHC luminosity – for instance with the sLHC, an upgrade that gives 10 times more luminosity – it should be possible in many scenarios to determine whether or not a Higgs boson exists by improving the sensitivity to the production and decays of Higgs-like particles or vector resonances, for example, or by measuring WW scattering. The ILC would enable precision measurements of even the most difficult-to-see Higgs bosons, as would CLIC. The latter would also be good for producing heavy resonances.

Working Group 3 reviewed missing-energy signatures at the LHC, using supersymmetry as a representative model. The signals studied included events with leptons and jets, with a view to measuring the masses, spins and quantum numbers of any new particles produced. Studies of the LHC capabilities at √s = 14 TeV show that with 1 fb⁻¹ of LHC luminosity, signals of missing energy with one or more additional leptons would give sensitivity to a large range of supersymmetric mass scales. In all of the missing-energy scenarios studied, early LHC data would provide important input for the technical and theoretical requirements of future linear-collider physics. These include detector capabilities (where, for example, resolving mass degeneracies could require exceptionally good energy resolution for jets), running scenarios, required threshold scans and upgrade options – for a γγ collider, for instance, and/or an e+e− collider operating in “GigaZ” mode at the Z mass. The link with dark matter was also explored in this group.

Working Group 4 studied examples of phenomena that do not involve a missing-energy signature, such as the production of a new Z’ boson, other leptonic resonances, the impact of new physics on observables in the flavour sector, gravity signatures at the tera-electron-volt scale and other exotic signatures of new physics. The sLHC luminosity upgrade has the capability to provide additional crucial information on new physics discovered during early LHC running, as well as to increase the search sensitivity. On the other hand, a future linear collider – with its clean environment, known initial state and polarized beams – is unparalleled in terms of its abilities to conduct ultraprecise measurements of new and SM phenomena, provided that the new-physics scale is within reach of the machine. For example, in the case of a Z’, high-precision measurements at a future linear collider would provide a mass reach that is more than 10 times higher than the centre-of-mass energy of the linear collider itself. Attention was also given to the possibility of colliding a high-energy electron beam with the LHC proton beam to provide an electron–proton collider, the LHeC. Certain phenomena such as the properties of leptoquarks could be studied particularly well with such a collider; for other scenarios, such as new heavy gauge-boson scattering, the LHeC can contribute crucial information on the couplings, which are not accessible with the LHC alone.

The physics capabilities of the sLHC, the ILC and CLIC are relatively well understood but will need refinement in the light of initial LHC running. In cases where the exploration of new physics might be challenging at the early LHC, synergy with a linear collider could be beneficial. In particular, a staged approach to linear-collider energies could prove promising.

The purpose of this CERN Theory Institute was to provide the particle-physics community with some tools for setting priorities among the future options at the appropriate time. Novel results from the early LHC data will open exciting prospects for particle physics, to be continued by a new major facility. In order to seize this opportunity, the particle-physics community will need to unite behind convincing and scientifically solid motivations for such a facility. The institute provided a framework for discussions now, before the actual LHC results start to come in, on how this could be achieved. In this context, the workshop report was also made available to the European Strategy Session of the CERN Council in September 2009. We now look forward to the first multi-tera-electron-volt collisions in the LHC, as well as to the harvest of new physics that they should provide.

• For more about the institute, see http://indico.cern.ch/conferenceDisplay.py?confId=40437. The institute summary is available at http://arxiv.org/abs/0909.3240.

The Nobel path to a unified electroweak theory

Electromagnetism and the weak force might appear to have little to do with each other. Electromagnetism is our everyday world – it holds atoms together and produces light, while the weak force was for a long time known only for the relatively obscure phenomenon of beta-decay radioactivity.

The successful unification of these two apparently highly dissimilar forces is a significant milestone in the constant quest to describe as much as possible of the world around us from a minimal set of initial ideas.

“At first sight there may be little or no similarity between electromagnetic effects and the phenomena associated with weak interactions,” wrote Sheldon Glashow in 1960. “Yet remarkable parallels emerge…”

Both kinds of interactions affect leptons and hadrons; both appear to be “vector” interactions brought about by the exchange of particles carrying unit spin and negative parity; both have their own universal coupling constant, which governs the strength of the interactions.

These vital clues led Glashow to propose an ambitious theory that attempted to unify the two forces. However, there was one big difficulty, which Glashow admitted had to be put to one side. While electromagnetic effects were due to the exchange of massless photons (electromagnetic radiation), the carrier of weak interactions had to be fairly heavy for everything to work out right. The initial version of the theory could find no neat way of giving the weak carrier enough mass.

Then came the development of theories using “spontaneous symmetry breaking”, where degrees of freedom are removed. An example of such symmetry breaking is the imposition of traffic rules (drive on the right, overtake on the left) on a road network where in principle anyone could go anywhere. Another example is the formation of crystals in a freezing liquid.

These symmetry-breaking theories at first introduced massless particles which were no use to anybody, but soon the so-called “Higgs mechanism” was discovered, which gives the carrier particles some mass. This was the vital development that enabled Steven Weinberg and Abdus Salam, working independently, to formulate their unified “electroweak” theory. One problem was that nobody knew how to handle calculations in a consistent way…

…It was the work of Gerardus ’t Hooft and Martinus Veltman that put this unification on the map, by showing that it was a viable theory capable of making meaningful predictions.

Field theories have a habit of throwing up infinities that at first sight make sensible calculations difficult. This had been a problem with the early forms of quantum electrodynamics and was the despair of a whole generation of physicists. However, its reformulation by Richard Feynman, Julian Schwinger and Sin-Itiro Tomonaga (Nobel prizewinners in 1965) showed how these infinities could be wiped clean by redefining quantities like electric charge.

Each infinity had a clear origin, a specific Feynman diagram, the skeletal legs of which denote the particles involved. However, the new form of quantum electrodynamics showed that the infinities can be made to disappear by including other Feynman diagrams, so that two infinities cancel each other out. This trick, difficult to accept at first, works very well, and renormalization then became a way of life in field theory. Quantum electrodynamics became a powerful calculator.

For such a field theory to be viable, it has to be “renormalizable”. The synthesis of weak interactions and electromagnetism, developed by Glashow, Weinberg and Salam – and incorporating the now famous “Higgs” symmetry-breaking mechanism – at first sight did not appear to be renormalizable. With no assurance that meaningful calculations were possible, physicists attached little importance to the development. It had not yet warranted its “electroweak” unification label.

The model was an example of the then unusual “non-Abelian” theory, in which the end result of two field operations depends on the order in which they are applied. Until then, field theories had always been Abelian, where this order does not matter.

In the summer of 1970, ’t Hooft, at the time a student of Veltman in Utrecht, went to a physics meeting on the island of Corsica, where specialists were discussing the latest developments in renormalization theory. ’t Hooft asked them how these ideas should be applied to the new non-Abelian theories. The answer was: “If you are a student of Veltman, ask him!” The specialists knew that Veltman understood renormalization better than most other mortals, and had even developed a special computer program – Schoonschip – to evaluate all of the necessary complex field-theory contributions.

At first, ’t Hooft’s ambition was to develop a renormalized version of non-Abelian gauge theory that would work for the strong interactions that hold subnuclear particles together in the nucleus. However, Veltman believed that the weak interaction, which makes subnuclear particles decay, was a more fertile approach. The result is physics history. The unified picture based on the Higgs mechanism is renormalizable. Physicists sat up and took notice.

One immediate prediction of the newly viable theory was the “neutral current”. Normally, the weak interactions involve a shuffling of electric charge, as in nuclear beta decay, where a neutron decays into a proton. With the neutral current, the weak force could also act without switching electric charges. Such a mechanism has to exist to assure the renormalizability of the new theory. In 1973 the neutral current was discovered in the Gargamelle bubble chamber at CERN and the theory took another step forward.

The next milestone on the electroweak route was the discovery at CERN’s proton–antiproton collider of the W and Z, the carriers of the charged and neutral components of the weak force respectively. For this, Carlo Rubbia and Simon van der Meer were awarded the 1984 Nobel Prize for Physics…

…At CERN, the story began in 1968 when Simon van der Meer, inventor of the “magnetic horn” used in producing neutrino beams, had another brainwave. It was not until four years later that the idea (which van der Meer himself described as “far-fetched”) was demonstrated at the Intersecting Storage Rings. Tests continued at the ISR, but the idea – “stochastic beam cooling” – remained a curiosity of machine physics.

In the United States, Carlo Rubbia, together with David Cline of Wisconsin and Peter McIntyre, then at Harvard, put forward a bold idea to collide beams of matter and antimatter in existing large machines. At first, the proposal found disfavour, and it was only when Rubbia brought the idea to CERN that he found sympathetic ears.

Stochastic cooling was the key, and experiments soon showed that antimatter beams could be made sufficiently intense for the scheme to work. With unprecedented boldness, CERN, led at the time by Leon Van Hove as research director-general and the late Sir John Adams as executive director-general, gave the green light.

At breathtaking speed, the ambitious project became a magnificently executed scheme for colliding beams of protons and antiprotons in the Super Proton Synchrotron, with the collisions monitored by sophisticated large detectors. The saga was chronicled in the special November 1983 issue of the CERN Courier, with articles describing the development of the electroweak theory, the accelerator physics that made the project possible and the big experiments that made the discoveries.

• Extracts from CERN Courier December 1979 pp395–397, December 1984 pp419–421 and November 1999 p5.

Picturing the proton by elastic scattering

High-energy proton–proton (pp) and antiproton–proton (p̄p) elastic-scattering measurements have been at the forefront of accelerator research since the early 1970s, when pp elastic scattering was measured at the Intersecting Storage Rings (ISR) at CERN – the world’s first proton–proton collider – over a wide range of energy and momentum transfer. This was followed by measurements of pp elastic scattering in a fixed-target experiment at Fermilab, by p̄p elastic-scattering measurements at the Super Proton Synchrotron (SPS) at CERN operating as a p̄p collider and, finally, in the 1990s by p̄p elastic-scattering measurements at Fermilab’s Tevatron. Table 1 chronicles this sustained and dedicated experimental effort by physicists, which extended over a quarter of a century as the centre-of-mass energy increased from the giga-electron-volt region to the tera-electron-volt region.

With the first collisions at CERN’s LHC on the horizon, pp elastic scattering will come under the spotlight at the experiment known as TOTEM – for TOTal cross-section, Elastic and diffractive scattering Measurement. The TOTEM collaboration has detailed plans to measure pp elastic scattering at 14 TeV in the centre-of-mass – that is, seven times the centre-of-mass energy at the Tevatron – over a range of momentum transfer, |t|, of around 0.003–10.0 GeV². By contrast, the ATLAS collaboration at the LHC plans to measure pp elastic scattering at 14 TeV in the small momentum-transfer range, |t| around 0.0006–0.1 GeV², where the pp Coulomb amplitude and the strong-interaction amplitude interfere.

A phenomenological investigation of high-energy pp and p̄p elastic scattering commenced in the late 1970s with the goal of quantitatively describing the measured elastic differential cross-sections as the centre-of-mass energy increased and as one proton probed the other at smaller and smaller distances with increasing momentum transfer. This three-decade-long investigation has led to both a physical picture of the proton and an effective field-theory model that underlies the picture (Islam et al. 2009 and 2006).

Three-layer proton

The proton appears to have three regions, as figure 1 indicates: an outer region consisting of a quark–antiquark (qq̄) condensed ground state; an inner shell of baryonic charge – where the baryonic charge is geometrical or topological in nature (similar to the “Skyrmion model” of the nucleon); and a core region of size 0.2 fm, where the valence quarks are confined. The part of the proton structure comprising a shell of baryonic charge with three valence quarks in a small core has been known as a “chiral bag” model of the nucleon in low-energy studies (Hosaka and Toki 2001). What we are finding from high-energy elastic scattering, then, is that the proton is a “condensate-enclosed chiral bag”.

The proton structure shown in figure 1 leads to three main processes in elastic scattering, illustrated in figure 2. First, in the small |t| region, i.e. in the near-forward direction, the outer cloud of qq̄ condensate of one proton interacts with that of the other, giving rise to diffraction scattering. This process underlies the observed increase of the total cross-section with energy and the equality of pp and p̄p total cross-sections at high energy. It also leads to diffraction minima, as in optics, which are visible in figure 5. Second, in the intermediate momentum-transfer region, with |t| around 1–4 GeV², the topological baryonic charge of one proton probes that of the other via ω vector-meson exchange. This process is analogous to one electric charge probing another via photon exchange. The spin-1 ω acts like a photon because of its coupling to the topological baryonic charge. Third is the process in the large |t| region – where |t| is around 4 GeV² or larger. Here one proton probes the other at transverse distances around or less than 1/q, where q = √|t|, i.e. at transverse distances of the order of 0.1 fm or less. Elastic scattering in this region originates from the hard collision of a valence quark from one proton with a valence quark from the other proton – a process that can be better visualized in momentum space (figure 3).
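
As a quick numerical check of these distance scales (a back-of-the-envelope estimate only, not part of the analysis in the references; it assumes nothing beyond the standard conversion constant ħc ≈ 0.197 GeV fm), the transverse distance probed at a given momentum transfer can be computed as follows:

import math

HBARC_GEV_FM = 0.1973  # conversion constant hbar*c, in GeV*fm

def probed_distance_fm(t_abs):
    """Rough transverse distance probed, b ~ 1/q with q = sqrt(|t|) in GeV."""
    q = math.sqrt(t_abs)      # momentum transfer q in GeV
    return HBARC_GEV_FM / q   # distance in fm

for t_abs in (1.0, 4.0, 10.0):   # |t| values in GeV^2
    print(f"|t| = {t_abs:4.1f} GeV^2  ->  roughly {probed_distance_fm(t_abs):.2f} fm")

For |t| = 4 GeV² this gives about 0.1 fm, in line with the hard valence-quark regime described above.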

We have considered two alternative quantum-chromodynamical processes for the qq-scattering mechanism (represented by the blob in figure 3). One is the exchange of gluons in the form of ladders – called the “BFKL ladder” – as figure 4a shows. The other process we have considered is where the “dense low-x gluon cloud” of one quark interacts strongly with that of the other, as in figure 4b. The low-x gluons accompanying a quark are gluons that carry tiny fractions of the energy and longitudinal momentum of the quark. The finding of the high-density, low-x gluon clouds surrounding quarks is one of the major discoveries at the HERA collider at DESY.

The solid curve in figure 5 shows our predicted elastic differential cross-section at the LHC at a centre-of-mass energy of 14 TeV and in the momentum-transfer region |t| = 0–10 GeV², arising from the combination of the three processes – diffraction, ω-exchange and valence quark–quark scattering (from low-x gluon clouds). The figure also indicates separately the differential cross-sections for each of the three processes. It shows that diffraction dominates in the small |t| region (0 < |t| < 1 GeV²), ω-exchange dominates in the intermediate |t| region (1 < |t| < 4 GeV²) and valence qq scattering dominates in the large |t| region (|t| > 5 GeV²).

In figure 6 we compare our predicted differential cross-section at the LHC with the predictions of several prominent dynamical models (Islam et al. 2009). A distinctive feature of our prediction is that the differential cross-section falls off smoothly beyond the bump at |t| around 1 GeV². By contrast, the other models predict visible oscillations. Furthermore, these models lead to much smaller differential cross-sections than ours in the large |t| region, i.e. where |t| is greater than or around 5 GeV².

If the planned measurement of the elastic differential cross-section by the TOTEM collaboration in the momentum-transfer range of |t| around 0–10 GeV² shows quantitative agreement with our prediction, then it will support the underlying picture of the proton as depicted in figure 1. The consequent discovery of the structure of the proton at the LHC at the beginning of the 21st century would be analogous to the discovery of the structure of the atom from “high-energy” α-particle scattering by gold atoms at the beginning of the 20th century.

• The authors wish to thank the members of the TOTEM collaboration for discussions and comments.

Inside Story: Recounting fond memories of when DESY first began

Do you remember? If you are old enough you certainly will. I refer to the sixth decade of last century, when the research centres CERN and DESY were created. About that time I tried to explain to my sister Jutta (an artist who always considered logarithms as some species of worms) our understanding of the structure of matter. I started with the usual story about all visible matter being made of molecules which in turn are composed of atoms. And all atoms are made of very small particles called protons, neutrons and electrons. I even tried to explain some details on nuclear, electromagnetic and gravitational forces; three basic particles and three forces, an elegant and simple scheme. I left out solar energy and radioactivity.

But Jutta was not happy. In the early 1950s she came with us to the Andes mountains to expose nuclear emulsions in which we searched for cosmic mesons and hyperons. Jutta was also an attentive observer during the many evenings that I spent with Gianni Puppi in the ancient building of the Physics Institute of Bologna, scanning bubble-chamber pictures provided from the US by Jack Steinberger. We were looking for the so-called Λ and θ particles, trying to learn about their spin and some difficult-to-understand parity violation. So Jutta knew that there were many more particles and effects in existence, which I could not explain to her.

And, at a certain point, we particle physicists did not like the situation either. Our initial excitement with the discovery of exotic particles did not last long. We were not pleased with the several hundred particles and excited states (most of them unstable) that had been found but which did not fit into our traditional scheme of the structure of stable matter. There was no good reason for them to exist. It seemed at a certain moment quite useless to continue adding more and more particles to this “particle zoo” as it was condescendingly called. We were just making a kind of “particle spectroscopy” with no visible goal in mind.

In addition, at that time we had already been forced to abandon our beloved organization in small university groups, each one proud of their individual discoveries. Now, it was often the case that several of these groups had to join forces to reach significant results. One extreme example was a collaboration of about a hundred physicists on a single project to expose an enormous emulsion stack in the higher atmosphere and subsequently to undertake its inspection. Results were published with more than a hundred authors on a single paper – a kind of horror vision for individualists. It was the beginning of the international globalization of research, initiated (as with so many other developments) by particle physicists.

But none of this helped us understand the particle zoo. There was general agreement that new ways should be found, perhaps by the systematic study of reactions at higher energies. It was in this period that the European research centre CERN was created in 1954. Other local accelerator projects were started in a number of countries too, some of which were designed as a complement to the planned proton accelerator at CERN. A group of German physicists were dreaming about an electron machine, and this led to the foundation of DESY in Hamburg exactly 50 years ago.

However, life for electron-accelerator enthusiasts was not easy. While most particle physicists agreed about building proton machines, several did not accept the idea of working with electrons. I remember serious claims that everything related to electrons and electric charges could be accurately calculated within the framework of quantum electrodynamics. Consequently nothing new could be learnt from experimenting with electrons. Fortunately this was wrong!

The results of the following 50 years of global research are well known. Single papers are now often signed by more than a thousand authors and our understanding of the inner structure of matter has improved by a factor of a thousand. The existence of most of the particles of our zoo can be understood and their inner structure has been explained (including our protons and neutrons). Quarks and leptons as basic particles and several fundamental forces with their exchange quanta form an elegant scheme called the “Standard Model of particle physics”. There are still some problems to solve, but I did try again to explain the basics to my sister Jutta. She illustrated her feelings after our last discussion.

US industry-built ILC cavity reaches 41 MV/m

For the first time, a US-industry-made superconducting radiofrequency (SRF) cavity has reached and exceeded the accelerating gradient required for the envisioned International Linear Collider (ILC). The cavity achieved 41 MV/m at the ILC’s superconducting operating temperature of 2 K, thus exceeding the ILC Global Design Effort (GDE) specification of 35 MV/m. The ILC would require about 16,000 such cavities.

Advanced Energy Systems Inc (AES) in Medford, New York, built the hollow niobium accelerating structure. A team at Jefferson Lab processed it by electropolishing and then tested it as part of R&D funded by the US Department of Energy. In addition, they tested seven more AES cavities, one of which reached 34 MV/m, close to the specification. Several other North American companies are also attempting to manufacture ILC test cavities.

Jefferson Lab’s Rongli Geng, leader of the GDE Cavity Group, characterizes the 41 MV/m result as “remarkable”. He believes that it may be attributable to improvements in cavity treatment specific to AES cavities, which are aimed at optimizing the properties of the materials. Such optimization provides opportunities to attack the performance limitations of SRF cavities and improve the production yield in a realm other than processing and fabrication.

One such opportunity may have appeared during Jefferson Lab’s testing of AES cavities in conjunction with the heat treatment that removes hydrogen from cavity surfaces. Both the successful cavity and the one that was nearly successful underwent quicker, hotter heat treatment than had previously been standard: 2 hours at 800 °C instead of 10 hours at 600 °C. Because the AES-built cavities appeared to be stiffer, the revised treatment temperature primarily targeted the optimization of mechanical properties. However, because other improvements in material properties might also have occurred, the team at Jefferson Lab is conducting further investigations.

New temperature-mapping and optical-inspection tools adopted about a year ago under the guidance of ILC GDE project managers may also help to overcome the performance limitations of SRF cavities and improve the mass-production yield. “T-mapping” of cavity outer surfaces involves strategically placing thermal sensors to provide vital information about excessive heating in defective regions up to the point of local breakdown of superconductivity that causes a cavity to quench. This diagnostic procedure works in conjunction with the optical inspection of the surfaces within a cavity, which involves a mirror and a long-distance (around 1 m) microscope that together afford detailed mirror-reflected views of defective regions magnified at scales of about 0.1–1 mm.
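
As an illustration of the idea behind T-mapping (a hypothetical sketch only: the sensor layout, threshold and data format below are invented for this example and are not the actual Jefferson Lab or ILC GDE tools), one can picture scanning a grid of temperature readings for cells that heat up anomalously and flagging them for optical inspection:

# Hypothetical sketch: flag thermometers reading well above the helium bath.
BATH_TEMPERATURE_K = 2.0     # nominal operating temperature
HOT_SPOT_THRESHOLD_K = 0.1   # illustrative threshold above the bath (invented)

def find_hot_spots(temperature_map):
    """temperature_map: dict mapping (azimuth_deg, z_mm) -> temperature in K.
    Returns sensor positions whose readings exceed the threshold, i.e. the
    candidate defect regions to examine with the optical system."""
    return [pos for pos, temp in temperature_map.items()
            if temp - BATH_TEMPERATURE_K > HOT_SPOT_THRESHOLD_K]

# Made-up readings: one sensor runs hot and would be flagged for inspection.
readings = {(0, 10): 2.02, (90, 10): 2.01, (180, 10): 2.35, (270, 10): 2.03}
print(find_hot_spots(readings))   # -> [(180, 10)]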

First ions for ALICE and rings for LHCb

Injection tests on 25–29 September delivered heavy ions for the first time to the threshold of the LHC. Particles were extracted from the Super Proton Synchrotron (SPS) and transported along the TI2 and TI8 transfer lines towards the LHC, before being dumped on beam stoppers. These crucial tests not only showed that the whole injection chain performs well but they were also interesting for the ALICE collaboration because they included bunches of lead ions. By using a dedicated “beam injection” trigger, the ALICE detector registered bursts of particles emerging from the beam stopper at the end of the TI2 transfer line, some 300 m upstream of the detector, shedding light on the timing of the trigger.

While the LHC has undergone repairs and consolidation work since the incident that brought commissioning to an abrupt end in September 2008, the ALICE collaboration has been busy with important installation work, which has included the first modules of the electromagnetic calorimeter. This allowed the start in August of a full detector run with cosmic rays, which was scheduled to last until the end of October. In addition to trigger information from the silicon pixel and ACORDE detectors (the latter built specially for triggering on cosmic muons), ALICE is now making extensive use of the trigger provided by its time-of-flight array (TOF). The high granularity and low noise (0.1 Hz/cm²) of the multigap resistive-plate chambers of the TOF, combined with the large coverage (around 150 m²), offer a range of trigger combinations.

More than 100 million cosmic events had been accumulated in the central detectors by early October, both with and without magnetic field. Even the forward muon system – oriented parallel to the LHC beam – has collected several tens of thousands of the very rare quasi-horizontal cosmic rays, which traverse the full length of the spectrometer at a rate of one particle every couple of minutes.

Near-horizontal cosmic rays are also valuable for checking out the LHCb detector, which is aligned along the LHC beam line, and they recently allowed observation of the first rings from one of the two ring-imaging Cherenkov detectors, RICH1. There are two types of radiating material in RICH1: aerogel for the lowest-momentum particles (around a few GeV/c) and perfluoro-n-butane (C4F10) to cover momenta from 10 GeV/c to around 65 GeV/c. This is the first time that the RICH detector has seen a particle as it will once the LHC restarts.

The shutdown of the LHC has also provided the opportunity for the LHCb collaboration to finish the detector completely, with the installation of the fifth and final plane of muon chambers. Other improvements include modifications to reduce noise in the electromagnetic calorimeter to a negligible level and network upgrades. During a recent commissioning week, in preparation for the LHC re-start, the LHCb team managed to read out the full detector at a rate of almost 1 MHz. Data packets were sent at 100 kHz through to the LHCb computer farm and each sub-detector was tested to ensure that the system could handle data at this rate.

Nobel for optical fibres and CCDs

Charles Kao, who worked at Standard Telecommunication Laboratories, Harlow, UK, and was vice-chancellor of the Chinese University of Hong Kong, receives half of the 2009 Nobel Prize in Physics for “groundbreaking achievements concerning the transmission of light in fibres for optical communication”. Kao’s studies indicated in 1966 that low-loss fibres should be possible using high-purity glass, which he proposed could form waveguides with high information capacity.

Willard Boyle and George Smith, who worked at Bell Laboratories, Murray Hill, New Jersey, share the other half of the prize “for the invention of an imaging semiconductor circuit – the CCD sensor”. They sketched out the structure of the CCD in 1969, their aim being better electronic memory – but they went on to revolutionize photography.

ATLAS and CMS collect cosmic-event data…

The ATLAS collaboration has made the most of the long shutdown of the LHC by undertaking a variety of maintenance, consolidation and repair work on the detector, as well as major test runs with cosmic rays. The crucial repairs included work on the cooling system for the inner detector, where vibrations of the compressor caused structural problems. The extended shutdown also allowed some schedules to be brought forward. For instance, the very forward muon chambers have been partially installed, even though this was planned for the 2009/10 shutdown. The collaboration has also undertaken several upgrades to prepare for higher luminosity, such as the replacement of optical fibres on the muon systems in preparation for higher radiation levels.

In parallel, the analysis of cosmic data collected last year has allowed the collaboration to perform detailed alignment and calibration studies, achieving a level of precision far beyond expectations for this stage of the experiment. This work is set to continue, in particular from 12 October, when the ATLAS Control Room is to be staffed round the clock. The experiment will collect cosmic data continuously until first beam appears in the LHC. During this time, the teams will study the alignment, calibration, timing and performance of the detector.

CMS has also been making the most of testing with cosmic rays. During a five-week data-taking exercise starting on 22 July, the experiment recorded more than 300 million cosmic events with the magnetic field on. This large data-set is being used to improve further the alignment, calibration and performance of the various sub-detectors in the run-up to proton–proton collisions.

As with the other experiments, the shutdown period provided the opportunity for consolidation work on the detector. One of the most important items in CMS was the complete refurbishment of the cooling system for the tracker. The shutdown also gave the collaboration a chance to install the final sub-detector, the pre-shower, which consists of a lead–silicon “sandwich” with silicon-strip sensors only 2 mm wide. The pre-shower, which sits in front of the endcap calorimeters, can pinpoint the position of photons more accurately than the larger crystal detectors in the endcaps. This will allow a distinction to be made between two low-energy photons and one high-energy photon – crucial for trying to spot certain kinds of Higgs-boson decay.

…while the LHC gets colder and colder

The cool-down and commissioning of the LHC continues to progress well. Six of the eight sectors were at a nominal temperature of 1.9 K by the end of the first week of October, and the final two sectors, 3-4 and 6-7, were on course to be fully cold two weeks later. Teams are starting to power the magnets as each sector reaches 1.9 K, so the machine should be fully powered soon after the cool-down is completed.

The new layer of the quench detection system (QDS), installed in four sectors, is functioning well. In particular, the new software and hardware QDS components allowed teams to measure the resistance of all of the splices in sector 1-2 quickly and with unprecedented accuracy. All of the measured resistances showed small values and most were significantly below the original specifications. Teams were also able to test the new energy-extraction system, which dumps the stored magnetic energy twice as quickly as previously. This provides better protection for the whole machine.

Preparations are thus continuing towards the planned restart, with the injection of the first bunches of protons into the machine scheduled for mid-November. The procedure will be to establish stable beam initially in each direction, clockwise and anticlockwise, just as with LEP 20 years ago. This will be followed by a short period of collisions at the injection energy of 450 GeV per beam. Commissioning will then begin on ramping the energy to 3.5 TeV, again working first with each beam in turn. After this, LHC physics will finally begin with collisions at this energy.

• CERN publishes regular updates on the LHC in its internal Bulletin, available at www.cern.ch/bulletin, as well as on its main website www.cern.ch and via Twitter and YouTube, at www.twitter.com/cern and www.youtube.com/cern respectively.

LISOL takes dipole moments that are close to magic

The nuclear shell model remains an essential tool in describing the structure of nuclei heavier than carbon, with shells corresponding to the “magic” numbers of protons (Z) or neutrons (N) associated with particular stability. A good way to probe the shell model is through the study of the magnetic dipole moment of a nucleus. Indeed, the model should describe particularly well the magnetic dipole moment of an isotope with a single particle outside a closed shell, as in this case the moment should be determined solely by this last nucleon. Copper isotopes (Z = 29), with one proton outside the closed shell of nickel (Z = 28), provide an example of such a system, which has been systematically studied at CERN’s ISOLDE facility with the Resonance Ionization Laser Ion Source (RILIS): the COLlinear LAser SPectroscopy (COLLAPS) collaboration uses collinear laser spectroscopy on fast beams, while the NICOLE facility employs nuclear magnetic resonance on oriented nuclei.
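
For a single proton outside a closed shell, the naive expectation is the Schmidt single-particle value, fixed by the orbital of that last proton alone. The following sketch is a standard textbook estimate rather than a result from the experiments discussed here; it assumes the free-proton g-factors gl = 1 and gs ≈ 5.586 and the p3/2 orbital relevant for the proton outside 56Ni:

G_L_PROTON = 1.0      # orbital g-factor of a free proton
G_S_PROTON = 5.5857   # spin g-factor of a free proton

def schmidt_moment_j_plus_half(l):
    """Schmidt single-particle magnetic moment (in nuclear magnetons)
    for an odd proton in an orbital with j = l + 1/2."""
    return l * G_L_PROTON + 0.5 * G_S_PROTON

# p3/2 orbital (l = 1): the level occupied by the proton outside the 56Ni core
print(f"Schmidt estimate: {schmidt_moment_j_plus_half(1):.2f} nuclear magnetons")

Measured moments typically deviate from this single-particle limit through configuration mixing, which is what makes precise measurements such a sensitive test of shell-model calculations.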

Unfortunately, the tendency for chemical compounds to form in the thick target of the ISOLDE facility does not permit the efficient release of the short-lived isotope 57Cu (T½ = 199 ms). This isotope is of particular interest because it can simply be described as the doubly magic 56Ni plus one proton, but a recent measurement of its magnetic moment strongly disagreed with this picture (Minamisono et al. 2006).

The Leuven Isotope Separator On-Line (LISOL), a gas-cell-based laser ionization facility at the Cyclotron Research Centre in Louvain-la-Neuve in Belgium, is perfectly suited for 57Cu. Beams of protons at 30 MeV and of 3He at 25 MeV impinge on a thin target of natural nickel. The radioactive copper isotopes produced recoil directly out of the target and are thermalized and neutralized in the argon buffer gas. The flow of the buffer gas then transports the isotopes to a second chamber where two laser beams, tuned to atomic transitions specific to the element of interest, give rise to resonance ionization of the atoms.

Resonance ionization has provided very pure beams of radioactive isotopes for more than a decade. It also enables in-source resonance ionization laser spectroscopy, as at ISOLDE’s RILIS. The new feature recently developed at LISOL is the implementation of laser spectroscopy in a gas-cell ion source (Sonoda et al. 2009). Its first on-line application has been the measurement of the magnetic dipole moment of the interesting copper isotopes 57,59Cu.

A team at LISOL observed the hyperfine structure spectra of several isotopes of copper, namely 57,59,63,65Cu, and extracted the hyperfine parameters, which yield the magnetic dipole moments. They were able to perform the measurement of 57Cu with yields as low as 6 ions a second, showing the high sensitivity of the technique (Cocolios et al. 2009). The accuracy is demonstrated by the very good agreement with known hyperfine parameters for 63,65Cu and with the measured magnetic dipole moments for the stable isotope 65Cu and for the radioactive isotope 59Cu, studied previously at ISOLDE. This meant that the team at LISOL was able to disprove with confidence the previous measurement of the magnetic dipole moment of 57Cu. Moreover, the new value is in agreement with several nuclear shell model calculations based on the N = Z = 20 40Ca core and the N = Z = 28 56Ni core, thereby confirming understanding of nuclear structure in this region.
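
The step from hyperfine parameters to moments is a standard one in laser spectroscopy: for isotopes measured in the same atomic transition, the magnetic hyperfine constant A is proportional to μ/I, so an unknown moment follows from a ratio to a reference isotope with a known moment, neglecting the small hyperfine anomaly. A minimal sketch of this relation, with placeholder numbers rather than the published LISOL values:

def moment_from_hyperfine(a, spin, a_ref, spin_ref, mu_ref):
    """Magnetic dipole moment from the hyperfine A factor, by ratio to a
    reference isotope measured in the same atomic transition:
        A is proportional to mu/I, so mu = mu_ref * (A * I) / (A_ref * I_ref).
    The small hyperfine anomaly is neglected."""
    return mu_ref * (a * spin) / (a_ref * spin_ref)

# Placeholder A factors (illustrative only, not the measured values); the
# reference is taken to be the stable isotope 65Cu, spin 3/2, whose moment
# is tabulated at roughly 2.38 nuclear magnetons.
mu = moment_from_hyperfine(a=1000.0, spin=1.5, a_ref=1200.0, spin_ref=1.5,
                           mu_ref=2.38)
print(f"moment of the isotope of interest: roughly {mu:.2f} nuclear magnetons")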

This new technique opens the door for short-lived refractory elements, which are not accessible at ISOLDE, to be studied at new radioactive-ion-beam facilities, such as the accelerator laboratory at the University of Jyväskylä (JYFL), GANIL in Caen, RIKEN in Tokyo and the National Superconducting Cyclotron Laboratory at Michigan State University.
