The last of the 62,960 lead tungstate crystals arrived at CERN on 9 March, marking the end of a 15-year project for the CMS experiment and the Crystal Clear Collaboration. These crystals will form the 36 supermodules of the barrel electromagnetic calorimeter.
Lead tungstate crystals were chosen because of their high density and ability to stop particles over short distances. In addition, they offer good scintillation properties and radiation hardness. In 1994, the development of the avalanche photodiode detector, which allows small amounts of light to be read out in a magnetic field, made it possible to use the crystals. By 1998 the Bogoroditsk factory in the Tula region of Russia had begun producing the crystals, and the Shanghai Institute of Ceramics in China began supplementing this production in 2005.
Half of the crystals were delivered to the CERN regional centre and the other half to INFN/ENEA. Each crystal underwent strict quality control, during which automatic machines measured 67 parameters. There are 1700 crystals in one supermodule of the electromagnetic calorimeter. The first supermodule was inserted in mid-April and the final one should be installed by June 2007.
On 26 April, the last superconducting magnet for the LHC descended into the accelerator tunnel. The hundreds of guests attending the final lowering ceremony applauded as the superconducting dipole, 15 m long and weighing 34 tonnes, descended through the PM12 shaft. Few of the guests would be well-versed in the Welsh language, but all intuitively understood the inscription on the banner at the top of the shaft: “Magned olaf yr LHC” (Last magnet for the LHC), in honour of Lyn Evans, the LHC’s (Welsh) Project Leader.
The PM12 shaft, which was created for the express purpose of lowering the long magnets into the tunnel, has seen 1232 dipoles pass down over the past two years, and 1746 magnets in total. Before going underground, the magnets were fitted with beam screens and underwent final tests and welding in the SM12 hall above the shaft. The lowering operation was a massive challenge owing to the quantity, size and fragility of the items, not to mention the tight deadlines. In addition, it took nearly 10,000 truck journeys to transport the magnets from the various locations where they were stored in France and Switzerland – a total of some 40,000 km, all at 10 km/h.
Earlier in the month, on 4 April, work began on the last stretch of interconnections in the LHC as brazing teams set to work on the final octant, between Points 1 and 2. All of the LHC magnets will be interconnected by September, by which time the teams working on them will have made 123,000 connections in only two years. The task of connecting up all of the machine components has also been a challenge. Vacuum systems, superconducting cables, beam screens, cryogenic pipes and thermal and electrical insulations all have to be interconnected, with each interconnection requiring about 60 operations.
For all of the teams involved, another great challenge is to work in parallel with other ongoing activities. During the final phase, some 200 engineers and technicians, half from CERN and half from the contractor, are working in the LHC tunnel under rather difficult conditions. The work involves a collaboration between CERN, the Kraków Institute of Nuclear Physics (HNINP) and the Franco–Dutch consortium IEG, which took responsibility for the interconnection work and for supplying welding and brazing machines.
At the same time, physicists and engineers from CERN, Fermilab, Lawrence Berkeley National Laboratory and KEK are preparing to repair 18 sets of structural supports for quadrupole magnets built at Fermilab, one of which failed a high-pressure test in the LHC tunnel in March. The failure was in a magnet that is part of an “inner triplet” of three magnets, Q1, Q2 and Q3. To fix a design flaw in the supports, the team has proposed to add to each Q1 and each Q3 a set of four cartridges that can absorb the longitudinal force generated during the pressure test. The cartridges are stiff mechanical springs that will be installed parallel to the magnet’s cold mass.
The final design reviews for the cartridges will take place at Fermilab and CERN before the end of May, and installation of the cartridges in the Q1 and Q3 magnets of at least one inner triplet is scheduled to be complete in early June, in time for the next pressure test. The work can be done in the LHC tunnel, with the magnets in place. Only the inner triplet damaged during the previous pressure test will be removed for repairs of its structural supports.
The first sector of the LHC to be cooled reached its operating temperature of 1.9 K for the first time on 10 April. Although only an eighth of the LHC ring, this sector is already the world’s largest superconducting installation. This achievement marks the end of more than two months of commissioning work, which began in January and was carried out in three stages.
The 3.3 km sector comprises more than 200 dipole magnets and short straight sections, which contain quadrupole magnets, and has a total mass of 4700 tonnes. During the first stage, it was pre-cooled to 80 K, just above the temperature of liquid nitrogen. At this temperature, the material reaches 90% of its final thermal contraction, representing a 3 mm shrinkage for each metre of the steel structures. The total contraction over the sector as a whole is close to 10 m, and special devices (bellows and expansion loops) in the interconnections between the magnets compensate for this.
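As a back-of-the-envelope check of these figures (a rough sketch using only the numbers quoted above, not an engineering calculation), the 3 mm-per-metre shrinkage applied over the 3.3 km of cold structures does indeed add up to roughly 10 m:

```python
# Rough cross-check of the quoted thermal-contraction figures
sector_length_m = 3300       # the 3.3 km sector
shrinkage_per_m = 3e-3       # 3 mm of contraction per metre at 80 K

total_contraction_m = sector_length_m * shrinkage_per_m
print(f"total contraction: ~{total_contraction_m:.1f} m")  # ~9.9 m, i.e. close to 10 m
```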
On 5 March, the teams began work on the second stage, which involved cooling the sector to 4.5 K using the gigantic refrigeration plants. For the final stage, which began in mid-March, the 1.8 K refrigeration plants came into play. These use a sophisticated pumping system to bring down the heat-exchanger saturation pressure to cool the magnets and the 10 tonnes of helium that they contain to 1.9 K. To achieve a pressure of 15 millibars, the system uses a combination of hydrodynamic centrifugal compressors operating at low temperature and positive-displacement compressors operating at room temperature. At 1.9 K, helium is superfluid, flowing with virtually no viscosity and allowing greater heat-transfer capacity.
The complexity and large number of sub-systems to be commissioned for the first time, together with various interface conditions to be managed, account for the time needed to cool the sector. The control system of one sector has to manage approximately 4000 inputs/outputs and 500 regulation loops that need to be adjusted. In addition, the teams have carried out extensive checks to make sure that the cooling was done with all the necessary caution. This learning phase, which was long but vital, has also enabled the teams to prepare for cooling the other sectors.
While the sector cooling progressed steadily, problems arose in a different sector when a quadrupole magnet, one of an “inner triplet” of three focusing magnets, failed a high-pressure test at Point 5 on 27 March. Each inner triplet set of magnets contains two quadrupole magnets (Q2 and Q3) built at KEK and one (Q1) built at Fermilab. The asymmetric force generated during the test broke the supports, made of the glass cloth–epoxy laminate G-11, that hold the Q1 magnet’s cold mass inside the cryostat, and also damaged electrical connections.
CERN and Fermilab now know that this is an intrinsic design flaw that must be addressed in all triplet magnets assembled at Fermilab. Computer-aided calculations after the accident show that the G-11 support structure could not withstand the associated longitudinal forces. A review of the engineering designs revealed that the longitudinal force from asymmetric loading was not included in the engineering design or identified as an issue in the four design reviews. An external review committee will analyse how this problem occurred and determine the root causes and the lessons learned.
The goal at CERN and Fermilab is now to redesign and repair the inner triplet magnets and, if necessary, the electrical distribution feed-box without affecting the LHC start-up schedule. Teams at CERN and Fermilab have identified potential repairs that could be carried out without removing undamaged triplet magnets from the tunnel. In the meantime, all three of the pressure-tested triplet magnets at Point 5, plus the associated feed-box, will be removed from the tunnel for inspection and, if necessary, repair.
Calculations of the structure of heavy nuclei have long suffered from the difficulties presented by the sheer complexity of the many-body system, with all of its protons and neutrons. Using theory to make meaningful predictions requires massive calculations that tax even high-powered supercomputers. Recently, researchers from Michigan State and Central Michigan universities have reported dramatic success in stripping away much of this complexity, reducing computational time from days or weeks to minutes or hours.
One way to tackle the many-body problem is first to construct mathematical functions that describe each particle, and then start multiplying these functions together to get some idea of the underlying physics of the system. This full configuration-interaction (CI) approach works well enough to describe light nuclei, but becomes extremely challenging with heavier elements. For example, calculating wave functions and energy levels for the pf-shell structure of 56Ni means, in effect, solving an equation with around 10^9 variables.
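To give a feeling for the scale of the problem, the short counting exercise below (an illustrative sketch only, not the researchers' code) assumes a closed 40Ca core, so that 56Ni has eight valence protons and eight valence neutrons distributed over the 20 magnetic substates of the pf-shell orbits; requiring the total magnetic quantum number M to be zero leaves a basis of around 10^9 configurations, the figure quoted above.

```python
from itertools import combinations
from collections import Counter

# Twice the magnetic quantum number (2m) of the 20 pf-shell single-particle states:
# 1f7/2, 2p3/2, 1f5/2 and 2p1/2 (assuming a closed 40Ca core for 56Ni).
two_m = ([7, 5, 3, 1, -1, -3, -5, -7]   # 1f7/2
         + [3, 1, -1, -3]               # 2p3/2
         + [5, 3, 1, -1, -3, -5]        # 1f5/2
         + [1, -1])                     # 2p1/2

# Distribution of total 2M over all ways of placing 8 identical nucleons in 20 states
m_dist = Counter(sum(c) for c in combinations(two_m, 8))

all_determinants = sum(m_dist.values()) ** 2                      # protons x neutrons, ~1.6e10
m_zero_dimension = sum(n * m_dist[-m] for m, n in m_dist.items()) # keep only total M = 0

print(f"all proton-neutron determinants: {all_determinants:.1e}")
print(f"basis states with M = 0:         {m_zero_dimension:.1e}")  # of order 1e9
```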
Researchers face a similar problem in quantum chemistry in studying molecules with many dozens of interacting electrons. For several years, however, they have used a computationally cost-effective alternative to CI known as coupled-cluster (CC) theory, which was originally suggested in nuclear theory, but largely developed by quantum chemists and atomic and molecular physicists. Now the CC method is making its way back into nuclear physics, first in calculations of light nuclei, and most recently in developments for heavy nuclei. The key is correlation: the idea that certain pairs of fermions in the system (whether nucleons or electrons) are strongly linked to one another.
The researchers first used the Michigan State High Performance Computing Center and the Central Michigan Center for High Performance Scientific Computing for the several-week-long task of solving the CI equation describing 56Ni, to create a benchmark against which they could compare the results of the CC calculation (M Horoi et al. 2007). They then found that the CC theory produced nearly identical results and that the time spent crunching the numbers – on a standard laptop – was often measured in minutes or even seconds.
This research bodes well for next-generation nuclear science. Because of existing and planned accelerators around the world, the next few decades promise to yield many heavy isotopes for study. Theoretical models will need to keep pace with the expected avalanche of experimental data. To date, many such models have treated the nucleus as a relatively undifferentiated liquid, gas or other set of mathematical averages – all of which tends to gloss over subtle nuclear nuances. In contrast, coupled-cluster theory may be the only manageable and scalable model that takes a particle-by-particle approach.
Commissioning of the Linac Coherent Light Source (LCLS) at SLAC began on 5 April when physicists and engineers started up the electron-injector system for the first time, and created and accelerated a bunch of electrons. This injector is the first stage in a free-electron X-ray laser that will use the last kilometre of SLAC’s 3 km linac to accelerate electrons before they pass through an undulator magnet and emit X-rays of 800 eV – 8 keV.
In the injector facility at Sector 20, a drive laser initiates the process by sending a short burst of UV light to a radio-frequency (RF) gun. The RF gun not only creates a precisely shaped bunch of electrons but also gives the electrons their initial accelerating boost with microwaves. Once they enter the linac, the bunches will pass through compressors that pack them into even shorter bunches before they ultimately pass through the undulator.
In 2001, GSI, together with a large international science community, presented the Conceptual Design Report (CDR) for a major new accelerator facility for beams of ions and antiprotons in Darmstadt (Henning et al. 2001). The following years saw the consolidation of the proposal for the project, which was named the Facility for Antiproton and Ion Research (FAIR). During that process high-level national and international science committees evaluated the project’s feasibility, scientific merit and discovery potential, as well as the estimated costs. About 2500 scientists and engineers from 45 countries contributed to this effort, which resulted last year in the FAIR Baseline Technical Report (BTR) (Gutbrod et al. 2006).
The International Steering Committee has accepted the BTR as the basis for international negotiations on funding for FAIR. The plan is to found a company, FAIR GmbH, as project owner for the construction and operation of the FAIR research facility under international ownership. Currently 14 countries (Austria, China, Finland, France, Germany, Greece, India, Italy, Poland, Romania, Russia, Spain, Sweden and the UK) have signed the Memorandum of Understanding for FAIR, indicating their intention to participate in the FAIR project; the European Union, Hungary and the US have observer status. The investment cost for the project will be about €1000 million, and about 2400 man-years will be required to execute the project. Negotiations at governmental level to secure the funding started in summer 2006. The aim is to complete this process in summer 2007 and begin construction in autumn. The construction plan foresees a staged completion of the facility in which the first experimental programmes commence as early as 2012 while the entire facility will be completed in 2015 (figure 1).
The research programme of FAIR can be grouped in the following specific fields:
• Nuclear structure and nuclear astrophysics, using beams of stable and short-lived (radioactive) nuclei far from stability.
• Hadron structure, in particular quantum chromodynamics (QCD) – the theory of the strong interaction – and the QCD vacuum, using primarily beams of antiprotons.
• The nuclear-matter phase diagram and quark–gluon plasma, using beams of high-energy heavy ions.
• Physics of very dense plasmas, using highly compressed heavy-ion beams in unique combination with a petawatt laser.
• Atomic physics, quantum electrodynamics (QED) and ultra-high electromagnetic fields, using beams of highly charged heavy ions and antimatter.
• Technical developments and applied research, using ion beams for materials science and biology.
The BTR lists 14 experimental proposals as elements of the core research programme. However, additional experiments as future options are already being considered and evaluated. In particular, experiments with polarized antiprotons could add an entirely new research field to the FAIR programme. One addition to the core research programme, as presented in 2001, is the Facility for Low-Energy Antiproton and Ion Research (FLAIR), which will exploit the high flux of antiprotons at FAIR. Here cooled beams of antiprotons with energies well below 100 keV can be captured efficiently in charged-particle traps or stopped in low-density gas.
The new SIS100/300 double synchrotron, with a circumference of about 1100 m and with magnetic rigidities of 100 and 300 Tm in the two rings, will meet experimental requirements concerning particle intensities and energies. This constitutes the central part of the FAIR accelerator facility (figure 2). The two synchrotrons will be built on top of each other in a subterranean tunnel. They will be equipped with rapidly cycling superconducting magnets to minimize both construction and operating costs.
For the highest intensities, the 100 Tm synchrotron will operate at a repetition rate of 1 Hz, i.e. with ramp rates for the bending magnets of up to 4 T/s. The goal of the SIS100 is to achieve intense pulsed (5 × 10^11 ions per pulse) uranium beams (charge state q = 28+) at 1 GeV/u and intense (4 × 10^13) pulsed proton beams at 29 GeV. A separate proton linac will be constructed as injector to the SIS18 synchrotron to supply the high-intensity proton beams required for antiproton production. It will be possible to compress both the heavy-ion and the proton beams to the very short bunch lengths required for the production and subsequent storage and efficient cooling of exotic nuclei (around 60 ns) and antiprotons (around 25 ns). These short, intense ion bunches are also needed for plasma-physics experiments.
The double-ring facility will provide continuous beams with high average intensities of up to 3 × 10^11 ions per second at energies of 1 GeV/u for heavy ions, either directly from the SIS100 or by slow extraction from the 300 Tm ring. The SIS300 will provide high-energy ion beams with maximum energies of around 45 GeV/u for Ne10+ beams and close to 35 GeV/u for fully stripped U92+ beams. The maximum intensities in this mode will be close to 1.5 × 10^10 ions per spill. These high-energy beams will be extracted over time periods of 10–100 s in quasi-continuous mode, which is the limit that the detectors used for nucleus–nucleus collision experiments can accept.
A complex system of storage rings adjacent to the SIS100/300 double-ring synchrotron, together with the production targets and separators for antiproton beams and radioactive secondary beams (the Super Fragment Separator), will provide an unprecedented variety of particle beams at FAIR. These rings will be equipped with beam-cooling facilities, internal targets and in-ring experiments.
The Collector Ring (CR) serves for stochastic cooling of radioactive and antiproton beams and will allow mass measurements of short-lived nuclei using the time-of-flight method when in isochronous operation mode. The Accumulator Ring (RESR) will accumulate antiproton beams after stochastic pre-cooling in the CR and also provide fast deceleration of radioactive secondary beams with a ramp rate of up to 1 T/s.
The New Experimental Storage Ring (NESR) will be dedicated to experiments with exotic ions and with antiproton beams. The NESR is to be equipped with stochastic and electron cooling, and additional instrumentation will include precision mass spectrometry using the Schottky frequency-spectroscopy method, internal-target experiments with atoms and electrons, an electron–nucleus scattering facility and collinear laser spectroscopy. Moreover, the NESR will serve to cool and decelerate stable and radioactive ions as well as antiprotons for low-energy experiments and trap experiments at the FLAIR facility.
The High-Energy Storage Ring (HESR) will be optimized for antiproton beams at energies from 3 GeV up to a maximum of 14.5 GeV. The ring is to be equipped with electron cooling up to a beam energy of 8 GeV (5 MeV maximum electron energy) and with stochastic cooling up to 14.5 GeV. The experimental equipment includes an internal pellet target and the large in-ring detector PANDA, as well as an option for experiments with polarized antiproton beams.
The design of the FAIR facility has incorporated parallel operation of the different research programmes from the beginning. The proposed scheme of synchrotrons and storage rings, with their intrinsic cycle times for beam acceleration, accumulation, storage and cooling, respectively, has the potential to optimize parallel and highly synergetic operation. This means that for the different programmes the facility will operate more or less like a dedicated facility, without the reduction in luminosity that would occur with simple beam splitting or steering to different experiments.
The realization of the facility involves some technological challenges. For example, it will be necessary to control the dynamic vacuum pressure. The synchrotrons will need to operate close to the space-charge limits with small beam losses, of the order of a few per cent; in this respect, the control of collective instabilities and the reduction of the ring impedances are subjects of the present R&D phase. Fast acceleration and compression of the intense heavy-ion beams require compact RF systems. The SIS100 requires superconducting magnets with a maximum field of 2 T and a ramp rate of 4 T/s, while the SIS300 will operate at 4.5 T with a ramp rate of 1 T/s in the dipole magnets – technology that will benefit other accelerators. Lastly, electron and stochastic cooling at medium and high energies will be essential for experiments with exotic ions and with antiprotons.
The past five years have seen substantial R&D effort dedicated to the various technological aspects. This has been funded by the German BMBF and by FAIR member states, as well as by the European Union. The work has made considerable progress and has demonstrated the feasibility of the proposed technical solutions. Now the next stage is underway and prototyping of components has started.
The end of an era came on 28 November 2006 when the Sudbury Neutrino Observatory (SNO) stopped data-taking after eight years of exciting discoveries. During this time the observatory saw evidence that neutrinos, produced in the fusion of hydrogen in the solar core, change type – or flavour – while passing through the Sun on their way to Earth. This observation explained the long-standing puzzle as to why previous experiments had seen fewer solar neutrinos than predicted and also confirmed that these elusive particles have mass.
Ray Davis’s radiochemical experiment first detected solar neutrinos in 1967, a discovery for which he shared the 2002 Nobel Prize in Physics (CERN Courier December 2002 p15). Surprisingly, he found only about a third of the number predicted from models of the Sun’s output. The Kamiokande II experiment in Japan confirmed this deficit, which became known as the solar-neutrino problem, while other detectors saw related shortfalls in the number of solar neutrinos. A possible explanation, suggested by Vladimir Gribov and Bruno Pontecorvo in 1969, was that some of the electron-neutrinos, which are produced in the Sun, “oscillated” into neutrinos that could not be detected in Davis’s detector. This oscillation mechanism requires that neutrinos have non-zero mass.
In 1985, the late Herb Chen pointed out that heavy water (D2O) has a unique advantage when it comes to detecting the neutrinos from 8B decays in the solar-fusion process, as it enables both the number of electron neutrinos and the number of all types of neutrinos to be measured. In heavy water neutrinos of all types can break a deuteron into its constituent proton and neutron (the neutral-current reaction), while only electron neutrinos can change the deuteron into two protons and release an electron (the charged-current reaction). A comparison of the flux of electron neutrinos with that of all flavours can then reveal whether flavour transformation is the cause of the solar-neutrino deficit. This is the principle behind the SNO experiment.
Scientists from Canada, the US and the UK designed SNO to attain a detection rate of about 10 solar neutrinos a day using 1000 tonnes of heavy water. Neutrino interactions were detected by 9456 photomultiplier tubes surrounding the heavy water, which was contained in a 12 m diameter acrylic sphere. This sphere was surrounded by 7000 tonnes of ultra-pure water to shield against radioactivity. Figure 1 shows the layout of the SNO detector, which is located about 2 km underground in Inco’s Creighton nickel mine near Sudbury, Canada, so as to all but eliminate cosmic rays reaching the detector. Figure 2 shows what the detector “sees”: the photomultiplier tubes that were hit following the creation of an electron by an electron neutrino.
It was crucial to the success of this experiment to make the components of SNO very clean and, in particular, to reduce the radioactivity within the heavy water to exceedingly low levels. To achieve this aim, the team constructed the detector in a Class-2000 cleanroom, and entry to SNO was via a shower and changing rooms to reduce the chance of any dust contamination from the mine. The fraction of natural thorium in the D2O had to be less than a few parts in 10^15, roughly equivalent to a small teaspoonful of rock dust added to the 1000 tonnes of heavy water. Such purity was necessary to reduce the break-up of deuterons by gamma rays from natural uranium and thorium radioactivity to a small fraction of the rate from the solar neutrinos. This required complex water purification and assay systems to reduce and measure the radioactivity. Great care in handling the heavy water was also needed as it is on loan from Atomic Energy of Canada Ltd (AECL) and is worth about C$300 million.
SNO’s results from the first phase of data-taking with unadulterated D2O were published in 2001 and 2002, and provided strong evidence that electron neutrinos do transform into other types of neutrino (CERN Courier June 2002 p5). The second phase of SNO involved adding 2 tonnes of table salt (NaCl) to the D2O to enhance the detection efficiency for neutrons. This large “pinch of salt” enabled SNO to make the most direct and precise measurement of the total number of solar neutrinos, which is in excellent agreement with solar-model calculations (CERN Courier November 2003 p5). The results to date reject the null hypothesis of no neutrino flavour change by more than 7 σ.
Together with other solar-neutrino measurements, the SNO results are best described by neutrino oscillation enhanced by neutrinos interacting with matter as they pass through the Sun – a resonant effect that Stanislav Mikheyev, Alexei Smirnov and Lincoln Wolfenstein predicted in 1985. To a good approximation, the electron-neutrino flavour eigenstate is a linear combination of two mass eigenstates with masses m1 and m2. The mixing angle between these two mass eigenstates, which is constrained by the SNO measurement of the ratio of the electron-neutrino flux to the total neutrino flux, is found to be large (around 34°), although maximal mixing (45°) is excluded by more than 5 σ. The matter enhancement enables the ordering (hierarchy) of the two mass eigenstates to be determined, with m2 > m1 and a mass difference of around 0.01 eV/c^2. The KamLAND experiment, which uses 1000 tonnes of liquid scintillator to detect antineutrinos from Japan’s nuclear reactors, confirmed in 2003 that neutrino mixing occurs and is large, as seen for solar neutrinos.
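In the standard two-flavour notation, the relations described above can be summarized schematically (with approximate present-day values; the precisely measured quantity is the squared-mass splitting) as

$$\nu_e \simeq \cos\theta_{12}\,\nu_1 + \sin\theta_{12}\,\nu_2, \qquad \theta_{12} \approx 34^\circ, \qquad \Delta m^2_{21} \equiv m_2^2 - m_1^2 \approx 8\times10^{-5}\ \mathrm{eV}^2,$$

so that, for a strongly hierarchical spectrum (m1 much smaller than m2), the mass difference is roughly the square root of the splitting, about 0.009 eV/c^2 – the 0.01 eV/c^2 scale quoted above.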
After the removal of salt from the heavy water, the third and final phase of SNO used an array of proportional counters in the heavy water to improve further the neutrino detection. Researchers filled 36 counters with 3He and four with 4He gas. Figure 3 shows part of this array during its deployment with a remotely operated submarine. The additional information from this phase will enable the SNO collaboration to determine better the oscillation parameters that describe the neutrino mixing. Data analysis is still in progress.
SNO’s scientific achievements were marked at the end of data-taking when the collaboration received the inaugural John C Polanyi Award (figure 4) of the Canadian funding agency, the Natural Sciences and Engineering Research Council (NSERC). The completion of SNO does not mark the end of experiments in Sudbury, however, as SNOLAB, a new international underground laboratory, is nearly complete, with expanded space to accommodate four or more experiments (see Canada looks to future of subatomic physics). SNOLAB has received a number of letters of interest from experiments on dark matter, double beta decay, supernovae and solar neutrinos. In addition, a new collaboration is planning to put 1000 tonnes of scintillator in the SNO acrylic vessel once the heavy water is returned to the AECL by the end of 2007. This experiment, called SNO+, aims to study lower-energy solar neutrinos from the “pep” reaction in the proton–proton chain, and to study the double beta decay of 150Nd by the addition of a metallo-organic compound.
As a historical anecdote, SNO was not the first heavy-water solar-neutrino experiment. In 1965, Tom Jenkins, along with other members of Fred Reines’ neutrino group, at what was then the Case Institute of Technology, began the construction of a 2 tonne heavy-water Cherenkov detector, complete with 55 photomultiplier tubes, in the Morton salt mine in Ohio. Unlike Chen’s later proposal, Jenkins considered only the detection of electron neutrinos through the charged-current reaction, as other flavours were not expected and the neutral-current reaction had not yet been discovered. This experiment finished in 1968 after Davis had obtained a much lower 8B solar-neutrino flux than had been predicted.
This article was adapted from text in CERN Courier vol. 47, May 2007, pp26–28
The principal goal of the experimental programme at the LHC is to make the first direct exploration of a completely new region of energies and distances, to the tera-electron-volt scale and beyond. The main objectives include the search for the Higgs boson and whatever new physics may accompany it, such as supersymmetry or extra dimensions, and also – perhaps above all – to find something that the theorists have not predicted.
The Standard Model of particles and forces summarizes our present knowledge of particle physics. It extends and generalizes the quantum theory of electromagnetism to include, in a single unified framework, the weak nuclear forces responsible for radioactivity; it also provides an equally successful analogous theory of the strong nuclear forces.
The conceptual basis for the Standard Model was confirmed by the discovery at CERN of the predicted weak neutral-current form of radioactivity and, subsequently, of the quantum particles responsible for the weak and strong forces, at CERN and DESY respectively. Detailed calculations of the properties of these particles, confirmed in particular by experiments at the Large Electron–Positron (LEP) collider, have since enabled us to establish the complete structure of the Standard Model. Data taken at LEP agreed with the calculations at the per mille level, and recent precise measurements of the masses of the intermediate vector boson W and the top quark at Fermilab’s Tevatron agree very well with predictions.
These successes raise deeper problems, however. The Standard Model does not explain the origin of mass, nor why some particles are very heavy while others have no mass at all; it does not explain why there are so many different types of matter particles in the universe; and it does not offer a unified description of all the fundamental forces. Indeed, the deepest problem in fundamental physics may be how to extend the successes of quantum physics to the force of gravity. It is the search for solutions to these problems that defines the current objectives of particle physics – and the programme for the LHC.
Understanding the origin of mass will unlock some of the basic mysteries of the universe: the mass of the electron determines the sizes of atoms, while radioactivity is weak because the W boson weighs as much as a medium-sized nucleus. Within the Standard Model the key to mass lies with an essential ingredient that has not yet been observed, the Higgs boson; without it the calculations would yield incomprehensible infinite results. The agreement of the data with the calculations implies not only that the Higgs boson (or something equivalent) must exist, but also suggests that its mass should be well within the reach of the LHC.
Experiments at LEP at one time found a hint of the existence of the Higgs boson, but the searches ultimately proved inconclusive and told us only that it must weigh at least 114 GeV. At the LHC, the ATLAS and CMS experiments will be looking for the Higgs boson in several ways. The particle is predicted to be unstable, decaying for example to photons, bottom quarks, tau leptons, W or Z bosons (figure 1 and figure 2). It may well be necessary to combine several different decay modes to uncover a convincing signal, but the LHC experiments should be able to find the Higgs boson even if it weighs as much as 1 TeV.
While resolving the Higgs question will set the seal on the Standard Model, there are plenty of reasons to expect other, related new physics, within reach of experiments at the LHC. In particular, the elementary Higgs boson of the Standard Model seems unlikely to exist in isolation. Specifically, difficulties arise in calculating quantum corrections to the mass of the Higgs boson. Not only are these corrections infinite in the Standard Model, but, if the usual procedure is adopted of controlling them by cutting the theory off at some high energy or short distance, the net result depends on the square of the cut-off scale. This implies that, if the Standard Model is embedded in some more complete theory that kicks in at high energy, the mass of the Higgs boson would be very sensitive to the details of this high-energy theory. This would make it difficult to understand why the Higgs boson has a (relatively) low mass and, by extension, why the scale of the weak interactions is so much smaller than that of grand unification, say, or quantum gravity.
This is known as the “hierarchy problem”. One could try to resolve it simply by postulating that the underlying parameters of the theory are tuned very finely, so that the net value of the Higgs boson mass after adding in the quantum corrections is small, owing to some suitable cancellation. However, it would be more satisfactory either to abolish the extreme sensitivity to the quantum corrections, or to cancel them in some systematic manner.
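Schematically (a textbook-level, one-loop illustration rather than a full calculation), a fermion that couples to the Higgs boson with strength λ_f shifts the squared Higgs mass by an amount that grows with the square of the cut-off Λ,

$$\delta m_H^2 \sim -\frac{\lambda_f^2}{16\pi^2}\,\Lambda^2 + \ldots,$$

so for Λ anywhere near the grand-unification or Planck scale the correction dwarfs the required value of order (100 GeV)^2 unless it is cancelled; in a supersymmetric theory the corresponding boson loops enter with the opposite sign and the quadratically divergent pieces cancel, which is the systematic mechanism discussed next.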
One way to achieve this would be if the Higgs boson is composite and so has a finite size, which would cut the quantum corrections off at a relatively low energy scale. In this case, the LHC might uncover a cornucopia of other new composite particles with masses around this cut-off scale, near 1 TeV.
The alternative, more elegant, and in my opinion more plausible, solution is to cancel the quantum corrections systematically, which is where supersymmetry could come in. Supersymmetry would pair up fermions, such as the quarks and leptons, with bosons, such as the photon, gluon, W and Z, or even the Higgs boson itself. In a supersymmetric theory, the quantum corrections due to the pairs of virtual fermions and bosons cancel each other systematically, and a low-mass Higgs boson no longer appears unnatural. Indeed, supersymmetry predicts a mass for the Higgs boson probably below 130 GeV, in line with the global fit to precision electroweak data.
The fermions and bosons of the Standard Model, however, do not pair up with each other in a neat supersymmetric manner. The theory, therefore, requires that a supersymmetric partner, or sparticle, as yet unseen, accompanies each of the Standard Model particles. Thus, this scenario predicts a “scornucopia” of new particles that should weigh less than about 1 TeV and could be produced by the LHC (figure 3).
Another attraction of supersymmetry is that it facilitates the unification of the fundamental forces. Extrapolating the strengths of the strong, weak and electromagnetic interactions measured at low energies does not give a common value at any energy, in the absence of supersymmetry. However, there would be a common value, at an energy around 10^16 GeV, in the presence of supersymmetry. Moreover, supersymmetry provides a natural candidate, in the form of the lightest supersymmetric particle (LSP), for the cold dark matter required by astrophysicists and cosmologists to explain the amount of matter in the universe and the formation of structures within it, such as galaxies. In this case, the LSP should have neither strong nor electromagnetic interactions, since otherwise it would bind to conventional matter and be detectable. Data from LEP and direct searches have already excluded sneutrinos as LSPs. Nowadays, the “scandidates” most considered are the lightest neutralino and (to a lesser extent) the gravitino.
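The extrapolation mentioned above follows from the one-loop renormalization-group running of the three inverse couplings; schematically, and with the standard supersymmetric (MSSM) coefficients,

$$\frac{1}{\alpha_i(Q)} = \frac{1}{\alpha_i(M_Z)} - \frac{b_i}{2\pi}\,\ln\frac{Q}{M_Z}, \qquad (b_1, b_2, b_3) = \left(\tfrac{33}{5},\ 1,\ -3\right),$$

so that the three lines meet near Q ≈ 2 × 10^16 GeV, whereas with the Standard Model coefficients they fail to meet at any single energy.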
Assuming that the LSP is the lightest neutralino, the parameter space of the constrained minimal supersymmetric extension of the Standard Model (CMSSM) is restricted by the need to avoid the stau being the LSP, by the measurements of b → sγ decay that agree with the Standard Model, by the range of cold dark-matter density allowed by astrophysical observations, and by the measurement of the anomalous magnetic moment of the muon (gμ–2). These requirements are consistent with relatively large masses for the lightest and next-to-lightest visible supersymmetric particles, as figure 4 indicates. The figure also shows that the LHC can detect most of the models that provide cosmological dark matter (though this is not guaranteed), whereas the astrophysical dark matter itself may be detectable directly for only a smaller fraction of models.
Within the overall range allowed by the experimental constraints, are there any hints at what the supersymmetric mass scale might be? The high-precision measurements of the W mass tend to favour a relatively small mass scale for sparticles. On the other hand, the rate for b → sγ shows no evidence for light sparticles, and the experimental upper limit on Bs → μ+μ– begins to exclude very small masses. The strongest indication for new low-energy physics, for which supersymmetry is just one possibility, is offered by gμ–2. Putting this together with the other precision observables gives a preference for light sparticles.
Other proposals for additional new physics postulate the existence of new dimensions of space, which might also help to deal with the hierarchy problem. Clearly, space is three-dimensional on the distance scales that we know so far, but the suggestion is that there might be additional dimensions curled up so small as to be invisible. This idea, which dates back to the work of Theodor Kaluza and Oskar Klein in the 1920s, has gained currency in recent years with the realization that string theory predicts the existence of extra dimensions and that some of these might be large enough to have consequences observable at the LHC. One possibility that has emerged is that gravity might become strong when these extra dimensions appear, possibly at energies close to 1 TeV. In this case, some variants of string theory predict that microscopic black holes might be produced in the LHC collisions. These would decay rapidly via Hawking radiation, but measurements of this radiation would offer a unique window onto the mysteries of quantum gravity.
If the extra dimensions are curled up on a sufficiently large scale, ATLAS and CMS might be able to see Kaluza–Klein excitations of Standard Model particles, or even the graviton. Indeed, the spectroscopy of some extra-dimensional theories might be as rich as that of supersymmetry while, in some theories, the lightest Kaluza–Klein particle might be stable, rather like the LSP in supersymmetric models.
Back to the beginning
By colliding particles at very high energies we can recreate the conditions that existed a fraction of a second after the Big Bang, which allows us to probe the origins of matter. This may be linked to the question of why there are so many different types of matter particles in the universe. Experiments at LEP revealed that there are just three “families” of elementary particles: one that makes up normal stable matter, and two heavier unstable families that were revealed in cosmic rays and accelerator experiments. The Standard Model does not explain why there are three and only three families, but it may be that their existence in the early universe was necessary for matter to emerge from the Big Bang, with little or no antimatter. It seems likely that the answers to these questions are linked at a fundamental level.
Andrei Sakharov was the first to point out that particle physics could explain the origin of matter in the universe by the fact that matter and antimatter have slightly different properties, as discovered in the decays of K and B mesons, which contain strange and bottom quarks, members of the heavier families. These differences are manifest in the phenomenon of CP violation. Present data are in good agreement with the amount of CP violation allowed by the Standard Model, but this would be insufficient to generate the matter seen in the universe.
The Standard Model accounts for CP violation within the context of the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which links the interactions between quarks of different type (or flavour). Experiments at the B-factories at KEK and SLAC have established that the CKM mechanism is dominant, so the question is no longer whether this is “right”. The task is rather to look for additional sources of CP violation that must surely exist, to create the cosmological matter–antimatter asymmetry via baryogenesis in the early universe. It is an open question whether these may provide new physics at the tera-electron-volt scale accessible to the LHC. On the other hand, if the LHC does observe any new physics, such as the Higgs boson and/or supersymmetry, it will become urgent to understand its flavour and CP properties.
The LHCb experiment will be dedicated to probing the differences between matter and antimatter, notably looking for discrepancies with the Standard Model. The experiment has unique capabilities for probing the decays of mesons containing both bottom and strange quarks. It will be able to measure subtle CP-violating effects in Bs decays, and will also improve measurements of all the angles of the unitarity triangle, which expresses the amount of CP violation in the Standard Model. The LHC will also provide high sensitivity to rare B decays, to which the ATLAS and CMS experiments will contribute, in particular, and which may open another window on CP violation beyond the CKM model.
In addition to the studies of proton–proton collisions, heavy-ion collisions at the LHC will provide a window onto the state of matter that would have existed in the early universe at times before quarks and gluons “condensed” into hadrons, and ultimately the protons and neutrons of the primordial elements. When heavy ions collide at high energies they form for an instant a “fireball” of hot, dense matter. Studies, in particular by the ALICE experiment, may resolve some of the puzzles posed by the data already obtained at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. These data indicate that there is very rapid thermalization in the collisions, after which a fluid with very low viscosity and large transport coefficients seems to be produced. One of the surprises is that the medium produced at RHIC seems to be strongly interacting (see Theory ties strings round jet suppression). The final state exhibits jet quenching and the semblance of cones of energy deposition akin to Mach shock waves or Cherenkov radiation patterns, indicative of very fast particles moving through the medium faster than its speed of sound or of light.
Experiments at the LHC will enter a new range of temperatures and pressures, thought to be far into the quark–gluon plasma regime, which should test the various ideas developed to explain results from RHIC. The experiments will probably not see a real phase transition between the hadronic and quark–gluon descriptions; it is more likely to be a cross-over that may not have a distinctive experimental signature at high energies. However, it may well be possible to see quark–gluon matter in its weakly interacting high-temperature phase. The larger kinematic range should also enable ideas about jet quenching and radiation cones to be tested.
First expectations
The first step for the experimenters will be to understand the minimum-bias events and compare measurements of jets with the predictions of QCD. The next Standard Model processes to be measured and understood will be those producing the W and Z vector bosons, followed by top-quark physics. Each of these steps will allow the experimental teams to understand and calibrate their detectors, and only after these steps will the search for the Higgs boson start in earnest. The Higgs will not jump out in the same way as did the W and Z bosons, or even the top quark, and the search for it will demand an excellent understanding of the detectors. Around the time that Higgs searches get underway, the first searches for supersymmetry or other new physics beyond the Standard Model will also start.
In practice, the teams will look for generic signatures of new physics that could be due to several different scenarios. For example, missing-energy events could be due to supersymmetry, extra dimensions, black holes or the radiation of gravitons into extra dimensions. The challenge will then be to distinguish between the different scenarios. For example, in the case of distinguishing between supersymmetry and universal extra dimensions, the spectra of higher excitations would be different in the two scenarios, the different spins of particles in cascade decays would yield distinctive spin correlations, and the spectra and asymmetries of, for instance, dileptons, would be distinguishable.
What is the discovery potential of this initial period of LHC running? Figure 5a shows that a Standard Model Higgs boson could be discovered with 5 σ significance with 5 fb^-1 of integrated and well-understood luminosity, whereas 1 fb^-1 would already suffice to exclude a Standard Model Higgs boson at the 95% confidence level over a large range of possible masses. However, as mentioned above, this Higgs signal would receive contributions from many different decay signatures, so the search for the Higgs boson will require researchers to understand the detectors very well to find each of these signatures with good efficiency and low background. Therefore, announcement of the Higgs discovery may not come the day after the accelerator produces the required integrated luminosity!
Paradoxically, some new physics scenarios such as supersymmetry may be easier to spot if their mass scale is not too high. For example, figure 5b shows that 0.1 fb^-1 of luminosity should be enough to detect the gluino at the 5 σ level if its mass is less than 1.2 TeV, and to exclude its existence below 1.5 TeV at the 95% confidence level. This amount of integrated luminosity could be gathered with an ideal month’s running at 1% of the design instantaneous luminosity.
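The translation from instantaneous to integrated luminosity behind that statement is simple arithmetic (a rough sketch, assuming the nominal design luminosity of 10^34 cm^-2 s^-1 and of order 10^6 s of actual colliding-beam time in such a month):

```python
# Rough integrated-luminosity arithmetic for the running scenario described above
design_lumi = 1e34            # assumed nominal LHC design luminosity, cm^-2 s^-1
fraction_of_design = 0.01     # running at 1% of design
collision_seconds = 1e6       # assumed effective colliding-beam time in the month, s

integrated_cm2 = design_lumi * fraction_of_design * collision_seconds
integrated_fb = integrated_cm2 / 1e39      # 1 fb^-1 corresponds to 1e39 cm^-2
print(f"integrated luminosity: ~{integrated_fb:.1f} fb^-1")   # ~0.1 fb^-1
```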
We do not know which, if any, of the theories that I have mentioned nature has chosen, but one thing is sure: once the LHC starts delivering data, our hazy view of this new energy scale will begin to clear dramatically. Particle physics stands on the threshold of a new era, in which the LHC will answer some of our deepest questions. The answers will set the agenda for future generations of particle-physics experiments.
The gas electron multiplier (GEM) detector developed at CERN by Fabio Sauli has several unique features. For example, it can operate at relatively high gains in pure noble gases, and can be combined with other devices of the same kind to operate in a cascade mode. Indeed, cascaded GEM structures now feature in several large-scale high-energy physics experiments, such as COMPASS, TOTEM and LHCb at CERN. The basic device consists of a metallized polymer foil chemically pierced to form a dense array of microscopic holes. Applying a voltage across the foil creates a high electric field in the holes which then act as tiny proportional counters, amplifying ionization charge. However, despite great progress in its development and optimization, the GEM is still a rather fragile detector. It requires very clean and dust-free conditions during its manufacture and assembly and it can be easily damaged by sparks, which are almost unavoidable when operating at high gain.
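To get a feel for the field strengths involved (an order-of-magnitude sketch only; the real field in a hole depends on the hole geometry and the drift fields on either side of the foil), dividing a typical operating voltage of a few hundred volts by the 50 μm thickness of a standard GEM foil gives tens of kV/cm inside the holes, which is what drives the avalanche multiplication:

```python
# Order-of-magnitude estimate of the electric field inside a GEM hole
voltage_across_foil = 400.0   # assumed typical voltage across a standard GEM foil, V
foil_thickness_cm = 50e-4     # standard GEM foil thickness of 50 micrometres, in cm

field_kv_per_cm = voltage_across_foil / foil_thickness_cm / 1e3
print(f"field in the holes: ~{field_kv_per_cm:.0f} kV/cm")   # ~80 kV/cm
```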
To try to overcome these problems, a few years ago a team of physicists from CERN and the Royal Institute of Technology in Stockholm developed a more robust version of the GEM, which was further improved by a team at the Weizmann Institute of Science in Rehovot. Called the thick GEM (TGEM), it is based on printed circuit boards (PCBs) metallized on both sides, with an array of tiny holes drilled through (figure 1). Typically 0.5–1.0 mm thick, it is manufactured using the standard industrial PCB processing techniques for precise drilling and etching. The TGEM has excellent rate characteristics and can operate at higher gains than the GEM, but it can still be damaged by sparks.
Now a small team from CERN and INFN has developed a new, more spark-resistant version of the GEM in which the metallic electrode layers are replaced with electrodes of resistive material. We built the first prototypes from a standard PCB 0.4 mm thick. We glued sheets of resistive kapton (100XC10E5) 50 μm thick onto both surfaces of the PCB to form resistive electrode structures, and drilled holes 0.3 mm in diameter with a pitch of 0.6 mm using a CNC machine. The surface resistivity of the material created in this way varied from 500 to 800 kΩ/square, depending on the particular sample. After the drilling was finished, the copper foils were etched from the active area of the detector (30 mm × 30 mm), leaving only a copper frame for the connection of the high-voltage wires in the circular part of the detector (figure 2). We call this the resistive-electrode thick GEM (RETGEM).
The detector operates in the following way. When a high voltage is applied to the copper frames, the kapton electrodes act as equipotential layers, owing to their finite resistivity, and the same electric field forms inside and outside of the holes as occurs in the TGEM with the metallic electrodes. So at low counting rates the detector should operate as a conventional TGEM, while at high counting rates and in the case of discharges the detector’s behaviour should be more like that of a resistive-plate chamber. The RETGEM is only seven times thicker than the conventional GEM structures and could easily be bent to form a semi-cylindrical shape, as is preferred in some cases, such as in the future NA49 experiment at CERN.
We have made systematic studies and further developments of the RETGEM in collaboration with the High Momentum Particle Identification (HMPID) group of the ALICE Collaboration and the ICARUS research group from INFN Padova. These investigations show that the maximum achievable gain before sparks appear in the RETGEM is at least 10 times higher than in the case of the conventional GEM (figure 3). Moreover, when sparks do appear at higher gains, the current in these discharges is an order of magnitude less than in the case of the TGEMs, so they do not damage either the detector or the front-end electronics.
We have since manufactured RETGEMs 1 and 2 mm thick with active areas of 30 mm × 30 mm and 70 mm × 70 mm in the TS/DEM/PMT workshop at CERN and successfully tested the devices. The maximum gain achieved was 2–3 times higher than with the device that was only about 0.4 mm thick, reaching a value of close to 10^5; as before, sparks did not damage the detector. The RETGEMs could operate at up to 10 kHz/cm^2 without a noticeable drop in the signal amplitude, while at higher counting rates the signal amplitude began dropping, as happens with resistive-plate chambers. We also found that double RETGEMs can operate stably in a cascade mode; we observed no charging-up effect despite the high resistivity of the electrodes and achieved gains close to 10^6 with the double-step RETGEMs.
The most interesting discovery was that if we coat the cathode of the RETGEM with a caesium iodide (CsI) photosensitive layer, the detector acquires high sensitivity to ultraviolet light – an approach that has already been used with the conventional GEM with metallic electrodes. In contrast to these earlier attempts, however, in our case, the CsI was deposited directly onto the dielectric layer, that is, there was no metallic substrate present. Surprisingly enough, this detector worked very stably in the pulse-counting mode, easily achieving gains of 6 × 10^5 in double-step operation. The measured quantum efficiency was 34% at a wavelength of 120 nm, which is sufficient for some applications such as ring imaging Cherenkov detectors (RICH) or for the detection of the scintillation light from the noble liquids.
These studies have shown that RETGEMs can compete with the GEM in many applications that do not require very fine position resolution. Indeed the RETGEM offers a maximum achievable gain that is 10 times higher, is intrinsically protected against sparks and is thus very robust, can be assembled in ordinary laboratory conditions without using a clean room, and can operate in poorly quenched gases and gas mixtures. Other resistive coatings could also be used and the resistivity optimized for each application.
We believe that the new detector will have a great future and will find a wide range of applications in many areas. In high-energy physics it can be used, for example, in RICH and muon detectors, in calorimetry and in noble-liquid time projection chambers.
• The RETGEM team comprises Rui de Oliveira (CERN TS/DEM/PMT workshop), Paolo Martinengo (ALICE HMPID group), Vladimir Peskov (ALICE HMPID group), Francesco Pietropaolo (INFN Padova) and Pio Picchi (INFN Frascati).
The International Committee for Future Accelerators (ICFA) has released the Reference Design Report (RDR) for a future International Linear Collider (ILC). The report provides the first detailed technical description of the machine, including a cost estimate, and is a major step towards the engineering design report that would underlie a formal project proposal.
The concept behind the ILC is a high-luminosity electron–positron collider, operating at centre-of-mass energies of 200–500 GeV, with a possible upgrade to 1 TeV. The first map of physics at the tera-electron-volt scale will come from CERN’s LHC; the ILC would expand on the discoveries made in this new energy region, investigating it with high precision.
ICFA established the basis for the design in August 2004 when it accepted the advice of the International Technology Recommendation Panel to opt for superconducting radio-frequency (SCRF) accelerating cavities operating at 1.3 GHz. A year later the Global Design Effort (GDE), a team of more than 60 scientists, was officially formed to define the basic parameters and layout and develop the reference design.
The RDR defines the technical specifications for a 31 km long machine, which would deliver a peak luminosity of about 2 × 10^34 cm^-2 s^-1, at a top centre-of-mass energy of 500 GeV. The basic design achieves this high luminosity through a combination of small emittance beams and high beam power, facilitated by the use of 1.3 GHz SCRF. The design also allows for an upgrade to a 50 km, 1 TeV machine during the second stage of the project.
The major components start with a polarized electron source based on a photocathode DC gun and an undulator-based positron source, driven by a 150 GeV electron beam. The particles produced will then pass to 5 GeV electron and positron damping rings at the centre of the ILC complex, before being transported to the main linacs, where each beam will enter a bunch-compressor system prior to injection. The two 11 km long main linacs will use the 1.3 GHz SCRF cavities operating at an average gradient of 31.5 MV/m, with a pulse length of 1.6 ms and a cycle rate of 5 Hz. Finally, a 4.5 km long beam-delivery system will bring the two beams into collision at a 14 mrad crossing angle. Two detectors in a “push–pull” configuration will share the luminosity at the single interaction point.
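A quick consistency check on these numbers (a sketch assuming each beam must gain roughly 250 GeV in its main linac; the difference between the resulting active length and the quoted 11 km is taken up by quadrupoles, interconnections and other non-accelerating hardware):

```python
# Rough check: active cavity length implied by the quoted gradient
energy_gain_ev = 250e9        # assumed approximate energy gain per beam, eV
gradient_v_per_m = 31.5e6     # average accelerating gradient of 31.5 MV/m

active_length_km = energy_gain_ev / gradient_v_per_m / 1e3
print(f"active cavity length: ~{active_length_km:.1f} km")   # ~7.9 km, within an 11 km linac
```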
As part of the RDR, the GDE members also produced a preliminary value estimate of the cost for the ILC. This estimate contains three elements: €1480 million ($1800 million) for site-related costs, such as for tunnelling in a specific region; €4040 million ($4900 million) for the value of the high technology and conventional components; and approximately 2000 people a year, or 13,000 person years, for the supporting manpower. Some 43% of the total costs come from the SCRF technology for the main linacs.
The value cost estimate provides guidance for optimization of both the design and the R&D to be done during the engineering design phase, which will formally start in the autumn. The global R&D effort will continue to focus on the performance of the high-gradient accelerating cavities. These are key components as the gradient governs the lengths of the linacs. The goal of an average operational gradient of 31.5 MV/m translates to a minimum of 35 MV/m in acceptance tests during mass production of the cavities. The next major milestone for the GDE will then be to produce the engineering design report – the detailed blueprints for building the machine – by 2010.