The world’s largest silicon tracking detector is now in its final location in the CMS detector at CERN. This completes the installation of sub-detectors inside CMS’s huge solenoid magnet, which was lowered into the experiment’s cavern on the LHC ring on 28 February last year.
With a total surface area of 205 m², the CMS Silicon Strip Tracking Detector is the largest detector of its kind ever constructed. Its sensors provide 10 million individual detection strips, each of which is read out by one of 80,000 custom-designed microelectronics chips. The silicon sensors are precisely assembled on 15,200 modules, which are in turn mounted on an extremely low-mass carbon-fibre structure that maintains the position of the sensors to within 100 μm. The sensors will allow the charged particles produced in the LHC’s collisions at the heart of the detector to be tracked with a precision of better than 20 μm.
The overall assembly of the silicon tracking detector began in December 2006 and was completed in March 2007. All of the systems were then fully commissioned, with 20% of the full detector operating over several months, during which it recorded 5 million cosmic-ray tracks. This commissioning demonstrated that the detector fully meets the experiment’s requirements.
Finally, in the early hours of 13 December the detector began its journey from the main CERN site to the site of the CMS experiment near Cessy, France. Later that day it was lowered 90 m into the cavern. Installation began on 15 December and was concluded the following morning.
More than 500 scientists and engineers from 51 research institutions worldwide have contributed to the success of the project. These institutions are located in Austria, Belgium, Finland, France, Germany, Italy, Switzerland, the UK and the US, as well as at CERN.
Within one week in December 2007, particle physicists in the UK and the US received unexpectedly bad budget news, which rocked the two communities. Together, the funding decisions have dealt a heavy blow to work on a future International Linear Collider (ILC).
On 11 December the UK’s Science and Technology Facilities Council (STFC) announced its Delivery Plan for 2008/9 to 2011/12. The plan sets out how the council intends to deliver world-class science, in part through providing access to international facilities, within the finances allocated in the 2007 Comprehensive Spending Review. Though this review gave the STFC an increase of some 13.5% over the period in question, the news for UK particle physicists – and their colleagues in astronomy – was far from good. The most serious aspect for particle physicists was summed up in a simple statement: “We will cease investment in the International Linear Collider.” Astronomers received the news of withdrawal from “future investment in the twin 8-m Gemini telescopes”. The consequences of the overall UK budget announcement are still being assessed, but redundancies are likely.
This news immediately reverberated around the world, as the UK was a major contributor to the ILC, but bad news was also in store for their colleagues in the US. A week later, on 18 December, the US budget for fiscal year 2008 was finally announced after several delays. In the rush to get the budget approved, several projects suffered big reductions, including “$0 for the US contribution to ITER [the international fusion project]”, and no funds for the NOvA neutrino experiment at Fermilab. In addition the budget allowed for only 25% – $15 million instead of $60 million – of the amount requested for R&D on the ILC. This is much worse than it appears, as the US system works in such a way that FY2008 began last October, so this allocation may already have been spent.
While these two adverse developments represent a major setback for the ILC, there are also immediate ramifications for personnel at Fermilab and SLAC. Pierre Oddone, Fermilab’s director, had the unenviable task of announcing that some 200 layoffs from a workforce of about 2000 would probably be necessary, and that employees would now have two days of enforced unpaid leave a month. Persis Drell, in her new role as director of SLAC, had to announce that work on the ILC had to stop on 1 January and that the B-factory would have to shut down prematurely. The laboratory would have to reduce its workforce by about 15%, implying 125 layoffs in addition to the nearly 100 announced previously as SLAC changes focus in its research.
CERN director-general Robert Aymar, in his end-of-year status account to Council, reported on a year of progress at the LHC, which is due to start operation in the summer.
The machine components are now fully installed in the 27 km tunnel and commissioning is well underway. The successful commissioning of the second of the two transfer lines that will carry beams into the collider took place at the end of October, at the first attempt. Two of the LHC’s eight sectors are currently cooling down to their operating temperature of 1.9 K and a further three sectors are being prepared for cool-down. More good news included a successful pressure test of sector 1-2 on 8 December. This was the final sector to undergo this test, which assesses the ability of the mechanical design to withstand a pressure 25% above its design value.
Aymar told Council that CERN is on course for the LHC to start up in early summer 2008. However, it will not be possible to fix a definite date before the whole machine is cold and magnet electrical tests are positive. This should be in the spring, but any difficulties encountered during the commissioning that require a sector of the machine to be warmed up will lead to a delay of two to three months.
Installation of the LHC detectors is approaching its conclusion, and the collaborations are turning more attention towards physics analysis, including testing of the full data chains from the detectors through the Grid to data storage. All of the collaborations expect to have their initial detectors ready for April. Some are already routinely taking data with cosmic rays, and baseline Grid services are in daily operation.
Council also approved a budget for CERN in 2008 that will allow consolidation of CERN’s aging infrastructure to begin, together with provision for preparations for an intensity upgrade for the LHC. This paves the way for the renovation of the LHC’s injector complex, including replacement of the venerable PS, which was first switched on in 1959. This process will allow the LHC’s beam intensity to be increased by around 2016, thereby improving the sensitivity of the experiments to rare phenomena. The 2008 budget includes additional funds for this work, with special contributions being made by CERN’s host states, France and Switzerland.
On 14 December, at its 145th meeting, CERN Council appointed Rolf-Dieter Heuer to succeed Robert Aymar as CERN’s director-general. Heuer will take office on 1 January 2009 and serve a five-year term. His mandate will cover the early years of operation of the LHC and its first scientific results.
Heuer is currently research director for particle and astroparticle physics at the DESY laboratory in Hamburg, but is no stranger to CERN. From 1984 to 1998, he was a staff member at the laboratory, working for the OPAL collaboration at LEP. He was also OPAL’s spokesperson from 1994 to 1998.
Since obtaining a doctorate from the University of Heidelberg in 1977, Heuer has spent much of his career involved with the construction and operation of large particle detector systems for studying electron–positron collisions. After leaving CERN in 1998 and joining the University of Hamburg, he founded a group working on preparations for experiments at a possible future electron–positron collider. With his appointment at DESY in 2004, he became responsible for research at the HERA collider, DESY’s participation in the LHC, and R&D for a future electron–positron collider.
By Weimin Wu, World Scientific Publishing. Hardback ISBN 9812705600 £29 ($54).
Weimin Wu has led an extraordinary life. Arriving at Fudan University in 1960 at age 17, he was inducted into a special “Section Zero” – by day he studied nuclear physics, in the evenings he helped with research into uranium-enrichment techniques for China’s atomic bomb.
In 1965 he moved to Lanzhou University as a graduate student, but a year later the Great Proletarian Cultural Revolution burst over China and graduate students were a major target. Wu was packed off to an arid mountain region, where he worked as a shepherd, lived in a cave and survived on potatoes, wild plants, rainwater and melted snow. When he was allowed back to Lanzhou, he found that his supervisor had been accused of reactionary scholarship and landlordism, and assigned to clean toilets. Wu himself was soon sent to be a labourer. A commissar rescued him in 1969 and employed his skills to help develop the launch control system of China’s first artificial satellite.
After the end of the Cultural Revolution in 1976, the Chinese government discovered that intellectuals were “part of the working class”. But as Wu writes, “the era that destroyed the talents of many also had cast a dark shadow over them for a lifetime”.
In 1978 Wu joined the group tasked with building China’s first particle accelerator and in 1980 he came to CERN for two years, joining Jack Steinberger’s CDHS neutrino group. Back in Beijing he led the Chinese group involved in ALEPH’s muon detectors, and in June 1989 he observed the first J/Ψ particle to be seen at BES, the Beijing spectrometer. Three weeks earlier he had participated in pro-democracy demonstrations and witnessed the army’s repression in Tiananmen Square. Shortly afterwards he left China and found sanctuary at Fermilab, where he now works on the CMS experiment.
Of the two achievements closest to his heart, one occurred on 25 August 1986 when, despite technical and political obstacles, he sent the first e-mail from China (to Jack Steinberger at CERN). The other achievement was this book of photographs.
Wu has been taking photographs since he was 12. He takes his subjects mostly from nature and from the places and people in his life. Many of his photographs are romantic images of flowers, sunsets, rainbows and landscapes. Several are more mysterious, such as a green swimming pool, lit from within, in a city at night. “To me,” Wu writes, “physics and photography are like a pair of twin sisters.” Both require elegance, conciseness and the good luck that “is granted only to those who are prepared”.
The book includes 12 pages of episodes from Wu’s life, 12 pages by him about his photography, and more than 100 pages of his photographs divided into “Flowers”, “Landscape”, “People” and “The Beauty of Physics” – a selection of photos that remind him of physical concepts, with titles such as Latticework and Multidimensional Space, including the cover, which shows Birds of a Feather Flock Together.
What combinations of neutrons and protons can form a bound nucleus? The long-elusive answer continues to stimulate nuclear physicists. Even now, decades after most of the basic properties of stable nuclei have been discovered, a fundamental theory of the nuclear force is still lacking, and theoretical predictions of the limits of nuclear stability are unreliable. So, the task of finding these limits falls to experimentalists – who continue to find surprises among super-heavy isotopes of elements immediately beyond oxygen.
At the National Superconducting Cyclotron Laboratory (NSCL) at Michigan State University we recently discovered several new neutron-rich isotopes: ⁴⁴Si, ⁴⁰Mg, ⁴²Al and ⁴³Al (Tarasov et al. 2007; Baumann et al. 2007). These are at the neutron drip line – the limit in the number of neutrons that can bind to a given number of protons. The result confirms that these exotic neutron-rich nuclei gain stability from an unpaired proton, which narrows the normal gaps between shells and provides the opportunity to bind many more neutrons (Thoennessen 2004). This feature was firmly established in 2002 by the significant difference between the heaviest isotopes of oxygen (²⁴O₁₆) and fluorine (³¹F₂₂). However, our observation of such ostensibly strange behaviour is still novel, since in stable nuclei the attractive pairing interaction generally enhances the stability of “even–even” isotopes with even numbers of protons and neutrons.
The recent experiment at NSCL clearly identified three events of ⁴⁰Mg in addition to many events of the isotopes previously observed, namely ³¹F, ³⁴Ne, ³⁷Na and ⁴⁴Si (figure 1). It also confirmed that ³⁰F, ³³Ne and ³⁶Na are unbound, as there were no events; the lack of events corresponding to ³⁹Mg indicates that it too is unbound. Furthermore, the 23 events of ⁴²Al establish its discovery. The data also contain one event consistent with ⁴³Al. Owing to the attractive neutron pairing interaction, the firm observation of the odd–odd isotope ⁴²Al₂₉ supports the existence of ⁴³Al₃₀ and lends credibility to the interpretation of the single event as evidence for the existence of this nucleus.
The discovery of the even–even isotope ⁴⁰Mg₂₈ is consistent with the predictions of two leading theoretical models, as well as with the experimentally confirmed staggered pattern of the drip line in this region (figure 2). It is interesting to note that if this experiment had not observed ⁴⁰Mg, the drip line might have been considered to have been determined up to magnesium. However, with the observation of ⁴⁰Mg, the question remains open as to whether ³¹F, ³⁴Ne, ³⁷Na and ⁴⁰Mg are in fact the last bound isotopes of fluorine, neon, sodium and magnesium, respectively.
More important than the observation of the even–even ⁴⁰Mg is the discovery of the odd–odd ⁴²Al, which two leading theoretical models predicted to be unbound. The latest observation breaks the pattern of staggering at the drip line, somewhat akin to the situation at fluorine. In fact, it now appears possible that heavier nuclei up to ⁴⁷Al may also be bound.
For many decades, the point at which the binding energy for a proton or a neutron goes to zero has been a clear-cut benchmark for models of the atomic nucleus. The drip line is the demarcation line between the last bound isotope and its unbound neighbour and each chemical element has a lightest (proton drip line) and a heaviest (neutron drip line) nucleus.
The proton drip line is relatively well established for most of the elements because the Coulomb repulsion among protons has a dramatic destabilizing effect on nuclei with significantly fewer neutrons than protons. On the other hand, the neutron-binding energy only gradually approaches zero as the neutron number increases. Subtle quantum-mechanical effects such as neutron pairing and energy-level bunching end up determining the stability of the heaviest isotope of each element. The weak binding of the most neutron-rich nuclei leads to the phenomena of neutron “skins” and “halos”, which give these nuclei some unusual properties.
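To make the binding-energy argument concrete, it can be sketched numerically. The snippet below evaluates the semi-empirical (liquid-drop) mass formula with standard textbook coefficients and computes one-neutron separation energies for magnesium isotopes. It is only an illustration, not the FRDM or HFB calculations described in this article: the coefficients are assumed values, and a smooth macroscopic formula reproduces the odd–even pairing staggering but not the shell effects that decide the real drip line.

```python
# Hedged illustration: the neutron drip line is where the one-neutron
# separation energy S_n = B(Z, N) - B(Z, N-1) drops below zero.  This
# sketch uses the semi-empirical (liquid-drop) mass formula with
# textbook coefficients -- a smooth macroscopic model that misses the
# shell effects emphasized in the article.

def binding_energy(Z, N):
    """Liquid-drop binding energy in MeV (coefficients are illustrative)."""
    A = Z + N
    delta = 0.0
    if Z % 2 == 0 and N % 2 == 0:       # even-even: pairing adds binding
        delta = 11.18 / A**0.5
    elif Z % 2 == 1 and N % 2 == 1:     # odd-odd: pairing removes binding
        delta = -11.18 / A**0.5
    return (15.75 * A                    # volume term
            - 17.8 * A**(2/3)            # surface term
            - 0.711 * Z * (Z - 1) / A**(1/3)  # Coulomb repulsion
            - 23.7 * (A - 2 * Z)**2 / A  # neutron-proton asymmetry
            + delta)                     # pairing

def s_n(Z, N):
    """One-neutron separation energy in MeV."""
    return binding_energy(Z, N) - binding_energy(Z, N - 1)

# Scan magnesium (Z = 12): S_n staggers with the pairing term and
# gradually sinks towards zero -- a crude picture of the drip line.
for N in range(20, 30):
    print(f"N = {N:2d}  A = {12 + N:2d}  S_n = {s_n(12, N):6.2f} MeV")
```

The even–odd staggering in the printed values shows why the drip line can jump by several neutrons between neighbouring elements, even though this crude model cannot place it reliably.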
The only method available at present to produce nuclei at, or near, the neutron drip line is through the fragmentation of stable nuclei followed by the separation and identification of the products in less than a microsecond (Thoennessen 2004). The fragmentation reactions produce a statistical distribution of products with a large range of excitation energies. The excitation energy is dissipated by particle emission through strong decay (primarily neutrons and protons) and then by electromagnetic decay before the fragments reach the detectors. The Coulomb force also favours the emission of neutrons, suppressing the production of the most neutron-rich products.
Current knowledge of the neutron drip line is limited to only the lightest nuclei. The portion of the chart of nuclides in figure 2 shows the known geography of the drip line and the variation in the predictions from two widely respected theoretical models. Researchers first observed the heaviest bound oxygen isotope, ²⁴O, in 1970. However, it was much later before experiments showed that the nuclei ²⁵O through ²⁸O are unbound with respect to prompt neutron emission. Only in 1997 did nuclear physicists consider the drip line for oxygen to be established. Subsequently, the isotopes ³¹F, ³⁴Ne and ³⁷Na have been observed. Although no experiment has established that ³³F, ³⁶Ne and ³⁹Na are unbound, these heavier isotopes probably do lie beyond the neutron drip line. These earlier experiments also failed to observe the even–even nucleus ⁴⁰Mg, and researchers even speculated that ⁴⁰Mg might be unbound.
On the theoretical side, the finite-range droplet model (FRDM) uses a semi-classical description of the macroscopic contributions to the nuclear binding energy, which is augmented with microscopic corrections arising from local single-particle shell structure and the pairing of nucleons (Möller et al. 1995) – this gives the solid black line in figure 2. Another theoretical framework, the fully microscopic Hartree–Fock–Bogoliubov model (HFB-8), is a state-of-the-art quantum-mechanical calculation that puts the nucleons into a mean field with a Skyrme interaction, including pairing (Samyn et al. 2004). This is the dashed green line in figure 2. Although in many cases both models correctly predict the location of the neutron drip line, they cannot account for the detailed interplay of valence protons and neutrons, even among the oxygen and fluorine isotopes. The discrepancies between the models are still more apparent in the magnesium to silicon region.
The recent observations at NSCL required high primary-beam intensity, high collection efficiency, high efficiency for identification and – perhaps most importantly – a high degree of purification, as the sought-after rare isotopes are produced infrequently, in approximately 1 in 10¹⁵ reactions. Currently, the worldwide nuclear-science community is anticipating several new facilities, including the Facility for Antiproton and Ion Research in Germany, the Radioisotope Beam Factory in Japan and the Facility for Rare-Isotope Beams in the US. These facilities are needed for many reasons, including advancing the study of rare isotopes and investigating the limits of existence of atomic nuclei.
The result from NSCL is one among many that hint at scientific surprises associated with the ongoing pursuit of exotic, neutron-rich nuclei. A thorough and nuanced understanding of the nuclear force may remain beyond the collective grasp of nuclear science for some time, but the drip line beyond oxygen – even if further out than previously expected – continues to beckon.
On 10 December 1957, two young Chinese-Americans arrived in Stockholm to collect the Nobel Prize in Physics – the first Chinese scientists to receive the award. Tsung-Dao Lee and Chen Ning Yang received their share of the best-known prize in science for work done in the summer of 1956, in which they proposed that the weak force is not symmetric with respect to parity – the reversal of all spatial directions. The bestowal of the Nobel was a fitting end to a tumultuous year for physics, which began with results from an experiment on nuclear beta-decay in which Chien-Shiung Wu and collaborators had shown that Lee and Yang were correct: nature violates parity symmetry in weak interactions.
Half a century later, Lee continues to focus on understanding the basic constituents of matter and, in particular, symmetry in fundamental particles, though much has changed in the intervening years. “Our concept of what matter is made of, 50 years later, is very different,” he points out. “Today, we now know that all matter is made of 12 particles: six quarks and six leptons. The constituents of all matter – not dark matter, not dark energy but our kind of matter – every star, our Milky Way, all the galaxies in the universe are made of these 12.”
These 12 particles, divided into four families, each with three particles of the same charge, form the basis of the current Standard Model of particle physics. They are what students first learn about the subject. In 1957, however, physicists had a clear knowledge of only two of these – the electron and the muon, both charged leptons. Quarks lay in the future, and the neutrino associated with nuclear beta-decay had been detected for the first time only the previous year. Since then, the field has blossomed with the discovery of a total of six kinds of quark, a third charged lepton (the τ) and three kinds of neutrino. Recalling the situation five decades ago, Lee explains: “We knew a form of neutrino, but we didn’t know how to make a coherent mixture of all these three.” Now, one of Lee’s main interests lies in the phenomenon of mixing in the leptons and quarks, described by two 3 × 3 matrices, which he calls the cornerstones of particle physics.
Lee is fiercely proud of the progress in particle physics, and believes that the second half of the 20th century was as rich as the latter part of the 19th century. The 1890s saw the discovery of the electron, and Ernest Rutherford opened the door to a new world with his work on alpha, beta and gamma radiation. Here already, Lee observes, was much of 20th-century physics – alpha, beta and gamma decay, respectively, occur through the strong, weak and electromagnetic interactions, which underpin the Standard Model. “Now, 100 years later, we realize that all of our kind of matter is made of 12 particles, divided into four families, each of three particles of the same charge – that is fantastic.” He believes that the field is poised to lead to more great physics out of a better understanding of these basic constituents.
Lee’s contributions to particle physics during the past 50 years have been equally impressive. Growing up in China, he proved to be an excellent student and in 1946 he was able to go to the University of Chicago on a Chinese government scholarship. He gained his PhD under Enrico Fermi in 1950 and in 1953 was appointed assistant professor of physics at Columbia University, where he remains an active member of the faculty. His work in particle physics ranges from the ethereal, almost insubstantial world of the weakly interacting neutrinos to the rich, dense soup of the strongly interacting quark–gluon plasma.
At Columbia in the late 1950s, Lee’s enthusiasm for studying weak interactions at higher energies than in particle decays helped to inspire an experimentalist colleague, Melvin Schwartz, to work out how to make a beam of neutrinos. This led to the famous experiment at Brookhaven National Laboratory in 1962, which showed that there are two different neutrinos associated with the electron and the muon. Two years later, Brookhaven was again the scene of a groundbreaking experiment, when James Cronin, Val Fitch and colleagues discovered that the combined symmetry of charge-conjugation and parity (CP) is violated in the decays of neutral kaons. This phenomenon was eventually understood in the context of six kinds of quark and their 3 × 3 mixing matrix – a major focus of Lee’s current work.
At around this time, Lee also made a seminal contribution to field theory, which would ultimately be an important part of QCD, the theory of strong interactions. What is now known as the Kinoshita–Lee–Nauenberg theorem deals with a problem of infrared divergences in gauge theories. In QCD, this underlies our understanding of the production of jets from quarks and gluons – a topic of key importance at particle colliders, from the early days of SPEAR at SLAC to the LHC, about to start up at CERN.
However, it is in the physics of hot, dense QCD matter in the form of a quark–gluon plasma that Lee has made one of his most important marks, pushing people to realize that it was indeed possible to observe this exciting new state of matter. In 1974, at a time when experimentalists were concentrating on smaller and smaller scales, he put forward the idea that “It would be interesting to explore new phenomena by distributing high energy or high nuclear density over relatively large volume.” In particular, he saw the possibility of restoring broken symmetries of the vacuum in collisions of heavy nuclei. This was one of the inspirations behind those who pushed for the RHIC collider at Brookhaven, and Lee witnessed the results emerging on strongly interacting quark–gluon plasma during the past few years with excitement. He also sees a possible link between the physics of heavy-ion collisions and the physics of dark energy. Both could involve a collective field – a scalar – that in the presence of a matter field can generate a negative pressure. “I believe the heavy-ion programme at the LHC will be very important to explore this possibility.”
During Lee’s recent visit to CERN, he saw the enormous effort now going into preparations for the LHC. So what does he think the experiments there will find? While he expects the LHC to make important discoveries, including evidence for new particles such as the Higgs boson, his continuing thoughts about symmetry in the universe lead to more personal predictions.
He believes that asymmetries in parity, charge conjugation and time reversal are not asymmetries of the fundamental laws of physics. Instead, he thinks it is likely that they are “asymmetries of the solutions, namely the Big Bang universe we live in – it is the solution that is not symmetrical”. In other words, he sees CP violation as an effect of spontaneous symmetry-breaking. In this case, says Lee, there is a possibility of finding right-handed W and Z particles to match the left-handed Ws and Z already known. Other new particles could be massive partners for the massless graviton, just as the massless photon has heavy partners in the W and Z. “They will have to be uncovered, and the LHC might also be the first window on that.”
The promise that the LHC holds for the future fits well with Lee’s overall view of the state of particle physics. “It will be a turning point. By what we discover here we will also know what to do as a next step. We expect that the LHC will give us a world of discoveries that will set the route for our future explorations,” he says. Half a century after his Nobel prize, he retains an inspiring optimism: “I believe that the beginning of the 21st century will be as important for physics as the beginning, the first 50 years, of the 20th century, and the LHC is going to be the machine to make the first discovery – so it is very lucky to be here.”
When the LHC begins to open up a new high-energy frontier, it will achieve the highest concentration of energy in operations with lead ions. The collisions, each involving around 400 nucleons with a total energy of more than 1000 TeV, will create strongly interacting, hot, dense matter – a melting pot of quarks and gluons called quark–gluon plasma. This matter will exist for only an instant, and the main goal of the ALICE experiment is to search for evidence of its existence among the many thousands of particles emerging from each collision. One important piece of evidence will be the detection of dimuons – pairs of muons of opposite sign. For this reason, the muon spectrometer, which incorporates some of the first detectors installed in the ALICE underground cavern at Point 2 on the LHC ring, has a key role.
Dimuons are emitted in the decays of vector mesons containing heavy quarks, such as the J/Ψ, the Ψ’, and members of the Υ family. Dimuons will also reveal the decays of light vector mesons (φ, ρ and ω) and of particles with open charm and beauty. The heavy quarkonia states represent one of the most powerful methods to probe the nature of the medium produced in the early stages of the heavy-ion collisions. Indeed, more than 20 years ago, Tetsuo Matsui and Helmut Satz pointed out that J/Ψ production should be suppressed if a quark–gluon plasma is formed in the collision. This provides a strong motivation for experimental studies of J/Ψ and Ψ’ production, undertaken at the energies of the SPS at CERN and RHIC at Brookhaven National Laboratory (BNL). The LHC will be special, however, because two families of resonances (J/Ψ and Υ) rather than one will be experimentally accessible, thanks to the higher beam energy. In addition, the temperature of the quark–gluon “bath” at the LHC is expected to be high enough to “melt” all or most of the Υ states.
As in many experiments, including ATLAS and CMS at the LHC, the role of the muon spectrometer is to detect muons and measure their momenta from the bending of their tracks in a magnetic field. However, there are some very specific aspects of the spectrometer’s design because the ALICE experiment will specialize in studying heavy-ion collisions. In ATLAS and CMS, the muon spectrometer follows the “barrel and endcaps” construction based on a toroidal or solenoidal magnetic field. ALICE also has a central “barrel” of detectors inside the large-aperture solenoid magnet from the L3 experiment at LEP, but the muon spectrometer – with its own large dipole magnet – is located at one side of the barrel, where it will detect muons emitted at small angles with respect to the beam. Isolating muons in heavy-ion collisions requires a large amount of material (absorber) to reduce the huge numbers of hadrons, but the absorbers also stop low-energy muons. So, the measurement of vector mesons (in particular the J/Ψ and Ψ’) of low transverse momentum (pt) is feasible only at small angles, where the muons emitted in their decay have rather high energies owing to the Lorentz boost.
Both the special environment of the heavy-ion collisions and the physics involved have led to other important criteria for the design of the spectrometer. For example, the tracking detectors must be able to handle the high multiplicity of charged particles that are produced. Also, the accuracy of the dimuon measurements is limited by statistics (at least for the Υ family), so the geometrical acceptance must be as large as possible.
The main goal will be to resolve the peaks of the Υ, Υ’ and Υ”, which requires a resolution of 100 MeV/c² for masses around 10 GeV/c². This in turn determines the bending strength of the spectrometer magnet as well as the spatial resolution of the muon tracking system. It also imposes the need to minimize multiple scattering in the structure and carefully optimize the absorbers. Finally, the spectrometer has to be equipped with a dimuon trigger system that matches the maximum trigger rate handled by the ALICE data acquisition.
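The resolution target can be made concrete with the invariant-mass formula for a muon pair, m² = (E₁ + E₂)² − |p₁ + p₂|². The sketch below is illustrative only: the back-to-back decay topology and the 1% mis-measurement are assumed for simplicity, and only the roughly 10 GeV/c² Υ mass scale comes from the text.

```python
import math

# Hedged sketch: why resolving the Upsilon peaks demands good momentum
# resolution.  The pair mass follows from the two muon four-momenta:
#   m^2 = (E1 + E2)^2 - |p1 + p2|^2

MUON_MASS = 0.10566  # GeV/c^2

def invariant_mass(p1, p2):
    """Invariant mass (GeV/c^2) of a muon pair from two 3-momenta (GeV/c)."""
    e1 = math.sqrt(MUON_MASS**2 + sum(c * c for c in p1))
    e2 = math.sqrt(MUON_MASS**2 + sum(c * c for c in p2))
    e = e1 + e2
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(e * e - (px * px + py * py + pz * pz))

# Two 4.73 GeV/c muons emitted back to back reconstruct close to the
# Upsilon(1S) mass of about 9.46 GeV/c^2 ...
m = invariant_mass((0, 0, 4.73), (0, 0, -4.73))
# ... while a 1% momentum mis-measurement on one muon shifts the pair
# mass by roughly 0.5%, i.e. about 50 MeV/c^2 at the Upsilon -- already
# half of the 100 MeV/c^2 resolution budget.
m_shift = invariant_mass((0, 0, 4.73 * 1.01), (0, 0, -4.73))
print(f"m = {m:.3f} GeV/c^2, shifted m = {m_shift:.3f} GeV/c^2")
```

The arithmetic shows how directly the per-muon momentum resolution propagates into the dimuon mass peak width.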
Figure 1 shows the main components of the spectrometer. Closest to the interaction region, there is a front absorber, to remove hadrons and photons emerging from the collision. Five pairs of high-granularity detector planes form the tracking system within the field of the large dipole magnet. Beyond the magnet is a passive muon filter wall, followed by two pairs of trigger chamber planes. In addition, there is an inner beam shield to protect the chambers from particles and secondaries produced at small angles.
The absorbers have a crucial role, so the collaboration has taken great care in their design. The front absorber has to remove hadrons coming from the interaction region without creating further particles and without affecting muons that come from vector-meson decays. This absorber is located inside the L3 magnet. It has a composite structure of different materials to limit small-angle scattering and energy loss by the muons, and to protect other detectors in ALICE from secondary particles produced in the absorber itself.
Building such a complex item was an impressive international effort. The tungsten came from China, the aluminium from Armenia, the steel from Finland, the graphite from India, the borated polyethylene from Italy, the lead from the UK and the concrete from France. Engineers from Russia and CERN designed the absorber, the Chinese assembled it at CERN and the International Science and Technology Centre in Moscow provided part of the funding.
The spectrometer itself is shielded throughout its length by the beam shield. This is a dense absorber tube made of some 100 tonnes of tungsten, lead and stainless steel, which surrounds the beam pipe. The inner vacuum chamber has an open-angle conical geometry to reduce background particle interactions along the length of the spectrometer.
While the front absorber and the beam shield are sufficient to protect the tracking chambers, the trigger chambers need additional protection. This is provided by an iron wall about 1 m thick – the muon filter – located between the last tracking chamber and the first trigger chamber. Together, the front absorber and the muon filter stop muons with momentum of less than 4 GeV/c.
The spectrometer is constructed around a dipole magnet that is among the largest ever built using resistive coils (figure 2). With a gap between poles of about 3.5 m and a yoke about 9 m high, it weighs 850 tonnes. To provide the required resolution on the dimuon mass, it has a field of 0.7 T, with a field integral of 3 Tm between the interaction point and the muon filter.
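As a rough illustration of what a 3 Tm field integral buys, the standard bending relation for a charged particle in a magnetic field gives the deflection angle (a back-of-envelope estimate, not the collaboration's actual resolution budget):

```latex
\theta \;\approx\; \frac{0.3 \int B\,\mathrm{d}l\ [\mathrm{T\,m}]}{p\ [\mathrm{GeV}/c]}
\;\approx\; \frac{0.3 \times 3}{10} \;\approx\; 0.09\ \mathrm{rad}
\quad\text{for a 10 GeV/}c\ \text{muon},
```

a bend large enough to be measured precisely by tracking chambers with ~100 μm resolution.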
There are two main requirements that underpin the design of the tracking system: a spatial resolution better than 100 μm and the capability to operate in a high-multiplicity environment. For central lead–lead collisions, even after the absorbers have done their work, a few hundred particles will still hit the muon chambers, with a maximum hit density of about 5 × 10⁻² cm⁻². Moreover, the system has to cover an area of about 100 m².
These demands led to the choice of cathode-pad chambers to detect the muons. There are 10 planes of chambers in all, arranged in pairs to form five stations: two before the dipole magnet, one inside it and two after. Each chamber has two cathode planes to provide 2D hit information. The read-out pads are highly segmented to keep the occupancy down to around 5%. For example, in the region of the first station close to the beam pipe, where the multiplicity will be highest, the pads are as small as 4.2 × 6.3 mm². Then, as the hit density decreases with distance from the beam, larger pads are used at larger radii. This keeps the total number of electronics channels to about 1 million.
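The quoted numbers can be sanity-checked with a back-of-envelope estimate. The cluster-size factor below is an assumed typical value (the induced charge spreads over a few neighbouring pads) and is not taken from the article:

```python
# Order-of-magnitude occupancy estimate for the smallest pads of station 1.
hit_density = 5e-2        # maximum hit density, hits per cm^2 (central Pb-Pb)
pad_area = 0.42 * 0.63    # cm^2, the 4.2 x 6.3 mm^2 pads near the beam pipe
cluster_pads = 4          # assumed pads fired per hit (hypothetical value)

occupancy = hit_density * pad_area * cluster_pads
print(f"occupancy ~ {occupancy:.1%}")  # ~5%, consistent with the design figure
```

With coarser pads at larger radii, the occupancy stays roughly constant across the chambers even as the hit density falls.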
To minimize multiple scattering of the muons, the chambers are constructed from composite materials such as carbon fibre. This technology allows for extremely thin yet rigid detectors, giving a chamber thickness as small as 0.03 radiation lengths. The tracking stations vary in size, ranging from a few square metres for station 1 to more than 30 m² for station 5. This led to two basic chamber designs. The chambers in the first two stations have a quadrant structure, with the read-out electronics distributed over their surface (figure 3). For the other stations, the chambers have an overlapping slat structure (figure 4), with the electronics mounted on the sides of the slats. The maximum size of the slats is 40 × 240 cm².
The front-end electronics for reading out the signals from the tracking chambers is based on custom-designed VLSI chips developed within the ALICE collaboration. The system uses the MANAS chip, which was derived from the GASSIPLEX chip used for other detectors in ALICE, together with the MARC chip. The gain dispersion between the different channels is about 3% – essential for achieving the desired invariant-mass resolution. The electronics is completed by the CROCUS system, designed specifically to read out the tracking chambers.
The alignment of the tracking chambers is crucial for achieving the required invariant-mass resolution, so there will be a strict procedure to follow when ALICE is running. There will be dedicated runs without magnetic field for aligning the chambers with straight muon tracks. Then, during standard data taking, a dedicated monitoring system will record any displacement with respect to the initial geometry, which can occur for a variety of reasons, including the switching on of the magnet. The geometry-monitoring system consists of 460 optical sensors installed on the tracking chambers. Each sensor projects the image of an object onto a CCD, and analysis of the recorded image then provides a measurement of the displacement. The aim is to monitor the position of all of the tracking chambers with a precision better than 40 μm.
Trigger chambers beyond the muon filter form the final important component of the muon system. The role of the trigger detectors is to select dimuons (produced, for example, by J/Ψ or Υ decays) from the background of low-pt muons produced by the decays of pions and kaons. The selection is made on the pt of each individual muon, yielding a dimuon trigger signal when there are at least two tracks above a predefined pt. This pt selection needs a position-sensitive trigger detector with a spatial resolution better than 1 cm – a requirement that is fulfilled by resistive plate chambers (RPCs). These detectors will be operated in streamer mode during heavy-ion runs.
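The decision logic described above is simple to state. A minimal sketch (the 1 GeV/c threshold is illustrative, not the actual ALICE trigger setting):

```python
def dimuon_trigger(muon_pts, pt_cut=1.0):
    """Fire when at least two muon candidates exceed the single-muon pt cut.

    muon_pts: candidate transverse momenta in GeV/c.
    pt_cut: hypothetical threshold, in GeV/c.
    """
    return sum(pt > pt_cut for pt in muon_pts) >= 2

print(dimuon_trigger([0.6, 1.4, 2.1]))  # True: two muons above threshold
print(dimuon_trigger([0.6, 1.4]))       # False: only one muon passes
```

The real system performs this selection in hardware on the hit positions measured by the RPC planes, rather than on reconstructed momenta.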
The trigger system consists of four RPC planes, with a total active area of about 150 m², arranged in two stations, 1 m apart, behind the muon filter. The RPC electrodes are made of low-resistivity Bakelite (about 3 × 10⁹ Ω cm) to achieve the rate capability required in heavy-ion collisions. They are coated with linseed oil to improve the smoothness of the electrode surface. Extensive tests have shown that the RPCs will be able to tolerate several years of data taking in ALICE with heavy-ion beams.
The front-end electronics for the trigger detectors is based on the ADULT integrated circuit, also developed within the ALICE collaboration. Although designed for optimizing the time resolution when the RPCs operate in streamer mode, the circuit also allows the chambers to operate in “avalanche mode” during the long proton–proton runs that will occur at the LHC. The signals from the trigger detectors pass to the trigger electronics, which performs the pt selection on each muon. If the muon trigger is fired, a dedicated electronics card called the DARC allows the transfer of the trigger data to the ALICE data acquisition. Thanks to a short decision time of about 700 ns with these electronics, the dimuon trigger forms part of the level-0 trigger for ALICE. A high-level trigger system – based on the analysis by a PC farm of the final two tracking stations – further refines the selection of good events.
The collaboration has developed a detailed simulation to evaluate the performance of the muon spectrometer for the vector meson and heavy-flavour studies, using as input current knowledge about the different processes that contribute to dimuon production at LHC energies. Figure 5 shows an example of such studies, in this case the invariant mass distributions in the regions of the J/Ψ and Υ mass for lead–lead collisions in ALICE. This demonstrates the spectrometer’s capability to detect these resonances against various sources of background.
For the past few years, components built for the spectrometer have arrived at CERN from many collaborating institutes and suppliers. The two coils for the spectrometer dipole magnet arrived at CERN in September 2003, and the complete magnet was installed in its final position underground in summer 2005. A year later, in July 2006, dimuon trigger and tracking chambers became the first detectors to be installed in ALICE’s underground cavern. Since then, installation and commissioning have maintained a good pace, and the dimuon spectrometer should be ready for the first global tests of ALICE at the end of the year.
• The design and construction of the ALICE muon spectrometer have been made possible through the joint efforts of many institutions in different countries: CEA/DAPNIA Saclay, IPN Lyon, IPN Orsay, LPC Clermont-Ferrand, LSPC Grenoble and Subatech Nantes (France); CERN Geneva (Switzerland); INFN/University of Cagliari, INFN/University of Torino, University of Piemonte Orientale and INFN Alessandria (Italy); JINR Dubna, PNPI Gatchina and VNIIEF Sarov (Russia); KIP Heidelberg (Germany); Muslim University of Aligarh and SAHA Institute Kolkata (India); University of Cape Town (South Africa); and Yer-Phi Yerevan (Armenia).
In April 1932 John Cockcroft and Ernest Walton split the atom for the first time, at the Cavendish Laboratory in Cambridge in the UK. Only weeks earlier, James Chadwick, also in Cambridge, discovered the neutron. That same year, far away in California, Carl Anderson discovered the positron while working on cosmic rays. So 1932 was a veritable annus mirabilis in which experiments discovered, and worked with, nucleons; exploited Albert Einstein’s relativity and energy-mass equivalence principle; took advantage of the newly emerging quantum mechanics and its prediction of “tunnelling” through potential barriers; and even verified the existence of “antimatter” predicted by Paul Dirac’s relativistic quantum theory of the electron. It is hard to think of a more significant year in the annals of science.
Ernest Rutherford (centre) encouraged Ernest Walton (left) and John Cockcroft (right) to build a high-voltage accelerator to split the atom. Their success marked the beginning of a new field of subatomic research.
Image credit: AIP Emilio Segrè Visual Archives.

The experiment by Cockcroft and Walton split the nucleus at the heart of the atom with protons that were lower in energy than seemed possible, by virtue of quantum mechanical tunnelling – a phenomenon new to physics. In 1928 George Gamow had applied the new quantum mechanics to show how particles could tunnel through potential barriers, and how this could explain the decay of nuclei through alpha emission. He also realized that tunnelling could lower the energy required for an incident positively charged particle to overcome the Coulomb barrier of a target nucleus. It was this insight that underpinned the commitment of Cockcroft and Walton.
The entire sequence of events that led to the pioneering experiment (the specification of particle-beam parameters based on contemporary theory and phenomenology; the innovation and development of the technologies needed to create such beams; and the use of those beams in experiments on a subatomic scale to achieve a deeper understanding of the structure and function of matter) has been repeated many times as high-energy physics has advanced, through the construction of accelerators, to the current Standard Model of particles and forces. That Cockcroft realized the immense potential of accelerators in research, and in particular for progress in fundamental physics, is manifest in his instrumental role in later years in establishing large accelerator laboratories, notably CERN in 1954.
Cockcroft was born on 27 May 1897 to a family of cotton manufacturers in Todmorden, straddling the Lancashire–Yorkshire border in northern England. In his early years he experienced a varied educational background. He studied mathematics at Manchester University in 1914–1915, but the First World War interrupted his studies with service in the Royal Field Artillery. After the war, he returned instead to the College of Technology in Manchester to study electrical engineering. Later he joined the Metropolitan Vickers (“Metrovick”) Electrical Company as an apprentice for two years, but subsequently went to St John’s College, Cambridge, and took the Mathematical Tripos in 1924. This wide-ranging education served him well in later years. Nowadays, modern accelerator science and engineering relies on such a broad application of skill and innovation.
Such a diverse and formidable combination of training in mathematics, physics and engineering, plus practical experience with a local electrical company, primed Cockcroft for his future success. He joined Ernest Rutherford, who had recently moved from Manchester to the Cavendish Laboratory and with whom he had worked as an apprentice back in Manchester. Initially Cockcroft worked with Peter Kapitsa in the high-magnetic-field laboratory, where he used his industrial links to obtain the necessary large-scale equipment.
At the time, Cockcroft was in many ways the Cavendish Laboratory’s only true “theoretician”, bringing his mathematical abilities as well as his pragmatic engineering skills to a group that was strong in the experimentalist tradition of Rutherford. In a seminar at the Cavendish Laboratory in 1928, the young Soviet physicist Georgij Gamov (who became better known as George Gamow) reported on his calculations of potential-barrier tunnelling, its successful application to alpha-decay and its importance for barrier penetration. Cockcroft realized, before anyone else, the implications of Gamow’s tunnelling theory: that an energy of 300 keV might be sufficient for protons to penetrate a nucleus of boron, and even less for lithium. Encouraged by Rutherford, he initiated the high-voltage accelerator programme, and was joined by a student, Ernest T S Walton from Ireland.
Walton was a Methodist minister’s son, born in 1903 and educated in Belfast and Dublin. He was very much the lead experimentalist, though the junior partner. The aim was to build an accelerator to achieve an energy up to 1 MeV in order to be sure to penetrate the nuclear potential barrier. Walton had abandoned work on a circular accelerator for his thesis topic and now pursued the linear solution with Cockcroft. They took advantage of strong links with Cockcroft’s old employer in Manchester, Metrovick, which at the time was pioneering equipment for the UK electrical grid at transmission voltages up to 130 kV. Metrovick supplied the high-voltage transformers for what became the Cockcroft–Walton generator. So even at the start of the nuclear age, academic–industrial collaboration underpinned progress.
There were formidable challenges to overcome in each component: motor, generator and transformer; rectifier; 40 kV proton source; glass vacuum vessel; and so on. To this day working with such voltages, even below 500 kV, causes difficulties, as witnessed by the performance issues faced in DC photo-injectors at Jefferson Laboratory and Daresbury Laboratory. The interesting story of scrounging for the proper ceramic tubes to be used in the ultimate Cockcroft–Walton generator is a saga in itself.
Records show that life at the Cavendish Laboratory under Rutherford began late in the day and finished strictly at 6.00 p.m. Rutherford insisted that this was to preserve health and to aid contemplation. Perhaps it partly explains the relatively slow progress by Cockcroft and Walton between 1929 and the ultimate triumph in 1932, although, by all accounts, it was also because both, like all experimentalists, enjoyed the fun of building and perfecting their new experimental “toy”. Another reason was the relocation of their laboratory and a rebuild of the apparatus to a nominal 800 keV rating, driven primarily by their own lack of confidence in the predictions of the new tunnelling calculations.
The day that transformed subatomic physics was 14 April 1932, when Cockcroft and Walton split the lithium atom with a proton beam. Accounts have it that Rutherford had become frustrated at the lack of results from the generator, which was Cockcroft and Walton’s pride and joy, and insisted that they get some results. Initially they used a beam of 280 keV, but they later demonstrated atom splitting with a beam of energy below 150 keV. The experimenters closeted themselves in a lead-lined wooden hut in the accelerator room, then peered through a microscope to look for scintillations due to alpha particles, which they counted by hand. If a zinc sulphide screen hanging on the wall glowed, they added a little more lead – so much for health and safety 75 years ago. Of course, they found scintillations, thereby observing the splitting of lithium nuclei by the incident protons to form two alpha particles.
Ironically, as Gamow’s idea of barrier penetration proved to be correct, the experiment could have been performed at least a year earlier in a previous version of the apparatus. This is also true of a successful experiment in October 1932 at the Kharkov Institute, Ukraine, and for Ernest Lawrence’s cyclotron in Berkeley, California, soon after Cockcroft and Walton’s results. (In early August 1931, Gamow, and later Cockcroft, had visited the Kharkov Institute and discussed the new idea.) Many laboratories repeated and added to the work of the Cavendish Laboratory during the following six months, leading to a flood of experiments around the world. But it was Cockcroft and Walton who first split the atom, albeit later than might have been.
The so-called Cockcroft–Walton multiplier, based on a ladder-cascade principle that builds up the voltage level by switching charge through a series of capacitances, is still in use today. Only in 2005, for example, was the version used on the injector for ISIS, the spallation neutron source at the Rutherford Appleton Laboratory, replaced by a new 665 keV RF quadrupole. The old multiplier will soon be on display at the entrance to the UK’s newly created Cockcroft Institute of Accelerator Science and Technology in Cheshire. The original version used by Cockcroft and Walton was, in fact, a refinement of a much earlier circuit by M Schenkel, a German engineer, which Heinrich Greinacher had already improved – and which therefore could never be patented.
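The ladder-cascade principle is easy to sketch: each stage of two rectifiers and two capacitors roughly doubles the transformer’s peak voltage, at the cost of some droop once a load current is drawn. A minimal model, using the standard textbook droop approximation (the component values are hypothetical, not Cockcroft and Walton’s):

```python
def cockcroft_walton(v_peak, n, i_load=0.0, freq=50.0, cap=1e-6):
    """Output voltage of an n-stage Greinacher/Cockcroft-Walton multiplier.

    v_peak: peak transformer voltage (V); i_load: load current (A);
    freq: supply frequency (Hz); cap: per-stage capacitance (F).
    """
    v_ideal = 2 * n * v_peak  # unloaded output: each stage adds 2*v_peak
    # standard textbook approximation for the voltage drop of a loaded cascade
    droop = (i_load / (freq * cap)) * (2 * n**3 / 3 + n**2 / 2 + n / 6)
    return v_ideal - droop

# A hypothetical 100 kV-peak transformer with 4 stages gives 800 kV unloaded,
# and slightly less once a load current is drawn.
print(cockcroft_walton(100e3, 4))              # 800000.0
print(cockcroft_walton(100e3, 4, i_load=1e-3))
```

The cubic growth of the droop term is why practical cascades use few stages with large capacitors rather than many stages.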
Cockcroft and Walton naturally had close links with Chadwick, whose Nobel prize-winning discovery of the neutron occurred only a few weeks earlier in the same laboratory, making 1932 an extraordinary year for an extraordinary laboratory. Chadwick eventually built the first synchrocyclotron at the University of Liverpool, which was then reproduced at CERN at its inception in the early 1950s.
Cockcroft took over the Magnet Laboratory in Cambridge in 1934 following the departure of Kapitsa, and Walton moved to Trinity College, Dublin. In 1939, Cockcroft started work on radar systems for defence. In 1944 he became director of the Chalk River Laboratory, Canada. Two years later he was back in the UK, where he was appointed inaugural director of the Atomic Energy Research Establishment (AERE), Harwell, and played a major leadership role in ensuring the eventual operation of the world’s first nuclear power station at Windscale. He was also influential with the newly founded Indian government, whose foreign diplomat Vijayalakshmi Pandit visited Cockcroft in the UK for advice on the creation of an atomic-energy enterprise in India under physicist Homi J Bhabha’s leadership and initiative.
The week of 8–12 October 2007 marked the 50th anniversary of an event that was as notorious in its day as Three Mile Island in Pennsylvania or Chernobyl: the fire in the reactor at Windscale on the coast in north-western England. This was an environmental disaster that followed a standard procedure to release Wigner (thermal) energy stored in the graphite pile. The cause is still controversial. However, it would have been much worse without a feature widely known as “Cockcroft’s folly”, which was added late in the construction of the reactor. Cockcroft was head of AERE at the time and he intervened to insist on filters on the chimneys, which were retrofitted and were therefore at the top, so giving the chimneys their distinctive shape with large concrete bulges. Cockcroft’s intervention undoubtedly saved a much bigger disaster.
Cockcroft took charge of the AERE at a time when it was almost the sole repository of particle-accelerator expertise in Europe. In addition to early linear-accelerator construction, there was pioneering work on what was then a new and exciting device, the synchrotron. Several small 30 MeV rings were built, and larger ones were designed for the universities of Oxford, Glasgow and Birmingham. Then, when planning started for CERN – which was not greeted with much enthusiasm by some in the UK who already had their own machines – it was Cockcroft who appointed Frank Goward from Harwell to assist Odd Dahl and Kjell Johnsen in the design of the PS. Soon afterwards he also encouraged two other important figures from Harwell to join in, with lasting impact on CERN: Donald Fry and John Adams.
In 1951, Cockcroft and Walton shared the Nobel Prize in Physics for the “transmutation of atomic nuclei by artificially accelerated particles”. Why had it taken so long to recognize the achievement, when Lawrence was instantly rewarded in 1932 for the invention of the cyclotron? The reason seems to be that there was a long list of giants still waiting to be recognized – Heisenberg among them – before Cockcroft and Walton could take their proper place. The awarding of the Nobel prize to Lawrence for the cyclotron helped to establish the pattern of rewarding instrument building for its own sake, introducing “innovation” into the criteria of the Nobel committee, in addition to “discovery”.
Cockcroft later held many important and influential scientific and administrative positions. He was president of the UK Institute of Physics and the British Association for the Advancement of Science, and was chancellor of the Australian National University, Canberra. His work was also acknowledged in many ways, including honorary doctorates and membership of many scientific academies. In 1959 he was appointed master of Churchill College, Cambridge. He died, aged 70, on 18 September 1967 – a year after the celebration of Chadwick’s 75th birthday at the newly created Daresbury Laboratory. It is at Daresbury that another important step forward for accelerator physics has begun, with the Cockcroft Institute named in honour of the accelerator “giant” who, along with Walton, first split the atom 75 years ago.
The English summer, renowned for being fickle, smiled kindly on the organizers of the 2007 European Physical Society (EPS) conference on High Energy Physics (HEP), which was held in Manchester on 19–25 July. In a city that is proud of both its industrial heritage and a bright commercial future, HEP 2007 surveyed the state of particle physics, which also seems to be at a turning point. While certain areas of the field pin down the details of the 20th-century Standard Model, others seek to prise open new physics as the LHC prepares to open a new frontier.
The conference had a packed programme of 12 plenary sessions and 69 parallel sessions. In his opening talk, CERN’s John Ellis took a lead from Paul Gauguin’s painting Life’s Questions, and interpreted the questions in terms of the status of the Standard Model (where are we coming from?), searches beyond the Standard Model (where are we now?) and the search for a “theory of everything” (where are we going?). More than 400 talks covered all three aspects, in particular the status of the Standard Model and the current and future efforts to go beyond it. This report summarizes some of the highlights within these broad themes.
A beautiful model
The success of the Standard Model underpinned the 2007 award of the EPS High Energy and Particle Physics prize to Makoto Kobayashi of KEK and Toshihide Maskawa of Kyoto University for their work in 1972 showing that CP violation occurs naturally if there are six quarks, rather than the three then known. Kobayashi was at the conference to receive the prize and to give a personal view of the early work and the current understanding of CP violation. The idea of six quarks began to attract attention with the discovery of the τ lepton in 1976. The rest, as they say, is history, and the Cabibbo–Kobayashi–Maskawa (CKM) matrix describing six quarks is now a key part of the Standard Model.
Moving to the present, Kobayashi pointed to the work of the experiments at the B-factories – BaBar at the PEPII facility at SLAC and Belle at KEK-B. They have played a key role in pinning down the well-known triangle that expresses the unitarity of the CKM matrix. The two experiments have shown that the three sides of the triangle really do appear to close – a leitmotif that ran throughout the conference. Measurements of sin 2β (sin 2φ₁) now give a clear value of 0.668 ± 0.028 – a precision of 4% – and even measurements of the angle γ (φ₃) are becoming quite good, thanks to the performance of the B-factories.
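For reference, the triangle in question is the geometric picture of one unitarity condition among the CKM elements; in standard notation (textbook relations, not results reported at the conference):

```latex
V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0 ,
\qquad
\beta\ (\varphi_1) = \arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right),
\qquad
\gamma\ (\varphi_3) = \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right).
```

Measuring the sides and angles independently and checking that the triangle closes is exactly the over-constraint test that the B-factories perform.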
Both facilities have provided high beam currents and small beam sizes, leading to extremely high luminosities. With a peak luminosity of 1.21 × 10³⁴ cm⁻² s⁻¹ – four times the design value – PEPII has delivered a total luminosity of 460 fb⁻¹ but is now feeling the stress of the high currents. Nevertheless, there are plans to try for still-higher luminosity and deliver the maximum possible before the facility closes down at the end of September 2008. KEK-B, with a peak luminosity of 1.71 × 10³⁴ cm⁻² s⁻¹, has reached a total of 715 fb⁻¹, and there are also plans for increasing the luminosity of this machine using the recently tested “crab crossing” technique, which brings the angled beams into a more direct collision.
The extra luminosity is important now that the experiments are moving on to a new phase: searching for new physics. This may be manifest in small deviations from the Standard Model at the 1% level, although theoretical guidance is difficult, not least because of uncertainties in QCD. The charmless B decay B → φK⁰ – at the quark level, b → ss̄s – currently shows a small systematic deviation from theory. However, many agree with Kobayashi’s opinion that it is premature to draw any conclusion. “Super B-factories”, as proposed for example at KEK, will probably be necessary to clarify this and other hints of new physics.
B physics is not only the preserve of the B-factories, nor is interest in heavy flavours restricted only to B physics. The CDF and DØ experiments at Fermilab’s Tevatron have measured Bs oscillations for the first time, in a 5 σ effect with ΔMs = 17.77 ± 0.10 ± 0.07 ps⁻¹. This result presents no surprises, but the award of the 2007 EPS Young Physicist prize reflected its importance. This went to Ivan Furic of Chicago, Guillelmo Gomez-Ceballos of Cantabria/MIT, and Stephanie Menzemer of Heidelberg, for their outstanding contributions to the complex analysis that provided the first measurement of the frequency of Bs oscillations. In the physics of the lighter charm particles, the BaBar and Belle experiments have made the first observations of D mixing, at the level of about 4 σ, with no evidence for CP violation. Neither Bs nor D mixing is easy to measure, the first being very fast, the second very small. Moreover, D mixing is difficult to calculate, as the charm quark is neither heavy nor particularly light. On the other hand, the Standard Model clearly predicts no CP violation. Elsewhere in the heavy-flavour landscape, CDF and DØ have found new baryons that help to fill the spaces remaining in the multiplets of various quark combinations.
The electroweak side of the Standard Model has known precision for many years, with the coupling constants α and GF, and more recently the mass of the Z boson, MZ, available as precise input parameters for calculations of a range of observables. Now, with a steadily increasing total integrated luminosity in Run II – 2.72 fb⁻¹ in DØ, for example, by the time of the conference – the mass of the W boson, MW, is measured with similar precision at both the Tevatron and LEP, and is known to ±25 MeV. CDF and DØ also continue to pin down other observables, in particular in the physics of top, with studies of top decays and measurements of the top mass, Mt, whose latest value is 170.9 ± 1.8 GeV/c². DØ also has evidence for the production of single top – produced from a W rather than a gluon – which gives a handle on |Vtb|² in the CKM matrix. A comparison of MW and Mt from the Tevatron with the results from LEP and the SLAC Linear Collider provides a powerful check on the Standard Model – Mt is measured at the Tevatron, whereas it was inferred at the e⁺e⁻ colliders – and constrains the mass of the Higgs boson. There will be no beam in the LHC until 2008, so the Tevatron is currently the only hunting ground for the Higgs; with upgrades planned to take its total luminosity to at least 6 fb⁻¹, there are interesting times ahead.
While the Tevatron is still going strong, HERA – the first and only electron–proton collider – shut down for the last time this past summer, having written the “handbook” on the proton. HERA provided a unique view inside the proton through deep inelastic scattering, which is still being refined as analysis continues. Once the final pages are written they will provide vital input, in particular on the density of gluons, for understanding proton collisions at the LHC. This effort continues at the Tevatron, where the proton–antiproton collisions provide a complementary view to HERA, in particular regarding what is going on underneath the interesting hard scatters. Additionally, the HERMES experiment at HERA, COMPASS at CERN and experiments at RHIC are investigating the puzzle of what gives rise to the spin of the proton (or neutron) in terms of gluons or orbital angular momentum.
Measurements at HERA and the Tevatron have challenged the strong arm of the Standard Model by testing QCD with precision measurements that involve hadrons in the initial state, not just in the final state, as at LEP. In particular, they provide a testing ground for perturbative QCD (pQCD) in hard processes where the coupling strength is relatively weak, and show good agreement with theoretical predictions. The challenge now is to apply the theory to the more complex scenario of collisions at the LHC, in particular to calculate processes that will be the backgrounds to Higgs production and new physics.
QCD enters a particularly extreme regime in the relativistic collisions of heavy ions, where hundreds of protons and neutrons coalesce into a hot, dense medium. Results from RHIC at Brookhaven National Laboratory (BNL) are already indicating the formation of deconfined quark–gluon matter in an ideal fluid with small viscosity. Here the anti-de Sitter space/conformal field-theory correspondence offers an alternative view to pQCD, with predictions for the higher energies at the LHC.
Beyond the Standard Model
Experiments at the Tevatron and HERA have all searched for physics beyond the Standard Model and find nothing beyond 2 σ. At HERA, however, the puzzle remains of the excess of isolated leptons, which H1 still sees with the full final luminosity (reported at the conference only three weeks after the shutdown), although ZEUS sees no effect. This excess will have to be seen elsewhere to demonstrate that it is new physics, and not nature being unkind.
While the high-energy collider experiments see no real signs of new physics, at least neutrino physics is beginning to provide a way beyond the Standard Model. Neutrinos have long been particles about which we know hardly anything, but as Ken Peach from the University of Oxford commented in his closing summary talk, at least now we “clearly know what we don’t know”. Research has established neutrino oscillations, and with them neutrino mass. However, we still need to know more about the amounts of mixing of the three basic neutrino states to give the flavour states that we observe, and about the mass scale of those basic states.
Clarification in one area has come from the MiniBooNE experiment at Fermilab, which finds no evidence for the oscillations reported by the LSND experiment. However, there are signs of a new puzzle, as MiniBooNE sees an excess of events at neutrino energies of 300–475 MeV. The Main Injector Neutrino Oscillation Search (MINOS) collaboration presented a new result for mixing in the 23 sector, with Δm²₂₃ = 2.38 +0.20/−0.16 × 10⁻³ eV² and sin²2θ₂₃ = 1.00 with an error of 0.08. For the 13 sector, however, there is still a desperate need for new experiments. The Karlsruhe Tritium Neutrino (KATRIN) experiment will try to measure directly the mass of the electron neutrino (an incoherent sum of mass states) using the classic technique of measuring the endpoint of the tritium beta-decay spectrum, with a sensitivity of 0.2 eV. Neutrinoless double beta-decay experiments provide another route to neutrino mass and could constrain the lightest state in the mass hierarchy. Taking what we already know from oscillations, one or two of the neutrino mass states (depending on the hierarchy) should have masses of at least 0.05 eV. Much now depends on experiments to come.
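For orientation, the quoted Δm²₂₃ and sin²2θ₂₃ are the parameters of the standard two-flavour survival probability (a textbook formula, with the baseline L in km and the neutrino energy E in GeV):

```latex
P(\nu_\mu \to \nu_\mu) \;=\; 1 - \sin^2 2\theta_{23}\,
\sin^2\!\left(\frac{1.27\,\Delta m^2_{23}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right),
```

so maximal mixing (sin²2θ₂₃ = 1) means that muon neutrinos at the oscillation maximum disappear almost completely.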
Dark matter in the cosmos seems to be another sure sign of physics beyond the Standard Model. Cosmology indicates that it is composed of non-baryonic particles and is mostly “cold” – low energy – and so cannot consist of the known lightweight neutrinos. Current direct searches for dark-matter particles are reaching cross-sections of around 10⁻⁴⁴ cm², and the next generation of experiments aims to reach a factor of 10 lower. Dark-matter annihilation can affect the gamma-ray sky, so the GLAST mission, due to be launched in December, could complement the searches for dark-matter candidates that will take place at the LHC.
The cosmos holds other mysteries for particle physics, in particular the long-standing question of the origin of ultra-high-energy cosmic rays. Clues to the location of the natural accelerators lie in the precise shape of the spectrum at high energies: are there particles with energies above the Greisen–Zatsepin–Kuzmin (GZK) cut-off? The Pierre Auger Observatory has ushered in a new age of hybrid detection, combining water-Cherenkov surface detectors with air-fluorescence telescopes. Together, the two techniques reveal both the footprint and the development of an extensive air shower, so reducing the dependence on interaction models. Auger now has more events above 10 EeV than previous experiments, and confirms the “ankle” and the steepening at the end of the spectrum (and, since the conference, the first evidence for sources of ultra-high-energy cosmic rays; see “Pierre Auger Observatory pinpoints source of mysterious highest-energy cosmic rays”). Understanding this spectrum depends on determining the mass of the incoming particles. Photons constitute less than 2% of the cosmic radiation at these high energies; is the remainder all protons, or are there heavier components, as the data from Auger hint?
Back on Earth, the LHC is uniquely poised to go beyond the Standard Model, as Ellis pointed out in his opening talk. So a key question for everyone is: when will the LHC start up? Lyn Evans, LHC project leader at CERN, brought the latest news, but first reminded the audience just how remarkable the project is. He began by paying homage to Kjell Johnsen, who died on 18 July, the week before the conference. Johnsen led the project to build the world’s first proton–proton collider, the Intersecting Storage Rings (ISR) at CERN. The LHC is a magnificent tribute to Johnsen, explained Evans, for without the ISR, there would be no LHC. The idea of storing protons, without the synchrotron radiation damping effects inherent in electron beams, was a leap of faith; respected people thought that it would never work.
Now, the LHC is built and the effort to cool down and power up is under way. Unsurprisingly in a project so complex, problems arise, but they are being overcome; the schedule now foresees that beam commissioning should begin in May 2008, with the aim of first collisions at 14 TeV two months later. The injection system can already supply enough beam for a luminosity of 10³⁴ cm⁻² s⁻¹, but in practice commissioning will start with only a few bunches in each beam, to ensure the safety of the collimation and protection systems.
For the LHC experiment collaborations, commissioning will also start with an emphasis on safety. They will study the first collisions with minimum-bias triggers while they gain full understanding of their detectors, before moving on to QCD dijet triggers to “rediscover” physics of the Standard Model. With 1 fb⁻¹ of data collected, there will be the opportunity to begin searching for new physics, with signs of supersymmetry perhaps appearing early. A major goal will of course be to discover the Higgs boson – or whatever mechanism it is that underlies electroweak symmetry-breaking. This is a key issue that the LHC should certainly resolve. Beyond it lie other more exotic questions, concerning extra dimensions and tests of string theory, for example, and even “unparticles” – denizens of a scale-invariant sector weakly coupled to the particles of the Standard Model, as recently proposed by Howard Georgi.
As the LHC nears completion, there is plenty of activity on projects to complement it. The largest is the proposed International Linear Collider (ILC) to provide e⁺e⁻ collisions at a centre-of-mass energy of 500 GeV. The collaboration released the Reference Design Report in February, putting the estimated price tag for the machine at $6.4 thousand million. Like the LHC, it will be a massive undertaking, involving some 1700 cryogenic units for acceleration. To reach still higher energies with an e⁺e⁻ collider, the Compact Linear Collider (CLIC) study is an international effort to develop technology to go up to 3 TeV in the centre of mass. The key feature is a two-beam scheme, with a main beam and a drive beam, and normal-conducting accelerating structures. It will require an accelerating gradient of more than 100 MV/m to reach 3 TeV in a total length of less than 50 km. The aim is to demonstrate feasibility by 2010, with a technical design report in 2015.
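Those CLIC figures are mutually consistent, as a rough length budget shows (an illustrative sketch only; a real design adds fill factors and beam-delivery sections on top of the active length):

```python
# Rough active-length budget for a 3 TeV e+e- linear collider at 100 MV/m
cm_energy_gev = 3000.0      # centre-of-mass energy, GeV
gradient_mv_per_m = 100.0   # accelerating gradient, MV/m

energy_per_beam_gev = cm_energy_gev / 2  # each linac delivers 1500 GeV
# Convert GeV to MV (x1000) and divide by the gradient to get metres
active_length_m = energy_per_beam_gev * 1000 / gradient_mv_per_m
total_active_km = 2 * active_length_m / 1000  # two linacs, head to head
print(f"active linac length for both beams: {total_active_km:.0f} km")
# prints: active linac length for both beams: 30 km
```

Some 30 km of active structure for the two linacs leaves room within the quoted 50 km total for the drive-beam complex and final focus.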
In other areas, there are proposals for super B-factories and neutrino factories to produce the intense beams needed to study rare and/or weak processes in both fields. The idea behind the neutrino factories is to generate large numbers of pions, which will decay to muons that can be cooled and then accelerated before they decay to produce the desired neutrinos. An important requirement will be a high-intensity proton driver to produce pions in primary proton collisions. Such drivers have, of course, other uses: the Spallation Neutron Source at Oak Ridge, for example, is operating with the world’s first superconducting proton linac, currently delivering 0.4 × 10¹⁴ protons with each pulse. Other issues for a future neutrino factory are the cooling and acceleration of the muons. The Muon Ionisation Cooling Experiment at the UK’s Rutherford Appleton Laboratory will test one such concept, using liquid-hydrogen absorbers to reduce the muon momentum in all directions. The subsequent acceleration will have to be fast, before the muons decay, and in this respect researchers are revisiting the idea of fixed-field alternating-gradient (FFAG) accelerators, which dates back to the early 1950s. To test the principle, a consortium at the Daresbury Laboratory in the UK plans to build the world’s first non-scaling FFAG machine, a 20 MeV electron accelerator.
The design of particle detectors will have to adapt to the more exacting conditions at future machines, to deal with larger numbers of particles, higher densities of particles and higher radiation doses. Issues to consider include: segmentation to deal with the high density of particles; speed to handle large events quickly; and thin structures to keep down the material budget. For the ILC, various collaborations are working on four concepts for the collider detectors; the aim is to select two of these in 2009 and have engineering designs completed by 2010.
The next conference in the series is in Krakow, in 2009. It will be interesting to learn how the new ideas presented at HEP 2007 have advanced, to see the first steps across the new frontier with the LHC and to find out if we can see further towards where we are going.
• HEP 2007 was organized by the universities of Durham, Leeds, Lancaster, Liverpool, Manchester and Sheffield, together with the Cockcroft Institute and Daresbury Laboratory.