While awaiting the Planck mission results on the Cosmic Microwave Background (CMB), the scientific community must be content with measurements on foreground sources. Those are nevertheless providing interesting and unexpected results, especially by mapping a mysterious haze in the central portion of the Milky Way, and by producing the first all-sky map of molecular clouds where stars are born.
It was a risky strategy for the European Space Agency to launch two prime missions of its scientific programme on 14 May 2009, Herschel and Planck (CERN Courier July/August 2009 p6), but the Ariane 5 rocket succeeded in sending both spacecraft into orbit around the second Lagrange point of the Sun–Earth system. Both missions use this prime location – 1.5 million kilometres away – to study the cold universe unaltered by the glow of the Earth and Moon. However, they each operate differently. Herschel is an observatory, in the sense that any astronomer can propose an observation of a source of interest and get data rights if their proposal is accepted. This approach does not apply to Planck – which is continuously scanning the sky, so the data have to be shared among the Planck collaboration as a whole.
The Planck data acquisition is slowly nearing completion. The spacecraft had enough helium-3 to cool down the high-frequency instrument (HFI) to 0.1 K for 30 months, about twice what was originally required. Since January, however, only the low-frequency instrument (LFI) continues to operate, mainly to refine the calibration. The data release of the nominal mission (the first 15.5 months) is planned for early 2013; the full data set will become public a year later. Both releases will be accompanied by scientific publications on the observed fluctuations of the CMB, which are the most anticipated by the scientific community. The Planck results will improve the determinations of the constituents, the history and the fate of the universe obtained by NASA’s Wilkinson Microwave Anisotropy Probe (CERN Courier May 2008 p8).
The Planck CMB results have not yet been published because many Galactic and extra-Galactic foreground sources superimpose on the CMB, as illustrated by the all-sky image released in the summer of 2010 (CERN Courier September 2010 p11). Disentangling the various components from each other is a tricky task that requires a deep understanding of all foreground sources and instrumental effects. While some Planck scientists work on the CMB, others work on the removal and characterization of these foregrounds. A first set of early Planck results was released in January 2011, together with a catalogue of thousands of compact sources both in the Milky Way and in distant galaxies and clusters of galaxies. The next set of Planck results on foregrounds was presented at an international conference on 13–17 February 2012, in Bologna, and will be published in the coming months. This includes two unexpected results on the diffuse emission of the Galaxy.
One surprise came from the detailed, all-sky map of carbon monoxide (CO) presented by Jonathan Aumont of the Institut d’Astrophysique Spatiale, Université Paris XI, Orsay. The CO molecule emits a number of narrow, rotational emission lines in the frequency range probed by Planck’s HFI. The spectroscopic measurement of these lines is commonly used to probe the presence of cold molecular clouds from which new stars form. Because the CO lines are narrow compared with the broad spectral bands observed by Planck, it was not anticipated that it would be possible to measure their contribution and thus compete with spectroscopic surveys of CO.
Another unexpected result was presented by Krzysztof Gorski of the Jet Propulsion Laboratory, Caltech, Pasadena, and Warsaw University Observatory. He presented a map of the sky showing a distinct synchrotron emission that is roughly co-spatial with the giant gamma-ray bubbles detected by the Fermi Space Telescope (CERN Courier January/February 2011 p11). This suggests that the radio and gamma-ray emissions could come from the same population of relativistic electrons filling the bubbles – but their actual origin remains mysterious.
New members of the top-level management talk to Antonella Del Rosso about the CMS model for running a large collaboration, as they prepare for the start of the LHC’s run in 2012.
Trying to uncover the deepest mysteries of the universe is no trivial task. Today, the scientific collaborations that accept the challenge are huge, complex organizational structures that have their own constitution, strict budget control and top management. CMS, one of two general-purpose experiments that study the LHC collisions, provides a good example of how this type of scientific complexity can be dealt with.
The collaboration has literally thousands of heroes
Tiziano Camporesi
The CMS collaboration currently has around 4300 members, with more than 1000 new faces joining in the past three years. Together they come from some 170 institutes in 40 countries and six continents. Each institute has specific tasks to complete, which are agreed with the management leading the collaboration. “The collaboration is evolving all of the time. Every year we receive applications from five or so new institutes that wish to participate in the experiment,” says Joe Incandela of the University of California Santa Barbara and CERN, who took over as spokesperson of the CMS collaboration at the start of 2012. “The Collaboration Board has the task of considering those applications and taking a decision after following the procedures described in the CMS constitution. All of the participating institutes are committed to maintaining, operating, upgrading and exploiting the physics of the detector.”
Once they become full members of the collaboration, all institutes are represented on the Collaboration Board – the true governing body of CMS. (In practice, small institutes join together and choose a common representative.) The representatives can also vote for the spokesperson every two years. “To manage such a complex structure that must achieve very ambitious goals, the collaboration has so far always sought a spokesperson from among those people who have contributed to the experiment in some substantial way over the years and who have demonstrated some managerial and leadership qualities,” notes deputy-spokesperson Tiziano Camporesi of CERN. “We often meet film-makers or journalists who tell us that they want to feature a few people. They want to have ‘stars’ who can be the heroes of the show but we always tell them that the collaboration has literally thousands of heroes. I have often heard it said that we are like an orchestra: the conductor is important but the whole thing only works if every single musician plays well.”
Although two years may seem to be a short term, Joao Varela – who is a professor at the Instituto Superior Técnico of the Technical University of Lisbon and also deputy-spokesperson – believes that there are many positive aspects in changing the top management rather frequently. “The ‘two-year scheme’ allows CMS to grant this prestigious role to more people over time,” he says. “In this way, more institutes and cultures can be represented at such a high level. There is a sense of fairness in the honour being shared across the whole community. Moreover, each time a new person comes in, by human nature he/she is motivated to bring in new ideas.”
As good an idea as it is to rotate people in the top management, the CMS collaboration is currently analysing the experience accumulated so far to see if things can be improved. “So far deputies have always been elected as spokespersons and this has ensured continuity even during the short overlap. I was myself in physics co-ordination, then deputy and finally spokesperson. Even so, I am learning many new things every day,” points out Incandela.
At CMS the spokesperson also nominates his/her deputies and many of the members of the Executive Board, which brings together project managers and activity co-ordinators. “The members of the Executive Board are responsible for most of the day-to-day co-ordination work that is a big part of what makes CMS work so well,” explains Incandela. “Each member is responsible for managing an organization with large numbers of people and a considerable budget in some cases. Historically, the different projects and activities were somewhat isolated from one another, so that members of the board didn’t really have a chance or need to follow what the other areas were doing. With the start of LHC operations in 2008 this began to change and now people focus on broader issues.” To improve communication among the members of the Executive Board, the new CMS management also decided to organize workshops. “These have turned out to be fantastic events,” says Camporesi. “At the meetings, we discuss important and broad issues openly, from what is the best way to do great physics to how to maintain high morale and attract excellent young people to the collaboration.”
To keep the whole collaboration informed about the outcomes of such strategic meetings and other developments in the experiment in general, the CMS management organizes weekly plenary meetings. “I report once a week to the whole collaboration: we typically have anywhere from 50 to 250 people attending, plus 100–200 remote connections. We are a massive organization and the weekly update is a quick and useful means of keeping everybody informed,” adds Incandela.
The scientific achievements of CMS prove not only that a large scientific collaboration is manageable but also that it is effective. In January this year a new two-year term began for the CMS collaboration, bringing with it a complete renewal of the top management. This is a historic moment for the experiment because many potential discoveries are in the pipeline. “This is my third generation of hadron collider – I participated in the UA2 experiment at CERN’s SPS, CDF at Fermilab’s Tevatron and now CMS at the LHC. When you are proposing a new experiment and then building it, the focus is entirely on the detector,” observes Incandela. “Then, when the beam comes, attention moves rapidly to the data and physics. The collaboration is mainly interested in data and the discoveries that we hope to make. We must ensure the high performance of the detector while providing the means for extremely accurate but quick data analysis. However, although almost everything works perfectly, there are already many small things in the detector that need repairing and upgrading.”
It is obviously important if we discover things. But it is also important if we don’t see anything
Joao Varela
The accelerator settings for the LHC’s 2012 run, decided at the Chamonix Workshop in February, will mean that CMS has to operate in conditions that go beyond the design target. “The detector will face tougher pile-up conditions and our teams of experts have been working hard to ensure that all of the subsystems work as expected. It looks like the detector can cope with conditions that are up to 50% higher than the design target,” confirms Camporesi. “Going beyond that could create serious issues for the experiment. We observe that the Level-1 trigger starts to be a limitation and the pixel detector starts to lose data, for instance.” CMS is already planning upgrades to improve granularity and trigger performance to cope with the projected higher luminosity beyond 2014.
Going to higher luminosity may be a big technical challenge but it does mean reducing the time to discovery. “The final word on the Higgs boson is within reach, now measurable in terms of months rather than years. And for supersymmetry, we are changing the strategy. In 2010–2011, we were essentially searching for supersymmetric partners of light quarks because they were potentially more easily accessible. This approach didn’t yield any fruit but put significant constraints on popular models. A lot of people were discouraged,” explains Varela. “However, what we have not ruled out are possible relatively light supersymmetric partners of the third-generation quarks. The third generation is a tougher thing to look for because the signal is smaller and the backgrounds can be higher. By increasing the energy of the collisions to 4 TeV one gains 50–70% in pair production of supersymmetric top, for instance, while the top-pair background rises by a smaller margin. Having said this, and given the unexplored environment, it is obviously important if we discover things. But it is also important if we don’t see anything.”
There is a long road ahead because the searches will continue at higher LHC energies and luminosities after 2014, but the CMS collaboration plans to be well prepared.
The first cyclotron, built in 1930 by Ernest Lawrence and Stanley Livingston, was 4.5″ (11 cm) in diameter and capable of accelerating protons to an energy of 80 keV (figure 1). Lawrence soon went on to construct higher-energy and larger-diameter cyclotrons to provide particle beams for research in nuclear physics. Eighty years ago this month, he and Livingston published a seminal paper in which they described the production of light ions with kinetic energies in excess of 1 MeV using a device with magnetic pole-pieces 28 cm across (Lawrence and Livingston 1932). By 1936 John Lawrence, Ernest’s brother, had made the first recorded biomedical use of a cyclotron when he used the 36″ (91 cm) machine at Berkeley to produce 32P for the treatment of leukaemia. Since then, the physics design of the cyclotron has improved rapidly, with the introduction of alternating-gradient sector focusing, edge focusing, external ion-source injection, electron cyclotron-resonance sources, negative-ion acceleration, separated-sector technology and the use of superconducting magnets.
Historical developments
However, other accelerator designs were evolving even faster, with the construction of the synchrocyclotron, the invention of the synchrotron, of linear accelerators and of particle colliders that were capable of generating the extremely high energies needed by the particle-physics community. The usefulness of the cyclotron appeared to diminish but in 1972 the TRIUMF laboratory in Canada turned on the world’s largest cyclotron, at 2000 tonnes with a beam-orbit diameter of 18 m and negative-ion acceleration. Two years later, in Switzerland, PSI brought into commission a large separated-sector, 590 MeV proton cyclotron. Both of these machines have contributed to isotope-production programmes. More recently, a superconducting ring cyclotron delivering a proton beam energy of 2400 MeV has been built in Japan at the Riken research institute.
Meanwhile, the value of the cyclotron as a method for producing medical isotopes had come under pressure in the 1950s and early 1960s from the availability of numerous nuclear-research reactors that had high neutron fluxes, large-volume irradiation positions and considerable flexibility for isotope production. These attributes allowed the generation of important radioisotopes such as 99Mo, 131I, 35S and even 32P more easily and more cost-effectively.
Nevertheless, there remained a few radionuclides with neutron-deficient nuclei that were important for medical imaging but could be produced only by particle accelerators. These included 123I and 201Tl, used for nuclear cardiology, and others such as 111In. The production reactions needed were often of the type where a proton knocks out only a few neutrons, (p,xn) with x=1 or 2 or 3, so that the accelerator energy required was usually no more than around 30 MeV. Consequently the use of the medium-energy cyclotron was revived. The first dedicated medical-isotope cyclotron was designed and built at the Hammersmith Hospital in London in 1955 and was followed by dozens of research-based cyclotrons, often with their own bespoke designs.
The routine use of radioisotope-labelled medical products and the demand for radiopharmaceutical injections for patients led to the creation of a new sector of industry: to supply cyclotron systems capable of the production of medical isotopes. Commercial companies started to design, build and supply complete cyclotron systems specifically for this purpose. The first generation of these industrial cyclotrons was made available by companies such as Philips in the Netherlands and The Cyclotron Corporation (TCC) in the US, but these machines were usually complicated instruments requiring considerable physics expertise for operations and maintenance. Second-generation cyclotrons, with more compact designs and improved engineering, were developed later by Scanditronix in Sweden, Thomson-CSF in France and Sumitomo and JSW in Japan, all with designs that led to lower radiation doses to the operators. Around 1980, the first negative-ion industrial cyclotron, the CP-42, became available from TCC, with 40 MeV proton extraction.
In 1988, a major step forward occurred with the development by Yves Jongen at the University of Louvain-la-Neuve, Belgium, of an industrial cyclotron customized for medical-isotope production – the Cyclone-30 (figure 2). This new cyclotron was power efficient, had a user-friendly control system and incorporated negative-ion acceleration and charge-exchange stripping for extraction, as developed earlier at the TRIUMF cyclotron. It spawned the start of a new accelerator company in Belgium, IBA SA, and this concept of an optimized industrial design was subsequently adopted by other companies, including Ebco Industries in Canada. Most of these isotope-producing cyclotrons were in the energy range of 20–40 MeV – some having an extracted beam capability of 500 μA or more – and several companies have made available a range of cyclotrons operating at different energies (Schmor 2010).
In addition, a range of positron-emitting, neutron-deficient radionuclides were found to be particularly effective for biomedical human imaging via positron-emission tomography (PET), i.e. 18F, 11C, 15O and 13N. The production energies required for these PET isotopes were lower – from around 5 MeV to 20 MeV – with 18F being the most commonly used. Many of the same industrial companies designed even smaller cyclotrons at around either 17 MeV, for high-output 18F production, or around 11 MeV, for lower, hospital-based 18F production, and some cyclotron designs had radiation self-shielding arrangements. PET had long been an imaging technique used in research but by 1998, the medical regulator in the US – i.e. the Food and Drug Administration – had approved the use of PET imaging for several clinical indications. However, 18F has a half-life of only two hours, which limits delivery to small geographic regions. This led to the building of numerous manufacturing facilities for lower-energy PET cyclotrons. It is estimated that by 2010 the world market for small PET cyclotrons was between 50 and 60 a year.
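To see why the two-hour half-life quoted above confines distribution to a small radius, a back-of-the-envelope decay estimate is enough (the delay time below is illustrative, not a figure from the article):

A(t) = A_0 \, 2^{-t/T_{1/2}}, \qquad T_{1/2}\big(^{18}\mathrm{F}\big) \approx 2\ \mathrm{h} \;\;\Rightarrow\;\; A(4\ \mathrm{h}) = A_0 \, 2^{-2} = 0.25\,A_0 ,

so a batch that spends four hours in synthesis, quality control and transport arrives with only about a quarter of its original activity; for 11C, with its 20-minute half-life, the same delay would leave essentially nothing.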
Although 18F, in the form of 18F-deoxyglucose or FDG, has remained the most commonly used radionuclide for PET, numerous other tracers labelled with 18F are in advanced stages of clinical development and eventual commercialization. However, the process of their drug licensing has been particularly slow. Despite the considerable investment made in R&D and manufacturing of FDG by industry, there is a growing concern that the potential for its use in PET imaging and its implementation in personalized medicine has not yet been fully realized. There also exist numerous other tracers that provide good images of the human body, many of which use 11C as the radiolabel. However, 11C has an extremely short half-life of only 20 minutes, so it would have to be produced on site, at the smallest general hospitals as well as at larger research institutions.
The drive towards smaller, hospital-based cyclotrons dedicated to producing small quantities of injectable radioisotopes started back in 1989 when IBA, following on from the success of the Cyclone-30, designed the Cyclone 3D – a 3 MeV deuteron cyclotron for 15O production. Some five models have been delivered, but unfortunately 15O has remained a research tool used primarily for blood-flow studies rather than becoming a regular commercial product with its own pharmaceutical-marketing authorization licence.
Another approach towards reducing the size of the cyclotron was the OSCAR cyclotron, originally designed and delivered in 1990 by Oxford Instruments in the UK, and now distributed by EuroMeV. OSCAR is a 12 MeV, 100 μA superconducting cyclotron with an external ion source. Around eight models with this more complicated design have been delivered.
Latest trends
The concept of producing quantities of PET radionuclides that are suitable for one dose to a single patient was not really addressed until 2009, when Ron Nutt of ABT Molecular Imaging Inc developed a small cyclotron with 7.5 MeV energy, positive ion (i.e. proton) acceleration and an internal target for the production of unit doses of 18F. This cyclotron was designed as a component of an integrated production system that also included targetry, a chemistry system based on microfluidic processing, an online chemistry quality-control system and a methodology for radiopharmaceutical product release. The physics of this cyclotron reverts to the more traditional method of proton acceleration and internal targets to reduce the radiation burden associated with stripping inside negative-ion cyclotrons. Nevertheless, this cyclotron system has established a new strategy of producing unit-patient doses of radionuclides with short half-lives. Moreover, the production can be located in smaller clinical facilities, possibly in remote and rural locations around the world.
The use of negative-ion acceleration, the ease of charged-particle stripping-extraction and the convenience of having external targets have been preferred by other developers. General Electric Healthcare has recently reported success in the development of a small, vertical cyclotron with a proton energy of around 8 MeV. In Spain, a public–private consortium has announced a development project called AMIT, which is funded by the Spanish Centre for Industrial Technology Development. Within this consortium, the accelerator institute CIEMAT in Madrid will be delivering a cyclotron with a low-energy proton beam. A collaboration has been set up between CIEMAT and CERN to design and build the smallest-possible cyclotron using superconducting technology with a proton energy of around 8 MeV, the objective being to produce single-patient doses of both 18F and 11C in particular. This collaboration with CERN will include the use of some of the accelerator technology and expertise used in building the LHC.
Figure 5 shows a schematic of this cyclotron. A trade-off exists between increasing the magnetic field – which raises the Lorentz stripping of the negative ions and the associated neutral-beam radiation field – and the consequent requirement for larger radiation shielding around the cyclotron periphery. The nominal extraction radius for this machine will be around 11 cm. In other words, the size of the latest industrial medical-isotope-producing cyclotrons has reverted to dimensions close to those of Lawrence’s first cyclotron, developed over 80 years ago.
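A rough, non-relativistic estimate shows why an extraction radius of about 11 cm is compatible with a proton energy of around 8 MeV; the field value used below is an assumption for illustration, not a design figure from the CIEMAT–CERN project:

T \simeq \frac{(qBr)^2}{2m_p} \approx \frac{\big(e \times 4\ \mathrm{T} \times 0.11\ \mathrm{m}\big)^2}{2m_p} \approx 9\ \mathrm{MeV},

so a superconducting magnet providing a field of a few tesla brings the required energy within reach at Lawrence-sized dimensions.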
The origin of this conceptual revolution was the work in which these two theoretical physicists discovered that all quantities such as the gauge couplings (αi) and the masses (mj) must “run” with q2, the square of the invariant four-momentum of a process (Stueckelberg and Petermann 1951). It took many years to realize that this “running” not only allows the existence of a grand unification and opens the way to supersymmetry, but also ultimately produces the need for a non-point-like description of physics processes – the relativistic quantum-string theory – that should produce the much-needed quantization of gravity.
It is interesting to recall the reasons that this paper attracted so much attention. The radiative corrections to any electromagnetic process had been found to be logarithmically divergent. Fortunately, all divergences could be grouped into two classes: one had the property of a mass; the other had the property of an electric charge. If these divergent integrals were substituted with the experimentally measured mass and charge of the electron, then all theoretical predictions could be made “finite”. This procedure was called “mass” and “charge” renormalization.
Stueckelberg and Petermann discovered that if the mass and the charge are made finite, then they must run with energy. However, the freedom remains to choose the renormalization subtraction points. Petermann and Stueckelberg proposed that this freedom had to obey the rules of an invariance group, which they called the “renormalization group” (Stueckelberg and Petermann 1953). This is the origin of what we now call the renormalization group equations, which – as mentioned – imply that all gauge couplings and masses must run with energy. It was remarkable to find, many years later, that the three gauge couplings could converge – even if not perfectly – towards the same value. This means that all gauge forces could have the same origin; in other words, grand unification. A difficulty in the unification was the new supersymmetry that my old friend Bruno Zumino was proposing with Julius Wess. Bruno told me that he was working with a young fellow, Sergio Ferrara, to construct non-Abelian Lagrangian theories simultaneously invariant under supergauge transformations, without destroying asymptotic freedom. During a night-time discussion with André in 1977, in the experimental hall where we were searching for quarks at the Intersecting Storage Rings, I told him that two gifts were in front of us: asymptotic freedom and supersymmetry. The first was essential for the experiment being implemented, the second to make the convergence of the gauge couplings “perfect” for our work on the unification. We will see later that this was the first time that we realized how to make the unification “perfect”.
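For readers unfamiliar with this “running”, the standard one-loop form of the renormalization-group equations conveys the idea (a textbook sketch, not taken from the Stueckelberg–Petermann papers; α1 is in the usual GUT normalization):

\frac{d\,\alpha_i^{-1}(Q)}{d\ln Q} = -\frac{b_i}{2\pi} \;\;\Longrightarrow\;\; \alpha_i^{-1}(Q) = \alpha_i^{-1}(M_Z) - \frac{b_i}{2\pi}\,\ln\frac{Q}{M_Z}, \qquad i = 1,2,3,

where the coefficients bi = (41/10, −19/6, −7) of the Standard Model become (33/5, 1, −3) once the supersymmetric partners are included – which is what sharpens the convergence of the three couplings at high energy.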
The muon g-2
The second occasion for me to get to know André came in 1960, when I was engaged in measuring the anomalous magnetic moment (g-2) of the muon. He had made the most accurate theoretical prediction, but there was no high-precision measurement of this quantity because technical problems remained to be solved. For example, a magnet had to be built that could produce a set of high-precision polynomial magnetic fields throughout as long a path as possible. This is how the biggest (6-m long) “flat magnet” came to be built at CERN, with the invention of a new technology now in use the world over. André worked only at night and, because he was interested in the experimental difficulties, he spent nights with me working in the SC Experimental Hall. It was a great help for me to interact with the theorist who had made the most accurate theoretical prediction for the anomalous magnetic moment of a particle 200 times heavier than the electron. Surely the muon must reveal a difference in a fundamental property such as its g-value. Otherwise, why is its mass 200 times greater than that of the electron? (Even now, five decades later, no one knows why.)
When the experiment at CERN proved that, at the level of 2.5 parts in a million for the g-value, the muon behaves as a perfect electromagnetic object, the problem changed focus: why are there so many muons around? The answer lay in the incredible value of the mass difference between the muon and its parent, the π. Could another “heavy electron” – a “third lepton” – exist with a mass in the range of giga-electron-volts? Had a search ever been done for this third “lepton”? The answer was no. Only strongly interacting particles had been studied. This is how the search for a new heavy lepton, called HL, was implemented at CERN, with the Proton AntiProton into LEpton Pairs (PAPLEP) project, where the production process was proton–antiproton annihilation. André and I discussed these topics in the CERN Experimental Hall during the night shifts he spent with me.
The results of the PAPLEP experiment revealed an unexpected and extremely strong effect of the (time-like) electromagnetic form factor of the proton, whose consequence was a cross-section a factor of 500 below the point-like expectation for PAPLEP. This is how, during another series of night discussions with André, we decided that the “ideal” production process for a third “lepton” was (e+e–) annihilation. However, there was no such collider at CERN. The only one being built was at Frascati, by Bruno Touschek, who was a good friend of Bruno Ferretti and another physicist who preferred to work at night. I had the great privilege of knowing Touschek when I was in Rome. He also became a strong supporter of the search for a “third lepton” with the new e+e– collider, ADONE. Unfortunately the top energy of ADONE was 3 GeV and the only result that we could achieve was a limit of 1 GeV for the mass of the much desired “third lepton”.
Towards supersymmetry
Another topic I discussed with André has its roots in his famous work with Stueckelberg – the running with energy of the fundamental couplings of the three interactions: electromagnetic, weak and strong. A crucial point came at the European Physical Society (EPS) conferences in York (1978) and Geneva (1979). In my closing lecture at EPS-Geneva, I said: “Unification of all forces needs first a supersymmetry. This can be broken later, thus generating the sequence of the various forces of nature as we observe them.” This statement was based on work with André in which, in 1977, we studied – as mentioned before – the renormalization-group running of the couplings and introduced a new degree of freedom: supersymmetry. The result was that the convergence of the three couplings improved a great deal. This work was not published, but it was known to a few, and it led to the Erice Schools Superworld I, Superworld II and Superworld III.
This is how we arrived at 1991, when it was announced that the search for supersymmetry would have to wait until multi-tera-electron-volt energies became available. At the time, a group of 50 young physicists was engaged with me on the search for the lightest supersymmetric particle in the L3 experiment at CERN’s Large Electron Positron (LEP) collider. If the new theoretical “predictions” were true, then there was no point in spending so much effort in looking for supersymmetry-breaking in the LEP energy region. Reading the relevant papers, André and I realized that no one had ever considered the evolution of the gaugino mass (EGM). During many nights of work we improved the unpublished result of 1977 mentioned above: the effect of the EGM was to bring down the energy threshold for supersymmetry-breaking by nearly three orders of magnitude. Thanks to this series of works I could assure my collaborators that the “theoretical” predictions for the energy level at which supersymmetry-breaking could occur were perfectly compatible with LEP energies (and now with LHC energies).
Finally, in the field of scientific culture, I would like to pay tribute to André Petermann for having been a strong supporter of the establishment of the Ettore Majorana Centre for Scientific Culture in Erice. In the old days, before anyone knew of Ettore Majorana, André was one of the few people who knew about Majorana neutrinos and who understood that relativistic invariance gives no special privilege to spin-½ particles – such as the privilege of having antiparticles – because all spin values share the same privilege. In all of my projects André was a great help, encouraging me to go on no matter what arguments the opposition presented – arguments that he often found far from rigorous.
Paradoxically, work on “light candles” led to the discovery that the universe is much darker than anyone thought. Arnaud Marsollier caught up with Saul Perlmutter recently to find out more about this Nobel breakthrough.
Saul Perlmutter admits that measuring an acceleration of the expansion of the universe – work for which he was awarded the 2011 Nobel Prize in Physics together with Brian Schmidt and Adam Riess – came as a complete surprise. Indeed, it is exactly the opposite of what Perlmutter’s team was trying to measure: the decelerating expansion of the universe. “My very first reaction was the reaction of any physicist in such a situation: I wondered which part of the chain of the analysis needed a new calibration,” he recalls. After the team had checked and rechecked over several weeks, Perlmutter, who is based at Lawrence Berkeley National Laboratory and the University of California, Berkeley, still wondered what could be wrong: “If we were going to present this, then we would have to make sure that everybody understood each of the checks.” Then, after a few months, the team began to make public its result in the autumn of 1997, inviting scrutiny from the broader cosmology community.
Despite great astonishment, acceptance of the result was swift. “Maybe in science’s history, it’s the fastest acceptance of a big surprise,” says Perlmutter. He remembers how, at a colloquium that he presented in November 1997, cosmologist Joel Primack stood up and, instead of addressing Perlmutter, turned to the audience and declared: “You may not realize this, but this is a very big problem. This is an outstanding result you should be worried about.” Of course, some colleagues were sceptical at first. “There must be something wrong, it is just too crazy to have such a small cosmological constant,” said cosmologist Rocky Kolb at a conference in early 1998.
According to Perlmutter, one of the main reasons for the quick acceptance by the community of the accelerating expansion of the universe is that two teams reported the same result at almost the same time: Perlmutter’s Supernova Cosmology Project and the High-z Supernova Search Team of Schmidt and Riess. Thus, there was no need to wait a long time for confirmation from another team. “It was known that the two teams were furious competitors and that each of them would be very glad to prove the other one wrong,” he adds. By the spring of 1998, a symposium was organized at Fermilab that gathered many cosmologists and particle physicists specifically to look at these results. At the end of the meeting, after subjecting the two teams to hard questioning, some three quarters of the people in the room raised their hands in a vote to say that they believed the results.
What could be responsible for such an acceleration of the expanding universe? Dark energy, a hypothetical “repulsive energy” present throughout the universe, was the prime suspect. The concept of dark energy was also welcomed because it solves some delicate theoretical problems. “There were questions in cosmology that did not work so well, but with a cosmological constant they are solved,” explains Perlmutter. Albert Einstein had at first included a cosmological constant in his equations of general relativity. The aim was to introduce a counterpart to gravity in order to have a model describing a static universe. However, with evidence for the expansion of the universe and the Big Bang theory, the cosmological constant had been abandoned by most cosmologists. According to George Gamow, even Einstein thought that it was his “biggest blunder” (Gamow 1970). Today, with the discovery of the acceleration of the expansion of the universe, the cosmological constant “is back”.
Since the discovery, other kinds of measurements – for example on the cosmic microwave background radiation (CMB), first by the MAXIMA and BOOMERANG balloon experiments, and then by the Wilkinson Microwave Anisotropy Probe satellite – have proved consistent with, and even strengthened, the case for an accelerating expansion of the universe. However, it all leads to a big question: what could be the nature of dark energy? In the 20th century, physicists were already busy with dark matter, the mysterious invisible matter that can only be inferred through observations of its gravitational effects on other structures in the universe. Although they still do not know what dark matter is, physicists are increasingly confident that they are close to finding out, with many different kinds of experiments that can shed light on it, from telescopes to underground experiments to the LHC. In the case of dark energy, however, the community is far from agreeing on a consistent explanation.
When asked what dark energy could be, Perlmutter’s eyes light up and his broad smile shows how excited he is by this challenging question. “Theorists have been doing a very good job and we have a whole landscape of possibilities. Over the past 12 years there was an average of one paper a day from the theorists. This is remarkable,” he says. Indeed, this question has now become really important as it seems that physicists know about a mere 5% of the whole mass-energy of the universe, the rest being in the form of dark matter or, in the case of more than 70%, the enigmatic, repulsive stuff known as dark energy or a vacuum energy density.
Including a cosmological constant in Einstein’s equations of general relativity is a simple solution to explain the acceleration of the expansion of the universe. However, there are other possibilities. For example, a decaying scalar field of the kind that could have caused the first acceleration at the beginning of the universe, or the existence of extra dimensions, could save the standard cosmological model. “We might even have to modify Einstein’s general relativity,” Perlmutter says. Indeed, all that is known is that the expansion of the universe is accelerating, but there is no clue as to why. The ball is in the court of experimentalists, who will have to provide theorists with more data and refined measurements to show precisely how the expansion rate changes over time. New observations by different means will be crucial, as they could show the way forward and decide between the different available theoretical models.
“We have improved the supernova technique and we know what we need to make a measurement that is 20 times more accurate,” he says. There are also two other precision techniques currently being developed to probe dark energy either in space or from the ground. One uses baryon acoustic oscillations, which can be seen as “standard rulers” in the same way that supernovae are used as standard candles (see box, previous page). These oscillations leave imprints on the structure of the universe at all ages. By studying these imprints relative to the CMB, the earliest “picture of the universe” available, it is possible to measure the rate at which the expansion of the universe is accelerating. The second technique is based on gravitational lensing, a deflection of light by massive structures, which allows cosmologists to study the history of the clumping of matter in the universe, with the attraction of gravity competing against the accelerating expansion. “We think we can use all of these techniques together,” says Perlmutter. Among the projects he mentions are the US-led ground-based experiments BigBOSS and the Large Synoptic Survey Telescope, as well as ESA’s Euclid satellite, all of which are under preparation.
However, the answer to this obscure mystery – or at least part of it – could come from elsewhere. The full results from ESA’s Planck satellite, for instance, are eagerly awaited because they should provide unprecedented precision on measurements of the CMB. “The Planck satellite is an ingredient in all of these analyses,” explains Perlmutter. In addition, cosmology and particle physics are increasingly linked. In particular, the LHC could bring some input into the story quite soon. “It is an exciting time for physics,” he says. “If we just get one of these breakthroughs through the LHC, it would help a lot. We are really hoping that we will see the Higgs and maybe we will see some supersymmetric particles. If we are able to pin down the nature of dark matter, that can help a lot as well.” Not that Perlmutter thinks that the mystery of dark energy is related to dark matter, considering that they are two separate sectors of physics, but as he says, “until you find out, it is still possible”.
The ALICE software environment (AliRoot) first saw light in 1998, at a time when computing in high-energy physics was facing a challenging task. A community of several thousand users and developers had to be converted from a procedural language (FORTRAN) that had been in use for 40 years to a comparatively new object-oriented language (C++) with which there was no previous experience. Coupled to this was the transition from loosely connected computer centres to a highly integrated Grid system. Again, this would involve a risky but unavoidable evolution from a well known model – where, for experiments at CERN, for example, most of the computing was done at CERN, with analysis performed at regional computer centres – to a highly integrated system based on the Grid “vision”, for which neither experience nor tools were available.
In the ALICE experiment, we had a small offline team that was concentrated at CERN. The effect of having this small, localized team was to favour pragmatic solutions that did not require a long planning and development phase and that would, at the same time, give maximum attention to automation of the operations. So, on one side we concentrated on “taking what is there and works”, so as to provide the physicists quickly with the tools they needed, while on the other we devoted attention towards ensuring that the solutions we adopted would lend themselves to resilient hands-off operation and would evolve with time. We could not afford to develop “temporary” solutions but still we had to deliver quickly and develop the software incrementally in ways that would involve no major rewrites.
The rise of AliRoot
When development of the current ALICE computing infrastructure started, the collaboration decided to make an immediate transition to C++ for its production environment. This meant the use of existing and proven elements. For the detector simulation package, the choice fell on GEANT3, appropriately “wrapped” into a C++ “class”, together with ROOT, the C++ framework for data manipulation and analysis that René Brun and his team developed for the LHC experiments. This led to a complete, albeit embryonic, framework that could be used for the experiment’s detector-performance reports. AliRoot was born.
The initial design was exceedingly simple. There was no insulation layer between AliRoot and ROOT; no software-management layer beyond a software repository accessible to the whole ALICE collaboration; and only a single executable for simulation, calibration, reconstruction and analysis. The software was delivered in a single package, which just needed GEANT3 and ROOT to be operational.
To allow the code to evolve, we relied heavily on virtual interfaces that insulated the steering part of the framework from the code of the 18 ALICE subdetectors and from the event generators. This proved to be a useful choice because it made the addition of new event generators – and even of new detectors – easy and seamless.
To protect the simulation code written by users (geometry description, scoring and signal generation) and to ease the transition from GEANT3 to GEANT4, we also developed a “virtual interface” to the Monte Carlo simulator, which allowed us to reuse the ALICE simulation code with other detector-simulation packages. The pressure from the users, who relied on AliRoot as their only working tool, prompted us to assume an “agile” working style, with frequent releases and “merciless” refactorizations of the code whenever needed. In open-source jargon we were working in a “bazaar style”, guided by the users’ feedback and requirements, as opposed to the “cathedral style” process where the code is restricted to an elite group of developers between major releases. The difficulty of working with a rapidly evolving system while also balancing a rapid response to the users’ needs, long-term evolution and stability was largely offset by the flexibility and robustness of a simple design, as well as the consistency of a unique development line where the users’ investment in code and algorithms has been preserved over more than a decade.
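The idea of such a virtual Monte Carlo layer can be sketched in a few lines of C++; the class and method names below are invented for illustration and are not the real AliRoot or ROOT interfaces.

// Sketch only: hypothetical names, not the actual AliRoot/ROOT classes.
#include <iostream>
#include <string>

// Abstract transport engine. User geometry and scoring code talk only to
// this interface, never to a specific simulation package.
class VirtualMC {
public:
  virtual ~VirtualMC() = default;
  virtual void DefineVolume(const std::string& name, double halfLengthCm) = 0;
  virtual void TransportEvent() = 0;
};

// One concrete back-end, e.g. a wrapper around a GEANT3-style engine.
class Geant3Wrapper : public VirtualMC {
public:
  void DefineVolume(const std::string& name, double halfLengthCm) override {
    std::cout << "G3 volume " << name << " (" << halfLengthCm << " cm)\n";
  }
  void TransportEvent() override { std::cout << "G3 transport\n"; }
};

// Detector code written once against the interface runs unchanged with any engine.
void simulateDetector(VirtualMC& mc) {
  mc.DefineVolume("ITS_layer1", 20.0);
  mc.TransportEvent();
}

int main() {
  Geant3Wrapper g3;
  simulateDetector(g3);  // a GEANT4-based wrapper could be swapped in here
}

Swapping in a different engine then only requires a new wrapper class, which is essentially how the ALICE simulation code could be reused with other detector-simulation packages.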
The design of the analysis framework also relied directly on the facilities provided by the ROOT framework. We used the ROOT tasks to implement the so-called “analysis train”, where one event is read in memory and then passed to the different analysis tasks, which are linked like wagons of a train. Virtuality with respect to the data is achieved via “readers” that can accept different kinds of input and take care of the format conversion. At ALICE we have two analysis objects: the event summary data (ESD) that result from the reconstruction and the analysis object data (AOD) in the form of compact event information derived from the ESD. AODs can be customized with additional files that add information to each event without the need to rewrite them (the delta-AOD). Figure 1 gives a schematic representation that attempts to capture the essence of AliRoot.
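The “wagons of a train” picture translates almost literally into code. The following is a toy C++ sketch of the pattern, with invented class names rather than the real AliAnalysisTask machinery:

// Toy illustration of an analysis train: names are hypothetical.
#include <iostream>
#include <memory>
#include <vector>

struct Event { int number; };  // stand-in for an ESD or AOD event

// A "wagon": one analysis task that processes each event exactly once.
class AnalysisTask {
public:
  virtual ~AnalysisTask() = default;
  virtual void Exec(const Event& ev) = 0;
};

class PtSpectrumTask : public AnalysisTask {
public:
  void Exec(const Event& ev) override { std::cout << "pt spectrum, event " << ev.number << "\n"; }
};

class FlowTask : public AnalysisTask {
public:
  void Exec(const Event& ev) override { std::cout << "flow analysis, event " << ev.number << "\n"; }
};

int main() {
  // Assemble the train: every wagon shares the single pass over the data.
  std::vector<std::unique_ptr<AnalysisTask>> train;
  train.emplace_back(std::make_unique<PtSpectrumTask>());
  train.emplace_back(std::make_unique<FlowTask>());

  for (int i = 0; i < 3; ++i) {          // the "reader": one event in memory at a time
    Event ev{i};
    for (auto& wagon : train) wagon->Exec(ev);
  }
}

The benefit is that the expensive input/output is paid once per event, no matter how many analyses are attached to the train.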
The framework is such that the same code can be run on a local workstation, or on a parallel system enabled by the “ROOT Proof” system, where different events are dispatched to different cores, or on the Grid. A plug-in mechanism takes care of hiding the differences from the user.
The early transition to C++ and the “burn the bridge” approach encouraged (or rather compelled) several senior physicists to jump the fence and move to the new language. That the framework was there more than 10 years before data-taking began and that its principles of operation did not change during its evolution allowed several of them to become seasoned C++ programmers and AliRoot experts by the time that the detector started producing data.
AliRoot today
Today’s AliRoot retains most of the features of the original even if the code provides much more functionality and is correspondingly more complex. Comprising contributions from more than 400 authors, it is the framework within which all ALICE data are processed and analysed. The release cycle has been kept nimble. We have one update a week and one full new release of AliRoot every six months. Thanks to an efficient software-distribution scheme, the deployment of a full new version on the Grid takes as little as half a day. This has proved useful for “emergency fixes” during critical productions. A farm of “virtual” AliRoot builders is in continuous operation building the code on different combinations of operating system and compiler. Nightly builds and tests are automatically performed to assess the quality of the code and the performance parameters (memory and CPU).
The next challenge will be to adapt the code to new parallel and concurrent architectures to make the most of the performance of the modern hardware, for which we are currently exploiting only a small fraction of the potential. This will probably require a profound rethinking of the class and data structures, as well as of the algorithms. It will be the major subject of the offline upgrade that will take place in 2013 and 2014 during the LHC’s long shutdown. This challenge is made more interesting because new (and not quite compatible) architectures are continuously being produced.
An AliEn runs the Grid
Work on the Grid implementation for ALICE had to follow a different path. The effort required to develop a complete Grid system from scratch would have been prohibitive and in the Grid world there was no equivalent to ROOT that would provide a solid foundation. There was, however, plenty of open-source software with the elements necessary for building a distributed computing system that would embody major portions of the Grid “vision”.
Following the same philosophy used in the development of AliRoot, but with a different technique, we built a lightweight framework written in the Perl programming language, which linked together several tens of individual open-source components. This system used web services to create a “grid in a box” – a “shrink-wrapped” environment, called Alice Environment or AliEn – to implement a functional Grid system, which already allowed us to run large Monte Carlo productions as early as 2002. From the beginning, the core of this system consisted of a distributed file catalogue and a workload-management system based on the “pull” mechanism, where computer centres fetch appropriate workloads from a central queue.
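The essence of the “pull” mechanism is that sites ask a central task queue for work matching their free resources, rather than having work pushed to them. A minimal sketch of that control flow is given below; it is written in C++ purely for illustration (the real AliEn services are Perl web services) and all names are invented.

// Illustration of pull-based scheduling; not the AliEn API.
#include <deque>
#include <iostream>
#include <optional>
#include <string>

struct Job { std::string name; int cpuNeeded; };

// Central task queue holding the pending workloads.
class TaskQueue {
  std::deque<Job> pending_{{"mc-production-1", 4}, {"reconstruction-2", 8}, {"mc-production-3", 4}};
public:
  // A site reports its free capacity and pulls the first matching job, if any.
  std::optional<Job> pull(int freeCpu) {
    for (auto it = pending_.begin(); it != pending_.end(); ++it) {
      if (it->cpuNeeded <= freeCpu) { Job job = *it; pending_.erase(it); return job; }
    }
    return std::nullopt;  // nothing suitable: the site simply asks again later
  }
};

int main() {
  TaskQueue queue;
  // Each computing centre runs an agent loop along these lines.
  for (int freeCpu : {8, 4, 2}) {
    if (auto job = queue.pull(freeCpu))
      std::cout << "site with " << freeCpu << " free CPUs pulls " << job->name << "\n";
    else
      std::cout << "site with " << freeCpu << " free CPUs finds no matching job\n";
  }
}

Because the matching happens only when a site has spare capacity, the central queue never needs an up-to-date model of every centre’s state, which is part of what makes the scheme robust on a heterogeneous Grid.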
AliEn was built as a metasystem from the start with the aim of presenting the user with a seamless interface while joining together the different Grid systems (a so-called overlay Grid) that harness the various resources. As AliEn could offer the complete set of services that ALICE needed from the Grid, the interface with the different systems consisted of replacing as far as possible the AliEn services with those of the native Grids.
This has proved to be a good principle because the Advanced Resource Connector (ARC) services of the NorduGrid collaboration are now integrated with AliEn. ALICE users transparently access three Grids (EGEE, OSG and ARC), as well as the few remaining native AliEn sites. One important step was achieved with the tight integration of AliEn with the MonALISA monitoring system, which allows large quantities of dynamic parameters related to the Grid operation to be stored and processed. This integration will continue in the direction of provisioning and scheduling Grid resources based on past and current performance, and load as recorded by MonALISA.
The AliEn Grid has also seen substantial evolution, its core components having been upgraded and replaced several times. However, the user interface has changed little. Thanks to AliEn and MonALISA, the central operation of the entire ALICE Grid takes the equivalent of only three or four full-time operators. It routinely runs complicated job chains fully automated at all times, totalling an average of 28,000 jobs in continuous execution on 80 computer centres in four continents (figure 3).
The next step
Despite the generous efforts of the funding agencies, computing resources in ALICE remain tight. To alleviate the problem and ensure that resources are used at the maximum efficiency, all ALICE computing resources are pooled into AliEn. The corollary is that the Grid is the most natural place for all ALICE users to run any job that exceeds the capacity of a laptop. This has put considerable stress on the ALICE Grid developers to provide a friendly environment, where even running short, test jobs on the Grid should be as simple and fast as running them on a personal computer. This still remains the goal but much ground has been covered in making Grid usage as transparent and efficient as possible; indeed, all ALICE analysis is performed on the Grid. Before a major conference, it is not uncommon to see more than half of the total Grid resources being used by private-analysis jobs.
The challenges ahead for the ALICE Grid are to improve the optimization tools for workload scheduling and data access, thereby increasing the capabilities to exploit opportunistic computing resources. The comprehensive and highly optimized monitoring tools and data provided by MonALISA are assets that have not yet been completely exploited to provide predictive provisioning of resources for optimized usage. This is an example of a “boundary pushing” research subject in computer science, which promises to yield urgently needed improvements to the everyday life of ALICE physicists.
It will also be important to exploit interactivity and parallelism at the level of the Grid, to improve the “time-to-solution” and to come a step closer to the original Grid vision of making a geographically distributed, heterogeneous system appear like a single desktop computer. In particular, the evolution of AliRoot to exploit parallel computing architectures should be extended as seamlessly as possible from multicore and multi-CPU machines – first to different machines and then to Grid nodes. This implies an evolution both of the Grid environment and of the ALICE software, which will have to be transformed to expose the intrinsic parallelism of the problem in question (event processing) at its different levels of granularity.
Although it is difficult to define success for a computing project in high-energy physics, and while ALICE computing certainly offers much room for improvement, it cannot be denied that it has fulfilled its mandate of allowing the processing and analysis of the initial ALICE data. However, this should not be considered as a result acquired once and for all, or subject only to incremental improvements. Requirements from physicists are always evolving – or rather, growing qualitatively and quantitatively. While technology offers the possibilities to satisfy these requirements, this will entail major reshaping of ALICE’s code and Grid tools to ride the technology wave while preserving as much as possible of the users’ investment. This will be a challenging task for the ALICE computing people for years to come.
LHCb is one of the four large experiments at the LHC. It was designed primarily to probe beyond the Standard Model by investigating CP violation and searching for the effects of new physics in precision measurements of decays involving heavy quarks, b quarks in particular. At the LHC, pairs of particles (B and anti-B mesons) containing these quarks are mainly produced in the direction of the colliding protons, that is, in the same forward or backward cone about the beam line. For this reason, LHCb was built as a single-arm forward spectrometer that covers production angles close to the beam line with full particle detection and tracking capability – closer even than the general-purpose experiments, ATLAS and CMS. This gives LHCb the opportunity to study the Standard Model in regions that are not easily accessible to ATLAS and CMS. In particular, the experiment has an active and rapidly developing programme of electroweak physics that is beginning to test the Standard Model in several unexplored regions.
Closer to the beam
Particle production at collider experiments is usually described in terms of pseudorapidity, defined as η = –ln(tan(θ/2)), where θ is the angle that the particle takes relative to the beam axis. The particles tend to be produced in the forward direction – that is, crowded into small values of θ – while in terms of η they are spread more uniformly. The inverse relationship means that the closer a particle is to the beam line, the larger its pseudorapidity. LHCb’s forward spectrometer is fully instrumented in the range 2 < η < 5, a portion of which (2 < η < 2.5) is also covered by ATLAS and CMS. However, the forward region at η > 2.5 – roughly between 10° and 0.5° to the beam – is unique to LHCb, thanks to its full complement of particle detection.
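For a concrete feel of the angle-to-pseudorapidity mapping, evaluating the definition above at a few representative angles gives:

\eta = -\ln\tan\frac{\theta}{2}:\qquad \theta = 15^\circ \;\Rightarrow\; \eta \approx 2.0,\qquad \theta = 10^\circ \;\Rightarrow\; \eta \approx 2.4,\qquad \theta = 1^\circ \;\Rightarrow\; \eta \approx 4.7,

so LHCb’s instrumented range 2 < η < 5 corresponds to angles from roughly 15° down to below 1° from the beam line.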
LHCb can explore electroweak physics through the production of W and Z bosons, as well as virtual photons. The experiment can trigger on and reconstruct muons with low momentum pμ > 5 GeV and transverse momentum pTμ > 1 GeV, giving access to low values of the muon-pair invariant mass mμμ > 2.5 GeV. Specialist triggers can even explore invariant masses below 2.5 GeV in environments of low multiplicity. Coupled with the forward geometry, this reconstruction capability opens up a large, previously unmeasured kinematic region.
Figure 1 shows the kinematic regions that LHCb probes in terms of x, the longitudinal fraction of the incoming proton’s momentum that is carried by the interacting parton (quark or gluon), and Q2, the square of the four-momentum exchanged in the hard scatter. Because of the forward geometry, the momenta of the two interacting partons are highly asymmetric in the particle-production processes detected at LHCb. This means that LHCb can simultaneously probe not only a region at high-x that has been explored by other experiments but also a new, unexplored region at small values of x. The high rapidity range and low transverse-momentum trigger thresholds for muons allow potential exploration of Q2 down to 6.25 GeV2 and x down to 10–6, thus extending the region that was accessible at HERA, the electron–proton collider at DESY.
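The quoted reach in x and Q2 can be checked with standard leading-order Drell–Yan kinematics (a back-of-the-envelope estimate, not a number taken from the LHCb analyses):

x_{1,2} = \frac{M}{\sqrt{s}}\,e^{\pm y}, \qquad Q^2 = M^2,

so for a muon pair of mass M = 2.5 GeV produced at rapidity y ≈ 5 in √s = 7 TeV collisions,

Q^2 = 6.25\ \mathrm{GeV}^2, \qquad x_2 \approx \frac{2.5}{7000}\,e^{-5} \approx 2\times 10^{-6}, \qquad x_1 \approx \frac{2.5}{7000}\,e^{+5} \approx 5\times 10^{-2},

which illustrates both the small-x reach and the strong asymmetry between the momenta of the two interacting partons mentioned above.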
The aim is to probe and constrain the parton-density functions (PDFs) – basically, the probability density for finding a parton with longitudinal momentum fraction x at momentum transfer Q2 – in the available kinematic regions. The PDFs provide important input to theoretical predictions of cross-sections at the LHC and at present they dominate the uncertainties in the theoretical calculations, which now include terms up to next-to-next-to-leading order (NNLO).
Using data collected in 2010, the LHCb collaboration measured the production cross-sections of W and Z bosons in proton–proton collisions at a centre-of-mass energy of 7 TeV, based on an analysis of about 36 pb⁻¹ of data (LHCb collaboration 2011a). Although only a small fraction of W and Z bosons enter the acceptance of the experiment (typically 10–15%), the large production cross-sections ensure that the statistical error on these measurements is small. The results are consistent with NNLO predictions that use a variety of models for the PDFs. With greater statistics, the measurements will begin to probe differences between these models.
The uncertainty in luminosity dominates the precision to which cross-sections can be determined, so the collaboration also measures ratios of W and Z production, which are insensitive to this uncertainty, as well as the charge asymmetry for W production, AW = (σW⁺ – σW⁻)/(σW⁺ + σW⁻). Figure 2 shows the results for AW overlaid with equivalent measurements by ATLAS and CMS. It illustrates how the kinematic region explored by LHCb is complementary to that of the general-purpose detectors and extends the range that can be tested at the LHC. It is also apparent that LHCb's acceptance probes the region where the asymmetry is changing rapidly, so the measurements are particularly sensitive to the parameters of the various PDF models.
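The reason the ratio and asymmetry measurements escape the luminosity uncertainty can be seen schematically. Each cross-section is estimated from a yield N, an efficiency ε and the integrated luminosity L (generic symbols, not LHCb's own notation, and backgrounds are ignored in this sketch), and L cancels in the asymmetry:

```latex
\sigma_{W^\pm} = \frac{N_{W^\pm}}{\varepsilon_{W^\pm}\,\mathcal{L}}
\quad\Rightarrow\quad
A_W = \frac{\sigma_{W^+} - \sigma_{W^-}}{\sigma_{W^+} + \sigma_{W^-}}
    = \frac{N_{W^+}/\varepsilon_{W^+} - N_{W^-}/\varepsilon_{W^-}}
           {N_{W^+}/\varepsilon_{W^+} + N_{W^-}/\varepsilon_{W^-}}
```

so the dominant luminosity uncertainty drops out, leaving mainly efficiency and statistical uncertainties.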
Low-momentum muons
The LHCb collaboration also plans to increase the probing power of the cross-section measurements by improving the uncertainty in the luminosity itself. Work is ongoing to measure the exclusive production of pairs of muons, a QED process that should ultimately yield a more precise indirect measure of integrated luminosity. Although instrumented in the forward region, LHCb has some tracking coverage in the backward region –4 < η < –1.5 because the proton–proton collision point lies a little way inside the main tracking detector. The measurement exploits this acceptance, LHCb's ability to trigger on muons with low momentum and the low pile-up environment of collisions at LHCb, which allows the identification of these low-multiplicity, exclusively produced events. First measurements based on 2010 data show that the measurement is feasible (LHCb collaboration 2011b). Updated measurements based on the 2011 data set are underway.
In high-energy hadron–hadron scattering, the production of Z and W bosons that decay into leptons occurs through the Drell–Yan process, in which a quark in one hadron interacts with an antiquark in the other hadron to produce a W, a Z or a virtual photon, which then decays into a pair of leptons. With its ability to trigger on and identify muons with low transverse momentum, LHCb can measure the production of muon pairs from Drell–Yan production down to invariant masses approaching 5 GeV. As figure 1 shows, these measurements probe values of x around 10⁻⁵ and can be used to improve knowledge of the behaviour of gluons inside the proton, building on the knowledge gained at HERA.
These and other production studies are being updated for the upcoming 20th International Workshop on Deep-Inelastic Scattering, which takes place in Bonn on 26–30 March. The first measurements using electron final states will also be available soon, as will those on the production of Z bosons in association with jets. The latter will open the way to more direct probes of the PDFs, once the jets can be tagged by flavour (for example, a measurement of the production of a W boson together with a charm jet will allow constraints to be placed on the behaviour of the strange quark inside the proton).
The forward acceptance of LHCb also provides unexpected advantages for other measurements. The further forward in pseudorapidity that final states are produced, the more likely they are to arise from interactions between a valence quark in one proton and an antiquark in the "sea" of the other proton. This is in contrast to the ATLAS and CMS experiments, which experience predominantly sea–sea collisions. The measurement of the forward–backward asymmetry of Z bosons, which is sensitive to the electroweak mixing angle, sin²θW, benefits from this ability to define a "forward" incoming-quark direction. Studies show that LHCb can identify this correctly in more than 90% of events that have boson rapidities above 3 (McNulty 2011). PDF uncertainties are also reduced in this region. This gives the LHCb experiment the potential to reach the precision of a typical measurement of sin²θW at the Large Electron–Positron collider, even with the data set of 1 fb⁻¹ already recorded.
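For reference, one common convention (a generic definition, not necessarily the exact prescription used by LHCb) builds the forward–backward asymmetry from the angle θ* of the negative muon with respect to the incoming-quark direction, which in a forward detector is usually approximated by the longitudinal boost direction of the muon pair:

```latex
A_{\mathrm{FB}} = \frac{N(\cos\theta^{*} > 0) - N(\cos\theta^{*} < 0)}
                       {N(\cos\theta^{*} > 0) + N(\cos\theta^{*} < 0)}
```

The value of this asymmetry around the Z pole is directly related to sin²θW, which is why correctly assigning the quark direction matters so much for the measurement.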
Studies of the production of the top quark could also benefit from LHCb's detection system. Although the production rate for top inside LHCb is small at 7 TeV, at 14 TeV the rate should be large enough to make measurements viable. At this centre-of-mass energy, top pairs are produced by quark–antiquark annihilation twice as often inside the forward region of LHCb's acceptance as they are in the central region. A measurement of the tt̄ asymmetry with LHCb could give a direct and comparable cross-check of the recent result from Fermilab's Tevatron.
Electroweak physics at LHCb may not have been part of the original programme, but the future prospects are bright.
On 11–14 December, the city of Mumbai was the setting for the Second International Workshop on Accelerator Driven Sub-Critical Systems and Thorium Utilization. Only a month later, a team in Belgium announced the first successful operation of GUINEVERE, a prototype lead-cooled nuclear reactor driven by a particle accelerator – one of the milestones in progress towards the type of accelerator-driven system (ADS) envisioned in Mumbai.
Today’s nuclear reactors are based on a core with fissile fuel configured such that neutrons emitted in the fission process can maintain a chain reaction. In an ADS, by contrast, the neutrons necessary to establish a sustainable fission chain reaction are knocked out of a spallation target by high-energy protons from an accelerator. Because these neutrons are produced externally from the core, an ADS reactor has a great deal of flexibility in the elements and isotopes that can be fissioned. Indeed, the ADS – long advocated by Nobel laureate Carlo Rubbia – is increasingly seen as offering promise for nuclear-waste transmutation and for generating electricity from thorium, uranium or spent nuclear fuel (Clements 2012).
Setting the scene
The Mumbai workshop attracted 160 researchers from nine countries to discuss developments in this burgeoning field. Srikumar Banerjee, chair of India’s Department of Atomic Energy, opened the workshop by welcoming all of the participants and providing an overview of India’s efforts in ADS research. He described several thrusts in the country’s R&D programmes: development of a low-energy (20 MeV) accelerator front end; design studies for a 1 GeV, 30 mA superconducting RF (SRF) linac; and development of a spallation neutron source. He also emphasized the importance of thorium in India’s three-phase, long-term development strategy for nuclear power, as well as the key role of the ADS concept both for power production and for management of minor actinides and used nuclear fuel.
Kumar Sinha, director of the Bhabha Atomic Research Centre (BARC) in Mumbai, which hosted the workshop, also spoke during the opening session. He discussed some of the challenges facing the ADS scientific community and stressed the value of international collaboration in large-scale projects of this kind, where it is important to co-ordinate efforts and optimize the use of financial and human resources.
The workshop convener, K C Mittal of BARC, outlined the overall context of the meeting – in particular, India's wish to exploit a thorium-based ADS to enhance the sustainability, safety and proliferation resistance of nuclear-power generating systems. He noted that researchers worldwide have proposed innovative physics concepts and that several laboratories have succeeded in the design and construction of the new generation of accelerator required. Mittal underlined the importance of SRF accelerating technology and noted the potential for cost savings from using ingot niobium (see box). He also highlighted the continued ADS-related developments in India and China, at Belgium's Multi-purpose hYbrid Research Reactor for High-tech Applications (MYRRHA) and for the European Spallation Source (ESS), which is being built in Lund, Sweden.
Hamid Aït Abderrahim spoke about MYRRHA, the project to build a €960 million subcritical research reactor at the Belgian Nuclear Research Centre SCK•CEN (Studiecentrum voor Kernenergie, Centre d'Étude de l'Énergie Nucléaire), which is scheduled to become operational in 2023. The centre is also the site of the GUINEVERE demonstration model, which is seen as a key step for developing procedures for regulating and controlling the operation of future subcritical reactors such as MYRRHA. The objectives for MYRRHA are to demonstrate the ADS concept at a significant power level and to prove the technical feasibility of transmuting minor actinides and long-lived fission products. Belgium welcomes international participation in the MYRRHA consortium, with eligibility based on a balanced in-cash/in-kind contribution to the project.
The technological advances for neutron spallation sources, such as the ESS, have obvious relevance for an ADS. Each type of facility requires a high-power, high-intensity linac to provide a proton beam for generating neutrons by spallation. A big difference, however, is the relative stringency of requirements for reliability, as measured by the rate at which faults trip the accelerator off-line. Colin Carlile reported on the outlook for the ESS, noting that there are five spallation sources in four countries but that, unlike the others, the ESS will produce neutrons in millisecond-scale bursts rather than on the microsecond scale. The linac will operate at 2.5 GeV, with 50 mA peak and 2 mA average current, delivering 5 MW of proton-beam power at 357 kJ per pulse. The ESS has 17 partners and expects to be the world's best source of slow neutrons; it aims to begin producing neutrons in 2019.
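These figures are mutually consistent: the average beam power follows from the energy per proton and the average current, and the quoted pulse energy then implies a repetition rate of about 14 pulses per second (a derived figure, not stated explicitly above):

```latex
P_{\mathrm{avg}} = \frac{E_{\mathrm{kin}}}{e}\, I_{\mathrm{avg}}
                 = 2.5\ \mathrm{GV} \times 2\ \mathrm{mA} = 5\ \mathrm{MW},
\qquad
f_{\mathrm{rep}} \approx \frac{5\ \mathrm{MW}}{357\ \mathrm{kJ/pulse}} \approx 14\ \mathrm{Hz}.
```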
The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory in Tennessee in the US has an SRF proton linac that is sometimes seen as being close to a proof of principle for an ADS accelerator. The SNS now has two years of experience at the megawatt level, having reached 1 MW within three years of operation. John Galambos of Oak Ridge summarized information on SNS operation that is pertinent to an ADS. He said that initial proton experiments indicate beam losses low enough to be favourable for an ADS and that, although the SNS was never designed for low trip rates, the declining trip rate seems encouraging. Data from 2008, which are still considered current and applicable, indicate that four of the world's neutron facilities have roughly similar performance: many tens of trips a day lasting more than a minute, but far fewer than one trip a day lasting more than three hours.
Not all ADS approaches call for SRF linacs. Swapan Chattopadhyay of the Cockcroft Institute in the UK told workshop participants about research into an ADS using a novel fixed-field, alternating-gradient driver. Collaborators in this effort represent PSI in Switzerland, Fermilab in the US, the International Atomic Energy Agency in Vienna, the Japan Proton Accelerator Research Complex, MYRRHA, the ESS and BARC. In Japan itself, meanwhile, efforts are focusing on SRF, as Akira Yamamoto of KEK explained, but these overlap with R&D for a future International Linear Collider and for energy-recovery linacs. KEK foresees building an in-house SRF fabrication and test facility.
As of early 2012, no government-funded ADS initiatives for nuclear-waste disposal or power generation are underway in the US. Nevertheless many of the country’s scientists and engineers are actively working in ADS-related efforts. Two high-power accelerators, both built by Jefferson Lab, already operate with SRF technology: the SNS at Oak Ridge and Jefferson Lab’s own Continuous Electron Beam Accelerator Facility (CEBAF). A third project, the SRF-based Project X, is in the design and prototyping stage at Fermilab and is foreseen to serve several scientific purposes with 3 GeV, 3 MW protons.
CEBAF pioneered the large-scale application of SRF when it became operational for nuclear-physics experiments in the mid-1990s at 4 GeV. It progressed to operate at 6 GeV and 1 MW through incremental improvements in technology. Researchers there have sought to reduce RF trips and develop tools to characterize them. A consortium of Virginia universities, industrial partners and Jefferson Lab has been established to pursue ADS R&D while preparing to host an ADS facility.
The efforts of the Virginia consortium fall in line with the sentiments of the September 2010 white paper written by 13 scientists from laboratories in the US and Europe and published by the US Department of Energy's Office of Science: 'Accelerator and Target Technology for Accelerator Driven Transmutation and Energy Production' (Aït Abderrahim et al. 2010). The paper notes that many of the key technologies required for industrial-scale transmutation, which calls for tens of megawatts of beam power – including front-end and accelerating systems – have already been demonstrated. The report also points out, however, that demonstration is still required for other components, such as those that enable improved beam quality and halo control, as well as highly reliable subsystems.
At Mumbai, an informal international collaboration to attack these and other ADS challenges continued to coalesce. Participants recognized the magnitude of the challenges that must be overcome for an ADS scheme to be completely successful: well thought-out, long-term development plans and international collaboration will be indispensable for its realization. One of the strengths of the workshop was that it gathered experts from the various subfields relevant to an ADS and gave them the opportunity to discuss the particular difficulties that each faces while still optimizing the system as a whole. With this in mind, participants decided to meet again next year for a third International Workshop on Accelerator Driven Sub-Critical Systems, probably in Europe.
The first “high-energy” accelerators were constructed more than 80 years ago. No doubt they represented technological challenges and major achievements even though, seen from a 2012 perspective, the projects involved only a few people and small hardware set-ups. For many of us, making a breakthrough with just a few colleagues and some new equipment feels like a dream from a different era. Nowadays, frontier research in particle physics requires huge infrastructures that thrill the imagination of the general public. While people often grasp only a fraction of the physics at stake, they easily recognize the full extent of the human undertaking. Particle-physics experiments and accelerators are, indeed, miracles of technology and major examples of worldwide co-operation and on-site teamwork.
Looking ahead
Studies on future accelerators and particle-physics experiments at the energy or luminosity frontier now span several decades and involve hundreds, if not thousands, of participants. This means that, while progress is made with the technical developments for a future facility, the physics landscape continues to evolve. The key example of this is the way that current knowledge is evolving quickly thanks to measurements at the LHC. As a result, it is impossible to predict decades in advance what the best machine option will be to expand our knowledge. Pursuing several options and starting long-term R&D well in advance is therefore essential for particle physics because it allows the community to be prepared for the future and to make informed decisions when the right moments arise.
For the post-LHC era, several high-energy accelerator options are already under study. Beyond high-luminosity extensions of the LHC programme, new possibilities include a higher-energy proton collider in the LHC tunnel; various electron–positron colliders, such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC); and a muon collider. There is typically much cross-fertilization and collaboration between these projects and there is no easy answer when it comes to identifying who has contributed to a particular project.
When, some months ago, we were discussing the authoring of the CLIC conceptual design report, we faced exactly such a dilemma. The work on the CLIC concept has been ongoing for more than two decades – clearly with a continuously evolving team. On the other hand, the design of an experiment for CLIC has drawn heavily on studies carried out for experiments at the ILC, which in turn have used results from earlier studies of electron–positron colliders. Moreover, we also wanted both the accelerator studies and the physics and detector studies to be authored by the same list.
We looked at how others had dealt with this dilemma and found that in some cases, such as the early studies for the LHC experiments, proto-collaborations were taken as the basis for authorship, while others, such as the TESLA and Super-B projects, invited anyone who supported the study to sign. For the CLIC conceptual design report we opted for a list of "signatories". Those who have contributed to the development are invited to sign alongside those wishing to express support for the study and the continuation of the R&D. Here, non-exclusive support is meant: signing up for CLIC is not in contradiction with supporting other major collider options under development.
The advantage of the signatories list is that it provides the opportunity to cover a broader range of personal involvements and avoids excluding anyone who feels associated or has been associated with the study. The drawback of our approach is that the signatories list does not pay tribute in a clear way to individual contributions to the study. This recognition has to come from authoring specialized notes and publications that form the basis of what is written in the report.
The signatories list covers both the CLIC accelerator and the report for the physics and detector conceptual design. Already exceeding 1300 names in February, it demonstrates that – even if all eyes are on LHC results – simultaneous R&D for the future is considered important.
Are there better ways of doing this? As the projects develop, the teams are becoming more structured and this helps – at least partly – towards creating appropriate author lists. The size of the teams and the timescales of the projects will, however, remain much larger than those of the first accelerator projects in our field, and striking the right balance between openness and inclusiveness on the one hand, and restrictions and procedures on the other, is likely to remain a difficult subject.
By A M Zagoskin, Cambridge University Press
Hardback: £45 $80
E-book: $64
Quantum engineering has emerged as a field with important potential applications. This book provides a self-contained presentation of the theoretical methods and experimental results in quantum engineering. It covers topics such as the quantum theory of electric circuits, the quantum theory of noise and the physics of weak superconductivity. The theory is complemented by up-to-date experimental data to help put it into context.