Some years after Ernest Rutherford invented nuclear physics, he expressed a wish for “a copious supply of atoms and electrons which have an individual energy far transcending that of the α and β particles” available from natural sources so as to “open up an extraordinarily interesting field of investigation”. He was calling for the invention of the particle accelerator, but probably had no idea that by 2011 – the centenary of his famous publication on the nuclear atom – some 30,000 of them would operate worldwide, mostly for applications outside discovery science. And given that he famously dismissed as “moonshine” all talk about someday extracting useful energy from atoms, he surely did not foresee what might conceivably become one of the most important practical applications of accelerators: accelerator-driven systems, or ADS, for transmuting nuclear waste and generating electricity.
How would an accelerator replace a nuclear reactor? Today’s reactors contain a core in which the composition and configuration of the nuclear fuel provide enough neutrons to maintain a fission chain reaction. In an ADS, by contrast, the core is subcritical: the neutrons needed to sustain the chain reaction are supplied externally, by an accelerator beam striking a spallation target. Because those neutrons come from outside the core, an ADS has far more flexibility in the elements and isotopes that can be fissioned in its core.
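To put rough numbers on this, here is a minimal back-of-envelope sketch in Python of the neutron economy of a subcritical, source-driven core. Every parameter (the multiplication factor, spallation yield, beam energy and thermal power) is an illustrative assumption, not the specification of any actual design.

```python
# Back-of-envelope ADS neutron economy (illustrative numbers only).
E_FISSION_J = 200e6 * 1.602e-19    # ~200 MeV released per fission
NU = 2.5                           # neutrons emitted per fission (typical)
SPALLATION_YIELD = 30.0            # neutrons per 1 GeV proton on a heavy target (assumed)
BEAM_ENERGY_J = 1.0e9 * 1.602e-19  # 1 GeV protons (assumed)

def beam_power_for(thermal_power_w, k_eff):
    """Accelerator beam power needed to sustain a given fission power.

    Each source neutron starts a converging chain: k + k^2 + ... = k/(1-k)
    fission neutrons in total, i.e. k/((1-k)*NU) fissions per source neutron.
    """
    fission_rate = thermal_power_w / E_FISSION_J
    source_neutrons = fission_rate * NU * (1 - k_eff) / k_eff
    protons_per_s = source_neutrons / SPALLATION_YIELD
    return protons_per_s * BEAM_ENERGY_J

# A 500 MW(th) core at k_eff = 0.97 would need roughly 6-7 MW of beam:
print(f"{beam_power_for(500e6, 0.97) / 1e6:.1f} MW")
```

A few megawatts of beam driving hundreds of megawatts of fission power is consistent with the accelerator performance figures quoted below.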
Among the by-products of burning uranium fuel are the minor actinides (mainly neptunium, americium and curium), built up by successive neutron captures rather than by fission itself; they are radioactive with extremely long half-lives. Their presence in nuclear waste drives the storage requirements for spent fuel: around 100,000 years for the radiotoxicity to return to the level of the original uranium ore. These isotopes could be fissioned (burnt) in a conventional reactor, but because they contribute so few delayed neutrons there is a maximum concentration of these materials that can safely be consumed in existing reactors. With ADS, however, the minor actinides can make up a much larger fraction of the fuel, because an ADS core is subcritical and the neutrons that sustain fission are produced by the accelerator, external to the core. Core kinetics and stability therefore constrain the fuel composition far less than in a conventional reactor. So, ADS can burn up a much greater quantity of the minor actinides than a typical commercial reactor, and can reduce the time for the waste to return to that reference level to “only” 300 years.
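The arithmetic behind those two timescales is simple exponential decay. The sketch below compares representative fission products with representative minor actinides, using published half-lives; it treats each isotope in isolation and ignores decay-chain ingrowth, so it is indicative only.

```python
import math

# Half-lives in years: two major fission products vs. two minor actinides.
HALF_LIFE_Y = {
    "Cs-137 (fission product)": 30.1,
    "Sr-90  (fission product)": 28.8,
    "Am-241 (minor actinide)": 432.0,
    "Np-237 (minor actinide)": 2.14e6,
}

def surviving_fraction(half_life_y, t_years):
    return math.exp(-math.log(2) * t_years / half_life_y)

for name, t_half in HALF_LIFE_Y.items():
    print(f"{name}: {surviving_fraction(t_half, 300):.1e} left after 300 y")
# The fission products are essentially gone (~0.1%) after 300 years, while
# Am-241 still retains ~62% of its atoms and Np-237 is untouched -- hence
# the ~100,000-year horizon if the minor actinides are not burnt.
```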
Nobel laureate Carlo Rubbia and others have been advocating ADS for two decades – and for good reason. By efficiently burning the minor actinides, ADS could conceivably transform the landscape of the waste-disposal and storage problem. And to paraphrase a US white paper from September 2010, additional advantages are flexibility of fuel composition and potentially enhanced safety. With ADS, non-fissile fuels such as thorium can be used without incorporating uranium or plutonium into fresh fuel. An ADS can be shut down simply by switching off the accelerator. With a large enough margin to criticality, reactivity-induced transients cannot cause a supercritical accident, and controlling the beam current provides a means of power control that can compensate for fuel burn-up. However, as we have learnt from Fukushima, it remains necessary to address the problem of long-term removal of the residual decay heat left in the fuel once the fission reaction has been shut down.
The overall potential of ADS has been understood for two decades, but technological evolution during that time has improved the outlook for actual implementation. As early as 2002, a European study concluded that “beam powers of up to 10 MW for cyclotrons and 100 MW for linacs now appear to be feasible”.
It is important to highlight the prospects of ADS for power production using thorium-based fuel. Even though thorium has no current market value, it is known to be plentiful. ²³²Th can absorb a neutron and, after two beta decays, become ²³³U, which is fissile. The potential benefits of thorium for nuclear energy are proliferation resistance, minimized production of radiotoxic transuranics, avoidance of the need to incorporate fissile material in fresh fuel and the potential to operate nearly indefinitely in a closed fuel cycle.
Interest has been growing worldwide. Thorium particularly interests India, Norway and China, all of which have programmes investigating the ²³³U–thorium fuel cycle. India, which has little uranium but much thorium, sees ADS as part of its energy future. China is rapidly building reactors but, not yet having identified a stable geological waste repository, is investigating ADS for the transmutation of minor actinides. Perhaps most notably, Belgium is planning MYRRHA, an 85 MW ADS prototype to be built at the Belgian Nuclear Research Centre, SCK•CEN. The projected total capital cost is €950 million, with a construction start set for 2015.
Some 200 of us are gathering on 11–14 December in Mumbai for the 2nd International Workshop on Accelerator-Driven Sub-Critical Systems & Thorium Utilization. The first workshop was held last year at Virginia Tech, in Blacksburg, Virginia. Considerable effort has gone into assembling a world-class International Advisory Committee that includes Rubbia as well as Srikumar Banerjee, chair of the Atomic Energy Commission, India, and Hamid Aït Abderrahim, director of MYRRHA. For more about the conference, see www.ivsnet.org/ADS/ADS2011.
The public launch in August of a new application for CERN’s volunteer-computing platform LHC@home produced an overwhelming response. The application Test4Theory, which runs Monte Carlo simulations of events in the LHC, was announced in a CERN press release on 8 August. Within three days, the number of registered volunteers swelled from a few hundred to nearly 8000. The application joins SixTrack, an accelerator beam-dynamics tool that has been used for LHC machine studies at CERN since 2004 and is now being prepared and extended in collaboration with the École polytechnique fédérale de Lausanne for studies of the LHC and its upgrade.
Given that the new application requires participants to install a virtual machine on their computer – not a trivial task – the level of enthusiasm is impressive. So, to avoid saturating the server that manages the project, there is now a waiting list for new participants. With the volunteer computing power at hand, nearly 20 billion events have already been simulated. According to CERN’s Peter Skands, the physicist leading the simulation effort, when the number of active volunteers passes 40,000 – which could happen later this year – the system will become equivalent to a true “virtual collider”, producing as many collisions per second as the real LHC.
Running part of a “virtual LHC” on their computers is clearly appealing to those who join LHC@home. The volunteers have not only dedicated a great deal of computing time to the project, but in many cases also provided expert assistance in debugging some of the software and managing the discussion forums that are part and parcel of a successful online citizen-science project.
A quarter-century of experimentation is coming to a close at Fermilab’s Tevatron collider, a pioneering instrument that advanced the frontiers of accelerator science and particle physics alike, setting the stage for the LHC at CERN. The world’s first high-energy superconducting synchrotron, the Tevatron served as the model for the proton ring in the HERA collider at DESY and as a key milestone towards the development of the LHC. In its final months of operation the Tevatron’s initial luminosity for proton–antiproton collisions at 1.96 TeV averaged more than 3.5 × 10³² cm⁻²s⁻¹. The integrated luminosity delivered at 1.96 TeV approached 12 fb⁻¹, with approximately 10 fb⁻¹ recorded by the CDF and DØ experiments. A long line of innovations and much perseverance made possible the evolution of luminosity shown in figure 1 (Holmes et al. 2011).
The legacy of the Tevatron experiments includes many results for which the high energy of a hadron collider was decisive. Chief among these is the discovery of the top quark, which for 15 years could be studied only at the Tevatron. Exacting measurements of the masses of the top quark and the W boson and of the frequency of Bs oscillations punctured the myth that hadron colliders are not precision instruments. Remarkable detector innovations such as the first hadron-collider silicon vertex detector and secondary vertex trigger, and multilevel triggering are now part of the standard experimental toolkit. So, too, are robust multivariate analysis techniques that enhance the sensitivity of searches in the face of challenging backgrounds. CDF and DØ exemplify one of the great strengths of particle physics: the high value of experimental collaborations whose scientific interests and capabilities expand and deepen over time – responding to new opportunities and delivering a harvest of results that were not imagined when the detectors were proposed.
Early days
The CDF logbook records the first collision event in the Tevatron at 02.32 a.m. on 13 October 1985, at an energy of 800 GeV per beam. The estimated luminosity was 2 × 10²⁵ cm⁻²s⁻¹, more than seven orders of magnitude below the machine’s performance in 2011. By the afternoon, the Tevatron complex was shut down for 18 months to construct the DØ interaction region and complete the CDF detector. CDF’s pilot run in 1987 yielded the first wave of physics papers, including measurements and searches. During 1988 and 1989 CDF accumulated 4 pb⁻¹, now at 1.8 TeV in the centre of mass. (Two special-purpose experiments also published results from this run. Experiment 710 measured elastic scattering and the total cross-sections; Experiment 735 sought evidence of a deconfined quark–gluon plasma.) The peak luminosity delivered to CDF surpassed 10³⁰ cm⁻²s⁻¹ in collisions of six proton bunches on six antiproton bunches. Papers from these early runs are worth rereading as reminders of how little we knew, and how a tentative but growing respect for the Standard Model brought coherence to the interpretation of results. It is also interesting to see how the experimenters went about gaining confidence in their detector and their analysis techniques.
Both DØ and CDF took data at 1.8 TeV in the extended Run 1 between 1992 and 1996, recording 120 pb⁻¹. An important enabler of increased luminosity was the move to helical orbits, which eliminated collisions outside the two interaction regions. During this period, a small test experiment called MiniMax (T864) searched for disordered chiral condensates and other novel phenomena in the far-forward region. This was a time of high excitement, not only for the drama of the top-quark search, but also for the stimulating conversation between the teams on the Tevatron experiments and those at the Z factories at CERN and SLAC, and at the HERA electron–proton collider, all of which were breaking new ground.
Fermilab then constructed the Main Injector and Recycler in a new tunnel, while the experiments undertook ambitious detector upgrades. Improvements to the cryogenic system made it possible to lower the operating temperature of the superconducting magnets and so raise the collision energy to 1.96 TeV. CDF installed a new central tracker and improved silicon vertex detector and enhanced its forward calorimetry and muon detection. DØ added a solenoid magnet, a silicon vertex detector and a scintillating-fibre tracker and also improved the detection of forward muons. Run 2 began slowly in 2001, but attention to detail and many accelerator improvements – including 36-bunch operation and electron cooling of antiprotons in the Recycler – contributed to the outstanding performance of the mature machine.
Strong and electroweak physics
The Tevatron experiments have probed the proton with a resolution of about one-third of an attometer (10⁻¹⁸ m), greatly expanding the kinematical range over which we can test the theory of the strong interactions. Perturbative QCD is extremely well validated in studies of hadron jets and other observables. The jet cross-section displayed in figure 2 shows the agreement between calculation and observation over eight orders of magnitude in rate (e.g. Abazov et al. 2008, Aaltonen et al. 2008 and 2009). Such measurements established the importance of gluon–gluon scattering as a mechanism for jet production and helped constrain the parton distribution functions for the gluons. Values of the strong coupling constant extracted from jet studies exhibit the running behaviour characteristic of asymptotic freedom, at higher scales than are accessible in other experiments. The strong coupling at the Z-boson mass has been determined with an uncertainty of about 4%.
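The running referred to here follows, to a first approximation, the one-loop renormalization-group solution. The short sketch below evolves the strong coupling from an assumed input value at the Z mass up to the jet-energy scales probed at the Tevatron; a real extraction uses higher-order running and flavour matching.

```python
import math

ALPHA_S_MZ = 0.118     # assumed input value at the Z mass
M_Z = 91.19            # GeV
N_F = 5                # active quark flavours
B0 = 11 - 2 * N_F / 3  # one-loop beta-function coefficient

def alpha_s(q_gev):
    """One-loop running of the strong coupling from M_Z to scale q."""
    return ALPHA_S_MZ / (1 + ALPHA_S_MZ * B0 / (2 * math.pi) * math.log(q_gev / M_Z))

for q in (91.19, 200, 500):
    print(f"alpha_s({q:>6} GeV) = {alpha_s(q):.4f}")
# The coupling falls with increasing momentum transfer -- asymptotic freedom.
```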
Other jet studies have not only tested QCD but also probed for physics beyond the Standard Model. Measurements of the angular distribution of dijet production confirm the Rutherford-scattering-like expectation of QCD and place upper bounds on the size of extra spatial dimensions. They also validate, at a resolution of nearly 1/(3 TeV), a key idealization that underpins the Standard Model – the working hypothesis that quarks are pointlike and structureless. Measurements of the dijet mass spectrum that extend beyond 1.2 TeV (roughly 2/3 of the centre-of-mass energy of the proton–antiproton collisions) are likewise in accord with next-to-leading-order QCD calculations. No evidence is seen for unexpected dijet resonances.
In the final data set of 10 fb⁻¹, each experiment should have approximately 5 million W bosons in each leptonic decay channel and perhaps 400,000 Z bosons. These large samples have made possible many important measurements. The production cross-sections agree with QCD predictions to such a degree that electroweak gauge-boson production is under study as a primary luminosity monitor for LHC experiments. Studies of Z production, with or without accompanying jets, are immensely valuable for testing simulations of Standard Model physics. The forward-backward asymmetry of the electrons or muons produced in W decay, which arises from the V–A structure of the charged weak current, provides important information about the up-quark and down-quark parton-distribution functions.
Given what we know from many sources, the masses of the W boson and top quark are key elements in the Standard Model network that constrains the properties of the Higgs boson. A stellar accomplishment of the Tevatron experiments has been the determination of the W-boson mass as 80.420 ± 0.031 GeV, better than 4 parts in 10⁴. Figure 3 summarizes the Tevatron measurements and their impact on the current world average. The combined uncertainty at the end of Run 2 may approach 15 MeV.
The growing data samples available at the Tevatron, along with the evolution of experimental techniques, have made it possible to observe cross-sections times branching ratios well below 0.1 pb. All of the electroweak diboson pairs (Wγ, Zγ, WW, WZ and ZZ) have been detected at the rates predicted by the Standard Model. Mastery of these channels is a prerequisite to the Higgs-boson search at moderate and high masses, but they carry their own physics interest as well: the possibility of validating the Standard Model structure of the triple-gauge couplings and searching for anomalous couplings incompatible with the Standard Model. So far, the three-gauge-boson interactions are consistent with electroweak theory in every particular.
From bottom to top
CDF and DØ have exerted a broad impact on our knowledge of states containing heavy quarks. Studies of the production and decay dynamics of quarkonium states have repeatedly challenged phenomenological models, while measurements of b- and t-quark production have made possible sharp tests of QCD calculations at next-to-leading order. The Tevatron experiments account for nearly all of our knowledge of the Bc meson, with precise measurements of the mass and lifetime. The Tevatron contributes world-leading measurements of masses and lifetimes of B mesons and baryons, and has been the unique source of information on many of the B-baryons. With CDF’s recent observation of the Ξb⁰, all of the spin-1/2 baryons containing one b quark have been observed at the Tevatron, except for the Σb⁰. We also owe to the Tevatron our knowledge of orbitally excited B and Bs mesons, constraints on the mass and quantum numbers of the X(3872), important evidence on D⁰–D̄⁰ mixing and high-sensitivity searches for rare decays into dimuons.
The Tevatron experiments met one of the key targets for Run 2 by determining the frequency of Bs–B̄s oscillations. Following a two-sided limit published by DØ, the CDF collaboration determined the oscillation frequency as 17.77 ± 0.13 ps⁻¹ (Abulencia et al. 2006). The oscillation signal is shown in figure 4. This beautiful measurement, in line with Standard Model expectations, constrains the manner in which new physics might show itself in B physics.
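For readers who want a feel for what a 17.77 ps⁻¹ frequency means, the toy calculation below evaluates the probability that a meson produced as a Bs has (or has not) oscillated into its antiparticle by proper time t, folding in an assumed Bs lifetime of about 1.5 ps. It is the textbook mixing formula, not the CDF analysis, which fits the measured asymmetry of flavour-tagged decays.

```python
import math

DELTA_M = 17.77  # oscillation frequency, ps^-1 (CDF measurement)
TAU_BS = 1.5     # assumed Bs lifetime in ps (approximate)

def p_unmixed(t_ps):
    """Probability density (up to normalization) to decay still as a Bs."""
    return math.exp(-t_ps / TAU_BS) * 0.5 * (1 + math.cos(DELTA_M * t_ps))

def p_mixed(t_ps):
    """...and to decay as the antiparticle instead."""
    return math.exp(-t_ps / TAU_BS) * 0.5 * (1 - math.cos(DELTA_M * t_ps))

print(f"oscillation period = {2 * math.pi / DELTA_M:.2f} ps")
t = 0.5  # ps
print(f"at t = {t} ps: unmixed {p_unmixed(t):.3f}, mixed {p_mixed(t):.3f}")
# With a ~0.35 ps period, the meson swings flavour several times per lifetime,
# which is why resolving the oscillation was such a demanding measurement.
```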
The discovery of the top quark by the Tevatron collaborations in 1995 was a landmark achievement (Abe et al. 1995, Abachi et al. 1995, Carithers and Grannis 1995). By 1990, searches by CDF had raised the lower bound on the top-quark mass to 91 GeV, excluding decays of the W into tb̄. A heavy top decays so swiftly that it cannot be observed directly, but must be inferred from its disintegration into a bottom quark and a W boson – both of which are themselves unstable particles. The hunt took off with the growing data-sets available to both CDF and DØ in 1992–1993 and soon the possibility of observing top was in the air. DØ subsequently raised the lower bound to 131 GeV. Moreover, a growing body of observations that probed quantum corrections to the electroweak theory pointed to a top-quark mass in the range 150–200 GeV. Finding top there emerged as a critical test of the understanding built up over two decades.
Eighteen months of deliciously intense activity culminated in a joint seminar on 2 March 1995, demonstrating that top was found in the reaction p̄p → tt̄ + anything. CDF gauged the top-quark mass at 176 ± 13 GeV, while DØ reported 199 ± 30 GeV. Since the discovery, larger event samples, improved detectors and sophisticated analysis techniques have led to a detailed dossier of top-quark properties (Deliot and Glenzinski 2010). Tevatron measurements of the top mass have reached a precision of about 0.5%, at 173.2 ± 0.9 GeV, a level that demands scrupulous attention to the theoretical definition of what is being measured (Tevatron Electroweak Working Group 2011). A compilation of the Tevatron measurements is shown in figure 5. CDF and DØ now aim for an uncertainty of ±1 GeV per experiment; to reach this level of precision will require a better understanding of b-jet modelling and of uncertainties in the signal and background simulations.
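The combination of such measurements is, at its core, an inverse-variance weighted average. Applying that textbook recipe to the two discovery-era numbers quoted above gives a feel for the procedure; the published Tevatron averages are far more sophisticated, accounting for correlated systematic uncertainties between experiments and channels.

```python
# Naive inverse-variance combination of the 1995 discovery measurements.
measurements = [
    ("CDF", 176.0, 13.0),  # mass in GeV, total uncertainty
    ("D0",  199.0, 30.0),
]

weights = [1 / sigma**2 for _, _, sigma in measurements]
mean = sum(w * m for (_, m, _), w in zip(measurements, weights)) / sum(weights)
sigma = sum(weights) ** -0.5

print(f"combined m_top = {mean:.1f} +/- {sigma:.1f} GeV")
# ~179.6 +/- 11.9 GeV -- consistent with today's far more precise 173.2 +/- 0.9 GeV.
```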
The tt̄ production characteristics are in good agreement with QCD expectations for the total rate, transverse-momentum dependence and invariant-mass distribution. Tevatron studies support a top-quark charge of +2/3, and show that the tbW interaction is left-handed. Approximately 70% of the W bosons emitted in top decay are longitudinally polarized, while the rest are left-handed. The top-quark lifetime is close to 0.3 yoctosecond (10⁻²⁴ s), as electroweak theory anticipates. Because top decays before hadronizing, it can be studied as a bare quark. Up to this point, exploratory studies of spin correlations among the tt̄ decay products are in accord with the Standard Model. Both experiments have observed a forward-backward production asymmetry that is considerably larger than the Standard Model predictions, as currently understood. This tantalizing result – which could point to new physics – challenges theorists to create more robust, detailed and credible simulations of the Standard Model.
Important information about the weak interactions of top comes from the detection of single-top production through the decay of a virtual W boson or the interaction of an exchanged W boson with a b quark. Using an array of multivariate analysis techniques, CDF and DØ have observed single-top production at a rate consistent with the Standard Model. The DØ collaboration has succeeded in isolating the t-channel exchange process. These measurements allow a determination of the strength of the tbW weak coupling that is consistent with the Standard Model prediction of a value near unity, as well as with other indications that t → bW is the dominant decay mode of the top quark.
Higgs and other new phenomena
The search for the Standard Model Higgs boson is the ultimate challenge for the Tevatron. The straightforward strategy – to detect a light Higgs boson produced in gluon–gluon fusion that decays into the dominant bb̄ mode – is foreclosed by the overwhelming rate of b-quark pair production by the strong interactions. Thus CDF and DØ have had to seek signals in several production channels and decay modes, as well as master many sources of background. Current searches consider gluon–gluon fusion, the associated production of a Higgs boson with a W or Z boson, and vector-boson fusion. The decay modes examined are bb̄, W⁺W⁻, ZZ, γγ and τ⁺τ⁻.
So far, the Tevatron experiments have given information on where the Standard Model Higgs boson is not. The combined analyses of summer 2011, based on up to 8.6 fb⁻¹ of data, exclude Standard Model Higgs-boson masses between 156 and 177 GeV, as shown in figure 6 (The Tevatron New-Phenomena and Higgs Working Group 2011). Parallel work has restricted the allowed parameter space for the lightest Higgs boson of supersymmetric models. According to projections informed by current experience, the full Tevatron data-set should yield 95% confidence-level exclusion limits up to 185 GeV – should no signal be present – and “evidence” at the 3σ level below 120 GeV and in the range 150–175 GeV.
During more than two decades as the world’s highest-energy machine, the Tevatron has had unparalleled capability to search for direct manifestations of physics beyond the Standard Model. Broad explorations and searches for specific hypothetical phenomena have been major activities for the experiments. The Tevatron constraints on conjectured extensions to the Standard Model are impressive in number and scope: CDF and DØ have set limits on supersymmetric particles, many varieties of extra spatial dimensions, signs of new strong dynamics, carriers of new forces of nature, magnetic monopoles and many more exotica. The null searches compel us to contemplate with greater intensity the unreasonable effectiveness of the Standard Model.
To be sure, some observations do not square with conventional expectations. In addition to the suggestion of a larger-than-foreseen forward-backward asymmetry in top-pair production noted above, it is worth mentioning two other surprising effects now in play. DØ reports an anomalous like-sign dimuon charge asymmetry in semileptonic decays of bb̄ pairs that suggests unexpectedly large CP violation in the decays of b-hadrons. CDF sees a yield of jet pairs in association with a W boson that exceeds expectations in the dijet mass interval between 120 and 160 GeV. DØ does not confirm the excess, but the degree of disagreement remains to be quantified. We should find out soon, from further work at the Tevatron and from new analyses at the LHC, whether any of these results holds up and changes our thinking.
The LHC is enjoying a confluence of twos. On 5 August the total integrated luminosity delivered in 2011 passed 2 fb⁻¹; the peak luminosity has risen to over 2 × 10³³ cm⁻²s⁻¹; and fill number 2006 lasted for 26 hours, delivering an integrated luminosity of 100 pb⁻¹.
Following the period of machine development that started at the end of June, the decision was taken to continue running with 50 ns bunch spacing and the maximum of 1380 bunches. Increases in luminosity must come from increasing the number of protons per bunch, or decreasing the transverse beam size at the interaction point. The size of the beams coming from the injectors has now been reduced to the minimum possible, bringing an increase in the peak luminosity of about 50%.
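These trade-offs are visible in the standard luminosity formula for Gaussian beams, L = f_rev n_b N² F / (4π σ*²). The sketch below plugs in 2011-like values; the bunch intensity, emittance, β* and geometric factor are illustrative assumptions rather than the machine's logged settings.

```python
import math

F_REV = 11245.0     # LHC revolution frequency, Hz
N_BUNCHES = 1380    # bunches per beam at 50 ns spacing
N_PROTONS = 1.3e11  # protons per bunch (assumed)
GAMMA = 3730        # Lorentz factor for 3.5 TeV protons
EPS_N = 2.0e-6      # normalized emittance, m*rad (assumed)
BETA_STAR = 1.5     # beta function at the interaction point, m (assumed)
F_GEOM = 0.8        # crossing-angle reduction factor (assumed)

sigma_star_sq = EPS_N * BETA_STAR / GAMMA  # transverse beam size squared, m^2
L = F_REV * N_BUNCHES * N_PROTONS**2 * F_GEOM / (4 * math.pi * sigma_star_sq)

print(f"L = {L * 1e-4:.1e} cm^-2 s^-1")  # ~2e33, matching the quoted peak
# Luminosity grows as N^2 per bunch but only linearly as beam size shrinks,
# which is why brighter injector beams pay off so handsomely.
```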
At its recent session in June, the CERN Council approved the construction of the Extra Low ENergy Antiproton ring (ELENA) – an upgrade of the existing Antiproton Decelerator (AD). ELENA will allow the further deceleration of antiprotons, resulting in an increased number of particles trapped downstream in the experiments. This will give an important boost to antimatter research in the years to come.
The recent successes of the AD experiments are just the latest in a long list of important scientific results with low-energy antiprotons at CERN that started in the 1980s with the Low Energy Antiproton Ring (LEAR). Over the years, the scientific demand for antiprotons at the AD has continued to grow. There are now four experiments running there (ATRAP, ALPHA, ASACUSA and ACE). A fifth, AEGIS, has been approved and will take beam for the first time at the end of the year; further proposals are also under consideration. The AD is approaching the stage where it can no longer provide the number of antiprotons needed. As antihydrogen studies evolve into antihydrogen spectroscopy and gravitational measurements, the shortage will become even more acute.
The solution is a small ring of magnets that will fit inside the current AD hall – in other words, ELENA, the recently approved upgrade. ELENA will be a 30 m-circumference decelerator that will slow down the 5.3 MeV antiprotons from the AD to an energy of only 100 keV. Receiving slower antiprotons will help the experiments to improve their efficiency in creating antimatter atoms.
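Some quick relativistic kinematics shows what this deceleration means for the beam, and why a compact ring of modest magnets suffices. A minimal sketch:

```python
import math

M_P = 938.272  # proton (= antiproton) mass, MeV/c^2

def kinematics(t_kin_mev):
    """Speed and magnetic rigidity of an antiproton of given kinetic energy."""
    gamma = 1 + t_kin_mev / M_P
    beta = math.sqrt(1 - 1 / gamma**2)
    p_mev = gamma * beta * M_P          # momentum, MeV/c
    rigidity = (p_mev / 1000) / 0.2998  # B*rho in T*m, with p in GeV/c
    return beta, rigidity

for label, t in (("AD ejection, 5.3 MeV", 5.3), ("ELENA, 100 keV", 0.1)):
    beta, brho = kinematics(t)
    print(f"{label}: beta = {beta:.4f}, B*rho = {brho:.3f} T*m")
# From ~0.106c down to ~0.015c; the rigidity drops to ~0.05 T*m, which is
# why a 30 m ring with small bending magnets can hold the beam.
```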
Currently, around 99.9% of the antiprotons produced by the AD are lost because of the experiments’ use of degrader foils, which are needed to decelerate the particles from the AD ejection energy down to around 5 keV – the energy needed for trapping. ELENA will increase the experiments’ efficiency by a factor of 10–100, as well as offering the possibility of an extra experimental area.
The new ring will be located such that its assembly and commissioning will have a minimal impact on operation of the AD. Indeed, the commissioning of the ELENA ring will take place in parallel with the current research programme, with short periods dedicated to commissioning during the physics run. The layout of the experimental area at the AD will not be significantly modified, but the much lower beam energies involved require the design and construction of completely new electrostatic transfer lines.
The construction of ELENA should begin in 2013 and first physics should follow about three years later. The initial phase of the work will include the installation and commissioning of the ELENA ring while using the existing AD beam lines. The old ejection lines in all of the experimental areas will then be replaced with new electrostatic beam lines that will deliver antiprotons at the design energy of 100 keV. In its final configuration, ELENA will be able to deliver beams almost simultaneously to four experiments, resulting in a vital gain in total beam time.
The International Linear Collider (ILC) Global Design Effort (GDE) has released a major milestone report, The International Linear Collider: A Technical Progress Report. As its title suggests, the 162-page report documents the current status of the global R&D co-ordinated by the GDE. Coming roughly half way through the ILC Technical Design Phase, it records the considerable progress that has been made worldwide towards a robust and technically mature design of a 500–1000 GeV electron–positron linear collider. With a stated five-year programme for the technical design phase, the GDE felt it necessary to have a significant mid-term publication milestone to bridge the gap between the publication of the Reference Design Report (RDR) in 2007 and that of the foreseen Technical Design Report (TDR) in 2012. Because much of the R&D referred to in the report is still ongoing, it necessarily represents a snapshot of the current situation.
The focus of the progress report is on the co-ordinated worldwide “risk-mitigating” R&D that was originally identified at the time of the RDR publication. Although the report is comprehensive in covering nearly all areas of R&D, it has a strong focus on the development of the 1.3 GHz superconducting RF accelerating technology – the heart of the linear-collider design. A large fraction of the total resource available has been used to develop the necessary worldwide infrastructure and expert base in this technology, which includes research into high-gradient superconducting cavities as well as a focus on industrialization and mass-production models for this state-of-the-art technology. A further focus is on the three beam-test facilities: TTF/FLASH at DESY Hamburg, for the superconducting RF linac; the CesrTA facility at Cornell, for damping-ring electron-cloud R&D; and ATF/ATF2 at KEK, for final-focus optics, instrumentation and beam stabilization. Finally, the report also describes work towards the ILC TDR baseline design and, in particular, the conventional facilities and siting activities.
The technical progress report will serve as a solid base for the production of the final report on the technical design phase R&D, which will be part of the TDR. Some 350 authors from more than 40 institutes around the globe have contributed to its successful publication. Now attention is already turning to producing the TDR – work that will formally start at the joint ILC-CLIC workshop being held in Granada in September.
• The report, which is available online at www.linearcollider.org/interim-report, is the first of two volumes; a second volume, to be released soon by the ILC Research Directorate, will focus on the ILC scientific case and on the design of the detectors associated with the collider.
PAMELA – the Payload for Antimatter Matter Exploration and Light nuclei Astrophysics – was launched into space on 15 June 2006 aboard a Soyuz rocket from Baikonur in Kazakhstan. Since then, it has been orbiting the Earth, mounted on the upward-facing side of the Resurs-DK1 satellite at an altitude that varies between 350 km and 610 km. On board are different types of detector (figure 1) comprising: a magnetic spectrometer, based on a neodymium-iron-boron permanent magnet and a precision tracking system; a sampling imaging calorimeter, in which pairs of orthogonal millistrip silicon sensor planes are interleaved with tungsten absorber plates; a precise time-of-flight system, using plastic scintillation detectors; an anticoincidence system; and a neutron detector.
The experimental apparatus was designed to provide precise measurements of the particle and nuclei fluxes in the cosmic radiation over a wide energy range. It is sensitive to antiprotons between 80 MeV and 190 GeV, positrons between 50 MeV and 270 GeV, electrons up to 600 GeV, protons up to 1 TeV and nuclei up to a few hundred giga-electron-volts. In addition, in the search for antinuclei PAMELA has a sensitivity of about 10⁻⁷ in the antihelium to helium ratio.
The experiment’s scientific objectives are ambitious and aim to clarify some of the trickiest questions of modern physics: the origin of cosmic rays, their energy spectrum, their antimatter components and particles possibly originating in the annihilation of dark matter particles. With data accumulated over several years, the mission, which is scheduled to finish at the end of the year, is now providing new insights into some of these questions and more.
Cosmic revelations
In 2009, the PAMELA collaboration published evidence of an anomalous positron abundance in cosmic rays with energies between 1.5 and 100 GeV. By contrast, as figure 2 shows, the antiproton flux they observe agrees with standard secondary antiproton production in the Galaxy (Adriani et al. 2010). These results were followed more recently by the publication of precision measurements of the proton and helium spectra in the rigidity range 1 GV to 1.2 TV. The proton and helium spectra have different shapes and, moreover, cannot be described by a single power law, as would be expected from previous observations and from the theoretical models adopted so far. Also, while the spectra of protons and helium gradually soften in the rigidity range 30–230 GV, they both show a hardening at 230–240 GV. Previous experiments did not have the statistical and systematic precision to show this behaviour, although an indirect indication was derived by comparing the results from a range of balloon-borne experiments (JACEE, CREAM and BESS) as well as from the first trial flight of the Alpha Magnetic Spectrometer in 1998.
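The statement that a single power law fails can be made quantitative with a spectral model whose index changes at a break rigidity. A minimal, hypothetical parametrization of this kind (not the collaboration's actual fit function, and with illustrative parameter values) looks like the following:

```python
def broken_power_law(R, norm, gamma1, gamma2, R_break):
    """Flux vs. rigidity R (GV): index gamma1 below the break, gamma2 above,
    kept continuous at R_break."""
    if R < R_break:
        return norm * R ** -gamma1
    return norm * R_break ** (gamma2 - gamma1) * R ** -gamma2

# Illustrative values only: a softer spectrum below ~235 GV, harder above.
for R in (50, 100, 235, 500, 1000):
    flux = broken_power_law(R, norm=1.0, gamma1=2.85, gamma2=2.67, R_break=235)
    print(f"R = {R:>5} GV -> relative flux {flux:.3e}")
```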
So far, supernovae have been considered to be the sites of cosmic-ray acceleration. However, the discrepancies found by PAMELA in the proton and helium spectra have prompted a re-evaluation of the processes that underlie the acceleration, as well as the propagation of cosmic rays. Similar conclusions were drawn from PAMELA’s results on the positron abundance. Theoretical explanations of these data invoke more complex processes of acceleration and propagation, as well as possible contributions from new astrophysical sources, such as pulsars or more exotic ones, such as dark matter.
Conventional diffusive propagation models can, on the other hand, be used to interpret recently published PAMELA data on the electron component of the cosmic radiation. Precision measurements of the electron flux provide information about the origin and propagation of cosmic rays in the Galaxy that is not accessible through the study of the cosmic-ray nuclear components, because the energy-loss processes differ. PAMELA collected data between July 2006 and January 2010, selecting electrons in the energy interval 1–625 GeV. This is the largest energy range covered by any cosmic-ray electron measurement so far, and the first time that electrons above 50 GeV have been identified in cosmic rays by the sign of their charge, rather than inferred from a combined electron-plus-positron flux.
The collaboration derived the electron spectrum in two independent ways – using either the calorimeter or the tracking information – and the two sets of measurements show good agreement within the statistical errors. Figure 3 shows a typical electron event with a track and energy deposited in the calorimeter. The electron spectrum, although in substantial agreement with the results of other recent experiments, in particular the balloon-borne Advanced Thin Ionization Calorimeter (ATIC) and the Fermi Gamma-Ray Space Telescope, appears softer than the e⁻ + e⁺ spectra they measure. This difference is within the systematic uncertainties between the various measurements, but it is also consistent with a positron component that increases with energy.
Solar events
PAMELA has also measured solar-particle events and their temporal evolution during the major solar emissions of 13–14 December 2006 (figure 4). This was the first direct measurement by a single instrument of proton and helium nuclei of solar origin in a large energy range between 100 MeV/n and 3 GeV/n (Adriani et al. 2011b). The data show a spectral behaviour that is different from those derived from the neutron monitor network, with no satisfactory analytical description fitting the measured spectra. This implies the presence of complex, concurrent acceleration and propagation processes at the Sun and in interplanetary space. Modelling the solar-particle events is also relevant for future manned missions to the Moon and Mars.
Over the past five years, PAMELA has continuously monitored solar activity during an unusually long-lasting solar minimum, followed by – as of the end of December 2009 – a slow increase of activity, probably marking the beginning of the new solar cycle. This particularly favourable situation is providing the collaboration with an excellent opportunity to study heliospheric effects and underlines the major role that the experiment has in providing unique information about the nature of the cosmic rays at the scale of giga-electron-volts in the heliosphere. By combining data from PAMELA and the ULYSSES space mission, the PAMELA collaboration has also performed a new evaluation of the spatial dependence of cosmic-ray intensities in the heliosphere, with an accurate measurement of the radial and longitudinal gradients (De Simone et al. 2011).
Many new results from PAMELA were presented recently at the 2011 European Physical Society Conference on High-Energy Physics in Grenoble on 21–27 July and at the International Cosmic Ray Conference in Beijing on 11–18 August, as well as at other conferences. These results concern mainly new data on the electron/positron ratio, the absolute flux of positrons up to 100 GeV, fluxes and ratios of light nuclei, the abundance of hydrogen and helium isotopes, as well as new limits on the antihelium to helium ratio. The new results confirm earlier findings and also extend the energy range and precision of the data. One interesting feature concerns the change in slope of the positron flux above 20 GeV, as shown in figure 5, which also includes the electron spectrum. Exclusion limits on the existence of new sorts of matter, such as strangelets, are also in the pipeline. The latest interesting PAMELA result concerns the discovery of a radiation belt around the Earth containing trapped antiprotons (Adriani et al. 2011c).
Although all the instrumentation aboard PAMELA is working well, the mission is expected to finish at the end of this year. The collaboration will then continue to work for another two years to analyse all of the data collected and improve the statistics.
• PAMELA was constructed by the WiZard collaboration, which was originally formed around Robert Golden, who first observed antiprotons in space. There are now 14 institutions involved. Italian INFN groups in Bari, Florence, Frascati, Naples, Rome Tor Vergata and Trieste, and groups from CNR, Florence and the Moscow Engineering and Physics Institute form the core. They are joined by groups from The Royal Institute of Technology (KTH) in Sweden, Siegen University in Germany and Russian groups from the Lebedev Institute, Moscow, and the Ioffe Institute, St Petersburg.
Could the internet one day wreak the same sort of social change on the world of science, breaking down the distinction between amateur and professional? In the world of high-energy physics, that might seem unlikely. What amateur can really contribute something substantial to, say, the analysis of LHC data? Yet in many fields of science, the scope for amateur contributions is growing fast.
Modern astronomy, for example, has a long tradition of inspired amateur contributions, such as spotting comets or supernovae. Now, the internet has broadened the range of tasks that amateurs can tackle. For example, the project GalaxyZoo, led by researchers at the University of Oxford, invites volunteers to participate in web-based classification of galaxy images. Such pattern recognition is a task where the human mind still tends to outperform computer algorithms.
Not only can astronomers attract hundreds of thousands of free and eager assistants this way, but occasionally those helpers can themselves make interesting discoveries. This was the case for a Dutch school teacher, Hanny van Arkel, who spotted a strange object in one of the GalaxyZoo images that had stumped even the professional astronomers. It now bears the name “Hanny’s Voorwerp”, the second word meaning “object” in Dutch.
GalaxyZoo is just one of many volunteer-based projects making waves in astronomy. Projects such as Stardust@home, Planet Hunters, Solar Watch and MilkyWay@home all contribute to cutting-edge research. The Einstein@Home project uses volunteer computing power to search for – among other things – pulsar signals in radio-astronomy data. Run by researchers at the Max Planck Institute for Gravitational Physics, the project published its first discoveries in Science last year, acknowledging the names of the volunteers whose computers had made each discovery.
Crowdsourcing research
However, it is in fields outside those traditionally accessible to amateurs where some of the most impressive results of citizen-powered science are beginning to be felt. Consider the computer game FoldIt, in which players compete to fold protein molecules into their lowest-energy configuration. Humans routinely outperform computers at this task, because the human mind is uniquely adept at such spatial puzzles; and teenagers typically out-compete trained biochemists. What the scientists behind the FoldIt project, based at the University of Washington, have also discovered is that the players were spontaneously collaborating to explore new folding strategies – a possibility the researchers had not anticipated. In other words, the amateur protein folders were initiating their own research programme.
Could high-energy physics also benefit from this type of approach? Peter Skands, a theorist at CERN, thinks so. He has been working with colleagues on a project about fitting models to LHC data, where delicate tuning of the model parameters by eye can help the physicists achieve the best overall fit. Experience with a high-school intern convinced Skands that even people not versed in the gory details of LHC physics could solve this highly visual problem efficiently.
Volunteers can already contribute their processor time to another project that Skands is involved in – simulating collisions in the LHC for the recently launched LHC@Home 2.0 project, where 200 volunteers have already simulated more than 5 billion collision events. Such volunteer computing projects, like Einstein@Home, are not as passive as they might appear. Many of the volunteers have spent countless hours helping developers in the early alpha-test stages of the project by providing detailed bug reports. Message boards and a credit system for the amount of processing completed – features provided by an open-source platform called BOINC – add elements of social networking and gaming to the project.
The LHC@Home 2.0 project also relies on CernVM, a virtual machine technology developed at CERN that enables complex simulation code to run easily on the diverse platforms provided by volunteers. Running fully fledged physics simulations for the LHC on home computers – a prospect that seemed technically impossible when the first LHC@home project was introduced in 2004 to simulate proton-beam stability in the LHC ring – now has the potential to expand significantly the computing resources for the LHC experiments. Projects like LHC@home typically draw tens of thousands of volunteers and their computers, a significant fraction of the estimated 250,000 processor cores currently supporting the four LHC experiments.
A humanitarian angle
LHC@home 2.0 is an example of a project that has benefited from the support of the Citizen Cyberscience Centre (CCC), which was set up in 2009 as a partnership between CERN, the UN Institute for Training and Research (UNITAR) and the University of Geneva. A major objective of the CCC is to promote volunteer computing and volunteer thinking for researchers in developing regions, because this approach effectively provides huge resources to scientists at next to no cost. Such resources can also be used to tackle pressing humanitarian and development challenges.
One example is the project Computing for Clean Water, led by researchers at Tsinghua University in Beijing. The project was initiated by the CCC with the sponsorship of a philanthropic programme run by IBM, called World Community Grid. The goal is to simulate how water flows through carbon nanotubes and explore the use of arrays of nanotubes for low-cost water filtration and desalination. The simulations would require thousands of years on a typical university computing cluster but can be done in just months using volunteer-computing resources aggregated through World Community Grid.
Another example is volunteer mapping for UNOSAT, the operational satellite-applications programme of UNITAR, which is based at CERN. Although a range of crowd-based mapping techniques is available these days, using satellite images to assess accurately the extent of damage in regions devastated by war or natural disasters is not trivial, even for experts. However, rapid and accurate assessment is vital for humanitarian purposes: for estimating reconstruction costs and for mobilizing the international community and NGOs quickly.
With the help of researchers at the University of Geneva and HP Labs in Palo Alto, UNOSAT is testing new approaches in crowdsourcing damage assessment by volunteers. These involve using statistical approaches to improve accuracy, as well as models inspired by economics where volunteers can vote on the quality of others’ results.
There are hundreds of citizen-cyberscience projects engaging millions of volunteers, but the vast majority support researchers in industrialized countries. A large part of the CCC’s activities therefore involves raising awareness in developing regions. With the support of the Shuttleworth Foundation in South Africa, the CCC has been organizing a series of “hackfests”: two-day events where scientists, software developers and citizen enthusiasts meet to build prototypes of new citizen-based projects, which the scientists can then go on to refine. Hackfests have already taken place in Beijing, Taipei, Rio de Janeiro and Berlin, with more planned this year in South Africa and India.
The topics covered to date include: using mobile-phone Bluetooth signals as a proxy for bacteria, to track how airborne diseases such as tuberculosis spread in buildings; monitoring earthquakes using the motion sensors built into laptop computers; and digitizing tables of economics data from government archives. Because the “end-users” – the citizen volunteers themselves – participate in the events, there is a healthy focus on making projects as accessible and attractive as possible, so that even more volunteers sign up and stay active.
At such events, when asked what sort of rewards the most engaged volunteers might appreciate for their online efforts, one striking response – echoed on several occasions – is the opportunity to make suggestions to the scientists about the course of their future research. In other words, there is a desire on the part of volunteers to be involved more actively in the process that defines what science gets done. The volunteers who propose this are quite humble in their expectations – they understand that not every idea they have will be useful or feasible. Whether scientists will reject this sort of offer of advice as unwanted interference, or embrace the potentially much larger brainpower that informed amateurs could provide, remains to be seen. But the sentiment is clear: in science, as in journalism, the audience wants to be part of the show.
In June the LHC made good the promise of delivering an integrated luminosity of 1 fb⁻¹ to the general-purpose detectors, ATLAS and CMS. This was the target for 2011 and it was achieved a little before the middle of the year. At the same time, making use of a technique known as “luminosity levelling”, the LHCb experiment had already recorded around 0.36 fb⁻¹, well on the way to achieving its 1 fb⁻¹ by the end of the year.
Reaching the luminosity milestone was the result of a programme of steady increments in the number of bunches of protons injected into the LHC, with 144 bunches added per beam per step. With each increment, the LHC provides three long “fills” of stable beams before the next step. By the end of May, the number of bunches reached 1092 per beam, providing a peak luminosity of 1.25 × 10³³ cm⁻²s⁻¹ and a total energy per beam of some 70 MJ. A few long-lived fills soon yielded more than 40 pb⁻¹ at a time for the general-purpose detectors, nearly as much as the LHC delivered in all of 2010, so allowing the 2011 milestone to be reached by 17 June.
The step to 1236 bunches per beam followed on 24 June, with the successful increment only four days later to 1380 bunches per beam – the maximum for the current bunch spacing of 50 ns. Running during these last few days of June included one epic fill that was 19 hours long and delivered an integrated luminosity of 60 pb⁻¹.
At the same time, the technique of luminosity levelling has been employed to deliver a peak luminosity to the LHCb experiment of about 3 × 10³² cm⁻²s⁻¹. If the beams were allowed to collide head-on in the LHCb detector, this figure would be exceeded, so the beams are initially separated by about 15 μm in the vertical plane. Then, as the beam intensity decays during a fill, this separation is gently reduced to keep the luminosity constant at the acceptable maximum. The LHCb experiment has more specialized physics goals than ATLAS and CMS, and was designed to run at lower luminosity and low multiplicity, processing just one proton–proton interaction per bunch crossing. The decision to increase the bunch intensity in the LHC before increasing the number of bunches, as well as the excellent performance of the detectors, has inspired the collaboration to run with as many as six interactions per crossing. The successful implementation of luminosity levelling means that the physics output of LHCb can be maximized while staying within the limits of peak luminosity that the detector can handle.
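For two equal round Gaussian beams, a transverse offset d reduces the head-on luminosity by a factor exp(−d²/4σ²). The sketch below inverts that relation to find the separation that holds luminosity at a target value as the head-on luminosity decays during a fill; the beam size used is an assumed, illustrative number, not the actual LHCb optics value.

```python
import math

SIGMA = 40e-6  # assumed transverse rms beam size at the LHCb collision point, m

def levelling_offset(l_headon, l_target):
    """Beam separation that reduces luminosity from l_headon to l_target.

    Uses L(d) = L(0) * exp(-d^2 / (4 sigma^2)) for equal Gaussian beams.
    """
    if l_headon <= l_target:
        return 0.0  # collide head-on; levelling no longer needed
    return 2 * SIGMA * math.sqrt(math.log(l_headon / l_target))

target = 3e32  # cm^-2 s^-1, the levelled LHCb luminosity
for l0 in (6e32, 4.5e32, 3.5e32, 3e32):
    d = levelling_offset(l0, target)
    print(f"head-on {l0:.1e} -> separation {d * 1e6:5.1f} um")
# As the bunch intensity decays, the required separation shrinks to zero.
```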
As the total beam intensity of the LHC has been pushed up, the operators have encountered various problems, such as the “unidentified falling objects” (UFOs). These are thought to be dust particles falling through the beam, causing localized beam loss. The losses can push nearby beam-loss monitors over the threshold to dump the beam. This is more of an annoyance than a danger for the LHC, but it does reduce the operational efficiency.
A period of machine development began on 29 June, in which the operators made several investigations for further improvements in the LHC’s performance, including the next steps towards higher beam intensities. One test involved the successful injection of trains of 24 bunches with 25 ns spacing, with up to 216 bunches injected. In other tests, bunches at 50 ns spacing were filled to twice the nominal intensity, individual bunches reaching 2.7 × 10¹¹ protons – the highest bunch intensity achieved so far. These studies thus offer different paths to higher luminosities in the LHC.
Meanwhile, with the bumper crop of data already in hand, the LHC experiments are now working hard to get results ready for the main summer physics conferences: the European Physical Society’s High Energy Physics conference, being held in Grenoble on 21–27 July, and the Lepton-Photon conference, this year hosted by the Tata Institute in Mumbai on 22–27 August.
RIKEN and the Japan Synchrotron Radiation Research Institute (JASRI) have successfully produced a beam of X-ray laser light with a wavelength of 0.12 nm. This was created using the SPring-8 Angstrom Compact free electron LAser (SACLA), a cutting-edge X-ray free-electron laser (XFEL) facility unveiled by RIKEN in February 2011 in Harima. It opens a window into the structure of atoms and molecules at a level of detail never seen before.
One of only two facilities in the world to offer this novel light source, SACLA has the capacity to deliver radiation one billion times brighter, and with pulses one thousand times shorter, than other existing X-ray sources. In late March, the facility marked its first milestone, accelerating its electron beam to 8 GeV and producing spontaneous X-rays at 0.08 nm.
Only three months later, SACLA has marked a second milestone. On 7 June, operators successfully increased the density of the electron beam by several hundred times and guided it with a precision of several micrometres to produce a bright X-ray laser with a wavelength of only 0.12 nm (a photon energy of 10 keV). This matches the record of 0.12 nm set at the only other operational XFEL facility in the world, the Linac Coherent Light Source at SLAC.
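The quoted wavelength-energy pairing follows directly from E = hc/λ, with hc ≈ 1.2398 keV·nm. A one-line check:

```python
HC_KEV_NM = 1.2398  # Planck constant times speed of light, keV*nm

def photon_energy_kev(wavelength_nm):
    return HC_KEV_NM / wavelength_nm

print(f"{photon_energy_kev(0.12):.1f} keV")  # ~10.3 keV, the quoted '10 keV'
```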
With experiments soon to commence and user operations at the facility to begin by the end of fiscal year 2011, this new record offers a taste of things to come with SACLA’s powerful beam.