The public launch in August of a new application for CERN’s volunteer-computing platform LHC@home produced an overwhelming response. The application Test4Theory, which runs Monte Carlo simulations of events in the LHC, was announced in a CERN press release on 8 August. Within three days, the number of registered volunteers swelled from a few hundred to nearly 8000. The application joins SixTrack, an accelerator beam-dynamics tool that has been used for LHC machine studies at CERN since 2004 and is now being prepared and extended in collaboration with the École polytechnique fédérale de Lausanne for studies of the LHC and its upgrade.
Given that the new application requires participants to install a virtual machine on their computer – not a trivial task – the level of enthusiasm is impressive. So, to avoid saturating the server that manages the project, there is now a waiting list for new participants. With the volunteer computing power at hand, nearly 20 billion events have already been simulated. According to CERN’s Peter Skands, the physicist leading the simulation effort, when the number of active volunteers passes 40,000 – which could happen later this year – the system will become equivalent to a true “virtual collider”, producing as many collisions per second as the real LHC.
Running part of a “virtual LHC” on their computers is clearly appealing to those who join LHC@home. The volunteers have not only dedicated a great deal of computing time to the project, but in many cases also provided expert assistance in debugging some of the software and managing the discussion forums that are part and parcel of a successful online citizen-science project.
The LHCb collaboration’s presentation at Lepton Photon 2011 included one of the most eagerly awaited measurements in flavour physics: the CP-violating phase in Bs–B̄s mixing. This is the counterpart of sin 2β in the B0 system, which was measured by the B-factory experiments BaBar and Belle using the channel B0 → J/Ψ KS. They provided the first measurement of CP violation in B0 mixing, which is both large and now well measured, with sin 2β = 0.69 ± 0.02. In contrast, the Standard Model prediction for φs, the corresponding phase for the Bs meson, is extremely small and precise: φs = 0.036 ± 0.002 rad (Charles et al. 2005). It is therefore an interesting place to search for physics beyond the Standard Model, which could enhance the value. Time-dependent analyses of Bs mesons were not accessible at the B factories, so this remained a key measurement for hadron machines, first at the Tevatron and now at the LHC.
The golden mode for this study is Bs → J/Ψ φ, where the J/Ψ decays to μ+μ– and the φ decays to K+K–. The measurement is very challenging: the final state is not a pure CP eigenstate, so an angular analysis has to be made to separate the CP-even and CP-odd components. In addition, the fast Bs–B̄s oscillation necessitates precise vertex reconstruction, and tagging of the production state (whether it was a Bs or a B̄s) is also important. The result for φs is correlated with another quantity in the fit, ΔΓs, the difference in width of the two Bs mass eigenstates. (It is the mass difference of these two states that determines the oscillation frequency.) ΔΓs can be positive or negative, but in the Standard Model it is predicted to be 0.087 ± 0.021 ps–1 (Lenz and Nierste 2011). The uncertainties on φs and ΔΓs are correlated and, furthermore, the fit turns out to be insensitive to the simultaneous replacement φs → π – φs and ΔΓs → –ΔΓs, so there are two ambiguous solutions. As a result, the measurements are usually plotted as contours in the φs vs ΔΓs plane.
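The origin of the two-fold ambiguity can be seen from the schematic untagged decay rate for a CP eigenstate f with CP eigenvalue η_f (a textbook form, not the full LHCb fit model):

$$\Gamma(t) \;\propto\; e^{-\Gamma_s t}\left[\cosh\frac{\Delta\Gamma_s t}{2} \;-\; \eta_f \cos\phi_s \,\sinh\frac{\Delta\Gamma_s t}{2}\right].$$

Under φs → π – φs the factor cos φs changes sign; taking ΔΓs → –ΔΓs at the same time flips the sign of the sinh term back, leaving the rate unchanged. The tagged term, proportional to sin φs sin(Δms t), is also invariant, because sin(π – φs) = sin φs.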
The CDF and DØ experiments at the Tevatron made the first measurements. Their early results agreed with each other and appeared, when combined, to indicate a large value for φs, about 3σ away from the Standard Model expectation. More recent updates have moved their preferred values somewhat closer to the Standard Model, but a hint of a possible discrepancy remained, as shown by the red and green contours in figure 1 (Burdin and DØ 2011, CDF 2010).
LHCb has now accumulated the largest sample of Bs → J/Ψ φ decays in the world, over 8000 signal candidates with very high purity (figure 2). The resulting constraint is shown as the blue contour in figure 1 (LHCb 2011a). It is much more precise than the preceding measurements, with one of the two solutions being in good agreement with the Standard Model expectation – the hint of a discrepancy is not confirmed. This result also gives the first significant direct measurement of ΔΓs, 0.123 ± 0.029 ± 0.008 ps–1, where the first uncertainty is statistical and the second systematic.
Another related analysis presented by LHCb uses a different decay mode, Bs → J/Ψ f0, which should measure the same phase. Although the statistics are lower, the final state in this case is CP-odd, so the analysis is simpler (LHCb 2011b). It gives a result consistent with that from Bs → J/Ψ φ, and the preliminary combined result from LHCb is φs = 0.03 ± 0.16 ± 0.07 rad (LHCb 2011c). This result is statistically limited, but as data continue to pour in from the LHC there are good prospects for substantial further improvement. So, although LHCb has now ruled out a gross effect from new physics, the experiment should be able to measure the true value even if it is as small as predicted in the Standard Model – and so test for any subtle effects of new physics.
Measurements of top-quark properties were among the many new results shown by the ATLAS collaboration at Lepton Photon 2011. The very large mass of this quark relative to the others leads many physicists to believe that it plays a special role in physics beyond the Standard Model.
At the luminosity recently achieved at the LHC, a top quark is produced on average approximately every second. Because of the large number of top quarks produced and the excellent detector performance, the ATLAS experiment is able to measure precisely the quark’s properties, thereby providing stringent tests of the Standard Model as well as probing for the subtle effects of new physics. So far all measurements are consistent with the Standard Model, but further data will bring increased precision and with it greater sensitivity to new effects.
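The one-per-second figure follows directly from the cross-section and the luminosity. A rough estimate, assuming a peak luminosity of about 2 × 10³³ cm⁻²s⁻¹ (typical of the LHC in mid-2011) and a top-pair cross-section of about 180 pb, as measured by ATLAS (quoted below):

$$R \;=\; \sigma_{t\bar{t}}\,\mathcal{L} \;\approx\; (1.8\times10^{-34}\ \mathrm{cm^2})\times(2\times10^{33}\ \mathrm{cm^{-2}\,s^{-1}}) \;\approx\; 0.4\ \mathrm{pairs/s},$$

that is, nearly one top quark per second, since each pair contains two.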
ATLAS has measured the production cross-section of top pairs in the single-lepton decay channel to be 179 ± 12 pb. This precision of 7% is better than the uncertainty on the theoretical prediction, providing an excellent testing ground for perturbative QCD. A combination of the measurements in the different channels will further increase the precision.
Electroweak production of single top quarks is sensitive to the element Vtb of the quark-mixing matrix and also to a potential flavour-changing-neutral current component in top-quark couplings. ATLAS has measured top production in the t-channel – first measured only a couple of years ago at the Tevatron – with a significance exceeding 7σ. ATLAS has also placed limits on the production of single top in the s-channel and the Wt final state, laying the groundwork for the eventual measurement of these processes.
As a result of the excellent calibration of the detector, ATLAS has also measured precisely the mass of the top quark at 175.9 GeV, with a total uncertainty of just 2.8 GeV. This precise measurement, together with the W mass and electroweak radiative corrections, implies that the Higgs boson is lurking at low mass – if it is indeed a Standard Model Higgs.
ATLAS has further probed for new physics with the most precise measurements to date of the fraction of longitudinally polarized W bosons in the decay of top quarks and of the degree of spin correlation. The results of these measurements are consistent with Standard Model expectations, as are those of production asymmetries similar to those recently reported to be anomalous at the Tevatron.
The CMS search for the Higgs boson is being carried out using a range of decay products: two photons; two τ leptons; two b quarks; two W bosons; and two Z bosons. Analysing all of these channels ensures that the search is sensitive to the Higgs boson irrespective of its mass. The CMS collaboration presented the first results from a combination of Higgs searches in these channels at the EPS-HEP 2011 conference in Grenoble at the end of July. For Lepton Photon 2011, held in Mumbai a month later, they were able to update several key analyses, using additional data collected during the summer.
The CMS results presented in Mumbai were based on data-sets corresponding to 1.1–1.7 fb–1 (integrated luminosity), depending on the channel. The figure shows the result of all of the search channels combined. It indicates that CMS observes no convincing excess of events in the explored mass range of 110–600 GeV.
The analysis excludes, at a confidence level (CL) of 95%, the existence of a Standard Model Higgs boson in three mass ranges: 145–216 GeV, 226–288 GeV and 310–400 GeV. For the quantity of data collected so far, the CMS collaboration would expect, in the absence of a signal, to exclude the Higgs boson in the range 130–440 GeV. The two gaps between the three excluded mass ranges observed in the data are consistent with statistical fluctuations. At 90% CL, the results exclude the Standard Model Higgs boson in the mass range of 144–440 GeV, without interruption. All exclusion regions were obtained using the CLs modified frequentist construction.
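For reference, the modified frequentist construction excludes a signal hypothesis at 95% CL when

$$\mathrm{CL_s} \;\equiv\; \frac{\mathrm{CL_{s+b}}}{\mathrm{CL_b}} \;=\; \frac{p_{s+b}}{1-p_b} \;<\; 0.05,$$

where p_{s+b} and p_b are the p-values of the signal-plus-background and background-only hypotheses. Dividing by CL_b protects against excluding a signal to which the search has little real sensitivity.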
A modest excess of events is, however, apparent for Higgs boson masses below 145 GeV. With the data due to be collected in the coming months, CMS will be able to distinguish between the possible interpretations: either the production of a Higgs boson or a statistical fluctuation of the backgrounds. During the ongoing proton–proton data-taking period at the LHC, which is expected to terminate at the end of 2012, CMS will record substantially more data, leading to a significantly increased sensitivity to the Standard Model Higgs boson – if it exists – over the full range of possible masses.
One of last year’s surprise results came from the MINOS (Main Injector Neutrino Oscillation Search) experiment in the US, which suggested that neutrinos and their antimatter counterparts, antineutrinos, might have different masses – an idea that goes against most commonly accepted theories of how the subatomic world works. At Lepton Photon 2011, however, the MINOS collaboration presented updated results. These constitute the world’s best comparison of muon-neutrino and antineutrino oscillation parameters and bring the measured values closer together.
Since the result announced in June 2010, the experiment has nearly doubled its data set, from 100 antineutrino events to 197. While the new results are only about 1σ away from the previous ones, the combination rules out concerns that the earlier result could have arisen from detector or calculation errors. Instead, the combined results point to a statistical fluctuation that has lessened as more data have been collected.
“Remember when you were young, you shone like the Sun”, opens the famous song Shine On You Crazy Diamond. Written in 1975 as a tribute to Syd Barrett, Pink Floyd’s erstwhile frontman, it could also refer to a newly discovered planet orbiting a pulsar. The planet was once a star, but the pulsar robbed it of almost everything except its innermost part of crystalline carbon (diamond) and oxygen.
The supernova explosion of a massive star usually leaves behind an ultradense neutron star formed by the collapse of the former star’s iron core. Like giant atomic nuclei, neutron stars typically contain the mass of the Sun in a sphere only about 10 km in radius. They often manifest themselves as pulsating stars, or “pulsars”. The observed pulsation comes from the spin of the neutron star, which sweeps a beam of radiation from a magnetic pole past the Earth on each rotation (CERN Courier September 2006 p13). The pulse period corresponds to the spin period of the neutron star and ranges from about 1 ms to 10 s. Millisecond pulsars – the most rapidly rotating – are thought to have been spun up via accretion of matter from a companion star (CERN Courier December 2010 p10). That about 30% are solitary suggests that some might have completely “eaten up” their companion star, either via continuous accretion or through the merger of the binary system.
While extrasolar planets are continuously being discovered around normal stars (CERN Courier November 2010 p13), planets around pulsars are extremely rare. So far, an extrasolar planetary system has been found around only one pulsar, PSR B1257+12. It comprises two planets of about three times the mass of the Earth and a third body of lunar mass. The origin of these first detected exoplanets – back in 1992 – is still puzzling astronomers. Could the planets have survived the supernova explosion at the origin of the pulsar or did they form afterwards out of a remaining disc of matter?
The presence of an orbiting companion can be detected by a sinusoidal modulation of the arrival times of the pulses. Such a characteristic signal has now been detected among almost 200 TB of data processed by supercomputers as part of a systematic search for pulsars. The modulation observed by the 64 m radio telescope in Parkes, Australia, suggests the presence of a Jupiter-mass planet orbiting the newly discovered pulsar PSR J1719-1438. The period of the orbit is only 2.2 hours, corresponding to a distance between the planet and the pulsar of no more than the radius of the Sun.
This tight orbit places strong constraints on the size of the planet. It must be much denser, and hence smaller, than Jupiter to avoid being ripped apart by the strong gravity of the neutron star. A minimum density of 23 g cm–3 was calculated by a team led by Matthew Bailes of Swinburne University of Technology in Australia, together with colleagues from Europe, Australia and the US. This exceeds the density of any chemical element and suggests that the planet-like body is made of matter in extreme conditions similar to those found in white dwarfs.
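Both the quoted separation and the density bound follow from the 2.2-hour period alone. A sketch of the standard argument, assuming a 1.4 M⊙ pulsar and the approximation R_L ≈ 0.462 a [q/(1+q)]^{1/3} for the Roche-lobe radius of a low-mass companion of mass ratio q = m/M:

$$a = \left[\frac{G(M+m)T^2}{4\pi^2}\right]^{1/3} \approx 6.7\times10^{8}\ \mathrm{m} \approx 0.96\,R_\odot,$$

$$\bar{\rho} = \frac{3m}{4\pi R^3} \;\gtrsim\; \frac{3m}{4\pi R_L^3} \;=\; \frac{3\pi}{(0.462)^3\,G\,T^2} \;\approx\; 23\ \mathrm{g\,cm^{-3}}.$$

Notably, the companion’s mass cancels in the second expression: the density bound depends only on the orbital period.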
Low-mass white dwarfs have already been found around millisecond pulsars, but this one is unique as it would have lost more than 99.9% of its original mass to the hungry pulsar. The remnant planet-mass body is most likely to be made mainly of carbon and oxygen and the density is such that it is certain to be crystalline, which means that a large part of the star may be similar to a diamond. To check how brilliant and transparent the planet really looks, one would have to travel 4000 light-years towards the constellation of Serpens.
The biennial meetings organized by the High Energy and Particle Physics Division of the European Physical Society (EPS) aim to provide a global view of the state of the art in research in the field. This year’s meeting, which took place in Grenoble on 21–27 July, attracted more than 800 participants – and certainly delivered. The International Europhysics Conference on High-Energy Physics, EPS-HEP 2011, was the first major international conference since CERN’s LHC started to supply significant amounts of data in a new energy region. After only one year of data-taking, the LHC took centre stage, in both proton–proton and heavy-ion physics, thanks to the spectacular performance of the accelerator and the impressively fast data analysis by the experiments. In parallel, there were results based on near-final data samples from experiments at Fermilab’s Tevatron and at the B factories, while hot news came from neutrino experiments and searches for dark matter. All in all, the conference was a real success, raising the state of knowledge a notch across all searches for new physics.
All of the Tevatron and LHC experiments showed improved or new limits in searches, reaching mass limits close to or slightly above 1 TeV in simple supersymmetric models. Anyone who had hoped that the LHC would reveal supersymmetry early on may have been slightly disappointed, but as many theorists reminded the participants: new physics is guaranteed, all that is needed is patience. CERN’s director-general, Rolf Heuer, reinforced this point, stating that either finding the Higgs or excluding it will be a great discovery.
At the Tevatron, the CDF and DØ experiments – with data corresponding to an integrated luminosity of about 8 fb–1 – continue to extend their exclusion limits for a Higgs particle with a mass of around 160 GeV. After two decades of truly fruitful physics, the Tevatron was scheduled to shut down definitively at the end of September, leaving a total of more than 10 fb–1 of data ready to be analysed. Meanwhile, the CMS and ATLAS experiments at the LHC have already reached a sensitivity exceeding that of the Tevatron experiments. Both experiments have slightly better preliminary limits in the region covered by CDF and DØ, while also excluding a high-mass Higgs – in a region out of the reach of the Tevatron. Most amazingly, if the LHC continues to perform as well as it has so far, the experiments are guaranteed to find or exclude the Standard Model Higgs by the end of 2012.
There was, however, already some intriguing news from the Higgs sector. Both the CMS and ATLAS collaborations revealed small excesses of events in their preliminary WW and ZZ analyses based on about 1 fb–1 of data. The most significant upward fluctuation over background was obtained in the Higgs → WW → ℓνℓν channel. Both groups also see smaller excesses in the same mass region in the decay via ZZ to four leptons. All of these fluctuations fall in regions that are not yet excluded by the Tevatron or the LHC, and they made for some interesting discussions during the conference. If the Standard Model Higgs boson does indeed exist, this is exactly how it will manifest itself: a faint appearance above the distant horizon, which should grow with time as more data are analysed. Nonetheless, everyone agreed that it is far too early to tell unambiguously what is happening before more data are added and further rigorous checks made, because both experiments could be affected similarly by mis-modelling of the background or be victims of statistical fluctuations.
The recent results from CDF, CMS and LHCb on the flavour-changing neutral-current decay Bs → μμ provided another great conversation topic. The CDF collaboration reported the first measurement of this branching ratio, at (1.8 +1.1 –0.9) × 10⁻⁸, which is higher than the Standard Model prediction of (3.2 ± 0.2) × 10⁻⁹. On the other hand, CMS and LHCb both have preliminary limits, which when combined lie below the CDF result. More data from all of the experiments will soon help to resolve this ambiguity.
One analysis that has drawn considerable attention in the preceding months is CDF’s observation of a possible signal of new physics in the final state W + 2 jets. The updated analysis, now based on 7.3 fb–1 of data, shows a clear excess of events with a dijet mass around 145 GeV. At more than 4σ, this is a significant excess, shared equally between the two channels, W → eν and W → μν. All eyes have thus turned to DØ, which is best positioned to look into this effect. The DØ team performed an important verification by artificially adding a signal such as the one that CDF observed, confirming that DØ is indeed sensitive to such a signal. All efforts so far have turned up no excess above Standard Model backgrounds in a 4.3 fb–1 data sample, even when emulating the CDF selection criteria. A joint task force between the two experiments is now hard at work trying to resolve this discrepancy. Meanwhile, at the LHC, a similar signal would have to emerge amid larger backgrounds, so the sensitivity of this search may be diminished, depending on the nature of the new effect. Both CMS and ATLAS are actively combing through their data for signs of this effect, finding no evidence for the CDF signal so far.
The DØ and CDF collaborations still see a deviation from the Standard Model in the forward–backward asymmetry of top–antitop production, opening the door to several possible explanations in terms of physics beyond the Standard Model. This effect is most pronounced at tt masses above 450 GeV. More studies are underway.
From QCD to neutrinos
The session on QCD showed great progress in the field, with updates on parton distribution functions from the experiments at DESY’s HERA collider, which stopped running in June 2007, as well as several results from the LHC. These measurements are now challenging the precision of theoretical predictions and will contribute to further refinements of the Monte Carlo simulations.
On the flavour front, there were as many as 25 new results from near-final data sets from the BaBar and Belle experiments at the B factories at SLAC and KEK, respectively, as well as new measurements from the Tevatron and LHC experiments, in particular LHCb. Together, they provide significant tests of the Standard Model, which still stands strong and unchallenged despite every attempt to uncover a flaw. Searches for charged-lepton flavour violation, electric dipole moments and the updated dilepton charge asymmetries at the Tevatron continue to probe possible new effects. The BaBar collaboration showed impressive limits on rare decays, with branching ratios reaching as low as 10⁻⁸, while LHCb now has the most precise single measurement, and the first 5σ observation at a hadron machine, of CP violation in B → Kπ decays.
Any deviation from the Standard Model predictions would reveal the existence of new physics, but detecting such deviations now requires even more stringent tests, and hence more data. To collect even larger data samples, a new generation of B factories is on its way. The Belle II experiment at SuperKEKB – the upgrade of KEKB, which shut down definitively in June 2010 – is well into the construction phase. The SuperKEKB design luminosity of 8 × 10³⁵ cm–2 s–1 is 40 times higher than the record set by KEKB. Commissioning of the upgraded machine and experiment is planned for mid-2014. Meanwhile, the SuperB project has been approved in Italy. Parts of the BaBar detector will soon cross the ocean to be relocated to a new facility on the Tor Vergata University campus in Rome, to start a new life there in 2015.
As ever, conference participants eagerly anticipated news from the various direct searches for dark matter. While both the DAMA/LIBRA and CoGeNT experiments have reported tantalizing signs of light dark-matter candidates, these are now in contradiction with recent limits from the XENON100 collaboration, whose null results exclude both. Hence, the situation remains ambiguous, with much still to be elucidated. Efforts are ongoing to explain why the peaks of the modulation signals seen by DAMA/LIBRA and CoGeNT do not coincide. Further scrutiny is also being directed towards the unmodulated part of the DAMA/LIBRA signal, together with investigations of other possible backgrounds for the CoGeNT experiment – all with the aim of producing more convincing and irrefutable results. The whole community is hard at work collecting and analysing more data, while construction of larger detectors that will use a tonne or more of active material is underway. With new results and updates expected in a year or two, this topic will be among the highlights of the next EPS-HEP meeting in 2013.
The 295 km Tokai-to-Kamioka (T2K) long-baseline neutrino experiment in Japan, and the Fermilab-based Main Injector Neutrino Oscillation Search (MINOS) with a “far” detector 730 km away in the Soudan Mine, now have the first indications for a sizeable mixing angle between the electron and muon neutrinos. These measurements were discussed in view of their implications for the measurement of CP violation in neutrino mixing and for the design of future long-baseline neutrino projects. One of the proposed experiments, the Laguna Pyhäsalmi project, would have an underground 100 ktonne “far” detector in Finland, 2300 km from CERN. Its goal would be to measure neutrino oscillations, taking advantage of large enhancements from matter effects.
While the LHC is already looking back in time all of the way to the early universe, it will also take particle physics into the future, with plans established up to 2030 and beyond. Three long shutdowns are currently foreseen. The first will take place at the end of 2012 to allow improvements to the magnet interconnects and the installation of new release valves to prevent further incidents like the one that brought the LHC to a halt in 2008. This will allow the machine to reach its design centre-of-mass energy of 14 TeV by the autumn of 2014. The following long shutdown, in 2018, will be for several detector upgrades, while the final planned shutdown, in 2022, will be used to prepare for the high-luminosity upgrade of the LHC (HL-LHC). Scenarios are also being considered for an upgrade to higher energies in the even more distant future.
A next-generation linear collider is still under study, with final design reports on the Compact Linear Collider (CLIC) and International Linear Collider (ILC) concepts planned for the end of this year. The physics outcome from the LHC experiments by the end of 2012 will provide crucial input to decide what kind of linear collider will best suit the future needs of particle physics.
On the theoretical front, participants heard of recent accomplishments using scattering amplitudes in quantum field theory. As well as updating the community on their progress with calculation techniques, the presentations by theorists served as a reminder that the questions that are currently unanswered by the Standard Model – from the existence of dark matter to the unexplained problem of particle “generations” – imply that new physics ought to be there, waiting to be discovered.
The closing session was devoted to an outlook for experimental and theoretical particle physics as a whole. Pier Oddone, Rolf Heuer and Atsuto Suzuki, the directors of Fermilab, CERN and KEK, respectively, presented their visions for the future of particle physics from the perspectives of the US, Europe and Asia.
Oddone laid out the many plans for Fermilab in the post-Tevatron era, spanning searches for dark energy and dark matter, neutrino physics with the upgraded experiments MicroBooNE and MINOS+, and accelerator-development projects for the ILC and a muon collider. He reminded participants that the contributions to the LHC from the US – for both the accelerator and the detectors – represented the largest single investment in high-energy physics that the US has made since the 1970s.
Suzuki warmly thanked the community for its extended support after the devastating earthquake and tsunami earlier this year, and it was moving to hear about the efforts that Japanese colleagues are making to recover from the effects. The KEK laboratory suffered damage at both the Tsukuba and Tokai campuses. Nevertheless, the construction plans for SuperKEKB are still on schedule. Repairs are underway at Tokai, with the first power tests scheduled for November. In addition, Suzuki presented new projects underway in Asia, such as the Korea Neutrino Research Center, which was scheduled to start operation in August, and the Daya Bay experiment in China, which will study oscillations of reactor neutrinos. “Near” detectors at Daya Bay started data-taking in August, while “far” detectors in the nearby mountains should be operating by next summer.
Heuer stressed both the importance of international collaboration in establishing any future accelerator projects and how the results of existing facilities should be used to determine the needs for future accelerators. Results from the LHC will therefore be a key ingredient in determining which design is best for a new linear collider. In an effort to increase the collaborative spirit, CERN is already opening its doors to new member states.
All of the great results presented at EPS-HEP 2011 could not have been fully appreciated without the impeccable organization provided by the local committee, headed by Johann Collot of the University of Grenoble. Sometimes, however, success can bring trouble: interest following the intriguing reports on the Higgs boson nearly brought the conference website to a halt.
The local committee also spared no effort in treating the participants to local specialities, providing more than just food for thought. These included an impressive wine-and-cheese reception on the opening night, followed by “danced” lectures organized for the public. While local speakers explained the field of particle physics, dancers from the University of Grenoble’s modern-dance company accompanied them on stage, seemingly surprising some of the speakers themselves. The evening ended with a beautiful “dance of the particles” to everybody’s delight. The social programme also included a soccer tournament, a reception hosted by the City of Grenoble at the modern art museum, a Bel Canto concert and a gastronomic dinner – enough to suit everyone’s taste.
This conference really marked the beginning of the LHC era. As David Gross, who received the Nobel Prize in Physics in 2004, concluded: “We have one inverse femtobarn of data in, and 2999 more to go. So be patient. The fun is just starting!” Now, even the unexpected can be expected.
A key highlight was the presentation of prizes by the High Energy and Particle Physics Division of the EPS. This year, the prestigious High Energy and Particle Physics Prize for an outstanding contribution to high-energy physics went to Sheldon Lee Glashow of Boston University, John Iliopoulos of the Ecole Normale Supérieure, Paris, and Luciano Maiani of the University of Rome La Sapienza. They were rewarded “for their crucial contribution to the theory of flavour, presently embedded in the Standard Theory of strong and electroweak interactions which is still of utmost importance today”. In 1970, they put forward a compelling argument for the existence of a fourth quark – charm – to solve a number of problems in particle physics. Their proposal, now known as the GIM mechanism, was spectacularly confirmed when particles containing the charm quark were unexpectedly discovered in 1974.
The Giuseppe and Vanna Cocconi Prize for an outstanding contribution (experimental or theoretical) to particle astrophysics and cosmology went to Paolo de Bernardis of the University of Rome La Sapienza and Paul Richards of the University of California, Berkeley, “for their outstanding contributions to the study of cosmic microwave background anisotropies with the balloon-borne experiments BOOMERanG and MAXIMA”.
Davide Gaiotto of the Institute for Advanced Study, Princeton, received the Gribov Medal for outstanding work by an early-career physicist in theoretical particle physics and/or field theory. He was rewarded “for his work on uncovering new facets of the dynamics of four-dimensional supersymmetric gauge theories and in particular for discovering a large class of four-dimensional superconformal theories and for finding with others important intricate relations between two-dimensional theories of gravity and four-dimensional gauge theories”.
The Young Physicist Prize for outstanding work by one or more young physicists was awarded to Paolo Creminelli of the International Centre for Theoretical Physics, Trieste, and Andrea Rizzi of the Swiss Federal Institute of Technology, Zurich. Creminelli received his share of the award “for his contributions to the development of a solid field-theoretical approach to early-universe cosmology and for his studies of non-gaussianities in the cosmic-microwave background”, while Rizzi was rewarded “for his contributions to the reconstruction software and physics programme of the CMS experiment at the LHC”.
Finally, the Outreach Prize for outstanding outreach achievement connected with high-energy physics and/or particle astrophysics went to Christine Kourkoumelis of the University of Athens and Sofoklis Sotiriou, director of the Ellinogermaniki Agogi Center for Science Teachers Training, for “building educational resources to bring the research process in particle physics and its results to teachers and students, both nationally and across Europe”.
A quarter-century of experimentation is coming to a close at Fermilab’s Tevatron collider, a pioneering instrument that advanced the frontiers of accelerator science and particle physics alike, setting the stage for the LHC at CERN. The world’s first high-energy superconducting synchrotron, the Tevatron served as the model for the proton ring of the HERA collider at DESY and as a key milestone towards the development of the LHC. In its final months of operation the Tevatron’s initial luminosity for proton–antiproton collisions at 1.96 TeV averaged more than 3.5 × 10³² cm–2s–1. The integrated luminosity delivered at 1.96 TeV approached 12 fb–1, with approximately 10 fb–1 recorded by the CDF and DØ experiments. A long line of innovations and much perseverance made possible the evolution of luminosity shown in figure 1 (Holmes et al. 2011).
The legacy of the Tevatron experiments includes many results for which the high energy of a hadron collider was decisive. Chief among these is the discovery of the top quark, which for 15 years could be studied only at the Tevatron. Exacting measurements of the masses of the top quark and the W boson and of the frequency of Bs oscillations punctured the myth that hadron colliders are not precision instruments. Remarkable detector innovations such as the first hadron-collider silicon vertex detector and secondary vertex trigger, and multilevel triggering are now part of the standard experimental toolkit. So, too, are robust multivariate analysis techniques that enhance the sensitivity of searches in the face of challenging backgrounds. CDF and DØ exemplify one of the great strengths of particle physics: the high value of experimental collaborations whose scientific interests and capabilities expand and deepen over time – responding to new opportunities and delivering a harvest of results that were not imagined when the detectors were proposed.
Early days
The CDF logbook records the first collision event in the Tevatron at 02:32 a.m. on 13 October 1985, at an energy of 800 GeV per beam. The estimated luminosity was 2 × 10²⁵ cm–2s–1, more than seven orders of magnitude below the machine’s performance in 2011. By the afternoon, the Tevatron complex was shut down for 18 months to construct the DØ interaction region and complete the CDF detector. CDF’s pilot run in 1987 yielded the first wave of physics papers, including measurements and searches. During 1988 and 1989 CDF accumulated 4 pb–1, now at 1.8 TeV in the centre of mass. (Two special-purpose experiments also published results from this run: Experiment 710 measured elastic scattering and the total cross-sections; Experiment 735 sought evidence of a deconfined quark–gluon plasma.) The peak luminosity delivered to CDF surpassed 10³⁰ cm–2s–1 in collisions of six proton bunches on six antiproton bunches. Papers from these early runs are worth rereading as reminders of how little we knew, and of how a tentative but growing respect for the Standard Model brought coherence to the interpretation of results. It is also interesting to see how the experimenters went about gaining confidence in their detector and their analysis techniques.
Both DØ and CDF took data at 1.8 TeV in the extended Run 1 between 1992 and 1996, recording 120 pb–1. An important enabler of increased luminosity was the move to helical orbits, which eliminated collisions outside the two interaction regions. During this period, a small test experiment called MiniMax (T864) searched for disordered chiral condensates and other novel phenomena in the far-forward region. This was a time of high excitement, not only for the drama of the top-quark search, but also for the stimulating conversation between the teams on the Tevatron experiments and those at the Z factories at CERN and SLAC, and at the HERA electron–proton collider, all of which were breaking new ground.
Fermilab then constructed the Main Injector and Recycler in a new tunnel, while the experiments undertook ambitious detector upgrades. Improvements to the cryogenic system made it possible to lower the operating temperature of the superconducting magnets and so raise the collision energy to 1.96 TeV. CDF installed a new central tracker and improved silicon vertex detector and enhanced its forward calorimetry and muon detection. DØ added a solenoid magnet, a silicon vertex detector and a scintillating-fibre tracker and also improved the detection of forward muons. Run 2 began slowly in 2001, but attention to detail and many accelerator improvements – including 36-bunch operation and electron-cooling of antiprotons in the recycler – contributed to the outstanding performance of the mature machine.
Strong and electroweak physics
The Tevatron experiments have probed the proton with a resolution of about one-third of an attometer (10⁻¹⁸ m), greatly expanding the kinematic range over which we can test the theory of the strong interactions. Perturbative QCD is extremely well validated in studies of hadron jets and other observables. The jet cross-section displayed in figure 2 shows the agreement between calculation and observation over eight orders of magnitude in rate (e.g. Abazov et al. 2008, Aaltonen et al. 2008 and 2009). Such measurements established the importance of gluon–gluon scattering as a mechanism for jet production and helped constrain the parton distribution functions for the gluons. Values of the strong coupling constant extracted from jet studies exhibit the running behaviour characteristic of asymptotic freedom, at higher scales than are accessible in other experiments. The strong coupling at the Z-boson mass has been determined with an uncertainty of about 4%.
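The quoted resolution is essentially the distance scale associated with the largest momentum transfers probed. A back-of-envelope conversion, assuming momentum transfers of order Q ≈ 600 GeV in the hardest jet events:

$$\lambda \;\sim\; \frac{\hbar c}{Q} \;\approx\; \frac{0.197\ \mathrm{GeV\,fm}}{600\ \mathrm{GeV}} \;\approx\; 3\times10^{-4}\ \mathrm{fm} \;\approx\; \frac{1}{3}\times10^{-18}\ \mathrm{m}.$$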
Other jet studies have not only tested QCD but also probed for physics beyond the Standard Model. Measurements of the angular distribution of dijet production confirm the Rutherford-scattering-like expectation of QCD and place upper bounds on the size of extra spatial dimensions. They also validate, at a resolution of nearly 1/(3 TeV), a key idealization that underpins the Standard Model – the working hypothesis that quarks are pointlike and structureless. Measurements of the dijet mass spectrum that extend beyond 1.2 TeV (roughly 2/3 of the centre-of-mass energy of the proton–antiproton collisions) are likewise in accord with next-to-leading-order QCD calculations. No evidence is seen for unexpected dijet resonances.
In the final data set of 10 fb–1, each experiment should have approximately 5 million W bosons in each leptonic decay channel and perhaps 400,000 Z bosons. These large samples have made possible many important measurements. The production cross-sections agree with QCD predictions to such a degree that electroweak gauge-boson production is under study as a primary luminosity monitor for LHC experiments. Studies of Z production, with or without accompanying jets, are immensely valuable for testing simulations of Standard Model physics. The forward-backward asymmetry of the electrons or muons produced in W decay, which arises from the V–A structure of the charged weak current, provides important information about the up-quark and down-quark parton-distribution functions.
Given what we know from many sources, the masses of the W boson and top quark are key elements in the Standard Model network that constrains the properties of the Higgs boson. A stellar accomplishment of the Tevatron experiments has been the determination of the W-boson mass as 80.420 ± 0.031 GeV, better than 4 parts in 10⁴. Figure 3 summarizes the Tevatron measurements and their impact on the current world average. The combined uncertainty at the end of Run 2 may approach 15 MeV.
The growing data samples available at the Tevatron, along with the evolution of experimental techniques, have made it possible to observe cross-sections times branching ratios well below 0.1 pb. All of the electroweak diboson pairs (Wγ, Zγ, WW, WZ and ZZ) have been detected at the rates predicted by the Standard Model. Mastery of these channels is a prerequisite to the Higgs-boson search at moderate and high masses, but they carry their own physics interest as well: the possibility of validating the Standard Model structure of the triple-gauge couplings and searching for anomalous couplings incompatible with the Standard Model. So far, the three-gauge-boson interactions are consistent with electroweak theory in every particular.
From bottom to top
CDF and DØ have exerted a broad impact on our knowledge of states containing heavy quarks. Studies of the production and decay dynamics of quarkonium states have repeatedly challenged phenomenological models, while measurements of b- and t-quark production have made possible sharp tests of QCD calculations at next-to-leading order. The Tevatron experiments account for nearly all of our knowledge of the Bc meson, with precise measurements of the mass and lifetime. The Tevatron contributes world-leading measurements of the masses and lifetimes of B mesons and baryons, and has been the unique source of information on many of the b-baryons. With CDF’s recent observation of the Ξb0, all of the spin-1/2 baryons containing one b quark have been observed at the Tevatron, except for the Σb0. We also owe to the Tevatron our knowledge of orbitally excited B and Bs mesons, constraints on the mass and quantum numbers of the X(3872), important evidence on D0–D̄0 mixing and high-sensitivity searches for rare decays into dimuons.
The Tevatron experiments met one of the key targets for Run 2 by determining the frequency of Bs–B̄s oscillations. Following a two-sided limit published by DØ, the CDF collaboration determined the oscillation frequency as 17.77 ± 0.13 ps–1 (Abulencia et al. 2006). The oscillation signal is shown in figure 4. This beautiful measurement, in line with Standard Model expectations, constrains the manner in which new physics might show itself in B physics.
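To put that frequency in context, taking the known Bs lifetime of about 1.5 ps:

$$T_{\rm osc} \;=\; \frac{2\pi}{\Delta m_s} \;\approx\; \frac{2\pi}{17.77\ \mathrm{ps^{-1}}} \;\approx\; 0.35\ \mathrm{ps} \;\ll\; \tau_{B_s} \approx 1.5\ \mathrm{ps},$$

so a Bs meson typically changes flavour several times before it decays – which is why resolving the oscillation demands such precise vertex reconstruction.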
The discovery of the top quark by the Tevatron collaborations in 1995 was a landmark achievement (Abe et al. 1995, Abachi et al. 1995, Carithers and Grannis 1995). By 1990, searches by CDF had raised the lower bound on the top-quark mass to 91 GeV, excluding decays of W into t + b. A heavy top decays so swiftly that it cannot be observed directly, but must be inferred from its disintegration into a bottom quark and a W boson – both of which are themselves unstable particles. The hunt took off with the growing data-sets available to both CDF and DØ in 1992–1993 and soon the possibility of observing top was in the air. DØ subsequently raised the lower bound to 131 GeV. Moreover, a growing body of observations that probed quantum corrections to the electroweak theory pointed to a top-quark mass in the range 150–200 GeV. Finding top there emerged as a critical test of the understanding built up over two decades.
Eighteen months of deliciously intense activity culminated in a joint seminar on 2 March 1995, demonstrating that top had been found in the reaction pp̄ → tt̄ + anything. CDF gauged the top-quark mass at 176 ± 13 GeV, while DØ reported 199 ± 30 GeV. Since the discovery, larger event samples, improved detectors and sophisticated analysis techniques have led to a detailed dossier of top-quark properties (Deliot and Glenzinski 2010). Tevatron measurements of the top mass have reached 0.54% precision, at 173.2 ± 0.9 GeV, a level that demands scrupulous attention to the theoretical definition of what is being measured (Tevatron Electroweak Working Group 2011). A compilation of the Tevatron measurements is shown in figure 5. CDF and DØ now aim for an uncertainty of ±1 GeV per experiment; reaching this level of precision will require a better understanding of b-jet modelling and of uncertainties in the signal and background simulations.
The tt production characteristics are in good agreement with QCD expectations for the total rate, transverse-momentum dependence and invariant-mass distribution. Tevatron studies support a top-quark charge of +2/3, and show that the tbW interaction is left-handed. Approximately 70% of the W bosons emitted in top decay are longitudinally polarized, while the rest are left-handed. The top-quark lifetime is close to 0.3 yoctosecond (10⁻²⁴ s), as electroweak theory anticipates. Because top decays before hadronizing, it can be studied as a bare quark. Up to this point, exploratory studies of spin correlations among the tt decay products are in accord with the Standard Model. Both experiments have observed a forward-backward production asymmetry that is considerably larger than the Standard Model predictions, as currently understood. This tantalizing result – which could point to new physics – challenges theorists to create more robust, detailed and credible simulations of the Standard Model.
Important information about the weak interactions of top comes from the detection of single-top production through the decay of a virtual W boson or the interaction of an exchanged W boson with a b quark. Using an array of multivariate analysis techniques, CDF and DØ have observed single-top production at a rate consistent with the Standard Model. The DØ collaboration has succeeded in isolating the t-channel exchange process. These measurements allow a determination of the strength of the tbW weak coupling that is consistent with the Standard Model prediction of a value near unity, as well as with other indications that t → bW is the dominant decay mode of the top quark.
Higgs and other new phenomena
The search for the Standard Model Higgs boson is the ultimate challenge for the Tevatron. The straightforward strategy – to detect a light Higgs boson produced in gluon–gluon fusion that decays into the dominant bb mode – is foreclosed by the overwhelming rate of b-quark pair production by the strong interactions. Thus CDF and DØ have had to seek signals in several production channels and decay modes, as well as master many sources of background. Current searches consider gluon–gluon fusion, the associated production of a Higgs boson with a W or Z boson, and vector-boson fusion. The decay modes examined are bb, W+W–, ZZ, γγ and τ+τ–.
So far, the Tevatron experiments have given information on where the Standard Model Higgs boson is not. The combined analyses of summer 2011, based on up to 8.6 fb–1 of data, exclude Standard Model Higgs-boson masses between 156 and 177 GeV, as shown in figure 6 (The Tevatron New-Phenomena and Higgs Working Group 2011). Parallel work has restricted the allowed parameter space for the lightest Higgs boson of supersymmetric models. According to projections informed by current experience, the full Tevatron data-set should yield 95% confidence-level exclusion limits up to 185 GeV – should no signal be present – as well as “evidence” at the 3σ level below 120 GeV and in the range 150–175 GeV.
During more than two decades as the world’s highest-energy machine, the Tevatron has had unparalleled capability to search for direct manifestations of physics beyond the Standard Model. Broad explorations and searches for specific hypothetical phenomena have been major activities for the experiments. The Tevatron constraints on conjectured extensions to the Standard Model are impressive in number and scope: CDF and DØ have set limits on supersymmetric particles, many varieties of extra spatial dimensions, signs of new strong dynamics, carriers of new forces of nature, magnetic monopoles and many more exotica. The null searches compel us to contemplate with greater intensity the unreasonable effectiveness of the Standard Model.
To be sure, some observations do not square with conventional expectations. In addition to the suggestion of a larger-than-foreseen forward-backward asymmetry in top-pair production noted above, it is worth mentioning two other surprising effects now in play. DØ reports an anomalous like-sign dimuon charge asymmetry in semileptonic decays of bb pairs that suggests unexpectedly large CP violation in the decays of b-hadrons. CDF sees a yield of jet pairs in association with a W boson that exceeds expectations in the dijet mass interval between 120 and 160 GeV. DØ does not confirm the excess, but the degree of disagreement remains to be quantified. We should find out soon, from further work at the Tevatron and from new analyses at the LHC, whether any of these results holds up and changes our thinking.
Astrophysical neutrinos are produced in the interactions of cosmic rays with an ambient medium of gas (protons) and photons of different energies. Once produced, these cosmic neutrinos can propagate cosmological distances and reach the Earth practically without interactions. They therefore carry unique information about the sources of cosmic rays, their acceleration and the composition of the most energetic phenomena in the universe.
The neutrino sky “seen” by experiments originates in the atmosphere, which shines day and night in neutrinos. One experiment alone, IceCube at the South Pole, has already detected more than 10⁵ atmospheric-neutrino events. However, the hope is to see “stars in broad daylight” through this atmospheric flux – that is, to observe neutrinos of cosmic origin. These include neutrinos from various point-like sources and some extended objects, as well as diffuse neutrino fluxes. The selection of the energy band – cosmic neutrinos should dominate at high energies – together with directional and timing features, as well as correlations with known objects emitting, for instance, in γ-rays, are the main tools for distinguishing atmospheric from cosmic neutrinos.
The NUSKY 2011 international workshop on cosmic rays and cosmic neutrinos took place at the Abdus Salam International Centre for Theoretical Physics, Trieste, on 20–24 June. It attracted around 90 participants and featured some 40 talks by the main players in the field, covering all of the important aspects of the production, propagation and detection of high-energy cosmic neutrinos. Numerous discussions ensued, focusing on the implications of the latest experimental results, as well as on the status and perspectives of the field.
The workshop took place during a critical period for a field in which the working experiments have reached the sensitivity necessary to probe realistic theoretical predictions. The results from IceCube – the first cubic-kilometre-scale detector ever built – thus played a prominent role in the discussions. Preliminary results correspond to data taken with 40 and 59 of the detector strings (IC40 and IC59); data from IC79 are being analysed and the complete detector, IC86, is now running. So far, the various searches have found no cosmic-neutrino events.
Diffuse neutrino fluxes include the cosmogenic neutrinos generated in cosmic-ray interactions with the photons of the cosmic microwave background, as well as the integrated fluxes from remote, faint and unresolved objects. The IceCube collaboration finds no deviation of the reconstructed neutrino-energy spectrum from that for atmospheric neutrinos. This gives an upper bound on the neutrino flux in the 0.1–10 PeV energy range that is already below the Waxman–Bahcall limit, derived from the known cosmic-ray flux above 10¹⁹ eV.
As far as individual sources are concerned, the main suspects are objects that are relatively close, where the acceleration of cosmic rays probably occurs. These include supernova remnants (SNRs) in the Galaxy, as well as active galactic nuclei and gamma-ray bursts (GRBs). The IceCube all-sky maps show no statistically significant signal for steady or transient galactic or extragalactic sources. Nor has any neutrino been detected by IceCube (IC40 + IC59) in the so-called stacking analysis of more than 100 GRBs. The limit on the neutrino flux that emerges from this analysis is a factor of 5 below predictions, thus disfavouring the fireball model of GRBs.
The Pierre Auger Observatory in Argentina and ANITA, the balloon-borne radio interferometer that flew over Antarctica, are sensitive to the upper end of the cosmic-neutrino spectrum, the most relevant range for cosmogenic neutrinos (i.e. 10¹⁸ eV, or 1 EeV). No neutrino-candidate events have been found in Auger data for periods equivalent to two years of the full array. ANITA-II has one candidate event, with one background event expected; cosmogenic models predict from 0.3 to 25 events.
The predictions for atmospheric neutrino fluxes depend on the properties of cosmic rays and on the physical conditions of the sources. In this connection, there are some new and interesting results. IceCube has found cosmic-ray anisotropies in the 20–400 TeV energy range, with a significant angular structure in the southern hemisphere. Anisotropy at higher energies, above 100 TeV, could reveal some connection to nearby SNRs. In addition, the KASCADE-Grande extensive air-shower array has observed structures in the “knee” region of the all-particle cosmic-ray spectrum.
Cosmic-ray origins
Turning to the question of the composition of ultra-high-energy cosmic rays (UHECRs), there had been somewhat contradictory results from the HiRes experiment and the Pierre Auger Observatory. In this connection, the possibilities for UHECR production by sources in the Galaxy (such as past GRBs), as well as a dominant contribution from Centaurus A, were discussed at the workshop. The basic principles of cosmic-ray acceleration in SNRs are well understood on the basis of the non-linear theory of diffusive acceleration at collisionless Newtonian shocks.
The neutrino–γ-ray connection was at the centre of many discussions as a result of the wealth of new information from γ-ray astronomy. The production of neutrinos should be accompanied by the production of γ-rays from π0 decay (the hadronic mechanism). However, ultra-high-energy γs from extragalactic sources and γs of cosmogenic origin can interact with the medium (photons, electrons), giving rise to electromagnetic cascades. Hence, the whole γ spectrum shifts to lower energies, into the giga- to tera-electron-volt range, where the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope gives important bounds. The Fermi-LAT results on the extragalactic γ flux can be translated into bounds on cosmic rays and cosmogenic neutrinos – the so-called “cascade” bound, based on the approximate equality of the energy released in neutrino production and in the electromagnetic cascade. These data challenge the GRB origin of cosmic rays: if GRBs are the source of cosmic rays, then around 10 events are predicted, while none appears within the diffuse bound.
One open question concerns the mechanism for the production of photons at the source. Tera-electron-volt γ-rays from transparent galactic sources can provide a direct indication of cosmic-ray acceleration sites. However, γs can be produced by accelerated electrons via the inverse Compton effect and by synchrotron radiation (both leptonic mechanisms). Fermi-LAT has measured γ spectra from a large number of SNRs and it turns out that both leptonic and hadronic γ-ray models work for SNRs on a source-by-source basis. In the case of GRBs, only bright GRBs are favoured by the Fermi-LAT data as the detectable sources. Nevertheless, bright nearby GRBs seem to be rare.
Features of neutrino propagation become a key element when the flavour of the neutrino is taken into account in the detection process. The flavour composition and its dependence on neutrino energy are determined by conditions at the neutrino sources, in particular by the strength of the magnetic field, the density distribution and so on. Flavour is also affected by neutrino oscillations and therefore depends on the neutrino parameters. The expected composition ratio νe : νμ : ντ has the form a : 1 : 1 with a around 1, its precise value depending on the 1–3 mixing, the deviation of the 2–3 mixing from maximal, the neutrino-mass hierarchy and CP violation. Various effects typical of physics beyond the Standard Model, such as neutrino decay, non-standard neutrino interactions or the presence of new neutrino species, can also modify the ratio. Finally, the ratio is extremely sensitive to possible violations of fundamental symmetries, such as Lorentz symmetry or the equivalence principle, which lead to modifications of the dispersion relations.
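A minimal worked example of where a : 1 : 1 comes from, assuming a pure pion-decay source: the chain π⁺ → μ⁺νμ followed by μ⁺ → e⁺νe ν̄μ yields source ratios νe : νμ : ντ = 1 : 2 : 0. Over astrophysical baselines the oscillations average out, so the flux arriving at Earth is

$$\phi_\alpha^{\oplus} \;=\; \sum_\beta \bar{P}_{\alpha\beta}\,\phi_\beta^{\rm src}, \qquad \bar{P}_{\alpha\beta} \;=\; \sum_{i=1}^{3} |U_{\alpha i}|^2\,|U_{\beta i}|^2,$$

where U is the lepton mixing matrix. For tribimaximal-like mixing this maps 1 : 2 : 0 into approximately 1 : 1 : 1; small deviations – a non-zero 1–3 angle, non-maximal 2–3 mixing – shift a away from 1.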
Another highlight of the workshop was the report on the first year of data-taking by DeepCore, the inner detector of IceCube, which has a low energy threshold of 10 GeV. The rate of events, which include cascades induced by electron-neutrinos as well as by neutral-current interactions of muon-neutrinos, was shown. DeepCore will detect around 800 neutrino-induced cascades per year. The physics motivations for the Phased IceCube Next Generation Upgrade (PINGU-I and PINGU-II) were also presented.
Neutrino observatories have now reached sufficient sensitivity to constrain multimessenger signals, γ-rays and UHECRs with minimal assumptions. The absence, as yet, of evidence for astrophysical neutrinos poses a problem for future projects because it means that IceCube will only scratch the surface of neutrino astronomy. The prime targets now are transient sources.
There are several projects already under consideration or in progress. KM3NeT, a detector for neutrino astronomy under the Mediterranean Sea, which will have an instrumented volume of more than 5 km³, is in its preparatory phase. It will search for neutrino point sources in the energy range 100 GeV – 1 PeV. The Cherenkov Telescope Array is a new instrument for very-high-energy (10–10⁵ GeV) γ astronomy. JEM-EUSO will detect Cherenkov light coming from the atmosphere using a telescope on the International Space Station that will have an instantaneous aperture of up to 10⁶ km². ANITA-III, approved to fly in 2013–2014, will search for ultra-high-energy neutrinos with 3–5 times higher sensitivity than ANITA-II. The Askaryan Radio Array is a ground-based antenna array at the South Pole covering an area of 100 km². The expected yield is 3–5 neutrinos per year above 10¹⁷ eV, below the bulk of the cosmogenic-neutrino predictions.
The NUSKY 2011 workshop was held just as high-energy neutrino astronomy enters a new cubic-kilometre era. Current bounds already have important implications, and any further improvement in the data will sharpen the picture of the neutrino sky. The hope is that, with progressively more data from IceCube, a discovery is on the horizon. As Francis Halzen, of the University of Wisconsin-Madison and IceCube, concluded: “Hess 1912… and still no conclusion [on the origins of cosmic rays]; now the instrumentation is in place… SNRs and GRBs are in close range!”
Studies of the effects of atmospheric ions from galactic cosmic rays on clouds date back to C T R Wilson at the beginning of the 20th century; his work on simulating ion-droplet processes led to a Nobel Prize in Physics for his development of the cloud chamber. Laboratory studies beginning in the 1960s then established the ion-enhancement of aerosol nucleation at ion-production rates that are characteristic of the lower atmosphere (e.g. Vohra et al. 1984). Aerosols are tiny liquid or solid particles suspended in the atmosphere and – above a size of around 100 nm – they provide the seed particles for all cloud droplets; “nucleation” indicates that they are produced by the clustering (condensation) of trace atmospheric molecules rather than by direct emission into the atmosphere, as with sea-spray particles.
Robert Dickinson of the National Center for Atmospheric Research, Boulder, Colorado, was the first to postulate in detail a cosmic-ray-aerosol-cloud mechanism to explain solar-climate variability (Dickinson 1975). More than 20 years later, correlations between cosmic-ray changes and clouds were reported for the first time by two groups (Svensmark and Friis-Christensen 1997; Pudovkin and Veretenenko 1997). Since these initial observations, a large number of papers have been published that either dispute or support the presence of significant correlations between cosmic rays and clouds, so the atmospheric observations are not yet settled. Carefully controlled laboratory experiments provide the best way of understanding whether or not cosmic rays could affect Earth’s clouds and climate because atmospheric measurements are affected by many uncontrolled sources of variability. This is precisely the aim of the CLOUD experiment at CERN.
In its first round of measurements, the CLOUD experiment is tackling one of the most challenging and long-standing problems in atmospheric science: to understand how new aerosol particles are formed in the atmosphere and the effect that these particles have on climate. Increases in the number concentration of atmospheric aerosol particles cool the climate both directly, by reflecting more sunlight, and indirectly, by forming additional cloud droplets, which makes clouds brighter and extends their lifetimes. The increased amount of aerosols in the atmosphere caused by human activities is thought to have offset a large fraction of the warming caused by greenhouse gases.
By current estimates, about half of all cloud droplets are formed on aerosol particles that were nucleated, so the nucleation process is likely to be important for climate. However, the physical mechanisms of nucleation are not well understood, so aerosol nucleation in current global-climate models is either based on theoretical calculations or adjusted to match observations. The CLOUD collaboration aims to understand the nucleation process and provide reliable aerosol physics for climate models. These data will help to quantify the direct and indirect radiative effects of aerosols, which are recognized as the largest source of uncertainty in anthropogenic climate forcing.
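As an illustration of what “adjusted to match observations” means in practice, global aerosol models often represent nucleation with simple empirical rate laws. The sketch below shows two commonly used functional forms; the coefficients are placeholder values chosen for illustration, not CLOUD results:

# Two empirical nucleation rate laws of the kind used in global aerosol
# models; A and K below are illustrative placeholders, not measured values.

def j_activation(h2so4, A=2.0e-6):
    """Activation-type nucleation: J = A * [H2SO4], in cm^-3 s^-1."""
    return A * h2so4

def j_kinetic(h2so4, K=1.0e-12):
    """Kinetic-type nucleation: J = K * [H2SO4]^2, in cm^-3 s^-1."""
    return K * h2so4 ** 2

conc = 1.0e7  # sulphuric-acid concentration, molecules cm^-3 (typical daytime value)
print(j_activation(conc), j_kinetic(conc))  # -> 20.0 and 100.0 cm^-3 s^-1

In models, A or K is tuned until the computed particle concentrations match field observations, which is precisely why laboratory measurements of the true mechanism are needed.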
To answer these questions, the CLOUD collaboration has built a 3 m stainless-steel chamber with much lower concentrations of contaminants than any previous experiment. This allows the measurement of nucleation from controlled amounts of selected trace gases, without the complicating effect of undetected gases. CERN know-how has been key in achieving the demanding technical requirements for the CLOUD chamber and its gas and thermal systems: impurities of condensable vapours in the chamber must be kept below about 1 part per trillion, and the temperature stability of the chamber must be around 0.01 K. CLOUD uses a secondary beam from the CERN Proton Synchrotron (PS) to simulate the effects of cosmic rays with precise control of the “cosmic-ray” intensity. The experiment has several other unique features, including the capability to create an ion-free environment with an internal electric clearing field, precise control of light-induced (photolytic) gas-phase reactions by means of ultraviolet (UV) illumination from a fibre-optic system, and highly stable operation at any temperature between 300 K and 183 K. During experimental runs, small amounts of the chamber atmosphere are extracted and passed through an array of state-of-the-art mass spectrometers and other instruments to measure the ultralow concentrations of atmospheric vapours and other important quantities.
The first results
In its first results, published in Nature, the CLOUD collaboration reports on its measurements of the formation of new particles from sulphuric acid, ammonia and water vapours, which have long been thought to account for nucleation in the Earth’s atmosphere. The experiment has also measured the enhancement of atmospheric aerosol nucleation by galactic cosmic rays, and the report includes the first-ever measurements of the chemistry and growth, molecule by molecule, of newly formed charged clusters, from single molecules up to stable aerosol particles.
Figure 1 shows a typical sequence of online measurements of the nucleation rates under different ionization conditions. High voltage is initially applied to the clearing-field electrodes to sweep ions from the chamber and suppress all effects of ionization (a). The run is started by opening the shutter of the UV system at a selected aperture, which rapidly establishes a chosen sulphuric-acid concentration in the chamber by photolytic oxidation of SO2 in the presence of O3 and H2O – as occurs in the real atmosphere (b). Particles begin to appear in each aerosol counter after a time delay that depends on the particle growth rate and the detection size threshold (c). When the neutral nucleation rate, Jn, has been measured, the clearing field is turned off (a). This allows cosmic rays to generate ion pairs that remain in the chamber, as shown by the appearance of small ion clusters (b). The ions give rise to a distinct increase in the nucleation rate, Jgcr, resulting from ion-induced nucleation at the ground-level cosmic-ray intensity (c). In the next step, a 3.5 GeV/c pion beam from the PS is turned on and passes through the chamber, producing a further sharp increase in the nucleation rate, corresponding to Jch. Finally, the run is ended by closing the UV shutter and turning on the clearing-field high voltage, which starts to clear the chamber of aerosols in preparation for a new run under different conditions.
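As a rough illustration of how a nucleation rate is read off an aerosol-counter time series, the toy model below shows the delay-then-rise behaviour described above: particles nucleate at a small size, grow at a constant rate and cross the counter’s detection threshold after a delay, after which the counted concentration rises with slope J. This is a minimal sketch under assumed numbers, not the collaboration’s analysis code (growth is taken as constant and wall losses are neglected):

import numpy as np

# Assumed numbers for the toy model (not CLOUD values).
GR = 5.0                  # particle growth rate, nm per hour
d_nuc, d_cut = 1.7, 3.0   # nucleation size and counter threshold, nm
J_true = 2.0              # nucleation rate, particles cm^-3 s^-1

delay = (d_cut - d_nuc) / GR * 3600.0               # seconds to grow to detectable size
t = np.linspace(0.0, 3600.0, 361)                   # one hour of readings, every 10 s
n = np.where(t > delay, J_true * (t - delay), 0.0)  # ideal counter concentration

# The slope of the linear rise after the appearance delay estimates J.
mask = t > delay
J_est = np.polyfit(t[mask], n[mask], 1)[0]
print(f"appearance delay = {delay:.0f} s, fitted J = {J_est:.2f} cm^-3 s^-1")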
CLOUD has already made several important discoveries. First, the experiment has shown that the most likely nucleating vapours, sulphuric acid and ammonia, cannot account for the nucleation that is observed in the lower atmosphere: the nucleation measured in the chamber occurs at only 1/10–1/1000 of the rates observed there. It is clear from these first results that the treatment of aerosol formation in climate models will need to be revised substantially, because all models assume that nucleation in the lower atmosphere is caused by these vapours and water alone. It is now essential to identify the additional nucleating vapours and to establish whether their sources are mainly natural or anthropogenic. If the vapours have strong anthropogenic sources, then a new climate-forcing agent from human activities potentially exists. Alternatively, if the source is natural, then there is the potential for a new climate feedback that may affect the understanding of how the climate responds to radiative forcings.
Second, CLOUD has found that natural rates of atmospheric ionization caused by galactic cosmic rays substantially enhance nucleation under the conditions studied so far – by up to a factor of 10. Ion-enhancement is particularly pronounced in the cool temperatures of the mid-troposphere (about 5 km altitude) and above. CLOUD has found that at these temperatures, sulphuric acid and water vapour can nucleate without the need for additional vapours. This result leaves open the possibility that cosmic rays could also influence climate. However, it is premature to conclude that cosmic rays have a significant influence on clouds and climate until the additional nucleating vapours have been identified, their ion enhancement measured and the ultimate effects on clouds have been confirmed. So far, CLOUD has only measured the formation rate of aerosols in the few-nanometre size range, which are far too small to seed clouds.
The next steps for the CLOUD experiment will be to investigate the role of biogenic organic vapours in atmospheric aerosol nucleation, to measure condensational growth of aerosols up to sizes sufficient to seed cloud droplets and to study the effect of cosmic rays on these processes. Also, during the next few months, a new fast expansion system will be installed on the CLOUD chamber to allow it to operate as a classical Wilson cloud chamber for the in situ creation of liquid and ice clouds. This will extend CLOUD’s capability to study the effects of cosmic rays directly on cloud droplets and ice particles themselves.
When he visited the Ben Nevis Observatory in 1894 and 1895, Wilson was fascinated by the electrical and cloud-condensation phenomena he witnessed. He returned to the Cavendish Laboratory at Cambridge determined to recreate clouds in the laboratory and study their physics. This led to his expansion cloud chamber, later described by Ernest Rutherford as “the most original and wonderful instrument in scientific history”. After his Nobel Prize in 1927, Wilson returned to his passion for meteorological phenomena and devoted the rest of his life to the study of atmospheric electricity and clouds. Today, a century after its invention, Wilson’s cloud chamber remains “the most original and wonderful instrument” for studying the link between cosmic rays and clouds.
• The CLOUD (Cosmics Leaving OUtdoor Droplets) experiment is conducted by an international and interdisciplinary collaboration of scientists from Austria (University of Innsbruck, University of Vienna), Finland (Finnish Meteorological Institute, Helsinki Institute of Physics, University of Eastern Finland, University of Helsinki), Germany (Johann Wolfgang Goethe University Frankfurt, Leibniz Institute for Tropospheric Research), Portugal (University of Beira Interior, University of Lisbon), Russia (Lebedev Physical Institute), Switzerland (CERN, Paul Scherrer Institut), the UK (University of Manchester, University of Leeds) and the US (California Institute of Technology). CLOUD has received invaluable support from CERN accelerator and technical teams including, in particular, PH-DT, EN-CV, EN-MME, EN-MEF and TE-VSC.