Exoplanet hunters find rich planetary systems

The era of discovering extrasolar planets one at a time is over, with the detection of two entire systems of six and possibly even seven planets orbiting nearby stars. These recent discoveries by two competing groups are opening a new chapter in exoplanet searches, in particular with the possible discovery of an Earth-sized planet within the habitable zone of one of the two systems.

More than 450 exoplanets have been detected since the discovery, 15 years ago, of 51 Peg b, the first extrasolar planet found around a normal star. The number of new detections per year is still rising and could surpass 100 new planets for 2010. Measurements of stellar radial velocity remain the prime detection method, but detection via planetary transits is catching up rapidly since the launch of the French–European satellite for Convection, Rotation and planetary Transits (CoRoT) in 2006 and of the NASA Kepler mission in 2009. Because a planet transiting in front of the disc of its parent star is a rare occurrence, these discoveries are made by monitoring hundreds of thousands of relatively distant stars. Radial-velocity searches, by contrast, focus on hundreds of nearby stars and aim to detect the wobble of the star induced by the gravitational pull of the orbiting planet.
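
For orientation, the size of the wobble that such surveys must resolve is given by the standard two-body expression for the stellar radial-velocity semi-amplitude (a textbook result, not quoted in the article):

    K = \left(\frac{2\pi G}{P}\right)^{1/3} \frac{m_{\mathrm{p}} \sin i}{(M_\star + m_{\mathrm{p}})^{2/3}} \frac{1}{\sqrt{1 - e^{2}}}

where P is the orbital period, i the orbital inclination and e the eccentricity. Jupiter displaces the Sun by about 12 m s⁻¹ over its 12-year orbit, whereas an Earth analogue induces a signal of order only 10 cm s⁻¹, which is why current low-mass detections favour short periods and low-mass host stars such as Gliese 581.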

The exoplanets discovered first were gas giants akin to Jupiter and Saturn. A significant step forward was achieved in 2004 with the detection of Uranus- and Neptune-sized planets (CERN Courier October 2004 p19). A subsequent milestone was the detection of two planets, with sizes only slightly greater than that of Earth, orbiting Gliese 581 (CERN Courier June 2007 p12). The claim that at least one of them was in the habitable zone of this low-mass star made this stellar system a popular place to study, but subsequent investigations suggested that one of the planets is most likely too hot and the other too cool to keep water liquid on its surface.

Now, Gliese 581 is again in the headlines with the possible discovery of a new, even smaller planet that is located in between the two already found and thus at the appropriate distance from the star to sustain life. Together with another newly detected object at longer periods, the small planet – with a mass of 3 to 4 times the mass of the Earth – would bring to six the number of planets in this system, which is only 20 light-years away.

The presence of the two new planets has been derived from a study led by Steven Vogt of the Lick Observatory of the University of California. His team combined 122 radial-velocity measurements made with the High-Resolution Echelle Spectrometer (HIRES) on the Keck 10 m telescope on Mauna Kea, Hawaii, with 119 previously published measurements from the High Accuracy Radial-velocity Planet Searcher (HARPS) mounted on the 3.6 m telescope of the European Southern Observatory at La Silla, Chile. Although the HIRES measurements are less accurate than the HARPS ones, the combination of the two datasets improves the statistics.

The group using HARPS gained attention a month earlier with the detection of a rich stellar system of up to seven planets orbiting the solar-type star HD 10180, 130 light-years away. This study, led by Christophe Lovis of the Observatory of the University of Geneva, clearly detects five Neptune-like planets and there is good evidence for a heavier planet orbiting further out, as well as for an Earth-sized planet close to the star.

Although the existence of the new planets has still to be confirmed more firmly in both exoplanetary systems, these new studies show that such systems can be highly populated by low-mass planets and suggest that potentially habitable planets might be relatively common. According to Vogt, they could be present around a few tens of per cent of stars in the solar neighbourhood.

TRIUMF lays on a feast of nuclear physics

The 25th International Nuclear Physics Conference (INPC) took place on 4–9 July at the University of British Columbia, hosted by TRIUMF, Canada’s national laboratory for particle and nuclear physics in Vancouver. As the main conference in the field, this triennial meeting is endorsed and supported by the International Union of Pure and Applied Physics (IUPAP). This year it attracted more than 750 delegates – including 150 graduate students – from 43 countries and covered topics in nuclear structure, reactions and astrophysics; hadronic structure, hadrons in nuclei and hot and dense QCD; new accelerators and underground nuclear-physics facilities; neutrinos and nuclei; and applications and interdisciplinary research. Participants found many opportunities to connect with fellow nuclear physicists from across the globe. At conferences such as the INPC, which span an entire discipline, many unexpected links emerge, often leading to fruitful new discussions or collaborations.

Impressive progress

INPC 2010 opened with an afternoon public lecture by Lawrence Krauss of Arizona State University. In his talk, “An atom from Vancouver”, the renowned cosmologist and public speaker gave a broad perspective on why nuclear physics is key to a deeper understanding of how the universe was formed, as well as the birth, life and death of stars. The next morning, Peter Braun-Munzinger of GSI opened the scientific plenary programme with a talk that highlighted progress since the previous INPC in Tokyo in 2007, with theoretical and experimental examples from around the world. All topics at the conference were then well represented in both the plenary programme and the well attended afternoon parallel programme, where more than 250 invited and contributed talks were presented, as well as more than 380 posters. The poster presentations were among the most lively of the sessions, with many graduate students and post-doctoral fellows participating.

The scientific high points included the presentations in the field of hot and dense QCD, which reported on experimental and theoretical progress at Brookhaven’s Relativistic Heavy Ion Collider. The session on nuclear reactions provided highlights from many new and exciting facilities, including the Radio Isotope Beam Factory at the RIKEN centre in Japan, as well as an outlook of what can be expected from the Facility for Antiproton and Ion Research in Germany and the Facility for Rare Isotope Beams in the US. The quest towards the “island of stability” in the superheavy-element community is still ongoing, and new progress was reported with the identification of element 114.

There is also impressive progress being made in the theoretical sector, in particular with new ab initio approaches to calculations. Applications of these methods and progress in nucleon–nucleon interactions, where three-body interactions are now considered state of the art, were presented in the sessions on nuclear structure. The predictions of such calculations can be tested by experiment: laser and ion-trap measurements, for example, give access to the ground-state properties of exotic nuclei. In-beam or in-flight experiments pave the way to even more exotic isotopes, where new magic numbers for the nuclear-shell model are appearing. This will also prove relevant for nuclear astrophysics, where there has been significant experimental progress with new measurements of direct-capture reactions using rare-isotope beams and background-suppressed facilities located in underground laboratories. Presentations in this field also covered research on neutron stars and new results from the modelling of core-collapse supernovae, which clearly indicate the need for neutrino interactions to be included.

Neutrinos played a large role in other sessions, for example on new facilities, where progress from the deep underground facilities was presented, together with other exciting new projects. The first results from long-baseline oscillation experiments show progress in this field, while double-beta-decay experiments are coming close to first results. These are keenly awaited not only by the community of nuclear physicists but by many others as well.

The sessions on fundamental symmetries are always a highlight of the INPC series: tests of the Standard Model using atomic nuclei or nuclear-physics methods can probe sectors complementary to those investigated by large particle-physics experiments, for example in experiments that measure atomic and neutron electric-dipole moments. Recent progress was reported in nuclear beta decay in the context of testing the unitarity of the Cabibbo–Kobayashi–Maskawa matrix, as well as in measurements of the mass of the W boson and the weak mixing angle. Talks on the muon anomalous magnetic moment and its sensitivity for probing “new physics” showcased the burgeoning activity in this field.

One of the keenly anticipated presentations was given in a session on hadron structure, in which the collaboration that has measured the Lamb shift in muonic hydrogen at the Paul Scherrer Institute presented their results. Their measurement of the rms charge radius of the proton indicates a 5σ deviation from the established value, spawning a flurry of new experimental and theoretical activity.

The conference also featured discussions on the growing importance of nuclear physics in near-term societal and economic arenas. David Dean of the US Department of Energy shared an interesting perspective on the future of the field in relation to growing concerns about energy production and consumption. From India, Swaminathan Kailas of the Bhabha Atomic Research Centre talked about the utilization of nuclear technologies in the development of thorium-based nuclear reactors. Andrew Macfarlane of the University of British Columbia described the application of nuclear physics to probing magnetic behaviours at the nanoscale level in regimes relevant for condensed-matter physics.

The large programme of the oral and poster sessions was extended to include special presentations by the winners of the IUPAP Young Scientist prizes, which are awarded in the field of nuclear physics every three years during the INPC conference. This year’s winners were: Kenji Fukushima of the Yukawa Institute for Theoretical Physics, Kyoto University; Peter Müller of Argonne National Laboratory; and Lijuan Ruan of Brookhaven National Laboratory. These three researchers represent the future excellence in nuclear physics, in the fields of theoretical QCD, precision experiments in low-energy nuclear-halo physics and experimental techniques related to quark-gluon plasma.

The organizers of INPC 2010 made a special effort to attract many graduate students and post-doctoral fellows to the conference. For example, TRIUMF combined its traditional summer school with the US National Science Foundation’s summer school for nuclear physics, directly prior to the conference. This not only allowed the school to recruit some of the INPC delegates as lecturers, but also gave students a broad overview of the field of nuclear physics before the conference. In addition, INPC 2010 teamed up with Nuclear Physics A to provide awards for the best student oral presentation and the top three poster presentations at the conference. An international panel of judges together with members from the editorial board of Nuclear Physics A decided on the following award winners from a strong field of applicants: Paul Finlay (Guelph) for oral presentation; Young Jin Kim (Indiana), Evan Rand (Guelph) and Thomas Brunner (Munich) for posters.

A treat of a different kind awaited delegates at the conference banquet at Vancouver’s famous Museum of Anthropology. Olivia Fermi, the granddaughter of the famed nuclear physicist Enrico Fermi, was among the guests and in her after-dinner speech she shared anecdotes from her life growing up in the Fermi household. The First Nations artefacts and art pieces, together with the museum’s setting overlooking the Pacific Ocean and the skyline of Vancouver, made this venue a perfect fit for a very special conference. The field clearly presented itself in a healthy and dynamic state, with many young people eagerly anticipating the advent of new experiments, theory and facilities. At the end of the conference, IUPAP announced the location of the next in the series, which will be held in Florence in 2013.

• For more about the full programme and presentations, see http://inpc2010.triumf.ca/.

Physics buzz in Paris

Sixty years ago, particle physics was in its infancy. In 1950 Cecil Powell received the Nobel Prize in Physics for the emulsion technique and the discovery of the charged pions, and an experiment at Berkeley revealed the first evidence for the neutral version. In New York, the first in a new series of conferences organized by Robert Marshak took place at the University of Rochester with 50 participants. The “Rochester conference” was to evolve into the International Conference on High-Energy Physics (ICHEP) and this year more than 1100 physicists gathered in Paris for the 35th meeting in the series.

ICHEP’s first visit to the French capital was in 1982. CERN’s Super Proton Synchrotron had just begun to operate as a proton–antiproton collider and the UA2 collaboration reported on the first observations of back-to-back jets with high transverse momentum. This year, as ICHEP returned to Paris, jets in a new high-energy region were again a highlight. This time they came from the LHC, one undoubted “star of the show”, together with the president of France, Nicolas Sarkozy.

Given the growth in the field since the first Rochester conference, this report can only touch on some of the highlights of ICHEP 2010, which took place on 22–28 July at the Palais des Congrès and followed the standard format of three days of parallel sessions, a rest day (Sunday) and then three days of plenary sessions. The evening of 27 July saw Parisians and tourists well outnumber physicists at the “Nuit des particules”, a public event held at the Grand Rex theatre (see box). On the rest day, in addition to various tours, there was the opportunity to watch the final stage of the 2010 Tour de France as it took over the heart of Paris.

A tour of LHC physics

The LHC project has had similarities to the famous cycle race – participants from around the world undertaking a long journey, with highs and lows en route to a thrilling climax. In the first of the plenary sessions, Steve Myers, director for accelerators and technology at CERN, looked back over more than a year of repair and consolidation work that led to the LHC’s successful restart with first collisions in November 2009. With the collider running at 3.5 TeV per beam since March this year, the goal is to collect 1 fb⁻¹ of integrated luminosity with proton collisions before further consolidation work takes place in 2012 to allow the machine to run at its full energy of 7 TeV per beam in 2013. The long-term goal is to reach 3000 fb⁻¹ by 2030. This will require peak luminosities of 5 × 10³⁴ cm⁻² s⁻¹ in 2021–2030, for which studies are already underway, for example on the use of crab cavities.

The proposed long-term schedule envisages one-year shutdowns for consolidation in 2012, 2016 and 2020, with shorter periods of maintenance in December/January in the intervening years, and 6–8 month shutdowns every other year after 2020. Heavy-ion runs are planned for each November when the LHC is running, starting this year. Myers also provided glimpses of ideas for a 16.5 TeV version of the LHC that would require 20 T dipole magnets based on Nb₃Sn, Nb₃Al and high-temperature superconductors.

What many at the conference were waiting for were the reports from the LHC experiments on the first collision data, presented both in dedicated parallel sessions and by the spokespersons on the first plenary day. Common features of these talks revealed just how well prepared the experiments were, despite the unprecedented scale and complexity of the detectors. The first data – much of it collected only days before the conference as the LHC ramped up in luminosity – demonstrated the excellent performance of the detectors, the high efficiency of the triggers and the swift distribution of data via the worldwide computing Grid. All of these factors combined to allow the four large experiments to rediscover the physics of the Standard Model and make the first measurements of cross-sections in the new energy regime of 7 TeV in the centre-of-mass.

The ATLAS and CMS collaborations revealed some of their first candidate events with top quarks – previously observed only at Fermilab’s Tevatron. They also displayed examples of the more copiously produced W and Z bosons, seen for the first time in proton–proton collisions, and presented cross-sections that are in good agreement with measurements at lower energies. Lighter particles provided the means to demonstrate the precision of the reconstruction of secondary vertices, shown off in remarkable maps of the material in the inner detectors.

Both ATLAS and CMS have observed dijet events with invariant masses higher than the Tevatron’s centre-of-mass energy. The first measurements of inclusive jet cross-sections in both experiments show good agreement with next-to-leading-order QCD (The window opens on physics at 7 TeV). In searches for new physics, ATLAS has provided a new best limit on excited quarks, which are now excluded in the mass region 0.4 < M < 1.29 TeV at 95% CL. For its part, by collecting data in the periods between collisions at the LHC, CMS derived limits on the existence of the “stopped gluino”, showing that it cannot exist with a lifetime longer than 75 ns.

The LHCb collaboration reported clear measurements of several rare decays of B mesons and cross-sections for the production of open charm, the J/ψ and bb̄ states. With the first 100 pb⁻¹ of data, the experiment should become competitive with Belle at KEK and DØ at Fermilab, with discoveries in prospect once 1 fb⁻¹ is achieved.

The ALICE experiment, which is optimized for heavy-ion collisions, is collecting proton–proton collision data for comparison with later heavy-ion measurements and to evaluate the performance of the detectors. The collaboration has final results on charged-multiplicity distributions at 7 TeV, as well as at 2.36 TeV and 0.9 TeV in the centre-of-mass. These show significant increases with respect to Monte Carlo predictions, as do similar measurements from CMS. ALICE also has interesting measurements of the antiproton-to-proton ratio.

While the LHC heads towards its first 1 fb⁻¹, the Tevatron has already delivered some 9 fb⁻¹, with 6.7 fb⁻¹ analysed by the time of the conference. One eagerly anticipated highlight was the announcement of a new limit on the Higgs mass from a combined analysis of the CDF and DØ experiments. This excludes a Higgs with a mass between 158 and 175 GeV/c², thus eliminating about 25% of the region favoured by the analysis of data from the Large Electron–Positron collider and elsewhere. As time goes by, there is ever less hiding room for the long-sought particle. In other Higgs-related searches, the biggest effect is a 2σ discrepancy found by CDF in the decay of the Higgs to b-quark pairs in the minimal supersymmetric extension of the Standard Model.

Stressing the Standard Model

The strongest hint at the Tevatron for physics beyond the Standard Model comes from measurements of the decays of B mesons. The DØ experiment finds evidence for an anomalous asymmetry in the production of muons of the same sign in the semi-leptonic decays of Bs mesons, which is greater than the asymmetry predicted by CP violation in the B system in the Standard Model by about 3.2σ. While new results from DØ and CDF for the decay Bs→J/ψ φ show better consistency with the Standard Model, they are not inconsistent with the measured like-sign dimuon asymmetry.

Experiments at the HERA collider at DESY, and at the B factories at KEK and SLAC, have also searched extensively for indications of new physics, and although they have squeezed the Standard Model in every way possible it generally remains robust. Of course, the searches extend beyond the particle colliders and factories, to fixed-target experiments and detectors far from accelerator laboratories. The Super-Kamiokande experiment, now in its third incarnation, is known for its discovery of neutrino oscillations, which is the clearest indication yet of physics beyond the Standard Model, but it also searches for signs of proton decay. It has now accumulated data corresponding to 173 kilotonne-years and, with no evidence for the proton’s demise, it sets the proton’s lifetime at greater than 1 × 10³⁴ years for the decay to e⁺π⁰ and greater than 2.3 × 10³⁴ years for νK⁺.

The first clear evidence for neutrino oscillations came from studies of neutrinos from the Sun and those created by cosmic rays in the upper atmosphere, but now it is the turn of the long-baseline experiments based at accelerators and nuclear reactors to bring the field into sharper focus. At accelerators a new era is opening with the first events in the Tokai-to-Kamioka (T2K) experiment, as well as the observation of the first candidate ντ in the OPERA detector at the Gran Sasso National Laboratory, using beams of νμ from the Japan Proton Accelerator Research Complex and CERN respectively.

While T2K aims towards production of the world’s highest-intensity neutrino beam, the honour currently lies with Fermilab’s Neutrinos at the Main Injector (NuMI) beam, which delivers νμ to the MINOS experiment, with a far detector 735 km away in the Soudan Mine. MINOS has now analysed data for 7.2 × 10²⁰ protons on target (POT) and observes 1986 events where 2451 would be expected without oscillation. The result is the world’s best measurement of |Δm²|, with a value of 2.35 +0.11/−0.08 × 10⁻³ eV², and sin²2θ > 0.91 (90% CL). MINOS also finds no evidence for oscillations to sterile neutrinos and puts limits on θ₁₃. Recently, the experiment has been running with an antineutrino beam, and the results hint at differences in the oscillations of antineutrinos as compared with neutrinos. With antineutrinos, the collaboration measures |Δm²| = 3.36 +0.45/−0.40 × 10⁻³ eV² and sin²2θ = 0.86 ± 0.11. As yet the statistics are low, with only 1.7 × 10²⁰ POT for the antineutrinos, but the experiment can quickly improve this with more data.
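
The deficit (1986 events observed against 2451 expected) is interpreted with the standard two-flavour survival probability, quoted here for orientation rather than taken from the talk:

    P(\nu_\mu \to \nu_\mu) \simeq 1 - \sin^{2}2\theta \, \sin^{2}\!\left(\frac{1.27\,\Delta m^{2}\,[\mathrm{eV^{2}}]\,L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)

For the MINOS baseline of L = 735 km and |Δm²| ≈ 2.35 × 10⁻³ eV², the first oscillation maximum falls at a neutrino energy of roughly 1.4 GeV, within the NuMI spectrum.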

The search for direct evidence of dark-matter particles, which by definition lie outside the Standard Model, continues to have tantalizing yet inconclusive results. Experiments on Earth search for the collisions of weakly interacting massive particles (WIMPs) in detectors where background suppression is even more challenging than in neutrino experiments. Recent results include those from the CDMS II and EDELWEISS II experiments, in the Soudan Mine and the Modane Underground Laboratory in the Fréjus Tunnel, respectively. CDMS II presented its final results in November 2009, following a blind analysis. After a timing cut, the analysis of 194 kg-days of data yields two events, with an expected background of 0.8 ± 0.1 (stat.) ± 0.2 (syst.) events. The collaboration concludes that this “cannot be interpreted as significant evidence for WIMP interactions”. EDELWEISS II has new, updated results, which now cover an effective exposure of 322 kg-days. They have three events near threshold and one with a recoil energy of 175 keV, giving a limit on the cross-section of 5.0 × 10⁻⁸ pb for a WIMP mass of 80 GeV (at 90% CL).
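
As a rough illustration of why two events over an expected background of 0.8 are not compelling (a back-of-the-envelope check, not the collaboration’s own calculation, and one that ignores the background uncertainty), the Poisson probability of such an upward fluctuation is easily computed:

    from scipy.stats import poisson

    b = 0.8        # expected background events (uncertainties neglected)
    n_obs = 2      # events observed after the timing cut
    # Probability that background alone gives n_obs or more events
    p_value = poisson.sf(n_obs - 1, b)    # sf(k, mu) = P(N > k)
    print(f"P(N >= {n_obs} | b = {b}) = {p_value:.2f}")   # roughly 0.19

A background-only fluctuation this large or larger occurs about one time in five, far from the level needed to claim a signal.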

Higher energies, in nature and in the lab

Looking to the skies provides a window on nature’s own laboratory, the cosmos. The annihilation of dark matter in the galaxy could lead to detectable effects, but the jury is still out on the positron excess observed by the PAMELA experiment in space. Back on Earth, the Pierre Auger Observatory and the High-Resolution Fly’s Eye (HiRes) experiment, in the southern and northern hemispheres respectively, detect cosmic rays with energies up to 10²⁰ eV (100 EeV) and beyond. Both have evidence for a suppression of the flux at the highest energies, consistent with the Greisen–Zatsepin–Kuzmin (GZK) cut-off. There is also evidence for a change in composition towards heavier nuclei at higher energies, although this may also be related to a change in cross-sections at the highest energies. The correlation of the arrival directions of cosmic rays with energies of 55 EeV or more with active galactic nuclei, first reported by the Pierre Auger collaboration in 2007, has weakened with further data, falling from the earlier value of 69 +11/−13% to stabilize at around 38 +7/−6%, now with more than 50 events.

Cosmic neutrinos provide another possibility for identifying sources of cosmic rays. The ANTARES water Cherenkov telescope in the Mediterranean Sea now has a sky map of its first 1000 neutrinos and puts upper limits on point sources and on the diffuse astrophysical neutrino flux. IceCube, with its Cherenkov telescope in ice at the South Pole, also continues to push down the upper limits on the diffuse flux with measurements that begin to constrain theoretical models.

In the laboratory, the desire to push the exploration of the high-energy frontier further continues to drive R&D into accelerator and detector techniques. The world community is already deeply involved in studies for a future linear e⁺e⁻ collider. The effort behind the International Linear Collider, designed to reach 500 GeV in the centre-of-mass, is relatively mature, while work on the more novel two-beam concept for a Compact Linear Collider to reach 3 TeV is close to finishing a feasibility study. Other ideas for machines further into the future include the concept of a muon collider, which would require muon cooling to create a tight beam but could provide collisions at 4 TeV in the centre-of-mass. Reaching much higher energies will require new technologies to overcome the electrical-breakdown limits of RF cavities. Dielectric structures offer one possibility, with studies showing breakdown limits that approach 1 GV/m. Beyond that, plasma-based accelerators hold the promise of even greater gradients, as high as 50 GV/m.

Particle physics has certainly moved on since the first Rochester conference; maybe a future ICHEP will see results from a muon collider or the first plasma-wave accelerator. For now, ICHEP 2010 proved a memorable event, not least as the first international conference to present results from collisions at the LHC. Its success was thanks to the hard work of the French particle-physics community, and in particular the members of the local organizing committee, led by Guy Wormser of LAL/Orsay. Now, the international community can look forward to the next ICHEP, which will be in Melbourne in 2012.

• Proceedings of ICHEP 2010 are published online in the Proceedings of Science, see http://pos.sissa.it.

Microcosm meets macrocosm in Valencia

The guiding spirit behind the series of annual symposia on Particles, Strings and Cosmology (PASCOS) is the unification of the microcosm with the macrocosm. It follows from the basic principles of uncertainty and mass-energy equivalence, which imply that when we probe deep inside subatomic space, we inevitably come across states of very high energy and mass that would have abounded in the early history of the universe. Recreating them in the laboratory is like recreating dinosaurs as in Jurassic Park, but is much more significant because it helps us to trace the very early history of the universe.

Making heavy particles such as the W and Z bosons and the top quark, as well as studying their interactions in the laboratory, helps us retrace the history of the universe to within a few picoseconds of its beginning. The discovery of the Higgs boson(s) and supersymmetric (SUSY) particles should, likewise, throw light on the nature of the phase transition that the universe experienced during those first few picoseconds – as well as on the nature of the cold dark matter (CDM) that permeates the universe today as a relic of its early history. But the story does not end there. We would like to follow the history of the universe right back to the instant of the Big Bang – and even beyond, where the standard tool of quantum field theory breaks down. The recent developments in string theory offer us the best hope of addressing these issues.

The interface of particle physics, string theory and cosmology is thus a highly active field of research at the frontiers of human knowledge. The PASCOS series of international symposia was started in the early 1990s in the US to recognize this interplay. The meetings strive to bring together researchers from these three areas to facilitate their mutual interaction and the cross-fertilization of ideas. After circulating round the US during its first decade, the series is now global, having visited India, South Korea, the UK, Canada and Germany.

PASCOS 2010, the 16th symposium in the series, took place on 19–23 July in the Spanish city of Valencia and was organized by the Instituto de Física Corpuscular (IFIC), which is the largest particle-physics laboratory of the Spanish National Research Council (CSIC) and is operated jointly with the University of Valencia. This year’s symposium, attended by more than 160 participants from around the world, was of particular significance because it came in the wake of the start-up of the LHC and the launch of the Planck satellite. In general, plenary sessions were held in the mornings, while the afternoons were devoted to parallel sessions focusing on the three areas of particles, strings and cosmology.

The microcosm

The first day’s proceedings started with an overview of the status of the LHC by Richard Hawkings of CERN. This first run of the LHC should accumulate an integrated luminosity of 1 fb⁻¹ at a total energy of 7 TeV by the end of 2011. After a shutdown of a year to increase the total energy to 14 TeV, the next run is scheduled to start in early 2013. In talks on phenomenology, Manuel Drees of Bonn University and Werner Porod of the University of Würzburg and IFIC Valencia said that a meaningful SUSY search could already begin with the 1 fb⁻¹ of data at 7 TeV. However, a meaningful search for Higgs bosons will require about 10 fb⁻¹ of data at 14 TeV, as Howard Haber of the University of California, Santa Cruz, discussed. If the Higgs does not show up, then other physics may come into play, as Francesco Sannino of the University of Southern Denmark noted. In looking at alternative mechanisms, he argued that a successful technicolour theory requires near-conformal dynamics.

Meanwhile, experiments at the Tevatron have results based on 5 fb⁻¹ of data, which Vadim Rusu of Fermilab presented. These include the recent result from DØ on CP violation in B–B̄ mixing in the like-sign dimuon channel, which disagrees with the Standard Model by 3.2σ. Searches for a Standard Model Higgs particle continue at the Tevatron with a very complex multichannel analysis that leads to a small window of excluded masses, only a few giga-electron-volts wide, around 165 GeV, with the window set to widen as the luminosity increases beyond 10 fb⁻¹ per experiment through 2011.

Assuming that SUSY does appear at the LHC, then quantitative predictions will be valuable. Kiwoon Choi, of the Korea Advanced Institute of Science and Technology, discussed SUSY-breaking from the perspective of string theory, including dilaton/moduli mediation, gauge mediation, anomaly mediation and D-term breaking. He argued that in string theory it is quite plausible that all mediation schemes could be present and give comparable contributions. Angel Uranga, of the Instituto de Física Teórica UAM/CSIC Madrid pointed out that, although string theory is unique (all versions being related by dualities), the way that the extra six dimensions are compactified is far from unique, leading to different low-energy physics. Current approaches include heterotic models, (intersecting) D-brane models and F-theory constructions, with concrete models leading to predictions for the spectrum for SUSY and exotic particles at the LHC.

An exciting possibility, discussed by Ignatios Antoniadis of CERN, is that the string scale is at energies of tera-electron-volts. This could in principle solve the gauge-hierarchy problem and provide an explanation of the weakness of gravity, provided that the extra dimensions perpendicular to the D-brane on which the Standard Model lives are large, which would lead to spectacular missing energy signals at the LHC. The extra tera-electron-volt-scale dimensions parallel to the brane also lead to Kaluza-Klein excitations of gauge bosons, and string/strong gravity effects, including the possible production of micro-black holes. “Are these ideas physical reality or imagination?” asked Antoniadis, replying that “the LHC will explore physics beyond the Standard Model”.

The only solid new microscopic physics in the past dozen years has been the discovery of neutrino mass and mixing. Yoichiro Suzuki of the University of Tokyo tracked the progress of atmospheric and solar-neutrino experiments over this period, focusing on the results from SuperKamiokande. This talk was complemented by that of Mayly Sanchez of Iowa State University/Argonne National Laboratory, who reported on the latest results from accelerator and reactor experiments. These included the antineutrino results from the MINOS experiment that show a 2σ discrepancy with the neutrino results, and the OPERA detector’s first observation of a τ event in the beam of muon neutrinos sent from CERN to the Gran Sasso National Laboratory.

Steve King of the University of Southampton considered the antineutrino results from MiniBooNE at Fermilab, which show an excess consistent with oscillations of the kind reported some years ago by the LSND collaboration at Los Alamos, while the neutrino results do not. He advocated a “wait and see” approach to these data and focused instead on the paradigm of three active neutrinos, as well as ideas such as SUSY R-parity violation and the several different types of see-saw mechanism that have been proposed and studied by José Valle’s group at IFIC. King also discussed the exciting observation that accurate tribimaximal lepton mixing suggests a non-Abelian discrete family symmetry that might unlock the long-standing flavour puzzle, which began with the discovery of the muon in 1937. These ideas may be incorporated into a complete SUSY grand unified theory (GUT) of flavour, as Eduardo Peinado and Stefano Morisi of IFIC, and Reiner de Adelhart Toorop of Nikhef also discussed. This could include new SUSY GUT relations presented by Stefan Antusch of Max Planck Institut für Physik, Munich.

The macrocosm

Carlos Frenk of Durham University set the agenda for the challenges facing the macrocosm with an entertaining talk on “The Standard Model of cosmogony: what next?”. After reviewing this “a priori implausible model but one which makes definite predictions and is therefore testable,” he focused on the prospects for testing the three assumptions that underpin the ΛCDM model: dark energy density Λ with negative pressure; structure seeded by quantum fluctuations during inflation; and CDM particles.

Although the equation of state for dark energy can be constrained, with current combined limits giving the ratio of the pressure to the density, w = –0.97±0.05, Frenk regarded the prospects for understanding the nature of dark energy as questionable. However in a later talk on the self-tuning cosmological constant, Jihn Kim of Seoul National University reported on progress in the search for a non-anthropic solution to this big problem, including ideas such as inflation, the wave-function of the universe and “quintessential axions”. By contrast, in talking of the holographic principle and the surface of last scatter, Paul Frampton of the University of North Carolina at Chapel Hill attempted to dispense with dark energy altogether, starting from the observation that the observable universe is close to being a black hole. He argued that from our viewpoint, the apparent acceleration of the universe arises as a consequence of information storage on the surface of the visible universe because of the entropy of the black hole. The contrasting nature of these talks perhaps underscores Frenk’s point that we are a long way from understanding dark energy.

Alessandro Melchiorri of Sapienza Università di Roma reviewed the latest results from seven years of observation by the Wilkinson Microwave Anisotropy Probe and presented the First Light Survey from the Planck satellite from September 2009. The polarization of the cosmic microwave background (CMB) will be measured accurately with Planck, including the curl-free E-mode and the divergenceless B-mode, which at large angular scales are produced only by gravitational waves and provide a key signature of inflation. Planck will measure the gravitational wave background at 3σ if the tensor-to-scalar ratio r = 0.05, as could be the case in some of the inflation models discussed by Philipp Kostka and Jochen Bauman of the Max Planck Institut für Physik at Munich, as well as Lancaster University’s Anupam Mazumdar and others. Qaisar Shafi of the University of Delaware also reviewed some recent ideas including gauge singlet Higgs inflation and Standard Model inflation, including a non-minimal coupling of the Higgs field to gravity, where the gravitational couplings can have desirable effects if their magnitude is tuned to be very large.

If CDM turns out to arise from weakly interacting massive particles (WIMPs), such as those predicted, for example, in SUSY models with conserved R-parity, they could soon be discovered in direct detection experiments at underground laboratories. For example, the Cryogenic Dark Matter Search in the Soudan Mine has seen two candidate events, although, as both Jodi Cooley-Sekula of Southern Methodist University and Andrea Giammanco of the Catholic University of Louvain pointed out, neither of them is a “golden event”. Other direct detection experiments such as DAMA in Gran Sasso and CoGeNT, again in the Soudan Mine, also have candidate events, as Nicolao Fornengo of INFN/Torino described. He also talked about indirect WIMP detection signals that could be observed via annihilation radiation. Dark-matter effects in gamma rays could be seen by the Fermi Gamma-Ray Space Telescope, and leptonic anomalies in cosmic rays studied by the PAMELA satellite experiment, Fermi, the HESS Cherenkov telescope array and future experiments such as the Alpha Magnetic Spectrometer. Aldo Morselli of INFN Roma Tor Vergata showed that the positron excess observed by PAMELA is, however, well fitted by the assumption of nearby pulsar(s), and that the electron discrepancies observed by Fermi are now being used to help constrain the pulsar models. A potentially clean signal of WIMPs could come as gamma-ray spectral lines from dwarf spheroidal galaxies, which are dark-matter dominated systems with low astrophysical background, but Fermi has not yet detected such signals.

The microcosm-macrocosm connection

Bhaskar Dutta of Texas A&M University illustrated the connection between particle colliders and dark matter in the framework of minimal supergravity (mSUGRA). The favoured CDM regions of mSUGRA, such as the co-annihilation region, imply distinctive signatures for gluinos produced at the LHC, including two jets, two τ leptons and missing energy. By suitable choices of kinematic variables, the SUSY particle masses can be reconstructed and the mSUGRA parameters determined to check for a consistent CDM region. John Gunion of the University of California at Davis also discussed such connections in the next-to-minimal supersymmetric Standard Model (NMSSM), motivated by data from the Large Electron–Positron (LEP) collider, which prefer a Higgs mass of around 100 GeV. This is possible in the NMSSM if the dominant Higgs decays are to pairs of CP-odd Higgs bosons that are sufficiently light that they do not decay to b-quark pairs so as to escape LEP limits. Such a scenario with the lightest neutralinos at 5–10 GeV might also account for recent results from the CoGeNT experiment. However, these findings are already challenged by first data from the XENON 100 experiment in the Gran Sasso laboratory, with the next results from this powerful experiment eagerly expected soon.

In the quest to discover the particle responsible for dark matter, which experiment will be first, the LHC or XENON 100? Whatever happens, it is clear that these and other experiments will all be required in order to unveil the complete theory at the heart of both the microcosm and the macrocosm.

Into Africa – a school in fundamental physics

On 1 August, 65 students arrived at the National Institute for Theoretical Physics (NITheP) in Stellenbosch, South Africa. They were there to participate in the first African School on Fundamental Physics and its Applications (ASP2010). More than 50 participants had travelled from 17 African countries, fully supported financially to attend the intensive, three-week school. Others, from Canada, Germany, India, Switzerland and the US, helped to create a scientific melting pot of cultural diversity that fused harmoniously throughout the duration of the school.

ASP2010 was planned as the first in a series of schools to be held every two years in a different African country. It was sponsored by an unprecedentedly large number of international physics institutes and organizations, indicating the widespread interest that exists in making high-energy physics and its benefits the basis of a truly global partnership by reaching out to a continent where increased participation needs to be developed. The school covered a range of topics: particle physics, particle detectors, cosmology and accelerator technologies, as well as some of the applications, such as computing, medical physics, light sources and magnetic confinement fusion.

The courses were taught by physicists from around the globe, but included a significant number from South Africa, which has relatively well established research and training programmes in these areas of physics. The picture throughout the rest of Africa, in particular the sub-Saharan region, is rather different. As an example, consider the facts about African researchers at CERN. Currently, only 51 of the 10,000 researchers registered at CERN have African nationalities, and only 18 of those work for African institutes. As CERN’s director-general, Rolf Heuer, points out: “When I show people the map of where CERN’s users come from, it’s gratifying to see it spanning the world, and in particular to see southern-hemisphere countries starting to join the global particle-physics family. Africa, however, remains notable more for the number of countries that are not involved than for those that are.” John Ellis, CERN’s adviser for relations with non-member states and one of the school’s founders, confirms that “sub-Saharan African countries are under-represented in CERN’s collaborations”.

“This new series of schools will strengthen existing collaborations and develop current and new networks involving African physicists,” explains Fernando Quevedo, director of the Abdus Salam International Centre for Theoretical Physics (ICTP) in Trieste, one of the sponsors of the school. He said: “This activity was a big success in all respects: lecturers of the highest scientific level, a perfect example of close collaboration among several international institutions towards a single goal and, most importantly, bringing the excitement and importance of the study of basic sciences to a community with great potential. The standard set for future activities is very high.” ICTP, with its 46 years of experience in training, working and collaborating with scientists in developing nations, is wholeheartedly committed to ASP2010. The aims of the school fit perfectly with ICTP’s mission to foster science in Africa. The knowledge, relationships and collaborations that will result from it will enhance ICTP’s existing programmes in Africa.

“An extraordinary opportunity”

A strikingly new aspect of the school was that a large number of national and international organizations and institutes collaborated to make it happen, thereby demonstrating a common belief in its importance and worth. These included Spain (Ministry of Foreign Affairs), France (Centre National de la Recherche Scientifique/IN2P3, Institut des Grilles, Commissariat à l’énergie atomique), Switzerland (École polytechnique fédérale de Lausanne, Paul Scherrer Institute), South Africa (NITheP, National Research Foundation), and the US (Fermilab, Department of Energy, Brookhaven, Jefferson Lab, National Science Foundation), as well as the international institutions CERN and ICTP. On top of this, the International Union of Pure and Applied Physics offered travel grants to five female students. The number of involved organizations is set to increase in future editions of the school. Steve Muanza, a French experimental physicist of Congolese origin and also a founder of the school, says that in particular, “early support from IN2P3 was crucial for involving the other organizations in this new type of school in Africa”.

The 65 students were selected from more than 150 applicants. Among them were some of the brightest aspiring physicists in the continent, who represent the future of fundamental physics and its applications in Africa. Chilufya Mwewa, a participant from Zambia, summarizes what the school meant for her: “Attending ASP2010 was such an extraordinary opportunity that it had a huge positive impact on my life. The school indeed enhanced my future career in physics. Thanks to you and other organizers for opening us up to other physics platforms that we never had a chance to know about in our own countries.” Ermias Abebe Kassaye, a student from Ethiopia, underlines these aspects: “I have got a lot of knowledge and experience from the school. The school guides me to my future career. I obtained the necessary input to disseminate the field to my country and encourage others to do research in this field. I am working strongly to achieve my desire and to shine like a star, and your co-operation and help is essential to our success.”

Apart from highlighting established research in fundamental physics in South African universities and research institutes, ASP2010 also emphasized the role of high-energy physics in the innovation of medicine, computing and other areas of technology through the “applications” aspect of the programme. The iThemba Laboratory for Accelerator Based Sciences (iThemba LABS), situated between Cape Town and Stellenbosch, is a significant player in this area. “As well as being an important producer of radioisotopes, it is the only laboratory in the southern hemisphere where hadron therapy is performed with neutron and proton beams, which have to date treated more than 1400 and 500 patients, respectively,” explains Zeblon Vilakazi, director of the iThemba LABS.

Participating students had the opportunity to take part in two practical courses in which they became acquainted with the use of scintillation detectors and performed measurements of environmental radioactivity. Laser practicals and a computing tutorial for simulations using the GEANT4 toolkit were also available at the University of Stellenbosch. The breaks between lectures allowed many informal discussions to continue. “In these discussions, practical information was given to the students about opportunities for fellowships for further education, research positions and other schemes, such as Fermilab International fellowships, the CERN summer student programme and the ICTP Diploma Programme,” explains Ketevi Assamagan, a Brookhaven physicist of Togolese origin and a member of the ASP2010 organizing committee.

A number of additional demonstrations and talks were also incorporated into the programme. A video conference with Young-Kee Kim, Fermilab’s deputy-director, provided a vision of science on a planetary scale; a webcast that connected the students to the CERN Control Centre enabled them to experience a live demonstration of proton acceleration; and special talks by John Ellis, Albert De Roeck and Philippe Lebrun of CERN, and Jim Gates, of the University of Maryland (and a scientific adviser to President Obama), also made big impressions on the students. In parallel, several of the school’s lecturers gave public lectures in Cape Town. Anne Dabrowski, a former South African physics student, provided a role model to support the dream of African participation in high-energy physics. Now an applied physicist in the Beams Department at CERN, she was a member of the local organizing committee.

South Africa has recently formed a programme for collaboration with CERN and has become the second African country to join the ATLAS collaboration. “We are ready to do our best to assist any deserving student or postdoc to become involved via one of our member universities or national facilities that are participating in activities at CERN,” says Jean Cleymans, the director of the SA-CERN Programme. “Students are welcome to visit our SA-CERN website or the ASP2010 website for further information and to get in contact with us.” From discussions with the students, it was clear that several were keen to take advantage of these opportunities.

Several high-profile South African scientists and government officials participated in the last day of the school. This outreach and forum day reviewed the practical aspects of fundamental physics, which could be used as a gateway to innovation and to enhance future collaborations. The inspirational enthusiasm of the students at ASP2010 indicates that overall the future of fundamental science and technology on the African continent is in very good hands.

• For more about ASP2010, see http://AfricanSchoolofPhysics.web.cern.ch/.

Second Banff Workshop debates discovery claims

On 11–16 July, the Banff International Research Station in the Canadian Rockies hosted a workshop for high-energy physicists, astrophysicists and statisticians to debate statistical issues related to the significance of discovery claims. This was the second such meeting at Banff (CERN Courier November 2006 p34) and the ninth in a series of so-called “PHYSTAT” workshops and conferences that started at CERN in January 2000 (CERN Courier May 2000 p17). The latest meeting was organized by Richard Lockhart, a statistician from Simon Fraser University, together with two physicists, Louis Lyons of Imperial College and Oxford, and James Linnemann of Michigan State University.

The 39 participants, of whom 12 were statisticians, prepared for the workshop by studying a reading list compiled by the organizers and by trying their hand at three simulated search problems inspired by real data analyses in particle physics. These problems are collectively referred to as the “second Banff Challenge” and were put together by Wade Fisher of Michigan State University and Tom Junk of Fermilab.

Significant issues

Although the topic of discovery claims may seem rather specific, it touches on many difficult issues that physicists and statisticians have been struggling with over the years. Particularly prominent at the workshop were the topics of model selection, with the attendant difficulties caused by systematic uncertainties and the “look-elsewhere” effect; measurement sensitivity; and parton density function uncertainties. To bring everyone up to date on the terminology and the problems inherent in searches, three introductory speakers surveyed the relevant aspects of their respective fields: Lyons for particle physics, Tom Loredo of Cornell University for astrophysics and Lockhart for statistics.

Bob Cousins of the University of California, Los Angeles, threw the question of significance into sharp relief by discussing a famous paradox in the statistics literature, originally noted by Harold Jeffreys and later developed by Dennis Lindley, both statisticians. The paradox demonstrates with a simple measurement example that it is possible for a frequentist significance test to reject a hypothesis, whereas a Bayesian analysis indicates evidence in favour of that hypothesis. Perhaps even more disturbing is that the frequentist and Bayesian answers scale differently with sample size (CERN Courier September 2007 p39). Although there is no clean solution to this paradox, it yields several important lessons about the pitfalls of testing hypotheses.
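
A minimal numerical sketch of the paradox (an illustration constructed here, not the example Cousins used): hold the observed significance fixed at 2.5σ and increase the sample size. The frequentist p-value stays constant, while a simple Bayes factor for a point null against a Gaussian-prior alternative swings ever more strongly in favour of the null.

    import numpy as np
    from scipy.stats import norm

    sigma = 1.0   # known per-observation standard deviation (assumed)
    tau = 1.0     # width of the prior on the mean under H1 (assumed)
    z = 2.5       # observed significance, held fixed as n grows

    for n in (10, 100, 10_000, 1_000_000):
        se = sigma / np.sqrt(n)      # standard error of the sample mean
        xbar = z * se                # sample mean sitting exactly at z sigma
        p_value = 2 * norm.sf(z)     # two-sided p-value: constant, ~0.012
        # Bayes factor B01 = P(data | H0: mu = 0) / P(data | H1: mu ~ N(0, tau^2))
        b01 = norm.pdf(xbar, 0.0, se) / norm.pdf(xbar, 0.0, np.sqrt(tau**2 + se**2))
        print(f"n = {n:>9}: p = {p_value:.3f}, B01 = {b01:.1f}")

With these (assumed) inputs the p-value always “rejects” the null at the 5% level, yet B01 climbs from below one to well above ten as n grows, which is exactly the disagreement between the two approaches that the paradox highlights.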

One of these is that the current emphasis in high-energy physics on a universal “5σ” threshold for claiming discovery is without much foundation. Indeed, the evidence provided by a measurement against a hypothesis depends on the size of the data sample. In addition, the decision to reject a hypothesis is typically affected by one’s prior belief in it. Thus one could argue, for example, that to claim observation of a phenomenon predicted by the Standard Model of elementary particles, it is not necessary to require the same level of evidence as for the discovery of new physics. Furthermore, as Roberto Trotta of Imperial College pointed out in his summary talk, the emphasis on 5σ is not shared by other fields, in particular cosmology. For example, Einstein’s theory of gravity passed the test of Eddington’s measurement of the deflection of light by the Sun with rather weak evidence when judged by today’s standards.

Statistician David van Dyk, of the University of California, Irvine, came back to the 5σ issue in his summary talk, wondering if we are really worried about one false discovery claim in 3.5 million tries. His answer, based on discussions during the workshop, was that physicists are more concerned about systematic errors and the “look-elsewhere” effect (i.e. the effect by which the significance of an observation decreases because one has been looking in more than one place). According to van Dyk, the 5σ criterion is a way to sweep the real problem under the rug. His recommendation: “Honest frequentist error rates, or a calibrated Bayesian procedure.”
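
The “3.5 million” figure is simply the one-sided Gaussian tail probability beyond 5σ; a two-line check (illustrative only):

    from scipy.stats import norm

    p = norm.sf(5.0)            # one-sided tail probability beyond 5 sigma
    print(p, round(1 / p))      # ~2.9e-7, i.e. about one chance in 3.5 million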

Many workshop participants commented on the look-elsewhere effect. Taking this effect properly into account usually requires long and difficult numerical simulations, so that techniques to simplify or speed up the latter are eagerly sought. Eilam Gross, of the Weizmann Institute of Science, presented the work that he did on this subject with his student Ofer Vitells. Using computer studies and clever guesswork, they obtained a simple formula to correct significances for the look-elsewhere effect. In his summary talk, Luc Demortier of Rockefeller University showed how this formula could be derived rigorously from results published by statistician R B Davies in 1987. Statistician Jim Berger of Duke University explained that in the Bayesian paradigm the look-elsewhere effect is handled by a multiplicity adjustment: one assigns prior probabilities to the various hypotheses or models under consideration, and then averages over these.
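
A toy version of the brute-force simulations mentioned above (an illustration only, not the Gross–Vitells or Davies procedure itself): generate background-only pseudo-experiments, record the largest upward fluctuation over all search bins, and compare the resulting “global” p-value with the naive single-bin one.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n_bins = 100        # number of independent places one "looks" (assumed)
    n_toys = 50_000     # background-only pseudo-experiments
    z_local = 3.0       # local significance of a hypothetical bump

    # For each toy, the largest background fluctuation (in sigma) over all bins
    z_max = rng.standard_normal((n_toys, n_bins)).max(axis=1)

    p_local = norm.sf(z_local)             # chance in one pre-chosen bin
    p_global = np.mean(z_max >= z_local)   # chance anywhere in the search region
    print(f"local p = {p_local:.2e}, global p = {p_global:.2e}, "
          f"trials factor ~ {p_global / p_local:.0f}")

With 100 independent bins, a local 3σ excess turns into a global probability of roughly 12%, the kind of dilution that analytic corrections such as the Gross–Vitells formula are designed to approximate without the expense of large toy-Monte-Carlo campaigns.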

Likelihoods and measurement sensitivity

Systematic uncertainties, the second “worry” mentioned by van Dyk, also came under discussion several times. From a statistical point of view, these uncertainties typically appear in the form of “nuisance parameters” in the physics model, for example a detector energy scale. Glen Cowan, of Royal Holloway, University of London, described a set of procedures for searching for new physics, in which nuisance parameters are eliminated by maximizing them out of the likelihood function, thus yielding the so-called “profile likelihood”. An alternative treatment of these parameters is to elicit a prior density for them and integrate the likelihood weighted by this density; the resulting marginal likelihood was shown by Loredo to take better account of parameter uncertainties in some unusual situations.
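
A compact sketch of the two treatments for a single counting experiment with one nuisance parameter, the background rate b, constrained by an assumed Gaussian measurement (all numbers below are illustrative, not taken from the talks):

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm, poisson

    n_obs = 12                 # observed events (assumed)
    b0, sigma_b = 5.0, 1.0     # nominal background and its uncertainty (assumed)

    def likelihood(mu, b):
        """Poisson signal-plus-background times the Gaussian constraint on b."""
        return poisson.pmf(n_obs, mu + b) * norm.pdf(b, b0, sigma_b)

    def profile_likelihood(mu):
        # Eliminate b by maximizing it out of the likelihood
        res = minimize_scalar(lambda b: -likelihood(mu, b),
                              bounds=(0.0, 20.0), method="bounded")
        return -res.fun

    def marginal_likelihood(mu):
        # Eliminate b by integrating over it, weighted by its prior density
        return quad(lambda b: likelihood(mu, b), 0.0, 20.0)[0]

    for mu in (0.0, 5.0, 10.0):
        print(f"mu = {mu:4.1f}: profile = {profile_likelihood(mu):.4f}, "
              f"marginal = {marginal_likelihood(mu):.4f}")

In this simple, near-Gaussian case the two curves have a similar shape; the differences that Loredo highlighted arise in less well behaved situations.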

While the marginal likelihood is essentially a Bayesian construct, some statisticians have advocated combining a Bayesian handling of nuisance parameters with a frequentist handling of parameters of interest. Kyle Cranmer of New York University showed how this hybrid approach could be implemented in general within the framework of the RooFit/RooStats extension of CERN’s ROOT package. Unfortunately, systematic effects are not always identified at the beginning of an analysis. Henrique Araújo of Imperial College illustrated this with a search for weakly interacting massive particles that was conducted blindly until the discovery of an unforeseen systematic bias. The analysis had to be redone after taking this bias into account – and was no longer completely blind.

In searches for new physics, the opposite of claiming discovery of a new object is excluding that it was produced at a rate high enough to be detected. This can be quantified with the help of a confidence-limit statement. For example, if we fail to observe a Higgs boson of given mass, we can state with a pre-specified level of confidence that its rate of production must be lower than some upper limit. Such a statement is useful to constrain theoretical models and to set the design parameters of the next search and/or the next detector. Therefore, in calculating upper limits, it is of crucial importance to take the sensitivity of the measurement properly into account.

How exactly to do this is far from trivial. Bill Murray of Rutherford Appleton Laboratory reviewed how the collaborations at the Large Electron–Positron collider solved this problem with a method known as CLs. He concluded that although this method works for the simplest counting experiment, it does not behave as desired in other cases. Murray recommended taking a closer look at an approach suggested by ATLAS collaborators Gross, Cowan and Cranmer, in which the calculated upper limit is replaced by a sensitivity bound whenever the latter is larger. Interestingly, van Dyk and collaborators had recently (and independently) recommended a somewhat similar approach in astrophysics.
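
For the simplest counting experiment the CLs prescription can be stated compactly: with CLs+b the p-value of the signal-plus-background hypothesis and CLb that of the background-only hypothesis, a signal rate is excluded at 95% confidence when CLs = CLs+b/CLb falls below 0.05. The sketch below (generic Python with illustrative numbers, not the LEP code) scans for the smallest excluded signal:

from scipy.stats import poisson

n_obs, b = 3, 4.0                       # observed events and expected background (illustrative)

def cls(s):
    cl_sb = poisson.cdf(n_obs, s + b)   # probability of observing n_obs or fewer under signal + background
    cl_b = poisson.cdf(n_obs, b)        # the same probability under background only
    return cl_sb / cl_b

# Smallest signal rate excluded at 95% confidence level
s = 0.0
while cls(s) > 0.05:
    s += 0.01
print(f"95% CL upper limit on the signal: about {s:.2f} events")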

Parton density uncertainties

As Lyons pointed out in his introductory talk, parton distribution functions (PDFs) are crucial for predicting particle-production rates, and their uncertainties affect the background estimates used in significance calculations in searches for new physics. It is therefore important to understand how these uncertainties are obtained and how reliable they are. John Pumplin of Michigan State University and Robert Thorne of University College London reviewed the state of the art in PDF fits. These fits use about 35 experimental datasets, with a total of approximately 3000 data points. A typical parametrization of the PDFs involves 25 floating parameters, and the fit quality is measured by a sum of squared residuals. Although each individual dataset is fitted well, it tends to be somewhat inconsistent with the rest of the datasets. As a result, the usual rule for determining parameter uncertainties (Δχ² = 1) is inappropriate, as Thorne illustrated with measurements of the production rate of W bosons.

The solution proposed by PDF fitters is to determine parameter uncertainties using a looser rule, such as Δχ² = 50. Unfortunately, there is no statistical justification for such a rule. The need for it clearly indicates that the assumption of Gaussian statistics badly underestimates the uncertainties, but it is not yet understood whether this is the result of unreported systematic errors in the data, systematic errors in the theory or the choice of PDF parametrization.
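
The practical consequence of the looser rule is easy to quantify: for a quadratic χ² the parameter uncertainty scales with the square root of the tolerance, so Δχ² = 50 inflates the Δχ² = 1 errors by a factor of about seven. A short sketch with a made-up one-parameter uncertainty illustrates the scaling:

import numpy as np

sigma_standard = 0.02   # hypothetical parameter uncertainty from the usual Delta chi2 = 1 rule

for tolerance in (1.0, 50.0):
    # For a quadratic chi-squared the uncertainty grows as the square root of the tolerance
    inflated = sigma_standard * np.sqrt(tolerance)
    print(f"Delta chi2 = {tolerance:>4.0f}  ->  uncertainty {inflated:.4f} (x{np.sqrt(tolerance):.1f})")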

Statistician Steffen Lauritzen of the University of Oxford proposed a random-effects model to separate the experimental variability of the individual datasets from the variance arising from systematic differences. The idea is to assume that the theory parameter is slightly different for each dataset and that all of these individual parameters are constrained to the formal parameter of the theory via some distributional assumptions (a multivariate t prior, for example). Another suggestion was to perform a “closure test”, i.e. to check to what extent one could reproduce the PDF uncertainties by repeatedly fluctuating the individual data points by their uncertainties before fitting them.
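
The suggested closure test can itself be sketched in a few lines: fluctuate each data point by its quoted uncertainty, refit, and compare the spread of the refitted parameters with the uncertainties obtained from the nominal fit. The example below does this for a hypothetical straight-line fit; the real exercise would of course use the full PDF-fitting machinery.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical dataset: a straight line measured at 30 points with known uncertainties
x = np.linspace(0.0, 1.0, 30)
sigma = np.full_like(x, 0.1)
y = 1.5 * x + 0.2 + rng.normal(0.0, sigma)

def fit(y_values):
    # Weighted least-squares fit of slope and intercept
    return np.polyfit(x, y_values, 1, w=1.0 / sigma)

nominal = fit(y)

# Closure test: refit many replicas in which each point is fluctuated by its uncertainty
replicas = np.array([fit(y + rng.normal(0.0, sigma)) for _ in range(2000)])
print("nominal fit (slope, intercept):", nominal)
print("spread over replicas:          ", replicas.std(axis=0))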

In addition to raising issues that require further thought, the workshop provided an opportunity to discuss the potential usefulness of statistical techniques that are not well known in the physics community. Chad Schafer of Carnegie Mellon University presented an approach to constructing confidence regions and testing hypotheses that is optimal with respect to a user-defined performance criterion. This approach is based on statistical decision theory and is therefore general: it can be applied to complex models without relying on the usual asymptotic approximations. Schafer described how such an approach could help solve the Banff Challenge problems and quantify the uncertainty in estimates of the parton densities.

Harrison Prosper of Florida State University criticized the all-too-frequent use of flat priors in Bayesian analyses in high-energy physics, and proposed that these priors be replaced by the so-called “reference priors” developed by statisticians José Bernardo, Jim Berger and Dongchu Sun over the past 30 years. Reference priors have several properties that should make them attractive to physicists: in particular, their definition is very general, they are covariant under parameter transformations and they have good frequentist sampling behaviour. Jeff Scargle, of NASA’s Ames Research Center, dispelled some old myths about data binning and described an optimal data-segmentation algorithm known as “Bayesian blocks”, which he applied to the Banff Challenge problems. Finally, statistician Michael Woodroofe of the University of Michigan presented an importance-sampling algorithm to calculate significances under nonasymptotic conditions. This algorithm can be generalized to cases involving a look-elsewhere effect.

After the meeting, many participants expressed their enthusiasm for the workshop, which raised issues that need further research and pointed to new tools for analysing and interpreting observations. The discussions between sessions provided a welcome opportunity to deepen understanding of some topics and exchange ideas. That the meeting took place in the magical surroundings of the Banff National Park could only help its positive effect.

Further reading

The most recent PHYSTAT conference was at CERN in 2007, see http://phystat-lhc.web.cern.ch/phystat-lhc/. (Links to the earlier meetings can be found at www.physics.ox.ac.uk/phystat05/reading.htm.) Details about the 2010 Banff meeting are available at www.birs.ca/events/2010/5-day-workshops/10w5068.

Ultraviolet and Soft X-ray Free-Electron Lasers: Introduction to Physical Principles, Experimental Results, Technological Challenges

by Peter Schmüser, Martin Dohlus and Jörg Rossbach, Springer. Hardback ISBN 9783540795711, £126 (€139.95, $189).


Even at first glance, my impression of this book was positive. The many coloured illustrations with detailed captions attracted my attention, so I began by reading around them. Further study did not alter this first impression.

The field of free-electron laser (FEL) technology has reached an impressive state of the art in recent years, with operation demonstrated at high power (14 kW at Jefferson Lab), at soft X-ray wavelengths (FLASH at DESY) and at hard X-ray wavelengths (the Linac Coherent Light Source at SLAC). The authors are well-known experts in the field. Jörg Rossbach, for example, led the successful development of FELs at DESY for many years. Their book is therefore interesting not only as a primer on FEL physics for students, but also as a reflection of the “view from inside”, expressing the personal opinions of people who have built a real FEL with unique radiation parameters.

“We must study a lot to learn something.” This three-century-old aphorism of the Baron de Montesquieu is fully true for modern technology, and in particular for FEL technology. One really needs to know much to understand how an FEL works, and much more to design and build an FEL facility. The book therefore covers both the theoretical description of FEL physics and the experimental methods used to build an FEL and to control the radiation parameters.

The first half provides an introduction to the theory of the FEL. It gives the reader a clear picture of electron motion in an FEL, with the 1-D FEL equations used to demonstrate the principles of FEL operation. Analytical and numerical solutions of these equations, combined with a discussion of the limitations of the 1-D theory, give a full and explicit picture of FEL physics. Despite the use of simplified mathematical models, the authors succeed in presenting a physically transparent description of issues as advanced as self-amplified spontaneous emission (SASE), the FEL radiation spectrum and radiation-energy fluctuations. Parametrizations of the numerical results allow the reader to make fast but reasonable estimates of the influence of the electron-beam parameters on the length and output power of a SASE FEL. Some theoretical topics that are useful for a deeper understanding, but are frequently not included in general physics courses, are briefly described in several appendices, with references to more detailed textbooks.

The second part of the book contains a description of experimental results and the FEL installation at the FLASH facility, which provides an excellent example for the explanation of technical details. It is recent enough to use relatively new techniques and approaches, but has operated long enough as a user facility for the experimental techniques to be well developed and tested, as well as for the real parameters of the electron and radiation beams and the corresponding limitations to be explored. The authors compare measurements with theoretical predictions for the dependence of the radiation power and the degree of bunching on the co-ordinate along the undulator, for example. This confirms that the numerous formulae of the first part are really useful.

The main part of the description of FLASH is devoted to the accelerator and the electron-beam parameters. This is natural, because the accelerator dominates both the cost and the operational effort of the whole FEL installation. The undulator line, which is another important part of the FEL, is described only briefly, probably indicating that the FLASH undulator is so good and reliable that people almost forget about it. A brief discussion of the challenges and prospects for X-ray FELs concludes the book.

Because the book focuses on X-ray FELs, it cannot cover all aspects of FEL physics and technology, so some important FEL-related issues must be sought in other books and papers. For undulators the authors refer to the corresponding book by J A Clarke, The Science and Technology of Undulators and Wigglers (OUP 2004). A better understanding of high-gain FEL physics can be gained by reading older books on microwave travelling-wave tubes, which contain almost all the equations and results of 1-D FEL theory. Indeed, the first high-gain FEL – a travelling-wave tube with an undulator, called the “ubitron” (150 kW peak power at 5 mm wavelength) – was built by Robert M Phillips in 1957. Further study can be continued through the annual FEL conference proceedings and the references they contain.

Thus, this book is very useful for students who are beginning to study FEL physics. It is also valuable for experts, who may look at their research from a different point of view and compare the authors’ way of presenting material with their own way of explaining FEL physics.

Presenting Science: A Practical Guide to Giving a Good Talk and The Craft of Scientific Communication

Presenting Science: A Practical Guide to Giving a Good Talk by Çiğdem IŞsever and Ken Peach, OUP. Hardback ISBN 9780199549085, £39.95 ($75). Paperback ISBN 9780199549092, £19.95 ($35).

The Craft of Scientific Communication by Joseph E Harmon and Alan G Gross, Chicago University Press. Hardback ISBN 9780226316611, $55. Paperback ISBN 9780226316628, $20. E-book ISBN 9780226316635, $7–$20.


Communication takes many forms, each with its own “how to” manual. Peer-to-peer, communication with the media, reaching exhibition visitors and the public in general: all now have their guides. Each audience deserves particular attention, but the ground rules are always the same: define your objectives, work out a strategy for achieving those objectives and then plan your tactics. This approach comes across loud and clear in these two very different books.

Physicists Çiğdem IŞsever and Ken Peach give a practical guide to preparing a talk, while science communicators Joseph Harmon and Alan Gross take a rather more academic look, focusing on the craft of writing a scientific paper.

IŞsever and Peach deserve high praise not only for producing this book, but also for recognizing that communication skills are important enough to be taught to science students: their book is based on a course they deliver at the University of Oxford. Their key message is to be prepared: know who your audience is, why you are talking to them and what messages you want them to carry away. “The aim,” they write, in bold text, “is to get your message across to your audience clearly and effectively.”

The book walks its readers through the steps towards achieving that goal, urging would-be speakers to research the event that they’ll be talking at and the audience they’ll be talking to, before giving advice on how to prepare the ubiquitous PowerPoint presentation. “The purpose of the slides,” reads chapter two, “is to help the audience understand the subject. Once you start to relax on this and make the slides serve some other purpose (like being intelligible to those who were not there) you risk confusing the audience.” In other words, choose your message, package it for your audience and stick to it. It’s good advice.

Later chapters develop key themes. Chapter three talks about structure: tell people what you’re going to say, say it and then remind them of what you’ve said. Chapter four develops the theme of understanding the audience’s needs, while chapter five addresses style: if you’re talking to an audience of particle physicists, for example, you’ll adopt a different style from what you would choose for school pupils.

IŞsever and Peach are somewhat disparaging about the use of corporate image, arguing that it takes up too much space and leaves little for content. In corporate communication, this is often the case, but it doesn’t have to be that way. Whether we like it or not, the word “branding” has entered the lexicon of communication in particle physics: we’re all jostling for a place in the public’s consciousness, and brand identity helps. Establishing the brand has been a key ingredient of CERN’s communication, for example, throughout the start-up phase of the LHC. Partly as a result, CERN and the LHC are fast becoming household names, providing a strong platform on which to build scientific and societal messages.

The book winds up where it began: reminding readers that the key to success is thorough preparation. Like much of the book’s advice, this applies equally well to any form of communication, be it with lab visitors, journalists or even your neighbours.


Harmon and Gross take an altogether more academic approach, analysing and dissecting the scientific paper through the ages to identify and codify what works and what doesn’t. It is the classic textbook to IŞsever and Peach’s field guide, each chapter ending with exercises for the student.

It’s a bold thing to attempt to improve on some of the most successful papers of the 20th century, but on page 22 Harmon and Gross do just that, while being careful to point out that their book was not subject to the same length constraints that a journal imposes. The point they make is that each part of a paper has a specific role to play, and by respecting that rule you’ll craft a better paper. A typical abstract, they argue, tells the reader what was done, how it was done and what was discovered. On page 22, they add a fourth element: why it matters. In doing so, the abstract becomes not only informative but also persuasive.

Harmon and Gross go on to apply the same rigorous approach to communications, ranging from grant proposals to writing for the general public, inevitably arriving at the subject of PowerPoint. In a chapter that resonates strongly with IŞsever and Peach, they point out a common failing of PowerPoint presentations: their creators often forget that audiences have only a minute or two to view each slide. Their key message? A PowerPoint slide is not a page from a scientific paper.

The book concludes with a final thought that, while most of us will never scale the intellectual heights of the great names of science, we can all aspire to approach them in terms of the clarity of our communication. These are two very different books on science communication, but their authors share a common belief that good science communication is a craft that can be learnt. Either one is a good place to start.

Bunch trains lead towards target luminosity

Three weeks of intense machine development on the LHC came to a satisfying conclusion on the night of 21 September with the final validation of the machine-protection systems for operation with “bunch trains”. Less than three weeks later, the machine was running with 248 bunches per beam, giving a peak luminosity of 8.8 × 10³¹ cm⁻² s⁻¹, close to this year’s target of 10³² cm⁻² s⁻¹.


Until the beginning of September, the LHC ran with bunches spaced by 1–2 μs, injected one bunch at a time from the Super Proton Synchrotron, the final stage in the injection chain. The change to injecting bunch trains – groups of bunches – not only reduces the time required to fill the machine but also allows for further increases in luminosity. It is therefore another important step on the route to full operation of the LHC. Eventually, the collider will run with 2808 bunches per beam, with 25 ns between bunches in a train. The target for 2010 was for a bunch spacing of 150 ns (equivalent to about 45 m) in the trains.
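
The quoted equivalence is simply the bunch spacing multiplied by the speed of light; a one-line check (illustrative only):

c = 299_792_458.0                  # speed of light in m/s
print(f"150 ns spacing = {150e-9 * c:.0f} m,  25 ns spacing = {25e-9 * c:.1f} m")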

Running with bunch trains requires the careful setting-up of crossing angles between the beams at the interaction points in order to avoid unwanted collisions on either side of the experiments. Tests showed that the minimum angle needed to avoid parasitic collisions with the 150 ns trains is 100 μrad. They also revealed that there is more dynamic aperture in the interaction regions than predicted at the nominal injection crossing angle of 170 μrad. For the subsequent physics runs, the crossing angle was reduced to 100 μrad during the ramp of the beam energy and the “squeeze”.

One consequence of using crossing angles is that all of the protection devices had to be set up to match the new trajectories around the machine, a process that alone took the best part of a week. Nevertheless, all was ready for the first physics fill under the new conditions on 22 September. For this, the operations team injected three trains of eight bunches to give 24 bunches per beam. The fill, which lasted 13.5 hours, provided around 170 nb⁻¹ of integrated luminosity. A day later, the number of bunches was increased to 56 per beam.

This initial work on bunch trains was with approximately the same total beam intensity as in August, but the first fill brought a bonus. Bunches of nominal intensity were injected into the LHC with a smaller-than-usual transverse size. While this might give a higher initial luminosity, it was expected to cause lifetime problems when the beams were brought into collision. However, the beam lifetime remained surprisingly high (around 25 hours) and the luminosity was significantly higher than expected.

The first step to higher intensity took place on 25 September, with an increase to 104 bunches per beam. The total intensity was now more than 10¹³ protons per beam and a single fill for physics could deliver more than 1 pb⁻¹. At 3.5 TeV the LHC had reached a stored energy per beam of 6 MJ, the highest for any collider and exceeding the record set at the Intersecting Storage Rings at CERN many years ago.
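
The stored-energy figure can be checked with a back-of-the-envelope calculation (a sketch assuming a beam population of just over 10¹³ protons, a number not quoted precisely in the text):

joule_per_eV = 1.602e-19
protons_per_beam = 1.1e13          # assumed: slightly more than 1e13, as stated above
energy_per_proton_eV = 3.5e12      # 3.5 TeV

stored_energy = protons_per_beam * energy_per_proton_eV * joule_per_eV
print(f"stored energy per beam: {stored_energy / 1e6:.1f} MJ")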

The next increase, to 152 bunches per beam, was made on 30 September by injecting 16 bunches at a time in two 8-bunch trains. This was followed on the night of 4–5 October with the first physics fill with 200 bunches, which provided 2 pb⁻¹ in 12 hours. Then, on 7–8 October, the fill with 248 bunches was achieved, with bunch trains injected three at a time.

The strategy for increasing the intensity is driven by the machine protection, as the stored beam energy increases with each step. The aim is to provide three successful fills for physics to deliver more than 20 hours of colliding beams before progressing to the next step. Running with protons is scheduled to stop towards the end of October, by which time the LHC should be running with 344 bunches per beam. There will then be a period to set up for the first runs with heavy ions, before a short shutdown at the end of the year.

AMS takes off for Kennedy Space Center

The Alpha Magnetic Spectrometer (AMS), an experiment that will search for antimatter and dark matter in space, left Geneva on 26 August on the penultimate leg of its journey to the International Space Station (ISS). Following work to reconfigure the AMS detector at CERN, it was flown to the Kennedy Space Center in Florida on board a US Air Force Galaxy transport aircraft.

The AMS experiment will examine fundamental issues about matter and the origin and structure of the universe directly from space. Its main scientific target is the search for dark matter and antimatter, in a programme that is complementary to that of CERN’s LHC.

Last February the AMS detector travelled from CERN to the European Space Research and Technology Centre (ESTEC) in Noordwijk for testing to certify its readiness for travel into space (CERN Courier April 2010 p5). Following the completion of the testing, the AMS collaboration decided to return the detector to CERN for final modifications. In particular, the detector’s superconducting magnet was replaced by the permanent magnet from the AMS-01 prototype, which had already flown in space in 1998. The reason for the decision was that the operational lifetime of the superconducting magnet would have been limited to three years because there is no way of refilling the magnet with liquid helium – which is necessary to maintain the magnet’s superconductivity – on board the space station. The permanent magnet, on the other hand, will now allow the experiment to remain operational for the entire lifetime of the ISS.


Following its return to CERN, the AMS detector was reconfigured with the permanent magnet before being tested with particle beams. The tests were used to validate and calibrate the new configuration before the detector leaves Europe for the last time.

On arrival at the Kennedy Space Center, AMS will be installed in a clean room for further tests. A few weeks later, the detector will be moved to the space shuttle. NASA is planning the last flight of the space-shuttle programme, which will carry AMS into space, for the end of February 2011.

Once docked to the ISS, AMS will search for antimatter and dark matter by measuring cosmic rays. Data collected in space by AMS will be transmitted to Houston and on to CERN’s Prévessin site, where the detector control centre will be located, as well as to a number of regional physics-analysis centres set up by the collaborating institutes.

• The AMS experiment stems from a large international collaboration, which links the efforts of major European funding agencies with those in the US and China. The detector components were produced by an international team, with substantial contributions from CERN member states (Germany, France, Italy, Spain, Portugal and Switzerland), and from China (Taipei) and the US. The detector was assembled at CERN, with the assistance of the laboratory’s technical services.

• This article was adapted from text in CERN Courier vol. 50, October 2010, p5