A new, 4-year project co-funded by the European Union FP7 Research Infrastructures programme and worth €26 million began on 1 February. The AIDA project (Advanced European Infrastructures for Detectors at Accelerators) will develop detector infrastructures for future particle-physics experiments in line with the European Strategy for Particle Physics.
The project, which is co-ordinated by CERN, has more than 80 institutes and laboratories involved either as beneficiaries or as associate partners, thus ensuring that the whole European particle detector community is represented. The project will receive a contribution of €8 million from the European Commission.
The particle detectors developed in the AIDA project will be used in a planned upgrade to the LHC; at the proposed International Linear Collider, which will study the Standard Model and beyond with higher precision; at Super-B factories, which aim to understand the matter–antimatter asymmetry in the universe; and at neutrino facilities.
The AIDA project is divided into three main activities: networking, joint research and transnational access. The networking activity will study promising new technologies, such as 3D detectors and vital electronics, as well as specifying technological needs for the future. Interactions with appropriate industrial partners will also be planned.
The joint research activity will see many of the beneficiary institutes working together to improve existing beam lines used for testing particle detectors. The equipment and technology needed to produce these detectors will also be upgraded.
The transnational access activity will open up access to test-beam lines at CERN and DESY, as well as to irradiation facilities across Europe, to new users. Experts in this area can contribute to the field through the findings they make at these facilities.
• For details about the project and the full list of participants, see http://cern.ch/aida.
Using data collected in proton–proton collisions at the LHC at a centre-of-mass energy of 7 TeV, the LHCb experiment has observed new rare decay modes of B0s mesons for the first time. The decay B0s → J/ψ f0(980) will be important for studying CP violation in the B0s system, while the semileptonic decay B0s → D*s2(2573)–Xμ+ν will be valuable for testing QCD-based theoretical predictions.
The first of the new decay modes is the hadronic decay B0s → J/ψ f0(980). This is particularly interesting because the final state is a CP eigenstate, which means that it can be used to measure mixing-induced CP violation. The B0s consists of a b antiquark (b̄) bound with an s quark, and can decay to a J/ψ (cc̄) together with an ss̄ state, which can be a φ or, more rarely, an f0. While the φ decays to K+K–, the f0 decays to π+π–. The collaboration analysed J/ψK+K– and J/ψπ+π– events to search for the relevant decay candidates. Using a fit to the π+π– mass spectrum with two interfering f0 resonances (f0(980) and f0(1370)), they measured the ratio of the B0s decays to J/ψ f0(980) and J/ψ φ to be 0.252 +0.046/–0.032 (stat.) +0.027/–0.033 (syst.) (LHCb collaboration 2011a). Events close to the f0(980) could be used to measure the CP-violating phase in the B0s system, which in the Standard Model is some 20 times smaller than the corresponding phase in B0 mixing and hence much more sensitive to physics beyond the Standard Model.
The LHCb collaboration has also made the first observation of another decay, B0s → D*s2(2573)–Xμ+ν. The most frequent decays of the B0s involve the b antiquark changing into a c antiquark, resulting in a c̄s charm hadron, such as a Ds– or D*s–, or other excited states. The relative proportions of such final states provide valuable information for testing theoretical models based on QCD. To investigate decays of this kind, the collaboration looked for final states in which the decay D0 → K+π– formed a vertex with a K– and a μ+. The analysis revealed two structures in the D0 K– mass spectrum at masses consistent with the Ds1(2536)– and D*s2(2573)– mesons (LHCb collaboration 2011b). While the Ds1(2536)– has been observed previously in B0s decays by the DØ collaboration at Fermilab’s Tevatron, LHCb’s result marks the first observation of the D*s2(2573)– state in B0s decays. The branching fraction relative to the total B0s semileptonic rate comes out at 3.3±1.0(stat.)±0.4(syst.)% for the D*s2(2573)–, while the value for the Ds1(2536)– is measured to be 5.4±1.2(stat.)±0.5(syst.)%. These values agree well with the predictions of the updated Isgur-Scora-Grinstein-Wise quark model, ISGW2.
The observation of these two new decay modes demonstrates that the LHCb experiment is already competitive in the field of heavy flavour physics. Great progress is expected with the larger data sample due from the coming run, with the potential to constrain, or even observe, new physics.
The Hubble Space Telescope has been pushed to its limits to find a galaxy candidate at a record distance of 13.2 thousand million light-years. The tiny, dim object observed by Hubble could be a compact galaxy of blue stars, seen as it was only 480 million years after the Big Bang. In addition to the discovery of this galaxy candidate at a redshift, z, of around 10, the study, published in Nature, shows that the rate of star formation at z=10 was 10 times lower than it was about 200 million years later, at z=8.
The quest for the most distant object in the universe has reached a symbolic milestone: a redshift of 10. The redshift, z, measures the relative shift in the wavelength of light that results from the expansion of the universe during the light’s journey from the remote galaxy to the telescope. It is therefore also a measure of the scale factor of the universe: at a redshift of z=10, the seeds of today’s massive galaxy clusters were typically closer to each other by a factor of 1+z=11.
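The factor quoted above is simple arithmetic, and a minimal sketch (ours, using the familiar Lyman-alpha line of hydrogen as an example) shows how the same 1+z that compresses early-universe distances also stretches the light we receive into the infrared:

```python
# Sketch of the redshift arithmetic described above (illustrative only).
z = 10
stretch = 1 + z   # factor by which the universe has expanded since emission
print(stretch)    # 11: structures were typically 11 times closer together

# The same factor stretches wavelengths: hydrogen's Lyman-alpha line,
# emitted in the ultraviolet at 121.6 nm, arrives in the infrared.
lyman_alpha_nm = 121.6
print(lyman_alpha_nm * stretch)   # 1337.6 nm, i.e. about 1.3 micrometres
```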
The first claim for a galaxy at a z of around 10 was later disproved (CERN Courier May 2004 p13). In 2006, the record redshift approached z=7 (CERN Courier November 2006 p10), a benchmark that was itself surpassed in less than two years (CERN Courier April 2008 p11). In 2003–2004, Hubble accumulated a total exposure of more than 10 days from an apparently empty region in the Fornax constellation. This original Hubble Ultra Deep Field (HUDF) revealed about 10,000 remote galaxies up to z of about 6. After the recent installation of the Wide Field Camera 3 (WFC3), which allows additional infrared measurements, the HUDF was observed again in the summers of 2009 and 2010, offering an unprecedented glimpse of the very first galaxies.
Rychard J Bouwens of Leiden University and the University of California, Santa Cruz, has analysed this unique dataset together with collaborators in Europe and the US. In 2010, based on the first-year data, they published the detection of three galaxies at z of around 8, one of which was confirmed spectroscopically at a redshift of z=8.6 by the European Very Large Telescope. With the addition of the 2010 data, they have now also found a galaxy that appears to be at z of around 10. The ultraviolet light emitted by this tiny galaxy – only one hundredth the size of the Milky Way – is measured in the infrared channel at 1.6 μm. It is detected neither at shorter wavelengths, because of hydrogen absorption in the early universe, nor by the Spitzer Space Telescope at longer wavelengths – thus excluding a dusty galaxy at lower redshift.
How confident is the team that the faint smudge of light seen in a single channel is not spurious? Bouwens and colleagues first checked that the source is visible in both the 2009 and 2010 datasets, as well as in two random subsets, each containing 50% of the data. Using Monte Carlo simulations, they find a probability of about 80% that the candidate is indeed real. Regardless of the uncertainty of this detection, the main surprise comes from the fact that this is the only candidate at z of around 10. Based on the extrapolation from z=6–7 towards z=10, the team should have found about three galaxies. So, instead of an upturn of star formation between z=8 and z=10, there seems to be a downturn in the already decreasing trend of star formation towards higher redshift. It therefore seems that galaxies at z of around 10 are not only extremely difficult to observe but are also much less luminous and/or numerous than the galaxies observed later in cosmic time. Less luminous galaxies would be expected in a hierarchical model of galaxy growth, and they would be better targets for Hubble’s successor, the James Webb Space Telescope – scheduled for launch in 2014 – which has a smaller field of view but high sensitivity to faint sources at very high redshift.
On 17 November 2010 the ALPHA collaboration at CERN’s Antiproton Decelerator (AD) reported online in the journal Nature that they had observed trapped antihydrogen atoms by releasing them quickly from the magnetic trap in which they were produced and detecting the annihilation of the antiproton – the nucleus of the antihydrogen atom (Andresen et al. 2010a). This exciting result from a proof-of-principle experiment paves the way to detailed study of antimatter atoms.
Do matter and antimatter obey the same laws of physics? One intriguing way to test this would be to compare the spectra of hydrogen and its antimatter twin: antihydrogen. Such studies would build on almost a century of detailed theoretical and experimental investigation of the hydrogen atom, from the Bohr model to the ultraprecise measurements of Nobel laureate Theodor Hänsch and colleagues. The frequency of the 1s–2s transition in hydrogen has been measured with a precision of about 2 parts in 1014. The CPT theorem requires that this frequency must be exactly the same in antihydrogen. The goal of the ALPHA experiment is to test this claim – at least from the high-energy physics point of view. To the atomic physicist, for whom hydrogen is the basic, elegant workhorse in the development of quantum mechanics, the question is perhaps: “How could you possibly have access to antihydrogen and not try to measure that?”
While our colleagues at the LHC have been busily setting new records for the highest-energy stored hadrons, we at the AD have been headed in the other direction – setting a new record for the lowest-energy antihadrons. The antihydrogen atoms in ALPHA can be trapped only if their kinetic energy, in temperature units, is less than 0.5 K. This corresponds to about 9 × 10–5 eV, or 3 × 10–17 times the energy of protons in the LHC, which represents quite a dynamic range for CERN.
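That dynamic range survives a back-of-the-envelope check (our arithmetic; the per-proton energy of 3.5 TeV for the 2010 LHC run is our assumption, not stated above):

```python
# Check of the quoted dynamic range (illustrative; assumes 3.5 TeV protons,
# the 2010 LHC beam energy, which is not stated in the article).
trappable_eV = 9e-5        # quoted kinetic energy of a trappable antiatom
lhc_proton_eV = 3.5e12     # assumed 2010 LHC proton energy

print(f"{trappable_eV / lhc_proton_eV:.1e}")   # ~2.6e-17, matching the quoted 3e-17
```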
The low temperature necessary has been a daunting challenge for the ALPHA experimenters. Antihydrogen is formed by mixing antiprotons from the AD with positrons from a special accumulator fuelled by a 22Na positron emitter. The particles are mixed in cryogenic Penning traps, which feature strong solenoidal magnetic fields for transverse confinement and electrostatic fields for longitudinal confinement (figure 1). The resultant antihydrogen, which is electrically neutral, can be confined only by the weak interaction of its magnetic dipole moment with an external magnetic trapping field. The strength of this dipole interaction is such that, for ground-state antihydrogen, a 1 T deep magnetic well can confine atoms with kinetic energies up to 0.7 K.
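The quoted well depth follows from a one-line conversion (our check, taking the magnetic moment of ground-state antihydrogen to be roughly one Bohr magneton):

```python
# Check of the quoted trap depth: the magnetic moment of ground-state
# (anti)hydrogen is ~1 Bohr magneton, so a 1 T well corresponds to a
# temperature of mu_B * B / k_B.
mu_B = 9.274e-24   # Bohr magneton, J/T
k_B = 1.381e-23    # Boltzmann constant, J/K
B = 1.0            # well depth, T

print(f"{mu_B * B / k_B:.2f} K")   # ~0.67 K, matching the quoted 0.7 K
```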
The atom trap in ALPHA comprises an octupole magnet and two solenoidal “mirror coils” (figure 1). These produce a magnetic minimum at the position at which the antihydrogen atoms are formed. If the atoms are formed with a kinetic energy of less than about 0.5 K (in temperature units), they are trapped. (This is for the ground state; excited atoms can have a larger magnetic moment and experience a deeper well.)
The difficulty lies in the transition from plasmas of charged particles to neutral atoms. The space-charge potential energies in the plasmas can be of order 10 eV – about 120 000 K in temperature equivalent. So one of the experimental challenges for antihydrogen trapping has been to learn how to cool and carefully manipulate the charged species to produce cold, trappable atoms.
At ALPHA, we mix about 30 000 antiprotons with about two million positrons in each attempt to trap antihydrogen. The two plasmas are placed in adjacent potential wells, as in figure 1, and the antiprotons are then driven into the positron plasma using a frequency-swept, axial electric field (Andresen et al. 2011). This drive is “autoresonant”: the frequency of the antiprotons’ oscillation in the nonlinear potential well locks automatically to the drive frequency, with the oscillation amplitude adjusting to match. The idea is to control the energy of the antiprotons precisely by carefully tailoring the drive frequencies. The antiprotons enter the positron cloud with low relative energy and do not heat the positron cloud on entry.
The positrons themselves are self-cooling: they lose energy by radiation in the 1 T magnetic field in the Penning trap. We supplement this process using evaporative cooling. Starting with an equilibrated positron plasma in a potential well, we lower one side of the well, allowing the hottest positrons to escape. The remaining positrons re-equilibrate through collisions, settling to a lower temperature. The technique, which is well known in the field of Bose-Einstein condensation for neutral atoms, was also demonstrated by ALPHA on antiprotons in 2009 (Andresen et al. 2010b). After evaporative cooling, the positrons in ALPHA are at about 40 K. Under ALPHA conditions, the antiprotons can enter the positron plasma and come into thermal equilibrium before making antihydrogen. Thus, only a small fraction of the antihydrogen atoms produced will have a kinetic energy equivalent to less than 0.5 K.
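The principle of evaporative cooling lends itself to a quick toy simulation. The sketch below (our illustration, not ALPHA’s actual analysis chain) draws a thermal sample, discards the particles energetic enough to escape over a lowered well, and recomputes the temperature of the re-equilibrated remainder:

```python
import random

# Toy model of evaporative cooling: remove the most energetic particles from
# a thermal sample and see the temperature of the remainder drop.
random.seed(1)
k = 1.0                      # work in units where Boltzmann's constant = 1
T0 = 100.0                   # initial temperature, arbitrary units
N = 100_000

# Kinetic energies of a 3D thermal ensemble follow a Gamma(3/2, kT) distribution.
energies = [random.gammavariate(1.5, k * T0) for _ in range(N)]

E_cut = 2.0 * k * T0         # lower one side of the well: particles above escape
survivors = [E for E in energies if E < E_cut]

# Collisions re-equilibrate the survivors; their temperature then follows
# from the mean energy, <E> = (3/2) k T.
T_new = (2.0 / 3.0) * sum(survivors) / (len(survivors) * k)
print(f"kept {len(survivors)/N:.0%} of particles, T: {T0:.0f} -> {T_new:.0f}")
```

Losing roughly a quarter of the sample cools the rest by about a third in this cartoon; the real trade-off between particle number and temperature is what the experiment has to optimize.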
Antiprotons and positrons are allowed to interact or “mix” for 1 s to produce antihydrogen, after which we remove any charged particles that remain trapped in the potential wells and then ground the electrodes of the Penning trap. The decisive step is to shut down the magnetic atom trap quickly to see if there are any trapped antihydrogen atoms that escape and annihilate on the walls of the device. However, even with the Penning trap’s electric fields turned off, there is still a small chance that antiprotons could be magnetically trapped due to the mirror effect in the strong magnetic field gradients in the atom trap. To eliminate this possibility, we apply pulsed electric fields along the axis of the trap, in alternating directions, so as to kick any stubborn antiprotons out of the trapping volume.
The ALPHA experiment’s superconducting atom-trap magnets, manufactured at Brookhaven National Laboratory, can be turned off with a time constant of about 9 ms. This fast shutdown helps to discriminate between antihydrogen annihilations and cosmic rays.
Antiproton annihilations are detected by an imaging, three-layer silicon vertex detector (see figure 2) that surrounds the cryostat for the traps and magnets. To be absolutely sure that any annihilations observed come from neutral antimatter and not from charged antiprotons, we apply an axial electric “bias” field to the trap while it is shutting off. While antiprotons would be deflected by this field, antihydrogen is not, and we can see the result using the position-sensitive silicon vertex detector. The silicon detector is also extremely useful in topologically rejecting cosmic rays.
The result of many trapping attempts is shown in figure 3, reproduced from the article in Nature. Each trapping attempt takes about 20 minutes of real time. In 335 trapping attempts, we observed 38 annihilations consistent with the controlled release of trapped antihydrogen atoms. The spatial distribution of these annihilations is not consistent with the expected behaviour of charged particles (figure 3). We can conclude that neutral antihydrogen atoms were trapped for at least 172 ms, which is the time it took to eject the charged particles from the trap and to apply the multiple field pulses to ensure the clearing of mirror-trapped antiprotons.
In subsequent experiments, we made good progress on improving the trapping probability and investigated the storage lifetime of antihydrogen atoms in the trap. At holding times of up to 1000 s, we still see the signal for the release of trapped atoms. This is an encouraging result that leads us to be optimistic about the future of spectroscopic studies with trapped antihydrogen.
When the AD starts up again in 2011, we hope to pick up where we left off in 2010. The first step is to continue to improve the trapping probability for produced antihydrogen atoms, by, for example, working on reducing the positron temperature and studying improvements in the mixing manipulations to make colder antihydrogen. As regards the spectrum of antihydrogen, the 1s–2s laser transition described above is not the only game in town. Microwaves can interact with antiatoms in the magnetic trap, either with the positron spin (positron spin resonance) or with the antiproton spin (antinuclear magnetic resonance). Paradoxically, using rare atoms of antimatter can offer a detection bonus for such experiments, as a resonant interaction can lead to loss and annihilation of the trapped atom – an event that can be detected with high efficiency. At ALPHA we hope to take the first steps towards microwave spectroscopy – the first resonant look at the inner workings of an antiatom – in 2011. At the same time we will be working on a new atom-trapping device that is optimized for precision measurements with both lasers and microwaves.
Having demonstrated trapping of antihydrogen atoms, the ALPHA collaboration was able to finish off the year by celebrating the honour of being recognized as the Physics Breakthrough of the Year for 2010 by Physics World magazine. We shared this honour with our friends across the wall at the AD in the ASACUSA collaboration, who produced antihydrogen in a new type of device that could lead to in-flight studies of the antiatoms. Finally, the American Physical Society news staff named our trapping of antihydrogen as one of the top ten physics-related news stories of 2010. All in all, 2010 was a vintage year for antimatter at the AD.
Last December, the cusp-trap group of the Japanese–European ASACUSA collaboration demonstrated for the first time the efficient synthesis of antihydrogen, in a major step towards the production of a spin-polarized antihydrogen beam. Such a beam will allow, for the first time, high-precision microwave spectroscopy of ground-state hyperfine transitions in antihydrogen atoms, enabling tests of CPT symmetry (the combination of charge conjugation, C, parity, P, and time reversal, T) – the most fundamental symmetry of nature. The new experiment may also shed light on one of the most profound mysteries of our universe: the asymmetry between matter and antimatter. Why is it that the universe today is made up almost exclusively of matter, and not antimatter? Scientists believe that the answer may lie in tiny differences between the properties of matter and antimatter, manifested in violations of CPT symmetry.
Testing CPT symmetry
Antihydrogen, made up of an antiproton and a positron, is attractive for testing CPT symmetry given its simple structure. In particular, comparisons of antihydrogen’s transition frequencies with those of ordinary hydrogen atoms will provide stringent tests of CPT symmetry. For this purpose, the ATRAP and ALPHA experiments under way at CERN’s Antiproton Decelerator (AD) aim to make high-precision measurements of the transition frequency between the ground state (1s) and first excited state (2s) of antihydrogen, which is close to 2466 THz, in the realm of laser spectroscopy. The ALPHA collaboration made an essential breakthrough in this approach when they successfully trapped antihydrogen for the first time in November.
The ASACUSA experiment, also at the AD, is taking the complementary approach of measuring precisely the transition frequency between the two substates of the ground state that arise from hyperfine splitting as a result of the interaction between the two magnetic moments associated with the spins of the antiproton and the positron. The collaboration aims to measure the ground-state hyperfine transition frequency, which is about 1420 MHz in the microwave region, by extracting a spin-polarized antihydrogen beam in a field-free region. Last December, the cusp-trap group of ASACUSA reported that the cusp trap, which is designed not to trap antihydrogen but to concentrate spin-polarized antiatoms into a beam, succeeded in synthesizing antihydrogen atoms with an efficiency as high as 7%. This is a big step towards the realization of high-precision microwave spectroscopy of the ground-state hyperfine transition in antihydrogen.
The cusp trap uses anti-Helmholtz coils, which are like Helmholtz coils but with the excitation currents antiparallel rather than parallel to each other. This arrangement yields a magnetic quadrupole field that has axial symmetry about the coil axis: a so-called cusp magnetic field (figure 1). In addition, an axially symmetric electric field is generated by an assembly of multi-ring electrodes (MREs) that is coaxially arranged with respect to the coils. Having axial symmetry, these magnetic and electric fields guarantee the stable storage and manipulation of a large number of antiprotons and positrons simultaneously – one of the unique features of the cusp trap. Furthermore, the magnetic field distribution of the cusp trap can produce an intensified antihydrogen beam with high spin-polarization in low-field-seeking (LFS) states. In other words, antihydrogen atoms can be tested for CPT symmetry in a field-free (or weak field) region – a vital condition for making high-precision spectroscopy a reality. These properties are exclusive to the cusp-trap scheme.
As figure 1 shows, the extracted beam is injected into a microwave cavity, followed by a sextupole magnet and a spin analyser, and then focused on an antihydrogen detector (shown in red). When the microwave frequency is in resonance with one of the hyperfine transition frequencies, it induces a spin flip, which converts the LFS state into a high-field-seeking (HFS) state. In this case, the antihydrogen beam becomes defocused (shown in purple) – an effect that is easily monitored as an intensity drop in the antihydrogen detector. As is evident from this description, the cusp-trap scheme does not need to trap antihydrogen atoms, but it can do so if necessary. The big advantage is that a large number of antihydrogen atoms with higher temperatures can participate in the measurements.
The AD at CERN supplies a pulsed antiproton beam of around 3 × 107 particles per pulse at 5.3 MeV, which is slowed down to 120 keV in ASACUSA by the radio-frequency quadrupole decelerator. For the antihydrogen experiments the beam is then injected into an antiproton catching trap (called the MUSASHI trap) through two layers of thin degrader foil. In this way, about 1.5 × 106 antiprotons per AD shot are accumulated in the trap, where they are cooled with preloaded electrons. The antiproton cloud is then radially compressed by a “rotating wall” technique to allow efficient transportation into the cusp trap. The positrons that make up the antihydrogen are supplied via a compact all-in-one positron accumulator that was designed and developed for this research. Both antiprotons and positrons are then injected into the cusp trap to synthesize cold antihydrogen atoms. A 3D track detector monitors the cusp trap, determining the annihilation position of antiprotons by tracking the charged pions they produce. The detector comprises two pairs of modules, each with 64 horizontal and 64 vertical scintillator bars that are 1.5 cm wide.
Figure 2 shows schematically the structure of the central part of the cusp trap. The MRE is housed in a cryogenic ultrahigh-vacuum tube held at a temperature of several kelvin with good thermal contact, while remaining electrically insulated. Thermal shields at 30 K located at both ends of the MRE prevent room-temperature radiation from creeping in from the beamline. Outside the MRE part of the bore tube, five superconducting coils installed symmetrically with respect to the MRE centre provide the cusp magnetic field. On the downstream side, the bore diameter is expanded for efficient extraction of the antihydrogen beam.
In the recent experiment, antihydrogen atoms were synthesized by mixing antiprotons and positrons in the nested-trap region, as shown by the blue solid line (φ1) in figure 3. Being neutral, the antihydrogen atoms were not trapped and moved more or less freely, so some of them reached the field-ionization trap (FIT). Antihydrogen atoms formed via a three-body recombination process in high Rydberg states, i.e. relatively loosely bound, were field-ionized there, and their antiprotons accumulated in the FIT. During the experiment, the FIT was opened (as indicated by the dash-dotted line, φ2) every 5 s, and the antiprotons accumulated were released and counted by the 3D tracker through their annihilations. This gave the antihydrogen synthesis rate as a function of time since the start of the mixing process. Figure 4 shows an example of the evolution of the synthesis rate for 3 × 105 antiprotons and 3 × 106 positrons, in which the rate grew in the first 20–30 s and then gradually decreased. In this case, a total of around 7 × 103 antihydrogen atoms was synthesized.
The ASACUSA collaboration is now looking forward to starting the microwave spectroscopy of hyperfine transition frequencies – which may lead to groundbreaking insights into the nature of antimatter and symmetry.
PSI2010, the 2nd International Workshop on the Physics of Fundamental Symmetries and Interactions at low energies and at the precision frontier, brought together experimentalists and theoreticians, united by a common quest for experimental precision using probes as diverse as neutrons, antiprotons, muons, atoms, molecules and even condensed-matter samples. The meeting, which was aimed at consolidating recent results and planning future directions in the field, took place at the Paul Scherrer Institut (PSI) on 11–14 October and was supported by PSI and the Swiss Institute for Particle Physics (CHIPP).
With 146 participants from 17 countries, the format of the workshop led to lively discussions, helping to promote the transfer of information within the community. Results were presented in 65 plenary talks and some 30 posters, most of which related to experiments. PSI being a world-leading centre for muon, pion and neutron physics, many presentations were related to investigations with neutrons (40%) and pions or muons (30%). This reflected both the high local interest and the strength of the worldwide community – about three-quarters of the presentations were on work at facilities other than PSI.
Gearing up for new physics
The workshop began with a talk on “How to look at low-energy precision physics in the era of the LHC” given by Daniel Wyler of the University of Zurich. He described how low-energy precision physics is complementary to the search for new physics at the LHC and how it can even answer specific questions that reach beyond the LHC – a theme that was highlighted in other talks. The final results from the TRIUMF Weak Interaction Symmetry Test (TWIST) experiment on muon decay demonstrate the impact of precision results on, for example, left–right symmetric models or sterile neutrinos, as TRIUMF’s Glen Marshall explained.
Fundamental neutron physics, introduced by Torsten Soldner of Institut Laue-Langevin (ILL), cropped up in several sessions. These covered recent controversial results on neutron-lifetime measurements in storage bottles and results on neutron decay at ILL and the Los Alamos National Laboratory (LANL), as well as new proposals for measurements with higher sensitivity. Peter Geltenbort of ILL provided a special twist to the topic with his results on the efficient guiding capabilities for ultracold neutrons (UCNs) using coated commercial Russian water hoses.
The search for permanent electric dipole moments (EDMs) of fundamental particles was discussed by several speakers, who covered the majority of the present worldwide efforts. Michael Ramsey-Musolf of the University of Wisconsin emphasized the paramount importance of permanent EDMs and their cosmological implications, and he set the scene for several talks on the experimental searches for a neutron EDM at ILL, the Spallation Neutron Source, PSI, Osaka and TRIUMF. Ben Sauer of Imperial College showed new data on the search for the electron EDM in ytterbium fluoride, while Blayne Heckel reported on activities to improve on the present world record in the experiment on mercury at the University of Washington. Future directions for EDM searches and co-magnetometers using 129Xe or neutron crystal diffraction were introduced in further talks, as well as in posters.
Part of the workshop was devoted to violations of space–time symmetry. Ralf Lehnert of the Universidad Nacional Autónoma de México outlined the theoretical framework of the extension to the Standard Model that allows universal background fields with a fixed orientation, which could typically manifest themselves in daily or yearly time variations of physics observables. On the experimental side, Michael Romalis of Princeton University and Werner Heil of the University of Mainz presented impressive new limits from searches for violations of Lorentz symmetry in clock-comparison experiments using, respectively, the K-3He system and 129Xe and 3He.
Searches for extra forces were introduced by Hartmut Abele of the Technical University of Vienna, who described the use of gravitational states of UCNs, while Anatoli Serebrov of the Petersburg Nuclear Physics Institute (PNPI) discussed the potential of stored UCNs for detecting dark matter. Several presentations also covered the search for tensor-type weak currents in nuclear beta-decay, using the WITCH experiment at ISOLDE at CERN and the LPCTrap facility at GANIL. Seth Hoedl of the University of Washington showed new results from an axion search based on a torsion pendulum. There were also reports on the status of the ALPHA and ASACUSA experiments at CERN, which aim at atomic spectroscopy of antihydrogen and related CPT tests, and CERN’s Michael Doser explained the AEGIS experiment to probe gravity with antihydrogen.
On the facilities side, a special session provided an excellent overview of the present status of UCN sources – a flourishing area worldwide. This included reports on the performance of UCN sources in operation at LANL and the University of Mainz, as well as on the status of construction at the Technical University of Munich and commissioning at PSI. Proposals for future UCN sources at the Japan Proton Accelerator Research Complex (J-PARC), TRIUMF and the PNPI were also shown at the workshop.
Several sessions were devoted to muon physics. Peter-Raymond Kettle of PSI reported on the latest results of the MEG experiment searching for the lepton-flavour violating μ → e + γ decay. The community is currently planning ahead for the next generation of searches for rare muon decays, as became clear when Bob Bernstein from Fermilab explained the Mu2e proposal, which will search for the neutrinoless conversion of muons to electrons, and Andre Schöning of Heidelberg University suggested a new μ → 3e search at PSI. Efforts towards considerably higher muon beam intensities were presented for the Research Centre for Nuclear Physics at Osaka, J-PARC and PSI, and Harry Van der Graaf of Nikhef presented new silicon-gas detectors that could be used at such future facilities.
In one of the highlights, Dave Hertzog of the University of Washington presented the newly released final result on the muon lifetime from the MuLan experiment at PSI, which gives a new determination of the Fermi weak coupling constant to 0.6 ppm. The competing muon-lifetime experiment at PSI, FAST, was presented by Eusebio Sanchez of CIEMAT, who showed the current status of the analysis and gave the outlook for results expected soon.
Laura Marcucci of the University of Pisa explained the motivations for precision measurements of muon capture in the context of theoretical efforts in effective field theory, while Peter Winter of the University of Washington detailed the ongoing MuSun experiment to determine precisely the rate of muon capture in deuterium. Results and opportunities from pion decays were discussed by Dinko Pocanic of the University of Virginia.
The new proton charge-radius result from the muonic-hydrogen Lamb-shift experiment, presented by Aldo Antognini of the Swiss Federal Institute of Technology (ETH) Zurich, revived a heated discussion about the results published earlier in 2010. Theory still struggles to explain the discrepancy between the muonic and ordinary hydrogen Lamb-shift results, both of which involve QED calculations. While optical hydrogen spectroscopy and QED appear to be in agreement with electron-scattering data, the muonic-hydrogen result, which is far more precise, is 5 σ from the CODATA value. Antognini went on to explain that all known systematic errors in the muonic experiment are far below the observed difference.
Aside from the programme of talks, the poster session provoked lively discussions among participants, enhanced by locally brewed draught beer and grilled specialities. There was also the opportunity to gather at organized evening events. In particular, a special trumpet concert linked music to physics through the performance of modern interpretations of Baroque masterworks and through the demonstration of acoustic phenomena in a special quadraphonic opus composed by one of the performers, Eckhard Kopetzki. The workshop dinner took place at the local historic grape-pressing cellar (Trotte), an easy stroll from the workshop site. The Swiss speciality of raclette cheese was served freshly melted, accompanied by the sounds of alphorns.
Many participants expressed their wish for a repeat of this low-energy precision physics workshop at PSI – the best indication of the workshop’s success. This also showed the growing interest in the field, in which various experiments and particle sources will soon come online.
In their quest to learn more about the fundamental nature of matter, high-energy physicists have developed particle accelerators to reach ever higher energies to allow them to “see” how matter behaved in the extreme conditions that existed in the very early universe. The LHC at CERN has set the latest record for this “energy frontier” in particle physics, but looking beyond the LHC, affordable colliders operating at ever larger centre-of-mass energies will call for new – perhaps even radical – approaches to particle acceleration.
In the past decade, the plasma wakefield accelerator (PWFA) has emerged as one such promising approach, thanks to the spectacular experimental progress at the Final Focus Test Beam (FFTB) facility at the SLAC National Accelerator Laboratory. Experiments there have shown that plasma waves or wakes generated by high-energy particle beams can accelerate and focus both high-energy electrons and positrons. Accelerating wakefields in excess of 50 GeV/m – roughly 3000 times the gradient in the SLAC linac – have been sustained in a metre-scale PWFA to give, for the first time using an advanced acceleration scheme, electron energy gains of interest to high-energy physicists.
To develop the potential of the PWFA and other exploratory advanced concepts for particle acceleration further, the US Department of Energy recently approved the construction of a new high-energy beam facility at SLAC: the Facility for Accelerator Science and Experimental Tests (FACET). It will provide electron and positron beams of high energy density, which are particularly well suited for next-generation experiments on the PWFA (Hogan et al. 2010).
In 2006 the FFTB facility was decommissioned to accommodate the construction of the Linac Coherent Light Source (LCLS) – the world’s first hard X-ray free-electron laser. The new FACET facility is located upstream of the injector for the LCLS (figure 1). It uses the first 2 km of the SLAC linac to deliver 23 GeV electron and positron beams to a new experimental area at Sector 20 in the existing linac tunnel. By installing a new focusing system and compressor chicane at Sector 20, the electron beam can be focused to 10 μm and compressed to less than 50 fs – dimensions appropriate for research on a high-gradient PWFA. Comparable positron beams will be provided with the addition of an upstream positron bunch-compressor in Sector 10. Peak intensities greater than 1021 W/cm2 at a pulse repetition rate of 30 Hz will be routinely available at the final focus of FACET. Electron and positron beams of such high energy density are not available to researchers anywhere else in the world.
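The quoted peak intensity is roughly consistent with the other beam parameters. A quick estimate (ours; the bunch population of 2 × 1010 electrons, about 3 nC, is our assumption, not stated in the article):

```python
import math

# Rough consistency check of the quoted >1e21 W/cm^2 peak intensity,
# assuming a bunch of 2e10 electrons (~3 nC), which is not stated above.
N = 2e10                            # electrons per bunch (assumed)
E_bunch_J = N * 23e9 * 1.602e-19    # 23 GeV per electron -> ~74 J per bunch
power_W = E_bunch_J / 50e-15        # compressed to 50 fs

radius_cm = 5e-4                    # 10 micrometre spot -> 5 micrometre radius
area_cm2 = math.pi * radius_cm**2

print(f"{power_W / area_cm2:.1e} W/cm^2")   # ~2e21, consistent with the quoted value
```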
The construction phase of the FACET project started in July 2010 and should finish in April this year. Beam commissioning will follow and the first experiments are expected to begin in the summer. A recently completed shielding wall at the end of Sector 20 allows simultaneous operation of FACET and the LCLS.
The FACET beam will offer new scientific opportunities not only in plasma wakefield acceleration but also in dielectric wakefield acceleration, investigation of material properties under extreme conditions and novel radiation sources. To get a head start on the research opportunities, university researchers and SLAC physicists met at SLAC in March 2010 for the first FACET Users Workshop. This was the first opportunity for SLAC to unveil details about FACET’s capabilities and for the visiting scientists to outline their research needs. Beam time will be allocated using an annual, peer-reviewed proposal process.
In the PWFA a short but dense bunch of highly relativistic charged particles produces a space-charge density wave or a wake as it propagates through a plasma. As figure 2 shows, the head of the single bunch ionizes a column of gas – lithium vapour – to create the electrically neutral plasma and then expels the plasma electrons to set up the wakefield. As the plasma electrons rush outward, they create a longitudinally decelerating electric field that extracts energy from the head of the bunch. The plasma ions that are left behind create a restoring force that draws the plasma electrons back to the beam axis. When the electrons rush inwards, they create a longitudinally accelerating field in the back half of the wake, which returns energy to the particles in the back of the same bunch or alternately to a distinct second accelerating bunch. The plasma thus acts as an energy transformer.
The FFTB plasma wakefield experiments used a single 20 kA electron drive bunch to excite 50 GeV/m wakes in plasma of density 2.7 × 1017 e–/cm3. Energy was transferred from the particles in the front of the bunch to the particles in the tail of the same bunch via the wakefield. These experiments verified that the accelerating gradient scales inversely with the square of the bunch length and demonstrated that these large fields can be sustained over distances of a metre, leading to doubling of the energy of the initially 42 GeV electrons in the trailing part of the drive bunch.
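That scaling can be condensed into a one-line rule of thumb. The sketch below (our illustration, normalised to the FFTB working point of roughly 50 GeV/m) shows how the gradient grows linearly with bunch charge and with the inverse square of the bunch length:

```python
# Illustrative scaling of the beam-driven wakefield gradient: linear in
# bunch charge, inverse-square in bunch length. Normalisation to the FFTB
# result of ~50 GeV/m is ours, for illustration only.
def wake_gradient_GeV_per_m(charge_ratio, length_ratio, E_ref=50.0):
    """Gradient scaled from a reference bunch (arguments are ratios to it)."""
    return E_ref * charge_ratio / length_ratio**2

print(wake_gradient_GeV_per_m(1.0, 1.0))   # 50 GeV/m at the reference point
print(wake_gradient_GeV_per_m(1.0, 0.5))   # 200 GeV/m: halving the length quadruples it
```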
Plasma wakefield acceleration will be a major area of research at FACET. Simply put, this research will strive to answer most of the outstanding physics issues for high-gradient plasma acceleration of both electrons and positrons, so that the potential for a PWFA as a technology for a future collider can be realistically assessed. The main goal of these future experiments is to demonstrate that plasma wakefield acceleration can not only provide an energy gain of giga-electron-volts for electron and positron bunches in a single, compact plasma stage, but can also accelerate a significant charge while preserving the emittance and energy spread of the beam.
The plasma wakefield experiments at FACET will need two distinct bunches, each about 100 fs long and separated by about 300 fs. The first carries a peak current of about 10 kA, both to produce a uniform, metre-long column of plasma and to drive the wake. The second bunch, which extracts energy from the wake, has a variable peak current. The sub-100 fs bunches needed for plasma wakefield acceleration are generated at FACET through a three-stage compression process that continually manipulates the longitudinal phase space so as to exchange correlated energy spread for bunch length, in a process called “chirped pulse compression”. There will be an additional collimation system within the final compression stage at FACET, and the collimation in the transverse plane will result in structures in the temporal distribution of the final compressed bunch(es).
In this way FACET will produce two co-propagating bunches. By adjusting the charge and duration of the witness bunch, FACET will be able to pass from the regime of negligible beam-loading that has been studied so far to beam acceleration with strong wake-loading. By loading down or flattening the accelerating wakefield, FACET will accelerate the witness bunch with a narrow, well-defined energy spread, as the simulation in figure 3 shows.
High-energy physics applications require not only high energies but also high beam power to deliver sufficient luminosity. For a linear collider with a centre-of-mass energy in the tera-electron-volt range, this translates to nearly 20 MW of beam power for a luminosity of 1034 cm–2s–1. When combined with the efficiencies of other subsystems (wall plug to klystron to drive beam), maximizing the efficiency of the plasma interaction will be a crucial element in keeping down the overall costs of the facility. For example, a recent conceptual design for a PWFA-based linear collider (PWFA-LC) used a drive-beam-to-witness-beam coupling of 60% to achieve an overall efficiency of 15% (Seryi et al. 2009). Theoretical models and computer simulations have estimated the efficiency of the plasma interaction to be of the order of 60% for Gaussian beams, approaching 90% for specifically tailored current profiles (Tzoufras et al. 2008).
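The efficiency chain multiplies, which is why each factor matters. A minimal sketch of the arithmetic (the 60% coupling and 15% overall figures are from Seryi et al. 2009; the upstream figure is merely what they imply):

```python
# Efficiency chain for a PWFA-based collider: the overall figure is the
# product of the subsystem efficiencies. Only the 60% plasma coupling and
# the 15% overall value are quoted; the upstream number is implied.
eta_plasma = 0.60                          # drive beam -> witness beam (quoted)
eta_overall = 0.15                         # wall plug -> witness beam (quoted)
eta_upstream = eta_overall / eta_plasma    # wall plug -> drive beam (implied)

print(f"implied wall-plug-to-drive-beam efficiency: {eta_upstream:.0%}")  # 25%
```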
Improving accelerator performance using spatially and temporally shaped pulses is one of the forefronts of research in beam physics that can be explored at FACET. Tailoring the current profile of the drive beam allows the plasma to extract energy at a uniform rate along the bunch, so as to maximize the overall efficiency. Figure 4 shows an example of such a tailored current profile for FACET and the accompanying simulated plasma wake. Bunch shaping has the added benefit of increasing the transformer ratio – that is, the ratio of the peak accelerating field to the peak decelerating field. A larger transformer ratio will lead to more energy gain per plasma stage. Finally, tailoring the profile of the witness beam loads the accelerating wakefield to produce the desired narrow energy spread.
In addition to high beam power, the luminosity needed to do physics at the energy frontier will require state-of-the-art emittance, with final beam sizes in the nanometre range. The ion column left in the wake of the drive beam provides a focusing channel with strong focusing gradients (MT/m for 1017 e–/cm3) that are linear in radius and constant along the bunch. This ion column allows a trailing witness bunch to propagate over many betatron wavelengths in a region free of geometric aberrations and emittance growth. There are, however, other sources of emittance growth in the PWFA. For instance, the hose instability between the beam and the wake (in which any transverse displacement of the beam slices grows as the beam propagates), motion of the plasma ions in response to the dense beam, synchrotron radiation and multiple Coulomb scattering can all lead to emittance growth. For a plasma accelerator at a few tera-electron-volts, the latter two effects have been shown to be negligible for appropriately injected beams. Experiments at FACET will determine the influence of the electron hose instability and of ion motion on emittance growth.
Although plasma wakefield acceleration may find applications in areas other than high-energy physics, such as compact X-FELs, collider applications will require plasmas to accelerate not only electrons but also positrons. Studies have already shown that relatively long positron bunches can create wakefields analogous to those in the electron case, which can be used to accelerate particles over distances of a metre or so with energy gains approaching 100 MeV (Blue et al. 2003). The response of the plasma to an incoming positron beam differs, however, from its response to an electron beam: in the positron case, the plasma electrons are drawn in towards the beam core. This leads to fields that vary nonlinearly in radius and in position along the bunch, resulting in halo formation and emittance growth (Hogan et al. 2003 and Muggli et al. 2008). FACET will be the first facility in the world to deliver compressed positron bunches suitable for studying positron acceleration with gradients of giga-electron-volts per metre in high-density plasmas.
Recent studies have shown that there may be an advantage in accelerating positrons in the correct phase of the periodic wakes produced by an electron drive beam. A simple yet elegant study of this concept will be done at FACET by placing a converter target at the entrance of the plasma cell and allowing the trailing witness beam to create an e–/e+ shower. The positrons born at the correct phase will ride the multi-giga-electron-volt-per-metre wake through the plasma and emerge from the downstream end with a potentially narrow energy spread and small emittance (Wang et al. 2008). In the longer term, FACET has been designed to allow an upgrade to the Sector 20 beam line, called a “sailboat chicane”, which will allow electron and positron bunches from the SLAC linac to be delivered simultaneously to the plasma entrance with varying separation in time (figure 5). By switching the bunch order and delivering the compressed positron beam to the plasma first, FACET can also study the physics of proton-driven plasma wakefield acceleration (CERN Courier March 2010 p7). The combination of high-energy, high-peak-current electron and positron beams will make FACET the premier facility in the world for studying advanced accelerator concepts, leading the way in turning plasma wakefield acceleration into a future accelerator technology.
• Work supported by the US Department of Energy under contract numbers DE-AC02-76SF00515 and DE-FG02-92ER40727.
On 18 December 2010, just after 6 p.m. New Zealand time, seven austral summers of construction came to an end as the last of 86 optical sensor strings was lowered into the Antarctic ice – IceCube was complete, a decade after the collaboration submitted the proposal. A cubic kilometre of ice has now been fully instrumented with 5160 optical sensors built to detect the Cherenkov light from charged particles produced in high-energy neutrino interactions.
The rationale for IceCube is to solve an almost century-old mystery: to find the sources of galactic and extragalactic cosmic rays. Neutrinos are the ideal cosmic messengers. Unlike charged cosmic rays, they travel without deflection and, because they are weakly interacting, can reach Earth from distances as large as the Hubble distance. The flip side of their weak interaction with matter is that it takes a very large detector to observe them – this is where the 1 km3 of ice comes in. The IceCube proposal argues that 1 km3 is required to reach sufficient sensitivity to cosmic sources after several years of operation. This volume will allow IceCube to study atmospheric muons and neutrinos while searching for extra-terrestrial neutrinos with unprecedented sensitivity.
The concept is simple. A total of 5160 optical sensors turn a cubic kilometre of natural Antarctic ice into a 3D tracking calorimeter, measuring energy deposition by the amount of Cherenkov light emitted by charged particles. Each sensor is a complete, independent detector – almost like a small satellite – containing a photomultiplier tube 25 cm in diameter, digitization and control electronics, and built-in calibration equipment, including 12 LEDs.
Designing these digital optical modules (DOMs) was not easy. As well as the requirement for a high sampling speed of 300 million samples a second and a timing resolution better than 5 ns across the array (the actual time resolution is better than 3 ns), the DOMs needed to have the reliability of a satellite but on a much smaller budget. They were designed for a 15-year lifetime and operate from room temperature down to –55 °C, all the while using less than 5 W. This power per DOM may not sound like much, but it adds up to about 10 planeloads of fuel a year. Nevertheless, the design was good: 98% of the IceCube DOMs are working perfectly, with another 1% usable. Since the first deployments in January 2005, only a few DOMs have failed, so the 15-year lifetime should be met easily.
Building the DOMs was only the first challenge. Because the shallow ice contains air bubbles, the DOMs must be placed deep, between 1450 and 2450 m below the surface. The sensors are deployed on strings, each containing 60 DOMs spaced vertically at 17 m intervals. Pairs of DOMs communicate with the surface via twisted pairs that transmit power, data, control signals and bidirectional timing calibration pulses. The 78 “original” strings are laid out on a 125 m triangular grid, covering 1 km2 on the surface. The remaining eight strings are then placed in the centre of IceCube, with a dense packing of 50 high-quantum-efficiency DOMs covering the bottom 350 m of the detector. This more densely instrumented volume, known as DeepCore, will be sensitive to neutrinos with energies as low as 10 GeV, which is an order of magnitude below the threshold for the rest of the array.
The key to assembling the detector was a fast drill. Hot water does the trick: a 200 gal/min stream of 88 °C water can melt a hole 60 cm in diameter and 2500 m deep in about 40 hours. It takes another 12 hours to attach the DOMs to the cable and lower them to depth. This proved fast enough to drill 20 holes in roughly two months.
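Those drilling figures pass a rough energy-balance check. In the sketch below (our estimate; the ice temperature and the heat-transfer efficiency are assumptions, not from the article), the heat carried by the hot water comfortably covers the heat needed to warm and melt the column of ice:

```python
import math

# Sanity check of the drilling figures (illustrative; -30 C ice is assumed).
flow_kg_s = 200 * 3.785 / 60           # 200 US gal/min of water, ~12.6 kg/s
heat_per_kg = 4186 * 88                # J/kg released cooling 88 C water to 0 C
heat_delivered = flow_kg_s * heat_per_kg * 40 * 3600   # over 40 hours

ice_volume = math.pi * 0.3**2 * 2500   # 60 cm diameter, 2500 m deep hole, m^3
ice_mass = 917 * ice_volume            # kg
heat_needed = ice_mass * (2100 * 30 + 334_000)  # warm ice from -30 C, then melt

print(f"delivered {heat_delivered:.1e} J, needed {heat_needed:.1e} J")
print(f"implied thermal efficiency: {heat_needed / heat_delivered:.0%}")  # ~40%
```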
Speed was vital because the construction season is necessarily short in this region – the Amundsen-Scott South Pole Station is accessible by plane for only four and a half months a year. Add the time to set up the drill at the start of the season and take it down at the end, and less than two months are left for drilling.
This brief description does not do justice to the host of difficulties faced by the construction crew. First, hot water drills are not sold at hardware stores – many human-years of effort went into developing a reliable, fuel-efficient system. Second, the South Pole is one of the least hospitable places on Earth. Every piece of equipment and every gallon of fuel is flown in from McMurdo station, 1500 km away on the Antarctic coast. The altitude of 2800 m and the need to land on skis limited the cargo that could be carried: everything had to fit inside an LC-130 turboprop plane. The weather also complicates operations. Typical summer temperatures are between –15 °C and –45 °C, which is hard on both people and equipment. The need for warm clothing further exacerbates the effect of the high altitude; many tasks become challenging when you are wearing thick gloves and 10 kg of extreme cold weather gear.
Nevertheless the collaboration succeeded. From the humble single string deployed in 2005 (and, incidentally, adequate by itself to see the first neutrinos), construction ramped up every year, reaching a peak of 20 strings deployed during the 2009/2010 season. This was good enough to allow for a shorter season this final year, leaving time to clean up and prepare the drill for long-term storage.
Even though IceCube has just been completed, the collaboration has been actively analysing data taken with the partially completed detector. This is also no simple matter. Even at IceCube’s depths, there are roughly a million times as many downward-going muons produced in cosmic-ray air showers as there are upward-going muons from neutrino interactions in the rock and ice below IceCube. To avoid false neutrino tracks, IceCube analysers must be extremely efficient at rejecting misreconstructed events. Worse still, IceCube is big enough to observe two or more muons, from different cosmic-ray interactions, simultaneously. Still, with stringent cuts to reject background events, it is possible to select an almost pure neutrino sample. In a one-year sample, taken with half of the full detector, IceCube collected more than 20 000 neutrinos. This sample was used to extend measurements of the atmospheric-neutrino spectrum to an energy of 400 TeV. The events are being scrutinized for any deviation from the anticipated flux that would be evidence of new neutrino physics or, on the more exotic side, for deviations in neutrino arrival directions that could signal a breakdown of Lorentz invariance or Einstein’s equivalence principle.
With the 40-string event sample, the collaboration has produced a map of the neutrino sky that has been examined for evidence of suspected cosmic-ray accelerators. None have been found, although it is important to realize that at this stage no signal is expected at a significant statistical level. For instance, we have reached a sensitivity at which a single cosmogenic neutrino would be observed only for fluxes at the higher end of the calculated range. We have also started to probe the neutrino flux predicted from gamma-ray bursts, assuming that they are the sources of the highest-energy cosmic rays.
The first surprise from IceCube does not involve neutrinos at all. IceCube triggers on cosmic-ray muons at a rate of about 2 kHz, thus collecting billions of events a year. These muons have energies of tens of tera-electron-volts and are produced in atmospheric interactions by cosmic rays with energies of hundreds of tera-electron-volts, i.e. the highest-energy galactic cosmic rays. A skymap of well reconstructed muons with an average energy of 20 TeV reveals a rich structure, with a dominant excess in arrival directions pointing at the Vela region. These muons come from cosmic rays with energies of many tens to hundreds of tera-electron-volts; the gyroradius of these particles in the microgauss field of the galaxy is of the order of 0.1 parsec – too large for them to be affected by our solar neighbourhood. However, these radii are far too small for the cosmic rays to point back even to the nearest star, never mind a candidate source such as the Vela pulsar or any other remnant at more than 100 parsecs.
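The gyroradius figure is easy to verify. A quick check (our numbers: a 100 TeV proton in a 1 μG field, both within the ranges quoted above):

```python
# Quick check of the quoted gyroradius (illustrative; assumes a 100 TeV
# proton in a 1 microgauss galactic magnetic field).
q = 1.602e-19        # proton charge, C
c = 2.998e8          # speed of light, m/s
parsec = 3.086e16    # m

E_joule = 100e12 * q         # 100 TeV in joules
B_tesla = 1e-6 * 1e-4        # 1 microgauss in tesla

# For an ultrarelativistic particle, r = pc/(qcB) ~ E/(qcB).
r = E_joule / (q * c * B_tesla)
print(f"gyroradius: {r / parsec:.2f} pc")   # ~0.1 pc, as quoted
```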
There is some mystery here: either we do not understand propagation in the field, or we do not understand the field itself. Does the detector work? Definitely: we observe in the same data sample the Moon’s shadow in cosmic rays at more than 10 σ, as well as the dipole resulting from the motion of the Earth around the Sun relative to the cosmic rays.
Additionally, IceCube has established the tightest limits yet on dark matter in the form of weakly interacting massive particles (WIMPs) with spin-dependent interactions with ordinary matter. In the alternative case of dominant spin-independent interactions, IceCube’s limits are almost competitive with those from direct searches. Finally, by monitoring the signal rates from its photomultiplier tubes, IceCube will be sensitive to million-electron-volt neutrinos from supernova explosions anywhere in the galaxy.
Looking forward
Now the 220-strong IceCube collaboration – with members from the US, Belgium, Germany, Sweden, Barbados, Canada, Japan, New Zealand, Switzerland and the UK – is eagerly looking forward to analysing data from a complete and stable detector. Analysing and simulating data from an instrument that changed every Antarctic season has been a challenge.
At the same time, neutrino astronomers are thinking about the future. Even IceCube is too small to collect a significant number of events at the highest energies. This has already been pointed out in the case of cosmogenic neutrinos, with typical energies in excess of 106 TeV, which are produced when ultra-high-energy cosmic rays interact with photons in the cosmic microwave background. Observing these neutrinos requires a much larger detector: physicists are aiming for a volume of 100 km3. This will require a new technology, and several groups are already deploying antennas to observe the brief, coherent radio Cherenkov pulses emitted by neutrino-induced showers. The advantages of radio detection are that the signal is coherent, so its power scales as the square of the neutrino energy, and that radio signals have larger attenuation lengths than light, allowing detectors to be placed on a 1 km, rather than 125 m, grid. The cost is that radio detectors have energy thresholds much higher than IceCube’s.
Our story begins in 1911, with the first International Women’s Day celebrated in Austria, Denmark, Germany and Switzerland. That same year, Marie Skłodowska-Curie won the Nobel Prize in Chemistry for the discovery of two new elements: radium and polonium. This was the second time that she had been called to Stockholm – eight years earlier, she received the Nobel Prize in Physics, which she and her husband Pierre Curie shared with Henri Becquerel for their research on radioactivity. The Nobel Prize in Chemistry has only been awarded to three other women: Curie’s daughter Irène Joliot-Curie in 1935, Dorothy Crowfoot-Hodgkin in 1964 and Ada Yonath in 2009. The only other woman to receive the Nobel Prize in Physics is Maria Goeppert Mayer for her work on the structure of atomic nuclei (1963).
Marie Curie achieved several “firsts”. She was the first woman to receive the Nobel Prize (in 1903), the first person to receive it twice, the first female professor at the University of Paris and, as the photograph on this page shows, the only woman among the 24 participants at the first international physics conference – the Solvay Conference – held in Brussels in 1911. A century later, the situation has changed. At the recent International Conference on High-Energy Physics, ICHEP2010, 15.4% of the participants were women.
Nuclear physics pioneers
The 1940s and 1950s were exciting years in physics. The first accelerators were being built, and with them physicists took the first steps towards the current understanding of particle physics. In this challenging time after the Second World War, many women joined physics groups around the world and helped to open doors for future generations of women in science.
Marietta Blau (1894–1970) pioneered work in photographic methods to study particle tracks and was the first to use nuclear emulsions to detect neutrons by observing recoil protons. Her request for a better position at Vienna University was rejected because she was a woman and a Jew. Blau left Austria and was appointed professor in Mexico City after the war. She was nominated for the Nobel Prize several times.
Nella Mortara (1893–1988), one of the most beloved assistant professors of physics at the University of Rome in the late 1930s, faced a similar ordeal and was expelled for being Jewish. She escaped to Brazil but returned secretly during the war to be reunited with her family in Rome, living in great danger. After the war Mortara was reappointed as a professor, to the great joy of her students, many of whom were women.
Lise Meitner (1878–1968) also suffered from this double discrimination. In 1906 she became the second woman to obtain a doctorate in physics from the University of Vienna. She moved to Berlin, where she was “allowed” by Max Planck to attend his lectures – the first woman to be granted this privilege – and later became his assistant. Many believe that Meitner should have been co-awarded the Nobel Prize in Chemistry with Otto Hahn in 1944 for the discovery of nuclear fission. She had the courage to refuse to work on the Manhattan Project, saying: “I will have nothing to do with a bomb!”
Maria Goeppert Mayer (1906–1972) lectured at prestigious universities, published numerous papers on quantum mechanics and chemical physics, and collaborated with her husband on an important textbook. Despite her accomplishments, anti-nepotism rules barred Mayer from holding an official post, and for many years she taught physics at universities as an unpaid volunteer. Finally, in 1959, four years before receiving the Nobel Prize, the University of California at San Diego offered her a full-time position.
Leona Marshall Libby (1919–1986) was an innovative developer of nuclear technology. She built the tools that led to the discovery of cold neutrons and she also investigated isotope ratios. Libby was the first woman to join Enrico Fermi’s team on the Manhattan Project and eventually became a professor of physics, leaving a legacy of exploration and innovation.
Closer to particle physics, Hildred Blewett (1911–2004) was an accelerator physicist from Brookhaven. In the early 1950s she contributed to the design of CERN’s first high-energy accelerator, the Proton Synchrotron, while also working on a similar machine proposed for Brookhaven.
Maria Fidecaro has spent her life dedicated to her research at CERN, where she still works. She arrived at CERN in 1956 after working in Rome on cosmic-ray experiments and, for a year, at the synchrocyclotron in Liverpool. Maria remembers fondly what it was like to be a physics student just after the war, the challenges of balancing a career with a family, and the experience of collaborating with other pioneers of modern physics from all over the world, including many women. “I remained dedicated to research throughout the changing circumstances,” she says, “and always to the best of my ability.”
These are just a few of the women physicists active in the mid-20th century: courageous women dedicated to their science, who served as role models for later generations.
Women and the growth of CERN
CERN was founded in 1954, during the post-war period of renewal. The CERN Convention was remarkably modern in that it mentioned all of the professional categories needed to build a large international organization such as the one that exists today. Women were initially recruited into supporting administrative positions, but this changed as they began to enter all areas of university training, physics and the technical professions. CERN provided and encouraged the working opportunities that women needed to flourish in these new areas.
Between the 1960s and the mid-1980s, dozens of women worked at CERN and elsewhere as scanners. Their job consisted of finding interesting events among the many tracks left by particles in bubble chambers and captured on photographs. Madeleine Znoy recalls how tedious it was: “Initially, the work was done manually, using a pencil and a sheet of paper to note down the co-ordinates where the interactions had taken place. Scanning took place round the clock, because the quantity of films to be scanned was enormous. From 7.00 a.m. to 10.00 p.m., female scanners studied the films, and from 10.00 p.m. until 7.00 a.m., men (often students) took over,” she explains. Each shift lasted only four hours because the work was so strenuous, carried out in complete darkness with three projectors illuminating the film. Znoy once beat a record, scanning more than 750 photographs in one day. “At first, some physicists thought that this was impossible, that surely I had missed interesting events. But all was fine and they were very surprised!”
Anita Bjorkebo started as a scanner in 1965. After her scanning shift she compiled data and classified events, all by hand, and even made the histograms.
Later, as scanning became more automated, the scanners moved on to operating the computers connected to the measuring equipment. Some, including Bjorkebo and Znoy, would set them up for other scanners or streamline the operation for new experiments. Bjorkebo became so interested in her work that she signed up for two particle-physics classes, attending lectures and doing homework after work. “A Swedish physicist only had five students here so he invited the technical staff to join in,” she explains.
Even though these women did not get their names on publications, they felt appreciated. “We were part of the team, we had a role to play,” says Znoy proudly. “With the scanning, we could really see the particles. I really enjoyed working with the physicists and technicians, and collaborating with other laboratories. We were young and full of enthusiasm. It was a great period.”
Nevertheless, after more than 50 years, the situation for women at CERN could still be improved, especially in the intermediate administrative categories where they are most represented. Recruitment in this area is now often based on standardized job criteria that leave less room to appreciate the level of education and professional skills needed for a post. However, the administrative staff category is essential to the day-to-day life of CERN. To function properly CERN depends on good communication within the organization, the dedication of its staff and proper advancement prospects at all levels. Danièle Lajust, an administrative assistant who joined CERN in 1978, is quick to add: “We are proud of belonging to an organization that now welcomes a gender mix at all levels and of participating in our own way in its great and passionate adventure.”
Showing the way
CERN also has an important part to play in educating young scientists and hence providing role models. Melissa Franklin, an alumna of the CERN Summer Student programme in 1977 who went on to become the first tenured female professor of physics at Harvard University, returned in 2000 as a lecturer on Classic Experiments; she now works on the ATLAS experiment at CERN. Franklin’s is just one of the many remarkable careers of former CERN summer students.
Started in 1962 by the then director-general of CERN, Viki Weisskopf, the Summer Student programme began with just 70 students. Nowadays, walk through any of the buildings at CERN in mid-July and you will see some of the roughly 140 students participating in the programme. Although half of the students today are women, there are still only a handful of female lecturers – in 2010, only three of the 31 lecturers were women. Providing the summer students with adequate role models is just as important as enhancing diversity in their ranks, because the teachers, authors and educators whom we encounter have a great influence on our lives and careers.
Today, women make up about 17% of the several thousand scientists, engineers and technicians who work on the LHC experiments, from graduate students to full professors. Nearly half of these women are students or post-docs, showing that more women are joining the field.
The day-to-day operation of the LHC is in the hands of eight “engineers in charge”, half of whom are women. One of them, Giulia Papotti, thanks the management for having an open mind: “They were looking for someone with radio-frequency expertise, which is my field. Other considerations such as nationality or gender were secondary.” It proved to be the greatest challenge of her career, training to be an LHC operator during the high-intensity period of the first days of operation. “I had to learn fast,” she recalls. The workload was additionally taxing because two people were still in training and two were on parental leave. “Our work is to think about how to improve things. We are meant to be critical. We are paid to think,” says Papotti.
Amalia Ballarino, a scientist working on superconductivity, designed and managed the production of the high-temperature superconducting components that power the LHC magnets. For this work she won the international Superconductor Industry Person of the Year award in 2006 (p37). “We had to work to a tight schedule,” she says of building a system that consists of 3000 components made across the world. “Working at CERN gives you the opportunity to contribute to innovative projects. Basic science is the primary driver of innovation.” Ballarino adds: “Working here is an opportunity to create something new.” Lene Norderhaug, a CERN fellow working on software development and looking towards the future, says enthusiastically: “In five years I hope to have my PhD and my second job at CERN. We like it here!”
Women in science have been passionate about their chosen field, undertaking tasks of great responsibility and continuously striving to push science and technology beyond their limits. Looking back on 100 years of International Women’s Day, we have progressed from a single woman at a conference to women representing 17% of the field. What will the next 100 years bring?
At the end of January, a small fraction of CERN decamped to Chamonix along with experts from around the world – not to ski but to work out the plans for the coming year’s LHC run. It is a tradition that began with CERN’s previous accelerator, the Large Electron–Positron (LEP) collider, and time and again it has proved its worth.
Chamonix is an important fixture on the CERN calendar, not only because it sets the agenda for the coming year but also because it is particle physics in microcosm. Chamonix embodies the spirit of our field. It is an intense week of discussion and debate, involving a wide community base drawn from CERN, the LHC experiments and beyond. The CERN machine advisory committee is there and everyone has the chance to air opinions, before the meeting invariably winds up with a broad consensus.
It is this ability to reach consensus that makes our science so remarkable. Particle physicists can be every bit as opinionated and attached to their own ideas as anyone else but, at the end of the day, we are all united by the overriding goal of doing our research and finding out more about this wonderful universe that we live in. That is what allows us to reach consensus – and always has. In the past, whenever a big particle-physics project involved 50 people or so – or even the 300 or so from my old LEP experiment, OPAL – consensus was maybe not so surprising. But with collaboration sizes today numbering in the thousands, this model still holds true and management gurus are beginning to take notice. My message to them? It is amazing what people can do when they are united by a common goal.
So what of this year’s deliberations at Chamonix? They were all about maximizing discovery potential while minimizing risk to the LHC and the experiments. The problems with the LHC’s high-current splices, which became so painfully evident in 2008 when one of them failed and put the machine out of action, are not completely resolved. That is why the LHC is not yet running at its full design energy of 7 TeV per beam. In 2010, 3.5 TeV per beam was selected as a safe energy to run at for the LHC’s first physics, and experience has clearly demonstrated the wisdom of that choice.
The big question at Chamonix this year was whether we could safely move up a notch. Some argued for; others against. But at the workshop’s conclusion the participants were united in recommending that we stay at 3.5 TeV until at least the end of 2011.
Why? Well, we know that the LHC performs fantastically at this energy, and that exciting new physics is potentially within our reach. We have also developed new techniques to sniff out bad splices that could spoil the show if we go to a higher energy. These will be in place by the end of 2011, giving us the input needed to take a fully informed decision on a possible increase in energy at next year’s Chamonix meeting.
Being CERN’s director-general is sometimes a tough job but the consensual model of particle physics makes some aspects easy
Which brings me to the next big question on the table at Chamonix: what about next year? It has long been clear that, with lengthy warm-up and cool-down periods, an annual cycle does not make sense for major maintenance shutdowns at the LHC. We also know that the first long shutdown involves substantial work to make good the high-current interconnects, which will allow us to reach the design energy of 7 TeV per beam. The shutdown was originally foreseen for 2012, so it was almost a foregone conclusion that the Chamonix workshop would recommend postponing it to 2013 – and that is indeed what happened.
The reason is that the LHC’s performance in 2010 was so good, with the promise of much better to come. Simple extrapolations clearly show that if there is new physics to be found in the 3.5 TeV-per-beam energy range, two years of running will be enough to find it. One year alone, on the other hand, could leave us with just tantalizing hints. Under these circumstances, stopping at the end of 2011 makes little sense.
Taken together, the recommendations that emerged from Chamonix optimize the LHC’s discovery potential, not just for 2011 but for the longer term as well, and they do it while minimizing the risk of damage to the LHC’s infrastructure.
Being CERN’s director-general is sometimes a tough job but the consensual model of particle physics makes some aspects easy, as Chamonix once again showed this year. Following a recommendation arrived at by consensus, which has the buy-in of the whole community, is a simple choice to make. That consensus is our strength.