Particle Fever

“He was ALWAYS there!” This was the reaction of CERN scientists who spent years being followed by film-maker Mark Levinson. The result is Particle Fever – a feature-length documentary about CERN, which has been touring cinemas and festivals, reaching audiences far beyond particle physics. Why? Because Levinson manages to capture, through his narrative and character-driven piece, a compelling story of passion, disaster, loss and then triumph. It is not “boy meets girl”, but scientists build accelerator, scientists lose accelerator (in the September 2008 incident), scientists get accelerator running again and find elusive particle – cue thunderous applause.

The film focuses on a handful of CERN characters, mainly from the ATLAS experiment: Fabiola Gianotti, Martin Aleksa and Monica Dunford, together with Mike Lamont from the accelerator side. While this skews the film away from the reality of thousands of collaborating physicists, it enables a picture to form, through the eyes of these protagonists, of passionate people working together towards a common goal. Levinson weaves in US-based theorists David Kaplan, Nima Arkani-Hamed and Savas Dimopoulos to stitch together a dramatic narrative of a mighty quest for the Higgs boson. In being swept along by the action, the audience is also taught a fair amount of physics with the help of beautifully designed graphics. My most memorable scene is the moment of the first LHC collisions, where Levinson’s use of music and kaleidoscopic imagery leaves the audience captivated by the almost spiritual exaltation of this scientific achievement.

This US film-maker aiming at a US audience has, inevitably, made an American film, with gutsy postdoc Monica and self-assured theorists. A great deal of the film is dominated by American accents, so much so that I felt that the international spirit of CERN became somewhat neglected. Nonetheless, Monica delivers a spectacular performance and was by far my favourite “character”, with her candid pieces to camera and analogies: “The entire control room is like a group of six-year-olds whose birthday is next week…and there’ll be cake.”

There is something incredibly heart-warming about watching your place of work portrayed dramatically on the big screen. Goosebumps came in waves with the film’s twists and turns, and I came away thinking “Wow, I work there.” As a result, I pity my poor family, who will all have to watch this at Christmas, whether they want to or not!

Particle Fever is currently touring cinemas and festivals, and is available to buy as an HD download worldwide from 15 July. For more details, see http://particlefever.com/.

How Big is Big and How Small is Small: The Sizes of Everything and Why

By Timothy Paul Smith
Oxford University Press
Hardback: £25
Also available as an e-book, and at the CERN bookshop

This book canters through the sizes and lifetimes of things, from the outermost reaches of the universe to the confined locality of quarks, telling us what is found where and why, and is, according to the publisher’s website, suitable for “interested general readers as well as professional scientists” – a broad church.

In scanning 45 orders of magnitude, the author presents a wealth of information on “everything”, from cosmology to string theory, with passing reference to cooking, football, square dancing and more. The narrative is exuberant and many of the facts are little gems, but they are jumbled up, disordered and congested. The book reads like a series of digressions and there are enough typos and mistakes – bacteria and criteria are plural not singular, the shadow on a sundial is not cast by a gnome – to irritate anyone trying to stay the course.

Concepts seemingly pop up out of nowhere, reappearing again (and again and again) when the plot is all but lost. Much of the material is erudite, abstruse and irrelevant, such as “The delta particles Δ⁻⁻ Δ⁻ Δ⁰ Δ⁺ are like neutrons and protons but with complex spin.” Spin, complex or not, is not in the too-brief index, so the reader cannot check whether it has been defined earlier, or indeed anywhere, and the doubly charged member of this quartet is actually the Δ⁺⁺, although by now – page 123 – it is debatable whether even the most interested readers care. And why should they?

Some aggressive editing would have been in order, not only to fix imperfections and remove chunks of repeated or unnecessary text, but also to avoid slowing down the observant with infelicitous phrasing, for example, “A number of species in the new world and the old world have the same common name because, at least superficially, they look the same, for example the robin and the buffalo.”

And in a cup of water drawn from an ocean today, how many molecules were in a cupful poured into the oceans long ago? After 10 pages of exhaustive and exhausting accounts of the work of Avogadro, Dalton, Gay-Lussac, Loschmidt and Maxwell, we arrive at the numbers. There are 3.3 × 10²⁴ water molecules in a cup and 1.3 × 10²² cups in the oceans. So, 250 of the original molecules are in today’s cup and, although not stated, the oceans contain 4.3 × 10⁴⁶ water molecules. Yes? No! On the following page, “there are about 8 × 10⁴⁵ molecules of water on Earth.”
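
The arithmetic is easy to reproduce. Here is a minimal Python sketch using only the figures quoted above; it gives both the roughly 250 surviving molecules per cup and the implied ocean total that clashes with the book’s 8 × 10⁴⁵.

```python
# A quick check of the reviewer's arithmetic, using only the figures quoted above.
molecules_per_cup = 3.3e24   # water molecules in one cup
cups_in_oceans = 1.3e22      # cups of water in all the oceans

# If one historical cupful is now mixed uniformly through the oceans, the
# fraction of it found in any cup drawn today is 1 / cups_in_oceans.
survivors_per_cup = molecules_per_cup / cups_in_oceans
total_molecules = molecules_per_cup * cups_in_oceans

print(f"molecules of the original cup in today's cup: {survivors_per_cup:.0f}")  # ~254, i.e. "250"
print(f"implied water molecules in the oceans: {total_molecules:.1e}")           # ~4.3e46
```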

I was once told, if lost for affable words when asked for an opinion on something quite extraordinary, to say “astounding!” This book is astounding, which is a pity as it could and should have been excellent.

LS1: beams are back in the Proton Synchrotron

Beams of protons are back in CERN’s Proton Synchrotron (PS), having circulated around the accelerator on 20 June for the first time in more than 15 months. The PS restart followed on from the restart of beam in Linac 2 and then the PS Booster (PSB) on 2 June.

The beam made it into the PS on schedule, thanks to the efforts of the PSB specialist teams, who were called upon on many occasions for hardware interventions during the preceding weeks of hardware commissioning and “cold” tests. These included fixing vacuum leaks, re-configuring timing links, correcting magnet connections and, in one instance, replacing an entire magnet with a spare, owing to a water leak. Operations, radiofrequency and instrumentation teams then needed to adjust the settings for beam acceleration and extraction from the PSB to the PS. With beam back in the PS, beam commissioning tests could begin. During this final phase, all of the beam diagnostics – from beam current to bunch spacing – need to be checked, first with low-intensity beams (10¹¹ protons) before moving to higher-intensity levels (10¹² protons). The PS will then be ready to send beams to the East Area and the neutron time-of-flight facility, n_TOF, where physics is planned to start in late July. ISOLDE, the only experimental facility connected directly to the PSB, will be the first user to receive its beams, with physics set to restart in mid-July. The physics programme in the Super Proton Synchrotron is set to restart in the autumn.

At the LHC, the teams closed the last of the 1695 outer magnet bellows on 18 June, marking the end of the Superconducting Magnets And Circuits Consolidation (SMACC) project. Leak tests on the entire machine are proceeding well, and by late June they had been completed in sector 5-6 and were under way in sector 7-8. At the same time, sector 6-7 – the first to be cooled – had reached 20 K, and will be maintained at this temperature during continuity testing of the copper stabilizer. Four sectors should be cool by the end of the summer, and all eight sectors of the LHC are scheduled to be cooled to the nominal temperature of 1.9 K in late autumn.

Beam is expected back into the LHC early in 2015, and the restart of the LHC physics programme is planned for spring 2015, with a collision energy of 13 TeV instead of the previous 7–8 TeV.

ALPHA measures charge of antihydrogen

The ALPHA experiment at CERN’s Antiproton Decelerator (AD) has made a new precision measurement of the electric charge of antihydrogen atoms, finding it to be compatible with zero to eight decimal places. This is the first time that the charge of an antiatom has been measured to high precision. The ALPHA collaboration studied the trajectories of antihydrogen atoms released from the experiment’s system of particle traps in the presence of an electric field. If the antihydrogen atoms had a charge, the field would deflect them. The analysis, based on 386 events, gives the value of the antihydrogen electric charge as (–1.3 ± 1.1 ± 0.4) × 10⁻⁸ in units of the elementary charge.
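
For readers who want to see how “compatible with zero” follows from these numbers, here is a minimal sketch; combining the statistical and systematic uncertainties in quadrature is an assumption made here for illustration, not necessarily the collaboration’s procedure.

```python
import math

# ALPHA's antihydrogen charge, in units of the elementary charge e (from the text above)
q_central = -1.3e-8
stat, syst = 1.1e-8, 0.4e-8

# Combine the two uncertainties in quadrature (a common, but here assumed, convention)
sigma = math.hypot(stat, syst)

print(f"Q = ({q_central/1e-8:.1f} ± {sigma/1e-8:.1f}) × 10⁻⁸ e")                # ≈ (-1.3 ± 1.2) × 10⁻⁸ e
print(f"deviation from zero: {abs(q_central)/sigma:.1f} standard deviations")  # ≈ 1.1
```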

COSY confirms existence of six-quark states

Experiments at the Jülich Cooler Synchrotron (COSY) have found compelling evidence for a new state in the two-baryon system, with a mass of 2380 MeV, a width of 80 MeV and quantum numbers I(Jᴾ) = 0(3⁺). The structure, containing six valence quarks, constitutes a dibaryon, and could be either an exotic compact particle or a hadronic molecule. The result answers the long-standing question of whether there are more eigenstates in the two-baryon system than just the deuteron ground state. This fundamental question has been awaiting an answer since at least 1964, when first Freeman Dyson and later Robert Jaffe envisaged the possible existence of non-trivial six-quark configurations.

The new resonance was observed in high-precision measurements carried out by the WASA-at-COSY collaboration. The first signals of the new state had been seen before in neutron–proton collisions, where a deuteron is produced together with a pair of neutral pions. Now this state has also been observed in polarized neutron–proton scattering and extracted using the partial-wave analysis technique – the generally accepted ultimate method to reveal a resonance. In the SAID partial-wave analysis, the inclusion of the new data produces a pole in the ³D₃ partial wave at (2380 ± 10 − i(40 ± 5)) MeV.
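
The quoted pole position and the mass and width given above are consistent with the usual convention in which a resonance pole sits at M − iΓ/2; a one-line check:

```python
# Breit-Wigner-style convention: pole = M - i*Gamma/2
pole = complex(2380, -40)       # MeV, central values of the SAID pole quoted above
mass = pole.real                # 2380 MeV
width = -2 * pole.imag          # 80 MeV, matching the resonance width given earlier
print(mass, width)
```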

The mass of the new state is amazingly close to that predicted originally by Dyson, based on SU(6) symmetry breaking. Moreover, recent state-of-the-art Faddeev calculations by Avraham Gal and Humberto Garcilazo reproduce the features of this new state very well. The quantum numbers favour this state as a dibaryon resonance – the “inevitable” non-strange dibaryon predicted by Terry Goldman and colleagues in 1989.

CEBAF delivers first beams following upgrade

On 7 May, the newly upgraded Continuous Electron Beam Accelerator Facility (CEBAF) delivered the first electron beams to its new experimental complex at the US Department of Energy’s (DOE’s) Jefferson Lab. The success capped a string of accelerator commissioning milestones that were needed for approval to restart experimental operations following CEBAF’s first major upgrade.

CEBAF is an electron-accelerator facility that employs superconducting radiofrequency (SRF) technology to investigate the quark structure of the nucleus. The first large-scale application of SRF technology in the US, it was originally built to circulate electrons through 1–5 passes to provide 4 GeV electron beams. As a result of the operators’ experience in running the machine at its peak potential, the original installation eventually achieved operational energies of 6 GeV.

In May 2012, the accelerator was shut down for its 12 GeV upgrade. This $338-million project, which will double CEBAF’s maximum energy, includes the construction of a fourth experimental hall (Hall D), as well as upgrades to equipment in three existing halls (Halls A, B and C).

Accelerator operators began the painstaking task of bringing the accelerator back online last December. By 5 February, they had achieved the full upgrade-energy acceleration of 2.2 GeV in one pass through the machine. Then on 1 April, the operators exceeded CEBAF’s previous maximum energy. The accelerator delivered three-pass, 6.11-GeV electron beams with 2 nA average current onto a target in Hall A, and recorded the first data of the 12-GeV era, holding the pattern for more than an hour.

The operators continued to push the upgraded machine, and early on 7 May the energy was increased to 10.5 GeV through the entire 5.5 passes. In the last minutes of the day, 10.5 GeV beam was delivered into the new Hall D complex. With all of the major milestones in the 12-GeV project now met for the DOE approval step Critical Decision-4A (Accelerator Project Completion and Start of Operations), staff and users are looking forward to the demonstration of 12-GeV energy and beam delivery to Jefferson Lab’s experimental halls for commissioning and the start of experiments.

Laser experiment simulates supernova

Supernova explosions, triggered when the fuel within a star reignites or its core collapses, launch a shock wave that sweeps through a few light-years of space in only a few hundred years. The remnants of these explosions are now recognized widely as one of nature’s major particle accelerators. The theory is that charged particles increase in energy through repeated encounters with magnetic “mirrors” or changing magnetic fields in the shocks. Now, a team of researchers has brought some of these processes down to Earth, in an experiment to investigate the turbulent amplification of magnetic fields in the supernova remnant, Cassiopeia A, which was first seen about 300 years ago in the constellation Cassiopeia.

Radio observations of Cassiopeia A have revealed regions within the expanding remnant that are consistent with synchrotron radiation emission from giga-electron-volt electrons spiralling in a magnetic field of a few milligauss – 100 times higher than expected from the standard shock compression of the interstellar medium. The origin of such high magnetic fields, which help to make Cassiopeia A a particularly effective particle accelerator and bright radio source, appears to lie with regions of turbulence that could amplify the magnetic field and that could be related to puzzling irregular “knots” seen in optical observations. One explanation for these knots is that the shock produces turbulence as it passes through a region of space that already contains dense clumps or clouds of gas.
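
As a rough order-of-magnitude check (mine, not the article’s), the characteristic synchrotron frequency ν_c ≈ (3/2)γ²eB/(2πm_e) for an electron of about 1 GeV in a field of about 1 mG does fall in the radio/microwave band, consistent with the remnant being a bright radio source.

```python
import math

# Order-of-magnitude synchrotron check: nu_c ~ (3/2) * gamma^2 * (e*B / 2*pi*m_e),
# dropping the pitch-angle factor.  The 1 GeV and 1 mG inputs are illustrative.
e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8        # SI units

gamma = (1.0e9 * e) / (m_e * c**2)               # Lorentz factor of a 1 GeV electron, ~2000
B = 1e-3 * 1e-4                                  # 1 milligauss expressed in tesla

nu_gyro = e * B / (2 * math.pi * m_e)            # non-relativistic gyrofrequency, ~2.8 kHz
nu_c = 1.5 * gamma**2 * nu_gyro                  # ~1.6e10 Hz

print(f"gamma ≈ {gamma:.0f}, characteristic frequency ≈ {nu_c/1e9:.0f} GHz")
```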

To investigate these possibilities, an international team led by Gianluca Gregori at Oxford University used the Vulcan laser facility at the UK’s Rutherford Appleton Laboratory to focus three laser beams onto a carbon rod 0.5 mm thick in a chamber filled with low-density gas. The heat generated made the rod explode, creating a blast that expanded through the surrounding gas, mimicking a supernova shock wave. To simulate the clumps that might surround an exploding star, the team introduced a mesh of fine plastic wires 0.4 mm thick with cells 1.1 mm square at a distance of 1 cm from the rod. Using hydrodynamical scaling relations, the team can relate the experimental conditions 0.3 μs after the laser burst to Cassiopeia A as it is now, about 310 years after the supernova explosion. With the same scaling, the wire thickness corresponds to a distance of about one parsec in the remnant.
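
Using only the correspondences stated above (0.4 mm ↔ roughly one parsec, 0.3 μs ↔ about 310 years), the implied scale factors can be checked in a few lines; their ratio also sets how laboratory flow speeds of order km/s map onto remnant shock speeds of thousands of km/s.

```python
# Back-of-envelope check of the laboratory-to-remnant scale factors quoted above.
parsec = 3.086e16     # metres
year = 3.156e7        # seconds

length_ratio = (1.0 * parsec) / 0.4e-3      # wire thickness <-> ~1 parsec:   ~8e19
time_ratio = (310 * year) / 0.3e-6          # 0.3 us after the shot <-> ~310 years:   ~3e16
velocity_ratio = length_ratio / time_ratio  # ~2,000-3,000

print(f"length x{length_ratio:.1e}, time x{time_ratio:.1e}, velocity x{velocity_ratio:.0f}")
```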

The researchers used various techniques to monitor the evolution of the shock wave, including an induction coil to measure the magnetic fields produced. The measurements show that the grid produces additional turbulent flow and gives rise to magnetic-field components that are 2–3 times larger than without it. The results are also in good agreement with the output of numerical simulations, in particular the magnetohydrodynamic code FLASH, developed by Don Lamb at the University of Chicago. The simulations reproduce well the position of the shock, the peak electron density and the temperature – with and without the grid – and confirm that the magnetic field is indeed enhanced as a result of turbulence induced as the shock moves through the grid.

These results demonstrate that the amplification of the magnetic field within the Cassiopeia A “particle accelerator” might indeed arise from the interaction of the shock with a clumpy interstellar medium. Importantly, the experiment also gives valuable confirmation of the simulations, providing for the first time an experimental means to validate the simulation codes used for many astrophysical phenomena.

First direct high-precision measurement of the proton’s magnetic moment sets the stage for BASE

A German/Japanese collaboration working at the University of Mainz has performed the first direct high-precision measurement of the magnetic moment of the proton – by far the most precise direct measurement to date. The result is consistent with the currently accepted value of the Committee on Data for Science and Technology (CODATA), but is 2.5 times more precise, and improves on the best previous direct measurement by a factor of 760. The techniques used will feature in the Baryon-Antibaryon Symmetry Experiment (BASE) – recently approved to run at CERN’s Antiproton Decelerator (AD) – which aims at the direct high-precision measurement of the magnetic moments of the proton and the antiproton with fractional precisions at the parts-per-billion (ppb) level, or better.

Prior to this work, the record for the most precise measurement of the proton’s magnetic moment had stood for more than 40 years. In 1972, a group at Massachusetts Institute of Technology measured its value indirectly by performing ground-state hyperfine spectroscopy with a hydrogen maser in a magnetic field. This experiment measured the ratio of the magnetic moments of the proton and the electron. The results, combined with theoretical corrections and two additional independent measurements, enabled the calculation of the proton magnetic moment with a precision of about 10 parts in a billion.

In an attempt to surpass the record, the collaboration of scientists from Mainz University, the Max Planck Institute for Nuclear Physics in Heidelberg, GSI Darmstadt and the Japanese RIKEN institute applied the so-called double Penning-trap technique to a single proton for the first time (see figure 1). One Penning trap – called the analysis trap – is used for the non-destructive detection of the spin state, through the continuous Stern–Gerlach effect. In this elegant approach, a strong magnetic inhomogeneity is superimposed on the trap, coupling the particle’s spin magnetic moment to its axial oscillation frequency in the trap. By measuring the axial frequency, the spin quantum state of the trapped particle can be determined, and by recording the quantum-jump rate as a function of the spin-flip drive frequency, the spin-precession (Larmor) frequency νL is obtained. Together with a measurement of the cyclotron frequency νc of the trapped particle, the magnetic moment of the proton is finally obtained in units of the nuclear magneton as μp/μN = νL/νc.
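
Stripped of the experimental subtleties, the last step is a frequency ratio. The sketch below uses an illustrative cyclotron frequency (roughly what a proton has in a field of about 2 T) and a Larmor frequency constructed from the accepted moment, purely to show how the ratio is formed; these are not the experiment’s numbers.

```python
# Minimal sketch of the final step: mu_p / mu_N = nu_L / nu_c.
# Frequencies below are illustrative, not the measured ones.
nu_c = 28.96e6                 # proton cyclotron frequency in a ~1.9 T trap [Hz] (illustrative)
nu_L = 2.792847 * nu_c         # a Larmor frequency consistent with the accepted moment [Hz]

mu_p_in_nuclear_magnetons = nu_L / nu_c
print(f"mu_p ≈ {mu_p_in_nuclear_magnetons:.6f} mu_N")
```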

This approach has already been applied with great success in measurements of the magnetic moments of the electron and the positron. However, the magnetic moment of the proton is about 660 times smaller than that of the electron, so the proton measurement requires an apparatus that is orders of magnitude more sensitive. To detect the proton’s spin state, the collaboration used an extremely strong magnetic inhomogeneity of 300,000 T/m². However, this limits the experimental precision in the frequency measurements to the parts-per-million (ppm) level. Therefore a second trap – the precision trap – was added about 45 mm away from the strong magnetic-field inhomogeneity. In this trap the magnetic field is about 75,000 times more homogeneous than in the analysis trap.

To determine the magnetic moment of the proton, the first step was to identify the spin state of the single particle in the analysis trap. Afterwards the particle was transported to the precision trap, where the cyclotron frequency was measured and a spin flip induced. Subsequently the particle was transported back to the analysis trap and the spin state was analysed again. By repeating this procedure several hundred times, the magnetic moment was measured in the homogeneous magnetic field of the precision trap. The result, extracted from the normalized resonance curve (figure 2), is the value μp = 2.792847350(9)μN, with a relative precision of 3.3 ppb.

In the BASE experiment at the AD, the technique will be applied directly to a single trapped antiproton, and will potentially improve the currently accepted value of the antiproton’s magnetic moment by at least a factor of 1000. This will constitute a stringent test, with baryons, of CPT symmetry – the most fundamental symmetry underlying the quantum field theories of the Standard Model of particle physics. CPT invariance implies the exact equality of the properties of matter–antimatter conjugates, and any measured difference could contribute to understanding the striking imbalance of matter and antimatter observed on cosmological scales.

ALICE and the flowing particle zoo

Relativistic heavy-ion collisions produce large numbers of particles that do not move individually, but rather as an organized group, with a collective motion known as flow. Flow studies at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) contributed to the surprising realization that the hot and dense matter created in the collisions behaves like a perfect liquid and not as a hadron gas. Now, the ALICE collaboration at the LHC has looked further into how these effects vary for different particle species.

In relativistic heavy-ion collisions the collective motion, or flow, is governed by the spatial anisotropy of the almond-shaped overlap region of the colliding nuclei and the initial density inhomogeneities of the fireball. These features are transformed, through interactions between the produced particles, into an anisotropy in momentum space. The degree of this transformation depends on the ratio of shear viscosity to entropy, η/s, which quantifies the friction of the created matter. This resulting anisotropy in particle production can be quantified by a Fourier analysis of the azimuthal distribution relative to the system’s symmetry plane, characterized by Fourier coefficients, vn. The second harmonic, v2, is known as the elliptic-flow coefficient.
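
Concretely, the decomposition referred to here is dN/dφ ∝ 1 + 2 Σ vn cos[n(φ – Ψn)], so each vn is simply the average of cos n(φ – Ψn) over the emitted particles. The toy sketch below (a generated event with a known symmetry plane, not ALICE data or the collaboration’s analysis code) illustrates how v2 would be extracted.

```python
import numpy as np

# Toy illustration of extracting the elliptic-flow coefficient v2.
# The "event" is generated here; this is not ALICE data or analysis code.
rng = np.random.default_rng(0)

psi2 = 0.3        # symmetry-plane angle of the toy event [rad]
v2_in = 0.10      # anisotropy built into the toy azimuthal distribution

# Sample angles from dN/dphi ∝ 1 + 2*v2*cos[2(phi - psi2)] by accept-reject
phi = []
while len(phi) < 20000:
    x = rng.uniform(0.0, 2.0 * np.pi)
    if rng.uniform(0.0, 1.0 + 2.0 * v2_in) < 1.0 + 2.0 * v2_in * np.cos(2.0 * (x - psi2)):
        phi.append(x)
phi = np.array(phi)

# With the symmetry plane known, v2 is simply <cos 2(phi - psi2)>
v2_out = np.mean(np.cos(2.0 * (phi - psi2)))
print(f"extracted v2 ≈ {v2_out:.3f}")   # close to the input 0.10
```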

One of the major outcomes from RHIC was the measurement of the elliptic flow of identified particles. These results led to the conclusion that the matter created acts as a system where the value of η/s is very close to the lower bound of ħ/(4πkB) conjectured within anti-de Sitter/conformal field theory – i.e. a nearly perfect liquid. At low values of transverse momentum (pT < 2 GeV/c), the experiments found an interesting mass-ordering of v2(pT), attributed to the interplay between elliptic and radial flow.

Radial flow tends to create a depletion in the particle pT spectrum at low values, which increases with increasing particle mass and transverse velocity. When introduced in a system that exhibits azimuthal anisotropy, this depletion becomes larger along the shorter axis of the anisotropy, thereby reducing v2. The net result is that at a fixed value of pT, heavier particles have a smaller value of v2 than lighter ones. In the intermediate pT region (2 < pT < 5 GeV/c), the v2 of baryons is larger than that of mesons. This phenomenon was conjectured to originate within a picture where flow develops at the partonic level and quarks coalesce into hadrons during hadronization. The proposed mechanism was argued to lead to the observed hierarchy in the flow values – the so-called number of constituent quarks (NCQ) scaling.
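
In practice, the NCQ-scaling test amounts to dividing both v2 and pT by the number of constituent quarks (2 for mesons, 3 for baryons) and asking whether the different species fall on a common curve. A purely illustrative sketch of that bookkeeping, with invented v2 values rather than measured ones:

```python
# Illustrative bookkeeping for NCQ scaling: compare v2/n_q versus pT/n_q.
# The v2(pT) points below are invented, not measurements.
n_quarks = {"pion": 2, "kaon": 2, "proton": 3, "Lambda": 3}

toy_v2 = {
    "pion":   [(1.0, 0.10), (2.0, 0.16), (3.0, 0.18)],   # (pT [GeV/c], v2)
    "proton": [(1.5, 0.12), (3.0, 0.24), (4.5, 0.27)],
}

for species, points in toy_v2.items():
    nq = n_quarks[species]
    scaled = [(pt / nq, v2 / nq) for pt, v2 in points]
    print(species, [(round(x, 2), round(y, 3)) for x, y in scaled])
# If the scaling held exactly, the scaled meson and baryon points would trace
# the same curve; ALICE finds this to be only approximate at the LHC.
```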

The ALICE collaboration, profiting from the unique particle-identification capabilities of its detector set-up, has measured v2 in Pb–Pb collisions at √sNN = 2.76 TeV for different centrality intervals and various particle species: π, K, p, Λ, Ξ, Ω (and their antiparticles), K0S and φ. The figure illustrates how v2 develops for different particle species within the same centrality interval, for central (left plot) and peripheral (middle plot) Pb–Pb collisions.

A clear mass ordering is seen for all centralities in the low-pT region (i.e. pT ≤ 2 GeV/c). Comparisons with hydrodynamic calculations in this transverse-momentum range indicate that the produced matter at the LHC seems to favour a value of η/s smaller than twice the quantum mechanical limit. In the intermediate pT region (pT > 2 GeV/c), although the particles tend to group according to their type (i.e. mesons and baryons), the NCQ scaling, if any, is only approximate. In particular, the φ-meson, with a mass close to that of p and Λ, seems to follow the baryon band in central events and shifts progressively to the band of mesons for peripheral collisions. This seems to indicate that the mass, rather than the number of constituent quarks, is the driving force of the v2 evolution with pT also in the intermediate region.

Higgs and top: a new window on dark matter

With the discovery of a Higgs boson at the LHC two years ago, the last piece of the Standard Model puzzle fell into place. Yet, several mysteries remain, one of which is the enigma of the origin of dark matter. One of the most popular classes of models predicts that the dark matter is made of weakly interacting neutral and colourless particles, χ, with mass ranging from a few to a few hundred giga-electron-volts. The LHC, with its high-energy collisions, provides an excellent place to search for such particles, and the CMS collaboration has been taking a new look at ways in which they could be produced.

Until recently, the main experimental method to look for dark-matter particles was to exploit their elastic scattering on nuclei inside sensitive detectors, working typically at low temperatures. These direct-detection experiments aim to observe the scattering by measuring the momentum of the recoiling nucleus. While interesting hints for dark-matter detection in various mass ranges have been reported by some of these experiments, none of these hints have been confirmed by later, more precise measurements.

Several years ago, a new idea appeared: to look for the production of pairs of dark-matter particles in high-energy particle collisions, like those at the LHC, via a process described by the same Feynman diagram as the scattering of the dark-matter particles on quarks inside the nuclei, but “rotated” by 90°. While such direct-detection experiments look for the process qχ → qχ, experiments at the LHC can look for qq → χχ. The challenge is to trigger on these events, because dark-matter particles would leave no trace in the detector. One possibility is to search for a more complicated process, where an additional particle, for example a gluon or a photon, is produced together with the dark matter.

The CMS experiment has performed a number of such searches, which are referred to collectively as mono-X searches, because they look for a single object, X, recoiling against the invisible particles. Recently, these searches have been extended to more complicated signatures, for example the production of dark matter in association with a pair of top quarks, which are produced in abundance at the LHC. The new analysis looks for top-quark pairs that are recoiling against a large amount of “missing” transverse momentum, carried away by dark-matter particles.
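
The “missing” transverse momentum in such events is, in essence, minus the vector sum of the transverse momenta of everything that was detected. A schematic sketch of that idea (with invented numbers, not CMS reconstruction code):

```python
import math

# Schematic missing-transverse-momentum (MET) calculation: minus the vector sum
# of the visible objects' transverse momenta.  The (pT [GeV], phi [rad]) values
# below are invented for illustration.
visible = [(120.0, 0.4), (95.0, 2.9), (40.0, -1.2)]

px = sum(pt * math.cos(phi) for pt, phi in visible)
py = sum(pt * math.sin(phi) for pt, phi in visible)

met = math.hypot(px, py)            # magnitude of the missing transverse momentum
met_phi = math.atan2(-py, -px)      # it points opposite to the visible momentum sum

print(f"MET ≈ {met:.1f} GeV at phi ≈ {met_phi:.2f} rad")
```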

As figure 1 shows, a new measurement by CMS of the production of top-quark pairs in association with missing transverse momentum sets stringent limits in the plane of the dark-matter particle mass Mχ vs an effective interaction energy scale, M* (CMS Collaboration 2014a). The interaction of dark matter with the known particles is usually assumed to be carried by new “messenger” particles. If the messengers are heavy – which would be a good reason why they have not yet been seen – the interaction can be approximated via a point-like interaction with an effective energy scale of M*. This is similar to Enrico Fermi’s effective theory of muon decay, where the messenger – a W boson – is much heavier than the muon.
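
The Fermi analogy can be made numerical: integrating out a heavy messenger of coupling g and mass M leaves a contact interaction of strength of order g²/M², which for the W boson reproduces Fermi’s constant. The short sketch below uses approximate textbook values and an arbitrary illustrative M*; it is not taken from the CMS analysis.

```python
import math

# Heavy messenger -> contact interaction: effective strength ~ g^2 / M^2.
# Approximate textbook inputs; nothing here comes from the CMS analysis.
g_weak = 0.65        # SU(2) gauge coupling (approximate)
M_W = 80.4           # W-boson mass [GeV]

# Fermi constant from integrating out the W:  G_F/sqrt(2) = g^2 / (8 M_W^2)
G_F = math.sqrt(2) * g_weak**2 / (8 * M_W**2)
print(f"G_F ≈ {G_F:.2e} GeV^-2 (measured: 1.166e-5 GeV^-2)")

# By analogy, a dark-matter contact operator suppressed by a scale M* has
# strength of order 1/M*^2: the heavier the messenger, the weaker the interaction.
M_star = 600.0       # purely illustrative scale [GeV], not a CMS limit
print(f"1/M*^2 ≈ {1/M_star**2:.1e} GeV^-2")
```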

Another interesting way to look for dark matter is based on precision measurements of the properties of the Higgs boson. If the mass of the dark-matter particle is less than roughly half of the Higgs boson mass, for instance, Mχ < 60 GeV, then it is possible to look for a direct decay of the Higgs boson into a χχ pair. This decay is called “invisible” because its products are not detected.

The CMS collaboration recently published a search for such invisible Higgs-boson decays, where the production of the Higgs is tagged either by the presence of a Z boson (associated ZH production), or by the presence of two forward jets, characteristic of vector-boson fusion (CMS Collaboration 2014b). The upper limits set on the invisible branching fraction of the Higgs boson are 51% and 58% at a 90% and 95% confidence level, respectively. The former limit can be translated to limits on the mass of the dark-matter particle vs its interaction cross-section with a nucleon, which allows for a direct comparison with the limits coming from various direct-detection experiments, as figure 2 shows. The limits are set for various types of dark-matter particle: scalar, vector, or a Majorana fermion. They are significantly more stringent than the direct-detection limits for low dark-matter masses, emphasizing the complementarity of the searches at the LHC and the direct-detection experiments.
