RIKEN gets clear view of element 113

Researchers at the RIKEN Nishina Center for Accelerator-based Science have obtained the clearest data to date on element 113. A chain of six consecutive α decays, produced in experiments at the RIKEN Radioisotope Beam Factory, conclusively identifies the element through connections to well-known daughter nuclides.

In the experiment at the RIKEN Linear Accelerator Facility in Wako, near Tokyo, Kosuke Morita and his team fired zinc ions travelling at 10% of the speed of light at a thin target of bismuth and used a custom-built gas-filled recoil ion separator coupled to a position-sensitive semiconductor detector to identify the reaction products. On 12 August they detected the production of a very heavy ion followed by a chain of six consecutive α decays, which they identified as the products of an isotope of element 113. The chain began with the decay to roentgenium-274 (element 111) and ended in mendelevium-254 (element 101).

The team previously detected element 113 in experiments conducted in 2004 and 2005, but were then able to identify only four α decays followed by spontaneous fission of dubnium-262 (element 105), which is not a well-known process. The decay chain detected in the latest experiments takes an alternative route via α decay, the data indicating that the dubnium decayed into lawrencium-258 (element 103) and finally into mendelevium-254. The decay of dubnium-262 to lawrencium-258 is well known and provides unambiguous proof that element 113 is the origin of the chain.
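
The arithmetic of the chain is simple nucleon book-keeping: each α decay removes two protons and two neutrons. A minimal sketch (the parent mass number, 278, is inferred here from the roentgenium-274 daughter rather than stated above):

```python
# Each alpha decay emits a 4He nucleus: Z -> Z - 2, A -> A - 4.
def alpha_chain(z, a, steps):
    """Return the (Z, A) sequence produced by consecutive alpha decays."""
    chain = [(z, a)]
    for _ in range(steps):
        z, a = z - 2, a - 4
        chain.append((z, a))
    return chain

# Parent isotope of element 113 inferred from its first daughter: A = 274 + 4 = 278.
chain = alpha_chain(113, 278, 6)
print(chain[1])   # (111, 274) -> roentgenium-274
print(chain[-1])  # (101, 254) -> mendelevium-254
```

Six α decays take the atomic number from 113 down to 101, exactly the mendelevium endpoint that anchors the identification.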

One CP-violating phase, three beautiful results

Three independent measurements

The last day of September saw an exciting coincidence of three competing experiments simultaneously releasing three new and directly comparable results. The occasion was the CKM2012 workshop in Cincinnati and the subject of interest: excellent new measurements of the CKM phase, γ.

Two of the contenders were well known to each other, having battled for supremacy in B physics for more than a decade. The “B factory” experiments, Belle and BaBar, were designed on the same principle: e+e– collisions at the Υ(4S) resonance produce large numbers of BB̄ pairs, which can be cleanly reconstructed in isolation. Apart from a few differing technology choices, their most obvious dissimilarity is their location: Belle is at KEK in Japan while BaBar resides at SLAC in the US.

The meeting in Cincinnati saw these old foes joined by a new competitor, LHCb, which unlike the B factories collects its huge samples of bottom hadrons from high-energy proton–proton collisions at the LHC. Although there is little doubt that the CERN-based experiment will ultimately triumph with precision measurements of γ, on the morning of 30 September no one yet knew if that time had come.

Among the fundamental forces of nature, the weak force is special. Not only does it have a unique structure that gives rise to fascinating and often counter-intuitive physical effects, it is also highly predictive, making it excellent territory for searches for new physics. Perhaps the most celebrated phenomenon is CP violation – a common short-hand for saying that weak interactions of matter differ subtly from those of antimatter. Discovered in 1964 as a small effect (10⁻³) in KL0 decays, CP violation has more recently been observed as a large effect (10⁻²–10⁻¹) in several B-meson decay modes.

The CKM matrix

The size and variety of CP violation in b-quark transitions is widely acknowledged as a triumphant validation of the Cabibbo–Kobayashi–Maskawa (CKM) description of quarks coupling to W± bosons. This mechanism explains three-generation quark mixing – up-type quarks (u, c, t) transmuting to and from down-type quarks (d, s, b) via the charged weak current – in terms of a 3 × 3 matrix rotation of the quarks’ mass eigenstates into their weak-interaction eigenstates. CP violation arises naturally through the mathematically mandatory presence of one complex phase in this generically complex matrix. Furthermore, if nature indeed has only three quark generations and probability is conserved, then the CKM transformation must be unitary.

Unitary matrices have the property that the scalar product of any two different rows or columns equates to zero. In the case of the 3 × 3 CKM matrix, six such equations can be written down that must hold true if there are three – and only three – generations of quarks. Of these six relations, each of which defines a triangle in the complex (Argand) plane, the most celebrated is

V*ub Vud + V*cb Vcd + V*tb Vtd = 0

where each VXY is one of nine CKM matrix elements that encode the strength with which quark X couples to quark Y. This triangle, whose internal angles are usually labelled α, β and γ, is widely publicized because it summarizes concisely the largest CP-violating processes in B mesons. Studying the geometry of this unitarity triangle (UT) tests the internal consistency of the three-generation CKM picture of quark mixing. The lengths of the sides of the UT are measured in CP-conserving processes, whereas the size of the angles (or phases) can be measured only via CP-violating decays.
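
The orthogonality statement is easy to verify numerically. The sketch below is an illustration only: it builds a CKM-like unitary matrix from the standard three-angle, one-phase parameterization, using roughly realistic (but not fitted) angle values, and checks that the column scalar product appearing in the triangle relation vanishes:

```python
import cmath
import math

def ckm(theta12, theta23, theta13, delta):
    """Standard-parameterization 3x3 CKM matrix: three mixing angles, one CP phase."""
    c12, s12 = math.cos(theta12), math.sin(theta12)
    c23, s23 = math.cos(theta23), math.sin(theta23)
    c13, s13 = math.cos(theta13), math.sin(theta13)
    e = cmath.exp(1j * delta)
    return [
        [c12 * c13,                      s12 * c13,                     s13 / e],
        [-s12 * c23 - c12 * s23 * s13 * e, c12 * c23 - s12 * s23 * s13 * e, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * e, -c12 * s23 - s12 * c23 * s13 * e, c23 * c13],
    ]

# Illustrative angle values only (roughly PDG-sized), not a fit.
V = ckm(0.227, 0.042, 0.0037, 1.2)

# Triangle relation: V*ub Vud + V*cb Vcd + V*tb Vtd = 0,
# i.e. the scalar product of the third and first columns (rows are u, c, t).
triangle = sum(V[i][2].conjugate() * V[i][0] for i in range(3))
print(abs(triangle))  # effectively zero (rounding only), for any angles and phase
```

Unitarity guarantees the closure for any choice of angles and phase; the physics is in whether the measured sides and angles of the triangle actually close.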

In Cincinnati, the BaBar collaboration announced that it had achieved a measurement of γ = 69+17–16° from a combination of many analyses of B± → D(*)K± decays. The precision of around 25% can be compared with the precision with which the other two UT angles are known. The smallest of the three angles, β, is known to less than 4%, β = 21.4 ± 0.8°, principally from measuring the time-dependent CP asymmetry in the mixing and decay of B0 → J/ψK0 mesons. The angle subtended by the apex of the triangle, α, is known to around 5%, α = 88.7+4.6–4.2°, from similar, time-dependent analyses of B0 → ππ and B0 → ρρ decays. Remembering that the three angles of a triangle always add up to 180°, it is clear that BaBar’s central value is remarkably close to the CKM expectation.

The Belle collaboration’s presentation quickly followed and explained a similar measurement of γ = 68+15–14°, the modest improvement perhaps being a result of the almost twice-as-large data set. As with BaBar, this number results from the careful combination of various measurements of CP-violating properties of B± → DK± and B± → D*K± decays.

Interfering amplitudes

The B factories’ common choice of B± → DK± decays is not a coincidence. Among the current UT angle analyses, only γ measurements use direct CP violation in charged B decays. This promises a simple asymmetry of matter versus antimatter but requires two interfering amplitudes resulting in the same, indistinguishable final state. They must have different CP-conserving phases (generally true for any two quantum processes) and be of similar magnitude, or the influence of the less-likely process is too hard to detect.

Accessing γ in B± → DK± decays

In the UT definition, γ is identified as the weak phase difference between b → c and b → u quark transitions. Figure 2 shows Feynman diagrams for two paths of B± → DK±. The one involving a b → c quark transition is labelled “favoured” because a b quark is most likely to decay to a c quark. The second diagram involves a b → u quark transition and is labelled “suppressed” because the chance of its occurrence is around 1% of that of the favoured process (i.e. the ratio of amplitudes, rB, is around 0.1).

This all looks good except for the detail in figure 2 that the favoured diagram results in a D0 while the suppressed diagram yields a D̄0. For the two B decays to interfere, the two neutral particles must be reconstructed in a final state that is common to both, i.e. the D0 and D̄0 should be indistinguishable. This might occur in the following ways, all of which are studied by Belle, BaBar and, to some extent, LHCb.

• CP-eigenstate decays of neutral D mesons are by definition equally accessible to D0 and D̄0. In this case, the interference – and hence the size of the direct CP violation – is around 10% (from rB in figure 2). Examples of this type are B± → [K+K–]DK± and B± → [KS0π0]DK± decays, where the D indicates that the particles in parentheses originated from a D meson.

• The unequal rate of the favoured and suppressed B decays can be redressed by selecting D final states that have an opposite suppression. Such combinations are referred to as ADS decays, after their original proponents. The most obvious example is B± → [π±K∓]DK± decays where, importantly, the kaon from the D decay has the opposite charge to that emanating from the B decay. In this particular case, the favoured B decay from figure 2 is followed by the doubly Cabibbo-suppressed D0 → π–K+ decay, whereas the suppressed B decay precedes a favoured D̄0 → K+π– decay. With this opposite suppression, the total ratio of amplitudes (rB/rD) is much closer to unity than in the first case, so larger CP violation, and hence greater sensitivity to γ, is achieved.

• A third possibility considers multi-body D decays such as B± → [KS0π+π–]DK±. In this case, the kinematics of the three-body D decay are studied across a 2D histogram, the Dalitz plot. When the D → KS0π+π– Dalitz plot for B– → DK– decays is compared with that of B+ → DK+ decays, they look identical except in a few places where γ has induced CP violation. Some places on the Dalitz plot have large sensitivity to γ, others less, but a big advantage comes from understanding the CP-conserving phases that vary smoothly across the Dalitz plot. Such an analysis is complicated, but worth it because the patterns of CP asymmetry across the Dalitz plane can be solved by only one value of γ (modulo 180°). This contrasts with the first two cases, whose interpretations suffer from trigonometric ambiguities because of their non-trivial sinusoidal dependence on γ.

Both the Belle and BaBar results combine all of these methods using B± → DK± and B± → D*K± decays. This diversity is vital since the branching fraction of γ-sensitive decays is so small (proportional to |Vub|²) and only a few hundred events have been collected in these experiments, even after a decade of operation.

Invariant mass distributions

LHCb has different advantages and challenges. On one hand, the huge cross-section for B production at the LHC means that LHCb has a considerable advantage in the number of charged-track-only decays that it can gather. On the other hand, because of the hadronic environment, LHCb fares less well with modes containing neutral particles. The D → KS0π+π– mode is still useful, but cannot be relied on as heavily as at the B factories. Modes with a π0 or a photon, notably the otherwise important B± → D*K±, D* → D0π0/D0γ suite of modes, have not yet been attempted at LHCb.

Nevertheless, for the charged-track final states, such as the easiest ADS modes, LHCb has triumphed with first observations of the B± → [π±K∓]DK± mode (see figure 3), as well as the similarly interesting B± → [π±K∓π+π–]DK± mode. By measuring the large CP asymmetries in these modes, and with the help of an ambiguity-busting B± → [KS0π+π–]DK± analysis, the LHCb collaboration concluded the CKM2012 session by announcing a measurement of γ = (71.1+16.6–15.7)° from B± → DK± decays.

The simple combination of these three independent results (neglecting their common systematics) leads to the conclusion that γ is known to better than 14% accuracy: γ = 69.3+9.4–8.8°. This is illustrated in figure 1, which also shows the remarkable similarity of the three measurements and their mutual agreement with the expectation based on the world-average values of β and α.
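
As a back-of-the-envelope check (not the procedure actually used, which treats the asymmetric likelihoods properly), a naive inverse-variance combination of the three symmetrized results lands very close to the quoted average, and the triangle constraint 180° − α − β gives the expectation the measurements are compared with:

```python
import math

# (central value, +error, -error) in degrees for gamma, from the text above.
results = [
    (69.0, 17.0, 16.0),   # BaBar
    (68.0, 15.0, 14.0),   # Belle
    (71.1, 16.6, 15.7),   # LHCb
]

# Naive combination: symmetrize each error, then weight by inverse variance.
weights = [1.0 / ((up + dn) / 2.0) ** 2 for _, up, dn in results]
mean = sum(w * c for w, (c, _, _) in zip(weights, results)) / sum(weights)
sigma = 1.0 / math.sqrt(sum(weights))
print(f"gamma = {mean:.1f} +/- {sigma:.1f} deg")  # close to the quoted 69.3 (+9.4/-8.8)

# CKM expectation from the other two angles of the unitarity triangle:
alpha, beta = 88.7, 21.4
print(f"expected gamma = {180 - alpha - beta:.1f} deg")
```

The simple weighted mean of about 69.3° ± 9.0° reproduces the quoted combination well; the residual difference between the up and down errors reflects the asymmetric inputs that this sketch ignores.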

The concluding theme in Cincinnati was that, despite LHCb’s coming of age since CKM2010, the CKM description of the quarks’ weak interactions continues to prove impressively complete. It was noted, however, that many flagship B-physics measurements, including the UT angles α and β, involve processes that contain quantum loops and/or boxes. Such exotic processes are the reason for well established phenomena such as B-mixing and flavour-changing neutral-current decays. Standard Model loop processes involve the virtual exchange of high-mass particles such as W± bosons and top quarks and, by extension, possibly non-Standard Model particles too. If they exist, and if they couple to quarks, such new-physics particles could be altering the physical behaviour of B mesons from the CKM-based expectation.

Detection of non-CKM effects is possible only if loop-sensitive observations can be compared with a gold-standard CKM process. B± → DK± decays provide exactly this. They are “tree-level” measurements (meaning, no loops) that are almost unique in heavy-flavour physics for their theoretical cleanliness. The measurement of γ in these modes is a measurement of γCKM, something the other two angles of the UT cannot boast with such certainty.

Though γ is currently the least well known UT property, by the end of this decade LHCb will have reduced its uncertainty to less than 5° (less than about 8%). By the end of the epoch of the Belle and LHCb upgrades, sub-degree precision looks likely. Such stunning precision will mean that this phase will become the CKM standard candle against which loop processes will be compared increasingly carefully.

Large CP-violation effects appear in three-body B decays

One of the interesting ways to search for CP violation in B-meson decays is by using three-body decays of charged B mesons, i.e. B+ → K+K–K+, K+π–π+, K+K–π+ and π+π–π+ (and the charge-conjugated modes). In the 1.0 fb⁻¹ of data accumulated in 2011, LHCb already recorded samples of these decays that are an order of magnitude larger than those available to previous experiments. The first studies of the K+K–K+ and K+π–π+ decays were presented by the collaboration at the 2012 International Conference on High-Energy Physics in July, revealing evidence of large CP-violation effects (LHCb 2012a). Now, at the 7th International Workshop on the CKM Unitarity Triangle, held at the end of September, LHCb has complemented these with results from the rarer decays to K+K–π+ and π+π–π+, finding evidence of even larger CP violation (LHCb 2012b).

While the inclusive CP asymmetries (which are integrated over the entire phase space, or the Dalitz plots, of the three-body decays) show evidence of CP violation, more pronounced effects are visible when looking at the variation of the effect in different regions. The LHCb analyses have used model-independent approaches – based on binning the Dalitz plot – to explore the local asymmetries.

A remarkable feature of the new results is that the CP-violation effects appear to arise in regions of the Dalitz plots that are not dominated by contributions from narrow resonances. For example, the BaBar collaboration previously observed a broad feature at low values of the K+K– invariant mass in B± → K+K–π± decays (Aubert et al. 2007); in the LHCb data, this appears to be present only in B+ decays, as the figure shows, indicating direct CP violation in these decays. This points to some interesting hadronic dynamics that must generate the strong (CP-conserving) phase difference that is necessary for direct CP violation to emerge.

To understand these effects further, the LHCb collaboration is now starting detailed studies of these channels and will also exploit the larger data sample that will be available after the 2012 running. The results from these analyses will also establish whether the observed CP violation is consistent with the expectations of the Standard Model or whether it has a more exotic origin.

Using the LHC as a photon collider

The protons and nuclei accelerated by the LHC are surrounded by strong electric and magnetic fields. These fields can be treated as an equivalent flux of photons, making the LHC the world’s most powerful collider not only for protons and lead ions but also for photon–photon and photon–hadron collisions. This is particularly so for beams of multiply charged heavy ions, where the number of photons is enhanced by almost four orders of magnitude compared with singly charged protons (the photon flux is proportional to the square of the ion charge).
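
For lead, with charge Z = 82, the Z² scaling quoted above can be checked in one line (illustrative arithmetic only):

```python
import math

Z_pb = 82                        # charge of a fully stripped lead ion
enhancement = Z_pb ** 2          # photon flux scales as the square of the charge
print(enhancement)               # 6724
print(math.log10(enhancement))   # ~3.8, i.e. "almost four orders of magnitude"
```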

The ALICE collaboration has recently taken advantage of this effect in a study of coherent photoproduction of J/ψ mesons in lead–lead (PbPb) collisions. The J/ψ is detected through its dimuon decay in the muon arm of the ALICE detector, which also provides the trigger for these events. The relevant collisions typically occur at impact parameters of several tens of femtometres, which is well beyond the range of the strong force, so the nuclei usually remain intact and continue down the beam pipe. The photonuclear origin of the J/ψ is therefore ensured by requiring that the detector is void of other particles, that there is only one positive and one negative muon candidate, and that the J/ψ has very low transverse momentum, etc. The appearance of these events (see figure) stands in sharp contrast to central heavy-ion collisions, where thousands of particles are produced.

These interactions carry an interesting message about the partonic substructure of heavy nuclei. Exclusive photoproduction of heavy vector mesons is believed to be a good probe of the nuclear gluon distribution. The cross-section measured in a heavy-ion collision Pb+Pb → Pb+Pb+J/ψ is a convolution of the equivalent photon spectrum with the photonuclear cross-section for γ+Pb → J/ψ+Pb. The latter process can be modelled as the colourless exchange of two gluons.

At the rapidities (y around 3) studied in ALICE, J/ψ photoproduction is sensitive mainly to the gluon distribution at values of Bjorken-x of about 10⁻². Although the experimental error is rather large, the conclusion from ALICE is that the data favour models that include strong modifications to the nuclear gluon distribution, known as nuclear shadowing.

Top-quark production gets a boost

Upper limit on the production

Top quarks are especially interesting at the LHC because the top quark is the most massive fundamental particle known, suggesting an intimate association with electroweak symmetry breaking and possible new-physics scenarios.

The top quark decays via two channels: t → Wb → lνb or t → Wb → qq̄b. When a tt̄ pair is created in an experiment with energy roughly equal to the quark–antiquark rest mass, the decay products appear well separated in the detector. With the higher energies at the LHC, however, particles are often given a “boost” in momentum when produced, so the decay products of a tt̄ pair have extra momentum along the directions of the top and antitop, and are found in opposite hemispheres of the detector.

While higher energies allow the experiments at the LHC to probe for new physics as never before, they also bring new challenges. For example, what if the top quark is so boosted that the three jets from the decay t → Wb → qq̄b merge to a point where they are indistinguishable from each other and appear as one large jet? With the high energy at the LHC, this boosted situation happens quite often and must be accounted for when reconstructing top-quark decays. Analyses involving top quarks or other “boosted objects” at the LHC now include approaches that allow for these effects.

The special techniques for measuring boosted top quarks are particularly important when searching for new resonances, where a new heavy particle decaying primarily into tt̄ pairs could be observed as a bump in the relevant invariant-mass spectrum. The higher the mass of the new particle, the more likely it is that the top-quark decay products will merge in the detector.

ATLAS recently performed searches for tt̄ resonances in final states with one or no leptons. In the former case, the lepton is allowed to be much closer to the b quark than in non-boosted analyses. In the other hemisphere of the detector, a wide, massive jet with underlying structure is required. Using these boosted techniques, the mass reach for a new heavy gauge boson increased by nearly 700 GeV.

With the expected energy upgrade of the LHC, the frequency of boosted final states will increase and even more sophisticated methods will be needed to search for physics beyond the Standard Model. The future is certainly boosted.

Theorists calculate the route to carbon-12

The triple-alpha reaction rate that produces carbon-12 in stars and other energetic astronomical phenomena has been a tricky subject for nuclear theorists for some time. Initially, Fred Hoyle proposed that there should be a 0+ resonance close to the 3α threshold to explain the observed abundance of carbon-12 in stars, a prediction that was later confirmed experimentally. However, if there is not enough energy in the stellar environment to reach the narrow resonances involved, then a direct three-body capture becomes the favoured path.

In the Nuclear Astrophysics Compilation of Reaction Rates (NACRE), the direct triple-alpha capture rate has been extrapolated from the two-step resonant capture to temperatures well below 10⁸ K, where the resonant capture dominates (C Angulo et al. 1999). However, this estimate has proved inadequate and nuclear theorists began trying to solve the problem more directly.

Recently, a team at Kyushu University in Japan made use of the continuum-discretized coupled-channel (CDCC) method, which expands the full three-body wave function in terms of the continuum states of the two-body subsystem – in this case beryllium-8 (Ogata et al. 2009). This method is challenging in the case of the triple-alpha reaction problem because the charged-particle reaction occurs at large distances and is dominated by Coulomb interactions. The results reflected these challenges, as the predicted rates showed an increase of 20 orders of magnitude when compared with NACRE, and caused the red-giant phase in low- and intermediate-mass stars to disappear in theoretical models of stellar evolution. Additionally, studies of helium ignition in accreting white dwarfs and accreting neutron stars showed that the CDCC rate is barely consistent with observations of Type Ia supernovae and type I X-ray bursts, respectively.

To skirt some of these difficulties, the nuclear theory group at the National Superconducting Cyclotron Laboratory at Michigan State University combined the Faddeev hyperspherical harmonics and the R-matrix method (HHR) to obtain a full solution to the three-body triple-alpha continuum (Nguyen et al. 2012). The researchers find that the HHR method agrees well with NACRE above 7 × 10⁷ K. However, below that temperature the calculations revealed a pronounced increase of the rate accompanied by a completely different temperature dependence. Though the results do show a strong enhancement at these low temperatures, it is not as strong as that seen in the CDCC result.

This finding turns out to have crucial repercussions for astrophysics. When the new results are used in stellar evolution simulations within the MESA (Modules for Experiments in Stellar Astrophysics) code, the red-giant phase in the stellar evolution of low- and intermediate-mass stars survives. The team plans to carry out further astrophysical studies to understand the implications of the new rate in explosive scenarios in the near future.

Quark Matter goes to Washington

The Quark Matter conferences, held roughly every 18 months, form the most important series of meetings in relativistic heavy-ion physics. The latest and 23rd in the series took place on 13–18 August at the Omni Shoreham hotel, a historic landmark in downtown Washington, DC. The meeting attracted around 700 participants from all around the world who discussed an unprecedented amount of new heavy-ion data from experiments at both Brookhaven National Laboratory (BNL) and CERN. This rich harvest of high-quality experimental results from the PHENIX and STAR collaborations at BNL’s Relativistic Heavy-Ion Collider (RHIC) and the ALICE, ATLAS and CMS collaborations at CERN’s LHC is providing a deep insight into the behaviour of quarks and gluons under the extreme conditions of high temperature and density.

The opening ceremony included presentations by Bart Gordon, former chair of the US House of Representatives Committee on Science and Technology, Timothy Hallman, associate director of Science for Nuclear Physics of the US Department of Energy, and Samuel Aronson, director of BNL. Urs Wiedemann of CERN provided an overview of the current status of relativistic heavy-ion physics, followed by highlights from the experiments presented by Takao Sakaguchi of PHENIX, Xin Dong of STAR, Karel Safarik of ALICE, Barbara Wosiek of ATLAS and Gunther Roland of CMS. The welcome reception was held at the Smithsonian Institution’s spectacular National Portrait Gallery.

Understanding the quark–gluon plasma

Quantum chromodynamics (QCD) – the theory describing the interactions of quarks and gluons – is believed to be responsible for 99% of the mass of the visible universe, with the Higgs boson responsible for the remaining 1%. It has become clear that this mass originates mainly from the self-interaction of gluons, which at short distances is governed by asymptotic freedom. Yet the dynamics of gluon interactions in the large-distance, strong-coupling regime, which is responsible for quark confinement and the existence of atomic nuclei, remains mysterious. It is intimately linked to the complicated and poorly understood structure of the QCD vacuum.

The understanding of matter is often advanced by the study of phase transitions in macroscopic systems; thus heavy-ion physics aims towards a better understanding of QCD by creating a “macroscopic” domain of excited vacuum populated by a hot quark–gluon fireball. Advancing the understanding of the quark–gluon plasma also helps in better understanding the origins of the universe, because the conditions created in heavy-ion collisions, albeit fleetingly, are similar to the conditions that existed a few microseconds after the Big Bang. In addition, because both QCD and the electroweak sector of the Standard Model are described by non-Abelian gauge theories, understanding the QCD plasma will provide valuable insight into the dynamics of matter at temperatures above the electroweak phase transition that are not accessible in the laboratory. This is important because, for example, the topological “sphaleron” transitions in the electroweak plasma could be responsible for the baryon asymmetry of the present-day universe.

A major advance in the physics of the QCD plasma made possible by the data from RHIC – and now also from the LHC – was the realization that at experimentally accessible temperatures the plasma behaves as a liquid with small dissipation, quantified by small values of shear and bulk viscosities. This implies that there exists a range of temperatures above the deconfinement phase transition in which the plasma does not at all resemble the quasi-ideal gas of quarks and gluons that is expected at high temperatures as a result of asymptotic freedom. At strong coupling, the non-Abelian plasma possesses small shear viscosity, as exemplified by the supersymmetric plasma that is amenable to study by holographic methods based on string theory. In the latter case, the ratio of shear viscosity to entropy density at strong coupling reaches the value 1/(4π), which was conjectured to be a universal lower bound for any fluid. The physics underlying this bound is of a quantum nature because at strong coupling the mean free path approaches the de Broglie wavelength of the constituents, making the quasi-particle picture inapplicable.

The new data presented at Quark Matter 2012 have strengthened the case for the “perfect liquid” and made the physical picture more detailed. The data on hadron spectra and azimuthal correlations from RHIC and the LHC point towards the presence of well localized quantum fluctuations at the early stage of the collision that induce excitations in the quark–gluon liquid. The azimuthal distributions of hadrons are conveniently parameterized by their Fourier coefficients vn. For large n, these coefficients signal the presence of localized fluctuations at the early stage of the collision; their values should be sensitive to the shear viscosity of the liquid.

Ordinarily, the “elliptic flow” v2 dominates over higher harmonics because it reflects decompression of the elliptical shape of the produced fireball. However, all harmonics in the most-central heavy-ion collisions become similar, as illustrated in figure 1 by data from the CMS collaboration in the 0.2% most-central lead–lead (PbPb) collisions at the LHC. A comparison of the data with hydrodynamical calculations shows that the shear viscosity of the liquid is quite close to the conjectured quantum bound, although its precise value depends on the choice of initial conditions.
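
The harmonic decomposition used here is easy to illustrate. The azimuthal distribution is parameterized as dN/dφ ∝ 1 + 2 Σn vn cos(n(φ − Ψn)), so each coefficient is simply vn = ⟨cos(n(φ − Ψn))⟩. The toy Monte Carlo below assumes an input v2 = 0.1 and a fixed event plane Ψ2 = 0 (both illustrative choices, not data) and recovers the coefficient from sampled angles:

```python
import math
import random

def sample_phi(v2, n_particles, rng):
    """Sample azimuthal angles from dN/dphi ~ 1 + 2*v2*cos(2*phi) by rejection."""
    phis = []
    f_max = 1.0 + 2.0 * v2  # maximum of the (unnormalized) distribution
    while len(phis) < n_particles:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.uniform(0.0, f_max) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            phis.append(phi)
    return phis

rng = random.Random(42)
phis = sample_phi(v2=0.1, n_particles=200_000, rng=rng)

# Estimator: v_n = <cos(n * (phi - Psi_n))>, with the event plane fixed at Psi_2 = 0.
v2_est = sum(math.cos(2.0 * p) for p in phis) / len(phis)
print(v2_est)  # ~0.1, recovering the input elliptic flow
```

Real analyses must also reconstruct Ψn event by event and correct for its finite resolution, which this sketch deliberately omits.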

The initial conditions in heavy-ion collisions are determined by the structure of nuclear wave-functions at small Bjorken-x and the dynamics of their interaction. Significant progress in the understanding of QCD at small x has been made in recent years, triggered by the data from RHIC and the LHC. The quantum evolution in QCD and the high density of partons in Lorentz-contracted nuclei (which can be described as the “colour glass condensate”) lead to the emergence of strong colour fields that dominate the early moments of heavy-ion collisions. The data on the collective flow and other observables suggest that thermalization occurs very early on – within 1 fm/c of the beginning of the collision. The dynamics of this “early thermalization” is not yet entirely understood but several promising theoretical developments were reported at the conference.

One of the proposed signatures of the colour glass condensate is the disappearance of quantum back-to-back di-jet correlations in the forward rapidity region of deuterium–gold collisions at RHIC, owing to the emergence of a semi-classical gluon field at small Bjorken-x. Both the PHENIX and STAR collaborations reported on observations of this effect at RHIC.

The QCD medium can be studied using hard probes to investigate its response to external localized perturbations. The RHIC experiments observed the strong quenching of high-transverse-momentum hadrons and jets that had been proposed as a signature of hot and dense quark–gluon matter. The LHC has significantly extended the kinematic reach in the studies of jets. All LHC experiments found that the strong suppression of jets persists up to high jet energies.

The mechanism behind the jet-energy loss is still not clear: does it depend on the colour charge of the leading parton (quark or gluon)? Is it suppressed for heavy quark jets, as expected for the medium-induced gluon radiation as a result of the “dead cone” effect? Are the dynamics of energy loss adequately described by perturbative QCD, or does it call for new strong-coupling methods? These questions can be answered only after more detailed data are acquired on jet shapes and flavour-tagged jets.

An interesting modification of the jet-fragmentation function in PbPb collisions was reported at the conference by the ATLAS and CMS collaborations. Figure 2 shows the ATLAS result. In addition to the enhancement of hadron production at small values of the jet-energy fraction z, there is also a sizeable dip at intermediate values of z, which has yet to be understood.

As for the flavour-tagged jets, high-energy b- and c-tagged jets are seen by the LHC experiments to be quenched similarly to the inclusive jets, which are dominated by gluons. At present there is no clear sign of the dependence of jet-energy loss on the colour charge of the parton. At transverse momenta below 8 GeV, there is a hint of weaker quenching for D mesons than for light hadrons, as reported by the ALICE collaboration. Electrons from heavy-flavour decays, which receive a significant contribution from beauty at high transverse momenta, have been found by ALICE to be quenched less than D mesons from charm, as figure 3 shows. This suggests that the quenching of bottom quarks is weaker than that of charm quarks.

The PHENIX collaboration presented the first data on heavy-meson quenching from decay electrons, obtained using their new silicon vertex detector. In accord with theoretical expectations, D mesons are observed to be suppressed less than light hadrons. However, the PHENIX collaboration found surprising hints of a significantly stronger suppression of B mesons.

An important baseline for jet quenching is provided by the colourless probes – the photons, Z and W bosons. Indeed, the ATLAS and CMS collaborations reported the production of Z bosons with no sign of suppression up to transverse momenta of about 100 GeV. This implies that the observed suppression of jets is a result of colour dynamics.

Heavy quarkonium has been proposed as a probe of deconfinement – the Debye screening in the quark–gluon plasma (QGP) is expected to make quarkonium formation impossible. Strong suppression of J/ψ production was observed at CERN’s Super Proton Synchrotron – and then at RHIC and at the LHC. Studies of heavy quarkonium have now been extended to the bottomonium family, with the expected hierarchy of suppression, as shown in figure 4a from the CMS collaboration: it is more difficult to dissolve states with larger binding energies and smaller radii.

Nevertheless, the observed suppression stems from a complicated interplay of final- and initial-state effects, as suggested by the recent PHENIX data on J/ψ production in asymmetric copper–gold (CuAu) collisions presented at the conference (figure 4b). The J/ψ suppression at rapidities in the Cu-fragmentation region is found to be stronger than in the Au-fragmentation region. This is inconsistent with a final-state effect alone, because the density of produced particles is larger in the Au-fragmentation region. On the other hand, J/ψ production in central CuAu collisions in the Cu-fragmentation region probes the high-density, small-x region of the wave function of the Au nucleus. The rescattering of heavy quarks in this dense gluon system before the formation of the QGP is expected to reduce the probability of J/ψ formation.

Fluctuations, broken symmetries and the critical point

An important goal of heavy-ion physics is to map the QCD phase diagram. A prominent feature of this phase diagram is the possible existence of a critical point at finite baryon density, at the end of the first-order phase-transition curve. The signature of the critical point is the enhancement of fluctuations, including the fluctuations of net baryon number. Experimental access to the high baryon-density region in heavy-ion collisions requires decreasing the collision energy at RHIC. The search for the critical point, and for the disappearance of the signatures of deconfined matter, was the goal of the recent beam-energy scan at RHIC.

Both the STAR and PHENIX collaborations reported results from the RHIC Beam Energy Scan at the conference. The STAR experiment sees an intriguing deviation of the higher moments of net-proton fluctuations from the Poisson baseline and from the expectations of Monte Carlo models at centre-of-mass energies below 20 GeV. A measurement with higher statistics and a finer step in collision energy will be needed to tell whether this observation points to the existence of the critical point. As the collision energy was decreased, several signatures of the plasma phase were found to disappear, as reported by STAR. These include the suppression of high-transverse-momentum hadrons, the constituent-quark-number scaling of the elliptic flow and the fluctuations of charge separation attributed to the chiral magnetic effect.
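The Poisson baseline for such fluctuation measurements is concrete: for a Poisson distribution all cumulants equal the mean, so volume-independent ratios such as C3/C2 and C4/C2 are unity, and deviations of the measured net-proton cumulant ratios from 1 are the sought-after signal. A small sketch of the idea (illustrative toy events only, not the STAR analysis code):

```python
import numpy as np

def cumulants(samples):
    """First four cumulants of a sample of event-by-event net-proton numbers."""
    x = np.asarray(samples, dtype=float)
    d = x - x.mean()
    c2 = np.mean(d**2)                 # variance
    c3 = np.mean(d**3)                 # related to skewness
    c4 = np.mean(d**4) - 3.0 * c2**2   # related to (excess) kurtosis
    return x.mean(), c2, c3, c4

# Toy events drawn from a Poisson distribution: all cumulant ratios tend to 1
rng = np.random.default_rng(0)
c1, c2, c3, c4 = cumulants(rng.poisson(lam=20.0, size=1_000_000))
print(c3 / c2, c4 / c2)  # both consistent with 1 for the Poisson baseline
```

Critical fluctuations would show up as a systematic departure of these ratios from unity as the collision energy is scanned.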

Theoreticians have proposed the existence of quantum fluctuations of topological origin in the early stage of heavy-ion collisions, which generate chirality similarly to the electroweak sphalerons that generate baryon number at much higher temperatures. In the presence of the strong magnetic field generated by the colliding heavy ions, the fluctuations in net chirality can lead to fluctuations in the electric-charge separation because of the “chiral magnetic effect”. The resulting observable is the event-by-event fluctuation in the electric-charge separation relative to the reaction plane, signalling the fluctuating electric dipole moment of the plasma. The effect can be accessed experimentally by measuring the difference in the fluctuations of the parity-odd harmonics of azimuthal distributions for hadrons of the same and opposite charge. The effect has been seen at RHIC by the STAR and PHENIX experiments; an effect of similar strength was also reported by the ALICE collaboration.

However, because the observable is parity-even it can receive contributions from more mundane effects. An alternative conventional explanation has been put forward based on the combination of correlations between opposite electric charges and the elliptic flow. Usually, the elliptic flow is correlated with the magnetic field by the geometry of the collision and both vanish in central collisions. However, the new RHIC data on uranium–uranium (UU) collisions allow separation of the two effects. Because of the deformed shape of the uranium nucleus, the central collisions produce a deformed fireball leading to a sizeable elliptic flow; yet, the number of spectators detected by the Zero Degree Calorimeter is small, so the magnetic field must be greatly suppressed. Thus, it should be possible to establish whether the observed fluctuations in charge asymmetries are driven by the elliptic flow or by the magnetic field.

Preliminary data from STAR presented at the conference indicate that the difference in the fluctuations of the asymmetry for same- and opposite-charge hadrons vanishes in central UU collisions (figure 5), suggesting that these fluctuations are driven by the magnetic field. Another important result on this topic reported by STAR was the difference between the elliptic flows of positive and negative pions in AuAu collisions at 200 GeV, which is found to depend linearly on the charge asymmetry in the event, as expected on the basis of the chiral magnetic effect. Because the topological transitions in QCD that generate chirality are analogous to the electroweak sphaleron transitions that generated the baryon asymmetry of the universe shortly after the Big Bang, understanding them better is important. New, refined data are necessary to reach a definitive conclusion on this issue.

Broad connections

The conference highlighted the broad connections of relativistic heavy-ion physics to condensed-matter physics, string theory, cosmology and astrophysics. For example, the small viscosity of the QGP makes it similar to such seemingly distant objects as ultracold atoms and graphene, where the charge carriers are chiral and the effective coupling is large. The non-dissipative chiral magnetic current appears to exist also in Weyl semimetals and opens possibilities for the creation of a new generation of electronic devices.

The conference made clear the need for dedicated future facilities, several of which were discussed, including: the Electron–Ion Collider needed for a precision study of small-x gluon wave-functions of nuclei and of the spin structure of the proton; the Large Hadron–Electron Collider at CERN, which would advance the high-energy, high-momentum-transfer frontier of deep-inelastic scattering; the Facility for Antiproton and Ion Research under construction at GSI in Darmstadt; and the Nuclotron-based Ion Collider facility, currently under construction in Dubna. The case for the latter two facilities was advanced by the first results from the beam-energy scan at RHIC that were reported at the conference.

Summaries of the results presented were provided by three pairs of rapporteurs, each pair composed of a theorist and experimentalist: Boris Hippolyte of the Institut Pluridisciplinaire Hubert Curien and Dirk Rischke of the University of Frankfurt on global variables and correlations; Jorge Casalderrey-Solana of the University of Barcelona and Alexander Milov of the Weizmann Institute of Science on high-transverse-momenta and jets; and Charles Gale of McGill University and Lijuan Ruan of BNL on heavy flavours, quarkonia and electroweak probes. The wealth of new data and the resulting leap in the theoretical understanding of QCD matter were possible only because of the successes of the two complementary experimental programmes at RHIC and the LHC.

Deep-inelastic scattering enters the LHC era


The unusually early date for the 20th International Workshop on Deep-Inelastic Scattering and Related Subjects proved not to be a problem. The trees were all in blossom in Bonn during DIS 2012, which was held there on 26–30 March, and the sun shone for most of the week. As is the tradition for these workshops, the first day consisted of plenary talks, with the ensuing three days devoted to parallel sessions, followed by a final day of summary talks from the seven working groups. Almost all of the 300 participants also gave talks: there were as many as 275 contributions, not including the summaries. For the first time, the number of results from the LHC experiments at CERN was larger than from DESY’s HERA collider, which shut down in 2007. Given such a large number of contributions, it is not possible to do justice to them all, so the following report presents only a few rather subjective highlights.

With the move from dominantly electron–proton collisions to more and more results coming from hadron colliders, the workshop started with an “Introduction to deep-inelastic scattering: past and present” by Joël Feltesse of IRFU/CEA/Saclay. Talks on theory and on experiment followed, which covered the full breadth of the topics presented in more detail in the parallel sessions. With running at Fermilab’s Tevatron coming to an end in 2011, results with the complete data set are now being released by the CDF and DØ collaborations. There were also several results from the LHC experiments based on the complete data set for 2011. The emphasis in many of the theory presentations was on calculating processes to higher orders and on parton density function (PDF) and scale uncertainties.

Structure functions and PDFs

Measurements relevant to the determination of the PDFs of the nucleon were reported, based on combined data from the HERA experiments H1 and ZEUS, from the LHC experiments ATLAS, CMS and LHCb, and from the Tevatron experiments CDF and DØ. New experimental results have come – in particular from the LHC – on Drell–Yan production, including W and Z bosons, and from HERA and the LHC on jet production, including jets with heavy flavour. In addition, analyses of deep-inelastic scattering (DIS) data on nuclei were presented at the workshop.

There has been substantial progress in the development of tools for PDF fitting

There has been substantial progress in the development of tools for PDF fitting, including the so-called HERAfitter package. This package is designed to include all types of data in a global fit and can be used by both experimentalists and theorists to compare different theoretical approaches within a single framework. The FastNLO package can calculate next-to-leading-order (NLO) jet cross-sections on grids that can then be used for comparisons of data and theory, as well as in PDF fitting. Figure 1 shows a comparison of data and theory for many different energies and processes.

Looking at the current status of fit results, the conclusion is that the determination of the PDFs still gives rise to some controversy, but that there is progress in understanding the differences, as Amanda Cooper-Sarkar of Oxford University explained. All of the groups presented PDFs up to next-to-next-to-leading order (NNLO) in the Dokshitzer–Gribov–Lipatov–Altarelli–Parisi formalism. Extensions of the formalism into the Balitsky–Fadin–Kuraev–Lipatov regime and into the high-density regime of nuclei are in progress. The H1 and ZEUS collaborations have also measured the longitudinal structure function, FL. However, the precision is not yet good enough to discriminate between different predictions for the gluon density and between different models.

Measurements of the cross-sections of diffractive processes in DIS open the opportunity to probe the parton content of the colourless exchange, the goal being to determine the diffractive PDFs of the nucleon. The H1 and ZEUS collaborations selected diffractive processes in DIS either by requiring the detection of a scattered proton in dedicated proton spectrometers (ZEUS-LPS or H1-FPS) at small angles to the proton-beam direction, or without proton detection, instead requiring a large rapidity gap between the produced system – a jet or vector meson – and the proton-beam direction. Figure 2 shows reduced cross-sections obtained from LPS and FPS data (and also combined), which were presented at the workshop. The LHC experiments have also started to contribute to diffraction studies: the ATLAS collaboration reported on an analysis of diffractive events selected by a rapidity gap.


Searches and tests

At the time of the conference, the LHC experiments had only tantalizing hints of an excess in the mass region around 125 GeV from the 2011 data. It was nevertheless impressive that many results could be shown using the full 5 fb⁻¹ of data collected that year. The Higgs searches were the only ones to show any real sign of new particles; all others saw no significant indications and could only set upper limits. Experiments at both the LHC and the Tevatron have now measured WW, ZZ and WZ production with cross-sections that are consistent with Standard Model expectations, calculated to NLO and higher.

As Feltesse reminded participants in his talk, measurements of hadronic final states in DIS were the cradle for the development of the theory of strong interactions, QCD. Such measurements remain key for testing QCD predictions. New results were presented from HERA and the LHC, in which the QCD analyses have reached an impressive level of precision. While leading-order-plus-parton-shower Monte Carlos provide a good description of the data in general, a number of areas can be identified where the description is not good enough. Higher-order generators are needed here and it is important that appropriate tunes are used.

In general, NLO QCD predictions give a good description of the data. However, the uncertainty in the theory because of missing higher-order calculations is almost everywhere much larger than the experimental errors. Moreover, it was shown that in several cases the fragmentation process of partons into hadrons is not well described by NLO QCD calculations.

A central issue is the value and precision of the strong coupling constant, αS, and its running as a function of the energy scale. Many results were presented that improve the precision and show that the energy dependence is well described by QCD calculations.
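At leading order, the running that these measurements test follows from the one-loop QCD beta-function. A textbook sketch (one-loop only; actual extractions of αS use multi-loop running and flavour thresholds):

```python
import math

def alpha_s(Q, alpha_mz=0.118, mz=91.1876, nf=5):
    """One-loop running of the strong coupling from a reference value at M_Z:
    b0 = (33 - 2*nf) / (12*pi)
    alpha_s(Q) = alpha_s(M_Z) / (1 + b0 * alpha_s(M_Z) * ln(Q^2 / M_Z^2))"""
    b0 = (33 - 2 * nf) / (12.0 * math.pi)
    return alpha_mz / (1.0 + b0 * alpha_mz * math.log(Q**2 / mz**2))

print(alpha_s(10.0))    # the coupling grows at lower scales
print(alpha_s(1000.0))  # and shrinks at higher ones (asymptotic freedom)
```

Plotting this curve against the measured values at different scales is exactly the kind of energy-dependence test reported at the workshop.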

There has been a great deal of progress in calculations of heavy-quark production

There has been a great deal of progress in calculations of heavy-quark production. A particular highlight is the first complete NNLO QCD prediction for the pair-production of top quarks in the quark–antiquark annihilation channel. There is also a wealth of data from HERA, the LHC, the Tevatron and the Relativistic Heavy-Ion Collider (RHIC) on the production both of quarkonia and of open charm and beauty. The precision with which the Tevatron experiments can measure the masses of both the top quark and the W boson is particularly impressive. Although the LHC experiments have more events of both sorts by now, it will still take some time before the systematic uncertainties are understood well enough to achieve similar levels of precision.

The X, Y, Z states discovered in recent years have been studied by the experiments at the B factories, the LHC and the Tevatron. Their theoretical interpretation is still a challenge. The LHCb experiment has performed the world’s best measurements of the properties of the Bc meson and b baryons, and has made important contributions in other areas where its ability to measure particles in the forward direction is important.

Experiments that use polarized beams in DIS on polarized targets are relevant for studying the spin structure of nucleons. New results were presented from HERMES at HERA and COMPASS at CERN’s Super Proton Synchrotron, as well as from experiments at RHIC and Jefferson Lab. A tremendous amount of data has been collected and is now being analysed. Current results confirm that neither the quarks nor the gluons carry much of the nucleon spin. This leaves angular momentum. However, a picture describing the nucleon as a spatial object carrying angular momentum has yet to be settled.

The conceptual design report for a future electron–proton collider using the LHC together with a new electron accelerator, known as the LHeC, was released a couple of months after DIS 2012. This was the main topic of the last plenary talk at the workshop. In the parallel sessions, a broad spectrum of options for the future was discussed, covering the upgrades of the LHC machine and detectors, the upgrade plans at Jefferson Lab and RHIC, as well as proposed new accelerators such as an electron–ion collider, the EIC. One of the central aims is to understand better the 3D structure of the proton in terms of generalized parton distribution functions.

DIS 2012 participants once again profited from lively and intense discussions. The conveners of the working groups worked hard to put together informative and interesting parallel sessions. They also organized combined sessions for topics that were relevant for more than one working group. For relaxation, the workshop held the conference dinner in the mediaeval castle “Burg Satzvey”, which was a big success. Many of the participants also went on one of several excursions on offer. Next year, DIS moves south and will take place on 22–26 April in Marseilles.

Berkeley welcomes real-time enthusiasts


The IEEE-NPSS Real-Time Conference is devoted to the latest developments in real-time techniques in particle physics, nuclear and astrophysics, plasma physics and nuclear fusion, medical physics, space science, accelerators and general nuclear power and radiation instrumentation. Taking place every second year, it is sponsored by the Computer Application in Nuclear and Plasma Sciences technical committee of the IEEE Nuclear and Plasma Sciences Society (NPSS). This year, the 18th conference in the series, RT2012, was organized by the Lawrence Berkeley National Laboratory (LBNL) under the chair of Sergio Zimmermann and took place on 11–15 June at the Shattuck Plaza Hotel in downtown Berkeley, California.

The conference returned to the US after being held in Lisbon for RT2010 and in Beijing in 2009, when the first Asian conference of this series was held at the Institute for High-Energy Physics. RT2012 attracted 207 registrants, with a large proportion of young researchers and engineers. Following the meetings in Beijing and Lisbon, there is now a significant attendance from Asia, as well as from the fusion and medical communities, making the conference an excellent place to meet real-time specialists with diverse interests from around the world.

Presentations and posters

As in the past, the 2012 conference consisted solely of plenary oral sessions. This format encourages participants to look at real-time developments in sectors other than their own and greatly fosters the necessary interdisciplinary exchange of ideas. Following a long tradition, each poster session is associated with a “mini-oral” presentation session. Presenters can opt for a two-minute talk, which helps them to emphasize the highlights of their posters. It is also an excellent educational opportunity for young participants to present and promote their work. With a mini-oral presentation still fresh in mind, delegates can then seek out the appropriate author during the following poster session, an approach that stimulates lively and intensive discussions.

The conference began as usual with an opening session with five invited speakers who surveyed hot topics from physics or innovative technical developments. First, David Schlegel of LBNL gave an introduction to the physics of learning about dark energy from the largest galaxy maps. Christopher Marshall of Lawrence Livermore National Laboratory introduced the National Ignition Facility and its integrated computer system. CERN’s Niko Neufeld gave an overview talk on the trigger and data acquisition (DAQ) at the LHC, which provided an introduction to the large number of detailed presentations that followed during the week. Henry Frisch of the University of Chicago presented news from the Large Area Photodetectors project, which aims for submillimetre and subnanosecond resolution in space and time, respectively. Last, Fermilab’s Ted Liu spoke about triggering in high-energy physics, with selected topics for young experimentalists.

The technical programme, organized by Réjean Fontaine of the University of Sherbrooke, Canada, brought together various areas of real-time computing applications and DAQ, covering a range of topics in various fields. About half of the topics came from high-energy physics, the rest mainly from astrophysics and nuclear fusion, medical applications and accelerators.

Some important sessions, such as that on Data Acquisition and Intelligent Signal Processing, started with an invited introductory or review talk. Ealgoo Kim of Stanford University reviewed the trend of data-path structures for DAQ in positron-emission tomography systems, showing how the electronics and DAQ are similar to those for detectors in high-energy physics. Bruno Gonçalves of the Instituto Superior Técnico Lisbon spoke about trends in controls and DAQ in fusion devices, such as ITER, particularly towards reaching the necessary high availability. Riccardo Paoletti of the University of Siena and INFN Pisa presented the status and perspectives on fast waveform digitizers, with many examples being given in following presentations.

Rapid evolution

This year the conference saw the rapid and systematic evolution of intelligent signal processing as it moves further towards front-end signal processing at the start of the DAQ chain. This incorporates ultrafast analogue and timing converters that use the waveform analysis concept together with powerful digital signal-processing architectures, which are necessary to compress and extract data in real time in a quasi “deadtime-less” process. Read-out systems are now made of programmable devices that include hardware and software techniques and tools for programming the reconfigurable hardware, such as field-programmable gate arrays, graphic processing units (GPUs) and digital signal processors.

An increasing number of applications and projects using new standards

Participants saw the evolution of many new projects that include architectures dealing with fully real-time signal processing, digital data extraction, compression and storage at the front-end, such as the PANDA antiproton-annihilation experiment for the Facility for Antiproton and Ion Research being built at Darmstadt. For the read-out and data-collection systems, the conceptual model is based on fast data transfer, now with multigigabit parallel links from the front-end data buffers up to terabit networks with their associated hardware (routers, switches, etc.). Low-level trigger systems are becoming fully programmable and in some experiments, such as LHCb at CERN, challenging upgrades of the level-0 selection scheme are planned, with trigger processing taking place in real time at large computer farms. There is an ongoing integration of processing farms for high-level triggers and filter farms for online selection of interesting events at the LHC. Experiences with real data were reported at the conference, providing feedback on the improvement of the event selection process.

A survey of control, monitoring and test systems for small and large instruments, as well as new machines – such as the X-ray Free-Electron Laser at DESY – was presented, showing the increasing similarities and possibilities for integration with standard DAQ systems of these instruments. A new track at the conference this year dealt with upgrades of existing systems, mainly related to LHC experiments at CERN and to Belle II at KEK and the SuperB project.

The conference saw an increasing number of applications and projects using new standards and emerging technologies, such as the Advanced Telecommunications Computing Architecture (ATCA), as well as feedback on the experience and lessons learnt from successes and failures. This last topic, in particular, was new at this conference. Rather than showing only great achievements in glossy presentations, it can also be helpful to learn from other people’s difficulties, problems and even mistakes.

CANPS Prize awarded

A highlight of the Real-Time conference is the presentation of the CANPS prize, which is given to individuals who have made outstanding contributions in the application of computers in nuclear and plasma sciences. This year the award went to Christopher Parkman, now retired from CERN, for the “outstanding development and user support of modular electronics for the instrumentation in physics applications”. Special efforts were also made to stimulate student contributions and awards were given for the three best student papers, selected by a committee chaired by Michael Levine of Brookhaven National Laboratory.

Last, an industrial exhibit by several relevant companies (CAEN, National Instruments, Schroff, Struck, Wiener and ZNYX) ran through the week. There was also the traditional two-day workshop on ATCA and MicroTCA – the latest DAQ standard, following CAMAC, Fastbus and VME, this one originating from the telecommunications industry. This workshop with tutorials, organized by Ray Larsen and Zheqiao Geng of SLAC and Sergio Zimmermann of LBNL, took place during the weekend before the conference. Two short courses were also held that same weekend: one by Mariano Ruiz of the Technical University of Madrid on DAQ systems and one by Hemant Shukla of LBNL on data analysis with fast graphics cards (GPUs).

The 19th Real-Time Conference will take place in May 2014 in the deer park inside the city of Nara, Japan. It will be organized jointly by KEK, Osaka University and RIKEN under the chair of Masaharu Nomachi. A one-week Asian summer school on advanced techniques in electronics, trigger, DAQ and read-out systems will also be organized jointly with the conference.

• More details about the Real-Time Conference are available online. A special edition of IEEE Transactions on Nuclear Science will include all eligible contributions from RT2012, with Sascha Schmeling of CERN as senior editor.

Deflector shields protect the lunar surface


The origin of the enigmatic “lunar swirls” – patches of relatively pale lunar soil, some measuring several tens of kilometres across – has been an unresolved mystery since the mid-1960s, when NASA’s Lunar Orbiter spacecraft mapped the surface of the Moon in preparation for the Apollo landings. Now, a team of physicists has used a combination of satellite data, plasma-physics theory and laboratory experiments to show how the features can arise when the hot plasma of the solar wind is deflected around “mini-magnetospheres” associated with magnetic anomalies at the surface of the Moon.

Initially thought to be smeared-out craters, close-range photographs from Lunar Orbiter II showed that at least one large swirl – named Reiner Gamma, after the nearby Reiner impact crater – could not be a crater. Studies from subsequent Apollo missions revealed that the swirls are associated with localized magnetic fields in the lunar crust. Because the Moon today has no overall magnetic field, these “magnetic anomalies” seem to be remnants of a field that has existed in the past.

In 1998–1999, the Lunar Prospector mission discovered that the magnetic anomalies create miniature magnetospheres above the Moon’s surface, just as the Earth’s planetary magnetic field does on a much larger scale when it deflects the charged particles of the solar wind around the planet. Could the mini-magnetospheres on the Moon, which are only a few hundred kilometres in size, somehow shield the crust from the solar wind and so prevent the surface from darkening as a result of constant bombardment by incoming particles?


One problem with this idea has been that the magnetic fields – of the order of nanotesla – seem to be too weak to affect the energetic particles of the solar wind on the scales observed. However, a team led by Ruth Bamford of the Rutherford Appleton Laboratory and the University of York has shown that it is the electric field associated with the shock formed when the solar wind interacts with the magnetic field that deflects the particles bombarding the Moon.

Data from various lunar-orbiting spacecraft suggested a picture in which the solar wind is deflected around a magnetic “bubble”, creating a cavity in the plasma density enclosed by a “skin” only kilometres thick. This skin effectively reflects incoming protons, increasing their energy.

To explain these observations, Bamford and colleagues invoke a two-fluid model of the plasma, with unmagnetized ions and magnetized electrons. The electrons are slowed down and deflected by the magnetic barrier that forms when the magnetic field of the solar wind encounters the magnetic anomaly – but the much heavier ions do not respond so quickly. This leads to a separation in space-charge and hence an electric field.
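The scale separation behind this two-fluid picture can be checked with a back-of-the-envelope Larmor-radius estimate (the field strength and wind speed below are assumed, order-of-magnitude values, not numbers from the paper):

```python
def gyroradius(mass_kg, speed_m_s, charge_c, b_tesla):
    """Larmor radius r = m*v / (|q|*B) of a charged particle in a magnetic field."""
    return mass_kg * speed_m_s / (abs(charge_c) * b_tesla)

V_SW = 4.0e5           # solar-wind speed ~400 km/s (assumed typical value)
B = 1.0e-8             # ~10 nT anomaly field (assumed order of magnitude)
E_CHARGE = 1.602e-19   # elementary charge, C

r_p = gyroradius(1.673e-27, V_SW, E_CHARGE, B)  # proton
r_e = gyroradius(9.109e-31, V_SW, E_CHARGE, B)  # electron
print(f"proton gyroradius  ~ {r_p / 1e3:.0f} km")  # hundreds of kilometres
print(f"electron gyroradius ~ {r_e:.0f} m")        # hundreds of metres
```

With anomaly fields extending over tens to hundreds of kilometres, the electrons (Larmor radius of a few hundred metres) are tied to the field lines while the much heavier protons (a few hundred kilometres) barely feel them – exactly the charge separation the model invokes.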

The team confirmed the principle of their theoretical model by using a plasma wind tunnel with a supersonic stream of hydrogen plasma and the dipole field of a magnet. The experiment showed that the plasma particles were indeed “corralled” by a narrow electrostatic field to form a cavity in the plasma, so protecting areas of the surface towards which the particles were flowing. Translated to the more irregular magnetic fields on the lunar surface, with a range of overlapping cavities, this can provide the long-awaited explanation of the light and dark patterns – protected and unprotected regions, respectively – that make up the swirls.
