The deepest clean lab in the world

Deep in a mine in Greater Sudbury, Ontario, Canada, you will find the deepest flush toilets in the world. Four of them, actually, ensuring the comfort of the staff and users of SNOLAB, an underground clean lab with very low levels of background radiation that specialises in neutrino and dark-matter physics.

Toilets might not be the first thing that comes to mind when discussing a particle-physics laboratory, but they are one of numerous logistical considerations when hosting 60 people per day at a depth of 2 km for 10 hours at a time. SNOLAB is the world’s deepest cleanroom facility, a class-2000 cleanroom (see panel below) the size of a shopping mall situated in the operational Vale Creighton nickel mine. It is an expansion of the facility that hosted the Sudbury Neutrino Observatory (SNO), a large, heavy-water detector designed to detect neutrinos from the Sun. In 2001, SNO contributed to the discovery of neutrino oscillations, leading to the joint award of the 2015 Nobel Prize in Physics to SNO spokesperson Arthur B McDonald and Super-Kamiokande spokesperson Takaaki Kajita.

Initially, there were no plans to maintain the infrastructure beyond the timeline of SNO, which was just one experiment and not a designated research facility. However, following the success of the SNO experiment, there was increased interest in low-background detectors for neutrino and dark-matter studies.

Building on SNO’s success

The SNO collaboration was first formed in 1984, with the goal of solving the solar neutrino problem. This problem surfaced during the 1960s, when the Homestake experiment in the Homestake Mine at Lead, South Dakota, began looking for neutrinos created in the early stages of solar fusion. This experiment and its successors, using different target materials and technologies, consistently observed only 30–50% of the neutrinos predicted by the standard solar model. A seemingly small nuisance posed a large problem, which required a large-scale solution.

SNO used a 12 m-diameter spherical vessel containing 1000 tonnes of heavy water to count solar neutrino interactions. Canada had vast reserves of heavy water for use in its nuclear reactors, making it an ideal location for such a detector. The experiment also required an extreme level of cleanliness, so that the signals physicists were searching for would not be confused with background events coming from dust, for instance. The SNO collaboration also had to develop new techniques to measure the inherent radioactivity of their detector materials and the heavy water itself.

Using heavy water gave SNO the ability to observe three different neutrino reactions: one reaction could only happen with electron neutrinos; one was sensitive to all neutrino flavours (electron, muon and tau); and the third provided the directionality pointing back to the Sun. These three complementary interactions let the team test the hypothesis that solar neutrinos were changing flavour as they travelled to Earth. In contrast to previous experiments, this approach allowed SNO to measure the parameters describing neutrino oscillations without depending on solar models. SNO’s data confirmed what previous experiments had seen and verified theoretical predictions, implying that neutrinos do indeed oscillate during their Sun–Earth journey. The experiment ran for seven years and produced 178 papers, with an author list that grew to more than 275 names.
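The arithmetic behind that test is simple enough to sketch with placeholder numbers (round values in arbitrary units, not the published SNO fluxes): the electron-neutrino-only reaction and the all-flavour reaction yield two fluxes whose ratio gives the fraction of solar neutrinos still arriving with electron flavour.

```python
# Toy illustration of the SNO flavour argument. The flux values below are
# round placeholders in arbitrary units, not the measured SNO results.
phi_e = 1.8      # flux from the reaction sensitive only to electron neutrinos
phi_total = 5.1  # flux from the reaction sensitive to all three flavours

survived = phi_e / phi_total  # fraction still electron-flavour on arrival
changed = 1.0 - survived      # fraction that changed flavour en route

print(f"still electron-flavour: {survived:.0%}")
print(f"changed flavour: {changed:.0%}")
```

With numbers in this ballpark, roughly two thirds of the electron neutrinos produced in the Sun arrive as muon or tau neutrinos, which is the flavour change that the heavy-water measurements established independently of solar models.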

In 2002, the Canadian community secured funding to create an extended underground laboratory with SNO as the starting point. Construction of SNOLAB’s underground facility was completed in 2009 and two years later the last experimental hall entered “cleanroom” operation. Some 30 letters of interest were received from different collaborations proposing potential experiments, helping to define the requirements of the new lab.

SNOLAB’s construction was made possible by capital funds totalling CAD$73 million, with more than half coming from the Canada Foundation for Innovation through the International Joint Venture programme. Instead of a single giant cavern, local company Redpath Mining excavated several small halls and two large halls to hold experiments. The smaller halls helped the engineers manage the enormous stress placed on the rock in larger underground cavities. Bolts 10 m long stabilise the rock in the ceilings of the remaining large caverns, and throughout the lab the rock is covered with a 10 cm-thick layer of spray-on concrete for further stability, with an additional hand-trowelled layer to help keep the walls dust-free. This latter task was carried out by Béton Projeté MAH, the same company that finished the bobsled track for the 2010 Vancouver Winter Olympics.

In addition to the experimental halls, SNOLAB is equipped with a chemistry laboratory, a machine shop, storage areas, and a lunchroom. Since the SNO experiment was still running when new tunnels and caverns were excavated, the connection between the new space and the original clean lab area was completed late in the project. The dark-matter experiments DEAP-1 and PICASSO were also already running in the SNO areas before construction of SNOLAB was completed.

Dark matter, neutrinos, and more

Today, SNOLAB employs a staff of over 100 people, working on engineering design, construction, installation, technical support and operations. In addition to providing expert and local support to the experiments, SNOLAB research scientists undertake research in their own right as members of the collaborations.

With so much additional space, SNOLAB’s physics programme has expanded greatly during the past seven years. SNO has evolved into SNO+, in which a liquid scintillator replaces the heavy water to increase the detector’s sensitivity. The scintillator will be doped with tellurium, making SNO+ sensitive to the hypothetical process of neutrinoless double-beta decay. Two of tellurium’s natural isotopes (128Te and 130Te) are known to undergo conventional double-beta decay, making them good candidates to search for the long-sought neutrinoless version. Detecting this decay would violate lepton-number conservation, proving that the neutrino is its own antiparticle (a Majorana particle). SNO+ is one of several experiments currently hunting this process down.

Another active SNOLAB experiment is the Helium and Lead Observatory (HALO), which uses 76 tons of lead blocks instrumented with 128 helium-3 neutron detectors to capture the intense neutrino flux generated when the core of a star collapses at the early stages of a supernova. Together with similar detectors around the world, HALO is part of a supernova early-warning system, which allows astronomers to orient their instruments to observe the phenomenon before it is visible in the sky.

With no fewer than six active projects, dark-matter searches comprise a large fraction of SNOLAB’s physics programme. Many different technologies are employed to search for the dark-matter candidate of choice: the weakly interacting massive particle (WIMP). The PICASSO and COUPP collaborations were both using bubble chambers to search for WIMPs, and merged into the very successful PICO project. Through successive improvements, PICO has endeavoured to enhance its sensitivity to WIMP spin-dependent interactions by an order of magnitude every couple of years. Its sensitivity is best for WIMP masses around 20 GeV/c². Currently the PICO collaboration is developing a much larger version with up to 500 litres of active-mass material.

DEAP-3600, successor to DEAP-1, is one of the biggest dark-matter detectors ever built, and it has been taking data for almost two years now. It seeks to detect spin-independent interactions between WIMPs and 3300 kg of liquid argon contained in a 1.7 m-diameter acrylic vessel. The best sensitivity will be achieved for a WIMP mass of 100 GeV/c². Using a different technology, the DAMIC (Dark Matter In CCDs) experiment employs CCD sensors, which have low intrinsic noise levels, and is sensitive to WIMP masses as low as 1 GeV/c².

Although the science at SNOLAB primarily focuses on neutrinos and dark matter, the low-background underground environment is also useful for biology experiments. REPAIR explores how low radiation levels affect cell development and the repair of DNA damage. One hypothesis is that removing background radiation may be detrimental to living systems; REPAIR can help determine whether this hypothesis is correct and characterise any negative impacts. Another experiment, FLAME, studies the effect of prolonged time spent underground on living organisms, using fruit flies as a model. The findings from this research could be used by mining companies to support a healthier workforce.

Future research

There are many exciting new experiments under construction at SNOLAB, including several dark-matter experiments. While the PICO experiment is increasing its detector mass, other experiments are using several different technologies to cover a wide range of possible WIMP masses. The SuperCDMS experiment and CUTE test facility use solid-state silicon and germanium detectors kept at temperatures near absolute zero to search for dark matter, while the NEWS-G experiment will use gases such as hydrogen, helium and neon in a 1.4 m-diameter copper sphere.

SNOLAB still has space available for additional experiments requiring a deep underground cleanroom environment. The Cryopit, the largest remaining cavern, will be used for a next-generation double-beta-decay experiment. Additional spaces outside the large experimental halls can host several small-scale experiments. While the results of today’s experiments will influence future detectors and detector technologies, the astroparticle physics community will continue to demand clean underground facilities to host the world’s most sensitive detectors. From an underground cavern carved out to host a novel neutrino detector to the deepest cleanroom facility in the world, SNOLAB will continue to seek out and host world-class physics experiments to unravel some of the universe’s deepest mysteries.

Exploring how antimatter falls

Two new experiments at CERN, ALPHA-g and GBAR, have begun campaigns to check whether antimatter falls under gravity at the same rate as matter.

The gravitational behaviour of antimatter has never been directly probed, though indirect measurements have set limits on the deviation from standard gravity at the level of 10⁻⁶ (CERN Courier January/February 2017 p39). Detecting even a slight difference between the behaviour of antimatter and matter with respect to gravity would mean that Einstein’s equivalence principle is not perfect and could have major implications for a quantum theory of gravity.

ALPHA-g, a close model of the ALPHA experiment, combines antiprotons from CERN’s Antiproton Decelerator (AD) with positrons from a sodium-22 source and traps the resulting antihydrogen atoms in a vertical magnetic trap about 2 m tall. To measure their free fall, the field is switched off so that the atoms drop under gravity; the positions at which the antiatoms annihilate with normal matter then allow their gravitational acceleration to be determined precisely.

GBAR adopts a similar approach but takes antiprotons from the new and lower-energy ELENA ring attached to the AD (CERN Courier December 2016 p16) and combines them with positrons from a small linear accelerator to make antihydrogen ions. Once a laser has stripped all but one positron, the neutral antiatoms will be released from the trap and allowed to fall from a height of 20 cm.

ALPHA-g began taking beam on 30 October, while ELENA has been delivering beam to GBAR since the summer, allowing the collaboration to perfect the beam-delivery system. Both experiments are being commissioned before CERN’s accelerators are shut down on 10 December for a two-year period. The ALPHA-g team hopes to gather enough data during this short period to make a first measurement of antihydrogen in free fall, while the brand-new GBAR experiment aims to make a first measurement when antiprotons are back in the machine in 2021. A third experiment at the AD hall, AEgIS, which has been in operation for several years, is also measuring the effect of gravity on antihydrogen using yet another approach, based on a beam of antihydrogen atoms. AEgIS is also hoping to produce its first antihydrogen atoms this year.

So far, most efforts at the AD have focused on looking for charge–parity–time violation by studying the spectroscopy of antihydrogen and comparing it with that of hydrogen (CERN Courier March 2018 p30). This latest round of experiments opens a new avenue in antimatter exploration.

The tale of a billion-trillion protons

Before being smashed into matter at high energies to study nature’s basic laws, protons at CERN begin their journey rather uneventfully, in a bottle of hydrogen gas. The protons are separated by injecting the gas into the cylinder of an ion source and making an electrical discharge, after which they enter what has become the workhorse of CERN’s proton production for the past 40 years: a 36 m-long linear accelerator called Linac2. Here, the protons are accelerated to an energy of 50 MeV, reaching approximately one-third of the speed of light, ready to be injected into the first of CERN’s circular machines: the Proton Synchrotron Booster (PSB), followed by the Proton Synchrotron (PS) and the Super Proton Synchrotron (SPS). At each stage of the chain, they may end up driving fixed-target experiments, generating exotic beams in the ISOLDE facility, or being injected into the Large Hadron Collider (LHC) to be accelerated to the highest energies.
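The quoted speed follows directly from relativistic kinematics. A quick check, assuming the standard proton rest energy of about 938 MeV (a textbook value, not stated in the article):

```python
import math

def beta_from_kinetic_energy(T_mev, m_mev=938.272):
    """Return v/c for a particle with kinetic energy T and rest energy m (both in MeV)."""
    gamma = 1.0 + T_mev / m_mev          # Lorentz factor: total energy / rest energy
    return math.sqrt(1.0 - 1.0 / gamma**2)

beta = beta_from_kinetic_energy(50.0)    # Linac2 output energy
print(f"v/c at 50 MeV: {beta:.3f}")      # close to one third of the speed of light
```

The result, about 0.31 c, confirms the "approximately one-third of the speed of light" figure; already at 50 MeV the proton is mildly relativistic, which is why the downstream synchrotrons take over from there.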

Situated at ground level on the main CERN site, Linac2 has delivered all of the protons for the CERN accelerator complex since 1978. Construction of Linac2 started in December 1973, and the first 50 MeV beam was obtained on 6 September 1978. Within a month, the design current of 150 mA was reached and the first injection tests in the PSB started. Routine operation of the PSB started soon afterwards, in December 1978. As proudly announced by CERN at the time, Linac2 was completed on budget and on schedule, for an overall cost of 23 million Swiss francs.

Linac2 is the machine that started more than a billion-trillion protons on trajectories that led to discoveries including the W and Z bosons, the creation of antihydrogen and the completion of the long search for the Higgs boson. On 12 November, Linac2 was switched off and will now be decommissioned as part of a major upgrade to the laboratory’s accelerator complex (CERN Courier October 2017 p32). Its design, operation and performance have been key factors in the success of CERN’s scientific programme and paved the way to its successor, Linac4, which will take over the task of producing CERN’s protons from 2020.

The decision to build Linac2 was taken in October 1973, with the aim of providing a higher-intensity proton beam than the existing Linac1 machine. Linac1 had been the original injector both to the PS when it began service in 1959, and to its booster (the PSB) when it was added to the chain in 1972. However, Linac1 was limited in the intensity it could provide, and the only route to higher intensity was an entirely new machine.

Forward thinking

Linac2’s design parameters were chosen to comfortably exceed the nominal PSB requirements, providing a safety margin during operation and for future upgrades. Furthermore, it was decided to install the linac in a new building parallel to the Linac1 location instead of in the Linac1 tunnel. This avoided a long shut-down for installation and commissioning, and ensured that Linac1 was available as a back-up during the first years of Linac2 operation.

Linac2’s proton source was originally a huge 750 kV Cockcroft–Walton generator located in a shielded room, separate from the accelerator hall (figure 1), which provided the pre-acceleration to the entrance of the 4 m-long low-energy beam transport line (LEBT). This transport line included a bunching system made of three RF cavities, after which protons were fed to the main accelerator: a drift-tube linac (DTL) that had many improvements with respect to the Linac1 design and became a standard for linacs at the time. The three accelerating RF “tanks”, increasing the beam energy up to 10.3, 30.5 and 50 MeV, respectively, with a total length of 33.3 m, were made of mild steel co-laminated with a copper sheet, with the vacuum and RF sealing provided by aluminium wire joints.

The RF system is of prime importance for the performance of linear accelerators. For Linac2, the amplifiers had to provide a total RF power of 7.5 MW just to accelerate the beam. The RF amplifiers were based on the Linac1 design principles, with larger diameters in order to safely deliver the higher power, and the RF tube was the same triode already used for most of the Linac1 amplifiers.

The most significant upgrade to Linac2, which took place during the 1992/1993 shutdown, was the replacement of the 750 kV Cockcroft–Walton generator and of the LEBT with a new RF quadrupole (RFQ) only 1.8 m long, capable of bunching, focusing and accelerating the beam in the same RF structure. The RFQ was a new invention of the early 1980s that was immediately adopted at CERN: after the successful construction of a prototype RFQ for Linac1 (which at the time was still in service), the development of a record-breaking high-intensity RFQ for Linac2, capable of delivering to the DTL a current of 200 mA, started in 1984. The prototype high-current RFQ was commissioned on a test stand in 1989, and the replacement of the Linac2 pre-injector was officially approved in 1990.

Gearing up for the LHC

The main motivation for the higher current of Linac2 was to prepare the CERN injectors for the LHC, whose design was already in progress. It was clear that the LHC would require unprecedented beam brightness (intensity per unit emittance) from the injector chain, and one of the options considered was single-turn injection into the PSB of a high-current linac beam to minimise emittance growth. This, in turn, required the highest achievable current from the linac. Another motivation for the replacement was the simpler operation and maintenance of the smaller RFQ compared with the large Cockcroft–Walton installation.

Construction of the new RFQ (figure 2) started soon after approval, and the new “RFQ2” system was installed at Linac2 during the normal shut-down in 1992/1993. Commissioning of the RFQ2 with Linac2 took a few weeks, and the 1993 physics run started with the new injector. Reaching the full design performance of the RFQ took a few years, mainly due to the slow cleaning of the surfaces that at first limited the peak RF fields possible inside the cavity. After the optics in the long transfer line were modified, the goal of 180 mA delivered to the PSB was achieved in 1998 – and this still ranks as the highest intensity proton beam ever achieved from a linac.

Throughout its life, Linac2 has undergone many upgrades to its subsystems, including major renovations of the control systems in 1993 and 2012, the exchange of more than half its magnet power supplies for more modern units (although a large number were still the same ones installed in the 1970s) and renovation of the RFQ and vacuum-control systems. Nevertheless, at its core, the three DTL RF cavities that form the backbone of the linac remained unchanged since their construction, as did the more than 120 electromagnetic quadrupoles sealed in the drift tubes, each of which has pulsed more than 700 million times without a single magnet failure (figure 3).

Despite the performance and reliability of Linac2, the performance bottleneck of the injection chain for the LHC moved to the injection process of the PSB, which could only be resolved with a higher injection energy. This meant increasing the energy of the linac. At the time this was being considered, around a decade ago, Linac2 was already reaching 30 years of operation, and basing a new injector on it would have required a major consolidation effort. So the decision was made to move to a new accelerator called Linac4 (the name Linac3 is taken by an existing CERN linac that produces ions), which meant a clean slate for its design. Linac4 (figure 4) not only injects into the PSB at the higher energy of 160 MeV, but also switches to negative hydrogen-ion beam acceleration, which allows higher intensities to be accumulated in the PSB after removing the excess electrons.

As was the case when Linac2 took over from Linac1, Linac4 has been built in its own tunnel, allowing construction and commissioning to take place in parallel to the operation of Linac2 for the LHC (CERN Courier January/February 2018 p19). In connecting Linac4 to the PSB, some of the Linac2 transfer line will be dismantled to make space for additional shielding. But the original source, RFQ and three DTL cavities will remain in place for now – even if there is no possibility of their serving as a back-up once the change to Linac4 is made. As for the future of Linac2, hopefully you might one day be able to find part of the accelerator on display somewhere on the CERN site, so that its place in history is not forgotten.

Fixing gender in theory

Improving the participation of under-represented groups in science is not just the right thing to do morally. Science benefits from a community that approaches problems in a variety of different ways, and there is evidence that teams with mixed perspectives increase productivity. Moreover, many countries face a skills gap that can only be addressed by training more scientists, drawing from a broader pool of talent that cannot reasonably exclude half the population.

In the high-energy theory (HET) community, where creativity and originality are so important, the problem is particularly acute. Many of the breakthroughs in theoretical physics have come from people who think “differently”, yet the community does not acknowledge that being mostly male and white encourages groupthink and a lack of originality.

The gender imbalance in physics is well documented. Data from the American Physical Society and the UK Institute of Physics indicate that around 20% of the physics-research community is female, and the situation deteriorates significantly as one looks higher on the career ladder. By contrast, the percentage of women is higher in astronomy, and the number of women at senior levels in astronomy has increased quite rapidly over the last decade.

However, research into gender in science often misses issues specific to particular disciplines such as HET. While many previous studies have explored challenges faced by women in physics, theory has not specifically been targeted, even though the representation of women is anomalously low.

In 2012, a group of string theorists in Europe launched a COST (European Cooperation in Science and Technology) action with a focus on gender in high-energy theory. Less than 10% of string theorists are female, and, worryingly, postdoc-application data in Europe show that the percentage of female early-career researchers has not changed significantly over the past 15 years.

The COST initiative enabled qualitative surveys and the collection of quantitative data. We found some evidence that women PhD students are less likely to continue onto postdoctoral positions than male ones, although further data are needed to confirm this point. The data also indicate that the percentage of women at senior levels (e.g. heads of institutes) is extremely low, less than 5%. Qualitative data raised issues specific to HET, including the need for mobility for many years before getting a permanent position and the long working hours, which are above average even for academics. A series of COST meetings also provided opportunities for women in string theory to network and to discuss the challenges that they face.

Following the conclusion of the COST action in 2017, women from the string theory community obtained support to continue the initiative, now broadened to the whole of the HET community. “GenHET” is a permanent working group hosted by the CERN theory department whose goals are to increase awareness of gender issues, improve the presence of women in decision-making roles, and provide networking, support and mentoring for women, particularly during their early career.

GenHET’s first workshop on high-energy theory and gender was hosted by CERN in September, bringing together physicists, social scientists and diversity professionals (see Faces and Places). Further meetings are planned, and the GenHET group is also developing a web resource that will collect research and reports on gender and science, advertise activities and jobs, and offer advice on evidence-based practice for supporting women. GenHET aims to propose concrete actions, for example encouraging the community to implement codes of conduct at conferences, and all members of the HET community are welcome to join the group.

Diversity is about much more than gender: in the HET community, there is also under-representation of people of colour and LGBTQ+ researchers, as well as those who are disabled, carers, come from less privileged socio-economic backgrounds, and so on. GenHET will work in collaboration with networks focusing on other diversity characteristics to help improve this situation, turning the high-energy theory community into one that truly reflects all of society.

CMS weighs in on flavour anomalies

A report from the CMS experiment

Recent results from LHCb and other experiments appear to challenge the assumption of lepton-flavour universality. To explore further, the CMS collaboration has recently conducted a new search probing one of the theories that attempts to explain these flavour “anomalies”. Using 77.3 fb⁻¹ of proton–proton collision data recorded in 2016 and 2017 at a centre-of-mass energy of 13 TeV, the CMS analysis is the first dedicated search for a neutral gauge boson with specific properties that couples only to leptons of the second and third family.

Although the Standard Model (SM) has been successful in describing current experimental results, it is generally believed to be incomplete. It cannot, for example, explain dark matter or the observed asymmetry between matter and antimatter in the universe. There are also several smaller tensions between experimental results and SM predictions that have been building up over the last few years. One set of intriguing anomalies has been reported by LHCb and other dedicated B-physics experiments, indicating a possible violation of lepton-flavour universality in B-meson decays (CERN Courier April 2018 p23). Another is the long-standing tension in the measurement of the anomalous magnetic moment of the muon, for which an updated measurement is eagerly awaited (CERN Courier September 2018 p9).

One extension to the SM that has been proposed to explain these anomalies is an enlarged SM gauge group with an additional U(1) symmetry. Spontaneous breaking of this symmetry leads to the prediction of a new massive gauge boson, Zʹ. To keep the extended gauge symmetry free from quantum anomalies, only certain generation-dependent couplings are allowed. The model investigated by CMS promotes the difference in lepton numbers between the second and third generation to a local gauge symmetry, and until now has only been constrained slightly by experiment. Since the predicted Zʹ boson only couples to second- and third-generation leptons, the only way to produce it at the LHC is as final-state radiation off one of these leptons. The ideal source of muons for the purposes of this search is the decay of the SM Z boson to two muons, which can be measured with excellent mass resolution (~1%) in CMS. If a Zʹ boson exists, it will be radiated by one of the muons and decay subsequently to another pair of muons, leading to a final state with four muons.

Such a final state is also produced by a rare SM Z-boson decay to four muons mediated by an off-shell photon. The first observation of this rare decay of the SM Z boson in proton–proton collisions was reported by CMS in 2012. In order to reduce this background, the search exploits the resonant character of the new gauge boson’s di-muon decay. Events are selected that contain at least four muons with an invariant mass near the SM Z-boson mass. Di-muon candidates are then formed from muon pairs of opposite sign and a peak in their invariant mass distribution is sought, which would indicate the presence of a Zʹ particle.
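The pairing step can be sketched in a few lines. This is an illustrative toy with invented four-vectors, not the CMS analysis code:

```python
import itertools
import math

def invariant_mass(p1, p2):
    """Invariant mass of the sum of two four-vectors (E, px, py, pz), in GeV."""
    E = p1[0] + p2[0]
    px, py, pz = (p1[i] + p2[i] for i in (1, 2, 3))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

# (charge, (E, px, py, pz)) for four toy muons -- invented values
muons = [(+1, (45.0, 30.0, 20.0, 25.0)),
         (-1, (40.0, -28.0, -15.0, 22.0)),
         (+1, (25.0, 10.0, -18.0, 12.0)),
         (-1, (20.0, -8.0, 14.0, -9.0))]

# keep only opposite-sign pairs, as in the search
masses = [invariant_mass(p1, p2)
          for (q1, p1), (q2, p2) in itertools.combinations(muons, 2)
          if q1 * q2 < 0]
print([f"{m:.1f}" for m in masses])
```

In the real analysis, an excess of such pairs clustering at a single mass value, on top of the smooth SM four-muon background, would signal the presence of a Zʹ.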

The event yields are found to be consistent with the SM predictions (figure 1). Upper limits of the order of 10⁻⁸–10⁻⁷ are set on the branching fraction of a Z boson decaying to two muons and a Zʹ, with the latter also decaying into two muons, as a function of the Zʹ mass. This can be interpreted as a limit on the Zʹ particle’s coupling strength to muons, and provides the first dedicated limits on these Zʹ models at the LHC. Compared to other experiments and to indirect limits from the LHC obtained at lower centre-of-mass energies during Run 1, this search excludes a significant portion of the parameter space favoured by the B-physics anomalies (figure 2). The analysis demonstrates the power and flexibility of the CMS experiment to adapt to and test new physics models, which in turn react to previous experimental results, showing that experiment and theory go hand-in-hand.

Gaia finds evidence of old Milky Way merger

Many of the stars appearing in the night sky did not originate from within our galaxy, concludes a new study of data from the European Space Agency’s Gaia observatory. Instead, Gaia has found evidence that these stars formed in a smaller galaxy that merged with ours about 10 billion years ago.

Gaia was launched in 2013 with the aim of measuring the positions and distances of more than one billion astronomical objects (mainly stars) in and around our galaxy with unprecedented precision. Using Gaia data containing about seven million stars, Amina Helmi of the University of Groningen in the Netherlands and colleagues have found that a subset of these stars is different from the bulk of the stars in the Milky Way. Earlier research had shown that some stars in the galaxy’s inner stellar halo, which surrounds the central bulge and disk, have different chemical abundances from the bulge and disk stars (see figure). But the latest study found that these halo stars also exhibit orbits around the galactic centre that differ significantly from the rest of the stars.

The orbits of the stars in a galaxy typically follow that of the gas cloud in which they were born, which means that a proto-galaxy consisting of an orbiting gas cloud will produce stars orbiting along with the cloud. However, Helmi and co-workers show that many of the Milky Way’s halo stars orbit backwards relative to the rest of the galaxy, suggesting that their origin is probably different. The team then compared the Gaia observations with simulations in which the Milky Way merged in the past with a smaller galaxy with 25% of its mass, finding a remarkable similarity between the observed and simulated orbits.

Additional analysis of spectral data from APOGEE-2 (Apache Point Observatory Galactic Evolution Experiment), which is part of the Sloan Digital Sky Survey, revealed that the halo stars contain fewer of the chemical elements that are produced in specific types of supernovae, indicating that they are significantly older than the bulk of the Milky Way’s stars.

Taken together, the results suggest that, after the smaller galaxy (named Gaia–Enceladus by the authors) merged with the Milky Way, it lost all the gas it needed to produce new stars. As a result, only the old stars survived and no new stars were born. The age of the youngest stars from Gaia–Enceladus – about 10 billion years – can therefore tell astronomers when the merger took place. A final piece of evidence that this dramatic event occurred comes from Gaia data of 13 star clusters orbiting the Milky Way at large distances. The orbits of these clusters, which contain millions of gravitationally bound stars, match those that would be expected for the remnants of Gaia–Enceladus.

The results, published in Nature, constitute one of the first major discoveries to emerge from Gaia data. They shed light on the origin of our galaxy and galaxy mergers in general, but much more will no doubt be learned from the vast amount of data that the satellite has gathered.

Λc+-baryon probes charm-quark hadronisation

The first measurement of Λc+-baryon production in lead–lead (Pb–Pb) collisions at an energy of 5.02 TeV per colliding nucleon pair was presented by the ALICE collaboration at the International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions, held at Aix-les-Bains from 30 September to 5 October. This measurement is essential to understand how charm-quark hadronisation is affected by the presence of the quark–gluon plasma (QGP) created in high-energy heavy-ion collisions.

Charm quarks are produced early in the collision, interact with the plasma as they propagate through it, and eventually hadronise. It has been suggested that the presence of many quarks in the final state of a heavy-ion collision may affect the hadronisation process: charm quarks could form hadrons by recombining with light quarks that happen to be nearby. In high-energy proton–proton (pp) collisions, by contrast, charm quarks hadronise mainly through "fragmentation", in which light quark–antiquark pairs are created in a parton shower.

Λc+ → pK0S decays, and their charge conjugates, were reconstructed by ALICE in Pb–Pb collisions at mid-rapidity (|y| < 0.5) in the transverse momentum interval 6 < pT < 12 GeV/c and within the 0–80% centrality range. The ratio of the production yields of Λc+ baryons (which consist of a charm quark and two light quarks) and D0 mesons (which contain a charm quark and a single light antiquark) was measured. The Λc+/D0 ratio in Pb–Pb collisions is larger than those measured in minimum-bias pp collisions at 7 TeV and in p–Pb collisions at 5.02 TeV. The difference between the results in Pb–Pb and p–Pb collisions is about two times the standard deviation of the combined statistical and systematic uncertainties. The measured ratio in Pb–Pb collisions is also compatible with the Λc+/D0 ratio measured in gold–gold collisions at the Relativistic Heavy-Ion Collider at Brookhaven in the US. The measurement was compared with model calculations including different implementations of charm-quark hadronisation. The calculation with a pure coalescence scenario describes the experimental result, while adding a fragmentation contribution leads to a ratio that is smaller than that observed.
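The quoted significance of about two standard deviations combines the statistical and systematic uncertainties in quadrature. A minimal sketch of that arithmetic, using purely illustrative numbers rather than the measured ALICE values:

```python
import math

def combined_significance(delta, stat, syst):
    """Significance of a difference `delta` between two measurements,
    with statistical and systematic uncertainties added in quadrature."""
    sigma = math.hypot(stat, syst)  # sqrt(stat**2 + syst**2)
    return delta / sigma

# Illustrative numbers only (not the published ratios or uncertainties):
# a ratio difference of 0.30 with stat = 0.10 and syst = 0.11
print(round(combined_significance(0.30, 0.10, 0.11), 2))  # → 2.02
```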

For this first measurement of Λc+-baryon production in Pb–Pb collisions, the uncertainties are still large and it is therefore not possible to draw a firm conclusion about the relative importance of recombination and fragmentation for charm-quark hadronisation. Moreover, it remains crucial to understand the charm-baryon production mechanisms in pp and p–Pb collisions, in particular whether the assumptions made on the basis of e+e− results also hold for fragmentation in hadronic collisions (CERN Courier March 2018 p12). The baryon-to-meson ratio has now been studied with light-flavour, strange and charm hadrons. All baryon-to-meson ratios in pp and p–Pb collisions show a characteristic pT dependence with an enhancement at intermediate pT values up to around 4 GeV/c, which still needs further investigation.

Future datasets, to be collected during the heavy-ion run in 2018 and during LHC Runs 3 and 4 after a major upgrade of the ALICE detector, will improve the Λc+-baryon production measurement. With higher precision and finer granularity in pT and centrality, these measurements will be fundamental in determining the role of recombination in charm-quark hadronisation.

Doubly strange baryon observed in Japan

High-luminosity collisions of electrons and positrons at the KEKB accelerator in Japan have established the existence of a new baryon with strangeness S = –2, shedding light on the structure of doubly-strange hyperon resonances. In a preprint submitted to Physical Review Letters, researchers at KEKB’s Belle experiment report the first observation of the Ξ(1620)0 based on a 980 fb−1 data sample. The collaboration also found evidence for the slightly heavier Ξ(1690)0.

The constituent-quark model has been very successful in describing the Ξ or "cascade" baryon. Discovered in cosmic-ray experiments half a century ago, and corresponding to the ground state of the flavour-SU(3) octet, it contains one u or d quark plus two more massive quarks (the Ξ0 is made of one u and two s quarks). However, some observed excited states do not agree well with quark-model predictions. The study of such unusual states therefore probes the limitations of the quark model and could reveal unexpected aspects of quantum chromodynamics (QCD).

Belle researchers uncovered the resonance from its decay to Ξπ+ via Ξc+ → Ξπ+π+, measuring its mass and width to be 1610.4 ± 6.0 (stat) (syst) MeV/c2 and 59.9 ± 4.8 (stat) (syst) MeV, respectively. The values are consistent with those from previous sightings at other experiments, and the width of the Ξ(1620)0 turns out to be somewhat larger than that of the other excited Ξ states.

Experimental evidence for the Ξ(1620) → Ξπ decay was first reported in K−p interactions in the 1970s, but there has been a lingering theoretical controversy about the interpretation of both the Ξ(1620) and Ξ(1690) states, because the quark model predicts the first excited states of Ξ to have a mass of around 1800 MeV/c2. The latest results from Belle hint that these states represent a new class of exotic hadrons, writes the team: "The situation is similar to the two poles of the Λ(1405) and suggests the possibility of two poles in the S = −2 sector. Studying these states may explain the riddle about the Λ(1405); consequently, the interplay between the S = −1 and S = −2 states can help resolve this long-standing problem of hadron physics."

The Belle detector has recently been superseded by Belle II at the upgraded SuperKEKB facility (CERN Courier September 2016 p32). Experiments at the LHC are also turning up new Ξ states. In 2012, CMS detected the Ξb*0, while in 2014 the LHCb experiment discovered the Ξ′b− and Ξb*−, and, in 2017, the doubly charmed Ξcc++. Taken together, hadron-spectroscopy studies such as these are helping to piece together the complex process by which fundamental QCD objects combine into hadronic matter (CERN Courier April 2017 p31).

Machine powers down until 2021

The Large Hadron Collider’s 2018 proton physics run came to an end on 24 October, having accumulated an impressive dataset. The integrated luminosity delivered to both the ATLAS and CMS experiments reached an average of around 66 fb–1 for the year, 10% higher than the target. This corresponds to around 5 × 1015 inelastic collisions per experiment. LHCb accumulated just under 2.5 fb–1, while ALICE notched up 27 pb–1. The high figures are due to excellent machine availability and an instantaneous luminosity that regularly touched 2 × 1034 cm–2 s–1 in ATLAS and CMS – twice the nominal value.
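The quoted collision count follows from multiplying the integrated luminosity by the inelastic pp cross-section. A back-of-the-envelope check, assuming a cross-section of roughly 80 mb at 13 TeV (an assumption for illustration, not a figure quoted in the text):

```python
# Convert integrated luminosity to an approximate number of inelastic
# collisions: N = L_int * sigma_inel.
FB_INV_TO_CM2_INV = 1e39   # 1 fb^-1 = 1e39 cm^-2
MB_TO_CM2 = 1e-27          # 1 mb = 1e-27 cm^2

l_int = 66 * FB_INV_TO_CM2_INV  # 2018 dataset delivered per experiment
sigma_inel = 80 * MB_TO_CM2     # assumed pp inelastic cross-section at 13 TeV

n_collisions = l_int * sigma_inel
print(f"{n_collisions:.2e}")  # → 5.28e+15, i.e. around 5 x 10^15 collisions
```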

The end of the proton run was followed by three and a half weeks of lead–lead collisions at a centre-of-mass energy of 5.02 TeV per colliding nucleon pair. Beginning on 5 November, this is the fourth lead–lead run since the collider began operation. During the last run of this type in 2015, the luminosity achieved was more than three and a half times higher than the LHC’s design luminosity, and the goals for 2018 are even more ambitious. Lead ions were also collided with protons in the LHC back in 2016.

This year’s shutdown marks the end of LHC Run 2, which began in 2015 and saw proton collisions take place at a centre-of-mass energy of 13 TeV. The total data accumulated since the start of Run 2 corresponds to an integrated luminosity of 160 fb–1 delivered to both ATLAS and CMS. From 10 December, CERN’s accelerator complex will enter “long shutdown 2” and undergo an extensive programme of renovation and upgrades, in particular for the High-Luminosity LHC. A week of LHC magnet training tests for operation at a future proton collision energy of 14 TeV is one of the first activities.

High performance 

In terms of performance, LHC Run 2 has been a major success for both the machine and its detectors. In terms of physics output, highlights from ATLAS and CMS include several key measurements of the Higgs boson’s properties, in particular its couplings to top and bottom quarks and to tau leptons, and numerous searches for physics beyond the Standard Model. LHCb has found a clutch of new hadrons, deepening our understanding of strong interactions, and has accumulated interesting results concerning the universality of lepton couplings. In the sphere of nuclear collisions, ALICE has dug even deeper into the extreme dynamics of the quark–gluon plasma – also finding strong evidence that this state is produced in proton–proton collisions.

This is just a flavour of the numerous results produced. So far, no firm signs of physics beyond the Standard Model have been seen at the LHC, but the majority of the data collected during Run 2 are still to be analysed. Between now and the return of protons for Run 3 in 2021, the LHC experiment collaborations will throw everything they have at the full Run 2 dataset to see if anything new is lurking there.

Plasma lenses promise smaller accelerators

An international team has made an advance towards more compact particle accelerators, demonstrating that beams can be focused via a technique called active plasma lensing without reducing the beam quality.

Building smaller particle accelerators has been a goal of the particle accelerator community for decades, both for basic research and applications such as radiotherapy. In addition to new accelerating mechanisms, smaller accelerators require novel ways to focus particle beams.

Active plasma lensing uses a large electric current to set up strong magnetic fields in a plasma that can focus high-energy beams over distances of centimetres, rather than metres as is the case for conventional magnet-based techniques. However, the large current also heats the plasma, preferentially heating the centre of the lens. This temperature gradient leads to a nonlinear magnetic field, an aberration, which degrades the particle-beam quality.
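For a lens carrying a uniform current density, the azimuthal magnetic field grows linearly with radius, B(r) = μ0 I r / (2πR²), giving a constant focusing gradient μ0 I / (2πR²). A minimal sketch of that estimate, with purely illustrative parameters (not those of the CLEAR experiment):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def plasma_lens_gradient(current_a, radius_m):
    """Focusing gradient (T/m) of an active plasma lens with uniform
    current density: B(r) = mu0*I*r / (2*pi*R**2), so the gradient
    is mu0*I / (2*pi*R**2)."""
    return MU0 * current_a / (2 * math.pi * radius_m**2)

# Illustrative parameters: 500 A through a capillary of 0.5 mm radius
g = plasma_lens_gradient(500, 0.5e-3)
print(f"{g:.0f} T/m")  # → 400 T/m, far beyond typical quadrupole gradients
```

The ~kT/m-scale gradients available at millimetre apertures are what allow centimetre-long plasma lenses to replace metre-scale magnetic optics.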

Using a high-quality 200 MeV electron beam at the CLEAR user facility at CERN, Carl A Lindstrøm of the University of Oslo, Norway, and collaborators recently made the first direct measurement of this aberration in an active plasma lens, finding it to be consistent with theory. More importantly, they discovered that this aberration can be suppressed by simply changing the gas used to make the plasma from a light gas (helium) to a heavier gas (argon). Changing the gas slows down the heat transfer so that the aberration does not have time to form, resulting in ideal, degradation-free focusing. It represents a significant step towards making active plasma lenses a standard accelerator component in the future, says the team.

CLEAR evolved from a test facility for the Compact Linear Collider (CLIC) called CTF3, which ended a successful programme in 2016. CLEAR offers general accelerator R&D and component studies for existing and possible future accelerator applications, such as high-gradient “X-band” acceleration methods (CERN Courier April 2018 p32), as well as prototyping and validation of accelerator components for the High-Luminosity LHC upgrade.

“Working at CLEAR was very efficient and fast-paced – not always the case in large-scale accelerator facilities,” says Lindstrøm. “Naturally, we hope to continue our plasma lens research at CLEAR. One exciting direction is probing the limits of how strong these lenses can be. This is clearly the lens of the future.”
