

Sketching out a muon collider

The machine–detector interface for a muon collider

High-energy particle colliders have proved to be indispensable tools in the investigation of the nature of the fundamental forces. The LHC, at which the discovery of the Higgs boson was made in 2012, is a prime recent example. Several major projects have been proposed to push our understanding of the universe once the LHC reaches the end of its operations in the late 2030s. These have been the focus of discussions for the soon-to-conclude update of the European strategy for particle physics. An electron–positron Higgs factory that allows precision measurements of the Higgs boson’s couplings and the Higgs potential seems to have garnered consensus as the best machine for the near future. The question is: what type will it be?

Today, mature options for electron–positron colliders exist: the Future Circular Collider (FCC-ee) and the Compact Linear Collider (CLIC) proposals at CERN; the International Linear Collider (ILC) in Japan; and the Circular Electron–Positron Collider (CEPC) in China. FCC-ee offers very high luminosities at the required centre-of-mass energies. However, the maximum energy that can be reached is limited by the emission of synchrotron radiation in the collider ring, and corresponds to a centre-of-mass energy of 365 GeV for a 100 km-circumference machine. Linear colliders accelerate particles without the emission of synchrotron radiation, and hence can reach higher energies. The ILC would initially operate at 250 GeV, extendable to 1 TeV, while the highest-energy proposal, CLIC, has been designed to reach 3 TeV. However, there are two principal challenges that must be overcome to go to higher energies with a linear machine: first, the beam has to be accelerated to full energy in a single passage through the main linac; and, second, it can only be used once in a single collision. At higher energies the linac has to be longer (around 50 km for a 1 TeV ILC and a 3 TeV CLIC) and is therefore more costly, while the single collision of the beam also limits the luminosity that can be achieved for a reasonable power consumption.

Beating the lifetime 

An ingenious solution to overcome these issues is to replace the electrons and positrons with muons and anti-muons. In a muon collider, fundamental particles that are not constituents of ordinary matter would collide for the first time. Being 200 times heavier than the electron, the muon emits about two billion times less synchrotron radiation. Rings can therefore be used to accelerate muon beams efficiently and to bring them into collision repeatedly. Also, more than one experiment can be served simultaneously to increase the amount of data collected. Provided the technology can be mastered, it appears possible to reach a ratio of luminosity to beam power that increases with energy. The catch is that muons live on average for 2.2 μs, which leads to a reduction in the number of muons produced by about an order of magnitude before they enter the storage ring. One therefore has to be rather quick in producing, accelerating and colliding the muons; this rapid handling provides the main challenges of such a project.
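The scalings quoted above can be checked with back-of-envelope arithmetic. A minimal sketch, using standard values for the muon mass and lifetime (the "two billion" suppression and the dilated lifetime are textbook results, not machine-specific numbers):

```python
# Back-of-envelope check of the numbers quoted above, using standard values
# for the muon-to-electron mass ratio and the muon lifetime.
M_MU_OVER_M_E = 206.77   # muon-to-electron mass ratio ("200 times heavier")
TAU_MU = 2.197e-6        # muon rest-frame lifetime [s]
M_MU_GEV = 0.10566       # muon mass [GeV/c^2]

# Synchrotron radiation at fixed energy and bending radius scales as 1/m^4,
# so the muon radiates (m_mu/m_e)^4 times less than the electron:
suppression = M_MU_OVER_M_E**4
print(f"synchrotron suppression: {suppression:.2e}")  # ~1.8e9 ("two billion")

# Time dilation stretches the 2.2 us lifetime at collider energies:
for energy_gev in (0.10566, 1500.0):  # at rest vs one beam of a 3 TeV collider
    gamma = energy_gev / M_MU_GEV
    print(f"E = {energy_gev:g} GeV -> lab-frame lifetime {gamma * TAU_MU * 1e6:.1f} us")
```

Even with a dilated lifetime of tens of milliseconds at TeV energies, the production, cooling and acceleration stages must all fit inside this window, which is what drives the "rather quick" handling described above.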

Precision and discovery

Two muon-collider concepts

The development of a muon collider is not as advanced as the other lepton-collider options that were submitted to the European strategy process. Therefore the unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility. Extensive studies submitted to the strategy update show that a muon collider in the multi-TeV energy range would be competitive both as a precision and as a discovery machine, and that a full effort by the community could demonstrate that a muon collider operating at a few TeV can be ready on a time scale of about 20 years. While the full physics capabilities at high energies remain to be quantified, and provided the beam energy and detector resolutions at a muon collider can be maintained at the parts-per-mille level, the number of Higgs bosons produced would allow the Higgs’ couplings to fermions and bosons to be measured with extraordinary precision. A muon collider operating at lower energies, such as those for the proposed FCC-ee (250 and 365 GeV) or stage-one CLIC (380 GeV) machines, has not been studied in detail since the beam-induced background will be harsher and careful optimisation of machine parameters would be required to reach the needed luminosity. Moreover, a muon collider generating a centre-of-mass energy of 10 TeV or more and with a luminosity of the order of 10³⁵ cm⁻² s⁻¹ would allow a direct measurement of the trilinear and quadrilinear self-couplings of the Higgs boson, enabling a precise determination of the shape of the Higgs potential. While the precision on Higgs measurements achievable at muon colliders is not yet sufficiently evaluated to perform a comparison to other future colliders, theorists have recently shown that a muon collider is competitive in measuring the trilinear Higgs coupling and that it could allow a determination of the quartic self-coupling that is significantly better than what is currently considered attainable at other future colliders.
Owing to the muon’s greater mass, the coupling of the muon to the Higgs boson is enhanced by a factor of about 10⁴ compared to the electron–Higgs coupling. To exploit this, previous studies have also investigated a muon collider operating at a centre-of-mass energy of 126 GeV (the Higgs pole) to measure the Higgs-boson line-shape. The specifications for such a machine are demanding as it requires knowledge of the beam-energy spread at the level of a few parts in 10⁵.
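The size of this enhancement follows from the Yukawa coupling being proportional to the lepton mass, so the s-channel Higgs production rate scales as the mass ratio squared. A quick check:

```python
# The Yukawa coupling of a lepton to the Higgs is proportional to its mass,
# so the s-channel Higgs production rate scales as the mass ratio squared:
m_mu = 105.66e-3  # muon mass [GeV/c^2]
m_e = 0.511e-3    # electron mass [GeV/c^2]
enhancement = (m_mu / m_e) ** 2
print(f"rate enhancement, mu vs e: {enhancement:.1e}")  # ~4e4, i.e. of order 10^4
```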

Half a century of ideas

A sketch of the MICE apparatus

The idea of a muon collider was first introduced 50 years ago by Gersh Budker and then developed by Alexander Skrinsky and David Neuffer until the Muon Collider Collaboration became a formal entity in 1997, with more than 100 physicists from 20 institutions in the US and a few more from Russia, Japan and Europe. Brookhaven’s Bob Palmer was a key figure in driving the concept forward, leading the outline of a “complete scheme” for a muon collider in 2007. Exploratory work towards a muon collider and neutrino factory was also carried out at CERN around the turn of the millennium. It was only when the Muon Accelerator Program (MAP), directed by Mark Palmer of Brookhaven, was formally approved in 2011 in the US, that a systematic effort started to develop and demonstrate the concepts and critical technologies required to produce, capture, condition, accelerate and store intense beams of muons for a muon collider on the Fermilab site. Although MAP was wound down in 2014, it generated a reservoir of expertise and enthusiasm that the current international effort on physics, machine and detector studies cannot do without.

So far, two concepts have been proposed for a muon collider (figure 1). The first design, developed by MAP, is to shoot a proton beam into a target to produce pions, many of which decay into muons. This cloud of muons (with positive and negative charge) is captured and an ionisation cooling system of a type first imagined by Budker rapidly cools the muons from the showers to obtain a dense beam. The muons are cooled in a chain of low-Z absorbers in which they lose energy by ionising the matter, reducing their phase-space volume; the lost energy would then be replaced by acceleration. This is so far the only concept that can achieve cooling within the timeframe of the muon lifetime. The beams would be accelerated in a sequence of linacs and rings, and injected at full energy into the collider ring. A fully integrated conceptual design for the MAP concept remains to be developed.

The unique potential of a multi-TeV muon collider deserves a strong commitment to fully demonstrate its feasibility

The alternative approach to a muon collider, proposed in 2013 by Mario Antonelli of INFN-LNF and Pantaleo Raimondi of the ESRF, avoids a specific cooling apparatus. Instead, the Low Emittance Muon Accelerator (LEMMA) scheme would send 45 GeV positrons into a target where they collide with electrons to produce muon pairs with a very small phase space (the energies of the electron and positron in the centre-of-mass frame are small, so little transverse momentum can be generated). The challenge with LEMMA is that the probability for a positron to produce a muon pair is exceedingly low, requiring an unprecedented positron-beam current and inducing a high stress in the target system. The muon beams produced would be circulated about 1000 times, limited by the muon lifetime, in a ring collecting muons produced from as many positron bunches as possible before they are accelerated and collided in a fashion similar to the proton-driven scheme of MAP. The low emittance of the LEMMA beams potentially allows the use of lower muon currents, easing the challenges of operating a muon collider due to the remnants of the decaying muons. The initial LEMMA scheme offered limited performance in terms of luminosity, and further studies are required to optimise all parameters of the source before capture and fast acceleration. With novel ideas and a dedicated expert team, LEMMA could potentially be shown to be competitive with the MAP scheme.

Results of muons that pass through MICE

Concerning the ambitious muon ionisation-cooling complex (figure 2), which is the key challenge of MAP’s proton-driven muon-collider scheme, the Muon Ionization Cooling Experiment (MICE) collaboration recently published results demonstrating the feasibility of the technique (CERN Courier March/April 2020 p7). Since muons produced from proton interactions in a target emerge in a rather undisciplined state, MICE set out to show that their transverse phase-space could be cooled by passing the beam through an energy-absorbing material and accelerating structures embedded within a focusing magnetic lattice – all before the muons have time to decay. For the scheme to work, the cooling (squeezing the beam in transverse phase space) due to ionisation energy loss must exceed the heating due to multiple Coulomb scattering within the absorber. Materials with low multiple scattering and a long radiation length, such as liquid hydrogen and lithium hydride, are therefore ideal.
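The competition between ionisation cooling and multiple-scattering heating can be illustrated with a toy one-step simulation. All beam parameters below are illustrative, not MICE values: the absorber shrinks the transverse momenta along with the longitudinal one, the RF restores only the longitudinal component, and a random scattering kick works against the cooling.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
pz = 200.0  # reference longitudinal momentum [MeV/c] (illustrative)

# Initial transverse phase space in one plane: positions [mm], momenta [MeV/c]
x = rng.normal(0.0, 30.0, n)
px = rng.normal(0.0, 20.0, n)

def emittance(x, px, pz):
    """RMS emittance in one plane, from the (x, px/pz) covariance matrix."""
    cov = np.cov(x, px / pz)
    return np.sqrt(np.linalg.det(cov))

eps_before = emittance(x, px, pz)

# Absorber: ionisation loss shrinks ALL momentum components by the same
# fraction, while multiple Coulomb scattering adds a random transverse kick.
frac_loss = 0.10   # fractional momentum loss in the absorber (illustrative)
theta_ms = 0.005   # rms scattering angle [rad] (illustrative)
px = px * (1 - frac_loss) + rng.normal(0.0, theta_ms * pz, n)

# RF cavities restore only the longitudinal momentum back to pz, so the
# transverse angles px/pz end up smaller than before: net cooling, provided
# the scattering kick is small enough.
eps_after = emittance(x, px, pz)
print(f"emittance before: {eps_before:.2f}, after: {eps_after:.2f}")
```

With these numbers the energy loss beats the scattering and the emittance shrinks; a material with a shorter radiation length (larger `theta_ms`) would instead heat the beam, which is why low-Z absorbers such as liquid hydrogen are preferred.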

MICE, which was based at the ISIS neutron and muon source at the Rutherford Appleton Laboratory in the UK, was approved in 2005. Using data collected in 2018, the MICE collaboration was able to determine the distance of a muon from the centre of the beam in 4D phase space (its so-called amplitude or “single-particle emittance”) both before and after it passed through the absorber, from which it was possible to estimate the degree of cooling that had occurred. The results (figure 3) demonstrated that ionisation cooling occurs with a liquid-hydrogen or lithium-hydride absorber in place. Data from the experiment were found to be well described by a Geant4-based simulation, validating the designs of ionisation cooling channels for an eventual muon collider. The next important step towards a muon collider would be to design and build a cooling module combining the cavities with the magnets and absorbers, and to achieve full “6D” cooling. This effort could profit from tests at Fermilab of accelerating cavities that can operate in a very high magnetic field, and also from the normal-conducting cavity R&D undertaken for the CLIC study, which pushed accelerating gradients to the limit.

Collider ring

The collider ring itself is another challenging aspect of a muon collider. Since the charge of the injected beams decreases over time due to the random decays of muons, superconducting magnets with the highest possible field are needed to minimise the ring circumference and thus maximise the average number of collisions. A larger muon energy makes it harder to bend the beam and thus requires a larger ring circumference. Fortunately, the lifetime of the muon also increases with its energy, which fully compensates for this effect. Dipole magnets with a field of 10.5 T would allow the muons to survive about 2000 turns. Such magnets, which are about 20% more powerful than those in the LHC, could be built from niobium-tin (Nb₃Sn) as used in the new magnets for the HL-LHC (see Taming the superconductors of tomorrow).
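The cancellation between the dilated lifetime and the growing circumference can be checked directly: the circumference scales as p/(eB) and the lifetime as γτ, so the energy drops out and the mean number of turns depends only on the field. A sketch, where the dipole fill factor (the fraction of the ring occupied by bending magnets) is an assumed, illustrative value:

```python
import math

# turns ~ (c * gamma * tau) / circumference, with circumference ~ 2*pi*p/(e*B*fill)
# and p ~ gamma*m*c, so gamma cancels: turns ~ tau*e*B*fill / (2*pi*m).
TAU_MU = 2.197e-6     # muon lifetime [s]
E_CHARGE = 1.602e-19  # elementary charge [C]
M_MU = 1.8835e-28     # muon mass [kg]

def mean_turns(b_field_tesla, dipole_fill=0.65):
    """Mean number of turns a decaying muon survives in the collider ring.

    dipole_fill is an assumed, illustrative lattice packing fraction --
    real designs vary."""
    return TAU_MU * E_CHARGE * b_field_tesla * dipole_fill / (2 * math.pi * M_MU)

print(f"B = 10.5 T -> ~{mean_turns(10.5):.0f} turns")  # ~2000, as quoted
```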

Magnet model

The electrons and positrons produced when muons decay pose an additional challenge for the magnet design. The decay products will hit the magnets and can lead to a quench (whereby the magnet suddenly loses its superconductivity, rapidly releasing an immense amount of stored energy). It is therefore important to protect the magnets. The solutions considered include the use of large-aperture magnets in which shielding material can be placed, or designs where the magnets have no superconductor in the plane of the beam. Future magnets based on high-temperature superconductors could also help to improve the robustness of the bends against this problem since they can tolerate a higher heat load.

Other systems necessary for a muon collider are only seemingly more conventional. The ring that accelerates the beam to the collision energy is a prime example. It has to ramp the beam energy in a period of milliseconds or less, which means the beam has to circulate at very different energies through the same magnets. Several solutions are being explored. One, featuring a so-called fixed-field alternating-gradient ring, uses a complicated system of magnets that enables particles at a wider than normal range of energies to fly on different orbits that are close enough to fit into the same magnet apertures. Another possibility is to use a fast-ramping synchrotron: when the beam is injected at low energy it is kept on its orbit by operating the bending magnets at low field. The beam is then accelerated and the strength of the bends is increased accordingly until the beam can be extracted into the collider. It is very challenging to ramp superconducting magnets at the required speed, however. Normal-conducting magnets can do better, but their magnetic field is limited. As a consequence, the accelerator ring has to be larger than the collider ring, which can use superconducting magnets at full strength without the need to ramp them. Systems that combine static superconducting and fast-ramping normal-conducting bends have been explored by the MAP collaboration. In these designs, the energy in the fields of the fast-ramping bends will be very high, so it is important that the energy is recuperated for use in a subsequent accelerating cycle. This requires a very efficient energy-recovery system which extracts the energy after each cycle and reuses it for the next one. Such a system, called POPS (“power for PS”), is used to power the magnets of CERN’s Proton Synchrotron. The muon collider, however, requires more stored energy and much higher power flow, which calls for novel solutions.

High occupancy

Muon decays also induce the presence of a large amount of background in the detectors at a muon collider – a factor that must be studied in detail since it strongly depends on the beam energy at the collision point and on the design of the interaction region. The background particles reaching the detector are mainly produced by the interactions between the decay products of the muon beams and the machine elements. Their type, flux and characteristics therefore strongly depend on the machine lattice and the configuration of the interaction point, which in turn depends on the collision energy. The background particles (mainly photons, electrons and neutrons) may be produced tens of metres upstream of the interaction point. To mitigate the effects of the beam-induced background inside the detector, tungsten shielding cones, called nozzles, are proposed in this configuration and their opening angle has to be optimised for a specific beam energy, which affects the detector acceptance (see figure 4). Despite these mitigations, a large particle flux reaches the detector, causing a very high occupancy in the first layers of the tracking system, which impacts the detector performance. Since the arrival time in each sub-detector is asynchronous with respect to the beam crossing, due to the different paths taken by the beam-induced background and the muons, new-generation 4D silicon sensors that allow exploitation of the time distribution will be needed to remove a significant fraction of the background hits.
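The role of the timing cut can be sketched with a toy example, in which all times and rates are illustrative rather than taken from a detector simulation: signal hits cluster tightly around the bunch crossing, while background produced upstream arrives spread over a broad, asynchronous range and is largely rejected by a narrow window.

```python
import numpy as np

rng = np.random.default_rng(7)

# Signal hits arrive in time with the bunch crossing (small time-of-flight
# spread), while beam-induced background, produced tens of metres upstream,
# arrives asynchronously.  All numbers below are illustrative.
t_signal = rng.normal(0.0, 0.03, 1_000)        # ns around the crossing
t_background = rng.uniform(-0.5, 5.0, 50_000)  # ns, broad and asynchronous

window = 0.1  # ns acceptance window -- needs the ~tens-of-ps resolution
              # that "4D" silicon sensors aim to provide
keep_sig = np.abs(t_signal) < window
keep_bkg = np.abs(t_background) < window

print(f"signal kept:     {keep_sig.mean():.1%}")
print(f"background kept: {keep_bkg.mean():.1%}")
```

In this toy setup almost all signal hits survive while the bulk of the asynchronous background is removed, which is the effect the new-generation sensors are meant to exploit.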

Energy expansion

It was recently demonstrated, by a team supported by INFN and Padova University in collaboration with MAP researchers, that state-of-the-art detector technology for tracking and jet reconstruction would make one of the most critical measurements at a muon collider – the vector-boson fusion channel μ⁺μ⁻ → (W*W*)νν̄ → Hνν̄, with H → bb̄ – feasible in this harsh environment, with a high level of precision competitive with other proposed machines (figure 5). A muon collider could in principle expand its energy reach to several TeV with good luminosity, allowing unprecedented exploration in direct searches and high-precision tests of Standard Model phenomena, in particular the Higgs self-couplings.

Muon collider Higgs-boson decay simulation

The technology for a muon collider also underpins a so-called neutrino factory, in which beams of equal numbers of electron and muon neutrinos are produced from the decay of muons circulating in a storage ring – in stark contrast to the neutrino beams used at T2K and NOvA, and envisaged for DUNE and Hyper-K, which use neutrinos from the decays of pions and kaons from proton collisions on a fixed target. In such a facility it is straightforward to tune the neutrino-beam energy because the neutrinos carry away a substantial fraction of the muon’s energy. This, combined with the excellent knowledge of the beam composition and energy spectrum that arises from the precise knowledge of muon-decay characteristics, makes a neutrino factory an attractive place to measure neutrino oscillations with great precision and to look for oscillation phenomena that are outside the standard, three-neutrino-mixing paradigm. One proposal – nuSTORM, an entry-level facility proposed for the precise measurement of neutrino scattering and the search for sterile neutrinos – can provide the ideal test-bed for the technologies required to deliver a muon collider.

Muon-based facilities have the potential to provide lepton–antilepton collisions at centre-of-mass energies in excess of 3 TeV and to revolutionise the production of neutrino beams. Where could such a facility be built? A 14 TeV muon collider in the 27 km-circumference LHC tunnel has recently been discussed, while another option is to use the LHC tunnel to accelerate the muons and construct a new, smaller tunnel for the actual collider. Such a facility is estimated to provide a physics reach comparable to a 100 TeV circular hadron collider, such as the proposed Future Circular Collider, FCC-hh. A LEMMA-like positron driver scheme with a potentially lower neutrino radiation could possibly extend this energy range still further. Fermilab, too, has long been considered a potential site for a muon collider, and it has been demonstrated that the footprint of a muon facility is small enough to fit in the existing Fermilab or CERN sites. However, the realistic performance and feasibility of such a machine would have to be confirmed by a detailed feasibility study identifying the required R&D to address its specific issues, especially the compatibility of existing facilities with muon decays. Minimising off-site neutrino radiation is one of the main challenges to the design and civil-engineering aspects of a high-energy muon collider because, while the interaction probability is tiny, the total flux of neutrinos is sufficiently high in a very small area in the collider plane to produce localised radiation that can reach a fraction of natural-radiation levels. Beam wobbling, whereby the lattice is modified periodically so that the neutrino flux pointing to Earth’s surface is spread out, is one of the promising solutions to alleviate the problem, although it requires further studies.

It was only when the Muon Accelerator Program was formally approved in 2011 in the US that a systematic effort started

A muon collider would be a unique lepton-collider facility at the high-energy frontier. Today, muon-collider concepts are not as mature as those for FCC-ee, CLIC, ILC or CEPC. It is now important that a programme is established to prove the feasibility of the muon collider, address the key remaining technical challenges, and provide a conceptual design that is affordable and has an acceptable power consumption. The promise of the very high-energy lepton frontier suggests that this opportunity should not be missed.

New SMOG on the horizon

Figure 1

LHCb will soon become the first LHC experiment able to run simultaneously with two separate interaction regions. As part of the ongoing major upgrade of the LHCb detector, the new SMOG2 fixed-target system will be installed in Long Shutdown 2. SMOG2 will replace the previous System for Measuring the Overlap with Gas (SMOG), which injected noble gases into the vacuum vessel of LHCb’s vertex detector (VELO) at a low rate with the initial goal of calibrating luminosity measurements. The new system has several advantages, including the ability to reach effective area densities (and thus luminosities) up to two orders of magnitude higher for the same injected gas flux.

SMOG2 is a gas target confined within a 20 cm-long aluminium storage cell that is mounted at the upstream edge of the VELO, 30 cm from the main interaction point, and coaxial with the LHC beam (figure 1). The storage-cell technology allows a very limited amount of gas to be injected in a well defined volume within the LHC beam pipe, keeping the gas pressure and density profile under precise control, and ensuring that the beam-pipe vacuum level stays at least two orders of magnitude below the upper threshold set by the LHC. With beam-gas interactions occurring at roughly 4% of the proton–proton collision rate at LHCb, the lifetime of the beam will be essentially unaffected. The cell is made of two halves, attached to the VELO with an alignment precision of 200 μm. Like the VELO halves, they can be opened for safety during LHC beam injection and tuning, and closed for data-taking. The cell is sufficiently narrow that a flow as small as 10¹⁵ particles per second will yield tens of pb⁻¹ of data per year. The new injection system will be able to switch between gases within a few minutes, and in principle is capable of injecting not just noble gases, from helium up to krypton and xenon, but also several other species, including H₂, D₂, N₂ and O₂.

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC

SMOG2 will open a new window on QCD studies and astroparticle physics at the LHC, performing precision measurements in poorly known kinematic regions. Collisions with the gas target will occur at a nucleon–nucleon centre-of-mass energy of 115 GeV for a proton beam of 7 TeV, and 72 GeV for a Pb beam of 2.76 TeV per nucleon. Due to the boost of the interacting system in the laboratory frame and the forward geometrical acceptance of LHCb, it will be possible to access the largely unexplored high-x and intermediate-Q² regions.
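The quoted centre-of-mass energies follow from standard fixed-target kinematics, s ≈ 2 E_beam m_N, since the target nucleons are at rest:

```python
import math

def sqrt_s_fixed_target(beam_energy_gev, m_nucleon_gev=0.938):
    """Nucleon-nucleon centre-of-mass energy for a beam on a target at rest.

    s = 2 * E_beam * m_target + 2 * m^2; the mass-squared terms are
    negligible at these energies but included for completeness."""
    return math.sqrt(2 * beam_energy_gev * m_nucleon_gev + 2 * m_nucleon_gev**2)

print(f"7 TeV protons on gas:     sqrt(s) = {sqrt_s_fixed_target(7000):.0f} GeV")  # ~115
print(f"2.76 TeV/nucleon Pb beam: sqrt(s) = {sqrt_s_fixed_target(2760):.0f} GeV")  # ~72
```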

Combined with LHCb’s excellent particle identification capabilities and momentum resolution, the new gas target system will allow us to advance our understanding of the gluon, antiquark, and heavy‑quark components of nucleons and nuclei at large‑x. This will benefit searches for physics beyond the Standard Model at the LHC, by improving our knowledge of the parton distribution functions of both protons and nuclei, particularly at high‑x, where new particles are most often expected, and will inform the physics programmes of proposed next‑generation accelerators such as the Future Circular Collider. The gas target will also allow the dynamics and spin distributions of quarks and gluons inside unpolarised nucleons to be studied for the first time at the LHC, a decade before corresponding measurements at much higher accuracy are performed at the Electron‑Ion Collider in the US. Studying particles produced in collisions with light nuclei, such as He, and possibly N and O, will also allow LHCb to give important inputs to cosmic‑ray physics and dark‑matter searches. Last but not least, SMOG2 will allow LHCb to perform studies of heavy‑ion collisions at large rapidities, in an unexplored energy range between the SPS and RHIC, offering new insights into the QCD phase diagram.

EPS announces 2020 accelerator awards

The European Physical Society’s accelerator group (EPS-AG) has announced the winners of its 2020 prizes, which are awarded every three years for outstanding achievements in the accelerator field. The prizes will be presented on 14 May during the International Particle Accelerator Conference (IPAC), which was planned to be held at the GANIL laboratory in Caen, France, and will now take place from 11–14 May in a virtual format due to restrictions resulting from the COVID-19 pandemic.

Lucio Rossi

The EPS-AG Rolf Widerøe Prize for outstanding work in the accelerator field has been given to Lucio Rossi of CERN, who is project leader for the high-luminosity LHC. Rossi, who initially worked in plasma physics before moving into applied superconductivity for particle accelerators, was rewarded “for his pioneering role in the development of superconducting magnet technology for accelerators and experiments, its application to complex projects in high-energy physics including strongly driving industrial capability, and for his tireless effort in promoting the field of accelerator science and technology”.

Hideaki Hotchi

The Gersh Budker Prize, for a recent significant, original contribution to the accelerator field, has been awarded to Hideaki Hotchi of J-PARC in Japan. He receives the prize for his achievements “in the commissioning of the J-PARC Rapid Cycling Synchrotron, with sustained 1 MW operation at unprecedented low levels of beam loss made possible by his exceptional understanding of complex beam dynamics processes, thereby laying the foundations for future high power proton synchrotrons worldwide”.

The Frank Sacherer Prize, for an individual in the early part of his or her career, goes to Johannes Steinmann of Argonne National Laboratory for his “significant contribution to the development and demonstration of ultra-fast accelerator instrumentation using THz technology, having the potential for major impact on the field of electron bunch-by-bunch diagnostics”.


Applicants for the EPS-AG Bruno Touschek prize, which is awarded to a student or trainee accelerator physicist or engineer, will be judged on the quality of the work submitted to the IPAC conference.

The previous (2017) EPS-AG prizewinners were: Lyn Evans of CERN (Rolf Widerøe Prize); Pantaleo Raimondi of the ESRF (Gersh Budker Prize); Anna Grassellino of Fermilab (Frank Sacherer Prize); and Fabrizio Giuseppe Bisesto of INFN-LNF (Bruno Touschek Prize).

First foray into CP symmetry of top-Higgs interactions

One of the many doors to new physics that have been opened by the discovery of the Higgs boson concerns the possibility of finding charge-parity violation (CPV) in Higgs-boson interactions. Were CPV to be observed in the Higgs sector, it would be an unambiguous indication of physics beyond the Standard Model (SM), and could have important ramifications for understanding the baryon asymmetry of the universe. Recently, the ATLAS and CMS collaborations reported their first forays into this area by measuring the CP-structure of interactions between the Higgs boson and top quarks.

While CPV is well established in the weak interactions of quarks (most recently in the charm system by the LHCb collaboration), and is explained in the SM by the existence of a phase in the CKM matrix, the amount of CPV observed is many orders of magnitude too small to account for the observed cosmological matter-antimatter imbalance. Searching for additional sources of CPV is a major programme in particle physics, with a moderate-significance suggestion of CPV in lepton interactions recently announced by the T2K collaboration. It is likely that sources of CPV from phenomena beyond the scope of the SM are needed, and the detailed properties of the Higgs sector are one of several possible hiding places.

Based on the full LHC Run 2 dataset, ATLAS and CMS studied events where the Higgs boson is produced in association with one or two top quarks before decaying into two photons. The latter (ttH) process, which accounts for around 1% of the Higgs bosons produced at the LHC, was observed by both collaborations in 2018. But the tH production channel is predicted to be about six times rarer. This is due to destructive interference between higher-order diagrams involving W bosons, which makes the tH process particularly sensitive to new-physics processes.

Exploring the CP properties of these interactions is non-trivial

According to the SM, the Higgs boson is “CP-even” – that is, it is possible to rotate away any CP-odd phase from the scalar mass term. Previous probes of the interaction between the Higgs and vector bosons by CMS and ATLAS support the CP-even nature of the Higgs boson, determining its quantum numbers to be most consistent with J^PC = 0⁺⁺, though small CP-odd contributions from a more complex coupling structure are not excluded. The presence of a CP-odd component, together with the dominant CP-even one, would imply CPV, altering the kinematic properties of the ttH process and modifying tH production. Exploring the CP properties of these interactions is non-trivial, and requires the full capacities of the detectors and analysis techniques.

The collaborations employed machine-learning (boosted decision tree) algorithms to disentangle the relative fractions of the CP-even and CP-odd components of top-Higgs interactions. The CMS collaboration observed ttH production at a significance of 6.6σ, and excluded a pure CP-odd structure of the top-Higgs Yukawa coupling at 3.2σ. The ratio of the measured ttH production rate to the predicted production rate was found by CMS to be 1.38 with an uncertainty of about 25%. ATLAS data also show agreement with the SM. Assuming a CP-even coupling, ATLAS observed ttH with a significance of 5.2σ. Comparing the strength of the CP-even and CP-odd components, the collaboration favours a CP-mixing angle very close to 0 (indicating no CPV) and excludes a pure CP-odd coupling at 3.9σ. ATLAS did not observe tH production, setting an upper limit on its rate of 12 times the SM expectation.

In addition to further probing the CP properties of the top–Higgs interaction with larger data samples, ATLAS and CMS are searching in other Higgs-boson interactions for signs of CPV.

Gamma-ray polarisation sharpens multi-messenger astrophysics

POLAR polarisation plot

Recent years have seen the dawn of multi-messenger astrophysics. Perhaps the most significant contributor to this new era was the 2017 detection of gravitational waves (GWs) in coincidence with a bright electromagnetic phenomenon, a gamma-ray burst (GRB). GRBs consist of intense bursts of gamma rays which, for periods ranging from hundreds of milliseconds to hundreds of seconds, outshine any other source in the universe. Although the first such event was spotted back in 1967, and typically one GRB is detected every day, the underlying astrophysical processes responsible remain a mystery. The joint GW–electromagnetic detection answered several questions about the nature of GRBs, but many others remain.

Recently, researchers made the first attempts to add gamma-ray polarisation into the mix. If successful, this could enable the next step forward within the multi-messenger field.

So far, three photon parameters – arrival time, direction and energy – have been measured extensively for a range of different objects within astrophysics. Yet, despite the wealth of information it contains, photon polarisation has been neglected. X-ray or gamma-ray fluxes emitted by charged particles within strong magnetic fields are highly polarised, while those emitted by thermal processes are typically unpolarised. Polarisation therefore allows researchers to identify the dominant emission mechanism for a particular source. GRBs are a prime example, since there is still no consensus on where their gamma rays actually originate.

Difficult measurements

The reason that polarisation has not been measured in great detail is the difficulty of performing the measurements. To measure the polarisation of an incoming photon, the secondary products produced as it interacts in a detector need to be measured. With gamma rays, for example, the azimuthal angle at which the gamma ray Compton-scatters in the detector is correlated with its polarisation vector. This means that, in addition to detecting the photon, researchers need to track its subsequent path. Such measurements are further complicated by the need to perform them above the atmosphere on satellites, which significantly constrains the detector design.
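The underlying principle can be illustrated with a toy simulation. In Compton polarimetry the azimuthal scattering angle φ follows a distribution proportional to 1 + μ·cos(2(φ − φ0)), where φ0 tracks the polarisation angle and the modulation factor μ scales with the degree of polarisation. The sketch below uses hypothetical values and is not POLAR's actual analysis chain; it draws scattering angles from such a distribution and recovers μ and φ0 from their Fourier moments:

```python
import math
import random

def sample_azimuths(n, mu, phi0, rng):
    """Draw Compton-scatter azimuths from pdf ∝ 1 + mu*cos(2*(phi - phi0))
    by simple rejection sampling."""
    out = []
    while len(out) < n:
        phi = rng.uniform(0.0, 2.0 * math.pi)
        if rng.random() * (1.0 + mu) < 1.0 + mu * math.cos(2.0 * (phi - phi0)):
            out.append(phi)
    return out

def estimate_polarisation(phis):
    """Recover modulation amplitude and phase from the 2φ Fourier moments."""
    n = len(phis)
    c = sum(math.cos(2 * p) for p in phis) / n
    s = sum(math.sin(2 * p) for p in phis) / n
    mu_hat = 2.0 * math.hypot(c, s)    # modulation factor
    phi0_hat = 0.5 * math.atan2(s, c)  # polarisation angle
    return mu_hat, phi0_hat

# Toy inputs: 40% modulation, polarisation angle 0.3 rad (illustrative only)
rng = random.Random(42)
phis = sample_azimuths(100_000, mu=0.4, phi0=0.3, rng=rng)
mu_hat, phi0_hat = estimate_polarisation(phis)
```

For an unpolarised flux the fitted μ is consistent with zero, which is also why summing over a burst whose polarisation angle rotates rapidly can wash the modulation out.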

The 2020s should see the start of a new type of astrophysics

Recent progress has shown that, although challenging, polarisation measurements are possible. The most recent example came from the POLAR mission, a Swiss, Polish and Chinese experiment fully dedicated to measuring the polarisation of GRBs, which was launched in 2016 attached to a module for the China Space Station and took data from September 2016 to April 2017. The team behind POLAR recently published its first results. Though these indicate that the emission from GRBs is likely unpolarised, the story appears to be more complex. The polarisation is found to be low when looking at the full GRB emission, but when studying it over short time intervals, a strong hint of high polarisation is found, with a rapidly changing polarisation angle during the GRB event. This rapid evolution of the polarisation angle, which is yet to be explained by the theoretical community, smears out the polarisation when looking at the full GRB. Fully understanding the evolution, which could hint at an evolving magnetic field, requires finer time binning and more precise measurements, and hence more data.

POLAR-2

Two future instruments capable of providing such detailed measurements are currently being developed. The first, POLAR-2, is the follow-up to the POLAR mission and was recently recommended to become a CERN-recognised experiment. POLAR-2 will be an order of magnitude more sensitive (due to larger statistics and lower systematics) than its predecessor, and should therefore be able to answer most of the questions raised by the recent POLAR results. The experiment will also play an important role in detecting extremely weak GRBs, such as those expected from GW events. POLAR-2, which will be launched in 2024 to the under-construction China Space Station, could well be followed by a similar but slightly smaller instrument called LEAP, which recently progressed to the final stage of a NASA selection process. If successful, LEAP would join POLAR-2 in orbit in 2025, stationed on the International Space Station.

Apart from dedicated GRB polarimeters, progress is also being made at other upcoming instruments such as NASA’s Imaging X-ray Polarimetry Explorer and China-ESA’s enhanced X-ray Timing and Polarimetry mission, which aim to perform the first detailed polarisation measurements of a range of astrophysical objects in the X-ray region. While the first measurements from POLAR have been published recently, and more are expected soon, the 2020s should see the start of a new type of astrophysics, which adds yet another parameter to multi-messenger exploration.

Opening doors with a particle-physics PhD

Alexandra Martín Sánchez

Alexandra Martín Sánchez began her studies in particle physics at the University of Salamanca, Spain, in 2003, during which she had an internship at the University of Paris-Sud in Orsay working in the LHCb collaboration. This prompted her to take a master’s degree in particle physics, followed by a PhD at Laboratoire de l’Accélérateur Linéaire (LAL) in Orsay. She worked on CP violation in B⁰ → DK*⁰ decays and hadronic-trigger performance with the LHCb detector, and the subject fascinated her. She recalls with emotion witnessing the announcement of the Higgs-boson discovery in July 2012 from Melbourne, Australia, where the ICHEP conference was being held and where she was presenting her work: “Despite the distance, the atmosphere was super-charged with excitement.”

Getting a permanent position is particularly hard nowadays

Alexandra Martín Sánchez

Yet, one year later, Alexandra decided to leave the field. Why? “There were possibilities to do a postdoc in Marseille for LHCb, or elsewhere for other experiments, but I had already changed countries once and had created strong links in Paris,” she explains. “I loved working in research at CERN, and if it had been easier to continue in this way I would have, but getting a permanent position is particularly hard nowadays and you need to do several postdocs, often switching countries.”

After submitting her thesis, she consulted the careers office at Orsay to discuss her options. But it was word-of-mouth and friends who had already made the transition from research to industry that were the biggest help. After attending an IT careers fair in Paris in 2013, she was offered a job with French firm Bertin Technologies, who were looking for skills in scientific computing, in particular to offer consulting services for large groups including French energy giant EDF. Reckoning that this first step into industry could open the door to a large company, she took the plunge.

“Bertin Technologies had recruited me without having a clear idea regarding the profile of a particle-physics researcher, but they were immediately very satisfied with the way I worked. My recruiters were surprised to see me at ease in all aspects of the job, whether it was coding, functioning in teams or collaborating with other services.”

Moving on

After one year with the firm, Alexandra was recruited by EDF R&D, just as she had hoped for. Initially joining as a research engineer, five years later she is now project manager of open-source software called SALOME and leads a team of seven people. SALOME is used for industrial studies that need physical simulations, making it possible to model EDF’s operation of facilities and means of production, such as nuclear power plants or hydroelectric dams. “Computer science is the same as at CERN, even if it is applied to different data. Programming is also done in Python and C++. The code used is also that generated by researchers, that is to say, more or less ‘industrial’ and I easily found my way around, as we share the same development work habits. At CERN we work on software developed by CERN, and at EDF on software developed by EDF. In both cases it is also teamwork. The principles remain the same,” she explains.

“Large groups like EDF are of course fairly hierarchical companies, but CERN is also very large and very hierarchical. One can feel protected by such structures. On the other hand, they have a cumbersome administrative side, which means that things do not necessarily move as quickly as we would like. What I miss, however, is the international aspect of the collaborations. Today I’m thinking of staying at EDF because I’m happy there. The career paths are varied and the company motivates its engineers to change jobs every four or five years, unless they wish to become specialists in their fields.”

The thesis is a real professional experience!

Alexandra Martín Sánchez

The biggest lesson is that the skills she had learned during the process of obtaining a PhD in an environment like CERN are extremely transferable. “During my recruitment interviews, I highlighted my programming experience, my ability to communicate and present my work, and especially my ability to complete a thesis project over several years,” she says. “My advice to alumni looking for a job is to make the most of this PhD experience. Both sides of the job are of interest to recruiters: the technical part, but also the communication and collaboration skills gained from working with researchers and engineers from all over the world. This sets you apart from candidates coming straight from an engineering school: the thesis is a real professional experience!”

Bridging Europe’s neutron gap

The Institut Laue-Langevin

As Europe sharpens its focus on averting environmental disaster and maintaining economic competitiveness, both the European Union and national governments are looking towards green technologies, such as materials for sustainable energy production and storage. Such ambitions rely on our ability to innovate, powered by Europe’s highly developed academic network and research infrastructures.

Neutron science holds enormous potential at every stage of innovation

Europe is home to world-leading neutron facilities that each year are used by more than 5000 researchers across all fields of science. Studies range from the dynamics of lithium-ion batteries to the development of medicines against viral diseases, in addition to fundamental studies such as measurements of the neutron electric-dipole moment. Neutron science holds enormous potential at every stage of innovation, from basic research through to commercialisation, and at least 50% of neutron-science publications globally are attributed to European researchers. Yet, just as demand for neutron science is growing, access to facilities is being challenged.

Helmut Schober

Three of Europe’s neutron facilities closed in 2019: BER II in Berlin; Orphée in Paris; and JEEP II outside Oslo. The rationale is specific to each case. There are lifespan considerations due to financial resources, but also political considerations when it comes to nuclear installations. The potentially negative consequences of these closures must be carefully managed to ensure expertise is maintained and communities are not left stranded. This constitutes a real challenge for the remaining facilities. Sharing the load via strategic collaboration is indispensable, and is the motivation behind the recently created League of advanced European Neutron Sources (LENS).

We must also ensure that the remaining facilities – which include the FRM II in Munich, the Institut Laue-Langevin (ILL) in France, ISIS in the UK and the SINQ facility in Switzerland – are fully exploited. These facilities have been upgraded in recent years, but their long-term viability must be secured. This is not to be underestimated. For example, 20% of the ILL’s budget relies on the contributions of 10 scientific members that must be renegotiated every five years. The rest is provided by the ILL’s three associate countries (France, Germany and the UK). The loss of one of its major scientific members, even only partially, would severely threaten the ILL’s upgrade capacity.

Accelerator sources

The European Spallation Source (ESS) under construction in Sweden, which was conceived more than 20 years ago, must become a fully operating neutron facility at the earliest possible date. Full operation was initially foreseen for 2019 but is now scheduled for 2023. Europe must ask itself why building large scientific facilities such as the ESS, or FAIR in Germany, takes so long, despite significant strategic planning (e.g. via ESFRI) and sophisticated project management. After all, neutron-science pioneers built the original ILL in just over four years, though admittedly at a time of less regulatory pressure. We must regain that agility. The Chinese Spallation Neutron Source has just reached its design goal of 100 kW, and the Spallation Neutron Source in Oak Ridge, Tennessee, is actively pursuing plans for a second target station.

The value of neutron science will be judged on its contribution to solving society’s problems

We therefore need to look to next-generation sources such as Compact Accelerator-driven Neutron Sources (CANS). In contrast to spallation sources, which produce neutrons by bombarding heavy nuclei with high-energy protons, CANS rely on nuclear processes that can be triggered by proton bombardment in the 5 to 50 MeV range. While these processes are less efficient than spallation, they allow for a more compact target and moderator design. Examples of this scheme are SONATE, currently under development at CEA-Saclay, and the High Brilliance Source being pursued at Jülich. CANS must now be brought to maturity, requiring carefully planned business models to identify how they can best reinforce the ecosystem of neutron science.

It is also important to begin strategic discussions that aim beyond 2030, including the need for powerful new national sources that will complement the ESS. Continuous (reactor) neutron sources must be part of this because many applications, such as the production of neutron-rich isotopes for medical purposes, require the highest time-averaged neutron flux. Such a strategic evaluation is currently under way in the US, and Europe should soon follow suit.

Despite last year’s reactor closures, Europe is well prepared for the next decade thanks to the continuous modernisation of existing sources and investment in the ESS. The value of neutron science will be judged on its contribution to solving society’s problems, and I am convinced that European researchers will rise to the challenge and carve a route to a greener future through world-leading neutron science.

Neutrino oscillations constrain leptonic CP violation

Physicists working on the T2K experiment in Japan have reported the strongest hint so far that the combined charge-conjugation and parity (CP) symmetry is violated by the weak interactions of leptons. Based on an analysis of nine years of neutrino-oscillation data, the T2K results indicate discrepancies between the way muon neutrinos transform into electron neutrinos and the way muon antineutrinos transform into electron antineutrinos, at 3σ confidence. While further data are required to confirm the findings, the result strengthens previous observations and offers hope for a future discovery of leptonic CP violation at T2K or at next-generation long-baseline neutrino-oscillation experiments due to come online this decade.

These exciting results are thanks to the hard work of hundreds of T2K collaborators

Federico Sanchez

“These exciting results are thanks to the hard work of hundreds of T2K collaborators involved in the construction, data collection and data analysis for T2K over the past two decades,” says T2K international co-spokesperson Federico Sanchez of the University of Geneva.

Discovered in 1964, CP violation has so far only been observed in the weak interactions of quarks, most recently in the charm system by the LHCb collaboration. Since the size of the effect in quarks is too small to explain the observed matter-antimatter disparity in the universe, finding additional sources of CP violation is one of the outstanding mysteries in particle physics. The quantum mixing of neutrino flavours as neutrinos travel over large distances, the discovery of which was marked by the 2015 Nobel Prize in Physics, provides a way to probe another potential source of CP violation: a complex phase, δCP, in the neutrino mixing matrix. Though models indicate that no value of δCP could explain the cosmological matter-antimatter asymmetry without new physics, the observation of leptonic CP violation would make models such as leptogenesis, which feature heavy Majorana partners for the Standard Model neutrinos, more plausible.

Muon and e-like rings in Super-Kamiokande

Long baseline

The T2K (Tokai-to-Kamioka) experiment uses the Super-Kamiokande detector to observe neutrinos and antineutrinos generated by a proton beam at the J-PARC accelerator facility 295 km away. As the beams travel through Earth, a fraction of the muon neutrinos and antineutrinos oscillate into electron neutrinos and antineutrinos, which are recorded via charged-current interactions in Super-Kamiokande’s 50,000-tonne tank of ultrapure water. There, the charged lepton generated by the weak interaction creates a Cherenkov ring that can be identified as coming from an electron or a muon (see image above). Since the beam-line and detector components are made of matter and not antimatter, the observation of neutrinos is already enhanced. The T2K analysis therefore includes corrections based on data from magnetised near-detectors (ND280, which uses the magnet originally built for the UA1 detector at CERN’s Spp̄S collider) placed 280 m from the target.

T2K 3 sigma bound in Nature

The δCP parameter is a cyclic phase: if δCP = 0, neutrinos and antineutrinos change from muon- to electron-type in the same way during oscillation; any other value would enhance the oscillations of either neutrinos or antineutrinos, violating CP symmetry. Analysing data corresponding to 1.49 × 10²¹ and 1.64 × 10²¹ protons delivered in neutrino- and antineutrino-beam mode, respectively, T2K observed 90 electron-neutrino candidates and 15 electron-antineutrino candidates. This may be compared with the 56 and 22 events expected for maximal antineutrino enhancement (δCP = +π/2), and the 82 and 17 events expected for maximal neutrino enhancement (δCP = −π/2). Being most compatible with the latter scenario, the T2K data disfavour almost half of the possible values of δCP at 3σ confidence. For the “normal” neutrino-mass ordering favoured by T2K and other experiments, and averaged over all other oscillation parameters, the measured 3σ confidence interval for δCP is [−3.41, −0.03], while for the “inverted” mass ordering (in which the first mass splitting is greater than the second) it is [−2.54, −0.32]. Averaged over all oscillation parameters, δCP = 0 is now disfavoured at 3σ confidence, though it remains within the 3σ bound for some allowed values of the mixing angle θ23 (see figure, above).
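A rough sense of why the data favour the neutrino-enhanced scenario can be had by comparing the Poisson likelihood of the observed counts under the two extreme hypotheses quoted above. This is only a back-of-envelope sketch: the collaboration’s actual fit includes systematic uncertainties and marginalises over all oscillation parameters.

```python
import math

def poisson_logl(observed, expected):
    """Sum of Poisson log-likelihoods: ln P(n|λ) = n ln λ − λ − ln n!."""
    return sum(n * math.log(lam) - lam - math.lgamma(n + 1)
               for n, lam in zip(observed, expected))

observed = (90, 15)                  # νe and anti-νe candidates reported by T2K
hypotheses = {
    "delta_cp=+pi/2": (56, 22),      # maximal antineutrino enhancement
    "delta_cp=-pi/2": (82, 17),      # maximal neutrino enhancement
}
logl = {name: poisson_logl(observed, exp) for name, exp in hypotheses.items()}
best = max(logl, key=logl.get)       # hypothesis preferred by this naive check
```

Even this naive counting comparison prefers δCP = −π/2 by several log-likelihood units, consistent with the direction of the published result.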

“Our results show the strongest constraint yet on the parameter governing CP violation in neutrino oscillations, one of the few parameters governing fundamental particle interactions that has not yet been precisely measured,” continues Sanchez. “These results indicate that CP violation in neutrino mixing may be large, and T2K looks forward to continued operation with the prospect of establishing evidence for CP violation in neutrino oscillations.”

Next steps

To further improve the experimental sensitivity to a potential CP-violating effect, the collaboration plans to upgrade the near detector to reduce systematic uncertainties and to accumulate more data, while J-PARC will increase the beam intensity by upgrading its accelerator and beam line.

“This is the first time ever CP-violation is glimpsed in the lepton sector and it has the potential of being a very large effect,” says Albert De Roeck, group leader of the CERN neutrino group, which has participated in the T2K experiment since last year. “Future neutrino CP violation measurements will be further performed by currently running neutrino experiments, and then the torch will be passed to the planned high precision neutrino experiments DUNE and Hyper-Kamiokande that will provide measurements of the exact degree of CP violation in the neutrino system.”

First physics for Belle II

Belle II

The Belle II collaboration at the SuperKEKB collider in Japan has published its first physics analysis: a search for Z′ bosons, which are hypothesised to couple the Standard Model (SM) with the dark sector. The team scoured four months of data from a pilot run in 2018 for evidence of invisibly decaying Z′ bosons in the process e⁺e⁻ → μ⁺μ⁻Z′, and for lepton-flavour-violating Z′ bosons in e⁺e⁻ → e±μ∓Z′, by looking for missing energy recoiling against two clean lepton tracks. “This is the first ever search for the process e⁺e⁻ → μ⁺μ⁻Z′ where the Z′ decays invisibly,” says Belle II spokesperson Toru Iijima of Nagoya University.

The team did not find any excess of events, setting preliminary limits on the coupling g′ in the so-called Lμ−Lτ extension of the SM, in which the Z′ couples only to muon- and tau-flavoured leptons and the dark sector. This model also has the potential to explain anomalies in b → sμ⁺μ⁻ decays reported by LHCb and the longstanding muon g−2 anomaly, claims the team.

Belle II Z

The results come a little over a year since the first collisions were recorded in the fully instrumented Belle II detector on 25 March 2019. Following in the footsteps of Belle at the KEKB facility, the new SuperKEKB b-factory plans to achieve a 40-fold increase on the luminosity of its predecessor, which ran from 1999 to 2010. First turns were achieved in February 2016, and first collisions between its asymmetric-energy electron and positron beams followed in April 2018. The machine has now reached a luminosity of 1.4 × 10³⁴ cm⁻² s⁻¹ and is currently integrating around 0.7 fb⁻¹ each day, exceeding the peak luminosity of the former PEP-II/BaBar facility at SLAC, notes Iijima.
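The quoted peak and daily-integrated figures can be cross-checked with a one-line unit conversion; note that the roughly 60% implied average duty factor computed below is our inference, not a number given by the collaboration.

```python
# Unit check: relate SuperKEKB's quoted peak luminosity to its daily
# integrated luminosity. Since 1 fb = 1e-39 cm^2, 1 fb^-1 = 1e39 cm^-2.
peak_lumi = 1.4e34                                  # cm^-2 s^-1, quoted peak
seconds_per_day = 86_400
ideal_daily = peak_lumi * seconds_per_day / 1e39    # fb^-1/day if always at peak
quoted_daily = 0.7                                  # fb^-1/day actually integrated
efficiency = quoted_daily / ideal_daily             # implied average duty factor
```

Running continuously at peak would give about 1.2 fb⁻¹ per day, so the quoted 0.7 fb⁻¹ corresponds to an average duty factor of a bit under 60%, a plausible figure given fills, tuning and downtime.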

By summer the team aims to exceed the Belle/KEKB record of 2.1 × 10³⁴ cm⁻² s⁻¹ by implementing a nonlinear “crab waist” focusing scheme. First used at the electron-positron collider DAΦNE at INFN Frascati, and not to be confused with the crab-crossing technology used to boost the luminosity at KEKB and planned for the high-luminosity LHC, the scheme stabilises e⁺e⁻ beam-beam blowup using carefully tuned sextupole magnets located symmetrically on either side of the interaction point. “The 100 fb⁻¹ sample which we plan to integrate by summer will allow us to provide our first interesting results in B physics,” says Tom Browder of the University of Hawaii, who was Belle II spokesperson until last year.

Flavour debut

Belle II will make its debut in flavour physics at a vibrant moment, complementing efforts to resolve hints of anomalies seen at the LHC, such as the recent test of lepton-flavour universality in beauty-baryon decays by the LHCb collaboration.

We will then look for the star attraction of the dark sector, the dark photon

Tom Browder

As well as updating searches for invisible decays of the Z′ with one to two orders of magnitude more data, Belle II will now conduct further dark-sector studies, including a search for axion-like particles decaying to two photons, Z′ decays to visible final states, and dark Higgsstrahlung with a μ⁺μ⁻ pair and missing energy, explains Browder. “We will then look for the star attraction of the dark sector, the dark photon, with the difficult signature of e⁺e⁻ to a photon and nothing else.”

CERN establishes COVID-19 task force

The CERN-against-COVID-19 logo. Credit: CERN.

The CERN management has established a task force to collect and coordinate ideas from the global CERN community to fight the COVID-19 pandemic. Drawing on the scientific and technical expertise of some 18,000 people worldwide who have links with CERN, these initiatives range from the production of sanitiser gel to novel proposals for ventilators to help meet rising clinical demand.

CERN-against-COVID-19 was established on 27 March, followed by the launch of a dedicated website on 4 April. The group aims to draw on CERN’s many competencies and to work closely with experts in healthcare, drug development, epidemiology and emergency response to help ensure effective and well-coordinated action. The CERN management has also written directly to the director general of the World Health Organization, with which CERN has an existing collaboration agreement, to offer CERN’s support.

It’s not about going out there and doing things because we think we know best, but about offering our services and waiting to hear from the experts as to how we may be able to help

Beniamino Di Girolamo

The initiative has already attracted a large number of suggestions at various stages of development. These include three proposals by particle physicists for stripped-down ventilator designs, one of which is led by members of the LHCb collaboration. Other early suggestions range from the use of CERN’s fleet of vehicles to make deliveries in the surrounding region, to offers of computing resources and 3D printing of components for medical equipment. From 3 to 5 April, CERN supported a 48-hour online hackathon organised by the Swiss government to develop “functional digital or analogue prototypes” to counter the virus. Computing resources are also being used to support distance-learning tools such as Open Up2U, coordinated by the GÉANT partnership. CERN is also producing sanitiser gel and Perspex shields, which will be distributed to gendarmeries in the Pays de Gex region.

Another platform, Science Responds, has been established by “big science” researchers in the US to facilitate interactions between COVID-19 researchers and the broader science community.

“It has been amazing to see so many varied and quality ideas,” says Beniamino Di Girolamo of CERN, who chairs the CERN-against-COVID-19 task force. “It’s not about going out there and doing things because we think we know best, but about offering our services and waiting to hear from the experts as to how we may be able to help. This is also much wider than CERN – these initiatives are coming from everywhere.”

Proposals and ideas can be made by members of the CERN community via an online form, and questions to the task force may be submitted via email.

 
