The inaugural Sparks! Serendipity Forum attracted 49 leading computer scientists, policymakers and related experts to CERN from 17 to 18 September for a multidisciplinary science-innovation forum. In this first edition, participants discussed a range of ethical and technical issues related to artificial intelligence (AI), which has deep and developing importance for high-energy physics and its societal applications. The structure of the discussions was designed to stimulate interactions between AI specialists, scientists, philosophers, ethicists and other professionals with an interest in the subject, leading to new insights, dialogue and collaboration between participants.
World-leading cognitive psychologist Daniel Kahneman opened the public part of the event by discussing errors in human decision making and their implications for AI. He explained that human decision making will always be subject to bias, and therefore be “noisy” in his definition, and asked whether AI could be the solution, while pointing out that AI algorithms might not be able to cope with the complexity of the decisions that humans have to make. Others speculated as to whether AI could ever reproduce human cognition – and whether the focus should shift from searching for a “missing link” to considering how AI research is actually conducted, by making the process more regulated and transparent.
Introspective AI
Participants discussed both the advantages and challenges associated with designing introspective AI, which is capable of examining its own processes and could be beneficial in making predictions about the future. Participants also questioned, however, whether we should be trying to make AI more self-aware and human-like. Neuroscientist Ed Boyden explored introspection through the lens of neural pathways, and asked whether we can design introspective AI before we understand introspection in brains. Following the introspection theme, philosopher Luisa Damiano addressed the reality versus fiction of “social-embodied” AI – the idea of robots interacting with us in our physical world – arguing that such a possibility would require careful ethical considerations.
Many participants advocated developing so-called “strong” AI technology that can solve problems it has not encountered before, in line with specific and targeted goals. Computer scientist Max Welling explored the potential for AI to exceed human intelligence, and suggested that AI could ultimately be as creative as humans, although further research is required.
On the subject of ethics, Anja Kaspersen (former director of the UN Office for Disarmament Affairs) asked: who makes the rules? Linking military, humanitarian and technological affairs, she considered how our experience in dealing with nuclear weapons could help us deal with the development of AI. She said that AI is prone to ethics washing: the creation of an illusory sense that ethical issues are being appropriately addressed when they are not. Participants agreed that we should avoid polarising the community when considering risks associated with current and future AI, and suggested a more open approach to the challenges faced by AI today and tomorrow. Skype co-founder Jaan Tallinn identified AI as one of the most worrying existential risks facing society today; the fact that machines do not consider whether their decisions are unethical demands that we consider constraints on the AI design space within the realm of decision making.
Fruits of labour
The initial outcomes of the Sparks! Serendipity Forum are being written up as a CERN Yellow Report, and at least one paper will be submitted to the journal Machine Learning: Science and Technology. Time will tell what other fruits the serendipitous interactions at Sparks! will bear. One thing is certain, however: AI is already a powerful, and growing, tool for particle physics. Without it, the LHC experiments’ analyses would have been much more tortuous, as discussed by Jennifer Ngadiuba and Maurizio Pierini (CERN Courier September/October 2021 p31).
Future editions of the Sparks! Serendipity Forum will tackle different themes in science and innovation that are relevant to CERN’s research. The 2022 event will be built around future health technologies, including the many accelerator, detector and simulation technologies that are offshoots of high-energy-physics research.
The leading role of CERN in fundamental research is complemented by its contribution to applications for the benefit of society. A strong example is the Heavy Ion Therapy Masterclass (HITM) school, which took place from 17 to 21 May 2021. Attracting more than 1000 participants from around the world, many of whom were young students and early-stage researchers, the school demonstrated the enormous potential to train the next generation of experts in this vital application. It was the first event of the European Union project HITRIplus (Heavy Ion Therapy Research Integration), in which CERN is a strategic partner along with other research infrastructures, universities, industry partners, the four European heavy-ion therapy centres and the South East European International Institute for Sustainable Technologies (SEEIIST). As part of a broader “hands-on training” project supported by the CERN and Society Foundation with emphasis on capacity building in Southeast Europe, the event was originally planned to be hosted in Sarajevo but was held online due to the pandemic.
The school’s scientific programme highlighted the importance of developments in fundamental research for cancer diagnostics and treatment. Focusing on treatment planning, it covered everything needed to deliver a beam to a tumour target, including the biological response of cancerous and healthy tissues. The Next Ion Medical Machine Study (NIMMS) group delivered many presentations by experts and young researchers, ranging from basic concepts to discussions of open questions and plans for upgrades. Expert-guided practical sessions were based on matRad, an open-source professional toolkit developed by the German cancer research centre DKFZ for training and research. Several elements of the course were inspired by the International Particle Therapy Masterclasses.
Virtual visits to European heavy-ion therapy centres and research infrastructures were ranked by participants among the most exciting components of the course. There were also plenty of opportunities for participants to interact with experts in dedicated sessions, including a popular session on entrepreneurship by the CERN Knowledge Transfer group. This interactive approach had a big impact on participants, several of whom were motivated to pursue careers in related fields and to get actively involved at their home institutes. This future expert workforce will become the backbone for building and operating future heavy-ion therapy and research facilities that are needed to fight cancer worldwide (see Linacs to narrow radiotherapy gap).
Further support is planned through upcoming HITRIplus schools on clinical and medical aspects, as well as through HITRIplus internships, which will give participants access to existing European heavy-ion therapy centres and the opportunity to contribute to relevant research projects.
The LHC was built in the 27 km tunnel originally excavated for LEP, the highest energy electron–positron collider ever built. Designed to study the carriers of the weak force, LEP’s greatest legacy is the accuracy with which it pinned down the properties of the Z boson. Among the highlights is the measurement of the Z boson’s invisible width and decay branching fraction, which was used to deduce that there are three, and only three, species of light neutrinos that couple to the Z boson. This measurement of the Z-boson invisible width from LEP has remained the most precise for two decades.
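To make the counting argument explicit (a schematic relation, with the per-species width quoted only approximately): in the Standard Model each light neutrino species contributes a partial width of roughly 167 MeV to the Z boson, so the number of species follows from

N_\nu \simeq \Gamma_{\mathrm{inv}} / \Gamma_{\nu\bar{\nu}}^{\mathrm{SM}},

although in practice the LEP analyses used the ratio of the invisible to the precisely measured leptonic width to cancel common uncertainties.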
In a bid to provide an independent and complementary test of the Standard Model (SM) at a new energy regime, CMS has performed a precise measurement of the Z-boson invisible width – the first of its kind at a hadron collider. The analysis uses the experimental signature of a very energetic jet accompanied by large missing transverse momentum to select events where the Z boson decays predominantly to neutrinos. The invisible width is then extracted from the well-known relationship between the Z-boson coupling to neutrinos and its coupling to muons and electrons.
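Schematically (a simplified illustration of the method rather than the collaboration’s full likelihood model, with yields N and efficiencies ε introduced here only for the sketch), the extraction can be written as

\Gamma_{\mathrm{inv}} \approx \frac{N(Z\to\nu\bar{\nu}+\mathrm{jet})}{N(Z\to\ell^{+}\ell^{-}+\mathrm{jet})} \times \frac{\epsilon_{\ell\ell}}{\epsilon_{\nu\nu}} \times \Gamma_{\ell\ell},

where the yields are the background-subtracted event counts in the jets-plus-missing-transverse-momentum and dilepton regions and \Gamma_{\ell\ell} is the precisely known leptonic width of the Z boson.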
While the production of a pair of neutrinos occurs through a pure Z interaction, the production of a pair of charged leptons can also occur through a virtual photon. The contribution of virtual-photon exchange and of the interference between photon and Z-boson exchange was determined to be less than 2% for a dilepton invariant mass in the range 71–111 GeV, and was accounted for so that the dilepton results could be compared directly with the Z-boson decay to neutrinos.
Figure 1 shows the missing-transverse-momentum distribution for the three key regions contributing to this measurement: the jets-plus-missing-transverse-momentum region; the dimuon-plus-jets region; and the dielectron-plus-jets region. For the dilepton regions, selected muons and electrons are not included in the calculation of the missing transverse momentum. The dominant background in the jets-plus-missing-transverse-momentum region comes from a W boson decaying leptonically, and accounts for 35% of the events. Estimating this background with high accuracy is one of the key aspects of the measurement; it was performed by studying several exclusive regions in data designed to be kinematically very similar to the signal region, but statistically independent.
The invisible width of the Z boson was extracted from a simultaneous likelihood fit and measured to be 523 ± 3 (stat) ± 16 (syst) MeV. This 3.2% uncertainty in the final result is dominated by systematic uncertainties, with the largest contributions coming from the uncertainty in the efficiencies of selecting muons and electrons. In a fitting tribute to its predecessor and testament to the LHC entering a precision era of physics, this measurement from CMS is competitive with the LEP combined result of 503 ± 16 MeV and is currently the world’s most precise single direct measurement.
The ALICE collaboration has reported a new measurement of the production of Ds+ mesons, which contain a charm quark and an anti-strange quark, in Pb–Pb collisions collected in 2018 at a centre-of-mass energy per nucleon pair of 5.02 TeV. The large data sample and the use of machine-learning techniques for the selection of particle candidates led to increased precision on this important quantity.
D-meson measurements probe the interaction between charm quarks and the quark–gluon plasma (QGP) formed in ultra-relativistic heavy-ion collisions. Charm quarks are produced in the early stages of the nucleus–nucleus collision and thus experience the whole system evolution, losing part of their energy via scattering processes and gluon radiation. The presence of the QGP medium also affects the charm-quark hadronisation and, in addition to the fragmentation mechanism, a competing process based on charm–quark recombination with light quarks of the medium might occur. Given that strange quark–antiquark pairs are abundantly produced in the QGP, the recombination mechanism could enhance the yield of Ds+ mesons in Pb–Pb collisions with respect to that of D0 mesons, which do not contain strange quarks.
ALICE investigated this possibility using the ratio of the yields of Ds+ and D0 mesons. The figure displays the Ds+/D0 yield ratio in central (0–10%) Pb–Pb collisions divided by the ratio in pp collisions, showing that the values of the ratio in the 2 < pT < 8 GeV/c interval are higher in central Pb–Pb collisions by about 2.3σ. The measured Ds+/D0 double ratio also hints at a peak at pT ≃ 5–6 GeV/c. Its origin could be related to the different D-meson masses and to the collective radial expansion of the system with a common flow-velocity profile. In addition, hadronisation via fragmentation becomes dominant at high transverse momenta, and consequently the values of the Ds+/D0 ratio become similar between Pb–Pb and pp collisions.
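Written out, the quantity plotted is the double ratio (simply a compact restatement of what is described above):

\frac{\left(N_{D_s^+}/N_{D^0}\right)_{\mathrm{Pb-Pb},\,0\text{–}10\%}}{\left(N_{D_s^+}/N_{D^0}\right)_{pp}},

with values above unity indicating a relative enhancement of strange-charm production in central Pb–Pb collisions.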
The measurement was compared with theoretical calculations based on charm-quark transport in a hydrodynamically expanding QGP (LGR, TAMU, Catania and PHSD), which implement strangeness enhancement and the hadronisation of charm quarks via recombination in addition to fragmentation in the vacuum. The Catania and PHSD models predict a ratio that is almost flat in pT, while TAMU and LGR describe the peak at pT ≃ 3–5 GeV/c.
Complementary information was obtained by comparing the elliptic-flow coefficient v2 of Ds+ and non-strange D mesons (D0, D+ and D*+) in semi-central (30–50%) Pb–Pb collisions. The Ds+-meson v2 is positive in the 2 < pT < 8 GeV/c interval with a significance of 6.4σ, and is compatible within uncertainties with that of non-strange D mesons. These features of the data are described by model calculations that include the recombination of charm and strange quarks.
The freshly completed upgrade of the detectors and the harvest of Pb–Pb collision data expected in Run 3 will allow the ALICE collaboration to further improve these measurements, deepening our understanding of heavy-quark interactions and hadronisation in the QGP.
Since the discovery of the Higgs boson at the LHC in 2012, physicists have gained a more complete understanding of the Standard Model (SM) and the origin of elementary-particle mass. However, theoretical questions, such as why the Higgs boson is so light, remain. An attractive candidate explanation postulates that the Higgs boson is not a fundamental particle, but is instead a composite state of a new, strongly interacting sector – similar to the pion in ordinary strong interactions. In such composite-Higgs scenarios, new partners of the top and bottom quarks of the SM could be produced and observed at the LHC.
Ordinary SM quarks come in left-handed and right-handed varieties, which behave differently in weak interactions. The hypothetical new quark partners, however, behave the same way in weak interactions, whether they are left- or right-handed. Composite-Higgs models, and several other theories beyond the SM, predict the existence of such “vector-like quarks” (VLQs). Searching for them is therefore an exciting opportunity for the LHC experiments.
If they exist, VLQs could be very heavy, with masses at the TeV scale, and could be produced either singly or in pairs at the LHC. Furthermore, VLQs could decay into regular top or bottom quarks in combination with a W, Z or Higgs boson. This rich phenomenology warrants a varied range of complementary searches to provide optimal coverage.
The ATLAS collaboration has recently carried out two VLQ searches based on the full Run-2 dataset (139 fb⁻¹) at 13 TeV. The first analysis targets pair production of VLQs, focusing on the possibility that most VLQs decay to a Z boson and a top quark. To help identify likely signal events, leptonically decaying Z bosons were tagged in events with pairs of electrons or muons. To maximise the discriminating power between the VLQ signal and the SM background, machine-learning techniques using a deep neural network were employed to identify the hadronic decays of top quarks and of Z, W or Higgs bosons, and to categorise events into 19 distinct regions.
The second analysis targets the single production of VLQs. While the rate of pair production of VLQs through regular strong interactions only depends on their mass, their single production also depends on their coupling to SM electroweak bosons. As a result, depending on the model under consideration, VLQs heavier than approximately 1 TeV might predominantly be produced singly, and a measurement would therefore uniquely allow insight into this coupling strength.
The analysis was optimised for VLQ decays to top quarks in combination with either a Higgs or a Z boson. Events with a single lepton and multiple jets were selected, and tagging algorithms were used to identify the boosted leptonic and hadronic decays of top quarks, and the hadronic decays of Higgs and Z bosons. The presence of a forward jet, characteristic of the single-VLQ production mode, was used (along with the multiplicity of jets, b-jets and reconstructed boosted objects) to categorise the analysed events into 24 regions.
The observations from both analyses are consistent with SM predictions, allowing ATLAS to set the strongest constraints to date on VLQ production. Together, the pair- and single-production analyses exclude VLQs with masses up to 1.6 TeV (see figure 1) and 2.0 TeV (see figure 2), respectively, depending on the assumed model. These two analyses are part of a broader suite of searches for VLQs underway in ATLAS. The combination of these searches will provide the greatest potential for the discovery of VLQs, and ATLAS therefore looks forward to the upcoming Run-3 data.
Just over 60 years ago, physicists and engineers at CERN were hard at work trying to tune CERN’s first proton synchrotron, the PS. It was the first synchrotron of its kind, employing the strong-focusing principle to produce higher-energy beams within a smaller aperture and at a lower construction cost than, for example, the CERN synchrocyclotron. Little could physicists in 1959 have imagined the maze of technical galleries and tunnels that would stem from the PS ring not many years later.
The first significant expansion to CERN’s accelerator complex was prompted by the 1962 discovery of the muon neutrino at the competing Alternating Gradient Synchrotron at Brookhaven National Laboratory in the US. Soon afterwards, CERN embarked on an ambitious programme starting with a new east experimental area, the PS booster and the first hadron collider – the Intersecting Storage Rings (ISR). A major challenge during this expansion was transferring the beam to targets, experiments and the ISR, which required that CERN build transfer lines that could handle different particles, different extraction energy levels and various duty cycles (see “In service” figure).
Transfer lines transport particle beams from one machine to another using powerful magnets. Once fully accelerated, a beam is given an ultra-fast “kick” off its trajectory by a kicker magnet and then guided away from the ring by one or more septum magnets. A series of focusing and defocusing quadrupole magnets contain the beams in the vacuum pipe while bending magnets direct them to their new destination (a target or a subsequent accelerator ring).
Making the connection
The first transfer lines linking two different CERN accelerators were TT1 and TT2, which were originally built for the ISR. The need to handle different particle energies and even different particle charges required continuous adjustment of the magnetic field at every extraction, typically once per second in the PS. One of the early challenges faced was a memory effect in the steel yokes of the magnets: alternating among different field values leaves a remnant field that changes the field density depending on the order of cycles played out before. Initially, complex solutions with secondary field-resetting coils were used. Later, magnetic reset was achieved by applying a predefined field excitation that brings the magnet to a reproducible state prior to the next physics cycle.
Solving the magnetic-hysteresis problem was not the only hurdle that engineers faced. Handling rapid injections and extractions through the magnets was also a major challenge for the electronics of the time. The very first powering concept used motor–generator sets with adjustable speeds to modulate the electric current, and consequently the field density, in the transfer-line magnets. Each transfer line would have its own noisy generation plant that required a control room with specialised personnel (see “Early days” images). Modifying the mission profile of a magnet to test new physics operations was a heavy and tedious operation.
Towards the end of the 1960s, the electrical motors in the west PS hall were replaced by the first semiconductor-operated thyristor rectifiers, which transformed the 50 Hz alternating grid voltage into a precisely regulated (to nearly 100 parts per million) current in the beamline magnets. They also occupied a fraction of the space, had lower power losses and were able to operate unsupervised. All of a sudden, transporting different particles with variable energies became possible at the touch of a knob. The timing could not have been better, as CERN prepared itself for the Super Proton Synchrotron (SPS) era, which would see yet more transfer lines added to its accelerator complex.
By the early 1980s the ISR had completed its mission, and the TT1 transfer line was decommissioned together with the storage rings. However, the phenomenal versatility of TT2 has allowed it to continue to extract particles for experiments. Today, virtually all user beams, except those for the East Area and ISOLDE, pass through the 300 m-long line. It delivers low-energy 3 GeV beams to “Dump 2” for machine development, 14 GeV beams to the SPS for various experiments in the North Area, 20 GeV beams towards the n_ToF facility, 26 GeV beams to the Antiproton Decelerator, and to the SPS – where protons are accelerated to 450 GeV before being injected into the LHC. While beams traverse TT2 in just over a microsecond, other beamlines, such as those in the East Area, spill particles out of the PS continuously for 450 ms towards the CLOUD experiment and other facilities – a process known as slow extraction.
Energy economy
Transfer lines are heavy users of electrical power, since typically their magnets are powered for long periods compared to the time it takes a beam to pass. During their last year of operation in 2017, for example, the East Area transfer lines accounted for 12% of all energy consumption by CERN’s PS/PSB injector complex. The reason for this inefficiency was the non-stop powering of the few dozen magnets used in each transfer line for the necessary focusing, steering and trajectory-correction functions. This old powering system, combined with a solid-yoke magnet structure, did not permit extraction of the magnetic field energy between beam operations.
For reference, a typical bending magnet absorbs the same energy as a high-performance car accelerating from 0 to 100 km/h, and must do so in a period of 0.5 s every 1.2 s for beams from the PS. To supply and recover all this energy between successive beam operations, powerful converters are required along with laminated steel magnet yokes, all of which became possible with the recent East Area renovation project.
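As a rough check of that comparison (assuming a car mass of about 1500 kg, a figure not given in the text): 100 km/h is 27.8 m/s, so the kinetic energy is

E = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2}\times 1500\ \mathrm{kg}\times(27.8\ \mathrm{m\,s^{-1}})^{2} \approx 0.6\ \mathrm{MJ},

and supplying or recovering this in 0.5 s corresponds to a power of order 1 MW, repeated every 1.2 s.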
Energy economy was the primary motivation for CERN to adopt the “Sirius” family of regenerative power converters for TT2 and, subsequently, the East Area and Booster transfer lines. While transfer lines typically absorb and return all the magnetic field energy from and to the power grid, the new Sirius power converter allows a more energy-efficient approach by recovering the magnetic field energy locally into electrolytic capacitors for re-use in the next physics cycle. Electrolytic capacitors are the only energy-storage technology that can withstand the approximately 200 million beam transports that a Sirius converter is expected to deliver during its lifetime, and the system employs between 15 and 420 such wine-bottle-sized units according to the magnet size and beam energy to be supplied (see “Transformational” image).
Sirius is also equipped with a front-end unit that can control the energy flow from the grid to match what is required to compensate for the thermal losses in the system. By estimating in real time how much of the total energy can be recycled, Sirius has enabled the newly renovated East Area to be powered using only two large distribution transformers rather than the seven transformers used in the past for the old 1960s thyristor rectifiers. To control the energy flow in the magnets, Sirius uses powerful silicon-based semiconductors that switch on and off 13,000 times per second. By adjusting the “on” time of the switches, the average current in and out of the energy-storing units can be controlled with precision, while the high switching frequency allows rapid corrections of the generated voltage and current across the magnet.
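For a generic buck-type switching stage (an illustrative relation only; the exact Sirius topology is not described here), the average voltage applied to the load scales with the duty cycle D of the switches:

V_{\mathrm{out}} \approx D \times V_{\mathrm{DC}}, \qquad T_{\mathrm{s}} = \frac{1}{13\,000\ \mathrm{Hz}} \approx 77\ \mu\mathrm{s},

so adjusting the “on” fraction of each 77 μs switching period sets the average current ramped into the magnet, while the high repetition rate allows rapid correction of the voltage and current.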
The Sirius converters entered operation gradually from September 2020, and a total of 500 million magnetic cycles have been completed at the time of writing. Recent measurements on the first circuits commissioned in the East Area demonstrated an energy consumption 95% lower than the original 1960s figures. Above all, however, the primary role of Sirius is to provide the current, and hence the magnetic field, in transfer-line magnets to a precision of 10 parts per million, which enables excellent reproducibility for the beams coming down the lines. The most recent measurements demonstrated a stability better than 10 ppm over a 24-hour interval.
Unusual engineering model
CERN employs a rather unusual engineering model compared to those in industry. For Sirius, a team of experts and technicians from the electrical power converters group designed, prototyped and validated the power-converter design before issuing international tenders to procure the subsystems, assembly and testing. Engineers therefore have the opportunity to work with their counterparts in member-state industries, often helping them develop new manufacturing methods and skills. Sirius, for example, helped a magnetics-component manufacturer in Germany achieve record precision in its manufacturing process and improve its certification procedures for medium-power reactors. Another key partner acquired new knowledge in the manufacturing and testing of stainless-steel water-cooling circuits, enabling the firm to expand its project portfolio.
Thanks to the CERN procurement process, Sirius components are built by a multitude of suppliers across Europe. For some, it was their first time working with CERN. For example, the converter-assembly contract was the first major (CHF 12 million) contract won by Romanian industry after the country’s accession to CERN five years ago. Other significant contributions were made by German, Dutch, French, UK, Danish and Swedish industries. Recent work by the CERN knowledge transfer group resulted in a contract with a Spanish firm that licensed the Sirius design for production for other laboratories, with the profits invested in R&D for future converter families.
Energy recycling tends to yield more impressive savings in fast-cycling accelerators and transfer lines, such as those in the PS. However, CERN is planning to deploy similar technologies in other experimental facilities, such as the North Area, which will undergo a major makeover in the coming years. The codename for this new converter project is Polaris – a scalable converter family that can coast through the long extraction plateaus used in the SPS (see “Physics cycles” figure). The primary goals of the renovation, beyond better energy efficiency, are to restore reliability and to provide a 10-fold improvement in the precision of the magnetic-field regulation.
Development efforts in the power-converters group do not stop here. The electrification of transport and the net-zero carbon-emission targets of many governments are also driving innovation in power electronics, which CERN might take advantage of. For example, wide-bandgap semiconductors exhibit higher reverse-blocking capabilities and faster transitions, which could allow switching at rates above 40,000 Hz and therefore help to reduce size and losses, and even eliminate the audible noise emitted by power conversion altogether.
Another massive opportunity concerns energy storage, with CERN looking closely at the technologies driven by the battery mega-factories that are being built around the world. As part of our mission to provide the next generation of sustainable scientific facilities, as outlined in CERN’s recently released second environment report, we are looking at testing and implementing new systems to lower our environmental impact today and into the far future.
The CP-violating angle γ of the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix is a benchmark of the Standard Model, since it can be determined from tree-level beauty decays in an entirely data-driven way with negligible theoretical uncertainty. Comparisons between direct and indirect measurements of γ therefore provide a potent test for new physics. Before LHCb began taking data, γ was one of the least precisely known parameters of the CKM unitarity triangle, but that is no longer the case.
A new result from LHCb marks an important change in strategy: it includes not only results from beauty decays sensitive to γ, but also exploits the sensitivity to CP violation and mixing in charm-meson (D0) decays. Mixing in the D0–D̄0 system proceeds via flavour-changing neutral currents, which may also be affected by contributions from new heavy particles. The process is described by two parameters, x and y, related to the mass and width differences between the two neutral-charm mass eigenstates (see figure 1).
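In the usual convention (made explicit here for clarity), these parameters are normalised to the average decay width Γ of the neutral charm meson:

x = \frac{\Delta m}{\Gamma}, \qquad y = \frac{\Delta\Gamma}{2\Gamma},

where Δm and ΔΓ are the mass and width differences between the two mass eigenstates.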
The latest combination takes the results of more than 20 LHCb beauty and charm measurements to determine γ = (65.4 +3.8 −4.2)°, which is the most precise measurement from a single experiment (see figure 2). Furthermore, various charm-mixing parameters were determined by combining, for the first time, both the beauty and charm datasets, giving x = 0.400% and y = 0.630%. The latter is a factor of two more precise than the current world average, which is entirely due to the new methodology that harnesses additional sensitivity to the charm sector from beauty decays.
This demonstrates that LHCb has already achieved better precision than its original design goals. When the redesigned LHCb detector restarts operations in 2022, the target of sub-degree precision on γ, and the chance to observe CP violation in charm mixing, will come ever closer.
Whenever we perform an analysis of our data, whether measuring a physical quantity of interest or testing some hypothesis, it is necessary to assess the accuracy of the result. Statistical uncertainties arise from the limited precision with which we can measure anything, or from the natural Poisson fluctuations involved in counting independent events. They have the property that repeating or combining measurements results in greater precision.
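In the simplest case (independent measurements of the same quantity, each with uncertainty σ – an idealisation used here only to illustrate the scaling), the uncertainty on the average falls as

\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}},

so quadrupling the data sample halves the statistical uncertainty.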
Systematic uncertainties, on the other hand, arise from many sources and may not cause a spread in results when experiments are repeated, but merely shift them away from the true value. Accumulating more data usually does not reduce the magnitude of a systematic effect. As a result, estimating systematic uncertainties typically requires much more effort than estimating statistical ones, and more personal judgement and skill is involved. Furthermore, the statistical uncertainties of different analyses are usually independent; this is often not so for systematics.
In particle-physics analyses, many systematics are related to detector and analysis effects. Examples include trigger efficiency; jet energy scale and resolution; identification of different particle types; and the strength of backgrounds and their distributions. There are also theoretical uncertainties which, as well as affecting predicted values for comparison with measured ones, can also influence the experimental variables extracted from the data. Another systematic comes from the intensity of accelerator beams (the integrated luminosity at the LHC for example), which is likely to be correlated for the various measurements made using the same beams.
At the LHC, it is in analyses with large amounts of data that systematics are likely to be most relevant. For example, a measurement of the mass of the W boson published by the ATLAS collaboration in 2018, based on a sample of 14 million W-boson decays, had a statistical uncertainty of 7 MeV but a systematic uncertainty of 18 MeV.
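If the two contributions are combined in quadrature, as is standard when they are independent, the total uncertainty on that measurement is

\sqrt{7^{2} + 18^{2}}\ \mathrm{MeV} \approx 19\ \mathrm{MeV},

so the systematic component alone dominates the overall error budget.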
PHYSTAT-Systematics
Two big issues for systematics are how the magnitudes of the different sources are estimated, and how they are then incorporated in the analysis. The PHYSTAT-Systematics meeting concentrated on the latter, as it was thought that this was more likely to benefit from the presence of statisticians – a powerful feature of the PHYSTAT series, which started at CERN in 2000.
The 20 talks fell into three categories. The first comprised those devoted to analyses in different areas of particle physics: the LHC experiments; neutrino-oscillation experiments; dark-matter searches; and flavour physics. A large amount of relevant information was discussed, with interesting differences between the separate sub-fields of particle physics. For example, in dark-matter searches, upper limits are sometimes set using Yellin’s Maximum Gap method when the expected background is low, or by using Power Constrained Limits, whereas these tend not to be used in other contexts.
The second group followed themes: theoretical systematics; unfolding; mis-modelling; an appeal for experiments to publish their likelihood functions; and some of the many aspects that arise in using machine learning (where the machine-learning process itself can result in a systematic, and the increased precision of a result should not be at the expense of accuracy).
Finally, there was a series of talks and responses by statisticians. The November event saw the largest number of statisticians at any PHYSTAT meeting, and the efforts that they made to understand our intricate analyses and the statistical procedures that we use were much appreciated. It was valuable to have insights from a different viewpoint on the largely experimental talks. David van Dyk, for instance, emphasised the conceptual and practical differences between simply using the result of a subsidiary experiment’s estimate of a systematic to assess its effect on a result, and using the combined likelihood function for the main and subsidiary measurements. Also, in response to talks about flavour physics and neutrino-oscillation experiments, attention was drawn to the growing impact in cosmology of non-parametric, likelihood-free (simulation-based) and Bayesian methods. Likelihood-free methods came up again in response to a modelling talk based on LHC-experiment analyses, and the role of risk estimation was emphasised by the statisticians. Such suggestions for alternative statistical strategies open the door to further discussions about the merits of new ideas in particular contexts.
A novel feature of this remote meeting was that the summary talks were held a week later, to give speakers Nick Wardle and Sara Algeri more time. In her presentation, Algeri, a statistician, called for improved interaction between physicists and statisticians in dealing with these interesting issues.
Overall, the meeting was a good step on the path towards having a systematic approach to systematics. Systematics is an immense topic, and it was clear that one meeting spread over four afternoons was not going to solve all the issues. Ongoing PHYSTAT activities are therefore planned, and the organisers welcome further suggestions.
The 11th Higgs Hunting workshop took place remotely from 20 to 22 September 2021, with more than 300 registered participants engaging in lively discussions about the most recent results in the Higgs sector. ATLAS and CMS presented results based on the full LHC Run-2 dataset (up to 140 fb⁻¹) recorded at 13 TeV. While all results remain compatible with Standard Model expectations, the precision of the measurements benefited from statistical uncertainties more than three times smaller with the 13 TeV data than in previous LHC results at 7 and 8 TeV. This also brought into sharp relief the role of systematic uncertainties, which in some cases are becoming dominant.
The status of theory improvements and phenomenological interpretations, such as those from effective field theory, were also presented. Highlights included the Higgs pair-production process, which is particularly challenging at the LHC due to its low rate. ATLAS and CMS showed greatly improved sensitivity in various final states, thanks to improvements in analysis techniques. Also shown were results on the scattering of weak vector bosons, a process that is strongly related to the Higgs sector, highlighting large improvements from both the larger datasets and the higher collision energy available in Run 2.
Several searches for phenomena beyond the Standard Model – in particular for additional Higgs bosons – were presented. No significant excesses have yet been found.
The historical talk “The LHC timeline: a personal recollection (1980–2012)” was given by Luciano Maiani, former CERN Director-General, and concluding talks were given by Laura Reina (Florida) and Paolo Meridiani (Rome). A further highlight was the theory talk by Nathaniel Craig, who discussed the progress being made in addressing six open questions. Does the Higgs boson have a size? Does it interact with itself? Does it mediate a Yukawa force? Does it fulfil the naturalness strategy? Does it preserve causality? And does it realise electroweak symmetry?
The next Higgs Hunting workshop will be held in Orsay and Paris from 12 to 14 September 2022.
In the gold-medal category, David Deutsch of the University of Oxford has been awarded the Isaac Newton Prize “for founding the discipline named quantum computation and establishing quantum computation’s fundamental idea, now known as the ‘qubit’ or quantum bit”. In the same category, Ian Chapman received the Richard Glazebrook Prize “for outstanding leadership of the UK Atomic Energy Authority and the world’s foremost fusion research and technology facility, the Joint European Torus, and the progress it has delivered in plasma physics, deuterium–tritium experiments, robotics and new materials”.
Among this year’s silver-medal recipients, experimentalist Mark Lancaster of the University of Manchester earned the James Chadwick Prize “for distinguished, precise measurements in particle physics, particularly of the W boson mass and the muon’s anomalous magnetic moment”. Michael Bentley (University of York) received the Ernest Rutherford Prize for his contributions to the understanding of fundamental symmetries in atomic nuclei, while Jerome Gauntlett (Imperial College London) received the John William Strutt Lord Rayleigh Prize for applications of string theory to quantum field theory, black holes, condensed matter physics and geometry.
Finally, in the bronze-medal category for early-career researchers, the Daphne Jackson Prize for exceptional contributions to physics education goes to accelerator physicist Chris Edmonds (University of Liverpool) in recognition of his work in improving access for the visually impaired, for example via the Tactile Collider project. And the Mary Somerville Prize for exceptional contributions to public engagement in physics goes to XinRan Liu (University of Edinburgh) for his promotion of UK research and innovation to both national and international audiences.
Acknowledging physicists who have contributed to the field generally, 2021 honorary Institute of Physics fellowships were granted to Lyn Evans (for sustained and distinguished contributions to, and leadership in, the design, construction and operation of particle accelerator systems, and in particular the LHC) and climate physicist Tim Palmer, a proponent of building a ‘CERN for climate change’, for his pioneering work exploring the nonlinear dynamics and predictability of the climate system.