The Photomultiplier Handbook

By A G Wright
Oxford University Press

This volume is a comprehensive handbook aimed primarily at those who use, design or build vacuum photomultipliers. Drawing on his 40 years of experience as a user and manufacturer, the author wrote it to fill perceived gaps in the existing literature.

Photomultiplier tubes (PMTs) are extremely sensitive light detectors, which multiply the current produced by incident photons by up to 100 million times. Since their invention in the 1930s, successive developments have improved their performance enormously. PMTs have been, and still are, extensively applied in physics experiments, and their evolution has been shaped by the requirements of the scientific community.

The first group of chapters sets the scene, introducing light-detection techniques and discussing in detail photocathodes – important components of PMTs – and optical interfaces. Since light generation and detection are statistical processes, and electron multiplication is itself statistical, a chapter is dedicated to the theory of statistical processes, which is essential when choosing, using or designing PMTs. The second part of the book deals with all of the important parameters that determine the performance of a PMT, each analysed thoroughly: gain, noise, background, collection and counting efficiency, dynamic range and timing. The effects of environmental conditions on performance are also discussed. The last part is devoted to instrumentation, in particular voltage dividers and electronics for PMTs.

Each chapter concludes with a summary and a comprehensive set of references. Three appendices provide additional useful information.

The book could become a valuable reference for researchers and engineers, and for students working with light sensors and, in particular, photomultipliers.

The Lazy Universe: An Introduction to the Principle of Least Action

By Jennifer Coopersmith
Oxford University Press

With contagious enthusiasm and a sense of humour unusual in this kind of literature, this book by Jennifer Coopersmith deals with the principle of least action or, to be more rigorous, of stationary action. As the author states, this principle defines the tendency of any physical system to seek out the “flattest” region of “space” – with appropriate definitions of the concepts of flatness and space. This is certainly not among the best-known laws of nature, despite its ubiquity in physics and having survived the advent of several scientific revolutions, including special and general relativity and quantum mechanics. The author makes a convincing case for D’Alembert’s principle (as it is often called) as a more insightful and conceptually fertile basis to understand classical mechanics than Newton’s laws. As she points out, Newton and D’Alembert asked very different questions, and in many cases variational mechanics, inspired by the latter, is more natural and insightful than working in Newton’s absolute space, but it can also feel like using a sledgehammer to crack a peanut.

The book starts with a general and very accessible introduction to the principle of least action. Then follows a long and interesting description of the developments that led to the principle as we know it today. The second half of the book delves into Lagrangian and Hamiltonian mechanics, while the final chapter illustrates the relevance of the principle for modern (non-classical) physics, although this theme is also touched upon several times in the preceding chapters.

An important caveat is that this is not a textbook: it should be seen as complementary to, rather than a replacement for, a standard introduction to the topic. For example, the Euler–Lagrange equation is presented but not derived and, in general, mathematical formulae are kept to a bare minimum in the main text. Coopersmith compensates for this with several thorough appendices, which range from classical textbook-like examples to original derivations. She makes a convincing critique of a famous argument by Landau and Lifshitz to demonstrate the dependence of kinetic energy on the square of the speed, and in one of the appendices she develops an interesting alternative explanation.

Although the author gives a lot of credit to The Variational Principles of Mechanics by Cornelius Lanczos (written in 1949 and re-edited in 1970), hers is a very different kind of book aimed at a different audience. Moreover, the author has developed several original and insightful analogies. For example, she remarks upon how smartphones know their orientation: instead of measuring positions and angles with respect to external (absolute) space, three accelerometers in the phone measure tiny motions in three directions of the local gravity field. This is reminiscent of the methods of variational mechanics.

Notation is consistent throughout the book and clearly explained, and footnotes are used wisely. With an unusual convention that is never made explicit, the author graphically warns the reader when a footnote is witty or humorous, or potentially perceived as far-fetched, by putting the text in parentheses.

My main criticism concerns the frequent references to distant chapters, which disrupt the logical flow. This is a book made for re-reading and, as a result, it might be difficult to follow for readers with little previous knowledge of the topic. Moreover, I was rather baffled by the author’s confession (repeated twice) that she was unable to find a quote by Feynman that she is sure she read in his Lectures. Nevertheless, these minor flaws do not diminish my general appreciation for Coopersmith’s very useful and well-written book.

The first part is excellent reading for anybody with an interest in the history and philosophy of science. I also recommend the book to students in physics and mathematics who are willing to dig deeper into this subject after taking classes in analytical mechanics, and I believe that it is accessible to any student in STEM disciplines. Practitioners in physics from any sub-discipline will enjoy a refresher and a different point of view that puts their tools of the trade in a broader context.

Exploring the physics case for a very-high-energy electron–proton collider

Rapid progress is being made in novel acceleration techniques. An example is the AWAKE experiment at CERN (CERN Courier January/February 2017 p8), which is currently in the middle of its first run demonstrating proton-driven plasma wakefield acceleration. This has inspired researchers to propose further applications of this novel acceleration scheme, among them a very-high-energy electron−proton (VHEeP) collider.

Simulations show that electrons can be accelerated up to energies in the TeV region over a length of only a kilometre using the AWAKE scheme. The VHEeP collider would use one of the LHC proton beams to drive a wakefield and accelerate electrons to an energy of 3 TeV over a distance of less than 4 km, then collide the electron beam with the LHC’s other proton beam to yield electron−proton collisions at a centre-of-mass energy of 9 TeV – 30 times higher than that of the only other electron−proton collider, HERA at DESY. Other applications of the AWAKE scheme with electron beams up to 100 GeV are being considered as part of the Physics Beyond Colliders study at CERN (CERN Courier November 2016 p28).
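
As a quick consistency check of these figures (a back-of-the-envelope sketch that neglects particle masses and any crossing angle, and takes HERA’s 27.6 GeV electron and 920 GeV proton beams for comparison):

\[ \sqrt{s} \;\simeq\; \sqrt{4\,E_e E_p} \;=\; \sqrt{4 \times 3\ \mathrm{TeV} \times 7\ \mathrm{TeV}} \;\approx\; 9.2\ \mathrm{TeV} \]

roughly 30 times HERA’s \( \sqrt{4 \times 0.0276\ \mathrm{TeV} \times 0.92\ \mathrm{TeV}} \approx 0.32\ \mathrm{TeV} \).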

Of course, it’s very early days for AWAKE. Currently the scheme offers instantaneous luminosities for VHEeP of just 10²⁸–10²⁹ cm⁻² s⁻¹, mainly due to the need to refill the proton bunches in the LHC once they have been used as wakefield drivers. Various schemes to increase the luminosity are under study, but for now the physics case of a VHEeP collider with very high energy but moderate luminosities is being considered. Motivated by these ideas, a workshop called Prospects for a very high energy ep and eA collider took place on 1–2 June at the Max Planck Institute for Physics in Munich to discuss the VHEeP physics case.

Electron−proton scattering can be characterised by the variables Q² (the squared four-momentum transfer carried by the exchanged boson) and x (the fraction of the proton’s momentum carried by the struck parton), the reaches of which are extended by a factor of 1000 to high Q² and to low x. The energy dependence of hadronic cross-sections at high energies, such as the total photon−proton cross-section, which has synergy with cosmic-ray physics, can be measured, allowing QCD and the structure of matter to be better understood in a region where the effects are completely unknown. With values of x down to 10⁻⁸ expected for Q² > 1 GeV², effects of saturation of the structure of the proton will be observed and searches at high Q² for physics beyond the Standard Model will be possible, most significantly through increased sensitivity to the production of leptoquarks.
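
For orientation, the quoted reach in x follows directly from the kinematics (a sketch using the standard deep-inelastic relation Q² = sxy, with inelasticity y ≤ 1 and √s = 9 TeV):

\[ x \;=\; \frac{Q^{2}}{s\,y} \;\gtrsim\; \frac{Q^{2}}{s} \;\approx\; \frac{1\ \mathrm{GeV}^{2}}{(9000\ \mathrm{GeV})^{2}} \;\approx\; 1.2\times10^{-8} \]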

Pioneer of applied superconductivity: Henri Desportes 1933–2017

It is with great sadness that we announce the death of Henri Desportes, at the age of 84, on 24 September in the village of Gif-sur-Yvette, France. He was the head of the CEA Saclay department STCM until his retirement in the mid-1990s. From the 1960s onwards he was a pioneer of applied superconductivity and rapidly became an internationally recognised expert in the development of numerous accelerator and detector magnet systems for high-energy physics.

In particular, Desportes contributed to the creation of the first superconducting magnets for many experimental programmes, including: polarised targets (HERA, installed at CERN and then in Protvino); the 15-foot bubble chamber at Argonne National Laboratory in the US; the magnet of the CERN hybrid spectrometer bubble chamber in 1972; the first thin-walled solenoid, CELLO, in 1978 at DESY; and the solenoid for the ALEPH experiment at LEP in 1986. His early participation in the genesis and design of the large magnets for the CMS and ATLAS detectors for the LHC should also not be forgotten.

Desportes supervised a great deal of work at Saclay on the development of innovative superconducting magnets with a wide range of scientific, technical and medical applications. He was the main initiator of new techniques using indirect helium cooling, the stabilisation of superconductors by aluminium co-extrusion and externally supported coils. Henri worked on all of these subjects with some of the great names in physics. It is partly thanks to him that Saclay has been involved in most of the magnets for large detectors built in Europe since the early 1970s. For this work he received a prestigious IEEE Council on Superconductivity Award in 2002.

We will remember his courtesy, his humour and his unfailing involvement in these flagship projects that have contributed greatly to physics experiments and to several fundamental discoveries.

Baby MIND takes first steps

In mid-October, a neutrino detector that was designed, built and tested at CERN was loaded onto four trucks to begin a month-long journey to Japan. Once safely installed at the J-PARC laboratory in Tokai, the “Baby MIND” detector will record muon neutrinos generated by beams from J-PARC and play an important role in understanding neutrino oscillations at the T2K experiment.

Weighing 75 tonnes, Baby MIND (Magnetised Iron Neutrino Detector) is bigger than its name suggests. It was initiated in 2015 as part of the CERN Neutrino Platform (CERN Courier July/August 2016 p21) and was originally conceived as a prototype for a 100 kt detector for a neutrino factory, specifically for muon-track reconstruction and charge-identification efficiency studies on a beamline at CERN (a task defined within the earlier AIDA project). Early in the design process, however, it was realised that Baby MIND was just the right size to be installed alongside the WAGASCI experiment located next to the near detectors for the T2K experiment, 280 m downstream from the proton target at J-PARC.

T2K studies the oscillation of muon (anti)neutrinos, especially their transformation into electron (anti)neutrinos, on their 295 km-long journey from J-PARC on the east coast of Japan to Kamioka on the other side of the island. The experiment discovered electron-neutrino appearance in a muon-neutrino beam in 2013 and earlier this year reported a two-sigma hint of CP violation by neutrinos, which will be explored further during the next eight years. Another major current target is to remove the ambiguity affecting the measurement of the neutrino mixing angle θ₂₃.

Baby MIND will help in this regard by precisely tracking and identifying muons produced when muon neutrinos from the T2K beamline interact with the WAGASCI detector. This will allow the ratio of cross-sections in water and plastic scintillator (the active material in WAGASCI) to be determined, helping researchers understand energy-reconstruction biases that affect neutrino fluxes and cross-sections and depend on the target nucleus. “Besides the water-to-scintillator ratio, the interest of the experiment is to measure a slightly higher-energy beam and compare the energy distribution (simply reconstructed from the muon angle and momentum, that Baby MIND measures) for the various off-axis positions relevant to the T2K and NOvA beams,” says Baby MIND spokesperson Alain Blondel.

Since its approval in December 2015, the Baby MIND collaboration – comprising CERN, the Institute for Nuclear Research of the Russian Academy of Sciences, and the universities of Geneva, Glasgow, Kyoto, Sofia, Tokyo, Uppsala, Valencia and Yokohama – has designed, prototyped, constructed and tested the Baby MIND apparatus, which includes custom-designed magnet modules, electronics, scintillator sensors and support mechanics.

Significant departure

The magnet modules were the responsibility of CERN, and mark a significant departure from traditional magnetised-iron neutrino detectors, which have large coils threaded through the entire iron mass. Each of the 33 two-tonne Baby MIND iron plates is magnetised by its own aluminium coil, a feature imposed by access constraints in the shaft at J-PARC and resulting in a highly optimised magnetic field in the tracking volume. Between them, plastic scintillator slabs embedded with wavelength-shifting fibres transmit light produced by the interactions of ionising particles to silicon photomultipliers.

The fully assembled Baby MIND detector was qualified with cosmic rays prior to tests on a beamline at the experimental zone of CERN’s Proton Synchrotron in the East Area during the summer of this year, and analyses showed the detector to be working as expected. First physics data from Baby MIND are expected in 2018. “That new systems for the Baby MIND were designed, assembled and tested on a beamline in a relatively short period of time (around two years) is a great example of people coming together and optimising the detector by using the latest design tools and benefiting from the pool of experience and infrastructures available at CERN,” says Baby MIND technical co-ordinator Etam Noah.

Majorana neutrinos remain elusive

Researchers at the Cryogenic Underground Observatory for Rare Events (CUORE), located at Gran Sasso National Laboratories (LNGS) in Italy, have reported the latest results in their search for neutrinoless double beta-decay based on CUORE’s first full data set. This exceedingly rare process, predicted to occur – if it occurs at all – less than about once every 10²⁶ years in a given nucleus, involves two neutrons in an atomic nucleus simultaneously decaying into two protons with the emission of two electrons and no neutrinos. This is only possible if neutrinos and antineutrinos are identical or “Majorana” particles, as posited by Ettore Majorana 80 years ago, such that the two neutrinos from the decay cancel each other out.

The discovery of neutrinoless double beta-decay (NDBD) would demonstrate that lepton number is not a symmetry of nature, perhaps playing a role in the observed matter–antimatter asymmetry in the universe, and constitute firm evidence for physics beyond the Standard Model. Following the discovery two decades ago that neutrinos have mass (a necessary condition for them to be Majorana particles), several experiments worldwide are competing to spot this exotic decay using a variety of techniques and different NDBD candidate nuclei.

CUORE is a tonne-scale cryogenic bolometer comprising 19 copper-framed towers that each house a matrix of 52 cube-shaped crystals of highly purified tellurium dioxide (natural tellurium contains about 34% tellurium-130). The detector array, which has been cooled below a temperature of 10 mK and is shielded from cosmic rays by 1.4 km of rock and thick lead sheets, was designed and assembled over a 10-year period. Following initial results in 2015 from a CUORE prototype containing just one tower, the full detector with 19 towers was cooled down in the CUORE cryostat one year ago and the collaboration has now released its first publication, submitted to Physical Review Letters, with much higher statistics. The large volume of detector crystals greatly increases the likelihood of recording an NDBD event during the lifetime of the experiment.

Based on around seven weeks of data-taking, interleaved with an intense programme of detector commissioning from May to September 2017 and corresponding to a total tellurium exposure of 86.3 kg·yr, CUORE finds no sign of NDBD, placing a lower limit on the NDBD half-life of tellurium-130 of 1.5 × 10²⁵ years (90% C.L.). This is the most stringent limit to date on this decay, says the team, and suggests that the effective Majorana neutrino mass is less than 140−400 meV, where the large range results from the nuclear matrix-element estimates employed. “This is the first preview of what an instrument this size is able to do,” says CUORE spokesperson Oliviero Cremonesi of INFN. “Already, the full detector array’s sensitivity has exceeded the precision of the measurements reported in April 2015 after a successful two-year test run that enlisted one detector tower.”
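
The quoted mass range follows from the standard conversion between half-life and effective Majorana mass, sketched here schematically (G is a phase-space factor, M the nuclear matrix element and m_e the electron mass; the spread of calculated matrix elements produces the 140−400 meV interval):

\[ \left(T^{0\nu}_{1/2}\right)^{-1} \;=\; G^{0\nu}\,\bigl|M^{0\nu}\bigr|^{2}\,\frac{\langle m_{\beta\beta}\rangle^{2}}{m_{e}^{2}} \]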

Over the next five years CUORE will collect around 100 times more data. Combined with searches using other isotopes, this will shrink the possible hiding places of Majorana neutrinos much further.

EU project lights up X-band technology

Advanced linear-accelerator (linac) technology developed at CERN and elsewhere will be used to develop a new generation of compact X-ray free-electron lasers (XFELs), thanks to a €3 million project funded by the European Commission’s Horizon 2020 programme. Beginning in January 2018, “CompactLight” aims to design the first hard XFEL based on 12 GHz X-band technology, which originated from research for a high-energy linear collider. A consortium of 21 leading European institutions, including Elettra, CERN, PSI, KIT and INFN, in addition to seven universities and two industry partners (Kyma and VDL), is working to achieve this ambitious goal within the three-year duration of the recently awarded grant.

X-band technology, which provides accelerating gradients of 100 MV/m and above in a highly compact device, is now a reality. This is the result of many years of intense R&D carried out at SLAC (US) and KEK (Japan), for the former NLC and JLC projects, and at CERN in the context of the Compact Linear Collider (CLIC). This pioneering technology has also been validated at the Elettra and PSI laboratories.

XFELs, the latest generation of light sources based on linacs, are a particularly suitable application for high-gradient X-band technology. Following decades of growth in the use of synchrotron X-ray facilities to study materials across a wide spectrum of sciences, technologies and applications, XFELs (as opposed to circular light sources) are capable of delivering high-intensity photon beams of unprecedented brilliance and quality. This provides novel ways to probe matter and allows researchers to make “movies” of ultrafast biological processes. Currently, three XFELs are up and running in Europe – FERMI@Elettra in Italy and FLASH and FLASH II in Germany, which operate in the soft X-ray range – while two are under commissioning: SwissFEL at PSI and the European XFEL in Germany (CERN Courier July/August 2017 p18), which operate in the hard X-ray region. Yet the demand for such high-quality X-rays is large, as the field still has great and largely unexplored potential for science and innovation – potential that can be unlocked if the linacs that drive the X-ray generation can be made smaller and cheaper.

This is where CompactLight steps in. While most of the existing XFELs worldwide use conventional 3 GHz S-band technology (e.g. LCLS in the US and PAL in South Korea) or superconducting 1.3 GHz structures (e.g. European XFEL and LCLS-II), others use newer designs based on 6 GHz C-band technology (e.g. SACLA in Japan), which increases the accelerating gradient while reducing the linac’s length and cost. CompactLight gathers leading experts to design a hard-X-ray facility beyond today’s state of the art, using the latest concepts for bright-electron photo-injectors, very-high-gradient X-band structures operating at frequencies of 12 GHz, and innovative compact short-period undulators (devices that produce an alternating magnetic field along which relativistic electrons are deflected to produce synchrotron X-rays). Compared with existing XFELs, the proposed facility will benefit from a lower electron-beam energy (due to the enhanced undulator performance), be significantly more compact (as a consequence both of the lower energy and of the high-gradient X-band structures), have lower electrical power demand and a smaller footprint.
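
To get a feel for the gain in compactness (a rough sketch with an illustrative 6 GeV beam energy, which is not a figure taken from the project), the active length of a linac scales as the beam energy divided by the accelerating gradient:

\[ L_{\mathrm{active}} \;\approx\; \frac{E_{\mathrm{beam}}}{G} \;=\; \frac{6\ \mathrm{GeV}}{100\ \mathrm{MV/m}} \;=\; 60\ \mathrm{m} \]

compared with roughly 300 m of structure at a typical 20 MV/m S-band gradient, before overheads such as focusing and diagnostics are added.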

Success for CompactLight will have a much wider impact: not just affirming X-band technology as a new standard for accelerator-based facilities, but advancing undulators to the next generation of compact photon sources. This will facilitate the widespread distribution of a new generation of compact X-band-based accelerators and light sources, with a large range of applications including medical use, and enable the development of compact cost-effective X-ray facilities at national or even university level across and beyond Europe.

First cosmic-ray results from CALET on the ISS

The CALorimetric Electron Telescope (CALET), a space mission led by the Japan Aerospace Exploration Agency with participation from the Italian Space Agency (ASI) and NASA, has released its first results concerning the nature of high-energy cosmic rays.

Having docked with the International Space Station (ISS) on 25 August 2015, CALET is carrying out a full science programme with long-duration observations of high-energy charged particles and photons coming from space. It is the second high-energy experiment operating on the ISS following the deployment of AMS-02 in 2011. During the summer of 2017 a third experiment, ISS-CREAM, joined these two. Unlike AMS-02, CALET and ISS-CREAM have no magnetic spectrometer and therefore measure the inclusive electron and positron spectrum. CALET’s homogeneous calorimeter is optimised to measure electrons, and one of its main science goals is to measure the detailed shape of the electron spectrum.

Due to the large radiative losses during their travel in space, high-energy cosmic electrons are expected to originate from regions relatively close to Earth (of the order of a few thousand light-years). Yet their origin is still unknown. The shape of the spectrum and the anisotropy in the arrival direction might contain crucial information as to where and how electrons are accelerated. It could also provide a clue to possible signatures of dark matter – for example, the presence of a peak in the spectrum might tell us about a possible dark-matter decay or annihilation with an electron or positron in the final state – and shed light on the intriguing electron and positron spectra reported by AMS-02 (CERN Courier December 2016 p26).

To pinpoint possible spectral features on top of the overall power-law energy dependence of the spectrum, CALET was designed to measure the energy of the incident particle with very high resolution and with a large proton rejection power, well into the TeV energy region. This is provided by a thick homogeneous calorimeter preceded by a high-granularity imaging pre-shower, giving a total thickness of 30 radiation lengths at normal incidence. The calibration of the two instruments is the key to controlling the energy scale, which is why CALET – a CERN-recognised experiment – performed several calibration tests at CERN.

The first data from CALET concern a measurement of the inclusive electron and positron spectrum in the energy range from 10 GeV to 3 TeV, based on about 0.7 million candidates (1.3 million in full acceptance). Above an energy of 30 GeV the spectrum can be fitted with a single power law with a spectral index of –3.152±0.016. A possible structure observed above 100 GeV requires further investigation with increased statistics and refined data analysis. Beyond 1 TeV, where a roll-off of the spectrum is expected and low statistics is an issue, electron data are now being carefully analysed to extend the measurement. CALET has been designed to measure electrons up to around 20 TeV and hadrons up to an energy of 1 PeV.
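
In other words, above 30 GeV the measured flux is consistent with a single form (a schematic restatement of the fit quoted above):

\[ \frac{\mathrm{d}\Phi}{\mathrm{d}E} \;\propto\; E^{-\gamma}, \qquad \gamma \;=\; 3.152 \pm 0.016 \]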

CALET is a powerful space observatory with the ability to identify cosmic nuclei from hydrogen to elements heavier than iron. It also has a dedicated gamma-ray-burst instrument (CGBM) that so far has detected bursts at an average rate of one every 10 days in the energy range of 7 keV–20 MeV. The search for electromagnetic counterparts of gravitational waves (GWs) detected by the LIGO and Virgo observatories proceeds around the clock thanks to a special collaboration agreement with LIGO and Virgo. Upper limits on X-ray and gamma-ray counterparts of the GW151226 event were published and further research on GW follow-ups is being carried out. Space-weather studies related to relativistic electron precipitation (REP) from the Van Allen belts have also been released.

With more than 500 million triggers collected so far and an expected extension of the observation time on the ISS to five years, CALET is likely to produce a wealth of interesting results in the near future.

The twists and turns of a successful year for the LHC

On 11 December, the Large Hadron Collider (LHC) is scheduled to complete its 2017 proton-physics run and go into standby for its winter shutdown and maintenance programme. With the LHC having surpassed this year’s integrated-luminosity target of 45 fb⁻¹ delivered to both the ATLAS and CMS experiments 19 days before the end of the run, 2017 marks another successful year for the machine. September 2017 also saw the LHC’s total integrated luminosity since 2010 pass the milestone of 100 fb⁻¹ per high-luminosity experiment (see panel). But the year has not been without its challenges, demonstrating once again the quirks and unprecedented complexities involved in operating the world’s highest-energy collider. The story of the LHC’s 2017 run unfolded in three main parts.

Following a longer than usual technical stop that began at the end of 2016, the LHC was cooled to its operating temperature in April and took first beam towards the end of the month, with first stable beams declared about four weeks later. Physics got off to a great start, with an impressively efficient ramp-up reaching 2556 bunches per beam and a peak luminosity of 1.6 × 10³⁴ cm⁻² s⁻¹ in very good time.

Careful examination

However, from the start of the run, for some unknown reason the beams were occasionally dumped with a particular signature of localised beam loss and the onset of a fast-beam instability. The cause of the premature dumps was traced to a region called 16L2, referring to the sixteenth LHC half-cell to the left of point 2 (each half-cell comprises three dipoles, one quadrupole and associated corrector magnets). The hypothesis was that the problems were caused by the presence of frozen gas in the beam pipes in this region; air had perhaps entered during the cool down and had become trapped on and around the beam screen. All available diagnostics were deployed and careful examination of the beam losses in the region revealed steady-state losses, which occasionally increased rapidly followed by a very fast beam instability. The issue appeared to respond positively to a non-zero field in a local orbit corrector, and this allowed the LHC teams to establish more-or-less steady operation by careful control of the corrector in question.

To ameliorate and understand the situation better, an attempt was made to flush the gas supposedly condensed on the beam screen onto the cold mass of the magnets. To this end the beam screen around 16L2 was warmed up to around 80 K with careful monitoring of the vacuum conditions. Unfortunately, the manoeuvre was not a success: the 16L2 dumps became more frequent and many subsequent fills were lost to the problem. By this stage, electron-cloud effects had been identified as a possible co-factor in driving the instability, prompting the teams to change the bunch configuration to the so-called 8b4e scheme in which gaps are introduced into the bunch configuration. This significantly reduced the rate of 16L2 losses and allowed steady and productive running to be established by late summer.

New heights

Performance was further improved by a reduction in the “beta-star” parameter following a technical stop in the middle of September. This move exploited the excellent aperture, collimation-system performance, stability, and optics understanding of the LHC and benefited from many years of experience operating the machine. Working with an optimised 8b4e scheme and a beta-star of 30 cm resulted in CMS and ATLAS reaching their event pile-up limit, forcing the deployment of luminosity levelling as is already routine in LHCb and ALICE. The peak-levelled luminosity under these running conditions is around 1.5 × 10³⁴ cm⁻² s⁻¹, compared to more than 2 × 10³⁴ cm⁻² s⁻¹ without levelling. The beam availability in the latter part of the year has been truly excellent and integrated-luminosity delivery reached new heights. One day in October was also dedicated to operation with xenon beams, taking advantage of their presence in the SPS for the North Area’s fixed-target programme (CERN Courier November 2017 p7).
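
For a rough sense of why levelling becomes necessary, the mean pile-up per bunch crossing follows from the luminosity (a back-of-the-envelope sketch with assumed rather than quoted numbers: an inelastic cross-section of about 80 mb, roughly 1900 colliding bunch pairs in the 8b4e scheme and the LHC revolution frequency of 11.245 kHz):

\[ \mu \;=\; \frac{\mathcal{L}\,\sigma_{\mathrm{inel}}}{n_{b}\,f_{\mathrm{rev}}} \;\approx\; \frac{1.5\times10^{34}\ \mathrm{cm^{-2}\,s^{-1}} \times 8\times10^{-26}\ \mathrm{cm^{2}}}{1900 \times 11\,245\ \mathrm{s^{-1}}} \;\approx\; 56 \]

This also shows why the 8b4e scheme, with fewer colliding bunches, pushes up the pile-up at a given luminosity.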

Following a period of machine development and some special physics runs, the winter maintenance break is due to begin on 11 December. The year-end technical stop will see the usual extensive programme of maintenance and consolidation for both the machine and experiments. It will also see sector 1-2 warmed up to room temperature to fully resolve the 16L2 issue. Then, in the spring of 2018, the LHC will begin a final 13 TeV run before a long shutdown of two years to make key preparations for its high-luminosity upgrade.

A century of femtobarns

On 28 September, the LHC passed a high-energy proton–proton collision milestone: the accumulation of 100 fb⁻¹ since its inception, equivalent to around 10¹⁵ collisions in each of the ATLAS and CMS experiments. The LHC started physics operations in late 2009, and by the middle of 2012 had delivered enough integrated luminosity to enable physicists to discover the Higgs boson. After the first LHC long shutdown in 2013 and 2014, the LHC was restarted in 2015 at higher energy, paving the way for 2016, another record production year that notched up 40 fb⁻¹. Following this success, the target for 2017 and 2018 combined was raised to 90 fb⁻¹, which, despite some challenges this year, looks to be well within reach.

Novel charmonium spectroscopy at LHCb

The LHCb collaboration has published the result of precision mass and width measurements of the χc1 and χc2 charmonium states, performed for the first time using the newly discovered decays χc1 → J/ψμ⁺μ⁻ and χc2 → J/ψμ⁺μ⁻. Previously it has not been possible to make precision measurements of these states at a particle collider, due to the absence of a fully charged final state with a large enough decay rate; the new decay modes now allow powerful comparisons with results from earlier fixed-target experiments.

The dominant decay mode of such charmonium states is χc1,2 → J/ψγ. However, the precise measurement of the energy of the final-state photon, γ, is experimentally very challenging, particularly in the harsh environment of a hadron collider such as the LHC. For this reason, such measurements were only possible at dedicated experiments that exploited antiproton beams annihilating on fixed hydrogen targets to form χc1 states directly. By tuning the energy of the impinging antiprotons, it was possible to scan the invariant mass of the states with high precision. But the obvious difficulties in building such dedicated facilities have meant that precision mass measurements were only performed by two experiments: E760 and E835 at Fermilab, the latter being an upgrade of the former.

In these new Dalitz decays, χc1,2 → J/ψμ⁺μ⁻, where the J/ψ meson subsequently decays to another μ⁺μ⁻ pair, the final state is composed of four charged muons. These modes can therefore be triggered and reconstructed very efficiently by the LHCb experiment. The high precision of the LHCb spectrometer has already enabled several world-best mass measurements of heavy-flavour mesons and baryons, and now it has allowed the two narrow χc1 and χc2 peaks to be observed in the invariant J/ψμ⁺μ⁻ mass distribution with excellent resolution (see figure). The values of the masses of the two states, along with the natural width of the χc2, have been determined with a similar precision to, and in good agreement with, those obtained by E760 and E835.

This new measurement opens an avenue to precision studies of the properties of χc mesons at the LHC, more than 40 years after the discovery of the first charmonium state, the J/ψ meson. It will allow precise tests of the production mechanisms of charmonium states down to zero transverse momentum, providing information hardly accessible with other experimental techniques. In addition to the charmonium system, these observations are expected to have important consequences for the wider field of hadron spectroscopy at the LHC. With larger data samples, studies of the Dalitz decays of other heavy-flavour states, such as the exotic X(3872) and bottomonium states, will become possible. In particular, measurements of the properties of the X(3872) via a Dalitz decay may help to elucidate the nature of this enigmatic particle.
