CMS looks into the dark

Fig. 1.

A report from the CMS experiment

Dark energy and dark matter together make up about 95% of the universe, yet we do not know the origin, constituents, or dynamics (apart from gravity) of these substances. Various extensions of the Standard Model (SM) of particle physics predict the existence of new particles as dark-matter candidates. One such model posits the existence of “dark quarks” that are charged under a new QCD-like force. Like normal SM quarks, dark quarks are only found in bound states (such as the dark proton, a stable dark-matter candidate resembling the ordinary proton) and they can only interact with SM quarks via a mediator particle. The similarity between the mechanisms of hadron production in dark and SM QCD would provide a natural explanation for the puzzling closeness of the observed energy densities of dark and baryonic matter.

In an attempt to explain the nature of dark matter, the existence of dark quarks was recently investigated by the CMS collaboration. If dark-QCD mediators were produced in pairs in the CMS detector, their signature would be striking: each mediator particle would decay into one dark quark and one SM quark, both of which hadronise and produce multiple dark and SM pions, respectively. Dark pions can travel sizable distances in the detector before decaying into detectable SM particles. Therefore, the signature would be two ordinary jets originating from the proton–proton collision, and two “emerging jets” composed of multiple neutral particles that decay at a significant distance away from their origin. Signal events could exhibit large missing transverse momentum from decays beyond the acceptance of the CMS detector.

Fig. 2.

To identify emerging jets, the CMS analysis relies on two discriminants that quantify the displacement of a jet’s constituents from the collision point. One is based on the impact parameters of the tracks associated with the jet; the other is the fraction of a jet’s energy carried by tracks compatible with the primary vertex. Figure 1 shows an event display for an emerging-jet candidate, with two jets containing multiple displaced vertices and consequently tagged by the discriminants. Substantial background is expected from the decays of B mesons and baryons, whose long lifetimes make them more likely to pass the displacement criteria. To model this background, the analysis derives flavour-dependent misidentification probabilities for jets.
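As a rough sketch of how such jet-level discriminants could be built (illustrative only – this is not the CMS implementation; the median choice, the 0.1 cm prompt-track threshold and the function name are assumptions made here for clarity):

```python
import numpy as np

def emerging_jet_discriminants(track_pt, track_ip, prompt_ip_cut=0.1):
    """Schematic jet-level displacement discriminants (illustrative only).

    track_pt      : transverse momenta (GeV) of the tracks associated with the jet
    track_ip      : transverse impact parameters (cm) of those tracks with
                    respect to the primary vertex
    prompt_ip_cut : impact parameter below which a track is treated as
                    compatible with the primary vertex (value chosen here
                    purely for illustration)
    """
    track_pt = np.asarray(track_pt, dtype=float)
    track_ip = np.abs(np.asarray(track_ip, dtype=float))

    # Discriminant 1: a measure of how displaced the jet's tracks are
    # (here simply the median impact parameter).
    median_ip = float(np.median(track_ip)) if track_ip.size else 0.0

    # Discriminant 2: fraction of the jet's track momentum carried by
    # prompt tracks; an emerging jet has a small prompt fraction.
    total_pt = track_pt.sum()
    prompt_pt = track_pt[track_ip < prompt_ip_cut].sum()
    prompt_fraction = prompt_pt / total_pt if total_pt > 0 else 1.0

    return median_ip, prompt_fraction

# Example: an ordinary jet (small impact parameters) vs an emerging-jet-like one
print(emerging_jet_discriminants([30, 20, 10], [0.01, 0.02, 0.005]))
print(emerging_jet_discriminants([30, 20, 10], [1.5, 0.8, 2.3]))
```

An ordinary jet returns a small median displacement and a prompt-momentum fraction close to one; an emerging jet shows the opposite pattern.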

This first dedicated search for the emerging jet signature explores a broad dark-QCD parameter space with mediator masses between 0.4 and 2 TeV, dark-pion masses between 1 and 10 GeV, and dark-pion proper decay lengths between 1 mm and 100 cm. The observed number of events in the CMS data is consistent with the background-only expectation, excluding mediator particles with masses of 400–1250 GeV for dark-pion proper decay lengths between 5 and 225 mm (figure 2). While new data are being collected, the quest for dark matter at the LHC is broadening its scope towards new signatures.

The day the world switched on to particle physics

CERN Control Centre

When Lyn Evans, project leader of the Large Hadron Collider (LHC), turned up for work at the CERN Control Centre (CCC) at 05:30 on 10 September 2008, he was surprised to find the car park full of satellite trucks. Normally a scene of calm, the facility had become the focus of global media attention, with journalists poised to capture the moment when the LHC switched on. Evans knew the media were coming, but not quite to this extent. A few hours later, as he counted down to the moment when the first beam made its way through the last of the LHC’s eight sectors, the CCC erupted in cheers – and Evans wasn’t even aware that his impromptu commentary was being beamed live to millions of people. “I thought I was commenting to others on the CERN site,” he recalls. The following weekend, he was walking in the nearby ski town of Megève when a stranger recognised him in the street.

Of all human endeavours that have captured the world’s attention, the events of 10 September 2008 are surely among the most bizarre. After all, this wasn’t something as tangible as sending a person to the Moon. At 10:28 local time on that clear autumn Wednesday, a bunch of subatomic particles made its way around a 27 km-long subterranean tube, and the spectacle was estimated to have reached an audience of more than a billion people. There were record numbers of hits to the CERN homepage, overtaking visits to NASA’s site, in addition to some 2500 television broadcasts and 6000 press articles on the day. The event was dubbed “first-beam day” by CERN and “Big Bang day” by the BBC, which had taken over a room in the CCC and devoted a full day’s coverage on Radio 4. Google turned its logo into a cartoon of a collider – such “doodles” are now commonplace, but it was a coup for CERN back then. It is hard to think of a bigger media event in science in recent times, and it launched particle physics, the LHC and CERN into mainstream culture.

It is all the more incredible that no collision data, and therefore no physics results, were scheduled that day; it was “simply” part of the commissioning period that all new colliders go through. When CERN’s previous hadron collider, the Super Proton Synchrotron, fired up in the summer of 1981, says Evans, there was just him and Carlo Rubbia in the control room. Even the birth of the Large Electron Positron collider in 1989 was a muted affair. The LHC was a different machine in a different era, and its birth offers a crash course in the communication of big-science projects.

News values

Fears that the LHC would create a planet-eating black hole were a key factor behind the enormous media interest, says Roger Highfield, who was science editor of the UK’s The Telegraph newspaper at the time. “I have no doubt that the public loved all the stuff about the hunt for the secrets of the universe, the romance of the Peter Higgs story and the deluge of superlatives about energy, vacuum and all that,” says Highfield. “But the LHC narrative was taken to a whole new level by the potty claim by doomsayers that it could create a black hole to swallow the Earth. When ‘the biggest and most complex experiment ever devised’ was about to be turned on, it made front-page news, with headlines like, ‘Will the world end on Wednesday?’”.

Journalists

The conspiracies were rooted in attempts by a handful of individuals to prevent the LHC from starting up in case its collisions produced a microscopic black hole – one of the outlandish models that the LHC was built to test. That the protons injected into the LHC that day had an energy far lower than that of the then-operational Tevatron collider in the US, and that collisions were not scheduled for weeks afterwards, didn’t seem to get in the way of a good story. Nor, for that matter, did CERN’s efforts to issue scientific reassurances. Indeed, when The Guardian’s science editor, Ian Sample, turned up at CERN on first-beam day, he expected to find protestors chained to the fence outside, or at least waving placards asking physicists not to destroy the planet. “I did not see a single protestor – and I looked for them,” he says. “And yet, inside the building, I remember one TV host doing a piece to camera on how the world might end when the machine switched on. It was a circus that the media played a massive part in creating. It was shameful and it made the media who seriously ran with those stories look like fools.”

The truth is the black-hole hype came long after the LHC had started to capture the public imagination. As the machine and its massive experiments progressed through construction in the early 2000s, the project’s scale and abstract scientific goals offered an appeal to wonder. Though designed to explore a range of phenomena at a new energy frontier, the LHC’s principal quarry, the Higgs boson, had a bite-sized description: the generator of mass. It also had a human angle – a real-life, white-haired Professor Higgs and a handful of other theorists waiting to see if their half-century-old prediction was right, and international teams of thousands working night and day to build the necessary equipment. Nobel laureate Leon Lederman’s 1993 book The God Particle, detailing the quest for the Higgs boson, added a supernatural dimension to the enterprise.

“I am confident that no editor-in-chief of any newspaper in the world truly understood the Higgs field, the meaning or significance of electroweak symmetry breaking, or how the Higgs boson fits into the picture,” continues Sample. “But what they did get was the appeal of hunting for a particle that in their minds explained the origin of mass. It is such an intriguing concept to imagine that we even need to explain the origin of mass. Isn’t it the case that matter just has mass, plain and simple? All of this, in addition to the sheer awe at the engineering and physics achievement, made for an enormously exotic and appealing story.”

There were also more practical reasons for LHC’s media extravaganza, notes Geoff Brumfiel, a reporter at Nature at the time and now a senior editor at National Public Radio in the US. The fact that pretty much every country and region on Earth had somebody working on the LHC meant that there was a local story for thousands of news outlets, he says, plus CERN’s status as a publicly funded institution made it possible for the lab to open up to the world. “There was also great visual appeal: the enormous, colourful detectors, deep underground, teeming with little scientists in hard hats – it just looked cool. That was hugely important for cable news, television documentary producers, etc.” In addition, says Brumfiel, something actually happened on first-beam day – there was something for journalists to see. “That’s always big in the news business. A big new machine was turning on and might or might not work. And when it worked there were lots of happy people to look at and hear.”

Strategy first

Despite the many external factors influencing LHC communications, the switch-on would never have had the huge reach that it did were it not for a dedicated communication strategy, says James Gillies, CERN’s head of communications at the time. It started as far back as 2000, when Dan Brown’s science-fiction novel Angels & Demons, about a plot to blow up the Vatican using antimatter stolen from CERN, was published. “Luckily for us, it didn’t sell, but it alerted us to the fact that the notion that CERN could be dangerous was bubbling up into popular culture,” says Gillies. A few years later, the BBC made a drama documentary called End Day, which examined a range of ways that humanity might not last the century – including a black hole being created at a particle accelerator. Then, when Dan Brown’s next book, The Da Vinci Code, became a bestseller, CERN realised that Angels & Demons would be next on people’s reading list – so it had better act. “That led to one of the most peculiar conversations that I’ve ever had with a CERN director general, and resulted in us featuring fact and fiction in Angels & Demons on the CERN website,” says Gillies. “Our traffic jumped by an order of magnitude overnight and we never looked back.” CERN later played a significant role in the screen adaptation of the book, and Sony Pictures included a short film about CERN in its Blu-ray release.

Scenes from first-beam day

The first dedicated LHC communications strategy was put in place in 2006. The perception of CERN as portrayed in End Day and Angels & Demons was so wide of the mark that it was laughable, says Gillies, so he took it as an opportunity to lead the conversation about CERN and be transparent and timely. In addition to actions such as working with science communicators in CERN Member States and beyond to organise national media visits for key journalists, he says, “the big idea is that we took a conscious decision to do our science in the public eye, to involve people in the adventure of research at the forefront of human knowledge”. Publicly fixing the date for first beam was a high-risk strategy, but it paid off. The scheduled LHC start-up exceeded the expectations of everyone involved. Both proton beams made a full turn around the machine and one beam was captured by the radio-frequency system, showing that it could be accelerated. For the thousands of people working on the LHC and its experiments, it marked the transition from 25 years of preparation to a new era of scientific discovery. But the terrain was about to get tougher.

Once the journalists had departed and the champagne bottles were stacked away, the LHC teams continued with the task of commissioning away from the spotlight, with a view to obtaining collisions as soon as possible. Then, a couple of days after first-beam day, a transformer powering part of the LHC’s cryogenic system failed, forcing a pause in commissioning during which the teams decided to test the last octant of the machine for high-current operations. While ramping the magnets towards 9.3 kA on 19 September, one of the LHC’s 10,000 superconducting-dipole interconnects failed, ultimately damaging roughly 400 m of the machine. Evans described the event, which set operations back by 14 months, as “a kick in the teeth”. But CERN recovered quickly (see “Lessons from the accelerator frontier“) and, today, Evans says that he is glad that the fault was discovered when it was. “It would have been a disaster had it happened five years in. As it was, we didn’t come under criticism. We were pushing the limits of technology.”

The timing of the incident was doubly fortuitous: the same week it took place, US investment bank Lehman Brothers filed for the largest bankruptcy in history, with other banks looking set to follow suit. The world might not have been consumed by a black hole, but the prospect of a distinctly more real financial Armageddon dominated the headlines that week.

To collisions and beyond

The coming to life of the LHC is a thrilling story, a scientific fairy-tale. From its long-awaited completion, to the tense sector-by-sector threading of its first beam in front of millions of people and the incident nine days later that temporarily ruined the party, the LHC finally arrived at a new energy frontier in November 2009 (achieving 1.18 TeV per beam). Its physics programme began in earnest a few months later, on 30 March 2010, at a collision energy of 7 and then 8 TeV. Barely two years later, the LHC produced its first major discovery – the Higgs boson, announced to a packed CERN auditorium on 4 July 2012 by the ATLAS and CMS collaborations and webcast around the world. The discovery was followed by the award of the 2013 Nobel Prize in Physics to Peter Higgs and François Englert. The CERN seminar was the first time that the pair had met, with cameras capturing Higgs wiping a tear from his eye as the significance of the event sunk in. Since 2015, the LHC has been operating at 13 TeV while notching up record levels of performance, and the machine is now being prepared for its high-luminosity upgrade (HL-LHC).

VIPs

Has the success of LHC communications set the bar too high? The CERN press office tracked a steady increase in the number of LHC-related articles in the period leading up to the switch-on, in addition to an increasing number of visits by the media and the public. Coverage peaked around September 2008, died down a little, then picked up again four years later as the drama of the Higgs-boson discovery started to unfold. When ATLAS and CMS announced the discovery, press coverage exceeded even that of first-beam day. Of the 10 most-read items on The Guardian website that day, says Sample, eight or nine were stories about the Higgs, even though there were plenty of other big news stories around. Why? “The absolute competence and dedication and hard work of those scientists and engineers was so refreshing compared to the crooks, bullies, liars and murderers that we write about every day,” he says. “Perhaps people enjoyed reading about something positive, about people doing astounding work, about something far bigger than the world they normally encounter in the news.”

Today, press coverage of the LHC remains higher than it was before the switch-on, with an average of 200 clippings per day worldwide. The number of media visits to CERN, having peaked in around 2008 and 2012, is now at the level that it was before the switch-on, corresponding to around 300 media outlets per year. The LHC’s life so far has also coincided with the explosion of social-media tools. CERN’s first ever tweet, on 7 August 2008, announced the date for first-beam day, and today the lab has more than two million Twitter followers – rising at a rate of around 1000 per day. During the announcement of the Higgs-boson discovery in 2012, CERN’s live tweets reached journalists faster than the press release and helped contribute to worldwide coverage of the news.

Framing the search for the Higgs boson as the LHC’s only physics goal was never the message that CERN intended to put out, but it’s the one that the media latched on to. Echoing others working in the media who were interviewed for this article, Brumfiel thinks that the LHC has largely left the public eye. In terms of the media, he says, “It’s a victim of its own success: it was designed to do one thing, and it’s done it.”

The challenge facing communications at CERN today is how to capitalise on the existing interest while constructing a new or updated narrative of exploration and discovery. After all, in terms of physics measurements, the LHC is only getting into its stride – having collected just 5% of its expected total dataset and with up to two decades of operations still to go. Although the LHC has not yet found any conclusive signs of physics beyond the Standard Model, it is clear from astronomical and other observations that such phenomena are out there, somewhere. In the absence of direct discoveries, identifying the new physics will be a hard slog involving ever more precise measurements of known particles – a much tougher sell to the public, even if it is all part of the same effort to uncover the basic laws of the universe.

Angels & Demons

“CERN has managed to build upon previous communication successes as the public is already interested, so they can simply strap a camera onto a drone, fly it around and a lot of people will happily watch!” says David Eggleton of the Science Policy Research Unit at the University of Sussex in the UK, who studies leadership and governance in major scientific projects such as the LHC. “But, just like with the scientists, the public is going to need something new and exciting to focus on – even if the pay-off is 10 years in the future, so it depends on how the laboratory wants to strategise – do they want to pitch HL-LHC as the next big machine or is it just going to be articulated as an upgrade with the FCC (Future Circular Collider) becoming the thing to capture the public’s imagination?”

Theoretical physicist and science populariser Sabine Hossenfelder of the Frankfurt Institute for Advanced Studies in Germany thinks the excitement surrounding the switch-on of the LHC has come back to haunt the field, going so far as to label the current situation in particle physics a “PR disaster”. Before the LHC’s launch in 2008, she says, some theorists expressed confidence that the collider would produce new particles besides the Higgs boson. “That hasn’t happened. The big proclamations came almost exclusively from theoretical physicists; CERN didn’t promise anything that they didn’t deliver. That is an important distinction, but I am afraid in the public perception the subtler differences won’t matter.”

Cultural icon

At least for now, and in some countries, the LHC has become embedded in popular culture. The term “hadron collider” is the new “rocket science” – a term dropped into commentary and public discourse to denote the pinnacle of human ingenuity. The LHC has inspired books, films, plays, art and, crucially, adverts – in which firms have used high-production visuals to associate their brands with the standards of the LHC. The number of applications for physics degrees, in the UK at least, soared around the time that the LHC switched on, and the event also launched the television career of ATLAS physicist Brian Cox, who went on to further engage a primed public. Annually, around 300,000 people apply to visit CERN, less than half of whom can be accommodated.

Press conference

If the communications surrounding the LHC have proved one thing, it is that there is an inherent interest among huge swathes of the global population in the substance of particle physics. Highfield, who is now director of external affairs at the Science Museum in London, sees this on a daily basis. “Although I think physicists would have liked to have seen more surprises, I know from my work at the Science Museum that the public has a huge appetite for smashing physics,” he says. In November 2013, the Science Museum launched Collider, an immersive exhibition that blended theatre, video and sound art with real artefacts from CERN to recreate a visit to the laboratory. The exhibition went on international tour, finishing in Australia in April 2017, having pulled in an audience of more than 600,000 people. “Yes, the public still cares about the quest to reveal the deepest secrets of the cosmos,” says Highfield.

From a communications perspective, the switch-on of the LHC proves the importance of a clear strategy, the rewards from taking risks, and the difficulty in keeping control of a narrative. For Evans, the LHC changed everything. “Of all the machines that I’ve worked on, never before has there been such interest,” he says. “Before the LHC, no one knew what you were talking about. Now, I can get into a cab in New York or speak to an immigration officer in Japan, and they say: oh, cool, you work at CERN?”.

LHCb tests consistency of unitarity triangle

Unitarity-triangle angle constraints

A report from the LHCb experiment

Since the beginning of the LHC physics programme in 2010, the LHCb collaboration has been working to drive down the uncertainty on the least-precisely measured angle of the unitarity triangle, γ. The unitarity triangle exists in the complex plane and its area is a measure of the amount of CP violation in the Standard Model. Mathematically, the triangle represents a requirement that the Cabibbo–Kobayashi–Maskawa (CKM) quark-mixing matrix is unitary, meaning that the number of quarks is conserved in weak interactions and that there are only three generations of quarks. If new physics exists and breaks these assumptions, it would show up as internal inconsistencies in the unitarity triangle – for example, the angles of the triangle might not add up to 180°. Checking the consistency of different measurements of the unitarity triangle is therefore an important test of the SM.
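In the standard notation, the triangle in question expresses the orthogonality (unitarity) condition between the first and third columns of the CKM matrix, with γ defined as one of its three angles; the sum of the angles equalling 180° is the SM expectation whose violation would signal new physics:

\[ V_{ud}V_{ub}^{*} + V_{cd}V_{cb}^{*} + V_{td}V_{tb}^{*} = 0, \qquad \gamma \equiv \arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right), \qquad \alpha + \beta + \gamma = 180^{\circ}. \]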

Experimentally, γ can be measured through the interference between b̅ → c̅ u s̅ and b̅ → u̅ c s̅ transitions. It is the only CKM angle that is easily accessible in tree-level processes and, as a result, it can be measured with negligible theoretical uncertainty. In the absence of new-physics effects at tree level, a precise measurement of γ can be compared with other observables related to the CKM matrix that are more likely to be affected by physics beyond the SM. Such comparisons are currently limited by the relatively large uncertainty on γ.

LHCb has recently made a model-independent study of the decay mode B± → DK± (where D can be a D0 or a D̅0), with the D meson being reconstructed via the decays D → KS0 π+ π− and D → KS0 K+ K−. This measurement is particularly important for determining γ, as it selects a single solution without ambiguities and with small uncertainty. Because the D meson undergoes a three-body decay, the distribution of events across the phase space (the Dalitz plot) carries information about the underlying amplitudes. And since the B± → DK± amplitudes depend on γ, it is possible to measure γ by comparing the distributions for B+ and B−. In practice, the distributions depend mainly on the amplitudes of the D decay, with only small CP-violating deviations introduced by γ. The measurement therefore demands a good understanding of the magnitudes of the D0 and D̅0 decay amplitudes, as well as the strong-phase differences between them, δD. The former comes from high-statistics calibration channels, and the latter from external measurements performed by the CLEO collaboration.
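Schematically, and in the notation commonly used for such Dalitz-plot analyses (not necessarily that of the LHCb paper), the interfering amplitudes can be written as

\[ A(B^{\mp} \to D K^{\mp}) \;\propto\; A_{D}(m_{-}^{2}, m_{+}^{2}) \,+\, r_{B}\, e^{\,i(\delta_{B} \mp \gamma)}\, A_{D}(m_{+}^{2}, m_{-}^{2}), \]

where A_D is the D0 → KS0 h+ h− decay amplitude as a function of the squared invariant masses m∓² of the KS0 h∓ pairs, rB is the magnitude ratio of the suppressed and favoured B-decay amplitudes, and δB is their relative strong phase. Comparing the B+ and B− Dalitz-plot distributions therefore gives access to γ.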

The new measurement uses 2 fb−1 of proton–proton collision data taken in 2015 and 2016, with signals of about 4100 B± → DK± decays for the more copious D → KS0 π+ π− mode and about 560 for D → KS0 K+ K−. LHCb found γ = (87 +11/−12)°, which is consistent with the previous world average, as well as measuring other decay parameters: rB = 0.087 +0.013/−0.014 and δB = (101 ± 11)°. This is the most precise determination of γ from a single analysis, and LHCb has performed several other measurements of γ, each providing different constraints. Their combination (γ = (74.0 +5.0/−5.8)°, see figure) dominates the current world average and allows increasingly precise tests for new physics by probing the internal consistency of the unitarity triangle.

Lessons from the accelerator frontier

Superconducting dipole magnet

When the Large Hadron Collider (LHC) started up a decade ago, on 10 September 2008, those of us who had been involved in designing and building the machine were extremely happy. I was in the CERN control centre that day, as was my predecessor, Romeo Perin, who first led the study for the LHC magnet design. Also present was Carlo Rubbia, without whose vision and battling spirit we would not have the LHC today, along with other former CERN Directors-General: Herwig Schopper, under whom initial discussions and workshops took place; Christopher Llewellyn-Smith, who got the LHC approved and secured international collaboration; Luciano Maiani, who took the decision to close LEP to make way for the LHC; and Robert Aymar, who saw the LHC to first beam.

That more than 300 journalists were present made it even more remarkable. CERN opened up to the world as it never had before and news of the event reached more than a billion people (see “The day the world switched on to particle physics“). But for many of us, our heads were already in the future, thinking about what was then simply called the LHC upgrade.

The first paper proposing a luminosity upgrade of the LHC was written in 1994 – the year the LHC was approved – and was inspired by Giorgio Brianti, who led the LHC design effort until the baton passed to Lyn Evans in 1993. He envisaged the benefits from a future superconductor, made of niobium tin rather than the niobium-titanium used in the LHC dipoles, to increase the luminosity of the LHC. Later, I proposed that INFN carry out research on this conductor and I have worked on the high-luminosity LHC (HL-LHC) ever since.

Lucio Rossi

From a technical point of view, the LHC was a turning point in collider design, demanding a vast quantity of superconducting magnets with unprecedented field strengths. The past 10 years have also taught us that the machine is very well designed indeed (thanks Lyn!). The LHC works so well, in fact, that it’s easy for operators to forget that it is a superconducting machine. I realised immediately when those first protons made their way around the ring that the LHC is, as we say in Italian, “bionic”: it can do things beyond our expectations.

But we also learned, nine days later, when the breakdown of an electrical interconnect led to significant damage to one section of the machine, that the LHC is very fragile. This we will never forget: we have to be careful and we have to be humble, as a machine of this scale and complexity can stop working at any moment. The incident on 19 September happened in an interconnect between two magnets; it was not a technical difficulty but one of the LHC’s innumerable complex interfaces that tricked us. Bad as it was, we could repair it in a reasonable time period. Had we made a technical mistake concerning the superconductor itself, or the basic magnet design, it would have been a potential show-stopper.

The LHC is so complicated that it’s impossible not to make mistakes. What’s important is that we learn from them. Not only did we learn how to repair and to react very fast, we also learned how to work cooperatively. It was a healthy sociological exercise for CERN, where, like in any large organisation, it is easy for people, divisions and departments to insulate themselves from others. After the incident, the spirit of collaboration has clearly been stronger.

This is definitely the case for the HL-LHC, where we are working not only on how to produce higher luminosity, but also on how to make it the most effective for the LHC experiments. In addition, and in contrast to the LHC, which was first approved and then sought partnership, HL-LHC has been a partnership with other institutions since the very beginning. It is another good lesson, and one that paves the way for a future supercollider beyond the LHC.

We are now exactly halfway through the HL-LHC programme, which was set up as a design study in 2010, approved with full budget in 2016, and will start operating in 2026. Now we have to complete the second half of the journey to generate a luminous future for our young colleagues. I will be long retired when the HL-LHC switches on, but I expect that, when it does, the CERN control centre will once again be a scene of celebration.

Probing quark–gluon plasma with charmed mesons

"Average

The ALICE collaboration has released a new measurement of the production of D0, D+, D*+ and Ds+ mesons, which contain a charm quark, in lead–lead (PbPb) collisions at a centre-of-mass energy per nucleon pair (√sNN) of 5.02 TeV. These measurements probe the propagation of charm quarks in the quark–gluon plasma (QGP) produced in high-energy heavy-ion collisions. Charm quarks are produced early in the collision and subsequently experience the whole system evolution, losing part of their energy via inelastic (gluon radiation) or elastic (“collisional”) scattering processes. The charm quarks emerge from the collision in D mesons, which are identified by their characteristic decays.

The result is reported in terms of the nuclear modification factor (RAA), which is the ratio between the measured pT distribution in heavy-ion and proton–proton (pp) collisions, scaled by the average number of binary nucleon–nucleon collisions in each nuclear collision. The figure shows the average RAA of non-strange D mesons (D0, D+, D*+) and strange (Ds+) mesons in central (0–10%) PbPb collisions. For the non-strange mesons, a minimum of RAA ≈ 0.2 for pT = 6–10 GeV/c indicates a significant energy loss for charm quarks. The RAA is compatible with that of charged particles for pT > 8 GeV/c, while it is larger at lower pT. The comparison to light-flavour hadrons helps to study the colour-charge and quark-mass dependence of the in-medium parton energy loss.
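In formula form, with ⟨Ncoll⟩ the average number of binary nucleon–nucleon collisions in the centrality class considered,

\[ R_{AA}(p_{T}) \;=\; \frac{1}{\langle N_{\mathrm{coll}} \rangle}\, \frac{\mathrm{d}N_{\mathrm{PbPb}}/\mathrm{d}p_{T}}{\mathrm{d}N_{pp}/\mathrm{d}p_{T}}, \]

so RAA = 1 would correspond to a PbPb collision behaving like an incoherent superposition of nucleon–nucleon collisions, while RAA < 1 signals suppression, for example through in-medium energy loss.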

The RAA of Ds+ mesons is larger than that of non-strange D mesons. Though the experimental uncertainty is still large, such a difference would suggest that charm quarks also form hadrons by recombining with the surrounding light quarks in the QGP. This mechanism differs from the fragmentation process that is thought to be the main hadronisation mechanism in the absence of a medium. The recombination mechanism enhances the yield of particles with strangeness because strange quarks are copiously produced in the QGP.

The RAA at LHC Run 2 is compatible with that measured at a lower centre-of-mass energy per nucleon pair of 2.76 TeV, but the larger data sample collected in Run 2 made it possible to reduce the uncertainties by a factor of about two. A similar suppression at the two energies is predicted by the “Djordjevic model” (figure, right), owing to the combination of a stronger suppression in the denser medium and a harder pT distribution at 5.02 TeV with respect to 2.76 TeV.

The next PbPb run at the end of 2018, and the subsequent upgrade of the ALICE detector, will allow us to improve the measurement. This will shed further light on the energy loss and hadronisation of heavy quarks in the QGP and allow researchers to determine the transport coefficients describing the scattering power of the QGP and the diffusion of charm quarks in the medium.

First human 3D X-ray in colour

3D colour x-ray image

New-Zealand company MARS Bioimaging Ltd has used technology developed at CERN to perform the first colour 3D X-ray of a human body, offering more accurate medical diagnoses. Father and son researchers Phil and Anthony Butler from Canterbury and Otago universities in New Zealand spent a decade building their product using Medipix read-out chips, which were initially developed to address the needs of particle tracking in experiments at the Large Hadron Collider.

The CMOS-based Medipix read-out chip works like a camera, detecting and counting each individual particle hitting the pixels when its shutter is open. The resulting high-resolution, high-contrast images make it unique for medical-imaging applications. Successive generations of chips have been developed during the past 20 years, with many applications outside high-energy physics. The latest, Medipix3, is the third generation of the technology and was developed by a collaboration of more than 20 research institutes – including the University of Canterbury.

MARS Bioimaging Ltd was established in 2007 to commercialise Medipix3 technology. The firm’s product combines spectroscopic information generated by a Medipix3-enabled X-ray detector with powerful algorithms to generate 3D images. The colours represent different energy levels of the X-ray photons as recorded by the detector, hence identifying different components of body parts such as fat, water, calcium and disease markers.
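As a rough illustration of the principle only – the real MARS reconstruction chain is far more sophisticated, and the threshold values, function names and colour mapping below are assumptions made for this sketch – a photon-counting detector with several energy thresholds can be thought of as keeping one counter per pixel per energy bin, which can then be mapped to colour:

```python
import numpy as np

# Illustrative energy thresholds in keV: the real Medipix3 thresholds are
# configurable, and the values used clinically by MARS are not given in this
# article, so these numbers are placeholders.
THRESHOLDS_KEV = [20.0, 40.0, 60.0, 120.0]   # edges of three energy bins

def bin_counts(photon_energies_kev, thresholds=THRESHOLDS_KEV):
    """Count photons per energy bin for one pixel, which is effectively what
    a photon-counting chip with multiple comparator thresholds records."""
    counts, _ = np.histogram(photon_energies_kev, bins=thresholds)
    return counts

def false_colour(counts):
    """Map the low/medium/high-energy counts of a pixel to an RGB triple.
    Purely illustrative: real material identification compares the energy
    dependence of the attenuation with known spectra for fat, water,
    calcium, contrast agents and so on."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    return tuple(counts / total) if total > 0 else (0.0, 0.0, 0.0)

# Example: a single pixel illuminated by photons spread across the spectrum
rng = np.random.default_rng(0)
pixel_photons = rng.uniform(20.0, 120.0, size=1000)
print(false_colour(bin_counts(pixel_photons)))
```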

So far, researchers have been using a small version of the MARS scanner to study cancer, bone and joint health, and vascular diseases that cause heart attacks and strokes. In the coming months, however, orthopaedic and rheumatology patients in New Zealand will be scanned by the new apparatus in a world-first clinical trial. “In all of these studies, promising early results suggest that when spectral imaging is routinely used in clinics it will enable more accurate diagnosis and personalisation of treatment,” said Anthony Butler.

Elephants in the gamma-ray sky

The gamma-ray sky

High-energy gamma rays provide a window into the physics of cosmic objects at extreme energies, such as black holes, supernova remnants and pulsars. In addition to revealing the nature of such objects, high-energy gamma-ray signals test general relativity and the Standard Model of particle physics. Take for example gamma-ray bursts, which can last from 10 milliseconds to several hours and are emitted by sources located up to several billion light-years away from Earth. A comparison between the arrival times of the bursts’ X-rays and gamma rays has been used to exclude modifications of Einstein’s general relativity that predict different arrival times. Also, in some theories in which dark matter is in the form of weakly interacting massive particles (WIMPs), dark-matter particles can annihilate into gamma-ray photons and other Standard Model particles. Significant effort is therefore being spent in searches for dark-matter annihilation signals in the gamma-ray band, including searches towards the Milky Way centre, which is estimated to contain a large amount of dark matter.
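For self-conjugate WIMPs of mass mχ, the expected gamma-ray flux from a given direction is conventionally written in terms of a “J-factor”, the integral of the squared dark-matter density over the line of sight and the solid angle observed:

\[ \frac{\mathrm{d}\Phi_{\gamma}}{\mathrm{d}E} \;=\; \frac{\langle \sigma v \rangle}{8\pi\, m_{\chi}^{2}}\, \frac{\mathrm{d}N_{\gamma}}{\mathrm{d}E} \int_{\Delta\Omega}\!\int_{\mathrm{l.o.s.}} \rho_{\chi}^{2}\, \mathrm{d}\ell\, \mathrm{d}\Omega , \]

which is why directions where the dark-matter density ρχ is expected to be highest, such as the galactic centre, are prime targets.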

Studies of individual gamma-ray emitting sources and diffuse gamma-ray emission, which could include a galactic dark-matter annihilation signal, have benefited greatly from the launch of the large-area telescope on board NASA’s Fermi Gamma-ray Space Telescope (Fermi-LAT) in June 2008. Fermi-LAT, which observes gamma rays with energies from about 20 MeV to 1 TeV, has discovered more than 3000 point sources that have enabled researchers to significantly improve models of known galactic and extragalactic gamma-ray-emitting objects. But Fermi-LAT has also thrown up some surprising discoveries (figure 1). One of these is the so-called Fermi bubbles – two large gamma-ray lobes above and below the galactic centre that, intriguingly, have no clear counterpart in the X-ray and radio bands.

Fig. 2(a)

A second unexpected discovery by Fermi-LAT was an excess of gamma-ray radiation near the galactic centre with an energy of a few GeV. Interestingly, the excess has properties that are consistent with an annihilation signal from dark-matter particles with a mass of a few tens of GeV. The excess is visible up to 10 or 15 degrees away from the galactic centre – an elephant at a distance of four metres from an observer would have a similar apparent size. The Fermi bubbles, spanning 110 degrees from the northern to the southern edge, have an apparent size comparable to that of an elephant located one metre away.

Fig. 2(b)

Finally, there is a third, even larger, feature in the gamma-ray, radio and X-ray bands called Loop I. The challenge of explaining these three “elephants” in the gamma-ray sky has puzzled physicists and astronomers for years – tens of years in the case of Loop I. Are the features related to each other? Are they located near the galactic centre or close to us? And is the GeV gamma-ray excess caused by dark-matter annihilation or by astrophysical phenomena such as pulsars?

Loop I

The largest of the gamma-ray elephants, Loop I, has been known since the 1950s from its radio emission (figure 2a). Its large angular size – it stretches up to 80 degrees above the galactic plane – could easily be explained if it were a nearby feature. For instance, it could be the combined emission from a “superbubble”, the collective remnant of several supernova explosions taking place in a localised region. Such a bubble easily reaches a size of a few hundred light-years, and if the distance to the bubble was also a few hundred light-years, then it would appear very large, up to 90 degrees in angular size. In this scenario, the galactic magnetic field would drape around the expanding bubble and high-energy cosmic-ray electrons from sources in the galactic disk, compressed by the expansion of the bubble, would produce synchrotron emission that would appear as a huge, ring-like structure in the sky. A possible location of the underlying supernova explosions would be the Scorpius–Centaurus stellar cluster located a few hundred light-years away from Earth.
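The geometry behind that statement is simple: a bubble of radius R viewed from a distance d subtends an angle

\[ \theta \;\simeq\; 2\arctan\!\left(\frac{R}{d}\right), \]

so with R and d both of order a few hundred light-years (R ≈ d), θ approaches 90 degrees, as quoted above.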

Loop I, or at least its brightest part, known as the North Polar Spur, is also seen at other wavelengths, in particular at gamma-ray (figure 1) and soft X-ray (figure 2b) wavelengths. While the gamma rays can be produced through inverse Compton emission by the same cosmic-ray electrons that produce the synchrotron radio emission, the soft X-ray emission is probably produced by hot interstellar gas. The approximate angular alignment between the radio and X-ray emissions of the North Polar Spur suggests that they both belong to Loop I. Yet there are several differences between the X-ray and radio emissions. For example, a bright, ring-like feature in X-rays that is crossing the North Polar Spur could be explained by the collision of the hypothetical Loop I superbubble with another bubble containing the solar system, the local hot bubble. One can even trace back the motion of stars within a few hundred light-years from us to find a population of stars with members that could have exploded as supernovae up to about 10 million years ago and inflated the local hot bubble.

However, apparent X-ray absorption at the southern part of the North Polar Spur by neutral gas located along the line of sight points to a different interpretation. Detailed spectral modelling of this absorption has recently shown that the amount of gas required to explain the absorption puts the X-ray emitting structure at distances far beyond a few hundred light-years. This lower bound on the distance to the X-ray structure favours models of Loop I as a galactic-scale phenomenon, for example the product of a large-scale outflow from the galactic-centre region, as opposed to the nearby superbubble. More X-ray data is needed to pin down the nature of Loop I, but if this feature is indeed a large-scale galactic structure, then it might be related to the second elephant in the sky – the Fermi bubbles.

Fermi bubbles

The Fermi bubbles consist of two large gamma-ray lobes above and below the galactic centre, each of which is slightly larger than the distance from Earth to the galactic centre (about 25,000 light-years). They appear smaller than Loop I and were discovered in 2010 with about a year and a half of Fermi-LAT data. From observations of galaxies other than the Milky Way, we know of two possible mechanisms for creating such bubbles: emission from a supermassive black hole at the galactic centre, or a period of intensive star formation (a starburst) and supernova explosions. Which of these processes is responsible for the formation of the Fermi bubbles in our galaxy is not yet known.

Even the mechanism for producing the gamma rays in the first place is not yet resolved: it could be due to interactions between cosmic-ray protons and galactic gas, or inverse Compton scattering of high-energy electrons off interstellar radiation fields. Both of these options have caveats. For the first, it’s unclear, for instance, how one can collect and keep the high density of cosmic rays required to compensate for the low density of gas at large distances from the galactic plane. It’s also unclear whether the pressure of cosmic rays will expel the gas and create a cavity that will make the gas density even lower. For the inverse-Compton-scattering hypothesis, one would need electrons with energies up to 1 TeV. If these electrons were accelerated to such energies at the beginning of the expansion of the Fermi bubbles, then the bubbles’ expansion velocity would be about 10,000 km s–1 – at least 10 times larger than the typical observed outflow velocities.

Fig. 3.

Moreover, even though the Fermi bubbles are similar in shape to gamma-ray lobes in other galaxies, which are typically visible in X-ray and radio wavelengths, they have no clear counterpart in X-rays and radio waves at high latitudes. Perhaps the Fermi bubbles are unique to the Milky Way. Then again, perhaps astronomers have simply struggled to detect in other galaxies gamma-ray lobes that are “quiet” in the radio and X-ray bands.

A study of the gamma-ray emission from the Fermi bubbles at low latitudes could shed light on their origin, as it may point to the supermassive black hole at the galactic centre or to a region away from the centre, which would support the starburst scenario. Although the diffuse foreground and background gamma-ray emission from the Milky Way near the galactic centre is very bright, making it hard to interpret the observations, several analyses of Fermi-LAT gamma-ray data have revealed an increased intensity of gamma-ray emission from the Fermi bubbles near the galactic plane and a displacement of the emission relative to the galactic centre (figure 3). The higher intensity of the emission at the base of the Fermi bubbles opens up the possibility for a detection with ground-based very-high-energy gamma-ray Cherenkov telescopes, such as the upcoming Cherenkov Telescope Array, which is expected to start taking data with a partial array in 2022 and with the full array in 2025. At low energies, below 100 GeV, the flux from the base of the Fermi bubbles may also be confused with the third elephant in the sky – the galactic-centre GeV excess.

Galactic-centre GeV excess

The first hints of an extended excess of gamma rays from the centre and bulge of the galaxy at energies around a few GeV and with an almost spherical morphology were presented in 2009, before the discovery of the Fermi bubbles. However, given that the diffuse foreground gamma-ray emission along the galactic plane is very bright, and also rather uncertain towards the galactic centre, it took a long time to prove that the excess is not caused by mis-modelling of foreground components (such as inverse Compton scattering of high-energy electrons and hadronic interactions of the stationary distribution of cosmic rays along the line of sight). The spectrum of the excess has a peak at a few GeV, hence the name “GeV excess”, whereas the components of the diffuse foreground have a power-law structure around a few GeV.

Fig. 4.

Intriguingly, the combined GeV centre and bulge emission has properties that are largely compatible with expectations from a dark-matter annihilation signal: the emission is extended, up to at least 10–15 degrees away from the galactic centre, with a profile that is consistent with that from a slightly contracted dark-matter-halo profile (figure 4). At energies below about 1 GeV, the gamma-ray emission grows steeply, and has a maximum at a few GeV with a cut-off or a significant softening at higher energies, which is expected for a signal from dark-matter annihilation.

Given the high stakes of claiming a discovery of a dark-matter annihilation signal, corroborating evidence for this hypothesis must be found, or alternative astrophysical explanations must be confidently excluded. Unfortunately, neither has happened up to now. Quite the contrary: there are several sources of gamma-ray emission near the galactic centre that could, within uncertainties, together account for all of the bulge and centre emission. For example, massive molecular-gas clouds near the galactic centre show clear indications of star-formation activity, which results in cosmic-ray production and associated gamma-ray emission in the inner galaxy. While the hadronic cosmic rays from such activity are not likely to explain the GeV excess, because their gamma-ray emission is not as extended as the GeV excess, inverse Compton emission from cosmic-ray electrons linked with such an activity can be extended over many degrees and is expected to contribute to the GeV emission. However, given that the energy spectrum expected for this inverse Compton emission is significantly flatter than the observed GeV excess, it is unlikely that this component accounts completely for the GeV-excess emission.

Fig. 5.

Arguably, the most plausible explanation for the GeV-excess emission from the galactic bulge and centre is a population of thousands of millisecond pulsars – highly magnetised neutron stars with a rotational period of 1–10 ms. They can emit gamma rays for billions of years before they lose energy, and their gamma-ray spectrum, as observed by Fermi-LAT, is similar to the spectrum of the GeV excess. It is plausible that millisecond pulsars in the bulge follow a similar spatial distribution as the majority of bulge stars. Indeed, recent analyses showed that the profile of the GeV-excess emission in the inner galaxy is better described by the boxy stellar bulge, rather than by a spherically symmetric dark-matter profile. Moreover, several detailed statistical analyses found evidence that the emission is more likely to be from a population of numerous but faint point sources, such as millisecond pulsars in the bulge, rather than from truly diffuse emission, such as that resulting from the annihilation of dark-matter particles (figure 5).

Future observations with radio telescopes such as MeerKAT in South Africa, expected to start taking data this year, and its successor, the Square Kilometre Array (SKA), the first construction phase of which is expected to end in 2020, should be able to test whether millisecond pulsars exist in the inner galaxy and can explain the GeV excess.

Additional multi-wavelength observations will provide new information about the three elephants in the sky. In particular, the eROSITA experiment, the successor of the X-ray ROSAT satellite, will survey the whole sky in X-rays and will be one order of magnitude more sensitive than ROSAT. With the eROSITA data, astronomers will search for a possible cavity carved out by cosmic rays in the Fermi bubbles and will estimate the distance to the North Polar Spur using the absorption of soft X-rays from the spur by the distribution of gas along the line of sight.

Possible connections

On the high-energy gamma-ray front, the upcoming Cherenkov Telescope Array is expected to detect the Fermi bubbles near the galactic plane above a few hundred GeV. This detection should help to answer the question of whether the Fermi bubbles are linked to the galaxy’s central supermassive black hole or to a different source away from the galactic centre. On the other side of the electromagnetic spectrum, the new generation of radio-telescope arrays, MeerKAT and SKA, should, as mentioned, be able to confirm or rule out the millisecond-pulsar hypothesis for the GeV excess. If the millisecond-pulsar hypothesis is excluded, then the dark-matter interpretation will remain as one of the plausible explanations. By contrast, a confirmation of the millisecond-pulsar hypothesis will significantly constrain the dark-matter hypothesis.

The presence of the three elephants in the gamma-ray sky in approximately the same direction raises the question of whether they are connected. One of the possible connections between the Fermi bubbles and Loop I is that Loop I is created by galactic gas pushed away by the expansion of the bubbles. In this case, the two elephants would become one, where Loop I is an outer part and the Fermi bubbles are an inner part. This scenario looks especially plausible for the northern bubble because Loop I extends beyond it.

The overlap between the GeV excess and the Fermi bubbles in the galactic-centre region provides the exciting possibility of a connection between the two. Models that try to explain the GeV excess with an additional population of cosmic-ray electrons, star formation and cosmic-ray acceleration processes can connect the gamma-ray emission in the bulge with that at higher latitudes in the Fermi bubbles. Also, the mechanism underpinning the formation of the bubbles – whether it is linked to activity of the galaxy’s central supermassive black hole or to a burst of star formation – might affect the properties of the GeV excess. Future observations and analyses will help to settle the nature – common or not – of the three elephants in the sky, and might point to new physics such as dark-matter annihilation in the Milky Way. Studying the gamma-ray sky will no doubt be an exciting journey for many years to come.

Closing in on the muon’s magnetic moment

Fermilab g-2 experiment

A new experiment at Fermilab in the US, designed to make the most precise measurement of the muon’s magnetic moment, has completed its first physics data-taking campaign, showing promising results. Experiment E989 is a reincarnation of the muon g-2 experiment at Brookhaven National Laboratory (BNL), which ran in the late 1990s and early 2000s and found the muon’s anomalous magnetic moment, aμ, to be approximately 3.5 sigma above the Standard Model prediction. The Fermilab experiment aims to resolve this long-standing discrepancy, revealing whether it is due to a statistical fluctuation or to the existence of new particles that are influencing the muon’s behaviour.

The international E989 collaboration hopes to measure aμ to a final precision of 140 parts per billion, improving on the BNL result by a factor of four. Following months of commissioning efforts beginning last autumn, the experiment started taking data in February. Its net accumulated dataset is already almost twice that obtained by BNL, although much of the initial run involved varying the operating conditions to optimise data collection and explore systematics.

The principle behind the Fermilab and BNL experiments is the same: muons start with their spins aligned with their direction of motion, but as they journey around the storage ring they precess at a frequency proportional to the magnetic field and to the value of aμ. At experiment E989, muons are vertically focused in the ring via a system of electric quadrupoles, and the precession frequency is determined using a set of 24 electromagnetic calorimeters located along the inner circumference of the ring. The new experiment reuses the 1.45 T superconducting storage ring from BNL, which was shipped from Long Island to Chicago in 2015 and has since been rebuilt, its magnetic field now shimmed to a uniformity that exceeds BNL’s by a factor of three. Nearly all of the other aspects of the experiment are new. 
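The quantity being extracted is the anomaly aμ = (gμ − 2)/2. For muons circulating in a uniform magnetic field B at the “magic” momentum (chosen so that the electrostatic focusing fields do not affect the precession to first order – a feature of both the BNL and Fermilab experiments), the spin direction rotates relative to the momentum at the anomalous precession frequency

\[ \omega_{a} \;=\; \omega_{\mathrm{spin}} - \omega_{\mathrm{cyclotron}} \;=\; a_{\mu}\, \frac{eB}{m_{\mu}} , \]

so measuring ωa together with the average field seen by the muons yields aμ.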

The Fermilab Muon Campus – which will also serve the Muon-to-Electron Conversion experiment in the future – provides an intense polarised muon beam that is devoid of the pion contamination that challenged the BNL measurement. Bunches of muons are injected into the storage ring and then “kicked” during their first rotation around the ring. “This is one of the most challenging aspects and one that the collaboration continues to develop because the kick quality affects the net storage efficiency and the momentum distribution,” explains E989 member and former co-spokesperson David Hertzog.

A representative sample from a 60-hour-long dataset (see figure) demonstrates precession-frequency modulation on top of an exponentially decaying muon population. The collaboration is now evaluating data samples and developing different and independent approaches to extract the precession frequency and minimise systematic uncertainties. E989 researchers are also working to evaluate the average magnetic field and important beam-dynamics parameters.
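In its simplest five-parameter form, the “wiggle” being described here is fitted with

\[ N(t) \;=\; N_{0}\, e^{-t/\gamma\tau_{\mu}} \left[\, 1 + A\cos(\omega_{a} t + \varphi) \,\right], \]

where γτμ ≈ 64 μs is the time-dilated muon lifetime in the ring, A is the decay asymmetry and ωa is the anomalous precession frequency; the full analyses add further terms to account for beam dynamics and detector effects.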

In parallel, theorists are working hard on Standard Model calculations to reduce the uncertainties in the predicted value of aμ – in particular concerning hadronic corrections, which are the most challenging to evaluate due to the complexities of quantum chromodynamics (QCD). In June, Alexander Keshavarzi from the University of Liverpool, UK, and colleagues used electron–positron collision data to reevaluate the hadronic contribution to aμ, leading to the highest precision prediction so far. The following month, Thomas Blum of the University of Connecticut, US, and co-workers in the RBC and UKQCD collaborations reported a complete first-principles calculation of the leading-order hadronic contribution to aμ from lattice QCD and quantum electrodynamics, showing improved precision.

Physicists will have to wait a bit longer for E989 to release a first measurement of aμ, however. “Until we can closely examine the data quality – both precession data from detectors and field data from NMR probes – we are unable to predict the timetable,” says Hertzog. “Our aim is sometime in 2019, but we will unblind only after we are certain that the analysis is complete – so stay tuned.”

Relativity passes test on a galactic scale

ESO 325-G004

Einstein’s theory of gravity, general relativity, is known to work well on scales smaller than an individual galaxy. For example, the orbits of the planets in our solar system and the motion of stars around the centre of the Milky Way have been measured precisely and shown to follow the theory. But general relativity remains largely untested on larger length scales. This makes it hard to rule out alternative theories of gravity, which modify how gravity works over large distances to explain away mysterious cosmic substances such as dark matter. Now a precise test of general relativity on a galactic scale excludes some of these alternative theories.

Using data from the Hubble Space Telescope, a team led by Thomas Collett from the University of Portsmouth in the UK has found that a nearby galaxy dubbed ESO 325-G004 is surrounded by a ring-like structure known as an Einstein ring – a striking manifestation of gravitational lensing. As the light from a background object passes a foreground object, the gravity of the foreground object bends and magnifies the light of the background one into a ring. The ring system found by Collett’s group is therefore a perfect laboratory with which to test general relativity on galactic scales.

But it isn’t easy to make such a test, because the size and structure of the ring depend on several factors, including the distance of the background galaxy from Earth, and the distance, mass and shape of the foreground (lensing) galaxy. In previous tests the uncertainty on some of these factors resulted in large systematic errors in the modelling of the gravitational-lensing effect, allowing only weak constraints to be placed on alternative theories of gravity. Now Collett and colleagues’ discovery of an Einstein ring around a relatively close galaxy, ESO 325-G004, along with high-resolution observations of that same galaxy taken with the Multi Unit Spectroscopic Explorer (MUSE) on the European Southern Observatory (ESO) Very Large Telescope, has allowed the most precise test of general relativity outside the Milky Way.
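The characteristic size of such a ring is the Einstein radius. For a lens of mass M, with Dl, Ds and Dls the (angular-diameter) distances to the lens, to the source and between lens and source,

\[ \theta_{E} \;=\; \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{ls}}{D_{l}\,D_{s}}} , \]

so once the distances are known, the observed ring geometry determines the lensing mass required by general relativity.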

The researchers derived the distances of the background galaxy and the lensing galaxy from measurements of their redshifts. Measuring the mass and the shape of the lensing galaxy is more complex, but was made possible here thanks to the MUSE observations that allowed the team to perform measurements of the motions of the stars that make up the galaxy relative to the galaxy’s centre. Since these motions are governed by the gravitational fields inside the galaxy, they can be used to indirectly measure the mass and shape of ESO 325-G004.

The team put all of these measurements together and determined the gravitational effect that ESO 325-G004 should have on the background galaxy’s light if general relativity holds true. The result, which, technically, tests the scale invariance of a parameter in general relativity called gamma, is almost in perfect agreement with general relativity, with an uncertainty of only 9%. Not only does it show that gravity behaves on a galactic scale in the same way as it does in our solar system, it also disfavours alternative gravity models, in particular those that attempt to remove the need for dark energy.
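In the parametrised post-Newtonian (PPN) language used for such tests, light passing a mass M at impact parameter b is deflected by

\[ \alpha \;=\; \frac{1+\gamma_{\mathrm{PPN}}}{2}\,\frac{4GM}{c^{2}b} , \]

with γPPN = 1 in general relativity. The comparison is between the mass inferred from lensing, which carries the factor (1 + γPPN)/2, and the mass inferred from the stellar dynamics, which to leading order does not, yielding the roughly 9% constraint quoted above.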

Italy and US join forces on sterile neutrinos

MicroBooNE

On 28 June, the US Department of Energy and the Italian Embassy, on behalf of the Italian Ministry of Education, Universities and Research, signed a collaboration agreement concerning the international Short Baseline Neutrino (SBN) programme hosted at Fermilab. The SBN programme, started in 2015, comprises the development, installation and operation of three neutrino detectors on the Fermilab site: the Short Baseline Near Detector, located 110 m from the neutrino beam source; MicroBooNE, located 470 m from the source; and ICARUS, located 600 m from the source. ICARUS was refurbished at CERN last year after a long and productive scientific life at Gran Sasso National Laboratory.

The SBN programme aims to search for exotic and highly non-reactive sterile neutrinos and resolve anomalies observed in previous experiments (CERN Courier June 2017 p25). Because they sit at different distances from the source but employ the same liquid-argon technology, the three SBN detectors will be able to distinguish whether their measurements are due to transformations between neutrino types involving a sterile neutrino or are due to other previously unknown neutrino interactions.

The signing of the SBN programme agreement is an addendum to a broader collaboration agreement on neutrino research that the US and Italy signed in 2015.
