RHIC smashes record for polarized-proton collisions at 200 GeV

The Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory has shattered its own record for producing polarized-proton collisions at 200 GeV collision energy. In the experimental run currently underway, accelerator physicists are delivering 1.2 × 10¹² collisions per week – more than double the number routinely achieved in 2012, the last run dedicated to polarized-proton experiments at this collision energy.

The achievement is, in part, the result of a method called “electron lensing”, which uses negatively charged electrons to compensate for the tendency of the positively charged protons in one circulating beam to repel the like-charged protons in the other beam when the two oppositely directed beams pass through one another in the collider. In 2012, these beam–beam interactions limited the ability to produce high collision rates, so the RHIC team commissioned electron lenses and a new lattice to mitigate the beam–beam effect. RHIC is now the first collider to use electron lenses for head-on beam–beam compensation. The team also upgraded the source that produces the polarized protons to generate and feed more particles into the circulating beams, and made other improvements in the accelerator chain to achieve higher luminosity.

With new luminosity records for collisions of gold beams, plus the first-ever head-on collisions of gold with helium-3, 2014 proved to be an exceptional year for RHIC. Now, the collider is on track towards another year of record performance, and research teams are looking forward to a wealth of new insights from the data to come.

Magnetic fields cast light on black hole’s edge

The Atacama Large Millimetre/submillimetre Array (ALMA) has revealed an intense magnetic field at the base of the relativistic jet powered by a supermassive black hole. Probing the physical conditions of a jet so close to the black hole is unprecedented, and confirms that magnetic fields have a driving role in the formation and collimation of the jet.

Supermassive black holes, often with masses billions of times that of the Sun, are located at the heart of almost all galaxies in the universe. These black holes can accrete huge amounts of matter from a surrounding disc. While most of this matter is fed into the black hole, some can escape moments before capture and be flung out into space at close to the speed of light in twin plasma jets, which can extend hundreds of thousands of light-years from their host galaxy (Picture of the month, CERN Courier January/February 2013 p14). How this happens is not well understood, although it is thought that strong magnetic fields, acting very close to the event horizon, play a crucial role in this process.

Up to now, only weak magnetic fields far from black holes – several light-years away – have been probed. A new study by astronomers from Chalmers University of Technology and Onsala Space Observatory in Sweden used ALMA to detect a polarization signal related to the strong magnetic field in a distant galaxy named PKS 1830-211. This quasar was chosen because it is located at a relatively high redshift and is gravitationally lensed. The redshift of z = 2.5 allows submillimetre emission from the distant source to be probed at rest-frame frequencies 3.5 times higher than those directly reachable by ALMA. An observation at 300 GHz (around 1 mm) therefore probes the terahertz frequency range (around 0.3 mm), where synchrotron self-absorption no longer hides the most intense jet region closest to the black hole.
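
As a quick consistency check, a minimal sketch in Python using only the standard redshift relation νrest = νobs(1 + z); the input values are those quoted above:

```python
# Minimal sketch (values from the text): the rest-frame frequency probed
# when observing a source at redshift z is nu_rest = nu_obs * (1 + z).
z = 2.5          # redshift of PKS 1830-211
nu_obs = 300e9   # ALMA observing frequency in Hz (around 1 mm)

nu_rest = nu_obs * (1 + z)
lam_rest = 3e8 / nu_rest  # wavelength = c / frequency, in metres

print(f"rest-frame frequency: {nu_rest / 1e12:.2f} THz")  # ~1.05 THz
print(f"rest-frame wavelength: {lam_rest * 1e3:.2f} mm")  # ~0.29 mm
```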

The gravitational lens splits the remote source into two components, so that Ivan Martí-Vidal and colleagues could study the relative polarization of the two lensed images. This strategy frees the analysis from many calibration-related artefacts that would otherwise limit it. Through repeated observations at different wavelengths, they found clear signals of Faraday rotation that are hundreds of times stronger than any previously found in the universe. The strength of this wavelength dependence of the rotation of the polarization angle is given by the rotation measure (RM), which depends on the magnetic field strength multiplied by the electron density, integrated along the line of sight.

The RM derived with ALMA in PKS 1830-211 is around 10⁸ rad/m², which is about 100,000 times greater than in the radio cores of other quasars. This huge difference is due to the new observations being performed at much higher frequencies, thus probing a region only light-days away from the black hole, instead of light-years when observing in the radio domain. Assuming that both the magnetic field and the electron density increase by about a factor of 300 from the radio core to the apex of the jet, the team obtains a magnetic field of at least a few tens of gauss near the base of the jet. While this is only an order-of-magnitude estimate, its relatively high value – although many billions of times weaker than in neutron stars – reinforces the idea that magnetic fields play an important role in the mechanism that launches the jet.
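
For illustration, a minimal sketch of the relation behind the rotation measure, Δχ = RM λ²; the PKS 1830-211 value is from the text, while the radio-core RM is an assumed round number consistent with the factor of 100,000 quoted above:

```python
# Sketch of the relation behind the rotation measure: the polarization
# angle rotates by delta_chi = RM * lambda**2 (RM in rad/m^2).
RM_pks = 1e8    # rad/m^2, the value derived for PKS 1830-211 (from the text)
RM_radio = 1e3  # rad/m^2, an assumed typical radio-core value

print(f"ratio: {RM_pks / RM_radio:.0e}")  # ~1e+05, the factor quoted above

for wavelength_mm in (0.3, 1.0):
    lam = wavelength_mm * 1e-3  # metres
    delta_chi = RM_pks * lam**2
    print(f"lambda = {wavelength_mm} mm -> rotation = {delta_chi:.0f} rad")
# Even at millimetre wavelengths the angle winds through many radians,
# which is what makes such a large RM measurable with ALMA.
```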

The road from CERN to space

Roberto Battiston

The Agenzia Spaziale Italiana (ASI) – the Italian Space Agency – has the tag line “The road to space goes through Italy.” Make a simple change and it becomes a perfectly apt summary of the career to date of the agency’s current president. For Roberto Battiston, the road to space goes through CERN.

As a physics student at the famous Scuola Normale in Pisa, which has provided many of CERN’s notable physicists, he studied the production of dimuons in proton collisions at the Intersecting Storage Rings, under the guidance of Giorgio Bellettini. For his PhD, he moved in 1979 to the University of Paris XI in Orsay, where his thesis was on the construction of the central wire-proportional chamber of UA2, the experiment that went on, with UA1, to discover the W and Z particles at CERN. Until 1995, his research focused on electroweak physics, first at the SLAC Linear Collider and then, back at CERN, at the L3 experiment at the Large Electron–Positron collider. However, at the point when the LHC project was on its starting blocks, his interest began to turn towards cosmic rays. With Sam Ting, who led the L3 experiment, Battiston became involved in the Alpha Magnetic Spectrometer, which as AMS-02 has now been taking data on board the International Space Station (ISS) for four years (CERN Courier July/August 2011 p18). Three years after the launch of AMS-02, Battiston found himself closer to space, at least metaphorically, when he was appointed president of ASI in May 2014.

The decision to move away from experiments at the LHC will surprise many people. How do you explain your unconventional choice?

The LHC, a machine of extraordinary importance, as its results have shown, was the obvious choice for someone who wanted to continue a research career in particle physics. But I chose to take a less beaten path. In space, less has been researched and less has been discovered than at accelerators. I realized that, in both neutral and charged cosmic rays, we are presented with information that is waiting to be decoded, potentially hiding unforeseen discoveries. The universe is, by definition, the ultimate laboratory of physics, a place where, in the various phases of its evolution, matter and energy have reached all of the possible conditions one could imagine – conditions that we will never be able to reproduce artificially. For this reason, when I was discussing with Sam Ting in 1994 what would be the most interesting new project – whether to go for an LHC experiment or, radically, for a new direction – I had no hesitation: space and space exploration immediately triggered my enthusiasm and curiosity. I absolutely do not regret this choice.

Was your experience and know-how as a high-energy physicist useful for the construction and, now, the operation of AMS?

The AMS detector was designed exactly like the LHC experiments. It has a magnetic spectrometer with a particle tracker and particle identifiers. Subdetectors are positioned before and after the magnet and the tracker, to identify the types of particles passing through the experiment. We use the same approach as at accelerators – 99% of the events are thrown away, the interesting ones being the few that remain. However, within these data, processes that we still do not know about remain potentially hidden. The challenge is to find new methods to look at this radiation and extract a signal, exactly as at the LHC. The difference is that the trigger rate is kilohertz in space, rather than gigahertz at the LHC: AMS gets one or two particles at a time instead of hundreds of thousands per event. Moreover, space offers some advantages and optimal conditions for detecting particles: surprisingly, it provides stable environmental conditions, so detectors that on the ground would suffer from environmental changes – such as too much heat, atmospheric-pressure changes or humidity – enjoy ideal conditions in space. Silicon detectors, transition-radiation detectors, electromagnetic calorimeters and Cherenkov detectors have performed much better than the best detectors on the ground.

But in space you must face more complex challenges that put constraints on your instrument’s design?

Given the complexity of the current LHC experiments, the situation is comparable. Repairing a huge detector 100 m below ground is as difficult as repairing a detector in space. If something breaks down underground, dismantling the whole structure of a detector might require months if not a year. Everything in both environments must have sufficient reliability to operate for a long time. In space, radiation doses are relatively small compared with the doses that the detectors can sustain, but there are problems of the shock at launch, pressure drops, extreme temperatures and the ability to operate in a vacuum, so the tests that a detector must pass to be able to perform in space are severe. Shock and stress resistance at launch require the detectors to be more robust than those built to stay on Earth. Another huge difference is weight and power. On Earth there are no limits. In space, we must use low-weight instruments – a few tonnes compared with the 10,000 tonnes of the large LHC detectors. And because detectors in space are powered by solar panels, there are power limits – a few kilowatts compared with tens of megawatts at the LHC. So in space, resources are optimized to the last small part.

What about the choice of leading technology vs reliability, for an experiment in space?

It is true that in space we have instruments that are dated, technologically speaking. But AMS is an exception: we made the effort of bringing to space technology developed at CERN since 2000, which has shown itself to be 10–100 times more powerful and effective than current space standards.

Now, with AMS-02 successfully installed on the ISS and reaping promising results, you have been appointed president of the ASI, one of the large European space agencies. What can a physicist like yourself bring to the management of the space industry at the European and international level?

Space is a place where human dreams converge: from photographing the Moon, to walking on Mars, to taking a snapshot of the first instants of the universe – these are global dreams of humanity. Yet, space is a different world from physics. In certain aspects, it’s wider. Particle physics is an international discipline, but is so focused that the bases for discussion are limited, however fascinating and however important might be the consequences of finding a new brick in the construction of the universe. Space is particle physics multiplied to the nth power. It is a context, not just one discipline. Many different sectors interact, but each has its own dynamics – my leitmotiv is “interdisciplinarity”. Many different things happen at a fast pace, which requires a great capacity for synthesis and the ability to process a lot of data in a short time. Decisions must be taken so fast that a well-trained brain is needed. I can only thank my tough training in physics research for this. The tough discipline at the basis of research at CERN and in astroparticle physics, the continuous challenge of having to solve complex problems, the requirement of working in a large community made of people with different characters, cultures and languages, typical of experimental physics, are an asset within the context of a space agency.

How do large collaborations work in space research? Is it as global as the LHC?

The capability to keep the construction effort of very large accelerators or extremely complex detectors under direct control is still, today, an essential aspect of the high-energy physics community. Space research has not made the transition to a global collaboration in the same way as CERN, because it is still dominated by a strong element of international politics and national prestige. The amount of funding involved and the related industrial aspects and business pressures are so big that decisions must be taken at the level of heads of state and government.

Is there a difference in approach between NASA and ESA?

They’re both huge agencies, although NASA has four times the budget of ESA. In the past, they’ve collaborated on large projects, but in the past 10 years this collaboration has dimmed, as is the case for LISA [the Laser Interferometer Space Antenna]. Sometimes, such projects are even done in competition, as in the case of WMAP and Planck. The US pulled out of Rosetta long ago, and is now focused on the James Webb Space Telescope. To do so, the US basically chose to stop most international collaborations in science, except for the ISS and exploration. The ISS exists because of a precise political will. It is a demonstration that collaboration in space is decided top-down instead of bottom-up, and it can hold or break according to politics.

AMS will soon be joined in space by new powerful instruments to study cosmic rays. Are we witnessing a change of focus, from particle physics in the lab back to the sky?

Space is a less-frequented frontier, and it is understandable that it is now attracting many physicists. Astroparticle physics is a bridge between the curiosity of particle physicists, who try to understand fundamental problems, and the tradition of astronomy of observing the universe. Two different aspects of physics converge here: deciphering versus photographing and explaining. In astroparticle physics we try to find traces of fundamental phenomena; in astrophysics, we try to explain what we are able to see.

So what would your advice be to young physics graduates? Where would they best fulfil their research ambitions today?

Physics in space is becoming enormously interesting, not least for the understanding of both the infinitely small and the infinitely large. In the coming decades, astrophysics and the study of particles in space radiation will be where surprises and important discoveries could come from, although this will take time and more sophisticated technologies, because the limits of technology are farther from the limits of the observable phenomena in the universe than is the case for particle accelerators. Building a new accelerator will require decades and big investments, as well as new technologies, but most of all it will need a discovery indicating where to look. The resources required are so considerable that we will not be able to build such a machine just to explore and see what there is at higher energies, as we did many times in the past. This is less true in astrophysics: there will surely be decades of discoveries with more sophisticated instruments, because the frontiers are far from fully explored. However, physics keeps its outstanding fascination. With current computing capacity, the latest technologies, the present understanding of quantum mechanics, the interactions between physics and biology, and the amount of physics that can be done at the atomic and subatomic level – using many atoms together, cold systems and so on – there are many sectors in which an excellent physicist can find great satisfaction.

And after ASI, will you go back to particle physics?

For the moment I need to put all of my energy into the job that has just started. I have not lost the pleasure of discovery, and the main objective of the years ahead is to support the best ideas in space science and technology, trying to get results as quickly as possible. And of course, I will keep following AMS.

The Mu2e experiment: a rare opportunity

The Mu2e experiment at Fermilab recently achieved an important milestone, when it received the US Department of Energy’s critical-decision 2 (CD-2) approval in March. This officially sets the baseline for the scope, cost and schedule of the experiment. At the same time, the Mu2e collaboration was authorized to begin fabricating one of the experiment’s three solenoids and to begin the construction of the experimental hall, which saw ground-breaking on 18 April (figure 1). The experiment will search with unprecedented sensitivity for the neutrinoless conversion of a muon into an electron.

Some history

The muon was first observed in 1937 in cosmic-ray interactions. The implications of this discovery, which took decades of additional progress in both experiment and theory to reveal, were profound and ultimately integral to the formulation of the Standard Model. Among the cornerstones of the model are symmetries in the underlying mathematics and the conservation laws they imply. This connection between theory (the mathematical symmetries) and experiment (the measurable conservation laws) was formalized by Emmy Noether in 1918, and is fundamental to particle physics. For example, the mathematics describing the motion of a system of particles gives the same answer regardless of where in the universe this system is placed. In other words, the equations of motion are symmetric, or invariant, to translations in space. This symmetry manifests itself as the conservation of momentum. A similar symmetry to translations in time is responsible for the conservation of energy. In this way, in particle physics, observations of conserved quantities offer important insights into the underlying mathematics that describe nature’s inner workings. Conversely, when a conservation law is broken, it often reveals something important about the underlying physics.

In the Standard Model there are three families of quarks and three families of leptons. Generically speaking, members of the same family interact preferentially with one another. However, it has long been known that quark families mix. The Cabibbo–Kobayashi–Maskawa matrix characterizes the degree to which a particular quark interacts with quarks of a different family. This phenomenon has profound implications, and plays a role in the electroweak interactions that power the Sun and in the origin of CP violation. For decades it appeared that the lepton family did not mix: the lepton family number was always conserved in experiments. This changed with the observation that neutrinos mix (Fukuda et al. 1998, Ahmad et al. 2001). This discovery has profound implications; for example, neutrinos must have a finite mass, which requires the addition of a new field or a new interaction to the original Standard Model – the updated Standard Model is sometimes denoted the νSM. Indeed, the implications of neutrino mixing have yet to be revealed fully, and a vigorous worldwide experimental programme is aimed at further elucidating the physics underlying this phenomenon. As often happens in science, the discovery of neutrino oscillations gave rise to a whole new set of questions. Among them is this: if the quarks mix, and the neutral leptons (the neutrinos) mix, what about the charged leptons?

A probe of new physics

Searches for charged-lepton flavour violation (CLFV) have a long history in particle physics. When the muon was discovered, one suggestion was that it might be an excited state of the electron, and so experiments searched for μ → eγ decays (Hicks and Pontecorvo 1948, Sard and Althaus 1948). The non-observation of this reaction, and the subsequent realization that there are two distinct neutrinos produced in traditional muon decay, led physicists to conclude that the muon was a new type of lepton, distinct from the electron. This was an important step along the way to formulating a theory that included several families of leptons (and, eventually, quarks). Nevertheless, searches for CLFV have continued ever since, and it is easy to understand why. In the Standard Model, with massless neutrinos, CLFV processes are strictly forbidden. Therefore, any observation of a CLFV decay would signal unambiguous evidence of new physics beyond the Standard Model. Today, even with the introduction of neutrino mass, the situation is not significantly different. In the νSM, the rate of CLFV decays is proportional to (Δm²ij/M²W)², where Δm²ij is the mass-squared difference between the ith and jth neutrino mass eigenstates, and MW is the mass of the W boson. The predicted rates are therefore in the region of 10⁻⁵⁰ or smaller – far below any experimental sensitivity currently conceivable. Therefore, it remains the case that any observation of a CLFV interaction would be a discovery of new physics.
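
To see where numbers of this order come from, a rough sketch, ignoring mixing-matrix factors and overall prefactors, and assuming the atmospheric mass-squared splitting of about 2.5 × 10⁻³ eV²:

```python
# Order-of-magnitude sketch of the nuSM suppression quoted above:
# rate ~ (delta_m2 / M_W**2)**2, ignoring mixing angles and prefactors.
delta_m2 = 2.5e-3  # eV^2, assumed atmospheric mass-squared splitting
M_W = 80.4e9       # W-boson mass in eV

suppression = (delta_m2 / M_W**2) ** 2
print(f"suppression ~ {suppression:.1e}")  # ~1.5e-49: the '10^-50 or smaller' regime
```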

The case for pursuing CLFV searches is compelling. A wide variety of models of new physics predict large enhancements relative to the νSM (30–40 orders of magnitude) for CLFV interactions. Extra dimensions, little Higgs, leptoquarks, heavy neutrinos, grand unified theories, and all varieties of supersymmetric models predict CLFV rates to which upcoming experiments will have sensitivity (see, for example, Mihara et al. 2013). Importantly, ratios of various CLFV interactions can discriminate among the different models and offer insights into the underlying new physics complementary to what experiments at the LHC, neutrino experiments, or astroparticle-physics endeavours can accomplish.

The most constraining limits on CLFV come from μ → eγ, muon-to-electron conversion, μ → 3e, K → ll′ and τ decays. In the coming decade the largest improvements in sensitivity will come from the muon sector. In particular, there are plans for dramatic improvements in sensitivity for the muon-to-electron conversion process, in which the muon converts directly to an electron in the presence of a nearby nucleus with no accompanying neutrinos, μN → eN. The presence of the nucleus is required to conserve energy and momentum. The process is a coherent one and, apart from receiving a small recoil energy, the nucleus is unchanged from its initial state. The Mu2e experiment at Fermilab (Bartoszek et al. 2015) and the COMET experiment at the Japan Proton Accelerator Research Complex (Cui et al. 2009) both aim to improve the current state-of-the-art by a factor of 10,000, starting in the next five years.

The Mu2e experiment

The Mu2e experiment will use the existing Fermilab accelerator complex to take 8-GeV protons from the Booster, rebunch them in the Recycler, and slow-extract them to the experimental apparatus from the Muon Campus Delivery Ring, which was formerly the antiproton Accumulator/Debuncher ring for the Tevatron. Mu2e will collect about 4 × 10²⁰ protons on target, resulting in about 10¹⁸ stopped muons, which will yield a single-event sensitivity for μN → eN of 2.5 × 10⁻¹⁷ relative to normal muon nuclear capture (μN → νμN′). The expected background yield over the full physics run is estimated to be less than half an event. This gives an expected sensitivity of 6 × 10⁻¹⁷ at 90% confidence level and a 5σ discovery sensitivity for all conversion rates larger than about 2 × 10⁻¹⁶. For comparison, many of the new-physics models discussed above predict rates as large as 10⁻¹⁴, which would yield hundreds of signal events. This projected sensitivity is 10,000 times better than the world’s current best limit (Bertl et al. 2006), and will probe effective mass scales for new physics up to 10⁴ TeV/c², well beyond what experiments at the LHC can explore directly.
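
A back-of-the-envelope check of these figures; the current-limit value of about 7 × 10⁻¹³ is taken from the SINDRUM II result cited above (Bertl et al. 2006), and is an assumption in the sense that the text quotes only the improvement factor:

```python
# Back-of-the-envelope check of the quoted Mu2e figures.
SES = 2.5e-17    # single-event sensitivity (conversions per muon capture)
R_model = 1e-14  # conversion rate in some new-physics models (from the text)

print(f"expected signal events: {R_model / SES:.0f}")  # 400 -> 'hundreds'

# Improvement over the current best limit, assuming the SINDRUM II value
# of about 7e-13 (Bertl et al. 2006):
R_limit_old = 7e-13
R_limit_new = 6e-17  # expected 90% CL sensitivity (from the text)
print(f"improvement: ~{R_limit_old / R_limit_new:.0e}")  # ~1e+04, i.e. 10,000x
```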

The Mu2e experimental concept is simple. Protons interact with a primary target to create charged pions, which are focused and collected by a magnetic field in a volume where they decay to yield an intense source of muons. The muons are transported to a stopping target, where they slow, stop and are captured in atomic orbit around the target nuclei. Mu2e will use an aluminium stopping target: the lifetime of the muon in atomic orbit around an aluminium nucleus is 864 ns. The energy of the electron from the CLFV interaction μN → eN – given by the mass of the muon less the atomic binding energy and the nuclear recoil energy – is 104.96 MeV. Because the nucleus is left unchanged, the experimental signature is a simple one – a mono-energetic electron and nothing else. Active detector components will measure the energy and momentum of particles originating from the stopping target and discriminate signal events from background processes.
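
The quoted 104.96 MeV can be reproduced to good accuracy with a simple hydrogen-like estimate of the muonic-aluminium 1s binding energy; a sketch, neglecting finite-nuclear-size corrections (so only approximate):

```python
# Hydrogen-like estimate of the conversion-electron energy in muonic
# aluminium: E_e = m_mu - E_bind(1s) - E_recoil.
m_mu = 105.658          # muon mass, MeV
M_Al = 26.98 * 931.494  # aluminium nuclear mass, MeV
Z, alpha = 13, 1 / 137.036

m_red = m_mu * M_Al / (m_mu + M_Al)      # reduced mass of the bound muon
E_bind = 0.5 * (Z * alpha) ** 2 * m_red  # 1s binding energy, ~0.47 MeV
E_recoil = m_mu**2 / (2 * M_Al)          # nuclear recoil, ~0.22 MeV (E_e ~ m_mu)

E_e = m_mu - E_bind - E_recoil
print(f"E_e ~ {E_e:.2f} MeV")  # ~104.96 MeV, as quoted above
```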

Because the signal is a single particle, there are no combinatorial backgrounds, a limiting factor for other CLFV reactions. The long lifetime of the muonic-aluminium atom can be exploited to suppress prompt backgrounds that would otherwise limit the experimental sensitivity. While the energy scale of the new physics that Mu2e aims to explore is at the tera-electron-volt level, the physical observables are at much lower energy. In Mu2e, 100 MeV is considered “high energy”, and the vast majority of background electrons are at energies below mμ/2 ≈ 53 MeV.

Mu2e’s dramatic increase in sensitivity relative to similar experiments in the past is enabled by two important improvements in experimental technique: the use of a solenoid in the region of the primary target and the use of a pulsed proton beam. Currently, the most intense stopped-muon source in the world is at the Paul Scherrer Institut in Switzerland, where they achieve more than 10⁷ stopped-μ/s using about 1 MW of protons. Using a concept first proposed some 25 years ago (Dzhilkibaev and Lobashev 1989), Mu2e will place the primary production target in a solenoidal magnetic field. This will cause low-energy pions to spiral around the target where many will decay to low-energy muons, which then spiral down the solenoid field and stop in an aluminium target. This yields a very efficient muon beamline that is expected to deliver three orders of magnitude more stopped muons per second than past facilities, using only about 1% of the proton beam power.

A muon beam inevitably contains some pions. A pulsed beam helps to control a major source of background from the pions. A low-energy negative pion can stop in the aluminium target and fall into an atomic orbit. It annihilates very rapidly on the nucleus, producing an energetic photon a small percentage of the time. These photons can create a 105 MeV electron through pair production in the target, which can, in turn, fake a conversion electron. Pions at the target must be identified to high certainty or be eliminated. With a pulsed muon beam, the search for conversion electrons is delayed until almost all of the pions in the beam have decayed or interacted. The delay is about 700 ns, while the search period is about 1 μs long. The lifetime of muonic aluminium is long enough that most of the signal events occur after the initial delay. To prevent pions from being produced and arriving at the aluminium target during the measurement period, the beam intensity between pulses must be suppressed by 10 orders of magnitude.
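
A minimal sketch of this timing argument, using only the lifetime, delay and window values quoted above:

```python
import math

# Fraction of muonic-aluminium decays/captures in the live window,
# using the values quoted above (864 ns lifetime, ~700 ns delay,
# ~1 us search period).
tau, t_delay, t_window = 864.0, 700.0, 1000.0  # ns

frac_after_delay = math.exp(-t_delay / tau)
frac_in_window = frac_after_delay - math.exp(-(t_delay + t_window) / tau)

print(f"muons surviving the delay: {frac_after_delay:.0%}")  # ~44%
print(f"decays inside the window:  {frac_in_window:.0%}")    # ~31%
```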

The Mu2e apparatus consists of three superconducting solenoids connected in series (figure 2). Protons arriving from the upper right strike a tungsten production target in the middle of the production solenoid. The resulting low-energy pions decay to muons, some of which spiral downstream through the “S”-shaped transport solenoid (TS) to the detector solenoid (DS), where they stop in an aluminium target. A strong negative magnetic-field gradient surrounding the production target increases the collection efficiency and improves muon throughput in the downstream direction. The curved portions of the TS, together with a vertically off-centre collimator, preferentially transmit low-momentum negative particles. A gradient surrounding the stopping target reflects some upstream-spiralling particles, improving the acceptance for conversion electrons in the detectors.

When a muon stops in the aluminium target, it emits X-rays while cascading through atomic orbitals to the 1s level. It then has 61% probability of being captured by the nucleus, and 39% probability of decaying without being captured. In the decay process, the distribution of decay electrons largely follows the Michel spectrum for free muon decay, and most of the electrons emitted have energies below 53 MeV. However, the nearby nucleus can absorb some energy and momentum, with the result that, with low probability, there is a high-energy tail in the electron distribution reaching all of the way to the conversion-electron energy, and this poses a potential background. Because the probability falls rapidly with increasing energy, this background can be suppressed with sufficiently good momentum resolution (better than about 1% at 105 MeV/c).

Detector components

Inside the DS, particles that originate from the stopping target are measured in a straw-tube tracker followed by a barium-fluoride (BaF₂) crystal calorimeter array. The inner radii of the tracker and calorimeter are left un-instrumented, so that charged particles with momenta less than about 55 MeV/c, coming from the beamline or from Michel decays in the stopping target, have low transverse momentum and spiral downstream harmlessly.

The tracker is 3 m long with inner and outer active radii of 39 cm and 68 cm, respectively. It consists of about 20,000 straw tubes 5 mm in diameter, which have 15-μm-thick Mylar walls and range in length from 0.4 to 1.2 m (figure 3). They are oriented perpendicular to the solenoid axis. Conversion-electron candidates make between two and three turns of the helix in the 3-m length. The tracker provides better than 1 MeV/c (FWHM) resolution for 105 MeV/c electrons.
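
A rough sketch of the helix geometry behind these numbers, assuming a uniform field of about 1 T in the tracker region (the field value is an assumption, not quoted in the text); it also shows why the un-instrumented inner radius blinds the tracker to Michel electrons:

```python
import math

# Helix-geometry sketch, assuming B ~ 1 T in the tracker region.
# Radius of curvature: r [m] = pT [GeV/c] / (0.3 * B [T]).
B = 1.0  # tesla, assumed

# A Michel-edge electron (53 MeV/c, taken fully transverse) reaches at
# most one helix diameter from the axis:
r_michel = 0.053 / (0.3 * B)
print(f"Michel-edge reach: {2 * r_michel:.2f} m")  # ~0.35 m < 0.39 m inner radius

# A 105 MeV/c conversion electron with a typical pitch (pT ~ 80 MeV/c,
# pz ~ 68 MeV/c, both assumed) advances 2*pi*pz/(0.3*B) per turn:
pz = 0.068  # GeV/c
pitch = 2 * math.pi * pz / (0.3 * B)
print(f"turns over 3 m: {3.0 / pitch:.1f}")  # ~2, i.e. 'two to three turns'
```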

Situated immediately behind the tracker, the calorimeter provides sufficient energy and timing resolution to separate muons and pions from electrons with energy around 100 MeV. The BaF₂ crystals have a fast component (decay time around 1 ns) that makes the Mu2e calorimeter tolerant of high rates without significantly affecting the energy or timing resolutions. Surrounding the DS and half the TS is a four-layer scintillator system that will identify through-going cosmic rays with 99.99% efficiency. A streaming data acquisition (DAQ) architecture will handle about 70 GB of data a second when beam is present. A small CPU farm will provide an online software trigger to reduce the accepted-event rate to about 2 kHz. A dedicated detector system will monitor the suppression of out-of-time protons, while another will determine the number of stopped muons.

Having cleared the CD-2 milestone in March, the Mu2e collaboration is now focused on clearing the next hurdle – a CD-3 “construction readiness” review in early 2016. In preparation, prototypes of the tracker, calorimeter, cosmic-ray veto, DAQ and other important components are being built and tested. In addition, the fabrication of 27 coil modules that make up the “S” of the transport solenoid will begin soon, and the building construction will continue into 2016. The final solenoid commissioning is scheduled to begin in 2019, while detector and beamline commissioning are scheduled to begin in 2020.

Snapshots from the Long Shutdown

A view from the bottom of the ATLAS cavern, up to the LHC beam pipe as the experiment prepares for Run 2 of the LHC at full energy.

Construction of new panels of the pixel detector. The pixel detector is the innermost of ATLAS’s many layers, lying closest to the interaction point where particle collisions occur.

View of the ATLAS calorimeters from below as they were being moved to their final position before the detector closed for the LHC’s second run. Calorimeters measure energy carried by neutral and charged particles.

The ATLAS team watches as the first part of the Insertable B-Layer (IBL), a new component of the pixel subdetector, enters its support tube. The IBL was installed in May 2014, becoming the innermost layer of ATLAS’s inner detector region. It provides an additional tracking point closer to the collision vertex, significantly improving precision.

An ATLAS member vacuums the different sectors inside the 7000-tonne detector. Before the toroid magnets can be turned on for tests, the detector must be thoroughly cleaned. In December 2014, 110 ATLAS members worked in 10 different shifts for five days, cleaning and inspecting the detector and the cavern that houses it, to make sure that no object, however minuscule, had been left behind during the months of upgrade and maintenance.

A thin gap chamber on one of the big wheels being replaced. The big wheels are the final layer of the muon spectrometer, which identifies muons and measures their momenta as they pass through the ATLAS detector. The muon spectrometer is the outermost component of the 25-m tall and 46-m long ATLAS detector.

ATLAS physicists Vincent Hedberg, left, and Giulio Avoni glue optical fibres for the construction of the LUCID calibration system. LUCID is a detector that will help ATLAS continue to measure luminosity with very high precision during the increased collision rates and increased energy expected in the next LHC run.

The vacuum group’s team members lead the installation of LUCID and the LHC beam pipe. The beam pipe delivers the proton–proton collisions to the heart of the detector.

Raphaël Vuillermet, the technical co-ordination team’s engineer, supervises the separation of the muon spectrometer’s big wheels from the cavern balcony. There are four moveable big wheels at each end of the ATLAS detector, each measuring 23 m in diameter. The wheels are separated to access the interior of the muon stations to change faulty chambers.

Members of the ATLAS muon team inspect the monitored drift tubes of the muon spectrometer before the shielding that encircles the beam pipe, where collisions occur, is installed. The shielding is designed to maintain the integrity of the beam and to protect the sensitive components of the detector near the beamline.

From Physics to Daily Life: Applications in Informatics, Energy, and Environment and From Physics to Daily Life: Applications in Biology, Medicine and Healthcare

By Beatrice Bressan (ed.)
Wiley-Blackwell
Hardback: £60 €75
E-book: £54.99 €66.99
(The prices are for each book separately)
Also available at the CERN bookshop

The old adage that “necessity is the mother of invention” explains, in a nutshell, why an institution like CERN is such a prolific source of new technologies. The extreme requirements of the LHC and its antecedents have driven researchers to make a host of inventions, many of which are detailed in these informative volumes that cover two broad areas of applications.

Eclectic is the word that comes to mind when reading through the chapters of these two tomes, which are all linked, in one way or another, to CERN. The editor, Beatrice Bressan, has done a valiant job of weaving together different styles and voices, from technical academic treatise to colourful first-hand account. For example, in one of his many insightful asides in a chapter entitled “WWW and More”, Robert Cailliau, a key contributor to the development of the World Wide Web, muses wryly that even after a 30-year career at CERN, it was not always clear to him what “CERN” meant.

Indeed, as the reader is reminded throughout these two books, CERN is the convenient shorthand for several closely connected organizations and networks, each with its own innovation potential. There’s the institution in Geneva whose staff consist primarily of engineers, technicians and administrators who run the facility. Then there’s the much more numerous global community of researchers that develop and manage giant experiments such as ATLAS. And underpinning all of this is the vast range of industrial suppliers, which provide most of the technology used at CERN, often through a joint R&D process with staff at CERN and its partner institutions.

From a purely utilitarian perspective, the justification for CERN surely lies in the contracts it provides to European industry. Without the billions of euros that have been cycled through European firms to build the LHC, there would be little political appetite for such a massive project. As explained in the introductory chapter by Bressan and Daan Boom – reproduced in both volumes, together with a chapter on Swiss spin-offs – there has been a great deal of knowledge transfer thanks to these industrial contracts. Indeed, this more mundane part of CERN’s industrial impact may well dwarf many of the more visible examples of innovation illustrated in subsequent chapters.

Still, as several examples in these two volumes illustrate, there is no doubt that CERN can also generate the sort of “disruptive technologies” that shape our modern world. The web is the most stunning example, but major advances in particle accelerators and radiation sensors have had amazing knock-on effects on industry and society, too, as chapters by famous pioneers such as Ugo Amaldi and David Townsend illustrate clearly.

The question that journalists and other casual observers never cease to ask, though, is why has Europe not benefitted more directly from such breakthroughs? Why did touch screens, developed for the Super Proton Synchrotron control room, not lead to a slew of European high-tech companies? Why was it Silicon Valley and not some valley in Europe that reaped most of the direct commercial benefits of the web? Where are all of the digital start-ups that the hundreds of millions of euros invested in Grid technology were expected to generate?

Chapters on each of these technologies provide some clues to what the real challenge is. As Cailliau remarks wistfully, “WWW is an excellent example of a missed opportunity, but not by CERN.” In other words, to be successful, invention needs not only a scientific mother, it requires an entrepreneurial midwife, too. That is an area where Europe has been sorely lacking.

The only omission in these otherwise wide-ranging and well-researched books, in my opinion, is the lack of discussion on the central role of openness in CERN’s innovation strategy. Open science and open innovation are umbrella terms mentioned enthusiastically in the introductory chapter by Sergio Bertolucci, CERN’s director for research and computing. But there are no chapters dealing specifically with how open-access publication or open-source software and hardware – areas where CERN has for years been a global pioneer – have impacted knowledge transfer and innovation. Perhaps that is a topic broad enough for a third volume.

That said, there is, in these two volumes, already ample food for more thoughtful debate about successful knowledge management and technology transfer in and around European research organizations like CERN. If these books provoke such debate, and that debate leads to progress in Europe’s ability to transform innovations sparked by fundamental physics into applications that improve daily life, they will have made an important contribution.

• For the colloquium held at CERN featuring talks by contributors to these two books, visit https://indico.cern.ch/event/331449/.

Proton beams are back in the LHC

After two years of intense maintenance and consolidation, and several months of preparation for the restart, the LHC is back in operation. At 10.41 a.m. on 5 April, for the first time in more than two years, proton Beam 1 completed an anti-clockwise circuit of the 27-km ring at the injection energy of 450 GeV. Injected at point 8 on the LHC, Beam 1 was allowed round the ring one step at a time, as collimators were opened at each point in turn, once the operators had checked that all was working well. On the way, the protons provided the first “beam-splash” events for the ATLAS and CMS experiments, at points 1 and 5, respectively. Beam 2 then followed a similar procedure. Injected at point 2, it completed its first orbit in the clockwise direction at 12.27 p.m.

The sight of first beam has set the LHC on course for Run 2 – but not without the kind of challenge to be expected when restarting such a complex system after the work undertaken during the long shutdown. The Herculean task to prepare the machine for operation at 6.5 TeV per beam – almost double the energy of Run 1 – involved the consolidation of some 10,000 electrical interconnections between the magnets, the addition of further magnet-protection systems, and the improvement and strengthening of cryogenic, vacuum and electronic systems.

Following the successful injection tests on 7–8 March, the final training of the superconducting magnets to the current levels required for a beam energy of 6.5 TeV continued in parallel with the many steps required for the machine check-out. During this final phase before beam, the various LHC systems are put through their operational paces from the CERN Control Centre. These include important tests of the beam-dump and beam-interlock systems. All of the magnetic circuits are driven through the ramp, squeeze, ramp-down and pre-cycle steps, together with the collimators and RF. Instrumentation, feedbacks and the control system are also stress-tested.

By mid-March, the powering tests had left all but two of the 1700 or so magnetic circuits fully qualified for 6.5 TeV – the result of a six-month-long programme of rigorous tests involving the quench-protection system, power converters, energy extraction, uninterruptible power supplies, interlocks, electrical quality assurance and magnet behaviour. The dipoles of sector 4-5 proved a little stubborn but reached the target value of 11,080 A – the value for 6.5 TeV with a margin of an additional 100 A – after some 50 training quenches. Sector 3-4 was also nearly fully trained to the same value, when an earth fault occurred in the early morning of 21 March.
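
As a rough cross-check of the 11,080 A target, a sketch assuming the LHC design values of about 8.33 T at 11,850 A (for 7 TeV) and a magnetic bending radius of about 2804 m; these reference numbers are assumptions, not quoted in the text:

```python
# Rough cross-check of the 11,080 A figure, assuming LHC design values
# (~8.33 T at ~11,850 A for 7 TeV; bending radius ~2804 m) and taking
# the dipole field as proportional to current.
p_GeV = 6500.0
rho = 2804.0  # m, assumed magnetic bending radius

B_needed = p_GeV / (0.3 * rho)          # B [T] = p [GeV/c] / (0.3 * rho [m])
I_needed = 11850.0 * (B_needed / 8.33)  # linear scaling of current with field

print(f"dipole field for 6.5 TeV: {B_needed:.2f} T")  # ~7.73 T
print(f"required current: {I_needed:.0f} A")  # ~11,000 A, before the ~100 A margin
```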

Investigations eventually pinned down the fault to a metal fragment lodged in a box housing a high-current bypass diode. After intensive discussions and simulations, the accelerator team decided to melt the fragment, and on 30 March injected a current of almost 400 A into the diode circuit for just a few milliseconds. Measurements made the following day confirmed that the short-circuit had indeed disappeared. Teams then had to re-qualify the sector, testing all of the circuits, particularly the dipole circuit that carries current up to 11 kA, before training could begin again. By 2 April, sector 3-4 had finally reached the target for operation at 6.5 TeV, and preparations to close the LHC for beam were fully under way again, for the successful restart three days later.

• To find out more, see the LHC reports in CERN Bulletin: bulletin.cern.ch.

SESAME passes an important milestone at CERN

The SESAME project – the Synchrotron-light for Experimental Science and Applications in the Middle East – passed an important milestone at the beginning of April, with the complete assembly and successful testing at CERN of the first of 16 magnetic cells for the electron storage ring.

Under construction in Jordan, SESAME is a unique joint venture that brings together scientists from its members: Bahrain, Cyprus, Egypt, Iran, Israel, Jordan, Pakistan, the Palestinian Authority and Turkey. The light source consists of an injector, comprising a 20-MeV microtron and an 800-MeV booster synchrotron, which feeds a 2.5-GeV electron storage ring. CERN is responsible for the magnets of the storage ring and their powering scheme under CESSAMag – a project funded largely by the European Commission. Within the project, CERN has been collaborating with SESAME and the ALBA Synchrotron to design, test and characterize the components of the magnetic system.

The SESAME storage ring is built up from 16 magnetic cells, which make up the periodic structure of the machine, together with insertion regions where special synchrotron radiation can be produced. Each of the periodic cells consists of one bending magnet (a combined function dipole–quadrupole), two focusing and two defocusing magnets (quadrupoles) and four combined sextupole corrector magnets (including orbit and coupling correction). Orders were placed in the UK for the dipoles, in Spain and Turkey for the quadrupoles, and in France, Cyprus and Pakistan for the sextupoles. Italy, Israel and Switzerland are providing the power-supply components, and Iran, Pakistan and Turkey are providing additional in-kind support to CERN in the form of material and personnel.

The integration tests at CERN, which were carried out together with colleagues from SESAME, aimed at assembling a full periodic cell of the machine. Besides the magnets themselves, this involved the girder support structure as well as the vacuum chamber through which the electron beam will pass. The success of the tests demonstrates that these subsystems work together as foreseen.

Production of the magnets and their powering scheme is now in full swing. After acceptance tests and integration for the powering, the components will be shipped in batches to Jordan, where installation and commissioning of the storage ring are planned for 2016, followed by start-up the same year. The SESAME injector, which includes the booster synchrotron, is already operational.

Latest ATLAS results on the Higgs boson

ATLAS physicists are making increasingly precise measurements of the properties of the observed Higgs boson, including production and decay rates, as well as the spin. Comparisons of the results with theoretical predictions could indicate whether new particles or phenomena beyond the Higgs field of the Standard Model are required for electroweak-symmetry breaking.

Recently published studies concern the decays of the Higgs boson into vector bosons (γγ, ZZ, WW, Zγ) and fermions (ττ, bb, μμ) in various production modes (ATLAS Collaboration 2015a). Measurements of the signal strength, μ = σ/σSM, allow the measured cross-sections, σ, of each decay channel to be compared to that predicted by the Standard Model, σSM. The figure shows that the results are compatible with the Standard Model’s prediction, that is, μ = 1. The new combination of all of the production and decay channels gives the most precise value from ATLAS to date: μ = 1.18 +0.15/−0.14.

Other new results include studies of the rare process of Higgs-boson production in association with two top quarks – a channel that allows physicists to probe directly the mysteriously large top–Higgs Yukawa coupling (ATLAS Collaboration 2015b). The analyses looked at a number of different decay modes of the Higgs boson, including decays into fermions (bb, ττ), and into bosons (WW, ZZ), the latter mode being measured for the first time by ATLAS in association with top quarks. Gathering all of the decay channels together, the data show a small excess of events over background with a strength μ(ttH) = 1.8±0.8. This gives a significance of 2.4σ with respect to a “no ttH” hypothesis. Observation of the Higgs boson in this production mode will require the new data expected in the LHC’s Run 2.

ATLAS has also improved its studies of the spin and parity of the Higgs boson (ATLAS Collaboration 2015c). The Standard Model hypothesis of a spin-0 particle with positive parity is favoured at more than 99% confidence level.

In addition, the ATLAS and CMS collaborations have joined forces to combine their precision measurements of the mass of the Higgs boson, and recently presented a new combined value of mH = 125.09 ± 0.24 GeV (±0.21 stat., ±0.11 syst.), with an uncertainty reduced to two parts in a thousand (0.2%).
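
A one-line check that the quoted numbers are consistent, adding the statistical and systematic components in quadrature:

```python
import math

# Consistency check: statistical and systematic uncertainties added
# in quadrature, and the resulting relative precision.
m_H, stat, syst = 125.09, 0.21, 0.11  # GeV

total = math.hypot(stat, syst)
print(f"total uncertainty: {total:.2f} GeV")    # 0.24
print(f"relative precision: {total / m_H:.1%}")  # ~0.2%
```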

The LHC will soon restart running with a proton–proton collision energy of 13 TeV, more than 60% higher than that of Run 1. The production rate of the Standard Model Higgs boson will increase by more than a factor of two, and that of the rare ttH process by almost a factor of four. ATLAS is ready to exploit the full potential of Run 2 to study the Higgs boson and to look beyond for new phenomena.

CMS digs deeply into lepton-pair production

Lepton pairs produced in proton–proton collisions at the LHC provide a clear signal that is easy to identify in the detector. The production is dominated by the Drell–Yan process, in which an intermediate Z/γ* boson is produced by the incoming partons. The measurements of the Drell–Yan production cross-section as a function of the mass of the intermediate boson, its rapidity (corresponding to the scattering angle) and its transverse momentum allow sensitive tests of QCD, the theory of the strong interaction. Recently, the CMS collaboration published two new measurements that provide a comprehensive view of the production of dimuons, a pair of oppositely charged muons, via the decay of Z bosons at a collision energy of 8 TeV at the LHC.

The parton structure of the proton and its evolution, governed by the dynamics of the strong interaction, can be scrutinized over a large range of phase space. By comparing the measurements to calculations that employ different parton distribution functions (PDFs) and different theoretical models for the dynamics, the PDFs and their uncertainty can be improved. These studies are also important for investigating other physics processes, for example searches for new resonances decaying into dileptons in models beyond the Standard Model.

In the CMS analysis, dimuon production in the vicinity of the Z-boson peak was parameterized doubly differentially as functions of the transverse momentum (qT) and the rapidity (y) of the Z boson. The analysis used the data sample of proton–proton collisions at a centre-of-mass energy of 8 TeV, amounting to an integrated luminosity of 19.7 fb–1. The measurement probes the production of Z bosons up to high transverse momenta of qT > 100 GeV, a kinematic regime in which the production is dominated by gluon–quark fusion. Therefore, the measurement is sensitive to the gluon PDF in a kinematic regime that is important for Higgs-boson production via gluon fusion. In the future, Z-boson production can also be used to constrain the gluon PDF and provide information complementary to other processes employed, such as direct photon production. The data are well reproduced within uncertainties by the next-to-next-to-leading-order predictions computed with the FEWZ simulation code. The MADGRAPH and POWHEG predictions deviate from the data by up to 20% at high Z-boson transverse momentum.

The angular distribution of the final-state leptons in Drell–Yan production is determined by the vector and axial-vector coupling structure of the Standard Model Lagrangian, and by the relative contributions of the quark–antiquark annihilation and quark–gluon Compton processes. In the presence of higher-order QCD corrections, the general structure of the lepton angular distribution in the boson rest-frame is given by a formula that contains a set of angular coefficients.
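
For reference, this structure is conventionally written (in the Collins–Soper frame, keeping only the coefficients measured here; this standard form is not reproduced in the original article) as:

```latex
\frac{d\sigma}{d\Omega} \;\propto\; (1+\cos^2\theta)
  \;+\; A_0\,\tfrac{1}{2}\left(1-3\cos^2\theta\right)
  \;+\; A_1\,\sin 2\theta\,\cos\phi
  \;+\; A_2\,\tfrac{1}{2}\sin^2\theta\,\cos 2\phi
  \;+\; A_3\,\sin\theta\,\cos\phi
  \;+\; A_4\,\cos\theta
```

where θ and φ are the polar and azimuthal angles of the lepton in the boson rest-frame; the Lam–Tung relation discussed below is the statement A0 = A2.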

Using the 8 TeV data, CMS has measured the five major angular coefficients, A0 to A4, as a function of qT and y. None of the theoretical models tested describes all of the coefficients satisfactorily. The coefficients A0 and A2 measured by CMS in proton–proton collisions at the LHC are larger than those measured in proton–antiproton collisions at Fermilab’s Tevatron at a lower centre-of-mass energy. This is expected, owing to the significant contribution of the quark–gluon process in proton–proton collisions at the LHC. In addition, as the figure shows, the analysis confirmed for the first time the anticipated deviation from the Lam–Tung relation, A0 = A2 (Lam and Tung 1979). This deviation is expected in QCD calculations beyond the leading order. The measurement by CMS shows that A0 > A2, especially at high qT. Nonzero values were also measured for A1 and A3.

The comprehensive study of the Z-boson production mechanism presented in these two recently published CMS papers lays the foundation for future high-precision measurements, such as the measurement of the mass of the W boson and the electroweak mixing angle.
