Two teams take big steps forward in plasma acceleration

The high electric-field gradients that can be set up in plasma have offered the promise of compact particle accelerators since the late 1970s. The basic idea is to use the space-charge separation that arises in the wake of either an intense laser pulse or a pulse of ultra-relativistic charged particles. Towards the end of 2014, groups working on both approaches reached important milestones. One team, working at the Facility for Advanced Accelerator Experimental Tests (FACET) at SLAC, demonstrated plasma-wakefield acceleration with both a high gradient and a high energy-transfer efficiency – a crucial combination not previously achieved. At Lawrence Berkeley National Laboratory, a team working at the Berkeley Lab Laser Accelerator (BELLA) facility boosted electrons to the highest energies ever recorded for the laser-wakefield technique.

Several years ago, a team at SLAC successfully accelerated electrons in the tail of a long electron bunch from 42 GeV to 85 GeV in less than 1 m of plasma. In that experiment, the particles leading the bunch created the wakefield to accelerate those in the tail, and the total charge accelerated was small. Since then, FACET has come on line. Using the first 2 km of the SLAC linac to deliver an electron beam of 20 GeV, the facility is designed to produce pairs of high-current bunches with a small enough separation to allow the trailing bunch to be accelerated in the plasma wakefield of the drive bunch.

Using the pairs of bunches at FACET, some of the earlier team members together with new colleagues have carried out an experiment in the so-called “blow-out” regime of plasma-wakefield acceleration, where maximum energy gains at maximum efficiencies are to be found. The team succeeded in accelerating some 74 pC of charge in the core of the trailing bunch of electrons by about 1.6 GeV per particle in a gradient of about 4.4 GeV/m (Litos et al. 2014). The final energy spread for the core particles was as low as 0.7%, and the maximum efficiency of energy transfer from the wake to the trailing bunch was in excess of 30%.
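
As a rough consistency check of the numbers quoted above (a back-of-envelope sketch using only the values given in the text, not the published analysis), the energy gain divided by the gradient gives the implied acceleration length, and the charge times the gain gives the energy deposited in the trailing-bunch core:

```python
# Back-of-envelope check of the FACET figures quoted in the text (illustrative only).
energy_gain_gev = 1.6      # energy gain per electron, GeV (from the text)
gradient_gev_per_m = 4.4   # accelerating gradient, GeV/m (from the text)
charge_c = 74e-12          # accelerated charge, coulombs (from the text)

# Implied plasma acceleration length, consistent with "less than 1 m of plasma"
length_m = energy_gain_gev / gradient_gev_per_m

# Total energy transferred to the trailing-bunch core: E = Q * V
energy_j = charge_c * energy_gain_gev * 1e9

print(f"length ~ {length_m:.2f} m, energy ~ {energy_j * 1e3:.0f} mJ")
```

The implied length of roughly a third of a metre squares with the quoted sub-metre plasma stage.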

Meanwhile, a team at Berkeley has been successfully pursuing laser-wakefield acceleration for more than a decade. This research was boosted when the specially conceived BELLA facility recently came on line with its petawatt laser. In work published in December, the team at BELLA used laser pulses at 0.3 PW peak power to create a plasma channel in a 9-cm-long capillary discharge waveguide and accelerate electrons to the record energy of 4.2 GeV (Leemans et al. 2014). Importantly, the 16 J of laser energy used was significantly lower than in previous experiments – a result of using the preformed plasma waveguide set up by pulsing an electrical discharge through hydrogen in a capillary. The combination of increased electron-beam energy and lower laser energy bodes well for the group’s aim to reach the target of 10 GeV.
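
The quoted 16 J of pulse energy and 0.3 PW of peak power also imply a pulse duration, under the simple assumption that duration is approximately energy divided by peak power (an illustrative estimate, not a figure from the paper):

```python
# Rough pulse-duration estimate for the BELLA shot described in the text,
# assuming duration ~ energy / peak power (illustrative only).
pulse_energy_j = 16.0      # laser pulse energy, J (from the text)
peak_power_w = 0.3e15      # peak power, W (0.3 PW, from the text)

duration_s = pulse_energy_j / peak_power_w
print(f"pulse duration ~ {duration_s * 1e15:.0f} fs")
```

This gives a pulse of a few tens of femtoseconds, typical of petawatt-class Ti:sapphire systems.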

Nuclei come under the microscope in California

It has long been known that, when they are put under a sufficiently energetic microscope, nuclei reveal a complicated structure – the more energetic the probe, the more complex the structure. In recent years, continuing studies of deuteron–nucleus (dA) and proton–nucleus (pA) collisions have demonstrated that many features first observed in heavy-ion (AA) collisions are also present in these lighter collisions, and some of these features have even been seen in high-multiplicity pp collisions. These observations have generated intense interest in the nuclear initial state, as was evident when more than 120 physicists gathered in California’s Napa Valley on 3–7 December to discuss the initial state in these collisions during the 2nd International Conference on Initial Stages in High-Energy Nuclear Collisions (IS2014).

In particular, pA collisions at the LHC have demonstrated the existence of anisotropic particle production. The angular distributions look very similar to those observed in AA collisions, where the anisotropy has been attributed to hydrodynamic flow. The material produced in these collisions appears to flow like a low-viscosity fluid, and the final-state anisotropy mimics that present in the initial elliptic-shaped collision region. Recent studies at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) as well as at the LHC have shown that, in addition to the American-football-shaped collision region, there are also event-to-event anisotropies caused by the different random positions of nucleons within the nucleus. Much of the observed anisotropy might be explained by models based on hydrodynamic flow. One focus of IS2014 was the question of how hydrodynamic flow can arise in smaller nuclear systems, particularly pA collisions. One new approach to this question is being pursued at RHIC, in which 3He collided with gold last year, to see how the triangular initial state manifests itself in the collision products (figure 2).

Some of these phenomena also appear in high-multiplicity pp collisions. One example is “the ridge”, observed as two-particle correlations between particles with similar azimuthal angles but separated by large rapidities. In contrast, another expected consequence of the quark–gluon plasma – jet quenching – appears, for the most part, to be present only in AA collisions.

The meeting also covered recent theoretical developments. As centre-of-mass energies increase, collisions probe partons with smaller and smaller momentum fractions (Bjorken-x values). As x decreases, the parton density increases, and at low enough x, saturation must set in: gluons begin to recombine as well as to split. Although saturation is expected on general principles, the details remain the subject of spirited theoretical discussion. One key topic addressed in Napa was the search for the colour-glass condensate (CGC), a hypothetical state of matter in which the gluons produce coherent fields; such a condensate would lead to new nuclear phenomena.

The meeting included presentations on a variety of experimental techniques. The RHIC and LHC collaborations all made presentations highlighting their data and plans for AA, pA and pp collisions. In addition to hadronic collisions, one session was devoted to ultra-peripheral collisions, where two colliding nuclei interact electromagnetically. Here, reactions such as photonuclear production of vector mesons are sensitive to details of the nuclear initial state.

The congenial atmosphere led to many fruitful discussions, and a third conference is planned in Lisbon in 2016.

• For more about IS2014, visit http://is2014.lbl.gov.

Gamma-ray bursts are a real threat to life


A new study confirms the potential hazard of nearby gamma-ray bursts (GRBs), and quantifies the probability of such an event striking Earth and, more generally, occurring in the Milky Way and other galaxies. The authors find a 50% chance that a nearby GRB powerful enough to cause a major extinction of life on the planet took place during the past 500 million years (Myr). They further estimate that GRBs prevent complex life like that on Earth in 90% of all galaxies.

GRBs occur about once a day from random directions in the sky. Their origin remained a mystery until about a decade ago, when it became clear that at least some long GRBs are associated with supernova explosions (CERN Courier September 2003 p15). When nuclear fuel is exhausted at the centre of a massive star, thermal pressure can no longer sustain gravity and the core collapses on itself. If this process leads to the formation of a rapidly spinning black hole, accreted matter can be funnelled into a pair of powerful relativistic jets that drill their way through the outer layers of the dying star. If such a jet is pointing towards Earth, its high-energy emission appears as a GRB.

The luminosity of long GRBs – the most powerful ones – is so intense that they are observed throughout the universe (CERN Courier April 2009 p12). If one were to happen nearby, the intense flash of gamma rays illuminating the Earth for tens of seconds could severely damage the thin ozone layer that absorbs ultraviolet radiation from the Sun. Calculations suggest that a fluence of 100 kJ/m2 would create a depletion of 91% of this life-protecting layer on a timescale of a month, via a chain of chemical reactions in the atmosphere. This would be enough to cause a massive life-extinction event. Some scientists have proposed that a GRB could have been at the origin of the Ordovician extinction some 450 Myr ago, which wiped out 80% of the species on Earth.

With increasing statistics on GRBs, a new study now confirms a 50% likelihood of a devastating GRB event on Earth in the past 500 Myr. The authors, Tsvi Piran from the Hebrew University of Jerusalem and Raul Jimenez from the University of Barcelona in Spain, further show that the risk of life extinction on extra-solar planets increases towards the denser central regions of the Milky Way. Their estimate is based on the rate of GRBs of different luminosity and the properties of their host galaxies. Indeed, the authors found previously that GRBs are more frequent in low-mass galaxies, such as the Small Magellanic Cloud, that contain only a small fraction of elements heavier than hydrogen and helium. This reduces the GRB hazard in the Milky Way by a factor of 10 compared with the overall rate.

The Milky Way would therefore be among only 10% of all galaxies in the universe – the larger ones – that can sustain complex life in the long term. The two theoretical astrophysicists also claim that GRBs prevent evolved life as it exists on Earth in almost every galaxy that formed earlier than about five-thousand-million years after the Big Bang (at a redshift z > 0.5). Despite obvious, necessary approximations in the analysis, these results show the severe limitations set by GRBs on the location and cosmic epoch when complex life like that on Earth could arise and evolve across thousands of millions of years. This could help explain Enrico Fermi’s paradox on the absence of evidence for an extraterrestrial civilization.

Emilio Picasso’s contagious enthusiasm for physics

Never has such an illustrious career at CERN hung from so slender a thread of improbability. He was in Genoa, I was in Geneva. Were we destined to meet? In Bristol? As a result of some tiny chance? His final day of a one-year sabbatical. My first day of a visit. All alone on his last evening, Emilio wanted to say goodbye to Bristol and went to a bar. Out of hundreds of options, I ended up in the same bar…and got a warm welcome. I described the new g-2 experiment, which was just starting to roll: the first ever muon storage ring at 1.2 GeV to dilate the muon lifetime to 27 μs and see more precession cycles. Simon van der Meer was on board but no one else. Emilio loved fundamental physics, and there and then he offered to join the project, visiting CERN from Genoa and later becoming a full-time member of staff. Little did I know that I would be making speeches and writing articles in his honour: Chevalier of the Legion of Honour of France and Knight Grand Cross of the Order of Merit of the Republic of Italy.
Francis J M Farley

Emilio read physics at the University of Genoa, where he stayed after receiving his doctorate in July 1956. Within a small team, he worked mainly on technical aspects of visual particle detectors, first with gas bubble chambers – based on using a supersaturated solution of gas in a liquid at room temperature – and diffusion chambers. By the early 1960s, he had moved on with some of his collaborators to study proton and meson interactions in nuclear emulsions, and participated in the International Co-operative Emulsion Flights, which took two large stacks of emulsion plates high into the atmosphere to detect the interactions of energetic cosmic rays. This international collaborative effort included the Bristol group of Cecil Powell, recipient of the 1950 Nobel Prize in Physics for his work on emulsions and their use in the discovery of the particle now known as the pion in cosmic rays.

So it was not surprising that Emilio arrived in Bristol as a NATO postdoctoral fellow in 1962/1963. There, his chance meeting with Farley in 1963 set him on course to CERN. When he offered to join the g-2 experiment, Farley accepted with pleasure, and soon Emilio started travelling to Geneva from Genoa, becoming a research associate at CERN in 1964. From the beginning he insisted on understanding everything in depth. He wrote Fortran programs, checked the calculations and found some mistakes, which luckily for the future of the experiment were not lethal.

Emilio’s enthusiasm was contagious, and he and Farley gradually assembled a small team. Farley recalls: “There were many difficulties, but eventually it worked and we measured the anomalous moment of the muon to 270 ppm. The result disagreed with theory by 1.7σ but we were sure of our number (confirmed by the next experiment) and we published anyway. (The fashionable shibboleth is that you need 5σ for an effect; true if you are looking for a bump in a wiggly graph, which might be anywhere. But for one number 2–3σ is important and anything over 3σ is huge). The discrepancy was enough to worry the theorists, who set to work and discovered a new correction. Then they agreed with us. This was a triumph for the experiment.”

In 1967 Farley moved to a job in England and Emilio became group leader, having joined the CERN staff in November 1966. Together with John Bailey they discovered the magic energy, 3.1 GeV, at which electric fields do not affect the spin precession. This led to a new muon storage ring with a uniform magnetic field and vertical focusing using an electric quadrupole field. Emilio masterminded this much larger project, creating a warm happy atmosphere and encouraging new ideas. The muon precession could now be followed out to 500 μs and g-2 was measured to 7 ppm. The team had the right number again (confirmed by the later measurement at Brookhaven National Laboratory) and this time it agreed with the theory.
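
The magic energy mentioned above follows from the condition that the electric-field contribution to the spin precession vanishes when γ = √(1 + 1/a_μ), where a_μ = (g−2)/2. A minimal numerical sketch, using standard values for the muon anomaly and mass (these constants are assumptions, not taken from the article):

```python
import math

# The "magic" momentum at which electric fields do not affect the muon spin
# precession satisfies gamma = sqrt(1 + 1/a_mu), with a_mu = (g-2)/2.
# The constants below are standard values, not from the article.
a_mu = 1.16592e-3       # muon anomalous magnetic moment
m_mu_gev = 0.1056584    # muon mass, GeV/c^2

gamma_magic = math.sqrt(1.0 + 1.0 / a_mu)
p_magic_gev = m_mu_gev * math.sqrt(gamma_magic**2 - 1.0)

print(f"gamma ~ {gamma_magic:.1f}, p ~ {p_magic_gev:.2f} GeV/c")
```

The result, close to 3.1 GeV/c, reproduces the magic energy that Picasso, Farley and Bailey exploited in the second muon storage ring.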

While the g-2 saga was coming to an end, Emilio and Luigi Radicati, who was then a visiting scientist at CERN, became interested in the possibility of detecting gravitational waves by exploiting suitably coupled superconducting RF cavities. The idea was to detect the change of the cavity Q-value induced by gravitational waves. They were joined by Francesco Pegoraro and CERN’s Philippe Bernard, and published papers analysing the principle in 1978/1979. It was an unconventional idea, which Emilio continued to consider and improve on and off with various collaborators for the next quarter of a century. However, at the end of the 1970s a much larger project lay on CERN’s horizon.

In November 1978, John Adams – then CERN’s executive director-general – decided to push R&D on superconducting RF with a view to increasing the energy reach of the proposed Large Electron–Positron (LEP) collider. He asked Philippe Bernard and Herbert Lengeler to put together a research programme, and they in turn proposed that Emilio should co-ordinate collaboration with outside laboratories because of his “vivid interest in RF superconductivity” and his “excellent contacts” in the field. The result was that in spring 1979, Emilio became team leader of the development programme at CERN, and responsible for co-ordination with other laboratories – in Genoa, Karlsruhe, Orsay and Wuppertal.

The development work at CERN led to superconducting cavities that could achieve the necessary high electric-field gradients, and the team went on to design and build, in collaboration with European industries, the system of superconducting RF that was eventually deployed in LEP during the 1990s. In 1986, Emilio and others proposed the installation of a maximum of 384 superconducting cavities to reach an energy of at least 220 GeV in the centre-of-mass. In the end 288 such cavities were installed, and LEP eventually reached a total energy of 208 GeV. Emilio would later express sadness that the collider’s energy was never brought to its fullest potential with the maximum number of cavities.

Leader of LEP
However, he was to take on a still more significant role in 1980, when at the suggestion of the new director-general, Herwig Schopper, CERN Council designated him LEP project leader. With Schopper’s agreement, Emilio began by setting up the LEP Management Board, consisting of the best experts at CERN, in all of the various aspects, from magnets, RF and vacuum to civil engineering and experimental halls. The board met one day a week throughout the period of LEP’s construction, discussing all of the decisions that needed to be taken, including the technical specifications for contracts with industry. Schopper would regularly join in, mainly to observe and participate in the decision-making process, which took place in a warm and enthusiastic atmosphere.

The main aspect of the project in which Emilio had no experience was civil engineering, but one of the early major issues concerned the exact siting of the tunnel, which in the initial plans was to pass for 12 km beneath some 1000 m of water-bearing limestone in the Jura mountains. While this would avoid the larger communities in France and Switzerland, it presented formidable tunnelling challenges. Rather than downsize, Emilio decided to look into locating the ring further from the mountains. This needed crucial support from the local people, and he was instrumental in setting up regular meetings with the communes around CERN. The result was that in the final design, the LEP tunnel passed for only 3.3 km under the Jura, beneath 200 m of limestone at most.

This final design was approved in December 1981 and construction of the tunnel started in 1983. It was not without incident: when water burst into the part of the tunnel underneath the Jura, it formed a river that took six months to eliminate, and the smooth planning for construction and installation became a complex juggling act. Nevertheless by July 1988, the first sector was installed completely. A test with beam proved that the machine was indeed well designed, and just over a year later, the first collisions were observed on 13 August 1989.

Following the completion of the construction phase of LEP, and the end of his successful mandate as leader of the LEP project, Emilio began to focus again on the detection of gravitational waves, an interest that had continued even while he was a director at CERN, when he supported the installation of the EXPLORER gravitational-wave detector at the laboratory in 1984. He was nominated director of the Scuola Normale Superiore in Pisa in 1991, where he had been named professor a decade earlier, and served as such for the following four years, retiring from CERN in 1992. At Pisa, he played a key role in supporting approval of Virgo – the laser-based gravitational-wave detector adopted by INFN and CNRS, which is currently running near Cascina, Pisa.

Emilio’s love for physics problems lasted throughout his life in science – a life during which warmth and welcome radiated. He knew how to switch people on. Now, sadly, this bright light is dimmed, but the afterglow remains and will be with us for many years.

Emilio Picasso 1927–2014

After a long illness, Emilio Picasso passed away on 12 October. One of the earliest and most outstanding staff members of CERN, he made remarkable contributions to the prodigious success of the organization for more than 50 years.

Born in Genoa on 9 July 1927, Emilio first studied mathematics, followed by two years of physics. After his doctorate he became assistant professor for experimental physics at the University of Genoa, and began research in atomic physics before changing to particle physics.

Short stays with the betatron at Torino and with the electron synchrotron at Frascati provided him with his first experiences with particle accelerators. He then went to Bristol in the years 1962/1963, where he joined the group of Cecil Powell, who had received the Nobel prize in 1950 for investigating cosmic radiation using photographic emulsions and discovering the π meson. There Emilio met Francis Farley who told him that he intended to measure at CERN the anomalous magnetic moment of muons circulating in a storage ring. After some drinks they became friends, and Emilio decided to join Farley on the CERN experiment.

The measurement of the anomalous magnetic moment – or more precisely the deviation of its value from the Bohr magneton, expressed as “g-2” – yields an extremely important quantity for testing quantum electrodynamics (QED). Emilio was attracted by this experiment because it matched two different aspects of his thinking. He was fascinated by fundamental questions, and at the same time the experiment required new technologies for magnets.

From 1963, Emilio commuted between Genoa and CERN, becoming a research associate in 1964 to work on the g-2 experiment and a CERN staff member in 1966. In addition to Farley, John Bailey and Simon van der Meer joined the group, which Emilio was later to lead. The measurements went on for 15 years at two successive storage rings (the second with Guido Petrucci and Frank Krienen), and achieved an incredible accuracy of 7 ppm, so becoming one of the most famous precision tests of QED.

In 1978, Luigi Radicati convinced Emilio to participate in an experiment to look for gravitational waves produced by particles circulating in a storage ring. Superconducting RF cavities were to be used as detectors. The attempt was unsuccessful, but it gave Emilio the opportunity to get to know the technology of superconducting cavities – knowledge that was to serve him extremely well later at the Large Electron–Positron collider (LEP).

In 1981, the LEP project was approved by CERN Council, alas under very difficult conditions, i.e. with a reduced and constant budget. In addition, the requisite personnel had to be found among the staff of the newly unified CERN I and CERN II laboratories. Under such conditions it was not easy to find the right person to lead the LEP project. Several outstanding accelerator experts were available at CERN, and it would have been an obvious step to appoint one of them as project leader. However, because it became necessary to reassign about a third of the CERN staff to new tasks – implying that personal relations established across many years had to be broken – I considered the human problems as dominant. Hence I appointed Emilio as project leader for LEP, a decision that was greeted by many with amazement. I considered his human qualities for this task to be more important than some explicit technical know-how. Emilio was respected by the scientists as well as by the engineers. He was prepared to listen to people, and his moderating temper, his honesty and reliability, and last but not least his Mediterranean warmth, were indispensable for the successful construction and operation of what was by far the largest accelerator of its time. His name will always remain linked with this unique project, LEP – a true testament to Emilio’s skills as a scientist and as a project leader.

After his retirement I visited Emilio often in a small office in the theory division, where he had settled to study fundamental physics questions again. But he also took up other charges. One of the most important tasks was the directorship of the Scuola Normale Superiore at Pisa from 1991 to 1995, where he had been nominated professor in 1981 – a commitment that he could not fulfil at the time because of his CERN engagements.

Emilio received many distinctions, among them the title of Cavaliere di Gran Croce dell’Ordine al Merito della Repubblica, one of the highest orders of the Italian state.

Despite the heavy demands of his job he always cared about his family, and in return his wife Mariella gave him loving support in difficult times.

We all regret that sadly Emilio was not well enough to enjoy the enormous recent success of CERN. Science has lost a great physicist and many of us a dear friend.

Herwig Schopper, CERN director-general, 1981–1988.

CMS: final Run I results on the Higgs boson

Since the inception of the LHC, a central part of its physics programme has been aimed at establishing or ruling out the existence of the Higgs boson, the stubbornly missing building block of the Standard Model of elementary particles. After the discovery of a Higgs boson by the ATLAS and CMS experiments was announced in July 2012, the study of its properties became of paramount importance in understanding the nature of this boson and the structure of the scalar sector. Given the measured mass of the Higgs boson, all of its properties are predicted by the theory, so deviations from the predictions of the Standard Model could open a portal to new physics.

The CMS collaboration recently completed the full LHC Run 1 data analysis in each of the most important channels for the decay and production of the Higgs. Bosonic decays such as H → ZZ → 4 leptons (4l), H → γγ, and H → WW → lνlν, and fermionic decays such as H → bb, H → ττ and H → μμ, were studied, and the results have been published. All of the analyses are based on the proton–proton collision data collected in 2011 and 2012 at the LHC, corresponding to 5 fb–1 at 7 TeV and 20 fb–1 at 8 TeV centre-of-mass energy. The di-boson channels are observed with significance close to or above 5σ. The Standard Model’s hypothesis of 0+ for the spin-parity of the observed Higgs boson is found to be favoured strongly against other spin hypotheses (0–, 1±, 2±). The comparison of off-shell and on-shell production of the Higgs boson in the ZZ channel also sets a constraint on the natural width of the Higgs boson that is comparable to the width expected in the Standard Model. Furthermore, evidence is established for the direct coupling to fermions, with significance above 3σ for the decay to ττ.


The combination of all of the production and decay channels provides the opportunity to obtain a global view of the most important Higgs-boson parameters, and to disentangle the contributions to the measured rates from the various processes. The first preliminary results on the full Run 1 data were presented by CMS last July at the International Conference on High Energy Physics in Valencia. Now, the collaboration has submitted the final “Run 1 legacy” results on the Higgs boson for publication. The results combining individual channels are remarkably coherent.

A first major outcome of the combination is a precise measurement of the mass of the Higgs boson. This is achieved by exploiting the two channels with the highest resolution: H → γγ and H → ZZ → 4l. Thanks to the high precision and accurate calibration of the CMS electromagnetic calorimeter, the H → γγ channel gives the most precise single-channel measurement of MH = 124.70±0.34 GeV. Using the combination with the H → ZZ → 4l channel, the final measurement of MH = 125.03 +0.29/–0.31 GeV is obtained with an excellent precision of two per mille. The measurements in the two channels (figure 1) are compatible at the level of 1.6σ, indicating full consistency with the hypothesis of a single particle. The measured value of the mass is used for further studies of the Higgs-boson’s couplings. It is worth noting that the uncertainty is still dominated by the statistical uncertainty and will therefore improve in Run 2.
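
The statistical essence of such a combination can be sketched with a simple inverse-variance-weighted average. The γγ value below is quoted in the text; the four-lepton value (125.6 ± 0.45 GeV) is an assumed illustrative input, and the real CMS result comes from simultaneous likelihood fits with correlated systematics, not from this naive formula:

```python
import math

def combine(measurements):
    """Inverse-variance-weighted average of (value, sigma) pairs.

    Returns (mean, combined sigma). A textbook formula for uncorrelated
    Gaussian measurements; a sketch, not the CMS likelihood machinery.
    """
    weights = [1.0 / s**2 for _, s in measurements]
    mean = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

gg = (124.70, 0.34)   # H -> gamma gamma, GeV (from the text)
zz = (125.6, 0.45)    # H -> ZZ -> 4l, GeV (assumed for illustration)

mass, sigma = combine([gg, zz])
tension = abs(zz[0] - gg[0]) / math.hypot(gg[1], zz[1])
print(f"combined mass ~ {mass:.2f} +/- {sigma:.2f} GeV, tension ~ {tension:.1f} sigma")
```

Even this crude average lands close to the quoted combined mass and channel compatibility, showing that the combination is dominated by straightforward weighting of the two high-resolution channels.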

The various measurements performed at the two centre-of-mass energies are carried out in a large number (around 200) of mutually exclusive event categories. Each category addresses one or more of the different production and decay channels. Four production mechanisms are considered. Gluon–gluon fusion (ggH) is a purely quantum process, where a single Higgs boson is produced via a virtual top-quark loop. In vector-boson fusion (VBF), the Higgs boson is produced in association with two quarks. Lastly, in VH- and ttH-associated production, the Higgs boson is produced either in association with a W/Z boson or with a top–antitop quark pair. The main decay channels are indicated on the left of figure 2, which shows the measurement of the signal strength μ, defined as the ratio of the measured yield relative to the Standard Model prediction. All of the measurements are found to be consistent with μ = 1, which by definition indicates consistency with the prediction. The combination of all of the measurements gives an overall signal strength of 1.00±0.13. The figure also shows the signal strengths measured for the different decay tags. All of the combinations are obtained using simultaneous likelihood fits of all channels, with all of the systematic and theory uncertainties profiled in the fits.

Signal strengths compatible with Standard Model expectations are also found for each of the production mechanisms, with an observation of ggH production at more than 5σ and evidence for VBF, VH and ttH production at close to or above 3σ.

Another set of tests of consistency with the Standard Model consists of introducing coupling modifiers, κ, that scale the Standard Model couplings. The simplest case is to allow one scaling factor for the coupling of the Higgs boson to the vector bosons (κV) and one for the coupling to fermions (κf), and to resolve the loops – namely gluon–gluon fusion and γγ decay – using Standard Model contributions only.

Figure 3 shows the 1σ contours obtained from the different decay channels in the plane κf versus κV, and from their combination. The only channel that can distinguish between the different relative signs of the two couplings is H → γγ, because of the negative interference between the top-quark and W-boson contributions in the loop. The combination (thick curve) shows that the measurement is consistent within 1σ with κV = κf = 1, while the opposite sign hypothesis, κV = –κf = 1, is excluded at a confidence level (CL) larger than 95%.

Many other tests of modified couplings with respect to the Standard Model have been carried out, and all of the results indicate consistency with the predictions. For instance, the so-called “custodial” symmetry that fixes the relative couplings κWZ of the Higgs boson to W and Z bosons is verified at the 15% precision level and the couplings to fermions of the third family are verified at the 20–30% precision level.

Fig. 4. Graphical representation of the results obtained from likelihood scans for a model where the gluon and photon loop-interactions with the Higgs boson are resolved in terms of other Standard Model particles. The dashed line corresponds to the Standard Model expectation. The inner bars represent the 68% CL intervals, while the outer bars represent the 95% CL intervals. The ordinate differs between fermions and vector bosons to take account of the expected Standard Model scaling of the coupling with mass, depending on the type of particle. The continuous line shows the result of the coupling–mass fit, while the inner and outer bands represent the 68% and 95% CL regions.

The Higgs boson is tightly connected with the mechanism for generating mass in the Standard Model: the Yukawa couplings for the fermions are predicted to be proportional to the mass of the fermions themselves, while the gauge couplings to the vector bosons are proportional to the masses squared of the vector bosons. Figure 4 illustrates this by showing the couplings to the Standard Model particles as a function of their masses. All of the measurements are in excellent agreement with the expected behaviour of the couplings, indicated by the black line. In this plot the H → μμ channel is also included and, even though it currently has a large uncertainty, it is consistent with the fitted line. This demonstrates beautifully that the Higgs boson is linked to the fundamental field at the origin of the masses of particles.
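
The fermion side of this mass–coupling relation is easy to make concrete: in the Standard Model the Yukawa coupling of a fermion f is y_f = √2 m_f / v, with v ≈ 246 GeV the Higgs vacuum expectation value. A minimal sketch, using standard fermion masses (the numerical values are assumptions, not figures from the article):

```python
import math

# Standard Model Yukawa coupling: y_f = sqrt(2) * m_f / v.
# v and the fermion masses below are standard values, not from the article.
V_EW = 246.22  # Higgs vacuum expectation value, GeV

def yukawa(mass_gev):
    """Yukawa coupling of a fermion with the given mass."""
    return math.sqrt(2.0) * mass_gev / V_EW

for name, mass in [("muon", 0.1057), ("tau", 1.777),
                   ("b quark", 4.18), ("top quark", 172.5)]:
    print(f"{name:9s} m = {mass:8.4f} GeV  ->  y ~ {yukawa(mass):.2e}")
```

The spread of nearly four orders of magnitude between the muon and the top quark is exactly what makes the straight line in figure 4 such a striking test of the mass-generation mechanism.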

Summary and conclusions

CMS has just submitted for publication the final Run 1 measurements of the properties of the Higgs boson – mass, couplings and spin-parity parameters – with the highest precision allowed by the current statistics. So far, all of the results are found to be consistent, within uncertainties, with the newly established scalar sector, just as predicted for the spontaneous electroweak symmetry breaking in the Standard Model. The measurements provide overwhelming evidence that the observed Higgs boson couples to other particles in a way that is consistent with the Standard Model predictions. After achieving the major milestone of completing all of the most important Run 1 Higgs-boson measurements, the CMS experiment will now direct its efforts towards the exploitation of the upcoming LHC run (Run 2) at a centre-of-mass energy of 13 TeV. The new energy frontier promises increased reach into the Higgs sector, but also a unique look at totally new, uncharted territory.

ARIEL begins a new future in rare isotopes

TRIUMF is Canada’s national laboratory for particle and nuclear physics, located in Vancouver. Founded in 1968, the laboratory’s particle-accelerator-driven research has grown from nuclear and particle physics to include vibrant programmes in materials science, nuclear medicine and accelerator science, while maintaining strong particle-physics activities elsewhere, for example at CERN and the Japan Proton Accelerator Research Complex. Currently, the laboratory’s flagship on-site programme uses rare-isotope beams (RIBs) for both discovery and application in the physical and health sciences.

Rare isotopes are not found in nature, yet they have properties that have shaped the evolution of the universe in fundamental ways, from powering the burning of stars to generating the chemical elements that make up life on Earth. These isotopes are foundational for modern medical-imaging techniques, such as positron-emission tomography and single-photon emission computed tomography, and are useful for therapeutic purposes, including the treatment of cancer tumours. They are also powerful tools for scientific discovery, for example in determining the structure and dynamics of atomic nuclei, understanding the processes by which heavy elements in the universe were created, enabling precision tests of fundamental symmetries that could challenge the Standard Model of particle physics, and serving as probes of the interfaces between materials.

TRIUMF’s Isotope Separator and Accelerator – ISAC – is one of the world’s premier RIB facilities. ISAC’s high proton-beam power (up to 50 kW) that produces the rare isotopes, its chain of accelerators that propels them up to energies of 6–18 MeV per nucleon for heavy and light-mass beams, respectively, and its experimental equipment that measures their properties are unmatched in the world.

The Advanced Rare IsotopE Laboratory (ARIEL) was conceived to expand these capabilities in important new directions, and to establish TRIUMF as a world-leading laboratory in accelerator technology and in rare-isotope research for science, medicine and business. To expand the number and scope of RIBs feeding TRIUMF’s experimental facilities, ARIEL will add two high-power driver beams – one electron and one proton – and two new isotope production-target and transport systems.

Together with the existing ISAC station, the two additional target stations will triple the current isotope-production capacity, enable full utilization of the existing experimental facilities, and satisfy researcher demand for isotopes used in nuclear astrophysics, fundamental nuclear studies and searches for new particle physics, as well as in characterizing materials and in medical-isotope research. In addition, ARIEL will deliver important social and economic impacts, in the production of medical isotopes for targeted cancer therapy, in the characterization of novel materials, and in the continued advancement of accelerator technology in Canada, both at the laboratory and in partnership with industry.

The e-linac

ARIEL-I, the first stage of ARIEL, was funded in 2010 by the Canada Foundation for Innovation (CFI), the British Columbia Knowledge Development Fund, and the Canadian government. It comprises the ARIEL building (figure 1), completed in 2013, and a 25 MeV, 100 kW superconducting radio-frequency (SRF) electron linear accelerator (e-linac), which is the first stage of a new electron driver designed ultimately to achieve 50 MeV and 500 kW for the production of radioactive beams via photo-fission.

The ARIEL-I e-linac accelerated its first beam to 23 MeV in September 2014

The ARIEL-I e-linac, which accelerated its first beam to 23 MeV in September 2014, is a state-of-the-art accelerator featuring a number of technological breakthroughs (figure 2). The 10 mA continuous wave (cw) electron beam is generated in a 300 kV DC thermionic gridded-cathode assembly modulated at 650 MHz, bunched by a room-temperature 1.3 GHz RF structure, and accelerated using up to five 1.3 GHz superconducting cavities, housed in one 10 MeV injector cryomodule (ICM) and two accelerator cryomodules, each providing 20 MeV energy gain.
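The energy and beam-power budget implied by these figures can be checked with simple arithmetic (a sketch using only numbers quoted in the text; the 100 kW ARIEL-I stage corresponds to running the 25 MeV first stage at a correspondingly reduced power):

```python
# Quick consistency check of the e-linac design figures quoted above.
icm_gain_mev = 10.0          # injector cryomodule energy gain
acm_gain_mev = 20.0          # energy gain per accelerator cryomodule
n_acm = 2                    # number of accelerator cryomodules

# Total design energy: one injector plus two accelerator cryomodules.
final_energy_mev = icm_gain_mev + n_acm * acm_gain_mev   # 50 MeV

current_a = 10e-3            # 10 mA continuous-wave beam current
beam_power_w = final_energy_mev * 1e6 * current_a        # P = E * I

print(final_energy_mev, beam_power_w)
```

At the full 10 mA specification this reproduces the ultimate 50 MeV, 500 kW design goal of the electron driver.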

The design and layout of the e-linac are compatible with a future recirculation arc that can be tuned either for energy-recovery or energy-doubling operation. The electron source, designed and constructed at TRIUMF, exhibits reduced field-emission and a novel modulation scheme: the RF power is transmitted via a ceramic waveguide between the grounded vessel and the gun, so the amplifier is at ground potential. The source has been successfully tested to the full current specification of 10 mA cw. Specially designed short quadrupoles (figure 3) minimize electron-beam aberrations by shaping the poles to be locally spherical, with radius 4π times the aperture radius (Baartman 2012).

The injector and accelerator cryomodules house the SRF cavities (figure 4), which are cooled to 2 K and each driven by a 300 kW klystron. To take advantage of prior developments – and to contribute to future projects – TRIUMF chose 1.3 GHz technology, the same as that used by other accelerator projects worldwide, including the European XFEL in Hamburg, LCLS-II at SLAC, and the proposed International Linear Collider.

Through technology transfer from TRIUMF, the Canadian company PAVAC Industries Inc. fabricated the niobium cavities and TRIUMF constructed the cryomodules, based on the ISAC top-loading design (figure 2). The TRIUMF/PAVAC collaboration, which goes back to 2005, was born from the vision of “made in Canada” superconducting accelerators. Now, 10 years later, the relationship is a glowing example of a positive partnership between industry and a research institute.

International partnerships have been essential in facilitating technical developments for the e-linac. In 2008, TRIUMF went into partnership with the Variable Energy Cyclotron Centre (VECC) in Kolkata, for joint development of the ICM and the construction of two of them: one for ARIEL and one for ANURIB, India’s next-generation RIB facility, which is being constructed in Kolkata. In 2013, the collaboration was extended to include the development of components for ARIEL’s next phase, ARIEL-II. In addition, collaborations with Fermilab, the Helmholtz Zentrum Berlin and DESY were indispensable for the project.

ARIEL’s development is continuing with ARIEL-II, which will complete the e-linac and add the new proton driver, production targets and transport systems in preparation for first science in 2017. Funding for ARIEL-II has been requested from the CFI on behalf of 19 universities, led by the University of Victoria, and matching funds are being sought from five Canadian provinces.

ARIEL will bring unprecedented capabilities:

• The multi-user RIB capability will not only triple the RIB hours delivered to users, but also increase the richness of the science by enabling long-running experiments for fundamental symmetries that are not practical currently.

• Photo-fission will allow the production of very neutron-rich isotopes at unprecedented intensities for precision studies of r-process nuclei.

• The multi-user capability will establish depth-resolved β-detected NMR as a user facility, unique in the world.

• High production rates of novel alpha-emitting heavy nuclei will accelerate development of targeted alpha tumour therapy.

The new facility will also provide important societal benefits. In addition to the economic benefits from the commercialization of accelerator technologies (e.g. PAVAC), ARIEL will expand TRIUMF’s outstanding record in student development through participation in international collaborations and training in advanced instrumentation and accelerator technologies. The e-linac has provided the impetus to form Canada’s first graduate programme in accelerator physics. One of only a few worldwide, the programme is in high demand globally and has already produced award-winning graduates.

ARIEL is not only the future of TRIUMF, it also embodies the mission of TRIUMF at large: scientific excellence, societal impact, and economic benefit. And it is off to a great start.

Helping CERN to benefit society

CCvie1_01_15th

I first came to CERN as a student in the mid-1980s, and spent an entrancing summer learning the extent of my lack of knowledge in the field of physics (considerable!) and meeting fellow students from across Europe and further afield. It was a life-changing experience and the beginning of my love affair with CERN. On graduation I returned as a research fellow working on the Large Electron–Positron collider, but at the end of three wonderful years I reluctantly came to the realization that the world of research was not for me. I moved into a more commercial world, and have been working in the field of investments for more than 20 years.

However, as the saying goes, you can take the girl out of CERN but you can’t take CERN out of the girl. I stayed in touch, and when, a few years ago, I met Rolf Heuer, the current director-general, and heard his vision of creating a foundation that would expand CERN’s ability to reach a wider audience, I was keen to be involved.

Science is, in some respects, a field of study that is open largely only to the most privileged. To do it well requires resources – trained educators, good facilities, textbooks, access to research and, of course, opportunity. These are not universally available. I was fortunate to become a summer student at CERN, but that is possible only for a lucky few, and there are many places in the world where even basic access to textbooks or research libraries is limited or non-existent.

And to those outside of the field of science, there is not always a good understanding of why these things matter. The return on a country’s investment in science will come years into the future, beyond short-term electoral cycles. There can appear to be more immediate and pressing concerns competing for limited spending, so advocacy of the wider benefits to society of investment in science is important.

The case for pure scientific research is sometimes difficult to explain. This is not just down to the concepts themselves, which are beyond most of us to understand at anything but a superficial level. It is also because the most fundamental research does not necessarily know in advance what its ultimate usefulness or practicality might be. “Trust me, there will be some” does not sound convincing, even if experience shows that this generally turns out to be the case.

Communication of the tangible benefits of scientific discovery, which can occur a long time after the initial research, is an important part of securing the ongoing support of society for research endeavours, particularly in times of strained financial resources.

After many months of hard work, the CERN & Society Foundation was established in June 2014. Its purpose is “to spread the CERN spirit of scientific curiosity for the inspiration and benefit of society”. It aims to excite young people in the understanding and pursuit of science; to provide researchers in less privileged parts of the world with the tools and access they need to enable them to engage with the wider scientific community; to advocate the benefit of pure scientific research to key influencers; to inspire cultural activities and the arts; and to further the development of science in practical applications for the wider benefit of society as a whole, whether in medicine, technology or the environment. The excitement generated by the LHC gives us a unique opportunity to contribute to society in ways that cannot be done within the constraints of dedicated member-state funding.

To translate this vision into reality will, of course, take time. The foundation currently has a three-person board, made up of myself, Peter Jenni and the director-general. It has benefited from some initial generous donations to get it off the ground and allow us to fund our first projects.

The foundation benefits from the advice of the Fundraising Advisory Board (FAB), which ensures compliance with CERN’s Ethical Policy for Fundraising. It filters through ideas for projects looking for support, and recommends those that are likely to have the highest impact. The FAB, chaired by Markus Nordberg, consists of CERN staff who help us to prioritize the areas on which to focus. In our early years, we have three main themes where we are looking for support: education and outreach; innovation and knowledge exchange; and culture and the arts. With the help of CERN’s Development Office, we are seeking support from foundations, corporate donors and individuals. No donation is too large or small.

Matteo Castoldi, heading the Development Office, has been instrumental in the practical side of the foundation, and is a good person to contact if you have ideas for a project, want help in formalizing a proposal for FAB or would like to discuss any aspect of the CERN & Society Foundation. Our website is up and running – please take a look to find out more, and if you would like to make a donation just click on the link. Thank you in advance for your support.

Time in Powers of Ten: Natural Phenomena and Their Timescales

By Gerard ’t Hooft and Stefan Vandoren (translated by Saskia Eisberg-’t Hooft)
World Scientific
Hardback: £31
Paperback: £16
E-book: £12
Also available at the CERN bookshop

CCboo4_10_14

With powers of 10, one cannot fail to think of the iconic 1970s film made by Charles and Ray Eames – a journey through the universe departing from a picnic blanket somewhere in Chicago. However, this book is not about distance scales but about time. And the universe it reveals is one of constant turmoil and evolution. No vast empty wastelands here, where nothing changes across many powers of 10. Journeying across the time scales, we discover a universe teeming with activity at every stage – processing, ticking, cycling, continuously moving, changing, surprising.

Every page brims with the authors’ evident enthusiasm for the workings of the universe, be it the esoteric or the more mundane. I would never have expected to read a book where cosmic microwave background radiation sits side by side with the problems of traffic congestion in the US (time = 10 trillion seconds).

Leaping in powers of 10, the book races through stories of life, the Earth and the solar system, and on to physical processes quadrillions of times the age of the universe itself. The largest and smallest of time scales transport the reader to the strange and fascinating. Just as with distance scales, the very small and the very large are intimately entwined.

There is a gap between the more anecdotal and the more scientific. Record sprint times (time = 10 seconds) and the rhythm of our biological clock (time = 100,000 seconds) are light interludes in contrast with the decay modes of the ηc meson (time = 10 yoctoseconds) and the Lamb shift (time = 1 nanosecond). While this eclecticism is part of the book’s charm, some scientific baggage is required to enjoy the contents fully.

Where the book fails is in the design. Visually, it is a little dull. With disparate styles of graphic illustration, many images taken from Wikipedia, the visual quality is not up to that of the text. A clever design could take readers on a visual voyage, adding to the impact of the writing. The story warrants this effort.

It is striking that mysteries exist at every time scale, not only at the extremes – be it the high magnetic field of pulsars (time = 1 second), the explanation of high-temperature superconductors (time = 10 million seconds) or the origin of water on Earth (time = 100 quadrillion seconds). The book reveals the extraordinary complexity of our universe – it is a fascinating journey.

 

Behind the Scenes of the Universe: From the Higgs to Dark Matter

By Gianfranco Bertone
Oxford University Press
Hardback: £19.99
Also available as an e-book, and at the CERN bookshop

CCboo2_10_14

With the discovery of a Higgs boson by the ATLAS and CMS experiments, the concept of mass has changed from an intrinsic property of each particle to the result of an interaction between the particles and the omnipresent Higgs field: the stronger that interaction is, the more it slows down the particle, which effectively behaves as if it is massive. This experimental validation of a theoretical idea born 50 years ago is a major achievement in elementary particle physics, and confirms the Standard Model as the cornerstone in our understanding of the universe. However, as is often the case in science, there is more to mass than meets the eye: most of the mass of the universe is currently believed to exist in a form that has, so far, remained hidden from our best detectors.

Gianfranco Bertone seems to have been travelling through the dark side of the universe for quite a while, and I am glad that he has taken the time to write this beautiful account of his journey. The book is easy to read, the scientific observations, puzzles and discussions being interspersed with interesting short annotations from history, art, poetry, etc. Readers should enjoy the non-technical tour through general relativity, gravitational lensing, cosmology, particle physics, etc. In particular, one learns that space–time bends light rays travelling through the universe, and that we can deduce the properties of a lens by studying the images it distorts. At the end of this learning curve we reach the conclusion that “we have a problem”: no matter where we look, and how we look, we always infer the existence of much more mass than we can see. Bertone expresses it poetically: “The cosmic scaffolding that grew the galaxies we live in and keeps them together is made of a form of matter that is unknown to us, and far more abundant in the universe than any form of matter we have touched, seen, or experienced in any way.”

The second half of the book wanders through the efforts devised to identify the nature of dark matter, through the direct or indirect detection of dark-matter particles, with the LHC experiments, deep underground detectors, or detectors orbiting the Earth. As more data are collected and interpreted, more regions of parameters defining the properties of the dark-matter particles are excluded. In a few years, the data accumulated at the LHC and in astroparticle experiments will be such that, for many dark-matter candidates, “we must either discover them or rule them out”. The book is an excellent guide to anyone interested in witnessing that important step in the progress of fundamental physics.

Publishing and the Advancement of Science: From Selfish Genes to Galileo’s Finger

By Michael Rodgers
World Scientific
Hardback: £50
Paperback: £25
E-book: £19

CCboo3_10_14

In Publishing and the Advancement of Science, retired science editor Michael Rodgers takes us on an autobiographical tour of the world of science publishing, taking in textbooks, trade paperbacks and popular science books along the way. The narrative is detailed and chronological: a blow-by-blow account of Rodgers’ career at various publishing houses, with the challenges, differences of opinion and downright arguments that it takes to get a science book to press.

Rodgers was part of the revolution in popular-science publishing that started in the 1970s, and he conveys with palpable excitement the experience of discovering great authors or reading brilliant typescripts for the first time. Readers with an interest in science will recognize such titles as Richard Dawkins’ The Selfish Gene or Peter Atkins’ Physical Chemistry, both of which Rodgers worked on. Frustratingly, he falls short of providing real insight into what makes a popular-science book great. There is a niggling sense of “I know one when I see one”, but a lack of analysis of the writing.

Rodgers’ first job in publishing – as “field editor” for Oxford University Press (OUP), starting in 1969 – had him visiting universities around the UK, commissioning academics to write books. Anecdotes about the inner workings of OUP at the time take the reader back to a charming, pre-web way of working: telephone calls and letters rather than e-mails and attachments, and responding to authors in days rather than minutes. The culture of publishing at the time is conveyed with wry humour. OUP sent memos about the proper use of the semicolon, and had a puzzlingly arcane filing system, which added to the sense of mustiness.

A section on the development of Dawkins’ seminal The Selfish Gene threw up interesting tidbits – altercations about the nature of the gene, and a discussion about what makes a good title – but I was less interested in the analysis of the US market for chemistry textbooks, or such tips as “The best time to publish a mainstream coursebook is in January, to allow maximum time for promotion.”

At times, the level of autobiographical detail dilutes Rodgers’ sense of intellectual excitement about the scientific ideas in his books. The measure of a book’s success in terms of copies sold and years in print makes publishing a commercial rather than intellectual exercise, which to some extent left me disappointed. And although at various points in his career Rodgers worked part-time or freelance, or was made redundant, he seems, apart from a brief section in the epilogue, rather blind to the changes sweeping the publishing industry with the advent of free online content.

Those interested in the world of publishing, with a special interest in science, will find much to like about this book. But although Rodgers provides quirky tidbits about how some famous books came to be, it falls short of telling us what makes them great.
