
Linear-collider technologies for all

A superconducting RF accelerator

The LHC at CERN is a prime example of worldwide collaboration to build a large instrument and pursue frontier science. The discovery there of a particle consistent with the long-sought Higgs boson points to future directions both for the LHC and more broadly for particle physics. Now, the international community is considering machines to complement the LHC and further advance particle physics, including the favoured option: an electron–positron linear collider (LC). Two major global efforts are under way: the International Linear Collider (ILC), which is distributed among many laboratories; and the Compact Linear Collider (CLIC), centred at CERN. Both would collide electrons and positrons at energies up to the tera-electron-volt scale but differ in technology, energy range and timescale. The two efforts are now coming closer together, forming a worldwide linear-collider community in the areas of accelerators, detectors and resources.

Last year, the organizers of the 2012 IEEE Nuclear Science Symposium in Anaheim, California, decided to arrange a Special Linear Collider Event to summarize the accelerator and detector concepts for the ILC and CLIC. Held on 29–30 October, the event also included presentations on the impact of LC technologies in different applications and a discussion forum on LC perspectives. It brought together academic, industry and laboratory-based experts, providing an opportunity to discuss LC progress with the accelerator and instrumentation community at large, and to justify the investments in technology required for future particle accelerators and detectors. Representatives of the US funding agencies were also invited to attend.

CERN’s director-general, Rolf Heuer, introduced the event before Steinar Stapnes, CLIC project leader, and Barry Barish, director of the ILC’s Global Design Effort (GDE), reviewed the two projects. The ILC concept is based on superconducting radio-frequency (SRF) cavities, with a nominal accelerating field of 31.5 MV/m, to provide e+e− collisions at sub-tera-electron-volt energies in the centre-of-mass. The CLIC studies focus on an option for a multi-tera-electron-volt machine using a novel two-beam acceleration scheme, with normal-conducting accelerating structures operating at fields as high as 100 MV/m. In this approach, two beams run parallel to each other: the main beam, to be accelerated; and a drive beam, to provide the RF power for the accelerating structures.
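These gradients translate directly into machine scale. As a rough illustration only (ignoring fill factors, focusing sections and other overheads, which lengthen any real machine considerably), the active length needed to accelerate one beam to 250 GeV is:

\[
L_{\mathrm{active}} \simeq \frac{E_{\mathrm{beam}}}{G} = \frac{250\ \mathrm{GeV}}{31.5\ \mathrm{MV/m}} \approx 7.9\ \mathrm{km}
\]

per linac at the ILC gradient, falling to about 2.5 km at the CLIC figure of 100 MV/m – the origin of the “Compact” in CLIC’s name.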

Both studies have reached important milestones. The CLIC Conceptual Design Report was released in 2012, with three volumes covering physics, detectors and accelerators. The project’s goals for the coming years are well defined, the key challenges relating to system specifications and performance studies for accelerator parts and detectors, technology development with industry, and implementation studies. The aim is to present an implementation plan by 2016, when LHC results at full design energy should become available.

The ILC GDE took a major step towards the final technical design when a draft of the four-volume Technical Design Report (TDR) was presented to the ILC Steering Committee on 15 December 2012 in Tokyo. This describes the successful establishment of the key ILC technologies, as well as advances in the detector R&D and physics studies. Although not released by the time of the NSS meeting, the TDR results served as the basis for the presentations at the special event. The chosen technologies – including SRF cavities with high gradients and state-of-the-art detector concepts – have reached a stage where, should governments decide in favour and a site be chosen, ILC construction could start almost immediately. The ILC TDR, which describes a cost-effective and mature design for an LC in the energy range 200–500 GeV, with a possible upgrade to 1 TeV, is the final deliverable for the GDE mandate.

The newly established Linear Collider Collaboration (LCC), with Lyn Evans as director, will carry out the next steps to integrate the ILC and CLIC efforts under one governance. One highlight in Anaheim was a talk on the physics of the LC by Hitoshi Murayama of the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) and future deputy-director of the LCC. He addressed the broader IEEE audience, reviewing how a “Higgs factory” (a 250 GeV machine) as the first phase of the ILC could elucidate the nature of the Higgs particle – complementary to the LHC. The power of the LC lies in its flexibility: it can be tuned to well-defined initial states, allowing model-independent measurements from the Higgs threshold to multi-tera-electron-volt energies, as well as precision studies that could reveal new physics at a higher energy scale.

Detailed technical reviews of the ILC and CLIC accelerator concepts and associated technologies followed the opening session. Nick Walker of DESY presented the benefits of using SRF acceleration with a focus on the “globalization” of the technology and the preparation of a worldwide industrial base for the ILC construction. The ultralow cavity-wall losses allow the use of long RF pulses, greatly simplifying the RF source while facilitating efficient acceleration of high-current beams. In addition, the low RF frequency (1.3 GHz) significantly reduces the impedance of the cavities, leading to reduced beam-dynamics effects and relatively relaxed alignment tolerances. More than two decades of R&D have led to a six-fold increase in the available accelerating gradient, which – together with integration into a single cryostat (cryomodule) – has resulted in an affordable and mature technology. One of the most important goals of the GDE was to demonstrate that the SRF cavities can be reliably produced in industry. By the end of 2012, two ambitious goals had been achieved: to produce cavities qualified at 35 MV/m and to demonstrate that an average gradient of 31.5 MV/m can be reached for ILC cryomodules. These high-gradient cavities have now been produced by industry with a 90% yield, acceptable for ILC mass-production.
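The economics of SRF follow from the cavity power balance. As an illustrative estimate (taking typical TESLA-type parameters – an active length of about 1 m, a geometric shunt impedance R/Q ≈ 10³ Ω and an intrinsic quality factor Q₀ ≈ 10¹⁰ – assumed here for orientation rather than quoted from the TDR), the power dissipated in the cavity walls at full gradient in continuous operation would be:

\[
P_{\mathrm{wall}} = \frac{V_c^2}{(R/Q)\,Q_0} \approx \frac{(31.5\ \mathrm{MV})^2}{10^{3}\ \Omega \times 10^{10}} \approx 100\ \mathrm{W},
\]

and pulsed operation reduces this by the duty factor – a minute load compared with the beam power that the same cavity transfers, which is the essence of the efficiency argument.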

CERN’s Daniel Schulte reviewed progress with the CLIC concept, which is based on 12 GHz normal-conducting accelerating structures and a two-beam scheme (rather than klystrons) for a cost-effective machine. The study has over the past two decades developed high-gradient, micron-precision accelerating structures that now reach more than 100 MV/m, with a breakdown probability of only 3 × 10⁻⁷ m⁻¹ pulse⁻¹ during high-power tests, and more than 145 MV/m in two-beam acceleration tests at the CTF3 facility at CERN (tolerating higher breakdown rates). The CLIC design is compatible with energy-staging from the ILC baseline design of 0.5 TeV up to 3 TeV. The ILC and CLIC studies are collaborating closely on a number of technical R&D issues: beam delivery and final-focus systems, beam dynamics and simulations, positron generation, damping rings and civil engineering.
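To put the quoted breakdown probability in machine-scale perspective (a back-of-the-envelope estimate, assuming for illustration about 3 × 10⁴ m of active structure in a full 3 TeV collider at 100 MV/m):

\[
N_{\mathrm{bd}} \approx 3\times10^{-7}\ \mathrm{m^{-1}\,pulse^{-1}} \times 3\times10^{4}\ \mathrm{m} \approx 10^{-2}\ \text{per pulse},
\]

i.e. only of order one pulse in a hundred would suffer a breakdown anywhere along the machine – roughly the level required to keep the luminosity loss small.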

Another area of common effort is the development of novel detector technologies. The ILC and CLIC physics programmes are both based on two complementary detectors, with a “push–pull” concept to share the beam time between them. Hitoshi Yamamoto of Tohoku University reviewed the overall detector concepts and engineering challenges: multipurpose detectors for high-precision vertexing and main tracking; a highly granular calorimeter inside a large solenoid; and power pulsing of the electronics. The two ILC detector concepts (ILD and SiD) formed an excellent starting point for the CLIC studies. They were adapted for the higher-energy CLIC beams and the ultra-short (0.5 ns) bunch spacing by using calorimeters with denser absorbers, modified vertex and forward-detector geometries, and precise (few-nanosecond) time-stamping to cope with the increased beam-induced background.

Vertex detectors, a key element of the LC physics programme, were presented by Marc Winter of CNRS/IPHC Strasbourg. Their stringent requirements on material budget, granularity and power consumption call for new pixel technologies. CCDs with 50 μm pixels, CMOS pixel sensors (MIMOSA) and depleted field-effect transistor (DEPFET) devices have been developed successfully for many years and are suited to running conditions at the ILC. For CLIC, where much faster read-out is mandatory, the R&D concentrates on multilayer devices, such as vertically integrated 3D sensors comprising interconnected layers thinner than 10 μm, which allow a thin charge-sensitive layer to be combined with several tiers of read-out fabricated in different CMOS processes.

Both ILD and SiD require efficient tracking and highly granular calorimeters optimized for particle-flow event reconstruction, but differ in their central-tracking approach. ILD aims for a large time-projection chamber (TPC) in a 3.5–4 T field, while SiD is designed as a compact, all-silicon tracking detector with a 5 T solenoid. Tim Nelson of SLAC described tracking in the SiD concept with particular emphasis on power, cooling and minimization of material. Takeshi Matsuda of KEK presented progress on the low material-budget field cage for the TPC and end-plates based on micropattern gas detectors (GEMs, Micromegas or InGrid).

Apart from their dimensions, the electromagnetic calorimeters are similar for the ILC and CLIC concepts, as Jean-Claude Brient of Laboratoire Leprince-Ringuet explained. The ILD and SiD detectors both rely on silicon–tungsten sampling calorimeters, with emphasis on the separation of close electromagnetic showers. The use of small scintillator strips as the active medium, read out by silicon photodiodes operated in Geiger mode (SiPMs), is being considered, as well as mixed designs using alternating layers of silicon and scintillator. José Repond of Argonne National Laboratory described the progress in hadron calorimetry, which is optimized to provide the best possible separation of energy deposits from neutral and charged hadrons. Two main options are under study: small plastic scintillator tiles with embedded SiPMs, or higher-granularity calorimeters based on gaseous detectors. Simon Kulis of AGH-UST Cracow addressed the importance of precise luminosity measurement at the LC and the challenges of forward calorimetry.

In the accelerator instrumentation domain, tolerances at CLIC are much tighter because of the higher gradients. Thibaut Lefevre of CERN, Andrea Jeremie of LAPP/CNRS and Daniel Schulte of CERN all discussed beam instrumentation, alignment and module control, including stabilization. Emittance preservation during beam generation, acceleration and focusing is a key feasibility issue for achieving high luminosity at CLIC. Extremely small beam sizes of 40 nm (1 nm) in the horizontal (vertical) plane at the interaction point require beam-based alignment down to a few micrometres over several hundred metres, and stabilization of the quadrupoles along the linac to the nanometre level – about an order of magnitude below typical ground vibrations.
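The premium on nanometre spot sizes follows from the way luminosity scales with the beam dimensions. Schematically (a simplified expression that neglects the hour-glass and beam–beam enhancement factors):

\[
\mathcal{L} \;\propto\; \frac{N^2\, n_b\, f_{\mathrm{rep}}}{4\pi\, \sigma_x^{*}\, \sigma_y^{*}},
\]

where N is the number of particles per bunch, n_b the number of bunches per pulse, f_rep the repetition rate and σ*_x, σ*_y the transverse beam sizes at the interaction point: halving the vertical spot size doubles the luminosity, which is why nanometre-level stabilization pays off directly.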

Two sessions were specially organized to discuss potential spin-offs from LC detector and accelerator technologies. Marcel Demarteau of Argonne National Laboratory summarized a study report, ILC Detector R&D: Its Impact, which points to the value of sustained support for basic R&D in instrumentation. LC detector R&D has already had an impact in particle physics. For example, the DEPFET technology has been chosen as the baseline for the Belle-II vertex detector; an adapted version of the MIMOSA CMOS sensor provides the baseline architecture for the upgrade of the inner tracker in ALICE at the LHC; and construction of the TPC for the T2K experiment has benefited from the ILC TPC R&D programme.

The LC hadron calorimetry collaboration (CALICE) has initiated the large-scale use of SiPMs to read out scintillator stacks (8000 channels) and the medical field has already recognized the potential of these powerful imaging calorimeters for proton computed tomography. Erika Garutti of Hamburg University described another medical imaging technique that could benefit from SiPM technology – positron-emission tomography (PET) assisted by time-of-flight (TOF) measurement, with a coincidence-time resolution of about 300 ps FWHM, which is a factor of two better than devices available commercially. Christophe de La Taille of CNRS presented a number of LC detector applications for volcano studies, astrophysics, nuclear physics and medical imaging.
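The benefit of TOF information in PET can be seen from a simple kinematic estimate: a coincidence-time resolution Δt localizes the annihilation point along the line of response to within

\[
\Delta x = \frac{c\,\Delta t}{2} \approx \frac{3\times10^{8}\ \mathrm{m/s} \times 300\ \mathrm{ps}}{2} \approx 4.5\ \mathrm{cm},
\]

so every improvement in timing directly shrinks the segment over which each event must be back-projected, improving the image signal-to-noise.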

Accelerator R&D is vital for particle physics. A future LC would use high technology coupled to nanoscale precision and control on an industrial scale. Marc Ross of SLAC introduced the session, “LC Accelerator Technologies for Industrial Applications”, with an institutional perspective on the future opportunities of LC technologies. The decision in 2004 to develop the ILC based on SRF cavities allowed an unprecedented degree of global focus and participation and a high level of investment, extending the frontiers of this technology in terms of performance, reliability and cost.

Particle accelerators are widely used as tools in the service of science with an ever growing number of applications to society. An overview of industrial, medical and security-related uses for accelerators was presented by Stuart Henderson of Fermilab. A variety of industrial applications makes use of low-energy beams of electrons, protons and ions (about 20,000 instruments) and some 9000 medical accelerators are in operation in the world. One example of how to improve co-ordination between basic and applied accelerator science is the creation of the Illinois Accelerator Research Centre (IARC). This partnership between the US Department of Energy and the State of Illinois aims to unite industry, universities and Fermilab to advance applications that are directly relevant to society.

SRF technology has potential in a number of industrial applications, as Antony Favale of Advanced Energy Systems explained. For example, large, high-power systems could benefit significantly from SRF: although the costs of cavities and associated cryomodules are higher than for room-temperature linacs, continuous-wave accelerators operating at reasonably high gradients benefit economically and structurally from SRF technology. Industrial markets for SRF accelerators exist in defence, isotope production and accelerator-driven systems for energy production and nuclear-waste mitigation. Walter Wuensch of CERN described how the development of normal-conducting linacs based on the high-gradient 100 MV/m CLIC accelerating structures may be beneficial for a number of accelerator applications, from X-ray free-electron lasers to industrial and medical linacs. Increased performance of high-gradient accelerating structures, translated into lower cost, potentially broadens the market for such accelerators. In addition, industrial applications increasingly require micron-precision 3D geometries, similar to the CLIC prototype accelerating structures. A number of firms have taken steps to extend their capabilities in this area, working closely with the accelerator community.

Steve Lenci of Communications and Power Industries LLC presented an overview of RF technology that supports linear colliders, such as klystrons and power couplers, and discussed the use of similar technologies elsewhere in research and industry. Marc Ross summarized applications of LC instrumentation, used for beam measurements, component monitoring and control and RF feedback.

The Advanced Accelerator Association Promoting Science & Technology (AAA) aims to facilitate industry-government-academia collaboration and to promote and seek industrial applications of advanced technologies derived from R&D on accelerators, with the ILC as a model case. Founded in Japan in 2008, its membership has grown to comprise 90 companies and 38 academic institutions. As the secretary-general Masanori Matsuoka explained, one of the main goals is a study on how to reach a consensus to implement the ILC in Japan and to inform the public of the significance of advanced accelerators and ILC science through social, political and educational events.

The Special Linear Collider Event ended with a forum that brought together directors of the high-energy-physics laboratories and leading experts in LC technologies, from both the academic research sector and industry. A panel discussion, moderated by Brian Foster of the University of Hamburg/DESY, included Rolf Heuer (CERN), Joachim Mnich (DESY), Atsuto Suzuki (KEK), Stuart Henderson (Fermilab), Hitoshi Murayama (IPMU), Steinar Stapnes (CERN) and Akira Yamamoto (KEK).

The ILC has received considerable recent attention from the Japanese government. The round-table discussion therefore began with Suzuki’s presentation of the discovery of a Higgs-like particle at CERN and the emerging initiative toward hosting an ILC in Japan. The formal government statement, which is expected within the next few years, will provide the opportunity for the early implementation of an ILC, and the recent discovery at CERN is strong motivation for a staged approach. This would begin with a 250 GeV machine (a “Higgs factory”), with the possibility of increasing the energy in the longer term. Suzuki also presented the Japan Policy Council’s recommendation, Creation of Global Cities by hosting the ILC, which was published in July 2012.

The discussion then focused on three major issues: the ILC Project Implementation Plan; the ILC Technology Roadmap; and the ILC Added Value to Society. While the possibility of implementing CLIC as a project at CERN to follow the LHC was also on the table, there was less urgency for discussion because the ILC effort counts on an earlier start date. The panellists exchanged many views and opinions with the audience on how the ILC international programme could be financed and how regional priorities could be integrated into a consistent worldwide strategy for the LC. Combining extensive host-lab-based expertise together with resources from individual institutes around the world is a mandatory first step for LC construction, which will also require the development of links between projects, institutions, universities and industry in an ongoing and multifaceted approach.

SRF systems – the central technology for the ILC – have many applications, so a worldwide plan for distributing the mass production of the SRF components is necessary, with technology transfer proceeding in parallel, in partnership with funding agencies. Another issue discussed related to the model of global collaboration between host/hub laboratories and industry to build the ILC, in which each country shares the costs and human resources. Finally, accelerator and detector developments for the LC have already penetrated many areas of science. The question is how to improve further the transfer of technology from laboratories, so as to develop viable, ongoing businesses that benefit society in general, as in the successful examples – such as the IARC facility and the PET-TOF detector – presented in Anaheim.

Last, but not least, this technology-oriented symposium would have been impossible without the tireless efforts of the “Special LC Event” programme committee: Jim Brau (University of Oregon), Juan Fuster (IFIC Valencia), Ingrid-Maria Gregor (DESY Hamburg), Michael Harrison (BNL), Marc Ross (FNAL), Steinar Stapnes (CERN), Maxim Titov (CEA Saclay), Nick Walker (DESY Hamburg), Akira Yamamoto (KEK) and Hitoshi Yamamoto (Tohoku University). In all, the event was considered a real success: more than 90% of participants who answered the conference questionnaire rated it extremely important.

The IEEE tradition

The 2012 Institute of Electrical and Electronics Engineers (IEEE) Nuclear Science Symposium (NSS) and Medical Imaging Conference (MIC), together with the Workshop on Room-Temperature Semiconductor X-Ray and Gamma-Ray Detectors, took place at the Disneyland Hotel, Anaheim, California, on 29 October – 3 November. Having received over 850 NSS abstracts – a record for the conference series in North America – the 2012 IEEE NSS/MIC symposium attracted more than 2200 attendees. The NSS series, which started in 1954, offers an outstanding opportunity for detector physicists and other scientists and engineers interested in the fields of nuclear science, radiation detection, accelerators, high-energy physics, astrophysics and related software. During the past decade the symposium has become the largest annual event in the area of nuclear and particle-physics instrumentation, providing an international forum to discuss the science and technology of large-scale experimental facilities at the frontiers of research.

Lori Ann White, SLAC.

Charm and beauty in Utrecht

The historic academic building of Utrecht University provided the setting for the 5th International Workshop on Heavy Quark Production in Heavy-Ion Collisions, offering a unique atmosphere for lively discussion and interpretation of the current measurements on open and hidden heavy flavour in high-energy heavy-ion collisions. Held on 14–17 November, the workshop attracted some 70 researchers from around the world, a third of the participants being theorists and more than 20% being female researchers. The topics for discussion covered recent results, upgrades and future experiments at CERN’s LHC, Brookhaven’s Relativistic Heavy-Ion Collider (RHIC) and the Facility for Antiproton and Ion Research (FAIR) in Darmstadt, as well as theoretical developments. There was a particular focus on the exchange of information and ideas between the experiments on open heavy-flavour reconstruction.

Open and hidden heavy flavour

Representatives from all of the major collaborations nicely summarized recent experimental results and prospects for future measurements. In particular, with the advent of the LHC, an unprecedented wealth of data on the production of heavy quarks and quarkonium in nuclear collisions has become available. One of the more spectacular effects observed at RHIC is the quenching of the transverse-momentum (pT) spectra of light hadrons, related to the energy loss of quarks inside the hot quark–gluon plasma (QGP) phase produced in lead–lead (PbPb) collisions. This effect has now been studied in detail in the heavy-quark sector for the first time by the ALICE, ATLAS and CMS collaborations.

Among the highlights presented at the workshop, the ALICE collaboration reported a strong suppression (up to a factor of around five) of the production of D mesons in PbPb collisions at a centre-of-mass energy per nucleon pair, √sNN, of 2.76 TeV, compared with proton–proton data at the same energy. The CMS experiment has also found a sizeable suppression of the yield of J/ψs coming from the decay of B hadrons. When this effect is compared with the one measured by the same experiments for light hadrons, interesting hints of a hierarchy of suppression are seen, with beauty hadrons being less suppressed than charmed hadrons and the latter less suppressed than light hadrons. Such an observation may be connected to the so-called dead-cone effect – a reduction of small-angle gluon radiation for heavy compared with light quarks, predicted by QCD – and is related to the energy density reached in the medium.
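Such suppression is conventionally quantified by the nuclear modification factor (the standard definition, not spelled out explicitly in the talks summarized here):

\[
R_{AA}(p_T) = \frac{1}{\langle N_{\mathrm{coll}}\rangle}\, \frac{\mathrm{d}N_{AA}/\mathrm{d}p_T}{\mathrm{d}N_{pp}/\mathrm{d}p_T},
\]

where ⟨N_coll⟩ is the average number of binary nucleon–nucleon collisions in the PbPb event sample. No medium effects correspond to R_AA = 1, while a suppression by a factor of five corresponds to R_AA ≈ 0.2.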

In the quarkonium sector, the ALICE and CMS collaborations showed new and intriguing results on J/ψ and Υ production, respectively. A suppression of charmonium states had previously been observed at CERN’s Super Proton Synchrotron (SPS) and at RHIC, and was explained as an effect of the screening of the binding colour force in a QGP. With data from the LHC, accurate results on the bottomonium states have proved for the first time – beyond any doubt – that the less-strongly bound Υ(2S) and Υ(3S) are up to five times more strongly suppressed in a QGP with respect to the tightly bound Υ(1S) state, an observation that is expected in a colour-screening scenario. In contrast, the ALICE collaboration sees a smaller suppression effect for the J/ψ than at RHIC and the SPS, despite the larger energy density reached in nuclear collisions at the LHC. An interesting hypothesis relates this observation to the recombination of cc̄ pairs – which are produced with high multiplicity in each PbPb collision – in the later stages, when the system cools down and crosses the transition temperature between the QGP and the ordinary hadronic world.

Theoretical developments

The talks on theory provided quite a comprehensive overview of the vigorous research efforts towards a theoretical understanding of heavy-quark probes in heavy-ion collisions. The experimental findings on open heavy-flavour suppression and elliptic flow have led to many theoretical investigations of heavy-quark diffusion in the strongly coupled QGP. Most models use a relativistic Fokker-Planck-Langevin approach, with drag and diffusion coefficients taken from various microscopic models for the heavy-quark interactions with the hot and dense medium. The microscopic models include estimates from perturbative QCD for elastic- and/or radiative-scattering processes, T-matrix calculations using in-medium lattice potentials (from both the free and the internal thermodynamic potentials) and collision terms in full transport simulations, including 2 ↔ 2 and 2 ↔ 3 processes in perturbative QCD.
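In its simplest one-dimensional, non-relativistic form (a schematic version of the equations these models solve, with details varying from model to model), the Langevin equation reads:

\[
\frac{\mathrm{d}p}{\mathrm{d}t} = -\eta_D\, p + \xi(t), \qquad \langle \xi(t)\,\xi(t')\rangle = \kappa\,\delta(t-t'),
\]

where η_D is the drag coefficient and κ the momentum-diffusion coefficient. For a heavy quark of mass M in a medium at temperature T, the two are tied together in equilibrium by the Einstein relation κ = 2MTη_D, so a microscopic calculation of one fixes the other.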

First studies of the influence of the hadronic phase on the medium modifications of open heavy flavour were presented at the workshop. Estimates of the viscosity to entropy-density ratio, η/s, from the corresponding partonic and hadronic heavy-quark transport coefficients lead to values that are not too far from the conjectured anti-de Sitter/conformal field theory lower bound of 1/4π in the phase-transition region, showing the characteristic minimum around the critical temperature, Tc. Results from a direct calculation of the heavy-quark transport coefficients via the maximum-entropy method applied to lattice-QCD correlation functions were also reported.
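For reference, the conjectured bound in explicit units is:

\[
\frac{\eta}{s} \;\ge\; \frac{\hbar}{4\pi k_B} \;\approx\; 6.1\times10^{-13}\ \mathrm{K\,s},
\]

which reduces to 1/4π ≈ 0.08 in the natural units (ħ = k_B = 1) used above.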

In the field of heavy quarkonia, the notion of a possible regeneration of quarkonium states via qq̄ recombination in the medium – in addition to the dissociation/melting processes leading to their suppression in the QGP – has in recent years led to detailed studies of the bound-state properties of heavy quarkonia in the hot medium. Here, the models range from the evaluation of static qq̄ potentials in hard-thermal-loop resummed thermal QCD to systematic nonrelativistic-QCD and heavy-quark effective-theory studies, generalized from the vacuum to thermal field theory.

These theoretical studies have already led to major progress in understanding the possible microscopic mechanisms behind the coupling of heavy-quark degrees of freedom with the hot and dense medium created in heavy-ion collisions. In future, it might be possible to gain an even better quantitative understanding of fundamental quantities such as the transport coefficients of the QGP (for example η/s) and the dissociation temperatures of heavy quarkonia, which could provide a thermometer for the QGP formed in heavy-ion collisions. Whatever happens, the workshop has provided an excellent framework to discuss this exciting theoretical work and trigger some fruitful ideas for its future development.

The observed signals for the QGP are expected to be even stronger in PbPb collisions at √sNN = 5.1 TeV (foreseen in 2015), which should allow the properties of the QGP to be characterized further. Proton–lead data are urgently needed to measure the contribution from effects in cold nuclear matter, such as nuclear shadowing and Cronin enhancement. The experimental teams at the LHC and at RHIC are working on upgrades of the inner tracking systems of their detectors, aiming for an improved impact-parameter resolution, which will make the measurement of open beauty in heavy-ion collisions feasible in the near future.

• The organizers would like to thank the Lawrence Berkeley National Laboratory and the Foundation for Fundamental Research on Matter (FOM) for financial support.

The incurable attraction of physics

A noble gas, a missing scientist and an underground laboratory. It could be the starting point for a classic detective story. But a love story? It seems unlikely. However, add in a back-story set in Spain during General Franco’s rule, plus a “eureka” moment in California, and the ingredients are there for a real romance – all of it rooted firmly in physics.

When Spanish particle-physicist Juan José Gómez Cadenas arrived at CERN as a summer student, the passion that he already had for physics turned into an infatuation. Thirty years later and back in his home country, Gómez Cadenas is pursuing one of nature’s most elusive particles, the neutrino, by looking where it is expected not to appear at all – in neutrinoless double-beta decay. Moreover, fiction has become entwined with fact, as he was recently invited to write a novel set at CERN. The result, Materia Extraña (Strange matter), is a scientific thriller that has already been translated into Italian.

Critical point

“Particle physicists were a rare commodity in Spain when the country first joined CERN in 1961,” Cecilia Jarlskog noted 10 years ago after a visit to “a young and rapidly expanding community” of Spanish particle physicists. Indeed, the country left CERN in 1969, when Juan was only nine years old and Spain was still under the Franco regime. Young Juan – or “JJ” as he later became known – initially wanted to become a naval officer, like his father, but in 1975 he was introduced to the wonders of physics by his cousin, Bernardo Llanas, who had just completed his studies with the Junta de Energía Nuclear (the forerunner of CIEMAT, the Spanish research centre for energy, the environment and technology) at the same time as Juan Antonio Rubio, who was to do so much to re-establish particle physics in Spain. The young JJ set his sights on the subject – “Suddenly the world became magic,” he recalls, “I was lost to physics” – and so began the love affair that was to take him to CERN and, in a strange twist, to write his first novel.

The critical point came in 1983. JJ was one of the first Spanish students to gain a place in CERN’s summer student programme when his country rejoined the organization. It was an amazing time to be at the laboratory: the W and Z bosons had just been discovered and the place was buzzing. “I couldn’t believe this place, it was the beginning of an absolute infatuation,” he says. That summer he met two people who were to influence his career: “My supervisor, Peter Sonderegger, with whom I learnt the ropes as an experimental physicist, and Luis Álvarez-Gaumé, a rising star who took pity on the poor, hungry fellow-Spaniard hanging around at night in the CERN canteen.” After graduating from Valencia University, JJ’s PhD studies took him to the DELPHI experiment at CERN’s Large Electron–Positron collider. With the aid of a Fulbright scholarship, he then set off for America to work on the Mark II experiment at SLAC. From there it was back to CERN and DELPHI again, but in 1994 he left once more for the US, this time following his wife, Pilar Hernandez, to Harvard. An accomplished particle-physics theorist, she converted her husband to her speciality, neutrino physics, thus setting him on the trail that would lead him through the NOMAD, HARP and K2K experiments to the challenge of neutrinoless double-beta decay.

The neutrinoless challenge

Established for 15 years as professor of physics at the Institute of Nuclear and Particle Physics (IFIC), a joint venture between the University of Valencia and the Spanish research council (CSIC), he is currently leading NEXT – the Neutrino Experiment with a Xenon TPC. The aim is to search for neutrinoless double-beta decay using a high-pressure xenon time-projection chamber (TPC) in the Canfranc Underground Laboratory in the Spanish Pyrenees. JJ believes that the experiment has several advantages in the hunt for this decay mode, which would demonstrate that the neutrino must be its own antiparticle, as first proposed by Ettore Majorana (whose own life ended shrouded in mystery). The experiment uses xenon, which is relatively cheap and also cheap to enrich because it is a noble gas. Moreover, NEXT uses gaseous xenon, which gives 10 times better energy resolution for the decay electrons than the liquid form. By using a TPC, it also provides a topological signature for the double-beta decay.

The big challenge was to find a way to amplify the charge in the xenon gas without inducing sparks. The solution came when JJ talked to David Nygren, inventor of the TPC at Berkeley. “It was one of those eureka moments,” he recalls. “Nygren proposed using electroluminescence, where you detect the light emitted by the ionization electrons in a strong field near the anode. You can get 1000 UV photons for each electron. He immediately realized that we could get the resolution that way.” JJ then came up with an innovative scheme to detect those electrons in the tracking plane using light-detecting pixels (silicon photomultipliers) – and the idea for NEXT was born. “It is hard for me not to believe in the goddess of physics,” says JJ. “Every time that I need help, she sends me an angel. It was Abe Seiden in California, Gary Feldman in Boston, Luigi di Lella and Ormundur Runolfson at CERN, Juan Antonio Rubio in Spain … and then Dave. Without him, I doubt NEXT would have ever materialized.” The collaboration now involves not only Spain and the US but also Colombia, Portugal and Russia. The generous help of a special Spanish funding programme, called CONSOLIDER-INGENIO, provided the necessary funds to get it going. “More angels came to help here,” he explains, “all of them theorists: José Manuel Labastida, at the time at the ministry of science, Álvaro de Rújula, my close friend Concepción González-García … really, the goddess gave us a good hand there.”

Despite the financial problems in Spain, JJ says that “there is a lot of good will” in MINECO, the Ministry of Economy, which currently handles science in Spain. He points out that there has already been a big investment in the experiment and that there is full support from the Canfranc laboratory. He is particularly grateful for the “huge support and experience” of Alessandro Bettini, the former director of the Gran Sasso National Laboratory in Italy, who is now in charge at Canfranc. JJ finds Bettini and Nygren – both in their mid-seventies – inspirational characters, calling them the “Bob Dylans” of particle physics. Indeed, he set up an interview with both of them for the online cultural magazine Jotdown, to which he regularly contributes a blog called “Faster than light”.

In many ways, JJ’s trajectory through particle physics is similar to that of any talented, energetic particle physicist pursuing his passion. So what about the novel? When did an interest in writing begin? JJ says that it goes back to when his family eventually settled in the town of Sagunto, near Valencia, when he was 15. In this ancient city, where modern steel-making stands alongside Roman ruins, he found “a crucible of ideas” in which writers and artists mingled with the steel-workers, who wanted a more intellectual lifestyle for their children – especially after the return of democracy with the new constitution in 1978, following Franco’s death. JJ started writing poetry while studying physics in Sagunto, and when physics took him to SLAC in 1986, as a member of Stanford University, he was allowed to sit in on the creative-writing workshop. “I was not only the only non-native American but also the only physicist,” he recalls. “I’m not sure that they knew what to make of me.” Years later, he continued his formal education as a writer at the prestigious Escuela de Letras in Madrid.

A novel look at CERN

Around 2003, CERN was starting to become bigger news, with the construction of the LHC, experiments on antimatter and an appearance in Dan Brown’s mystery-thriller Angels & Demons. Having already written a book of short stories, La agonía de las libélulas (Agony of the dragonflies), published in 2000, JJ was approached by the Spanish publisher Espasa to write a novel that would involve CERN. Of course, the story would require action but it would also be a personal story, imbued with JJ’s love for the place. Materia Extraña, published in 2008, “deals with how someone from outside tries to come to grips with CERN,” he explains, “and also with the way that you do science.” It gives little away to say that at one and the same time it is CERN – but not CERN. For example, the director-general is a woman, with an amalgam of the characteristics that he observes to be necessary for women to succeed in physics. “The novel was presented in Madrid by Rubio,” says JJ. “At the time, we couldn’t guess he had not much time left.” (Rubio was to pass away in 2010.)

When asked by Espasa to write another book, JJ turned from fiction to fact and the issue of energy. Here he encountered “a kind of Taliban of environmentalism” and became determined to argue a more rational case. The result was El ecologista nuclear (The Nuclear Environmentalist, now published in English) in which he sets down the issues surrounding the various sources of energy. Comparing renewables, fossil fuels and nuclear power, he puts forward the case for an approach based on diversity and a mixture of sources. “The book created a lot of interest in intellectual circles in Spain,” he says. “For example, Carlos Martínez, who was president of CSIC and then secretary of state (second to the minister) liked it quite a bit. Cayetano López, now director of CIEMAT, and an authority in the field, was kind enough to present it in Madrid. It has made some impact in trying to put nuclear energy into perspective.”

So how does JJ manage to do all of this while also developing and promoting the NEXT experiment? “The trick is to find time,” he reveals. “We have no TV and I take no lunch, although I go for a swim.” He is also one of those lucky people who can manage with little sleep. “I write generally between 11 p.m. and 2 a.m.,” he explains, “but it is not like a mill. I’m very explosive and sometimes I go at it for 12 hours, non-stop.”

He is now considering writing about nuclear energy, along the lines of the widely acclaimed Sustainable Energy – without the hot air by Cambridge University physicist David MacKay, who is currently the chief scientific adviser at the UK’s Department of Energy and Climate Change. “The idea would be to give the facts without the polemic,” says JJ, “to really step back.” He has also been asked to write another novel, this time aimed at young adults, a group where publisher Espasa is finding new readers. While his son is only eight years old, his daughter is 12 and approaching this age group. This means that he is in touch with young-adult literature, although he finds that at present “there are too many vampires” and admits that he will be “trying to do better”. That he is a great admirer of the writing of Philip Pullman, the author of the bestselling trilogy for young people, His Dark Materials, can only bode well.

• For more about the NEXT experiment see the recent CERN Colloquium by JJ Gómez Cadenas at http://indico.cern.ch/conferenceDisplay.py?confId=225995. For a review of El ecologista nuclear see the Bookshelf section of this issue.

Work for the LHC’s first long shutdown gets under way

The LHC has been delivering data to the physics experiments since the first collisions in 2009. Now, with the first long shutdown, LS1, which started on 13 February, work begins to refurbish and consolidate aspects of the collider, together with the experiments and the other accelerators in the injector chain.

LS1 was triggered by the need to consolidate the magnet interconnections so as to allow the LHC to operate at the design energy of 14 TeV in the centre-of-mass for proton–proton collisions. It has now turned into a programme involving all of the groups that have equipment in the accelerator complex, the experiments and the infrastructure systems. LS1 will see a massive programme of maintenance for the LHC and its injectors in the wake of more than three years of operation without the long winter shutdowns that were the norm in the past.

The main driving effort will be the consolidation of the 10,170 high-current splices between the superconducting magnets. As many as 1000–1500 splices will need to be redone and more than 27,000 shunts added, to overcome possible problems with poor contacts between the superconducting cable and the copper stabilizer of the kind that led to the incident of September 2008.

The teams will start by opening up the interconnections between each of the 1695 main magnet cryostats. They will repair and consolidate around 500 interconnections at a time, in work that will gradually cover the entire 27-km circumference of the LHC. The effort on the LHC ring will also involve the exchange of 19 magnets, consolidation of the cryogenic feed boxes and installation of pressure-relief valves on the sectors that have not yet been equipped with them.

The Radiation to Electronics project (R2E) will see the protection of sensitive electronic equipment optimized by relocating the equipment or by adding shielding. Nor will work during LS1 be confined to the LHC. Major renovation work is scheduled, for example, for the Proton Synchrotron, the Super Proton Synchrotron and the LHC experiments.

Preparations for LS1 started more than three years ago, with the detailed planning of manpower and other resources. For example, Building 180 on the Meyrin site at CERN recently became a hive of activity as a training centre for the technicians who are implementing the various repairs and modifications. The pictures shown here give the flavour of this activity.

(Photos: a view of the Large Magnet Facility; plug-in modules; welding; workshops; the cutting tool.)

• More detailed articles on the work being done during LS1 will appear in the coming months. For news of the activities, watch out for articles in CERN Bulletin at http://cds.cern.ch/journal/CERNBulletin/2013/06/News%20Articles/?ln=en.

Spin physics in Dubna

SPIN 2012, the 20th International Symposium on Spin Physics, took place at the Joint Institute for Nuclear Research (JINR) in Dubna on 17–22 September. Around 300 participants attended from JINR and institutes in 22 countries (mainly Germany, Italy, Japan, Russia and the US). It consisted of a traditional mix of plenary and parallel sessions. Presentations covered the spin structure of hadrons, spin effects in reactions with lepton and hadron beams, spin physics beyond the Standard Model and future experiments, as well as the techniques of polarized beams and targets, and the application of spin phenomena in medicine and technology.

The symposium began with a focus on work at Dubna, starting with the unveiling of a monument to Vladimir Veksler, who invented the principle of phase stability (independently of Edwin McMillan in the US) and founded the 10 GeV Synchrophasotron in Dubna in 1955. Talks followed about the future projects to be carried out at JINR’s newest facility, the Nuclotron-based Ion Collider fAcility (NICA). The complex will include an upgraded superconducting synchrotron, Nuclotron-M, with an area for fixed-target experiments, as well as a collider with two interaction points for polarized protons (at 12 GeV per beam) or deuterons and nuclei (5 A GeV per beam). It will provide opportunities for a range of polarization studies to complement global data and will particularly help to solve the puzzles of spin effects that have been awaiting solutions since the 1970s. The spin community at the symposium supported the plans for these unique capabilities, and JINR’s director, Victor Matveev, announced that the project is ready to invite international nominations for leading positions in the spin programme at NICA.

The experimental landscape

In the US, Jefferson Lab’s programme of experiments on generalized parton distributions (GPDs) will be implemented with upgraded detectors and an increase in the energy of the Continuous Electron Beam Accelerator Facility from 6 GeV up to 12 GeV. The laboratory is also considering the construction of a new synchrotron to accelerate protons and nuclei up to 250 GeV before collision with 12 GeV electrons. In a similar way, a new 10–30 GeV electron accelerator is being proposed at Brookhaven National Laboratory to provide collisions between electrons and polarized protons and ions, including polarized ³He nuclei, at the Relativistic Heavy-Ion Collider (RHIC). The aim will be to investigate the spin structure of the proton and the neutron.

At CERN, the COMPASS-II project has been approved, firstly to study Drell-Yan muon-pair production in collisions of pions with polarized nucleons, to investigate the nucleon’s parton distribution functions (PDFs). A second aim is to study GPDs via the deeply virtual Compton-scattering processes of exclusive photon and meson production. The latter processes will provide the possibility for measuring the contribution of the orbital angular momenta of quarks and gluons to the nucleon spin. The Institute of High Energy Physics (IHEP), Protvino, has a programme at the U-70 accelerator for obtaining polarized proton and antiproton beams from Λ decay for spin studies at the SPASCHARM facility, which is currently under construction.

The participants heard with interest the plans to construct dedicated facilities for determining the electric dipole moment (EDM) of the proton and nuclei, with proposals by the Storage Ring EDM collaboration at Brookhaven and the JEDI collaboration at Jülich. A permanent electric dipole moment of a fundamental particle violates both parity and time-reversal invariance. Its detection would indicate physics beyond the Standard Model and would, in particular, make it possible to approach the problem of understanding the baryon asymmetry of the universe. The proposed experiments would reduce the measurement limit on the deuteron EDM down to 10⁻²⁹ e·cm.

Classical experiments studying the nucleon spin structure at high energies use both lepton scattering on polarized nucleons (e.g. HERMES at DESY, COMPASS at CERN and experiments at Jefferson Lab) and collisions of polarized hadrons (at RHIC, IHEP and JINR). A unified description of these different high-energy processes is becoming possible within the context of QCD, the theory of strong interactions. Related properties, such as factorization, local quark–hadron duality and asymptotic freedom, allow the characteristics of a process to be calculated within the framework of perturbation theory. At the same time, PDFs, correlation functions and fragmentation functions are not calculable in perturbative QCD; being universal, they must either be parameterized and determined using various processes or calculated within model approaches. A number of talks at the symposium were devoted to the development and application of such models.

Theory confronts experiment

Experiments involving spin have brought about the demise of more theories than any other single physical parameter. Modern theoretical descriptions of spin-dependent PDFs, especially those including the internal transverse-parton motion, were discussed at the symposium. In this case, the number of PDFs increases and the picture that is related to them loses – to a considerable degree – the simplicity of a parton model with its probabilistic interpretation. One of the difficulties here concerns how the PDFs evolve with a change in the wavelength of the probe particle. A new approach to solving this problem was outlined and demonstrated for the so-called Sivers asymmetry measured in data from the HERMES and COMPASS experiments (figure 1).

The helicity distributions of the quarks in a nucleon are the most thoroughly studied so far. The results of the most accurate measurements by COMPASS, HERMES and the CLAS experiment at Jefferson Lab were presented by the collaborations. The present-day experimental data are sufficiently precise to include them in QCD analysis. Two new alternative methods for the QCD analysis of deep-inelastic scattering (DIS) and semi-inclusive DIS (SIDIS) data allow a positive polarization of strange quarks to be excluded with a high probability. As for the gluon polarization, the results of its direct measurement by the COMPASS experiment, which are confirmed by the PHENIX and STAR experiments at RHIC, also agree with QCD analysis. The low value of gluon polarization indicates that its contribution to nucleon spin is not enough to resolve the so-called nucleon-spin crisis. Hopes to overcome this crisis are now connected to the possible contributions of the orbital angular momenta of quarks and gluons, to be measured from GPDs. There were talks on different theoretical aspects of GPDs, as well as experimental aspects of their measurement, in the context of the HERMES, CLAS and COMPASS experiments.

Other important spin distribution functions manifest themselves in the lepton DIS off transversely polarized nucleons. The processes in which the polarization of only one particle (initial or final) is known are especially interesting. However, although relatively simple from the point of view of the experiment, they are complicated from the theoretical point of view (such complementarities frequently occur). These single-spin asymmetries are related to T-odd effects, i.e. they seemingly break invariance with respect to time reversal. However, it is a case of “effective breaking” – that is, it is not related to a true non-invariance of a fundamental interaction (here, the strong interaction, described by QCD) with respect to time reversal but to its simulation by the effects of re-scattering in the final or initial states. The single asymmetries have been studied by theorists for more than 20 years. These studies have received a fresh impetus in recent years in connection with new experimental data on single-spin asymmetries in the semi-inclusive electroproduction of hadrons off longitudinally and transversely polarized and unpolarized nucleons.

Reports from the COMPASS collaboration on transverse-momentum-dependent (TMD) asymmetries were one of the highlights of the symposium. The experiment is studying as many as 14 different TMD asymmetries. Two of them, the Collins and Sivers asymmetries (figure 2) – which are responsible for the left–right asymmetries of hadrons in the fragmentation of transversely polarized quarks and in quark distributions in transversely polarized nucleons – are now definitely established in the global analysis of all of the available data, although other TMD effects require further study. The results of studies of the transverse structure of the proton at Jefferson Lab were also presented at the symposium.

The PHENIX and STAR collaborations have new data on the single-spin asymmetries of pions and η-mesons produced in proton–proton collisions at √s = 200 GeV at RHIC, with one of the beams polarized and the other unpolarized. They observe amazingly large asymmetries in the forward rapidity region of the fragmenting polarized protons, with a fall to zero in the central rapidity region. A similar effect was observed earlier at Protvino and at Fermilab, but at lower energies, thus confirming energy independence (figure 3). In addition, the asymmetries measured at RHIC show no fall with rising transverse momentum. The particular mechanism behind these asymmetries remains a puzzle so far.

So although single-spin asymmetries on the whole are described by existing theory, developments continue. The T-odd distribution functions involved lose the key property of universality and become “effective”, that is, dependent on the process in which they are observed. In particular, the most fundamental QCD prediction is the change of sign of the Sivers PDF determined from SIDIS processes and from Drell-Yan pair-production on a transversely polarized target. This prediction is to be checked by the COMPASS-II experiment as well as at RHIC, NICA and in the PANDA and PAX experiments at the Facility for Antiproton and Ion Research.
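In the standard notation for the Sivers function, f_{1T}^⊥, this prediction takes a strikingly simple form:

\[
f_{1T}^{\perp}\Big|_{\mathrm{DY}} = -\, f_{1T}^{\perp}\Big|_{\mathrm{SIDIS}},
\]

i.e. the same function, extracted from Drell-Yan pair production, should appear with the opposite sign to that measured in SIDIS – a direct consequence of the re-scattering origin of the asymmetry described above.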

New data from Jefferson Lab on measurements of the ratio of the proton’s electric and magnetic form factors, performed by the technique of recoil polarization, gave rise to significant interest and discussions at the symposium. The previous measurements from Jefferson Lab showed that this ratio is not constant, as had been supposed for a long time, but decreases linearly with increasing momentum transfer, Q² – the so-called “form-factor crisis”. New data from the GEp(III) experiment indicate a flattening of this ratio in the region of Q² = 6–8 GeV². The question of whether this behaviour is a result of an incomplete calculation of radiative corrections – in particular, two-photon exchange – remains open.
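The recoil-polarization technique rests on a simple relation (the standard one-photon-exchange formula, quoted here for orientation): the ratio of the transverse to longitudinal polarization of the recoiling proton directly yields

\[
\frac{G_E}{G_M} = -\frac{P_t}{P_\ell}\, \frac{E_e + E_{e'}}{2M_p}\, \tan\frac{\theta_e}{2},
\]

where E_e and E_{e'} are the incident and scattered electron energies, θ_e the electron scattering angle and M_p the proton mass. Because a ratio of polarizations is measured, many systematic effects cancel – which is why the linear fall with Q² came as such a surprise compared with the earlier cross-section (Rosenbluth) separations.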

The symposium enjoyed hearing the first results related to spin physics from experiments at CERN’s LHC. In particular, many discussions focused on the role of spin in investigating the recently discovered particle with a mass of 125 GeV, which could be the Higgs boson, as well as in studies of the polarization of W and Z bosons, and in heavy-quark physics. A number of talks were dedicated to the opportunities for theory related to searches for the Z’ and other exotics at the LHC and the future electron–positron International Linear Collider.

On the technical side, there was confirmation of the method of obtaining proton-beam polarization at the COSY facility in Jülich by spin filtering in a polarized gas target. This method can also be used to polarize an antiproton beam, which will be important for measurements of different spin distributions in the nucleon via Drell-Yan muon-pair production in polarized proton–antiproton collisions in the PANDA and PAX experiments. There were also discussions on sources of polarized particles, the physics of polarized-beam acceleration, polarimeters and polarized-target techniques. In addition, there were reports on applications of hyperpolarized ³He and ¹⁹F in different fields of physics, applied science and medicine.

The main results of the symposium were summarized in an excellent concluding talk by Franco Bradamante of Trieste. The proceedings will be published in special volumes of Physics of Elementary Particles and Atomic Nuclei. The International Committee on Spin Physics, which met during the symposium, noted the excellent organization and success of the meeting in Dubna and decided that the 21st International Symposium on Spin Physics will take place in Beijing in September 2014.

The incomprehensibility principle

Educators and psychologists invented the term “attention span” to describe the length of time anyone can concentrate on a particular task before becoming distracted. It is a useful term but span, or duration, is only one aspect of attention. Attention must also have an intensity – and the two variables are independent of each other. Perhaps one can postulate an analogue of the Heisenberg uncertainty principle, in which the intensity of attention multiplied by its span cannot exceed some fixed value. I call this the “incomprehensibility principle” and I have had plenty of opportunities to observe its consequences.
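In the spirit of that analogy – and purely as a playful formalization, not a measured law – the principle might be written as:

\[
I \times \Delta t \;\lesssim\; C,
\]

where I is the intensity of attention, Δt its span and C a fixed capacity of the listener. Note that it is an upper bound on the product, the mirror image of Heisenberg’s lower bound on Δx Δp.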

In the hands of skilled presenters, information can be carefully packaged as entertainment so that the attention needed to digest it is minimal. The trick is to mask the effort with compelling emotional appeal and a floppy boy-band haircut. However, the need to pay attention is still there; in fact, absorbing even the most trivial information demands a modicum of attention. How many of us, when leaving a cinema, have had the nagging feeling that although the film made great entertainment some details of the plot remained less than crystal clear?

The existence of a minimum level of attention suggests that it is, in some sense, a quantum substance. This means that under close examination, any apparently continuous or sustained effort at paying attention will be revealed as a series of discrete micro-efforts. However, while attention can be chopped up and interleaved with other activities, even tiny pulses of attention demand full concentration, to the exclusion of all other voluntary activities. Any attempt at multitasking, such as using a mobile phone while driving a car, is counterproductive.

The incomprehensibility principle plays a major role in education, where it is closely linked to the learning process. Because of the subject matter and/or the teacher, some school lessons require more time to assimilate than others. This trend accelerates in higher education. In my case, a hint of what was to come appeared during my third year of undergraduate physics, when I attended additional lectures on quantum mechanics in the mathematics department at Imperial College London.

My teacher was Abdus Salam, who went on to share the Nobel Prize for Physics in 1979. Salam’s lectures were exquisitely incomprehensible; as I look back, I realize he was probably echoing his own experiences at Cambridge some 15 years earlier at the hands of Paul Dirac. But he quickly referred us to Dirac’s book, The Principles of Quantum Mechanics. At a first and even a second glance, this book shone no light at all but after intense study, a rewarding glimmer of illumination appeared out of the darkness.

Motivated by Salam’s unintelligibility, I began postgraduate studies in physics only to find that my previous exposure to incomprehensibility had been merely an introduction. By then, there were no longer any textbooks to fall back on and journal papers were impressively baffling. With time, though, I realized that – like Dirac’s book – they could be painfully decrypted at “leisure”, line by line, with help from enlightened colleagues.

The real problem with the incomprehensibility principle came when I had to absorb information in real time, during seminars and talks. The most impenetrable of these talks always came from American speakers because they were, at the time, wielding the heavy cutting tools at the face of physics research. Consequently, I developed an association between incomprehensibility and accent. This reached a climax when I visited the US, where I always had the feeling that dubious characters hanging out at bus stations and rest stops must somehow be experts in S-matrix theory and the like, travelling from one seminar to the next. Several years later, when I was at CERN, seminars were instead delivered in thick European accents and concepts such as “muon punch-through” became more of an obstacle when pointed out in a heavy German accent.

Nevertheless, I persevered and slowly developed new skills. The incomprehensibility principle cannot be bypassed but even taking into account added difficulties such as the speaker’s accent or speed of delivery – not to mention bad acoustics or poor visual “aids” – it is still possible to optimize one’s absorption of information.

One way of doing this is to monitor difficult presentations in “background mode”, paying just enough attention to follow the gist of the argument until a key point is about to be reached. At that moment, a concerted effort can be made to grab a vital piece of information as it whistles past, before it disappears into the obscurity of the intellectual stratosphere. The trick is to do this at just the right time, so that each concentrated effort is not fruitless. “Only cross your bridges when you come to them”, as the old adage goes.

By adopting this technique, I was able to cover frontier meetings on subjects of which I was supremely ignorant, including microprocessors, cosmology and medical imaging, among others. Journalists who find themselves baffled at scientific press conferences would do well to follow my example, for the truth is that there will always be a fresh supply of incomprehensibility in physics. Don’t be disappointed!

Gordon Fraser. Gordon, who was editor of CERN Courier for many years, wrote this as a ‘Lateral Thought’ for Physics World magazine but died before the article could be revised (see obituary). It was completed by staff at Physics World and is published in both magazines this month as a tribute.

Marking the end of the first proton run

At 6 a.m. on 17 December, operators ended the LHC’s first three-year run for proton physics with a new performance milestone. In the preceding days, the spacing between proton bunches had been successfully halved to the design specification of 25 ns, rather than the 50 ns used until then.

Halving the bunch spacing allowed the number of bunches in the machine to be doubled, resulting in a record 2748 bunches in each beam; previously, the LHC had been running with around 1380 bunches per beam. This gave a record beam intensity of 2.7 × 10¹⁴ protons in each beam at the injection energy of 450 GeV.
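As a quick back-of-the-envelope check – a sketch using only the figures quoted above, with the derived bunch population being illustrative rather than an official machine parameter:

bunches_per_beam = 2748
protons_per_beam = 2.7e14  # quoted record intensity at the 450 GeV injection energy

protons_per_bunch = protons_per_beam / bunches_per_beam
print(f"implied bunch population: {protons_per_bunch:.1e} protons")
# prints about 9.8e+10 – close to the nominal 1e11 protons per bunch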

The LHC operations team then performed a number of ramps, taking 25 ns beams from 450 GeV to 4 TeV and increasing the total number of bunches at each step, up to a maximum of 804 per beam. This stepwise approach is needed to monitor the effects of the additional electron cloud produced when synchrotron radiation emitted by the protons strikes the vacuum chamber – the synchrotron-radiation photon flux increases significantly as the energy of the protons rises.

Electron cloud is strongly enhanced by the reduced spacing between bunches and is one of the main limitations for 25 ns operation. It has negative effects on the beam (increasing beam size and losses), the cryogenics (through the heat load on the beam pipe) and the vacuum (pressure rise). As a result, a period of beam-pipe conditioning known as “scrubbing” was needed before ramping the beams. During this period, the machine was operated in a controlled way with beams of increasingly high intensity. This helps to improve the surface characteristics of the beam pipe and reduces the density of the electron cloud. Once each beam had been ramped to 4 TeV, a pilot physics run of several hours took place with up to 396 bunches, spaced at 25 ns, in each beam. Although the tests were successful, significantly more scrubbing will be required before the full 25 ns beam can be used operationally.

While these tests were taking place, on 13 December representatives of the LHC and five of its experiments delivered a round-up report to CERN Council. All of the collaborations congratulated the LHC team on the machine’s exemplary performance over the first three full years of running. In 2012, not only did the collision energy increase from 7 TeV to 8 TeV but the instantaneous luminosity reached 7.7 × 10³³ cm⁻² s⁻¹, more than twice the maximum value obtained in 2011 (3.5 × 10³³ cm⁻² s⁻¹). News from the experiments included LHCb’s measurement of the decay of the Bs meson into two muons (Bs → μμ, seen after being sought for decades), ALICE’s detailed studies of the quark–gluon plasma and TOTEM’s insights on the structure of the proton.

ATLAS and CMS gave updates on the Higgs-like particle first announced in July, with each experiment now observing the new particle with a significance close to 7σ, well beyond the 5σ required for a discovery. So far, the particle’s properties seem consistent with those of a Standard Model Higgs boson. The two collaborations are, however, careful to say that further analysis of the data – and a probable combination of both experiments’ data next year – will be required before some key properties of the new particle, such as its spin, can be determined conclusively. The focus of the analysis has now moved from discovery to measurement of the new particle in its individual decay channels.
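For context, a significance quoted in σ translates into the probability that background alone would fluctuate up to at least that level. A minimal sketch of the conversion, assuming SciPy is available:

from scipy.stats import norm

# one-sided Gaussian tail: probability that background alone
# fluctuates up to at least the stated significance
for sigma in (5, 7):
    print(f"{sigma} sigma -> p = {norm.sf(sigma):.1e}")
# 5 sigma -> p = 2.9e-07; 7 sigma -> p = 1.3e-12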

With December 2012 marking the end of the first LHC proton-physics running period, 2013 begins with a four-week run from mid-January to mid-February for proton–lead collisions, after which the machine enters a long shutdown for consolidation and maintenance until the end of 2014. Running will resume in 2015 at an increased collision energy of 13 TeV.

CERN becomes UN observer

Ban Ki-moon, left, with Rolf Heuer.

On 14 December, the UN General Assembly adopted a resolution to allow CERN to participate in the work of the General Assembly and to attend its sessions as an observer. With this new status, the laboratory can promote the essential role of basic science in development.

In a meeting with UN secretary-general Ban Ki-moon on 17 December, CERN’s director-general, Rolf Heuer, pledged that CERN would contribute actively to the UN’s efforts to promote science, in particular UNESCO’s initiative “Science for sustainable development”.

BOSS gives clearer view of baryon oscillations


In November the Baryon Oscillation Spectroscopic Survey (BOSS) released its second major result of 2012, using 48,000 quasars with redshifts (z) up to 3.5 as backlights to map intergalactic hydrogen gas in the early universe for the first time, as far back as 11,500 million years ago.

As the light from each quasar passes through clouds of gas on its way to Earth, its spectrum accumulates a thicket of hydrogen absorption lines – the “Lyman-alpha forest” – whose redshifts and strengths reveal the varying density of the gas along the line of sight. BOSS collected enough closely spaced quasars to map the distribution of the gas in 3D over a wide expanse of sky.
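To see why ground-based telescopes can map this gas at all, it helps to work out where the Lyman-alpha line lands after redshifting. A minimal sketch – the rest wavelength is standard, the sample redshifts chosen purely for illustration:

LYMAN_ALPHA_REST_NM = 121.567  # rest wavelength of the hydrogen Lyman-alpha line

for z in (2.0, 3.0, 3.5):
    observed_nm = (1 + z) * LYMAN_ALPHA_REST_NM
    print(f"z = {z}: Lyman-alpha observed near {observed_nm:.0f} nm")
# at z = 3.5 the line lands near 547 nm, in the optical band,
# which is why a ground-based spectrograph can survey the gas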

The largest component of the third Sloan Digital Sky Survey, BOSS measures baryon acoustic oscillations (BAO) – recurring peaks of matter density that are most evident in net-like strands of galaxies. Initially imprinted in the cosmic microwave background radiation, BAO provide a ruler for measuring the universe’s expansion history and probing the nature of dark energy.
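Schematically, the ruler works as follows (standard relations, quoted here for orientation rather than taken from the BOSS papers): the sound horizon at the baryon drag epoch, r_d ≈ 150 Mpc in comoving units, appears across the line of sight as an angle Δθ = r_d/D_M(z), where D_M is the comoving angular-diameter distance, and along the line of sight as a redshift interval Δz = r_d H(z)/c. Measuring both at several redshifts traces out the expansion history.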

In March 2012, BOSS released its first results, covering more than 350,000 galaxies out to z = 0.7, or 7000 million years ago. However, only quasars are bright enough to probe the gravity-dominated early universe, when expansion was slowing – well before the transition to the present era, in which dark energy dominates and expansion is accelerating. When complete, BOSS will have surveyed 1.5 million galaxies and 160,000 quasars.

Resolving the nature of dark energy will require even greater precision. The BigBOSS collaboration, which, like BOSS, is led by scientists at Lawrence Berkeley National Laboratory (LBNL), proposes to modify the 4-m Mayall Telescope to survey 24 million galaxies out to z = 1.7, plus two million quasars out to z = 3.5. The Gordon and Betty Moore Foundation recently awarded a grant of $2.1 million to help fund the spectrograph and corrector optics, two key BigBOSS technologies.

Europe launches consortium for astroparticle physics

At the end of November, European funding agencies for astroparticle physics launched a new sustainable entity, the Astroparticle Physics European Consortium (APPEC). This will build on the successful work of the European-funded network, the AStroParticle European Research Area (ASPERA).

Over the past six years, ASPERA has brought together funding agencies and the physics community to establish European co-ordination for astroparticle physics. It has developed common R&D calls and forged closer links with industry and other research fields. Above all, ASPERA has developed a European strategy for astroparticle physics, prioritizing the large infrastructures needed to address the mysteries of the universe concerning, for example, neutrinos, gravitational waves, dark matter and dark energy.

APPEC now plans to develop a European common action plan to fund the upcoming large astroparticle-physics infrastructures as defined in ASPERA’s road map. Ten countries have already joined the new APPEC consortium, with nine others following the accession process. APPEC’s activities will be organized through three functional centres, located at DESY, the Astronomy, Particle Physics and Cosmology laboratory of the French CNRS/CEA, and the INFN’s Gran Sasso National Laboratory. Stavros Katsanevas of CNRS has been elected as chair of APPEC and Thomas Berghoefer of DESY as general secretary.

• APPEC is the Astroparticle Physics European Consortium. It currently comprises 10 countries represented by their Ministries, funding agencies or their designated institution: Belgium (FWO), Croatia (HRZZ), France (CEA, CNRS), Germany (DESY), Ireland (RIA), Italy (INFN), The Netherlands (FOM), Poland (NCN), Romania (IFIN) and the UK (STFC).
