
MoEDAL releases new mass limits for the production of monopoles

In April, the MoEDAL collaboration submitted its first physics-research publication on the search for magnetic monopoles, utilising a 160 kg prototype MoEDAL trapping detector exposed to 0.75 fb⁻¹ of 8 TeV pp collisions, which was subsequently removed and monitored by a SQUID magnetometer located at ETH Zurich. This is the first time that a dedicated scalable and reusable trapping array has been deployed at an accelerator facility.

The innovative MoEDAL detector (CERN Courier May 2010 p19) employs unconventional methodologies designed to search for highly ionising messengers of new physics, such as magnetic monopoles or massive (pseudo-)stable electrically charged particles, arising in a number of beyond-the-Standard-Model scenarios. The largely passive MoEDAL detector is deployed at point 8 on the LHC ring, sharing the intersection region with LHCb. It employs three separate detector systems. The first comprises nuclear track detectors (NTDs), sensitive only to new physics. The second, an array of trapping volumes, is uniquely able to capture particle messengers of physics beyond the Standard Model for further study in the laboratory. The third, a TimePix pixel-detector array, monitors MoEDAL’s radiation environment.

Clearly, a unique property of the magnetic monopole is that it has magnetic charge. Imagine that a magnetic monopole traverses the superconducting wire coil of a superconducting quantum interference device (SQUID). As the monopole approaches the coil, its magnetic charge drives an electrical current within the superconducting coil. The current continues to flow in the coil after the monopole has passed because the wire is superconducting, without electrical resistance. The induced current depends only on the magnetic charge and is independent of the monopole’s speed and mass.
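A standard textbook relation (not spelled out in the article) makes this concrete. In SI units, a monopole of magnetic charge g sources a total flux μ₀g through any closed surface, so its passage through a loop of self-inductance L changes the flux linkage and leaves a persistent-current step

  \Delta\Phi = \mu_0 g, \qquad \Delta I = \frac{\Delta\Phi}{L} = \frac{\mu_0 g}{L},

with no dependence on the monopole’s velocity or mass. For a single Dirac charge, ΔΦ = h/e, i.e. exactly two superconducting flux quanta (h/2e) per turn of the coil.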

In the early 1980s, Blas Cabrera was the first to deploy a SQUID device (CERN Courier April 2001 p12) in an experiment to directly detect magnetic monopoles from the cosmos. The MoEDAL detector can also directly detect magnetic charge using SQUID technology, but in a different way. Rather than the monopole being directly detected in the SQUID coil à la Cabrera, MoEDAL captures the monopoles – in this case produced in LHC collisions – in aluminium trapping volumes that are subsequently monitored by a single SQUID magnetometer.

No evidence for trapped monopoles was seen in data analysed for MoEDAL’s first physics publication described here. The resulting mass limit for monopole production with a single Dirac (magnetic) charge (1gD) is roughly half that of the recent ATLAS 8 TeV result. However, mass limits for the production of monopoles with the higher charges 2gD and 3gD are the LHC’s first to date, and superior to those from previous collider experiments. Figure 1 shows the cross-section upper limits for the production of spin-1/2 monopoles by the Drell–Yan (DY) mechanism with charges up to 4gD. Additionally, a model-independent 95% CL upper limit was obtained for monopole charge up to 6gD and mass reaching 3.5 TeV, again demonstrating MoEDAL’s superior acceptance of higher charges.

Despite a relatively small solid-angle coverage and modest integrated luminosity, MoEDAL’s prototype monopole trapping detector probed ranges of charge, mass and energy inaccessible to the other LHC experiments. The full detector system, containing 0.8 tonnes of aluminium trapping-detector volumes and around 100 m² of plastic NTDs, was installed late in 2014 for the LHC start-up at 13 TeV in 2015. The MoEDAL collaboration is now working on the analysis of data obtained from pp and heavy-ion running in 2015, with the exciting possibility of revolutionary discoveries to come.

BEPCII reaches its design luminosity

The luminosity record in the charm-tau energy region was recently broken again by the Beijing Electron–Positron Collider (BEPCII). The new record, 1 × 10³³ cm⁻²s⁻¹ at a beam energy of 1.89 GeV, is also the design luminosity of the collider at its design beam energy.

BEPCII, the upgrade of BEPC (CERN Courier September 2008 p7), is a double-ring collider operating at beam energies of 1–2.3 GeV, with a design luminosity of 1 × 10³³ cm⁻²s⁻¹ at an optimised beam energy of 1.89 GeV. Given this performance, BEPCII can be seen as a charm-tau factory. Like BEPC, BEPCII is characterised as “one machine, two purposes”: it not only provides beams for high-energy physics experiments, it also delivers synchrotron-radiation (SR) light to users in parasitic and dedicated modes.

BEPCII is installed in the tunnel that hosted its predecessor, BEPC. Its electron and positron rings, called BER and BPR respectively, have a circumference of 237.5 m. BER and BPR run in parallel and cross at their interaction point (IP) with an angle of 22 mrad. At the point opposite the IP, BER and BPR cross with a vertical bump created for each beam by local correctors, as in the original design. The third ring, BSR, formed by connecting the two half-rings of BER and BPR, has a circumference of 241.1 m and can be run as a dedicated synchrotron light source at an energy of 2.5 GeV and a maximum beam current of 250 mA.

Installation of BEPCII was completed in 2006. Since then, the machine has passed the national acceptance checks and other tests, together with its new detector, BESIII. In mid-July 2009, the luminosity reached 3.2 × 10³² cm⁻²s⁻¹. Data-taking for high-energy physics started in August 2009. Besides running at the design energy of 1.89 GeV from 2010 to 2011, BEPCII has been operated at other beam energies, from 1 GeV to 2.3 GeV, for different high-energy physics experiments.

Enhancing measures

In the past seven years, some measures have been taken to enhance the peak and integrated luminosity:

• A longitudinal feedback system was installed in 2010 to suppress the longitudinal multibunch instability. During high-energy-physics data-taking, the horizontal betatron tunes of the two rings were moved very close to the half integer – 0.504 or 0.505. The luminosity at the design energy reached 5.21 × 10³² cm⁻²s⁻¹ in 2010, and 6.49 × 10³² cm⁻²s⁻¹ in 2011 with 720 mA/88 bunches/beam.

• In 2011, the vacuum chambers and eight magnets near the north crossing point were moved by 15 cm to mitigate the parasitic beam–beam interaction. The movement changed the layout of the machine and turned the beam separation at that point from vertical to horizontal.

• The betatron tunes were changed from the region of (6.5, 5.5) to (7.5, 5.5), reducing the momentum compaction and shortening the bunch length. A luminosity of 7.08 × 10³² cm⁻²s⁻¹ with 735 mA/130 bunches/beam was achieved in 2013.

• The emittance was increased from 100 nm·rad to 128 nm·rad to raise the single-bunch current. The luminosity reached 8.53 × 10³² cm⁻²s⁻¹ with 700 mA/92 bunches/beam in late 2014.

High beam current is the main feature of this type of collider, and also a big challenge for BEPCII. The direct feedback of the radiofrequency system was turned on, helping to keep higher beam currents stable. The transverse multibunch instability was another big challenge; beam collisions themselves help to suppress it in the positron ring. The bunch pattern was optimised carefully to increase the luminosity. Finally, thanks to the efforts of the whole BEPCII accelerator team, the design luminosity – 1 × 10³³ cm⁻²s⁻¹ with 850 mA/120 bunches/beam at the design energy, 100 times the luminosity of BEPC at the same beam energy – was reached at 22:29 on 5 April 2016. The breakthrough from BEPC to BEPCII is now complete.

Protons accelerated to PeV energies

The High Energy Stereoscopic System (HESS) – an array of Cherenkov telescopes in Namibia – has detected gamma-ray emission from the central region of the Milky Way at energies never reached before. The likely source of this diffuse emission is the supermassive black hole at the centre of our Galaxy, which would have accelerated protons to peta-electron-volt (PeV) energies.

The Earth is constantly bombarded by high-energy particles (protons, electrons and atomic nuclei). Being electrically charged, these cosmic rays are randomly deflected by the turbulent magnetic field pervading our Galaxy. This makes it impossible to directly identify their source, and led to a century-long mystery as to their origin. A way to overcome this limitation is to look at gamma rays produced by the interaction of cosmic rays with light and gas in the neighbourhood of their source. These gamma rays travel in straight lines, undeflected by magnetic fields, and can therefore be traced back to their origin.

When a very-high-energy gamma ray reaches the Earth, it interacts with a molecule in the upper atmosphere, producing a shower of secondary particles that emit a short pulse of Cherenkov light. By detecting these flashes of light using telescopes equipped with large mirrors, sensitive photodetectors, and fast electronics, more than 100 sources of very-high-energy gamma rays have been identified over the past three decades. HESS is the only state-of-the-art array of Cherenkov telescopes that is located in the southern hemisphere – a perfect viewpoint for the centre of the Milky Way.

Earlier observations have shown that cosmic rays with energies up to approximately 100 tera-electron-volts (TeV) are produced by supernova remnants and pulsar-wind nebulae. Although theoretical arguments and direct measurements of cosmic rays suggest a galactic origin of particles up to PeV energies, the search for such a “Pevatron” accelerator has been unsuccessful, so far.

The HESS collaboration has now found evidence that there is a “Pevatron” in the central 33 light-years of the Galaxy. This result, published in Nature, is based on deep observations – obtained between 2004 and 2013 – of the surrounding giant molecular cloud, which extends over approximately 500 light-years. The production of PeV protons is deduced from the measured spectrum of gamma rays, which is a power law extending to multi-TeV energies without any sign of a high-energy cut-off. The spatial localisation comes from the observation that the cosmic-ray density falls off as 1/r, where r is the distance from the galactic centre. This 1/r profile indicates quasi-continuous injection of protons from the centre over at least the past 1000 years or so.
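The 1/r scaling is what a standard diffusion argument (not detailed here) predicts for a quasi-continuous central source: protons injected at a constant rate Q and diffusing with coefficient D reach the steady-state density

  n(r) = \frac{Q}{4\pi D r},

whereas a single burst would instead produce a profile that is roughly flat within the diffusion radius. Observing the 1/r law therefore points to continuous injection.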

Given these properties, the most plausible source of PeV protons is Sagittarius A*, the supermassive black hole at the centre of our Galaxy. According to the authors, the acceleration could originate in the accretion flow in the immediate vicinity of the black hole or further away, where a fraction of the material falling towards the black hole is ejected back into the environment. However, to account for the bulk of PeV cosmic rays detected on Earth, the currently quiet supermassive black hole would have had to be much more active in the past million years. If true, this finding would dramatically influence the century-old debate concerning the origin of these enigmatic particles.

CERN’s IT gears up to face the challenges of LHC Run 2

Résumé

CERN computing ready to face the challenges of LHC Run 2

In Run 2, the LHC will continue to open the way to new discoveries by delivering up to one billion collisions per second to the experiments. At higher energy and intensity, the collisions are more complex to reconstruct and analyse; the computing requirements are consequently higher. Run 2 should deliver twice as much data as Run 1, around 50 PB per year. It is therefore a good moment to take stock of LHC computing: what was done during the first long shutdown (LS1) in preparation for the increased collision rate and luminosity of Run 2, what can be achieved today, and what is planned for the future.

2015 saw the start of Run 2 of the LHC, in which the machine reached a proton–proton collision energy of 13 TeV – the highest ever reached by a particle accelerator. Beam intensity also increased and, by the end of 2015, 2240 proton bunches per beam were being collided. This year, the LHC will continue to open the path to new discoveries by providing up to one billion collisions per second to ATLAS and CMS. At higher energy and intensity, collision events are more complex to reconstruct and analyse, so computing requirements must increase accordingly. Run 2 is anticipated to yield twice the data produced in the first run, about 50 petabytes (PB) per year. It is therefore an opportune time to look at the LHC’s computing: what was achieved during Long Shutdown 1 (LS1) to keep up with the collision-rate and luminosity increases of Run 2, how the system is performing now, and what is foreseen for the future.

LS1 upgrades and Run 2

The Worldwide LHC Computing Grid (WLCG) collaboration, the LHC experiment teams and the CERN IT department were kept busy as the accelerator complex entered LS1, not only with analysis of the large amount of data already collected at the LHC but also with preparations for the higher flow of data during Run 2. The latter entailed major upgrades of the computing infrastructure and services, lasting the entire duration of LS1.

Consolidation of the CERN data centre and inauguration of its extension in Budapest were two major milestones in the upgrade plan achieved in 2013. The main objective of the consolidation and upgrade of the Meyrin data centre was to secure critical information-technology systems. Such services can now keep running, even in the event of a major power cut affecting CERN. The consolidation also ensured important redundancy and increased the overall computing-power capacity of the IT centre from 2.9 MW to 3.5 MW. Additionally, on 13 June 2013, CERN and the Wigner Research Centre for Physics in Budapest inaugurated the Hungarian data centre, which hosts the extension of the CERN Tier-0 data centre, adding up to 2.7 MW capacity to the Meyrin-site facility. This substantially extended the capabilities of the Tier-0 activities of WLCG, which include running the first-pass event reconstruction and producing, among other things, the event-summary data for analysis.

Building a CERN private cloud (preview-courier.web.cern.ch/cws/article/cnl/38515) was required to remotely manage the capacity hosted at Wigner, enable efficient management of the increased computing capacity installed for Run 2, and to provide the computing infrastructure powering most of the LHC grid services. To deliver a scalable cloud operating system, CERN IT started using OpenStack. This open-source project now plays a vital role in enabling CERN to tailor its computing resources in a flexible way and has been running in production since July 2013. Multiple OpenStack clouds at CERN successfully run simulation and analysis for the CERN user community. To support the growth of capacity needed for Run 2, the compute capacity of the CERN private cloud has nearly doubled during 2015, now providing more than 150,000 computing cores. CMS, ATLAS and ALICE have also deployed OpenStack on their high-level trigger farms, providing a further 45,000 cores for use in certain conditions when the accelerator isn’t running. Through various collaborations, such as with BARC (Mumbai, India) and between CERN openlab (see the text box, overleaf) and Rackspace, CERN has contributed more than 90 improvements in the latest OpenStack release.
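As an illustration of the kind of self-service provisioning such a private cloud enables, here is a minimal sketch using the openstacksdk Python library. The cloud profile, image and flavour names are hypothetical, and this is generic OpenStack usage, not CERN’s actual configuration:

  import openstack

  # Connect using a profile from clouds.yaml (or OS_* environment variables)
  conn = openstack.connect(cloud="my-cloud")      # hypothetical profile name

  # Pick an image and flavour for a batch-worker VM
  image = conn.compute.find_image("cc7-base")     # hypothetical image name
  flavor = conn.compute.find_flavor("m2.large")   # hypothetical flavour

  # Ask the cloud to boot the server; the scheduler picks a hypervisor
  server = conn.compute.create_server(
      name="batch-worker-001",
      image_id=image.id,
      flavor_id=flavor.id,
  )

  # Block until the VM is active, then report its status
  server = conn.compute.wait_for_server(server)
  print(server.status)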

As surprising as it may seem, LS1 was also a very busy period with regards to storage. Both the CERN Advanced STORage manager (CASTOR) and EOS, an open-source distributed disk-storage system developed at CERN and in production since 2011, went through major migrations or deployments. CASTOR relies on a tape-based back end for permanent data archiving, and LS1 offered an ideal opportunity to migrate the archived data from legacy cartridges and formats to higher-density ones. This involved migrating around 85 PB of data, and was carried out in two phases during 2014 and 2015. As a result, no fewer than 30,000 tape-cartridge slots were freed to store more data. The EOS 2015 deployment brought storage at CERN to a new scale, enabling the research community to make use of 100 PB of disk storage in a distributed environment built from tens of thousands of heterogeneous hard drives, with minimal data movement and dynamic reconfiguration. It currently stores 45 PB of data with an installed capacity of 135 PB. Data preservation is essential, and more can be read on this aspect in “Data preservation is a journey”.
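Physics data on EOS is typically accessed over the XRootD protocol. A minimal sketch using the XRootD Python bindings is shown below; the endpoint and path are illustrative placeholders, not real datasets:

  from XRootD import client
  from XRootD.client.flags import DirListFlags

  # Open a filesystem handle to an EOS instance over XRootD
  fs = client.FileSystem("root://eospublic.cern.ch")  # illustrative endpoint

  # List a directory, requesting stat information for each entry
  status, listing = fs.dirlist("/eos/opendata/example",  # illustrative path
                               DirListFlags.STAT)
  if status.ok:
      for entry in listing:
          print(entry.name, entry.statinfo.size)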

Databases play a significant role in storage, accelerator operations and physics. A great number of upgrades were performed, in both software and hardware, to rejuvenate platforms and to keep pace with the transformation of the CERN IT computing infrastructure and the needs of the accelerators and experiments. The control applications of the LHC migrated from a file-based archiver to a centralised infrastructure based on Oracle databases. The evolution of the database technologies deployed for WLCG database services improved the availability, performance and robustness of the replication service. New services have also been implemented: the databases for archiving the controls data can now handle, at peak, one million changes per second, compared with the previous 150,000 changes per second. This also benefits the controls of the quench-protection system of the LHC magnets, which has been modernised to operate the machine safely at 13 TeV. These upgrades and changes, which in some cases built on work accomplished as part of CERN openlab projects, have a strong impact on the increasing size and scope of the databases, as can be seen in the CERN databases diagram (above right).

To optimise computing and storage resources in Run 2, the experiments have adopted new computing models. These models move away from the strict hierarchical roles of the tiered centres described in the original WLCG model towards a peer-site model, making more effective use of the capabilities of all sites. This is coupled with significant changes in data-management strategies: away from explicit global placement of data sets towards a much more dynamic system that replicates data only when necessary. Remote access to data is now also allowed under certain conditions. These “data federations”, which optimise the use of expensive disk space, are possible because of the greatly improved networking capabilities made available to WLCG over the past few years. The experiment collaborations also invested significant effort during LS1 to improve the performance and efficiency of their core software, with extensive work to validate the new software and frameworks in readiness for the expected increase in data. Thanks to these gains, only a doubling of the CPU and storage capacity was needed to manage the increased data rate and complexity of Run 2 – without them, a much greater capacity would have been required.

Despite the upgrades and developments mentioned, additional computing resources are always needed, notably for simulations of physics events or of accelerator and detector upgrades. In recent years, volunteer computing has played an increasing role in this domain; the volunteer capacity now corresponds to about half the capacity of the CERN batch system. Since 2011, thanks to virtualisation, the use of LHC@home has been greatly extended, with about 2.7 trillion events simulated. Following this success, ATLAS became the first experiment to join, with volunteer numbers ramping up steadily over the past 18 months and a production rate now equivalent to that of a WLCG Tier-2 site.

In terms of network activities, LS1 gave the opportunity to increase bandwidth and improve redundancy at various levels. The data-transfer rates between some of the detectors (ATLAS, ALICE) and the Meyrin data centre were increased by factors of two and four, respectively. A third circuit has been ordered in addition to the two dedicated, redundant 100 Gbit/s circuits that have connected the CERN Meyrin site and the Wigner site since 2013. The LHC Optical Private Network (LHCOPN) and the LHC Open Network Environment (LHCONE) have evolved to serve the networking requirements of the new computing models for Run 2. LHCOPN, reserved for LHC data transfers and analysis and connecting the Tier-0 and Tier-1 sites, benefited from bandwidth increases from 10 Gbps to 20 and 40 Gbps. LHCONE has been deployed to meet the requirements of the new computing model of the LHC experiments, which demands the transfer of data between any pair of Tier-1, Tier-2 and Tier-3 sites. As of the start of Run 2, LHCONE’s traffic represents no less than one third of European research traffic. Transatlantic connectivity also improved steadily, with ESnet setting up three 100 Gbps links extending to CERN through Europe, replacing the five 10 Gbps links used during Run 1.

With the start of Run 2, supported by these upgrades and improvements of the computing infrastructure, new data-taking records were achieved: 40 PB of data were successfully written on tape at CERN in 2015; out of the 30 PB from the LHC experiments, a record-breaking 7.3 PB were collected in October; and up to 0.5 PB of data were written to tape each day during the heavy-ion run. By way of comparison, CERN’s tape-based archive system collected in the region of 70 PB of data in total during the first run of the LHC, as shown in the plot (right). In total, today, WLCG has access to some 600,000 cores and 500 PB of storage, provided by the 170 collaborating sites in 42 countries, which enabled the Grid to set a new record in October 2015 by running a total of 51.1 million jobs.

Looking into the future

With the LHC’s computing now well on track with Run 2 needs, the WLCG collaboration is looking further into the future, already focusing on the two phases of upgrades planned for the LHC. The first phase (2019–2020) will see major upgrades of ALICE and LHCb, as well as increased luminosity of the LHC. The second phase – the High Luminosity LHC project (HL-LHC), in 2024–2025 – will upgrade the LHC to a much higher luminosity and increase the precision of the substantially improved ATLAS and CMS detectors.

The requirements for data and computing will grow dramatically during this time, with rates of 500 PB/year expected for the HL-LHC. The needs for processing are expected to increase more than tenfold over and above what technology evolution will provide. As a consequence, partnerships such as CERN openlab and other R&D programmes are essential to investigate how the computing models could evolve to address these needs. They will focus on applying more intelligence to filtering and selecting data as early as possible; on investigating the distributed infrastructure itself (the grid) and how best to make use of available technologies and opportunistic resources (grid, cloud, HPC, volunteer computing, etc.); and on improving software performance to optimise the overall system.

Building on many initiatives that have used large-scale commercial cloud resources for similar cases, the Helix Nebula – the Science Cloud (HNSciCloud) pre-commercial procurement (PCP) project may bring interesting solutions. The project, which is led by CERN, started in January 2016 and is co-funded by the European Commission. HNSciCloud pulls together commercial cloud-service providers, publicly funded e-infrastructures and the in-house resources of a group of 10 buyers to build a hybrid cloud platform, on top of which a competitive marketplace of European cloud players can develop their own services for a wider range of users. It aims to bring Europe’s technical development, policy and procurement activities together to remove fragmentation and maximise exploitation. The alignment of commercial and public (regional, national and European) strategies will increase the rate of innovation.

To improve software performance, the High Energy Physics (HEP) Software Foundation, a major new long-term activity, has been initiated. This seeks to address the optimal use of modern CPU architectures and encourage more commonality in key software libraries. The initiative will provide underlying support for the significant re-engineering of experiment core software that will be necessary in the coming years.

In addition, there is a great deal of interest in investigating new ways of data analysis: global queries, machine learning and many more. These are all significant and exciting challenges, but it is clear that the LHC’s computing will continue to evolve, and that in 10 years it will look very different, while still retaining the features that enable global collaboration.

R&D collaboration with CERN openlab

CERN openlab is a unique public–private partnership that has accelerated the development of cutting-edge solutions for the worldwide LHC community and wider scientific research since 2001. Through CERN openlab, CERN collaborates with leading ICT companies and research institutes. Testing in CERN’s demanding environment provides the partners with valuable feedback on their products, while allowing CERN to assess the merits of new technologies in their early stages of development for possible future use. In January 2015, CERN openlab entered its fifth three-year phase.

The topics addressed in CERN openlab’s fifth phase were defined through discussion and collaborative analysis of requirements. This involved CERN openlab industrial collaborators, representatives of CERN, members of the LHC experiment collaborations, and delegates from other international research organisations. The topics include next-generation data-acquisition systems, optimised hardware- and software-based computing platforms for simulation and analysis, scalable and interoperable data storage and management, cloud-computing operations and procurement, and data-analytics platforms and applications.

 

Data preservation is a journey

 

As an organisation with more than 60 years of history, CERN has created large volumes of “data” of many different types. This involves not only scientific data – by far the largest in terms of volume – but also many other types (photographs, videos, minutes, memoranda, web pages and so forth). Sadly, some of this information from as recently as the 1990s, such as the first CERN web pages, has been lost, as has, more notably, much of the data from numerous pre-LEP experiments. Today, things look rather different, with concerted efforts across the laboratory to preserve its “digital memory”. This concerns not only “born-digital” material but also what is still available from the pre-digital era. Whereas the latter often existed (and luckily often still exists) in multiple physical copies, the fate of digital data can be more precarious. This led Vint Cerf, vice-president of Google and an early internet pioneer, to declare in February 2015: “We are nonchalantly throwing all of our data into what could become an information black hole without realising it.” This is a situation that we have to avoid for all CERN data – it’s our legacy.

Interestingly, many of the tools that are relevant for preserving data from the LHC and other experiments are also suitable for other types of data. Furthermore, there are models that are widely accepted across numerous disciplines for how data preservation should be approached and how success against agreed metrics can be demonstrated.

Success, however, is far from guaranteed: the tools involved have had a lifetime that is much shorter than the desired retention period of the current data, and so constant effort is required. Data preservation is a journey, not a destination.

The basic model that more or less all data-preservation efforts worldwide adhere to – or at least refer to – is the Open Archival Information System (OAIS) model, for which there is an ISO standard (ISO 14721:2012). Related to this are a number of procedures for auditing and certifying “trusted digital repositories”, including another ISO standard – ISO 16363.

This certification requires, first and foremost, a commitment by “the repository” (CERN in this case) to “the long-term retention of, management of, and access to digital information”.

In conjunction with numerous more technical criteria, certification is therefore a way of demonstrating that specific goals regarding data preservation are being, and will be, met. For example, will we still be able to access and use data from LEP in 2030? Will we be able to reproduce analyses on LHC data up until the “FCC era”?

In the context of the Worldwide LHC Computing Grid (WLCG), self-certification of, initially, the Tier-0 site is currently under way. This is a first step prior to possible formal certification, certification of other WLCG sites (e.g. the Tier-1s), and even certification of CERN as a whole. This could cover not only current and future experiments but also the “digital memory” of non-experiment data.

What would this involve and what consequences would it have? Fortunately, many of the metrics that make up ISO 16363 are part of CERN’s current practices. To pass an audit, quite a few of these would have to be formalised into official documents (stored in a certified digital repository with a digital object identifier): there are no technical difficulties here but it would require effort and commitment to complete. In addition, it is likely that the ongoing self-certification will uncover some weak areas. Addressing these can be expected to help ensure that all of our data remains accessible, interpretable and usable for long periods of time: several decades and perhaps even longer. Increasingly, funding agencies are requiring not only the preservation of data generated by projects that they fund, but also details of how reproducibility of results will be addressed and how data will be shared beyond the initial community that generated it. Therefore, these are issues that we need to address, in any event.

A reasonable target by which certification could be achieved would be prior to the next update of the European Strategy for Particle Physics (ESPP), and further updates of this strategy would offer a suitable frequency of checking that the policies and procedures were still effective.

The current status of scientific data preservation in high-energy physics owes much to the Study Group that was initiated at DESY in late 2008/early 2009. This group published a “Blueprint document” in May 2012, and a summary of this was input to the 2012 ESPP update process. Since that time, effort has continued worldwide, with a new status report published at the end of 2015.

In 2016, we will profit from the first ever international data-preservation conference to be held in Switzerland (iPRES, Bern, 3–6 October) to discuss our status and plans with the wider data-preservation community. Not only do we have services, tools and experiences to offer, but we also have much to gain, as witnessed by the work on OAIS, developed in the space community, and related standards and practices.

High-energy physics is recognised as a leader in the open-access movement, and the tools in use for this, based on Invenio Digital Library software, have been key to our success. They also underpin more recent offerings, such as the CERN Open Data and Analysis Portals. We are also recognised as world leaders in “bit preservation”, where the 100+ PB of LHC (and other) data are proactively curated with increasing reliability (or decreasing occurrences of the rare but inevitable loss of data), despite ever-growing data volumes. Finally, CERN’s work on virtualisation and versioning file systems through CernVM and CernVM-FS has already demonstrated great potential for the highly complex task of “software preservation”.

• For further reading, visit arxiv.org/pdf/1205.4667 and dx.doi.org/10.5281/zenodo.46158.

NA62: CERN’s kaon factory

Résumé

NA62: CERN’s kaon factory

CERN has a long tradition in kaon physics, a tradition continued today by the NA62 experiment. The commissioning phase gave way in 2015 to data-taking, which should continue until 2018. NA62 is designed to study the decay K+ → π+νν with precision, but it is also well suited to examining other topics, notably lepton universality and radiative decays. The quality of the detector, the possibility of using both charged and neutral secondary beams, and the foreseen availability of the SPS extracted beams for the duration of LHC running make NA62 a genuine kaon factory.

CERN’s long tradition in kaon physics started in the 1960s with experiments at the Proton Synchrotron conducted by, among others, Jack Steinberger and Carlo Rubbia. It continued with NA31, CPLEAR, NA48 and its follow-ups. Next in line and currently active is NA62 – the high-intensity facility designed to study rare kaon decays, in particular those where the mother particle decays into a pion and two neutrinos. The nominal performance of the detector in terms of data quality and quantity is so good that the experiment can undeniably play the role of a kaon factory.

Using its unique set-up, NA62 will address with sufficient statistics and precision a basic question: does the Standard Model also work in the most suppressed corner of flavour-changing neutral currents (FCNCs)? According to theory, these processes are suppressed by the unitarity of the quark-mixing Cabibbo–Kobayashi–Maskawa matrix and by the Glashow–Iliopoulos–Maiani mechanism. What makes the kaons special is that some of these FCNCs are not affected by large hadronic matrix-element uncertainties because they can be normalised to a semi-leptonic mode described by the same form factor, which therefore drops out in the ratio. The poster child of these reactions is the K → πνν. By measuring the decay rate, it will be possible to determine a combination of Cabibbo–Kobayashi–Maskawa matrix elements independently of B decays. Discrepancies compared with expectations might be a signature of new physics.

Testing the Standard Model prediction is not easy, because the decay under study is expected to occur with a probability of less than one part in 10 billion. The first experimental challenge is therefore to collect a sufficient number of kaon decays. To do so, the K12 beamline, which delivers an intense secondary beam from the Super Proton Synchrotron (SPS), had to be completely rebuilt in 2012. Today, NA62 is exploiting this intense secondary beam, which has an instantaneous rate approaching 1 GHz. Although only approximately 6% of the beam particles are kaons, each single particle delivered by the SPS has to be identified before entering the experiment’s decay region. At the heart of the tracking system is the gigatracker (GTK), which measures the impact position of the incoming particle and its arrival time. This information is used to associate the incoming particle with the event observed downstream, and to reconstruct its kinematics. To do so with the required sensitivity, a time resolution of 200 picoseconds in the gigatracker is required.
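A rough estimate, using illustrative numbers based on the figures quoted above rather than official NA62 performance numbers, shows why picosecond-level timing matters at these rates:

  # Accidental beam-track matches in the GTK: a back-of-the-envelope sketch.
  # Numbers are illustrative, taken from the rates quoted in the article.
  beam_rate = 0.75e9       # beam particles per second ("approaching 1 GHz")
  sigma_t = 200e-12        # GTK time resolution, seconds
  window = 6 * sigma_t     # a +/- 3 sigma matching window, ~1.2 ns

  # Mean number of unrelated beam particles inside the matching window
  accidentals = beam_rate * window
  print(f"~{accidentals:.1f} unrelated particles per matching window")
  # ~0.9: with nanosecond-scale timing the matching would be swamped,
  # while 200 ps keeps the mis-association probability manageable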

The GTK consists of a matrix of 200 columns by 90 rows of hybrid silicon pixels. To affect the trajectory of the particles as little as possible, the sensors are 200 μm thick and the pixel chip is 100 μm thick. The GTK is placed in a vacuum and operated at a temperature of –20 °C to reduce radiation-induced performance degradation. The NA62 collaboration has developed innovative ways to ensure effective cooling, using light materials to minimise their effect on particle trajectory.

In addition to measuring the direction and the momentum of each particle, the identity of the particle needs to be determined before it enters the decay tank. This is done using a differential Cherenkov counter (CEDAR) equipped with state-of-the-art optics and electronics to cope with the large particle rate.

Final-pion identification

There is a continuous struggle between particle physicists, who want to keep the amount of material in the tracking detectors to a minimum, and engineers, who need to ensure safety and prevent the explosion of pressurised devices operated inside the vacuum tank, such as the NA62 straw tracker made of more than 7000 thin tubes. In addition, the beam specialists would even prefer to have no detector at all. Any amount of material in the beam leads to scattering of particles into the detectors placed downstream, leading to potential backgrounds and unwanted additional counting rates. In NA62, the accepted signal is a single pion π+ and nothing else, so every trick in the book of experimental particle physics is used to determine the identity of the final pion, including a ring imaging Cherenkov (RICH) counter for pion/muon separation up to about 40 GeV/c.

Perhaps the most striking feature of NA62 is the complex of electromagnetic calorimeters deployed along and downstream of the vacuum tank: 12 stations of lead-glass rings (using crystals refurbished from the OPAL barrel at LEP), of which 11 operate inside the vacuum tank; a liquid-krypton calorimeter, a legacy of NA48 but upgraded with new electronics; and smaller detectors complementing the acceptance. These calorimeters form the NA62 army deployed to suppress the background originating from K+ → π+π0 decays in which both photons from the π0 decay are lost: only one π0 in 10⁷ remains undetected. As you have probably realised by now, NA62 is not a small experiment; a picture of the detector is shown in figure 1.

Even with a 65 m-long fiducial region, only 10% of the kaons decay usefully, so only six in 1000 of the incoming particles measured by the GTK actually end up being used to study kaon decays in NA62 – a big upfront price to pay. On the positive side, the advantage is the possibility to have full control of the initial and final states because the particles don’t cross any material apart from the trackers, and the kinematics of the decays can be reconstructed with great precision. To demonstrate the quality of the NA62 data, figure 3 shows events selected with a single track for incoming particles tagged as kaons and figure 4 shows the particle-identification capability.
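The 10% figure follows from simple decay kinematics. A minimal sketch, assuming the nominal 75 GeV/c momentum of the K12 beam (a number not quoted in the text above):

  import math

  # Probability that a beam kaon decays inside the fiducial region.
  p = 75.0          # beam momentum, GeV/c (nominal K12 value; assumption)
  m_k = 0.4937      # charged-kaon mass, GeV/c^2
  c_tau = 3.712     # charged-kaon c*tau, metres
  fiducial = 65.0   # length of the fiducial decay region, metres

  decay_length = (p / m_k) * c_tau               # gamma*beta*c*tau, ~564 m
  p_decay = 1.0 - math.exp(-fiducial / decay_length)
  print(f"decay probability: {p_decay:.1%}")     # ~10.9%

  # Folding in the ~6% kaon fraction of the beam:
  print(f"useful fraction of beam: {0.06 * p_decay:.2%}")  # ~0.65%, i.e. ~6-7 per 1000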

In addition to suppressing the π0 background, NA62 has to suppress the background from muons. Most of the singles rate in the large detectors is due to these particles, either from the more frequent pion and kaon decays (π+ → μ+ν and K+ → μ+ν) or originating from the dump of the primary proton beam. In addition to the already-mentioned RICH, NA62 is equipped with hadron calorimeters and a fast muon detector at the end of the hall to deal with the muons. A powerful and innovative trigger and data-acquisition system is a crucial ingredient for the success of NA62, together with the commitment and dedication of each collaborator (see figure 2).

NA62 was commissioned in 2014 and 2015, and it is now in the middle of a first long phase of data-taking, which should last until the accelerator’s Long Shutdown 2 in 2018. The data collected so far indicate a detector performance in line with expectations, and preliminary results based on these data were shown at the Rencontres de Moriond Electroweak conference in La Thuile, Italy, in March. A big effort was invested to build this new experiment, and the collaboration is eager to exploit its physics potential to the full.

Because NA62 was designed to address the K+ → π+νν decay with precision, several other physics opportunities can be pursued with the same detector. They range from studies of lepton universality to radiative decays. The improved apparatus with respect to NA48 should also allow improved measurements of ππ scattering and semi-leptonic decays, and searches for possible low-mass, long-lived particles.

The quality of the detector, the possibility to use both charged and neutral secondary beams, and the foreseen availability of the SPS extracted beams for the duration of exploitation of the LHC make NA62 a bona-fide kaon factory.

Particle flow in CMS

In hadron-collider experiments, jets are traditionally reconstructed by clustering photon and hadron energy deposits in the calorimeters. As the information from the inner tracking system is completely ignored in the reconstruction of jet momentum, the performance of such calorimeter-based reconstruction algorithms is seriously limited. In particular, the energy deposits of all jet particles are clustered together, and the jet energy resolution is driven by the calorimeter resolution for hadrons – typically 100%/√E in CMS – and by the non-linear calorimeter response. Also, because the trajectories of low-energy charged hadrons are bent away from the jet axis in the 3.8 T field of the CMS magnet, their energy deposits in the calorimeters are often not clustered into the jets. Finally, low-energy hadrons may even be invisible if their energies lie below the calorimeter detection thresholds.

In contrast, in lepton-collider experiments, particles are identified individually through their characteristic interaction pattern in all detector layers, which allows the reconstruction of their properties (energy, direction, origin) in an optimal manner, even in highly boosted jets at the TeV scale. This approach was first introduced at LEP with great success, before being adopted as the baseline for the design of future detectors for the ILC, CLIC and the FCC-ee. The same ambitious approach has been adopted by the CMS experiment, for the first time at a hadron collider. For example, the presence of a charged hadron is signalled by a track connected to calorimeter energy deposits. The direction of the particle is indicated by the track before any deviation in the field, and its energy is calculated as a weighted average of the track momentum and the associated calorimeter energy. These particles, which typically carry about 65% of the energy of a jet, are therefore reconstructed with the best possible energy resolution. Calorimeter energy deposits not connected to a track are either identified as a photon or as a neutral hadron. Photons, which represent typically 25% of the jet energy, are reconstructed with the excellent energy resolution of the CMS electromagnetic calorimeter. Consequently, only 10% of the jet energy – the average fraction carried by neutral hadrons – needs to be reconstructed solely using the hadron calorimeter, with its 100%/√E resolution. In addition to these types of particles, the algorithm identifies and reconstructs leptons with improved efficiency and purity, especially in the busy jet environment.
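A toy calculation makes the arithmetic of the previous paragraph concrete. The component fractions and the 100%/√E hadron-calorimeter term are taken from the text; the ~1% tracker resolution and 3%/√E electromagnetic-calorimeter term are illustrative assumptions, so this is a sketch of the scaling rather than the actual CMS performance:

  import math

  def calo_only_resolution(e_jet):
      """Calorimeter-only: the whole jet is measured at ~100%/sqrt(E)."""
      return 1.00 / math.sqrt(e_jet)

  def particle_flow_resolution(e_jet):
      """Each component measured by the best-suited detector.
      Fractions (65/25/10%) are the rough figures quoted in the text;
      the 1% tracker and 3%/sqrt(E) ECAL terms are assumptions."""
      e_ch, e_ph, e_nh = 0.65 * e_jet, 0.25 * e_jet, 0.10 * e_jet
      s_ch = 0.01 * e_ch                 # charged hadrons: tracker
      s_ph = 0.03 * math.sqrt(e_ph)      # photons: ECAL
      s_nh = 1.00 * math.sqrt(e_nh)      # neutral hadrons: HCAL
      return math.sqrt(s_ch**2 + s_ph**2 + s_nh**2) / e_jet

  for e in (20, 100, 500):
      print(f"{e:4d} GeV jet: calo {calo_only_resolution(e):5.1%}, "
            f"particle flow {particle_flow_resolution(e):5.1%}")
  # At 100 GeV: ~10% calorimeter-only versus ~3% particle flow, i.e.
  # roughly the factor-of-three improvement reported by CMS.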

Key ingredients for the success of particle flow are excellent tracking efficiency and purity, the ability to resolve the calorimeter energy deposits of neighbouring particles, and unambiguous matching of charged-particle tracks to calorimeter deposits. The CMS detector, although not designed for this purpose, turned out to be well suited to particle flow. Charged-particle tracks are reconstructed with an efficiency greater than 90% and a false-track rate at the per-cent level, down to a transverse momentum of 500 MeV. Excellent separation of charged-hadron and photon energy deposits is provided by the granular electromagnetic calorimeter and the large magnetic-field strength. Finally, the two calorimeters are placed inside the magnet coil, which minimises the probability for a charged particle to shower before reaching the calorimeters, and therefore facilitates the matching between tracks and calorimeter deposits.

After particle flow, the list of reconstructed particles resembles that provided by an event generator. It can be used directly to reconstruct jets and the missing transverse momentum, to identify hadronic tau decays, and to quantify lepton isolation. Figure 1 illustrates, in a given event, the accuracy of the particle reconstruction by comparing the jets of reconstructed particles to the jets of generated particles. Figure 2 further demonstrates the dramatic improvement in jet-energy resolution with respect to the calorimeter-based measurement. In addition, particle flow improves the jet angular resolution by a factor of three and reduces the systematic uncertainty in the jet-energy scale by a factor of two. The influence of particle flow is, however, far from restricted to jets: there are, for example, similar improvements in missing-transverse-momentum reconstruction, and the tau-identification background rate is reduced by a factor of three. This new approach to reconstruction also paved the way for particle-level pile-up mitigation methods, such as the identification and masking of charged hadrons from pile-up before clustering jets or estimating lepton isolation (see the sketch below), and the use of machine learning to estimate the contribution of pile-up to the missing transverse momentum.
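Once an event is simply a list of particles, charged-hadron subtraction becomes conceptually a one-line filter. The following sketch uses a hypothetical, simplified data structure – it is not CMS code:

  from dataclasses import dataclass

  @dataclass
  class Particle:                  # hypothetical particle-flow candidate
      pt: float                    # transverse momentum, GeV
      kind: str                    # "charged_hadron", "photon" or "neutral_hadron"
      from_primary_vertex: bool    # vertex association (meaningful for charged particles)

  def charged_hadron_subtraction(particles):
      """Drop charged hadrons whose tracks point to a pile-up vertex.
      Neutral particles are kept: they leave no track, so their vertex
      is unknown and must be handled by other (e.g. area-based) methods."""
      return [p for p in particles
              if p.kind != "charged_hadron" or p.from_primary_vertex]

  event = [Particle(25.0, "charged_hadron", True),
           Particle(8.0, "charged_hadron", False),   # pile-up: removed
           Particle(12.0, "photon", True)]
  print(sum(p.pt for p in charged_hadron_subtraction(event)))  # 37.0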

The algorithm, optimised before the start of LHC Run 1 in 2009, remains essentially unchanged for Run 2, because the reduced bunch spacing of 25 ns could be accommodated by a simple reduction of the time windows for the detector hits. The future CMS upgrades have been planned with optimal conditions for particle-flow (and therefore physics) performance in mind. In the first phase of the upgrade programme, a new pixel layer will reduce the rate of false charged-particle tracks, while the read-out of multiple layers with low-noise photodetectors in the hadron calorimeter will improve the neutral-hadron measurement that limits the jet-energy resolution. The second phase includes extended tracking, allowing full particle-flow reconstruction in the forward region, and a new high-granularity endcap calorimeter with extended particle-flow capabilities. The future is therefore bright for the CMS particle-flow reconstruction concept.

• CMS Collaboration, “Particle flow and global event description in CMS”, in preparation.

AugerPrime looks to the highest energies

 

Since the start of its operations in 2004, the Auger Observatory has illuminated many of the open questions in cosmic-ray science. For example, it confirmed with high precision the suppression of the primary cosmic-ray energy spectrum for energies exceeding 5 × 10¹⁹ eV, as predicted by Kenneth Greisen, Georgiy Zatsepin and Vadim Kuzmin (the “GZK effect”). The collaboration has searched for possible extragalactic point sources of the highest-energy cosmic-ray particles ever observed, as well as for large-scale anisotropy of arrival directions in the sky (CERN Courier December 2007 p5). It has also published unexpected results about the specific particle types that reach the Earth from remote galaxies, referred to as the “mass composition” of the primary particles. The observatory has set the world’s most stringent upper limits on the flux of neutrinos and photons with EeV energies (1 EeV = 10¹⁸ eV). Furthermore, it contributes to our understanding of hadronic showers and interactions at centre-of-mass energies well above those accessible at the LHC, such as in its measurement of the proton–proton inelastic cross-section at √s = 57 TeV (CERN Courier September 2012 p6).

The current Auger Observatory

The Auger Observatory learns about high-energy cosmic rays from the extensive air showers they create in the atmosphere (CERN Courier July/August 2006 p12). These showers consist of billions of subatomic particles that rain down on the Earth’s surface, spread over a footprint of tens of square kilometres. Each air shower carries information about the primary cosmic-ray particle’s arrival direction, energy and particle type. An array of 1600 water-Cherenkov surface detectors, placed on a 1500 m grid covering 3000 km², samples some of these particles, while fluorescence detectors around the observatory’s perimeter observe the faint ultraviolet light the shower creates by exciting the air molecules it passes through. The surface detectors operate 24 hours a day, and are joined by fluorescence-detector measurements on clear moonless nights. The duty cycle for the fluorescence detectors is about 10% that of the surface detectors. An additional 60 surface detectors in a region with a reduced 750 m spacing, known as the infill array, focus on detecting lower-energy air showers whose footprint is smaller than that of showers at the highest energies. Each surface-detector station (see image above) is self-powered by a solar panel, which charges batteries in a box attached to the tank (at left in the image), enabling the detectors to operate day and night. An array of 153 radio antennas, named AERA and spread over a 17 km² area, complements the surface detectors and fluorescence detectors. The antennas are sensitive to coherent radiation emitted in the frequency range 30–80 MHz by air-shower electrons and positrons deflected in the Earth’s magnetic field.

The motivation for AugerPrime and its detector upgrades

The primary motivation for the AugerPrime detector upgrades is to understand how the suppressed energy spectrum and the mass composition of the primary cosmic-ray particles at the highest energies are related. Different primary particles, such as γ-rays, neutrinos, protons or heavier nuclei, create air showers with different average characteristics. To date, the observatory has deduced the average primary-particle mass at a given energy from measurements provided by the fluorescence detectors. These detectors are sensitive to the number of air-shower particles versus depth in the atmosphere through the varying intensity of the ultraviolet light emitted along the path of the shower. The atmospheric depth of the shower’s maximum number of particles, a quantity known as Xmax, is deeper in the atmosphere for proton-induced air showers relative to showers induced by heavier nuclei, such as iron, at a given primary energy. Owing to the 10% duty cycle of the fluorescence detectors, the mass-composition measurements using the Xmax technique do not currently extend into the energy region E > 5 × 10¹⁹ eV where the flux suppression is observed. AugerPrime will capitalise on another feature of air showers induced by different primary-mass particles, namely, the different abundances of muons, photons and electrons at the Earth’s surface. The main goal of AugerPrime is to measure the relative numbers of these shower particles to obtain a more precise handle on the primary cosmic-ray composition with increased statistics at the highest energies. This knowledge should reveal whether the flux suppression at the highest energies is a result of a GZK-like propagation effect or of astrophysical sources reaching a limit in their ability to accelerate the highest-energy primary particles.
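The mass sensitivity of Xmax can be understood from the superposition model (a textbook approximation, not specific to Auger): a nucleus of mass number A and energy E behaves like A independent protons of energy E/A each, so with an elongation rate D (the shift of Xmax per decade of energy),

  \langle X_{\max}^{A}(E)\rangle \simeq \langle X_{\max}^{p}(E/A)\rangle = \langle X_{\max}^{p}(E)\rangle - D\,\log_{10}A .

With a typical D of around 55–60 g/cm² per decade, iron showers (A = 56) reach their maximum roughly 100 g/cm² higher in the atmosphere than proton showers of the same energy.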

The key to differentiating the ground-level air-shower particles lies in improving the detection capabilities of the surface array. AugerPrime will cover each of the 1660 water-Cherenkov surface detectors with planes of plastic-scintillator detectors measuring 4 m². Surface-detector stations with scintillators above the Cherenkov detectors will allow the Auger team to determine the electron/photon versus muon abundances of air showers more precisely compared with using the Cherenkov detectors alone. The scintillator planes will be housed in light-tight, weatherproof enclosures, attached to the existing water tanks with a sturdy support frame, as shown above. The scintillator light will be read out with wavelength-shifting fibres inserted into straight extruded holes in the scintillator planes, which are bundled and attached to photomultiplier tubes. Also above, an image shows how the green wavelength-shifting fibres emerge from the scintillator planes and are grouped into bundles. Because the surface detectors operate 24 hours a day, the AugerPrime upgrade will yield mass-composition information for the full data set collected in the future.

The AugerPrime project also includes other detector improvements. The dynamic range of the Cherenkov detectors will be extended with the addition of a fourth photomultiplier tube. Its gain will be adjusted so that particle densities can be accurately measured close to the core of the highest-energy air showers. New electronics with faster sampling of the photomultiplier-tube signals will better identify the narrow peaks created by muons. New GPS receivers at each surface-detector station will provide better timing accuracy and calibration. A subproject of AugerPrime called AMIGA will consist of scintillator planes buried 1.3 m under the 60 surface detectors of the infill array. The AMIGA detectors are directly sensitive to the muon content of air showers, because the electromagnetic components are largely absorbed by the overburden.

The AugerPrime Symposium

In November 2015, the Auger scientists combined their biannual collaboration meeting in Malargüe, Argentina, with a meeting of its International Finance Board and dignitaries from many of its collaborating countries, to begin the new phase of the experiment in an AugerPrime Symposium. The Finance Board endorsed the development and construction of the AugerPrime detector upgrades, and a renewed international agreement was signed in a formal ceremony for continued operation of the experiment for an additional 10 years. The observatory’s spokesperson, Karl-Heinz Kampert from the University of Wuppertal, said: “The symposium marks a turning point for the observatory and we look forward to the exciting science that AugerPrime will enable us to pursue.”

While continuing to collect extensive air-shower data with its current detector configuration and publishing new results, the Auger Collaboration is focused on finalising the design for the upgraded AugerPrime detectors and making the transition to the construction phase at the many collaborating institutions worldwide. Subsequent installation of the new detector components on the Pampa Amarilla is no small task, with the 1660 surface detectors spread across such a large area. Each station must be accessed with all-terrain vehicles moving carefully on rough desert roads. But the collaboration is up to the challenge, and AugerPrime is foreseen to be completed in 2018 with essentially no interruption to current data-taking operations.

• For more information, see auger.org/augerprime.

A global lab with a global mission

Our world has been transformed almost beyond recognition since CERN was founded in 1954. Particle physics has evolved to become a field that is increasingly planned and co-ordinated around the world. Collaboration across regions is growing. New players are emerging.

CERN is now a global lab, with a European core. This was recognised by CERN member states with the adoption, in 2010, of the geographical-enlargement policy, which opens the way to greater participation from countries outside Europe. Since then, we have welcomed Israel as a new member state. Romania and Serbia are entering the final stages of accession to membership, and Cyprus has just joined as an associate member in the pre-stage to membership. Since 2015, Pakistan and Turkey have been part of the wider CERN family as associate members, and several more states have applied for associate membership.

Yet, the changes go much further than our scientific field and the inclusion of new members in our particle-physics family. Global governance is more complex than ever, with overlapping challenges and a greater number of interlocutors. Public opinion is being formed in new ways, driven by technological advances and political change. Global economic changes, with emerging countries gaining influence and clout, shape policy priorities in new ways – also in the scientific field. Support for fundamental science must be constantly nurtured, and partnerships are more necessary than ever.

It is a highly complex and fast-moving global policy space. CERN – and indeed all large labs and research infrastructures – needs to react to and act within this evolving context. The challenge for all of us is to advance in a globally co-ordinated manner, so as to be able to carry out as many exciting and complementary projects as possible, while ensuring long-term support for fundamental science as the competition for resources becomes ever fiercer on all levels.

Global impact

It is against this background that the Director-General of CERN has now, for the first time, established an International Relations (IR) sector. The sector brings together entities within the Organization that are working on different aspects of our international engagement, and it provides a unique opportunity for CERN to strengthen the global dimension of its work.

The IR sector has three overarching objectives. First, to help strengthen CERN’s position as a global centre of excellence in science and research through sustained support from all stakeholders. Second, to contribute to shaping a global policy agenda that supports fundamental research, and includes science perspectives more generally. And third, to connect CERN with people across the world to inspire scientific curiosity and understanding.

The immediate priorities for the sector include reinforcing dialogue with our member states, setting future directions for geographical enlargement, and strengthening CERN’s voice in global policy debates.

Let me share a couple of the initiatives that are under way.

We have already expanded the interaction with member states with the establishment of thematic forums that enable better dialogue, and new forums will be created in the coming months. We have also begun reflecting on how to focus geographical enlargement in a way that fully supports and reinforces our long-term scientific aspirations. It is critical that enlargement is not seen as an end in itself; it is intended to underpin CERN’s scientific objectives through a broader and more diverse support base to strengthen our core scientific work.

Fundamental science

Direct engagement with people across the world is a key aspect of our work. With a newly integrated Education, Communications and Outreach group, we will be able to reach out in a more co-ordinated manner – to stimulate interest in and support for fundamental science, among teachers, students, global science policy makers and the many others around the globe who follow our work. For those of us who work with fundamental science every day, the value and impact seem obvious. But it isn’t always that obvious beyond our own corridors. We need to get better at demonstrating how scientific advances impact on the lives of people across the world, every single day, often in surprising but deeply profound ways.

While the IR sector as an institutional construct is new, we are building on a proud, long-standing tradition of inclusive international collaboration in pursuit of a common goal: expanding our collective knowledge. Exploring the frontiers of knowledge has always thrived on ideas, input and initiatives from across the world.

It is truly a privilege to be part of the collective effort that is the CERN IR sector, to take that work forward.

The Composite Nambu–Goldstone Higgs

By Giuliano Panico and Andrea Wulzer
Springer

978-3-319-22617-0

This book provides a description of the composite Higgs scenario as a possible extension of the Standard Model (SM). The SM is, by now, the established theory of electroweak and strong interactions, but it is not the fundamental theory of nature. It is just an effective theory – an approximation of a more fundamental theory – that is able to describe nature under specific conditions.

There are a number of open theoretical issues, such as: the existence of gravity, for which no complete high-energy description is available; neutrino masses and oscillations; and the hierarchy problem associated with the Higgs-boson mass (why does the Higgs boson have so small a mass – or, in other words, why is it so much lighter than the Planck mass?).

Among the possible solutions to the hierarchy problem, the scenario of a composite Higgs boson is a quite simple idea that offers a plausible description of the experimental data. In this picture, the Higgs must be a (pseudo-) Nambu–Goldstone boson, as explained in the text.

The aim of this volume is to describe the composite Higgs scenario, to assess its likelihood of being realised in nature – to the best of present-day theoretical and experimental understanding – and to identify possible experimental manifestations of this scenario (which would influence future research directions). The tools employed in formulating the theory and studying its implications are also discussed.

Thanks to the pedagogical nature of the text, this book could be useful for graduate students and non-specialist researchers in particle, nuclear and gravitational physics.
