On the same day that the LHC’s first three-year physics run ended, CERN announced that its data centre had recorded more than 100 petabytes (PB) – 100 million gigabytes – of physics data.
Storing this 100 PB – the equivalent of 700 years of full HD-quality video, amassed over the past 20 years – has been a challenge. At CERN, the bulk of the data (about 88 PB) is archived on tape using the CERN Advanced Storage (CASTOR) system. The rest (13 PB) is stored on the EOS disk-pool system, which is optimized for fast analysis access by many concurrent users.
For the CASTOR system, eight robotic tape libraries are distributed across two buildings, with each tape library capable of containing up to 14,000 tape cartridges. CERN currently has around 52,000 tape cartridges with a capacity ranging from 1 terabyte (TB) to 5.5 TB each. For the EOS system, the data are stored on more than 17,000 disks attached to 800 disk servers.
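As a rough cross-check (an illustrative back-of-the-envelope calculation, not an official CERN figure), the quoted cartridge inventory comfortably accommodates the tape archive:

```python
# Back-of-the-envelope check of the CASTOR tape figures quoted above
# (counts and capacities taken from the article; purely illustrative).
n_cartridges = 52_000
min_capacity_tb, max_capacity_tb = 1.0, 5.5           # TB per cartridge

min_total_pb = n_cartridges * min_capacity_tb / 1000  # 1 PB = 1000 TB
max_total_pb = n_cartridges * max_capacity_tb / 1000
print(f"total tape capacity: {min_total_pb:.0f}-{max_total_pb:.0f} PB")  # 52-286 PB

# The 88 PB archived on tape therefore fits, provided a reasonable share of
# the cartridges belongs to the higher-capacity generations.
```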
Not all of the data are generated by LHC experiments. CERN’s IT Department hosts data from many other high-energy physics experiments at CERN, past and present, and is also a data centre for the Alpha Magnetic Spectrometer.
For both tape and disk, efficient data storage and access must be provided, which involves identifying performance bottlenecks and understanding how users want to access the data. Tapes are checked regularly to make sure that they remain in good condition and accessible to users. To optimize storage space, the complete archive is regularly migrated to the newest high-capacity tapes. On the disk-based system, data are automatically re-replicated after hard-disk failures and a scalable namespace enables fast concurrent access to millions of individual files.
The data centre will be kept busy during the long shutdown of the whole accelerator complex, analysing data taken during the LHC’s first three-year run and preparing for the higher data flow expected when the accelerators and experiments start up again. An extension of the centre and the use of a remote data centre in Hungary will further increase its capacity.
Using two independent analyses, the LHCb collaboration has updated its measurement of ΔACP, the difference in CP violation in the decays D0→K+K– and D0→π+π–. This helps to cast light on whether – and to what extent – CP violation occurs in interactions involving particles, such as the D0, that contain a charm quark.
The new results represent a significant improvement in the measurement of ΔACP, which has emerged as an important means to probe the charm sector. A previous measurement from LHCb was 3.5σ from zero and constituted the first evidence for CP violation in the charm sector (LHCb 2012). Subsequent results from the CDF and Belle collaborations, at Fermilab and KEK, respectively, further strengthened the evidence but not to the 5σ gold standard. Because the size of the effect was larger than expected, the result provoked a flurry of theoretical activity, including new physics models that could enhance such asymmetries and ideas for measurements that could elucidate the origin of the effect.
Both of the new measurements by LHCb use the full 2011 data set, corresponding to an integrated luminosity of 1.0 fb–1 of proton–proton collisions at 7 TeV in the centre of mass. The first uses the same “tagging” technique as all previous measurements, in which the initial flavour of the D meson (D0 or D̄0) is inferred from the charge of the pion in the decay D*+→D0 π+. The second uses D mesons produced in semimuonic B decays, where the charge of the associated muon provides the tag. The two methods allow for useful cross-checks, in particular for biases that have different origins in the two analyses.
Compared with LHCb’s previous publication on ΔACP, the new pion-tagged analysis uses more data, fully reprocessed with improved alignment and calibration constants (LHCb 2013a). The most important change in the analysis procedure is the application of a vertex constraint, which improves background suppression by a factor of 2.5. The result, ΔACP = (–0.34 ± 0.15 (stat.) ± 0.10 (syst.))%, is closer to zero than the previous measurement, which it supersedes. Detailed investigations reveal that the shift caused by each change in the analysis is consistent with a statistical fluctuation.
To add to the picture, the muon-tagged analysis also measures a value that is consistent with zero: ΔACP = (+0.49 ± 0.30 (stat.) ± 0.14 (syst.))% (LHCb 2013b). In both analyses, the control of systematic uncertainties around the per mille level is substantiated by numerous cross-checks. As the figure shows, the two new results are consistent with each other and with other results at the 2σ level but do not confirm the previous evidence of CP violation in the charm sector.
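As a rough illustration of that 2σ statement (a naive compatibility check that adds statistical and systematic uncertainties in quadrature and ignores correlations – an assumption, not the collaboration’s own combination procedure):

```python
import math

# Pion-tagged and muon-tagged Delta A_CP results, in %, as quoted above.
# Statistical and systematic uncertainties are combined in quadrature and
# correlations are ignored -- a simplification for illustration only.
pion_tag = (-0.34, math.hypot(0.15, 0.10))
muon_tag = (+0.49, math.hypot(0.30, 0.14))

diff = muon_tag[0] - pion_tag[0]
sigma = math.hypot(pion_tag[1], muon_tag[1])
print(f"difference: {diff:+.2f} +/- {sigma:.2f} %  ({diff / sigma:.1f} sigma)")

# Naive inverse-variance-weighted average of the two results
w1, w2 = pion_tag[1] ** -2, muon_tag[1] ** -2
mean = (w1 * pion_tag[0] + w2 * muon_tag[0]) / (w1 + w2)
error = (w1 + w2) ** -0.5
print(f"naive average: {mean:+.2f} +/- {error:.2f} %")   # consistent with zero
```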
Theoretical work has shown that several well-motivated models could induce large CP-violation effects in the charm sector. These new results constrain the parameter space of such models. Further updates to this and to related measurements will be needed to discover if – and at what level – nature distinguishes between charm and anticharm. The full data sample recorded by LHCb before the start of the long shutdown contains more than three times the number of charm decays used in these new analyses, so progress can be anticipated during the LHC long shutdown.
The TOTEM collaboration has published the first luminosity-independent measurement of the total proton–proton cross-section at a centre-of-mass energy of 7 TeV. This is based on the experiment’s simultaneous studies of both inelastic and elastic scattering in proton collisions at the LHC.
The TOTEM (TOTal cross-section, Elastic scattering and diffraction dissociation Measurement) experiment, which co-habits the intersection region at Point 5 (IP5) with the CMS experiment, is optimized for making precise measurements of particles that emerge from collisions close to the non-interacting beam particles. To study elastic proton–proton (pp) collisions, in which the interacting protons simply change direction slightly, the experiment uses silicon detectors in Roman Pots, which can bring the detectors close to the beam line. For inelastic collisions, where new particles are created, two charged-particle telescopes, T1 and T2, come into play. T1 is based on cathode-strip chambers in two “arms” at about 9 m from IP5; T2 employs gas electron-multiplier (GEM) chambers, in this case in two arms at around 13.5 m from IP5.
The measurements at 7 TeV in the centre of mass are based on data recorded in October 2011 with a special setting of the LHC in which the beams were not squeezed for high luminosity but were left relatively wide and straight. With the Roman Pot detectors moved close to the beam, the TOTEM collaboration measured the differential elastic cross-section, dσ/dt, down to values of the four-momentum transfer squared of |t| = 0.005 GeV2. Using the luminosity at IP5 as measured by CMS then gave a value for the elastic pp cross-section, σel = 25.4±1.1 mb (TOTEM collaboration 2013a). Using the optical theorem, which relates dσ/dt at t = 0 to σtot, the measurement of dσ/dt also provided a value of the total cross-section, σtot, and, indirectly, of the inelastic cross-section, σinel = σtot – σel. This yielded σinel = 73.2±1.4 mb.
To measure σinel more directly, the collaboration has analysed events that have at least one charged particle in the T2 telescope. After applying several corrections and, again, using the luminosity measured by CMS, they arrive at a final result of σinel = 73.7±3.4 mb (TOTEM collaboration 2013b).
The excellent agreement between this value for σinel and the one determined from dσ/dt confirms that the collaboration understands well the systematic uncertainties and corrections used in the analysis, and it allows still more information to be extracted from the data. In particular, as the elastic and inelastic data were collected simultaneously, the optical theorem allows the rates to be combined without the need to know the luminosity. This gives luminosity-independent values of σel = 25.1±1.1 mb, σinel = 72.9±1.5 mb and σtot = 98.0±2.5 mb (TOTEM collaboration 2013c). Using the optical theorem in a complementary way also allows TOTEM to determine the luminosity and in this case the collaboration finds values that are in excellent agreement with those measured by CMS.
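Schematically, and in the standard notation used for such measurements (a sketch of the method rather than a transcription of the TOTEM analysis, with ρ the ratio of the real to the imaginary part of the forward elastic amplitude), the optical theorem allows the elastic and inelastic rates Nel and Ninel to be combined with no luminosity input:

$$\sigma_{\mathrm{tot}} = \frac{16\pi}{1+\rho^{2}}\,
\frac{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}{N_{\mathrm{el}}+N_{\mathrm{inel}}},
\qquad
\mathcal{L} = \frac{1+\rho^{2}}{16\pi}\,
\frac{\bigl(N_{\mathrm{el}}+N_{\mathrm{inel}}\bigr)^{2}}{\left.\mathrm{d}N_{\mathrm{el}}/\mathrm{d}t\right|_{t=0}}.$$

The elastic and inelastic cross-sections then follow from the measured ratio of the two rates, and the quoted numbers add up by construction: σel + σinel = 25.1 + 72.9 mb = 98.0 mb = σtot.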
The ATLAS collaboration achieved a milestone in February when it applied the finishing touches to the measurement of the luminosity for proton–proton (pp) data recorded in 2011 at 7 TeV in the centre of mass. With a relative uncertainty of ±1.8%, the understanding of the luminosity delivered to ATLAS exceeds the accuracy expected before LHC running began and opens up exciting possibilities for precision measurements.
The absolute scale for luminosity – a measure of how many particles pass through a given area in a given time – is calibrated by combining simultaneous precision measurements of the bunch currents in the LHC and of the convolved transverse size of the colliding bunches. Using a technique pioneered by Simon van der Meer nearly 50 years ago at CERN’s Intersecting Storage Rings, the inelastic pp collision rate is monitored as the beams are separated first in the horizontal and then in the vertical direction. This “vdM scan” provides a measurement of the beam-overlap area, which, when combined with the numbers of protons in each bunch, determines the absolute luminosity produced in head-on collisions.
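In its simplest form (a textbook sketch for Gaussian-like bunches colliding head on, not the full ATLAS treatment), this calibration can be written as

$$\mathcal{L} = \frac{n_{b}\, f_{\mathrm{rev}}\, N_{1} N_{2}}{2\pi\,\Sigma_{x}\Sigma_{y}},$$

where frev is the LHC revolution frequency, nb the number of colliding bunch pairs, N1 and N2 the bunch populations, and Σx and Σy the convolved transverse beam sizes extracted from the horizontal and vertical scan curves. The formula makes explicit why the bunch-current normalization discussed below enters the calibration directly: the luminosity scales with the product N1N2.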
The success of the procedure for vdM scans at the LHC resulted from close co-operation between the LHC accelerator team and the four large experimental collaborations. The scans are performed in special fills with carefully tailored beam conditions. These fills are optimized for the accuracy of the luminosity measurement while remaining within acceptable operational parameters for the accelerator complex. One key input, the understanding of the number of protons per bunch in the LHC, is determined from several different beam instrumentation measurements, as well as from additional supporting measurements by each of the four LHC collaborations. This effort, led by the LHC Bunch Current Normalization Working Group, has reduced the uncertainty on this key component of the luminosity calibration from around 10% in early 2010 to 0.5% for the final 2011 result.
ATLAS uses two main detectors to monitor the luminosity delivered during physics collisions. LUCID is a segmented Cherenkov detector wrapped around the forward beam pipe; it has been designed specifically for luminosity measurements. The beam conditions monitor (BCM) is a set of small sensors made from synthetic (CVD) diamonds, which also provide fast-abort signals to protect the inner tracking-detectors from radiation damage. LUCID and the BCM both deliver individual luminosity measurements for each of the 3564 possible colliding-bunch slots in the LHC’s fill pattern.
The vdM scans provide a direct calibration of these detectors at a single point in time. The accuracy of that calibration in 2011 was determined to be ±1.5%. The dominant uncertainties in this calibration are linked to the reproducibility of the result from one scan to the next and among different colliding bunches in the same scan, as well as to the understanding of the numbers of protons mentioned above.
To verify that the luminosity calibration determined during vdM scans is stable over an entire year of LHC operation, ATLAS relies on the consistency between several different detectors and algorithms. In addition to LUCID and the BCM, the electrical current flowing through the liquid-argon gaps of the forward calorimeter, as well as the photomultiplier currents in selected cells of the hadronic calorimeter, have proved to be remarkably good luminosity monitors. Further measurements, such as the rate of primary collision vertices reconstructed by the ATLAS tracking system, provide independent cross-checks. Altogether, the agreement among the different luminosity methods has limited any possible variation of the luminosity scale to less than ±1% over the entire year.
The story of the 2011 luminosity measurement has come to a close with the submission of an ATLAS paper on the topic. However, each year brings new challenges and past performance does not guarantee future returns. Considerable machine time was devoted to vdM scans in 2012 to provide the data necessary for a successful luminosity calibration, this time at 8 TeV. This analysis is ongoing, but the accuracy established in 2011 has set a high standard for future luminosity measurements at the LHC.
A century after the discovery of cosmic rays, NASA’s Fermi Gamma-ray Space Telescope has gathered strong evidence that protons are, indeed, accelerated to high energies by supernova remnants (SNRs). The “smoking gun” is the production of neutral pions in proton–proton collisions and their subsequent decay, revealed by the shape of the gamma-ray spectrum measured in two SNRs.
Cosmic rays are high-energy charged particles (mostly protons) interacting in the Earth’s atmosphere. Except for the low-energy component in the solar wind, they hit the Earth from all directions because the interstellar magnetic field deflects them randomly. This means that cosmic rays cannot be traced to their sources – so their origin has remained a mystery since their discovery by Victor Hess in 1912 (CERN Courier July/August 2012 p14).
Based on the pioneering work of Enrico Fermi in 1949, researchers have suspected that SNRs are capable of accelerating particles to cosmic-ray energies according to the following scenario. A massive star exploding as a supernova will produce an expanding shock wave in the interstellar medium. A particle crossing the shock front gains an increase in speed of about 1%. This is not much but it can become important for multiple crossings that can be induced when a turbulent magnetic field deflects a charged particle in a random walk process. A particle that crosses the shock discontinuity many times can gain enough energy to break free and escape into the Galaxy – becoming a cosmic ray.
The detection of high-energy gamma rays emitted by SNRs provided the first observational evidence for such a mechanism (CERN Courier January/February 2005 p30). This was corroborated by the detection with Cherenkov telescope arrays of the nearby starburst galaxies M82 and NGC 253 (CERN Courier December 2009 p11). However, it was still possible that the gamma-ray radiation could be induced by bremsstrahlung or inverse-Compton radiation from electrons, rather than by cosmic-ray protons.
An opportunity to disentangle the two processes by studying gamma rays at MeV-to-GeV energies came with the launch of the Fermi satellite in 2008 (CERN Courier November 2008 p13). To prove that SNRs produce cosmic rays, the Fermi collaboration has focused on two particular objects, known as IC 443 and W44. Not too distant in the Galaxy, these have the advantage that they are expanding into cold, dense clouds of interstellar gas. These clouds emit gamma rays when struck by high-speed particles escaping the remnants. If these particles are protons, they can produce neutral pions when colliding with ambient nuclei in the gas clouds. The pions then decay almost instantly into pairs of gamma rays, each carrying half of the pion rest mass of 135 MeV in the pion’s rest frame. While the photon number spectrum is thus centred at 67.5 MeV, the usual representation of the gamma-ray spectrum – the photon number spectrum multiplied by the square of the photon energy – rises steeply below around 200 MeV.
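The kinematics behind this signature are straightforward (a textbook sketch, not part of the Fermi analysis itself): a π0 of velocity β and Lorentz factor γ decays isotropically into two photons, each carrying mπc²/2 = 67.5 MeV in the pion rest frame, so that in the laboratory

$$E_{\gamma} = \frac{m_{\pi}c^{2}}{2}\,\gamma\,\bigl(1+\beta\cos\theta^{*}\bigr),$$

a flat distribution between (mπc²/2)γ(1−β) and (mπc²/2)γ(1+β). Because these limits have a geometric mean of mπc²/2 for any pion energy, the summed photon spectrum is symmetric about 67.5 MeV on a logarithmic energy axis and, when weighted by the square of the photon energy as described above, drops steeply towards lower energies – the pion-decay signature that Fermi set out to detect.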
Thanks to improved calibrations at low energies, the Fermi spectra of IC 443 and W44 can now show the presence of the low-energy spectral cut-off expected from pion decay. The study published in Science by the Fermi collaboration shows that the gamma-ray spectra of both sources are better reproduced by pion decay than by bremsstrahlung radiation from electrons. This observational proof of proton acceleration in SNRs was one of the key objectives of the Fermi mission and confirms the basic principle of particle acceleration suggested by Enrico Fermi some 60 years ago.
Three years after resuming operation at a centre-of-mass energy of 7 TeV in 2010 and ramping up to 8 TeV last year, the LHC is now taking a break for its first long shutdown, LS1. During the long period of highly successful running, the CMS collaboration took advantage of the accelerator’s superb performance to produce high-quality results in a variety of physics analyses, the most significant being the joint discovery with ATLAS of a new, Higgs-boson-like particle in July 2012.
Now, as the LHC teams prepare the machine for running from 2015 onwards at a higher centre-of-mass energy (13–14 TeV) and with increasing luminosity, the collaboration will continue to be busy maintaining and consolidating the CMS subdetectors and making sure that they can handle the collider’s improved performance. For several systems, this will involve making provision for upgrades to be implemented later in the detector’s lifetime. Point 5, the home of the CMS detector and control room, will see a busy LS1.
Tracker climate control
Perhaps the biggest priority for CMS is to reduce the effects of radiation damage on the performance of the Tracker. The CMS tracking system forms the innermost subdetector and fits snugly round the LHC beampipe. It must withstand an onslaught of some 1010 particles a second and the aggressive field of mixed radiation that this produces. The only way to mitigate the progressive effects of this irradiation is to operate the Tracker at a lower temperature than the present few degrees Celsius – perhaps as much as 30 °C lower. It is crucial that the Tracker can run under these conditions over the next decade, during which a replacement will be designed and built. The issue here is two-fold: on the one hand, the Tracker coolant must run at a lower temperature; on the other, there can be no condensation on the cooling circuits and detectors, which will be much colder than before, and that is a matter of controlling humidity.
Because the Tracker will not be in an hermetically sealed environment, despite an intensive programme of improvement, the humidity inside it will have to be controlled by blowing in dry gas to force out all of the water vapour. In addition to the Tracker itself, the nearby coolant pipes – which will also be at low temperature because of the coolant – are not well insulated. The collaboration will have to make sure that the detector and nearby pipework are dry, to avoid condensation and the growth of ice, which can inflict major damage.
CMS will require substantially more dry gas (nitrogen during operation, air during maintenance) than previously (up to a few hundred normal cubic metres per hour are envisaged) making it no longer cost-effective to purchase liquefied nitrogen. The collaboration has therefore procured an on-site plant that extracts the water vapour and, optionally, the oxygen from air, outputting a dry atmosphere with (optionally) 95% nitrogen. This plant is a relatively large piece of equipment that requires integration, installation and commissioning. It will be deployed in a few months’ time, after the detector is opened up, to confirm that the improved sealing system works well enough to allow the Tracker to run at a much reduced temperature after LS1 and beyond. This is the number one priority for CMS for the shutdown.
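To see why the dew point matters, a rough estimate with the Magnus approximation is enough (the numbers below are hypothetical illustrations, not CMS specifications):

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) from the Magnus formula; illustrative only."""
    b, c = 17.62, 243.12                      # Magnus coefficients
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# Hypothetical cavern-like air: 20 deg C at 40% relative humidity.
print(f"dew point ~ {dew_point_c(20.0, 40.0):.1f} deg C")   # roughly +6 deg C

# Any surface colder than that dew point collects condensation, so cooling
# circuits running tens of degrees below zero must sit in gas dried to a
# dew point well below their own temperature -- hence the dry-gas flushing.
```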
During the normal year-end technical stop of 2016–2017, the collaboration will install the Phase 1 upgrade of the CMS pixel tracker, which is the closest physics detector to the collision point. This will feature an additional, fourth layer, among other improvements. To get the first layer as close to the collision point as possible, a smaller-diameter beampipe will be installed during LS1, with an outer diameter of 45 mm – compared with the current 59.6 mm. The additional pixel layer will improve the CMS experiment’s ability to tell where a track comes from, which vertex it comes from or if, indeed, it comes from a primary vertex at all. Running under conditions of high pile-up, resolving which tracks and clusters belong to which vertices is absolutely crucial for the physics analyses.
Although replacing the pixel tracker will require a shutdown of only three to four months, installing a new beampipe will take significantly longer – more than a year – so this has to take place during LS1. It is a delicate operation that requires the detector to be in its most open condition with the pixels removed. Once the new beampipe is in place, the collaboration will conduct a dry run by installing a “3D print” of the new pixel detector: a shell that represents the volume of the detector. This is to make sure that the operation can be performed rapidly with the real object, that it does not jam anywhere and that the adjustment systems all work.
More for muons
Another major element of the CMS plans for LS1 features work to improve the muon detectors. The original design for the endcap part of this system had four triggering and measurement stations for muons but the fourth layer was not considered essential for initial operation. However, to function effectively in the future, the fourth layer is now needed to provide more discriminatory power between interesting muons and fake signatures from mismeasurement or background. Hundreds of detector components have to be built and installed. The biggest assembly site is in Building 904 on CERN’s Prévessin site, where teams from CERN and around the world, including the US, China, Russia, Korea, Pakistan and Italy, are halfway through the detector-construction project. Meanwhile, preparations are well advanced for a consolidation of the barrel part of the muon system; some key on-board electronics will be moved from the underground experimental cavern to the neighbouring service cavern, thus taking advantage of the accessibility of this latter cavern for maintenance activities even during LHC operation.
Associated with the installation of the fourth endcap layer is the refurbishment of chambers in the first layer. The inner wires of these chambers were read out in groups in the initial version of CMS. This was fine for lower collision rates but in future the full granularity of this detector layer will be required. In addition, the electronics are not optimal for the expected higher collision rates, so the collaboration is going to replace all of the on-board electronics. The electronics from the first layer will be reused to provide electronics for the outer layer, where it is easier to cope with the collision rate. A special operational support centre has been built at Point 5 specifically for this refurbishment task and for other detector activities, including cold-storage of the pixel tracker while the new beampipe is fitted. Because some elements to be stored or modified may have been activated by radiation, the centre includes a controlled workshop area.
New shielding discs, 10 cm deep, are to be installed outside the new fourth muon stations on the endcap yoke on either end of the detector. Each shielding disc is made of 12 iron sector-casings filled with a special concrete. Following manufacture and preassembly tests in Pakistan, these discs – whose preparation has taken five years, with the design finished only two years ago – are now being re-assembled and filled at CERN. The first has just been completed. The concrete, developed for this specific application by CERN’s civil engineers, is almost 50% denser than normal concrete – it is made using haematite (or ferric oxide) instead of the usual sand – and it is loaded with boron to absorb low-energy neutrons that would otherwise give rise to unwanted hits in the detector. Installing these massive 14-m-diameter shielding discs will decrease the overall density of neutrons flying around the cavern.
The new 100-tonne shielding discs represent the first large mechanical elements of CMS to be constructed entirely underground in the experimental cavern, because the heavy-duty cranes – used to lower each of the existing elements of CMS in their entirety – are no longer installed at Point 5. (The CMS experiment was unique in being constructed in massive “slices” above ground). Each disc will have to be taken apart into its 12 component sectors for lowering and then be rebuilt in a vertical position underground. The shielding discs will have an installed clearance to the new detector layer of around 10–20 mm, so it will be a delicate operation and the logical course of action is to install the discs before the detectors.
The magnet and other systems
The consolidation and upgrade programme aims to equip CMS for running well into the 2030s, and a key element of operating for another two decades will be the CMS magnet – a unique object that is impossible to envisage replacing. Changes are being made to ensure that the experiment is not vulnerable to a major breakdown of the supporting cryogenic system, which could prevent CMS from running for a long time, and to avoid unnecessary on–off cycles, which could prematurely age the magnet.
It is important to remember that the detector was designed for 10 years of operation, with a cycle of 7 months of operation and 5 months of shutdown, and a technical stop every three weeks. In practice, there have been three years of continuous operation with only short winter stops – not long enough to open the detector up for thorough servicing – and a technical stop every 6, 8 or 10 weeks. This is a radically different scenario from the one for which CMS was built. Although the detector has performed well, there is a pressing need to consolidate it for the new regime. For the magnet consolidation, the obvious change is to install a duplicate compressor plant, to mitigate the risk of failure of the existing plant at Point 5, which has compressors that have run well beyond the recommended service interval of 40,000 hours without maintenance.
The electrical system is going to be completely revised so that the two levels of the underground service cavern will be supplied through the UPS (uninterruptible power supply) system, giving better protection against power glitches. There will also be cooling modifications, not only to make the magnet more robust but also to accommodate the new detectors of the fourth muon layer, the new operating conditions for the Tracker and the future pixel tracker. Many of these modifications have to be put in place during LS1 because there will not be adequate time to do so later.
All of the photo-transducers of the Hadron Calorimeter (HCAL) are to be replaced. Although the work will only be finished during subsequent shutdowns, it is important to begin now while the CMS teams have access to detector components that will not be accessible later. For example, it might not be possible to access the outer HCAL (HO) on the central yoke wheel during LS2, whereas this can be done in the current shutdown.
To be sure that all systems are running well, the collaboration will repeat the Cosmic Run At Four Tesla (CRAFT) (actually 3.8 T) exercise in late summer of 2014, after closing the yoke and testing the magnet. Although there will be no collisions, the detector will record valuable calibration and commissioning data from cosmic rays. If there is a problem with the new cooling systems or with the humidity control of the Tracker, for example, this should be detected promptly and should give the teams just enough time to open up the detector, do whatever needs to be done to fix it and close it again, before the designated end of the shutdown.
The schedule for 2013 is planned in fine detail with a list of hundreds of tasks that are currently being translated into day-to-day planning schematics, and with work packages that have to be understood, approved, checked for co-activity, possible radiological factors and so forth. In addition, amid all this important technical work, the CMS collaboration will attempt to welcome around 20,000 visitors to the site at Point 5 over the course of the year. The coming two years might be described as a shutdown period for the LHC and its experiments but life at Point 5 will be as busy as it has ever been.
Mid-February marked the end of the first three-year run of the LHC. The machine exceeded all expectations, delivering significantly more data to the experiments than initially foreseen, and high-performance distributed computing enabled physicists to announce the discovery of a new particle on 4 July (CERN Courier September 2012 p46). With the first run now over, it is a good time to look back at the Worldwide LHC Computing Grid to see what was initially planned, how it performed and what is foreseen for the future.
Back in the late 1990s, it was already clear that the expected amount of LHC data would far exceed the computing capacity at CERN alone. Distributed computing was the sensible choice. The first model proposed was MONARC (Models of Networked Analysis at Regional Centres for LHC Experiments), on which the experiments originally based their computing models (CERN Courier June 2000 p17). In September 2001, CERN Council approved the first phase of the LHC Computing Grid project, led by Les Robertson of CERN’s IT department (CERN Courier November 2001 p5). From 2002 to 2005, staff at CERN and collaborating institutes around the world developed prototype equipment and techniques. From 2006, the LHC Computing Grid became the Worldwide LHC Computing Grid (WLCG) as global computing centres became connected to CERN to help store data and provide computing power.
WLCG uses a tier structure with the CERN data centre as Tier-0 (figure 1). CERN sends out data to each of the 11 major data centres around the world that form the first level, or Tier-1, via optical-fibre links working at multiples of 10 Gbit/s. Each Tier-1 site is then linked to a number of Tier-2 sites, usually located in the same geographical region. Computing resources are supported by the national funding agencies of the countries where each tier is located.
Exceeding expectations
Before the LHC run began, the experiment collaborations had high expectations for the Grid. Distributed computing was the only way that they could store, process and analyse the data – both simulated and real. But, equally, there was some hesitation: the scale of the data processing was unprecedented and it was the first time that analysis had been distributed in this way, dependent on work done at so many different places and funded by so many sources.
There was caution on the computing side too; concerns about network reliability led to built-in complexities such as database replication. As it turned out, the network performed much better than expected. Networking in general saw a big improvement, with connections of 10 Gbit/s becoming more or less standard to the many university departments where the tiers are housed. Greater reliability, greater bandwidth and greater performance led to increased confidence. The initial complexities and the need for database replication diminished, and over time the Grid became simpler, with a greater reliance on central services run at CERN.
A wealth of data
Network improvements, coupled with the reduced cost of computing hardware, meant that more resources could be provided. Improved performance allowed the physics to evolve as the LHC experiments increased their trigger rates to explore more regions than initially foreseen, thus increasing the instantaneous data rate. LHCb now writes as much data as had initially been estimated for ATLAS and CMS. In 2010, the LHC produced its nominal 15 petabytes (PB) of data a year. Since then, the annual volume has increased to 23 PB in 2011 and 27 PB in 2012. LHC data contributed about 70 PB to the recent milestone of 100 PB of CERN data storage (see p6).
In ATLAS and CMS, at least one collision took place every 50 ns, i.e. at a frequency of 20 MHz. The ATLAS trigger output rate increased over the years, reaching up to 400 Hz into the main physics streams in 2012 and giving more than 5.5 × 109 recorded physics collisions. CMS collected more than 1010 collision events after the start of the run and reconstructed more than 2 × 1010 simulated crossings.
For ALICE, the most important periods of data-taking were the heavy-ion (PbPb) periods – about 40 days in 2010 and 2011. The collaboration collected some 200 million PbPb events with various trigger set-ups. These periods produced the bulk of the data volume in ALICE and their reconstruction and analysis required the biggest amount of CPU resources. In addition, the ALICE detector operated during the proton–proton periods and collected reference data for comparison with the heavy-ion data. In 2013, just before the long shutdown, ALICE collected asymmetrical proton–lead collisions with an interaction versus trigger rate of 10%. In total, from 2010, ALICE accumulated about 8 PB of raw data. Add to that the reconstruction, Monte Carlo simulations and analysis results, and the total data volume grows to about 20 PB.
In LHCb, the trigger reduces 20 million collisions a second to 5000 events written to tape each second. The experiment produces about 350 MB of raw data per second of LHC running, with the total raw data recorded since the start of LHC at about 3 PB. The total amount of data stored by LHCb is 20 PB, of which about 8 PB are on disk. Simulated data accounts for about 20% of the total. On average, about one tenth of the jobs running concurrently on the WLCG come from LHCb.
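These figures hang together under simple assumptions about LHC live time (the live-time numbers below are rough guesses for illustration, not official accounting):

```python
# Order-of-magnitude check of the data volumes quoted above
# (effective live times are assumptions made for illustration only).

# ATLAS: up to ~400 Hz into the main physics streams in 2012
assumed_live_seconds_2012 = 1.4e7
atlas_events = 400 * assumed_live_seconds_2012
print(f"ATLAS 2012 physics events ~ {atlas_events:.1e}")      # ~5.6e9, same
# ballpark as the >5.5e9 recorded collisions quoted above.

# LHCb: ~5000 events/s to tape at ~350 MB/s of raw data
assumed_live_seconds_per_year = 5.0e6
lhcb_raw_pb_per_year = 350e6 * assumed_live_seconds_per_year / 1e15
print(f"LHCb raw data ~ {lhcb_raw_pb_per_year:.1f} PB/year")   # ~1.8 PB/year,
# i.e. roughly 3 PB over the full run once the shorter 2010 run and varying
# machine availability are folded in.
```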
The WLCG gives access to vast distributed resources across the globe in Tier-1 and Tier-2 sites, as well as to additional voluntary resources from interested institutions, ensuring built-in resilience because the analysis is not performed in a single data centre and hence is not dependent on that centre. It also makes the LHC data available worldwide at the same time.
As time has gone on, the Tier-2 sites have been used far more than foreseen (figure 2). Originally thought to be just for analysis and Monte Carlo simulations, the sites can now do much more with more resources and networking than anticipated. They currently contribute to data reprocessing, normally run at Tier-1 sites, and have enabled the Grid to absorb peak loads that have arisen when processing real data as a result of the extension of the LHC run and the higher-than-expected data collection rates. Because the capacity available at Tier-0 and Tier-1 was insufficient to process new data and reprocess earlier data simultaneously, the reprocessing activity was largely done on Tier-2s. Without them it would not have been possible to have the complete 2012 data set reprocessed in time for analyses targeting the winter conferences in early 2013.
The challenges for the Grid were three-fold. The main one was to understand how best to manage the LHC data and use the Grid’s heterogeneous environment in such a way that physicists could concentrate on analysis without needing to know where their data were. A distributed system is more complex and demanding to master than the usual batch-processing farms, so the physicists required continuous education on how to use the system. The Grid also needs to be fully operational at all times (24/7, 365 days/year) and should “never sleep” (figure 3), meaning that important upgrades of the Grid middleware in all data centres must be done on a regular basis. For the latter, the success can be attributed in part to the excellent quality of the middleware itself (supplied by various common projects, such as WLCG/EGEE/EMI in Europe and OSG in the US, see box) and to the administrators of the computing centres, who keep the computing fabric running continuously.
Requirements for the future
With CERN now entering its first long shutdown (LS1), the physicists previously on shift in the control rooms are turning to analysis of the data. Hence LS1 will not be a period of “pause” for the Grid. In addition to analysis, the computing infrastructure will undergo a continual process of upgrades and improvements.
The computing requirements of ALICE, ATLAS, CMS and LHCb are expected to evolve and increase in conjunction with the experiments’ physics programmes and the improved precision of the detectors’ measurements. The ALICE collaboration will re-calibrate, re-process and re-analyse the data collected from 2010 until 2013 during LS1. After the shutdown, the Grid capacity (CPU and storage) will be about 30% more than that currently installed, which will allow the experiment to resume data-taking and immediate data processing at the higher LHC energy. The ATLAS collaboration has an ambitious plan to improve its software and computing performance further during LS1 to moderate the increase in hardware needs. They nonetheless expect a substantial increase in their computing needs compared with what was pledged for 2012. The CMS collaboration expects the trigger rate – and subsequently the processing and analysis challenges – to continue to grow with the higher energy and luminosity after LS1. LHCb’s broader scope to include charm physics may increase the experiment’s data rate by a factor of about two after LS1, which would require more storage on the Grid and more CPU power. The collaboration also plans to make much more use of Tier-2 sites for data processing than was the case up until now.
For the Grid itself, the aim is to make it simpler and more integrated, with work now underway to extend CERN’s Tier-0 data centre, using resources at CERN and the Wigner Research Centre in Budapest (CERN Courier June 2012 p9). Equipment is already being installed and should be fully operational in 2013.
Future challenges and requirements are the result of great successes. Grid performance has been excellent and all of the experiments have not only been good at recording data, but have also found that their detectors could even do more. This has led to the experiment collaborations wanting to capitalize on this potential. With a wealth of data, they can be thankful for the worldwide computer, showing global collaboration at its best.
Worldwide LHC Computing Grid in numbers
• About 10,000 physicists use it
• On average well in excess of 250,000 jobs run concurrently on the Grid
• 30 million jobs ran in January 2013
• 260,000 available processing cores
• 180 PB disk storage available worldwide
• 15% of the computing resources are at CERN
• 10 Gbit/s optical-fibre links connect CERN to each of the 11 Tier-1 institutes
• There are now more than 70 PB of stored data at CERN from the LHC
Beyond particle physics
Throughout its lifetime, WLCG has worked closely with Grid projects co-funded by the European Commission, such as EGEE (Enabling Grids for E-sciencE), EGI (European Grid Infrastructure) and EMI (European Middleware Initiative), or funded by the US National Science Foundation and Department of Energy, such as OSG (Open Science Grid). These projects have provided operational and developmental support and enabled wider scientific communities to use Grid computing, from biologists who simulate millions of molecular drug candidates to find out how they interact with specific proteins, to Earth-scientists who model the future of the planet’s climate.
Imagine the mass of the entire Sun squeezed into a radius of just 10 km. This is about the density of a neutron star – the highest density known in the cosmos. These extremely dense objects are the residues of core-collapse supernova explosions, which is how a significant fraction of the stars in the universe finish their lives. They are often found in binary systems that eventually merge, in principle radiating detectable gravitational waves. Another tantalizing possibility is that the ejecta from these events might enrich the interstellar medium with heavy elements, created by a rapid neutron-capture process (the r process). The composition of neutron stars is therefore important, yet the description of these ultracompact objects remains one of the biggest challenges facing nuclear and particle physics today.
As the name implies, neutron stars are essentially – but not wholly – composed of neutrons. As figure 1 shows, neutron stars are thought to consist of three layers: a homogeneous core and two concentric shells (Lattimer and Prakash 2004). The surface of the star contains only nuclei that are stable under natural terrestrial conditions. Below this “outer crust”, however, the rapidly increasing internal densities form nuclei that are increasingly neutron-rich, eventually reaching the “drip line”, or the brink of nuclear stability. This marks the transition to the “inner crust”, which is an inhomogeneous assembly of neutron–proton clusters and unbound neutrons that is neutralized by a quasi-uniform electron gas. Deeper into the star, the clusters start to smooth out, giving way to the inner core whose structure is the source of much debate.
Magic numbers
A landmark paper in 1971 presented a model for neutron stars that assumed cold, catalysed matter in which increasingly heavy and neutron-rich nuclides (resulting from electron capture) exist in a state of equilibrium for beta-decay processes (Baym, Pethick and Sutherland 1971). The effects of the shell structure of nuclei mean that the nuclides residing in neutron-star crusts will cluster around the “magic” neutron numbers, N = 50 and 82, which correspond to closed shells (see figure 2). Indeed, one of the outstanding questions in nuclear physics is whether these magic numbers retain their “supernatural” characteristics in nuclides far from stability. The most exotic N = 50 and N = 82 species are therefore the priority for many experiments in nuclear physics.
Neutron-star crusts present a situation in which solid-state physics is combined with nuclear physics and relativistic gravitation. Although it will remain impossible to create such conditions in the laboratory, recent developments in nuclear theory are now providing consistent and accurate knowledge of nuclear binding energies and a nuclear equation of state that can help to place the composition of the outer crust on firm ground. In analogy with ice cores, scientists can “drill” into the neutron star to determine the most abundant species in each layer. Using known masses, the composition of the outer crust has been well determined to a depth of about 215 m (for the star shown in figure 1) but deeper knowledge relies on theoretical models of nuclear masses. However, different state-of-the-art mass models do not predict the same composition and they can be tested only by high-precision mass measurements on further exotic species.
Unlike many scenarios in nucleosynthesis, where astrophysical uncertainties dominate those resulting from nuclear physics, the astrophysical conditions in the neutron-star crust are relatively robust. This is because the crust resembles a crystalline semiconductor in a sea of charge carriers, except that it is a lattice of neutron-rich nuclides surrounded by neutrons. The lattice and thermodynamic conditions are therefore well defined, so the crustal composition depends mainly on the nuclear binding energies.
The ISOLTRAP Penning-trap mass spectrometer at CERN’s ISOLDE radioactive-beam facility has pioneered the art of online precision mass measurements. It uses static electric and magnetic fields to confine ions in an unperturbed environment, so that the exotic nuclides produced by ISOLDE can be weighed accurately. Recently, an advance in mass spectrometry with the ISOLTRAP experiment, combined with state-of-the-art purification techniques at ISOLDE, has enabled a first measurement of the mass of 82Zn, an exotic nuclide predicted to reside in neutron-star crusts (Wolf et al. 2013).
The ISOLDE facility produces exotic zinc isotopes by fission in a uranium-carbide target bombarded by the 1.4 GeV proton beam from CERN’s PS-Booster (PSB). Because protons also induce transmutation through the process of spallation, other, neutron-deficient, elements having the same mass number (isobars) are also produced. Isobaric contamination is the worst enemy of exotic nuclides because its intensity can be up to a million times higher than that of the isobar being sought.
The first line of defence against this is a special version of an ISOLDE target that includes a tungsten converter unit. Instead of aiming for the target itself, the PSB operators bear left to hit the converter. The result is an effusion of slow neutrons that induce fission in the nearby target material, without producing the isobaric contamination that would result from direct spallation reactions. With only neutron-rich isobars produced, the next line of defence is a highly selective, three-step laser excitation tuned to ionize only zinc isotopes. Yet another trick is then pulled from ISOLDE’s sleeve to eliminate residual surface-ionized isobars: a temperature-controlled quartz transfer line between the target and the ion source. Nevertheless, despite these state-of-the-art precautions, more than 6000 ions per second of 82Rb were still present in the beam delivered to ISOLTRAP, compared with just a few ions of zinc, making this one of the most challenging measurements of exotic nuclides to date.
To measure 82Zn, yet another type of ion trap was integrated into the suite of Paul and Penning traps that make up ISOLTRAP’s mass spectrometer. The multi-reflection time-of-flight mass separator (MR-ToF MS), shown in figure 3, allowed residual 82Rb+ contaminants to be separated in time after multiple reflections between electrostatic mirrors. The advantage over purification in Penning traps is a mass-resolving power in excess of 100,000, obtained in only about 15 ms. From the MR-ToF MS, the short-lived 82Zn+ ions were sent through an electronic beam gate, opened briefly for 82Zn but otherwise closed to block the contaminants. The purified sample was transferred to the first of two Penning traps situated in individual superconducting solenoids, where the ions were cooled in a helium buffer gas in preparation for the final mass measurement in the second, hyperbolic high-precision Penning trap. There, the standard time-of-flight ion-cyclotron-resonance technique was used to determine the mass. This successful implementation of the MR-ToF MS represents a pioneering advance in mass spectrometry.
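The principle of the time-of-flight separation can be sketched simply (the numerical values below are hypothetical, chosen only to reproduce the scale quoted above): because an ion’s flight time scales as the square root of its mass-to-charge ratio, isobars of slightly different mass drift apart over a long enough flight path, and the resolving power is R = m/Δm = t/(2Δt).

```python
def tof_resolving_power(flight_time_s: float, peak_width_s: float) -> float:
    """Mass resolving power of a time-of-flight separator.

    Since t scales as sqrt(m/q), a relative mass difference dm/m maps onto a
    time difference dt/t = dm/(2m), giving R = m/dm = t/(2*dt).
    """
    return flight_time_s / (2.0 * peak_width_s)

# Hypothetical numbers, for illustration of the scale only:
flight_time = 15e-3      # s -- of order the ~15 ms quoted for the MR-ToF MS
peak_width = 70e-9       # s -- an assumed time-of-flight peak width
print(f"R ~ {tof_resolving_power(flight_time, peak_width):,.0f}")   # ~107,000
```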
Drilling deeper
Probing neutron-star composition requires solving relativistic equations, known as the Tolman–Oppenheimer–Volkoff (TOV) equations, that govern hydrostatic equilibrium in neutron-degenerate matter. The TOV equations relate pressure and mass-energy to the neutron-star radius and therefore require an equation of state. Stable- and radioactive-beam facilities have provided substantial information about the equations of state of finite nuclei but even the most exotic systems studied have proton fractions of 25–30%, far larger than the few per cent found in neutron stars. With this in mind, a Brussels–Montréal collaboration has developed a model for predicting nuclear binding energies based on the Skyrme force – an effective interaction between nucleons that also provides an equation of state – within the same theoretical framework (Pearson et al. 2011).
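For reference, the TOV equations mentioned here take the standard textbook form (with m(r) the gravitational mass enclosed within radius r, ρ the mass density and P the pressure), closed by an equation of state P(ρ):

$$\frac{\mathrm{d}P}{\mathrm{d}r} = -\,\frac{G\,m(r)\,\rho(r)}{r^{2}}
\left[1+\frac{P(r)}{\rho(r)c^{2}}\right]
\left[1+\frac{4\pi r^{3}P(r)}{m(r)c^{2}}\right]
\left[1-\frac{2G\,m(r)}{r c^{2}}\right]^{-1},
\qquad
\frac{\mathrm{d}m}{\mathrm{d}r} = 4\pi r^{2}\rho(r).$$

In the Newtonian limit the three bracketed factors tend to unity and ordinary hydrostatic equilibrium is recovered; in a neutron star all three are significant, which is why the relativistic form is needed.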
With the new 82Zn mass, calculations were performed to “drill” deeper into the neutron-star crust. This was done by minimizing the Gibbs free energy per nucleon, where the total pressure at a given depth is determined by the electron pressure and the lattice pressure. The abundances of all neighbouring nuclides were calculated for an array of nucleon densities and pressures. Finally, the depths of the crust at which the nuclides are formed can be found using the TOV equations. Figures 1 and 2 illustrate the results. Because the new mass shows 82Zn to be considerably less bound than predicted by the mean-field model HFB-19, 82Zn is no longer present in the neutron-star crust. The nuclide 80Zn remains, but its presence is now constrained experimentally – deeper towards the core than predicted by HFB-19. This result has extended knowledge of the crust composition of neutron stars to new depths.
This composition may have relevance for the nucleosynthesis of heavy elements by the r process, named for the series of rapid neutron captures involved (Arnould et al. 2007). The decompression of neutron-star matter brought about by tidal effects in a merger with a black hole or another neutron star allows an r process to occur as the ejected clump vaporizes into the interstellar medium. While the total ejected mass per event is relatively low, it can still explain the total enrichment of r nuclei in the Galaxy; moreover, the calculated abundance distribution is tantalizingly close to that observed in the Solar System. The robustness of these predictions to variations of the input parameters makes the composition of neutron stars one of the most promising avenues for addressing the important question of the origin of the elements.
Michel Borghini, who passed away unexpectedly on 15 December 2012, was at CERN for more than 30 years. Born in 1934, Michel was a citizen of Monaco. He graduated from Ecole Polytechnique in 1955 and went on to obtain a degree in electrical engineering from Ecole Supérieure d’Electricité, Paris, in 1957. He then joined the group of Anatole Abragam at what was the Centre d’Etudes Nucléaires, Saclay, where he took part in the study of dynamic nuclear polarization that led to the development of the first polarized proton targets for use in high-energy physics experiments. It was here that he gained the experience that he was to develop at CERN, to the great benefit of experimental particle physics.
The basic aim with a polarized target is to line up the spins of the protons, say, in a given direction. In principle, this can be done by aligning the spins with a magnetic field, but the magnitude of the proton’s magnetic moment is such that it takes little energy to knock the spins out of alignment; thermal vibrations are sufficient. Even at low temperatures and reasonably high magnetic fields, the polarization achieved by this “brute force” method is small: only 0.1% at a temperature of 1 K and in an applied magnetic field of 1 T. To overcome this limitation, dynamic polarization exploits the much larger magnetic moment of electrons by harnessing the coupling of free proton spins in a material with nearby free electron spins. At temperatures of about 1 K, the electron spins are almost fully polarized in an external magnetic field of 1 T and the application of microwaves of around 70 GHz induces resonant transitions between the spin levels of coupled electron–proton pairs. The effect is to increase the natural, small proton polarization by more than two orders of magnitude. The polarization can be reversed with a slight change of the microwave frequency, with no need to reverse the external magnetic field.
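The figures quoted here follow from the thermal-equilibrium polarization of a spin-1/2 moment, P = tanh(μB/kT) – a textbook estimate sketched below, not a calculation taken from the article’s sources:

```python
import math

def thermal_polarization(mu_j_per_tesla: float, b_tesla: float, temp_k: float) -> float:
    """Thermal-equilibrium polarization of a spin-1/2 moment: P = tanh(mu*B / (k*T))."""
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    return math.tanh(mu_j_per_tesla * b_tesla / (k_b * temp_k))

MU_PROTON = 1.4106e-26     # J/T
MU_ELECTRON = 9.2848e-24   # J/T, about 660 times larger

print(f"protons,   1 T, 1 K:     {thermal_polarization(MU_PROTON, 1.0, 1.0):.2%}")    # ~0.10%
print(f"electrons, 1 T, 1 K:     {thermal_polarization(MU_ELECTRON, 1.0, 1.0):.0%}")  # ~59%
print(f"electrons, 2.5 T, 0.5 K: {thermal_polarization(MU_ELECTRON, 2.5, 0.5):.1%}")  # ~99.8%
# The electron moment is so much larger that electron spins approach full
# polarization at the fields and temperatures used for polarized targets --
# the reservoir that dynamic nuclear polarization transfers to the protons.
```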
First experiments
In 1962, Abragam’s group, including Michel, reported on what was the first experiment to measure the scattering of polarized protons – in this case a 20 MeV beam derived from the cyclotron at Saclay – off a polarized proton target (Abragam et al. 1962). The target was a single crystal of lanthanum magnesium nitrate (La2Mg3(NO3)12.24H2O or LMN), with 0.2% of the La3+ replaced with Ce3+, yielding a proton polarization of 20%.
Michel moved to CERN three years later, where he and others from Saclay and CERN had just tested a polarized target in an experiment on proton–proton scattering at 600 MeV at the Synchrocyclotron (SC). Developed by the Saclay group for the higher energy beams of the Proton Synchrotron (PS), the target consisted of a crystal of LMN 4.5 cm long with transverse dimensions 1.2 cm × 1.2 cm and doped with 1% neodymium. It was cooled to around 1 K in a 4He cryostat built in Saclay by Pierre Roubeau, in the field of a 1.8 T magnet designed by CERN’s Guido Petrucci and built in the SC workshop. This target, with an average polarization of around 70%, was used in several experiments at the PS between 1965 and 1968, in both pion and proton beams with momenta of several GeV/c. These experiments measured the polarization parameter for π± elastic scattering and for the charge-exchange reaction π–p→π0n at small values of t, the square of the four-momentum transfer, typically, |t| < 1 GeV2.
In LMN crystals, the fraction of free, polarized protons is only around 1/16 of the total number of target protons. As a consequence, the unpolarized protons bound in the La, Mg, N and O nuclei formed a serious background in these early experiments. This background was reduced by imposing on the final-state particles the strict kinematic constraints expected from the collisions off protons at rest; the residual background was then subtracted by taking special data with a “dummy” target containing no free protons.
Michel’s group at CERN thus began investigating the possibility of developing polarized targets with a higher content of free protons. In this context, in 1968 Michel published two important papers in which he proposed a new phenomenological model of dynamic nuclear polarization: the “spin-temperature model” (Borghini 1968a and 1968b). The model suggested that sizable proton polarizations could be reached in frozen organic liquids doped with paramagnetic radicals. Despite some initial scepticism, in 1969 Michel’s team succeeded in measuring a polarization of around 40% in a 5 cm3 sample consisting of tiny beads made from a frozen mixture of 95% butanol (C4H9OH) and 5% water saturated with the free-radical porphyrexide. The beads were cooled to 1 K in an external magnetic field of 2.5 T and the fraction of free, polarized protons in the sample was around 1/4 – some four times higher than in LMN (Mango, Runólfsson and Borghini 1969).
The group at CERN went on to study a large number of organic materials doped with free paramagnetic radicals, searching for the optimum combination for polarized targets. In this activity, in which cryostats based on 3He–4He dilution and capable of reaching temperatures below 0.1 K were developed, Michel guided two PhD students: Wim de Boer of the University of Delft (now professor at the Karlsruhe Institute of Technology) and Tapio Niinikoski of the University of Helsinki, who went on to join CERN in 1974. They finally obtained polarizations of almost 100% in samples of propanediol (C3H8O2) doped with chromium (V) complexes and cooled to 0.1 K in a field of 2.5 T, with 19% free, polarized protons.
In this work, the concept of spin temperature that Michel had proposed was verified by polarizing several nuclei simultaneously in a special sample containing 13C and deuterons. The nuclei had different polarizations but their values corresponded to a single spin temperature in the Boltzmann formula giving the populations of the various spin states.
These targets were used in a number of experiments at CERN, at both the PS and the Super Proton Synchrotron (SPS). They measured polarization parameters in the elastic scattering of pions, kaons and protons on protons in the forward diffraction region and at backward scattering angles; in the charge-exchange reactions K–p → K̄0n and p̄p → n̄n; in the reaction π–p → K0Λ0; and in proton–deuteron scattering. In all of these experiments, Jean-Michel Rieubland of CERN provided invaluable help to ensure smooth operation of the targets.
In the early 1970s, Michel also initiated the development of “frozen spin” targets. In these targets, the proton spins were first dynamically polarized in a high, uniform magnetic field, and then cooled to a low enough temperature so that the spin-relaxation rate of the protons would be slow even in lower magnetic fields. The targets could then be moved to the detector, thus providing more freedom in the choice of magnetic spectrometers and orientations of the polarization vector. The first frozen spin target was successfully operated at CERN in 1974.
In 1969, Michel took leave from CERN to join the Berkeley group led by Owen Chamberlain working at SLAC, where he took part in a test of T-invariance in inelastic e± scattering from polarized protons in collaboration with the SLAC group led by Richard Taylor. The target, built at Berkeley, was made of butanol and the SLAC 20 GeV spectrometer served as the electron (and positron) analyser. The experiment measured the up–down asymmetry for transverse target spin for both electrons and positrons. No time-reversal violations were seen at the few per cent level.
Michel took leave to work at SLAC again in 1977, this time on a search for parity violation in deep-inelastic scattering of polarized electrons off an unpolarized deuterium target. Here, he worked on the polarized electron source and its associated laser, as well as on the electron spectrometer. The small parity-violation effects expected from the interference of the photon and Z exchanges were, indeed, observed and published in 1978. Michel then moved to the University of Michigan at Ann Arbor, where he joined the group led by Alan Krisch and took part in an experiment to measure proton–proton elastic scattering using both a polarized target and a 6 GeV polarized beam from the 12 GeV Zero Gradient Synchrotron at Argonne National Laboratory.
Michel left CERN’s polarized target group in 1978 and was succeeded by Niinikoski. Writing in 1985 on major contributions to spin physics, Chamberlain listed the people that he felt to be “the heroes – the people who have given [this] work a special push” (Chamberlain 1985). Michel is the only one whom he cites twice: with Abragam and colleagues, for the first polarized target and the first experiment to use such a target; and with Niinikoski, for their introduction of the frozen spin target and their demonstration of the advantages of powerful (dilution) refrigerators. Today, polarized targets with volumes of several litres and large 3He–4He dilution cryostats are still in operation, for example in the NA58 (COMPASS) experiment at the SPS, where the spin structure of the proton has been studied using deep-inelastic scattering of high-energy muons. Dynamic nuclear polarization has also found applications in medical magnetic-resonance imaging, and Michel’s spin-temperature model is still widely used.
In the 1980s, Michel took part in the UA2 experiment at CERN’s SPS proton–antiproton collider, where he contributed to the calibration of the 1444 analogue-to-digital converters (ADCs) used to measure the energy deposited in the central calorimeter. He wrote all of the software to drive the large number of precision pulse generators that monitored the stability of the ADCs during data-taking.
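To give a flavour of what such a calibration involves – this is a hypothetical sketch, not the UA2 software, and every name, number and threshold in it is invented for illustration – one injects a series of known pulse amplitudes into a channel, fits the recorded counts to extract a pedestal and a gain, and flags channels whose gain drifts between monitoring runs:

```python
# Hypothetical sketch of ADC calibration and stability monitoring (not UA2 code):
# fit recorded ADC counts against known injected pulse amplitudes to obtain a
# pedestal and gain per channel, then flag channels whose gain has drifted.
import numpy as np

def calibrate_channel(pulse_amplitudes, adc_counts):
    """Least-squares fit of adc_counts = pedestal + gain * pulse_amplitude."""
    gain, pedestal = np.polyfit(pulse_amplitudes, adc_counts, 1)
    return pedestal, gain

def gain_drift(reference_gain, new_gain):
    """Relative change of the gain with respect to a reference run."""
    return (new_gain - reference_gain) / reference_gain

# Made-up numbers: five injected amplitudes (arbitrary units) and the counts
# recorded in a reference calibration run and in a later monitoring run.
amplitudes = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
counts_ref = np.array([12.1, 52.0, 91.8, 132.2, 171.9])
counts_new = np.array([12.3, 51.0, 89.9, 129.1, 168.0])

_, g_ref = calibrate_channel(amplitudes, counts_ref)
_, g_new = calibrate_channel(amplitudes, counts_new)
drift = gain_drift(g_ref, g_new)
if abs(drift) > 0.01:  # arbitrary 1% threshold for flagging a drifting channel
    print(f"channel gain drifted by {100 * drift:.2f}%")
```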
From 1983 to 1996, he was a member of the Executive Committee of the CERN Staff Association, serving as its vice-president until 1990 and then as its president until June 1996. After retiring from CERN in January 1999, he returned to Monaco, where in 2003 he was appointed Permanent Representative of Monaco to the United Nations in New York, a post that he held until 2005.
Michel was an outstanding physicist, equally at ease with theory and in the laboratory. He had broad professional competence, a sharp analytical mind, imagination and organizational skill. He is well remembered by his collaborators for his wisdom and advice, and also for his quiet demeanour and his keen, but often subtle, sense of humour. His culture and interests extended well beyond science. He was also a talented tennis player. He will be sorely missed by those who had the privilege of working with him, or of being among his friends. Much sympathy goes to his two daughters, Anne and Isabelle, and to their families.
Last year, the Executive Committee of the European Physical Society (EPS) decided to revive the EPS Technology and Innovation Group (TIG) by launching a workshop to take stock of projected R&D and technological innovation in accelerator and particle-physics research, and of their potential spin-offs to society. The three-day workshop took place on 22–24 October at the Ettore Majorana Foundation and Centre for Scientific Culture in Erice, with some 25 participants. While it could not cover all ongoing technology and innovation activities, the workshop nevertheless provided the opportunity to review important developments based on international, interdisciplinary collaboration between research laboratories and university groups, supported by technology-transfer professionals as well as small and medium-sized enterprises (SMEs).
The workshop opened with a talk by Phil Bryant, formerly of CERN, on “Accelerators: a history of innovation and spin-off”. His review of the repeated reincarnation of accelerators during the 20th century illustrated the importance of these machines – developed originally for nuclear and particle physics – as a major spin-off from basic research to medical applications. In the following presentation, Ken Peach of Oxford University stressed the importance of close collaboration between accelerator scientists, oncologists, radiobiologists and biophysicists. In his overview of radiotherapy and proton therapy, he explained the underlying physics of radiation and its effects on tumour cells and normal tissue, the evolution of instruments and techniques, and the clinical aspects and challenges. He gave many examples and statistics, together with a list of improvements required after the transfer of instruments from accelerator laboratories to hospitals. His instructive overview of industrial solutions, new ideas and novel techniques showed the importance of this spin-off of technology from research to society. Lastly, he outlined the potential of radionuclide production for medical tracers and described a proposal for a new biomedical facility at the Low-Energy Ion Ring (LEIR) at CERN. Several other speakers also discussed this latter topic, highlighting the need for more research in radiation biophysics and presenting details of the proposal.
Complementary to the accelerators are the detectors, sensors and sophisticated electronic read-out chips that are now available for medical imaging. Jean-Marie Le Goff of CERN made the case for a low-energy cyclotron to produce isotopes for positron-emission tomography (PET). He covered the medical use of PET and computed-tomography technologies, presenting an overview of cyclotron manufacturers as well as applications in industry and in the production of radiopharmaceuticals. He also described the joint project by CERN and the Spanish research centre CIEMAT to develop an ultracompact cyclotron for single-dose production, in collaboration with an industrial consortium.
Another highlight was the talk by Michael Campbell of CERN on the incredible success story of the Medipix chip, which started in the days of the LAA project at CERN, driven by the LHC experiments’ requirements for hybrid-pixel detectors. Hybrid-pixel vertex detectors are now installed in the ALICE, ATLAS and CMS experiments at the LHC, and the same technology is used in the photon detectors of LHCb’s ring-imaging Cherenkov detectors. All four of these systems are making a significant contribution to the output of LHC physics.
Meanwhile, the Medipix2 and Medipix3 collaborations have applied the hybrid approach to a wide range of particle-imaging applications. While some were foreseen, many – such as low-energy electron microscopy and space-based dosimetry – were unimaginable at the start of the work. Background radiation can be seen with the Timepix chip, which has also become a powerful pedagogical tool for inspiring the next generation of scientists and engineers (see, for example, CERN Courier May 2010 p22). Moreover, thanks to the application of a novel read-out architecture in deep-submicron CMOS technology by the Medipix3 collaboration, high-resolution colour X-ray imaging is coming within reach.
To conclude on the topic of medical applications, Viviana Vitolo of the Italian National Centre for Oncological Treatment (CNAO) presented a clinical perspective on hadron therapy. She gave a complete overview of particle-therapy facilities in Europe and showed preliminary clinical results from CNAO. Although based on a small number of patients and a limited time-scale, the results are encouraging: at first follow-up none of the patients showed progressive disease and all showed stable disease. She proposed that the particle-therapy community should, over the coming years, produce evidence on the need for hadron therapy, define clinical indications and convince the decision-makers.
Advanced technologies
A second major topic of the workshop concerned R&D initiatives for the LHC upgrade and future research programmes at the collider. International collaboration in an innovative domain is exemplified by the latest studies and tests of crab cavities and superconducting-RF technology. For linear accelerators, there is an R&D effort to improve efficiency and energy recovery, with a scheme to recycle the otherwise lost beam power in light sources and colliders to produce the RF power needed for acceleration. There is also ongoing development, and transfer to industry, of superconducting-magnet technology, all of which relates to the proposed luminosity and energy increases of the LHC and its injectors.
European initiatives in detector R&D include the European Radiation Detection and Imaging Technology Platform (ERDIT). This activity is truly multidisciplinary, involving scientists from microelectronics, semiconductor materials, computing science and various application areas. There is a general understanding that the lack of advanced detectors is the limiting factor in many applications, although the multidisciplinary character of the work makes it hard to find funding. The ERDIT proposal is an initiative to bring detector scientists and users from different application fields together to put these issues onto the agenda of European funding agencies and industry. Even if the application requirements differ a great deal, several generic technologies are required to make an advanced detector: sensor materials, analogue-signal processing, digital processing, storage and communication, hybridization, mounting, packaging and information processing. By pushing the technological limits in these fields, it should be possible to develop high-quality devices that can be combined in different configurations to create new and advanced radiation detectors efficiently. ERDIT would then be a forum in which to discuss priorities and road maps for the research and to promote initiatives in this field.
Another interesting initiative is the ATLAS Technology Lab (ATLAB), which is an organized effort to support detector R&D in the ATLAS experiment. Using many examples from ATLAS today and the future goals for the detector’s performance, Marzio Nessi of CERN explained how the detector community could organize itself – in partnership with industry – to foster effective and necessary detector R&D. He also outlined ATTRACT, which is an initiative outside ATLAS that serves the radiation-sensor and imaging R&D community at large. As with ERDIT, this would work in close collaboration with industry – notably SMEs – to define a work programme for radiation detectors and their infrastructures and then distribute and manage related EU funding.
Talks on microelectronics and successful microchip projects complemented those on detector challenges. Microfabrication, for example, could allow services such as cooling to be integrated directly in silicon. At CERN, several such projects have been launched in the Physics Department (PH) in collaboration with experimental groups. Microchannel cooling has been adopted by the NA62 experiment for the Gigatracker, results have been published on prototypes, and the technology is also being studied for the upgrade of the Vertex Locator in the LHCb experiment. Microfluidic scintillation detectors are under consideration for single-particle tracking and calorimetry in the ATLAS and ALICE experiments and in the Compact Linear Collider design study, as well as for beam monitors for hadron therapy. In the long term, the formation of a competence centre in microfabrication within PH, building on the existing expertise in microelectronics design and wire-bonding module integration, could crucially advance the development of novel detectors for the LHC and future projects, providing exciting spin-offs to other fields.
New user facilities such as the European Spallation Source (ESS) are a trigger for innovation and collaboration with industry. Steve Peggs reported on this green-field construction project, a multilab collaboration with in-kind contributions from partners. He reviewed different types of spallation accelerator and the road map based on accelerator-driven systems (ADS). Neutron physics, which allows the observation of magnetic atoms and of atoms moving inside materials, has seen a steady evolution of performance from research reactors and pulsed sources. The ESS will increase the research potential through its projected high flux and high average availability. Peggs also addressed the technical options, challenges and final design of the multimegawatt ESS, for which energy efficiency and recovery are design goals. Starting up in 2019 with 1.5 MW and aiming at 5 MW by 2025, the project is a “wonderful challenge”.
Other new user facilities include MYRRHA in Belgium, a high-power research reactor based on ADS that will produce intense beams of secondary particles relevant for fundamental and applied science, and the Facility for Antiproton and Ion Research (FAIR), currently under construction at the GSI laboratory in Darmstadt.
Transferring technology
The workshop went on to review technology-transfer techniques and success stories. One notable success story is that of Cristoforo Benvenuti, formerly of CERN, who has achieved a one-to-one transfer of accelerator technologies to the domain of solar thermal panels, using the non-evaporable getter pumping that he introduced in the dipole chambers of the Large Electron–Positron (LEP) collider and the sputtering techniques that he developed for LEP’s superconducting RF cavities. Panel efficiency has been improved by evacuation, which decreases thermal losses; together with an intelligent process for soldering the front glass to the metal frame, this yields a competitive advantage. The resulting collector is highly efficient even when exposed to diffuse light, which may comprise more than 50% of the total daylight in central Europe. Fully automatic production of these collectors, using intelligent robots designed by SRB Energy – the company that Benvenuti created with the Spanish Grupo Segura – has resulted in the first orders.
Two presentations at the workshop were devoted to technology-transfer mechanisms. With proactive support for innovation and by improving the commercialization of research, the UK’s Science and Technology Facilities Council (STFC) has already obtained measurable progress, including open access to its laboratories for companies, new jobs created, new products taken to market, patent applications made and licensing agreements signed. In a new initiative, the STFC and CERN are jointly announcing a first call for a hi-tech start-up or SME looking to take high-energy physics technologies to commercial applications, as part of the programme for a new business-incubation centre. This is just one example of the large number of activities that come under CERN’s Knowledge Transfer group, which is keen to find help in promoting its many initiatives.
Computing infrastructure, networking and high-performance data-handling are all important topics in terms of technology transfer. Whereas the world wide web provides seamless access to information stored in many millions of different geographical locations, the Grid is an infrastructure that provides seamless access to computing power and data-storage capacity distributed around the globe. Bob Jones of CERN described the Worldwide LHC Computing Grid, which is vital for analysing the huge amount of data from the LHC. Such Grids are important not only for particle physics but also for other research communities and for business; in future, moves from Grids to clouds are foreseen. Jones also presented the successful CERN openlab project, a public–private partnership between the research community and industry. Volker Lindenstruth of the University of Frankfurt then described an innovative cooling-system architecture, the Green Cube, for the FAIR project’s high-performance computing backbone. He proposed investing in modern parallel programming to gain efficiency for the future computing needs of the Compressed Baryonic Matter experiment at FAIR and of ALICE at CERN, as well as for lattice-QCD analysis.
A video conference with Neville Reeve and Jean-Emmanuel Faure of the European Commission provided details of the Horizon 2020 programme. It was stressed during the workshop that transnational collaborations between research and industry are required – and, indeed, favoured – within this forthcoming programme. Projects following the model of CERN openlab, or of the ATTRACT and ERDIT initiatives described above, are clearly in line with these requirements, allowing the distribution and management of related Horizon 2020 funding on behalf of the EU as part of its efforts to externalize funding. The intention is for the community to suggest more such proposals for co-innovation and collaborative frameworks between industry and research infrastructures – frameworks that leverage the innovation potential and know-how gained by working together in areas of common interest, while at the same time offering new benefits for industry.