ESS: neutron beams at the high-intensity frontier

Today, neutron research takes place either at nuclear reactors or at accelerator-based sources. For a long time, reactors have been the most powerful sources in terms of integrated neutron flux. Nevertheless, accelerator-based sources, which usually have a pulsed structure (SINQ at PSI being a notable exception), can provide a peak flux during the pulse that is much higher than at a reactor. The European Spallation Source (ESS) – currently under construction in Lund – will be based on a proton linac that is powerful enough to give a higher integrated useful flux than any research reactor. When it comes into full operation early in the next decade, it will be the world’s most powerful facility for research using neutron beams. Although driven by the neutron-scattering community, the project will also offer the opportunity for experiments in fundamental physics, and there are plans to use the huge number of neutrinos produced at the spallation target for neutrino physics.

The story of the ESS goes back to the early 1990s, with a proposal for a 10 MW linear accelerator, a double compressor ring and two target stations. The aim was for an H⁻ linac to deliver alternate pulses to a long-pulse target station and to the compressor rings. The long-pulse target was to receive 2-ms-long pulses from the linac, while multiturn injection into the rings would provide a compression factor of 800 and allow a single turn of 1.4 μs to be extracted to the short-pulse target station.

This proposal was not funded, however, and after a short hiatus, new initiatives to build the ESS appeared in several European countries. By 2009, three candidates remained: Hungary (Debrecen), Spain (Bilbao) and Scandinavia (Lund). The decision to locate the ESS near Lund was taken in Brussels in May 2009, after a competitive process facilitated by the European Strategy Forum for Research Infrastructures and the Czech Republic’s Ministry of Research during its period of presidency of the European Union. In this new incarnation, the proposal was to build a facility with a single long-pulse target powered by a 5 MW superconducting proton linac (figure 1). The neutrons will be released from a rotating tungsten target hit by 2 GeV protons emerging from this superconducting linac, with its unprecedented average beam power.

Neutrons have properties that make them indispensable as tools in modern research. They have wavelengths and energies such that objects can be studied with a spatial resolution between 10⁻¹⁰ m and 10⁻² m, and with a time resolution between 10⁻¹² s and 1 s. These length- and time-scales are relevant for dynamic processes in bio-molecules, pharmaceuticals, polymers, catalysts and many types of condensed matter. In addition, neutrons interact quite weakly with matter, so they can penetrate large objects, allowing the study of materials surrounded by vacuum chambers, cryostats, magnets or other experimental equipment. Moreover, in contrast to the scattering of light, neutrons interact with atomic nuclei, so that neutron scattering is sensitive to isotope effects. As an extra bonus, neutrons also have a magnetic moment, which makes them a unique probe for investigations of magnetism.
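The quoted length scales follow from the de Broglie relation λ = h/√(2mE); a rough sketch with rounded constants, for illustration only:

```python
import math

H = 6.626e-34      # Planck constant, J s
M_N = 1.675e-27    # neutron mass, kg
EV = 1.602e-19     # joules per electron volt

def wavelength(energy_ev):
    """de Broglie wavelength lambda = h / sqrt(2 m E) of a neutron."""
    return H / math.sqrt(2.0 * M_N * energy_ev * EV)

# Thermal (room-temperature) neutrons, E ~ 25 meV:
print(wavelength(25e-3))   # ~1.8e-10 m, i.e. atomic length scales
# Cold neutrons from a ~20 K moderator, E ~ 1.7 meV:
print(wavelength(1.7e-3))  # ~7e-10 m
```

Thermal neutrons thus match interatomic distances, which is what makes them diffract from ordinary condensed matter.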

Neutron scattering also has limitations. One of these is that neutron sources are weak compared with sources of light or of electrons. Neutrons are not created, but are “mined” from atomic nuclei where they are tightly bound, and it costs a significant amount of energy to extract them. Photons, on the other hand, can be created in large amounts, for instance in synchrotron light sources. Experiments at light sources can therefore be more sensitive in many respects than those at a neutron source. For this reason, the siting of ESS next to MAX IV – the next-generation synchrotron radiation facility currently being built on the north-eastern outskirts of Lund – is important. Thanks to its pioneering magnet technology, MAX IV will be able to produce light with higher brilliance than at any other synchrotron light source, while the ESS will be the most powerful neutron source in the world.

The ESS will provide unique opportunities for experiments in fundamental neutron physics that require the highest possible integrated neutron flux. A particularly notable example is the proposed search for neutron–antineutron oscillations. The high neutron intensity at the ESS will allow sufficient precision to make neutron experiments complementary to efforts in particle physics at the highest energies, for example at the LHC. The importance of the low-energy, precision “frontier” has been recognized widely (Raidal et al. 2008 and Hewett et al. 2012), and an increasing number of theoretical studies have exploited this complementarity and highlighted the need for further, more precise experimental input (Cirigliano and Ramsey-Musolf 2013).

In addition, the construction of a proton accelerator at the high-intensity frontier opens possibilities for investigations of neutrino oscillations. A collaboration is being formed by Tord Ekelöf and Marcos Dracos to study the possibility of measuring CP violation in neutrinos using the ESS together with a large underground water Cherenkov detector (Baussen et al. 2013).

The main components

The number of neutrons produced at the tungsten target will be proportional to the beam current, and because the total production cross-section in the range of proton energies relevant for the ESS is approximately linear with energy, the total flux of neutrons from the target is nearly proportional to the beam power. Given a power of 5 MW, beam parameters have been optimized with respect to cost and reliability, while user requirements have dictated the pulse structure. Table 1 shows the resulting top-level parameters for the accelerator.
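The proportionality between flux and beam power comes down to P = I_avg × E; a sketch of the implied currents, in which the 2.86 ms pulse length and 14 Hz repetition rate are assumptions taken from the published ESS design values, since Table 1 is not reproduced here:

```python
# Average beam power fixes the average current once the proton energy
# is chosen: P = I_avg * E. The pulse structure below is an assumption
# from the published ESS design values.
beam_power_w = 5e6        # 5 MW average beam power
proton_energy_ev = 2e9    # 2 GeV kinetic energy

i_avg = beam_power_w / proton_energy_ev      # amperes (eV * A = W)
duty_factor = 2.86e-3 * 14                   # pulse length * repetition rate
i_peak = i_avg / duty_factor

print(f"average current: {i_avg * 1e3:.2f} mA")   # 2.50 mA
print(f"duty factor:     {duty_factor:.1%}")      # 4.0%
print(f"peak current:    {i_peak * 1e3:.1f} mA")  # ~62.4 mA
```

The low duty factor is what pushes the peak current, and hence the RF peak power, far above the average values.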

The linac will have a normal-conducting front end, followed by three families of superconducting cavities, before a high-energy beam transport brings the protons to the spallation target. Because the ESS is a long-pulse source, it can use protons rather than the H⁻ ions needed for efficient injection into the accumulator ring of a short-pulse source.

Figure 2 illustrates the different sections of the linac. In addition to the ion source on a 75 kV platform, the front end consists of a low-energy beam transport (LEBT), a radio-frequency quadrupole (RFQ) that accelerates the beam to 3.6 MeV, a medium-energy beam transport (MEBT) and a drift-tube linac (DTL) that takes the beam to 90 MeV.

The superconducting linac, operating with superfluid helium at 2 K, starts with a section of double-spoke cavities having an optimum beta of 0.50. The protons are accelerated to 216 MeV in 13 cryomodules, each of which has two double-spoke cavities. Medium- and high-beta elliptical cavities follow, with geometric beta values of 0.67 and 0.92. The medium-beta cavities have six cells, the high-betas have five cells. In this way, the two cavity types have almost the same length, so that cryomodules of the same overall design can be used in both cases to house four cavities. Figure 3 shows a preliminary design of a high-beta cryomodule, with its four five-cell cavities and power couplers extending downwards.

Nine medium-beta cryomodules accelerate the beam to 516 MeV, and the full energy of 2 GeV is reached with 21 high-beta modules. The normal-conducting acceleration structures and the spoke cavities run at 352.21 MHz, while the elliptical cavities operate at twice that frequency, 704.42 MHz. After reaching their full energy, the protons are brought to the target by the high-energy beam transport (HEBT), which includes rastering magnets that produce a 160 × 60 mm rectangular footprint on the target wheel.
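As a consistency check on the geometric betas quoted above, the proton velocity at each section boundary follows from β = √(1 − 1/γ²) with γ = 1 + T/m_pc²; a sketch with rounded constants:

```python
import math

MPC2 = 938.272  # proton rest energy, MeV

def beta(kinetic_mev):
    """Proton velocity as a fraction of c at kinetic energy T."""
    gamma = 1.0 + kinetic_mev / MPC2
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Section boundaries quoted in the text (MeV):
for t in (90, 216, 516, 2000):
    print(f"T = {t:5d} MeV -> beta = {beta(t):.2f}")
# The spoke section (90-216 MeV) spans beta ~0.41-0.58, bracketing its
# optimum beta of 0.50; the medium-beta section (216-516 MeV) spans
# ~0.58-0.76 around 0.67; the high-beta section ends near 0.95, close
# to the geometric beta of 0.92.
```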

The design of the proton accelerator – as with the other components of the ESS – has been carried out by a European collaboration. The ion source and LEBT have been designed by INFN Catania, the RFQ by CEA Saclay, the MEBT by ESS-Bilbao, the DTL by INFN Legnaro, the spoke section by IPN Orsay, the elliptical sections again by CEA Saclay, and the HEBT by ISA Århus. During the design phase, additional collaboration partners included the universities of Uppsala, Lund and Huddersfield, NCBJ Świerk, DESY and CERN. Now the collaboration is being extended further for the construction phase.

A major cost driver of the ESS accelerator is the RF sources. Klystrons provide the standard solution for high output power at the frequencies relevant to the ESS. For the lower power of the spoke cavities, tetrodes are an option, but solid-state amplifiers have not been excluded completely, even though the required peak powers have not been demonstrated yet. Inductive output tubes (IOTs) are an interesting option for the elliptical cavities, in particular for the high-beta cavities, where the staged installation of the linac still allows for a few years of studies. While IOTs are more efficient and take up less space than klystrons, they are not yet available for the peak powers required, but the ESS is funding the development of higher-power IOTs in industry.

Neutron production

The ESS will use a rotating, gas-cooled tungsten target rather than, for instance, the liquid-mercury targets used at the Spallation Neutron Source in the US and in the neutron source at the Japan Proton Accelerator Research Complex. As well as avoiding environmental issues that arise with mercury, the rotating tungsten target will require the least amount of development effort. It also has good thermal and mechanical properties, excellent safety characteristics and high neutron production.

The target wheel has a diameter of 2.5 m and consists of tungsten elements in a steel frame (figure 4). The tungsten elements are separated by cooling channels for the helium gas. The wheel rotates at 25 rpm synchronized with the beam pulses, so that consecutive pulses hit adjacent tungsten elements. An important design criterion is that the heat generated by radioactive decay after the beam has been switched off must not damage the target, even if all active cooling systems fail.

With the ESS beam parameters, every proton generates about 80 neutrons. Most of them are emitted with energies of millions of electron volts, while most experiments need slower neutrons, with energies corresponding to temperatures from room temperature down to a few tens of kelvins. For this reason, the neutrons are slowed down in moderators containing water at room temperature and super-critical hydrogen at 13–20 K before being guided to the experimental stations, which are known as instruments. The construction budget includes 22 such instruments, one of which is devoted to fundamental physics with neutrons.
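The moderator temperatures translate directly into characteristic neutron energies via E ≈ k_BT; a quick sketch:

```python
K_B = 8.617e-5  # Boltzmann constant, eV/K

# Characteristic neutron energy E ~ k_B * T after full moderation:
for label, temp_k in [("water, room temperature", 300),
                      ("supercritical hydrogen", 20),
                      ("supercritical hydrogen", 13)]:
    print(f"{label:26s} {temp_k:3d} K -> ~{K_B * temp_k * 1e3:.1f} meV")
# ~25.9 meV at 300 K versus ~1-2 meV in the cold moderator, compared
# with the MeV-scale energies of neutrons straight from spallation.
```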

The ESS is a pan-European collaboration in which 17 countries (Sweden, Denmark, Norway, Iceland, Estonia, Latvia, Lithuania, Poland, Germany, France, the UK, the Netherlands, the Czech Republic, Hungary, Switzerland, Italy and Spain) have signed letters of intent. Negotiations are now taking place to distribute the costs between these countries.

Sweden and Denmark have been hosting the ESS since the site decision, and a large fraction of the design study that started then was financed by the two host countries. The project has now moved into the construction phase, with ground breaking planned for summer this year.

According to the current project plans, the accelerator up to and including the medium-beta section will be ready by the middle of 2019. Then, the first protons will be sent to the target and the first neutrons will reach the instruments. During the following few years, the high-beta cryomodules will be installed, such that the full 5 MW beam power will be reached in 2022.

The neutron instruments will be built in parallel. Around 40 concepts are being developed at different laboratories in Europe, and the 22 instruments of the complete ESS project will be chosen in a peer-reviewed selection process. Three of these will have been installed in time for the first neutrons. The rest will gradually come on line during the following years, so that all will have been installed by 2025.

The construction budget of ESS amounts to €1,843 million, half of which comes from Sweden, Denmark and Norway. The annual operating costs are estimated to be €140 million, and the cost for decommissioning the ESS after 40 years has been included in the budget. The hope, however, is that the scientific environment that will grow up around ESS and MAX IV – and within the Science Village Scandinavia to be located in the same area – will last longer than that.

Electrons at the LHC: a new beginning

From time to time, great experimental progress in particle physics suddenly reveals a crisis in theoretical physics. This happened in the early 1960s when a plethora of hadrons had been discovered, while strong-interaction theory dealt with analytical properties of the S matrix and a number of phenomenological models. At that time, Murray Gell-Mann, who had just introduced the notion of quarks, seconded by Georg Zweig, argued for focusing on “a higher-energy accelerator so that we can do more experiments over the next generation and really learn more about the basic structure of matter” (Gell-Mann 1967). The current situation is not so different.

At the LHC, the Standard Model is being subjected to a thorough confirmation, including the remarkable completion of its particle contents with the discovery of a Higgs boson. Important as these results are, however, there is still no indication of the existence of the long-predicted supersymmetric particles or of Kaluza–Klein resonances below a mass scale of about a tera-electron-volt, or of other new phenomena. Of course, the hope is that in the coming years the LHC will discover new physics in exploring the next higher-energy domain with increased luminosity. Yet, to discover all hidden treasures when entering unknown territory, it is a wise strategy to prepare for all possibilities and not to rely on a few choices only.

In this spirit, investigations of electron–proton (ep) and electron–ion (eA) collisions at high energies offer an important prospect, complementary to proton–proton (pp) and electron–positron (ee) collisions. So far, the only collider to exploit the ep configuration was HERA at DESY, where results from the H1 and ZEUS experiments provided much of the base of current LHC physics and also led to surprising results, for example on the momentum distributions of partons inside the proton. Building on the conceptual design study for the Large Hadron Electron Collider (LHeC) – an electron-beam upgrade to the LHC – CERN’s management decided recently to investigate these possibilities more deeply. It has established an International Advisory Committee (IAC) to report to the director-general, with the mandate to provide “…scientific and technical direction for the physics potential of the ep/eA collider, both at the LHC and FCC [the proposed Future Circular Collider complex], as a function of the machine parameters and of a realistic detector design, as well as for the design and possible approval of an energy recovery linac (ERL) test facility at CERN…”. Furthermore, the advisory committee should offer “assistance in building the international case for the accelerator and detector developments as well as guidance to the resource, infrastructure and science policy aspects…”. Chaired by Herwig Schopper, the IAC comprises 12 eminent scientists from three continents, together with CERN’s director for research and computing, Sergio Bertolucci, and the director for accelerators and technology, Frederick Bordry, as well as the co-chairs of the newly established LHeC Co-ordination Group, Oliver Brüning and Max Klein.

One of the IAC’s first major activities was to hold a well-attended workshop on the LHeC, its physics, and the accelerator and detector development, at Chavannes-de-Bogis in January this year. At the meeting, Stefano Forte classified deep-inelastic scattering (DIS) physics at the energy frontier – which becomes accessible with ep collisions using the LHC’s proton beam (figure 1) – into three major areas. One area consists of high-precision measurements of the Standard Model, with the experimental and theoretical programme aiming for a per mille determination of the strong coupling constant, αs, and the reduction of uncertainties in searches at the High Luminosity LHC (HL-LHC) at high mass scales as prime examples. A second area concerns exploration of the parameter space, with Higgs physics – including the challenging decays into b and c quarks (figure 2) – as the obvious and most important element. The cross-section for such processes at the LHeC would be about 200 fb, enabling unique measurements of the Higgs properties from WW–H and ZZ–H production in ep scattering. With its unprecedented precision in determination of the parton distributions and of the strong coupling, the LHeC could assist in transforming the LHC into a precision Higgs factory. Lastly, there is what Forte called “serendipity”, meaning room for “known or unknown” discoveries. Indeed, a big step to higher energy with perhaps 1000 times the luminosity of HERA could lead not only to new insights but to breakthroughs, especially in the understanding of QCD.

Given the exploration of novel QCD phenomena such as quark–gluon plasma in heavy-ion collisions at the LHC – and also because HERA never scattered electrons off deuterons or heavier ions – a programme of electron–ion physics at the LHeC collider would be of great interest. It would extend the kinematic range in terms of four-momentum transfer squared, Q2, and the inverse of Bjorken-x, by nearly four orders of magnitude. This could reveal unexpected phenomena and would put the understanding of the partonic structure of the neutron and nuclei, and the exploration of high-density matter, on firmer theoretical ground.

The vision of a 50 TeV proton (and about 20 TeV per nucleon lead-ion) beam from the FCC opens a further horizon for future DIS measurements. These would, for example, access contact-interaction scales of a few hundred tera-electron-volts, could study lepton–quark resonances should these exist, and could determine the Higgs self-coupling based on an inclusive Higgs-production cross-section of 2 pb, which is much larger than the “Higgs-strahlung” cross-section at the International Linear Collider or the electron–positron FCC (FCC-ee).
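For collisions of beams with very different energies, the centre-of-mass energy is √s ≈ √(4E_eE_p) (masses neglected). The 60 GeV electron energy below is an assumption borrowed from the LHeC conceptual design, since this text does not quote one:

```python
import math

def cms_energy_gev(e_electron_gev, e_proton_gev):
    """Approximate ep centre-of-mass energy, beam masses neglected."""
    return math.sqrt(4.0 * e_electron_gev * e_proton_gev)

# 60 GeV electrons are an assumption taken from the LHeC CDR:
print(cms_energy_gev(60, 7000))    # LHeC with 7 TeV protons:  ~1.3 TeV
print(cms_energy_gev(60, 50000))   # FCC-he with 50 TeV protons: ~3.5 TeV
```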

A unique strength of the LHeC rests on the prospect of measuring parton distributions much more accurately than previously and of unfolding them without symmetry assumptions for the first time. This would remove a substantial part of the uncertainty of Higgs production in pp collisions, which dominantly occurs proportional to the square of the gluon distribution (xg) times the strong coupling constant. The measurement of Higgs production across a larger rapidity range in pp scattering at the FCC extends down to extremely small values of Bjorken-x. In this range, which is also of interest for ultra-high-energy neutrino scattering, the extrapolations of the current xg parameterizations no longer have any basis, and they differ hugely. Moreover, it is expected that nonlinear gluon–gluon effects set in, possibly leading to a saturation of gluon-dominated interaction cross-sections. The clarification of the laws of parton evolution at Bjorken-x < 10⁻⁴, most likely leading to the end of validity of the linear so-called DGLAP evolution equations, is impossible without a DIS programme of the kind considered here, and is essential for the pursuit of a sound programme in pp physics at the energy frontier at CERN.

The Higgs discovery has led to a reconsideration of the luminosity needs at the LHeC – a further focus of the Chavannes workshop. The conceptual design report (CDR) was directed at achieving an instantaneous luminosity of about 10³³ cm⁻² s⁻¹ in synchronous ep and pp operations at the LHC (LHeC Study Group 2012). A substantial increase of this value is desirable, with the goal of producing 10⁵ Higgs bosons across a 10-year period of operation. This would open the route to a 1% precision measurement of the decay H → bb, thanks to the clean final-state signature and the absence of pile-up. Such an increase of luminosity might be possible owing to the beam brightness of the HL-LHC, which is expected to be 2–3 times higher than assumed in the CDR, through doubling the electron-beam current to 10–20 mA, and also by reducing the focusing of the proton beam in the ep interaction region. It is one of the goals of the new ep study initiated by CERN to understand the implications of high-intensity ep operation for the design of the interaction region and for the envisaged simultaneous operation with the LHC.
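Whether the CDR luminosity can deliver 10⁵ Higgs bosons is a one-line estimate, N = σ × ∫L dt; a sketch in which the effective running time per year is an assumption:

```python
# Rough Higgs yield at the CDR luminosity; 1e7 s of effective running
# per year is an assumption, and 200 fb is the ep Higgs cross-section
# quoted earlier in the text.
sigma_fb = 200.0           # Higgs production cross-section, fb
lumi_cm2s = 1e33           # CDR instantaneous luminosity, cm^-2 s^-1
seconds_per_year = 1e7     # assumed effective running time per year
years = 10

# 1 fb^-1 corresponds to 1e39 cm^-2:
lumi_int_fb = lumi_cm2s * seconds_per_year * years / 1e39
n_higgs = sigma_fb * lumi_int_fb

print(f"integrated luminosity: {lumi_int_fb:.0f} fb^-1")  # 100 fb^-1
print(f"Higgs bosons produced: {n_higgs:.0f}")            # 20000
```

Under these assumptions the CDR figure falls roughly a factor of five short of the 10⁵ goal, which is why higher brightness, more electron current and tighter proton focusing are being pursued.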

A deeper study of the possibility of an ep and eA collider at CERN shows that development of the technique of energy recovery is necessary. Energy recovery is achieved by decelerating the beam, once it has reached maximum energy, with a phase shift in the same superconducting RF cavity structure used for acceleration. An energy-recovery linac provides a unique opportunity to achieve high energy and high luminosity through efficient use of the available power. In the LHeC design, a beam power of about 25 MW is used; this would correspond to a power of almost 1 GW if there were no energy recovery. In conjunction with the renewed study of ep physics at CERN, the decision has been made to design and build a set of two cryogenic superconducting RF-cavity modules in collaboration with experts at Jefferson Lab in the US and at Mainz University (figure 3). About 7 m long, each module comprises four cavities of a five-cell low-loss shape with a higher-order-mode coupler and supply end-can. The design is for a frequency of 802 MHz, with a few modules to be built for test purposes at CERN and Jefferson Lab, and for the MESA project at Mainz. In a workshop last year, 802 MHz was chosen as a more-or-less optimum value for beam stability, cavity dimensions, RF power and dynamic losses, also in view of the LHC and the choices for FCC developments.
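The relation between the two power figures is simple bookkeeping, P = I × E per beam plus a recovery fraction; a sketch in which the beam current and energy are purely illustrative assumptions, since the text quotes neither:

```python
# The quoted numbers imply how much of each electron's energy is
# recovered. The 16 mA / 60 GeV beam below is purely illustrative.
p_without_mw = 1000.0  # ~1 GW: power needed if every electron's energy were dumped
p_with_mw = 25.0       # ~25 MW: net beam power with energy recovery

recovery = 1.0 - p_with_mw / p_without_mw
print(f"fraction of beam energy recovered: {recovery:.1%}")  # 97.5%

# For scale, a 16 mA beam at 60 GeV carries P = I * E:
print(f"full beam power: {0.016 * 60e9 / 1e9:.2f} GW")  # 0.96 GW
```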

The two cryo-cavity modules could serve as the initial building blocks for an ERL test facility at CERN – the LTFC (figure 4). Its design, scheduled for 2015, is being undertaken in international collaboration. This test facility would have a variety of important goals: the development of superconducting RF at CERN under realistic operational beam conditions, with high gradients for continuous-wave operation (up to 20 MV/m) and high quality factors (Q₀ > 10¹⁰); the development of high-current electron sources, which are also required for the FCC-ee; and further applications, such as magnet quench tests in a low-radiation environment and on-site detector tests with an electron beam of up to 1 GeV.

In addition to the many topics in deep-inelastic scattering that can be studied with the LHeC and the hadron–electron FCC (FCC-he), there is also an intimate relationship between ep physics and physics at pp and ee colliders. This was already evident when HERA, the Tevatron, the Large Electron–Positron Collider and the SLAC Linear Collider explored the Fermi scale. It is clear, not only from the example of Higgs studies, that this will also be the case at the energy scales of the LHC and the proposed FCC hh-ee-he complex. A new energy-frontier ep and eA project would naturally exploit the major investments in hadron beams at CERN. It would not become a flagship activity for CERN, since it would reside essentially at one experimental location, which could not satisfy the majority of the particle-physics community. However, such a project would provide a complementary window for the main upgrade programmes and would potentially lead into the distant future.

It has always been the tradition at CERN to plan a long time ahead carefully, with the result that all big projects were achieved on time and to budget, and were also scientifically and technically successful. This is one of the secrets of CERN’s success. Close co-operation between theory, experiments and technology was always essential for this to work. One aim of this article is to encourage collaboration on the test facility, on the accelerator, on the ep/eA detector being designed, and on the understanding and evaluation of an electron–proton and electron–ion physics programme at the energy and intensity frontier at CERN that would be worth pursuing.

LS1: on the home straight

When the shift crew in the CERN Control Centre extracted the beams from the LHC on 14 February last year, it marked the beginning of the first long shutdown, LS1, not only for the LHC but for all of CERN’s accelerator complex, after an unprecedented three years of almost continuous running. By the end of last summer, the programme for LS1 had already reached some key milestones. Now, with the cooling of the LHC to begin in May and the restart of the Proton Synchrotron (PS) and Super Proton Synchrotron (SPS) planned for later this year, LS1 is well on schedule.

Throughout LS1, the Superconducting Magnets and Circuits Consolidation (SMACC) project has been responsible for opening interconnections between the LHC magnets for the series of operations needed for magnet-circuit consolidation. A major objective has been to install a shunt on each splice, straddling the main electrical connection and the busbars of the neighbouring magnets. This is to avoid the serious consequences of electric arcs that could arise from discontinuities in the splices.

Despite the complexity of the work on the 27-km-long LHC, there has been excellent progress, with SMACC teams working on different sectors of the accelerator in parallel. By October, the outer “W” sleeves had been removed from the equivalent of seven of the eight sectors, and leak tests were in progress in several sub-sectors. A month later, the first SMACC team had arrived in sector 4-5 and with the opening of the “W” bellows had completed the full tour of the LHC. One-third of the shunts were by then in place and the closure of internal sleeves had begun in sector 7-8. On 28 November, CERN’s director-general, Rolf Heuer, was present for the welding of the last W sleeve in sector 6-7, and expressed his appreciation to the teams involved in the LS1 work. By mid-February, 80% of the interconnections had been consolidated and 85% of the 27,000 shunts had been installed.

On 15 January, the first pressure tests began in sector 6-7, after all of its vacuum subsectors had been closed and tested. The objective was to check the mechanical integrity and overall leak-tightness of the sector by injecting it with pressurized helium. The tests were a success. Next, the cryogenic teams prepared the sector for new electrical quality-assurance tests at ambient temperature, which were also successful. Two weeks of intensive cleaning followed to flush out any dust and dirt from the repair and consolidation work.

Elsewhere around the LHC, X-ray testing has been used to look for any faults in the machine’s cryogenic distribution system, and 1,344 DN200 safety valves have been installed to release helium in the event of pressure build-up. Compensators on the LHC’s cryogenic distribution lines have been replaced, as has a faulty RF cryomodule. Tests on the back-up electrical supply have also been completed.

Meanwhile, at the PS Booster, a new beam dump and its shielding blocks were installed from October onwards. At the PS itself, the cooling and ventilation system – dating back to 1957 – was replaced with a new ventilation system to aerate radioactive areas more efficiently. At the same time, testing of the newly installed access system was underway. By late October, consolidation of the seven main PS magnets had begun, with magnets being removed from the beam line and delivered to the magnet workshop, to be worked on by a specialized team from Russia.

Thanks to the know-how, motivation and commitment of hundreds of professionals at CERN – as well as teams from member states and beyond – the LHC, its experiments and its injectors are on course to be ready to start the next LHC run in January 2015.

• See cds.cern.ch/journal/CERNBulletin/.

New results mark progress towards polarized ion beams in laser-induced acceleration

The field of laser-induced relativistic plasmas and, in particular, laser-driven particle acceleration, has undergone impressive progress in recent years. Despite many advances in understanding fundamental physical phenomena, one unexplored issue is how the particle spins are influenced by the huge magnetic fields inherently present in the plasmas.

Laser-induced generation of polarized-ion beams would without doubt be important in research at particle accelerators. In this context, ³He²⁺ ions have been discussed widely. They can serve as a substitute for polarized neutron beams, because in a ³He nucleus the two protons have opposite spin directions, so the spin of the nucleus is carried by the neutron. However, such beams are currently not available owing to a lack of corresponding ion sources. A promising approach for a laser-based ion source would be to use pre-polarized ³He gas as the target material. Polarization conservation of ³He ions in plasmas is also crucial for the feasibility of proposals aiming at an increase in the efficiency of fusion reactors by using polarized fuel, because this efficiency depends strongly on the cross-section of the fusion reactions.

A group from Forschungszentrum Jülich (FZJ) and Heinrich-Heine University Düsseldorf has developed a method to measure the degree of polarization of laser-accelerated proton and ion beams. In a first experiment at the Arcturus Laser facility, protons of a few million electron volts – generated most easily by using thin foil targets – were used to measure the differential cross-section d²σ/dϑdφ of the Si(p, p´)Si reaction in a secondary scattering target. The result for the dependence on scattering angle is in excellent agreement with existing data, demonstrating the feasibility of a classical accelerator measurement with a laser-driven particle source.

The azimuthal-angle (φ) dependence of the scattering distributions allowed the degree of polarization of the laser-accelerated protons to be determined for the first time. As expected from computer simulations for the given target configuration, the data are consistent with an unpolarized beam. This “negative” result indicates that the particle spins are not affected by the strong magnetic fields and field gradients in the plasma. This is promising for future measurements using pre-polarized targets, which are underway at Arcturus.
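In such a measurement the polarization is extracted from the azimuthal dependence of the scattered yield. A minimal sketch of the generic left–right asymmetry method, with purely hypothetical counts and analysing power (not the Jülich group's numbers):

```python
import math

def polarization(n_left, n_right, analysing_power):
    """Beam polarization from a left-right scattering asymmetry,
    using the standard relation epsilon = (N_L - N_R)/(N_L + N_R) = P * A_y."""
    epsilon = (n_left - n_right) / (n_left + n_right)
    # statistical uncertainty on epsilon for Poisson-distributed counts:
    d_epsilon = math.sqrt((1.0 - epsilon**2) / (n_left + n_right))
    return epsilon / analysing_power, d_epsilon / analysing_power

# Hypothetical counts and analysing power, for illustration only:
p, dp = polarization(10100, 9900, analysing_power=0.4)
print(f"P = {p:.3f} +/- {dp:.3f}")  # consistent with zero within ~1.4 sigma
```

A result consistent with zero, as in the experiment described above, means no significant azimuthal asymmetry and hence no measurable spin effect from the plasma fields.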

The polarization measurements are also an important step towards JuSPARC, the Jülich Short Pulse Particle and Radiation Centre at FZJ. This proposed laser facility will provide not only polarized beams but also intense X-ray and thermal neutron pulses to users from different fields of fundamental and applied research.

NOvA experiment sees its first long-distance neutrinos

On 11 February, the NOvA collaboration announced the detection of the first neutrinos in the long-baseline experiment’s far detector in northern Minnesota. The neutrino beam is generated at Fermilab and sent 800 km through the Earth’s crust to the far detector. Once completed, the near and far detectors will weigh 300 and 14,000 tonnes, respectively. Installation of the last module of the far detector is scheduled for early this spring, and outfitting of both detectors with electronics should be completed in summer.

MINERvA searches for wisdom among neutrinos

Neutrino physicists enjoy a challenge, and the members of the MINERvA (Main INjector ExpeRiment for ν-A) collaboration at Fermilab are no exception. MINERvA seeks to make precise measurements of neutrino reactions using the Neutrinos at the Main Injector (NuMI) beam on both light and heavy nuclei. Does this goal reflect the wisdom of the collaboration’s namesake? Current and future accelerator-based neutrino-oscillation experiments must precisely predict neutrino reactions on the nuclei if they are to search successfully for CP violation in oscillations. Understanding matter–antimatter asymmetries might in turn lead to a microphysical mechanism to answer the most existential of questions: why are we here? Although MINERvA might provide vital assistance in meeting this worthy goal, neutrinos never yield answers easily. Moreover, using neutrinos to probe the dynamics of reactions on complicated nuclei convolutes two challenges.

The history of neutrinos is fraught with theorists underestimating the persistence of experimentalists (Close 2010). Wolfgang Pauli’s quip about the prediction of the neutrino, “I have done a terrible thing. I have postulated a particle that cannot be detected,” is a famous example. Nature rejected Enrico Fermi’s 1933 paper explaining β decay, saying it “contained speculations too remote from reality to be of interest to readers”. Eighty years ago, when Hans Bethe and Rudolf Peierls calculated the first prediction for the neutrino cross-section, they said, “there is no practical way of detecting a neutrino” (p23). But when does practicality ever stop physicists? The theoretical framework developed during the following two decades predicted numerous measurements of great interest using neutrinos, but the technology of the time was not sufficient to enable those measurements. The story of neutrinos across the ensuing decades is that of many dedicated experimentalists overcoming these barriers. Today, the MINERvA experiment continues Fermilab’s rich history of difficult neutrino measurements.

Neutrinos at Fermilab

Fermilab’s research on neutrinos is as old as the lab itself. While it was still being built, the first director, Robert Wilson, said in 1971 that the initial aim of experiments on the accelerator system was to detect a neutrino. “I feel that we then will be in business to do experiments on our accelerator…[Experiment E1A collaborators’] enthusiasm and improvisation gives us a real incentive to provide them with the neutrinos they are waiting for.” The first experiment, E1A, was designed to study the weak interaction using neutrinos, and was one of the first experiments to see evidence of the weak neutral current. In the early years, neutrino detectors at Fermilab were both the “15 foot” (4.6 m) bubble chamber filled with neon or hydrogen, and coarse-grained calorimeters. As the lab grew, the detector technologies expanded to include emulsion, oil-based Cherenkov detectors, totally active scintillator detectors, and liquid-argon time-projection chambers. The physics programme expanded as well, to include 42 neutrino experiments either completed (37), running (3) or being commissioned (2). The NuTeV experiment collected an unprecedented million high-energy neutrino and antineutrino interactions, of both charged and neutral currents. It provided precise measurements of structure functions and a measurement of the weak mixing angle in an off-shell process with comparable precision to contemporary W-mass measurements (Formaggio and Zeller 2013). Then in 2001, the DONuT experiment observed the τ neutrino – the last of the fundamental fermions to be detected.


While much of the progress of particle physics has come by making proton beams of higher and higher energies, the most recent progress at Fermilab has come from making neutrino beams of lower energies but higher intensities. This shift reflects the new focus on neutrino oscillations, where the small neutrino mass demands low-energy beams sent over long distances. While NuTeV and DONuT used beams of 100 GeV neutrinos in the 1990s, the MiniBooNE experiment, started in 2001, used a 1 GeV neutrino beam to search for oscillations over a short distance. The MINOS experiment, which started in 2005, used 3 GeV neutrinos and measured them both at Fermilab and in a detector 735 km away, to study oscillations that were seen in atmospheric neutrinos. MicroBooNE and NOvA – two experiments completing construction at the time of this article – will place yet more sensitive detectors in these neutrino beamlines. Fermilab is also planning the Long-Baseline Neutrino Experiment, designed to be broadly sensitive to CP violation in neutrino oscillations.

A spectrum of interactions

Depending on the energy of the neutrino, different types of interactions will take place (Formaggio and Zeller 2013, Kopeliovich et al. 2012). In low-energy interactions, the neutrino will scatter from the entire nucleus, perhaps ejecting one or more of the constituent nucleons in a process referred to as quasi-elastic scattering. At slightly higher energies, the neutrinos interact with nucleons and can excite a nucleon into a baryon resonance that typically decays to create new final-state hadrons. In the high-energy limit, much of the scattering can be described as neutrinos scattering from individual quarks in the familiar deep-inelastic scattering framework. MINERvA seeks to study this entire spectrum of interactions.

To measure CP violation in neutrino-oscillation experiments, quasi-elastic scattering is an important channel. In a simple model where the nucleons of the nucleus live in a nuclear binding potential, the reaction rate can be predicted. In addition, an accurate estimate of the energy of the incoming neutrino can be made using only the final-state charged lepton’s energy and angle, which are easy to measure even in a massive neutrino-oscillation experiment. However, the MiniBooNE experiment at Fermilab and the NOMAD experiment at CERN both measured the quasi-elastic cross-section and found contradictory results in the framework of this simple model (Formaggio and Zeller 2013, Kopeliovich et al. 2012).
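The energy estimate described above follows from two-body kinematics alone. A minimal sketch in Python, using the standard free-nucleon formula for ν_μ + n → μ⁻ + p; the binding-energy parameter is an illustrative knob for nuclear effects, not a MINERvA-specific value:

```python
import math

# Masses in GeV (PDG values, rounded)
M_MU, M_P, M_N = 0.10566, 0.93827, 0.93957

def enu_qe(e_mu, cos_theta, e_bind=0.0):
    """Quasi-elastic estimate of the neutrino energy (GeV) for
    nu_mu + n -> mu- + p, using only the final-state muon's energy
    and angle. e_bind is an effective nuclear binding energy (GeV);
    e_bind = 0 recovers the free-nucleon formula."""
    p_mu = math.sqrt(e_mu**2 - M_MU**2)      # muon momentum
    m_n = M_N - e_bind                       # effective target mass
    num = M_P**2 - m_n**2 - M_MU**2 + 2.0 * m_n * e_mu
    den = 2.0 * (m_n - e_mu + p_mu * cos_theta)
    return num / den

# Example: a 3 GeV muon emitted at 5 degrees to the beam
print(round(enu_qe(3.0, math.cos(math.radians(5.0))), 3))   # → 3.036
```

This is why quasi-elastic events are so attractive for oscillation experiments: the muon alone fixes the neutrino energy, provided the simple nuclear model holds.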


One possible explanation of this discrepancy can be found in more sophisticated treatments of the environment in which the interaction occurs (Formaggio and Zeller 2013, Kopeliovich et al. 2012). The simple relativistic Fermi-gas model treats the nucleus as quasi-free independent nucleons with Fermi motion in a uniform binding potential. The spectral-function model includes more correlation among the nucleons in the nucleus. However, more complete models that include the interactions among the many nucleons in the nucleus modify the quasi-elastic reaction significantly. In addition to modelling the nuclear environment on the initial reaction, final-state interactions of produced hadrons inside the nucleus must also be modelled. For example, if a pion is created inside the nucleus, it might be absorbed on interacting with other nucleons before leaving the nucleus. Experimentalists must provide sufficient data to distinguish between the models.

The ever-elusive neutrino has forced experimentalists to develop clever ways to measure neutrino cross-sections, and this is exactly what MINERvA is designed to do with precision. The experiment uses the NuMI beam – a highly intense neutrino beam. The MINERvA detector is made of finely segmented scintillators, allowing the measurement of the angles and energies of the particles within. Figures 1 and 2 show the detector and a typical event in the nuclear targets. The MINOS near-detector, located just behind MINERvA, is used to measure the momentum and charge of the muons. With this information, MINERvA can measure precise cross-sections of different types of neutrino interactions: quasi-elastic, resonance production, and deep-inelastic scatters, among others.


The MINERvA collaboration began by studying quasi-elastic muon-neutrino scattering for both neutrinos (MINERvA 2013b) and antineutrinos (MINERvA 2013a). By measuring the muon kinematics to estimate the neutrino energies, they were able to measure the neutrino and antineutrino cross-sections. The data, shown in figure 3, suggest that the nucleons do spend some time in the nucleus joined together in pairs. When the neutrino interacts with such a pair, the pair is kicked out of the nucleus. Using the visible energy around the interaction vertex allowed a search for evidence of the pair of nucleons. Experience from electron quasi-elastic scattering leads to an expectation of final-state proton–proton pairs for neutrino quasi-elastic scattering and neutron–neutron pairs for antineutrino scattering. MINERvA’s measurements of the energy around the vertex in both neutrino and antineutrino quasi-elastic scattering support this expectation (figure 3, right).

A 30-year-old puzzle

Another surprise beyond the standard picture in lepton–nucleus scattering emerged 30 years ago in deep-inelastic muon scattering. The European Muon Collaboration (EMC) observed a modification of the structure functions in heavy nuclei that is still theoretically unresolved, in part because there is no other reaction in which an analogous effect is observed. Neutrino and antineutrino deep-inelastic scattering might see related effects with different leptonic currents, and therefore different couplings to the constituents of the nucleus (Gallagher et al. 2010, Kopeliovich et al. 2012). MINERvA has begun this study using large targets of active scintillator and passive graphite, iron and lead (MINERvA 2014). Figure 4 shows the ratio of lead to scintillator, which disagrees with a model based on charged-lepton-scattering modifications of deep-inelastic scattering and the elastic physics described above. Similar behaviour, but with smaller deviations from the model, is observed in the ratio of iron to scintillator. MINERvA’s investigation of this effect will benefit greatly from its current operation in the upgraded NuMI beam for the NOvA experiment, which is more intense and higher in (the beamline’s on-axis) energy. Both features will allow more access to the kinematic regions where deep-inelastic scattering dominates. By including a long period of antineutrino operation needed for NOvA’s oscillation studies, an even more complete survey of the nucleons can be done. The end result of these investigations will be a data set that can offer a new window on the process behind the EMC effect.


Initially in the history of the neutrino, theory led experiment by several decades. Now, experiment leads theory. Neutrino physics has repeatedly identified interesting and unexpected physics. Currently, physics is trying to understand how the most abundant particle in the universe interacts in the simplest of situations. MINERvA is just getting started on answering these types of questions and there are many more interactions to study. The collaboration is also looking at what happens when neutrinos make pions or kaons when they hit a nucleus, and how well they can measure the number of times a neutrino scatters off an electron – the only “standard candle” in this business.

Time after time, models fail to predict what is seen in neutrino physics. The MINERvA experiment, among others, has shown that quasi-elastic scattering is a wonderful tool to study the nuclear environment. Maybe the use of neutrinos, once thought to be impossible to detect, as a probe to study inside the nucleus, would make Pauli, Fermi, Bethe, Peierls and the rest chuckle.

Heavy-ion synchrotron prepares for FAIR


Elaborate alterations to the Schwerionensynchrotron (SIS) – the heavy-ion synchrotron at GSI – have finished after one year’s work. The main new feature is an additional accelerator cavity, so that the accelerator now has a total of three cavities. The remodelling of the SIS accelerator was necessary for it to serve in future as an injector for the Facility for Antiproton and Ion Research (FAIR). The FAIR accelerator complex, which is currently under construction through an international effort, will be connected to the existing GSI facility.

The SIS accelerator has a circumference of 216 m, with about 50 magnets – each weighing several tonnes – to keep the particles on the correct path. In the coming years, two further accelerator cavities will be added. With a total of five cavities, the SIS will have the performance that is required to accelerate all kinds of elements and inject them into the FAIR machines.

Since its commissioning in 1990, SIS has been the scene of many successes, including the discovery of hundreds of new isotopes – a field in which a GSI scientist holds the world record – and three new types of radioactive decay. Work on SIS in biophysics also led to the development of ion-beam therapy at GSI, where 450 patients were successfully treated. This method of cancer therapy is now routinely administered at the HIT facility in Heidelberg, using a dedicated accelerator built by GSI.

LBNE prototype cryostat exceeds goals


Scientists and engineers working on the design of the particle detector for the Long-Baseline Neutrino Experiment (LBNE) celebrated a major success in January. They showed that very large cryostats for liquid-argon-based neutrino detectors can be built using industry-standard technology normally employed for the storage of liquefied natural gas. The 35-tonne prototype system satisfies LBNE’s stringent purity requirement on oxygen contamination in argon of less than 200 parts per trillion (ppt) – a level that the team could maintain stably.

The purity of liquid argon is crucial for the proposed LBNE time-projection chamber (TPC), which will feature wire planes that collect electrons from an approximately 3.5 m drift region. Oxygen and other electronegative impurities in the liquid can absorb ionization electrons created by charged particles emerging from neutrino interactions and prevent them from reaching the TPC’s signal wires.

The test results were the outcome of the first phase of operating the LBNE prototype cryostat, which was built at Fermilab and features a membrane designed and supplied by the IHI Corporation of Japan. As part of the test, engineers cooled the system and filled the cryostat with liquid argon without prior evacuation. On 20 December, during a marathon 36-hour session, they cooled the membrane cryostat slowly and smoothly to 110 K, at which point they commenced the transfer of some 20,000 litres of liquid argon, maintained at about 89 K, from Fermilab’s Liquid-Argon Purity Demonstrator to the 35-tonne cryostat. By the end of the session, the team was able to verify that the systems for purifying, recirculating and recondensing the argon were working properly.

The LBNE team then topped off the tank with an additional 6000 litres of liquid argon and began to determine the argon’s purity by measuring the lifetime of ionization electrons drifting through the liquid under an electric field of 60 V/cm. The measured electron lifetimes were between 2.5 and 3 ms – corresponding to an oxygen contamination approaching 100 ppt and nearly two times better than LBNE’s minimum requirement of 1.5 ms.
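The link between electron lifetime and contamination can be sketched with a widely quoted rule of thumb for liquid argon, τ[ms] × c[O₂-equivalent, ppb] ≈ 0.3, together with simple exponential attachment during drift. The constant here is an assumption for illustration, not an LBNE calibration:

```python
import math

def o2_equivalent_ppt(lifetime_ms, k_ms_ppb=0.3):
    """Convert an electron lifetime in liquid argon to an O2-equivalent
    contamination, using the rule of thumb tau[ms] * c[ppb] ~ k.
    k = 0.3 is a commonly quoted approximation, not a measured LBNE value."""
    return 1000.0 * k_ms_ppb / lifetime_ms   # convert ppb to ppt

def surviving_fraction(drift_time_ms, lifetime_ms):
    """Fraction of ionization electrons surviving a given drift time,
    assuming simple exponential attachment: exp(-t_drift / tau)."""
    return math.exp(-drift_time_ms / lifetime_ms)

# A 3 ms lifetime corresponds to roughly 100 ppt O2-equivalent,
# and the 1.5 ms minimum requirement to roughly 200 ppt.
print(round(o2_equivalent_ppt(3.0)))   # → 100
print(round(o2_equivalent_ppt(1.5)))   # → 200
```

In this picture, the longer the lifetime relative to the drift time across the 3.5 m TPC, the larger the fraction of charge that reaches the signal wires.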

The Phase II testing programme, scheduled to begin at the end of 2014, will focus on the performance of active TPC detector elements submerged in liquid argon. Construction of the LBNE experiment, which will look for CP violation in neutrino oscillations by examining a neutrino beam travelling 1300 km from Fermilab to the Sanford Underground Research Facility, could begin in 2016. More than 450 scientists from 85 institutions collaborate on LBNE.

New recipes for stopping neutrons

Neutrons are a common by-product of particle-accelerator operations. While these particles can be studied or gainfully used, at other times they are a nuisance, with the potential to damage sensitive electronics and cause data-acquisition systems to fail mid-experiment.

Preventing neutrons from causing damage was a chief goal in the design of a shield house for an apparatus being built for the 12 GeV Upgrade project being carried out at the US Department of Energy’s Jefferson Lab. The $338 million upgrade will double the energy of the electron beam, adding a new experimental hall and improving the existing halls, along with other upgrades and additions.

The new apparatus, the Super High Momentum Spectrometer (SHMS), will enable measurements at high luminosity of particles with momentum approaching the beam energy, scattered at forward angles. It complements the existing High Momentum Spectrometer (HMS).

The physicists and engineers designing the SHMS shield house capitalized on data from more than 15 years of operations with the HMS and various large, open detector systems at Jefferson Lab. Monte Carlo calculations carried out with Geant4 were used to optimize the material specifications for shielding the electronics from neutrons. However, existing technologies did not meet the requirements: available systems were too bulky, expensive and difficult to manufacture. So a new system was designed, consisting of three parts: a hydrogen-rich, lightweight concrete layer to thermalize neutrons, a boron-rich concrete layer to absorb them and a thin lead layer to halt residual radiation.

The hydrogen-rich, lightweight concrete is the main structural component of the shield house. This material lacks most of the grit and rocks in ordinary concrete and instead contains shredded plastic and lightweight shale. It looks and pours like concrete and has the same strength, but it has two-thirds of the weight and four times the neutron-thermalizing capability.

The boron-rich concrete is similarly produced using a patented new recipe, with boron powder replacing the typical aggregate. The boron-rich concrete has the same consistency and strength as ordinary concrete and concrete simply doped with boron, yet stops neutrons with less material. A protective layer that is 15 cm thick encloses concrete electronics rooms in the SHMS shield house, topped with thin lead plates to stop the 0.48 MeV γ rays produced in neutron–boron interactions. A third, panel-like product was designed to stop neutrons in space-constricted areas. It is about 2.5 cm thick and consists of boron embedded in an epoxy resin.
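The layered moderate-then-absorb strategy can be caricatured with a one-line exponential removal model. The removal lengths below are illustrative placeholders, not measured properties of the patented concretes; a real design, as at Jefferson Lab, relies on a full Monte Carlo such as Geant4:

```python
import math

def transmitted_fraction(layers):
    """Crude exponential removal model for a layered neutron shield:
    each layer is (thickness_cm, removal_length_cm). This ignores
    scattering buildup, energy dependence and geometry, so it only
    illustrates why stacking layers multiplies their suppression."""
    return math.exp(-sum(t / lam for t, lam in layers))

# Illustrative numbers only: both removal lengths are assumptions.
shield = [(50.0, 9.0),   # hydrogen-rich concrete: thermalize fast neutrons
          (15.0, 3.0)]   # boron-rich concrete: absorb thermal neutrons
print(f"{transmitted_fraction(shield):.1e}")   # → 2.6e-05
```

Because the exponents add, a thin layer with a short removal length (the boron-rich concrete) can contribute as much suppression as a much thicker moderating layer.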

All of the new products are easily manufactured using existing techniques, and systems built with these patented and patent-pending technologies have applications in nuclear-waste storage, compact nuclear reactors and for shielding radiation sources in medical applications.

Microelectronics at CERN: from infancy to maturity

Two decades of microelectronics

When the project for the Large Electron–Positron (LEP) collider began at CERN in the early 1980s, the programme required the concentration of all available CERN resources, forcing the closure not only of the Intersecting Storage Rings and its experiments, but of all the bubble chambers and several other fixed-target programmes. During this period, the LAA detector R&D project was approved at the CERN Council meeting in December 1986 as “another CERN programme of activities” (see box) opening a door to initiate developments for the future. A particular achievement of the project was to act as an incubator for the development of microelectronics at CERN, together with the design of silicon-strip and pixel detectors – all of which would become essential ingredients for the superb performance of the experiments at the LHC more than two decades later.

The start of the LAA project led directly to the build-up of know-how within CERN’s Experimental Physics Facilities Division, with the recruitment of young and creative electronic engineers. It also enabled the financing of hardware and software tools, as well as the training required to prepare for the future. By 1988, an electronics design group had been set up at CERN, dedicated to the silicon technology that now underlies many of the high-performing detectors at the LHC and in other experiments. Miniaturization to submicrometre scales allowed many functions to be compacted into a small volume in sophisticated, application-specific integrated circuits (ASICs), generally based on complementary metal-oxide-semiconductor (CMOS) technology. The resulting microchips incorporate analogue or digital memories, so that selective read-out of only potentially useful data can be used to reduce the volume of data that is transmitted and analysed. This allows the recording of particle-collision events at unprecedented rates – the LHC experiments register 40 million events per second, continuously.
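The selective read-out described above amounts to zero suppression: transmit only the channels that carry signal. A toy sketch (the threshold and data are invented for illustration, not any real chip’s values or logic):

```python
def zero_suppress(samples, threshold):
    """Keep only (channel, value) pairs above threshold -- the essence
    of the selective read-out that lets on-chip memories cut the data
    volume sent off-detector. Values below threshold are dropped."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# A mostly empty read-out frame: only two channels carry real signal.
raw = [0, 1, 0, 0, 57, 3, 0, 0, 0, 12, 0, 0]
print(zero_suppress(raw, 5))   # → [(4, 57), (9, 12)]
```

Twelve channels collapse to two (channel, value) pairs, which is why sparse detectors can sustain collision rates of tens of megahertz.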


Last November, 25 years after the chip-design group was set up, some of those involved in the early days of these developments – including Antonino Zichichi, the initiator of LAA – met at CERN to celebrate the project and its vital role in establishing microelectronics at CERN. There were presentations from Erik Heijne and Alessandro Marchioro, who were among the founding members of the group, and from Jim Virdee, who is one of the founding fathers of the CMS experiment at the LHC. Together, they recalled the birth and gradual growth to maturity of microelectronics at CERN.

The beginnings

The story of advanced ASIC design at CERN began around the time of UA1 and UA2, when the Super Proton Synchrotron was operating as a proton–antiproton collider, to supply enough interaction energy for discovery of the W and Z bosons. In 1988, UA2 became, by chance, the first collider experiment to exploit a silicon detector with ASIC read-out. Outer and inner silicon-detector arrays were inserted into the experiment to solve the difficulty of identifying the single electron that comes from a decay of the W boson, close to the primary interaction vertex. The inner silicon-detector array with small pads could be fitted in the 9 mm space around the beam pipe, thanks to the use of the AMPLEX – a fully fledged, 16-channel 3-μm CMOS chip for read-out and signal multiplexing.

The need for such read-out chips was triggered by the introduction of silicon microstrip detectors at CERN in 1980 by Erik Heijne and Pierre Jarron. These highly segmented silicon sensors allow micrometre precision, but the large numbers of parallel sensor elements have to be dealt with by integrated on-chip signal processing. To develop ideas for such detector read-out, in the years 1984–1985 Heijne was seconded to the University of Leuven, where the microelectronics research facility had just become the Interuniversity MicroElectronics Centre (IMEC). It soon became apparent that CMOS technology was the way ahead, and the experience with IMEC led to Jarron’s design of the AMPLEX.

(Earlier, in 1983, a collaboration between SLAC, Stanford University Integrated Circuits Laboratory, the University of Hawaii and Bernard Hyams from CERN had already initiated the design of the “Microplex” – a silicon-microstrip detector read-out chip using nMOS, which was eventually used in the MARK II experiment at SLAC in the summer of 1990. The design was done in Stanford by Sherwood Parker and Terry Walker. A newer iteration of the Microplex design was used in autumn 1989 for the microvertex detector in the DELPHI experiment at LEP.)

The first digital ASIC

Heijne and Jarron were keen to launch chip design at CERN, as was Alessandro Marchioro, who was interested in developing digital microelectronics. However, finances were tight after the approval of LEP. With the appearance of new designs, the tools and methodologies developed in industry had to be adopted. For example, performing simulations was better than the old “try-and-test technique” of wire wrapping, but this required the appropriate software, including licences and training. The LAA project came at just the right time, allowing the chip-design group to start work in the autumn of 1988, with a budget for workstations, design software and analysis equipment – and crucially, up to five positions for chip-design engineers, most of whom remain at CERN to this day.

On the analogue side, there were three lines to the proposed research programme within LAA: silicon-microstrip read-out, a silicon micropattern pixel detector and R&D on chip radiation-hardness. The design of the first silicon-strip read-out chip at CERN – dubbed HARP for Hierarchical Analog Readout Processor – moved ahead quickly. The first four-channel prototypes were already received in 1988, with work such as the final design verification and layout check still being done at IMEC.

The silicon micropattern pixel detector, with small pixels in a 2D matrix, required integration of the sensor matrix and the CMOS read-out chip, either in the same silicon structure (monolithic) or in a hybrid technology with the read-out chip “bump bonded” to each pixel. Such a chip was developed as a prototype at CERN in 1989 in collaboration with Eric Vittoz of the Centre Suisse d’Electronique et de Microtechnique and his colleagues at the École polytechnique fédérale de Lausanne. While it turned out that this first chip could not be bump bonded, it successfully demonstrated the concept. In 1991, the next pixel-read-out chip designed at CERN was used in a three-unit “telescope” to register tracks behind the WA94 heavy-ion experiment in the Omega spectrometer. This test convinced the physicists to propose an improved heavy-ion experiment, WA97, with a larger telescope of seven double planes of pixel detectors. This experiment not only took useful data, but also proved that the new hybrid pixel detectors could be built and exploited.

Research on radiation hardness in chips remained limited within the LAA project, but took off later with the programme of the Detector Research and Development Committee (DRDC) and the design of detectors for the LHC experiments. Initially, it was more urgent to show the implementation of functioning chips in real experiments. Here, the use of AMPLEX in UA2 and later the first pixel chips in WA97 were crucial in convincing the community.

In parallel, components such as time-to-digital converters (TDCs) and other Fastbus digital-interface chips were successfully developed at CERN by the digital team. The new simulation tools purchased through the financial injection from the LAA project were used for modelling real-time event processing in a Fastbus data-acquisition system. This was to lead to high-performance programmable Fastbus ASICs for data acquisition in the early 1990s. Furthermore, a fast digital 8-bit adder-multiplier with a micropipelined architecture for correcting pedestals, based on a 1.2 μm CMOS technology, was designed and used in early 1987. By 1994, the team had designed a 16-channel TDC for the NA48 experiment, with a resolution of 1.56 ns, which could be read out at 40 MHz. The LAA had well and truly propelled the engineers at CERN into the world of microtechnology.

The LAA

The LAA programme, proposed by Antonino Zichichi and financed by the Italian government, was launched as a comprehensive R&D project to study new experimental techniques for the next step in hadron-collider physics at multi-tera-electron-volt energies. The project provided a unique opportunity for Europe to take a leading role in advanced technology for high-energy physics. It was open to all physicists and engineers interested in participating. A total of 40 physicists, engineers and technicians were recruited, and more than 80 associates joined the programme. Later in the 1990s, during the operation of LEP for physics, the programme was complemented by the activities overseen by CERN’s Detector R&D Committee.

The challenge for the LHC

A critical requirement for modern high-energy-physics detectors is to be highly “transparent”, maximizing the interaction of particles with the active part of the sensors while minimizing similar interactions with auxiliary material such as electronics components, cables, cooling and mechanical infrastructure – all while consuming the absolute minimum of power. Detectors with millions of channels can be built only if each channel consumes no more than milliwatts of power. In this context, the developments in microelectronics offered a unique opportunity, allowing the read-out system of each detector to be designed to provide optimal signal-to-noise characteristics for minimal power consumption. In addition, auxiliary electronics such as high-speed links and monitoring electronics could be highly optimized to provide the best solution for system builders.

However, none of this was evident when thoughts turned to experiments for the LHC. The first workshop on the prospects for building a large proton collider in the LEP tunnel took place in Lausanne in 1984, the year following the discovery of the W and Z bosons by UA1 and UA2. A prevalent saying at the time was “We think we know how to build a high-energy, high-luminosity hadron collider – we don’t have the technology to build a detector for it.” Over the next six years, several seminal workshops and conferences took place, during the course of which the formidable experimental challenges started to appear manageable, provided that enough R&D work could be carried out, especially on detectors.


The LHC experiments needed special chips with a rate capability compatible with the collider’s 40 MHz/25 ns cycle time and with a fast signal rise time to allow each event to be uniquely identified. (Recall that LEP ran with a 22 μs cycle time.) Thin – typically 0.3 mm – silicon sensors could meet these requirements, having a dead time of less than 15 ns. With sub-micron CMOS technology, front-end amplifiers could also be designed with a recovery time of less than 50 ns, therefore avoiding pile-up problems.

Thanks to the LAA initiative and the launch in 1990 by CERN of R&D for LHC detectors, overseen by the DRDC, technologies were identified and prototyped that could operate well in the harsh conditions of the LHC. In particular, members of the CERN microelectronics group pioneered the use of special full custom-design techniques, which led to the production of chips capable of withstanding the extreme radiation environment of the experiments while using a commercially available CMOS process. The first full-scale chip developed using these techniques is the main building block of the silicon-pixel detector in the ALICE experiment. Furthermore, in the case of CMS, the move to sub-micron 0.25-μm CMOS high-volume commercial technology for producing radiation-hard chips enabled the front-end read-out for the tracker to be both affordable and delivered on time. This technology became the workhorse for the LHC and has been used since for many applications, even where radiation tolerance is not required.

An example of another area that benefited from an early launch, assisted by the LAA project, is optical links. These play a crucial role in transferring large volumes of data, an important example being the transfer from the front ends of detectors that require one end of the link to be radiation hard – again, a new challenge.

Today, applications that require a high number of chips can profit from the increase in wafer size, with many chips per wafer, and low cost in high-volume manufacturing. This high level of integration also opens new perspectives for more complexity and intelligence in detectors, allowing new modes of imaging.

Looking ahead

Many years after Moore’s law was suggested, miniaturization still seems to comply with it. There has been continuous progress in silicon technology, from 10 μm silicon MOS transistors in the 1970s to 20 nm planar silicon-on-insulator transistors today. Extremely complex FinFET devices promise further downscaling to 7 nm transistors. Such devices will allow even more intelligence in detectors. The old dream of having detectors that directly provide physics primitives – namely, essential primary information about the phenomena involved in the interaction of particles with matter – instead of meaningless “ADC counts” or “hits” is now fully within reach. It will no longer be necessary to wait for data to come out of a detector because new technology for chips and high-density interconnections will make it possible to build in direct vertex-identification, particle-momenta evaluation, energy sums and discrimination, and fast particle-flow determination.

Some of the chips developed at CERN – or the underlying ideas – have found applications in materials analysis, medical imaging and various types of industrial equipment that employ radiation. Here, system integration has been key to new functionalities, as well as to cost reduction. The Medipix photon-counting chip developed in 1997 with collaborators in Germany, Italy and the UK is the ancestor of the Timepix chip that is used today, for example, for dosimetry on the International Space Station and in education projects. Pixel-matrix-based radiation imaging also has many applications, such as for X-ray diffraction. Furthermore, some of the techniques that were pioneered and developed at CERN for manufacturing chips sufficiently robust to survive the harsh LHC conditions are now adopted universally in many other fields with similar environments.

Looking ahead to Europe’s top priority for particle physics, exploitation of the LHC’s full potential until 2035 – including the luminosity upgrade – will require not only the maintenance of detector performance but also its steady improvement. This will again require a focused R&D programme, especially in microelectronics because more intelligence can now be incorporated into the front end.

Lessons learnt from the past can be useful guides for the future. The LAA project propelled the CERN electronics group into the new world of microelectronic technology. In the future, a version of the LAA could be envisaged for launching CERN into yet another generation of discovery-enabling detectors exploiting these technologies for new physics and new science.
