Synchrotron X-ray sources have become essential tools across the sciences, medicine and engineering. To continue the rapid pace of advances in these fields, researchers need much better high-intensity sources of short-wavelength X-rays to capture ultrafast phenomena and probe materials with atomic resolution. Existing continuous-duty synchrotrons fall short because they produce mostly incoherent light, so there is now a worldwide race to build coherent, high-flux hard X-ray sources.
Cornell University has for some time received funding to conceive, design and prototype innovative superconducting technology for an energy-recovery linac (ERL) as a basis for a next-generation source. Towards the end of 2011, the team at Cornell passed three important milestones on the road towards a coherent source and is now within striking distance of delivering performance that matches theoretical limits.
The goal of an ERL light source is to create ultralow-emittance electron bunches, accelerate them in a superconducting linear accelerator and then circulate them only once through a series of small-gap undulators to produce ultrabright, short pulses with a high fraction of transversely coherent, hard X-ray light. The electrons’ energy is then recovered to accelerate a new, high-brightness beam. Three of the biggest R&D challenges are to demonstrate an electron injector with sufficient current, an injector beam with sufficiently small emittances and a superconducting linac with sufficiently low energy consumption.
Cornell’s prototype injector has achieved the first milestone by delivering a continuous-duty current of 35 mA. This is the world record for any laser-driven photocathode electron gun and is above the specification for one of the proposed operating modes. The team is now ramping up the current to even higher levels.
The normalized emittance goals at Cornell are around 0.1 mm mrad for a bunch charge of about 20 pC and 0.3 mm mrad at some 80 pC, i.e. for a 100 mA beam of 1.3 GHz bunches. The emittances achieved for the bunch cores (the central 2/3 of the bunch) are <0.15 mm mrad at 20 pC and 0.3 mm mrad at 80 pC. The team expects even better values as the injector voltages are ramped up. For comparison, at 5 GeV a normalized emittance of 0.1 mm mrad corresponds to a geometric horizontal emittance of 10 pm rad. The current world-record storage-ring source, PETRA III at DESY, operates with a geometric horizontal emittance of 1 nm rad.
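The arithmetic behind that comparison is simple: for ultrarelativistic electrons the geometric emittance is the normalized emittance divided by the Lorentz factor γ. A minimal sketch, using only the numbers quoted above:

```python
# Geometric emittance from normalized emittance, assuming beta*gamma ~ gamma
# for ultrarelativistic electrons.

E_BEAM_MEV = 5000.0          # 5 GeV beam energy
ELECTRON_MASS_MEV = 0.511    # electron rest energy

gamma = E_BEAM_MEV / ELECTRON_MASS_MEV   # Lorentz factor, roughly 9800

eps_norm = 0.1e-6            # normalized emittance: 0.1 mm mrad in m rad
eps_geom = eps_norm / gamma  # geometric emittance in m rad

print(f"gamma = {gamma:.0f}")
print(f"geometric emittance = {eps_geom * 1e12:.1f} pm rad")  # ~10 pm rad
```

This recovers the quoted 10 pm rad, two orders of magnitude below PETRA III's 1 nm rad.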
Finally, the energy requirement for the first prototype cavity of Cornell’s X-ray ERL has been shown in a vertical cryogenic test to be as small as proposed (Q0 = 2 × 10¹⁰ at 16 MV/m), thus reaching the third milestone.
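To see what such a Q0 means in practice, the RF power dissipated in the cavity walls can be estimated from P = V²/((R/Q)·Q0). The active length and R/Q used below are assumptions typical of a 7-cell 1.3 GHz elliptical cavity, not Cornell's published figures:

```python
# Rough, illustrative estimate of cavity wall dissipation, P = V^2 / ((R/Q) * Q0).
# L_ACTIVE and R_OVER_Q are assumed typical values, not the actual design numbers.

Q0 = 2e10          # measured intrinsic quality factor
E_ACC = 16e6       # accelerating gradient, V/m
L_ACTIVE = 0.81    # assumed active length, m (7 cells at 1.3 GHz)
R_OVER_Q = 400.0   # assumed R/Q, ohms

V = E_ACC * L_ACTIVE                # accelerating voltage, ~13 MV
P_diss = V**2 / (R_OVER_Q * Q0)     # power dissipated in the cavity walls

print(f"P_diss = {P_diss:.0f} W per cavity")
```

Under these assumptions the dissipation is of order 20 W per cavity, illustrating why a high Q0 is decisive for the cryogenic budget of a continuous-duty linac.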
While much remains to be done, these achievements show that even this first prototype injector, coupled with a linac and long undulators in a full-scale ERL light source, would already produce continuous-duty (1.3 GHz) hard X-ray beams of unprecedented coherence and pulse brevity.
The SPIN@COSY polarized-beam team has found unexpectedly strong higher-order spin resonances when using 2.1 GeV/c polarized protons stored in the COSY COoler SYnchrotron at the Forschungszentrum Jülich. These results may help to increase the polarization in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven when it is used as a 250 GeV/c polarized proton collider. The data were taken in April 2004 and presented in part at the International Spin Physics Symposium at Trieste the following September. However, the many partly overlapping data points had an unusually large scatter, so they allowed no firm conclusions to be reached. Now, after a challenging reanalysis, the team has published results that may spell good news for the work at RHIC.
In all of the other experiments by SPIN@COSY, the data showed the expected spread when beam parameters were varied upwards, e.g. in steps of 2, 4, 6, 8, 10, and then downwards, e.g. in steps of 9, 7, 5, 3, 1. However, this was not true for the April 2004 data. Indeed, when several of the sweeps were repeated a second and third time, the data-spread increased. Clearly, these data needed a reanalysis, which began in 2009 after the other data from SPIN@COSY had been published.
The team obtained data for each resonance by measuring the polarization after sweeping COSY’s vertical (and later horizontal) betatron tunes slowly through the resonance with a narrow tune range of 0.002 in 2 s, giving a tune sweep-rate of 0.001/s. The sweep-rate through all of the other resonances was made about 250 times faster by sweeping through a much larger range in a shorter time, typically a range of 0.125 in 0.5 s. This reduced the effect of all of the other resonances by about 250-fold.
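The factor of 250 quoted above follows directly from the two sweep settings; a one-line check:

```python
# The sweep-rate arithmetic quoted above, made explicit.
slow_rate = 0.002 / 2.0      # tune range 0.002 in 2 s   -> 0.001 per second
fast_rate = 0.125 / 0.5      # tune range 0.125 in 0.5 s -> 0.25 per second

print(fast_rate / slow_rate)  # other resonances are crossed ~250 times faster
```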
One possible cause of the spread in the data was a variation in the polarization stability of the polarized H⁻ ion source. Polarized ion sources are sensitive devices, with several sextupoles and RF transition units, whose fields and frequencies must be precisely matched to maximize the polarization. The source polarization at COSY is measured by a low-energy polarimeter (LEP) and the values stored on a nearby computer; but in 2004 this computer was not connected to the computers in the main control room, where the 2.1 GeV/c data were stored. Fortunately, however, the stored LEP data from 2004 were still available in 2009, so that several gigabytes of data could be transferred from COSY to Michigan. There, a small team of two post-docs and four undergraduate students matched the LEP data from the source in time with the corresponding 2.1 GeV/c data. When the 2.1 GeV/c polarization data were renormalized to the LEP polarization data, much of the spread disappeared.
The team then refined the vertical betatron-tune data further by using what is possibly a new technique for combining many partly overlapping data points in an unbiased manner. Thirty-six pairs of data points were sequentially recombined, whenever both lay within a sequentially increasing (in 0.001 steps) range in betatron tune. This recombination continued for 76 steps until the results of recombining the data from low-to-high vertical betatron tunes and from high-to-low tunes were all identical. In the horizontal data only five pairs of overlapping points needed to be recombined.
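The recombination step can be illustrated with a simplified sketch: pairs of points whose tunes fall within a window are replaced by their average. This is not the team's actual code, and a fixed window is used here rather than the sequentially widened one they describe; the data values are invented for illustration:

```python
# Simplified sketch of pairwise recombination of overlapping data points.
# The real analysis widened the tune window in 0.001 steps over 76 iterations;
# here a single fixed window is applied to invented (tune, polarization) data.

def recombine(points, window):
    """Merge neighbouring (tune, polarization) points closer than `window`."""
    pts = sorted(points)
    merged = [pts[0]]
    for tune, pol in pts[1:]:
        last_tune, last_pol = merged[-1]
        if tune - last_tune < window:
            # replace the overlapping pair by its unweighted average
            merged[-1] = ((tune + last_tune) / 2, (pol + last_pol) / 2)
        else:
            merged.append((tune, pol))
    return merged

data = [(3.610, 0.70), (3.6105, 0.72), (3.620, 0.40), (3.6203, 0.38)]
result = recombine(data, window=0.001)
print(result)   # the two overlapping pairs collapse into two averaged points
```

Because the merge is a symmetric average, repeating it from low-to-high and high-to-low tunes converges to the same answer, which is the unbiasedness criterion the team used.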
Figure 1 shows the results of this two-step reanalysis. The data clearly contradicted, for the 2nd- and 3rd-order resonances, the long-held belief that lower-order spin resonances always cause more depolarization than higher-order resonances and that vertical spin resonances always cause more depolarization than horizontal ones. The results showed that the single 2nd-order vertical resonance was far weaker than two of the 3rd-order vertical resonances (figure 2). They also showed that, while the 1st-order vertical resonance was so much stronger than the 1st-order horizontal resonance that it fully flipped the spin direction, this was certainly not true for the 2nd- and 3rd-order resonances. These unexpected results may help the RHIC polarized collider to raise its polarization at 250 GeV closer to the level achieved at 100 GeV.
After 20 years of continuous operation, the EXPLORER gravitational-wave detector has come to the end of its long life as an experiment and left CERN. On 23 January it set off for a new existence at the European Gravitational Observatory (EGO) in Cascina, near Pisa, where it will become the main attraction in a new museum area. The detector’s main results span from the first modern upper limits on signals for gravitational waves bathing the Earth to the measurement of the dynamic gravitational field generated by an artificial source; from correlations with γ-ray and neutrino bursts to the acoustic detection of cosmic rays.
EXPLORER was the first gravitational-wave detector to reach the sensitivity and stability needed to perform long-term observations. Built and operated by INFN’s gravitational-wave groups in Rome and Frascati – first led by Edoardo Amaldi and Guido Pizzella and then by Eugenio Coccia – it was based on a cryogenic mechanical resonator in the shape of a 3-m-long aluminium cylindrical bar cooled to 2 K. It could be driven by a gravitational wave with spectral components at the bar’s resonant frequency, that is, about 1 kHz, and made use of superfluid helium to reduce thermal and vibrational noise and to allow the exploitation of high-sensitivity transducers and superconducting amplifiers. The experiment was able to detect changes as small as 10⁻¹⁹ m in the bar’s vibrational amplitude – a real achievement.
EXPLORER’s gravitational-wave sensitivity was limited to the strongest sources in the Galaxy. Now, the future of the field is represented by the network of large interferometers: the Laser Interferometer Gravitational-Wave Observatory with two interferometers in the US, Virgo at the EGO site in Italy, GEO in Germany and the Large-scale Cryogenic Gravitational wave Telescope in Japan. This network, which will comprise advanced versions of the instruments, should start detecting signals from many thousands of galaxies from the year 2015; typical sources of gravitational waves include supernovae, pulsars, and collisions of neutron stars and black holes. In the meantime, for the next three years, the Galaxy will be monitored by two modern cryogenic bars – Nautilus in INFN’s Frascati Laboratory and Auriga in INFN’s Legnaro Laboratory – and by the GEO interferometer in Hanover.
The Large Hadron Collider (LHC) has been exploring the new high-energy frontier since 2009, attracting a global user-community of more than 7000 scientists. At the start of 2011, the long-term programme for the LHC had a minimum goal of an integrated luminosity (a measure of the number of recorded collisions) of at least 1 fb⁻¹. Thanks to better-than-anticipated performance, the year ended with almost six times this amount delivered to each of the two general-purpose experiments, ATLAS and CMS.
The LHC is the pinnacle of 30 years of technological development. Set to remain the most powerful accelerator in the world for at least two decades, its full exploitation is the highest priority in the European Strategy for Particle Physics, adopted by the CERN Council and integrated into the European Strategy Forum on Research Infrastructures (ESFRI) Roadmap. However, beyond the run in 2019–2021, halving the statistical error in the measurements will require more than 10 years of running – unless the nominal luminosity is increased by a considerable amount. The LHC will need a major upgrade after 2020 to maintain scientific progress and exploit its full capacity. The aim is to increase its luminosity by a factor of 5–10 beyond the original design value and provide 3000 fb⁻¹ in 10 to 12 years.
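The scaling behind the statement about statistical error is worth making explicit: statistical errors shrink as 1/√N, so halving them requires roughly four times the integrated luminosity. A one-line sketch (the baseline luminosity below is illustrative, not a quoted figure):

```python
# Relative statistical error scales as 1/sqrt(integrated luminosity).
import math

lumi_now = 300.0   # fb^-1, purely illustrative baseline

def rel_error(lumi):
    return 1.0 / math.sqrt(lumi)

ratio = rel_error(lumi_now) / rel_error(4 * lumi_now)
print(ratio)   # quadrupling the data sample halves the statistical error
```

This is why, without a luminosity increase, each further halving of the error costs disproportionately more running time.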
From a physics perspective, operating at a higher luminosity has three main purposes: to perform more accurate measurements on the new particles discovered at the LHC; to observe rare processes that occur at rates below the current sensitivity, whether predicted by the Standard Model or by the new physics scenarios unveiled by the LHC; and to extend exploration of the energy frontier, to increase the discovery reach with rare events in which most of the proton momentum is concentrated in a single quark or gluon.
Technological challenges
The LHC will also need technical consolidation and improvement. For example, radiation sensitivity of electronics may already be a limiting factor for the LHC in its current form. Transferring equipment such as power supplies from the tunnel to the surface requires a completely new scheme for “cold powering”, with a superconducting link to carry some 150 kA over 300 m with a vertical step of 100 m – a great challenge for superconducting cables and cryogenics.
With such a highly complex and optimized machine, an upgrade must be studied carefully and will require about 10 years to implement (figure 1). This has given rise to the High-Luminosity LHC (HL-LHC) project, which relies on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 12 T superconducting magnets with large aperture, compact and ultraprecise superconducting cavities for beam rotation, new types of collimators and 300-m long, high-power superconducting links with almost zero energy dissipation.
The high-luminosity upgrade therefore represents a leap forward for key hardware components. The most technically challenging aspects of these cannot be done by CERN alone but will instead require strong collaboration involving external expertise. For this reason part of the HL-LHC project is grouped under the HiLumi LHC Design Study, which is supported in part by funding from the Seventh Framework programme (FP7) of the European Commission (EC).
Six work packages
HiLumi LHC comprises six work packages (WP), which are all overseen by the project management and technical co-ordination (WP1). Accelerator physics (WP2) is at the heart of the design study and it relates closely to the WPs that are organized around the main equipment on which the performance of the upgrade relies (figure 2). The first aim is to reduce β* (the beam focal length at the collision point), so the insertion-region magnets (WP3) that accomplish this function are the first set of hardware to consider. Crab cavities (WP4) will then make the decreased β* really effective by eliminating the reduction caused by geometrical factors; they will also provide levelling of the luminosity during a fill. Collimators (WP5) are necessary to protect the magnets from the 500 MJ stored energy in the beam – a technical stop to change a magnet would take 2–3 months. Superconducting links (WP6) will avoid radiation damage to electronics and ease installation and integration in what is a crowded zone of the tunnel. The remaining WPs of HL-LHC are not included in the FP7 Design Study as they refer to accelerator functions or processes that will be carried out within CERN (with the exception of the 11 T dipole project for collimation in the cold region of the dispersion suppressor, which is the subject of close collaboration with Fermilab).
The 20 participants within the HiLumi LHC Design Study include institutes from France, Germany, Italy, Spain, Switzerland and the UK, as well as organizations from outside the European Research Area, such as Russia, Japan and the US. As well as providing resources, participants are sharing expertise and responsibilities for the intellectual challenges.
The Japanese and US contributions constitute roughly one third of the manpower for the design study and are well anchored in existing partnerships formed during the construction of the LHC, namely the CERN-KEK collaboration and the US LHC Accelerator Research Program (LARP). Japan participates as a beneficiary without funding and the US laboratories are associates connected to the project via a memorandum of understanding. The participation of leading US and Japanese laboratories enables the implementation of the construction phase as a global project. The proposed governance model is tailored accordingly and could pave the way for the organization of other global research infrastructures.
The four-year HiLumi LHC Design Study was launched last November with a meeting attended by almost 160 participants, half of whom were from institutes beyond CERN. The meeting was held jointly with LARP because HL-LHC builds on both US and European activities. It included a meeting of the collaboration board, during which Michel Spiro, president of the CERN Council, presented the necessary steps for inclusion in the updated European Strategy for Particle Physics. CERN Council will discuss the updated strategy in March 2013 and plans to adopt it in a special session in Brussels in early summer 2013. Spiro’s presentation showed that with respect to the initially proposed timeline of HiLumi LHC, the Preliminary Design Report will now need to advance by one year to be ready by the end of 2012.
The FP7 HiLumi LHC Design Study thus combines and structures the efforts and R&D of a large community towards the ambitious HL-LHC objectives. It acts as a catalyst for ideas, helping to streamline plans and formalize collaborations. When evaluated by the EC, the design study proposal scored 15 out of 15 and was ranked top of its category, receiving funding of €4.9 million. “The appeal of the HiLumi LHC Design Study is that it goes beyond CERN and Europe to a worldwide collaboration,” stated Christian Kurrer, EC project officer of HiLumi LHC at the meeting in November. “This will further strengthen scientific excellence in Europe.”
• For more details about the High Luminosity upgrade and the HiLumi LHC Design Study, see http://cern.ch/HiLumiLHC.
Micropattern gaseous detectors (MPGDs) have opened a new era in state-of-the-art technologies and are the benchmark for gas-detector developments beyond the LHC. They could eventually enable a plethora of new radiation-detector concepts in fundamental science, medical imaging, security inspection and industry. Given the ever-growing interest in this rapidly developing field, an international conference series on MPGDs was founded in 2009 to provide a scientific forum to review current highlights, new results and concepts, applications and future trends, with the first conference organized in Crete. The second in the series, MPGD2011, took place in Kobe on 29 August – 1 September. With two years having passed since the previous meeting, there were many new developments to discuss.
The conference was held at the Maiko Villa Kobe hotel, which is located near the Akashi Strait Bridge. Connecting the Japanese mainland with Awaji island, this is the world’s largest suspension bridge. It was clearly visible from the venue and symbolically emphasized the connection and synergy of the worldwide communities. Half of the 120 participants were from overseas, visiting from 16 countries. Attendance was clearly unaffected by the Great East Japan Earthquake on 11 March 2011, which was in contrast to many other international conferences and events in Japan in 2011 that were cancelled owing to low participation from foreign countries following the disaster.
Japan leads all countries in successful partnership between academia and industry in the development of particle-physics detectors. MPGD developments have been an active field in the country since the early 1990s, shortly after the invention of the micro-strip gas chamber (MSGC). However, in the Asian region and especially in Japan, most MPGD R&D has been carried out independently from other countries. Elsewhere, worldwide interest in the technological development and the use of the novel MPGD technologies led to the establishment of the international research collaboration RD51 at CERN in 2008. By 2011, 80 institutes from 25 countries had joined the collaboration. Only one institute from Japan – Kobe University – has so far joined RD51, although there is an annual domestic MPGD workshop with some 80 participants and around 30 presentations. Holding the international MPGD conference in Japan, followed by a meeting of the RD51 collaboration on 2–3 September, was highly important from the perspective of improving communication and enhancing the synergy between the worldwide MPGD communities.
MPGDs are a relatively novel kind of particle detector, based on gaseous multiplication using micro-pattern electrodes instead of thin wires in a multiwire proportional chamber (MWPC). By using a pitch size of a few hundred micrometres, which is an order-of-magnitude improvement in granularity over wire chambers, these detectors offer an intrinsic high rate-capability (>10⁶ Hz/mm²), excellent spatial resolution (around 30–50 μm) and single-photoelectron time resolution in the nanosecond range. The MSGC, a concept invented by Anton Oed in 1988, was the first of the microstructure gas detectors. Further advances in photolithography techniques gave rise to more powerful devices, in particular, the micromesh gaseous structure (Micromegas) of Ioannis Giomataris and colleagues in 1996 and the gas-electron multiplier (GEM) of Fabio Sauli in 1997. Both of these devices exhibit improved operational stability and increased radiation hardness. During their evolution, many types of MPGDs have arisen from the initial ideas, such as the thick GEM (THGEM), the resistive thick GEM (RETGEM), the microhole and strip plate (MHSP) and the micropixel gas chamber (μ-PIC).
Today, a large number of groups worldwide are developing MPGDs for:
• future experiments at particle accelerators (upgrades of muon detectors at the LHC and novel MPGD-based devices for time-projection chambers (TPCs) and digital hadron calorimetry at a future linear collider);
• experiments in nuclear and hadron physics (KLOE2 at DAΦNE, the PANDA and CBM experiments at the Facility for Antiproton and Ion Research, STAR at the Relativistic Heavy Ion Collider, SBS at Jefferson Lab and many others);
• experiments in astroparticle physics and neutrino physics;
• and industrial applications such as medical imaging, material science and security inspection.
This report cannot summarize all of the interesting developments in the MPGD field but it illustrates the richness with a few conference highlights and their implications.
During the three days of MPGD2011, results were presented in 39 plenary talks – including three review talks – and some 30 posters. Five industrial companies linked closely to MPGD technologies also exhibited their products.
Marcel Demarteau of Argonne National Laboratory discussed the paramount importance of the interplay between future physics challenges and the development of advanced detector concepts, with instrumentation being the enabler of science, both pure and applied. The greatest payoffs will come from fundamentally reinventing mainstream technologies under a new paradigm of integration of electronics and detectors, as well as integration of functionality. As an example, several conference talks discussed recent progress in the development of integrated Micromegas (InGrid) directly on top of a CMOS micropixel anode (the Timepix chip), which offers a novel and fully integrated read-out solution. These detectors will be used in searching for solar axions in the CAST experiment at CERN and are also under study for a TPC at the International Linear Collider and for a pixellized tracker (the “gas on slimmed silicon pixels” or GOSSIP detector) for the upgrades of the LHC experiments.
A key point that must be solved to advance with MPGDs is the industrialization of the production and manufacturing of large-size detectors. Rui de Oliveira of CERN discussed the current status of the new facility for large-size MPGD production at CERN, which will be able to produce 2 m × 0.6 m GEMs, 1.5 m × 0.6 m Micromegas and 0.8 m × 0.4 m THGEMs. He also presented recent developments and improvements of fabrication techniques – single-mask GEMs and resistive “bulk Micromegas”. GEM and Micromegas prototypes with a size of nearly 1 m² have been produced in the CERN workshop for the ATLAS and CMS muon upgrades for the future high-luminosity LHC (the HL-LHC project; see “Designs on higher luminosity”). Large-area cylindrical GEMs are currently being manufactured for the KLOE2 inner tracker.
Moving away from applications in particle physics, large-area MPGDs are being developed for muon tomography to detect nuclear contraband and for tomographic densitometry of the Earth. Industry has also become interested in manufacturing MPGD structures; technology-transfer activities and collaboration have been actively pursued during the past year with several companies in Europe, Japan, Korea and the US.
One of the highlights of MPGD2011 was the recent trend in the development of MPGDs with resistive electrodes. This technique is an attractive way to quench discharges, thus improving the robustness of the detector against sparks. There were more than 10 presentations devoted to resistive MPGDs. The resistive bulk Micromegas for the ATLAS muon upgrade (MAMMA) employs a 2D read-out board utilizing resistive strips on top of the insulator, covering copper strips (figure 2). This industrial-assembly process allows regular production of large, robust and inexpensive detector modules. The design has achieved stable operation in the presence of heavily ionizing particles and neutron background, similar to the conditions expected in the ATLAS cavern in the HL-LHC upgrade. There were also other presentations describing basic developments of the GEM, THGEM and μ-PIC using resistive materials.
Alexey Buzulutskov of the Budker Institute of Nuclear Physics, Novosibirsk, reviewed recent advances in cryogenic avalanche detectors, operated at low temperatures (from a few tens of kelvin down to a few kelvin). Recent progress in the operation of cascaded MPGDs at cryogenic temperatures could pave the road toward their potential application in: the next-generation neutrino physics and proton-decay experiments; liquid-argon TPCs for direct dark-matter searches; positron-emission tomography (PET); and a noble-liquid Compton telescope combined with a micro-PET camera.
The MPGD2011 conference also featured a physics presentation announcing the observation of electron-neutrino appearance events using the beam from the Japan Proton Accelerator Research Complex to the Super-Kamiokande detector. The near-detector measurements relied on three large-volume TPCs, instrumented with bulk Micromegas detectors and read out via some 80,000 channels. This is a good example of the interplay between physics and technology. Last, but not least, interesting results on gaseous photomultipliers with caesium-iodide and bialkali photocathodes, coupled to GEM, THGEM and Micromegas structures, were reported at the conference. A sealed prototype of an MPGD sensitive to visible light has been produced by Hamamatsu.
A design study for the High-Luminosity LHC (HL-LHC) project to upgrade the LHC has been launched with a meeting at CERN to bring together scientists and engineers from 20 institutes in Europe, Japan and the US. The goal is to prepare the ground for an LHC-luminosity upgrade scheduled for around 2020, which aims to take the LHC’s luminosity to 200–300 fb⁻¹ per year, a factor of 5–10 above the current design value. Upgrading the LHC for higher luminosity will involve a number of innovative technologies and challenges. These include cutting-edge 12 T superconducting magnets, compact and ultraprecise superconducting cavities for beam rotation, as well as 300-m-long, high-power superconducting links with almost zero energy dissipation.
The meeting on 16–18 November marked the initial step in the HiLumi LHC Design Study, which is supported through the European Commission’s Seventh Framework programme (FP7). Drawing on expertise from around the world, this 1st HiLumi LHC Collaboration Meeting included scientists and engineers from the well established CERN-KEK collaboration and US LHC Accelerator Research Program (LARP). Because it was a joint meeting with LARP, the first day was organized as a LARP collaboration meeting (LARP CM17), with a plenary session followed by two parallel sessions (Accelerator Physics and Magnet R&D), where the accent was on the LARP programme and its results.
The second day included various parallel sessions organized for LARP and for the HiLumi LHC study, as well the first HiLumi LHC Plenary Session, with presentations by the management and technical co-ordination. The first HiLumi LHC Collaboration Board also took place during a period of parallel sessions. The third day was devoted to a HiLumi LHC Public Session in CERN’s Main Auditorium to review the HL-LHC programme and CERN’s plans, as well as to discuss the US and Japanese involvement and the status of various work packages in the HiLumi LHC study.
For more about the meeting, see http://hilumilhc.web.cern.ch/HiLumiLHC/. A more detailed report on the study will appear in the March issue of CERN Courier.
European funding agencies have welcomed the priorities for the future of astroparticle physics defined by the scientific community and have accepted the recommendations included in the update of the European roadmap for astroparticle physics, published on 21 November 2011.
This update comes after the first ever European roadmap for astroparticle physics, published in 2008. The goal was to define the research infrastructures necessary for the development of the field: the “magnificent seven” of astroparticle physics.
The roadmap is the product of a collaboration between the AStroParticle European Research Area (ASPERA) network of European national funding agencies responsible for astroparticle physics and the Astroparticle Physics European Coordination (ApPEC). “The update of the roadmap provides a better picture of what will come first on the menu,” said Christian Spiering, chair of the ASPERA and ApPEC Scientific Advisory Committee that produced the roadmap. Funding for each project is still subject to national decision-making processes and the roadmap recognizes that not all funding agencies will necessarily support each project.
The strategy reaffirms the support required for current experiments and planned upgrades, in particular in the areas of gravitational waves, dark-matter searches and the measurement of neutrino properties, as well as for underground and space-based infrastructures. The mid-term planning (2015–2020) includes four large projects to be constructed starting from the middle of this decade.
In the domain of tera-electron-volt gamma-ray astrophysics the Cherenkov Telescope Array (CTA) is clearly the worldwide priority project. CTA is an initiative to build the next-generation ground-based, very high-energy gamma-ray observatory, combining proven technological feasibility with a guaranteed scientific perspective. Some 800 scientists from 25 countries have already joined forces to build it.
KM3NeT, the next-generation high-energy neutrino telescope in the Mediterranean Sea, is in its final stages of technology definition, with prototype deployment expected within the next 2–3 years. A project selected by the European Strategy Forum on Research Infrastructures, it is in an EU-funded preparatory phase, having obtained substantial regional funding.
LAGUNA is a megatonne-scale project for low-energy neutrino physics and astrophysics. It is at the interface with the CERN European Strategy update to be delivered in early 2013 and is currently the subject of an EU-funded design study.
Last, but not least, is a ground-based cosmic-ray observatory following in the footsteps of the Pierre Auger Observatory in Argentina. On longer time scales, similarly large infrastructures in the domain of dark energy or gravitational-wave detection are being considered.
In 2010 the Italian government gave the green light for SuperB – a next-generation B factory based on an asymmetric electron–positron (e⁺e⁻) collider, which is to be constructed on the Tor Vergata campus of Rome University (figure 1). The intention is to deliver a peak luminosity of 10³⁶ cm⁻² s⁻¹ to allow the indirect exploration of new effects in the physics of heavy quarks and flavours through studies of large samples of B, D and τ decays. Building on the wealth of results produced by the previous two B factories, PEP-II and KEKB, and their associated detectors, BaBar and Belle, SuperB will produce an unprecedented amount of data and make accessible a range of new investigations.
The SuperB concept represents a real breakthrough in collider design. The low-emittance ring has its roots in R&D for the International Linear Collider (ILC) and could be used as a system-test for the design of the ILC damping ring. The invention of the crab-waist final focus could also have an impact on the current generation of circular colliders.
The SuperB e+e– collider will have two rings with a 1.25 km circumference, one for electrons at 4.18 GeV and one for positrons at 6.7 GeV. There will be one interaction point (IP) where the beams will be squeezed down to a vertical size of only 36 nm rms. The design results from a combination of knowledge acquired at the previous B factories as well as the concepts developed for linear colliders.
The innovative crab-waist principle, which has been successfully tested at Frascati’s Φ factory – the DAΦNE e+e– collider – will allow SuperB to overcome some of the requirements that have proved problematic in previous e+e– collider designs, such as high beam currents and short bunches. While SuperB will have beam currents and bunch lengths similar to those of its predecessors, the use of smaller emittances and the crab-waist scheme for the collision region should produce a leap in luminosity from some 10³⁴ cm⁻² s⁻¹ to an unprecedented 10³⁶ cm⁻² s⁻¹, without increasing the background levels in the experiments or the machine’s power consumption.
High luminosity in particle colliders depends not only on high beam intensity; it also requires a small horizontal beam size and horizontal emittance (a measure of the beam’s phase space), together with a very small value for the vertical β function at the IP, β*y. (The β function in effect gives the envelope of the possible particle trajectories and has a parabolic behaviour around the IP.) However, β*y cannot be made much smaller than the bunch length without running into trouble with the “hourglass” effect, in which particles in the bunch tails experience a much larger βy and a loss in luminosity.
Unfortunately it is difficult to shorten the bunch length in a high-current ring without exciting instabilities and therefore paying in radio-frequency voltage. One way to overcome this is to make the beam crossing-angle relatively large and the horizontal beam size small, so that the region where the two colliding beams overlap is much smaller than the bunch length. In addition, in the crab-waist scheme, two sextupoles at suitable phase-advances from the IP are used to rotate the waist in the β function of one beam such that its minimum value is aligned along the trajectory of the other beam, so maximizing the number of collisions occurring at the minimum β (figure 2). This technique can substantially increase luminosity without having to decrease the bunch length. A crab-waist scheme was tested at DAΦNE in 2008, allowing a peak luminosity three times higher than the previous record for similar currents in the two rings.
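The geometry behind this gain can be sketched in a few lines of code. The snippet below (Python, with illustrative parameter values rather than the official SuperB design numbers) shows why a large crossing angle shrinks the luminous region far below the bunch length, so that a very small vertical β* no longer triggers the hourglass effect:

```python
# Illustrative parameters (assumptions, not the official SuperB design values)
sigma_x = 8e-6    # horizontal rms beam size at the IP [m]
sigma_z = 5e-3    # rms bunch length [m]
theta   = 33e-3   # half crossing angle [rad]
beta_y  = 0.2e-3  # vertical beta function at the waist, beta*_y [m]

def beta(s, beta_star):
    """Parabolic growth of the beta function at distance s from the waist."""
    return beta_star * (1.0 + (s / beta_star) ** 2)

# With a large crossing angle, the two bunches overlap only over a length
# ~ sigma_x / theta, far shorter than the bunch itself:
overlap = sigma_x / theta
print(f"luminous region ~{overlap*1e3:.2f} mm vs bunch length {sigma_z*1e3:.1f} mm")

# Colliding head-on, particles in the tails would see beta_y blown up
# enormously; over the short overlap region the growth is modest:
print(f"beta_y growth over full bunch length: x{beta(sigma_z, beta_y)/beta_y:.0f}")
print(f"beta_y growth over overlap region:    x{beta(overlap, beta_y)/beta_y:.1f}")
```

The crab-waist sextupoles then rotate the waist so that this short overlap region sits at the β minimum along the trajectory of every particle in the other bunch.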
The combination of large crossing-angle and small beam sizes, emittances and beam angular divergences at the IP in the SuperB design will also be effective in decreasing the backgrounds present at the IP with respect to the previous B factories. A limited beam current also contributes to keeping these levels very low at SuperB. However, luminosity-related backgrounds are still relevant and impose serious shielding requirements.
The high luminosity of SuperB, representing an increase of nearly two orders of magnitude over the current generation of B factories, will allow exploration of the contributions of physics beyond the Standard Model to the decays of heavy quarks and heavy leptons. Indeed, new physics can affect rare B-decay modes through observables such as branching fractions, CP-violating asymmetries and kinematic distributions. These decays do not typically occur at tree level, so their rates are strongly suppressed in the Standard Model. Substantial variations in the rates and/or in angular distributions of final-state particles could result from the presence of new heavy particles in loop diagrams, providing clear evidence of new physics. Moreover, because the pattern of observable effects is highly model-dependent, measurements of several rare decay modes can provide information regarding the source of the new physics.
The SuperB data sample will contain unprecedented numbers of charm-quark and τ-lepton decays. Such data are of great interest, both in a capacity to improve the precision of existing measurements and in sensitivity to new physics. This interest extends beyond weak decays; the detailed exploration of new charmonium states is also an important objective. Limits on rare τ decays, particularly lepton-flavour-violating decays, already provide important constraints on models of new physics and SuperB may have the sensitivity to observe such decays. The accelerator design will allow for longitudinal polarization of the electron beam, making possible uniquely sensitive searches for a τ electric dipole moment, as well as for CP-violating τ decays.
Studies of CP-violating asymmetries are among the primary goals of SuperB. In addition to known sources of CP violation, new CP-violating phases arise naturally in many extensions of the Standard Model. These extra phases produce measurable effects in the weak decays of heavy-flavour particles. The detailed pattern of these effects, as well as of rare-decay branching fractions and kinematic distributions, will be made accessible by SuperB’s high luminosity. Such studies will provide unique constraints in, for example, ascertaining the type of supersymmetry breaking or the kind of extra-dimensional model behind the new phenomena. A natural consequence of such detailed studies will be an improved knowledge of the unitarity triangle to the limit allowed by theoretical uncertainties.
In addition to pursuing important research in fundamental physics, SuperB is also taking up the challenge to combine it with a rich programme of applied physics: the synchrotron light emitted by the machine will have a high brightness and will be suitable for studies in life sciences and material science. Current proposals include: the creation and exploitation of beamlines for laser ablation on biomaterials (a technique that, by modifying the surface of the material with a laser, allows the creation of patterns of biological systems); femtochemistry studies (a field that includes the structural study of small numbers of molecules); and the development of new phase-contrast imaging techniques to improve the reconstruction of morphological information related to tissues and organs.
The construction of SuperB, which is funded by the Italian government and supported by a large international collaboration that includes scientists from Europe, the US and Canada, is planned to take about six years. The newly established “Nicola Cabibbo Laboratory” Consortium will provide the necessary infrastructure for the exploitation of the new accelerator. In November, the Consortium appointed Roberto Petronzio as director with an initial three-year mandate. The machine will reuse several components from PEP-II, such as the magnets, the magnet power-supplies, the RF system, the digital feedback-system and many vacuum components. This will reduce the cost and engineering effort needed to bring the project to fruition.
The exciting physics programme foreseen for SuperB can only be accomplished with a large sample of heavy-quark and heavy-lepton decays produced in the clean environment of an e+e– collider. The programme is complementary to that of an experiment such as LHCb at a hadron collider. Indeed, a “super” flavour factory such as SuperB will, perforce, be a partner together with experiments at the LHC, and eventually at an ILC, in ascertaining exactly what kind of new physics nature has in store.
The United Nations General Assembly has designated 2012 the International Year of Sustainable Energy for All. With leadership from UN Secretary-General Ban Ki-moon, a coordinating group of 20 UN agencies (UN-Energy) will tackle the crucial challenges of sustainable access to energy, energy efficiency and renewable energy at the local, national, regional and international levels. So what can big science do for global climate and energy challenges?
Catherine Césarsky, High Commissioner for Atomic Energy and member of the CERN Council, believes that research infrastructures (RIs) in particular are appropriate tools for addressing these challenges scientifically, validating and providing scientific knowledge and in this way contributing to the decision-making process. When it comes to technical solutions, large-scale RIs, being intrinsically energy intensive, can provide their know-how in improving energy management and share their mid- and long-term strategies for reliable, affordable and sustainable carbon-neutral energy supply.
Act now, save later
Research infrastructures have considerable expertise regarding energy savings and efficiency approaches.
It was with this message that Césarsky opened the first Joint Workshop on Energy Management in Large Scale Research Infrastructures, which was organized by CERN, the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF). It took place in Lund, where the ESS will be built as the first carbon-neutral research facility, and brought international experts on energy together with representatives from research laboratories and future large-scale research projects all over the world. The objective was to identify the challenges and best practice for energy efficiency, optimization and supply at large research facilities and to consider how these capabilities could be better oriented to respond to this general challenge for society.
The quality of energy required and the levels of consumption mean that RIs have a considerable, unique expertise regarding energy savings and efficiency approaches, ranging from research in materials sciences to demonstrators/prototypes of energy efficiency. In particular, the workshop helped to identify several key points:
• The development and demonstration of co-generation (combined heat and power) plus renewable energy go hand in hand with the improvement in the quality of electrical power and a better use of transmission lines (in peak-shaving methods to reduce power drawn at peak times and in storage), while decreasing instrumental black-outs.
• It is important to maximize the re-use of thermal energy generated in various systems, both for heating and cooling (e.g. with heat pumps and absorption refrigerators), thus decreasing the use of primary energy.
• The design of systems should allow the recovery of heat at higher temperatures than in usual design standards, to allow a better re-use and an interaction with local communities to develop district heating if not yet available.
• While new RIs are in the position to introduce energy-saving approaches, there is a need for special support to allow existing RIs to re-fit and increase efficiency; this could be a driver for improved returns to the hosting territory, through increased technology and knowledge transfer.
RIs employ some of the best technicians and applied researchers in the world, who are trained continuously in cutting-edge technology by responding to the technical challenges brought to them by the best researchers. RIs could be the test-bed for completely innovative research-based solutions, such as the use of superconducting lines to manage different energy flows, the installation of superconducting magnetic-energy storage for energy quality control, the transformation of energy between radio frequency and direct current, and other novel schemes involving advanced concepts.
An increase in efficiency in the use of energy will be the major contributor to limiting carbon emissions at large-scale facilities. Energy efficiency will be driven by introducing and demonstrating appropriate methods and breakthrough technologies, including the recycling of waste heat into useful applications.
A recurrent theme of discussion during the workshop was the importance of evaluating the different energy options both in the design of new research facilities and in the upgrade of existing ones. The inclusion of energy-efficiency and recycling requirements at the design stage opens many possibilities and initiatives to all of the stakeholders. For example, high-temperature waste water can be recovered with high efficiency, but equipment manufacturers are rarely asked if high-temperature cooling water can be used to cool their equipment.
As recommendations, the workshop proposed that:
• The design and the construction of facilities should aim at optimizing scientific performance while including the best approach to energy use.
• The optimal balance between investment and operation costs must have a long-term view. A total “cost of ownership” approach is required.
• A clear and objective assessment of overall energy consumption – equipment, buildings and associated information and communication technologies – must be available.
• The use of relatively fine-grained monitoring and active feedback-control tools (including modelling), as well as the specific role of an energy manager, are required.
Towards renewables
In addition to technical aspects, the workshop tackled socioeconomic issues in parallel sessions. These advised investigation and a long-term approach in matters such as: government legislation (tax exemptions, permits and licenses), contracts with energy suppliers, innovative financing, understanding of the energy-load profile, contracts for steady-state and peaks, socioeconomic and environmental impacts and benefits at the host site.
Renewable energies will be important as future sustainable energy sources for RIs. In turn, the RIs can be instrumental in supporting renewable-energy research and technological development through, for example, new and improved materials (for photo-voltaic, fuel cells, improved motors and turbines etc.), the development of environmentally friendly biofuels, and new and safe methods of carbon capture.
Large-scale RIs are able to generate innovative solutions that can be used profitably elsewhere and can form the basis of “win-win” partnerships with industry. Their capabilities and staff could also be mobilized for large international projects, e.g. the development of solar power generated in the sun-rich regions of North Africa and the Middle East (MENA). This could supply up to 15% of Europe’s energy needs by 2050, as advocated by the DESERTEC foundation. Technologies to exploit this potential, such as concentrated solar power, exist and are proven. Realizing such ambitious projects, however, will require a new energy and science partnership between Europe and MENA and a closer integration of MENA into the European Research Area.
RIs can be particularly effective in training young researchers, operators and managers to face the upcoming energy challenges.
The workshop showed that several RIs are already mobilizing their unique resources and technical skills to respond to the “energy grand challenge”. They can act as a test-bed for implementing appropriate energy-supply and procurement schemes as well as efficient energy use. RIs can also be particularly effective in training young researchers, operators and managers to face the upcoming energy challenges, to co-operate on R&D, exchange best practices and provide know-how. Planned by Frédérick Bordry (head of CERN’s Technology Department), Thomas Parker (ESS energy manager) and Carlo Rizzuto (chair of ERF and president of Sincrotrone Trieste, Italy), the workshop attracted 150 participants, indicating a clear requirement for this type of initiative. CERN responded to the unanimous consensus on such a need by offering to host a second workshop in 2013.
The evolution of nuclear structure through the list of stable nuclear isotopes was well established by the late 1960s. During the following decades, however, the discovery of more and more short-lived nuclei expanded the nuclear chart – revealing several surprises. For example, the nuclear shells, which give the classical “magic numbers” along the line of stability, have been seen to change position and sometimes even to dissolve in highly unstable (exotic) nuclei. Only now is the field approaching a fundamental understanding of how nuclear shells evolve. To follow these changes in nuclear structure, nuclei must be probed in many complementary ways. Therefore the leading nuclear-physics facilities not only give access to many different isotopes but also allow a variety of experiments to be performed.
The introduction of REX-ISOLDE at CERN’s ISOLDE facility a decade ago (Kester et al. 2000) allowed a major step forward, as ions produced in the Isotope On-Line (ISOL) facility could be accelerated to a completely new energy region. Before the introduction of REX-ISOLDE, the experiments at ISOLDE took place at low energy (up to 60 keV) via decay studies, ion-beam measurements or manipulation. The natural extension of these techniques was to include reaction studies such as Coulomb excitation, capture reactions and transfer reactions. The challenge was to devise a universal, fast, efficient and cost-effective acceleration scheme that would take full advantage of the large range of isotopes available at ISOLDE.
The idea for the REX-ISOLDE “post-accelerating” scheme emerged in 1994, the acronym coming from “Radioactive beam EXperiments at ISOLDE”. The added accelerator had to increase the beam energy to a few million electron-volts per atomic mass unit (MeV/u). Its key ingredient is an innovative scheme for preparing ions that combines a Penning trap with an electron-beam ion source (EBIS) – REXTRAP and REXEBIS, as illustrated in figure 1. The semi-continuously released radioactive 1+ ions from the ISOLDE target, produced by the impact of 1.4 GeV protons from the Proton Synchrotron Booster, are accumulated and phase-space cooled in the buffer-gas-filled Penning trap, before being sent in a bunch to REXEBIS. So-called “charge breeding” takes place inside the EBIS, i.e. the conversion of the ions from 1+ to q+ by bombardment with a dense, energetic electron beam. The highly charged ions, now with a reduced mass-to-charge ratio (A/q < 4.5), are extracted and separated before being post-accelerated in a room-temperature linear accelerator (linac). The high charge state allows for efficient acceleration in the compact linac. REX-ISOLDE pioneered this charge-breeding scheme for radioactive ions and now several facilities around the world are replicating the concept (Wenander 2010).
Versatile acceleration
Although REX-ISOLDE has a modest final beam energy compared with other CERN accelerators, it compensates by being agile and flexible. It was initially designed to perform post-acceleration of neutron-rich Na and K isotopes, all with masses below A = 50. Since then the mass range has been extended and radioactive elements from light 8Li to heavy 224Ra have been accelerated for experiments. To accelerate the heavier elements, high charge states – for example, above 50+ for Ra – have to be achieved to fulfil the A/q requirement of the linac. Neither stripping foils nor gas-jet stripping can be used to obtain such charge states at low energies, so the challenging task falls entirely on the charge breeder. By increasing the breeding time, sometimes up to 300 ms, REXEBIS can nevertheless efficiently convert the ions to the required high charge states.
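The linac’s A/q < 4.5 acceptance directly fixes the minimum charge state the EBIS must deliver. A one-line calculation (a Python sketch, with mass numbers taken from the text) reproduces the figures quoted above:

```python
import math

AQ_LIMIT = 4.5  # maximum mass-to-charge ratio accepted by the REX linac

def q_min(A, aq_limit=AQ_LIMIT):
    """Smallest integer charge state with A/q strictly below the limit."""
    return math.floor(A / aq_limit) + 1

# From light 8Li to heavy 224Ra, as accelerated at REX-ISOLDE:
for iso, A in [("8Li", 8), ("30Mg", 30), ("129Cs", 129), ("224Ra", 224)]:
    q = q_min(A)
    print(f"{iso}: q >= {q}+  (A/q = {A/q:.2f})")
```

For 224Ra this gives 50+, consistent with the “above 50+” requirement mentioned above, and shows why the charge breeder alone must carry the full burden at these masses.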
Because REX-ISOLDE also has the ability to cover the high mass-range for ions, it is possible to make full use of ISOLDE’s capability to produce heavy radioactive elements by spallation processes in targets of uranium carbide. This is a unique feature that so far no other radioactive ion-beam facility can match. The combination of a Penning trap and EBIS has also proved capable of accepting almost all chemical elements produced by ISOLDE because the ions are kept within the traps without any surface contact.
The duration of the cooling and the charge breeding is of secondary importance for radioactive elements with long half-lives. On the other hand, some radioactive isotopes of interest have short half-lives, potentially leading to decay losses of the rare ions. By optimizing the cooling and the breeding, even elements such as 11Li (t1/2 = 8.5 ms) and 12Be (t1/2 = 23.6 ms) have been post-accelerated successfully. To reduce the decay losses further, continuous injection from the ISOLDE target-ion source into REXEBIS – without prior bunching and cooling in REXTRAP – can be performed at the expense of a slightly lower transmission efficiency.
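What a given breeding time costs in decay losses follows directly from the exponential decay law. A short Python sketch, using the half-lives quoted above (the 10 ms and 50 ms hold times are assumed for illustration):

```python
import math

def surviving_fraction(t_ms, half_life_ms):
    """Fraction of a radioactive sample remaining after a hold time t."""
    return math.exp(-math.log(2) * t_ms / half_life_ms)

# Half-lives from the text; hold times are illustrative assumptions.
for iso, t_half in [("11Li", 8.5), ("12Be", 23.6)]:
    for t in (10.0, 50.0):
        frac = surviving_fraction(t, t_half)
        print(f"{iso}: {frac:6.1%} survives a {t:.0f} ms hold")
```

The steep losses for 11Li show why continuous injection into REXEBIS, skipping the bunching and cooling in REXTRAP, can pay off for the shortest-lived species despite its lower transmission efficiency.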
The purity of the radioactive ion beams is an important factor. Because there are often only a few thousand ions per second, corresponding to subfemtoampere beams, there is a real interest in suppressing as many contaminating beam components as possible. The excellent vacuum of REXEBIS is one of the requirements for good beam purity. Still, even with a vacuum of better than 10⁻¹¹ mbar, the residual-gas ions – such as C, N, O, Ar and Ne (the latter being the buffer gas in the Penning trap) – usually dominate the beam extracted from the EBIS. In the A/q spectrum shown in figure 2, the residual-gas ions appear in discrete peaks, while the background between the peaks is very clean. Thus, by correctly choosing the A/q value of the radioactive beam – in this case abundantly injected 129Cs – the contaminating beam components can be avoided. By adjusting the time the radioactive ions are trapped within the EBIS, and therefore the time ions are exposed to the electron-impact ionization process, the charge state and hence the A/q of the extracted ions is changed.
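This selection principle can be checked numerically. The sketch below uses integer mass numbers (a simplification; in practice exact atomic masses set the peak positions) to see how far each accepted charge state of 129Cs lands from the residual-gas peaks:

```python
# Residual-gas species named in the text, with integer mass numbers (approximate)
residual = {"C": 12, "N": 14, "O": 16, "Ne": 20, "Ar": 40}

# All residual-gas A/q values inside the linac acceptance (A/q < 4.5)
gas_peaks = sorted({round(A / q, 3) for A in residual.values()
                    for q in range(1, A + 1) if A / q < 4.5})

# Charge states of 129Cs that the linac accepts, and their clearance
# from the nearest contaminant peak:
for q in range(29, 34):
    aq = 129 / q
    clearance = min(abs(aq - p) for p in gas_peaks)
    print(f"129Cs {q}+: A/q = {aq:.3f}, nearest gas peak {clearance:.3f} away")
```

A charge state whose A/q falls between the discrete gas peaks inherits the clean background visible in the spectrum of figure 2.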
The low-energy side of REX is a toolbox full of means for beam-manipulation exercises. One of the latest tools to be added is the “in-trap decay” method, used for producing elements that are not readily available from ISOLDE for chemical reasons, such as Fe. A short-lived isotope of abundantly produced Mn is taken from ISOLDE, injected into the EBIS and kept there for a few hundred milliseconds until the major fraction has decayed to Fe, before being accelerated to the experiment. This method can be used to access isotopes of several elements new to ISOLDE, such as B, Si, Ti and Zr.
Another tool, aimed at improving the beam purity and suppressing isobaric contaminants from ISOLDE, for instance Rb superimposed on Sr, is the molecular sideband method. Instead of extracting the ions of interest as atomic 1+ ions from ISOLDE, a carrier gas – in this case CF4 – is introduced and the ions are extracted as SrF+. Because of the electron configuration, the Rb contaminant does not form an RbF+ molecule and can therefore be suppressed in the ISOLDE separator. Inside the EBIS, the SrF+ is broken up and the Sr charge bred in the normal way before being accelerated to the experiment.
Yet another method of beam purification is to make use of the inherent mass-selectivity of the Penning trap. The injected ion cloud from ISOLDE, containing both the ion of interest and the isobaric contamination, is excited to a large radius inside the 3 T magnetic field of REXTRAP. Thereafter, a mass-selective cooling mechanism is applied, re-centring only the ions of interest, while the contamination is lost on extraction. A mass resolution of the order of 30,000 – a factor of six higher than with the High-Resolution Separator at ISOLDE – has been demonstrated for ions with mass number A in the range of 30 to 40.
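The trap’s selectivity rests on the mass dependence of the ion’s cyclotron frequency in the 3 T field. A back-of-the-envelope sketch (singly charged ion and integer mass number, both simplifications):

```python
import math

Q_E = 1.602176634e-19  # elementary charge [C]
AMU = 1.66053907e-27   # atomic mass unit [kg]

def cyclotron_freq(A, q=1, B=3.0):
    """Cyclotron frequency [Hz] of an ion with mass number A, charge q, field B [T]."""
    return q * Q_E * B / (2 * math.pi * A * AMU)

# Mid-range of the demonstrated A = 30-40 window:
f = cyclotron_freq(35)
print(f"A = 35, q = 1+: cyclotron frequency {f/1e6:.2f} MHz")

# A resolving power of ~30,000 then separates species whose masses
# differ by roughly A/30000:
print(f"resolvable mass difference at A = 35: {35/30000*1e3:.1f} milli-u")
```

Because the frequency scales inversely with mass, a frequency-selective excitation re-centres only the species of interest, as described above.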
Selected results
At the start, REX-ISOLDE made use of five room-temperature accelerating cavities to reach a maximum energy of 2.2 MeV/u. In 2004, a 9-gap interdigital H-type cavity was added to the linac, which boosted the final energy to approximately 2.9 MeV/u. Through stepped activation of the six accelerating cavities, the energy of the ion beam can be varied from 300 keV/u (the energy of the RF quadrupole cavity) up to the maximum energy.
The demand for beam time by the experiments is high, and many different beams – up to 10 elements and 20 isotopes plus several stable calibration beams – have to be delivered every year. This has to be done efficiently, in terms of both set-up time and beam transmission, because the exotic ions are difficult to produce. Until now, REX-ISOLDE has accelerated close to 100 isotopes of 30 different elements (Van Duppen and Riisager 2011).
A major theme in the experiments performed so far has been the tracking of the evolution of nuclear shells. The first hints for the breaking of the classical magic numbers came from experiments at CERN on neutron-rich nuclei with about 20 neutrons, in what is now called the “island of inversion”. Several REX experiments have contributed to clarifying the structure in this region. One of the latest combined a radioactive 30Mg beam with a radioactive tritium target to do two-neutron transfer to two 0+ states in 32Mg, the data being consistent with the closed-shell configuration in the excited state rather than the ground state.
In the region of the classical neutron magic number 50, a campaign of experiments has probed how shells evolve towards 78Ni, presumably still a double magic nucleus. Both transfer- and Coulomb-excitation experiments have been performed, the latter including one that made use of another speciality of ISOLDE: isomeric beams. ISOLDE’s laser ion source can in certain cases – such as the heavy Cu isotopes – produce beams of an isotope that is mainly in either its ground state or in a long-lived (isomeric) state. This extra selectivity helps greatly in interpreting complex spectra that result when these beams react.
In the light mass region REX experiments are also testing the extent of shell-breaking in the nucleus 12Be (neutron number 8), but the physics implications are broader because the accelerated isotope in this case, 11Be, is a halo nucleus with an unusually large spatial extent. The halo structure implies that continuum degrees of freedom play an important role. An extreme example of this is the neighbouring halo nucleus 11Li that is bound although its subsystem 10Li is particle unbound. The structure of 10Li has also been studied in transfer experiments at REX.
As a final example, at the other end of the nuclear chart several experiments are tracking the sizeable changes in shape that are known to take place systematically among light isotopes of elements around Pb. Coulomb excitation favours transitions among collective states and has allowed the identification of different shapes in nuclei from 182Hg to 204Rn.
Apart from these nuclear physics examples, the REX-ISOLDE accelerator has also been used for physics-application studies. These include the calibration of plastic foils of polyethylene terephthalate (PET), for use as solid-state nuclear track detectors, and for development work on diamond detectors.
REX-ISOLDE is a machine undergoing constant development to fulfil the changing requests of the experiments. Currently, the possibility of producing polarized nuclear beams with the tilted foil method is being investigated. The beam energy will also be increased in a few years’ time – first to 5 MeV/u and ultimately to 10 MeV/u – within the framework of the High Intensity and Energy ISOLDE project, HIE-ISOLDE. This will be achieved by adding superconducting cavities to the accelerating linac (Pasini et al. 2008). The increased energy range will open up a wide field of reaction studies and keep REX-ISOLDE fully booked for at least another decade.