In mid-September, the Long Baseline Neutrino Experiment (LBNE) collaboration, based at Fermilab, welcomed the participation of 16 additional institutions from Brazil, Italy and the UK. The new members represent a significant increase in overall membership of more than 30% compared with a year ago. Now, more than 450 scientists and engineers from more than 75 institutions participate in the LBNE science collaboration. They come from universities and national laboratories in the US, India and Japan, as well as Brazil, Italy and the UK.
The swelling numbers strengthen the case for pursuing an LBNE design that maximizes its scientific impact. In mid-2012, an external review panel recommended phasing LBNE to meet the budget constraints of the US Department of Energy (DOE). In December the project received the DOE’s Critical Decision 1 (CD-1) approval on its phase 1 design, which excluded both the near detector and an underground location for the far detector. However, the CD-1 approval explicitly allows for an increase in design scope if new partners can contribute additional resources. Under this scenario, an expanded phase 1 would restore these excluded design elements, which are crucial for a robust and far-reaching neutrino, nucleon-decay and astroparticle-physics programme.
For the first time, astronomers have caught a pulsar in a crucial transitional phase that explains the origin of the mysterious millisecond pulsars. The newly found pulsar swings back and forth between accretion-powered X-ray emission and rotation-driven radio emission, bringing conclusive evidence for a 30-year-old model that explains the high spin rate of millisecond pulsars as a result of matter accretion from a companion star.
Pulsars are the highly magnetized, spinning remnants of supernova explosions of massive stars and are primarily observed as pulsating sources of radio waves. The radio emission is powered by the rotating magnetic field and focused into two beams that stem from the magnetic poles. Like a rotating lighthouse beacon, the spinning pulsar sweeps the emission cone through space, so that distant observers see regular pulses of radio waves (CERN Courier March 2013 p12). It is actually the rotational kinetic energy of the neutron star that is radiated away, leading to a gradual slowing of the rotation. While pulsars spin rapidly at birth, they tend to rotate more slowly – with periods of up to a few seconds – as they age. For this reason, astronomers in the 1980s were puzzled by the discovery of millisecond pulsars – old but extremely rapidly rotating pulsars with periods of a few thousandths of a second.
The mysterious millisecond pulsars can be explained through a theoretical model known as the “recycling” scenario. If a pulsar is part of a binary system and accretes matter from a stellar companion via an accretion disc, it also gains angular momentum. This process can “rejuvenate” old pulsars, boosting their rotation and shortening their periods to a few milliseconds. The scenario relies on the existence of accreting pulsars in binary systems, which can be detected through the X-rays emitted in the accretion process. The discovery in the 1990s of the first X-ray millisecond pulsars was the first evidence for this model but, until now, the search for a direct link between X-ray-bright millisecond pulsars in binary systems and radio-emitting millisecond pulsars had been in vain.
Now, the missing link needed to prove the validity of the scenario has finally been discovered by the wide-field IBIS/ISGRI imager on board ESA’s INTEGRAL satellite. A new X-ray source appeared in images taken on 28 March 2013 at the position of the globular cluster M28. Subsequent observations by the XMM-Newton satellite found a modulation of its X-ray emission with a period of 3.9 ms, revealing that the neutron star spins more than 250 times per second. A clear modulation of the pulse arrival times further showed that a low-mass companion star orbits the pulsar every 11 hours.
These results, obtained by an international team led by Alessandro Papitto of the Institute of Space Sciences in Barcelona, were then compared with the properties of the known radio pulsars in M28 and, luckily, one was found with precisely the same values. There is therefore no doubt that the radio and X-ray sources are the same pulsar, providing the missing link that validates the recycling scenario of millisecond pulsars. Follow-up radio observations by several antennae in Australia, the Netherlands and the US showed that the source does not exhibit radio pulsations when it is active in X-rays, and vice versa. Only at the end of the X-ray outburst, on 2 May, did radio pulsations resume.
This bouncing behaviour is caused by the interplay between the pulsar’s magnetic field and the pressure of the accreted matter. When accretion dominates, the source emits X-rays and radio emission is inhibited by the accretion disc, which closes off the magnetic field lines.
Thirty years ago, John Lawson of the Rutherford Appleton Laboratory led a design study for a laser-driven plasma-wakefield linear accelerator for the 21st century. The design model produced then remains a reference for defining a multi-TeV machine, because most of the important points raised at the time are still under study. Some major problems have been solved, notably the production of high-energy electron beams with good emittance and the formation of a uniform plasma column. However, various experiments are continuing the effort to answer essential questions of the underlying physics.
Particle accelerators developed during the past century are approaching the energy frontier. Today, at the terascale, the machines needed are extremely large and costly. However, for more than 30 years, plasma-based particle accelerators driven by either lasers or particle beams have shown promise as a route to high energies, primarily because of the extremely large accelerating electric fields they can support. About a thousand times greater than in conventional accelerators, these high fields open the possibility of compact accelerating structures. It is with this in mind that future facilities may incorporate aspects of plasma accelerators.
Plasma-based accelerators – the brainchild of John Dawson (who died in 2001) and his colleagues at the University of California, Los Angeles – are being investigated worldwide with a great deal of success. However, can they be a serious competitor and displace the conventional “dinosaur” variety? This is the question that the late John Lawson at the Rutherford Appleton Laboratory in the UK posed a few years after Dawson and his collaborator Toshi Tajima published their seminal paper on plasma-based accelerators (Tajima and Dawson 1979).
The accelerating fields in plasma are supported by the collective motion of plasma electrons forming a space-charge disturbance that moves close to the speed of light – commonly known as the plasma wakefield accelerator (CERN Courier June 2007 p28). The main advantage is that the plasma can support accelerating fields many orders of magnitude greater than conventional devices, which suffer from breakdown of the waveguide structure. In contrast, in a plasma-based system the plasma is already “broken down” and the collective electric field, E, supported by the plasma is determined by the electron density, n, such that E ∼ n^1/2.
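The density scaling can be made concrete with the cold, non-relativistic wave-breaking field, E0 [V/m] ≈ 96 √(n [cm^–3]) – a standard plasma-physics estimate used here for illustration, not a figure from the article:

```python
import math

def wave_breaking_field(n_cm3: float) -> float:
    """Cold, non-relativistic wave-breaking field E0 = m_e * c * omega_pe / e,
    which reduces to E0 [V/m] ~ 96 * sqrt(n [cm^-3])."""
    return 96.0 * math.sqrt(n_cm3)

# At 1e18 cm^-3, a typical laser-wakefield density, the plasma supports
# fields around a thousand times the ~100 MV/m of conventional RF cavities.
print(f"{wave_breaking_field(1e18) / 1e9:.0f} GV/m")  # ~96 GV/m
```

The square-root dependence means that denser plasmas support higher fields, at the price of a shorter plasma wavelength and hence tighter tolerances on the drive beam.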
In 1982, Ronald Ruth and Alexander Chao in the US made a first qualitative design based on a simplified model for a linear laser-driven plasma accelerator that could yield electrons at 5 TeV (Ruth and Chao 1982). Spurred on by this, Lawson set up a study group – including Ruth – to investigate the idea further and produce a design based on a more realistic model for a particle collider at the terascale for the 21st century. Published in the summer of 1983, the reference design considered electron energies above 1 TeV and – because of synchrotron radiation losses – a linear collider (Lawson et al. 1983). Indeed, it is as components of linear colliders that plasma accelerators continue to be considered.
At the time it was already clear that because of the increased energy advantage of colliding beams, all future high-energy accelerators would work in the colliding-beam mode. Conventional machines were planned that would reach the 1 TeV energy scale for hadrons and the 100 GeV scale for electron–positron colliders. These became the Tevatron at Fermilab, the SLAC Linear Collider and the Large Electron–Positron collider and Large Hadron Collider at CERN, which have made remarkably precise measurements of the W and Z bosons, as well as discoveries of the top quark and – most recently – a Higgs boson.
In the RAL report, Lawson writes that “the maximum energies thought ever to be practically achievable are of order 0.4 TeV per beam for e+e– and 20 TeV per beam for hadrons” and that these represent “the end of the dinosaurs”. However, radically new techniques such as plasma accelerators demand a long development time, so there is still ongoing development of conventional machines. These include the International Linear Collider for 350–500 GeV electrons and positrons with luminosity of the order of 10^34 cm^–2 s^–1 and the more novel Compact Linear Collider for energies up to 3 TeV. In addition, there is the TLEP concept based on a synchrotron, which could also be used for protons (CERN Courier July/August 2013 p26).
The original plasma-accelerator schemes investigated were based on long-pulse lasers. Short-pulse lasers did not exist because chirped-pulse amplification had not yet been demonstrated in the optical regime, only in the microwave regime. The Lawson design therefore incorporated the beat-wave mechanism of Tajima and Dawson, in which two laser beams with a frequency difference equal to the plasma frequency drive a large-amplitude plasma wave with a phase velocity close to the speed of light. Most laser-driven and particle-beam-driven accelerator experiments today operate in the so-called bubble regime, where the pulse length of the laser or particle beam is of the order of the plasma wavelength. In 2004, three independent groups in three different countries demonstrated mono-energetic electron beams with good emittance using short-pulse lasers, and many groups worldwide now routinely produce electron beams at giga-electron-volt energies (CERN Courier November 2004 p5).
As a first attempt at finding a set of consistent parameters appropriate to a linear e+e– collider at a few tera-electron-volts, Lawson’s design was optimistic but it laid the groundwork for later studies on the scalings in length of such a machine. For example, Carl Schroeder and colleagues at Lawrence Berkeley National Laboratory (LBNL) recently looked at physics considerations for a 1 TeV machine based on plasma acceleration driven by a short-pulse laser (Schroeder et al. 2010).
Despite the difference in the accelerating structure considered in 1983, the outline design of the RAL report is still a good blueprint for a multi-tera-electron-volt device. Its main points are still being investigated. These include the construction of uniform metre-long plasma columns, laser focusing and guiding schemes, the laser energy and efficiency requirements, staging, particle transport and focusing. There are also requirements on the particle bunch-density and on the beam quality in terms of energy spread, emittance and luminosity. Some of these issues are common to both plasma and conventional accelerators, although Lawson’s design brief was that plasma-based techniques should take particle accelerators to a new level, not easily achievable in a conventional machine.
In particular, in single-pass colliders, bunches are used only once and high densities are required. Self-fields add rather than cancel, causing a pinch that enhances the luminosity – provided the effect is not too strong. However, deflection of particles by the electric fields of the opposite bunch causes strong synchrotron-like radiation, known as beamstrahlung, which reduces the energy of the particles and increases the energy spread. This can be controlled by having fewer particles per bunch and instead having a train of “bunchlets” – a feature that Lawson’s report found to be advantageous.
Since then, researchers have solved many of the issues, an important one – highlighted by Lawson – being the creation of a uniform plasma column. In an experiment at SLAC in 2007, Chan Joshi’s group at the University of California, Los Angeles, successfully demonstrated acceleration in metre-long plasma columns using lithium vapour that was fully ionized by the head of an electron beam. The resulting wakefield then accelerated particles near the back of the beam pulse. This idea will be incorporated in the group’s latest round of experiments to accelerate electrons and positrons using the electron-beam-driven plasma wakefield facility, FACET, at SLAC (CERN Courier March 2011 p23). It also underpins the proton-beam-driven plasma wakefield experiment, AWAKE, which will use a beam from CERN’s Super Proton Synchrotron and, initially, a 10-m-long plasma column, to produce giga-electron-volt electron beams (see AWAKE: to high energies in a single leap).
Today, most experiments such as the Berkeley Lab Laser Experiment (BELLA) at LBNL and FACET at SLAC, as well as other smaller-scale experiments, are aimed at addressing some of the key areas of the underlying physics that still have not been fully resolved. Joshi’s experiment not only showed that metre-length plasma columns can be built with the required density and homogeneity, it also demonstrated energy doubling of an electron beam from 42 GeV to 85 GeV in the lithium plasma – a remarkable result considering that it takes 3 km of the SLAC linac to accelerate electrons to 42 GeV (CERN Courier April 2007 p5). Laser wakefield experiments already demonstrate mono-energetic electron beams at the giga-electron-volt scale and BELLA, which is nearing completion of a petawatt-class laser, will demonstrate acceleration of electrons to 10 GeV (CERN Courier October 2012 p10).
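The scale of that comparison is easy to check; the back-of-the-envelope arithmetic below uses only the figures quoted above:

```python
# Average gradient implied by the SLAC figure quoted in the text:
# 42 GeV gained over the 3 km conventional linac.
linac_gradient_v_per_m = 42e9 / 3000.0
print(f"{linac_gradient_v_per_m / 1e6:.0f} MV/m")  # 14 MV/m

# The lithium plasma added a comparable energy (42 -> 85 GeV) in
# roughly a metre, i.e. a gradient some three orders of magnitude higher.
```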
Despite the successes of these experiments, it is still necessary to demonstrate beam quality – including low energy spread and low emittance – and the focusing of useful beams. In all cases, the experiments are guided by plasma simulations that require the largest computers. Such simulations have already demonstrated that electron beams in the range 10–50 GeV can be created in a single stage of a plasma accelerator.
If plasma accelerators are to take over from conventional machines, a great deal of effort still needs to be put into efficient drivers. Suitable laser efficiency and pulse rate are looking likely with diode-pumped lasers or with fibre lasers (see Can fibre be the future of high-energy physics?), but effort has to be put into these schemes to meet the requirements necessary to drive a wakefield. For beam-driven systems, electron beams at 100 GeV and proton beams with tera-electron-volt energies are required. These exist at the LHC for protons and at the old SLAC linac for electrons. For an e+e– system, a key challenge is positron acceleration and some groups are looking at positron acceleration in wakefields. Alternatively an e–e– collider or a photon (γ–γ) collider could be built, doing away with the need for positrons and so saving time and effort.
A number of other applications for plasma-based accelerators have been identified such as X-ray generators through betatron radiation, drivers for free-electron lasers, or low-energy proton machines. After more than 30 years it is time to develop the facilities that can answer some of the outstanding issues to demonstrate the full potential of high-energy plasma-based accelerators.
Proton-driven plasma-wakefield acceleration could allow electrons to be accelerated to the tera-electron-volt scale in a single plasma cell. To put this novel approach to the test, the proof-of-principle AWAKE experiment (proton-driven plasma-wakefield acceleration) has been launched at CERN. The project will use 400 GeV proton bunches from the Super Proton Synchrotron; it will be the first beam-driven wakefield-acceleration experiment in Europe, and the first such experiment in the world to use a proton beam. The results will have a considerable impact on future R&D experiments using this technique on a larger scale, which could, for example, accelerate electron bunches to around 100 GeV over about a hundred metres.
To complement the results that will come from the LHC at CERN, the particle-physics community is looking for options for future lepton colliders at the tera-electron-volt energy scale. These will need to be huge circular or linear colliders. With the accelerating gradients of today’s RF cavities or microwave technology limited to about 100 MV/m, the length of the linear machines would be tens of kilometres. However, plasma can sustain much higher gradients and the idea of harnessing them in plasma wakefield acceleration is gathering momentum. One attractive idea is to use a high-energy proton beam as the driver of a wakefield in a single plasma section.
To verify this novel approach, a proof-of-principle R&D experiment – the Advanced Wakefield Experiment (AWAKE) – has been launched at CERN, using 400 GeV proton bunches from the Super Proton Synchrotron (SPS). AWAKE will be the first beam-driven wakefield-acceleration experiment in Europe and the first proton-driven plasma-wakefield-acceleration experiment in the world. The results will have a significant impact on future larger-scale R&D on this technique, for example in accelerating electron bunches to around 100 GeV in 100 m.
Previous research on the potential of plasma as a medium for high-gradient acceleration has focused on injecting a short, intense laser pulse or an electron bunch into a plasma cell. With laser excitation, electrons have been accelerated to 1 GeV in 3 cm, with a gradient of 33 GV/m (CERN Courier November 2006 p5). In other experiments using an electron bunch as driver, the energy of particles in the tail of a bunch was doubled from 42 GeV to 85 GeV in 85 cm – corresponding to a gradient of 52 GV/m (CERN Courier April 2007 p5). However, the energy gain in these studies is limited by the energy carried by the laser or electron drive beam (<100 J) and the propagation length of the driver in the plasma (<1 m). Therefore the staging of a large number of acceleration sections would be required to reach the interesting region of 1 TeV per particle or more.
Proton beams of the kind produced at CERN could overcome the issue of staging. In the SPS, beams with 3 × 10^11 protons per bunch at 400 GeV/c carry much higher energy (19 kJ). This would drive wakefields over much longer plasma lengths than other methods, and consequently proton drivers can take a witness beam to the energy frontier in a single plasma cell. Simulations have shown that an LHC-type proton bunch (1 TeV, 10^11 protons) with an rms bunch length of 100 μm can accelerate an incoming 10 GeV electron bunch to more than 500 GeV in around 500 m of plasma, with an average gradient of ≥1 GV/m (Caldwell et al. 2009, Lotov et al. 2010).
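The 19 kJ figure follows directly from the bunch parameters, as a quick check shows (the conversion factor is the CODATA elementary charge):

```python
E_PROTON_GEV = 400.0        # SPS beam energy quoted in the text
N_PROTONS = 3e11            # protons per bunch
EV_TO_J = 1.602176634e-19   # CODATA elementary charge, J/eV

bunch_energy_j = N_PROTONS * E_PROTON_GEV * 1e9 * EV_TO_J
print(f"{bunch_energy_j / 1e3:.1f} kJ")  # ~19 kJ, matching the article
```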
To reach accelerating gradients of a gigavolt per metre or more requires plasma densities where the number density of electrons, ne, is of the order of 10^15 cm^–3. At these densities the plasma wavelength, λpe, is about 1 mm. To achieve the maximum electric field in the plasma, the drive beam must consist of short, densely packed proton bunches with a longitudinal bunch length, σz, of the same order as λpe.
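The relation between density and plasma wavelength can be verified from the plasma frequency, ω_pe = √(n e²/ε0 m_e), a standard result; the constants below are rounded SI values:

```python
import math

# Rounded SI constants
C = 2.998e8          # speed of light, m/s
E = 1.602e-19        # elementary charge, C
ME = 9.109e-31       # electron mass, kg
EPS0 = 8.854e-12     # vacuum permittivity, F/m

def plasma_wavelength_mm(n_cm3: float) -> float:
    """lambda_pe = 2*pi*c / omega_pe for electron density n (cm^-3)."""
    n_m3 = n_cm3 * 1e6
    omega_pe = math.sqrt(n_m3 * E**2 / (EPS0 * ME))
    return 2 * math.pi * C / omega_pe * 1e3  # metres -> mm

print(f"{plasma_wavelength_mm(1e15):.2f} mm")  # ~1 mm, as quoted
```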
The proton bunches available today are much longer – with σz around 10 cm – and producing bunches as short as 1 mm is not possible without major investment. However, the effect known as self-modulation instability (SMI) fortuitously opens the path to an immediate experimental investigation of proton-driven wakefield acceleration with the existing proton bunches at CERN. A proton beam propagating in plasma develops micro-bunches, generated by transverse modulation of the bunch density. These micro-bunches are naturally spaced at λpe and so resonantly excite a strong plasma wave (figure 1). Recent studies have shown that high-amplitude wakefields – similar to those that would be driven if all of the charge were in a single short bunch – can be achieved with a modulated long proton bunch (Kumar et al. 2010, Caldwell et al. 2011).
The AWAKE experiment
The AWAKE experiment at CERN was proposed by an international collaboration that is today made up of 13 institutes – including the Budker Institute of Nuclear Physics, CERN, MPI Munich and University College London – and numbers more than 50 engineers and physicists. In addition, several more institutes have expressed interest in participating in AWAKE. The collaboration first outlined the proposed experiment in a letter of intent that was presented to the SPS Committee in 2011. This was followed in March 2013 by a technical design report that was submitted to the CERN management and to the SPS Committee (Caldwell et al. 2013). A positive review and recommendation of the project led to approval of the AWAKE experiment by the CERN Research Board in August.
The measurement programme of the AWAKE project includes benchmark experiments using proton bunches to drive wakefields and to understand the physics of the proton self-modulation process in the plasma. It will also probe the accelerating wakefields with externally injected electrons, study the injection dynamics and production of multi-giga-electron-volt electron bunches, and develop long, scalable and uniform plasma cells and schemes for the production and acceleration of short proton bunches.
Figure 2 shows the conceptual design of the AWAKE experiment. An LHC-type proton bunch of 400 GeV/c but with higher intensity (around 3 × 10^11 protons per bunch) and with a longitudinal rms length, σz, of 9–12 cm will be extracted from the SPS and sent to the experiment, where it will be focused to a transverse dimension of 200 μm near the entrance of a plasma cell. A 2 TW laser pulse co-propagating within the proton bunch creates the plasma by ionizing the (initially neutral) gas in the plasma cell. This sudden plasma creation seeds the SMI of the proton bunch. The SMI develops in the proton beam over the first few metres of the plasma, then saturates and the bunch becomes self-modulated. After SMI saturation, an electron beam with 10^9 electrons, an energy of around 16 MeV and with σz,electrons > λpe will be side-injected at a small angle (a few milliradians). A fraction of the electrons will be trapped in the wakefield and accelerated.
For the first experiments, the plasma cell will be a 10-m-long rubidium vapour source, with the necessary longitudinal density uniformity of around 0.2%. With the current AWAKE baseline parameters, simulations show that the electrons could be accelerated to energies higher than 2 GeV, with an energy spread of a few per cent and achieving a gradient of 0.1–1.2 GV/m along the 10-m-long plasma.
At a later stage the experiment will use two plasma cells and on-axis electron injection. The first cell will preserve the SMI seeding ability and the second will be used for electron acceleration in the wakefields that are resonantly driven by the modulated proton bunch. Introducing a step in plasma density during the growth of the SMI could maintain the wakefields at the level of gigavolts per metre over distances of several metres.
Short, high-intensity bunches have already been studied at the SPS and the scaling of bunch length and transverse emittance as a function of beam intensity has been identified to guide the design parameters of the AWAKE project (Argyropoulos et al. 2013). To obtain a short longitudinal bunch length, the bunches are rotated in longitudinal phase space using the maximum available RF voltage before extraction. Figure 3 shows the bunch length measured before rotation, at 2 MV, and after rotation, at 7–7.7 MV.
The AWAKE experiment will be installed in the existing CERN Neutrinos to Gran Sasso (CNGS) facility (CERN Courier November 2006 p20). This deep underground area, which is designed for running an experiment with a proton beam of high energy and intensity, has a 750-m-long proton beamline that is optimized for a fast extracted beam from the SPS at 400 GeV/c. Figure 4 shows the integration of the AWAKE experiment in the experimental area, with the plasma cell installed in the downstream end of the CNGS proton-beam tunnel and upstream of the CNGS target.
Although the facility already exists, there are several issues that need to be tackled to set up the AWAKE experiment in this area. Essential modifications to the end of the proton beamline include changes to the beam instrumentation and to the final focusing system, in addition to integration of the laser and electron beam with the proton beam. General services such as cooling and ventilation, electricity, radiation monitoring and an access system exist and are operational but some changes will be necessary to adapt them to the AWAKE set-up.
The laser system will be housed in an area that is modified to be dust free and temperature regulated. The high-power laser beam will be transported through a new dedicated tunnel connecting the laser area to the proton-beam tunnel. The area adjacent to the experimental area will be modified to house the electron source and its klystron powering system. The electrons will be transported from the source to the proton-beam tunnel along a beamline through a new 7-m-long liaison tunnel before being injected into the plasma cell. This electron beamline is designed for both side injection (long electron bunches) and on-axis injection (short electron bunches).
The electron source will be driven by a laser pulse that is derived from the same laser system used for the plasma ionization, which will require a synchronization between the laser pulse and the electron beam at a level below 1 ps. The desired synchronization of around 100 ps between the proton beam and the laser beam will be achieved by re-phasing and locking the SPS RF to a stable mode-locker frequency reference from the laser system.
To measure the properties of the electron acceleration, a state-of-the-art magnetic spectrometer with large momentum acceptance (10–5000 MeV) and good momentum resolution will be installed downstream of the plasma cell. A scintillating screen connected to a CCD camera will show the electrons exiting the spectrometer. For the proton beam, various diagnostics will measure the self-modulation effects downstream of the plasma cell. The protons will then be dumped in the existing CNGS beam dump, which is 1100 m downstream of the experimental area, therefore avoiding any radiation into the AWAKE region.
With official approval of the AWAKE experiment in August, the collaboration is now fully “awake” and the first protons can be sent to the plasma cell at the end of 2016. This will be followed by an initial three-to-four-year experimental programme with four periods of data-taking annually, each lasting two weeks.
The interaction with plasma of ultra-high-peak-power lasers at the terawatt to petawatt level has the potential to generate large accelerating gradients of giga-electron-volts per metre. Laser-plasma acceleration could therefore be an important replacement for present technology. However, there are two formidable hurdles to overcome: a laser repetition rate that is limited to a few hertz leading to an average power of only a few watts and a dismal wall-plug efficiency of a fraction of a per cent. This twin technological challenge must be resolved if laser wakefield acceleration is to be considered for large-scale applications in science and society.
On 27–28 June, the International Coherent Amplification Network (ICAN) Consortium concluded its EU-supported 18-month feasibility study with a final symposium at CERN that was organized by Ecole Polytechnique, the University of Southampton, Fraunhofer IOF Jena and CERN. A major topic concerned progress with the novel laser architecture known as coherent amplification network (CAN), which could for the first time provide petawatt peak power at a repetition rate as high as 10 kHz. This corresponds to an average power in the megawatt range with an efficiency better than 30% and opens the door to many applications.
The symposium also looked at the path towards future laser-driven accelerators and colliders, applications in free-electron lasers and neutron/neutrino sources, as well as the laser search for the “dark fields” of dark matter and dark energy. Other topics included compact proton accelerators that could be used for accelerator-driven systems for nuclear energy or for hadron therapy, as well as the generation of γ-ray beams with a host of applications, from the identification of isotopes in exposed spent nuclear fuel – such as at Fukushima – to nuclear pharmacology.
The main paradigm currently driving fundamental high-energy physics is the high-energy charged-particle collider. To apply laser-acceleration methods to a future collider, the international communities involved in intense lasers and high-energy physics – in the form of the International Committee on Ultra-High Intensity Lasers (ICUIL) and the International Committee for Future Accelerators (ICFA), respectively – came together in 2009 to form a collaborative working group to study how best to proceed. The ICFA-ICUIL Joint Task Force (JTF) produced a report noting that while the science of laser acceleration has matured, the technology for intense lasers is lagging behind, specifically in the development of systems with a high repetition rate and high efficiency (Leemans et al. 2011).
The possibility of amplifying laser pulses to extreme energy and peak power not only offers a route to a more compact and cheaper way to perform high-energy physics, it could also open the door to a complementary research area that is underpinned by single-shot, high-field laser pulses – a new field of (laser-based) high-field fundamental physics (Tajima and Dawson 1979 and Mourou et al. 2006). To muster the support of the scientific community for this vision, the International Center on Zetta-Exawatt Science and Technology (IZEST) was set up in 2011 and now has 23 associated institutes worldwide.
The ICAN project
Stimulated by the JTF’s report, members of IZEST set up the ICAN project, which involves a total of 16 institutes including Ecole Polytechnique, the University of Southampton, Fraunhofer IOF Jena and CERN as beneficiaries with a further 13 institutes participating as experts. Starting in October 2011, ICAN began research on the development of the fibre-laser-based CAN concept. Here, pulses from thousands of fibre lasers – built on technology that was originally developed for the telecommunications industry and each capable of producing low-energy pulses efficiently and at a high repetition rate – are coherently added to increase the average power and pulse energy linearly as the number of fibres increases (figure 1). The ICAN project has shown that this architecture can provide a plausible path towards the necessary conditions for a high-energy collider based on a laser accelerator, so answering the challenge that was posed by the JTF report. The fibre laser offers excellent efficiency (>30%) thanks to laser-diode pumping and provides a much larger surface cooling area, therefore making operation at high average power possible.
The most stringent requirement is to phase all of the lasers within a fraction of a wavelength. This originally seemed insurmountable but a preliminary proof of concept that was discussed earlier this year in Nature Photonics suggests that tens of thousands of fibres can be controlled to provide a laser output that is powerful enough to accelerate electrons to energies of several giga-electron-volts at a 10 kHz repetition rate. This is an improvement of at least 10,000 times on today’s state-of-the-art (Mourou et al. 2013). Furthermore, experiments have demonstrated the feedback control of the phase and timing of pulses from each fibre to the attosecond precision necessary for coherent addition. This means that the spatial profile of the overall laser pulse can be delicately controlled to provide a precise beam shape – a highly desirable feature for laser accelerators.
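The phasing requirement can be illustrated with a toy numerical model: when N unit-amplitude fields are summed with locked phases, the combined intensity scales as N², whereas random relative phases yield only about N on average – which is why attosecond-level control of each fibre is essential for coherent addition. A minimal sketch in pure Python (illustrative only, not connected to any actual ICAN control software):

```python
import cmath
import random

def combined_intensity(phases):
    """Intensity |sum of unit-amplitude fields exp(i*phi)|^2."""
    total = sum(cmath.exp(1j * p) for p in phases)
    return abs(total) ** 2

N = 1000  # number of fibres (toy value)

# Phase-locked fibres: all phases equal -> intensity N^2.
coherent = combined_intensity([0.0] * N)

# Unphased fibres: random phases -> intensity ~N on average.
random.seed(1)
trials = [combined_intensity([random.uniform(0, 2 * cmath.pi) for _ in range(N)])
          for _ in range(200)]
incoherent = sum(trials) / len(trials)

print(coherent)    # 1000000.0 (= N^2)
print(incoherent)  # ~N on average
```

The factor N between the two cases is the coherent-addition gain that makes phase control worth the engineering effort.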
Immediately after the publication of the article in Nature Photonics, the group at Fermilab re-launched the concept of a photon (γ–γ) collider for a Higgs factory, using the CAN laser as the source of low-energy photons to generate high-energy γ rays through inverse Compton scattering from two electron beams (figure 2). Such a collider would have a lower beam energy than the equivalent electron–positron collider and less noise (Chou et al. 2013). The required repetition rate is around 50 kHz, within the reach of CAN technology.
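The kinematics behind such a source follow from standard inverse Compton backscattering: a laser photon of energy ω₀ scattered head-on from an electron of energy Eₑ emerges with a maximum energy of x/(1+x)·Eₑ, where x = 4Eₑω₀/(mₑc²)². The beam and laser parameters below are illustrative assumptions, not figures quoted by the proponents:

```python
ME_C2 = 0.511e6  # electron rest energy in eV

def max_backscattered_energy(e_beam_ev, laser_photon_ev):
    """Maximum photon energy from head-on inverse Compton scattering.

    Standard photon-collider kinematics:
    x = 4*E_e*w0/(m_e*c^2)^2, E_gamma_max = x/(1+x) * E_e.
    """
    x = 4.0 * e_beam_ev * laser_photon_ev / ME_C2 ** 2
    return x / (1.0 + x) * e_beam_ev

# Illustrative numbers: an 80 GeV electron beam and a ~1.2 eV
# (near-infrared) laser photon.
e_gamma = max_backscattered_energy(80e9, 1.2)
print(e_gamma / 1e9)  # ~47.6 GeV
```

A sizeable fraction of the electron-beam energy is thus converted to each γ ray, which is why the γ–γ option needs a lower beam energy than the equivalent e⁺e⁻ collider.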
Fundamental physics via high fields
While a photon collider would study Higgs physics and other high-energy phenomena, copious coherent photons from the CAN lasers could also provide a new opportunity to look for undetected "dark fields" that are associated with low-energy dark matter (axion-like particles, for example) and dark energy (Tajima and Homma 2012). The use of three distinct parallel lasers with huge numbers of photons could allow sensitive detection of possible dark fields. The basic idea is akin to degenerate four-wave mixing, well known in traditional nonlinear optics but in this case probing the vacuum, whose possible nonlinear constituent is so weak that it has appeared "dark" until now. Since the parallel injection of lasers can make their beat frequency particularly low, the range of detectable masses is extremely low – down to nano-electron-volts – compared with orthodox high-energy-physics experiments. The extremely high equivalent luminosity is also unusual. Because of the coherence of the photons, the luminosity of the events is proportional to the triple product of the numbers of photons from the three lasers – N₁N₂N₃. If the laser has 10 kJ of energy, each of these numbers can be as large as 10²³. By contrast, the luminosity of a charged-particle collider is proportional to the product of the particle numbers in the two beams, which are typically around 10¹⁰ each. The two products that determine the luminosity of each "collider" therefore differ by as much as 50 orders of magnitude.
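The 50-orders-of-magnitude claim follows directly from the numbers quoted above, as a one-line calculation confirms:

```python
from math import log10

N_photon = 1e23  # photons per pulse, per laser (three lasers)
N_beam = 1e10    # particles per beam in a charged-particle collider

coherent_product = N_photon ** 3  # N1 * N2 * N3
collider_product = N_beam ** 2    # product of the two beam populations

orders = log10(coherent_product / collider_product)
print(orders)  # ~49, i.e. "as much as 50 orders of magnitude"
```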
Laser-driven acceleration processes, laser wakefield acceleration and the related physical processes might also appear in nature, as demonstrated by the recent realization that the accretion disc of a supermassive black hole and its associated jets could be the theatre for extreme high-energy cosmic-ray acceleration up to 10²¹ eV and accompanying γ-ray emission (Ebisuzaki and Tajima 2013). The disruptive magnetic activities of the disc give rise to the excitation of huge Alfvén waves and the mode-converted electromagnetic waves created in the jet are capable of accelerating ions to extreme energies via a process that is similar to laser acceleration. The coherence of the relativistic waves and particles implies a fundamental similarity between terrestrial and celestial wakefield acceleration processes. It is hoped that one day celestial-type wakefields might be achieved by a similar physical process but on different scales on Earth.
Applications in society
Turning to more earthly matters, the CAN fibre laser opens many doors to challenging applications in society. Combining high average power with high peak power, high efficiency and inherent controllability, the CAN source is applicable to many new areas, of which the following are a few examples.
Laser acceleration of protons would allow much more compact installations than conventional accelerators, which could lead to compact proton-therapy facilities (Habs et al. 2011). As the laser intensity enters the highly relativistic regime and the proton dynamics become relativistic in turn, the acceleration mechanism becomes more efficient and the beam quality improves (Esirkepov et al. 2004). It then becomes plausible to produce an efficient, compact proton source in the relativistic regime.
Laser-driven compact proton beams could also act as the source of neutrons for accelerator-driven systems and accelerator-driven reactors for nuclear energy (figure 3). The high-fluence CAN laser offers the potential for highly efficient compact neutron sources that would be less expensive than those based on conventional methods (Mourou et al. 2013).
Just as in the case of the photon collider but at lower energies, the CAN laser can produce copious γ rays of specified energy in a well defined direction. Such γ-ray sources are useful in nuclear physics, for example in isotopic determination via nuclear resonance fluorescence. Since a CAN-driven γ-ray source could be compact enough to be portable, it could be used for isotopic diagnosis of exposed spent nuclear fuel – as at Fukushima – without contact or proximity. Other industries – automotive, chemical, mechanical, medical, energy and so on – also need high-fluence, high-efficiency short-pulse lasers. One example is nuclear pharmacology: since a CAN source produces γ rays (and hence neutrons) of specific energy, it can be used to create specific isotopes of nuclei that are known to be useful for medical purposes (Habs et al. 2011).
The future looks bright for fibre lasers – not only in high-energy physics but in many applications for society.
At the traditional dinner party they danced to samba music while holding caipirinhas. During the day, the more than 700 physicists who attended the 33rd International Cosmic Ray Conference (ICRC 2013) in Rio de Janeiro listened carefully to the 400 scheduled talks in a variety of plenary and parallel sessions on 2–9 July. Instead of caipirinhas, they held laptops and notepads as they focused on the important findings and data presented at the first ICRC to be held in South America.
Organized under the auspices of the International Union of Pure and Applied Physics (IUPAP) and its C4 Commission on Cosmic Rays, ICRC 2013 was hosted by the Centro Brasileiro de Pesquisas Físicas – an institute of the ministry of science, technology and innovation – the Federal University of Rio de Janeiro and the Brazilian Physical Society. It was sponsored by the National Council for Scientific and Technological Development (CNPq), the Coordination for Improvement of Higher Education Personnel (CAPES) and the Research Support Foundation of the state of Rio de Janeiro (FAPERJ).
The location in South America was not the only “first”. The organization of the 33rd ICRC had a scientific programme committee for the first time, consisting of leading experts in solar and heliospheric physics, cosmic-ray physics, gamma-ray astronomy, neutrino astronomy and dark-matter physics. Also for the first time, ICRC included research on dark matter as a main branch of the programme. For this reason, ICRC 2013 adopted the subtitle “The Astroparticle Physics Conference”. This might also become the C4 Commission’s new name, as Johannes Knapp, the commission’s chair, announced during the closing session. The commission organized a poll during the nine days of the conference in which all registered participants could vote on changing the name from “Cosmic Rays” to “Astroparticle Physics”. The majority voted for the change and the commission is now consulting IUPAP on the matter. To maintain tradition, the conference’s main title – ICRC – will remain unchanged.
ICRC 2013 was certainly a success. During the plenary session on results from the Pierre Auger Observatory, Antoine Letessier-Selvon of CNRS and Université Pierre et Marie Curie presented evidence of what could be called “the muon problem”. It concerns the conflict between the prediction from Monte Carlo simulations of the number of muons in the surface Cherenkov detectors and the value extracted from the experimental data, which is about a factor of 1.5 higher. Letessier-Selvon argued that a change in composition at higher energies is not sufficient to explain the discrepancy.
The ground-based gamma-ray experiments HESS, MAGIC and VERITAS have added new gamma-ray sources – both in the Galaxy and beyond it – to the catalogue, which now totals about 150 sources. Teams at the northern-hemisphere observatories reported flaring of the blazar Mkn 421 in April this year, while MAGIC registered another flare in November 2012, in IC 310 – an extragalactic source that it had previously discovered. Miguel Mostafa of Colorado State University presented the "first light" results – in fact, gamma rays – from the High Altitude Water Cherenkov (HAWC) Observatory, installed at an altitude of 4150 m in Mexico. It is designed to detect ultra-high-energy gamma rays and is sensitive to energies above 300 GeV. With only about one third of the detector in operation, the collaboration was still able to present its view of the Mkn 421 flare of April.
In neutrino research, the IceCube experiment has some thrilling results. Spencer Klein of the Lawrence Berkeley National Laboratory and the University of California, Berkeley presented the 28 events that were detected with energies above 50 TeV, which include the previously revealed events above 1 PeV (CERN Courier July/August 2013 p5). Klein also spoke of the observation of another very-high-energy event in the ongoing analysis of 2012 data – but its characteristics remain “top secret”.
Another highlight of ICRC 2013 was the presentations by Nobel laureate Sam Ting and the Alpha Magnetic Spectrometer (AMS) collaboration of the first results from two years of AMS-02 operation on the International Space Station (ISS). The main goal is to perform a high-precision, large-statistics and long-duration study of cosmic nuclei, elementary charged particles and gamma rays. At the conference the collaboration presented high-precision measurements of the fluxes, ratios and anisotropies of electrons and positrons, as well as first results on proton and helium fluxes (CERN Courier October 2013 p22).
Moving further out in space, Ed Stone from Caltech presented the saga of the Voyager 1 spacecraft, launched in 1977, which is now at the edge of the solar system. The data clearly show a “wall” characterizing the heliosheath. It is astonishing that Voyager 1 is still collecting data after all these years – with an on-board computer of the 1970s and a power source that is still very much alive having passed through the harsh environment of Jupiter and Saturn. Stone was seen not only by the conference participants but also by the 40 million viewers who watched an interview with him during a popular programme on Brazilian TV.
The parallel sessions included presentations on a plethora of new projects ranging from next-generation imaging air-Cherenkov telescopes, represented by the Cherenkov Telescope Array, to the Extreme Universe Space Observatory onboard the Japanese Experiment Module (JEM-EUSO). To be installed on the ISS, JEM-EUSO is designed to measure ultra-high-energy cosmic rays through the fluorescence of the extensive air showers that they produce – an expression of optimism in the future of the field.
The 34th ICRC meeting will be held at The Hague, the Netherlands, in July 2015 and will be followed two years later by the 35th meeting in Busan, Korea. Although there will be no samba or caipirinhas, there will surely be the same level of results and commitment from astroparticle physicists worldwide.
Awards for astroparticle physics
Besides the announcements of important findings and experiments, the conference was the occasion for the traditional awards for outstanding contributions in astroparticle physics. Six people were honoured, from more than 30 nominations.
Aya Ishihara, from Chiba University, received an IUPAP Young Scientist Award for her outstanding work on the search for ultra-high-energy neutrinos and the detection of the two neutrino events at >1 PeV with the IceCube detector. A second Young Scientist Award went to Daniel Mazin, from IFAE Barcelona, for his outstanding work on gamma-ray blazars and extragalactic background light, using the MAGIC Cherenkov telescopes.
Rolf Bühler, from DESY Zeuthen, received the Shakti Duggal Award for his outstanding work on the variability of the emission from the Crab nebula and extragalactic background light, using the HESS and Fermi telescopes. The O’Ceallaigh Medal was awarded to Edward Stone, from Caltech, for his contributions to cosmic-ray physics and specifically his leading role in the Voyager mission.
Motohiko Nagano, from ICRR Tokyo and Fukui University, received the Yodh Prize for his pioneering leadership in the experimental study of the highest-energy cosmic rays. Sunil Gupta, from TIFR Mumbai, was awarded the Homi Bhabha Medal and Prize for his contributions to non-thermal astrophysics and his leading role in the development of gamma-ray astronomy.
CERN Open Days 2013 saw 70,000 people visit more than 40 activities on the surface across CERN’s Meyrin and Prévessin sites, with 20,000 of them able to see something of the accelerators and detectors underground. Highlights for visitors included seeing one of the large experiments on the LHC – ALICE, ATLAS, CMS or LHCb – or operating robotic arms and forklift trucks, or even making superconducting magnets levitate. A taskforce of 2300 volunteers acted as guides and helpers, explaining the variety of activities at CERN – from particle physics and computing to logistics and firefighting – to enthusiasts young and old.
As well as the public open days on Saturday and Sunday, events before and after made this a weekend to remember. On Friday 27 September, CERN welcomed local officials and industrial contacts from throughout its member states for exclusive tours of the laboratory. In the evening – and to celebrate European Researchers’ Night – CERN and the Istituto Nazionale di Astrofisica organized “Origins 2013”, an event that included simultaneous activities at CERN, Paris and Bologna, with participation from UNESCO, ESA, ESO and INFN. During a webcast event in the Globe of Science and Innovation at CERN, those onstage took questions both from the audience and online.
There was also a flurry of activity on social media. Online events began with a CERN tweetup on Friday, when 12 lucky people visited CERN as citizen journalists to share their exclusive preview of the open days with the world via Twitter.
• Max Brice, the CERN photographer, led a team of 26 photographers recording the open days’ events, with Anna Pantelia, Fons Rademakers, Laurent Egli, Mike Struick, Didier Steyaert, Mathieu Augustin, Pierre Gildemyn, Matthias Schroder, Dmytro Kovalskyi, Lelia Laureyssens, Sylvain Chapeland, Jan Fiete Grosse-Oetringhaus, Antonella Vitale, Jean-Francois Marchand, Neli Ivanova, Olga Driga, Doris Chromek-Burckhart, Sebastian Lopienski, Tomek photographe, Nicolas Voumard, Erwin van Hove, Stephan Russenschuck, Ilknur Colak, Laura Rossi and Alban Sublet. A selection of photographs is shown here; for many more, see http://cds.cern.ch/collection/Open Days 2013 Photos.
Albert Einstein’s theory of relativity is one of the most successfully tested ideas in physics. Based on the statement that the laws of physics are invariant under rotations and boosts – officially known as Lorentz symmetry – relativity is a cornerstone of the two most successful descriptions of nature: general relativity and the Standard Model. Although experiments to date indicate that relativity provides an accurate description, it became clear in the late 1980s that violations in relativity could appear theoretically as natural features of candidate models of quantum gravity.
During the following decade, a group of theorists led by Alan Kostelecký at Indiana University developed a general framework extending general relativity and the Standard Model to include all possible violations of Lorentz symmetry and CPT symmetry – the combination of charge conjugation, C, parity inversion, P, and time reversal, T – in a realistic field theory. This framework, called the Standard-Model Extension (SME), provides practical methods to compute observable effects for a given experiment. As a result, its advent triggered wide-ranging interest in the features of relativity violations.
Over the past 15 years or so, the experimental community has also enlisted in this challenging enterprise and the search for Lorentz violation has now turned theoretical ideas into a formal field in which theorists and experimentalists worldwide explore possible signals that could reveal that relativity is not exact. Despite the fact that current technology is far from reaching energies that are relevant for quantum gravity, the SME has shown that it is possible to probe well beyond the Planck scale by searching for suppressed effects in low-energy experiments.
In June, the 6th Meeting on CPT and Lorentz Symmetry (CPT’13) took place in Bloomington, Indiana. The latest in a series of unusual conferences that are held every three years, it brought together physicists from a variety of disciplines and global locations to discuss new results and future prospects for studying these fundamental spacetime symmetries. The experiments testing CPT and Lorentz symmetry and the theoretical calculations presented at the meeting together span an impressive set of subfields in physics. Furthermore, the techniques involved cover energies from fractions of an electron-volt to millions of giga-electron-volts.
Given the deep connection between Lorentz invariance and the CPT theorem in local field theory, one of the direct tests of these symmetries involves comparisons between the behaviour of matter and antimatter systems. The ALPHA collaboration reported on the remarkable progress made along these lines in its experiment at CERN. The collaboration has used antihydrogen traps to store antiatoms for several minutes and to perform basic spectroscopy (CERN Courier July/August 2011 p6). These long timescales for antiatomic systems also offer interesting prospects for studying the effects of gravitational fields.
An important theoretical development in the SME described at the meeting is the study of the effects of relativity violations in couplings of gravity to matter and antimatter. This work has motivated different tests of the equivalence principle as presented by several groups including those at the Max Planck Institute for Nuclear Physics in Heidelberg, the University of California at Berkeley and the University of Pisa. Results of analyses using data from the Gravity Probe B satellite were also presented at the meeting.
Particle physics offers another experimental playground to test Lorentz and CPT invariance. The manner in which Lorentz violation could appear in different systems includes modifications to the kinematics arising from unconventional energy–momentum relations as well as dynamic effects in interactions between different particles. A basic notion in the SME is that breaking Lorentz symmetry must lead the universe to manifest at least one preferred direction. For this reason, one of the key signals to study in Earth-based experiments is the sidereal variation of the relevant experimental observables resulting from the change in the coupling between the system studied and the preferred direction in the universe.
Sidereal variations are one of the most used techniques for testing Lorentz symmetry. At the meeting, the experimental group at the University of Groningen presented its implementation of such a test in which the researchers search for sidereal variations in the β-decay rate of sodium atoms. Following a similar approach, a recent proposal formulates the effects of Lorentz violation in the β decay of tritium. The experimental group at the University of Washington reported on the status of the KATRIN experiment and reviewed the use of this detector, which is designed for direct measurement of the mass of the neutrino, as a probe of Lorentz and CPT symmetry using tritium decay. Signals to be tested include the sidereal variation of the endpoint energy and other effects that could mimic a nonzero neutrino mass.
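The generic analysis behind such searches can be sketched simply: fold the measured rate into sidereal time and fit a constant plus a sinusoid at the sidereal frequency (period ≈ 23 h 56 min). For uniform sampling over whole sidereal periods, the least-squares amplitudes reduce to simple averages. A toy illustration in pure Python on synthetic data (not any experiment's actual analysis code):

```python
import math

SIDEREAL_PERIOD_H = 23.9345  # sidereal day in hours
OMEGA = 2 * math.pi / SIDEREAL_PERIOD_H

def fit_sidereal(times_h, rates):
    """Least-squares fit of rate(t) = A + B sin(wt) + C cos(wt).

    Valid for uniform sampling over an integer number of sidereal
    periods, where the normal equations decouple into plain averages.
    """
    n = len(rates)
    A = sum(rates) / n
    B = 2 * sum(r * math.sin(OMEGA * t) for t, r in zip(times_h, rates)) / n
    C = 2 * sum(r * math.cos(OMEGA * t) for t, r in zip(times_h, rates)) / n
    return A, B, C

# Synthetic data: constant rate 100 with a 2% sidereal modulation,
# sampled uniformly over 10 sidereal periods.
times = [i * SIDEREAL_PERIOD_H / 1000 for i in range(10000)]
rates = [100.0 + 2.0 * math.sin(OMEGA * t) for t in times]

A, B, C = fit_sidereal(times, rates)
print(A, B, C)  # recovers ~100, ~2, ~0
```

A nonzero modulation amplitude surviving systematic checks would be the sidereal signature described above; real analyses must of course also model detector and environmental variations that happen to have daily periods.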
The free propagation of neutrinos has also served as a sensitive probe of Lorentz symmetry. The Double Chooz experiment is designed to measure θ₁₃ – the last of the three neutrino-mixing angles, which is responsible for the disappearance of reactor antineutrinos and is key to the possibility of CP violation in neutrinos. Using data from this experiment, a team from Massachusetts Institute of Technology has recently performed a search for sidereal variations of antineutrino oscillations in the context of the SME and also explored the effects of Lorentz violation in the form of possible neutrino–antineutrino oscillations. Other interferometric techniques reported at CPT’13 included sidereal studies performed using the semileptonic decay of B mesons in the DØ experiment at Fermilab and neutral kaons in the KLOE experiment at INFN’s Frascati National Laboratory. No compelling evidence of Lorentz violation has appeared but impressive new limits on SME coefficients that control deviations from exact symmetry have been established.
In the past, most studies of Lorentz violation have used the minimal SME as a theoretical framework. The minimal SME extends the Standard Model by incorporating only operators of mass dimension four or less, which guarantees power-counting renormalizability of the theory. One of the most ambitious goals in recent years has been the explicit identification and classification of SME operators of arbitrary mass dimension. The basics of the theory of these so-called non-minimal terms were presented in the photon sector in the previous meeting in this series – CPT’10 (CERN Courier October 2010 p29). In the intervening three years, several experimental searches have demonstrated the advantage of studying high-energy photons from astrophysical sources and the competitive sensitivity of different tabletop experiments.
Using astrophysical observations of sources of X and γ rays at cosmological distances, researchers from Washington University in St Louis reported on a systematic study of non-minimal operators in electrodynamics and provided new limits on photon SME coefficients. Using data from the HESS, MAGIC and VERITAS telescopes, the study searched for the possible energy dependence of the speed of high-energy photons. This is one of the unconventional effects predicted by the SME as a consequence of modified photon dispersion relations. The team now plans to use polarimetry measurements with future space telescopes to explore other effects predicted by the SME.
Despite the great sensitivity of high-energy photons, some Lorentz-violating effects are undetectable in astrophysical measurements. In this case, resonant cavities act as complementary probes of quantum-gravity effects. Microwave cavities and cryogenic sapphire oscillators have allowed scientists at the University of Western Australia to complete the first laboratory study of non-renormalizable operators in the SME. In Berlin, cryogenic optical resonators and systems of ultracold atomic quantum gases have been developed by researchers at Humboldt University as tools to test relativity, both on Earth and in space.
The remarkable number of experimental studies already performed has led to a vast number of experimental constraints on SME coefficients that control the various ways that Lorentz symmetry can be broken in different sectors of the theory. The results are compiled in a rapidly growing document – Data Tables for Lorentz and CPT Violation – which is updated every year. Nonetheless, many more effects remain unexplored. The CPT’13 meeting provided a welcome week-long opportunity to exchange ideas, initiate collaborations and share experimental and theoretical techniques among different sectors. The study of violations of Lorentz and CPT symmetry is a continuing and exciting adventure with many new directions still to be explored.
Micropattern gaseous detectors (MPGDs) are the modern heirs of multiwire proportional counter (MWPC) planes, with the wires replaced by microstructures that are engraved on printed-circuit-like substrates. An idea that was first proposed by Anton Oed in 1988, it was the invention of stable amplification structures such as the micromesh gaseous structure (Micromegas) by Ioannis Giomataris in 1996 and the gas electron multiplier (GEM) by Fabio Sauli in 1997 that triggered a boom in the development and applications of these detectors. It was as a consequence of this increasing activity that the series of international conferences on micropattern gaseous detectors was initiated, with the first taking place in Crete in 2009 followed by the second meeting in Kobe in 2011.
The third conference – MPGD2013 – moved to Spain, bringing more than 125 physicists, engineers and students to the Paraninfo building of the Universidad de Zaragoza during the first week of July. The presentations and discussions took place in the same room that, about a century ago, Santiago Ramón y Cajal – the most prominent Spanish winner of a scientific Nobel prize – studied and taught in. The Paraninfo is the university’s oldest building and its halls, corridors and stairs provided an impressive setting for the conference. The streets, bars and restaurants of Zaragoza – the capital of Aragon – were further subjects for the conference participants to discover. After an intense day of high-quality science, lively discussions often continued into the evening and sometimes late into the night, helped by a variety of tapas and wines.
The wealth of topics and applications that were reviewed at the conference reflected the current exciting era in the field. Indeed, the large amount of information and number of projects that were presented make it difficult to summarize the most relevant ones in a few lines. The following is a personal selection. Readers who would like more detail can browse the presentations that are posted on the conference website, including the excellent and comprehensive conference-summary talk given by Silvia Dalla Torre of INFN/Trieste on the last day.
The meeting started with talks about experiments in high-energy and nuclear physics that are using (or planning to use) MPGDs. Since the pioneering implementation of GEM and Micromegas detectors by the COMPASS collaboration at CERN – the first large-scale use of MPGDs in high-energy physics – they have spread to many more experiments. Now all of the LHC experiment collaborations plan to install MPGDs in their future upgrades. The most impressive examples, in terms of detector area, are the 1200 m² of Micromegas modules to be installed in the muon system of ATLAS and the 1000 m² of GEM modules destined for the forward muon spectrometer of CMS. These examples confirm that MPGDs are the technology of choice when large areas need to be covered with high granularity and occupancy in a cost-effective way. These numbers also imply that transferring the fabrication know-how to industry is a must. A good deal of effort is currently devoted to industrialization of MPGDs and this was also an important topic at the conference.
MPGDs have found application in other fields of fundamental research. Some relevant examples that were discussed at the conference included their use as X-ray or γ detectors or as polarimeters in astrophysics, as neutron or fission-fragment detectors, or in rare-event experiments. Several groups are exploring the possibility of developing MPGD-based light detectors – motivated greatly by the desire to replace large photo-multiplier tube (PMT) arrays in the next generation of rare-event experiments. Working at cryogenic temperatures – or even within the cryogenic liquid itself – is sometimes a requirement. Large-area light detectors are also needed for Cherenkov detectors and in this context presentations at the conference included several nice examples of Cherenkov rings registered by MPGDs. Several talks reported on applications beyond fundamental research, including a review by Fabrizio Murtas of INFN/Frascati and CERN. MPGDs are being used or considered in fields as different as medical imaging, radiotherapy, material science, radioactive-waste monitoring and security inspection, among others.
An important part of the effort of the community is to improve the overall performance of current MPGDs, in particular regarding issues such as ageing or resilience to discharges. This is leading to modified versions of the established amplification structures of Micromegas and GEMs and to new alternative geometries. Some examples that were mentioned at the conference are variations that go under the names of μ-PIC, THGEM, MHSP or COBRA, as well as configurations that combine several different geometries. In particular, a number of varieties of thick GEM-like (THGEM) detectors (also known as large electron multipliers, or LEM) are being actively developed, as Shikma Bressler of the Weizmann Institute of Science described in her review.
Many of the advances that were presented involve the use of new materials – for example, a GEM made out of glass or Teflon – or the implementation of electrodes with resistive materials, the main goal being to limit the size and rate of discharges and their potential damage. Advances on the GridPix idea – the use of a Micromegas mesh post-processed on top of a Timepix chip – also go in the direction of adding a resistive layer to limit discharges and attracted plenty of interest. Completely new approaches were also presented, such as the “piggyback Micromegas” that separates the Micromegas from the actual readout by a ceramic layer, so that the signal is read by capacitive coupling and the readout is immune to discharges.
The presence of CERN’s Rui de Oliveira to review the technical advances in MPGD fabrication techniques and industrialization is already a tradition at these meetings. Current efforts focus on the challenges presented by projects that require GEM and Micromegas detectors with larger areas. Another tradition is to hear Rob Veenhof of Uludağ University and the RD51 collaboration review the current situation in the simulation of electron diffusion, amplification and signal detection in gas, as well as the corresponding software tools. Current advances are allowing the community to understand progressively the performance of MPGDs at the microphysics level. Finally, although electronics issues were present in many of the talks, the participants especially enjoyed a pedagogical talk by CERN’s Alessandro Marchioro about the trends in microelectronics and how they might affect future detectors in the field. These topics were studied in more detail in the sessions of the RD51 collaboration meeting that came after the conference at the same venue. Fortunately, there was an opportunity to relax before that meeting, with a one-day excursion to the installations of the Canfranc Underground Laboratory in the Spanish Pyrenees and St George’s castle in the town of Jaca.
The vitality of the MPGD community resides in the relatively large number of young researchers who came to Zaragoza eager to present their work as a talk or as one of the 40 posters that were displayed in the coffee and lunch hall during the week. Three of those young researchers – Michael Lupberger of the University of Bonn, Diego González-Díaz of the University of Zaragoza and Takeshi Fujiwara of the University of Tokyo – received the Charpak Prize to reward their work. This award was first presented at MPGD2011 in Japan and the hope is that it becomes formally established in the MPGD community on future occasions.
Time will tell which of the many ideas now being put forward will eventually become established, but the creativity of the community is remarkable and one of its most important assets. References to this creativity – and to the younger generation of researchers who foster it – were recurrent throughout the conference. At the banquet, on the Ebro riverbank in the shadow of the tall towers of the Basílica del Pilar, several senior members gave advice to the new generation of MPGD researchers and proposed a toast to them – a blessing for the field.
Translated by Bertrand Nicquevert, a research engineer at CERN, this French edition also contains a preface by Lyn Evans, former LHC project leader, and a postface written by the translator together with ATLAS physicist Pauline Gagnon of Indiana University. For a review of the English edition, see CERN Courier July/August 2013 p52.