High-gradient X-band technology: from TeV colliders to light sources and more

The demanding and creative environment of fundamental science is a fertile breeding ground for new technologies, especially unexpected ones. Many significant technological advances, from X-rays to nuclear magnetic resonance and the Web, were not themselves a direct objective of the underlying research, and particle accelerators exemplify this dynamic transfer from the fundamental to the practical. From isotope separation to X-ray radiotherapy and, more recently, hadron therapy, there are now many categories of accelerator dedicated to diverse user communities across the sciences, academia and industry. These include synchrotron light sources, X-ray free-electron lasers (XFELs) and neutron spallation sources, which enable research that often has direct societal and economic implications.

During the past decade or so, high-gradient linear-accelerator technology developed for fundamental exploration has matured to the point where it is being transferred to applications beyond high-energy physics. Specifically, the unique requirements of the Compact Linear Collider (CLIC) project at CERN have led to a new high-gradient “X-band” accelerator technology that is attracting the interest of the light-source and medical communities, which, given their diverse nature, would have found it difficult to advance such technology themselves.

Set to operate until the mid-2030s, the Large Hadron Collider (LHC) collides protons at an energy of 13 TeV. One possible path forward for particle physics in the post-LHC, “beyond the Standard Model” era is a high-energy linear electron–positron collider. CLIC envisions an initial stage with a centre-of-mass energy of 380 GeV, focused on precision measurements of the Higgs boson and the top quark, which are promising targets in the search for deviations from the Standard Model (CERN Courier November 2016 p20). The machine could then, guided by the results from the LHC and the initial-stage linear collider, be lengthened to reach energies up to 3 TeV for detailed studies of this high-energy regime. CLIC is overseen by the Linear Collider Collaboration along with the International Linear Collider (ILC), a lower-energy electron–positron machine envisaged to operate initially at 250 GeV (CERN Courier January/February 2018 p7).

The accelerator technology required by CLIC has been under development for around 30 years and the project’s current goals are to provide a robust and detailed design for the update of the European Strategy for Particle Physics, with a technical design report by 2026 if resources permit. One of the main challenges in making CLIC’s 380 GeV initial energy stage cost effective, while guaranteeing its reach to 3 TeV, is generating very high accelerating gradients. The gradient needed for the high-energy stage of CLIC is 100 MV/m, which equates to 30 km of active acceleration. For this reason, the CLIC project has made a major investment in developing high-gradient radio-frequency (RF) technology that is feasible, reliable and cheap.
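
As a quick back-of-the-envelope check of that figure (simple arithmetic, not a quote from the CLIC design documentation): reaching 3 TeV at an average gradient of 100 MV/m, i.e. an energy gain of 10⁸ eV per metre for a singly charged particle, requires an active length of

  L = E/G = (3 × 10¹² eV) / (10⁸ eV per metre) = 3 × 10⁴ m = 30 km.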

Evading obstacles

Maximising the accelerating gradient leads to a shorter linac and thus a less expensive facility. But there are two main limiting factors: the increasing need for peak RF power and the limited ability of accelerating-structure surfaces to sustain increasingly strong electromagnetic fields. Circumventing these obstacles has been the focus of CLIC activities for several years.

One way to mitigate the increasing demand for peak power is to use higher-frequency accelerating structures (figure 1), since the power needed for a fixed beam energy goes up linearly with gradient but goes down approximately with the inverse square root of the RF frequency. The latest XFELs, SACLA in Japan and SwissFEL in Switzerland, operate at “C-band” frequencies of 5.7 GHz, which enables a gradient of around 30 MV/m and a peak power requirement of around 12 MW/m in the case of SwissFEL. This increase in frequency required a significant technological investment, but CLIC’s demand for 3 TeV energies and high beam current requires a peak power per metre of 200 MW/m! This challenge has been under study since the late 1980s, with CLIC first focusing on 30 GHz structures and the Next Linear Collider/Joint Linear Collider community developing 11.4 GHz “X-band” technology. The twists and turns of these projects are many, but the NLC/JLC project ceased in 2005 and CLIC shifted to X-band technology in 2007. CLIC also generates high peak power using a two-beam scheme in which RF power is locally produced by transferring energy from a low-energy, high-current beam to a high-energy, low-current beam. In contrast to the ILC, CLIC adopts normal-conducting RF technology to go beyond the approximately 50 MV/m theoretical limit of existing superconducting cavity geometries.
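
To put that rule of thumb in symbols (a shorthand of the scaling quoted above, not a derivation): for a fixed final beam energy E = G × L, the peak RF power scales roughly as

  P ∝ G / √f_RF,

so raising the gradient G raises the power demand linearly, while raising the RF frequency f_RF lowers it approximately as the inverse square root. The SwissFEL and CLIC figures quoted (about 12 MW/m at 30 MV/m versus 200 MW/m at 100 MV/m) also reflect CLIC’s much higher beam current, which this simple scaling ignores.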

The second main challenge when generating high gradients is more fundamental than the practical peak-power requirements. A number of phenomena come into play when the metal surfaces of accelerating structures are subjected to very high electromagnetic fields, the most prominent being vacuum arcing, or breakdown, which induces kicks to the beam that result in a loss of luminosity. A CLIC accelerating structure operating at 100 MV/m will have surface electric fields in excess of 200 MV/m, sometimes leading to the formation of a highly conductive plasma directly above the surface of the metal. Significant progress has been made in understanding how to maximise gradient despite this effect, and a key insight has been the identification of the role of local power flow. Pulsed surface heating is another troubling high-field phenomenon faced by CLIC, in which ohmic losses associated with surface currents cause fatigue damage to the outer cavity wall and reduce performance. Understanding these phenomena has been essential to guide the development of an effective design and technology methodology for achieving gradients in excess of 100 MV/m.

Test-stand physics

Critical to CLIC’s development of high-gradient X-band technology has been an investment in four test stands, which allowed investigations of the complex, multi-physics effects that affect high-power behaviour in operational structures (figure 2). The test stands provided the RF klystron power, dedicated instrumentation and diagnostics to operate, measure and optimise prototype RF components. In addition, to investigate beam-related effects, one of the stands was fed by a beam of electrons from the former “CTF3” facility. This has since been replaced by the CLEAR test facility, at which experiments will come on line again next year (CERN Courier November 2017 p8).

While the initial motivation for the CLIC test stands was to test prototype components, high-gradient accelerating structures and high-power waveguides, the stands are themselves prototype RF units for linacs – the basic repeatable unit that contains all the equipment necessary to accelerate the beam. A full linac, of course, needs many other subsystems such as focusing magnets and beam monitors, but the existence of four operating units that can be easily visited at CERN has made high-gradient and X-band technology serious options for a number of linac applications in the broader accelerator community. An X-band test stand at KEK has also been operational for many years and the group there has built and tested many CLIC prototype structures.

With CLIC’s primary objective being to provide practical technology for a particle-physics facility in the multi-TeV range, it is rather astonishing that an application requiring a mere 45 MeV beam finds itself benefiting from the same technology. This small-scale project, called Smart*Light, is developing a compact X-ray source for a wide range of applications including cultural heritage, metallurgy, geology and medicine, providing a practical local alternative to a beamline at a large synchrotron light source. Led by the University of Eindhoven in the Netherlands, Smart*Light produces monochromatic X-rays via inverse Compton scattering, in which X-rays are produced by “bouncing” a laser pulse off an electron beam. The project team aims to make the equipment small and inexpensive enough to be integrated in a museum or university setting, and is addressing this objective with a 50 MV/m-range linac powered by one of the two standard CLIC test-stand configurations (a 6 MW Toshiba klystron). Funding has been awarded to construct the first prototype system and, once operational, Smart*Light will pursue commercial production.
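
For orientation, the X-ray energy of such an inverse-Compton source can be estimated with the standard textbook relation (the laser parameters below are illustrative assumptions, not Smart*Light specifications): for a head-on collision, the back-scattered photon energy is approximately

  E_X ≈ 4γ² E_laser,

where γ is the electron Lorentz factor. A 45 MeV beam has γ ≈ 88, so a roughly 1.2 eV infrared laser photon is boosted to about 4 × 88² × 1.2 eV ≈ 37 keV, i.e. into the hard X-ray range, which is why a compact linac suffices.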

Another Compton-source application is the TTX facility at Tsinghua University in China, which is based on a 45 MeV beam. The Tsinghua group plans to increase the energy of the X-rays by upgrading the energy of its electron linac, which must be done by increasing the accelerating gradient because the facility is housed in an existing radiation-shielded building. The energy increase will occur in two steps: the first will raise the accelerating gradient by upgrading parts of the existing 3 GHz S-band RF system, and the second will replace sections with an X-band system to increase the gradient up to 70 MV/m. The Tsinghua X-band power source will also implement a novel “corrector cavity” system to flatten the compressed power pulse, a scheme that is also now part of the 380 GeV CLIC baseline design. Tsinghua has successfully tested a standard CLIC structure to more than 100 MV/m at KEK, demonstrating that high-gradient technology can be transferred, and has taken delivery of a 50 MW X-band klystron for use in a test stand.

Perhaps the most significant X-band application is XFELs, which produce intense and short X-ray bursts by passing a very low-emittance electron beam through an undulator magnet. The electron linac represents a substantial fraction of the total facility cost and the number of XFELs is presently quite limited, with demand for facilities exceeding the available beam time. Operational facilities include LCLS at SLAC, FERMI in Trieste and SACLA at RIKEN, while the European XFEL in Germany, PAL-XFEL in Korea and SwissFEL are being commissioned (CERN Courier July/August 2017 p18), and it is expected that further facilities will be built in the coming years.

XFEL applications

CLIC technology, both the high-frequency and high-gradient aspects, has the potential to significantly reduce the cost of such X-ray facilities, allowing them to be funded at the regional and possibly even university scale. To examine the benefits of CLIC technology in combination with other recent advances in injectors and undulators, the European Union project CompactLight has recently received a design-study grant to prepare a complete technical design report for a small-scale facility (CERN Courier December 2017 p8).

A similar type of electron linac, in the 0.5–1 GeV range, is being proposed by Frascati Laboratory in Italy for XFEL development, in addition to the study of advanced plasma-acceleration techniques. To fit the accelerator in a building on the Frascati campus, the group has decided to use high-gradient X-band technology for its linac and has joined forces with CLIC to develop it. The cooperation includes Frascati staff visiting CERN to help run the high-gradient test facilities, as well as the construction of a test stand at Frascati, an important step in establishing the laboratory’s capability to use CLIC technology.

In addition to providing a high-performance technology for acceleration, high-gradient X-band technology is the basis for two important devices that manipulate the beam in low-emittance and short-bunch electron linacs, as used in XFELs and advanced development linacs. The first is the energy-spread lineariser, which uses a harmonic of the accelerating frequency to correct the energy spread along the bunch and enable shorter bunches. A few years ago a collaboration between Trieste, PSI and CERN made a joint order for the first European X-band-frequency (11.994 GHz) 50 MW klystrons from SLAC, and jointly designed and built the lineariser structures, which have significantly improved the performance of the Elettra light source in Trieste and become an essential element of SwissFEL.

Following the CLIC test-stand and lineariser developments, a new commercial X-band klystron has become available, this time at the lower power of 6 MW and supplied by Canon (formerly Toshiba). This new klystron is ideally suited to lineariser systems, and one such system has recently been constructed at the soft X-ray XFEL at SINAP in Shanghai, which has a long-standing collaboration with CLIC on high-gradient and X-band technology. Back in Europe, Daresbury Laboratory has decided to invest in a lineariser system to provide the exceptional control of the electron-bunch characteristics needed for its XFEL programme, which is being developed at its CLARA test facility. Daresbury has been working with CLIC to define the system, and is now procuring an RF power system based on the 6 MW klystron and a pulse compressor. This will certainly be a major step towards easier adoption of X-band technology.

The second major high-gradient X-band beam-manipulation application is the RF deflector, which is used at the end of an XFEL to measure the bunch characteristics as a function of position along the bunch. High-gradient X-band technology is well suited to this application and there is now widespread interest in implementing such systems. Teams at FLASH2, FLASH-Forward and SINBAD at DESY, SwissFEL and CLIC are collaborating to define common hardware, including a variable-polarisation deflector to allow a full 6D characterisation of the electron bunch. SINAP is also active in this domain. The facility is awaiting delivery of three 50 MW CPI klystrons to power the deflectors and will build a standard CLIC test structure for tests at CERN, in addition to a prototype X-band XFEL structure in the context of CompactLight.

The rich exchange between different projects in the high-gradient community is typified by PSI and in particular the SwissFEL. Many essential features of the SwissFEL have a linear-collider heritage, such as the micron-precision diamond machining of the accelerating structures, and SwissFEL is now returning the favour. For example, a pair of CLIC X-band test accelerating structures are being tested at CERN to examine the high-gradient potential of PSI’s fabrication technology, showing excellent results: both structures can operate at more than 115 MV/m and demonstrate potential cost savings for CLIC. In addition, the SwissFEL structures have been successfully manufactured to micron precision in a large production series – a level of tolerance that has always been an important concern for CLIC. Now that the PSI fabrication technology is established, the laboratory is building high-gradient structures for other projects such as Elettra, which wishes to increase its X-ray energy and flux but has performance limitations with its 3 GHz linac.

Beyond light sources

High-gradient technology is now working its way beyond electron linacs, particularly in the treatment of cancer. The most common accelerator-based cancer treatment uses X-rays, but protons and heavy ions offer many potential advantages. One drawback of hadron therapy is the high cost of the accelerators, which are currently circular. A new generation of linacs offers the potential for smaller, lower-cost facilities with additional flexibility.

The TERA foundation has studied such linac-based solutions and a firm called ADAM is now commercialising a version with a view to building a compact hadron-therapy centre (CERN Courier January/February 2018 p25). To demonstrate the potential of high gradients in this domain, members of CLIC received support from the CERN knowledge transfer fund to adapt CLIC technology to accelerate protons in the relevant energy range, and the first of two structures is now under test. The predicted gradient was 50 MV/m, but the structure has exceeded 55 MV/m and behaves consistently when compared with the almost 20 CLIC structures tested. We now know that it is possible to reach high accelerating gradients even for protons, and projects based on compact linacs can now move forward with confidence.

Collaboration has driven the wider adoption of CLIC’s high-gradient technology. A key event took place in 2005, when CERN management gave CLIC a clear directive that, with LHC construction limiting available resources, the study must find outside collaborators. This was achieved thanks to a strong effort by CLIC researchers, helped by a great deal of activity in electron linacs across the accelerator community.

We should not forget that the wider adoption of X-band and high-gradient technology is extremely important for CLIC itself. First, it enlarges the commercial base, driving costs down and reliability up, and making firms more likely to invest. Another benefit is the improved understanding of the technology and its operability by accelerator experts, with a broadened user base bringing new ideas. Harnessing the creative energy of a larger group has already yielded returns to the CLIC study, for instance addressing important industrialisation and cost-reduction issues.

The role of high-gradient and X-band technology is expanding steadily, with applications at a surprisingly wide range of scales. Although the technology originated in large linear colliders, its use is now increasingly dominated by a proliferation of small-scale applications. Few of these were envisaged when CLIC was formulated in the late 1980s – XFELs were in their infancy at the time. As the technology is applied more widely, its performance will rise further, perhaps even feeding back into the construction of a higher-energy collider. The interplay of different communities can yield advances beyond what any one of them could achieve alone, and it is an exciting time to be part of this field.

Time to adapt for big data

It would be impossible for anyone to conceive of carrying out a particle-physics experiment today without the use of computers and software. Since the 1960s, high-energy physicists have pioneered the use of computers for data acquisition, simulation and analysis. This hasn’t just accelerated progress in the field, but has also driven computing technology generally – from the development of the World Wide Web at CERN to the massive distributed resources of the Worldwide LHC Computing Grid (WLCG) that supports the LHC experiments. For many years these developments and the increasing complexity of data analysis rode a wave of hardware improvements that saw computers get faster every year. However, those blissful days of relying on Moore’s law are now well behind us (see “CPU scaling comes to the end of an era”), and this has major ramifications for our field.

The high-luminosity upgrade of the LHC (HL-LHC), due to enter operation in the mid-2020s, will push the frontiers of accelerator and detector technology, bringing enormous challenges to software and computing (CERN Courier October 2017 p5). The scale of the HL-LHC data challenge is staggering: the machine will collect almost 25 times more data than the LHC has produced up to now, and the total LHC dataset (which already stands at almost 1 exabyte) will grow many times larger. If the LHC’s ATLAS and CMS experiments project their current computing models to Run 4 of the LHC in 2026, the CPU and disk space required will jump by a factor of between 20 and 40 (figures 1 and 2).

Even with optimistic projections of technological improvements there would be a huge shortfall in computing resources. The WLCG hardware budget is already around 100 million Swiss francs per year and, given the changing nature of computing hardware and slowing technological gains, it is out of the question to simply throw more resources at the problem and hope things will work out. A more radical approach for improvements is needed. Fortunately, this comes at a time when other fields have started to tackle data-mining problems of a comparable scale to those in high-energy physics – today’s commercial data centres crunch data at prodigious rates and exceed the size of our biggest Tier-1 WLCG centres by a large margin. Our efforts in software and computing therefore naturally fit into and can benefit from the emerging field of data science.

A new way to approach the high-energy physics (HEP) computing problem began in 2014, when the HEP Software Foundation (HSF) was founded. Its aim was to bring the HEP software community together and find common solutions to the challenges ahead, beginning with a number of workshops organised by a dedicated startup team. In the summer of 2016 the fledgling HSF body was charged by WLCG leaders to produce a roadmap for HEP software and computing. With help from a planning grant from the US National Science Foundation, the HSF brought community and non-HEP experts together at a meeting in San Diego in January 2017 to gather ideas in a world much changed from the time when the first LHC software was created. The outcome of this process was summarised in a 90-page community white paper released in December last year.

The report doesn’t just look at the LHC but considers common problems across HEP, including neutrino and other “intensity-frontier” experiments, Belle II at KEK, and future linear and circular colliders. In addition to improving the performance of our software and optimising the computing infrastructure itself, the report also explores new approaches that would extend our physics reach as well as ways to improve the sustainability of our software to match the multi-decade lifespan of the experiments.

Almost every aspect of HEP software and computing is presented in the white paper, detailing the R&D programmes necessary to deliver the improvements the community needs. HSF members looked at all steps from event generation and data taking up to final analysis, each of which presents specific challenges and opportunities.

Souped-up simulation

Every experiment needs to be grounded in our current knowledge of physics, which means that generating simulated physics events is essential. For much of the current HEP experiment programme it is sufficient to generate events based on leading-order calculations – a relatively modest task in terms of computing requirements. However, already at Run 2 of the LHC there is an increasing demand for next-to-leading order, or even next-to-next-to-leading order, event generators to allow more precise comparisons between experiments and the Standard Model predictions (CERN Courier April 2017 p18). These calculations are particularly challenging both in terms of the software (e.g. handling difficult integrations) and the mathematical technicalities (e.g. minimising negative event weights), which greatly increase the computational burden. Some physics analyses based on Run-2 data are limited by theoretical uncertainties and, by Run 4 in the mid-2020s, this problem will be even more widespread. Investment in technical improvements of the computation is therefore vital, in addition to progress in our underlying theoretical understanding.

Increasingly large and sophisticated detectors, and the search for rarer processes hidden amongst large backgrounds, mean that particle physicists need ever-better detector simulation. The models describing the passage of particles through the detector need to be improved in many areas for high-precision work at the LHC and for the neutrino programme. With simulation being such a huge consumer of resources for current experiments (often representing more than half of all computing done), it is a key area to adapt to new computing architectures.

Vectorisation, whereby processors execute identical arithmetic instructions on multiple pieces of data, would force us to give up the simplicity of simulating each particle individually. How best to do this is one of the most important R&D topics identified by the white paper. Another is to find ways to reduce the long simulation times required by large and complex detectors, which exacerbate the problem of creating simulated data sets with sufficiently high statistics. This requires research into generic toolkits for faster simulation. In principle, mixing and digitising the detector hits at high pile-up is a problem that is particularly suited to parallel processing on new concurrent computing architectures – but only if the rate at which data is read can be managed.
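
The contrast between per-particle and vectorised processing can be sketched in a few lines (an illustrative NumPy toy, not code from Geant4 or any other HEP simulation toolkit): the loop below handles one particle at a time, while the array version applies identical arithmetic to every particle at once, which is exactly the access pattern that the vector units of modern CPUs can exploit.

  import numpy as np

  rng = np.random.default_rng(1)
  n = 100_000
  # toy "particles": longitudinal position z (m) and momentum pz (GeV/c)
  z = rng.uniform(0.0, 1.0, n)
  pz = rng.uniform(1.0, 10.0, n)
  m_e = 0.511e-3   # electron mass in GeV
  step = 0.01      # step length in m

  # per-particle style: simple to write, but processed one particle at a time
  z_loop = z.copy()
  for i in range(n):
      beta = pz[i] / np.hypot(pz[i], m_e)   # v/c for this particle
      z_loop[i] += step * beta

  # vectorised style: the same arithmetic applied to whole arrays in one go
  z_vec = z + step * (pz / np.hypot(pz, m_e))

  assert np.allclose(z_loop, z_vec)   # same result, different access pattern

In a real detector simulation the difficulty, as noted above, is that particles take different paths through different materials, so keeping them marching in lock-step like this is far from trivial.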

This shift to newer architectures is equally important for our software triggers and event-reconstruction code. Investing more effort in software triggers, such as those already being developed by the ALICE and LHCb experiments for LHC Run 3, will help control the data volumes and enable analyses to be undertaken directly from initial reconstruction by avoiding an independent reprocessing step. For ATLAS and CMS, the increased pile-up at high luminosity makes charged-particle tracking within a reasonable computing budget a critical challenge (figure 3). Here, as well as the considerable effort required to make our current code ready for concurrent use, research is needed into the use of new, more “parallelisable” algorithms, which maintain physics accuracy. Only these would allow us to take advantage of the parallel capabilities of modern processors, including GPUs (just like the gaming industry has done, although without the need there to treat the underlying physics with such care). The use of updated detector technology such as track triggers and timing detectors will require software developments to exploit this additional detector information.

For final data analysis, a key metric for physicists is “time to insight”, i.e. how quickly new ideas can be tested against data. Maintaining that agility will be a huge challenge given the number of events physicists have to process and the need to keep the overall data volume under control. Currently a number of data-reduction steps are used, aiming at a final dataset that can fit on a laptop but bloating the storage requirements by creating many intermediate data products. In the future, access to dedicated analysis facilities that are designed for a fast turnaround without tedious data reduction cycles may serve the community’s needs better.

This draws on trends in the data-analytics industry, where a number of products, such as Apache Spark, already offer such a data-analysis model. However, HEP data is usually more complex and highly structured, and integration between the ROOT analysis framework and new systems will require significant work. Analysis may also lend itself better to approaches in which analysts concentrate on describing what they want to achieve and a back-end engine takes care of optimising the task for the underlying hardware resource. Such approaches also integrate better with data-preservation requirements, which are increasingly important for our field. Over and above preserving the underlying bits of data, a fundamental challenge is to preserve knowledge about how to use this data. Preserved knowledge can help new analysts to start their work more quickly, so there would be quite tangible immediate benefits to this approach.
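
A flavour of this declarative style already exists in ROOT's RDataFrame interface, sketched here (the file, tree and branch names are hypothetical, and this is only an illustration of the programming model rather than a recommendation from the white paper):

  import ROOT

  ROOT.ROOT.EnableImplicitMT()   # allow the back end to parallelise the event loop

  # declare *what* is wanted: a selection and a histogram
  df = ROOT.RDataFrame("Events", "example_data.root")
  hist = df.Filter("nMuon == 2").Histo1D(
      ("muon_pt", "Muon transverse momentum;p_{T} [GeV];events", 100, 0.0, 200.0),
      "Muon_pt")

  # nothing has run yet: the optimised event loop executes only when the
  # result is actually requested
  hist.Draw()

The analyst states the selection and the desired result; how the files are read, whether the loop runs on many threads and how intermediate values are handled are left to the engine.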

A very promising general technique for adapting our current models to new hardware is machine learning, for which there exist many excellent toolkits. Machine learning has the potential to further improve the physics reach of data analysis and may also speed up and improve the accuracy of physics simulation, triggering and reconstruction. Applying machine learning is very much in vogue, and many examples of successful applications of these data-science techniques exist, but real insight is required to know where best to invest for the HEP community. For example, a deeper understanding of the impact of such black boxes and how they relate to underlying physics, with good control of systematics, is needed. It is expected that such techniques will be successful in a number of areas, but there remains much research to be done.

New challenges

Supporting the computational training phase necessary for machine learning brings a new challenge to our field. With millions of free parameters being optimised across large GPU clusters, this task is quite unlike those currently undertaken on the WLCG grid infrastructure and represents another dimension to the HL-LHC data problem. There is a need to restructure resources at facilities and to incorporate commercial and scientific clouds into the pool available for HEP computing. In some regions high-performance computing facilities will also play a major role, but these facilities are usually not suitable for current HEP workflows and will need more consistent interfaces as well as the evolution of computing systems and the software itself. Optimising storage resources into “data lakes”, where a small number of sites act as data silos that stream data to compute resources, could be more effective than our current approaches. This will require enhanced delivery of data over the network to which our computing and software systems will need to adapt. A new generation of managed networks, where dedicated connections between sites can be controlled dynamically, will play a major role.

The many challenges faced by the HEP software and computing community over the coming decade are wide-ranging and hard. They require new investment in critical areas, a commitment to solving common problems together, and the training of a new generation of physicists with updated computing skills. We cannot afford a “business as usual” approach to solving these problems, nor will hardware improvements come to our rescue, so software upgrades need urgent attention.

The recently completed roadmap for software and computing R&D is a unique document because it addresses the problems that our whole community faces in a way that was never done before. Progress in other fields gives us a chance to learn from and collaborate with other scientific communities and even commercial partners. The strengthening of links, across experiments and in different regions, that the HEP Software Foundation has helped to produce, puts us in a good position to move forward with a common R&D programme that will be essential for the continued success of high-energy physics.

We need to talk about the Higgs

It is just over five years since the discovery of the Higgs boson was announced, to great fanfare in the world’s media, as a crowning success of CERN’s Large Hadron Collider (LHC). The excitement of those days now seems a distant memory, replaced by a growing sense of disappointment at the lack of any major discovery since.

While there are valid reasons to feel less than delighted by the null results of searches for physics beyond the Standard Model (SM), this does not justify a mood of despondency. A particular concern is that, in today’s hyper-connected world, apparently harmless academic discussions risk evolving into a negative outlook for the field in broader society. For example, a recent news article in Nature led on the LHC’s “failure to detect new particles beyond the Higgs”, while The Economist reported that “Fundamental physics is frustrating physicists”. Equally worryingly, the situation in particle physics is sometimes negatively contrasted with that for gravitational waves: while the latter is, quite rightly, heralded as the start of a new era of exploration, the discovery of the Higgs is often described as the end of a long effort to complete the SM.

Let’s look at things more positively. The Higgs boson is a totally new type of fundamental particle that allows unprecedented tests of electroweak symmetry breaking. It thus provides us with a novel microscope with which to probe the universe at the smallest scales, in analogy with the prospects for new gravitational-wave telescopes that will study the largest scales. There is a clear need to measure its couplings to other particles – especially its coupling with itself – and to explore potential connections between the Higgs and hidden or dark sectors. These arguments alone provide ample motivation for the next generation of colliders including and beyond the high-luminosity LHC upgrade.

So far the Higgs boson indeed looks SM-like, but some perspective is necessary. It took more than 40 years from the discovery of the neutrino to the realisation that it is not massless and therefore not SM-like; addressing this mystery is now a key component of the global particle-physics programme. Turning to my own main research area, the beauty quark – which reached its 40th birthday last year – is another example of a long-established particle that is now providing exciting hints of new phenomena (see Beauty quarks test lepton universality). One thrilling scenario, if these deviations from the SM are confirmed, is that the new physics landscape can be explored through both the b and Higgs microscopes. Let’s call it “multi-messenger particle physics”.

How the results of our research are communicated to the public has never been more important. We must be honest about the lack of new physics that we all hoped would be found in early LHC data, yet to characterise this as a “failure” is absurd. If anything, the LHC has been more successful than expected, leaving its experiments struggling to keep up with the astonishing rates of delivered data. Particle physics is, after all, about exploring the unknown; the analysis of LHC data has led to thousands of publications and a wealth of new knowledge, and there is every possibility that there are big discoveries waiting to be made with further data and more innovative analyses. We also should not overlook the returns to society that the LHC has brought, from technology developments with associated spin-offs to the training of thousands of highly skilled young researchers.

The level of expectation that has been heaped on the LHC seems unprecedented in the history of physics. Has any other facility been considered to have produced disappointing results because only one Nobel-prize winning discovery was made in its first few years of operation? Perhaps this reflects that the LHC is simply the right machine at the right time, but that time is not over: our new microscope is set to run for the next two decades and bring physics at the TeV scale into clear focus. The more we talk about that, the better our long-term chances of success.

Aharon Casher 1941–2018

Aharon (Rony) Casher was born in Haifa, Israel, and graduated from the Technion where he performed his thesis work on condensed bosonic systems under Micha Revzen. He then went to Yeshiva University in New York, where he wrote a well-known paper with Joel Lebowitz on heat flow in random harmonic chains. This is also where his longstanding collaborations with Yakir Aharonov and Lenny Susskind began.

The Aharonov–Casher effect, which is dual to the Aharonov–Bohm effect, is textbook material and also led to a beautiful result on the number of zero modes in 2D magnetic fields. With Lev Vaidman, Casher and Aharonov developed the mathematics underpinning weak measurements; and in a separate work with Shimon Yankielowicz they introduced the mechanism of magnetic vacuum condensation for confinement in QCD. The early suggestion by Aharon, Susskind and John Kogut that a vacuum-polarisation mechanism can account for quark confinement was extremely influential. Additional important joint papers on strong interactions, partons and spontaneous chiral symmetry breaking appeared in the early 1970s. The collaboration with Susskind also led to Aharon’s familiarity with string theories and to the early paper with Aharonov on a dual string model for spinning particles.

In the high-energy physics community, Aharon is best known for his work on spontaneous chiral symmetry breaking in QCD. In a single-author paper he provided a beautiful insight into this subject, followed by a famous paper with Tom Banks that related such breaking to the enhanced density of the low eigenvalues of the Dirac operator. These topics dominated Aharon’s interest throughout the 1970s and early 1980s. His deep knowledge of topological field theory and understanding of non-perturbative effects enabled him to make key and long-lasting contributions.

Aharon often visited Brussels, where he worked with François Englert and others on supergravity, quantum gravity and studies of the early universe. Englert, in turn, became a frequent visitor at Tel Aviv University, and non-perturbative effects in quantum gravity and possible connections to the physics of black holes became a shared passion of both. Although Aharon gave a series of influential lectures on string theory at Tel Aviv shortly after the 1984 “string revolution”, and published with Englert, Nicolai and Taormina a paper showing that all superstring theories are contained in the bosonic string, he was critical of strings as the ultimate theory of nature. He was an independent thinker, uncompromisingly honest when analysing novel ideas in theoretical physics.

Aharon stayed at Tel Aviv for almost 50 years, his knowledge and remarkable talents enabling him to teach any subject in theoretical physics from memory alone. He was accessible to students and attracted many who subsequently had independent academic careers, including Neuberger, Nissan Itzhaki and Yigal Shamir. Aharon was an avid reader, interested in literature, history, science fiction, sports and politics. One could have an interesting conversation with him on any topic.

Aharon was highly negligent as a self-promoter and was in science for the sheer pleasure of doing it. He rarely gave talks about his work, preferring to think and calculate at his desk, and his collaborators and many others had the deepest respect for him. His ability to keep challenging us and to relentlessly pursue the subtleties that could harbour fatal flaws helped maintain our own scientific integrity. Aharon will be deeply missed.

Physics of Atomic Nuclei

By Vladimir Zelevinsky and Alexander Volya
Wiley

This new textbook of nuclear physics aims to review the foundations of this branch of physics as well as to present more modern topics, including the important developments of the last 20 years. Even though well-established textbooks exist in this field, the authors offer a more comprehensive treatment for students who want to go deeper, both in understanding the basic principles of nuclear physics and in learning about the problems that researchers are currently addressing. Indeed, renewed interest has lately revitalised this field, following the availability of new experimental facilities and increased computational resources.

Another objective of this book, which is based on the lectures and teaching experience of the authors, is to clarify, at each step, the relationship between theoretical equations and experimental observables, as well as to highlight useful methods and algorithms from computational physics.

The last few chapters cover topics not normally included in standard courses of nuclear physics, and reflect the scientific interests – and occasionally the point of view – of the authors. Many problems are also provided at the end of each chapter, and some of them are fully solved.

Compiled by Virginia Greco, CERN.

String Theory Methods for Condensed Matter Physics

By Horatiu Nastase
Cambridge University Press

This book provides an introduction to various methods developed in string theory to tackle problems in condensed-matter physics. This is the field in which string theory has been most widely applied, thanks to the use of the correspondence between anti-de Sitter (AdS) spaces and conformal field theories (CFTs). Formulated as a conjecture 20 years ago by Juan Maldacena of the Institute for Advanced Study, the AdS/CFT correspondence relates string theory, usually in its low-energy version of supergravity and in a curved background space–time, to field theory in a flat space–time of fewer dimensions. This correspondence is holographic, meaning in some sense that the physics in the higher-dimensional space is projected onto a flat surface without losing information.

The book is organised in four parts. In the first, the author introduces modern topics in condensed-matter physics from the perspective of a string theorist. Part two gives a basic review of general relativity and string theory, in an attempt to make the book as self-contained as possible. The other two parts focus on the applications of string theory to condensed-matter problems, with the aim of providing the reader with the tools and methods available in the field. Going into more detail, part three is dedicated to methods already considered standard – such as the pp-wave correspondence, spin chains and integrability, AdS/CFT phenomenology and the fluid–gravity correspondence – while part four deals with more advanced topics that are still in development, including Fermi and non-Fermi liquids, the quantum Hall effect and non-standard statistics.

Aimed at graduate students, this book assumes a good knowledge of quantum field theory and solid-state physics, as well as familiarity with general relativity.

The Standard Theory of Particle Physics: Essays to Celebrate CERN’s 60th Anniversary

By Luciano Maiani and Luigi Rolandi (eds.)
World Scientific

Also available at the CERN bookshop

This book is a collection of articles dedicated to topics within the field of Standard Model physics, authored by some of the main players in both its theory and experimental development. It is edited by Luciano Maiani and Luigi Rolandi, two well-known figures in high-energy physics.

The volume has 21 chapters, most of them devoted to very specific subjects. The first chapters take the reader through a fascinating tour of the history of the field, starting from the earliest days, around the time when CERN was established. I particularly enjoyed reading some recollections of Gerard ’t Hooft, such as: “Asymptotic freedom was discovered three times before 1973 (when Politzer, Gross and Wilczek published their results), but not recognised as a new discovery. This is just one of those cases of miscommunication. The ‘experts’ were so sure that asymptotic freedom was impossible, that signals to the contrary were not heard, let alone believed. In turn, when I did the calculation, I found it difficult to believe that the result was still not known.”

In chapter three, K Ellis reviews the evolution of our understanding of quantum chromodynamics (QCD) and deep-inelastic scattering. Among many things, he shows how the beta function depends on the strong coupling constant, αS, and explains why many perturbative calculations can be made in QCD, when the interactions take place at high-enough energies. At the hadronic scale, however, αS is too large and the perturbative expansion tool no longer works, so alternative methods have to be used. Many non-perturbative effects can be studied with the lattice QCD approach, which is addressed in chapter five. The experimental status regarding αS is reviewed in the following chapter, where G Dissertori shows the remarkable progress in measurement precision (with LHC values reaching per-cent level uncertainties and covering an unprecedented energy range), and how the data is in excellent agreement with the theoretical expectations.

Through the other chapters we can find a large diversity of topics, including a review of global fits of electroweak observables, presently aimed at probing the internal consistency of the Standard Model and constraining its possible extensions given the measured masses of the Higgs boson and of the top quark. Two chapters focus specifically on the W-boson and top-quark masses. Also discussed in detail are flavour physics, rare decays, neutrino masses and oscillations, as is the production of W and Z bosons, in particular in a chapter by M Mangano.

The Higgs boson is featured in many pages: after a chapter by J Ellis, M Gaillard and D Nanopoulos covering its history (and pre-history), its experimental discovery and the measurement of its properties fill two further chapters. An impressive amount of information is condensed in these pages, which are packed with many numbers and (multi-panel) figures. Unfortunately, the figures are printed in black and white (with only two exceptions), which severely affects the clarity of many of them. A book of this importance deserved a more colourful destiny.

The editors make a good point in claiming the time has come to upgrade the Standard Model into the “Standard Theory” of particle physics, and I think this book deserves a place on the bookshelves of a broad community, from the scientists and engineers who contributed to the progress of high-energy physics to younger physicists eager to learn and enjoy the corresponding inside stories.

Relativity Matters: From Einstein’s EMC2 to Laser Particle Acceleration and Quark-Gluon Plasma

By Johann Rafelski
Springer

Also available at the CERN bookshop

This monograph on special relativity (SR) is presented in a form accessible to a broad readership, from pre-university level to undergraduate and graduate students. At the same time, it will also be of great interest to professional physicists.

Relativity Matters has all the hallmarks of becoming a classic with further editions, and appears to have no counterpart in the literature. It is particularly useful because at present SR has become a basic part not only of particle and space physics, but also of many other branches of physics and technology, such as lasers. The book has 29 chapters organised in 11 parts, which cover topics from the basics of four-vectors, space–time, Lorentz transformations, mass, energy and momentum, to particle collisions and decay, the motion of charged particles, covariance and dynamics.

The first half of the book derives basic consequences of the SR assumptions with a minimum of mathematical tools. It concentrates on the explanation of apparently paradoxical results, presenting and refuting counterarguments as well as debunking various incorrect statements in elementary textbooks. This is done by cleverly exploiting the Galilean method of a dialogue between a professor, his assistant and a student, to bring out questions and objections.

The importance of correctly analysing the consequences for extended and accelerating bodies is clearly presented. Among the many “paradoxes”, one notes the accelerating rocket problem that the late John Bell used to tease many of the world’s most prominent physicists with. Few of them provided a perfectly satisfactory answer.

The second half of the book, starting from part VII, covers the usual textbook material and techniques at graduate level, illustrated with examples from the research frontier. The introductions to the various chapters and subsections are still enjoyable for a broader readership, requiring little mathematics. The author does not avoid technicalities such as vector and matrix algebra and symmetries, but keeps them to a minimum. However, in the parts dealing with electromagnetism, the reader is assumed to be reasonably familiar with Maxwell’s equations.

There are copious concrete exercises and solutions. Throughout the book, indeed, every chapter is complemented by a rich variety of problems that are fully worked out. These are often used to illustrate quantitatively intriguing topics, from space travel to the laser acceleration of charged particles.

An interesting afterword concluding the book discusses how very strong acceleration becomes a modern limiting frontier, beyond which SR in classical physics becomes invalid. The magnitude of the critical accelerations and critical electric and magnetic fields are qualitatively discussed. It also briefly analyses attempts by well-known physicists to side-step the problems that arise as a consequence.

Relativity Matters is excellent as an undergraduate and graduate textbook, and should be a useful reference for professional physicists and technical engineers. The many non-specialist sections will also be enjoyed by the general, science-interested public.

Torleif Ericson, CERN

CLIC workshop focuses on strategy

The Compact Linear Collider (CLIC) workshop is the main annual gathering of the CLIC accelerator and detector communities, and this year attracted more than 220 participants to CERN on 22–26 January. CLIC is a proposed electron–positron linear collider envisaged for the era beyond the high-luminosity LHC (HL-LHC), which would operate a staged programme over a period of 25 years with collision energies of 0.38, 1.5 and 3 TeV. This year the CLIC workshop focused on preparations for the update of the European Strategy for Particle Physics in 2019–2020.

The initial CLIC energy stage is optimised to provide high-precision Higgs boson and top-quark measurements, with the higher-energy stages enhancing sensitivity to effects from beyond-Standard Model (BSM) physics (CERN Courier November 2016 p20). Following a 2017 publication on Higgs physics, the workshop heard reports on recent developments in top-quark physics and the BSM potential at CLIC, both of which are attracting significant interest from the theory community.

Speakers also reported extensive progress in the validation and performance of the new detector model. To ensure that its performance meets the challenging specifications, a new approach to tracking has been commissioned, and the particle flow analysis and flavour-tagging capabilities have been consolidated. Updates were presented on the broad and active R&D programme on the vertex and tracking detectors, which aims to find technologies that simultaneously fulfil all the CLIC requirements. Reports were given on test-beam campaigns with both hybrid and monolithic assemblies, and on ideas for future developments. Many of the tracking and calorimeter technologies under study for the CLIC detector are also of interest to the HL-LHC, where the high granularity and time-resolution needed for CLIC are equally crucial.

For the accelerator, studies aimed at reducing cost and power consumption have particular priority, with the goal of presenting the initial CLIC stage as a project requiring resources comparable to those needed for the LHC. Key activities in this context are high-efficiency RF systems, permanent-magnet studies, optimised accelerator structures and overall implementation studies related to civil engineering, infrastructure, schedules and tunnel layout.

A key aspect of the ongoing accelerator development is moving towards industrialisation of the component manufacture, by fostering wider applications of the CLIC 12 GHz X-band technology with external partners. In this respect, the CLIC workshop coincided with the kick-off meeting for the CompactLight project recently funded by the Horizon 2020 programme, which aims to design an optimised X-ray free-electron laser based on X-band technology for more compact and efficient accelerators (CERN Courier December 2017 p8).

Last year also saw the realisation of the CERN Linear Electron Accelerator for Research (CLEAR), a new user facility for accelerator R&D whose programme includes CLIC high-gradient and instrumentation studies (CERN Courier November 2017 p8). Presentations at the workshop addressed the programmes for instrumentation and radiation studies, plasma-lensing, wakefield monitors and high-energy electrons for cancer therapy.

During 2018 the CLIC accelerator and detector and physics collaborations will prepare summary reports focusing on the 380 GeV initial CLIC project implementation as inputs for the update of the European Strategy for Particle Physics, including plans for the project preparation phase in 2020–2025.

Physics fest for a future circular collider

The second Future Circular Collider (FCC) physics workshop was held at CERN on 15–19 January, gathering particle physicists from around the world for talks and detailed discussions on the physics capabilities of future electron–positron, electron–proton, and proton–proton colliders.

The FCC study, which emerged following the 2013 update of the European Strategy for Particle Physics, is a five-year project led by CERN to investigate a circular collider built in a new 100 km-circumference tunnel in the Geneva region. Such a tunnel could host an electron–positron collider (called FCC-ee), a 100 TeV proton–proton collider (FCC-hh) or an electron–proton collider (FCC-eh). Further opportunities include the collision of heavy ions in FCC-hh and FCC-eh, and fixed-target experiments using the injector complex.

Last year saw a significant evolution in the maturity of the physics studies for these machines, with many detailed results presented. These results include new techniques to determine the properties of the Higgs boson, such as the all-important Higgs potential, and how these relate to fundamental questions at the smallest distance scales. New ideas about how to search for new particles interacting very weakly with normal matter – such as new species of neutrinos, dark photons or other new light scalar particles – were also studied in depth.

The January workshop was preceded by a dedicated meeting to determine whether the unprecedented precision of physics measurements provided by FCC machines could be matched by equally high-precision theoretical predictions. The results of this study were affirmative, as reported on the first day of the FCC physics workshop.

A major theme that emerged during the workshop was the depth of complementarity between the capacities of the different FCC modes in exploring the questions that will remain open after the completion of the LHC programme. Combining their individual strengths will enable comprehensive exploration in search of answers to the pending questions in particle physics.