
The world’s longest superconducting linac

The European X-ray Free Electron Laser (European XFEL), now entering operation in Hamburg, Germany, will generate ultrashort X-ray flashes at a rate of 27,000 per second with a peak brilliance one billion times higher than that of the best conventional X-ray sources. The outstanding characteristics of the facility will open up completely new research opportunities for scientists and industrial users (see “Europe enters the extreme X-ray era”). Involving close co-operation with nearby DESY and other organisations worldwide, the European XFEL is a joint effort between many countries. No fewer than 17 European institutes contributed to the accelerator complex, with the largest share of in-kind and other contributions (more than 70%) coming from DESY.

The story of the European XFEL is a wonderful example of R&D synergy between the high-energy physics and light-source worlds. At the heart of the European XFEL are superconducting radio-frequency (SRF) cavities that allow the 1.4 km-long linac to accelerate electrons highly efficiently. Despite the clear benefits of SRF cavities, before the mid-1990s the technology was neither mature nor affordable enough to be practical for a large facility. Experience gained at DESY and other major accelerator facilities – including LEP at CERN and CEBAF at Jefferson Lab – changed that picture. It became clear that superconducting accelerating structures with reasonably large gradients could produce high-energy electron beams in long continuous linac sections.

Enter TESLA

A major character in the European XFEL story is the TESLA (TeV Energy Superconducting Linear Accelerator) collaboration, which was founded in 1990 by key players of the SRF community. Among its challenges was to make SRF cavities more affordable. DESY offered to host essential infrastructure and a test facility to operate newly designed accelerator modules housing eight standardised cavities. The first module was built in the mid-1990s in collaboration with many of the later contributors to the European XFEL, and the first electron beam was accelerated in 1997.

The enormous flexibility in how electron bunches can be structured means that there has been a close connection between free-electron lasers and superconducting accelerator technology from the beginning: examples can be found at Stanford University, Darmstadt University, Dresden Rossendorf, Jefferson Lab and DESY. From the start of the TESLA R&D, it was envisaged that SRF technology would drive a superconducting linear collider operating at a centre-of-mass energy of 500 GeV, with the possibility of extending this to 800 GeV. This facility would have had two linear accelerators pointing towards one another: one for electrons, which would also be used to drive an X-ray laser facility, and one for positrons. At the time, high-energy physicists were weighing up other linear-collider designs in the US and Japan, but TESLA was unique in its choice of superconducting accelerating cavities. In 1997, DESY and the TESLA collaboration published a Conceptual Design Report for a superconducting linear collider with an integrated X-ray laser facility.

Although DESY was preparing for a hard-X-ray FEL, the first goal was to build an intermediate facility operating at slightly lower X-ray energies (corresponding to an output in the VUV region). In 2005 the VUV-FEL at DESY (today known as FLASH) produced laser light at a wavelength of 30 nm based on the principle of self-amplified spontaneous emission (SASE), which allows the generation of coherent X-ray light. The project preparation phase for the European XFEL began in 2007, with the official start declared in 2009 after the foundation of the European XFEL company. Plans to build a linear collider at DESY were dropped, but in 2004 the TESLA design was chosen for a new International Linear Collider (ILC). This machine is now “shovel ready” and the Japanese government has expressed interest in hosting it, although a final decision is awaited. Since the European XFEL uses TESLA technology at a large scale, the now finished superconducting linac can be considered as a prototype for the linear collider. Moreover, the successful technology transfer with industry that underpinned the construction of the European XFEL serves as a model for a worldwide linear collider effort.

The European XFEL, measuring 3.4 km in length, begins with the injector, which comprises a normal-conducting RF electron gun with a high bunch charge and low emittance. This is followed by a standard superconducting eight-cavity XFEL accelerator module, which takes the electron bunch to an energy of around 130 MeV. A harmonic 3.9 GHz accelerator module (provided by INFN and DESY) further alters the longitudinal beam profile, while a laser heater provided by Uppsala University increases the uncorrelated energy spread. At the end of the injector, 600 μs-long electron-bunch trains of typically 500 pC bunches are available for acceleration.

Once in the main linac of the European XFEL, the electron beam is accelerated in three sections. The first consists of four superconducting XFEL modules and presents a fairly modest gradient (far below the XFEL design gradient of 23.6 MV/m). The second linac section consists of 12 accelerator modules, from which the beam emerges with a relative energy spread of 0.3% at 2.4 GeV. The third and last linac section consists of 80 accelerator modules with an installed length of just less than 1 km. Bunch-compressor sections between the three main linac sections include dipole-magnet chicanes, further focusing elements and beam diagnostics.

Taking into account all installed main-linac accelerator modules, the achievable electron beam energy of the European XFEL is above its design energy of 17.5 GeV, although the exact figure will depend on optimising the RF control. The complete linac is suspended from the ceiling to keep the tunnel floor free for transport and the installation of electronics. During accelerator operation the electrons are distributed via fast kicker magnets into one of the two electron beamlines that feed several photon beamlines. Here, undulators provide X-ray photon beams for various experiments (see “Europe enters the extreme X-ray era”).
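As a rough sanity check of the statement that the achievable energy exceeds the 17.5 GeV design value, the main-linac energy reach can be estimated from the module count and design gradient. The per-cavity active length of about 1.04 m is an assumption taken from the standard TESLA cavity geometry, not from the article:

```python
# Back-of-envelope estimate of the European XFEL main-linac energy reach.
# Assumed figures: 96 installed modules, 8 cavities per module,
# ~1.04 m active length per TESLA-type cavity (assumption),
# and the 23.6 MV/m design gradient quoted in the article.
modules = 96
cavities_per_module = 8
active_length_m = 1.04     # per cavity (assumption)
gradient_mv_per_m = 23.6   # XFEL design gradient

energy_gain_gev = (modules * cavities_per_module
                   * active_length_m * gradient_mv_per_m) / 1000
print(f"estimated reach: {energy_gain_gev:.1f} GeV")
# Comfortably above the 17.5 GeV design energy, leaving headroom for
# modules that run below the design gradient.
```

In practice some sections run at lower gradients, so this is an upper estimate rather than the operating energy.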

Meeting the production challenge

The superconducting accelerator modules for the European XFEL linac were contributed by DESY, CEA Saclay and LAL Orsay in France, INFN Milano in Italy, IPJ Swierk and Soltan Institute in Poland, CIEMAT in Spain and BINP in Russia. More than 100 modules were needed, and although they were based on a prototype developed for the TESLA linear collider, they had to be modified for large-scale industrial production. DESY, which had responsibility for the construction and operation of the particle accelerator, developed a consortium scheme in which collaborators could contribute in-kind, either by producing sub-components or by assuming responsibility for module assembly or component testing. A sophisticated supply chain was established and the pioneering work at FLASH provided invaluable help in dealing with initial challenges.

A standard accelerator module contains eight superconducting cavities, each supplied by one RF power coupler, and a superconducting quadrupole package, which includes correction coils and a beam-position monitor. Each module also contains cold vacuum components such as bellows and valves, and frequency tuners. During the R&D and project preparation phases, less than one accelerator module per year was assembled, so a factor-of-30 increase in production rate was needed to build the European XFEL. Two European companies – Research Instruments in Germany and Zanon in Italy – shared the task of producing 800 superconducting cavities from solid niobium. Cavity string and module assembly took place at CEA Saclay/Irfu based on completely new infrastructure called the XFEL village. Assembly was directly impacted by the availability of all accelerator module sub-components, and any break in the supply chain was seen as a risk for the overall project schedule. In the end, a total of 96 successfully tested XFEL modules were made available for tunnel installation within a period of just two years.

The operation of the superconducting accelerator modules also requires extensive dedicated infrastructure. DESY provided the RF high-power system and developed the required 10 MW multi-beam klystrons with industrial partners. A total of 27 klystrons, each supplying RF power for 32 superconducting structures (four accelerator modules), were ordered from two vendors. Precision regulation of the RF fields inside the accelerating cavities, which is essential to provide a highly reproducible and stable electron beam, is achieved by a powerful control system developed at DESY. BINP Novosibirsk produced and delivered major cryogenic equipment for the linac, while the cryogenic plant itself (an in-kind contribution from DESY) guarantees pressure variations will stay below 1%. The largest visible contributions to the warm beamline sections are the more than 700 beam-transport magnets and the 3 km of vacuum system. While most of the magnets were delivered by the Efremov Institute in St Petersburg, a small fraction was built by BINP Novosibirsk and completed at Stockholm University. Many metres of beamline, be it simple straight chambers or the more sophisticated flat bunch-compressor chambers, were also fabricated by BINP Novosibirsk.

State-of-the-art electron-beam diagnostics is vital for the success of the European XFEL. Thus 64 screens and 12 wire scanner stations, 460 beam-position monitors of eight different types, 36 toroids and six dark-current monitors are distributed along the accelerator. Longitudinal bunch properties are measured by bunch-compression monitors, beam-arrival monitors, electro-optical devices and transverse deflecting systems. Major contributions to the electron-beam diagnostics came from DESY, PSI in Switzerland, CEA Saclay in France, and from INR Moscow in Russia.

Technology goes full circle

Commissioning for the European XFEL accelerator began in December 2016 with the cool-down of the complete cryogenic system. First beam was injected into the main linac in January 2017, and by March bunches with a sufficient beam quality to allow lasing were accelerated to 12 GeV and stopped in a beam dump. After passing this beam through the “SASE1” undulator, first lasing at a wavelength of 0.9 nm was observed on 2 May. Further improvements to the beam quality and alignment led to lasing at 0.2 nm on 24 May. More than 90% of the installed accelerator modules are now in RF operation, with effective accelerating gradients reaching the expected performance in fully commissioned stations.   

The first hard-X-ray SASE free-electron laser, the Linac Coherent Light Source (LCLS) at SLAC in the US, was based on a normal-conducting accelerator. The LCLS-II upgrade now aims for continuous-wave operation using 280 superconducting cavities of essentially the same design as those of the European XFEL. Improvements to the superconducting technology were made to further reduce the cryogenic load of the accelerator structures. New techniques such as nitrogen doping and infusion, developed by Fermilab and other LCLS-II partners, are also essential, and established procedures and expertise with series production will benefit future FEL user operation. The now existing European SRF expertise and collaboration scheme also sketches out a mechanism for a European in-kind contribution to a Japan-hosted ILC.

The European XFEL is one of the largest accelerator-based research facilities in the world, and is driven by the longest and most advanced superconducting linac ever constructed. This was possible thanks to the great collaborative effort and team spirit of all partners involved in this project over the past 20 years or more.

Discovering diamonds

Natural diamonds are old, almost as old as the planet itself. They mostly originated in the Earth’s mantle around 1 to 3.5 billion years ago and typically were brought to the surface during deep and violent volcanic eruptions some tens of millions of years ago. Diamonds have been sought after for millennia and still hold status. They are also one of our best windows into our planet’s dynamics and can, in what is essentially a galactic narrative, convey a rich story of planetary science. Each diamond is unique in its chemical and crystallographic detail, with micro-inclusions and impurities within them having been protected over vast timescales.

Diamonds are usually found in or near the volcanic pipe that brought them to the surface. It was at one of these, in 1871 near Kimberley, South Africa, that the diamond rush first began – and where the mineral that hosts most diamonds got its name: kimberlite. Many diamond sources have since been discovered and there are now more than 6000 known kimberlite pipes (figure 1 overleaf). However, with current mining extraction technology, which generally involves breaking up raw kimberlite to see what’s inside, diamonds are often damaged and are steadily becoming mined out. Today, a diamond mine typically lasts for a few decades, and it costs around $10–26 to process each tonne of rock. With the number of new, economically viable diamond sources declining – combined with high rates of diamonds being extracted, ageing mines and increasing costs – most forecasts predict a decline in rough diamond production compared to demand, starting as soon as 2020.

A new diamond-discovery technology called MinPET (mineral positron emission tomography) could help to ensure that precious sources of natural diamonds last for much longer. Inspired by the same principles applied in modern, high-rate, high-granularity detectors commonly found in high-energy-physics experiments, MinPET uses a high-energy photon beam and PET imaging to scan mined kimberlite for large diamonds, before the rocks are smashed to pieces.

From eagle eyes to camera vision

Over millennia, humans have invented numerous ways to look for diamonds. Early techniques to recover loose diamonds used the principle that diamonds are hydrophobic, so resist water but stick readily to grease or fat. Some stories even tell of eagles recovering diamonds from deep, inaccessible valleys, when fatty meat thrown onto a valley floor might stick to a gem: a bird would fly down, devour the meat, and return to its nest, where the diamond could be recovered from its droppings. Today, technology hasn’t evolved much. Grease tables are still used to sort diamond from rock, and the current most popular technique for recovering diamonds (a process called dense media separation) relies on the principle that kimberlite particles float in a special slurry while diamonds sink. The excessive processing required with these older technologies wastes water, takes up huge amounts of land, releases dust into the surrounding atmosphere, and also leads to severe diamond breakage.    

Just 1% of the world’s diamond sources have economically viable grades of diamond and are worth mining. At most sites the gemstones are hidden within the kimberlite, so diamond-recovery techniques must first crush each rock into gravel. The more barren rock there is compared to diamonds, the more sorting has to be done. This varies from mine to mine, but typically is under one carat per tonne – more dilute than gold ores. Global production was around 127 million carats in 2015, meaning that mines are wasting millions of dollars crushing and processing about 100 million tonnes of kimberlite per year that contains no diamonds. We therefore have an extreme case of a very high value particle within a large amount of worthless material – making it an excellent candidate for sensor-based sorting.
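The scale of the sorting problem can be sketched with simple arithmetic. Taking a grade of roughly one carat per tonne (an assumed average; the article notes it varies from mine to mine) and the 2015 production figure:

```python
# Back-of-envelope: tonnage processed versus diamond recovered per year.
# Assumed grade of ~1 carat per tonne; actual grades vary widely by mine.
annual_production_carats = 127e6   # global rough production, 2015
grade_carats_per_tonne = 1.0       # assumption

tonnes_processed = annual_production_carats / grade_carats_per_tonne
carat_grams = 0.2                  # 1 carat = 0.2 g by definition
diamond_tonnes = annual_production_carats * carat_grams / 1e6

print(f"{tonnes_processed / 1e6:.0f} million tonnes of kimberlite "
      f"crushed for about {diamond_tonnes:.0f} tonnes of diamond")
```

Roughly a hundred million tonnes of rock for a few tens of tonnes of gemstones: an extreme needle-in-a-haystack ratio, which is why sensor-based sorting is so attractive.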

Early forms of sensor-based sorting, which have only been in use since 2010, use a technique called X-ray stimulated optical fluorescence, which essentially targets the micro-impurities and imperfections in each diamond (figure 2). Using this method, the mined rocks are dropped during the extraction process at the plant, and the curtain of falling rock is illuminated by X-rays, allowing a proportion of liberated or exposed diamonds to fluoresce and then be automatically extracted. The transparency of diamond makes this approach quite effective. When Petra Diamonds Ltd introduced this technique with several X-ray sorting machines costing around $6 million, the apparatus paid for itself in just a few months when the firm recovered four large diamonds worth around $43 million. These diamonds, presumed to be fragments of a larger single one, were 508, 168, 58 and 53 carats, compared with the average one-carat engagement ring.

Very pure diamonds that do not fluoresce, and gems completely surrounded by rock, can remain hidden to these sensors. As such, a newer sensor-based sorting technique that uses an enhanced form of dual-energy X-ray transmission (XRT), similar to the technology for screening baggage in airports, has been invented to get around this problem. It can recover liberated diamonds down to 5 mm diameter, where 1 mm is usually the smallest size recovered commercially, and, unlike the fluorescing technique, can detect some locked diamonds. These two techniques have brought the benefits of sensor-based sorting into sharp focus for more efficient, greener mines and for reducing breakage.

Recent innovations in particle-accelerator and particle-detector technology, in conjunction with high-throughput electronics, image-processing algorithms and high-performance computing, have greatly enhanced the economic viability of a new diamond-sensing technology using PET imaging. PET, which has strongly benefitted from many innovations in detector development at CERN, such as BGO scintillating crystals for the LEP experiments, has traditionally been used to observe processes inside the body. A patient must first absorb a small amount of a positron-emitting isotope; the ensuing annihilations produce patterns of gamma rays that can be reconstructed to build a 3D picture of metabolic activity. Since a rock cannot be injected with such a tracer, MinPET requires us to irradiate rocks with a high-energy photon beam and generate the positron emitter via transmutation.

The birth of MinPET

The idea to apply PET imaging to mining began in 1988, in Johannesburg, South Africa, where our small research group of physicists used PET emitters and positron spectroscopy to study the crystal lattice of diamonds. We learnt of the need for intelligent sensor-based sorting from colleagues in the diamond mining industry and naturally began discussing how to create an integrated positron-emitting source.

Advances in PET imaging over the next two decades led to increased interest from industry, and in 2007 MinPET achieved its first major success in an experiment at Karolinska hospital in Stockholm, Sweden. With a kimberlite rock playing the role of a patient, irradiation was performed at the hospital’s photon-based cancer therapy facility and the kimberlite was then imaged at the small-animal PET facility in the same hospital. The images clearly revealed the diamond within, with PET imaging of diamond in kimberlite reaching an activity contrast of more than 50 (figure 3). This result led to a working technology demonstrator involving a conveyor belt that presented phantoms to a PET camera: rocks doped with a sodium PET emitter represented the kimberlite, some of them containing a sodium hotspot to represent a hidden diamond. These promising results attracted funding, staff and students, enabling the team to develop a MinPET research laboratory at iThemba LABS in Johannesburg. The work also provided an important early contribution to South Africa’s involvement in the ATLAS experiment at CERN’s Large Hadron Collider.

By 2015 the technology was ready to move out of the lab and into a diamond mine. The MinPET process (figure 4) involves using a high-energy photon beam of some tens of MeV to irradiate a kimberlite rock stream, turning some of the light stable isotopes within the kimberlite into transient positron emitters, or PET isotopes, which can be imaged in a similar way to PET imaging for medical diagnostics. The rock stream is buffered for around 20 minutes before imaging, because by then carbon is the dominant PET isotope. Since non-diamond sources of carbon have a much lower carbon concentration than diamond, or are diluted and finely dispersed within the kimberlite, diamonds show up on the image as a carbon-concentration hotspot.

The speed of imaging is crucial to the viability of MinPET. The detector system must process up to 1000 tonnes of rock per hour to meet the rate of commercial rock processing, with PET images acquired in just two seconds and image processing taking just five seconds. This is far in excess of medical-imaging needs and required the development of a very high-rate PET camera, which was optimised, designed and manufactured in collaboration between the present authors and a nuclear-electronics start-up called NeT Instruments. MinPET must also take into account rate capacity, granularity, power consumption, thermal footprints and improvements in photon detectors. The technology demonstrator is therefore still used to continually improve MinPET’s performance, from the camera to raw-data event building and fast-imaging algorithms.

An important consideration when dealing with PET technology is ensuring that radiation remains within safe limits. If diamonds are exposed to extremely high doses of radiation, their colour can change – something that can be done deliberately to alter the gems, but which reduces customer confidence in a gem’s history. The dose the diamonds receive during the MinPET activation process is well below what they receive from nature’s own background. It has turned out, quite remarkably, that MinPET offers a uniquely radiologically clean scenario. The carbon PET activity and a small amount of sodium activity are the only significant activations, and these have relatively short half-lives of 20 minutes and 15 hours, respectively. The irradiated kimberlite stream soon becomes indistinguishable from non-irradiated kimberlite, and its low activity allows normal mine operation.
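The quoted half-lives make it easy to see how quickly the induced activity dies away. A minimal sketch of exponential decay, using the 20-minute and 15-hour half-lives given above (conventionally attributed to carbon-11 and an activated sodium isotope; the isotope identities are our assumption, not stated in the article):

```python
# Fraction of an induced activity remaining after time t,
# given its half-life (both in the same units).
def remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

carbon_half_life = 20 / 60  # hours (~20 min; assumed to be carbon-11)
sodium_half_life = 15.0     # hours (assumed activated sodium)

# After the 20-minute buffering, half the carbon activity is still present:
print(remaining(20 / 60, carbon_half_life))   # 0.5
# A day later the carbon activity is utterly negligible (~72 half-lives),
# while roughly a third of the much weaker sodium activity remains:
print(remaining(24, carbon_half_life))        # effectively zero
print(remaining(24, sodium_half_life))        # ~0.33
```

This is why the irradiated stream soon becomes indistinguishable from ordinary kimberlite.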

Currently, XRT imaging techniques require each kimberlite particle being processed to be isolated and smaller than 75 mm; within this stream only liberated diamonds at least 5 mm wide can be detected, and XRT can only provide 2D images. MinPET is far more efficient: it can currently image locked diamonds with a width of 4 mm within a 100 mm particle of rock, with full 3D imaging. The size of diamonds MinPET detects means it is currently ideally suited for mines that make their revenue predominantly from large diamonds (in some mines breakage is thought to cause up to a 50% drop in revenue). There is no upper limit for finding a liberated diamond particle using MinPET, and it is expected that larger diamonds could be detected in up to 160 mm-diameter kimberlite particles.

To crumble or shine

MinPET has now evolved from a small-scale university experiment to a novel commercial technology, and negotiations with a major financial partner are currently at an advanced stage. Discussions are also under way with several accelerator manufacturers to produce a 40 MeV beam of electrons with a power of 40–200 kW, which is needed to produce the original photon beam that kick-starts the MinPET detection system.

Although the MinPET detection system costs slightly more than other sorting techniques, overall expenditure is less because processing costs are reduced. Envisaged MinPET improvements over the next year are expected to take the lower limit of discovery down to as little as 1.5 mm for locked diamonds. The ability to reveal entire diamonds in 3D, and locating them before the rocks are crushed, means that MinPET also eliminates much of the breakage and damage that occurs to large diamonds. The technique also requires less plant, energy and water – all without causing any impact on normal mine activity.

The world’s diamond mines are increasingly required to be greener and more efficient. But the industry is also under pressure to become safer, and the ethics of mining operations are a growing concern among consumers. In a world increasingly favouring transparency and disclosure, the future of diamond mining has to be in using intelligent, sensor-based sorting that can separate diamonds from rock. MinPET is the obvious solution – eventually allowing marginal mines to become profitable and the lifetime of existing mines to be extended. And although today’s synthetic diamonds offer serious competition, natural stones are unique, billions of years old, and came to the surface in a violent fiery eruption as part of a galactic narrative. They will always hold their romantic appeal, and so will always be sought after.

The Higgs adventure: five years in

Where were you on 4 July 2012, the day the Higgs boson discovery was announced? Many people will be able to answer without referring to their diary. Perhaps you were among the few who had managed to secure a seat in CERN’s main auditorium, or who joined colleagues in universities and laboratories around the world to watch the webcast. For me, the memory is indelible: 3.00 a.m. in Watertown, Massachusetts, huddled over my laptop at the kitchen table. It was well worth the tired eyes to witness remotely an event that will happen once in a lifetime.

“I think we have it, no?” was the question posed in the CERN auditorium on 4 July 2012 by Rolf Heuer, CERN’s Director-General at the time. The answer was as obvious as the emotion on faces in the crowd. The then ATLAS and CMS spokespersons, Fabiola Gianotti and Joe Incandela, had just presented the latest Higgs search results based on roughly two years of LHC operations at energies of 7 and 8 TeV. Given the hints for the Higgs presented a few months earlier in December 2011, the frenzy of rumours on blogs and intense media interest during the preceding weeks, and a title for the CERN seminar that left little to the imagination, the outcome was anticipated. This did not temper excitement.

Since then, we have learnt much about the properties of this new scalar particle, yet we are still at the beginning of our understanding. It is the final and most interesting particle of the Standard Model of particle physics (SM), and its connections to many of the deepest current mysteries in physics mean the Higgs will remain a focus of activities for experimentalists and theorists for the foreseeable future.

Speculative theories

The Higgs story began in the 1960s with speculative ideas. Theoretical physicists understood how the symmetries of materials can spontaneously break down, such as the spontaneous alignment of atoms when a magnet is cooled from high temperatures, but it was not yet understood how this might happen for the symmetries present in the fundamental laws of physics. Then, in three separate publications by Brout and Englert, by Higgs, and by Guralnik, Hagen and Kibble in 1964, the broad particle-physics structures for spontaneous symmetry breaking were fleshed out. In this and subsequent work it became clear that a scalar field was a cornerstone of the general symmetry-breaking mechanism. This field may be excited and oscillate, much like the ripples that appear on a disturbed pond, and the excitation of the Higgs field is known as the Higgs boson.

As the detailed theoretical structure of symmetry breaking in nature was later developed, in particular by Weinberg, Glashow, Salam, ’t Hooft and Veltman, the precise role of the Higgs in the SM evolved to its modern form. In addition to explaining what we see in modern particle detectors, the Higgs plays a leading role in the evolution of the universe. In the hot early epoch an infinitesimally small fraction of a second after the Big Bang, the Higgs field spontaneously “slipped” from having zero average value everywhere in space to having an average value equivalent to about 246 GeV. When this happened, any field that was previously kept massless by the SU(2) × U(1) gauge symmetries of the SM instantly became massive.

Before delving further into the vital role of the Higgs, it is worth revisiting a couple of common misconceptions. One is that the Higgs boson gives mass to all particles. Although all of the known massive fundamental particles obtain their mass by interacting with the pervasive Higgs field, there are non-elementary particles, such as the proton, whose mass is dominated by the binding energy of the strong force that holds its constituent gluons and quarks together. So very little of the mass we see in nature comes directly from the Higgs field. Another misconception is that the Higgs boson gives mass to everything it interacts with. On the contrary, the Higgs has very important interactions with two massless fundamental fields: the photon and the gluon. The Higgs is not charged under the forces associated with the photon and the gluon (quantum electrodynamics and quantum chromodynamics), and therefore cannot give them mass, but it can still interact with them. Indeed, somewhat ironically, it was precisely its interactions with massless gluons and photons that revealed the existence of the Higgs boson in the summer of 2012.

The one remaining unmeasured free parameter of the SM at that time, which governs the particle’s possible production and decay modes, was the Higgs boson mass. In the early days it was not at all clear what the mass of the Higgs boson would be, since in the SM this is an input parameter of the theory. Indeed, in 1975, in the seminal paper about its experimental phenomenology by Ellis, Gaillard and Nanopoulos, it is notable that the allowed Higgs mass range at that time spanned four orders of magnitude, from 18 MeV to over 100 GeV, with experimental prospects in the latter energy range opaque at best (figure 1).

How the Higgs was found

By 4 July 2012 the picture was radically different. The Higgs no-show at previous colliders, including LEP at CERN and the Tevatron at Fermilab, had cornered its mass to be greater than 114 GeV and not to lie between 147 and 180 GeV, while theoretical limits on the allowed properties of W- and Z-boson scattering required it to be below around 800 GeV. If nature used the SM version of the Higgs mechanism, there was nowhere left to hide once CERN’s LHC switched on. In the end, the Higgs weighed in at the relatively light mass of 125 GeV. How the different Higgs cross-sections, which are related to the production rate for various processes, depend on the mass is shown in figure 2, left.
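The constraints just described carve out a simple allowed window, which can be encoded directly (the function and its name are ours, purely for illustration; the bounds are the ones quoted above):

```python
# Pre-discovery constraints on the SM Higgs mass, circa 2012.
def allowed(mass_gev):
    """True if a SM Higgs of this mass was still experimentally viable."""
    if mass_gev <= 114:          # LEP direct-search lower limit
        return False
    if 147 <= mass_gev <= 180:   # Tevatron/early-LHC excluded window
        return False
    if mass_gev >= 800:          # approximate unitarity bound from WW/ZZ scattering
        return False
    return True

print(allowed(125))  # True: the mass at which the Higgs was found
print(allowed(160))  # False: inside the excluded window
```

With every other mass either excluded or bounded, the LHC was guaranteed a definitive answer either way.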

Producing the Higgs alone would not be sufficient for discovery. It would also have to be observed, which depends on the fractions in which the Higgs boson decays to different final states (figure 2, right). If heavy, one would have to search for decays to the weak gauge bosons, W and Z; if lighter, a cocktail of decays would light up detectors. Going further, if thousands of Higgs bosons could be produced, then decays to pairs of photons may show up. Thus, by the time of the LHC operation, the basic theoretical recipe was relatively simple: pick a Higgs mass, calculate the SM predictions and search.

On the other hand, the experimental recipe was far from simple. The LHC, a particle accelerator capable of colliding protons at energies far beyond anything previously achieved, was a necessity. But energy alone was not enough, as sufficient numbers of Higgs bosons also had to be produced. Although occurring at a low rate, Higgs decays into pairs of massless photons would prove to be experimentally clean and furnish the best opportunity for discovery. Once detection efficiencies, backgrounds, and requirements of statistical significance are folded into the mix, on the order of 100,000 Higgs bosons would be required for discovery. This was no small order, yet that is what the accelerator teams delivered to the detectors.
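The yield arithmetic behind that "order of 100,000" is simply cross-section times integrated luminosity. The figures below are our own ballpark assumptions (a total production cross-section of roughly 20 pb for a 125 GeV Higgs at 8 TeV and roughly 25 fb⁻¹ of combined 2011–2012 data per experiment), not numbers from the article:

```python
# Rough Higgs yield estimate: N = sigma * integrated luminosity.
# Assumed figures: sigma ~ 20 pb at 8 TeV, ~25 fb^-1 per experiment
# over 2011-2012 (ballpark values, for illustration only).
sigma_pb = 20.0
lumi_fb_inv = 25.0

n_higgs = sigma_pb * lumi_fb_inv * 1000   # 1 fb^-1 = 1000 pb^-1
print(f"~{n_higgs:.0f} Higgs bosons produced per experiment")

# Only ~0.2% of these decay to two photons, leaving of order a thousand
# signal events in the golden diphoton channel before efficiencies.
diphoton_br = 0.00227  # SM prediction for a 125 GeV Higgs (assumption)
print(f"~{n_higgs * diphoton_br:.0f} diphoton decays")
```

The small diphoton branching fraction is why so many Higgs bosons had to be produced for a clean discovery in that channel.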

With the accelerator running, it remained to observe the thing. This would push ingenuity to its limits. Physicists on the ATLAS and CMS detectors would need to work night and day to filter through the particle detritus from innumerable proton–proton collisions to select data sets of interest. The search set tremendous challenges for the energy-resolution and particle-identification capabilities of the detectors, not to mention dealing with enormous volumes of data. In the end, the result of this labour reduced to a couple of plots (figure 3). The discovery was clear for each collaboration: a significance pushing the 5σ “discovery” threshold. In further irony for the mass-giving Higgs, the discovery was driven primarily by the rare but powerful diphoton decays, followed closely by Higgs decays to Z bosons. Global media erupted in a science-fuelled frenzy. It turns out that everyone gets excited when a fundamental building block of nature is discovered.

The hard work begins

The joy in the experimental and theoretical communities in the summer of 2012 was palpable. If we were to liken early studies of the electroweak forces to listening to a crackling radio, LEP had given us black and white TV and the LHC was about to show us the world in full cinematic colour. Particle physicists now had the work they had waited a lifetime to do. Is it the SM Higgs boson, or something else, something exotic? All we knew at the time was that there was a new boson, with mass of roughly 125 GeV, that decayed to photons and Z bosons.

Despite the huge success of the SM, there was every reason to hope that the new boson would not be of the common variety. The Higgs brings us face-to-face with questions that the SM cannot answer, such as what constitutes dark matter (observed to make up roughly 80% of all the matter in the universe). Unlike the other SM particles, it is uncharged and without spin, and can therefore interact easily with any other neutral scalar particles. This makes it a formidable tool in the hunt for dark matter – a possibility we often call the “Higgs portal”. The ATLAS and CMS collaborations have been busy exploring the Higgs portal and we now know that the Higgs decay rate into invisible new dark particles must be less than 34% of its total rate into known particles. This is an incredible thing to know for a particle that is itself so elusive, and a significant early step for dark-sector physics.

Another deep puzzle, even more esoteric than dark matter and which has driven the theoretical community to distraction for decades, is called the hierarchy problem. We know that at higher energies (smaller sizes) there must be more structure to the laws of nature: the scale of quantum gravity, the Planck scale, is one example, but there are hints of others. For any other SM particle, this new physics at high energies has no dramatic effect, since fundamental particles with nonzero spin possess special protective symmetries that shield them from large quantum corrections. But the Higgs possesses no such symmetry, and is thus a sensitive creature: quantum-mechanical effects will give large corrections to its mass, pulling it all the way up to the masses of the new particles it is interacting with. That has clearly not happened, given the mass we measure in experiments, so what is going on?
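The pull described here can be made quantitative. Assuming the dominant contribution comes from a top-quark loop with Yukawa coupling y_t, and that new physics enters at a scale Λ, a textbook one-loop estimate of the correction to the Higgs mass-squared reads:

```latex
\delta m_h^2 \;\sim\; -\frac{3\, y_t^2}{8\pi^2}\,\Lambda^2
```

For Λ anywhere near the Planck scale this dwarfs the measured (125 GeV)² by tens of orders of magnitude, so the observed mass requires either an extraordinarily fine-tuned cancellation or new protective physics not far above the electroweak scale.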

Thus the discovery of the Higgs brings the hierarchy problem to the fore. If the Higgs is composite, made up of other particles in a similar fashion to the ubiquitous QCD pion, then the problem simply goes away because there is no fundamental scalar in the first place. Another popular theory, supersymmetry, postulates new space–time symmetries that protect the Higgs boson from these quantum corrections and could modify its properties. Measurements of the Higgs interactions thus indirectly probe this deepest of questions in modern particle physics. For example, we now know the interaction between the Higgs boson and the Z boson to an accuracy at the level of 10%, a significant constraint on these theories.

It is also crucial that we understand the way the Higgs interacts with fermions. Anyone who has ever looked up the masses of the quarks and leptons will see that they follow cryptic hierarchical patterns, while families of fermions can also mix into one another through the emission of a W boson in peculiar patterns that we do not yet understand. By playing a star role in generating particle masses, and as a supporting actor by also generating the mixings, the Higgs could shed light on these mysteries.

At the time of the Higgs discovery in 2012, the only interactions we were certain of concerned bosons: photons, W and Z bosons, and, to a certain degree, gluons. There was emerging evidence for interactions with top quarks, but it was circumstantial, coming from the role of the top quark in the quantum-mechanical process that generates Higgs interactions with gluons and photons. After a four-year wait, in 2016 ATLAS and CMS combined forces to reach the first 5σ direct discovery of Higgs interactions with a fermion: the τ lepton, to be precise. This was a significant milestone, not least because it also happened to give the first direct evidence of Higgs interactions with leptons.


The scope of the Higgs programme has also broadened since the early days of the discovery. This applies not only to the precision with which certain couplings are measured, but also to the energy at which they are measured. For example, when the Higgs boson is produced via the fusion of two gluons at the LHC, additional gluons or quarks may be emitted at high energies. By observing such “associated production” we may gain information about the magnitude of a Higgs interaction and about its detailed structure. Hence, if new particles that influence Higgs-boson interactions exist at high energies, probing Higgs couplings in this regime may reveal their existence. The price to be paid for associated production is that the probability, and hence the rate, is low (figure 2). The ever increasing number of Higgs production events recorded at the LHC over the past five years has allowed physicists to begin mapping the nature of the Higgs boson’s interactions.

What’s next?

We have much to anticipate. Although the Higgs is too light to be able to decay into pairs of top quarks, experimentalists will study its interactions with the top quark by observing Higgs produced in association with pairs of top quarks. Another anticipated discovery, which is difficult to pick out above other background processes, is the decay of the Higgs to bottom quarks. Amazingly, despite the incredibly rare signal rate, the upgraded High-Luminosity LHC will be able to discover Higgs decays to muons. This would be the first observation of Higgs interactions with the second generation of fermions, pointing a floodlight towards the flavour puzzle. These measurements will bring the overall picture of how the Higgs generates particle masses into closer focus. Even now, after only five years, the picture is becoming clear: Higgs physics is becoming a precision science at the LHC (figure 4).

There is more to Higgs physics than a shopping list of couplings, however. By the end of the LHC’s operation in the mid-2030s, more than one hundred million Higgs bosons will have been produced. That will allow us to search for extremely rare and exotic Higgs production and decay modes, perhaps revealing a first crack in the SM. On the opposing flank, by observing the standard production processes in extreme kinematic corners, such as Higgs production at very high momentum, we will be able to measure its interactions over a range of energies. In both cases the challenge will not only be experimental, as the SM predictions must also keep pace with the accuracy of the measurements – a fact which is already driving revolutions in our theoretical understanding.

Setting our sights on the distant future of Higgs physics, it would be remiss to overlook the “white whale” of Higgs physics: the Higgs self-interaction. In yet another unique twist, the Higgs is the only particle in the SM that can scatter off itself (figure 5). In contrast, gluons only interact with other non-identical gluons. If we could access the Higgs self-interactions, by determining how a Higgs boson scatters on itself in measurements of Higgs boson pair-production processes, we would be measuring the shape of the Higgs scalar potential. This is tremendously important because, in theory, it determines the fate of the entire universe: if the scalar potential “turns back over” again at high field values, it would imply that we live in a metastable state. There is mounting evidence, in the form of the measured SM parameters such as the mass of the top quark, that this may be the case. Unfortunately, with the LHC we will not be able to measure this interaction well enough to definitively determine the shape of the Higgs scalar potential, and so we must ultimately look to future colliders to answer this question, among others.
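The “shape” in question is that of the SM scalar potential. Writing the Higgs field in unitary gauge as v + h, with vacuum expectation value v ≈ 246 GeV, the textbook expansion gives:

```latex
V(h) \;=\; \tfrac{1}{2}\, m_h^2\, h^2 \;+\; \lambda v\, h^3 \;+\; \tfrac{1}{4}\,\lambda\, h^4 ,
\qquad \lambda \;=\; \frac{m_h^2}{2v^2} \;\approx\; 0.13
```

The h³ and h⁴ terms are precisely the self-interactions probed by Higgs pair production. In the SM their strength is fixed once m_h and v are known, so any measured deviation would point to a modified potential and, potentially, a different fate for the vacuum.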

The Higgs is the keystone of the SM and therefore everything we learn about this new particle is central to the deepest laws of nature. When huddled over my laptop at 3.00 a.m. on 4 July 2012, I was 27 years old and in the first year of my first postdoctoral position. To me, and presumably the rest of my generation, it felt like a new scientific continent had been discovered, one that would take a lifetime to explore. On that day we finally knew it existed. Today, after five years of feverish exploration, we have in our hands a sketch of the coastline. We have much to learn before the mountains and valleys of the enigmatic Higgs boson are revealed.

FAIR forges its future

Supernova explosions, neutron-star mergers and rare radioactive ions might not seem to have much connection to terrestrial matters. Yet, while the lightest elements were synthesised immediately after the Big Bang, and elements up to iron were created in stellar cores, the heavy elements, such as gold and platinum, were produced via complex production paths during extreme astrophysical events. Experiments with intense heavy-ion beams produced at the international Facility for Antiproton and Ion Research (FAIR), which is under construction at Darmstadt in Germany, promise new and detailed insights into the nuclear reactions and rare radioactive ion species that underpin the synthesis of heavy elements in the universe.

FAIR is a multipurpose accelerator facility that will provide beams, from protons up to uranium ions, with a wide range of intensities and energies, in addition to secondary beams of antiprotons and rare isotopes. Complementary to CERN’s Large Hadron Collider and Super Proton Synchrotron, FAIR is pushing the intensity rather than the energy frontier for hadron beams. It will enable scientists to produce and study reactions involving rare exotic hadronic states or very short-lived radioactive nuclei, and to investigate processes under the extreme temperatures and pressures that prevail in large planets, stars and stellar explosions. FAIR will also allow physicists to produce and study dense hadronic matter and its transition to quark matter, and permit tests of quantum electrodynamics in the regime of very strong electromagnetic fields, to name but a few goals.

Overall, FAIR’s scientific programme comprises hadron physics, nuclear structure and astrophysics, atomic physics, plasma physics, materials research, and radiation biophysics and its applications in cancer therapy and space research. Its science is divided between four main pillars (see panel “FAIR’s four scientific pillars” below), including experiments similar in design to those in high-energy physics. After a lengthy and complex phase of development, a groundbreaking ceremony held on 4 July 2017 marked the start of construction of the FAIR facility.

Project evolution

FAIR was developed by the international science community and the GSI laboratory (the Helmholtz Centre for Heavy Ion Research) around the turn of the millennium. GSI, founded in 1969, has a long tradition in nuclear and atomic physics and, more generally, heavy-ion research, and was therefore a natural site on which to develop the next generation of accelerators and experiments for these fields. FAIR was formally launched on 7 October 2010, when nine partner countries (Finland, France, Germany, India, Poland, Romania, Russia, Slovenia and Sweden) signed an intergovernmental agreement for its construction and operation. The UK joined FAIR as an associate member in 2013.

During late 2014, the then FAIR management reported difficulties surrounding new construction requirements. Although such difficulties are not unusual for a complex, one-of-a-kind facility such as FAIR, they forced major modifications of the civil-construction design and resulted in a delay and cost increase for the overall project. In September 2015 the FAIR Council, representing the nine shareholders, unanimously agreed to adapt the FAIR construction budget and timeline according to the necessary design modifications.

Following this key decision, FAIR was completely reorganised and consolidated: the FAIR and GSI GmbH companies aligned their managerial and administrative structures and processes, and a joint management team was installed in a stepwise process, with the former spokesperson for the ALICE experiment at CERN, Paolo Giubellino, appointed as scientific managing director and spokesperson for FAIR-GSI in January 2017. Thanks to these and other changes, civil-construction work for the tunnel that will house FAIR’s main accelerator began on schedule this summer, with the goal of finishing all FAIR buildings by the end of 2022. In parallel, procurement of the FAIR accelerator systems and construction of the FAIR detector instrumentation are progressing well. Following the installation and commissioning of the accelerators and experiments starting in 2020/2021, the FAIR science programme is expected to begin in 2025.

A journey through FAIR

The FAIR accelerator complex is optimised to deliver intense and energetic beams of particles to different production targets. The resulting beams will then be steered to various fixed-target experiments or injected into storage-cooler rings for novel in-ring experiments with beams of secondary antiprotons or radioactive ions at the highest beam qualities. The central machines of FAIR are: the fast-ramping SIS100 synchrotron, which provides intense primary beams; the large-aperture Super Fragment Separator (Super-FRS), which filters out the exotic ion beams; and the cooler storage rings CR and HESR (see image). The SIS100 is the heart of FAIR. With a circumference of 1.1 km and a maximum magnetic bending power of 100 Tm, the machine will accelerate ion beams with maximum intensities ranging from 4 × 10¹³ protons at 29 GeV to 5 × 10¹¹ uranium (28+) ions at 2.7 GeV/u. The existing GSI accelerators UNILAC and SIS18 will serve as injectors and pre-accelerators for SIS100, while a new proton linac will be installed for high-intensity injection into the SIS18/SIS100 synchrotron chain.
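The quoted beam parameters are mutually consistent: the magnetic rigidity Bρ = p/q of both beams comes out at the machine’s 100 Tm bending power. A quick sketch of the check (the particle masses are standard values; the function name is our own):

```python
def rigidity_Tm(t_per_u, m_per_u, nucleons, charge):
    """Magnetic rigidity B*rho = p/q in T*m.

    t_per_u: kinetic energy per nucleon (GeV), m_per_u: mass per nucleon (GeV),
    nucleons: mass number A, charge: charge state q.
    Uses the standard conversion p[GeV/c] = 0.299792458 * q * B*rho[T*m].
    """
    e_per_u = t_per_u + m_per_u                    # total energy per nucleon (GeV)
    p_per_u = (e_per_u**2 - m_per_u**2) ** 0.5     # momentum per nucleon (GeV/c)
    return p_per_u * nucleons / (0.299792458 * charge)

print(round(rigidity_Tm(29.0, 0.938272, 1, 1), 1))   # protons at 29 GeV  -> 99.8
print(round(rigidity_Tm(2.7, 0.9315, 238, 28), 1))   # U(28+) at 2.7 GeV/u -> 99.5
```

Both beams sit just under 100 Tm, i.e. right at the SIS100 design limit, which is why these particular energies are quoted as the machine's maxima.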

To maximise the luminosity of the SIS100, fast-ramped superconducting superferric magnets with a maximum field of 1.9 T and ramp rates up to 4 T per second have been developed to enable cycle times of the same order as the cooling rates in the storage rings (see image). Together with the upgraded SIS18 pre-accelerator, the SIS100 will provide uranium ion beams 10 times more intense than previously available beams at GSI. The cold machine design has a further advantage: the SIS100 beam pipe enables heavy residual gas components to be pumped, potentially stabilising the dynamic pressure. Due to the tight beam-loss budget, the iron yoke of the superconducting magnets must be built with the highest precision and reproducibility. Production has already started for the SIS100 dipole magnets, and the first beams from SIS100 are foreseen for 2025. Three test facilities at GSI, JINR/Dubna and CERN have been established to assess the different types of superconducting magnets.

Two production targets for rare isotope and antiproton beams will be served by the SIS100. A primary ion beam can either be slowly extracted to the Super-FRS over a period of many seconds to produce radioactive secondary beams for fixed-target experiments, or it can be extracted quickly in the form of a single, compressed, short bunch to produce a secondary beam of antiprotons or exotic ions. The in-flight-generated rare isotopes, produced via projectile fragmentation of all primary beams up to uranium-238 or alternatively via fission of uranium-238 beams, are efficiently separated in the large aperture of the Super-FRS. Due to the large acceptance of this machine, the gain in primary-beam intensities for uranium ions in the SIS100 translates into a factor of more than 1000 for secondary-beam intensities of rare, radioactive isotopes.

After production and separation, the hot secondary ion beams drive three experimental scenarios: they can be stopped to allow studies of their ground-state properties; used in in-flight and secondary reactions to produce even more exotic species; or stored and pre-cooled in the collector ring (CR). The fast stochastic cooling process in the CR relies on a fast de-bunching of the injected short bunch. Pre-cooled secondaries will then be transferred from the CR to the high-energy storage ring (HESR), where they can be accumulated and accelerated up to an energy of 15 GeV for antiprotons and about 5–6 GeV/u for very heavy ions. The HESR can also store and cool stable high-charge-state heavy-ion beams, directly injected from the SIS100 via the CR, for precision studies in atomic, nuclear and fundamental physics, such as tests of quantum electrodynamics (QED) in strong fields or tests of special relativity.

FAIR science ahead

About 3000 scientists, including more than 500 PhD students, from around the world will carry out experiments at FAIR to investigate the fundamental structure of matter, explore its exotic forms and understand how the universe evolved from its primordial state. FAIR’s science programme is structured into four pillars and organised in four large collaborations with several hundred members each: APPA, serving communities in atomic physics, plasma physics and applications; CBM, the Compressed Baryonic Matter experiment; NUSTAR, the NUclear STructure, Astrophysics and Reactions programme; and PANDA (antiProton ANnihilation at DArmstadt), which aims to study hadrons using antiproton beams. APPA and NUSTAR consist of several sub-collaborations, while CBM and PANDA are rather monolithic experiments involving large detectors (see panel “FAIR’s four scientific pillars”).

Well before the start of the SIS100 operation in 2025, an upgrade of the GSI accelerators due for completion this year will allow extensive testing of FAIR components. This upgrade will also allow researchers to trial novel FAIR instrumentation for an attractive intermediate research programme, named FAIR phase 0. For instance, the NUSTAR “R3B” spectrometer, the CRYRING and the HITRAP facility will be available and will enable, in combination with the intensity-upgraded SIS18 synchrotron and GSI’s fragment separator, novel experiments in nuclear structure and reactions in hitherto unexplored regions of the nuclear chart.

The CRYRING and the HITRAP facility will enable physicists to further increase the precision both of atomic-physics measurements of QED effects in highly charged heavy ions and of measurements of fundamental constants. Moreover, the hadronic-matter experimental programme of HADES (High Acceptance Di-Electron Spectrometer) will benefit from the higher intensities from the SIS18. HADES is a versatile detector for the study of dielectron (e⁺e⁻) and hadron production in heavy-ion collisions, as well as in proton- and pion-induced reactions in the energy range of 1–4 GeV. These are just a few examples from the intermediate research programme, which will start in 2018 and offer about three months of beam time per year, thereby bridging the gap until the commissioning of the SIS100.

The FAIR phase 0 programme intends to maintain and further establish the FAIR-GSI community by offering attractive science before the full complex is up and running. It will also educate and train the next generation of scientists and engineers for FAIR and, last but not least, maintain and extend the technical skills required to operate such a large accelerator complex. While FAIR phase 0 is an important and necessary step offering new and excellent research opportunities for users, full exploitation of the unique science potential opened up by FAIR has to await the start of SIS100 operation in 2025.

Depending on how rich the scientific harvest from FAIR will be and in which specific directions it will be most prominent, one can conceive of several upgrade options. One is a further increase of intensities by up to two orders of magnitude for nuclear structure, reactions and astrophysics, which will also benefit dense-plasma research. Another option is a further increase of beam energy by a factor of 3–6 for hadron- and quark-matter research. Other upgrade possibilities include strengthening the antiproton research programme, via cooled low-energy antiproton beams, for the study of fundamental interactions and symmetries. FAIR is expected to be the flagship facility for hadron, nuclear and atomic physics – as well as related science fields exploiting intense beams of antiprotons and heavy ions – until around 2040.

FAIR's four scientific pillars

Atomic and Plasma Physics, and Applied sciences (APPA)
With about 700 participants, APPA is an umbrella for several sub-collaborations working across atomic physics, plasma physics and applied sciences, with specific programmes in biophysics, medical physics and materials science. Several experimental stations, in addition to the CRYRING and HESR storage rings and the trapping facility HITRAP, will allow the APPA community to tackle a variety of challenges. In atomic physics, for example, high-precision tests of bound-state QED in the non-perturbative regime become possible. A precise determination of fundamental constants such as the fine-structure constant is also a target, which involves very precise measurements of the bound-state g-factors in medium to high-Z hydrogen-like ions confined in a trap. Plasma physicists will be able to create and probe dense plasmas to test models of planetary and stellar structure. By means of FAIR beams, the high-energy component of galactic cosmic radiation can also be simulated in dedicated irradiation experiments to assess the risks that space missions pose to astronauts and electronic equipment. Finally, the materials-science and geoscience communities will be able to test how materials respond to the simultaneous application of irradiation and pressure, which is of interest for the synthesis of new materials under highly non-equilibrium conditions and for understanding processes in the Earth’s mantle.


The Compressed Baryonic Matter experiment (CBM) 
The CBM experiment, which has more than 500 participants and is organised similarly to the LHC experiments at CERN, will use high-energy nucleus–nucleus collisions to investigate highly compressed nuclear matter. The fixed-target experiment is 10 m long and comprises a large-aperture superconducting dipole magnet and seven subsequent detector systems providing tracking and particle identification. CBM collisions will recreate the matter densities found in supernova explosions, the cores of neutron stars and neutron-star mergers. In contrast to the very high temperatures and low net-baryon densities reached at the Relativistic Heavy Ion Collider in Brookhaven and the LHC at CERN (conditions similar to those that prevailed microseconds after the Big Bang), the energies of the FAIR beams are perfectly suited to study the QCD phase diagram of strongly interacting matter at large net baryon densities and low temperatures. Here, it is expected that the QCD phase diagram exhibits a rich structure such as a critical point, a first-order phase transition between hadronic and partonic matter, or new phases such as quarkyonic matter. Discovering these landmarks would be a breakthrough in our understanding of the strong interaction. The CBM experiment is designed to run at interaction rates of up to 10 MHz, which is 3–4 orders of magnitude higher than the rates reached in other high-energy heavy-ion experiments. It has very fast and radiation-hard detectors, a novel data read-out and analysis concept, and a high-performance computing cluster for online event reconstruction and selection.

The PANDA experiment 
The antiProton ANnihilation at DArmstadt (PANDA) collaboration brings together more than 400 scientists from 19 countries, similar to but smaller than the LHC experiments at CERN. Its goal is to understand hadrons using the power of an antiproton beam on fixed hydrogen or other nuclear targets. Antiproton–proton annihilations have enormous advantages compared to proton–proton collisions, such as small momentum-transfer at maximum released energy with well-defined initial states and high-precision mass scanning. The vast difference in mass between the proton and its individual quark constituents is a result of the binding among quarks in the confinement regime, and exotic hadrons such as tetra- and pentaquarks, hybrids and glueballs will reveal uncharted properties of this binding. PANDA will use proton form-factor measurements, deep virtual Compton scattering and quark dynamics, as well as the behaviour of hadrons inside nuclear media, as highly complementary tools with which to understand the very nature of hadrons. Strange quarks in hyperons, for instance, can be used as tags to trace quark dynamics with very high cross-sections and spin degrees of freedom. The PANDA experiment features a modern multipurpose detector with excellent tracking, calorimetry and particle-identification capabilities. Together with the high-quality antiproton beam at FAIR’s high-energy storage ring (HESR), an unprecedented annihilation rate and sophisticated event filtering, it will be ideally suited to address important questions in all aspects of this field.

NUclear STructure, Astrophysics and Reactions (NUSTAR) 
The NUSTAR collaboration at FAIR has more than 800 participants from 180 institutes located in 38 countries. Similar to APPA, NUSTAR does not represent a single monolithic experiment but is structured in several sub-collaborations across different experimental set-ups tailored to various aspects of secondary radioactive ions, such as mass and lifetime measurements. A major goal of NUSTAR is to improve our knowledge of the synthesis and abundance of chemical elements, for which the collaboration will explore the structure and reaction properties of very rare radioactive ions produced for the first time by FAIR. Although much has been learnt about the behaviour of stable and unstable nuclei in past decades, we are still far from understanding how the very heavy elements are formed through reactions involving rare nuclei at the limit of stability. FAIR will allow scientists to artificially produce the nuclei that occur as radioactive intermediate products in the formation of stable isotopes, measuring directly in the laboratory the different processes involved. FAIR offers unique tools for such studies. The Super-FRS will make very efficient use of the highly intense beams at high energies to separate beams of the heaviest and most neutron-rich nuclei, while FAIR’s complex network of storage rings will allow mass and lifetime measurements. This will place NUSTAR at the forefront of this branch of science. Many of NUSTAR’s experimental set-ups are already complete, and the collaboration plans to transfer them into the new buildings starting from 2023. 

Data privacy concerns us all

It is perhaps no coincidence that many dystopian visions of the future in popular fiction, such as Nineteen Eighty-Four, Brave New World and Fahrenheit 451, have breach of data privacy at the core of their plots. With an ever growing level of interaction between humans and a global infrastructure tied together by the internet, there is always the fear that others know more about you than you would like. How can we save ourselves from such a bleak future?

The answer has been to create, over the past 20 years, a number of strict legal obligations and rights when dealing with the personal data of individuals. You will notice that you are increasingly asked for consent for use of your personal data on websites and to allow software to store cookies on your computer. Such legislation is sometimes criticised for generating bureaucracy that gets in the way of “real work”. But for those who work in the data-privacy arena, it is clear that we need to adapt quickly to a rapidly evolving digital environment. What you do, where you go and how long you spend there are valuable assets in the information world.

In 2012 the European Union (EU) proposed new data-protection reforms to strengthen the fundamental rights of citizens. Three years later, EU institutions reached agreement on the rules, and in May 2016 a new regulation was issued called the General Data Protection Regulation (GDPR), which applies in all European Economic Area (EEA) countries from 25 May 2018.

You probably haven’t heard much about the GDPR until now, yet it is almost certain to impact the way our field deals with personal data. The central idea is that your personal data is truly yours: it cannot be taken or processed without safeguards to its privacy, and any data collection or processing must have an appropriate legal basis. The new laws offer a very broad interpretation of what “personal data” and “processing” mean, and offer a number of legal bases that must be considered. Personal data is anything that could be used to identify you, including obvious things like name and address but also more subtle information like GPS location or IP address. Processing is equally loosely defined, from storing data in a database to viewing data on a screen and even copying a file.

Although in practice there are many details to be determined, the intention of the regulators is evident: to stop the use of people’s personal data except for well-defined purposes that, to be fair to the individual, must be made clear when the data are collected. Crucially, the new regulations aim to be technology agnostic and therefore apply equally to an online database and to a filing cabinet full of paper.

All EEA institutions, companies, labs and universities will be subject to the GDPR. Although CERN, as an international organisation, is not directly subject to EU regulations, in light of the coming changes it is reviewing its internal legislation to offer equivalent levels of personal-data protection. Consequently, in January this year CERN established the Office of Data Privacy Protection to assist services that process personal data and to help anyone who is concerned about how their personal data is being handled by the Organization.

Given the broad scope of personal data and data processing, it can be complicated and somewhat burdensome to comply with these new practices. For instance, it will require us to review how passport information should be sent, how records such as medical information and personal attributes are secured, as well as how photos and CCTV are used. At the same time, we need to recognise that protecting privacy is important and that adopting a “nothing to hide, nothing to fear” approach does not protect us from future unknown uses of our personal data.

So, if in any doubt, simply adopt the golden rule of personal data: if you don’t really need it, don’t collect or store it; if you do, delete it as soon as it is no longer needed.

A First Course in Mathematical Physics

By Colm T Whelan
Wiley-VCH


The aim of this book is to provide undergraduate students in the physical sciences with the fundamental mathematical tools they need to proceed with their studies.

In the first part the author introduces core mathematics, starting from basic concepts such as functions of one variable and complex numbers, and moving to more advanced topics including vector spaces, fields and operators, and functions of a complex variable.

The second part presents some of the many applications of these mathematical tools to physics. When introducing complex physical laws and theories, including Maxwell’s equations, special relativity and quantum theory, the author strives to present the material in an easily intelligible way. He also emphasises the direct connection between the conceptual basis of these physics topics and the mathematical tools provided in the first part of the text.

Two appendices of formulas conclude the book. A large number of problems are included but the solutions are only made available on a password-protected website for lecturers.

Thorium Energy for the World

By J-P Revol, M Bourquin, Y Kadi, E Lillestol, J-C de Mestral and K Samec (eds)
Springer

ISBN 978-3-319-26542-1

This book contains the proceedings of the Thorium Energy Conference (ThEC13), held in October 2013 at CERN, which brought together some of the world’s leading experts on thorium technologies. According to them, nuclear energy based on a thorium fuel cycle is safer and cleaner than that generated from uranium. In addition, long-lived waste from existing power plants could be retrieved and fed into the thorium fuel cycle, where it would be transformed into stable material while generating electricity.

The technology required to implement this type of fuel cycle is already being developed; nevertheless, much effort and time are still needed.

The ThEC13 conference brought together high-level speakers from 30 countries, among them the Nobel laureates Carlo Rubbia and Jack Steinberger, the then CERN Director-General Rolf Heuer, and Hans Blix, former director-general of the International Atomic Energy Agency (IAEA).

Collecting the contributions of the speakers, this book offers a detailed technical review of thorium-energy technologies from basic R&D to industrial developments, and is thus a tool for informed debates on the future of energy production and, in particular, on the advantages and disadvantages of different nuclear technologies.

Bose–Einstein Condensation and Superfluidity

By L Pitaevskii and S Stringari
Oxford University Press


This book deals with the fascinating topics of Bose–Einstein condensation (BEC) and superfluidity. The main emphasis is on providing the formalism needed to describe these phases of matter as observed in the laboratory, which goes well beyond the idealised systems in which BEC was originally predicted and is essential for interpreting the experimental observations.

BEC was predicted in 1925 by Einstein, based on the ideas of Satyendra Nath Bose. It corresponds to a new phase of matter in which bosons accumulate at the lowest energy level and develop coherent quantum properties on a macroscopic scale. These properties can give rise to phenomena that seem impossible from an everyday perspective. In particular, BEC lies behind the theory of superfluids, which are fluids that flow without dissipating energy and rotate without generating vorticity – apart from quantised vortices, which are a kind of topological defect.
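For readers who want the quantitative statement behind Einstein’s prediction, the standard textbook result for an ideal uniform Bose gas (quoted here from general knowledge, not from the book under review) gives the condensation temperature as

```latex
k_B T_c = \frac{2\pi\hbar^2}{m}\left(\frac{n}{\zeta(3/2)}\right)^{2/3},
```

where $n$ is the particle density, $m$ the boson mass and $\zeta(3/2) \approx 2.612$; below $T_c$ a macroscopic fraction of the bosons occupies the lowest energy level.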

Experimentally, BEC in dilute gases was first observed in the laboratory in 1995, an achievement recognised by the 2001 Nobel Prize in Physics. Since then, there has been an explosion of interest and new results in the field. It is thus timely that two of its leading experts have updated and extended their volume on BEC to summarise the theoretical aspects of this phase of matter. The authors also describe in detail how superfluid phenomena can occur for Fermi gases in the presence of interactions.

The book is relatively heavy in formalism, which is justified by the wide range of phenomena covered in a relatively concise volume. It starts with some basics about correlation functions, condensation and statistical mechanics. Next, it delves into the simplest systems for which BEC can occur: weakly coupled dilute gases of bosonic particles. The authors describe different approaches to the BEC phase, including the works of Landau and Bogoliubov. They also introduce the Gross–Pitaevskii equation and show its importance in the description of superfluids. Superfluidity is explained in great detail, in particular the occurrence of quantised vortices.
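For reference, the Gross–Pitaevskii equation mentioned above takes the following standard time-dependent form (a textbook result, stated here for illustration rather than taken from the book):

```latex
i\hbar\,\frac{\partial \Psi(\mathbf{r},t)}{\partial t}
  = \left[-\frac{\hbar^2}{2m}\nabla^2 + V_{\mathrm{ext}}(\mathbf{r})
    + g\,|\Psi(\mathbf{r},t)|^2\right]\Psi(\mathbf{r},t),
\qquad g = \frac{4\pi\hbar^2 a}{m},
```

where $a$ is the s-wave scattering length; the nonlinear term $g|\Psi|^2$ encodes the mean-field interaction that distinguishes a real condensate from the ideal gas.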

The second part describes how to adapt the theoretical formalism introduced in the first part to realistic traps where BEC is observed. This is very important to connect theoretical descriptions to laboratory research, for instance to predict in which experimental configurations a BEC will appear and how to characterise it.

Part three deals with BEC in fermionic systems, which is possible if the fermions interact and pair up into bosonic structures. These fermionic phases exhibit superfluid properties and have been created in the laboratory, and the authors consider fermionic condensates in realistic traps. The final part is devoted to new phenomena appearing in mixed bosonic–fermionic systems.

The book is a good resource for the theoretical description of BEC beyond the idealised configurations that are described in many texts. The concise style and large amount of notation requires constant effort from the reader, but seems inevitable to explain many of the surprising phenomena appearing in BECs. The book, perhaps combined with others, will provide the reader with a clear overview of the topic and latest theoretical developments in the field. The text is enhanced by the many figures and plots presenting experimental data.

Making Sense of Quantum Mechanics

By Jean Bricmont
Springer


In this book, Jean Bricmont aims to challenge Richard Feynman’s famous statement that “nobody understands quantum mechanics” and discusses some of the issues that have surrounded this field of theoretical physics since its inception.

Bricmont starts by strongly criticising the “establishment” view of quantum mechanics (QM), known as the Copenhagen interpretation, which attributes a key role to the observer in a quantum measurement. The quantum-mechanical wavefunction predicts the possible outcomes of a quantum measurement, but not which of these actually occurs. The author opposes the idea that a conscious human mind is an essential part of the process of determining what outcome is obtained. This interpretation was proposed by some early thinkers on the subject, although I believe Bricmont is wrong to associate it with Niels Bohr, who related measurement to irreversible changes in the measuring apparatus, rather than in the mind of the human observer.

The second chapter deals with the nature of the quantum state, illustrated by discussions of the Stern–Gerlach experiment to measure spin and the Mach–Zehnder interferometer to emphasise the importance of interference. During the last 20 years or so, much work has been done on “decoherence”. This has shown that the interaction of the quantum system with its environment, which may include the measuring apparatus, prevents any detectable interference between the states associated with different possible measurement outcomes. Bricmont correctly emphasises that this still does not result in a particular outcome being realised.

The author’s central argument is presented in chapter five, where he discusses the de Broglie–Bohm hidden-variable theory. At its simplest, it proposes that there are two components to the quantum-mechanical state: the wavefunction and an actual point particle that always has a definite position, although this is hidden from observation until its position is measured. This model claims to resolve many of the conceptual problems thrown up by orthodox QM: in particular, the outcome of a measurement is determined by the position of the particle being measured, while the other possibilities implied by the wavefunction can be ignored because they are associated with “empty waves”. Bricmont shows how all the results of standard QM – particularly the statistical probabilities of different measurement outcomes – are faithfully reproduced by the de Broglie–Bohm theory.
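In the de Broglie–Bohm theory as usually formulated (a standard result, stated here for illustration rather than quoted from the book), the hidden particle position $\mathbf{Q}$ evolves according to the guidance equation

```latex
\frac{d\mathbf{Q}}{dt} = \frac{\hbar}{m}\,
  \mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{\mathbf{r}=\mathbf{Q}},
```

so the wavefunction $\psi$, evolving under the ordinary Schrödinger equation, “pilots” the particle, and the outcome of a measurement is fixed by where the particle actually is.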

This is probably the clearest account of this theory that I have come across. So why is the de Broglie–Bohm theory not generally accepted as the correct way to understand quantum physics? One reason follows from the work of John Bell, who showed that no hidden-variable theory can reproduce the quantum predictions (now thoroughly verified by experiment) for systems consisting of two or more particles in an entangled state unless the theory includes non-locality – i.e. a faster-than-light communication between the component particles and/or their associated wavefunctions. As this is clearly inconsistent with special relativity, many thinkers (including Bell himself) have looked elsewhere for a realistic interpretation of quantum phenomena. Not so Jean Bricmont: along with other contemporary supporters of the de Broglie–Bohm theory, he embraces non-locality and looks to use the idea to enhance our understanding of the reality he believes underlies quantum physics. In fact he devotes a whole chapter to this topic and claims that non-locality is an essential feature of quantum physics and not just of models based on hidden variables.

Other problems with the de Broglie–Bohm theory are discussed and resolved – to the author’s satisfaction at least. These include how the de Broglie–Bohm model can be consistent with the Heisenberg uncertainty principle when it appears to assume that the particle always has a definite position and momentum; he points out that the statistical results of a large number of measurements always agree with conventional predictions, and these include the uncertainty principle.

Alternative ways to interpret QM are presented, but the author does not find in them the same advantages as in the de Broglie–Bohm theory. In particular, he discusses the many-worlds interpretation, which assumes that the only reality is the wavefunction and that, rather than collapsing at a measurement, this produces branches that correspond to all measurement outcomes. One of the consequences of decoherence is that there can be no interference between these branches, which means that no branch can be aware of any other. It follows that, even if a human observer is involved, each branch can contain a copy of him or her who is unaware of the others’ presence. From this point of view, all the possible measurement outcomes co-exist – hence the term “many worlds”. Apart from its ontological extravagance, the main difficulty with the many-worlds theory is that it is very hard to see how the separate outcomes can have different probabilities when they all occur simultaneously. Many-worlds supporters have proposed solutions to this problem, which do not satisfy Bricmont (nor, indeed, myself), who emphasises that this is not a problem for the de Broglie–Bohm theory.

A chapter is also dedicated to a brief discussion of philosophy, concentrating on the concept of realism and how it contrasts with idealism. Unsurprisingly, it concludes that realists want a theory describing what happens at the micro scale that accounts for predictions made at the macro scale – and that de Broglie–Bohm provides just such a theory.

The book concludes with an interesting account of the history of QM, including the famous Bohr–Einstein debate, the struggle of de Broglie and Bohm for recognition, and the influence of the politics of the time.

This is a clearly written and interesting book. It has been very well researched, containing more than 500 references, and I would thoroughly recommend it to anyone who has an undergraduate knowledge of physics and mathematics and an interest in foundational questions. Whether it actually lives up to its title is for each reader to judge.

LHC back with a splash

On 29 April, just after 8.00 p.m., the Large Hadron Collider (LHC) began circulating beams of protons for the first time this year. Extensive technical and maintenance work had been carried out since the machine entered its end-of-year shutdown in early December, and the restart of the 27 km-circumference superconducting collider proceeded smoothly.

Magnet powering tests, which ensured the machine can be operated at an energy of 6.5 TeV per beam, were completed during the last week of April. This was followed by the machine-checkout phase, during which all equipment is placed in its operational state and the four LHC experiment caverns are patrolled and closed.

In the meantime, the crew of the Super Proton Synchrotron (SPS), which feeds protons to the LHC, worked hard to extract the single-bunch beam so that the LHC could be commissioned with beam. By the end of the afternoon on Friday 28 April, protons had been sent successfully down both transfer lines and were knocking at the LHC’s door. The following day, at 6.00 p.m., beam 1 (circulating clockwise) was injected and threaded through the LHC’s eight sectors one at a time, travelling around the entire machine within 45 minutes. Beam 2 (anticlockwise) then went through the same process, and at 8.12 p.m. both beams were circulating. On Sunday 30 April, the single-bunch, low-intensity beams were successfully ramped to an energy of 6.5 TeV.

The next task, which was well under way as the Courier went to press, was to continue with the detailed setting up of the machine while stepping up to higher bunch intensities and then to multiple bunches. Each step in the intensity ramp-up has to be validated by circulating beams over three fills lasting up to 20 hours in total, and the team is aiming for a configuration of 2550 bunches per beam, with each bunch containing of the order of 1.2 × 10^11 protons. Once stable beams have been declared, expected in the second half of May, the beams will be brought into collision and the second chapter of LHC Run 2 will be under way.
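To put the target configuration in perspective, a back-of-envelope calculation (using only the figures quoted above, so the results are rough estimates rather than official CERN numbers) gives the total proton count and stored energy per beam:

```python
# Rough beam totals for the configuration quoted in the article:
# 2550 bunches per beam, ~1.2e11 protons per bunch, 6.5 TeV per proton.
BUNCHES_PER_BEAM = 2550
PROTONS_PER_BUNCH = 1.2e11          # order-of-magnitude figure from the article
PROTON_ENERGY_TEV = 6.5
EV_TO_JOULE = 1.602e-19             # electronvolt-to-joule conversion

# Total protons circulating in one beam.
protons_per_beam = BUNCHES_PER_BEAM * PROTONS_PER_BUNCH

# Kinetic energy stored in one beam, in joules.
stored_energy_j = protons_per_beam * PROTON_ENERGY_TEV * 1e12 * EV_TO_JOULE

print(f"protons per beam: {protons_per_beam:.2e}")
print(f"stored energy per beam: {stored_energy_j / 1e6:.0f} MJ")
```

The result is of order 3 × 10^14 protons and roughly 320 MJ per beam, which is why each intensity step has to be validated so carefully before moving on.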
