
Emergence

A murmuration of starlings

Particle physics is at its heart a reductionist endeavour that tries to reduce reality to its most basic building blocks. This view of nature is most evident in the search for a theory of everything – an idea that is nowadays more common in popularisations of physics than among physicists themselves. If such a theory were discovered, all physical phenomena would follow from the application of its fundamental laws.

A complementary perspective to reductionism is that of emergence. Emergence says that new and different kinds of phenomena arise in large and complex systems, and that these phenomena may be impossible, or at least very hard, to derive from the laws that govern their basic constituents. It deals with properties of a macroscopic system that have no meaning at the level of its microscopic building blocks. Good examples are the wetness of water and the superconductivity of an alloy. These concepts don’t exist at the level of individual atoms or molecules, and are very difficult to derive from the microscopic laws. 

As physicists continue to search for cracks in the Standard Model (SM) and Einstein’s general theory of relativity, could these natural laws in fact be emergent from a deeper reality? Emergence is not limited to the world of the very small; by its very nature it skips across orders of magnitude in scale. It is even evident, often mesmerisingly so, at scales much larger than atoms or elementary particles, for example in the murmurations of a flock of birds – a phenomenon that is impossible to describe by following the motion of an individual bird. Another striking example may be intelligence. The mechanism by which artificial intelligence is beginning to emerge from the complexity of the underlying computer code shows similarities with emergent phenomena in physics. One can argue that intelligence, whether it occurs naturally, as in humans, or artificially, should also be viewed as an emergent phenomenon.

Data compression

Renormalisable quantum field theory, the foundation of the SM, works extraordinarily well. The same is true of general relativity. How can our best theories of nature be so successful, while at the same time being merely emergent? Perhaps these theories are so successful precisely because they are emergent. 

As a warm-up, let’s consider the laws of thermodynamics, which emerge from the microscopic motion of many molecules. These laws are not fundamental but are derived by statistical averaging – a huge data compression in which the individual motions of the microscopic particles are compressed into just a few macroscopic quantities such as temperature. As a result, the laws of thermodynamics are universal and independent of the details of the microscopic theory. This is true of all the most successful emergent theories: they describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different. For instance, two physical systems that undergo a second-order phase transition, while being very different microscopically, often obey exactly the same scaling laws, and at the critical point are described by the same emergent theory. In other words, an emergent theory can often be derived from a large universality class of many underlying microscopic theories.

Successful emergent theories describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different

Entropy is a key concept here. Suppose that you try to store the microscopic data associated with the motion of some particles on a computer. If you need N bits to store all that information, there are 2^N possible microscopic states. The entropy equals the logarithm of this number, and essentially counts the number of bits of information. Entropy is therefore a measure of the total amount of data that has been compressed. In deriving the laws of thermodynamics, you throw away a large amount of microscopic data, but you at least keep count of how much information has been removed in the data-compression procedure.
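As a minimal illustration of this counting (restoring Boltzmann’s constant, which the text above sets aside):

\[
S = k_{\mathrm B}\ln\Omega, \qquad \Omega = 2^{N} \;\;\Longrightarrow\;\; S = N\,k_{\mathrm B}\ln 2 ,
\]

so the entropy is directly proportional to the number of bits N that the compression throws away.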

Emergent quantum field theory

One of the great theoretical-physics paradigm shifts of the 20th century occurred when Kenneth Wilson explained the emergence of quantum field theory through the application of the renormalisation group. As with thermodynamics, renormalisation compresses microscopic data into a few relevant parameters – in this case, the fields and interactions of the emergent quantum field theory. Wilson demonstrated that quantum field theories appear naturally as an effective long-distance and low-energy description of systems whose microscopic definition is given in terms of a quantum system living on a discretised spacetime. As a concrete example, consider quantum spins on a lattice. Here, renormalisation amounts to replacing the lattice by a coarser lattice with fewer points, and redefining the spins to be the average of the original spins. One then rescales the coarser lattice so that the distance between lattice points takes the old value, and repeats this step many times. A key insight was that, for quantum statistical systems that are close to a phase transition, you can take a continuum limit in which the expectation values of the spins turn into the local quantum fields on the continuum spacetime.
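As a toy illustration of a single coarse-graining step (a sketch only: the lattice size, block size and simple averaging rule are illustrative choices, not details of Wilson’s construction):

import numpy as np

def block_spin_step(spins, b=2):
    """One block-spin step: average each b x b block of spins, producing a
    coarser lattice that is then treated as having the original spacing."""
    L = spins.shape[0]
    assert L % b == 0, "lattice size must be divisible by the block size"
    return spins.reshape(L // b, b, L // b, b).mean(axis=(1, 3))

rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=(64, 64))   # start from random Ising-like spins
for step in range(3):                            # repeat the coarse-graining
    spins = block_spin_step(spins)
    print(step, spins.shape)                     # 32x32, then 16x16, then 8x8

Each iteration discards short-distance detail while keeping the block averages – fewer numbers describing the same system at long distances.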

This procedure is analogous to the compression algorithms used in machine learning. Each renormalisation step creates a new layer, and the algorithm that is applied between two layers amounts to a form of data compression. The goal is similar: you only keep the information that is required to describe the long-distance and low-energy behaviour of the system in the most efficient way.

A neural network

So quantum field theory can be seen as an effective emergent description of one of a large universality class of many possible underlying microscopic theories. But what about the SM specifically, and its possible supersymmetric extensions? Gauge fields are central ingredients of the SM and its extensions. Could gauge symmetries and their associated forces emerge from a microscopic description in which there are no gauge fields? Similar questions can also be asked about the gravitational force. Could the curvature of spacetime be explained from an emergent perspective?

String theory seems to indicate that this is indeed possible, at least theoretically. While initially formulated in terms of vibrating strings moving in space and time, it became clear in the 1990s that string theory also contains many more extended objects, known as “branes”. By studying the interplay between branes and strings, an even more microscopic theoretical description was found in which the coordinates of space and time themselves start to dissolve: instead of being described by real numbers, our familiar (x, y, z) coordinates are replaced by non-commuting matrices. At low energies, these matrices begin to commute, and give rise to the normal spacetime with which we are familiar. In these theoretical models it was found that both gauge forces and gravitational forces appear at low energies, while not existing at the microscopic level.

While these models show that it is theoretically possible for gauge forces to emerge, there is at present no emergent theory of the SM. Such a theory seems to be well beyond us. Gravity, however, being universal, has been more amenable to emergence.

Emergent gravity

In the early 1970s, a group of physicists became interested in the question: what happens to the entropy of a thermodynamic system that is dropped into a black hole? The surprising conclusion was that black holes have a temperature and an entropy, and behave exactly like thermodynamic systems. In particular, they obey the first law of thermodynamics: when the mass of a black hole increases, its (Bekenstein–Hawking) entropy also increases.

The correspondence between the gravitational laws and the laws of thermodynamics holds not only near black holes. You can artificially create a gravitational field by accelerating. For an observer who keeps accelerating, even empty space develops a horizon, beyond which light rays can never catch up with the observer. These horizons also carry a temperature and an entropy, and obey the same thermodynamic laws as black-hole horizons.

It was shown by Stephen Hawking that the thermal radiation emitted from a black hole originates from pair creation near the black-hole horizon. The properties of the pair of particles, such as spin and charge, are undetermined due to quantum uncertainty, but if one particle has spin up (or positive charge), then the other particle must have spin down (or negative charge). This means that the particles are quantum entangled. Quantum entangled pairs can also be found in flat space by considering accelerated observers. 

Crucially, even the vacuum can be entangled. By separating spacetime into two parts, you can ask how much entanglement there is between the two sides. The answer to this was found in the last decade, through the work of many theorists, and turns out to be rather surprising. If you consider two regions of space that are separated by a two-dimensional surface, the amount of quantum entanglement between the two sides turns out to be precisely given by the Bekenstein–Hawking entropy formula: it is equal to a quarter of the area of the surface measured in Planck units. 
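Written as a formula, with the Planck length made explicit:

\[
S_{\mathrm{BH}} = \frac{k_{\mathrm B}\,A}{4\,\ell_{\mathrm P}^{2}}, \qquad \ell_{\mathrm P}^{2} = \frac{G\hbar}{c^{3}},
\]

i.e. the entanglement entropy across a surface of area A equals one quarter of that area in Planck units.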

Holographic renormalisation

The area of the event horizon

The AdS/CFT correspondence incorporates a principle called “holography”: the gravitational physics inside a region of space emerges from a microscopic description that, just like a hologram, lives on a space with one less dimension and thus can be viewed as living on the boundary of the spacetime region. The extra dimension of space emerges together with the gravitational force through a process called “holographic renormalisation”. One successively adds new layers of spacetime. Each layer is obtained from the previous layer through “coarse-graining”, in a similar way to both renormalisation in quantum field theory and data-compression algorithms in machine learning.

Unfortunately, our universe is not described by a negatively curved spacetime. It is much closer to a so-called de Sitter spacetime, which has a positive curvature. The main difference between de Sitter space and the negatively curved anti-de Sitter space is that de Sitter space does not have a boundary. Instead, it has a cosmological horizon whose size is determined by the rate of the Hubble expansion. One proposed explanation for this qualitative difference is that, unlike for negatively curved spacetimes, the microscopic quantum state of our universe is not unique, but secretly carries a lot of quantum information. The amount of this quantum information can once again be counted by an entropy: the Bekenstein–Hawking entropy associated with the cosmological horizon. 

This raises an interesting prospect: if the microscopic quantum data of our universe may be thought of as many entangled qubits, could our current theories of spacetime, particles and forces emerge via data compression? Space, for example, could emerge by forgetting the precise way in which all the individual qubits are entangled, while preserving only the information about the amount of quantum entanglement present in the microscopic quantum state. This compressed information would then be stored in the form of the areas of certain surfaces inside the emergent curved spacetime.

In this description, gravity would follow for free, expressed in the curvature of this emergent spacetime. What is not immediately clear is why the curved spacetime would obey the Einstein equations. As Einstein showed, the amount of curvature in spacetime is determined by the amount of energy (or mass) that is present. It can be shown that his equations are precisely equivalent to an application of the first law of thermodynamics. The presence of mass or energy changes the amount of entanglement, and hence the area of the surfaces in spacetime. This change in area can be computed and precisely leads to the same spacetime curvature that follows from the Einstein equations. 
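Schematically (a compressed sketch of that thermodynamic argument, with technical conditions suppressed), one imposes the Clausius relation across every local causal horizon, with the temperature given by the Unruh temperature of a nearby accelerated observer and the entropy by the area law:

\[
\delta Q = T\,\delta S, \qquad T = \frac{\hbar a}{2\pi c\,k_{\mathrm B}}, \qquad S = \frac{k_{\mathrm B} c^{3}}{4 G \hbar}\,A .
\]

Demanding that this holds for all such horizons forces the Einstein field equations, G_{\mu\nu} + \Lambda g_{\mu\nu} = (8\pi G/c^{4})\,T_{\mu\nu}, with Newton’s constant fixed by the coefficient in the entropy–area relation.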

The idea that gravity emerges from quantum entanglement goes back to the 1990s, and was first proposed by Ted Jacobson. Not long afterwards, Juan Maldacena discovered that general relativity can be derived from an underlying microscopic quantum theory without a gravitational force. His description only works for infinite spacetimes with negative curvature called anti-de Sitter (AdS) space, as opposed to the positive curvature we measure. The microscopic description then takes the form of a scale-invariant quantum field theory – a so-called conformal field theory (CFT) – that lives on the boundary of the AdS space (see “Holographic renormalisation” panel). It is in this context that the connection between vacuum entanglement and the Bekenstein–Hawking entropy, and the derivation of the Einstein equations from entanglement, are best understood. I also contributed to these developments in a 2010 paper that emphasised the role of entropy and information in the emergence of the gravitational force. Over the last decade a lot of progress has been made in our understanding of these connections, in particular the deep connection between gravity and quantum entanglement. Quantum information has taken centre stage in the most recent theoretical developments.

Emergent intelligence

But what about viewing the even more complex problem of human intelligence as an emergent phenomenon? Since scientific knowledge is condensed and stored in our current theories of nature, the process of theory formation can itself be viewed as a very efficient form of data compression: it only keeps the information needed to make predictions about reproducible events. Our theories provide us with a way to make predictions with the fewest possible number of free parameters. 

The same principles apply in machine learning. The way an artificial-intelligence machine is able to predict whether an image represents a dog or a cat is by compressing the microscopic data stored in individual pixels in the most efficient way. This decision cannot be made at the level of individual pixels. Only after the data has been compressed and reduced to its essence does it become clear what the picture represents. In this sense, the dog/cat-ness of a picture is an emergent property. This is even true for the way humans process the data collected by our senses. It seems easy to tell whether we are seeing or hearing a dog or a cat, but underneath, and hidden from our conscious mind, our brains perform a very complicated task that turns all the neural data coming from our eyes and ears into a signal that is compressed into a single outcome: it is a dog or a cat.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts”

Can intelligence, whether artificial or human, be explained from a reductionist point of view? Or is it an emergent concept that only appears when we consider a complex system built out of many basic constituents? There are arguments in favour of both sides. As human beings, our brains are hard-wired to observe, learn, analyse and solve problems. To achieve these goals the brain takes the large amount of complex data received via our senses and reduces it to a very small set of information that is most relevant for our purposes. This capacity for efficient data compression may indeed be a good definition for intelligence, when it is linked to making decisions towards reaching a certain goal. Intelligence defined in this way is exhibited in humans, but can also be achieved artificially.

Artificially intelligent computers beat us at problem solving, pattern recognition and sometimes even in what appears to be “generating new ideas”. A striking example is DeepMind’s AlphaZero, whose chess rating far exceeds that of any human player. Just four hours after learning the rules of chess, AlphaZero was able to beat the strongest conventional “brute force” chess program by coming up with smarter ideas and showing a deeper understanding of the game. Top grandmasters use its ideas in their own games at the highest level. 

In its basic material design, an artificial-intelligence machine looks like an ordinary computer. On the other hand, it is practically impossible to explain all aspects of human intelligence by starting at the microscopic level of the neurons in our brain, let alone in terms of the elementary particles that make up those neurons. Furthermore, the intellectual capability of humans is closely connected to the sense of consciousness, which most scientists would agree does not allow for a simple reductionist explanation.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts” – or as condensed-matter theorist Phil Anderson put it, “more is different”. It counters the reductionist point of view, reminding us that the laws that we think to be fundamental today may in fact emerge from a deeper underlying reality. While this deeper layer may remain inaccessible to experiment, it is an essential tool for theorists of the mind and the laws of physics alike.

Building the future of LHCb

Planes of LHCb’s SciFi tracker

It was once questioned whether it would be possible to successfully operate an asymmetric “forward” detector at a hadron collider. In such a high-occupancy environment, it is much harder to reconstruct decay vertices and tracks than it is at a lepton collider. Following its successes during LHC Run 1 and Run 2, however, LHCb has rewritten the forward-physics rulebook, and is now preparing to take on bigger challenges.

During Long Shutdown 2, which comes to an end early next year, the LHCb detector is being almost entirely rebuilt to allow data to be collected at a rate up to 10 times higher during Run 3 and Run 4. This will improve the precision of numerous world-best results, such as constraints on the angles of the CKM triangle, while further scrutinising intriguing results in B-meson decays, which hint at departures from the Standard Model. 

LHCb’s successive detector layers

At the core of the LHCb upgrade project are new detectors capable of sustaining an instantaneous luminosity up to five times that seen during Run 2, and which make possible a pioneering software-only trigger that will allow LHCb to process signal data in an upgraded computing farm at the frenetic rate of 40 MHz. The vertex locator (VELO) will be replaced with a pixel version, the upstream silicon-strip tracker will be replaced with a lighter version (the UT) located closer to the beamline, and the electronics for LHCb’s muon stations and calorimeters are being upgraded for 40 MHz readout.

Recently, three further detector systems key to dealing with the higher occupancies ahead were lowered into the LHCb cavern for installation: the upgraded ring-imaging Cherenkov detectors RICH1 and RICH2 for sharper particle identification, and the brand new “SciFi” (scintillating fibre) tracker. 

SciFi tracking

The components of LHCb’s SciFi tracker may not seem futuristic at first glance. Its core elements are constructed from what is essentially paper, plastic, some carbon fibre and glue. These everyday materials, however, conceal advanced technologies which, when coupled together, produce the very light, uniform, high-performance detector needed to cope with the higher number of particle tracks expected during Run 3.

Located behind the LHCb magnet (see “Asymmetric anatomy” image), the SciFi represents a challenge, not only due to its complexity, but also because the technology – plastic scintillating fibres and silicon photomultiplier arrays – has never been used for such a large area in such a harsh radiation environment. Many of the underlying technologies have been pushed to the extreme during the past decade to allow the SciFi to successfully operate under LHC conditions in an affordable and effective way. 

Scintillating-fibre mat production

More than 11,000 km of 0.25 mm-diameter polystyrene fibre was delivered to CERN before undergoing meticulous quality checks. Excessive diameter variations were removed to prevent disruptions of the closely packed fibre matrix produced during the winding procedure, and clear improvements from the early batches to the production phase were made by working closely with the industrial manufacturer. From the raw fibres, nearly 1400 multi-layered fibre mats were wound in four of the LHCb collaboration’s institutes (see “SciFi spools” image), before being cut and bonded into modules, tested, and shipped to CERN where they were assembled with the cold boxes. The SciFi tracker contains 128 stiff and robust 5 × 0.5 m² modules made of eight mats bonded with two fire-resistant honeycomb and carbon-fibre panels, along with some mechanics and a light-injection system. In total, the design produces nearly 320 m² of detector surface over the 12 layers of the tracking stations.
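A quick consistency check of the quoted figures (a back-of-the-envelope only, assuming each module contributes its full face to one detection layer):

n_modules = 128
module_area_m2 = 5.0 * 0.5            # each module is 5 m x 0.5 m
print(n_modules * module_area_m2)     # 320.0 m^2, matching the quoted detector surface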

The scintillating fibres emit photons at blue-green wavelengths when a particle interacts with them. Secondary scintillator dyes added to the polystyrene amplify the light and shift it to longer wavelengths so it can be read out by custom-made silicon photomultipliers (SiPMs). SiPMs have become a strong alternative to conventional photomultiplier tubes in recent years, due to their smaller channel sizes, easier operation and insensitivity to magnetic fields. This makes them ideal to read out the higher number of channels necessary to identify separate but nearby tracks in LHCb during Run 3. 

The width of the SiPM channels, 0.25 mm, is designed to match that of the fibres. Though the two need not align perfectly, this gives better separation power for tracking than the 5 mm gas straw tubes previously used in the outer regions of the detector, and a performance similar to that of the silicon-strip tracker. The tiny channel size results in 524,288 SiPM channels collecting light from 130 m of fibre-mat edges. A custom ASIC, called the PACIFIC, outputs two bits per channel based on three signal-amplitude thresholds. A field-programmable gate array (FPGA) assigned to each SiPM then groups these signals into clusters, and the location of each cluster is sent to the computing farm. Despite clustering and noise suppression, this still results in an enormous data rate of 20 Tb/s – nearly half of the total data bandwidth of the upgraded LHCb detector.
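A rough back-of-the-envelope for the quoted numbers (with our own illustrative assumption that every channel ships its two bits at the full 40 MHz before clustering and zero-suppression):

fibre_edge_m = 130.0
channel_width_mm = 0.25
channels_from_geometry = fibre_edge_m * 1000 / channel_width_mm   # ~520,000, close to the 524,288 channels read out
raw_rate_tbps = 524_288 * 2 * 40e6 / 1e12                         # ~42 Tb/s before any reduction
print(round(channels_from_geometry), round(raw_rate_tbps, 1))     # clustering brings this down to the quoted ~20 Tb/s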

One of the key factors in the success of LHCb’s flavour-physics programme is its ability to identify charged particles

LHCb’s SciFi tracker is the first large-scale use of SiPMs for tracking, and takes advantage of improvements in the technology in the 10 years since the SciFi was proposed. The photon-detection efficiency of SiPMs has nearly doubled thanks to improvements in the design and production of the underlying pixel structures, while the probability of crosstalk between the pixels – which turns a single pixel firing randomly without incident light, an effect made far more frequent by radiation damage, into multiple fake signals – has been reduced from more than 20% to a few percent by the introduction of microscopic trenches between the pixels. The dark single-pixel firing rate can also be reduced by cooling the SiPM. Together, these two methods greatly reduce the number of fake-signal clusters, so that the tracker can still function effectively after several years of operation in the LHCb cavern.

RICH2 photon detector plane

The LHCb collaboration assembled commercial SiPMs on flex cables and bonded them in groups of 16 to a 0.5 m-long 3D-printed titanium cooling bar to form precisely assembled photodetection units for the SciFi modules. Circulating a coolant at a temperature of –50 °C through the cold bar reduces the dark-noise rate by a factor of 60. Furthermore, in a first for a CERN experiment, it was decided to use a new single-phase liquid coolant called Novec-649 from 3M for its non-toxic properties and low global-warming potential (GWP = 1). Historically, C6F14 – which has a GWP of 7400 – was the thermo-transfer fluid of choice. Although several challenges had to be faced in learning how to work with the new fluid, wider use of Novec-649 and similar products could contribute significantly to the reduction of CERN’s carbon footprint. Additionally, since the narrow envelope of the tracking stations precludes the use of standard foam insulation on the coolant lines, a significant engineering effort has been required to vacuum-insulate the 48 transfer lines serving the 24 rows of SiPMs and 256 cold bars, where leaks are possible at every connection.

To date, LHCb collaborators have tirelessly assembled and tested nearly half of the SciFi tracker above ground, where only two defective channels out of the 262,144 tested in the full signal chain were unrecoverable. Four out of 12 “C-frames” containing the fibre modules (see “Tracking tall” image) are now installed and waiting to be connected and commissioned, with a further two installed in mid-July. The remaining six will be completed and installed before the start of operations early next year.

New riches

One of the key factors in the success of LHCb’s flavour-physics programme is its ability to identify charged particles, which reduces the background in selected final states and assists in the flavour tagging of b quarks. Two ring-imaging Cherenkov (RICH) detectors, RICH1 and RICH2, located upstream and downstream of the LHCb magnet, 1 and 10 m from the collision point respectively, provide excellent particle identification over a very wide momentum range. They comprise a large volume of fluorocarbon gas (the radiator), in which photons are emitted by charged particles travelling faster than the speed of light in the gas; spherical and flat mirrors to focus and reflect this Cherenkov light; and two photon-detector planes where the Cherenkov rings are detected and read out by the front-end electronics.
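The underlying relation, with n the refractive index of the radiator gas and β = v/c the particle’s speed, is

\[
\cos\theta_{\mathrm c} = \frac{1}{n\beta}, \qquad \beta > \frac{1}{n} \;\;\text{(emission threshold)},
\]

so measuring the ring radius, and hence the Cherenkov angle, together with the momentum from the tracking system pins down the particle’s velocity and therefore its mass and identity.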

The original RICH detectors are currently being refurbished to cope with the more challenging data-taking conditions of Run 3, requiring a variety of technological challenges to be overcome. The photon detection system, for example, has been redesigned to adapt to the highly non-uniform occupancy expected in the RICH system, running from an unprecedented peak occupancy of ~35% in the central region of RICH1 down to 5% in the peripheral region of RICH2. Two types of 64-channel multi-anode photomultiplier tubes (MaPMTs) have been selected for the task which, thanks to their exceptional quantum efficiency in the relevant wavelength range, are capable of detecting single photons while providing excellent spatial resolution and very low background noise. These are key requirements to allow pattern-recognition algorithms to reconstruct Cherenkov rings even in the high-occupancy region. 

Completed SciFi C-frames

More than 3000 MaPMT units, for a total of 196,608 channels, are needed to fully instrument both upgraded RICH detectors. The already large active area (83%) of the devices has been maximised by arranging the units in a compact and modular “elementary cell” containing a custom-developed, radiation-hard eight-channel ASIC called the Claro chip, which is able to digitise the MaPMT signal at a rate of 40 MHz. The readout is controlled by FPGAs connected to around 170 channels each. The prompt nature of Cherenkov radiation combined with the performance of the new opto-electronics chain will allow the RICH systems to operate within the LHC’s 25 ns time window, dictated by the bunch-crossing period, while applying a time-gate of less than 6 ns to provide background rejection.

To keep the new RICHes as compact as possible, the hosting mechanics has been designed to provide both structural support and active cooling. Recent manufacturing techniques have enabled us to drill two 6 mm-diameter ducts over a length of 1.5 m into the spine of the support, through which a coolant (the more environmentally friendly Novec-649, as in the SciFi tracker) is circulated. Each element of the opto-electronics chain has been produced and fully validated within a dedicated quality-assurance programme, allowing the position of the photon detectors and their operating conditions to be fine-tuned across the RICH detectors. In February, the first photon-detector plane of RICH2 (see “RICH2 to go” image) became the first active element of the LHCb upgrade to be installed in the cavern. The two planes of RICH2, located at the sides of the beampipe, were commissioned in early summer and will see first Cherenkov light during an LHC beam test in October.

RICH1 spherical mirrors

RICH1 presents an even bigger challenge. To reduce the number of photons in the hottest region, its optics have been redesigned to spread the Cherenkov rings over a larger surface. The spatial envelope of RICH1 is also constrained by its magnetic shield, demanding even more compact mechanics for the photon-detector planes. To accommodate the new design of RICH1, a new gas enclosure for the radiator is needed. A volume of 3.8 m³ of C4F10 is enclosed in an aluminium structure directly fastened to the VELO tank on one side and sealed with a low-mass window on the other, with particular effort placed on building a leak-less system to limit potential environmental impact. Installing these fragile components in a very limited space has been a delicate process, and the last element to complete the gas-enclosure sealing was installed at the beginning of June.

The optical system is the final element of the RICH1 mechanics. The ~2 m² spherical mirrors placed inside the gas enclosure are made of carbon-fibre composite to limit the material budget (see “Cherenkov curves” image), while the two 1.3 m² planes of flat mirrors are made of borosilicate glass for high optical quality. All the mirror segments are individually coated, glued on supports and finally aligned before installation in the detector. The full RICH1 installation is expected to be completed in the autumn, followed by the challenging commissioning phase to tune the operating parameters to be ready for Run 3.

Surpassing expectations

In its first 10 years of operations, the LHCb experiment has already surpassed expectations. It has enabled physicists to make numerous important measurements in the heavy-flavour sector, including the first observation of the rare decay B0s → µ+µ−, precise measurements of quark-mixing parameters, the discovery of CP violation in the charm sector, and the observation of more than 50 new hadrons including tetraquark and pentaquark states. However, many crucial measurements are currently statistically limited, including those underpinning the so-called flavour anomalies (see Bs decays remain anomalous). Together with the tracker, trigger and other upgrades taking place during LS2, the new SciFi and revamped RICH detectors will put LHCb in prime position to explore these and other searches for new physics for the next 10 years and beyond.

Science Gateway under construction

Science Gateway foundation stone

On 21 June, officials and journalists gathered at CERN for the laying of the “first stone” of Science Gateway, CERN’s new flagship project for science education and outreach. Due to open in 2023, Science Gateway will increase CERN’s capacity to welcome visitors of all ages from near and far. Hundreds of thousands of people per year will have the opportunity to engage with CERN’s discoveries and technology, guided by the people who make it possible.

The project has environmental sustainability at its core. Designed by renowned architect Renzo Piano, the carbon-neutral building will bridge the Route de Meyrin and be surrounded by a freshly planted 400-tree forest. Its five linked pavilions will feature a 900-seat auditorium, immersive spaces, laboratories for hands-on activities for visitors from age five upwards, and many other interactive learning opportunities.

“I would like to express my deepest gratitude to the many partners in our Member and Associate Member States and beyond who are making the CERN Science Gateway possible, in particular to our generous donors,” said CERN Director-General Fabiola Gianotti during her opening speech. “We want the CERN Science Gateway to inspire all those who come to visit with the beauty and the values of science.”

Surveyors eye up a future collider

Levelling measurements

CERN surveyors have performed the first geodetic measurements for a possible Future Circular Collider (FCC), a prerequisite for high-precision alignment of the accelerator’s components. The millimetre-precision measurements are one of the first activities undertaken by the FCC feasibility study, which was launched last year following the recommendation of the 2020 update of the European strategy for particle physics. During the next three years, the study will explore the technical and financial viability of a 100 km collider at CERN, for which the tunnel is a top priority. Geology, topography and surface infrastructure are the key constraints on the FCC tunnel’s position, around which civil engineers will design the optimal route, should the project be approved.

The FCC would cover an area about 10 times larger than the LHC, in which every geographical reference must be pinpointed with unprecedented precision. To provide a reference coordinate system, in May the CERN surveyors, in conjunction with ETH Zürich, the Federal Office of Topography Swisstopo, and the School of Engineering and Management Vaud, performed geodetic levelling measurements along an 8 km profile across the Swiss–French border south of Geneva.

Such measurements have two main purposes. The first is to determine a high-precision surface model, or “geoid”, to map the height above sea level in the FCC region. The second purpose is to improve the present reference system, whose measurements date back to the 1980s when the tunnel housing the LHC was built.

“The results will help to evaluate if an extrapolation of the current LHC geodetic reference systems and infrastructure is precise enough, or if a new design is needed over the whole FCC area,” says Hélène Mainaud Durand, group leader of CERN’s geodetic metrology group.

The FCC feasibility study, which involves more than 140 universities and research institutions from 34 countries, also comprises technological, environmental, engineering, political and economic considerations. It is due to be completed by the time the next strategy update gets under way in the middle of the decade. Should the outcome be positive, and the project receive the approval of CERN’s member states, civil-engineering works could start as early as the 2030s.

Web code auctioned as crypto asset

The web’s original source code

Time-stamped files, stated by Tim Berners-Lee to contain the original source code for the web and digitally signed by him, have sold for US$5.4 million at auction. The files were sold as a non-fungible token (NFT), a form of crypto asset that uses blockchain technology to confer uniqueness.

The web was originally conceived at CERN to meet the demand for automated information-sharing between physicists spread across universities and institutes worldwide. Berners-Lee wrote his first project proposal in March 1989, and the first website, which was dedicated to the World Wide Web project itself and hosted on Berners-Lee’s NeXT computer, went live in the summer of 1991. Less than two years later, on 30 April 1993, and after several iterations in development, CERN placed version three of the software in the public domain. It deliberately did so on a royalty-free, “no-strings-attached” basis, addressing the memo simply “To whom it may concern.”

The seed that led CERN to relinquish ownership of the web was planted 70 years ago, in the CERN Convention, which states that results of its work were to be “published or otherwise made generally available” – a culture of openness that continues to this day.

The auction offer describes the NFT as containing approximately 9555 lines of code, including implementations of the three languages and protocols that remain fundamental to the web today: HTML (Hypertext Markup Language), HTTP (Hypertext Transfer Protocol) and URIs (Uniform Resource Identifiers). The lot also includes an animated visualisation of the code, a letter written by Berners-Lee reflecting on the process of creating it, and a Scalable Vector Graphics representation of the full code created from the original files.

Bidding for the NFT, which auction house Sotheby’s claims is its first-ever sale of a digital-born artefact, opened on 23 June and attracted a total of 51 bids. The sale will benefit initiatives that Berners-Lee and his wife Rosemary Leith support, stated a Sotheby’s press release.

Climate Change and Energy Options for a Sustainable Future


In Climate Change and Energy Options for a Sustainable Future, nuclear physicists Dinesh Kumar Srivastava and V S Ramamurthy explore global policies for an eco-friendly future. Facing the world’s increasing demand for energy, the authors argue for the replacement of fossil fuels with a new mixture of green energy sources including wind energy, solar photovoltaics, geothermal energy and nuclear energy. Srivastava is a theoretical physicist and Ramamurthy is an experimental physicist with research interests in heavy-ion physics and the quark–gluon plasma. Together, they analyse solutions offered by science and technology with a clarity that will likely surpass the expectations of non-expert readers. Following a pedagogical approach with vivid illustrations, the book offers an in-depth description of how each green-energy option could be integrated into a global-energy strategy.

In the first part of the book, the authors provide a wealth of evidence demonstrating the pressing reality of climate change and the fragility of the environment. Srivastava and Ramamurthy then examine unequal access to energy across the globe. There should be no doubt, they write, that human wellbeing is decided by the rate at which power is consumed. Providing enough energy for everyone on the planet to reach a human-development index of 0.8 – defined by the UN as high human development – calls for about 30 trillion kWh per year, roughly double the present global capacity.
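To put that figure in perspective (a rough calculation under our own assumption of a world population of about eight billion, a number not taken from the book):

total_kwh_per_year = 30e12                                # ~30 trillion kWh/yr quoted for an HDI of 0.8 worldwide
population = 8e9                                          # assumed world population
per_capita_kwh = total_kwh_per_year / population          # ~3750 kWh per person per year
average_power_w = per_capita_kwh * 1000 / (365 * 24)      # ~430 W of continuous power per person
print(round(per_capita_kwh), round(average_power_w))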

Human wellbeing is decided by the rate at which power is consumed

Srivastava and Ramamurthy present the basic principles of alternative renewable sources, and offer many examples, including agrivoltaics in Africa, a floating solar-panel station in California and wind turbines in the Netherlands and India. Drawing on their own expertise, they discuss nuclear energy and waste management, accelerator-driven subcritical systems, and the use of high-current electron accelerators for water purification. The book finally turns to sustainability, showing by means of a wealth of scientific data that increasing the supply of renewable energy while reducing carbon-intensive sources can lead to sustainable power across the globe, both cutting global-warming emissions and stabilising energy prices for a fairer economy. The authors stress that any solution should not compromise quality of life or development opportunities in developing countries.

This book could not be more timely. It is an invaluable resource for scientists, policymakers and educators.

Designing an AI physicist

Merging the insights from AI and physics intelligence

Can we trust physics decisions made by machines? In recent applications of artificial intelligence (AI) to particle physics, we have partially sidestepped this question by using machine learning to augment analyses, rather than replace them. We have gained trust in AI decisions through careful studies of “control regions” and painstaking numerical simulations. As our physics ambitions grow, however, we are using “deeper” networks with more layers and more complicated architectures, which are difficult to validate in the traditional way. And to mitigate 10- to 100-fold increases in computing costs, we are planning to fully integrate AI into data collection, simulation and analysis at the high-luminosity LHC.

To build trust in AI, I believe we need to teach it to think like a physicist.

I am the director of the US National Science Foundation’s new Institute for Artificial Intelligence and Fundamental Interactions, which was founded last year. Our goal is to fuse advances in deep learning with time-tested strategies for “deep thinking” in the physical sciences. Many promising opportunities are open to us. Core principles of fundamental physics such as causality and spacetime symmetries can be directly incorporated into the structure of neural networks. Symbolic regression can often translate solutions learned by AI into compact, human-interpretable equations. In experimental physics, it is becoming possible to estimate and mitigate systematic uncertainties using AI, even when there are a large number of nuisance parameters. In theoretical physics, we are finding ways to merge AI with traditional numerical tools to satisfy stringent requirements that calculations be exact and reproducible. High-energy physicists are well positioned to develop trustworthy AI that can be scrutinised, verified and interpreted, since the five-sigma standard of discovery in our field necessitates it.
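As a toy illustration of building a physics symmetry directly into a model’s structure (a sketch under our own assumptions, not a description of the institute’s methods): if a network only ever sees Lorentz-invariant combinations of particle four-momenta, every function it can learn is automatically Lorentz invariant.

import numpy as np

def lorentz_invariants(p1, p2):
    """Invariant masses squared of two four-momenta and of their sum,
    using the metric (+, -, -, -); any function of these is Lorentz invariant."""
    g = np.diag([1.0, -1.0, -1.0, -1.0])
    def m2(p):
        return p @ g @ p
    return np.array([m2(p1), m2(p2), m2(p1 + p2)])

# Two illustrative massless four-momenta (E, px, py, pz) in GeV
p1 = np.array([50.0, 0.0, 0.0, 50.0])
p2 = np.array([50.0, 30.0, 0.0, -40.0])
print(lorentz_invariants(p1, p2))   # these features are unchanged by boosts and rotations

A classifier trained on such features respects the symmetry by construction, instead of having to learn it from data.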

It is equally important, however, that we physicists teach ourselves how to think like a machine.

Jesse Thaler

Modern AI tools yield results that are often surprisingly accurate and insightful, but sometimes unstable or biased. This can happen if the problem to be solved is “underspecified”, meaning that we have not provided the machine with a complete list of desired behaviours, such as insensitivity to noise, sensible ways to extrapolate and awareness of uncertainties. An even more challenging situation arises when the machine can identify multiple solutions to a problem, but lacks a guiding principle to decide which is most robust. By thinking like a machine, and recognising that modern AI solves problems through numerical optimisation, we can better understand the intrinsic limitations of training neural networks with finite and imperfect datasets, and develop improved optimisation strategies. By thinking like a machine, we can better translate first principles, best practices and domain knowledge from fundamental physics into the computational language of AI. 

Beyond these innovations, which echo the logical and algorithmic AI that preceded the deep-learning revolution of the past decade, we are also finding surprising connections between thinking like a machine and thinking like a physicist. Recently, computer scientists and physicists have begun to discover that the apparent complexity of deep learning may mask an emergent simplicity. This idea is familiar from statistical physics, where the interactions of many atoms or molecules can often be summarised in terms of simpler emergent properties of materials. In the case of deep learning, as the width and depth of a neural network grows, its behaviour seems to be describable in terms of a small number of emergent parameters, sometimes just a handful. This suggests that tools from statistical physics and quantum field theory can be used to understand AI dynamics, and yield deeper insights into their power and limitations.
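A small numerical illustration of this kind of simplification (a sketch under our own assumptions: a one-hidden-layer ReLU network with conventionally scaled random weights, not a model from the article): as the hidden layer gets wider, the distribution of the network’s output at a fixed input approaches a Gaussian, so a single emergent parameter – its variance – captures the behaviour.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8)                          # a fixed input vector

def random_net_output(width):
    """Output of a one-hidden-layer ReLU network with randomly drawn, width-scaled weights."""
    W1 = rng.normal(scale=np.sqrt(2.0 / x.size), size=(width, x.size))
    w2 = rng.normal(scale=np.sqrt(1.0 / width), size=width)
    return w2 @ np.maximum(W1 @ x, 0.0)

for width in (10, 100, 1000, 10000):
    y = np.array([random_net_output(width) for _ in range(2000)])
    excess_kurtosis = np.mean((y - y.mean()) ** 4) / y.var() ** 2 - 3.0
    # The excess kurtosis drifts towards zero as the width grows: the output
    # distribution becomes Gaussian and is summarised by its variance alone.
    print(width, round(float(y.var()), 3), round(float(excess_kurtosis), 2))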

If we don’t exploit the full power of AI, we will not maximise the discovery potential of the LHC and other experiments

Ultimately, we need to merge the insights gained from artificial intelligence and physics intelligence. If we don’t exploit the full power of AI, we will not maximise the discovery potential of the LHC and other experiments. But if we don’t build trustworthy AI, we will lack scientific rigour. Machines may never think like human physicists, and human physicists will certainly never match the computational ability of AI, but together we have enormous potential to learn about the fundamental structure of the universe.

Anatoly Vasilievich Efremov 1933–2021

Anatoly Efremov

On 1 January, after a long struggle with a serious illness, Anatoly Vasilievich Efremov of the Bogoliubov Laboratory of Theoretical Physics (BLTP) at JINR, Dubna, Russia, passed away. He was an outstanding physicist, and a world expert in quantum field theory and elementary particle physics.

Anatoly Efremov was born in Kerch, Crimea, to the family of a naval officer. From childhood he retained a love of the sea, and he was an excellent yachtsman. After graduating from the Moscow Engineering Physics Institute in 1958, where his teachers included Isaak Pomeranchuk and his master’s thesis advisor Yakov Smorodinsky, he started working at BLTP JINR. At the time, Dmitrij Blokhintsev was JINR director. Anatoly always considered him his teacher, as he did Dmitry Shirkov, under whose supervision he defended his PhD thesis, “Dispersion theory of low-energy scattering of pions”, in 1962.

In 1971, Anatoly defended his DSc dissertation “High-energy asymptotics of Feynman diagrams”. The underlying work immediately found application in the factorisation of hard processes in quantum chromodynamics (QCD), which is now the theoretical basis of all hard-hadronic processes. Of particular note are his 1979 articles (written together with his PhD student A V Radyushkin) about the asymptotic behaviour of the pion form factor in QCD, and the evolution equation for hard exclusive processes, which became known as the ERBL (Efremov–Radyushkin–Brodsky–Lepage) equation. Proving the factorisation of hard processes enabled many subtle effects in QCD to be described, in particular parton correlations, which became known as the ETQS (Efremov–Teryaev–Qiu–Sterman) mechanism.

During the past three decades, Efremov, together with his students and colleagues, devoted his attention to several problems: the proton spin; the role of the axial anomaly and spin of gluons in the spin structure of a nucleon; correlations of the spin of partons; and momenta of particles in jets (“handedness”). These effects served as the theoretical basis for polarised particle experiments at RHIC at Brookhaven, the SPS at CERN and the new NICA facility at JINR. Anatoly was a member of the COMPASS collaboration at the SPS, where he helped to measure the effects he had predicted.

In 1976 he suggested the first model for the production of cumulative particles at x > 1 off nuclei. Within QCD, Efremov was the first to develop the concept of a nuclear quark–parton structure function, which entails the presence in the nucleus of a hard collective quark sea. This naturally explains both the EMC nuclear effect and cumulative particle production, and unambiguously indicates the existence of multi-quark density fluctuations (fluctons) – a prediction that was later confirmed and led to the so-called nuclear super-scaling phenomenon. Today, similar effects of fluctons or short-range correlations are investigated in a fixed-target experiment at NICA and in several experiments at JLab in the US.

Throughout his life, Anatoly continued to develop concrete manifestations of his ideas based on fundamental theory

Throughout his life, Anatoly continued to develop concrete manifestations of his ideas based on fundamental theory, becoming a teacher and advisor to many physicists at JINR, in Russia and abroad. In 1991 he initiated the Dubna International Workshops on Spin Physics at High Energies and became the permanent chair of their organising committee. He was a long-term and authoritative member of the International Spin Physics Committee, which coordinates work in this area, and had been a regular visitor to the CERN theory unit since the 1970s.

Anatoly Vasilievich Efremov was an undisputed scientific leader who initiated studies of quantum chromodynamics and spin physics in Dubna, a key member of the BLTP JINR staff, and at the same time a modest and very friendly person who enjoyed the highest authority and respect among colleagues. It is this combination of scientific and human qualities that made Anatoly Efremov’s personality unique, and this is how we will remember him.

A feel for fundamental research

Rana Adhikari

This short film focuses on mechanic-turned-physicist Rana Adhikari, who contributed to the 2016 discovery of gravitational waves with the Laser Interferometer Gravitational-wave Observatory (LIGO). A laid-back, confident character, Adhikari takes us through the basics of LIGO while touching upon the future of the field and the public’s view of fundamental research, with directors Currimbhoy, McCarthy and Pedri facilitating the conversation, which runs at just over 12 minutes.

Following high school, Adhikari spent time as a car mechanic. Upon reading Einstein’s The Meaning of Relativity during Hurricane Erin, however, he decided that he wanted to “test the speed of light”. Now a professor at Caltech and a member of the LIGO collaboration, he was awarded a 2019 New Horizons in Physics Prize for his role in the gravitational-wave discovery.

In the film, recorded in 2018, Adhikari explains how fundamental research can be something everyone can get behind, in a world where it is “easy to think we’re all doomed”, and describes the power of collaborations to show the importance of coming together: “It is a statement of collective willpower.” Through varying shots of him at a blackboard, in and around his experiment, and documentary-style face-to-face discussions, the audience quickly gets to know a positive thinker for whom work is clearly a passion, not a job.

The directors trust Adhikari to take centre stage and explain the world of gravitational waves through accurate metaphors that seem freestyled, yet concise. A sharp cut to a shot of turtles seems unnatural at first, before transforming into an analogy of Adhikari himself – the turtles going underwater and popping their heads up into different streams representing Adhikari’s curiosity, and how he got into the field in the first place.

The film is littered with references to music, most notably comparisons between guitar strings and the vibrations that LIGO physicists are searching for. After playing a short, smooth riff, Adhikari describes his unusual way of analysing data: “It is easier to do maths later – sometimes it’s better to just feel it.” He then plays us the “sound” file of two black holes colliding – a short chirp that is repeated as punchy statements about the long history of gravitational waves are overlaid onto the film.

We should be exploring fundamentals driven by curiosity

Towards the end, the focus shifts towards the public’s view of fundamental research. “Lasers weren’t created to scan items in supermarkets,” states Adhikari. “We should be exploring fundamentals driven by curiosity.” The film closes with Adhikari discussing the future of LIGO, tapping a glass to produce a lengthy ring that represents the search for longer-wavelength gravitational waves.

Through Adhikari’s story, LIGO: The Way the Universe Is will, I think, inspire anyone who feels alienated or intimidated by fundamental research.

Forging the future of AI

Jennifer Ngadiuba speaks to fellow Sparks! participants Michael Kagan and Bruno Giussani

Field lines arc through the air. By chance, a cosmic ray knocks an electron off a molecule. It hurtles away, crashing into other molecules and multiplying the effect. The temperature rises, liberating a new supply of electrons. A spark lights up the dark.

Vivienne Ming

The absence of causal inference in practical machine learning touches on every aspect of AI research, application, ethics and policy

Vivienne Ming is a theoretical neuroscientist and a serial AI entrepreneur

This is an excellent metaphor for the Sparks! Serendipity Forum – a new annual event at CERN designed to encourage interdisciplinary collaborations between experts on key scientific issues of the day. The first edition, which will take place from 17 to 18 September, will focus on artificial intelligence (AI). Fifty leading thinkers will explore the future of AI in topical groups, with the outcomes of their exchanges to be written up and published in the journal Machine Learning: Science and Technology. The forum reflects the growing use of machine-learning techniques in particle physics and emphasises the importance that CERN and the wider community place on collaborating with diverse technological sectors. Such interactions are essential to the long-term success of the field.

Anima Anandkumar

AI is orders of magnitude faster than traditional numerical simulations. On the other side of the coin, simulations are being used to train AI in domains such as robotics where real data is very scarce

Anima Anandkumar is Bren professor at Caltech and director of machine learning research at NVIDIA

The likelihood of sparks flying depends on the weather. To take the temperature, CERN Courier spoke to a sample of the Sparks! participants to preview themes for the September event.

Genevieve Bell

2020 revealed unexpectedly fragile technological and socio-cultural infrastructures. How we locate our conversations and research about AI in those contexts feels as important as the research itself

Genevieve Bell is director of the School of Cybernetics at the Australian National University and vice president at Intel

Back to the future

In the 1980s, AI research was dominated by code that emulated logical reasoning. In the 1990s and 2000s, attention turned to softening its strong syllogisms into probabilistic reasoning. Huge strides forward in the past decade have rejected logical reasoning, however, instead capitalising on computing power by letting layer upon layer of artificial neurons discern the relationships inherent in vast data sets. Such “deep learning” has been transformative, fuelling innumerable innovations, from self-driving cars to searches for exotica at the LHC (see Hunting anomalies with an AI trigger). But many Sparks! participants think that the time has come to reintegrate causal logic into AI.

Stuart Russell

Geneva is the home not only of CERN but also of the UN negotiations on lethal autonomous weapons. The major powers must put the evil genie back in the bottle before it’s too late

Stuart Russell is professor of computer science at the University of California, Berkeley and coauthor of the seminal text on AI

“A purely predictive system, such as the current machine learning that we have, that lacks a notion of causality, seems to be very severely limited in its ability to simulate the way that people think,” says Nobel-prize-winning cognitive psychologist Daniel Kahneman. “Current AI is built to solve one specific task, which usually does not include reasoning about that task,” agrees AAAI president-elect Francesca Rossi. “Leveraging what we know about how people reason and behave can help build more robust, adaptable and generalisable AI – and also AI that can support humans in making better decisions.”

Tomaso Poggio

AI is converging on forms of intelligence that are useful but very likely not human-like

Tomaso Poggio is a cofounder of computational neuroscience and Eugene McDermott professor at MIT

Google’s Nyalleng Moorosi identifies another weakness of deep-learning models that are trained with imperfect data: whether AI is deciding who deserves a loan or whether an event resembles physics beyond the Standard Model, its decisions are only as good as its training. “What we call the ground truth is actually a system that is full of errors,” she says.

Nyalleng Moorosi

We always had privacy violation, we had people being blamed falsely for crimes they didn’t do, we had mis-diagnostics, we also had false news, but what AI has done is amplify all this, and make it bigger

Nyalleng Moorosi is a research software engineer at Google and a founding member of Deep Learning Indaba

Furthermore, says influential computational neuroscientist Tomaso Poggio, we don’t yet understand the statistical behaviour of deep-learning algorithms with mathematical precision. “There is a risk in trying to understand things like particle physics using tools we don’t really understand,” he explains, also citing attempts to use artificial neural networks to model organic neural networks. “It seems a very ironic situation, and something that is not very scientific.”

Daniel Kahneman

This idea of partnership, that worries me. It looks to me like a very unstable equilibrium. If the AI is good enough to help the person, then pretty soon it will not need the person

Daniel Kahneman is a renowned cognitive psychologist and a winner of the 2002 Nobel Prize in Economics

Stuart Russell, one of the world’s most respected voices on AI, echoes Poggio’s concerns, and also calls for a greater focus on controlled experimentation in AI research itself. “Instead of trying to compete between Deep Mind and OpenAI on who can do the biggest demo, let’s try to answer scientific questions,” he says. “Let’s work the way scientists work.”

Good or bad?

Though most Sparks! participants firmly believe that AI benefits humanity, ethical concerns are uppermost in their minds. From social-media algorithms to autonomous weapons, current AI overwhelmingly lacks compassion and moral reasoning, is inflexible and unaware of its fallibility, and cannot explain its decisions. Fairness, inclusivity, accountability, social cohesion, security and international law are all impacted, deepening links between the ethical responsibilities of individuals, multinational corporations and governments. “This is where I appeal to the human-rights framework,” says philosopher S Matthew Liao. “There’s a basic minimum that we need to make sure everyone has access to. If we start from there, a lot of these problems become more tractable.”

S Matthew Liao

We need to understand ethical principles, rather than just list them, because then there’s a worry that we’re just doing ethics washing – they sound good but they don’t have any bite

S Matthew Liao is a philosopher and the director of the Center for Bioethics at New York University

Far-term ethical considerations will be even more profound if AI develops human-level intelligence. When Sparks! participants were invited to put a confidence interval on when they expect human-level AI to emerge, answers ranged from [2050, 2100] at 90% confidence to [2040, ∞] at 99% confidence. Other participants said simply “in 100 years” or noted that this is “delightfully the wrong question” as it’s too human-centric. But by any estimation, talking about AI cannot wait.

Francesca Rossi

Only a multi-stakeholder and multi-disciplinary approach can build an ecosystem of trust around AI. Education, cultural change, diversity and governance are equally as important as making AI explainable, robust and transparent

Francesca Rossi co-leads the World Economic Forum Council on AI for humanity and is IBM AI ethics global leader and the president-elect of AAAI

“With Sparks!, we plan to give a nudge to serendipity in interdisciplinary science by inviting experts from a range of fields to share their knowledge, their visions and their concerns for an area of common interest, first with each other, and then with the public,” says Joachim Mnich, CERN’s director for research and computing. “For the first edition of Sparks!, we’ve chosen the theme of AI, which is as important in particle physics as it is in society at large. Sparks! is a unique experiment in interdisciplinarity, which I hope will inspire continued innovative uses of AI in high-energy physics. I invite the whole community to get involved in the public event on 18 September.”

 
