‘First light’ beckons as LCLS-II gears up

LCLS-II linac tunnel and future laser

An ambitious upgrade of the US’s flagship X-ray free-electron-laser facility – the Linac Coherent Light Source (LCLS) at the SLAC National Accelerator Laboratory in California – is nearing completion. Set for “first light” in 2022, LCLS-II will deliver X-ray laser beams that are 10,000 times brighter than LCLS at repetition rates of up to a million pulses per second – generating more X-ray pulses in just a few hours than the current laser has delivered through the course of its 12-year operational lifetime. The cutting-edge physics of the new X-ray laser – underpinned by a cryogenically cooled superconducting radiofrequency (SRF) linac – will enable the two beams from LCLS and LCLS-II to work in tandem. This, in turn, will help researchers observe rare events that happen during chemical reactions and study delicate biological molecules at the atomic scale in their natural environments, as well as potentially shed light on exotic quantum phenomena with applications in next-generation quantum computing and communications systems. 

Strategic commitment

Successful delivery of the LCLS-II linac was possible thanks to a multicentre collaborative effort involving US national and university laboratories – from the decision to pursue an SRF-based machine in 2014, through the design, assembly, testing and transportation of a string of 37 SRF cryomodules (most of them more than 12 m long), to their installation in the SLAC tunnel (see figures “Tunnel vision” and “Keeping cool”). All told, this non-trivial undertaking necessitated the construction of 40 1.3 GHz SRF cryomodules (five of them spares) and three 3.9 GHz cryomodules (one spare) – with delivery of approximately one cryomodule per month from February 2019 until December 2020 to allow completion of the LCLS-II linac installation on schedule by November 2021. 

This industrial-scale programme of works was shaped by a strategic commitment, early on in the LCLS-II design phase, to transfer, and ultimately iterate, the established SRF capabilities of the European XFEL project into the core technology platform used for the LCLS-II SRF cryomodules. Put simply: it would not have been possible to complete the LCLS-II project, within cost and on schedule, without the sustained cooperation of the European XFEL consortium – in particular, colleagues at DESY (Germany), CEA Saclay (France) and several other European laboratories (as well as KEK in Japan) that generously shared their experiences and know-how so that the LCLS-II collaboration could hit the ground running. 

Better together 

These days, large-scale accelerator or detector projects are very much a collective endeavour. Not only is the sprawling scope of such projects beyond the capacity of any single organisation, but the risks of overspend and slippage also increase greatly with a “do-it-on-your-own” strategy. When the LCLS-II project opted for an SRF technology pathway in 2014 (to maximise laser performance and future-proofing), the logical next step was to build a broad-based coalition with other US Department of Energy (DOE) national laboratories and universities. In this case, SLAC, Fermilab, Jefferson Lab (JLab) and Cornell University contributed expertise for cryomodule production, while Argonne National Laboratory and Lawrence Berkeley National Laboratory managed delivery of the undulators and photoinjector for the project. Without this joint effort, the start-up time for LCLS-II would have increased significantly, extending the overall project by several years.

LCLS-II cryomodule

Each partner brought something unique to the LCLS-II collaboration. While SLAC was still a relative newcomer to SRF technologies, the lab had a management team that was familiar with building large-scale accelerators (following successful delivery of the LCLS). The priority for SLAC was therefore to scale up its small nucleus of SRF experts by recruiting experienced SRF technologists and engineers to the staff team. 

In contrast, the JLab team brought an established track-record in the production of SRF cryomodules, having built its own machine, the Continuous Electron Beam Accelerator Facility (CEBAF), as well as cryomodules for the Spallation Neutron Source (SNS) linac at Oak Ridge National Laboratory in Tennessee. Cornell, too, came with a rich history in SRF R&D – capabilities that, in turn, helped to solidify the SRF cavity preparation process for LCLS-II. 

Finally, Fermilab had, at the time, recently built two cutting-edge cryomodules of the same style as that chosen for LCLS-II. To fabricate these modules, Fermilab worked closely with the team at DESY to set up the same type of production infrastructure used on the European XFEL. From that perspective, the required tooling and fixtures were all ready to go for the LCLS-II project. While Fermilab was the “designer of record” for the SRF cryomodule, with primary responsibility for delivering a working design to meet LCLS-II requirements, the realisation of an optimised technology platform was, in large part, a team effort involving SRF experts from across the collaboration.

Challenges are inevitable when developing new facilities at the limits of known technology

Operationally, the use of two facilities to produce the SRF cryomodules – Fermilab and JLab – ensured a compressed delivery schedule and increased flexibility within the LCLS-II programme. On the downside, the dual-track production model increased infrastructure costs (with the procurement of duplicate sets of tooling) and meant additional oversight to ensure a standardised approach across both sites. Ongoing procurements were divided equally between Fermilab and JLab, with deliveries often made to each lab directly from the industry suppliers. Each facility, in turn, kept its own inventory of parts, so as to minimise interruptions to cryomodule assembly owing to any supply-chain issues (and enabling critical components to be transferred between labs as required). What’s more, the close working relationship between Fermilab and JLab kept any such interruptions to a minimum.

Collective problems, collective solutions 

While the European XFEL provided the template for the LCLS-II SRF cryomodule design, several key elements of the LCLS-II approach subsequently evolved to align with the CW operation requirements and the specifics of the SLAC tunnel. Success in tackling these technical challenges – across design, assembly, testing and transportation of the cryomodules – is testament to the strength of the LCLS-II collaboration and the collective efforts of the participating teams in the US and Europe. 

SRF cryomodule

For starters, the thermal performance specification of the SRF cavities exceeded the state-of-the-art and required development and industrialisation of the concept of nitrogen doping (a process in which SRF cavities are heat-treated in a nitrogen atmosphere to increase their cryogenic efficiency and, in turn, lower the overall operating costs of the linac). The nitrogen-doping technique was invented at Fermilab in 2012 but, prior to LCLS-II construction, had been used only in an R&D setting.

Adaptability in real time 

The priority was clear: to transfer the nitrogen-doping capability to LCLS-II’s industry partners, so that the cavity manufacturers could perform the necessary materials processing before final helium-vessel jacketing. During this knowledge transfer, it was found that nitrogen-doped cavities are particularly sensitive to the base niobium sheet material – something the collaboration only realised once the cavity vendors were in full production. This resulted in a number of changes to the heat-treatment temperature, depending on which material supplier was used and the specific properties of the niobium sheet deployed in different production runs. JLab, for its part, held the contract for the cavities and pulled out all the stops to ensure success.

At the same time, the conversion from pulsed to CW operation necessitated a faster cooldown cycle for the SRF cavities, requiring several changes to the internal piping, a larger exhaust chimney on the helium vessel and the addition of two new cryogenic valves per cryomodule. Also significant is the 0.5% longitudinal slope of the existing SLAC tunnel floor, which dictated careful attention to liquid-helium management in the cryomodules (with a separate two-phase line and liquid-level probes at both ends of every module). 

However, the biggest setback during LCLS-II construction involved the loss of beamline vacuum during cryomodule transport. Specifically, two cryomodules had their beamlines vented and required complete disassembly and rebuilding – resulting in a five-month moratorium on shipping of completed cryomodules in the second half of 2019. It turned out that a small change to a coupler flange, thought to be inconsequential, left the cold coupler assembly susceptible to resonances excited during transport. The result was a bellows tear that vented the beamline. Unfortunately, initial “road tests” with a similar, though not exactly identical, prototype cryomodule had not surfaced this behaviour. 

Such challenges are inevitable when developing new facilities at the limits of known technology. In the end, the problem was successfully addressed by drawing on the diverse talents of the collaboration to brainstorm solutions, with the available access ports allowing an elastomer wedge to be inserted to secure the vulnerable section. A key take-away here is the need for future projects to perform thorough transport analysis, verify the transport loads using mock-ups or dummy devices, and install adequate instrumentation to ensure granular data analysis before long-distance transport of mission-critical components. 

Shine on: from LCLS-II to LCLS-II HE

Last cryomodule

As with many accelerator projects, LCLS-II is not an end-point in itself, but rather an evolutionary step within a longer-term development roadmap. In fact, work is already under way on LCLS-II HE – a project that will increase the energy of the CW SRF linac from 4 to 8 GeV, enabling the photon energy range to be extended to at least 13 keV, and potentially up to 20 keV, at 1 MHz repetition rates. 

To ensure continuity of production for LCLS-II HE, 25 next-generation cryomodules are in the works, with even higher performance specifications versus their LCLS-II counterparts, while upgrades to the source and beam transport are also being finalised. 

In addition to LCLS-II HE, other SRF disciplines will benefit from the R&D and technological innovation that has come out of the LCLS-II construction programme. SRF technologies are constantly evolving and advancing the state-of-the-art, whether that’s in single-cavity cryogen-free systems, additional FEL CW upgrades to existing machines, or the building blocks that will underpin enormous new machines like the proposed International Linear Collider. 

Upon completion of the assembly phase, all LCLS-II cryomodules were subsequently tested at either Fermilab or JLab, with one module tested at both locations to ensure reproducibility and consistency of results. For high Q₀ performance in nitrogen-doped cavities, cooldown flow rates of at least 30 g/s of liquid helium were found to give the best results, helping to expel magnetic flux that could otherwise be trapped in the cavity. 

Overall, cryomodule performance on the test stands exceeded specifications, with an average energy gain per cryomodule of 158 MV (versus specification of 128 MV) and average Q₀ of 3 × 10¹⁰ (versus specification of 2.7 × 10¹⁰). Looking ahead, attention is already shifting to the real-world cryomodule performance in the SLAC tunnel – something that will be measured for the first time in 2022.
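
As a rough back-of-envelope check (an illustration only, not a statement of the actual machine configuration), the sketch below compares the implied linac energy with the nominal 4 GeV beam energy quoted for LCLS-II. It assumes that 35 of the 37 installed cryomodules are accelerating 1.3 GHz modules, ignoring the injector and the two 3.9 GHz linearising modules – assumptions introduced here, not drawn from the article.

```python
# Back-of-envelope energy margin implied by the test-stand results.
# Assumptions (illustrative): 35 accelerating 1.3 GHz cryomodules; the injector
# and the two 3.9 GHz modules are ignored.
n_accel_cm  = 35
spec_MV     = 128   # specified average energy gain per cryomodule
measured_MV = 158   # average gain measured on the test stands

print(f"at spec:     {n_accel_cm * spec_MV / 1000:.1f} GeV")      # ~4.5 GeV vs the 4 GeV target
print(f"as measured: {n_accel_cm * measured_MV / 1000:.1f} GeV")  # ~5.5 GeV, comfortable margin
```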

Transferable lessons

For all members of the collaboration working on the LCLS-II cryomodules, this challenging project holds many lessons. Most important is the nature of collaboration itself, building a strong team and using that strength to address problems in real-time as they arise. The mantra “we are all in this together” should be front-and-centre for any multi-institutional scientific endeavour – as it was in this case. With all parties making their best efforts, the goal should be to utilise the combined strengths of the collaboration to mitigate challenges. Solutions need to be thought of in a more global sense, since the best answer might mean another collaborator taking more onto their plate. Collaboration implies true partnership and a working model very different to a transactional customer–vendor relationship.

Collaboration implies true partnership and a working model very different to a transactional relationship

From a planning perspective, it’s vital to ensure that the initial project cost and schedule are consistent with the technical challenges and the preparedness of the infrastructure. Prototypes and pre-series production runs reduce risk and cost in the long term and should be part of the plan, but there must be sufficient time for data analysis and changes to be made after a prototype run in order for it to be useful. Time spent on detailed technical reviews is also time well spent. New designs of complex components need detailed oversight and review, and should be controlled by a team, rather than a single individual, so that sign-off on any detailed design changes is made by an informed collective. 

Planning ahead

Work planning and control is another essential element for success and safety. This idea needs to be built into the “manufacturing system”, including in the cost and schedule, and be part of each individual’s daily checklist. No one disagrees with this concept, but good intentions on their own will not suffice. As such, required safety documentation should be clear and unambiguous, and be reviewed by people with relevant expertise. Production data and documentation need to be collected, made easily available to the entire project team, and analysed regularly for trends, both positive and negative. 

JLab cryomodule

Supply chain, of course, is critical in any production environment – and LCLS-II is no exception. When possible, it is best to have parts procured, inspected, accepted and on-the-shelf before production begins, thereby eliminating possible workflow delays. Pre-stocking also allows adequate time to recycle and replace parts that do not meet project specifications. Also worth noting is that it’s often the smaller components – such as bellows, feedthroughs and copper-plated elements – that drive workflow slowdowns. A key insight from LCLS-II is to place purchase orders early, stay on top of vendor deliveries, and perform parts inspections as soon as possible post-delivery. Projects also benefit from having clearly articulated pass/fail criteria and established procedures for handling non-conformance – all of which alleviates the need to make critical go/no-go acceptance decisions in the face of schedule pressures.

Finally, it’s worth highlighting the broader impact – both personal and professional – on individual team members participating in a big-science collaboration like LCLS-II. At the end of the build – after designs were completed, problems solved, production rates met, and cryomodules delivered and installed – what remained were the friendships that had been nurtured over several years. The collaboration amongst partners, both formal and informal, who truly cared about the project’s success and had each other’s backs when issues arose: these are the things that solidified the mutual respect and camaraderie and, in the end, made LCLS-II such a rewarding project.

On your way to Cyclotron Road?

Rachel Slaybaugh

Entrepreneurial scientists and engineers take note: the next round of applications to Cyclotron Road’s two-year fellowship programme will open in the fourth quarter, offering a funded path for early-stage start-ups in “hard tech” (i.e. physical hardware rather than software) to fast-track development of their applied research innovations. Now in its sixth year, Cyclotron Road is a division of the US Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley, California) and is run in partnership with non-profit Activate, a specialist provider of entrepreneurship education and training. 

Successful applicants who navigate the rigorous merit-review process will receive $100,000 of research support for their project as well as a stipend, health insurance and access to Berkeley Lab’s world-class research facilities and scientific expertise. CERN Courier gets the elevator pitch from Rachel Slaybaugh, Cyclotron Road division director. 

Summarise your objectives for Cyclotron Road

Our mission is to empower science innovators to develop their ideas from concept to first product, positioning them for broad societal impact in the long term. We create the space for fellows to commercialise their ideas by giving them direct access to the world-leading scientists and facilities at Berkeley Lab. Crucially, we reinforce that support with a parallel curriculum of specialist entrepreneurship education from our programme partner Activate. 

What are the benefits of embedding the fellowship programme at Berkeley Lab?

Cyclotron Road is not a one-size-fits-all programme, so the benefits vary from fellow to fellow. Some of the fellows and their teams only loosely make use of Berkeley Lab services, while others will embed in a staff scientist’s lab and engage in close collaborative R&D work. The value proposition is that our fellows have access to Berkeley Lab and its resources but can choose what model works best for them. It seems to work: since 2015, Cyclotron Road fellows have collaborated with more than 70 Berkeley Lab scientists, while the organisations they’ve founded have collectively raised more than $360 million in follow-on funding. 

What do you look for in prospective Cyclotron Road fellows? 

We want smart, talented individuals with a passion to develop and grow their own early-stage hard-tech venture. Adaptability is key: Cyclotron Road fellows need to have the technical and intellectual capability to pivot their business plan if needed. As such, our fellows are collaborative team players by default, coachable and hungry to learn. They don’t need to take all the advice they’re given in the programme, but they do need to be open-minded and willing to listen to a range of viewpoints regarding technology innovation and commercial positioning. 

Explain the role of Activate in the professional development of fellows 

Activate is an essential partner in the Cyclotron Road mission. Its team handles the parallel programme of entrepreneurship education, including an onboarding bootcamp, weekly mentoring and quarterly “deep-dives” on all aspects of technology and business development. The goal is to turn today’s talented scientists and engineers into tomorrow’s technology CEOs and CTOs. Activate also has staff to curate strategic relationships for our fellows, helping start-ups connect with investors, industry partners and equipment suppliers. That’s reinforced by the opportunity to link up with the amazing companies in Cyclotron Road’s alumni network.

How does Cyclotron Road benefit Berkeley Lab?

There are several upsides. We’re bringing entrepreneurship and commercial thinking into the lab, helping Berkeley scientists build bridges with these new technology companies – and the innovators driving them. That has paybacks in terms of future funding proposals, giving our researchers a better understanding of how to position their research from an applications perspective. The knowledge transfer between Cyclotron Road fellows and Berkeley Lab scientists is very much a two-way process: while fellows progress their commercial ideas, they are often sparking new lines of enquiry among their collaborators here at Berkeley Lab. 

How are you broadening participation?

Fellows receive a yearly living stipend of $80,000 to $110,000, health insurance, a relocation stipend and a travel allowance – all of which means they’re able to focus full-time on their R&D. Our priority is to engage a diverse community of researchers – not just those individuals who already have a high net worth or access to a friends-and-family funding round. We’re building links with universities and labs outside the traditional technology hot-spots like Silicon Valley, Boston and Seattle, as well as engaging institutions that serve under-represented minorities. Worth adding that Cyclotron Road welcomes international applicants in a position to relocate to California for two years.  

Further information on the Cyclotron Road fellowship programme: https://cyclotronroad.lbl.gov/.

Partnership yields big wins for the EIC

The EIC in outline

The international nuclear-physics community will be front-and-centre as a unique research facility called the Electron–Ion Collider (EIC) moves from concept to reality through the 2020s – the latest progression in the line of large-scale accelerator programmes designed to probe the fundamental forces and particles that underpin the structure of matter. 

Decades of research in particle and nuclear physics have shown that protons and neutrons, once thought to be elementary, have a rich, dynamically complex internal structure of quarks, anti-quarks and gluons, the understanding of which is fundamental to the nature of matter as we experience it. By colliding high-energy beams of electrons with high-energy beams of protons and heavy ions, the EIC is designed to explore this hidden subatomic landscape with the resolving power to image its behaviour directly. Put another way: the EIC will provide the world’s most powerful microscope for studying the “glue” that binds the building blocks of matter.

Luminous performance

When the EIC comes online in the early 2030s, the facility will perform precision “nuclear femtography” by zeroing in on the substructure of quarks and gluons in a manner comparable to the seminal studies of the proton using electron–proton collisions at DESY’s HERA accelerator in Germany between 1992 and 2007 (see “Nuclear femtography to delve deep into nuclear matter” panel). However, the EIC will produce a luminosity (collision rate) 100 times greater than the highest achieved by HERA and, for the first time in such a collider, will provide spin-polarised beams of both protons and electrons, as well as high-energy collisions of electrons with heavy ions. All of which will require unprecedented performance in terms of the power, intensity and spatial precision of the colliding beams, with the EIC expected to provide not only transformational advances in nuclear science, but also transferable technology innovations to shape the next generation of particle accelerators and detectors.

The US Department of Energy (DOE) formally initiated the EIC project in December 2019 with the approval of a “mission need”. That was followed in June of this year with the next “critical decision” to proceed with funding for engineering and design prior to construction (with the estimated cost of the build about $2 billion). The new facility will be sited at Brookhaven National Laboratory (BNL) in Long Island, New York, utilising components and infrastructure from BNL’s Relativistic Heavy Ion Collider (RHIC), including the polarised proton and ion-beam capability and the 3.8 km underground tunnel. Construction will be carried out as a partnership between BNL and Thomas Jefferson National Accelerator Facility (JLab) in Newport News, Virginia, home of the Continuous Electron Beam Accelerator Facility (CEBAF), which has pioneered many of the enabling technologies needed for the EIC’s new electron rings. 

Beyond the BNL–JLab partnership, the EIC is very much a global research endeavour. While the facility is not scheduled to become operational until early in the next decade, an international community of scientists is already hard at work within the EIC User Group. Formed in 2016, the group now has around 1300 members – representing 265 universities and laboratories from 35 countries – engaged collectively on detector R&D, design and simulation as well as initial planning for the EIC’s experimental programme. 

A cutting-edge accelerator facility

Being the latest addition to the line of particle colliders, the EIC represents a fundamental link in the chain of continuous R&D, knowledge transfer and innovation underpinning all manner of accelerator-related technologies and applications – from advanced particle therapy systems for the treatment of cancer to ion implantation in semiconductor manufacturing. 

The images “The EIC in outline” and “Going underground” show the planned layout of the EIC, where the primary beams circulate inside the existing RHIC tunnel to enable the collisions of high-energy (5–18 GeV) electrons (and possibly positrons) with high-energy ion beams of up to 275 GeV/nucleon. One thing is certain: the operating parameters of the EIC, with luminosities of up to 10³⁴ cm⁻² s⁻¹ and up to 85% beam polarisation, will push the design of the facility beyond the limits set by previous accelerator projects in a number of core technology areas.
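
To put the luminosity figure in context, the instantaneous event rate of any process is simply luminosity times cross-section. The short sketch below uses the quoted 10³⁴ cm⁻² s⁻¹ together with a purely illustrative 1 µb cross-section – a hypothetical value chosen for the arithmetic, not an EIC physics number.

```python
# Order-of-magnitude event rate: rate = luminosity x cross-section.
luminosity = 1e34               # cm^-2 s^-1, the article's figure
sigma_ub   = 1.0                # hypothetical 1 microbarn process (illustrative only)
sigma_cm2  = sigma_ub * 1e-30   # 1 ub = 1e-30 cm^2 (1 barn = 1e-24 cm^2)

rate_hz = luminosity * sigma_cm2
print(f"event rate ~ {rate_hz:.0e} Hz")   # ~1e+04 events per second for this illustrative case
```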

The EIC

For starters, the EIC will require significant advances in the field of superconducting radiofrequency (SRF) systems operating under high current conditions, including control of higher-order modes, beam RF stability and crab cavities. A major challenge is the achievement of strong cooling of intense proton and light-ion beams to manage emittance growth owing to intrabeam scattering. Such a capability will require unprecedented control of low-energy electron-beam quality with the help of ultrasensitive and precise photon detection technologies – innovations that will likely yield transferable benefits for other areas of research reliant on electron-beam technology (e.g. free-electron lasers). 

The EIC design for strong cooling of the ion beams specifies a superconducting energy-recovery linac with a virtual beam power of 15 MW, an order-of-magnitude increase versus existing machines. With this environmentally friendly new technology, the rapidly cycling beam of low-energy electrons (150 MeV) is accelerated within the linac and passes through a cooling channel where it co-propagates with the ions. The cooling electron beam is then returned to the linac, timed to see the decelerating phase of the RF field, and the beam power is thus recovered for the next accelerating cycle – i.e. beam power is literally recycled after each cooling pass.
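
As a quick, hedged sanity check of what a 15 MW “virtual beam power” implies (illustrative arithmetic only, not the detailed EIC design), dividing the power by the 150 MeV electron energy gives the average current the energy-recovery linac must sustain.

```python
# Illustrative arithmetic: average current implied by the quoted numbers.
# For singly charged particles, beam power [W] = kinetic energy [eV] x current [A].
virtual_power_w = 15e6     # 15 MW virtual beam power
beam_energy_ev  = 150e6    # 150 MeV cooling electrons

current_a = virtual_power_w / beam_energy_ev
print(f"implied average current ~ {current_a * 1e3:.0f} mA")   # ~100 mA
```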

The EIC will also require complex operating schemes. A case in point: fresh, highly polarised electron bunches will need to be frequently injected into the electron storage ring without disturbing the collision operation of previously injected bunches. Further complexity comes in maximising the luminosity and polarisation over a large range of centre-of-mass energies and for the entire spectrum of ion beams. With a control system that can monitor hundreds of beam parameters in real-time, and with hundreds of points where the guiding magnetic fields can be tuned on the fly, there is a vast array of “knobs-to-be-turned” to optimise overall performance. Inevitably, this is a facility that will benefit from the use of artificial intelligence and machine-learning technologies to maximise its scientific output. 
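
To make the “knobs-to-be-turned” picture concrete, here is a minimal, purely illustrative sketch of automated tuning: a random-search loop that nudges a handful of hypothetical machine settings to maximise a mock figure of merit. It is not the EIC control system or any specific ML method – just a toy stand-in for the kind of optimisation a model-based or machine-learning tuner would perform far more efficiently over hundreds of real parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def mock_figure_of_merit(knobs):
    # Hypothetical objective that peaks when every knob sits at its (unknown) optimum.
    optimum = np.array([0.2, -0.5, 1.0, 0.0])
    return float(np.exp(-np.sum((knobs - optimum) ** 2)))

knobs, best = np.zeros(4), 0.0
for _ in range(5000):
    trial = knobs + rng.normal(scale=0.05, size=knobs.shape)   # small random nudge
    value = mock_figure_of_merit(trial)
    if value > best:                                           # keep only improvements
        knobs, best = trial, value

print(knobs.round(2), round(best, 3))   # settles close to the hidden optimum
```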

Prototype bunched-beam polarised electron source

At the same time, the EIC and CERN’s High-Luminosity LHC user communities are working in tandem to realise more capable technologies for particle detection as well as innovative electronics for large-scale data read-out and processing. Exploiting advances in chip technology, with feature sizes as small as 65 nm, multipixel silicon sensors are in the works for charged-particle tracking, offering single-point spatial resolution better than 5 µm, very low mass and on-chip, individual-pixel readout. These R&D efforts open the way to compact arrays of thin solid-state detectors with broad angular coverage to replace large-volume gaseous detectors. 

Coupled with leading-edge computing capabilities, such detectors will allow experiments to stream data continuously, rather than selecting small samples of collisions for readout. Taken together, these innovations will yield no shortage of downstream commercial opportunities, feeding into next-generation medical imaging systems, for example, as well as enhancing industrial R&D capacity at synchrotron light-source facilities.

The BNL–JLab partnership

As the lead project partners, BNL and JLab have a deep and long-standing interest in the EIC programme and its wider scientific mission. In 2019, BNL and JLab each submitted their own preconceptual designs to DOE for a future high-energy and high-luminosity polarised EIC based around existing accelerator infrastructure and facilities. In January 2020, DOE subsequently selected BNL as the preferred site for the EIC, after which the two labs immediately committed to a full partnership between their respective teams (and other collaborators) in the construction and operation of the facility. 

Nuclear femtography to delve deep into nuclear matter

Internal quark and gluon substructure of the proton

Nuclear matter is inherently complex because the interactions and structures therein are inextricably mixed up: its constituent quarks are bound by gluons that also bind themselves. Consequently, the observed properties of nucleons and nuclei, such as their mass and spin, emerge from a dynamical system governed by quantum chromodynamics (QCD). The quark masses, generated via the Higgs mechanism, only account for a tiny fraction of the mass of a proton, leaving fundamental questions about the role of gluons in the structure of nucleons and nuclei still unanswered. 

The underlying nonlinear dynamics of the gluon’s self-interaction is key to understanding QCD and fundamental features of the strong interactions such as dynamical chiral symmetry-breaking and confinement. Yet despite the central role of gluons, and the many successes in our understanding of QCD, the properties and dynamics of gluons remain largely unexplored. 

If that’s the back-story, the future is there to be written by the EIC, a unique machine that will enable physicists to shed light on the many open questions in modern nuclear physics. 

Back to basics

At the fundamental level, the way in which a nucleon or nucleus reveals itself in an experiment depends on the kinematic regime being probed. A dynamic structure of quarks and gluons is revealed when probing nucleons and nuclei at higher energies, or with higher resolutions. Here, the nucleon transforms from a few-body system, with its structure dominated by three valence quarks, to a regime where it is increasingly dominated by gluons generated through gluon radiation, as discovered at the former HERA electron–proton collider at DESY. Eventually, the gluon density becomes so large that the gluon radiation is balanced by gluon recombination, leading to nonlinear features of the strong interaction.

The LHC and RHIC have shown that neutrons and protons bound inside nuclei already exhibit the collective behaviour that reveals QCD substructure under extreme conditions, as initially seen with high-energy heavy-ion collisions. This has triggered widespread interest in the study of the strong force in the context of condensed-matter physics, and the understanding that the formation and evolution of the extreme phase of QCD matter is dominated by the properties of gluons at high density.

The subnuclear genetic code

The EIC will enable researchers to go far beyond the present one-dimensional picture of nuclei and nucleons, where the composite nucleon appears as a bunch of fast-moving (anti-)quarks and gluons whose transverse momenta or spatial extent are not resolved. Specifically, by correlating the information of the quark and gluon longitudinal momentum component with their transverse momentum and spatial distribution inside the nucleon, the EIC will enable nuclear femtography. 

Such femtographic images will provide, for the first time, insight into the QCD dynamics inside hadrons, such as the interplay between sea quarks and gluons. The ultimate goal is to experimentally reconstruct and constrain the so-called Wigner functions – the quantities that encode the complete tomographic information and constitute a QCD “genetic map” of nucleons and nuclei.

  Adapted from “Electron–ion collider on the horizon” by Elke-Caroline Aschenauer, BNL, and Rolf Ent, JLab.

The construction project is led by a joint BNL–JLab management team that integrates the scientific, engineering and management capabilities of JLab into the BNL design effort. JLab, for its part, leads on the design and construction of SRF and cryogenics systems, the energy-recovery linac and several of the electron injector and storage-ring subsystems within the EIC accelerator complex. 

More broadly, BNL and JLab are gearing up to work with US and international partners to meet the technical challenges of the EIC in a cost-effective, environmentally responsible manner. The goal: to deliver a leading-edge research facility that will build upon the current CEBAF and RHIC user base to ensure engagement – at scale – from the US and international nuclear-physics communities. 

As such, the labs are jointly hosting the EIC experiments in the spirit of a DOE user facility for fundamental research, while the BNL–JLab management team coordinates the engagement of other US and international laboratories into a multi-institutional partnership for EIC construction. Work is also under way with prospective partners to define appropriate governance and operating structures to enhance the engagement of the user community with the EIC experimental programme. 

With international collaboration hard-wired into the EIC’s working model, the EIC User Group has been in the vanguard of a global effort to develop the science goals for the facility – as well as the experimental programme to realise those goals. Most importantly, the group has carried out intensive studies over the past two years to document the measurements required to deliver EIC’s physics objectives and the resulting detector requirements. This work also included an exposition of evolving detector concepts and a detailed compendium of candidate technologies for the EIC experimental programme.

Cornerstone collaborations 

The resulting Yellow Report, released in March 2021, provides the basis for the ongoing discussion of the most effective implementation of detectors, including the potential for complementary detectors in the two possible collision points as a means of maximising the scientific output of the EIC facility (see “Detectors deconstructed”). Operationally, the report also provides the cornerstone on which EIC detector proposals are currently being developed by three international “proto-collaborations”, with significant components of the detector instrumentation being sourced from non-US partners. 

The EIC represents a fundamental link in the chain of continuous R&D and knowledge transfer

Along every coordinate, it’s clear that the EIC project profits enormously from its synergies with accelerator and detector R&D efforts worldwide. To reinforce those benefits, a three-day international workshop was held in October 2020, focusing on EIC partnership opportunities across R&D and construction of accelerator components. This first Accelerator Partnership Workshop, hosted by the Cockcroft Institute in the UK, attracted more than 250 online participants from 26 countries for a broad overview of EIC and related accelerator-technology projects. A follow-up workshop, scheduled for October 2021 and hosted by the TRIUMF Laboratory in Canada, will focus primarily on areas where advanced “scope of work” discussions are already under way between the EIC project and potential partners.

Nurturing talent 

While discussion and collaboration between the BNL and JLab communities were prioritised from the start of the EIC planning process, a related goal is to get early-career scientists engaged in the EIC physics programme. To this end, two centres were created independently: the Center for Frontiers in Nuclear Science (CFNS) at Stony Brook University, New York, and the Electron-Ion Collider Center (EIC2) at JLab.

The CFNS, established jointly by BNL and Stony Brook University in 2017, was funded by a generous donation from the Simons Foundation (a not-for-profit organisation that supports basic science) and a grant from the State of New York. As a focal point for EIC scientific discourse, the CFNS mentors early-career researchers seeking long-term opportunities in nuclear science while simultaneously supporting the formation of the EIC’s experimental collaborations. 

Conceptual general-purpose detector

Core CFNS activities include EIC science workshops and short ad-hoc meetings (proposed and organised by members of the EIC User Group), alongside a robust postdoctoral fellow programme to guide young scientists in EIC-related theory and experimental disciplines. An annual summer-school series on high-energy QCD also kicked off in 2019, with most of the presentations and resources from the wide-ranging CFNS events programme available online to participants around the world. 

In a separate development, the CFNS recently initiated a dedicated programme for under-represented minorities (URMs). The Edward Bouchet Initiative provides a broad portfolio of support to URM students at BNL, including grants to pursue masters or doctoral degrees at Stony Brook on EIC-related research. 

Meanwhile, the EIC2 was established at JLab with funding from the State of Virginia to involve outstanding JLab students and postdocs in EIC physics. Recognising that there are many complementary overlaps between JLab’s current physics programme and the physics of the future EIC, the EIC2 provides financial support to three PhD students and three postdocs each year to expand their current research to include the physics that will become possible once the new collider comes online. 

Beyond their primary research projects, this year’s cohort of six EIC2 fellows worked together to organise and establish the first EIC User Group Early Career workshop. The event, designed specifically to highlight the research of young scientists, was attended by more than 100 delegates and is expected to become an annual part of the EIC User Group meeting.

The future, it seems, is bright, with CFNS and EIC2 playing their part in ensuring that a diverse cadre of next-generation scientists and research leaders is in place to maximise the impact of EIC science over the decades to come.

Strongly unbalanced photon pairs

Figure 1

Most processes resulting from proton–proton collisions at the LHC are affected by the strong force – a difficult-to-model part of the Standard Model involving non-perturbative effects. This can be problematic when measuring rare processes not mediated by strong interactions, such as those involving the Higgs boson, and when searching for new particles or interactions. To ensure such processes are not obscured, precise knowledge of the more dominant strong-interaction effects, including those caused by the initial-state partons, is a prerequisite to LHC physics analyses.

The electromagnetic production of a photon pair is the dominant background to the H → γγ decay channel – a process that is instrumental to the study of the Higgs boson. Despite its electromagnetic nature, diphoton production is affected by surprisingly large strong-interaction effects. Thanks to precise ATLAS measurements of diphoton processes using the full Run-2 dataset, the collaboration is able to probe these effects and scrutinise state-of-the-art theoretical calculations.

Measurements studying strong interactions typically employ final states that include jets produced from the showering and hadronisation of quarks and gluons. However, the latest ATLAS analysis instead uses photons, which can be very precisely measured by the detector. Although photons do not carry a colour charge, they interact with quarks as the latter carry electric charge. As a result, strong-interaction effects on the quarks can alter the characteristics of the measured photons. The conservation of momentum allows us to quantify this effect: the LHC’s proton beams collide head-on, so the net momentum transverse to the beam axis must be zero for the final-state particles. Any signs to the contrary indicate additional activity in the event with equivalent but opposite transverse momentum, usually arising from quarks and gluons radiated from the initial-state partons. Therefore, by measuring the transverse momentum of photon pairs, and related observables, the strong interaction may be indirectly probed.
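
The momentum-balance argument can be made concrete in a few lines of code. The sketch below (illustrative only, using hypothetical photon kinematics) computes the transverse momentum of a photon pair from each photon’s pT and azimuthal angle: exactly back-to-back photons balance to zero pair pT, while any recoil against additional radiation shows up as a non-zero value.

```python
import math

def pair_pt(pt1, phi1, pt2, phi2):
    """Transverse momentum of the photon pair, from each photon's pT (GeV) and azimuth (rad)."""
    px = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    py = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    return math.hypot(px, py)

# Perfectly back-to-back photons balance exactly: pair pT ~ 0
print(pair_pt(50.0, 0.0, 50.0, math.pi))          # ~0 GeV
# Recoil against extra quark/gluon radiation shows up as a non-zero pair pT
print(pair_pt(55.0, 0.1, 50.0, math.pi - 0.1))    # >0 GeV
```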

A surprising role of the strong interaction in electromagnetic diphoton production is revealed

Comparing the measured values to predictions reveals the surprising role of the strong interaction in electromagnetic diphoton production. In a simple picture without the strong interaction, the momentum of each photon should perfectly balance in the transverse plane. However, this simplistic expectation does not match the measurements (see figure 1). Measuring the differential cross-section as a function of the transverse momentum of the photon pair, ATLAS finds that most of the measured photon pairs (black points) have low but non-zero transverse momenta, with a peak at approximately 10 GeV, followed by a smoothly falling distribution towards higher values.

Extending calculations to encompass next-to-next-to-leading-order corrections in the strong-interaction coupling constant (purple line), the impact of the strong interaction becomes manifest. The measured values at high transverse momenta are well described by these predictions, including the bump observed at 70 GeV, which is another manifestation of higher-order strong-interaction effects. Monte Carlo event generators like Sherpa (red line), which combine similar calculations with approximate simulations of arbitrarily many quark and gluon emissions – especially relevant at low energies – properly describe the entire measured distribution.

The results of this analysis, which also include measurements of other distributions such as angular variables between the two photons, don’t just probe the strong interaction in detail – they also provide a benchmark for this important background process.

Arthur M Poskanzer 1931–2021

Art Poskanzer

Arthur M (Art) Poskanzer, distinguished senior scientist emeritus at Lawrence Berkeley National Laboratory (LBNL), passed away peacefully on 30 June 2021, two days after his 90th birthday. Art had a distinguished career in nuclear physics and chemistry. He made important discoveries of the properties of unstable nuclei and was a pioneer in the study of nuclear collisions at very high energies. 

Born in New York City, Art received his degree in physics and chemistry from Harvard in 1953, an MA from Columbia in 1954, and a PhD in Chemistry from MIT in 1957 under Charles D Coryell. He spent the first part of his career studying the properties of nuclei far from stability produced in high-energy proton collisions. After graduating from MIT, he joined Gerhard Friedlander’s group at Brookhaven National Laboratory (BNL), which was using the Cosmotron to produce beta-delayed proton emitters and neutron-rich light nuclei. In 1966 he moved to the Lawrence Radiation Laboratory (now LBNL) and continued to study nuclei far from stability at the Bevatron in collaboration with Earl Hyde, Joe Cerny and others. He also began his long connection to research in Europe as a Guggenheim fellow at Orsay in 1970–1971, during which he worked with Robert Klapisch’s group on a ground-breaking experiment at the CERN Proton Synchrotron measuring the masses of sodium isotopes.

Soon after Art’s return to Berkeley, beams from the SuperHILAC were injected into the Bevatron, creating the Bevalac, the world’s first high-energy nuclear accelerator. Together with Hans Gutbrod he led the Plastic Ball Project. Analysis of its data in 1984 by Art and Hans Georg Ritter identified directed flow, the first definitive demonstration of the collective behaviour of nuclear matter in nuclear collisions. In 1986 the experiment was moved to CERN and the collaboration with GSI continued with a series of experiments at the Super Proton Synchrotron. During these years, Art made two more extended visits to CERN as a Senior Alexander von Humboldt Fellow: first in 1986–1987 working on the WA80 experiment, and then in 1995–1996 on NA49.

From 1990 to 1995 Art was the founding head of LBNL’s relativistic nuclear collisions programme, bringing together local groups to plan an experiment at the Relativistic Heavy Ion Collider (RHIC) under construction at BNL. This resulted in the proposal for STAR, one of the two large multi-purpose RHIC detectors. Art stepped down as programme head in 1995 and returned to research, authoring a seminal paper with Sergey Voloshin on methods for flow analysis and leading the measurement of elliptic flow by STAR. After his retirement in 2002, he remained active for a further decade, leading the successful search for higher order flow components at STAR, and enthusiastically mentoring many postdocs and young scientists. 

Art was a well-known and well-loved member of the heavy-ion community. For his work on nuclei far from stability, he was awarded the Nuclear Chemistry Prize of the American Chemical Society in 1980. For the discovery of collective flow, he was awarded the Tom Bonner Prize of the American Physical Society in 2008. This rare “double” is a lasting tribute to his half-century career at the frontiers of nuclear science.

Emergence

A murmuration of starlings

Particle physics is at its heart a reductionistic endeavour that tries to reduce reality to its most basic building blocks. This view of nature is most evident in the search for a theory of everything – an idea that is nowadays more common in popularisations of physics than among physicists themselves. If such a theory were discovered, all physical phenomena would follow from the application of its fundamental laws.

A complementary perspective to reductionism is that of emergence. Emergence says that new and different kinds of phenomena arise in large and complex systems, and that these phenomena may be impossible, or at least very hard, to derive from the laws that govern their basic constituents. It deals with properties of a macroscopic system that have no meaning at the level of its microscopic building blocks. Good examples are the wetness of water and the superconductivity of an alloy. These concepts don’t exist at the level of individual atoms or molecules, and are very difficult to derive from the microscopic laws. 

As physicists continue to search for cracks in the Standard Model (SM) and Einstein’s general theory of relativity, could these natural laws in fact be emergent from a deeper reality? And emergence is not limited to the world of the very small, but by its very nature skips across orders of magnitude in scale. It is even evident, often mesmerisingly so, at scales much larger than atoms or elementary particles, for example in the murmurations of a flock of birds – a phenomenon that is impossible to describe by following the motion of an individual bird. Another striking example may be intelligence. The mechanism by which artificial intelligence is beginning to emerge from the complexity of underlying computing codes shows similarities with emergent phenomena in physics. One can argue that intelligence, whether it occurs naturally, as in humans, or artificially, should also be viewed as an emergent phenomenon. 

Data compression

Renormalisable quantum field theory, the foundation of the SM, works extraordinarily well. The same is true of general relativity. How can our best theories of nature be so successful, while at the same time being merely emergent? Perhaps these theories are so successful precisely because they are emergent. 

As a warm up, let’s consider the laws of thermodynamics, which emerge from the microscopic motion of many molecules. These laws are not fundamental but are derived by statistical averaging – a huge data compression in which the individual motions of the microscopic particles are compressed into just a few macroscopic quantities such as temperature. As a result, the laws of thermodynamics are universal and independent of the details of the microscopic theory. This is true of all the most successful emergent theories; they describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different. For instance, two physical systems that undergo a second-order phase transition, while being very different microscopically, often obey exactly the same scaling laws, and are at the critical point described by the same emergent theory. In other words, an emergent theory can often be derived from a large universality class of many underlying microscopic theories. 
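
A toy simulation makes the data-compression point explicit: generate the velocities of a million molecules, then compress all of that microscopic detail into a single macroscopic number, the temperature, via the equipartition relation ⟨KE⟩ = (3/2)k_BT. The molecular mass and temperature below are illustrative choices, not values from the text.

```python
import numpy as np

k_B = 1.380649e-23      # J/K
m   = 4.65e-26          # kg, roughly an N2 molecule (illustrative choice)
rng = np.random.default_rng(0)

# "Microscopic data": three million velocity components drawn at ~300 K
sigma = np.sqrt(k_B * 300.0 / m)                  # Maxwell-Boltzmann width per component
v = rng.normal(scale=sigma, size=(1_000_000, 3))

# Compression: reduce everything to one macroscopic quantity, the temperature
mean_ke = 0.5 * m * np.mean(np.sum(v ** 2, axis=1))
temperature = 2.0 * mean_ke / (3.0 * k_B)
print(f"T ~ {temperature:.1f} K")                 # ~300 K, recovered from the average alone
```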

Successful emergent theories describe universal macroscopic phenomena whose underlying microscopic descriptions may be very different

Entropy is a key concept here. Suppose that you try to store the microscopic data associated with the motion of some particles on a computer. If you need N bits to store all that information, there are 2^N possible microscopic states. The entropy equals the logarithm of this number, and essentially counts the number of bits of information. Entropy is therefore a measure of the total amount of data that has been compressed. In deriving the laws of thermodynamics, you throw away a large amount of microscopic data, but you at least keep count of how much information has been removed in the data-compression procedure.
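
In code, the counting is exactly as simple as it sounds (a minimal illustration of the statement above):

```python
import math

# N bits of microscopic data -> 2**N possible microstates.
# The entropy is the logarithm of that number: it counts the bits of
# information that the macroscopic description has compressed away.
N = 50
microstates = 2 ** N
print(math.log2(microstates))   # = N (entropy in bits)
print(math.log(microstates))    # = N ln 2 (in units of Boltzmann's constant)
```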

Emergent quantum field theory

One of the great theoretical-physics paradigm shifts of the 20th century occurred when Kenneth Wilson explained the emergence of quantum field theory through the application of the renormalisation group. As with thermodynamics, renormalisation compresses microscopic data into a few relevant parameters – in this case, the fields and interactions of the emergent quantum field theory. Wilson demonstrated that quantum field theories appear naturally as an effective long-distance and low-energy description of systems whose microscopic definition is given in terms of a quantum system living on a discretised spacetime. As a concrete example, consider quantum spins on a lattice. Here, renormalisation amounts to replacing the lattice by a coarser lattice with fewer points, and redefining the spins to be the average of the original spins. One then rescales the coarser lattice so that the distance between lattice points takes the old value, and repeats this step many times. A key insight was that, for quantum statistical systems that are close to a phase transition, you can take a continuum limit in which the expectation values of the spins turn into the local quantum fields on the continuum spacetime.
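
A minimal sketch of one such block-spin step, for a toy lattice of spins, is shown below. It is illustrative only – real renormalisation-group studies track couplings and repeat the step many times – but it follows the recipe in the text: average each block of spins, then treat the coarser lattice as a new lattice of the same kind.

```python
import numpy as np

def block_spin_step(spins, b=2):
    """One real-space renormalisation step: replace each b x b block of spins by
    the block average (the coarse-grained spin), giving a lattice b times smaller.
    Rescaling the coarser lattice back to the original spacing is implicit in
    treating the result as a lattice of the same kind."""
    L = spins.shape[0]
    assert L % b == 0
    return spins.reshape(L // b, b, L // b, b).mean(axis=(1, 3))

rng = np.random.default_rng(0)
spins = rng.choice([-1.0, 1.0], size=(8, 8))   # toy 8x8 lattice of up/down spins
coarse = block_spin_step(spins)                # 4x4 lattice of averaged spins
print(spins.shape, "->", coarse.shape)
```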

This procedure is analogous to the compression algorithms used in machine learning. Each renormalisation step creates a new layer, and the algorithm that is applied between two layers amounts to a form of data compression. The goal is similar: you only keep the information that is required to describe the long-distance and low-energy behaviour of the system in the most efficient way.

A neural network

So quantum field theory can be seen as an effective emergent description of one of a large universality class of many possible underlying microscopic theories. But what about the SM specifically, and its possible supersymmetric extensions? Gauge fields are central ingredients of the SM and its extensions. Could gauge symmetries and their associated forces emerge from a microscopic description in which there are no gauge fields? Similar questions can also be asked about the gravitational force. Could the curvature of spacetime be explained from an emergent perspective?

String theory seems to indicate that this is indeed possible, at least theoretically. While initially formulated in terms of vibrating strings moving in space and time, it became clear in the 1990s that string theory also contains many more extended objects, known as “branes”. By studying the interplay between branes and strings, an even more microscopic theoretical description was found in which the coordinates of space and time themselves start to dissolve: instead of being described by real numbers, our familiar (x, y, z) coordinates are replaced by non-commuting matrices. At low energies, these matrices begin to commute, and give rise to the normal spacetime with which we are familiar. In these theoretical models it was found that both gauge forces and gravitational forces appear at low energies, while not existing at the microscopic level.

While these models show that it is theoretically possible for gauge forces to emerge, there is at present no emergent theory of the SM. Such a theory seems to be well beyond us. Gravity, however, being universal, has been more amenable to emergence.

Emergent gravity

In the early 1970s, a group of physicists became interested in the question: what happens to the entropy of a thermodynamic system that is dropped into a black hole? The surprising conclusion was that black holes have a temperature and an entropy, and behave exactly like thermodynamic systems. In particular, they obey the first law of thermodynamics: when the mass of a black hole increases, its (Bekenstein–Hawking) entropy also increases.

The correspondence between the gravitational laws and the laws of thermodynamics does not only hold near black holes. You can artificially create a gravitational field by accelerating. For an observer who continues to accelerate, even empty space develops a horizon, from behind which light rays will not be able to catch up. These horizons also carry a temperature and entropy, and obey the same thermodynamic laws as black-hole horizons. 

It was shown by Stephen Hawking that the thermal radiation emitted from a black hole originates from pair creation near the black-hole horizon. The properties of the pair of particles, such as spin and charge, are undetermined due to quantum uncertainty, but if one particle has spin up (or positive charge), then the other particle must have spin down (or negative charge). This means that the particles are quantum entangled. Quantum entangled pairs can also be found in flat space by considering accelerated observers. 

Crucially, even the vacuum can be entangled. By separating spacetime into two parts, you can ask how much entanglement there is between the two sides. The answer to this was found in the last decade, through the work of many theorists, and turns out to be rather surprising. If you consider two regions of space that are separated by a two-dimensional surface, the amount of quantum entanglement between the two sides turns out to be precisely given by the Bekenstein–Hawking entropy formula: it is equal to a quarter of the area of the surface measured in Planck units. 
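
In symbols, using standard conventions, the relation quoted above reads as follows (with A the area of the separating surface and ℓ_P the Planck length):

```latex
S_{\text{entanglement}} \;=\; S_{\text{BH}} \;=\; \frac{k_B\,A}{4\,\ell_P^{2}}
\;=\; \frac{k_B\,c^{3}\,A}{4\,\hbar G},
\qquad \ell_P^{2} = \frac{\hbar G}{c^{3}} .
```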

Holographic renormalisation

The area of the event horizon

The AdS/CFT correspondence incorporates a principle called “holography”: the gravitational physics inside a region of space emerges from a microscopic description that, just like a hologram, lives on a space with one less dimension and thus can be viewed as living on the boundary of the spacetime region. The extra dimension of space emerges together with the gravitational force through a process called “holographic renormalisation”. One successively adds new layers of spacetime. Each layer is obtained from the previous layer through “coarse-graining”, in a similar way to both renormalisation in quantum field theory and data-compression algorithms in machine learning.

Unfortunately, our universe is not described by a negatively curved spacetime. It is much closer to a so-called de Sitter spacetime, which has a positive curvature. The main difference between de Sitter space and the negatively curved anti-de Sitter space is that de Sitter space does not have a boundary. Instead, it has a cosmological horizon whose size is determined by the rate of the Hubble expansion. One proposed explanation for this qualitative difference is that, unlike for negatively curved spacetimes, the microscopic quantum state of our universe is not unique, but secretly carries a lot of quantum information. The amount of this quantum information can once again be counted by an entropy: the Bekenstein–Hawking entropy associated with the cosmological horizon. 

This raises an interesting prospect: if the microscopic quantum data of our universe may be thought of as many entangled qubits, could our current theories of spacetime, particles and forces emerge via data compression? Space, for example, could emerge by forgetting the precise way in which all the individual qubits are entangled, but only preserving the information about the amount of quantum entanglement present in the microscopic quantum state. This compressed information would then be stored in the form of the areas of certain surfaces inside the emergent curved spacetime. 

In this description, gravity would follow for free, expressed in the curvature of this emergent spacetime. What is not immediately clear is why the curved spacetime would obey the Einstein equations. As Einstein showed, the amount of curvature in spacetime is determined by the amount of energy (or mass) that is present. It can be shown that his equations are precisely equivalent to an application of the first law of thermodynamics. The presence of mass or energy changes the amount of entanglement, and hence the area of the surfaces in spacetime. This change in area can be computed and precisely leads to the same spacetime curvature that follows from the Einstein equations. 
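
Schematically (a compact restatement, in standard notation, of the well-known thermodynamic argument rather than a derivation specific to this article), demanding that the Clausius relation hold across every local horizon, with the entropy given by the quarter-area formula and T the temperature seen by the accelerating observer, forces spacetime to obey the Einstein equations:

\[
\delta Q \;=\; T\,\delta S ,
\qquad
S \;=\; \frac{k_{B}\,A}{4\,\ell_{P}^{2}}
\;\;\Longrightarrow\;\;
R_{\mu\nu} - \tfrac{1}{2}R\,g_{\mu\nu} + \Lambda\,g_{\mu\nu}
\;=\; \frac{8\pi G}{c^{4}}\,T_{\mu\nu} .
\]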

The idea that gravity emerges from quantum entanglement goes back to the 1990s, and was first proposed by Ted Jacobson. Not long afterwards, Juan Maldacena discovered that general relativity can be derived from an underlying microscopic quantum theory without a gravitational force. His description only works for infinite spacetimes with negative curvature called anti-de Sitter (AdS) space, as opposed to the positive curvature we measure. The microscopic description then takes the form of a scale-invariant quantum field theory – a so-called conformal field theory (CFT) – that lives on the boundary of the AdS space (see “Holographic renormalisation” panel). It is in this context that the connection between vacuum entanglement and the Bekenstein–Hawking entropy, and the derivation of the Einstein equations from entanglement, are best understood. I also contributed to these developments in a 2010 paper that emphasised the role of entropy and information in the emergence of the gravitational force. Over the last decade much progress has been made in our understanding of these connections, in particular the deep connection between gravity and quantum entanglement. Quantum information has taken centre stage in the most recent theoretical developments.

Emergent intelligence

But what about viewing the even more complex problem of human intelligence as an emergent phenomenon? Since scientific knowledge is condensed and stored in our current theories of nature, the process of theory formation can itself be viewed as a very efficient form of data compression: it only keeps the information needed to make predictions about reproducible events. Our theories provide us with a way to make predictions with the fewest possible number of free parameters. 

The same principles apply in machine learning. The way an artificial-intelligence machine is able to predict whether an image represents a dog or a cat is by compressing the microscopic data stored in individual pixels in the most efficient way. This decision cannot be made at the level of individual pixels. Only after the data has been compressed and reduced to its essence does it become clear what the picture represents. In this sense, the dog/cat-ness of a picture is an emergent property. This is even true for the way humans process the data collected by our senses. It seems easy to tell whether we are seeing or hearing a dog or a cat, but underneath, and hidden from our conscious mind, our brains perform a very complicated task that turns all the neural data coming from our eyes and ears into a single compressed outcome: it is a dog or a cat. 

Can intelligence, whether artificial or human, be explained from a reductionist point of view? Or is it an emergent concept that only appears when we consider a complex system built out of many basic constituents? There are arguments in favour of both sides. As human beings, our brains are hard-wired to observe, learn, analyse and solve problems. To achieve these goals the brain takes the large amount of complex data received via our senses and reduces it to a very small set of information that is most relevant for our purposes. This capacity for efficient data compression may indeed be a good definition for intelligence, when it is linked to making decisions towards reaching a certain goal. Intelligence defined in this way is exhibited in humans, but can also be achieved artificially.

Artificially intelligent computers beat us at problem solving, pattern recognition and sometimes even in what appears to be “generating new ideas”. A striking example is DeepMind’s AlphaZero, whose chess rating far exceeds that of any human player. Just four hours after learning the rules of chess, AlphaZero was able to beat the strongest conventional “brute force” chess program by coming up with smarter ideas and showing a deeper understanding of the game. Top grandmasters use its ideas in their own games at the highest level. 

In its basic material design, an artificial-intelligence machine looks like an ordinary computer, so a reductionist account of its workings seems within reach. For human intelligence, on the other hand, it is practically impossible to explain all aspects by starting at the microscopic level of the neurons in our brain, let alone in terms of the elementary particles that make up those neurons. Furthermore, the intellectual capability of humans is closely connected to the sense of consciousness, which most scientists would agree does not allow for a simple reductionist explanation.

Emergence is often summarised with the slogan “the whole is more than the sum of its parts” – or as condensed-matter theorist Phil Anderson put it, “more is different”. It counters the reductionist point of view, reminding us that the laws that we think to be fundamental today may in fact emerge from a deeper underlying reality. While this deeper layer may remain inaccessible to experiment, it is an essential tool for theorists of the mind and the laws of physics alike.

Building the future of LHCb

Planes of LHCb’s SciFi tracker

It was once questioned whether it would be possible to successfully operate an asymmetric “forward” detector at a hadron collider. In such a high-occupancy environment, it is much harder to reconstruct decay vertices and tracks than it is at a lepton collider. Following its successes during LHC Run 1 and Run 2, however, LHCb has rewritten the forward-physics rulebook, and is now preparing to take on bigger challenges.

During Long Shutdown 2, which comes to an end early next year, the LHCb detector is being almost entirely rebuilt to allow data to be collected at a rate up to 10 times higher during Run 3 and Run 4. This will improve the precision of numerous world-best results, such as constraints on the angles of the CKM triangle, while further scrutinising intriguing results in B-meson decays, which hint at departures from the Standard Model. 

LHCb’s successive detector layers

At the core of the LHCb upgrade project are new detectors capable of sustaining an instantaneous luminosity up to five times that seen during Run 2, and which make possible a pioneering software-only trigger that will allow LHCb to process signal data in an upgraded computing farm at the frenetic rate of 40 MHz. The vertex locator (VELO) will be replaced with a pixel version, the upstream silicon-strip tracker will be replaced with a lighter version (the UT) located closer to the beamline, and the electronics for LHCb’s muon stations and calorimeters are being upgraded for 40 MHz readout. 

Recently, three further detector systems key to dealing with the higher occupancies ahead were lowered into the LHCb cavern for installation: the upgraded ring-imaging Cherenkov detectors RICH1 and RICH2 for sharper particle identification, and the brand new “SciFi” (scintillating fibre) tracker. 

SciFi tracking

The components of LHCb’s SciFi tracker may not seem futuristic at first glance. Its core elements are constructed from what is essentially paper, plastic, some carbon fibre and glue. However, these everyday materials conceal advanced technologies which, when coupled together, produce the very light, uniform and high-performance detector needed to cope with the higher number of particle tracks expected during Run 3.

Located behind the LHCb magnet (see “Asymmetric anatomy” image), the SciFi represents a challenge, not only due to its complexity, but also because the technology – plastic scintillating fibres and silicon photomultiplier arrays – has never been used for such a large area in such a harsh radiation environment. Many of the underlying technologies have been pushed to the extreme during the past decade to allow the SciFi to successfully operate under LHC conditions in an affordable and effective way. 

Scintillating-fibre mat production

More than 11,000 km of 0.25 mm-diameter polystyrene fibre was delivered to CERN before undergoing meticulous quality checks. Excessive diameter variations were removed to prevent disruptions of the closely packed fibre matrix produced during the winding procedure, and clear improvements from the early batches to the production phase were made by working closely with the industrial manufacturer. From the raw fibres, nearly 1400 multi-layered fibre mats were wound at four of the LHCb collaboration’s institutes (see “SciFi spools” image), before being cut and bonded into modules, tested, and shipped to CERN, where they were assembled with the cold boxes. The SciFi tracker contains 128 stiff and robust 5 × 0.5 m² modules, each made of eight mats bonded with two fire-resistant honeycomb and carbon-fibre panels, along with some mechanics and a light-injection system. In total, the design produces nearly 320 m² of detector surface over the 12 layers of the tracking stations. 
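
As a quick back-of-envelope check (a simple consistency sketch based only on the figures quoted above, not on the detector design documentation), the module dimensions reproduce the total detector surface:

```python
# Consistency check of the SciFi surface area quoted above.
n_modules = 128            # modules across the 12 tracking layers
module_area_m2 = 5 * 0.5   # each module is roughly 5 m x 0.5 m

total_area_m2 = n_modules * module_area_m2
print(f"total detector surface = {total_area_m2:.0f} m^2")  # 320 m^2
```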

The scintillating fibres emit photons at blue-green wavelengths when a particle interacts with them. Secondary scintillator dyes added to the polystyrene amplify the light and shift it to longer wavelengths so it can be read out by custom-made silicon photomultipliers (SiPMs). SiPMs have become a strong alternative to conventional photomultiplier tubes in recent years, due to their smaller channel sizes, easier operation and insensitivity to magnetic fields. This makes them ideal to read out the higher number of channels necessary to identify separate but nearby tracks in LHCb during Run 3. 

The width of the SiPM channels, 0.25 mm, is designed to match that of the fibres. Though they need not align perfectly, this provides a better separation power for tracking than the 5 mm gas straw tubes previously used in the outer regions of the detector, while offering a performance similar to that of the silicon-strip tracker. The tiny channel size results in a total of 524,288 SiPM channels to collect light from 130 m of fibre-mat edges. A custom ASIC, called the PACIFIC, outputs two bits per channel based on three signal-amplitude thresholds. A field-programmable gate array (FPGA) assigned to each SiPM then groups these signals into clusters, and the location of each cluster is sent to the computing farm. Despite clustering and noise suppression, this still results in an enormous data rate of 20 Tb/s – nearly half of the total data bandwidth of the upgraded LHCb detector.
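
A rough sanity check of that bandwidth (a sketch using only the numbers quoted above; the actual PACIFIC output format and clustering scheme are more involved):

```python
# Rough estimate of the SciFi readout bandwidth before clustering.
channels = 524_288          # SiPM channels quoted above
bits_per_channel = 2        # PACIFIC output: two bits per channel
bunch_crossing_rate = 40e6  # Hz, 40 MHz readout

raw_rate_tbps = channels * bits_per_channel * bunch_crossing_rate / 1e12
print(f"raw rate = {raw_rate_tbps:.0f} Tb/s")  # ~42 Tb/s
# Clustering and noise suppression in the FPGAs bring this down to the
# ~20 Tb/s figure quoted in the text.
```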

LHCb’s SciFi tracker is the first large-scale use of SiPMs for tracking, and takes advantage of improvements in the technology in the 10 years since the SciFi was proposed. The photon-detection efficiency of SiPMs has nearly doubled thanks to improvements in the design and production of the underlying pixel structures, while the probability of crosstalk between the pixels – which turns the random firing of a single pixel without incident light, an increasingly common occurrence after radiation damage, into multiple fake signals – has been reduced from more than 20% to a few percent by the introduction of microscopic trenches between the pixels. The single-pixel dark-count rate can also be reduced by cooling the SiPM. Together, these two measures greatly reduce the number of fake-signal clusters, such that the tracker can still function effectively after several years of operation in the LHCb cavern. 

RICH2 photon detector plane

The LHCb collaboration assembled commercial SiPMs on flex cables and bonded them in groups of 16 to 0.5 m-long 3D-printed titanium cooling bars to form precisely assembled photodetection units for the SciFi modules. By circulating a coolant at a temperature of –50 °C through the cold bar, the dark-noise rate was reduced by a factor of 60. Furthermore, in a first for a CERN experiment, it was decided to use a new single-phase liquid coolant called Novec-649 from 3M for its non-toxic properties and low global-warming potential (GWP = 1). Historically, C6F14 – which has a GWP of 7400 – was the thermo-transfer fluid of choice. Although several challenges had to be faced in learning how to work with the new fluid, wider use of Novec-649 and similar products could contribute significantly to the reduction of CERN’s carbon footprint. Additionally, since the narrow envelope of the tracking stations precludes the use of standard foam insulation on the coolant lines, a significant engineering effort has been required to vacuum-insulate the 48 transfer lines serving the 24 rows of SiPMs and 256 cold bars, where leaks are possible at every connection. 

To date, LHCb collaborators have tirelessly assembled and tested nearly half of the SciFi tracker above ground, where only two defective channels out of the 262,144 tested in the full signal chain were unrecoverable. Four out of 12 “C-frames” containing the fibre modules (see “Tracking tall” image) are now installed and waiting to be connected and commissioned, with a further two installed in mid-July. The remaining six will be completed and installed before the start of operations early next year.

New riches

One of the key factors in the success of LHCb’s flavour-physics programme is its ability to identify charged particles, which reduces the background in selected final states and assists in the flavour tagging of b quarks. Two ring-imaging Cherenkov (RICH) detectors, RICH1 and RICH2, located upstream and downstream of the LHCb magnet, 1 and 10 m from the collision point respectively, provide excellent particle identification over a very wide momentum range. They comprise a large volume of fluorocarbon gas (the radiator), in which photons are emitted by charged particles travelling faster than the speed of light in the gas; spherical and flat mirrors to focus and reflect this Cherenkov light; and two photon-detector planes where the Cherenkov rings are detected and read out by the front-end electronics.

The original RICH detectors are currently being refurbished to cope with the more challenging data-taking conditions of Run 3, requiring a variety of technological challenges to be overcome. The photon detection system, for example, has been redesigned to adapt to the highly non-uniform occupancy expected in the RICH system, running from an unprecedented peak occupancy of ~35% in the central region of RICH1 down to 5% in the peripheral region of RICH2. Two types of 64-channel multi-anode photomultiplier tubes (MaPMTs) have been selected for the task which, thanks to their exceptional quantum efficiency in the relevant wavelength range, are capable of detecting single photons while providing excellent spatial resolution and very low background noise. These are key requirements to allow pattern-recognition algorithms to reconstruct Cherenkov rings even in the high-occupancy region. 

Completed SciFi C-frames

More than 3000 MaPMT units, for a total of 196,608 channels, are needed to fully instrument both upgraded RICH detectors. The devices’ already large active area (83%) has been put to full use by arranging the units in a compact and modular “elementary cell” containing a custom-developed, radiation-hard eight-channel ASIC called the Claro chip, which is able to digitise the MaPMT signal at a rate of 40 MHz. The readout is controlled by FPGAs connected to around 170 channels each. The prompt nature of Cherenkov radiation, combined with the performance of the new opto-electronics chain, will allow the RICH systems to operate within the LHC’s 25 ns time window, dictated by the bunch-crossing period, while applying a time-gate of less than 6 ns to provide background rejection.
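
For orientation, the channel counts quoted above hang together as follows (a rough sketch; the FPGA count in particular is only an estimate derived from the “around 170 channels each” figure, not a number taken from the collaboration):

```python
# Consistency check of the RICH photon-detector channel counts.
total_channels = 196_608
channels_per_mapmt = 64
channels_per_fpga = 170      # "around 170 channels each", as quoted above

n_mapmt = total_channels / channels_per_mapmt
n_fpga = total_channels / channels_per_fpga
print(f"MaPMT units  = {n_mapmt:.0f}")   # 3072, i.e. "more than 3000"
print(f"readout FPGAs ~ {n_fpga:.0f}")   # roughly 1150-1200, estimated
```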

To keep the new RICHes as compact as possible, the hosting mechanics has been designed to provide both structural support and active cooling. Recent manufacturing techniques have enabled us to drill two 6 mm-diameter ducts over a length of 1.5 m into the spine of the support, through which a coolant (the more environmentally friendly Novec-649, as in the SciFi tracker) is circulated. Each element of the opto-electronics chain has been produced and fully validated within a dedicated quality-assurance programme, allowing the position of the photon detectors and their operating conditions to be fine-tuned across the RICH detectors. In February, the first photon-detector plane of RICH2 (see “RICH2 to go” image) became the first active element of the LHCb upgrade to be installed in the cavern. The two planes of RICH2, located at the sides of the beampipe, were commissioned in early summer and will see first Cherenkov light during an LHC beam test in October. 

RICH1 spherical mirrors

RICH1 presents an even bigger challenge. To reduce the number of photons in the hottest region, its optics have been redesigned to spread the Cherenkov rings over a larger surface. The spatial envelope of RICH1 is also constrained by its magnetic shield, demanding even more compact mechanics for the photon-detector planes. To accommodate the new design of RICH1, a new gas enclosure for the radiator is needed. A volume of 3.8 m³ of C4F10 is enclosed in an aluminium structure directly fastened to the VELO tank on one side and sealed with a low-mass window on the other, with particular effort placed on building a leak-less system to limit potential environmental impact. Installing these fragile components in a very limited space has been a delicate process, and the last element to complete the gas-enclosure sealing was installed at the beginning of June.

The optical system is the final element of the RICH1 mechanics. The ~2 m² spherical mirrors placed inside the gas enclosure are made of carbon-fibre composite to limit the material budget (see “Cherenkov curves” image), while the two 1.3 m² planes of flat mirrors are made of borosilicate glass for high optical quality. All the mirror segments are individually coated, glued on supports and finally aligned before installation in the detector. The full RICH1 installation is expected to be completed in the autumn, followed by the challenging commissioning phase to tune the operating parameters to be ready for Run 3.

Surpassing expectations

In its first 10 years of operations, the LHCb experiment has already surpassed expectations. It has enabled physicists to make numerous important measurements in the heavy-flavour sector, including the first observation of the rare decay B0s → µ+µ−, precise measurements of quark-mixing parameters, the discovery of CP violation in the charm sector, and the observation of more than 50 new hadrons including tetraquark and pentaquark states. However, many crucial measurements are currently statistically limited, including those underpinning the so-called flavour anomalies (see Bs decays remain anomalous). Together with the tracker, trigger and other upgrades taking place during LS2, the new SciFi and revamped RICH detectors will put LHCb in prime position to explore these and other searches for new physics for the next 10 years and beyond.

Science Gateway under construction

Science Gateway foundation stone

On 21 June, officials and journalists gathered at CERN to mark the laying of the “first stone” for Science Gateway, CERN’s new flagship project for science education and outreach. Due to open in 2023, Science Gateway will increase CERN’s capacity to welcome visitors of all ages from near and far. Hundreds of thousands of people per year will have the opportunity to engage with CERN’s discoveries and technology, guided by the people who make it possible.

The project has environmental sustainability at its core. Designed by renowned architect Renzo Piano, the carbon-neutral building will bridge the Route de Meyrin and be surrounded by a freshly planted 400-tree forest. Its five linked pavilions will feature a 900-seat auditorium, immersive spaces, laboratories for hands-on activities for visitors from age five upwards, and many other interactive learning opportunities.

“I would like to express my deepest gratitude to the many partners in our Member and Associate Member States and beyond who are making the CERN Science Gateway possible, in particular to our generous donors,” said CERN Director-General Fabiola Gianotti during her opening speech. “We want the CERN Science Gateway to inspire all those who come to visit with the beauty and the values of science.”

Surveyors eye up a future collider

Levelling measurements

CERN surveyors have performed the first geodetic measurements for a possible Future Circular Collider (FCC), a prerequisite for high-precision alignment of the accelerator’s components. The millimetre-precision measurements are one of the first activities undertaken by the FCC feasibility study, which was launched last year following the recommendation of the 2020 update of the European strategy for particle physics. During the next three years, the study will explore the technical and financial viability of a 100 km collider at CERN, for which the tunnel is a top priority. Geology, topography and surface infrastructure are the key constraints on the FCC tunnel’s position, around which civil engineers will design the optimal route, should the project be approved.

The FCC would cover an area about 10 times larger than the LHC, in which every geographical reference must be pinpointed with unprecedented precision. To provide a reference coordinate system, in May the CERN surveyors, in conjunction with ETH Zürich, the Federal Office of Topography Swisstopo, and the School of Engineering and Management Vaud, performed geodetic levelling measurements along an 8 km profile across the Swiss–French border south of Geneva.

Such measurements have two main purposes. The first is to determine a high-precision surface model, or “geoid”, to map the height above sea level in the FCC region. The second purpose is to improve the present reference system, whose measurements date back to the 1980s when the tunnel housing the LHC was built.

“The results will help to evaluate if an extrapolation of the current LHC geodetic reference systems and infrastructure is precise enough, or if a new design is needed over the whole FCC area,” says Hélène Mainaud Durand, group leader of CERN’s geodetic metrology group.

The FCC feasibility study, which involves more than 140 universities and research institutions from 34 countries, also comprises technological, environmental, engineering, political and economic considerations. It is due to be completed by the time the next strategy update gets under way in the middle of the decade. Should the outcome be positive, and the project receive the approval of CERN’s member states, civil-engineering works could start as early as the 2030s.

Web code auctioned as crypto asset

The web’s original source code

Time-stamped files, stated by Tim Berners-Lee to contain the original source code for the web and digitally signed by him, have sold for US$5.4 million at auction. The files were sold as a non-fungible token (NFT), a form of crypto asset that uses blockchain technology to confer uniqueness.

The web was originally conceived at CERN to meet the demand for automated information-sharing between physicists spread across universities and institutes worldwide. Berners-Lee wrote his first project proposal in March 1989, and the first website, which was dedicated to the World Wide Web project itself and hosted on Berners-Lee’s NeXT computer, went live in the summer of 1991. Less than two years later, on 30 April 1993, and after several iterations in development, CERN placed version three of the software in the public domain. It deliberately did so on a royalty-free, “no-strings-attached” basis, addressing the memo simply “To whom it may concern.”

The seed that led CERN to relinquish ownership of the web was planted 70 years ago, in the CERN Convention, which states that results of its work were to be “published or otherwise made generally available” – a culture of openness that continues to this day.

The auction offer describes the NFT as containing approximately 9555 lines of code, including implementations of the three languages and protocols that remain fundamental to the web today: HTML (Hypertext Markup Language), HTTP (Hypertext Transfer Protocol) and URIs (Uniform Resource Identifiers). The lot also includes an animated visualisation of the code, a letter written by Berners-Lee reflecting on the process of creating it, and a Scalable Vector Graphics representation of the full code created from the original files.

Bidding for the NFT, which auction house Sotheby’s claims is its first-ever sale of a digital-born artefact, opened on 23 June and attracted a total of 51 bids. The sale will benefit initiatives that Berners-Lee and his wife Rosemary Leith support, stated a Sotheby’s press release.
