
CERN and ESA join forces in harsh environments

The effects of radiation on electronics for the JUICE mission

Strengthening connections between particle physics and related disciplines, CERN signed a collaboration agreement with the European Space Agency (ESA) on 11 July to address the challenges of operating equipment in harsh radiation environments. Such environments are found in both particle-physics facilities and outer space, and the agreement identifies several high-priority projects, including: high-energy electron tests; high-penetration heavy-ion tests; assessment of commercial components and modules; radiation-hard and radiation-tolerant components and modules; radiation detectors, monitors and dosimeters; and simulation tools for radiation effects. Important preliminary results have already been achieved in some areas, including high-energy electron tests of electronics for the Jupiter Icy Moons Explorer (JUICE) mission performed at CERN’s CLEAR/VESPER facility.

Study comes full EuroCirCol

CERN Director-General Fabiola Gianotti at FCC Week 2019

More than 400 researchers convened in Brussels from 24 to 28 June for the annual meeting of the Future Circular Collider (FCC) study. In addition to innovations in superconductivity, high-field magnets, superconducting radio-frequency systems and civil-engineering studies, discussions sought to clarify issues surrounding the physics research topics that FCC can address.

The meeting also marked the final event of the Horizon 2020 EuroCirCol project – a European Union project to produce a conceptual design study for a post-LHC research infrastructure based on an energy-frontier 100 TeV circular hadron collider. Since June 2015 the project has produced a wealth of results in high-tech domains via the collaborative efforts of partners in Europe and further afield, including the US, Japan, Korea and Russia. These include impressive progress toward 16 T magnets and in the performance of superconducting wires. Breakthroughs in both fields, such as a first accelerator-type magnet exceeding 14 T (see Dipole marks path to future collider) and an increase in the critical current density of Nb3Sn wire, promise to significantly reduce the costs of exploring the high-energy frontier and could find practical applications outside particle physics.

The four-volume FCC conceptual design report was also presented. Authored by 1350 people from 150 institutes, the report “underlines the global attractiveness of the FCC and documents the far-reaching benefits that the project can have for Europe and future generations,” said Frédérick Bordry, CERN director for accelerators and technologies.

A wide range of talks focused on a future circular lepton collider (FCC-ee) as the first step of the FCC programme, followed by an energy-frontier proton collider (FCC-hh). Results testify to the technological readiness of the FCC-ee, which could be operational by the end of the 2030s and therefore allow time to develop the novel technologies required for a 100 TeV proton–proton collider.

In his keynote talk, Nima Arkani-Hamed of the Institute for Advanced Study highlighted the importance of scrutinising the Higgs boson at a post-LHC machine. Speakers also stressed the complementarity between the different FCC options in searching for dark-matter candidate particles and other new physics. Finally, the potential for studying the strong interaction with heavy-ion collisions, and for detailing parton distribution functions with a proton–electron interaction point, was demonstrated.

The sustainability of research infrastructures and the assessment of their societal impact were other highlights of FCC Week 2019, as discussed at a special “Economics of Science” workshop. Experts from the field of economics shared lessons learned with representatives from CERN and other research organisations, including SKA, ESA and ESS, demonstrating the many benefits beyond physics that major international projects bring.

The greatest lepton collider

A quadrupole next to one of the long dipole magnets in LEP

A few minutes before midnight on a summer’s evening in July 1989, 30 or so people were crammed into a back room at CERN’s Prévessin site in the French countryside. After years of painstaking design and construction, we were charged with breathing life into the largest particle accelerator ever built. The ring was complete, the aperture finally clear and the positron beam made a full turn on our first attempt. Minutes later beams were circulating, and a month later the first Z boson event was observed. Here began a remarkable journey that firmly established the still indefatigable Standard Model of particle physics.

So, what can go wrong when you’re operating 27 kilometres of particle accelerator, with ultra-relativistic leptons whizzing around the ring 11,250 times a second? The list is long. The LEP ring was packed with magnets, power converters, a vacuum system, a cryogenics system, a cooling and ventilation system, beam instrumentation – and much more. Then there was the control system, fibres, networks, routers, gateways, software, databases, separators, kickers, beam dump, radio-frequency (RF) cavities, klystrons, high-voltage systems, interlocks, synchronisation, timing, feedback… And, of course, the experiments, the experimenters and everybody’s ability to get along in a high-pressure environment.
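As a quick back-of-the-envelope check of that figure (my own sketch, not from the original article): an ultra-relativistic lepton travels at essentially the speed of light, so the revolution frequency is simply c divided by the circumference.

    # Sanity check of LEP's revolution frequency (assumes v ≈ c).
    c = 299_792_458.0          # speed of light, m/s
    circumference = 26_658.9   # LEP circumference in metres (approximate)
    f_rev = c / circumference  # revolutions per second
    print(f"{f_rev:,.0f} turns per second")  # ~11,245, i.e. roughly the 11,250 quoted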

LEP wasn’t the only game in town. There was fierce competition from the more innovative Stanford Linear Collider (SLC) in California. But LEP was off to a fantastic start and its luminosity rose much faster than that of its relatively untested linear counterpart. A short article capturing the transatlantic rivalry appeared in the Economist on 19 August 1989. “The results from California are impressive,” the magazine reported, “especially as they come from a new and unique type of machine. They may provide a sure answer to the generation problem before LEP does. This explains the haste with which the finishing touches have been applied to LEP. The 27 km-long device, six years in the making, was transformed from inert hardware to working machine in just four weeks – a prodigious feat, unthinkable anywhere but at CERN. Even so, it was still not as quick as Carlo Rubbia, CERN’s domineering director-general, might have liked.”

Notes from the underground

LEP’s design dates from the late 1970s, the project being led by accelerator-theory group leader Eberhard Keil, RF group leader Wolfgang Schnell and C J “Kees” Zilverschoon. The first decision to be made was the circumference of the tunnel, with four options on the table: a 30 km ring that went deep into the Jura mountains, a 22 km ring that avoided them entirely, and two variants with a length of 26.7 km that grazed the outskirts of the mountains. Then director-general Herwig Schopper decided on a circumference of 26.7 km with an eye on a future proton collider for which it would be “decisive to have as large a tunnel as possible” (CERN Courier July/August 2019 p39). The final design was approved on 30 October 1981 with Emilio Picasso leading the project. Construction of the tunnel started in 1983, after a standard public enquiry in France.

Blasting the LEP tunnel under the Jura mountains

LEP’s tunnel, the longest-ever attempted prior to the Channel Tunnel, which links France and Britain, was carved by three tunnel-boring machines. Disaster struck just two kilometres into the three-kilometre stretch of tunnel in the foothills of the Jura, where the rock had to be blasted because it was not suitable for boring. Water burst in and formed an underground river that took six months to eliminate (figure 1). By June 1987, however, part of the tunnel was complete and ready for the accelerator to be installed.

Just five months after the difficult excavation under the Jura, one eighth of the accelerator (octant 8) had been completely installed, and, a few minutes before midnight on 12 July 1988, four bunches of positrons made the first successful journey from the town of Meyrin in Switzerland (point 1) to the village of Sergy in France (point 2), a distance of 2.5 km. Crucially, the “octant test” revealed a significant betatron coupling between the transverse planes: a thin magnetised nickel layer inside the vacuum chambers was causing interference between the horizontal and vertical focusing of the beams. The quadrupole magnets were adjusted to prevent a resonant reinforcement of the effect each turn, and the nickel was eventually demagnetised.
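For readers unfamiliar with the jargon, the “resonant reinforcement” that had to be avoided is the standard linear-coupling resonance of accelerator optics (a textbook condition, not spelled out in the original): with horizontal and vertical tunes $Q_x$ and $Q_y$ (oscillations per turn), small coupling perturbations add up coherently turn after turn whenever

$$Q_x - Q_y = n \quad\text{or}\quad Q_x + Q_y = n, \qquad n \in \mathbb{Z},$$

so the quadrupoles were set to keep the working point $(Q_x, Q_y)$ away from these lines while the nickel layer remained magnetised.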

Giving birth to LEP

The following months saw a huge effort to install equipment in the remaining 24 km of the tunnel – magnets, vacuum chambers and RF cavities, as well as beam instrumentation, injection equipment, electrostatic separators, electrical cabling, water cooling, ventilation and all the rest. This was followed by conditioning the cavities, baking out and leak-testing the vacuum chambers, and individual testing. At the same time, a great deal of effort went into preparing, with limited resources, the software needed to operate the collider.

In the late 1980s, control systems for accelerators were going through a major transition to the PC. LEP was caught up in the mess and there were many differences of opinion on how to design LEP’s control system. As July 1989 approached, the control system was not ready and a small team was recruited to implement the bare minimum controls required to inject beam and ramp up the energy. Unable to hone key parameters such as the tune and orbit corrections before beam was injected, we had two major concerns: is the beam aperture clear of all obstacles, and are there any polarity errors in the connections of the many thousand magnetic elements? So we nominated a “Mr Polarity”, whose job was to check all polarities in the ring. This may sound trivial, but with thousands of connections it was a huge task.

Tidal forces, melting ice and the TGV to Paris

Diagrams showing variations in LEP’s beam-energy resolution

LEP’s beam-energy resolution was so precise that it was possible to observe distortion of the 27 km ring by a single millimetre, whether due to the tidal forces of the Sun and Moon, or the seasonal distortion caused by rain and meltwater from the nearby mountains filling up Lac Léman and weighing down one side of the ring. In 1993 we noticed even more peculiar random variations on the energy signal during the day – with the exception of a few hours in the middle of the night when the signal was noise free. Everybody had their own pet theory. I believed it was some sort of effect coming from planes interacting with the electrical supply cables. Some nights later I could be seen sitting in a car park on the Jura at 2 a.m., trying to prove my theory with visual observations, but it was very dark and all the planes had stopped landing several hours beforehand. Experiment inconclusive! The real culprit, the TGV (a high-speed train), was discovered by accident a few weeks later during a discussion with a railway engineer: leakage currents on the French rail track flowed through the LEP vacuum chamber with the return path via the Versoix river back to Cornavin. The noise hadn’t been evident when we first measured the beam energy as TGV workers had been on strike.
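To see why a single millimetre is visible (a rough estimate; the momentum-compaction value used here, $\alpha_c \approx 2 \times 10^{-4}$, is an assumed order-of-magnitude figure for LEP): the RF frequency pins the orbit length, so a change $\Delta C$ in the physical circumference shifts the beam energy by

$$\frac{\Delta E}{E} = -\frac{1}{\alpha_c}\,\frac{\Delta C}{C}.$$

With $\Delta C = 1$ mm and $C = 26.7$ km, $\Delta C/C \approx 3.7 \times 10^{-8}$, giving $\Delta E/E \approx 2 \times 10^{-4}$ – roughly 9 MeV on a 45.6 GeV beam at the Z pole, well within reach of LEP’s energy-calibration precision.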

At a quarter to midnight on 14 July 1989, the aperture was free of obstacles and the beam made its first turn on our first attempt. Soon afterwards we managed to achieve a circulating beam, and we were ready to fine tune the multitude of parameters needed to prepare the beams for physics.

The goal for the first phase of LEP was electron–positron collisions at a total energy of 91 GeV – the mass of the neutral carrier of the weak force, the Z boson. LEP was to be a true Z factory, delivering millions of Zs for precision tests of the Standard Model. To mass-produce them required beams not only of high energy but also of high intensity, and delivering them required four steps. The first was to accumulate the highest possible beam current at 20 GeV – the injection energy. This was a major operation in itself, involving LEP’s purpose-built injection linac and electron–positron accumulator, the Proton Synchrotron, the Super Proton Synchrotron (SPS) and, finally, transfer lines to inject electrons and positrons in opposite directions – these curved not only horizontally but also vertically as LEP and the SPS were at different heights. The second step was to ramp up the accumulated current to the energy of the Z resonance with minimal losses. Thirdly, the beam had to be “squeezed” to improve the collision rate at the interaction regions by changing the focusing of the quadrupoles on either side of the experiments, thereby reducing the transverse cross section of the beam at the collision points. The fourth and final step was to bring the two beams into collision at the heart of the experiments and keep them colliding for hours of stable physics running.

Following the highly successful first turn on 14 July 1989, we spent the next month preparing for the first physics run. A month later, on 13 August, the beams collided for the first time. The following 10 minutes seemed like an eternity since none of the four experiments – ALEPH, DELPHI, L3 and OPAL – reported any events. I was in the control room with Emilio Picasso and we were beginning to doubt that the beams were actually colliding when Aldo Michelini called from OPAL with the long-awaited comment: “We have the first Z0!” ALEPH and OPAL physicists had connected the Z signal to a bell that sounded on the arrival of the particle in their detectors. While OPAL’s bell rang proudly, ALEPH’s was silent, leading to a barrage of complaints before it became apparent that they were waiting for the collimators to close before turning on their subdetectors. As the luminosity rose during the subsequent period of machine studies the bells became extremely annoying and were switched off.

From the Z pole to the WW threshold

Physicists in front of the final superconducting RF-cavity module to be installed

The first physics run began on 20 September 1989, with LEP’s total energy tuned for five days to the Z mass peak at 91 GeV, providing enough integrated luminosity to generate 1400 Zs in each experiment. A second period followed, this time with the energy scanned through the Z resonance at five different beam energies – the peak, and ±1 and ±2 GeV either side – allowing the experiments to measure the width of the Z. First physics results were announced on 13 October, just three months after the final testing of the accelerator’s components (see LEP’s electroweak leap).

LEP dwelt at the Z peak from 1989 to 1995, during which time the four experiments each observed approximately 4.5 million Z decays. In 1995 a major upgrade dubbed LEP2 saw the installation of 288 superconducting cavities (figure 2), enabling LEP to sit at or near the WW threshold of 161 GeV for the following five years. The maximum beam energy reached was 104.4 GeV. There was also a continuous effort to increase the luminosity by increasing the number of bunches, reducing the emittance by adjusting the focusing, and squeezing the bunches more tightly at the interaction points, with LEP’s performance ultimately limited by the nonlinear forces of the beam–beam interaction – the perturbations of the beams as they cross the opposing beam. LEP surpassed every one of its design parameters (figure 3).

Life as a LEP accelerator physicist

Being an accelerator physicist at LEP took heart as well as brains. The Sisyphean daily task of coaxing the seemingly temperamental machine to optimal performance even led us to develop an emotional attachment to it. Challenges were unpredictable: engineers dispatched on a fact-finding mission to ascertain the cause of an electrical short circuit discovered two deer, “Romeo and Juliet”, locked in a lover’s embrace having bitten through a cable, and on another occasion sabotage with beer bottles came to light (see The bizarre episode of the bottles in the beampipe). The aim, however, was clear: inject as much current as possible into both beams, ramp the energy up to 45 GeV, squeeze the beam size down at the collision points, collide and then spend a few hours delivering events to the experiments. The reality was hours of furious concentration, optimisation, and, in the early days, frustrating disappointment.

Diagram showing LEP’s integrated luminosity

In the early years, filling LEP was a delicate hour-long process of parameter adjustment, tweaking and coaxing the beam into the machine. On a good day we would see the beam wobble alarmingly on the UV telescopes, lose a bit and watch the rest struggle up the ramp. On a bad day, futile attempt after futile attempt, most of the beam would disappear without warning in the first few seconds of the ramp. The ramp lasted minutes and there was nothing you could do. We would stand there, watching the lifetime buck and dip, as the painstakingly injected beam drifted, slowly or quickly, out of the machine. The price of failure was a turnaround and refill. Success brought the opportunity to chance the squeeze – an equally hazardous manoeuvre whereby the interaction-point focusing magnets were adjusted to reduce the beam size – and then perhaps a physics fill, and a period of relative calm. At this stage the focus would move to the experimental particle physicists on shift at the four experiments. Each had their own particular collective character, and their own way of dealing with us. We veered between being accommodating, belligerent, maverick, dedicated, professional and very occasionally hopelessly amateur – sometimes all within the span of a single shift, depending on the attendant pressures.

Table showing LEP

The experiment teams paraded their operational efficiency numbers – plus complaints or congratulations – at twice-weekly scheduling meetings. Well run and disciplined, ALEPH almost always had the highest efficiency figures; their appearances at scheduling meetings were nearly always a simple statement of 97.8% or thereabouts. This was enlivened in later years by the repeated appearance of their coordinator Bolek Pietrzyk, who congratulated us each time we stepped up in energy or luminosity with a strong, Polish-accented, “Congratulations! You have achieved the highest energy electron–positron collisions in the universe!”, which was always gratifying. Equally professional, but more relaxed, was OPAL, which had a strong British and German contingent. These guys understood human nature. Quite simply, they bribed us. Every time we passed a luminosity target or hit a new energy record they’d turn up in the control room with champagne or crates of German beer. Naturally we’d do anything for them, happily moving heaven and earth to resolve their problems. L3 and DELPHI had their own quirks. DELPHI, for example, ran their detector as a “state machine”, whose status changed automatically based on signals from the accelerator control room. All well and good, but they depended on us to change the mode to “dump beam” at the end of a fill, something that was occasionally skipped, leaving DELPHI’s subdetectors on, their shift crew ringing us desperately for a mode change, and baffled DELPHI students asking what was going on. Filling and ramping were demanding periods during the operational sequence and a lot of concentration was required. The experiment teams did well not to ring and make too many demands at this stage – requests were occasionally rebuffed with a brusque response.

On the verge of a great discovery?

LEP’s days were never fated to dwindle quietly. Early on, CERN had a plan to install the LHC in the same tunnel, in a bid to reach ever higher energies and be the first to discover the Higgs boson. However, on 14 June 2000, in LEP’s final year of scheduled running, the ALEPH experiment reported a possible Higgs event during operations at a centre-of-mass energy of 206.7 GeV. It was consistent with “Higgs-strahlung”, whereby a Z radiates a Higgs boson, the process expected to dominate Higgs-boson production in e+e− collisions at LEP2 energies. On 31 July and 21 August ALEPH reported second and third events corresponding to a putative reconstructed Higgs mass in the range 114–115 GeV.

The bizarre episode of the bottles in the beampipe

The bottles in the beampipe

The story of the sabotage of LEP has grown in the retelling, but I was there in June 1996, hurrying back early from a conference to help the machine operators, who had been struggling to circulate a beam for several days. After exhausting other possibilities, it became clear that there was an obstruction in the vacuum pipe, and we located it using the beam position system. It appeared to be around point 1 (where ATLAS now sits), so we opened the vacuum seal and took a look inside the beampipe using mirrors and endoscopes. Not seeing anything, I frustratedly squeezed my head between the vacuum flanges and peered down inside the pipe. In the distance was something resembling a green concave lens. “This looks like the bottom of a beer bottle,” I thought, restraining myself from uttering a word to anyone in the vicinity. I went to the opposite open end of the vacuum section and peered into the vacuum pipe again: a green circular disk this time, but again, not a word. Someone got a long pole to poke out the offending article – out it came, and my guess was correct: it was a Heineken beer bottle, which had indeed refreshed the parts no other beer could reach, as the slogan ran. A hasty search revealed a second bottle. Closer inspection showed that the control-room operators had very nearly succeeded in making the beam circulate despite the obstacles: a scorch burn along the label indicated that the beam had almost been steered past the bottles. Had there been only one, they might have succeeded. The Swiss police interviewed me concerning this act of sabotage, but the culprit was never unmasked.

LEP was scheduled to stop in mid-September with two weeks of reserve time granted to the LEP experiments to see if new Higgs-like events would appear. After the reserve weeks, ALEPH requested two months more running to double its integrated luminosity. One month was granted, yielding a 50% increase in the accumulated data, and ALEPH presented an update of their results on 10 October: the signal excess had increased to 2.6σ. Things were really heating up, and on 16 October L3 announced a missing-energy candidate. By now the accelerator team was pushing LEP to its limits, to squeeze out every ounce of physics data in the service of the experiments’ search for the elusive Higgs. At the LEP committee meeting on 3 November, ALEPH presented new data that confirmed their excess once again – it had now grown to 2.9σ. A request to extend LEP running by one year was made to the LEPC. There was gridlock, and no unanimous recommendation could be made.

All of CERN was discussing the proposed running of LEP in 2001 to get final evidence of a possible discovery of the Higgs boson. Arguments against included delays to the start of the LHC of up to three years. There was also concern that Fermilab’s Tevatron would beat the LHC to the discovery of the Higgs, along with mundane but practical arguments about the transfer of human resources to the LHC and the impact on the materials budget, including electricity costs. The impending closure of LEP, when many of us thought we were about to discover the Higgs, was perceived by most of the LEP-ers as the death of a dear friend. After each of the public debates on the subject a group of us would meet in some local pub, drink a few beers, curse the disbelievers and cry on each other’s shoulders. This was the only “civil war” that I saw in my 43 years at CERN.

LEP’s final moments before being decommissioned and replaced by the LHC

The CERN research board met again on 7 November and again there was deadlock, with the vote split eight votes to eight. The next day, then director-general Luciano Maiani announced that LEP had closed for the last time. It was a deeply unpopular decision, but history has shown it to be correct: the Higgs was discovered at the LHC 12 years later, with a mass of not 115 but 125 GeV. LEP’s closure allowed a massive redeployment of skilled staff, and the experience gained in running such a large accelerator proved essential to the safe and efficient operation of the LHC.

When LEP was finally laid to rest we met one last time for an official wake (figure 4). After the machine was dismantled, requiring the removal to the surface of around 30,000 tonnes of material, some of the magnets and RF units were shipped to other labs for use in new projects. Today, LEP’s concrete magnet casings can still be seen scattered around CERN as shielding units for antimatter and fixed-target experiments, and even as road barriers.

LEP was the highest-energy e+e− collider ever built, and its legacy remains extremely important for present and future colliders. The luminosity, energy and energy-calibration precision of its physics data remain unsurpassed, and it is the reference for any future e+e− ring-collider design.

Tunnelling for physics

New service tunnels

In 2012 the CERN management asked a question: what is the largest circular machine that could be feasibly constructed in the Geneva region from a civil-engineering perspective? Teams quickly embarked on an extensive investigation of the geological, environmental and technical constraints in pursuit of the world’s largest accelerator. Such a machine would be the next logical step in exploring the universe at ever smaller scales.

Since construction of the 27 km-circumference Large Hadron Collider (LHC) was completed in 2005, CERN has been looking at potential layouts for the tunnels that would house the next generation of particle accelerators. The Compact Linear Collider (CLIC) and the Future Circular Collider (FCC) are the two largest projects under consideration. With a circumference of 100 km, the FCC would require one of the world’s largest tunnels – almost twice as long as the recently completed 57 km Gotthard Base Tunnel in the Swiss Alps. Designing large infrastructure like the FCC tunnel requires the collection and interpretation of numerous data, which have to be balanced against risk, cost and project requirements.

The first and most important task in designing tunnels is to understand the needs and requirements of the users. For road or rail tunnels, this is relatively straightforward. For a cutting-edge scientific experiment, multi-disciplinary working groups are needed to identify the key criteria. The diameter of a new tunnel depends on the components that go inside – ventilation systems, magnets, lighting, transport corridors, etc – which must fit together like the pieces of a jigsaw.

Bespoke designs

Unlike other tunnelling projects, there are no standard rules or guidance for the design of particle-accelerator tunnels, meaning each design is, to a large extent, bespoke. One reason for this is the sensitivity of the equipment inside. Digging a 5.6 m-diameter hole disturbs rock that has been there for millennia, causing it to relax and to move. Modern tunnelling techniques can control these movements and bring a tunnel to within a few centimetres of its intended design. For example, the two ends of the 27 km LEP ring came together with just 1 cm of error. Achieving the nanometre-level tolerances that the beamline requires is impossible by tunnelling alone, so the sensitive equipment installed in a completed accelerator tunnel must incorporate adjustable alignment systems into its design.

The scale of the proposed CLIC and FCC projects

The city of Geneva sits on a large plateau between the Jura and Prealps mountains. The bedrock of the plateau is a competent (resistant to deformation) sedimentary rock, called molasse, which formed when eroded material was deposited and consolidated in a basin as the Alps lifted up. On top of the molasse sits a softer soil, called the moraines, which is made up of more recent, unconsolidated glacial deposits. The Jura itself is made of limestone rock, which, while competent, is soluble and can form a network of underground voids, known as karsts.

We can never fully understand the ground before we start tunnelling and there is always the risk of encountering something unexpected, such as water, faults or obstructions. These cost money to overcome and/or delay the project; in the worst cases, they may even cause the tunnel to collapse. To help mitigate these risks and provide technical information for the tunnel design, we investigate the ground in the early stages of the project by drilling boreholes and testing ground samples. Like most things in civil engineering, however, there is a balance between the cost of the investigations versus the risks they mitigate. No boreholes have been sunk specifically for FCC yet, but we have access to a substantial amount of data from the LHC and from the Swiss and French authorities.

The answer to CERN’s question in 2012 was that a (quasi-)circular tunnel up to 100 km long could be built near Geneva (figure 1). This will be confirmed with further site investigations to verify the design assumptions and optimise a layout for the new machine. The FCC study considers two potential high-energy accelerators: hadron–hadron and electron–positron, and the FCC would consist of a series of arcs and straight sections (figure 2). Depending on the choice of a future collider, civil-engineering designs for FCC and/or CLIC will need to be developed further. Although the challenges between the two studies differ, the processes and tools used will be similar.

Optimising the alignment

Having determined the FCC’s feasibility, CERN’s civil engineers started designing the optimal route of the tunnel. Geology and topography are the key constraints on the tunnel position. Two alignment options were under consideration in 2012, both 80 km long, one located under the Jura Mountains and the other in the Geneva basin. When the FCC study officially kicked off in 2014, they were reviewed alongside a 47 km-circumference option fully excavated in the molasse.

Diagram of the FCC

Experience of tunnelling through Jura limestone during construction of the Large Electron Positron collider (LEP; from which the LHC inherited many of its tunnels) convinced civil engineers to discard the Jura option. Mining through the karstic limestone caused several delays and costly repairs after water and sediment flowed into the tunnel (see The greatest lepton collider). To this day, intensive maintenance works are needed between sectors 3 and 4 of the LHC tunnel and this has led to machine shutdowns lasting as long as two weeks.

By 2016, the proposed length of the FCC had increased to between 80 and 100 km to achieve higher energies, with two alignments under consideration: intersecting (which crosses the LHC in plan view) and non-intersecting. The former is the current baseline design. The tunnel is located primarily in the competent molasse rock and avoids the problematic Jura limestone and the Prealps. However, it does pass through the Mandallaz limestone formation and also has to cross under Lake Geneva. To deal with the wealth of topographical, geological and environmental data relevant for a 100 km ring, CERN developed an innovative tunnel optimisation tool (TOT) that allows a multitude of alignment options to be assessed in a fraction of the time (see CERN’s tunnel optimisation tool).

CERN’s tunnel optimisation tool

The tunnel optimisation tool

In 2014, with the help of UK-based engineering consultancy Arup, CERN developed the tunnel optimisation tool (TOT) to integrate project requirements and data into a geospatial model. The web-based tool allows the user to digitally move the FCC tunnel, change its size, shape and depth and see, in real time, the impacts of the changes on the design. Geology, surface constraints and environmentally protected areas are visualised, and parameters such as plane inclinations and tunnel depth can be changed at the click of a mouse. The tool warns users if certain limits are exceeded or obstacles are encountered, for example, if a shaft is in the middle of Lake Geneva! When it was built, TOT was the first of its kind within the industry. It has cut the cost of the civil-engineering design and has provided us with the flexibility to meet changing requirements to ultimately deliver a better project. The success of TOT led to its replication for CLIC and the International Linear Collider (ILC) under consideration in Japan. Recently, a TOT was built by Arup to quickly and cheaply assess a range of alignments for a 3 km tunnel under the ancient Stonehenge heritage site in the UK.
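The article does not describe the TOT’s internals, but the core bookkeeping such a tool automates can be sketched in a few lines. The fragment below is purely illustrative (every name, coordinate and limit is hypothetical, and this is not CERN’s actual tool or data): given a candidate inclined tunnel plane and a surface-elevation model, it reports each shaft’s depth and flags any that exceed a limit – exactly the kind of constraint behind the over-deep shaft described below.

    # Purely illustrative sketch of the bookkeeping a tool like TOT automates.
    # All names, coordinates and limits below are hypothetical, not CERN data.
    MAX_SHAFT_DEPTH = 200.0  # assumed limit, metres

    def tunnel_elevation(x_km, y_km, z0, dip_x, dip_y):
        """Elevation (m above sea level) of an inclined tunnel plane at (x, y)."""
        return z0 + dip_x * x_km + dip_y * y_km

    def check_alignment(shafts, surface, z0, dip_x, dip_y):
        """Print each shaft's depth for one candidate alignment and flag problems."""
        for name, (x, y) in shafts.items():
            depth = surface(x, y) - tunnel_elevation(x, y, z0, dip_x, dip_y)
            status = "OK" if depth <= MAX_SHAFT_DEPTH else "TOO DEEP"
            print(f"{name}: shaft depth {depth:6.1f} m  [{status}]")

    # Two hypothetical shaft sites (km from a reference point) and a toy
    # surface model: a plateau rising gently towards the mountains.
    shafts = {"Point A": (0.0, 0.0), "Point B": (20.0, 8.0)}
    surface = lambda x, y: 420.0 + 12.0 * y
    check_alignment(shafts, surface, z0=300.0, dip_x=-1.0, dip_y=-2.0)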

The alignment of the FCC tunnel has been optimised based on three key criteria at this stage: geology (building in the competent molasse rock wherever possible); shaft depth (minimising the depth of shafts); and surface sites (choosing locations that minimise disruption to residents and the environment).

Despite the best efforts to avoid the risky Jura Mountains, the geology is not perfect. The Prealps region has complex, faulted geology and it is uncertain which layers the tunnel will cross. Cracks or faults, caused by tectonic movements of the Alps and Jura, can occur in the molasse and limestone. Excavation through the Mandallaz limestone could lead to issues similar to those encountered during LEP’s construction. Large, high-pressure inflows can be difficult to remedy, expensive and can create delays in the programme.

Options for tunnelling under Lake Geneva

To minimise the depth of the shafts, the entire FCC ring sits in an inclined plane, at different heights above sea level around the tunnel. When a range of alignment options was modelled at different locations and with different tunnel inclinations, constrained by the spacing requirements of the experiments, one shaft in the baseline design turned out to be 558 m deep. The team therefore decided to replace the vertical shaft with an inclined tunnel (15% slope) that pops out of the side of the mountain.

The presence of Lake Geneva influences the overall depth of the FCC, and the tunnel optimisation tool tells us that it isn’t possible to avoid tunnelling under the lake within the study boundary. Modern tunnelling techniques open up different options for crossing the lake, instead of simply digging deeper until we reach the rock (figure 3). Several options were considered, even including an option to build a hybrid particle accelerator-road tunnel in an immersed tube tunnel (which was later scrapped because of potential vibrations caused by traffic disrupting the beamline). The current design compromises on a mid-depth tunnel passing through the permeable moraines on the lake bed.

At the bottom of some of the FCC shafts are large experimental caverns with spans of up to 35 m. To determine the best arrangement for experimental and service caverns, Amberg Engineering carried out a stress analysis (figure 4). Although for data-acquisition purposes it is often desirable to have the two caverns as close as possible to each other, the analysis showed that it would be prohibitively expensive to build a 10 m concrete wall between the caverns. The cheaper option is to use the existing rock as a natural pillar, which would require a minimum spacing of 45 m.

Stress analysis of separated vs adjacent service and experimental caverns

Tunnelling inevitably disturbs the surrounding area. The beamline of the LHC is incredibly sensitive and can detect even the smallest vibrations from the outside world. This was a potential issue for the construction works currently taking place for the High-Luminosity LHC project. The contractor had to improvise, fitting a standard excavator with an electric motor in place of its diesel engine to eliminate vibrations. The programme was also adapted so that only the shafts were constructed during operation of the LHC, leaving the more disruptive cavern construction until the start of the current shutdown.

Securing the future

CERN currently has 83 km of underground structures. The FCC would add over 100 km of tunnels, 3720 m of shafts, 26 caverns (not including junction caverns) and 66 alcoves, with up to 30 km between the Meyrin campus and the furthest site. The civil-engineering cost estimate for the FCC (carried out by ILF Consulting Engineers) is approximately 6 billion Swiss francs – 45% for tunnels and the rest for shafts, caverns and surface facilities – and the project would benefit from significant advances in tunnelling technology since the LEP-tunnel days (see Advances in civil engineering since the LEP days).

Advances in civil engineering since the LEP days

Herrenknecht’s Mixshield TBM

It has been almost 35 years since three tunnel boring machines (TBMs) set off to carve out the 27 km-long hole that would house LEP and, later, the LHC. Contrary to the recent claims of tech entrepreneur Elon Musk, the technology used to construct modern tunnels has been quietly and rapidly advancing since the construction of LEP, providing a faster, safer and more versatile way to build tunnels. TBMs act as a mobile factory that simultaneously excavates rock from the face and builds a tunnel lining from prefabricated segments behind it. The outer shield of the machine protects workers from falling rock, making sure they are never working in unsupported ground.

One of the main advances in TBM technology is their ability to cope with variable ground conditions. Most of the LEP tunnels were constructed in dry, competent rock, meaning the excavation face needed little support to stand up. Underneath the Jura Mountains, however, pockets of water and soil form where the limestone dissolves into karsts. When a TBM hits one of these, the water can flow into the tunnels, causing flooding and, at worst, tunnel collapse. Modern TBMs come with a variety of face-support measures, including earth-pressure balance machines that use the excavated soil to push back against the excavated face for support. Herrenknecht’s Mixshield TBM (above) could be used to tunnel the FCC under Lake Geneva, where water-bearing moraines are encountered.

Segmental linings can be constructed off-site in a factory, improving quality, speed and safety. The segments are assembled in the rear of the TBM immediately after excavation. The segments can be fitted with a rubber gasket, which provides a waterproof seal, eliminating the need for the traditional secondary lining. Across the 100 km of the FCC, this will lead to substantial cost savings.

Seismic and sonic scanners can be mounted on the front of the TBM, allowing operators to detect voids or obstacles up to 40 m ahead and adjust their approach accordingly. Probe drilling and pre-support measures can also be implemented from within the machine, keeping the mining crew safe and minimising delays to the construction programme.

For vertical shafts, the vertical shaft sinking machine and shaft boring machine are the latest technological breakthroughs, taking all the technology of a TBM and standing it on its end. The giant rig hangs off a crane and excavates below the platform, whilst building a lining above it. The machine can even work underwater to stabilise the shafts during construction.

Traditional tunnelling techniques, which are useful for creating non-standard shapes or smaller tunnels like the experimental caverns in FCC, have come a long way, too. These aren’t the normal sticks of dynamite you see in films or cartoons – highly stable explosives are slotted precisely in holes using a giant rig with multiple arms for speed. The electric detonators can be configured to the millisecond for complex patterns of explosions that give tunnellers precise control of the shape, speed and quality of the excavation.

The safety of the underground areas is critical to ensure the safe and continued operation of the experiments, and CERN has developed advanced tools to inspect the structures – some of which are more than 60 years old. Manually inspecting the condition of the structures on the scale of the FCC will become extremely challenging. We are therefore developing new technologies that will allow us to monitor the condition of the tunnels remotely. Currently, teams are testing out how fibre-optic cables can be attached to the concrete linings to measure movements over time, and developing and training algorithms to be able to spot and characterise faults in the tunnel lining. In the future, the software will be able to measure these faults and compare the changes with previous inspections to assess how they have progressed. To capture these images, a Tunnel Inspection Machine, which runs on the monorail in the roof of the LHC, and a floor-roving inspection robot have both been tested to collect images and data, even when the tunnel is not safe for humans. These images can be rebuilt in a 3D environment and viewed through a virtual-reality headset.
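To give a flavour of the kind of comparison such monitoring software performs, here is a deliberately simplified sketch (entirely hypothetical – not CERN’s software, and the threshold is an assumed value): it compares a fibre-optic strain profile along the lining against an earlier survey and flags locations whose change exceeds an alarm level.

    # Hypothetical sketch: compare two fibre-optic strain surveys along a
    # tunnel lining and flag sections that have moved more than a threshold.
    THRESHOLD_MICROSTRAIN = 50.0  # assumed alarm level

    def flag_movement(reference, current, spacing_m=1.0):
        """Yield (position, change) for points where strain changed too much."""
        for i, (ref, cur) in enumerate(zip(reference, current)):
            change = abs(cur - ref)
            if change > THRESHOLD_MICROSTRAIN:
                yield i * spacing_m, change

    reference = [10.0, 12.0, 11.0, 13.0, 12.5]  # microstrain, earlier survey
    current = [11.0, 13.0, 95.0, 14.0, 12.0]    # microstrain, latest survey
    for position, change in flag_movement(reference, current):
        print(f"inspect lining near {position:.0f} m: strain changed by {change:.0f} µε")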

Projects like the FCC and CLIC are not just exciting for physicists. For civil engineers they represent challenges that demand new ideas and technology. At the annual World Tunnel Congress, attended by more than 2000 leading tunnel and underground-space experts, CERN’s FCC has already generated great interest. If approved, it would be among the largest construction projects science has ever seen, bequeathing a tunnel that would serve fundamental exploration into the next century.

Dipole marks path to future collider

Installation of the MDP

Researchers in the US have demonstrated an advanced accelerator dipole magnet with a field of 14.1 T – the highest ever achieved for such a device at an operational temperature of 4.5 K. The milestone is the work of the US Magnet Development Program (MDP), which includes Fermilab, Lawrence Berkeley National Laboratory (LBNL), the National High Magnetic Field Laboratory and Brookhaven National Laboratory. The MDP’s “cos-theta 1” (MDPCT1) dipole, made from Nb3Sn superconductor, beats the 13.8 T at 4.5 K achieved by LBNL magnet “HD2” a decade ago, and follows the 14.6 T at 1.9 K (13.9 T at 4.5 K) reached by “FRESCA2” at CERN in 2018, which was built as a superconducting-cable test station. Together with other recent advances in accelerator magnets in Europe and elsewhere, the result sends a positive signal for the feasibility of next-generation hadron colliders.

The MDP was established in 2016 by the US Department of Energy to develop magnets that operate as closely as possible to the fundamental limits of superconducting materials while minimising the need for magnet training. The programme aims to integrate domestic accelerator-magnet R&D and position the US in the technology development for future high-energy proton-proton colliders, including a possible 100 km-circumference facility at CERN under study by the Future Circular Collider (FCC) collaboration. In addition to the baseline design of MDPCT1, other design options for such a machine have been studied and will be tested in the coming years.

“The goal for this first magnet test was to limit the coil mechanical pre-load to a safe level, sufficient to produce a 14 T field in the magnet aperture,” explains MDPCT1 project leader Alexander Zlobin of Fermilab. “This goal was achieved after a short magnet training at 1.9 K: in the last quench at 4.5 K the magnet reached 14.1 T. Following this successful test the magnet pre-stress will be increased to reach its design limit of 15 T.”

The development of high-field superconducting accelerator magnets has received a strong boost from high-energy physics in the past decades. The current state of the art is represented by the LHC dipole magnets, which operate at 1.9 K to produce a field of around 8 T, enabling proton–proton collisions at an energy of 13 TeV. Exploring higher energies, up to 100 TeV at a possible future circular collider, requires higher magnetic fields to steer the more energetic beams. The goal is to double the field strength compared to the LHC dipole magnets, reaching up to 16 T, which calls for innovative magnet design and a different superconductor from the Nb-Ti used in the LHC. Currently, Nb3Sn (niobium tin) is being explored as a viable candidate for reaching this goal. High-temperature superconductors, such as REBCO, MgB2 and iron-based materials, are also being studied.
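The scaling behind these numbers is the textbook beam-rigidity relation (the machine parameters plugged in here are approximate and assumed for illustration):

$$p\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\,\rho\,[\mathrm{m}],$$

where $\rho$ is the bending radius. For the LHC, $B \approx 8.3$ T and $\rho \approx 2.8$ km give about 7 TeV per beam; for a 100 km FCC-hh with an effective bending radius of roughly 10.4 km, $B = 16$ T gives about 50 TeV per beam, i.e. 100 TeV collisions.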

HL-LHC first

The first accelerator magnets to use Nb3Sn technology are the 11 T dipole magnets and the final-focusing magnets under development for the high-luminosity LHC (HL-LHC), which will be installed around the interaction points. But the FCC would require more than 5000 superconducting dipoles grouped for powering in series and operating continuously over long time periods. A number of critical aspects underlie the design, cost-effective manufacturing and reliable operation of 16 T dipole magnets in future colliders. Among the targets for the Nb3Sn conductor is a critical current density of 1500 A/mm² at 16 T and 4.2 K – almost a 50% increase compared to the current state of the art. In addition to the conductor, developing an industry-adapted design for 16 T dipoles and other accelerator magnets with higher performance presents a major challenge.

Training quench history for the MDPCT1 demonstrator magnet

The FCC collaboration has launched a rigorous R&D programme towards 16 T magnets. Key components are the global Nb3Sn conductor development programme, featuring a network of academic institutes and industrial partners, and the 16 T magnet-design work package supported by the EU-funded EuroCirCol project. This is now being followed by a 16 T short-model programme aiming at constructing model magnets with several partners worldwide, such as the US MDP. Unit lengths of Nb3Sn wires with performance at least comparable to that of the HL-LHC conductor have already been produced by industry and cabled at CERN, while, at Fermilab, multi-filamentary wire produced with an internal oxidation process has already exceeded the critical current density target for the FCC – just two examples of many recent advances in this area. EuroCirCol, which officially wound up this year (see Study comes full EuroCirCol), has also enabled a design and cost model for the magnets of the FCC, demonstrating the feasibility of Nb3Sn technology.

“The enthusiasm of the worldwide superconductor community and the achievements are impressive,” says Amalia Ballarino, leader of the conductor activity at CERN. “The FCC conductor development targets are very challenging. The demonstration of a 14 T field in a dipole accelerator magnet and the possibility of reaching the target critical current density in R&D wires are milestones in the history of Nb3Sn conductor and a reassuring achievement for the FCC magnet development programme.”

Cloud services take off in the US and Europe

Fermilab has announced the launch of HEPCloud, a step towards a new computing paradigm in particle physics to deal with the vast quantities of data pouring in from existing and future facilities. The aim is to allow researchers to “rent” high-performance computing centres and commercial clouds at times of peak demand, thus reducing the costs of providing computing capacity. Similar projects are also gaining pace in Europe.

“Traditionally, we would buy enough computers for peak capacity and put them in our local data centre to cover our needs,” says Fermilab’s Panagiotis Spentzouris, one of HEPCloud’s drivers. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.” All Fermilab experiments will soon submit jobs to HEPCloud, which provides a uniform interface so that researchers don’t need expert knowledge about where and how best to run their jobs.
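The “elastic facility” idea is easy to caricature in code. The sketch below is hypothetical – none of these names are HEPCloud interfaces, and the capacity and price figures are invented: a scheduler fills the fixed local farm first and bursts only the overflow to a rented cloud pool.

    # Hypothetical sketch of elastic job placement (not HEPCloud's API).
    LOCAL_CORES = 10_000             # fixed on-site capacity (assumed)
    CLOUD_COST_PER_CORE_HOUR = 0.03  # assumed rental price, USD

    def place_jobs(requested_cores):
        """Fill local capacity first; burst the overflow to a commercial cloud."""
        local = min(requested_cores, LOCAL_CORES)
        cloud = max(0, requested_cores - LOCAL_CORES)
        return local, cloud

    for demand in (4_000, 60_000, 160_000):  # valleys and peaks
        local, cloud = place_jobs(demand)
        cost = cloud * CLOUD_COST_PER_CORE_HOUR
        print(f"demand {demand:>7}: {local} local + {cloud} cloud cores "
              f"(~${cost:,.0f}/hour)")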

The idea dates back to 2014, when Spentzouris and Fermilab colleague Lothar Bauerdick assessed the volumes of data coming from Fermilab’s neutrino programme and the US participation in CERN’s Large Hadron Collider (LHC) experiments. The first demonstration of HEPCloud on a significant scale was in February 2016, when the CMS experiment used it to achieve about 60,000 cores on the Amazon cloud, AWS, and, later that year, to run 160,000 cores using Google Cloud Services. Most recently, in May 2018, the NOvA team at Fermilab was able to execute around 2 million hardware threads at a supercomputer at the National Energy Research Scientific Computing Center of the US Department of Energy’s Office of Science. HEPCloud project members now plan to enable experiments to use the state-of-the-art supercomputing facilities run by the DOE’s Advanced Scientific Computing Research programme at Argonne and Oak Ridge national laboratories.

Europe’s Helix Nebula

CERN is leading a similar project in Europe called the Helix Nebula Science Cloud (HNSciCloud). Launched in 2016 and supported by the European Union (EU), it builds on work initiated by EIROforum in 2010 and aims to bridge cloud computing and open science. Working with IT contractors, HNSciCloud members have so far developed three prototype platforms and made them accessible to experts for testing.

“The HNSciCloud pre-commercial procurement finished in December 2018, having shown the integration of commercial cloud services from several providers (including Exoscale and T-Systems) with CERN’s in-house capacity in order to serve the needs of the LHC experiments as well as use cases from life sciences, astronomy, proton and neutron science,” explains project leader Bob Jones of CERN. “The results and lessons learned are contributing to the implementation of the European Open Science Cloud where a common procurement framework is being developed in the context of the new OCRE [Open Clouds for Research Environments] project.”

The European Open Science Cloud, an EU-funded initiative started in 2015, aims to bring efficiencies and make European research data more sharable and reusable. To help European research infrastructures move towards this open-science future, a €16 million EU project called ESCAPE (European Science Cluster of Astronomy & Particle Physics ESFRI) was launched in February. The 3.5-year project led by the CNRS will see 31 facilities in astronomy and particle physics collaborate on cloud computing and data science, including CERN, the European Southern Observatory, the Cherenkov Telescope Array, KM3NeT and the Square Kilometre Array (SKA).

In the context of ESCAPE, CERN is leading the effort of prototyping and implementing a FAIR (findable, accessible, interoperable, reusable) data infrastructure based on open-source software, explains Simone Campana of CERN, who is deputy project leader of the Worldwide LHC Computing Grid (WLCG). “This work complements the WLCG R&D activity in the area of data organisation, management and access in preparation for the HL-LHC. In fact, the computing activities of the CERN experiments at HL-LHC and other initiatives such as SKA will be very similar in scale, and will likely coexist on a shared infrastructure.”

Austrian synchrotron debuts carbon-ion cancer treatment

The ion-beam injectors of the MedAustron facility in Austria. Credit: MedAustron/T Kästenbauer

MedAustron, an advanced hadron-therapy centre in Austria, has treated its first patient with carbon ions. The medical milestone, which took place on 2 July 2019, elevates the particle-physics-linked facility to the ranks of only six centres worldwide that can combat tumours with both protons and carbon ions.

When protons and carbon ions strike biological material, they lose energy much more quickly than photons, which are traditionally used in radiotherapy. This makes it possible to deposit a large dose in a small and well-targeted volume, reducing damage to healthy tissue surrounding a tumour and thereby reducing the risk of side effects. While proton therapy has been successfully used at MedAustron since December 2016, treating more than 400 cancer patients so far, carbon-ion therapy opens up new opportunities to target tumours that were previously difficult or impossible to treat. Carbon ions are biologically more effective than protons and therefore allow a higher dose to be administered to the tumour.
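The dose profile exploited here is the Bragg peak, standard radiotherapy physics not detailed in the article: the stopping power of a charged ion grows steeply as it slows, following the leading dependence of the Bethe formula,

$$-\frac{dE}{dx} \propto \frac{z^2}{\beta^2},$$

where $z$ is the ion’s charge number (1 for protons, 6 for carbon) and $\beta$ its velocity as a fraction of $c$. Most of the energy is therefore deposited in a sharp peak at the end of the range, which clinicians place inside the tumour by tuning the beam energy; photons, by contrast, deposit their dose roughly exponentially from the entrance surface.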

MedAustron’s accelerator complex is based on the CERN-led Proton Ion Medical Machine Study, the design subsequently developed by CERN, the TERA Foundation, INFN in Italy and the CNAO Foundation (see “Therapeutic particles”). Substantial help was also provided by the Paul Scherrer Institute, in particular for the gantry and beam-delivery designs. The MedAustron system comprises an injector, where ions from three ion sources are pre-accelerated by a linear accelerator; a synchrotron; a high-energy beam transport system to deliver the beam to various beam ports; and a medical front-end, which controls the irradiation process and covers all safety aspects with respect to the patient. Certified as a medical product, the accelerator provides proton and carbon-ion beams with a penetration depth of up to about 37 cm in water-equivalent tissue, and is able to deliver carbon ions at 255 different energies ranging from 120 to 400 MeV/u, with maximum intensities of up to 10⁹ ions per extracted beam pulse.

The MedAustron proton/carbon-ion synchrotron

“The first successful carbon-ion treatment unveils MedAustron’s full potential for cancer treatment,” says Michael Benedikt of CERN, who co-ordinated the laboratory’s contributions to the project. “The realisation of MedAustron, through the collaboration with CERN for the construction of the accelerator facility, is an excellent example of large-scale technology transfer from fundamental research to societal applications.”

Particle therapy with carbon ions was first used in Japan in 1994, and a total of almost 30,000 patients worldwide have since been treated with this method. Initially, treatment with carbon ions at MedAustron will focus on tumours in the head and neck region, and at the base of the skull. But the spectrum will be continuously expanded to include other tumour types. MedAustron is also working on the completion of an additional treatment room with a gantry that administers proton beams from a large variety of irradiation angles.

“Irradiation with carbon ions makes it possible to maintain both the physical functions and the quality of life of patients, even with very complicated tumours,” says Piero Fossati, scientific and clinical director of MedAustron’s carbon ion programme.

LEAPS Plenary Meeting 2019

LEAPS – the League of European Accelerator-based Photon Sources – is a strategic consortium initiated by the directors of the synchrotron-radiation and free-electron laser (FEL) user facilities in Europe. Its primary goal is to actively and constructively ensure and promote the quality and impact of the fundamental, applied and industrial research carried out at their respective facilities, to the greater benefit of European science and society.

The Plenary Meeting offers an insight into the LEAPS Strategy

Vertex 2019: 28th International Workshop on Vertex Detectors

The International Workshop on Vertex Detectors (VERTEX) is a major annual series of international workshops for physicists and engineers from the high-energy and nuclear-physics community. VERTEX provides an international forum to exchange the experiences and needs of the community, and to review recent, ongoing and future activities on silicon-based vertex detectors. The workshop covers a wide range of topics: existing and future detectors, new developments, radiation hardness, simulation, tracking and vertexing, electronics and triggering, and applications to medical and other fields.

COOL 2019

The biennial 12th International Workshop COOL’19 will be held on 23–27 September 2019, co-hosted by the Budker Institute of Nuclear Physics SB RAS and Novosibirsk State University. The workshop will focus on various aspects of the methods and techniques for cooling charged-particle beams. Workshop topics:

  • electron cooling
  • stochastic cooling
  • muon cooling
  • cooled beam dynamics
  • new concepts and theoretical advancements in beam cooling
  • facility status updates and beam cooling reviews