The release of the full mission data of ESA’s Planck spacecraft is a milestone for cosmology. Despite the high quality of the data – including, for the first time, polarization observations – and the thorough analysis by the Planck collaboration, the release has had little impact on the community and the public. The reason is that, rather than strengthening the case for new physics, the data confirm with high accuracy the standard model of cosmology, disfavour the existence of a light sterile neutrino, and dash hopes of a dark-matter origin for the positron excess in cosmic rays.
Planck is the third generation of missions dedicated to the observation of the cosmic microwave background (CMB). This “first light” – freed 380,000 years after the Big Bang – provides key information on the universe as a whole, its fundamental constituents and its past and future evolution. Planck builds on the detection of the first large-scale CMB anisotropies by the Cosmic Background Explorer (COBE) in the early 1990s, and the much sharper view provided by NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) in the following decade (CERN Courier May 2006 p12). Planck’s strength is that, by extending the wavelength coverage to higher frequencies than measured by WMAP, it is better able to disentangle foreground emission from the Milky Way and other galaxies.
The Planck collaboration – including individuals from more than 100 scientific institutes in Europe, the US and Canada – released the first data of the mission in March 2013, together with 31 scientific papers (CERN Courier May 2013 p12). Already then, the results confirmed with higher accuracy the relative abundance of the cosmic ingredients, namely ordinary (baryonic) matter, cold dark matter (CDM) and a cosmological constant (Λ), as derived by WMAP (CERN Courier May 2008 p8). The new Planck results confirm that these components sum up to the critical density corresponding to a flat universe with no global curvature, and yield only small changes to the relative abundances in this standard ΛCDM cosmology.
The press release from ESA emphasizes the result that the re-ionization of the universe by the first stars took place some 550 million years after the Big Bang – about 100 million years later than previously estimated for the end of the “Dark Ages”. There are, however, other interesting results on more fundamental physics. Planck further limits the sum of the active neutrino masses to below 0.194 eV, and constrains the effective number of light, relativistic neutrinos, Neff, to 3.04±0.33 (both at 95% CL with external constraints). This strongly disfavours the existence of an additional fourth quasi-massless neutrino – which could have been a sterile (“right-handed”) one – but it does not exclude the possibility of heavier (> 1–10 eV) sterile neutrinos.
Another interesting result, coming from CMB polarization measurements, is a constraint on dark-matter annihilation, which could have contributed to the re-ionization of the universe at the beginning of the “Dark Ages”. The derived upper limit almost entirely excludes the dark-matter interpretation – a neutralino with a mass of the order of 1 TeV – of the positron excess measured by the Alpha Magnetic Spectrometer (CERN Courier November 2014 p6). Finally, the combined analysis of Planck data with the measurements by BICEP2 and the Keck Array of the curly B-modes of the CMB polarization confirms that most, if not all, of the signal comes from dust in our own Galaxy (CERN Courier November 2014 p15), thereby undercutting the claimed detection of primordial gravitational waves from inflation (CERN Courier May 2014 p13).
Technologies developed for fundamental research in particle, astro-particle and nuclear physics have an enormous impact on everyday lives. To push back scientific frontiers in these fields requires innovation: new ways to detect one signal in a wealth of data, new techniques to sense the faintest signals, new detectors that operate in hostile environments, new engineering solutions that strive to improve on the best – and many others.
The scientific techniques and high-tech solutions developed by high-energy physics can help to address a broad range of challenges faced by industry and society – from developing more effective medical imaging and cancer diagnosis through positron-emission tomography techniques, to developing the next generation of solar panels using ultra-high vacuum technologies. However, it is difficult and costly not only for many organizations to carry out the R&D needed to develop new applications, products and processes, but also for scientists and engineers to turn their technologies into commercial opportunities.
The aim of the high-energy physics technology-transfer network – HEPTech – is to bring together leading European high-energy physics research institutions so as to provide academics and industry with a single point of access to the skills, capabilities, technologies and R&D opportunities of the high-energy physics community in a highly collaborative open-science environment. As a source of technology excellence and innovation, the network bridges the gap between researchers and industry, and accelerates the industrial process for the benefit of the global economy and wider society.
HEPTech is made up of major research institutions active in particle, astroparticle and nuclear physics. It has a membership of 23 institutions across 16 countries, including most of the CERN member states (see table). Detailed information about HEPTech member organizations and an overview of the network’s activities are published annually in the HEPTech Yearbook and are also available on the network’s website.
So, how was the network born? Jean-Marie Le Goff, the first co-ordinator and present chairman of HEPTech, explains: “Particle physics is a highly co-operative environment. The idea was to spread that spirit over to the Technology Transfer Offices.” So in 2008 a proposal was made to the CERN Council to establish a network of Technology Transfer Offices (TTOs) in the field of particle physics. The same year, Council approved the network for a pilot phase of three years, reporting annually to the European Strategy Session of Council. In the light of the positive results obtained over those three years, Council approved the continuation of the network’s activities and its full operation. “Since then it has grown – both in expanding the number of members and in facilitating bodies across Europe that can bring innovation from high-energy physics faster to industrial exploitation”, says Le Goff.
The primary objective of the HEPTech network is to enhance technology transfer (TT) from fundamental research in physics to society. Therefore, the focus is on furthering knowledge transfer (KT) from high-energy physics to other disciplines, industry and society, as well as on enhancing TT from fundamental research in physics to industry for the benefit of society. The network also aims to disseminate intellectual property, knowledge, skills and technologies across organizations and industry, and to foster collaborations between scientists, engineers and business. Another important task is to enable the sharing of best practices in KT and TT.
HEPTech’s activities are fully in line with its objectives. To foster the contacts with industry at the European level, the network organizes regular academia–industry matching events (AIMEs). These are technology-themed events that provide matchmaking between industrial capabilities and the needs of particle physics and other research disciplines. They are HEPTech’s core offering to its members and the wider community, and the network has an active programme in this respect. Resulting from joint efforts by the network and its members, the AIMEs usually attract about 100 participants from more than 10 countries (figures from 2014). Last year, the topics ranged from the dissemination of micropattern-gas-detector technologies beyond fundamental physics, through potential applications in the technology of controls, to fostering academia–industry collaboration for manufacturing large-area detectors for the next generation of particle-physics experiments, and future applications of laser technologies.
“The topics of the events are driven on the one hand by the technologies we have – it’s very much a push model. On the other hand, they are the result of a mutual effort between the network and its members, where the members have the biggest say because they put in a lot of effort”, says Ian Tracey, the current HEPTech co-ordinator. He believes that a single meeting between the right people from academia and industry is only the first step in the long process of initiating co-operation. To establish a project fully, the network should provide an environment for regular, repeated contact between the same people. To address this need, HEPTech is looking to increase the number of AIMEs from the initial four up to eight events per year.
“The benefit of having HEPTech as a co-organizer of the AIMEs is clearly the European perspective”, says Katja Kroschewski, head of TT at DESY. “Having speakers from various countries enlarges the horizon of the events and allows coverage of the subject field across Europe. It is different from doing a local event – for instance, having companies only from Hamburg or just with the focus on Germany. As the research work concerned has an international scope, it absolutely makes sense to organize such events. It is good to have the input of HEPTech in shaping the programme of the event and to have the network’s support within the organizing committee as well.”
HEPTech has teamed up with the work package on relations with industry of the Advanced European Infrastructures for Detectors at Accelerators (AIDA) project (which was co-funded by the European Commission under FP7 in 2011–2014), to organize AIMEs on detectors, with a view to fostering collaboration with industry during the pre-procurement phase. A total of seven AIMEs were organized in collaboration with AIDA and the RD51 collaboration at CERN, covering most of the technology fields of importance for detectors at accelerators. HEPTech financed four of them. A total of 101 companies attended the events, giving an average of 14 companies per event. For technology topics where Europe could meet the needs of academia, the percentage of EU industry was about 90% or above, going down to 70% when the leading industry for a technology topic was in the US and/or Asia.
To help event organizers find pertinent academic and industrial players in the hundreds, sometimes thousands, of organizations active in a particular technology, CERN used graph-analysis techniques to develop a tool called “Collaboration spotting”. The tool automatically processes scientific publications, patents and data from various sources, selects pertinent information and populates a database that is later used to automatically generate interactive sociograms representing the activity occurring in individual technology fields. Organizations and their collaborations are displayed in a graph that makes the tool valuable for monitoring and assessing the AIMEs.
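As a rough illustration of the idea – not the actual Collaboration spotting code, whose data sources and schema are not described in this article – the sketch below builds a small co-publication graph in Python with the networkx library. The organization names and records are invented for the example.

```python
# A minimal sketch of a co-publication sociogram in the spirit of
# "Collaboration spotting". Organization names and records below are
# invented for illustration; the real tool mines publications and patents.
from itertools import combinations

import networkx as nx

# Each record lists the organizations credited on one publication or
# patent in a given technology field (hypothetical sample data).
publications = [
    {"CERN", "DESY", "Acme Detectors Ltd"},
    {"CERN", "Acme Detectors Ltd"},
    {"DESY", "Universita di Roma"},
    {"CERN", "DESY"},
]

G = nx.Graph()
for orgs in publications:
    # Every pair of organizations on the same paper gets an edge;
    # repeated collaborations increase the edge weight.
    for a, b in combinations(sorted(orgs), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# Rank organizations by centrality - a crude stand-in for spotting
# the pertinent academic and industrial players in the field.
for org, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{org}: {score:.2f}")
```

In the real tool, a graph of this kind is populated automatically from publications and patents, and the rendered sociograms let organizers spot the central academic and industrial players in a technology field.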
However, the findings from AIDA show that it is difficult to conduct an assessment of the impact on industry of an AIME. “To keep a competitive advantage, companies entering a partnership agreement with academia tend to restrict the circulation of this news as much as possible, at least until the results of the collaboration become commercially exploitable,” explains Le Goff. “Although it tends to take some years before becoming visible, an increase in the number of co-publications and co-patents among attendees is a good indicator of collaboration. Clearly some of them could have been initiated at preceding events or under other circumstances, but in any case, the AIME has contributed to fostering or consolidating these collaborations.”
Learning and sharing
Another area of activity is the HEPTech Symposium, which is dedicated to the support of young researchers in developing entrepreneurial skills and in networking. This annual event brings together researchers at an early stage in their careers who are working on potentially impactful technologies in fields related to astro-, nuclear and particle physics. For one week, HEPTech welcomes these Early Stage Researchers from around Europe, providing an opportunity for networking with commercially experienced professionals and TT experts and for developing their entrepreneurial potential.
The first HEPTech Symposium took place in June 2014 in Cardiff. The young researcher whose project attracted the greatest interest was awarded an expenses-paid trip around the UK to look for funding for his project. The 2015 symposium will be held in Prague on 31 May–6 June and will be hosted by Inovacentrum from the Czech Technical University in collaboration with ELI Beamlines and the Institute of Physics of the Academy of Sciences. HEPTech has established a competitive procedure for members that would like to host the event in future. Those interested have to demonstrate their capacity for organizing both a quality training programme and the entertainment of the participants.
Providing opportunities for capacity-building and sharing best practice among its members is of paramount importance to HEPTech. The network is highly active in investigating and implementing novel approaches to TT. A dedicated workgroup on sharing best practices responds to requests from members that are organizing events on a number of subjects relevant to the institutions and their TT process. These include, for instance, workshops presenting cases on technology licensing, the marketing of science and technology, and others. Through these workshops, the network is able to raise the skills of its member institutions and provide capacity-building by sharing techniques and different approaches to the challenges faced within TT. These events – an average of four per year – are driven by the members’ enthusiasm to explore advanced techniques in KT and TT, and help to create a collaborative spirit within the network. The members provide significant assistance in implementing these events, including lecturers and workshop organization.
Bojil Dobrev, co-convener of the workgroup on best practices, provides a recent example of best-practice transfer within the network, in which intellectual-property (IP) regulations elaborated by a HEPTech workgroup were successfully used as a basis for the development of IP regulations at Sofia University, Bulgaria. In 2013–2014, a survey focusing on the needs and skills of HEPTech members was conducted within the remit of this workgroup. The objectives were to identify the skills and potential of the HEPTech members and their requirements for support through the network, focusing mainly on recently established TTOs. The survey covered all aspects of a TTO’s operation – from organization and financing, through IP activities, start-ups, licensing and contacts with industry, to marketing and promotion. “The survey was used as a tool to investigate the demand of the TTOs. Its outcomes helped us to map HEPTech’s long-term strategy and to elaborate our annual work plan, particularly in relation to training and best-practice sharing”, explains Dobrev.
Taking into consideration the overall achievements of HEPTech and based on the annual reports of the network co-ordinator, CERN Council encouraged HEPTech to continue its activities and amplify its efforts in the update of the European Strategy for Particle Physics in May 2013. The following year, in September, the Council president gave strong support and feedback for HEPTech’s work.
HEPTech’s collaborative efforts with the European Extreme Light Infrastructure (ELI) project resulted in network membership of all three pillars of the project. Moreover, at the Annual Forum of the EU Strategy for the Danube Region, representatives of governments in the Danube countries acknowledged HEPTech’s role as a key project partner in the Scientific Support to the Danube Strategy initiative.
With its stated vision to become “the innovation access-point for accelerator- and detector-driven research infrastructures” within the next three years, HEPTech is looking to expand – indeed, three new members joined the network in December 2014. It also aims to take part in more European-funded projects and is seeking closer collaboration with other large-scale science networks, such as the European TTO Circle – an initiative of the Joint Research Centre of the European Commission, which aims to connect the TTOs of large European public research organizations.
The neutrino is the most abundant matter-particle in the universe and the lightest, most weakly interacting of the fundamental fermions. The way in which a neutrino’s flavour changes (oscillates) as it propagates through space implies that there are at least three different neutrino masses, and that mixing of the different mass states produces the three known neutrino flavours. The consequences of these observations are far reaching, because they imply that the Standard Model is incomplete; that neutrinos may make a substantial contribution to the dark matter known to exist in the universe; and that the neutrino may be responsible for the matter-dominated universe in which we live.
This wealth of scientific impact justifies an energetic programme to measure the properties of the neutrino, interpret these properties theoretically, and understand their impact on particle physics, astrophysics and cosmology. The scale of investment required to implement such a programme requires a coherent, international approach. In July 2013, the International Committee for Future Accelerators (ICFA) established its Neutrino Panel for the purpose of promoting both international co-operation in the development of the accelerator-based neutrino-oscillation programme and international collaboration in the development of a neutrino factory as a future intense source of neutrinos for particle-physics experiments. The Neutrino Panel’s initial report, presented in May 2014, provides a blueprint for the international approach (Cao et al. 2014).
The accelerator-based contributions to this programme must be capable both of determining the neutrino mass hierarchy and of searching for violation of the CP symmetry in neutrino oscillations. The complexity of the oscillation patterns is sufficient to justify two complementary approaches that differ in the nature of the neutrino beam and the neutrino-detection technique (Cao et al. 2015). In one approach, which is adopted by the Hyper-K collaboration, a neutrino beam of comparatively low energy and narrow energy spread (a narrow-band beam) is used to illuminate a “far” detector at a distance of approximately 300 km from the source (see “Proto-collaboration formed to promote Hyper-Kamiokande”). In a second approach, a neutrino beam with a higher energy and a broad spectrum (a wide-band beam) travels more than 1000 km through the Earth before being detected.
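To see why beam energy and baseline scale together in the two designs, it helps to write down the leading-order appearance probability in vacuum – a standard textbook expression, quoted here for orientation rather than taken from the Panel’s reports:

```latex
% Leading-order nu_mu -> nu_e appearance probability in vacuum
P(\nu_\mu \to \nu_e) \simeq \sin^2 2\theta_{13}\, \sin^2\theta_{23}\,
  \sin^2\!\left( \frac{\Delta m^2_{31} L}{4E} \right),
\qquad
\frac{\Delta m^2_{31} L}{4E} \approx 1.27\,
  \frac{\Delta m^2_{31}\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}
```

The first oscillation maximum sits at a fixed ratio L/E, which is why a narrow-band beam of several hundred MeV matches a baseline of roughly 300 km, while a wide-band beam of a few GeV matches baselines beyond 1000 km. The longer path has a further virtue: matter effects along the way shift the probability in opposite directions for the two mass orderings, which is what gives the wide-band approach its handle on the hierarchy.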
Since summer 2014, a global neutrino collaboration has come together to pursue the second approach, using Fermilab as the source of a wide-band neutrino beam directed at a far detector located deep underground in South Dakota. In addition to measuring the neutrinos in the beam, the experiment will study neutrino astrophysics and nucleon decay. This experiment will be of an unprecedented scale and dramatically improve the understanding of neutrinos and the nature of the universe.
This new collaboration – currently dubbed ELBNF for Experiment at the Long-Baseline Neutrino Facility – has an ambitious plan to build a modular liquid-argon time-projection chamber (LAr-TPC) with a fiducial mass of approximately 40 kt as the far detector and a high-resolution “near” detector. The collaboration is leveraging the work of several independent efforts from around the world that have been developed through many years of detailed studies. These groups have now converged around the opportunity provided by the megawatt neutrino-beam facility that is planned at Fermilab and by the newly planned expansion with improved access of the Sanford Underground Research Facility in South Dakota, 1300 km from Fermilab. To give a sense of scale, housing this detector some 1.5 km underground requires a hole of approximately 120,000 m³ – nearly equivalent to the volume of Wimbledon’s centre-court stadium.
The principal goals of ELBNF are to carry out a comprehensive investigation of neutrino oscillations, to search for CP-invariance violation in the lepton sector, to determine the ordering of the neutrino masses and to test the three-neutrino paradigm. In addition, with a near detector on the Fermilab site, the ELBNF collaboration will perform a broad set of neutrino-scattering measurements. The large volume and exquisite resolution of the LAr-TPC in its deep underground location will be exploited for non-accelerator physics topics, including atmospheric-neutrino measurements, searches for nucleon decay, and measurement of astrophysical neutrinos (especially those from a core-collapse supernova).
The new international team has the necessary expertise, technical knowledge and critical mass to design and implement this exciting “discovery experiment” in a relatively short time frame. The goal is the deployment of a first detector with 10 kt fiducial mass by 2021, followed by implementation of the full detector mass as soon as possible. The accelerator upgrade of the Proton Improvement Plan-II at Fermilab will provide 1.2 MW of power by 2024 to drive a new neutrino beam line at the laboratory. There is also a plan that could further upgrade the Fermilab accelerator complex to enable it to provide up to 2.4 MW of beam power by 2030 – an increase of nearly a factor of seven on what is available today. With the possibility of space for expansion at the Sanford Underground Research Facility, the new international collaboration will develop the necessary framework to design, build and operate a world-class deep-underground neutrino and nucleon-decay observatory. This plan is aligned with both the updated European Strategy for Particle Physics and the report of the Particle Physics Project Prioritization Panel (P5) written for the High Energy Physics Advisory Panel in the US.
A letter of intent (LoI) was developed during autumn 2014, and the first collaboration meeting was held in mid January at Fermilab. Sergio Bertolucci, CERN’s director for research and computing, and interim chair of the institutional board, ran the meeting. More than 200 participants from around the world attended, and close to 600 physicists from 140 institutions and 20 countries have signed the LoI, to date. The collaboration has chosen its new spokespersons – Mark Thomson of Cambridge University and André Rubbia of ETH Zurich – and will begin the process of securing the early funding needed to excavate the cavern in a timely fashion so that detector installation can begin in the early 2020s.
Mounting such a significant experiment on such a compressed time frame will require all three world regions – Asia, the Americas and Europe – to work in concert. The pioneering work on the liquid-argon technique was carried out at CERN and implemented in the ICARUS detector, which ran for many years at Gran Sasso. To deliver the ELBNF far detector requires that the LAr-TPC technology be scaled up to an industrial scale. To deliver the programme required to produce the large LAr-TPC, neutrino platforms are being constructed at Fermilab and at CERN. A team led by Marzio Nessi is working hard to make this resource available at CERN and to complete in the next few years the R&D needed for both the single- and dual-phase liquid-argon technologies that are being proposed on a large scale (see box).
The steps taken by the neutrino community during the nine months or so since summer 2014 have put the particle-physics community on the road towards an exciting and vibrant programme that will culminate in exquisitely precise measurements of neutrino oscillation. It will also establish in the US one of the flagships of the international accelerator-based neutrino programme called for by the ICFA Neutrino Panel. In addition, ELBNF will be a world-leading facility for the study of neutrino astrophysics and cosmology.
With such a broad and exciting programme, ELBNF will be at the forefront of the field for several decades. The remarkable success of the LHC programme has demonstrated that a facility of this scale can deliver exceptional science. The aim is that ELBNF will provide a second example of how the world’s high-energy-physics community can come together to deliver an amazing scientific programme. New collaborators are still welcome to join in the pursuit.
• In March the new collaboration chose the name DUNE for Deep Underground Neutrino Experiment.
ICARUS and WA105
The ICARUS experiment, led by Carlo Rubbia, employs the world’s largest (to date) liquid-argon detector, which was built for studies of neutrinos from the CNGS beam at CERN. The ICARUS detector with its 600 tonnes of liquid argon took data from 2010 to 2012 at the underground Gran Sasso National Laboratory. ICARUS demonstrated that the liquid-argon detector has excellent spatial and calorimetric resolution, making for perfect visualization of the tracks of charged particles. The detector has since been removed and taken to CERN to be upgraded prior to sending it to Fermilab, where it will begin a new scientific programme.
For more than a decade, the neutrino community has been interested in mounting a truly giant liquid-argon detector with some tens of kilotonnes of active mass for next-generation long-baseline experiments, neutrino astrophysics and proton-decay searches – and, in particular, for searches for CP violation in the neutrino sector. WA105, an R&D effort located at CERN and led by André Rubbia of ETH Zurich, should be the “last step” of detector R&D before the underground deployment of detectors on the tens-of-kilotonne scale. The WA105 demonstrator is a novel dual-phase liquid-argon time-projection chamber that is 6 m on a side. It is already being built, and should be ready for test beam by 2017 in the extension of CERN’s North Area that is currently under construction.
On 31 December, commissioning of the Taiwan Photon Source (TPS) at the National Synchrotron Radiation Research Center (NSRRC) brought 2014 to a close on a highly successful note as a 3 GeV electron beam circulated in the new storage ring for the first time. A month later, the TPS was inaugurated in a ceremony that officially marked the end of the 10-year journey since the project was proposed in 2004, the past five years being dedicated to the design, development, construction and installation of the storage ring.
The new photon source is based on a 3 GeV electron accelerator consisting of a low-emittance synchrotron storage ring 518.4 m in circumference and a booster ring (CERN Courier June 2010 p16). The two rings are designed in a concentric fashion and housed in a doughnut-shaped building next to a smaller circular building where the Taiwan Light Source (TLS), the first NSRRC accelerator, sits (see cover). The TLS and the new TPS will together serve scientists worldwide whose experiments require photons ranging from infrared radiation to hard X-rays with energies above 10 keV.
Four-stage commissioning
The task of commissioning the TPS comprised four major stages involving: the linac system plus the transportation of the electron beam from the linac to the booster ring; the booster ring; the transportation of the electron beam from the booster ring to the storage ring; and, finally, the storage ring. Following the commissioning of the linac system in May 2011, the acceptance tests of key TPS subsystems progressed one after the other over the next three years. The 700 W liquid-helium cryogenic system, beam-position monitor electronics, power supplies for quadrupole and sextupole magnets, and two sets of 2 m-long in-vacuum undulators completed their acceptance tests in 2012. Two modules of superconducting cavities passed their 300 kW high-power tests. The welding, assembly and baking of the 14 m-long vacuum chambers designed and manufactured by in-house engineers were completed in 2013. Then, once the installation of piping and cable trays had begun, the power supply and other utilities were brought in, and set-up could start on the booster ring and subsystems in the storage ring.
The installation schedule was also determined by the availability of magnets. By April 2014, 80% of the 800 magnets had been installed in the TPS tunnel, allowing completion of the accelerator installation in July (bottom right). Following the final alignment of each component, preparation for the integration tests of the complete TPS system in the pre-commissioning phase was then fully under way by autumn.
The performance tests and system integration of the 14 subsystems in the pre-commissioning stage started in August. By 12 December, the TPS team had begun commissioning the booster ring. The electron beam was accelerated to 3 GeV on 16 December, and the booster’s efficiency reached more than 60% a day later. Commissioning of the storage ring began on 29 December. The next day, the team injected electrons for the first time and the beam completed its first full turn. A 3 GeV electron beam with a stored current of 1 mA was then achieved, and the first synchrotron light was observed in the early afternoon of 31 December (far right). The stored current reached 5 mA a few hours later, just before the shutdown for the New Year holiday. As of the second week of February 2015, the TPS stored beam current had increased to 50 mA.
The US$230 million project (excluding NSRRC staff wages) involved more than 145 full-time staff members in design and construction. Like any other multi-million-dollar, large-scale project, reaching “first light” required ingenious problem solving and use of resources. Following the groundbreaking ceremony in February 2010 and six months of preparing the land for construction, the TPS project was on a fast track. Pressures came from the worldwide financial crisis, devaluation of the domestic currency, reduction of the initially approved funding, attrition of young engineers who were recruited by high-tech industries once they had been trained with special skills, and bargaining with vendors. In addition, the stringent project requirements left little room for even small deviations from the delivery timetable or system specifications, which could have allowed budget re-adjustments.
To meet its mandate on time, the project placed reliance and pressure on experienced staff members. Indeed, more than half of the TPS team and the supporting advisers had participated in the construction of the TLS in the 1980s. During construction of the TPS, advisers from all over the world worked alongside the in-house team, and their expertise played an important role in problem solving. In addition, seven intensive review meetings took place, conducted by the Machine Advisory Committee.
From the land preparation in 2010 onwards, the civil-construction team faced daily challenges. For example, at the heart of the Hsinchu Science Park, the TPS site is surrounded by heavy traffic, 24 hours a day, all year round. To eliminate the impact of vibration from all possible sources, the 20 m wide concrete floor of the accelerator tunnel is 1.6 m thick. Indeed, the building overall can resist an earthquake acceleration of 0.45 g, which is higher than the Safe Shutdown Earthquake criteria for US nuclear power plants required by the US Nuclear Regulatory Commission.
The civil engineering took an unexpected turn at the very start when a deep trench of soft soil, garbage and rotting plants was uncovered 14 m under the foundations. The 100 m long trench was estimated to be 10 m wide and nearly 10 m thick. The solution was to fill the trench with a customized lightweight concrete with the hardness and geological characteristics of the neighbouring foundations. The delay in construction caused by clearing out the soft soil led to installation of the first accelerator components inside the TPS shielding walls in a dusty, unfinished building with no air conditioning. The harsh working environment in summer, with temperatures sometimes reaching 38 °C, made the technological challenges seem almost easy.
Technology transfer
The ultra-high-vacuum system was designed and manufactured by NSRRC scientists and engineers, who also trained local manufacturers in the special technique of welding, the clean-room setup, and processing in an oil-free environment. This transfer of technology is helping the factories to undertake work involving the extensive use of lightweight aluminium alloy in the aviation industry. During the integration tests, the magnetic permeability of the vacuum system in the booster ring, otherwise perfectly tailored for the TPS, proved not to meet the required standard. The elliptical chambers were removed immediately to undergo demagnetization heat-treatment in a furnace heated to 1050 °C. For the 2 m-long components this annealing took place in a local factory, while shorter components were treated at the NSRRC. The whole system was back online after only three weeks – with an unexpected benefit: after the annealing process, the relative magnetic permeability of the stainless-steel vacuum chambers reached 1.002, lower than the specification of 1.01 currently adopted at light-source facilities worldwide.
The power supplies of the booster dipole magnets were produced abroad and had several problems. These included protection circuits that overheated to the extent that a fire broke out, causing the system to shut down during initial integration tests in August. As the vendor could not schedule a support engineer to arrive on site before late November, the NSRRC engineers instead quickly implemented a reliable solution themselves and resumed the integration process in about a week. The power supplies for the quadrupole and sextupole magnets of the storage ring were co-produced by the NSRRC and a domestic manufacturer, and deliver a current of 250 A, stable to less than 2.5 mA. Technology transfer from the NSRRC to the manufacturer on the design and production of this precise power supply is another byproduct of the TPS project.
24-hour shifts
Ahead of completion of the TPS booster ring, the linac was commissioned at a full-scale test site built as an addition to the original civil-construction plan (CERN Courier July/August 2011 p11). The task of disassembling and moving the linac to the TPS booster ring, re-assembling it and testing it again was not part of the initial plan in 2009. The relocation process nearly doubled the effort and work time. As a result, the four-member NSRRC linac team had to work 24-hour shifts to keep to the schedule and budget – saving the US$700,000 in disassembly and re-assembly fees that the original manufacturer would have charged. After the linac had been relocated, the offsite test facility was transformed into a test site for the High-Brightness Injector Group.
Initially, the TPS design included four superconducting radiofrequency (SRF) modules based on the 500 MHz modules designed and manufactured at KEK in Japan for the KEKB storage ring. However, after the worldwide financial crisis in 2008 caused the cost of materials to soar nearly 30%, the number of SRF modules was reduced to three and the specification for the stored electron beam was reduced from 400 mA to 300 mA. But collaboration and technology transfer on a higher-order mode-damped SRF cavity for high-intensity storage rings from KEK has allowed the team at NSRRC to modify the TPS cavity to produce higher operational power and enable a stored electron beam of up to 500 mA – better, that is, than the original specification. (Meanwhile, the first phase of commissioning in December used three conventional five-cell cavities from the former PETRA collider at DESY – one for the booster and two for the storage ring – which had been purchased from DESY and refurbished by the NSRRC SRF team.)
The TPS accelerator uses more than 800 magnets designed by the NSRRC magnet group, which were contracted to manufacturers in New Zealand and Denmark for mass production. To control the electron beam’s orbit as defined by the specification, the magnetic pole surfaces must be machined to an accuracy of less than 0.01 mm. At the time, the New Zealand factory was also producing complicated and highly accurate magnets for the NSLS-II accelerator at Brookhaven National Laboratory. To prevent delays in delivering the TPS magnets – a possible result of limited factory resources being shared by two large accelerator projects – the NSRRC assigned staff members to stay at the overseas factory to perform on-site inspection and testing at the production line. Any product that failed to meet the specification was returned to the production line immediately. The manufacturer in New Zealand also constructed a laboratory that simulated the indoor environment of the TPS with a constant ambient temperature. Once the magnets reached an equilibrium temperature corresponding to a room temperature of 25°C in the controlled laboratory, various tests were conducted.
Like the linac, the TPS cryogenic system was commissioned at a separate, specially constructed test site. The helium cryogenic plant was disassembled and reinstalled inside the TPS storage ring in March 2014, followed by two months of function tests. With the liquid-nitrogen tanks situated at the northeast corner, outside and above the TPS building, feeding the TPS cooling system – which stretches several hundred metres – is a complex operation: it must maintain smooth transport and steady flow without any part of the system shutting down because of fluctuations in coolant temperature or pressure. The cold test and the heat-load test of the liquid-helium transfer line are scheduled to finish by the end of March 2015, so that the liquid-helium supply will be ready for the SRF cavities early in April.
Since both the civil engineering and the construction of the accelerator itself proceeded in parallel, the TPS team needed to conduct acceptance tests of most subsystems off-site, owing to the compact and limited space in the NSRRC campus. When all of the components began to arrive at the yet-to-be completed storage ring, the installation schedule was planned mainly according to the availability of magnets. This led to a two-step installation plan. In the first half of the ring, bare girders were set up first, followed by the installation of the magnets as they were delivered and then the vacuum chambers. For the second half of the ring, girders with pre-mounted magnets were installed, followed by the vacuum chambers. This allowed error-sorting with the beam-dynamics model to take place before finalizing the layout of the magnets for the minimum impact on the beam orbit. Afterwards, the final alignment of each component and tests of the integrated hardware were carried out in readiness for the commissioning phase.
Like other large-scale projects, leadership played a critical role in the success of completing the TPS construction to budget and on schedule. Given the government budget mechanism and the political atmosphere created by the worldwide economic turmoil over the past decade, leaders of the TPS project were frequently second-guessed on every major decision. Only by having the knowledge of a top physicist, the mindset of a peacemaker, the sharp sense of an investment banker and the quality of a versatile politician, were the project leaders able to guide the team to focus unwaveringly on the ultimate goal and turn each crisis into an opportunity.
Quarkonium lies at the very foundation of quantum chromodynamics (QCD). In the 1970s, following the discovery of the J/ψ in 1974, the narrow width (and later the hyperfine splittings) of quarkonium states spectacularly corroborated the asymptotic freedom predicted by QCD in 1973, and served to establish QCD as the theory of the strong interaction (CERN Courier January/February 2013 p24). Further progress in explaining quarkonium physics in terms of QCD turned out, however, to be slow in coming, and for a long time relied on models. The reason for these difficulties is that non-relativistic bound states, such as quarkonia, are multiscale systems. While some processes, such as annihilations, happen at the heavy-quark mass scale and, as a consequence of asymptotic freedom, are well described by perturbative QCD, all quarkonium observables are also affected by low-energy scales. If these scales are low enough for perturbative QCD to break down, they call for a nonperturbative treatment.
In the 1990s, the development of non-relativistic effective field theories such as non-relativistic QCD (1986, 1995) and potential non-relativistic QCD (1997, 1999) led to a systematic factorization of high-energy effects from low-energy effects in quarkonia. Progress in lattice QCD allowed an accurate computation of the latter. Hence, the theory of quarkonium physics became fully connected to QCD. The founding of the Quarkonium Working Group (QWG) in 2002 was driven mostly by this theoretical progress and the urgency and enthusiasm to transmit the new paradigm. Electron–positron collider experiments (BaBar, Belle, BES, CLEO) and experiments at Fermilab’s Tevatron were yielding quarkonium data with unprecedented precision, and QCD was in a position to take full advantage of these data.
The QWG gathered together experimentalists and theorists to establish a common language, highlight unsolved problems, set future research directions, discuss the latest data, and suggest new analyses in quarkonium physics. Its first meeting took place at CERN in November 2002, where the urgency and enthusiasm of 2002 animated long evening sessions that eventually led to the first QWG document in 2005 (Quarkonium Working Group Collaboration 2005). This document reflects the original intent of the QWG: to rewrite quarkonium physics in the language of effective field theories, emphasizing its potential for systematic and precise QCD studies. But surprises were around the corner.
In 2003, the first observation of the X(3872) by Belle – which, with more than 1000 citations, is the most quoted result of the B-factories – opened an era of new spectroscopy studies sometimes called the “charmonium renaissance” (CERN Courier January/February 2004 p8). From 2003 onwards, several new states were found in the charmonium and bottomonium regions of the spectrum, and they were unlikely to be standard quarkonia. Some of them – the many charged states named Zc± and Zb± – surely were not. Suddenly, quarkonium again became a tool for discoveries: not necessarily of new theories, but of new phenomena in the complex realm of low-energy QCD. The second QWG document, in 2011, captured this overwhelming flow of new data and the surrounding excitement (Quarkonium Working Group Collaboration 2011). But more excitement and more new data were still to come.
Almost exactly 12 years after the first meeting organized by the QWG, quarkonium experts converged again on CERN for the group’s 10th meeting on 10–14 November 2014. Sponsored by the QWG, the 2014 meeting was organized locally by CERN affiliates and staff members, and supported by the LHC Physics Centre at CERN. The meeting began with several sessions devoted to spectroscopy, with the focus on the new spectroscopy. The ATLAS, Belle, BESIII, CMS and LHCb collaborations all presented new analyses and data. One surprise was BESIII’s observation of an e⁺e⁻ → γX(3872) signal at √s > 4 GeV, perhaps via Y(4260) → γX(3872). If confirmed, it would relate two of the best-known new states in the charmonium region and challenge the popular interpretation of the Y(4260) as a charmonium hybrid.
In view of the many new states, theoretical effort has concentrated on finding a common framework that could describe them. Molecular interpretations, tetraquarks and threshold cusp-effects were among the possibilities discussed at the meeting. A novelty was the proposal to use lattice data to build hybrid and tetraquark multiplets within the Born–Oppenheimer approximation. Two lively round-table discussions debated the new states further. In a special discussion panel, the Particle Data Group members of the QWG asked for input in establishing a naming scheme for these states. The suggestion that eventually came from the QWG was to call the new states in the charmonium region XcJ if JPC = J⁺⁺, YcJ if JPC = J⁻⁻, PcJ if JPC = J⁻⁺ and ZcJ if JPC = J⁺⁻, and to follow a similar scheme in the bottomonium region.
Presenting new results
Non-relativistic effective field theories, perturbative QCD and lattice calculations played a major role in the sessions that were devoted to precise determinations of the heavy-quark masses, the strong coupling constant and other short-range quantities. Typical results required calculations with three-loop or higher accuracy. New results were presented on the leptonic width of the Υ(1S), the quarkonium spectrum, the heavy-quark masses, heavy-quark pair production at threshold and αs. Lattice QCD provided valuable input to some of these determinations (figure 1), or an alternative derivation with comparable precision. On the experimental side, the KEDR collaboration highlighted some of its most recent precision measurements in the charmonium region below or close to threshold. Quarkonium observables may serve not only to precisely constrain Standard Model parameters in the QCD sector, but also to determine some otherwise difficult-to-access electroweak parameters. In particular, there was a report on the possibility of measuring the Hcc̄ coupling in the radiative decay of the Higgs boson to J/ψ.
The last two days of the workshop were devoted to quarkonium production at heavy-ion and hadron colliders. Measurements of quarkonium production cross-sections in heavy-ion and proton–heavy-ion collisions were presented by the LHC collaborations ALICE, CMS and LHCb, and by PHENIX and STAR at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC). It has been known since 1986 that quarkonium dissociation – induced by the medium that is produced in heavy-ion collisions – may serve as a probe of the properties of the medium, possibly revealing the presence of a new state of matter.
To isolate the relevant signal, it is important that the effects of cold nuclear matter are properly accounted for. This is the motivation behind measurements of proton–heavy-ion collisions, and the many theoretical studies that were presented at the workshop. It is important to account for recombination effects and to look at tightly bound states that are less sensitive to nonperturbative contributions (bottomonium). It is also important to consider the dynamics of thermalization – a key ingredient to link spectral studies to actual data. Finally, it is important to have a controlled way to compute the underlying dissociation processes. This is where major progress was made in recent years with the development of non-relativistic effective field theories for quarkonium in a thermal bath. The main result has been a change in the understanding of quarkonium dissociation. Until recently, dissociation was mostly understood as a consequence of the screening that is induced by the medium, but, nowadays, additional mechanisms of dissociation have been identified, which under some circumstances may be more important than screening. Several speakers reported on the present theoretical situation, as well as lattice calculations in an effective-field-theory framework.
Quarkonium production mechanisms in hadron colliders are at the core of the modern understanding of quarkonium physics. The successful theoretical description of production data from the Tevatron through the so-called colour-octet mechanism helped to establish non-relativistic QCD as a suitable effective field theory for quarkonia in the 1990s. Predictions of non-relativistic QCD continue to be challenged by the enormous amount of data provided in recent years by the experiments at DESY’s HERA collider and at the Tevatron and, most recently, by the LHC experiments. ALICE, ATLAS, CMS and LHCb all presented data in regions of large transverse momentum that were, until now, unexplored. The meeting discussed theoretical issues that arise in trying to describe these data, and emphasized the crucial role that experiments must play in resolving these issues. One such issue is that different determinations of the nonperturbative matrix elements of non-relativistic QCD, which rely on fitting to the data in different transverse-momentum regions and/or on different sets of observables, lead to different results. Some of these determinations fail to yield definite predictions for quarkonium polarizations, while others lead to polarization predictions that contradict the polarization data (figure 2).
An important related issue is to establish clearly the transverse-momentum region in which non-relativistic QCD factorization holds. This issue is best addressed by having the greatest possible amount of cross-section and polarization data at high and low transverse-momenta for both charmonium and bottomonium states, including the P-wave χc and χb states. Some speakers pointed out that measurements of additional production processes may further constrain the non-relativistic QCD matrix elements. Finally, others suggested that a resolution of the theoretical issues may not be far away.
A celebration of quarkonium
Embedded in the workshop, Chris Quigg’s seminar in the CERN Physics Department’s series celebrated the first 40 years of quarkonium in the presence of many of the heroes of quarkonium physics. The talk, rich in anecdotes and insights, but also with many highlights on current directions, served as a delightful pause in the packed schedule of the workshop. It also served to put the workshop, whose discussion items focused on the advances of the past year and a half, into a broader, more historical perspective.
Quarkonium is a special system. Its multiscale nature with at least one large energy scale allows for systematic studies of QCD across a range of energy scales. Its clean experimental signatures have led over the years to a significant experimental programme, which is pursued today by, among others, the Belle and BES experiments as well as those at RHIC and at the LHC. Quarkonium has proved to be a competitive, sometimes unique, source for precise determinations of the parameters of the Standard Model (the strong-coupling constant, masses, Higgs-coupling to heavy quarks), a valuable probe for emergent QCD phenomena in vacuum (such as exotic bound states: hybrids, tetraquarks, molecules; or the colour-octet mechanism in quarkonium production and decay) and in medium (such as the state of matter formed in heavy-ion collisions); and, possibly, a probe for new physics. The QWG has provided during the past 12 years an organization in which the advance of quarkonium physics could be shared in a coherent framework among a wide community of physicists. The CERN workshop in 2014 was also a celebration of this achievement.
In my journey as a migrant scientist, crossing continents and oceans to serve physics, institutions and nations wherever and whenever I am needed and called upon, CERN has always been the focal point of illumination. It has been a second home to whichever institution and country I have been functioning from, particularly at times of major personal and professional transition. Today, at the completion of yet another major transition across the seas, I am beginning to connect to the community from my current home at Fermilab and Northern Illinois University. Eight years ago, I wrote in this column on “Amazing particles and light” and, serendipitously, I am drawn by CERN’s role in shaping developments in particle physics to comment again in this International Year of Light, 2015.
“For the rest of my life I want to reflect on what light is!”, Albert Einstein exclaimed in 1916. A little later, in the early 1920s, S N Bose proposed a new behaviour for discrete quanta of light in aggregate, explaining Planck’s law of “black-body radiation” transparently and leading to a major classification of particles according to quantum statistics. The “photon statistics” eventually became known as Bose–Einstein statistics, predicting the class of particles known as “bosons”. Sixty years later, in 1983, CERN discovered the W and Z bosons at its Super Proton Synchrotron collider, at what was then the energy frontier. Thirty years after that, in 2012, a first glimpse of a Higgs boson appeared at today’s high-energy frontier, the LHC – again at CERN.
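The two lines behind Bose’s insight – standard textbook results, reproduced here only for context – are the Bose–Einstein occupation number for photons and the spectral energy density it implies:

```latex
% Bose-Einstein occupation number for photons (zero chemical potential)
\bar{n}(\nu) = \frac{1}{e^{h\nu/k_B T} - 1}
% Combined with the density of radiation modes, this gives Planck's law:
u(\nu, T)\,d\nu = \frac{8\pi h \nu^3}{c^3}\, \frac{1}{e^{h\nu/k_B T} - 1}\, d\nu
```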
Today, CERN’s highest-priority particle-physics project for the future is the High-Luminosity LHC upgrade. However, the organization has also taken the lead in exploring, for the long-term future, the scientific, technological and fiscal limits of the highest energy scales achievable in laboratory-based particle colliders, via the recently launched Future Circular Collider (FCC) design effort, to be completed by 2018. In this bold initiative, in line with its past tradition, CERN has again taken the progressive approach of basing such colliders on technological innovation, pushing the frontier of high-field superconducting dipole magnets beyond the 16 T range. The ambitious strategy inspires societal aspirations, and has the promise of returning commensurate value to global creativity and collaboration. It also leaves room for a luminous electron–positron collider as a Higgs factory at the energy frontier, either as an intermediate stage in the FCC itself or as a possibility elsewhere in the world, and is complementary to the development of emerging experimental opportunities with neutrino beams at the intensity frontier in North America and Asia.
What a marvellous pursuit it is to reach ever higher energies via brute-force particle colliders in an earth-based laboratory. Much of the physics at the energy frontier, however, is hidden in the so-called “dark sector” of the vacuum. Lucio Rossi wrote in this column last month how light is the most important means to see, helping us to bridge reality with the mind. Yet even light could have a dark side and be invisible – “hidden-sector photons” could have a role to play in the world of dark matter, along with the likes of axions. And dark energy – is it real, what carries it?
All general considerations for the laboratory detection of dark matter and dark energy lead to the requirement of spectacular signal sensitivities with the discrimination of one part in 10²⁵, and an audacious ability to detect possible dark-energy “zero-point” fluctuation signals at the level of 10⁻¹⁵ g. Already today, the electrodynamics of microwave superconducting cavities offers a resonant selectivity of one part in 10²² in the dual “transmitter–receiver” mode. Vacuum, laser and particle/atomic beam techniques promise gravimeters at 10⁻¹² g levels. Can we stretch our imagination to consider eavesdropping on the spontaneous disappearance of the “visible” into the “dark”, and back again? Or of sensing directly in a laboratory setting the zero-point fluctuations of the dark-energy density, complementing the increasingly precise refinement of the nonzero value of the cosmological constant via cosmological observations?
The comprehensive skills base in accelerator, detector and information technologies accumulated across decades at CERN and elsewhere could inspire non-traditional laboratory searches for the “hidden sector” of the vacuum at the cosmic frontier, complementing the traditional collider-based energy frontier.
Like the synergy between harmony and melody in music – as in the notes of the harmonic minor chord of Vivaldi’s Four Seasons played on the violin, and the same notes played melodiously in ascending and descending order in the universal Indian raga Kirwani (a favourite of Bose, played on the esraj) – the energy frontier and the cosmic frontier are tied together intimately in the echoes of the Big Bang, from the laboratory to outer space.
By Miao Li, Xiao-Dong Li, Shuang Wang and Yi Wang. World Scientific
Hardback: £56
E-book: £42
The first volume in the Peking University–World Scientific Advanced Physics Series, this book introduces the current state of research on dark energy. The first part deals with preliminary knowledge, including general relativity and modern cosmology. The second part reviews major theoretical ideas and models of dark energy, and the third part reviews some observational and numerical work. It will be useful for graduate students and researchers who are interested in dark energy.
By Alessandro Bettini. Cambridge University Press
Hardback: £40 $75
Also available as an e-book, and at the CERN bookshop
The second edition of Alessandro Bettini’s Introduction to Elementary Particle Physics appeared on my doorstep just in time for me to plan my next teaching of the class for beginning graduate students. I liked the first edition very much, and used it for my classes. I like the second edition even better.
First, the level is not overburdened with mathematics, while still introducing enough theory to make meaningful homework assignments. Inspection of the 10 chapter titles – beginning with “Preliminary Notions” of kinematics and the passage of radiation through matter, and ending with the mixing and oscillation of “Neutrinos” – shows that it is clearly written by a knowledgeable experimentalist. The organization illustrates the critical interplay between experiment and theory, but leaves the reader with no doubts that physics is an experimental science.
I already liked the first edition’s presentation of the core material, such as the quantum numbers of the pion and their measurement, as well as the more sophisticated presentation of material such as the Lamb shift and the resulting development of quantum electrodynamics. Fortunately, the best of this material has also propagated into the second edition – not always the case, even in famous physics texts such as those by Jackson and by Halliday and Resnick, where, at least to me, the first editions are better than what followed.
Bettini weaves in a good amount of history of the pivotal discoveries that shaped the Standard Model. Beginning students were not even born when the LHC was designed, and their parents were toddlers when the W and Z bosons were discovered. I was on a bus tour at a recent physics meeting in Europe, when a young postdoc asked me what I had worked on in the past. When I told him UA1, he asked “What’s that?” I was speechless, as were the more senior colleagues around us who overheard our conversation. Bettini gives a must-read: a complete and balanced introduction to particle physics, appropriate for a first course.
A companion website from Cambridge University Press has some nice slides of plots and figures. I generally do not like to lecture from my laptop, but sometimes data are essential to the presentation, so this is a real time saver. There is also a new solutions manual for all of the end-of-chapter problems – available only to instructors. I like many of these problems, and will use a mix of them together with my own. Best of all, for the current version, there are some timely additions, most notably the discovery of the Higgs boson and an expanded chapter on neutrino oscillations. I will need to supplement this material with the latest measurements, but I am happy to do that because it reminds me that although progress at the frontier of knowledge is painfully slow, it is not zero. Let us hope that Run 2 at the LHC will necessitate the writing of a third edition of this wonderful book.
By A Campa, T Dauxois, D Fanelli and S Ruffo. Oxford University Press
Hardback: £55 $94.95
Also available as an e-book
This book deals with an important class of many-body systems: those where the interaction potential decays slowly for large inter-particle distances and, in particular, systems where the decay is slower than the inverse inter-particle distance raised to the dimension of the embedding space. Gravitational and Coulomb interactions are the most prominent examples, although long-range interactions are more common than previously thought. Intended for Master’s and PhD students, the book acquaints the reader with the subject gradually. The first two parts describe the theoretical and computational instruments needed to study both equilibrium and dynamical properties of systems subject to long-range forces. The third part is devoted to applications to the most relevant examples of long-range systems.
By Lars Brink (ed.). World Scientific
Hardback: £51
Paperback: £22
E-book: £17
This volume is a collection of lectures delivered by the Nobel prizewinners in physics, together with their biographies and the presentation speeches by Nobel Committee members, for the years 2006–2010. The lectures provide detailed explanations of the phenomena for which the laureates were awarded the Nobel prize. The volume includes John Mather and George Smoot, honoured “for their discovery of the blackbody form and anisotropy of the cosmic microwave background radiation”, as well as Yoichiro Nambu “for the discovery of the mechanism of spontaneous broken symmetry in subatomic physics”, and Makoto Kobayashi and Toshihide Maskawa “for the discovery of the origin of the broken symmetry, which predicts the existence of at least three families of quarks in nature”.