CMS updates its search for diphoton resonances

In December 2015, just a few weeks after the end of the initial LHC Run 2 period recording proton–proton collisions at the world-record collision energy of 13 TeV, CMS and ATLAS presented several new results based on this novel data. These results were eagerly anticipated: at this centre-of-mass energy, new particles heavier than 1–2 TeV could be produced over 10 times more frequently than during Run 1.

The results presented by CMS were based on a data set corresponding to an integrated luminosity of ~ 2.7 fb–1. Because of the short time between the end of data-taking and the presentation of the results, only preliminary calibrations could be applied. These were not all of the data that CMS recorded, however: an additional 0.6 fb–1 were collected without a magnetic field (the 0 T data set). The cryogenic plant delivering the liquid helium needed to operate the superconducting solenoid was disrupted during 2015 by the presence of contaminants, and the filters inside the plant had to be regenerated several times, each time in conjunction with the magnet being ramped down. Before continuing the story of the 0 T data set, we want to reassure the reader that the system underwent an extensive programme of cleaning and maintenance during the end-of-year technical stop, and it is now on track for reliable operation in 2016.

The perfect candidate analysis for these data is the search for resonances in the diphoton final state. Preliminary results for this search, shown by CMS and ATLAS in December 2015, generated significant interest within the high-energy community because of a simultaneous excess of data with respect to the expected background seen by both experiments at a diphoton mass of about 750 GeV.

While the momenta of charged particles can only be measured with a magnetic field, the energies of both neutral and charged particles can be measured with the CMS electromagnetic and hadronic calorimeters even when the magnet is off. Although challenging, it is therefore still possible to use data collected without a magnetic field, through dedicated reconstruction and selection procedures. Photons are neutral, so their trajectories are unaffected by the magnetic field in any case, and their energies are measured with a precision better than 1.5% using the CMS lead-tungstate crystal electromagnetic calorimeter. For the 0 T data set, the energy scale and resolution of the electromagnetic calorimeter were carefully cross-checked and adjusted using electrons from Z-boson decays. The momentum information normally used for vertex assignment and isolation criteria was replaced at 0 T by track counting, as was previously done by CMS in the summer of 2015 for the very first publication on the 13 TeV data, a study of the hadron multiplicity without a magnetic field.

The inclusion of the 0 T data and the use of optimised calibrations improve the overall expected sensitivity to a narrow resonance at 750 GeV by about 20%. The new results still exhibit an excess at a mass of around 750 GeV, with a local significance of 2.8σ for a narrow-resonance hypothesis. When combined with the 8 TeV data set from Run 1, the largest excess is observed at 750 GeV with a local significance of 3.4σ, which reduces to a global significance of 1.6σ once the possibility of a signal appearing anywhere in the explored mass range is taken into account. The analysis gives similar results for both spin-0 and spin-2 signal hypotheses.
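The gap between the local and global significances is an instance of the look-elsewhere effect: the more mass hypotheses a search scans, the more likely a fluctuation of a given size becomes somewhere in the range. A minimal sketch of the idea, using a simple Bonferroni-style trials correction (the experiments use more sophisticated methods, and the trials factor of ~160 below is a made-up number chosen only to reproduce the quoted values):

```python
from statistics import NormalDist

def p_value(z):
    """One-sided tail probability for a significance of z sigma."""
    return 1.0 - NormalDist().cdf(z)

def global_significance(z_local, n_trials):
    """Bonferroni-style look-elsewhere correction (illustrative only):
    multiply the local p-value by an effective number of independent
    mass hypotheses, then convert back to a significance."""
    p_global = min(1.0, n_trials * p_value(z_local))
    return NormalDist().inv_cdf(1.0 - p_global)

# 3.4 sigma local, with ~160 effective independent trials (assumed)
print(round(global_significance(3.4, 160), 1))  # → 1.6
```

Real analyses estimate the effective number of trials from the data itself (for example from upcrossings of the test statistic) rather than assuming a fixed factor.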

Therefore, even after the final calibration and with slightly more data, an intriguing excess remains. Only additional data will tell us whether this is an early sign of new physics.

World’s most precise measurements and search for the X(5568) tetraquark candidate

LHCb

At the Rencontres de Moriond EW conference, held at La Thuile (Italy) from 12 to 19 March, the LHCb collaboration presented important new results.

CKM γ-angle measurements. The parameters that describe the difference in behaviour between matter and antimatter, known as CP violation, are constrained in the so-called CKM, or unitarity, triangle. The angles of this triangle are denoted α, β and γ, and among these, γ is the least precisely known. The value of γ = (70.9 +7.1/−8.5)° presented at the conference was obtained from a combination of many different LHCb measurements, and is the most precise determination of γ from a single experiment. One of the new analyses presented at the conference uses decays of charged B mesons into charmed D mesons and pions or kaons. In turn, the D mesons decay into various combinations of pions and kaons. The results show different rates for positive and negative B mesons, clearly indicating different properties of matter and antimatter.

Determination of the B0 oscillation frequency. Oscillation, or mixing, is a fascinating quantum-mechanical phenomenon in which neutral mesons such as the B0s, B0 and D0 turn into their antimatter partners. LHCb physicists analysed the full Run 1 data sample of semileptonic B0 decays with charged D or D* mesons, and presented the most precise single measurement of the parameter that sets the B0-meson oscillation frequency: Δmd = (505.0±2.1±1.0) ns–1.

Non-confirmation of the X(5568) tetraquark candidate. Recently, the DZero collaboration at Fermilab reported the observation of a narrow structure, X(5568), in the invariant mass of the B0s meson and a charged-pion π (CERN Courier April 2016 p13), and interpreted it as a tetraquark candidate composed of four different quarks (b, s, u and d).

At the Moriond conference, the LHCb collaboration reported the result of a similar analysis using a sample of B0s mesons 20 times larger than that used by the DZero collaboration. The B0sπ invariant mass spectrum is shown in the figure, using B0s mesons decaying into J/ψ and φ mesons or into Ds and π mesons. No structure is seen in the region around the mass of 5568 MeV (indicated by the arrow). Hence, the LHCb analysis does not confirm the DZero result. Using kinematic requirements similar to those applied by the DZero collaboration in their analysis, the ratio of the X(5568) to B0s-meson production rates is found to be less than 1.6%, at 90% confidence level.

CALET sees events in millions

Just a few months after its launch and the successful completion of the on-orbit commissioning phase aboard the International Space Station, the CALorimetric Electron Telescope (CALET) has started observations of high-energy charged particles and photons coming from space. To date, more than a hundred million events at energies above 10 GeV have been recorded and are under study.

CALET is a space mission led by JAXA with the participation of the Italian Space Agency (ASI) and NASA. CALET is also a CERN-recognised experiment; the collaboration used CERN’s beams to calibrate the instrument, which was launched from the Tanegashima Space Center on 19 August 2015 on board the Japanese H2-B rocket. After berthing with the ISS a few days later, CALET was robotically extracted from the JAXA-operated transfer vehicle HTV5 and installed on the external platform JEM-EF of the Japanese module (KIBO). The check-out phase went smoothly, and after data calibration and verification, CALET moved to regular observation mode in mid-October 2015. Data-taking will continue for an initial period of two years, with a target of five years.

CALET is designed to study electrons, nuclei and γ-rays coming from space. In particular, one of its main goals is to perform precision measurements of the detailed shape of the electron spectrum above 1 TeV. High-energy electrons are expected to come from within a few thousand light-years of Earth, because they quickly lose energy as they travel through space. Their detection might reveal the presence of nearby astronomical sources where electrons are accelerated. The high end of the spectrum is particularly interesting because it could provide a clue to possible signatures of dark matter.

The first data sets are confirming that all of the instruments are working extremely well. The event image above (raw data) shows the detailed shape of the development of a shower of secondary particles generated by the impact of a candidate electron with an estimated energy greater than 1 TeV. The high-resolution energy measurement is provided by CALET’s deep, homogeneous calorimeter equipped with lead-tungstate (PbWO4) crystals preceded by a high-granularity (1 mm scintillating fibres) pre-shower calorimeter with advanced imaging capabilities. The depth of the instrument ensures good containment of electromagnetic showers in the TeV region.

In the coming months, thanks to its ability to identify cosmic nuclei from hydrogen to beyond iron, CALET will be able to study the high-energy hadronic component of cosmic rays. CALET will focus on the deviation from a pure power law that has been recently observed in the energy spectra of light nuclei. It will extend the present data to energies in the multi-TeV region with accurate measurements of the curvature of the spectrum as a function of energy, and of the abundance ratio of secondary to primary nuclei – an important ingredient to understand cosmic-ray propagation in the Galaxy.

Fast radio bursts reveal unexpected properties

Two studies show that fast radio bursts (FRBs) have a richer phenomenology than initially thought and might comprise two distinct classes. While one group could, for the first time, pinpoint the location of an FRB and constrain the baryon density in the intergalactic medium, a second study has found repeated FRBs from the same source, which cannot be of cataclysmic origin.

FRBs are very brief flashes of radio emission lasting just a few milliseconds. Although the first FRB was recorded in 2001, it was only detected, and recognised as a new class of astronomical event, six years later (CERN Courier November 2007 p10): it had been overlooked until a re-analysis of the data searching for very short radio pulses. Since then, more than 10 other FRBs have been detected, and they all suggest very powerful events occurring at cosmological distances (CERN Courier September 2013 p14). Unlike for gamma-ray bursts (GRBs), there is a way to infer the distance, via the time delay of the pulse observed at different radio frequencies. This delay increases towards lower radio frequencies and is proportional to the dispersion measure (DM), the integrated density of free electrons along the line of sight from the source to Earth.
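The frequency-dependent delay described above can be made concrete with the standard dispersion-delay formula used in pulsar and FRB work; the delay between two observing frequencies scales as the DM times the difference of the inverse squared frequencies. A small sketch (the DM value and observing band below are illustrative, not the parameters of any particular burst):

```python
# Standard dispersion constant used in pulsar/FRB work,
# in ms GHz^2 per (pc cm^-3).
K_DM = 4.1488

def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Arrival-time delay (ms) of the low-frequency signal relative to
    the high-frequency one, for a dispersion measure dm in pc cm^-3."""
    return K_DM * dm * (f_low_ghz**-2 - f_high_ghz**-2)

# e.g. an FRB-like DM of 1000 pc cm^-3 observed across a 1.2-1.5 GHz band
print(round(dispersion_delay_ms(1000.0, 1.2, 1.5), 1))  # → 1037.2 ms
```

A delay of about a second across a typical L-band receiver is why these millisecond bursts appear as characteristic swept signals in frequency-time data, and why the sweep rate directly encodes the DM.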

The real-time detection of an FRB at the Parkes radio telescope has now made it possible, for the first time, to search quickly for afterglow emission, as has routinely been done for GRBs for more than a decade (CERN Courier June 2003 p12). Only two hours after the burst, the Australia Telescope Compact Array (ATCA) observed the field and identified two variable compact sources. One of them was rapidly fading and is very likely the counterpart of the FRB. This achievement is reported in Nature by a collaboration led by Evan Keane, of Swinburne University of Technology in Australia and project scientist of the Square Kilometre Array Organisation.

What makes the study so interesting is that the precise localisation of the afterglow allowed identification of the FRB’s host galaxy and, therefore, via its redshift of z = 0.492±0.008, the precise distance to the event. With this information, the DM can be used to measure the density of ionised baryons in the intergalactic medium. The obtained value of ΩIGM = 4.9±1.3, expressed in per cent of the critical density of the universe, is in good agreement with the cosmological determinations by the WMAP and Planck satellites.

The second paper, also published in Nature, reports the discovery of a series of FRBs from the same source. A total of 10 new bursts were recorded in May and June 2015, corresponding in location and DM to an FRB first detected in 2012. This unexpected behaviour was found by Paul Scholz, a PhD student at McGill University in Montreal, Canada, while sifting through data from the Arecibo radio telescope in Puerto Rico. The recurrence of bursts on minute-long timescales cannot come from a cataclysmic event, but is likely to be from a young, highly magnetised neutron star, according to lead author Laura Spitler of the Max Planck Institute for Radioastronomy in Bonn, Germany. This FRB is therefore likely to be of a different nature to other FRBs.

The status of the field is reminiscent of that of GRBs in the 1990s, with the first afterglow detections and redshift determinations in 1997, and the earlier understanding that soft gamma repeaters are distinct from genuine extragalactic GRBs, which are cataclysmic events like supernova explosions and neutron star mergers.

The ILC project keeps its momentum high

Summary

The International Linear Collider project keeps the wind in its sails

It has been three years since the international community planning the International Linear Collider published its Technical Design Report. The International Linear Collider is a proposed new particle accelerator that would collide electrons with their antiparticles, positrons, at an energy of 500 GeV. The project features on every particle-physics road map, but no decision has yet been taken on whether or not it should be built. In the meantime, R&D continues on key elements of the state-of-the-art detectors and accelerator, notably on those aspects of the design that depend on where the machine would be built.

It’s been three years since the worldwide community of the International Linear Collider (ILC) published its Technical Design Report (TDR). The proposed new particle accelerator would smash electrons and their antiparticles, positrons, into each other at energies of 500 GeV. However, even though the ILC features on all particle-physics road maps worldwide, no decision has been taken so far as to whether or not it should be built. In the meantime, R&D continues on key aspects of the state-of-the-art accelerator and detectors, with particular focus on those aspects of the design that depend on where the machine would be built. A proposed site exists, and, if it goes ahead, the machine would be built underneath the lush hills of a region in northern Japan called Kitakami, in Iwate prefecture, some three hours north of Tokyo. The green light depends on commitments from and negotiations between many governments, notably Japan’s, which hasn’t yet confirmed its willingness to host the world’s next big particle-physics adventure.

The ILC is said to complement results from the LHC because of the different nature of its collisions. Whereas the LHC collides protons with protons, the ILC would collide electrons with their antiparticles, positrons, with the option of starting out as a Higgs factory at 250 GeV and being upgraded to 1 TeV in later stages. The physics case has recently been summed up in a paper published in the European Physical Journal C: “Due to the collision of point-like particles the physics processes take place at the precisely and well-defined initial energy √s, both stable and measurable up to the per-mille level,” the paper states. The energy of the ILC is tunable, which allows precise energy scans to be carried out and permits the kinematic conditions for different physics processes to be optimised. In addition, the beams can be polarised: the electron beam up to about 80%, the positron beam up to about 30%. Thanks to all of this, it is possible to fully reconstruct the final states, so that numerous observables – not only masses and total cross-sections, but also differential energy and angular distributions – are available for data analyses. For more information, see Eur. Phys. J. C 2015 75 371 doi:10.1140/epjc/s10052-015-3511-9.

Precise, efficient and novel systems

The ILC would use superconducting radiofrequency technology to accelerate its particles. Some 16,000 one-metre-long accelerating cavities made of pure niobium, with an accelerating gradient of up to 35 MV/m, are needed to get the electrons and positrons up to speed. The final-focus system needs to be extremely precise and efficient if collisions at the design luminosity of 2 × 10³⁴ cm⁻² s⁻¹ are to occur in the two detectors. The detectors – after planning, design and testing by universities from around the world, involving many students – will take turns at the interaction point. A novel system called “push–pull”, in which one detector is pushed into the interaction point while the other is pulled out, so that one can take data while the other is being serviced, was devised in the course of the R&D work for the project’s TDR, published in 2013. Compared with the option of switching the beam between two separate interaction regions, this cut the estimated cost significantly, because it eliminated several kilometres of tunnel and a good deal of cavern excavation in one go.

The TDR sets the estimated cost of the project at $7.8 billion plus 23 million man-hours. This includes all civil engineering, technology production, construction, the accelerator components, etc, but it does not include detectors, contingency, escalation or operation costs. “The basis of the final design and the future construction for the ILC project has been completed, and we’re basically ready to push the green button,” said then-ILC-director Barry Barish, who led the team of physicists and engineers from around the world who formed the Global Design Effort (GDE) from 2005 to 2013, and who took the project to a construction-ready stage. Three previous regional projects (NLC, JLC and TESLA) needed to be combined into the best and most cost-effective option. People were busy evaluating one option against others, coming up with new ones, checking compatibilities and keeping an eye on the cost. Despite some major setbacks along the way, the R&D work culminated in the TDR. But even though the maturity of the technologies would allow for the machine to be built tomorrow, tunnel-boring machines have to wait for the official green light.

With the publication of the TDR, the mandate of the GDE ended, and a new organisation was put in place: the Linear Collider collaboration, or LCC. Barry Barish returned to LIGO to find gravitational waves, and Lyn Evans took over and united the friendly competitors, the ILC and the Compact Linear Collider (CLIC) study, under one organisational roof. Even though the two linear colliders have very different designs, there are still synergies to be exploited between them. Detector developers, for example, work closely together on state-of-the-art components such as high-granularity calorimeters as part of the CALICE collaboration. These high-granularity calorimeters have, in fact, spun off to the LHC, and will be used in the CMS detector’s calorimeters for the high-luminosity upgrade.

Move from technology to diplomacy

Lyn Evans, former LHC project leader and director of the Linear Collider collaboration, founded in 2013, calls the process a move from technology to diplomacy. Together with his team of project and regional directors, he is busy facilitating negotiations between state officials from various countries to get the approval process under way. The process is slow, and requires many small steps and a large number of study groups and committees. While Japan needs reassurance from the governments and funding agencies of potential future member states before taking a decision to host, partner states would prefer to hear “Let’s go!” from Japan before committing vast amounts of money and manpower. To break the impasse, discussions between political leaders of the relevant countries, and proactive approaches by scientists to their own governments, are under way. A decision is expected sometime around 2018.

Research and design work hasn’t stopped, though. The main focus is now on adapting the generic collider design to the specifications of the future site. For example, access shafts and tunnels have been adapted to the geology that exists at the site. With the help of a civil-engineering tool originally developed for the Future Circular Collider study, the interaction region has now shifted by a few kilometres so that detector parts can be lowered into the cavern vertically, rather than needing to be driven in on an inclined slope. Engineers are looking at the nearest port that would receive most of the huge accelerator and detector parts from around the world, and at the bridges that these components would need to cross. One might expect doubts, even fear, from the local community, but the contrary is the case: pro-ILC banners, drawings, bumper stickers and flags along roads are visible proof of the region’s support. Local governments have set up ILC promotion offices manned by international residents of Japan, who make sure that everybody in Kitakami not only knows about but also gives their blessing to the ILC. Hitoshi Yamamoto, professor at Tohoku University, tells of the support that he witnessed during a recent site visit of the civil-engineering group: a grandfather and granddaughter saw the group of researchers standing on a field and walked up to them. The group expected to be told to go away, but instead the grandfather pointed at his granddaughter, saying “Please try your best to build the ILC – for this child.”

A new international science project would undoubtedly bring benefits to the region, even though the global nature of the ILC means that components and parts would be built and tested in labs and universities around the world and then shipped to their final destination, mirroring what was done for the LHC at CERN. Industrialisation is therefore a high-priority topic for the ILC community: getting 16,000 high-tech cavities built, tested and shipped halfway around the world is no simple task, and researchers are learning a lot from the European X-Ray Free-Electron Laser (European XFEL), currently under construction at DESY in Hamburg, Germany. It uses the same superconducting RF technology as the ILC over a length of some 2 km, providing a neat model for the serial production of cavities and cryomodules. The European XFEL, which started life as a spin-off of the TESLA accelerator once planned at DESY, employed two companies for cavity production and devised a complicated (but functioning) ballet of component production, transport, testing and integration between the companies, DESY, the French CEA laboratory Irfu in Saclay and the CNRS laboratory LAL in Orsay. For the ILC, an order of magnitude more parts will have to be shipped around the world.

Redoubled international efforts

The International Committee for Future Accelerators (ICFA) has decided to continue the linear-collider organisation, and has extended the mandate of the LCC by a year. At the February meeting, ICFA reached a consensus that the international effort, led by ICFA, for an ILC in Japan should continue, and a subgroup has been formed to study the future of the linear-collider organisation and make a proposal for a new structure to be in place from 2017.

ICFA has traditionally been the committee to which the ILC effort reported its progress, the body that set up committees and boards, gave them their mandates and monitored developments. Its partner organisation, the Asian Committee for Future Accelerators (ACFA), met with the Asia-Pacific High Energy Physics Panel (AsiaHEP) in February, and decided to issue a statement about the ILC and the potential circular Higgs factory to be built in China, CEPC. About the ILC, the statement says: “AsiaHEP and ACFA reassert their strong endorsement of the ILC, which is in a mature state of technical development…In continuation of decades of worldwide co-ordination, we encourage redoubled international efforts at this critical time to make the ILC a reality in Japan.” About CEPC, it states: “We encourage the effort led by China in this direction, and look forward to the completion of the technical design in a timely manner.”

These statements mirror what the strategic road maps for the future of particle physics in the different regions have said: that the physics case for the ILC is “extremely strong” and that the “interest expressed in Japan in hosting the ILC is an exciting development” (P5, US). “There is a strong scientific case for an electron–positron collider, complementary to the LHC, that can study the properties of the Higgs boson and other particles with unprecedented precision and whose energy can be upgraded,” states the European Strategy in its fifth recommendation. “Europe looks forward to a proposal from Japan to discuss a possible participation.” Obviously all strategies give top priority to the continued operation of the LHC and its future upgrade for operation at higher luminosities, to ensure the exploitation of its full scientific potential, and recommend competitive neutrino programmes and priorities and the development of a post-LHC accelerator project at CERN with global contribution.

• For further details, visit www.linearcollider.org.

The Tevatron legacy: a luminosity story

Summary

A luminosity story

The results of the Tevatron programme went far beyond the physics goals originally foreseen, notably because the luminosity delivered was a factor of 100 higher than initially planned. Even at fixed energy, hadron colliders are in many respects an almost inexhaustible source of physics. The LHC has so far delivered about 1% of its total expected luminosity – a situation similar to that of the Tevatron in 1995, at the time of the top-quark discovery. If things go as they did with the Tevatron, the LHC experiments still have many outstanding results in store for us!

Throughout history, the greatest instruments have yielded a treasure trove of scientific results and breakthroughs via long-term “exposures” to the landscape they were designed to study. Among many examples, there are telescopes and space probes (such as Hubble), land-based observatories (such as LIGO), and particle accelerators (such as the Tevatron and the LHC).

The long-lived nature of these explorations not only opens up the possibility for discovery of the rarest of phenomena with increases in the amount of the data collected, but also allows a narrower focus on specific regions of interest. In these sustained endeavours, the scientist’s ingenuity is unbounded, and through a combination of instrumental and data-analysis innovations, the programmes evolve well beyond their original scope and expected capabilities.

In 2015, the LHC increased its collision energy from 8 to 13 TeV, marking the start of what ought to be a long era of exploration of proton–proton collisions at the LHC’s design energy. In December 2015, both the CMS and ATLAS experiments disclosed intriguing results in their diphoton invariant-mass spectra, where an excess of events near 750 GeV suggests the possibility of a new and unexpected particle emerging from the data (CERN Courier January/February 2016 p8). With just a few inverse femtobarns of data recorded, the statistical significance of the observation is not sufficient to conclude whether this is a coincidental background fluctuation or a great new discovery. One thing is certain: more data are needed.

It is worth reflecting on the experience of the Tevatron collider programme, where proton–antiproton collisions at ~ 2 TeV centre-of-mass energy were accumulated over a 25 year period, from 1986 to 2011. During this time, the Tevatron’s instantaneous luminosity increased from 10²⁹ cm⁻² s⁻¹ to above 4 × 10³² cm⁻² s⁻¹ – exceeding the original design luminosity by two orders of magnitude. Figure 1 shows the progression of the initial luminosity for each Tevatron store (in each interaction region) versus time, together with the periods of no data during extended upgrade shutdowns. The luminosity growth was due to the construction of large new facilities, and to upgrades and better use of existing equipment. The steady growth of antiproton production was the cornerstone of the growth in luminosity. The facilities built to support it included: the Linac extension in the early 1990s, which doubled its energy to 400 MeV and increased the Booster intensity; the Main Injector, a 150 GeV rapid-cycling proton accelerator that greatly increased the proton beam power for antiproton production; the Recycler ring, made of permanent magnets and commissioned at the beginning of the 2000s, which added a third antiproton ring and helped to increase the antiproton production rate; and, also in the 2000s, a major upgrade of the stochastic-cooling system for the antiproton complex, together with the development and construction of electron cooling, which reduced the antiproton beam emittances. A large number of other accelerator improvements were also key to the Tevatron ultimately delivering more than 10 fb–1 of luminosity to each of the two general-purpose experiments, CDF and D0. All of them required deep insight into the underlying accelerator-physics problems, inventiveness and creativity.
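Integrated luminosity matters because, for any process, the expected number of events is simply the cross-section times the integrated luminosity (times the selection efficiency). A minimal sketch of this bookkeeping (the cross-section figure below is a round number of roughly the right size for top-pair production at the Tevatron, used purely for illustration):

```python
def expected_events(cross_section_fb, integrated_lumi_fb_inv, efficiency=1.0):
    """N = sigma * L_int * efficiency, with the cross-section in fb
    and the integrated luminosity in fb^-1 (units cancel)."""
    return cross_section_fb * integrated_lumi_fb_inv * efficiency

# Illustrative: a ~7 pb (7000 fb) cross-section with 10 fb^-1 of data
print(expected_events(7000.0, 10.0))  # → 70000.0 events produced, before any selection
```

The same arithmetic explains why rare processes with femtobarn-level cross-sections only became accessible late in the Tevatron run, once the integrated luminosity reached the inverse-femtobarn scale.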

From the measurement of the charged-particle multiplicity in the first few proton–antiproton collisions to the tour-de-force that was the search for the Higgs boson with the full data set of ~10 fb–1, the CDF and D0 experiments harvested a cornucopia of scientific results. In the period from 2005 to 2013, the combined number of publications was constant at roughly 80 per year, with the total number of papers published using Tevatron data reaching 1200, with more coming.

The results from this bountiful programme include such fundamental results as the top-quark discovery and evidence for the Higgs boson, rare Standard Model processes (such as di-boson and single top-quark production), new composite particles (such as a new family of heavy b-baryons), and very subtle quantum phenomena (such as Bs mixing). The results also include many high-precision measurements (such as the mass of the W boson), the opening of new research areas (such as precision measurement of the top-quark mass and its properties), and searches for new physics in all of its forms. As shown in Figure 2, progress in each of these categories was obtained steadily throughout the whole running period of the Tevatron, as more and more data were accumulated.

The observation of Bs mixing is an example where the ~ 0.1 fb–1 of data collected in the 1990s was simply not enough to yield a statistically significant measurement. With 10 times more data by 2006, the phenomenon was clearly established, with a statistical significance exceeding five standard deviations. As a result, many models of new physics that predicted an oscillation frequency away from the Standard Model expectation were excluded.

With about 2 fb–1 of data, enough events were accumulated to firmly establish a new family of heavy baryons containing a b quark, such as Cascade b and Sigma b baryons. Some of these discoveries had ~ 10–20 signal events, and a large number of proton–antiproton collisions, in addition to the development of new analysis methods, were critical for discovering these new baryons. It took a bit longer to discover the Omega b baryon, which is heavier and has a smaller production cross-section, but with 4 fb–1, 16 events were observed with backgrounds small enough to firmly establish its existence. It was exciting to witness how events accumulated in the corresponding mass peak with each additional inverse femtobarn of data collected.

It was not only discoveries that benefited from more data; high-precision measurements did, too. The masses of elementary particles, such as the W boson, are among the most fundamental parameters in particle physics. With 1 fb–1 of data, samples containing hundreds of thousands of W bosons became available, resulting in an uncertainty of ~ 40 MeV, or 0.05%. With 4 fb–1 of data, the accuracy of the measurement for each individual experiment was ~ 20 MeV. With more data, many of the systematic uncertainties were successfully reduced as well. Ultimately, though, not all systematic uncertainties are better constrained by more data, and those become the limiting factor in the measurement.
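The luminosity scaling quoted above follows the familiar 1/√N rule for statistical uncertainties. A back-of-envelope check, using only the numbers in the text (this is an illustrative sketch, not part of the actual CDF/D0 analyses):

```python
import math

def scaled_uncertainty(sigma0, lumi0, lumi1):
    """Statistical uncertainty scales as 1/sqrt(N), i.e. 1/sqrt(integrated luminosity)."""
    return sigma0 * math.sqrt(lumi0 / lumi1)

# ~40 MeV with 1 fb-1: quadrupling the data set halves the statistical error,
# consistent with the ~20 MeV quoted for 4 fb-1.
print(scaled_uncertainty(40.0, 1.0, 4.0))  # -> 20.0
```

Systematic uncertainties calibrated in situ from control samples can also shrink with more data, which is why many of them were reduced as well; only those that do not scale this way eventually limit the measurement.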

Searches for physics beyond the Standard Model are always of the highest priority in the research programme of every energy-frontier collider, and the Tevatron was no exception. The number of publications in this area is the largest among all physics topics studied. Tighter and tighter limits have been set on many exotic theories and models, including supersymmetry. In many cases, limits on the masses of the new sought-after particles reached 1 TeV and above, about half of the Tevatron’s centre-of-mass energy.

The observation of the electroweak production of the top quark was among the many important tests of the Standard Model performed at the Tevatron. While the cross-section for electroweak single top-quark production is only a factor of two lower than that for top-quark pair production at the Tevatron, the final state with a single decaying heavy particle was very difficult to detect in the presence of large backgrounds, such as W+jets production. It was in the search for the single top quark that new multivariate analysis methods were first used effectively in the discovery of a new process, replacing standard “cut-based” analyses and substantially increasing the sensitivity of the search. Even with these new analysis methods, 1 fb–1 of data was needed to obtain the first evidence for this process, and more than 2 fb–1 to make a firm discovery – almost 50 times more data than the luminosity that was needed to discover the top quark via pair production in 1995.

The analysis methods developed for the single top-quark observation were, in turn, very useful later on in the search for the Standard Model Higgs boson. The cross-sections for Higgs production are rather low at the Tevatron, so only the most probable decay modes, to a pair of b quarks or W bosons, contributed significantly to the search sensitivity. With 5 fb–1 of data accumulated, each experiment began to be sensitive to Higgs bosons with a mass around 165 GeV, where the Higgs decays mainly to a pair of W bosons. It became evident at that time that the statistical accuracy that each experiment could achieve on its own would not be enough to reach a strong result in the Higgs searches, so the two experiments combined their results to effectively double the luminosity. In this way, by 2011, the Tevatron experiments were able to exclude the Higgs boson in nearly the complete mass range allowed by the Standard Model. In the summer of 2012, using their full data set, the Tevatron experiments obtained evidence for Higgs-boson production and its decay to a pair of b quarks, at the same time as the LHC experiments discovered the Higgs boson in its decays to bosons.

Lessons learnt

Among the many lessons learnt from the 25-year-long Tevatron run is that important results appear steadily as the size of the data set increases. Among the reasons for this are the vast sets of studies that these general-purpose experiments perform. Hundreds of studies – for example, with the top quark, with particles containing a b quark, or with processes involving a Higgs boson – provided exciting results at various luminosities, whenever enough data for the next important result in one of the analyses were accumulated. Upgrades to the detectors are critical to handle ever-higher luminosities: both CDF and D0 had major upgrades to their trackers, calorimeters and muon detectors, as well as to their trigger and data-acquisition systems. The development of new analysis methods is also important, enabling the extraction of more information from the data. Finally, improvements to the Tevatron itself kept the luminosity doubling time at about a year or two until the end of the programme, providing significant data increases over a relatively short period of time.

The impact of the Tevatron programme extended well beyond its originally planned physics goals, to a large extent due to the hundred-fold increase in the delivered luminosity with respect to what was originally planned. In many ways, even at a fixed energy, hadron colliders are a nearly inexhaustible source of physics. The LHC has gathered so far approximately 1% of its expected luminosity, a similar situation to where the Tevatron was back in 1995 at the time of the top-quark discovery. Based on the Tevatron experience, many more exciting results from the LHC experiments are yet to come.

• For further details, see www-d0.fnal.gov/d0_publications/d0_pubs_list_bydate.html and www-cdf.fnal.gov/physics/physics.html.

Super-magnets at work


To obtain 10 times the LHC original design luminosity, the HL-LHC will need to replace more than 40 large superconducting magnets, in addition to about 60 superconducting corrector magnets. A wealth of innovative magnet technologies will be exploited to ensure the final performance of the new machine. Two key features are of paramount importance for the whole project: the production of high magnetic fields and the stability and reliability of the various components.

The backbones of the upgrade are the 24 new focussing quadrupoles (inner triplets) that will be installed at both ends of the ATLAS (Point 1) and CMS (Point 5) interaction regions. These magnets will provide the final beam squeeze to maximise the collision rate in the experiments. They are particularly challenging, because they will have to reach a field of nearly 12 T with an aperture that is more than double that of the current triplets.

In its final configuration, the new machine will have 36 new superferric corrector magnets, of which four will be quadrupoles and 32 higher-order magnets, up to dodecapoles. These magnets will also feature a much larger aperture than the ones currently used in the LHC. Nevertheless, they are designed to be more stable and reliable, to withstand the tougher operating conditions of the new machine.

The HL-LHC will also need a more efficient collimation system, because the present one will not be sufficient to handle the new beam intensity, which is twice the LHC nominal design value. For this reason, powerful dipoles will be installed at Point 7 of the ring, in the dispersion-suppression region. The idea is to replace an 8 T, 15 m-long standard LHC dipole with two 11 T, 5.5 m-long new dipoles, therefore achieving the same beam bending strength, but making space to allow the insertion of new collimators in the 4 m central slot. The new dipoles will have a peak field approaching 12 T, comparable to the new inner triplet quadrupoles.
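The substitution preserves the integrated bending strength (field times length) while freeing space for the collimators. The bookkeeping, using only the numbers quoted above, is simple to check:

```python
# Beam bending is set by the integrated field B*L (in T·m).
old_dipole  = 8.0 * 15.0        # one standard LHC dipole: 8 T over 15 m -> 120 T·m
new_dipoles = 2 * 11.0 * 5.5    # two 11 T, 5.5 m-long dipoles -> 121 T·m
freed_space = 15.0 - 2 * 5.5    # 4 m central slot left for the new collimators

print(old_dipole, new_dipoles, freed_space)  # 120.0 121.0 4.0
```

The two shorter, higher-field dipoles thus deliver the same bending as the magnet they replace, with 4 m to spare.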

Superferric: strong and reliable

The first HL-LHC magnet ready and working according to specifications is a sextupole corrector. This first component is also rather special because, unlike the superconducting magnets currently used in the LHC, it relies on a “superferric” heart.

Although the name might sound unfamiliar, superferric magnets were initially proposed in the 1980s as a possible solution for high-energy colliders. However, many technical problems had to be overcome, and a good opportunity had to show up, before the use of superferric magnets could become a reality.

In a standard superconducting magnet, iron is used only in the yoke, while in a superferric (or “iron-dominated”) magnet, iron is also used in the poles that shape the field, much as in classical resistive magnets. In the HL-LHC superferric correctors, the coils are made of Nb-Ti superconductor and will be operated at 1.9 K. The superferric design was selected among other options because it has a sharp fringe field and is very robust. This robustness is crucial for the HL-LHC, where the magnets will have to sustain the increased radiation load caused by the collisions of the high-intensity beams.

A superferric corrector magnet had been developed by CIEMAT for the sLHC-PP study (the study that preceded that of the HL-LHC, see project-slhc.web.cern.ch/project-slhc/), and that design was used as a starting point for the HL-LHC correctors. Subsequently, in the framework of a collaboration agreement for the HL-LHC project signed in 2013 between CERN and the Italian National Institute for Nuclear Physics (INFN), the LASA laboratory of the Milan section of the INFN has taken over as a partner in the project.

Recent tests carried out at LASA showed that the sextupole corrector magnet is highly stable: it reached and surpassed the ultimate current of 150 A (132 A being the nominal operating value) required by the design specifications before quenching. In fact, the first training quench appeared well above 200 A.

Record dipoles

The HL-LHC is an important test bed for a new concept of dipoles. Built using superconducting niobium-tin (Nb3Sn) coils kept at a temperature of 1.9 K, the new dipoles will have to reach a bore field of 11 T in stable conditions.

If the expected requirements are met, niobium-tin magnets will be crucial to all future collider machines because, for the time being, this is the only technology able to produce magnetic fields greater than 10 T.

Following up on initial successful tests conducted at Fermilab on single-aperture magnets, CERN’s experts went on to design and manufacture the first 2 m-long model magnet. While relying on the coil technology developed at Fermilab, the CERN magnet includes some new design features: cable insulation made by braiding S2-glass on mica-glass tape, a new material for the coil wedge, more flexible coil end spacers and a new collaring concept for pre-loading the brittle niobium-tin coils.

Magnets reach their nominal operation after “training” – a procedure that pushes the magnet to the highest possible field before it quenches. Quench after quench, the magnets acquire memory of their previous performance and reach higher fields. In January this year, the first double-aperture (two-in-one) dipole showed full memory of the single-coil tests and established a record field for accelerator dipoles, reaching a stable operating field of 12.5 T. Even more relevant to future operation in the LHC, it passed the nominal field of 11 T with no quench, and reached the ultimate field of 12 T with only two quenches (see figure 1).

Even though they will produce a much higher field, the HL-LHC 11 T dipoles are very similar to the standard LHC dipoles, because they must fit in the continuous cryostat and must be powered in series with the rest of the LHC dipoles in a sector. In parallel, the members of the HL-LHC project are developing a bypass cryostat to host the new collimators, which will be inserted between two 11 T dipoles. The new components will allow the machine to cope with the increased number of particles drifting out of the primary beam and hitting the magnets in the cleaning insertions of Point 7. The work is so well advanced that the first collimators should be installed in the dispersion-suppression region of the LHC ring during the next long shutdown of the machine, scheduled for 2019–2020. They will improve the performance of the LHC during Run 3 and will put the new Nb3Sn technology to the test, in preparation for the final configuration of the HL-LHC.

With an aperture of 150 mm, the focussing quadrupoles currently being developed by the US-LARP collaboration (BNL, FNAL and LBNL) in collaboration with CERN take advantage of the most advanced magnet technologies. Like the 11 T dipoles, they use Nb3Sn coils and will operate at 1.9 K. Because of the high field and large aperture, these magnets store an energy per unit length twice as large as that stored in the LHC dipoles. The mechanical structure that contains the forces and assures the field shape is of a new type, first proposed by LBNL, called “keys & bladders”. Based on force control rather than dimension control, as in a classical “collars” structure, the “keys and bladders” approach is very well suited to the mechanical characteristics of Nb3Sn – a very brittle material – and is easy to implement on a limited number of magnets. A special tungsten absorber system, integrated in the beam screen, shields these magnets from the “heavy rain” of collision debris, which will be five times more intense than in the LHC.

At the beginning of March, the LARP teams at Fermilab succeeded in training the first 1.5 m-long model. Designed and manufactured by a joint CERN–LARP team, this is the first accelerator-quality, final cross-section model of the inner triplet magnet. Following a very smooth training curve (see figure 2), the model magnet surpassed the operating gradient, which corresponds to a peak field of 11.4 T, and actually reached 12.5 T. Together with its HL-LHC companion, the 11 T dipole, it is among the first accelerator-quality magnets built to reach such fields.

Building on the proven performance of the Nb3Sn technology, experts from both sides of the Atlantic will go on to build the full-length quadrupoles. At Fermilab, the final US magnets will measure 4.2 m in length, while the CERN team aims to manufacture 7.2 m-long quadrupoles, halving the number of magnets to be installed.

In addition to ensuring the future of collider physics, for which the success of high-field Nb3Sn magnets in the HL-LHC is a key ingredient, the new powerful magnets will pave the way for applications in other fields, including medicine. The Nb3Sn technology is already at the core of ultra-high-field magnets used in various areas, but not yet in magnetic resonance imaging (MRI), which today represents the largest commercial use of superconductivity. The hope is that the development of this technology for the HL-LHC machine may also boost the wider use of Nb3Sn magnets in the medical sector. Thanks to the higher magnetic fields that can be achieved with Nb3Sn, MRI systems using this technology would be able to provide more detailed images and faster scanning, e.g. for functional imaging. The challenge is now within reach.

The HL-LHC: a bright vision

The LHC is one of the world’s largest and most complex scientific instruments. Its design and construction required more than 20 years of hard work and the unique expertise of a number of experts. Following on from the discovery of the Higgs boson in 2012, the machine continues to run at unprecedented energy to give physicists access to phenomena that have so far remained out of reach.

The full exploitation of the LHC and its high-luminosity upgrade programme, the High Luminosity LHC (HL-LHC), have been identified as one of Europe’s highest priorities for the next decade in the European Strategy for Particle Physics (CERN Courier July/August 2013 p9) adopted by CERN Council in the special session held in Brussels on 30 May 2013. The HL-LHC was also recently selected as one of the 29 landmark projects of the European Strategy Forum on Research Infrastructures (ESFRI) 2016 Roadmap.

Although it concerns only about 5% of the current machine, the HL-LHC is a major upgrade programme requiring a number of key innovative technologies, each one an exceptional technological challenge that involves several institutes around the world.

At the heart of the new configuration are the powerful magnets – both dipoles and quadrupoles – that will have to operate at unprecedented field values of 11 and 12 T, respectively. In particular, the quadrupoles, also called “inner triplets”, which will be installed on both sides of the collision points, are crucial to obtaining the designed leap in integrated luminosity: from the 300 fb–1 of the LHC by the end of its initial run to the 3000 fb–1 of the HL-LHC. Their aperture will be more than double that of the current triplets – a requirement that would scare many magnet experts, because the stored energy grows with the square of both the magnetic field and the magnet aperture.
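The scaling behind that worry can be illustrated with a crude estimate: treating the bore as filled with a uniform field, the stored energy per unit length is the magnetic energy density B²/2μ0 times the aperture cross-section. The bore diameters below are indicative round numbers used for illustration, not project specifications:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability (T·m/A)

def energy_per_metre(field_T, bore_m):
    """Crude estimate: energy density B^2/(2*mu0) times the bore cross-section (J/m)."""
    return field_T**2 / (2 * MU0) * math.pi * (bore_m / 2) ** 2

current_triplet = energy_per_metre(8.0, 0.070)   # ~8 T peak, ~70 mm bore (indicative)
hl_triplet      = energy_per_metre(12.0, 0.150)  # ~12 T, 150 mm bore

print(f"{hl_triplet / current_triplet:.1f}")  # roughly a factor of 10 more energy per metre
```

Raising the field by 50% and doubling the aperture thus multiplies the stored energy per metre by an order of magnitude, which is what makes quench protection for the new triplets so demanding.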

The overall increase of luminosity cannot be reached without revolutionising the superconducting technologies currently used in particle accelerators. The new magnets rely on niobium-tin (Nb3Sn) superconducting cables, instead of the LHC’s niobium-titanium alloy. The first model, with a full-size cross-section but shorter than the actual magnet (1 m long, compared with the final 4.2 or 7 m), has just proven that the technology works well, even beyond expectation. Similarly good results were obtained in January by the experts dealing with the 11 T dipoles that will house the new collimation system for the dispersion suppressor, which is being entirely redesigned (see “Super-magnets at work” in this issue).

Another key element of the new machine is the crab cavities. Unlike standard radiofrequency cavities, crab cavities rotate the beam by providing a transverse deflection of the bunches. This is used to increase the luminosity at the collision points and to reduce the parasitic beam–beam effects that limit the collision efficiency of the accelerator. The crab-cavity concept was explored at the KEKB machine, but at the HL-LHC it will be implemented for the first time at a proton collider.

The current operation of the LHC is often disrupted by failures of the machine’s powering system, partly caused by the high levels of radiation produced by the high-energy, high-intensity circulating beams. With even higher luminosity, this problem could prevent the accelerator from performing reliably. New magnesium-diboride (MgB2) superconducting cables have already demonstrated the transport of electrical currents of 20 to 100 kA at a convenient temperature of 20 K. They will make it possible to move the power converters from the LHC tunnel to a new service gallery, thereby facilitating technical and maintenance operations and reducing the radiation dose to personnel.

All in all, more than 1.2 km of the current ring will need to be replaced with new components. Using cutting-edge technologies, it will be possible for scientists to significantly extend the discovery potential of the LHC (e.g. providing about a 30% higher mass reach for new particles) without replacing the full ring. This is also a challenge for the experiments, which will have to upgrade their inner detectors and other components to face the higher collision rate (CERN Courier January/February 2016 p26).

Based on innovative technological solutions, the HL-LHC will also allow physicists to study in depth the properties of the Higgs boson and any possible new particles that the LHC may discover in future runs. In addition, it will play a decisive role in the future of experimental particle physics because it is the ideal test bed for both technology demonstration and for the design of future accelerators beyond the LHC.

The promising results obtained so far have been possible thanks to the collaborative effort of several institutes in Europe and around the world. It is indeed amazing to realise that since its inception, the HL-LHC has brought together more than 250 scientists from 25 countries. This is confirmation that, today, no big scientific endeavour, however bright and smart it might be, can actually be pursued without the contribution of the whole community.

Laser Experiments for Chemistry and Physics

By R N Compton and M A Duncan
Oxford University Press

9780198742982

The book provides an introduction to the characteristics and operation of lasers through laboratory experiments for undergraduate students in physics and chemistry.

After a first section reviewing the properties of light, the history of laser invention, the atomic, molecular and optical principles behind how lasers work, as well as the kinds of lasers that are available today, the text presents a rich set of experiments on various topics: thermodynamics, chemical analysis, quantum chemistry, spectroscopy and kinetics.

Each chapter gives the historical and theoretical background to the topics covered by the experiments, and variations to the prescribed activities are suggested.

Both of the authors began their research careers at the time when laser technology was taking off, and witnessed advances in the development and application of this new technology to many fields. In this book they aim to pass on some of their experience to new students, and to stimulate practical activities in optics and lasers courses.

Resummation and Renormalization in Effective Theories of Particle Physics

By Antal Jakovác and András Patkós
Springer

978-3-319-22620-0

The book re-collects notes written by the authors for a course on finite-temperature quantum fields, and more specifically on the application of effective models of strong and electroweak interactions in particle-physics phenomenology.

The topics selected reflect the research interests of the authors; nevertheless, in their opinion, the material covered in the volume can help master’s students in physics to improve their ability to deal with reorganisations of the perturbation series of renormalisable theories.

The book is made up of eight chapters and is organised in four parts. A historical overview of effective theories (scientific theories that model certain effects without attempting to model their causes) opens the text; two chapters then provide the basics of quantum field theory necessary for following the directions of contemporary research. The third part introduces three different, widely used approaches to improving the convergence properties of renormalised perturbation theory. Finally, results emerging from the application of these techniques to the thermodynamics of the strong and electroweak interactions are reviewed in the last two chapters.
