The Facility for Rare Isotope Beams (FRIB) Project, which was awarded two years ago to Michigan State University by the US Department of Energy Office of Science (DOE-SC), is making significant progress towards start-up in 2020. An important milestone was passed in September 2010, when DOE-SC approved the preferred alternative design in Critical Decision-1, with an associated cost of up to $614.5 million and a completion-schedule range from the autumn of fiscal year 2018 to the spring of 2020.
When FRIB becomes operational, it will be a new DOE national user facility for nuclear science, funded by the DOE-SC Office of Nuclear Physics and operated by Michigan State University. FRIB will provide intense beams of rare isotopes – short-lived nuclei not normally found on Earth. The main focus of FRIB will be to produce such isotopes, study their properties and use them in applications to address national needs. FRIB will provide researchers with the technical capabilities not only to investigate rare isotopes, but also to put this knowledge to use in various applications, for example in materials science, nuclear medicine and the fundamental understanding of nuclear material important to stewardship of nuclear-weapons stockpiles.
An optimization from the layout initially proposed for FRIB to the preferred alternative design moves the linac from a straight line extending to the northeast through Michigan State University’s campus to a paperclip-like configuration next to the existing structure at the National Superconducting Cyclotron Laboratory (NSCL). The linac will have more than 344 superconducting RF cavities in an approximately 170 m-long tunnel about 12 m underground and will accelerate stable nuclei to kinetic energies of at least 200 MeV/nucleon for all ions, with beam power of up to 400 kW. (Energies range from 200 MeV/nucleon for uranium to above 600 MeV for protons.)
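As a rough, back-of-the-envelope cross-check of these figures (the 400 kW and 200 MeV/nucleon values are from the text; the uranium mass number A = 238 is an assumption), the quoted beam power translates into a particle rate as follows:

```python
# Rough cross-check (not from the article): particle rate implied by a 400 kW
# uranium beam at 200 MeV/nucleon. A = 238 for uranium is an assumption.
E_PER_NUCLEON_MEV = 200.0
A_URANIUM = 238
BEAM_POWER_W = 400e3
MEV_TO_J = 1.602e-13          # joules per MeV

energy_per_ion_J = E_PER_NUCLEON_MEV * A_URANIUM * MEV_TO_J   # ~7.6e-9 J per ion
ions_per_second = BEAM_POWER_W / energy_per_ion_J             # ~5e13 ions/s
print(f"Kinetic energy per uranium ion: {energy_per_ion_J:.2e} J")
print(f"Ions delivered per second at 400 kW: {ions_per_second:.1e}")
```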
The Critical Decision (CD)-2 review to approve the performance baseline is planned for spring 2012 and the CD-3 review to approve the start of construction is planned for 2013. The selected architect/engineering firm and FRIB construction manager are exploring options to advance civil construction to the summer of 2012.
Recent meetings between the NSCL and FRIB user groups have put a merger in the works. It is expected to be initiated this year, with the full merger of the more than 800 members and of the groups’ functions completed by the end of the year or early in 2012.
The successful completion of the upgrade to the Nuclotron at JINR marks the end of an important first step in the construction of the Nuclotron-based Ion Collider Facility and Multi-Purpose Detector (NICA/MPD) project. NICA, which is JINR’s future flagship facility in high-energy physics, will allow the study of heavy-ion collisions both in fixed-target experiments and in collider experiments with ¹⁹⁷Au⁷⁹⁺ ions at a centre-of-mass energy of 4–11 GeV (1–4.5 GeV/u ion kinetic energy) and an average luminosity of 10²⁷ cm⁻² s⁻¹. Other goals include polarized-beam collisions and applied research.
NICA’s main element is the Nuclotron, a 251 m-circumference superconducting synchrotron for accelerating nuclei and multi-charged heavy ions, which started up in 1993. It currently delivers ion beams for experiments at internal targets and has a slow-extraction system for fixed-target experiments. By 2007, it was accelerating proton beams to 5.7 GeV, deuterons to 3.8 GeV/u and nuclei (Li, F, C, N, Ar, Fe) to 2.2 GeV/u.
The Nuclotron upgrade – the Nuclotron-M project – was a key part of the first phase of construction work for NICA. It included work to develop the existing accelerator complex for the generation of relativistic ion beams ranging in mass from protons up to gold and uranium, at energies corresponding to the maximum design magnetic field of 2 T. The goals were to reach a new level in beam parameters, to improve substantially the reliability and efficiency of accelerator operation, and to renovate or replace some of the equipment.
The Nuclotron facility includes a cryogenic supply system with two helium refrigerators, as well as infrastructure for the storage and circulation of helium liquid and gas. The injection complex consists of a high-voltage pre-injector with a 700 kV pulsed transformer and an Alvarez-type linac, LU-20, which accelerates ions with Z/A ≥ 0.33 up to an energy of 5 MeV/u. The wide variety of ion types is provided by a heavy-ion source, the ESIS “KRION-2”, a duoplasmatron, a polarized-deuteron source, POLARIS, and a laser ion source for light ions.
As a key element of the NICA collider injection chain, the Nuclotron has to accelerate a single bunch of fully stripped heavy ions (U⁹²⁺, Pb⁸²⁺ or Au⁷⁹⁺) from 0.6 to 4.5 GeV/u with a bunch intensity of about 1–1.5 × 10⁹ ions. The particle losses during acceleration must not exceed 10% and the magnetic field should ramp at 1 T/s. To demonstrate the capacity of the Nuclotron complex and satisfy these requirements, the general milestones of the Nuclotron-M project were specified as the acceleration of heavy ions with atomic masses above 100, and stable and safe operation of the dipole magnets at 2 T.
The upgrade, which started in 2007, involved the modernization of almost all of the Nuclotron systems, with time in six beam runs devoted to testing newly installed equipment. Two stages of the ring vacuum system were upgraded and cryogenic power was doubled. A new power supply for the electrostatic septum of the slow extraction system was constructed and tested, and new power supplies for the closed-orbit corrector magnets were also designed and tested at the ring. The ring’s RF system was upgraded to increase the RF voltage and for tests of adiabatic trapping of particles into the acceleration mode. Vacuum conditions at the Nuclotron’s injector were improved to increase the acceleration efficiency. A completely new power-supply system as well as a quench protection system for magnets and magnetic lenses were also constructed, including: new main power supply units; a new power supply unit for current decrease in the quadrupole lenses; 10 km of new cable lines; and 2000 new quench detectors. In parallel, there was also progress in the design and construction of new heavy-ion and polarized light-ion sources.
Following the Nuclotron’s modernization, in March 2010 ¹²⁴Xe⁴²⁺ ions were accelerated to about 1.5 GeV/u and slow extraction of the beam at 1 GeV/u was used for experiments. In December, stable and safe operation of the magnetic system was achieved with a main field of 2 T. During the run, the power-supply and quench-protection systems were tested in cycles with a bending field of 1.4, 1.6, 1.8 and 2 T at the plateau. The field ramped at 0.6 T/s and the active time for each cycle was about 7 s. A few tens of energy-evacuation events were recorded; in all of them the process followed the nominal regime.
In parallel with the upgrade work, the technical design was prepared for elements in the collider injection chain (a new heavy-ion linear accelerator, booster synchrotron and LU-20 upgrade programme). In addition, the technical design for the collider is in the final stage. The dipole and quadrupole magnets for the collider, as well as for the booster, are based on the design of the Nuclotron superconducting magnets. These have a cold-iron window-frame yoke and low-inductance winding made of a hollow composite superconductor; the magnetic-field distribution is formed by the iron yoke. The fabrication of these magnets gave JINR staff a great deal of experience in superconducting magnet design and manufacturing.
The prototype dipole magnet for the NICA booster was made in 2010 and construction of the magnet model for the collider, based on the preliminary design, is in the final stage. To construct the booster and collider rings, JINR needs to manufacture more than 200 dipole magnets and lenses during a short time period. The working area for magnet production and test benches for the magnet commissioning are currently being prepared.
The rapid decline in temperature of a young neutron star in the supernova remnant Cassiopeia A (Cas A) suggests superfluidity and superconductivity at its core. This conclusion, based on observations by NASA’s Chandra X-ray Observatory, gives new insights into nuclear interactions at ultra-high densities.
Cas A was the “first light” target of the Chandra satellite. Only a month after launch, the image released on 26 August 1999 revealed the filamentary structure of the supernova remnant with details that competed with the best optical images – an incredible achievement for X-ray instrumentation. A tiny spot at the heart of the nebula was identified as a neutron star (CERN Courier October 2004 p19). It had previously gone unnoticed because no pulsations were detected from Cas A, unlike from the pulsar in the Crab Nebula, which is a neutron star spinning 30 times a second (CERN Courier January/February 2006 p10, CERN Courier November 2008 p11). Another oddity of Cas A is that nobody noticed the onset of the supernova some 330 years ago, except perhaps John Flamsteed, who reported the observation of a sixth-magnitude star in August 1680 near the position of the remnant. The explosion of a massive star only 11,000 light-years away should have been visible from Europe, unless it was heavily obscured by dust (CERN Courier January/February 2006 p10).
Recent observations of Cas A by Chandra have now revealed another surprise about its neutron star: its surface has cooled by about 4% in 10 years. According to two independent studies, this dramatic drop in temperature is evidence for superfluid and superconducting matter in the interior of the ultra-dense star. Dany Page from the National Autonomous University of Mexico and colleagues submitted their results to Physical Review Letters just two days before the submission of another letter, to the Monthly Notices of the Royal Astronomical Society, by Peter Shternin of the Ioffe Institute in St Petersburg and collaborators.
While superfluidity is observed only at temperatures near absolute zero on Earth, theorists have estimated that this friction-free state of matter may survive at temperatures of hundreds of millions of degrees in the ultra-dense core of neutron stars. The nuclear density of neutron stars – one teaspoon of their material has a mass of the order of 1000 million tonnes – forces protons and electrons to merge, resulting in a star composed mostly of neutrons. The new results suggest strongly that the remaining protons in the star’s core are in a superfluid state and – because they carry a charge – also form a superconductor.
Both teams further show that the rapid cooling in Cas A can be explained by the formation of a neutron superfluid in the stellar core within about the past 100 years, as seen from Earth. This cooling should continue for a few decades before slowing down. It is caused by the production of superfluid neutron pairs – so-called Cooper pairs – accompanied by the emission of two neutrinos per pair, which escape from the star and carry away energy.
With Cas A, astronomers have been lucky to catch a young neutron star just as it makes the transition to a superfluid state. This allows them to set the critical temperature for the onset of superfluidity for neutrons interacting via the strong force at about 500 million degrees. The results are also important for understanding other properties of neutron stars, such as magnetar outbursts and “glitches”. The latter are the sudden spin-ups of pulsars, probably as a result of quakes in the crust of the neutron star. There has been previous evidence for superfluidity, but at subnuclear densities in the crust. The research into Cas A provides the first direct evidence for superfluid neutrons and protons in the core of the star.
The conferences on Computing in High Energy and Nuclear Physics (CHEP), which are held approximately every 18 months, reached their silver jubilee with CHEP 2010, held at the Academia Sinica Grid Computing Centre (ASGC) in Taipei in October. ASGC is the LHC Computing Grid (LCG) Tier 1 site for Asia and the organizers are experienced in hosting large conferences. Their expertise was demonstrated again throughout the week-long meeting, which drew almost 500 participants from more than 30 countries, including 25 students sponsored by CERN’s Marie Curie Initial Training Network for Data Acquisition, Electronics and Optoelectronics for LHC Experiments (ACEOLE).
Appropriately, given the subsequent preponderance of LHC-related talks, the LCG project leader, Ian Bird of CERN, gave the opening plenary talk. He described the status of the LCG, how it got there and where it may go next, and presented some measures of its success. The CERN Tier 0 centre moves some 1 PB of data a day, in- and out-flows combined; it writes around 70 tapes a day; the worldwide grid supports some 1 million jobs a day; and it is used by more than 2000 physicists for analysis. Bird was particularly proud of the growth in service reliability, which he attributed to many years of preparation and testing. For the future, he believes that the LCG community needs to be concerned with sustainability, data issues and changing technologies. The status of the LHC experiments’ offline systems was summarized by Roger Jones of Lancaster University. He stated that the first year of operations had been a great success, as presentations at the International Conference on High Energy Physics in Paris had indicated. He paid tribute to CERN’s support of Tier 0 and remarked that data distribution has been smooth.
In the clouds
As expected, there were many talks about cloud computing, including several plenary talks on general aspects, as well as technical presentations on practical experiences and tests or evaluations of the possible use of cloud computing in high-energy physics. It is sometimes difficult to separate hype from initiatives with definite potential, but it is clear that clouds will find a place in high-energy physics computing, probably based more on private clouds than on the well known commercial offerings.
Harvey Newman of Caltech described a new generation of high-energy physics networking and computing models. As the available bandwidth continues to grow exponentially in capacity, LHC experiments are increasingly benefiting from it – to the extent that experiment models are being modified to make more use of pulling data to a job rather than pushing jobs towards the data. A recently formed working group is gathering new network requirements for future networking at LCG sites.
Lucas Taylor of Fermilab addressed the issue of public communications in high-energy physics. Recent LHC milestones have attracted massive media interest and Taylor stated that the LHC community simply has no choice other than to be open, and welcome the attention. The community therefore needs a coherent policy, clear messages and open engagement with traditional media (TV, radio, press) as well as with new media (Web 2.0, Twitter, Facebook, etc.). He noted major video-production efforts undertaken by the experiments, for example ATLAS-Live and CMS TV, and encouraged the audience to contribute where possible – write a blog or an article for publication, offer a tour or a public lecture and help build relationships with the media.
There was an interesting presentation of the Facility for Antiproton and Ion Research (FAIR) being built at GSI, Darmstadt. Construction will start next year and switch-on is scheduled for 2018. Two of the planned experiments are the size of ALICE or LHCb, with similar data rates expected. Triggering is a particular problem and data acquisition will have to rely on event filtering, so online farms will have to be several orders of magnitude larger than at the LHC (10,000 to 100,000 cores). This is a major area of current research.
David South of DESY, speaking on behalf of the Study Group for Data Preservation and Long-term Analysis in High-Energy Physics set up by the International Committee for Future Accelerators, presented what is probably the most serious effort yet for data preservation in high-energy physics. The question is: what to do with data after the end of an experiment? With few exceptions, data from an experiment are often stored somewhere until eventually they are lost or destroyed. He presented some reasons why preservation is desirable but needs to be properly planned. Some important aspects include the technology used for storage (should it follow storage trends, migrating from one media format to the next?), as well as the choice of which data to store. Going beyond the raw data, this must also include software, documentation and publications, metadata (logbooks, wikis, messages, etc.) and – the most difficult aspect – people’s expertise.
Although some of the traditional plenary time had been given over to additional parallel sessions, there were still far too many submissions for all of them to be given as oral presentations. So, almost 200 submissions were scheduled as posters, which were displayed in two batches of 100 each over two days. The morning coffee breaks were extended to permit attendees to view them and interact with the authors. There were also two so-called Birds of a Feather sessions, on LCG Operations and LCG Service Co-ordination, which allowed the audience to discuss aspects of the LCG service in an informal manner.
The parallel stream on Online Computing was, of course, dominated by LHC data acquisition (DAQ). The DAQ systems for all experiments are working well, leading to fast production of physics results. Talks on event processing provided evidence of the benefits of solid preparation and testing; simulation studies have proved to provide an amazingly accurate description of LHC data. Both the ATLAS and CMS collaborations report success with prompt processing at the LCG Tier 0 at CERN. New experiments, for example at FAIR, should take advantage of the experiment frameworks used currently by all of the LHC experiments, although the analysis challenges of the FAIR experiments exceed those of the LHC. There was also a word of caution – reconstruction works well today but how will it cope with increasing event pile-up in the future?
Presentations in the software engineering, data storage and databases stream covered a heterogeneous range of subjects, from quality assurance and performance monitoring to databases, software re-cycling and data preservation. Once again, the conclusion was that the software frameworks for the LHC are in good shape and that other experiments should be able to benefit from this.
The most popular parallel stream of talks was dedicated to distributed processing and analysis. A main theme was the successful processing and analysis of data in a distributed environment, dominated, of course, by the LHC. The message here is positive: the computing models are mainly performing as expected. The success of the experiments relies on the success of the Grid services and the sites, but the hardest problems take far longer to solve than foreseen in the targeted service levels. The other two main themes were architectures for future facilities, such as FAIR, the Belle II experiment at the SuperKEKB upgrade in Japan and the SuperB project in Italy; and improvements in infrastructure and services for distributed computing. The new projects are using a tier structure, but apparently with one layer fewer than in the LCG. Two new, non-high-energy-physics projects – the Fermi gamma-ray telescope and the Joint Dark Energy Mission – seem not to use Grid-like schemes.
Tools that work
The message from the computing fabrics and networking stream was that “hardware is not reliable, commodity or otherwise”; this statement from Bird’s opening plenary was illustrated in several talks. Deployments of upgrades, patches and new services are slow – another quote from Bird. Several talks showed that the community has the mechanisms, so perhaps the problem lies in communication rather than in the technology? Yes, storage is an issue and there is a great deal of work going on in this area, as shown in several talks and posters. However, the various tools available today have proved that they work: via the LCG, the experiments have stored and made accessible the first months of LHC data. This stream included many talks and posters on different aspects and uses of virtualization. It was also shown that 40 Gbit/s and 100 Gbit/s networks are a reality: network bandwidth is there but the community must expect to have to pay for it.
Compared with previous CHEP conferences, there was a shift in the Grid and cloud middleware sessions. These showed that pilot jobs are fully established, virtualization is entering serious large-scale production use and there are more cloud models than before. A number of monitoring and information system tools were presented, as well as work on data management. Various aspects of security were also covered. Regarding clouds, although the STAR collaboration at the Relativistic Heavy Ion Collider at Brookhaven reported impressive production experience and there were a few examples of successful uses of Amazon EC2 clouds, other initiatives are still at the starting gate and some may not get much further. There was a particularly interesting example linking CernVM and Boinc. It was in this stream that one of the more memorable quotes of the week occurred, from Rob Quick of Fermilab: “There is no substitute for experience.”
The final parallel stream covered collaborative tools, with two sessions. The first was dedicated to outreach (Web 2.0, ATLAS Live and CMS Worldwide) and new initiatives (Inspire); the second to tools (ATLAS Glance information system, EVO, Lecture archival scheme).
• The next CHEP will be held on 21–25 May 2012, hosted by Brookhaven National Laboratory, at the NYU campus in Greenwich Village, New York; see www.chep2012.org/.
The journal Science celebrated its 125th anniversary in 2005 and, in a special issue, listed what it considered to be the top 25 questions facing scientists during the next quarter of a century (Kerr 2005). These questions included: how does the Earth’s interior work?
The main geophysical and geochemical processes that have driven the evolution of the Earth are strictly bound by the planet’s energy budget. The current flux of energy entering the Earth’s atmosphere is well known: the main contribution comes from solar radiation (1.4 × 10³ W m⁻²), while the energy deposited by cosmic rays is significantly smaller (10⁻⁸ W m⁻²). The uncertainties on the terrestrial thermal power are larger: although the most quoted models estimate a global heat loss in the range of 40–47 TW, a global power of 30 TW is not excluded. Measurements of the temperature gradient taken from some 4 × 10⁴ drill holes distributed around the world provide a constraint on the Earth’s heat production. Nevertheless, these direct investigations fail near the oceanic ridges, where mantle material emerges: here hydrothermal circulation is a highly efficient heat-transport mechanism.
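For comparison with the solar flux quoted above, the global heat-loss figures can be converted into a mean surface heat flux; the short sketch below assumes a standard Earth radius of 6371 km, a value not given in the text:

```python
# Sketch: convert the quoted global heat loss (30-47 TW) into a mean surface
# heat flux, for comparison with the solar input of ~1.4e3 W/m^2.
# The Earth radius of 6371 km is an assumed standard value, not from the text.
import math

R_EARTH_M = 6.371e6
surface_area_m2 = 4 * math.pi * R_EARTH_M**2   # ~5.1e14 m^2

for heat_tw in (30, 40, 47):
    flux_w_per_m2 = heat_tw * 1e12 / surface_area_m2
    print(f"{heat_tw} TW global heat loss -> mean flux {flux_w_per_m2*1e3:.0f} mW/m^2")
```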
The generation of the Earth’s magnetic field, its mantle circulation, plate tectonics and secular (i.e. long lasting) cooling are processes that depend on terrestrial heat production and distribution, and on the separate contributions to Earth’s energy supply (radiogenic, gravitational, chemical etc.). An unambiguous and observationally based determination of radiogenic heat production is therefore necessary for understanding the Earth’s energetics. Such an observation requires determining the quantity of long-lived radioactive elements in the Earth. However, the direct geochemical investigations only go as far as the upper portion of the mantle, so all of the geochemical estimates of the global abundances of heat-generating elements depend on the assumption that the composition of meteorites reflects that of the Earth.
The uranium and thorium decay chains and ⁴⁰K contribute about 99% of the total radiogenic heat production of the Earth; however, both the total amount and the distribution of these elements inside the Earth remain open to question. Thorium and uranium are refractory lithophile elements, while potassium is volatile. The processes of accretion and differentiation of the early Earth, as well as the subsequent recycling and dehydration of subducting slabs, further enhance the concentrations of these radioactive elements in the crust. According to Roberta Rudnick and Shan Gao, the radiogenic heat production of the crust is 7.3 ± 1.2 (1σ) TW (Rudnick and Gao 2003).
The expected amount and distribution of uranium, thorium and potassium in the mantle are model dependent. The Bulk Silicate Earth (BSE) is a canonical model that provides a description of geological evidence that is coherent within the constraints placed by the combined studies of mantle samples and the most primitive of all of the meteorites – the CI group of carbonaceous chondrites – which have a chemical composition similar to that of the solar photosphere, neglecting gaseous elements. The model predicts a radiogenic heat production in the mantle of about 13 TW. However, it needs to be tested because, on the grounds of available geochemical and/or geophysical data, it is not possible to exclude the possibility that the radioactivity in the Earth today is enough to account for the highest estimate of the total terrestrial heat. Some models are based on a comparison of the planet with other chondrites, such as enstatite chondrites, and alternative hypotheses do not exclude the presence of radioactive elements in the Earth’s core. In addition, other models suggest the existence of a geo-reactor of 3–6 TW, sustained by significant amounts of uranium present around the core. The debate remains open.
Neutrinos from the Earth
Geo-neutrinos are the (anti)neutrinos produced by the natural radioactivity inside the Earth. In particular, the decay chains of ²³⁸U and ²³²Th include six and four β⁻ decays, respectively, and the nucleus of ⁴⁰K decays by electron capture and β⁻ decay with branching ratios of 11% and 89%, respectively. The decays produce heat and electron antineutrinos, with fixed ratios of heat to neutrinos (table 1). A measurement of the antineutrino flux, and possibly of the spectrum, would provide direct information on the amount and composition of radioactive material inside the Earth and so would determine the radiogenic contribution to the heat flow.
The Earth emits mainly electron-antineutrinos, while the Sun shines in electron-neutrinos. The order of magnitude of the antineutrino flux at the surface, depending on the model hypotheses, could be 10⁶ cm⁻² s⁻¹ from uranium and thorium in the Earth and 10⁷ cm⁻² s⁻¹ from potassium, as compared with a neutrino flux of 6 × 10¹⁰ cm⁻² s⁻¹ from the Sun. Given the two types of crust (continental and oceanic) and their different composition and thickness, the expected flux of geo-neutrinos differs from place to place on the Earth’s surface. Moreover, considering that this variation can be as much as an order of magnitude, a detector’s sensitivity to geo-neutrinos coming from the mantle and the crust will depend on its location.
The process for the detection of low-energy antineutrinos used by the detectors currently running (KamLAND at Kamioka, Japan, and Borexino at Gran Sasso, Italy) and under construction (SNO+ at SNOLAB, Canada) is inverse beta decay, with a threshold of 1.806 MeV. Hence, only a fraction of the geo-neutrinos from ²³⁸U and ²³²Th are above threshold (figure 1), and the detection of antineutrinos from ⁴⁰K remains a difficult challenge even for the next generation of detectors. These experiments use liquid scintillator as the detecting material: one kilotonne of it contains some 10³² protons. As a consequence, the event rate is conveniently expressed in terms of terrestrial neutrino units (TNU), defined as one event per 10³² target protons a year.
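The quoted figure of some 10³² protons per kilotonne can be checked with a minimal sketch; a pure CH₂ composition is assumed for the scintillator here, which is only an approximation to the real cocktails used:

```python
# Sketch: estimate free (hydrogen) protons in 1 kilotonne of liquid scintillator,
# assuming a pure CH2 composition (an approximation; real scintillators differ slightly).
AVOGADRO = 6.022e23
CH2_MOLAR_MASS_G = 14.0      # g/mol for one CH2 unit
MASS_G = 1e9                 # 1 kilotonne in grams

ch2_units = MASS_G / CH2_MOLAR_MASS_G * AVOGADRO
hydrogen_protons = 2 * ch2_units          # two hydrogen nuclei per CH2 unit
print(f"Free protons per kilotonne: {hydrogen_protons:.1e}")   # ~8.6e31, i.e. ~1e32
```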
In the underground experiments devoted to the measurement of geo-neutrinos, the liquid scintillator – essentially hydrocarbons – provides the hydrogen nuclei that act as the target for the antineutrinos. In these detectors a geo-neutrino event is tagged by a prompt signal and a delayed signal, following the inverse beta decay: ν̄e + p → e⁺ + n − 1.806 MeV.
The positron ionization and annihilation provide the prompt signal. The energy of the incoming antineutrino is related to the measured energy by the relationship Eν = Emeasured + 0.782 MeV. The prompt signal is in the energy range 1.02–2.50 MeV for uranium and 1.02–1.47 MeV for thorium. The neutron slows down and, after thermalization, is captured by a proton, forming a deuteron and a gamma ray of 2.22 MeV. The gamma ray generates the delayed signal. In large volumes of liquid scintillator the delayed signal is fully contained with an efficiency of around 98%.
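A small sketch makes this energy bookkeeping explicit. The 1.806 MeV threshold and the 0.782 MeV offset are taken from the text; the maximum antineutrino energies of the uranium and thorium chains (about 3.27 MeV and 2.25 MeV) are standard values assumed here, not quoted in the article:

```python
# Sketch: relate the measured prompt energy to the antineutrino energy,
# E_nu = E_prompt + 0.782 MeV (from the text). The chain endpoints
# (~3.27 MeV for the 238U chain, ~2.25 MeV for the 232Th chain) are
# assumed standard values.
OFFSET_MEV = 0.782
THRESHOLD_MEV = 1.806

def prompt_energy(e_nu_mev):
    """Prompt (positron ionization + annihilation) energy for a given E_nu."""
    return e_nu_mev - OFFSET_MEV

for label, e_nu_max in (("238U chain", 3.27), ("232Th chain", 2.25)):
    lo = prompt_energy(THRESHOLD_MEV)   # 1.02 MeV: just the two annihilation gammas
    hi = prompt_energy(e_nu_max)
    print(f"{label}: prompt signal from {lo:.2f} to {hi:.2f} MeV")
```

The output reproduces the prompt-energy windows quoted above for the uranium and thorium chains.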
The prompt–delayed sequence of the inverse beta decay provides a strong tag for electron antineutrinos, well known since the pioneering experiment of Clyde Cowan and Fred Reines in 1956. There is a correlation in space and time between the prompt and delayed signals. The correlation time depends on the properties of the scintillator and is of the order of 200–250 μs. The correlated distance between the two signals is related to the spatial resolution of the detector (around 10 cm at 1 MeV) and is driven by Compton interactions – with a probability close to 100%, it is less than 1 m.
Any electron-antineutrinos besides those produced within the Earth, and any event that can mimic a prompt–delayed signal with a neutron in the final state, can be a source of background. In particular, consider the electron-antineutrinos produced by nuclear power reactors. Their energy spectrum partially overlaps that of the geo-neutrinos but extends towards higher energies, up to about 10 MeV. Some 400 power reactors exist, mainly in North America, Europe, western Russia and Japan. Therefore, depending on the location of the underground laboratory, this background can interfere significantly with the detection of geo-neutrinos.
Among the other background sources are (α,n) reactions resulting from contaminants in the scintillator, such as ²¹⁰Po, and cosmogenic radioactive isotopes such as ⁹Li and ⁸He, which are produced by muons crossing the laboratory overburden. ⁹Li and ⁸He decay through beta-delayed neutron emission with T1/2 = 178.3 ms and 119 ms, respectively. A dead-time cut of 2 s after each detected muon crossing the liquid scintillator can reject this background with an efficiency of 99.9%. A high level of radiopurity and a fiducial-mass cut will reduce uncorrelated random coincidences, which can arise from impurities such as ²¹⁰Bi, ²¹⁴Bi and ²⁰⁸Tl.
The first attempt to detect geo-neutrinos was made by the KamLAND experiment in 2005, when a signal was detected at the 2σ level (Araki et al. 2005). Three years later the same experiment reported a second measurement at 2.7σ (Abe et al. 2008). In 2010 Borexino reported evidence of geo-neutrinos at 4.2σ (Bellini et al. 2010). This was followed by a measurement from KamLAND with the same significance (Inoue 2010 and Shimizu 2010). The KamLAND and Borexino experiments both make use of a large mass of organic liquid scintillator shielded by a large-volume water Cherenkov detector and viewed by a large number of photomultipliers (around 2000). In KamLAND in particular, a fiducial mass of around 700 tonnes can be selected, whereas in Borexino the maximum target mass can be as much as 280 tonnes. The statistics of the KamLAND measurement are higher than in Borexino owing to the larger volume and longer exposure; on the other hand, the signal-to-noise ratio in the geo-neutrino spectral window is about 2 for Borexino and about 0.15 for KamLAND.
The interesting quantity is the flux of geo-neutrinos at a given location on the Earth’s surface. This depends on the spatial distribution of the heat-generating elements within the Earth. Geo-neutrinos can travel as much as some 12,000 km to the detector. Therefore, the measured flux of geo-neutrinos must include the effect of neutrino oscillations. It turns out that for geo-neutrinos, the global effect of oscillations reduces to a constant suppression of the flux through an average survival probability, <Pee>, of around 0.57.
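That average suppression can be reproduced with the standard two-flavour expression for oscillations averaged over many oscillation lengths; the value of the solar mixing angle used below is an assumed input, not taken from the article, and θ13 is neglected:

```python
# Sketch: averaged survival probability for geo-neutrinos over many oscillation
# lengths, <Pee> ~ 1 - 0.5*sin^2(2*theta_12) in the two-flavour approximation
# (theta_13 neglected). sin^2(2*theta_12) ~ 0.86 is an assumed value for the
# solar mixing angle, not quoted in the article.
SIN2_2THETA12 = 0.86

p_ee_avg = 1.0 - 0.5 * SIN2_2THETA12
print(f"<Pee> ~ {p_ee_avg:.2f}")   # ~0.57, as quoted in the text
```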
The number of observed geo-neutrino events in KamLAND is 106 +29/−28 (+89/−78) at 1σ (3σ), with 2135 live-days and a target mass of about 670 tonnes. Borexino has observed 9.9 +4.1/−3.4 (+14.6/−8.2) geo-neutrino events in 482 days with 225 tonnes, at 1σ (3σ). The rates in TNU for the Borexino and KamLAND observations correspond to 64.8 +26.6/−21.6 and 38.3 +10.3/−9.9, respectively. In fits to the detected data in both experiments, the shapes of the geo-neutrino spectra are the same as in figure 1, assuming the chondritic Th/U mass ratio of 3.9. The combined KamLAND and Borexino observation has a significance of 5σ (Fogli et al. 2010). Figure 2 shows the allowed range for geo-neutrino rates in Borexino and KamLAND as a function of the Earth’s radiogenic heat. The minimum radiogenic heat of the Earth corresponds to the crust contribution alone.
The signal-to-noise ratio with respect to the reactor-antineutrino background in the geo-neutrino energy range is a fundamental parameter for geo-neutrino observations. In Borexino in particular this ratio – neglecting other backgrounds – is around 1.3 because there are no nearby nuclear reactors; indeed, at Gran Sasso the weighted distance to reactors, <Lreac>, is about 1000 km. By contrast, at Kamioka <Lreac> is around 200 km, with a signal-to-noise ratio of about 0.2. Therefore, at present the significance of the Borexino measurement is limited only by statistics (figure 3). This indicates that a spectroscopic measurement of the geo-neutrino signal is feasible, given the overall low background rate.
In a few years a third detector, SNO+, with a weighted reactor distance <Lreac> of around 480 km should be operational. A combined analysis of the Borexino, KamLAND and SNO+ experiments could constrain the radiogenic heat of the mantle. In the long term, LENA – a super-massive detector of about 50 kilotonnes – could observe as many as 1000 geo-neutrinos a year. LENA would be located at the Centre for Underground Physics at Pyhäsalmi in Finland with <Lreac> of around 1000 km.
• The authors acknowledge some interesting discussions with W F McDonough, R L Rudnick and G Fiorentini.
The international conference series on spin originated with the biennial Symposia on High Energy Spin Physics, launched in 1974 at Argonne, and the Symposia on Polarization Phenomena in Nuclear Physics, which started in 1960 at Basle and were held every five years. Joint meetings began in Osaka in 2000, with the latest, SPIN2010, being held at the Forschungszentrum Jülich, chaired by Hans Ströher and Frank Rathmann. The 19th International Spin Physics Symposium was organized by the Institut für Kernphysik
(IKP), host of the 3 GeV Cooler Synchrotron, COSY – a unique facility for studying the interactions of polarized protons and deuterons with internal polarized targets. Research there is aimed at developing new techniques in spin manipulation for applications in spin physics, in particular for the new Facility for Antiproton and Ion Research (FAIR) at GSI, Darmstadt. The 250 or so talks presented at SPIN2010 covered all aspects of spin physics – from the latest results on transverse spin physics from around the world to spin dependence in fusion reactors.
The conference started with a review of the theoretical aspects of spin physics by Ulf-G Meißner, director of the theory division at IKP, who focused on the challenges faced by the modern effective field-theory approach to few-body interactions at low and intermediate energies. Progress here has been tremendous but old puzzles such as the analysing power, Ay, in proton-deuteron scattering, refuse to be fixed. These were discussed in more detail in the plenary talks by Evgeny Epelbaum of Bochum and Johan Messchendorp of Groningen. In the second talk of the opening plenary session, Richard Milner of the Massachusetts Institute of Technology (MIT) highlighted the future of experimental spin physics.
It is fair to say that the classical issue of the helicity structure of protons has decided to take a rest, in the sense that rapid progress is unlikely. During the heyday of the contribution of the Efremov-Teryaev-Altarelli-Ross spin anomaly to the Ellis-Jaffe sum rule, it was tempting to attribute the European Muon Collaboration “spin crisis” to a relatively large number of polarized gluons in the proton. Andrea Bressan of Trieste reported on the most recent data from the COMPASS experiment at CERN, on the helicity structure function of protons and deuterons at small x, as well as the search for polarized gluons via hard deep inelastic scattering (DIS) reactions. Kieran Boyle of RIKEN and Brookhaven summarized the limitations on Δg from data from the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. The non-observation of Δg within the already tight error bars indicates that gluons refuse to carry the helicity of protons. Hence, the dominant part of the proton helicity is in the orbital momentum of partons.
The extraction of the relevant generalized parton distributions from deeply virtual Compton scattering was covered by Michael Düren of Gießen for the HERMES experiment at DESY, Andrea Ferrero of Saclay for COMPASS and Piotr Konczykowski for the CLAS experiment at Jefferson Lab. Despite impressive progress, there is still a long road ahead towards data that could offer a viable evaluation of the orbital momentum contribution to Ji’s sum rule. The lattice QCD results reviewed by Philipp Hägler of Munich suggest the presence of large orbital-angular momenta, Lu ≈ –Ld ≈ 0.36 (1/2), which tend to cancel each other.
The future of polarized DIS at electron–ion colliders was reviewed by Kurt Aulenbacher of Mainz. The many new developments range from a 50-fold increase in the current of polarized electron guns to an increase of 1000 in the rate of electron cooling.
Transversity was high on the agenda at SPIN2010. It is the last, unknown leading-twist structure function of the proton – without it the spin tomography of the proton would be forever incomplete. Since the late 1970s, everyone has known that QCD predicts the death of transverse spin physics at high energy. It took quite some time for the theory community to catch up with the seminal ideas of J P Ralston and D E Soper of some 30 years ago on the non-vanishing transversity signal in double-polarized Drell-Yan (DY) processes; it also took a while to accept the Sivers function, although the Collins function fell on fertile ground. Now, the future of transverse spin physics has never been brighter. During the symposium, news came of the positive assessment by CERN’s Super Proton Synchrotron Committee with respect to the continuation of COMPASS for several more years.
Both the Collins and Sivers effects have been observed beyond doubt by HERMES and COMPASS. With its renowned determination of the Collins function, the Belle experiment at KEK paved the way for the first determination of the transversity distribution in the proton, which turns out to be similar in shape and magnitude to the helicity density in the proton. Mauro Anselmino reviewed the phenomenology work at Turin, which was described in more detail by Mariaelena Boglione. Non-relativistically, the tensor/Gamow-Teller (transversity) and axial (helicity) currents are identical. The lattice QCD results reported by Hägler show that the Gamow-Teller charge of protons is indeed close to the axial charge.
The point that large transverse spin effects are a feature of valence quarks has been clearly demonstrated in single-polarized proton–proton collisions at RHIC by the PHENIX experiment, as Brookhaven’s Mickey Chiu reported. The principal implication for the PAX experiment at FAIR from the RHIC data, the Turin phenomenology and lattice QCD is that the theoretical expectations of large valence–valence transversity signals in DY processes with polarized antiprotons on polarized protons are robust.
The concern of the QCD community about a contribution of the orbital angular momentum of constituents to the total spin is nothing new to the radioactive-ion-beam community. Hideki Ueno of RIKEN reported on the progress in the production of spin-aligned and polarized radioactive-ion beams, where the orbital momentum of stripped nucleons shows itself in the spin of fragments.
The spin-physics community is entering a race to test the fundamental QCD prediction of the opposite sign of the Sivers effect in semi-inclusive DIS and DY on polarized protons. As Catarina Quintans from Lisbon explained, COMPASS is well poised to pursue this line of research. At the same time, ambitious plans to measure AN in DY experiments with transverse polarization at RHIC, which Elke-Caroline Aschenauer of Brookhaven presented, have involved scraping together a “yard-sale apparatus” for a proposal to be submitted this year. Paul Reimer of Argonne and Ming Liu of Los Alamos discussed the possibilities at the Fermilab Main Injector.
Following the Belle collaboration’s success with the Collins function, Martin Leitgab of Urbana-Champaign reported nice preliminary results on the interference fragmentation function. These cover a broad range of invariant masses in both arms of the experiment.
In his summary talk, Nikolai Nikolaev, of Jülich, raised the issue of the impact of hadronization on spin correlations. As Wolfgang Schäfer observed some time ago, the beta decay of open charm can be viewed as the final step of the hadronization of open charm. In the annihilation of e⁺e⁻ to open charm, the helicities of the heavy quarks are correlated and the beta decay of the open charm proceeds via the short-distance heavy quark; so there must be a product of the parity-violating components in the dilepton spectrum recorded in the two arms of an experiment. However, because the spinning D* mesons decay into spinless D mesons, the spin of the charmed quark is washed out and the parity-violating component of the lepton spectrum is obliterated.
The PAX experiment to polarize stored antiprotons at FAIR featured prominently during the meeting. Jülich’s Frank Rathmann reviewed the proposal and also reported on the spin-physics programme of the COSY-ANKE spectrometer. Important tests of the theories of spin filtering in polarized internal targets will be performed with protons at COSY, before the apparatus is moved to the Antiproton Decelerator at CERN – a unique place to study the spin filtering of antiprotons. Johann Haidenbauer of Jülich, Yury Uzikov of Dubna and Sergey Salnikov of the Budker Institute of Nuclear Physics reported on the Jülich- and Nijmegen-model predictions for the expected spin-filtering rate. There are large uncertainties with modelling the annihilation effects but the findings of substantial polarization of filtered antiprotons are encouraging. Bogdan Wojtsekhowski of Jefferson Lab came up with an interesting suggestion for the spin filtering of antiprotons using a high-pressure, polarized 3He target. This could drastically reduce the filtering time but the compatibility with the storing of the polarized antiprotons remains questionable.
Kent Paschke of Virginia gave a nice review on nucleon electromagnetic form factors, where there is still a controversy between the polarization transfer and the Rosenbluth separation of GE and GM. He and Richard Milner of MIT discussed future direct measurements of the likely culprit – the two-photon exchange contribution – at Jefferson Lab’s Hall B, at DESY with the OLYMPUS experiment at DORIS and at VEPP-III at Novosibirsk.
Spin experiments have always provided stringent tests of fundamental symmetries and there were several talks on the electric dipole moments (EDMs) of nucleons and light nuclei. Experiments with ultra-cold neutrons could eventually reach a sensitivity of dn ≈ 10⁻²⁸ e⋅cm for the neutron EDM, while new ideas on electrostatic rings for protons could reach a still smaller dp ≈ 10⁻²⁹ e⋅cm. The latter case, pushed strongly by the groups at Brookhaven and Jülich, presents enormous technological challenges. In the race for high precision versus high energy, such upper bounds on dp and dn would impose more stringent restrictions on new physics (supersymmetry etc.) than LHC experiments could provide.
Will nuclear polarization facilitate a solution to the energy problem? There is an old theoretical observation by Russell Kulsrud and colleagues that the fusion rate in tokamaks could substantially exceed the rate of depolarization of nuclear spins. While the spin dependence of the ³HeD and D³H fusion reactions is known, the spin dependence of the DD fusion reaction has never been measured. Kirill Grigoriev of PNPI Gatchina reported on the planned experiment on polarized DD fusion. Even at energies in the 100 keV range, DD reactions receive substantial contributions from higher partial waves and, besides possibly meeting the demands of fusion reactors, such data would provide stringent tests of few-body theories – at present, existing theoretical models predict quintet-suppression factors that differ by nearly an order of magnitude.
• The proceedings will be published by IOP Publishing in Journal of Physics: Conference Series (online and open-access). The International Spin Physics Committee (www.spin-community.org) decided that the 20th Spin Physics Symposium will be held in Dubna in 2012.
When the LHC operates at peak luminosity, about 1000 million interactions will be produced and detected each second at the heart of the CMS experiment. However, only a tiny fraction of these events will be of major importance. As in many particle-physics experiments, a trigger system selects the most interesting physics in real time so that data from just a few of the collisions are recorded. The remaining events – the vast majority – are discarded and cannot be recovered later. The trigger system, therefore, in effect determines the physics potential of the experiment for ever.
The traditional trigger system in a hadron-collider experiment comprises three tiers. Level 1 (L1) is mostly hardware and low-level firmware that selects about 100,000 interactions from the 1000 million or so produced each second. Level 2 (L2), which is typically a combination of custom-built hardware and software, then filters a few thousand interactions to be sent to the next level. Level 3 (L3), in turn, invokes higher-level algorithms to select the couple of hundred events per second that require detailed study.
At the LHC, proton bunches cross in the experiments at a rate of up to 40 million times a second – with up to 20 or so interactions per crossing. At CMS, each crossing can produce around 1 MB of data. The aim of the trigger system is to reduce the data rate to about 1 GB/s, which is the speed at which the data-acquisition system can record data. This implies reducing the event rate to around 100 Hz.
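A minimal sketch of the arithmetic, using the rates quoted in this and the previous paragraph (about 40 million crossings a second, a Level-1 output of about 100,000 interactions a second and roughly 100 Hz recorded), shows how the overall rejection is shared between the two levels described in the next paragraph:

```python
# Sketch of the rate reduction quoted in the text: ~40 MHz of bunch crossings,
# a Level-1 accept rate of ~100 kHz and a recorded rate of ~100 Hz.
CROSSING_RATE_HZ = 40e6
L1_ACCEPT_HZ = 100e3
RECORDED_HZ = 100.0

l1_rejection = CROSSING_RATE_HZ / L1_ACCEPT_HZ   # ~400, as quoted below
hlt_rejection = L1_ACCEPT_HZ / RECORDED_HZ       # ~1000, as quoted below
print(f"L1 rejection factor:  ~{l1_rejection:.0f}")
print(f"HLT rejection factor: ~{hlt_rejection:.0f}")
```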
The novelty of the CMS trigger system is that the traditional L2 and L3 components are merged into a single system – the high-level trigger (HLT). This is a commercial PC farm that takes all of the interactions from L1 and selects the best 200–300 events each second. Therefore, at CMS the reduction in data rate is carried out in two steps. The L1 trigger, based on custom-built electronics, first reduces the number of events by a factor of around 400, while another factor of about 1000 comes from the HLT.
The data from the collisions are initially stored in buffers, but the L1 electronics still has less than 3 μs to make a decision and transfer that data on to the HLT. Given this short time frame, the L1 trigger acts only on information with coarse granularity from the muon detectors and the calorimeters, which is used to identify important objects, such as muons and jets. By contrast, the HLT works with a modified version of the CMS event offline reconstruction software, with full granularity for all of the sub-detectors, including the central tracker. To reduce the time taken, usually only the regions identified by the L1 trigger are read out, and reconstructed in a “regional reconstruction” process.
Such a system has never before operated at a particle collider. The advantage that this design buys is additional flexibility in the online selection system: the CMS experiment can run the more sophisticated L3 algorithms on a larger fraction of the collisions. In a three-tier system, experiments do this only on events that have been filtered through the second stage. With a two-tier trigger, CMS can do the more sophisticated filtering earlier in the game, so the experiment can look for more exotic events that might not have been recorded in a traditional trigger system. The price that CMS pays for this flexibility is a higher-capacity network switch and a larger “filter farm” of around 5000 CPUs.
Events à la carte
Running the trigger for a large experiment is a complex process because there are typically many conflicting needs coming from different detector and physics groups within the collaboration. As far as possible, everyone’s needs have to be covered – but this is no easy task. The CMS experiment is sophisticated and can do a great deal of different physics, but it all comes down to whether or not the events have been selected by the trigger. There is a constant struggle to make sure that the collaboration can maximize the physics potential of the experiment as a whole, while at the same time catering to the assorted tastes of the various groups.
The trigger “menu” can be thought of as a selection of triggers to suit all tastes. Some groups order just the entrée of established Standard Model physics, while others look to tuck in to the main course of Higgs particles, supersymmetry (SUSY), heavy-ion physics, CP-violation and so on. Those with a sweet tooth come with their minds set predominantly on the dessert of exotica – all of the new physics that is not related to the main course.
At a practical level, the menu consists of various paths that fall into one of three categories. First, inclusive trigger paths look at overall properties, such as total energy or missing transverse energy, which are particularly important for detector studies. Second, single-object paths identify objects, for example an electron or a jet. These are valuable for physics studies, particularly for Standard Model processes. Third, multi-object paths contain a combination of single objects. The trigger menu pulls the various paths together and the filter farm executes the HLT algorithms as much as possible in parallel – the HLT has less than 100 ms to make a decision for an L1 rate of about 50 kHz. Figure 1 shows rates for several HLT paths at an instantaneous luminosity of 8 × 10³¹ cm⁻² s⁻¹.
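These timing figures roughly fix the size of the filter farm mentioned earlier. A minimal sketch, assuming one event is processed per core at a time (an assumption not stated in the text):

```python
# Sketch: number of cores the HLT needs if each event takes up to ~100 ms to
# process and events arrive at the L1 accept rate of ~50 kHz (figures from the
# text; one event per core at a time is an assumption).
L1_RATE_HZ = 50e3
TIME_PER_EVENT_S = 0.100

cores_needed = L1_RATE_HZ * TIME_PER_EVENT_S
print(f"Cores busy at any moment: ~{cores_needed:.0f}")   # ~5000, matching the quoted farm size
```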
The menu has to cover a range of physics: it must be as inclusive as possible, not only to accommodate more physics needs but also to make room for things that have not yet been thought of while the experiment is running. For example, some theorists might come up with a new idea only after CMS has finished collecting data, but the experiment may have already captured what is needed if it has run with an “inclusive” trigger.
As the luminosity of the LHC increases, so does the collision rate, which means that tighter selection criteria need to be applied and the menu must constantly evolve to accommodate these needs. At CMS, physics groups – as well as detector groups – regularly submit proposals for triggers that they would like to have implemented. Requests are merged whenever possible into common triggers to simplify the menu. This makes it easier to maintain the menu as well as to spot mistakes and fix them. In addition, the bandwidth can be maximized if two groups share a trigger. For example, instead of two groups receiving a rate of 2 Hz each, they could devote 4 Hz to a common, more economical, trigger.
Once the proposals have been made, the Trigger Menu Development and HLT Code Integration Groups come up with a menu prototype, rather like a “tasting menu”. This takes all of the proposals into account and tries to implement them in a coherent trigger menu that respects all of the constraints while satisfying as many appetites as possible. While attempts are made to accommodate as many triggers as possible, if there are conflicting needs from different groups then the Physics and Trigger co-ordination has the final word.
The Trigger Performance Group then takes the prototype and runs it on “signal” events – using either real or simulated data – from all of the physics groups to test whether the menu picks out what it is supposed to select. If problems are found – and they often are – then the teams go back and fix them to produce the next prototype. At some point, the prototype will appear to be good enough to be deployed by the Trigger Menu Integration Group. This team then puts the menu online to test it, making sure that everything functions as expected. One important aspect of this validation is to verify that the full menu can run at the HLT within the budgeted time (figure 2).
Ever-changing ingredients
The CMS experiment has evolved since the early running period, when it was in commissioning mode, so that by the end of 2010 the collaboration could maximize the physics output. The trigger system has adjusted in parallel to reflect this changing reality. During the 2010 proton run, the Trigger Studies Group produced more than a dozen menus of L1 and HLT “dishes”, which successfully filtered CMS physics data over five orders of magnitude in luminosity, in the range 1 × 10²⁷ to 2 × 10³² cm⁻² s⁻¹.
Most of the triggers for the LHC start-up in March 2010 covered what was needed to understand the detector, such as calibration, alignment, noise studies and commissioning in general. Since then, these triggers have been gradually reduced to a minimum. The menu is now dominated by physics triggers, including a whole suite of new SUSY triggers that were deployed last September.
As mentioned above, the complexity of the trigger menu increases as a function of luminosity. Because the early interactions were at low luminosities, it was possible to be inclusive – to record as many events as possible. As the luminosity has increased, however, certain triggers have had to be sacrificed. Triggers for Standard Model physics have been the first to be reduced because the priority is to discover new physics. However, a fraction of the trigger bandwidth always goes to Standard Model physics, which is used as a reference.
Sometimes, triggers are removed because they are no longer needed or they have been replaced by more advanced versions. At other times, there is an overlap period to understand what the new trigger does compared with the old one.
The incredible performance of the LHC – which reached the luminosity target of 10³² cm⁻² s⁻¹ for the 2010 proton–proton collision run several weeks earlier than expected – has kept the trigger-system team on its toes. Over the next few years, the evolution of luminosity will continue to require the trigger “chefs” to produce creative menus to cope with the ever-changing range of ingredients on offer.
CERN has announced that the LHC will run through to the end of 2012, with a short technical stop at the end of 2011. The beam energy for 2011 will be 3.5 TeV. This decision, taken by CERN management following the annual planning workshop held in Chamonix last week and a report delivered by the laboratory’s machine advisory committee, gives the LHC experiments a good chance of finding new physics in the next two years, before the machine goes into a long shutdown to prepare for higher-energy running starting in 2014.
“If the LHC continues to improve in 2011 as it did in 2010, we’ve got a very exciting year ahead of us,” says Steve Myers, CERN’s director for accelerators and technology. “The signs are that we should be able to increase the data-collection rate by at least a factor of three over the course of this year.”
The LHC was previously scheduled to run to the end of 2011 before going into a long technical stop to prepare it for running at the full design energy of 7 TeV per beam. However, the machine’s excellent performance in its first full year of operation forced a rethink. Improvements in 2011 should increase the rate at which the experiments can collect data by at least a factor of three compared with 2010. That would lead to enough data being collected in 2011 to bring tantalizing hints of any new physics that might be within reach of the LHC operating at its current energy. However, to turn those hints into a discovery would require more data than can be delivered in one year, hence the decision to postpone the long shutdown. Running through 2012 will give the LHC experiments the data needed to explore this energy range fully before moving up to higher energy.
“With the LHC running so well in 2010, and further improvements in performance expected, there’s a real chance that exciting new physics may be within our sights by the end of the year,” says Sergio Bertolucci, CERN’s director for research and computing. “For example, if nature is kind to us and the lightest supersymmetric particle, or the Higgs boson, is within reach of the LHC’s current energy, the data we expect to collect by the end of 2012 will put it within our grasp.”
The schedule foresees beams back in the LHC in late February and running through to mid-December. There will then be a short technical stop before resuming in early 2012.
• See also comments by CERN’s director-general, Rolf Heuer
Fermilab’s Tevatron will collide its final particles this September, at the end of the machine’s historic 26-year run. The Tevatron, the world’s largest proton–antiproton collider, is best known for its role in the discovery in 1995 of the top quark, the heaviest elementary particle known to exist.
The Tevatron has outperformed expectations, achieving record-breaking levels of luminosity. Fermilab had planned to shut down the collider in the autumn of 2011, but in August 2010 the laboratory’s international Physics Advisory Committee endorsed an alternative idea: extend the run of the Tevatron through 2014. The US government’s advisory panel on high-energy physics agreed with the committee’s recommendation, provided that US funding agencies could increase annual support for the field by about $35 million for four years. This would have maintained the laboratory’s ability to continue with its variety of other high-energy physics experiments, some of which are in their critical first stages.
However, this was not to be. In January, Bill Brinkman, director of the US Office of Science, announced that the agency had not located the additional funds required to extend the Tevatron’s operations. The decision disappointed Tevatron physicists, but it also made funding more secure for the other experiments that will carry Fermilab into the future.
Following the closure of the Tevatron, Fermilab will continue on course with a world-leading scientific programme, addressing the central questions of 21st-century particle physics on three frontiers: the energy frontier, the intensity frontier and the cosmic frontier. At the energy frontier, the laboratory will continue its close collaboration with CERN and the international LHC community and will also pursue R&D for future accelerators. At the intensity frontier, Fermilab already operates the highest-intensity neutrino beam in the world and researchers there are about to begin taking data with the laboratory’s largest neutrino detector yet. At the cosmic frontier, Fermilab scientists will continue the search for dark matter and dark energy.
The CMS experiment has released results of a new study that sheds more light on the phenomenon known as di-jet energy imbalance, which was recently observed in lead–lead collisions at the LHC. Indeed, during the first days of the heavy-ion run in November last year both the ATLAS and CMS experiments observed collisions with the production of jets – streams of particles collimated in a small cone around a given direction. In particular, they saw collisions containing two high-energy jets (di-jets), produced more or less back to back, in which there is an unusually large imbalance in the jet energy. In other words, the energy of the jet on one side was much less than that of the jet on the other side.
This energy imbalance could result from a modification of the energy and showering properties of the partons (quarks and gluons) created in the hard-scattering collision, as they traverse the quark–gluon plasma that may have formed in the head-on collisions. The results on this large di-jet asymmetry, shown in figure 1 for the CMS experiment, were presented publicly by the LHC experiments at a special seminar on 2 December. The measurements were based on the detection of high-energy deposits in the calorimeters by particles emerging from the collision, which were used to characterize the jets. The momentum imbalances observed in the data are significantly larger than those predicted by the simulations, especially for the most central collisions, i.e. the most violent head-on collisions.
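As a point of reference, studies of this kind commonly quantify the effect with a di-jet asymmetry built from the transverse momenta of the two jets. The sketch below uses the usual (pT1 − pT2)/(pT1 + pT2) form, though the exact conventions and selection cuts of the CMS analysis may differ in detail.

```python
# Sketch of a commonly used di-jet asymmetry variable (the precise
# conventions of the CMS analysis may differ in detail).
def dijet_asymmetry(pt_leading_gev, pt_subleading_gev):
    """A_J = (pT1 - pT2) / (pT1 + pT2): 0 for perfectly balanced jets,
    approaching 1 as the subleading jet loses more of its energy."""
    return (pt_leading_gev - pt_subleading_gev) / (pt_leading_gev + pt_subleading_gev)

# A balanced 120/120 GeV pair gives 0; a 120/50 GeV pair gives about 0.41.
print(dijet_asymmetry(120.0, 120.0), dijet_asymmetry(120.0, 50.0))
```

The larger this asymmetry, the more energy the subleading jet has lost relative to its partner.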
Since then the CMS collaboration has continued its efforts to try to understand this phenomenon in more detail, in particular by also studying the tracks of charged particles produced in head-on lead–lead collisions. Such an analysis can address basic questions. For example, how does the energy redistribution in the lowest-energy jet work? Does the energy flow sideways, out of the jet cone? Or does it end up as low-energy particles that remain within the jet cone, but become difficult for the calorimeters to detect efficiently?
The new data analysis suggests that in fact both effects are present.
Based on the analysis of the charged particles correlated with the jets, CMS observes that the lowest-energy jet indeed becomes wider and that the particles in the jet become softer in energy. An important question is then how the energy of the most energetic jet becomes exactly balanced in these collisions. Figure 2 shows the result of the energy-balance study, with the total missing transverse momentum projected onto the axis of the leading jet, as a function of the di-jet energy asymmetry, for the most central collisions. The contribution to the missing momentum is decomposed into contributions from particles in different intervals of particle momentum, to gain insight into which kinds of particles contribute.
The top row shows Monte Carlo predictions, which do not include any physics effects that lead to asymmetries in jet energy; the bottom row shows the CMS data. The left-hand plots sum only the momenta of particles within the jet cones, while the right-hand plots sum the momenta of particles outside the cones. These distributions show clearly that part of the energy of the most energetic jet is balanced by particles on the opposite side, outside the jet cone. They also reveal that in the data – but not in the simulation – a large fraction of the balancing momentum is carried by particles with rather low momenta.
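To make the projection used in figure 2 concrete, the sketch below sums each charged particle’s transverse momentum projected onto the leading-jet axis, with a sign convention (assumed here; the exact CMS definition may differ in detail) such that particles aligned with the leading jet contribute negatively and particles recoiling against it contribute positively, so a perfectly balanced event sums to zero.

```python
import math

# Sketch of a missing-transverse-momentum projection onto the leading-jet
# axis (assumed sign convention; the exact CMS definition may differ).
def missing_pt_parallel(particles, phi_leading_jet):
    """particles: iterable of (pt_in_gev, phi_in_radians) pairs.
    Returns the summed projection onto the leading-jet axis."""
    return sum(-pt * math.cos(phi - phi_leading_jet) for pt, phi in particles)

# Toy event: one 100 GeV track along the leading jet (phi = 0) and two
# softer tracks on the away side (phi = pi) cancel exactly in this sum.
event = [(100.0, 0.0), (60.0, math.pi), (40.0, math.pi)]
print(missing_pt_parallel(event, 0.0))  # ~0 for this balanced toy event
```

Decomposing such a sum into momentum intervals, as in figure 2, then shows which kinds of particles (hard or soft, inside or outside the jet cones) carry the balancing momentum.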
These results provide qualitative constraints on the nature of the jet modification in lead–lead collisions, as well as quantitative input to models of the transport properties of the medium created in these collisions. However, they are just the proverbial tip of the iceberg on the way to a detailed understanding of this phenomenon, and many more studies can be expected soon.