About a year ago, the CMS collaboration released its first publication on studies of the top quark – the measurement of the tt production cross-section at 7 TeV. The measurement was based on a data set of only 3 pb–1 of integrated luminosity and the top quarks were identified through the leptonic decay channels of the W boson. Now, a plethora of results on the top quark based on luminosities of 1–2 fb–1 have been released for the summer conferences, in particular for the TOP2011 workshop, held at the end of September at Sant Feliu de Guíxols, Spain.
Top quarks decay almost exclusively into a W boson and a b-flavoured quark jet, leading to different event final states that can be used for selecting tops. Figure 1 gives an overview of the CMS results, which exploit essentially all of the decay modes. The most precise single measurement is the analysis where one W boson decays into leptons while the second W decays into hadrons and b-quark identification is used, giving a cross-section of 164.4 ± 14.3 pb, i.e. a precision of 8.5%. Precise measurements of the cross-section can also be converted into measurements of the top quark’s mass, within a given theoretical scheme. Currently, the CMS cross-section measurements allow for a precision on the top mass of about 7–8 GeV in such extractions.
Further new analyses include a measurement of the mass difference between the t and t̄ quarks, which is an interesting test of CPT invariance. For this study, events are used in which one of the W bosons decays into a muon, allowing the event to be classified as a t or t̄ decay depending on the charge of the muon. The mass difference between the t and t̄ is found to be 1.2 ± 1.3 GeV, i.e. the result is compatible with equal masses within the uncertainty. This is the most precise result on this quantity to date.
Another interesting measurement concerns the charge asymmetry in top production. The experiments at Fermilab’s Tevatron reported asymmetries that are larger than expected. At the LHC, tt production is also slightly asymmetric in rapidity as a result of the different roles that the valence and sea quarks play in the production. CMS has studied this asymmetry by measuring the different widths of the rapidity distribution for t and t̄. The result gives an asymmetry of 1.6% with an uncertainty of about 3.5%; an asymmetry of about 1.3% is expected from theory. The agreement with the Standard Model is good within the measured uncertainties.
Finally, a challenging new measurement of the electroweak production of single top quarks has been undertaken, namely tW associated production. While single top production in the t-channel was reported by the LHC experiments earlier this year, this measurement analyses a different final state; moreover, this channel is not accessible at the Tevatron. CMS finds an excess over the expected background with a significance of 2.7σ, compatible with the expectation for tW production.
With several tens of thousands of top-quark pairs recorded so far, the detailed study of the properties of the heaviest quark is merely starting. Results based on the full 2011 data sample should be ready in time for the 2012 winter conferences.
The Daya Bay Reactor Neutrino Experiment has begun its quest to answer some of the puzzling questions that still remain about neutrinos. The experiment’s first completed set of twin detectors is now recording interactions of antineutrinos as they travel away from the powerful reactors of the China Guangdong Nuclear Power Group, in southern China.
The start-up of the Daya Bay experiment marks the first step in the international effort of the Daya Bay collaboration to measure a crucial quantity related to the third type of oscillation, in which the electron-neutrinos morph into the other two flavours of neutrino. This transformation occurs through the least known neutrino-mixing angle, θ13, and could reveal clues leading to an understanding of why matter predominates over antimatter in the universe.
The experiment is well positioned for a precise measurement of the poorly known value of θ13 because it is close to some of the world’s most powerful nuclear reactors – the Daya Bay and Ling Ao nuclear power reactors, located 55 km from Hong Kong – and it will take data from a total of eight large, virtually identical detectors in three experimental halls deep under the adjacent mountains. Experimental Hall 1, a third of a kilometre from the twin Daya Bay reactors, is the first to start operating. Hall 2, about a half kilometre from the Ling Ao reactors, will come online in the autumn. Hall 3, the furthest hall, about 2 km from the reactors, will be ready to take data in the summer of 2012.
The Daya Bay experiment is a “disappearance” experiment. The detectors in the two closest halls will measure the flux of electron-antineutrinos from the reactors; the detectors at the far hall will look for a depletion in the expected antineutrino flux. The cylindrical antineutrino detectors are filled with liquid scintillator, while sensitive photomultiplier tubes line the detector walls, ready to amplify and record the telltale flashes of light produced by the rare antineutrino interactions. As a result of the large flux of antineutrinos from the reactors, the twin detectors in each hall will capture more than 1000 interactions a day, while at their greater distance the four detectors in the far hall will measure only a few hundred interactions a day. To measure θ13, the experiment records the precise difference in flux and energy distribution between the near and far detectors.
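The depletion being sought follows, to a good approximation, the standard two-flavour survival probability P ≈ 1 − sin²2θ13 sin²(1.267 Δm² L/E). The following Python sketch shows how the deficit scales with baseline; the parameter values are purely illustrative assumptions for this example, not Daya Bay results (the value of θ13 was, of course, exactly what the experiment set out to measure):

```python
import math

def survival_probability(sin2_2theta13, dm2_eV2, L_km, E_MeV):
    """Two-flavour electron-antineutrino survival probability:
    P = 1 - sin^2(2*theta13) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2 and L/E in m/MeV (equivalently km/GeV)."""
    arg = 1.267 * dm2_eV2 * (L_km * 1000.0) / E_MeV  # convert baseline to metres
    return 1.0 - sin2_2theta13 * math.sin(arg) ** 2

# Hypothetical illustrative values: sin^2(2*theta13) = 0.09,
# |dm2| = 2.4e-3 eV^2, a typical reactor-antineutrino energy of 4 MeV.
near = survival_probability(0.09, 2.4e-3, 0.36, 4.0)  # ~360 m near hall
far = survival_probability(0.09, 2.4e-3, 2.0, 4.0)    # ~2 km far hall
```

The near halls sit well before the first oscillation maximum, so they see almost the full flux, while the far hall sees a depletion of several per cent; it is the relative near/far comparison that cancels the large uncertainty on the absolute reactor flux.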
The experimental halls are deep under the mountain to shield the detectors from cosmic rays and the detectors themselves are submerged in pools of water to shield them from radioactive decays in the surrounding rock. Energetic cosmic rays that make it through the shielding are tracked by photomultiplier tubes in the walls of the water pool and muon trackers in the roof over the pool so that events of this kind can be rejected.
After two to three years of collecting data with all eight detectors, the Daya Bay Reactor Neutrino Experiment should be well positioned to meet its goal of measuring the electron-neutrino oscillation amplitude – and hence sin2 2θ13 – with a sensitivity of 1%.
The start-up of the experiment comes after eight years of effort – four years of planning and four years of construction – by hundreds of physicists and engineers from around the globe. China and the US lead the Daya Bay collaboration, which also includes participants from Russia, the Czech Republic, Hong Kong and Taiwan. The Chinese effort is led by project manager Yifang Wang of the Institute of High Energy Physics (IHEP), Beijing, and the US effort is led by project manager Bill Edwards of Lawrence Berkeley National Laboratory and chief scientist Steve Kettell of Brookhaven National Laboratory.
One of the many surprises to have emerged from studies of heavy-ion collisions at Brookhaven’s Relativistic Heavy Ion Collider (RHIC) and now at CERN’s LHC concerns the extreme fluidity of the dense matter of the nuclear fireball produced. This has traditionally been studied experimentally by measuring the second harmonic of the azimuthal distribution of emitted particles with respect to the plane of nuclear impact. Known as v2, this observable is remarkably large, saturating expectations from hydrodynamic models, suggesting that the so-called quark-gluon plasma is one of the most perfect fluids in nature. Many assumed that the matter in the elliptical nuclear overlap region becomes smooth upon thermalization, rendering the Fourier coefficients other than v2 negligible in comparison.
However, recently it was proposed that collective flow also responds to pressure gradients from the “chunkiness” of matter distributed within the initial fireball in random event-by-event fluctuations. These nonuniformities lead to anisotropy patterns beyond smooth ellipses: triangular, quadrangular, and pentangular flow are now being studied by measurements of v3, v4, v5 and beyond at RHIC and the LHC.
The new measurements evoke comparisons with the vestigial cosmic microwave background (CMB) radiation, whose nonuniformities offer hints about the conditions at the universe’s earliest moments. Just as the CMB anisotropy is expressed by multipole moments, the azimuthal anisotropy of correlated hadron pairs from heavy-ion collisions can be represented by a spectrum of Fourier coefficients VnΔ. In pair-correlation measurements, a “trigger” particle is paired with associated particles in the event to form a distribution in relative azimuth Δφ. Over many events, a correlation function is produced, whose peaks and valleys describe the relative probability of pair coincidence.
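In this representation each VnΔ is simply the average of cos(nΔφ) over correlated pairs. A minimal sketch of such an extraction, using a hypothetical binned correlation function with a pure triangular (n = 3) modulation as input:

```python
import math

def fourier_coefficients(dphi, weights, n_max=5):
    """V_nDelta = <cos(n * dphi)>: the weighted average of cos(n * dphi)
    over particle pairs, where dphi is the pair's relative azimuth."""
    total = sum(weights)
    return [sum(w * math.cos(n * p) for p, w in zip(dphi, weights)) / total
            for n in range(1, n_max + 1)]

# Hypothetical correlation function C(dphi) = 1 + 2 * 0.1 * cos(3 * dphi):
n_bins = 3600
phis = [2.0 * math.pi * i / n_bins for i in range(n_bins)]
weights = [1.0 + 2.0 * 0.1 * math.cos(3.0 * p) for p in phis]
vn = fourier_coefficients(phis, weights)  # vn[2], i.e. n = 3, recovers ~0.1
```

Because the cosine harmonics are orthogonal over the full azimuth, an input modulation in a single harmonic appears only in the corresponding VnΔ, which is what lets the measured spectrum of coefficients be read off directly from the correlation function.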
The left side of the figure shows a correlation function measured by ALICE for the 2% most central (i.e. head-on) lead–lead collisions at the LHC, where the particle pairs are separated in pseudorapidity to suppress “near-side” jet correlations near Δφ = 0. Even when this gap is imposed, a curious longitudinally-extended near-side “ridge” feature remains. Considerable theoretical effort has been devoted to explaining the source of this feature since its discovery at RHIC. In the correlation function in the figure, the first five VnΔ harmonics are superimposed. The right side of the figure shows the spectrum of the Fourier amplitudes. Evidently in the most head-on collisions, the dominant harmonic is not the second elliptical term, but the triangular one, V3Δ; moreover, the Fourier coefficients here are significant up to n = 5. These results corroborate the idea that initial density fluctuations are non-negligible.
The intriguing double-peak structure evident on the “away side” (i.e. opposite to the trigger particle, at Δφ = π) was not observed in inclusive (i.e. not background-subtracted) correlation functions prior to the LHC. However, in the hope of isolating jet-like correlations, the v2 component was often subtracted as a non-jet background, leaving a residual double peak when the initial away-side peak was broad. This led to interpretation of the structure as a coherent shock-wave response of the nuclear matter to energetic recoil partons, akin to a Mach cone in acoustics. However, the concepts of higher-order anisotropic flow are now gaining favour over theories that depend on conceptually independent Mach-cone and ridge explanations.
These measurements at the LHC are significant because they suggest a single consistent physical picture, vindicating relativistic viscous hydrodynamics as the most plausible explanation for the observed anisotropy. The same collective response to initial spatial anisotropy that causes elliptic flow also economically explains the puzzling “ridge” and “Mach cone” features, once event-by-event initial-state density fluctuations are considered. Moreover, measuring the higher Fourier harmonics offers tantalizing possibilities to improve understanding of the nuclear initial state and the transport properties of the nuclear matter. For example, the high-harmonic features at small angular scales are suppressed by the smoothing effects of shear viscosity. This constrains models incorporating a realistic initial state and hydrodynamic evolution, improving understanding of the deconfined phase of nuclear matter.
The global nature of modern particle physics was clearly manifest at the biennial Lepton Photon conference that took place this year in India. The Tata Institute of Fundamental Research (TIFR), Mumbai, was host to the 25th International Symposium on Lepton Photon Interactions at High Energies – Lepton Photon 2011 – on 22–27 August.
The conference opened with a welcome from Mustansir Barma, director of TIFR, and speeches by Srikumar Banerjee, chair of the Atomic Energy Commission, Shri Prithviraj Chavan, the Chief Minister of the State of Maharashtra, and Patricia McBride, chair of the C11 Committee of the International Union of Pure and Applied Physics, under whose auspices the Lepton Photon conferences take place.
New results from the LHC and the latest news on searches for the Higgs boson were among the highlights, as at the EPS-HEP 2011 meeting held in Grenoble in July. Thanks to the outstanding performance of the LHC, the experiments and the Worldwide LHC Computing Grid, some of the results were from analyses based on roughly twice the data sample presented in Grenoble. With the additional data analysed, the ATLAS and CMS experiments have now excluded the existence of a Higgs over most of the mass region 145–466 GeV with 95% confidence level. Moreover, the significance of hints of a Higgs signal has slightly decreased and it remains the case that the slight excess observed could be the effect of statistical fluctuations.
The talks covered a range of other physics that is being investigated by the LHC experiments. These included precision measurements in top-quark physics, for example, and in B physics, where results from the LHCb experiment on B mesons are becoming the most precise yet. There were also reports on the status and prospects of the LHC machine and, by CERN’s director-general, Rolf Heuer, on the future of colliders after the LHC.
Reports on some of the results presented at the conference follow on the next two pages.
The LHCb collaboration’s presentation at Lepton Photon 2011 included one of the most eagerly awaited measurements in flavour physics: the CP-violating phase in Bs–Bs mixing. This is the counterpart of sin 2β in the B0 system, which was measured by the B-factory experiments BaBar and Belle using the channel B0 → J/Ψ KS. They provided the first measurement of CP violation in B0 mixing, which is both large and now well measured, with sin 2β = 0.69 ± 0.02. In contrast, the Standard Model prediction for φs, the corresponding phase for the Bs meson, is extremely small and precise: φs = 0.036 ± 0.002 rad (Charles et al. 2005). It is therefore an interesting place to search for new physics beyond the Standard Model, which may enhance the value. Time-dependent analyses of Bs mesons were not accessible at the B factories, so this remained a key measurement for hadronic machines, first at the Tevatron and now at the LHC.
The golden mode for this study is Bs → J/Ψ φ, where the J/Ψ decays to μ+μ– and the φ decays to K+K–. The measurement is very challenging: the final state is not a pure CP eigenstate, so an angular analysis has to be made to separate the CP-even and CP-odd components. In addition, the fast Bs–Bs oscillation necessitates precise vertex reconstruction, and tagging of the production state (whether it was a Bs or Bs) is also important. The result for φs is correlated to another quantity in the fit, ΔΓs, the difference in width of the two Bs mass eigenstates. (It is the mass difference of these two states that determines the oscillation frequency.) ΔΓs can be positive or negative, but in the Standard Model is predicted to be 0.087 ± 0.021 ps–1 (Lenz and Nierste 2011). The uncertainties on φs and ΔΓs are correlated, and furthermore the fit turns out to be insensitive to the replacement φs → π – φs when ΔΓs → – ΔΓs, so there are two ambiguous solutions. As a result, the measurements are usually plotted as contours in the φs vs ΔΓs plane.
The CDF and DØ experiments at the Tevatron made the first measurements. Their early results agreed with each other and appeared, when combined, to indicate a large value for φs, about 3σ away from the Standard Model expectation. More recent updates have moved their preferred values somewhat closer to the Standard Model, but a hint of a possible discrepancy remained, as shown by the red and green contours in figure 1 (Burdin and DØ 2011, CDF 2010).
LHCb has now accumulated the largest sample of Bs → J/Ψ φ decays in the world, over 8000 signal candidates with very high purity (figure 2). The resulting constraint is shown as the blue contour in figure 1 (LHCb 2011a). It is much more precise than the preceding measurements, with one of the two solutions being in good agreement with the Standard Model expectation – the hint of a discrepancy is not confirmed. This result also gives the first significant direct measurement of ΔΓs, 0.123 ± 0.029 ± 0.008 ps–1, where the first uncertainty is statistical and the second systematic.
Another related analysis presented by LHCb uses a different decay mode, Bs → J/Ψ f0, which should measure the same phase. Although the statistics are lower, the final state is CP-odd in this case, so the analysis is simpler (LHCb 2011b). It gives a result consistent with Bs → J/Ψ φ, and the preliminary combined result from LHCb is φs = 0.03 ± 0.16 ± 0.07 rad (LHCb 2011c). This result is statistically limited, but as data continue to pour in from the LHC there are good prospects for substantial further improvement. So, although LHCb has now ruled out a gross effect from new physics, the experiment should be able to measure the true value even if it is as small as predicted in the Standard Model – and test any subtle effects from new physics.
Measurements of top-quark properties were among the many new results shown by the ATLAS collaboration at Lepton Photon 2011. The very large mass of this quark relative to the others leads many physicists to believe that it plays a special role in physics beyond the Standard Model.
At the luminosity recently achieved at the LHC, a top quark is produced on average approximately every second. Because of the large number of top quarks produced and the excellent detector performance, the ATLAS experiment is able to measure precisely the quark’s properties, thereby providing stringent tests of the Standard Model as well as probing for the subtle effects of new physics. So far all measurements are consistent with the Standard Model, but further data will bring increased precision and with it greater sensitivity to new effects.
ATLAS has measured the production cross-section of top pairs in the single lepton decay channel to be 179±12 pb. This precision of 7% is better than the uncertainty on the theoretical prediction, providing an excellent testing ground for perturbative QCD. A combination of the measurements in the different channels will further increase the precision.
Electroweak production of single top quarks is sensitive to the element Vtb of the quark-mixing matrix and also to a potential flavour-changing-neutral current component in top-quark couplings. ATLAS has measured top production in the t-channel – first measured only a couple of years ago at the Tevatron – with a significance exceeding 7σ. ATLAS has also placed limits on the production of single top in the s-channel and the Wt final state, laying the groundwork for the eventual measurement of these processes.
As a result of the excellent calibration of the detector, ATLAS has also measured the mass of the top quark precisely, obtaining 175.9 GeV with a total uncertainty of just 2.8 GeV. This precise measurement, together with the W mass and electroweak radiative corrections, implies that the Higgs boson is lurking at low mass – if it is indeed a Standard Model Higgs.
ATLAS has further probed for new physics with the most precise measurements to date of the fraction of longitudinally polarized W bosons in the decay of top quarks and of the degree of spin correlation. The results of these measurements are consistent with Standard Model expectations, as are those of production asymmetries similar to those recently reported to be anomalous at the Tevatron.
The CMS search for the Higgs boson is being carried out using a range of decay products: two photons; two τ leptons; two b quarks; two W bosons; and two Z bosons. Analysing all of these channels ensures that the search is sensitive to observing the Higgs irrespective of its mass. The CMS collaboration presented the first results from a combination of Higgs searches in these channels at the EPS-HEP 2011 conference in Grenoble at the end of July. For Lepton Photon 2011, held in Mumbai a month later, they were able to update several key analyses, using additional data collected during the summer.
The CMS results presented in Mumbai were based on data-sets corresponding to 1.1–1.7 fb–1 (integrated luminosity), depending on the channel. The figure shows the result of all of the search channels combined. It indicates that CMS observes no convincing excess of events in the explored mass range of 110–600 GeV.
The analysis excludes, with a confidence level (CL) of 95%, the existence of a Standard Model Higgs boson in three Higgs mass ranges: 145–216 GeV, 226–288 GeV and 310–400 GeV. For the quantity of data collected so far, the CMS collaboration would expect to exclude the Higgs boson in the range 130–440 GeV in the absence of a signal. The two gaps between the three excluded mass ranges observed in the data are consistent with statistical fluctuations. At 90% CL, the results exclude the Standard Model Higgs boson in the mass range from 144–440 GeV, without interruptions. All exclusion regions were obtained using the CLs modified-frequentist construction.
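As a rough illustration of the CLs criterion, here is a toy single-bin Poisson counting version in Python. The counts are hypothetical, chosen only for the example; the real CMS combination uses full likelihoods over many channels with systematic uncertainties:

```python
import math

def poisson_cdf(k, mu):
    """P(N <= k) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu ** i / math.factorial(i) for i in range(k + 1))

def cls(n_obs, background, signal):
    """Toy modified-frequentist CLs for one counting channel:
    CLs = CL_{s+b} / CL_b, where CL_x = P(N <= n_obs | mean x)."""
    return poisson_cdf(n_obs, signal + background) / poisson_cdf(n_obs, background)

# A signal hypothesis is excluded at 95% CL when CLs < 0.05.
# Hypothetical counts: 5 events observed, 5 expected background, 20 expected signal.
excluded = cls(5, 5.0, 20.0) < 0.05
```

Dividing by CL_b is what distinguishes CLs from a plain CL_{s+b} test: it prevents a downward fluctuation of the background from excluding a signal to which the experiment has little actual sensitivity.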
A modest excess of events is, however, apparent for Higgs boson masses below 145 GeV. With the data due to be collected in the coming months, CMS will be able to distinguish between the possible interpretations: either the production of a Higgs boson or a statistical fluctuation of the backgrounds. During the ongoing proton–proton data-taking period at the LHC, which is expected to terminate at the end of 2012, CMS will record substantially more data, leading to a significantly increased sensitivity to the Standard Model Higgs boson – if it exists – over the full range of possible masses.
One of last year’s surprise results came from the MINOS (Main Injector Neutrino Oscillation Search) experiment in the US, which suggested that neutrinos and their antimatter counterparts, antineutrinos, might have different masses – an idea that goes against most commonly accepted theories of how the subatomic world works. At Lepton Photon 2011, however, the MINOS collaboration presented updated results. These constitute the world’s best comparison of muon-neutrino and muon-antineutrino oscillation parameters, and they bring the measured mass values closer together.
Since the result announced in June 2010, the experiment has nearly doubled its data set, from 100 antineutrino events to 197 events. While the new results are only about 1σ away from the previous results, the combination rules out concerns that the previous result could have arisen from detector or calculation errors. Instead, the combined results point to a statistical fluctuation that has lessened as more data have been collected.
The biennial meetings organized by the High Energy and Particle Physics division of the European Physical Society (EPS) aim to provide a global view of the state of the art in research in the field. This year’s meeting, which took place in Grenoble on 21–27 July, attracted more than 800 participants – and certainly delivered. The International Europhysics Conference on High-Energy Physics, EPS-HEP 2011, was the first major international conference since CERN’s LHC started to supply significant amounts of data in a new energy region. After only one year of data-taking, the LHC took centre stage, in both proton–proton and heavy-ion physics, thanks to the spectacular performance of the accelerator and the impressively fast data analysis by the experiments. In parallel, there were results based on near-final data samples from experiments at Fermilab’s Tevatron and at the B factories, while hot news came from neutrino experiments and searches for dark matter. All in all, the conference was a real success, raising the state of knowledge a substantial notch across all searches for new physics.
All of the Tevatron and LHC experiments showed improved or new limits in searches, reaching mass limits close to or slightly above 1 TeV in simple supersymmetric models. Anyone who had hoped that the LHC would reveal supersymmetry early on may have been slightly disappointed, but as many theorists reminded the participants: new physics is guaranteed, all that is needed is patience. CERN’s director-general, Rolf Heuer, reinforced this point, stating that either finding the Higgs or excluding it will be a great discovery.
At the Tevatron, the CDF and DØ experiments – with data corresponding to an integrated luminosity of about 8 fb–1 – continue to extend their exclusion limits for a Higgs particle with a mass of around 160 GeV. After two decades of truly fruitful physics, the Tevatron was scheduled to shut down definitively at the end of September, leaving a total of more than 10 fb–1 of data ready to be analysed. Meanwhile, the CMS and ATLAS experiments at the LHC have already reached a sensitivity exceeding that of the Tevatron experiments. Both experiments have slightly better preliminary limits in the region covered by CDF and DØ, while also excluding a high-mass Higgs – in a region out of the reach of the Tevatron. Most amazingly, if the LHC continues to perform as well as it has so far, the experiments are guaranteed to find or exclude the Standard Model Higgs by the end of 2012.
There was, however, already some intriguing news from the Higgs sector. Both the CMS and ATLAS collaborations revealed small excesses of events in their preliminary WW and ZZ analyses based on about 1 fb–1 of data. The most significant upward fluctuation over background was obtained in the Higgs → WW → ℓνℓν channel. Both groups also see smaller excesses in the same mass region from the decay via ZZ to four leptons. All of these fluctuations fall in regions that are not yet excluded by the Tevatron or the LHC, and they made for some interesting discussions during the conference. If the Standard Model Higgs boson does indeed exist, this is exactly how it will manifest itself: a faint appearance above the distant horizon, which should grow with time as more data are analysed. Nonetheless, everyone agreed that it is far too early to tell what is happening unambiguously before more data are added and further rigorous checks made, because both experiments could be affected similarly by mis-modelling of the background or be victims of statistical fluctuations.
The recent results from CDF, CMS and LHCb on the flavour-changing neutral-current decay Bs → μ+μ– provided another great conversation topic. The CDF collaboration reported a first measurement of this branching ratio at (1.8 +1.1 –0.9) × 10–8, which is higher than the Standard Model prediction of (3.2 ± 0.2) × 10–9. On the other hand, CMS and LHCb both have preliminary limits, which when combined are lower than the CDF result. More data from all experiments will soon help to clear up this ambiguity.
One analysis that has drawn considerable attention in the preceding months is CDF’s observation of a possible signal of new physics in the final state W + 2 jets. The updated analysis, now based on 7.3 fb–1 of data, shows a clear excess of events with a dijet mass around 145 GeV. At more than 4σ, this is a significant excess, shared equally between the two channels W → e or μ. All eyes have thus turned to DØ, which is best positioned to look into this effect. The DØ team performed an important verification by artificially adding in a signal such as the one that CDF observed to confirm that DØ is indeed sensitive to such a signal. All efforts so far have turned up no excess above Standard Model backgrounds in a 4.3 fb–1 data sample, even when emulating the CDF selection criteria. A joint task force between the two experiments is now hard at work trying to resolve this discrepancy. Meanwhile, at the LHC, a similar signal would have to emerge amid larger backgrounds, so the sensitivity in this search may be diminished, depending on the nature of the new effect. Both CMS and ATLAS are actively combing through their data for signs of this effect, finding no evidence for the CDF signal thus far.
The DØ and CDF collaborations still see a deviation from the Standard Model in the forward–backward asymmetry of top–antitop production, opening the door to several possible explanations in terms of physics beyond the Standard Model. This effect is most pronounced at tt masses above 450 GeV. More studies are underway.
From QCD to neutrinos
The session on QCD showed great progress in the field, with updates on parton distribution functions from the experiments at DESY’s HERA collider, which stopped running in June 2007, as well as several results from the LHC. These measurements are now challenging the precision of theoretical predictions and will contribute to further refinements of the Monte Carlo simulations.
On the flavour front, there were as many as 25 new results based on near-final data sets from the BaBar and Belle experiments at the B factories at SLAC and KEK, respectively, as well as new measurements from the Tevatron and LHC experiments, in particular LHCb. Together, they provide significant tests of the Standard Model, which still stands strong and unchallenged despite every attempt to uncover a flaw. Searches for charged-lepton flavour violation, electric dipole moments and the updated dilepton charge-asymmetries at the Tevatron continue to probe possible new effects. The BaBar collaboration showed impressive limits on rare decays with branching ratios reaching as low as 10–8, while LHCb now has the most precise single measurement and first 5σ observation of CP violation at a hadron machine, using B → Kπ decays.
Any deviation from the Standard Model predictions would reveal the existence of new physics, but such deviations now require even more stringent tests, hence more data are needed. To collect even larger data samples, a new generation of B factories is on its way. The Belle II experiment at SuperKEKB is well into the construction phase and will replace KEKB, which shut down definitively in June 2010. The SuperKEKB design luminosity of 8 × 1035 cm–2 s–1 is 40 times higher than the record for KEKB. Commissioning of the upgraded machine and experiment is planned for mid-2014. Meanwhile, the SuperB project has been approved in Italy. Parts of the BaBar detector will soon cross the ocean to be relocated in Rome at a new facility on the Tor Vergata University campus, to start a new life there in 2015.
As ever, conference participants eagerly anticipated news from the various direct searches for dark matter. While both the DAMA/LIBRA and CoGeNT experiments have tantalizing signs of light dark-matter candidates, these are now in contradiction with recent limits from the XENON100 collaboration, whose null results exclude both signals. Hence, the situation remains ambiguous, with much still to be elucidated. Efforts are ongoing to explain why the peaks of the modulation signals seen by DAMA/LIBRA and CoGeNT do not coincide. Further scrutiny is also being directed towards the unmodulated part of the DAMA/LIBRA signal, together with investigations of other possible backgrounds for the CoGeNT experiment – all with the aim of producing more convincing and irrefutable results. The whole community is hard at work collecting and analysing more data, while construction of larger detectors that will use a tonne or more of active material is underway. With new results and updates expected in a year or two, this topic will be among the highlights of the next EPS-HEP meeting in 2013.
The 295 km Tokai-to-Kamioka (T2K) long-baseline neutrino experiment in Japan, and the Fermilab-based Main Injector Neutrino Oscillation Search (MINOS) with a “far” detector 730 km away in the Soudan Mine, now have the first indications for a sizeable mixing angle between the electron and muon neutrinos. These measurements were discussed in view of their implications for the measurement of CP violation in neutrino mixing and for the design of future long-baseline neutrino projects. One of the proposed experiments, the Laguna Pyhäsalmi project, would have an underground 100 ktonne “far” detector in Finland, 2300 km from CERN. Its goal would be to measure neutrino oscillations, taking advantage of large enhancements from matter effects.
While the LHC is already looking back in time all of the way to the early universe, it will also take particle physics into the future, with plans established up to 2030 and beyond. Three long shutdowns are currently foreseen. The first will take place at the end of 2012 to allow improvements to the magnet interconnects and the installation of new pressure-release valves, to prevent further incidents like the one that brought the LHC to a halt in 2008. This will allow the machine to reach its design centre-of-mass energy of 14 TeV by the autumn of 2014. The following long shutdown, in 2018, will be for several detector upgrades, while the final planned shutdown, in 2022, will be used to prepare for the high-luminosity upgrade to the LHC (HL-LHC). Scenarios are also being considered for an upgrade to higher energies in the even more distant future.
A next-generation linear collider is still under study, with final design reports on the Compact Linear Collider (CLIC) and International Linear Collider (ILC) concepts planned for the end of this year. The physics outcome from the LHC experiments by the end of 2012 will provide crucial input to decide what kind of linear collider will best suit the future needs of particle physics.
On the theoretical front, participants heard of recent accomplishments using scattering amplitudes in quantum field theory. As well as updating the community on their progress with calculation techniques, the presentations by theorists served as a reminder that the questions that are currently unanswered by the Standard Model – from the existence of dark matter to the unexplained problem of particle “generations” – imply that new physics ought to be there, waiting to be discovered.
The closing session was devoted to an outlook for experimental and theoretical particle physics as a whole. Pier Oddone, Rolf Heuer and Atsuto Suzuki, the directors of Fermilab, CERN and KEK, respectively, presented their visions for the future of particle physics from the perspectives of the US, Europe and Asia.
Oddone laid out the many plans for Fermilab in the post-Tevatron era, spanning from searches for dark energy and dark matter, to neutrino physics with the upgraded experiments MicroBooNE and MINOS+, as well as accelerator R&D for the ILC and a muon collider. He reminded participants that contributions to the LHC from the US – for both the accelerator and the detectors – represented the largest single investment in high-energy physics that the US has made since the 1970s.
Suzuki warmly thanked the community for its extended support after the devastating earthquake and tsunami earlier this year, and it was moving to hear about the efforts that Japanese colleagues are making to recover from the effects. The KEK laboratory suffered damage at both the Tsukuba and Tokai campuses. Nevertheless, the construction plans for SuperKEKB are still on schedule. Repairs are underway at Tokai, with the first power tests scheduled for November. In addition, Suzuki presented new projects underway in Asia, such as the Korea Neutrino Research Center, which was scheduled to start operation in August, and the Daya Bay experiment in China, which will study oscillations in reactor neutrinos. “Near” detectors at Daya Bay started data-taking in August, while “far” detectors in the nearby mountains should be operating by next summer.
Heuer stressed both the importance of international collaboration in establishing any future accelerator projects and how the results of existing facilities should be used to determine the needs for future accelerators. Results from the LHC will therefore be a key ingredient in determining which design is best for a new linear collider. In an effort to increase the collaborative spirit, CERN is already opening its doors to new member states.
All of the great results presented at EPS-HEP 2011 could not have been fully appreciated without the impeccable organization provided by the local committee headed by Johann Collot of the University of Grenoble. Sometimes, however, success can bring trouble: interest in the conference following the news of intriguing reports on the Higgs boson nearly brought the conference website to a halt.
The local committee also spared no effort in treating the participants to local specialities, providing more than just food for thought. These included an impressive wine-and-cheese reception on the opening night, followed by “danced” lectures organized for the public. While local speakers explained the field of particle physics, dancers from the University of Grenoble’s modern-dance company accompanied them on stage, seemingly surprising some of the speakers themselves. The evening ended with a beautiful “dance of the particles” to everybody’s delight. The social programme also included a soccer tournament, a reception hosted by the City of Grenoble at the modern art museum, a Bel Canto concert and a gastronomic dinner – enough to suit everyone’s taste.
This conference really marked the beginning of the LHC era. As David Gross, who received the Nobel Prize in Physics in 2004, concluded: “We have one inverse femtobarn of data in, and 2999 more to go. So be patient. The fun is just starting!” Now, even the unexpected can be expected.
A key highlight was the presentations of prizes by the High Energy and Particle Physics Division of the EPS. This year the prestigious High Energy and Particle Physics Prize for an outstanding contribution to High Energy Physics went to Sheldon Lee Glashow of Boston University, John Iliopoulos of the Ecole Normale Supérieure, Paris, and Luciano Maiani from the University of Rome La Sapienza. They were rewarded “for their crucial contribution to the theory of flavour, presently embedded in the Standard Theory of strong and electroweak interactions which is still of utmost importance today”. In 1970, they put forward a compelling argument for the existence of a fourth quark – charm – to solve a number of problems in particle physics. Their proposal, now known as the GIM mechanism, was spectacularly confirmed when particles containing the charm quark were unexpectedly discovered in 1974.
The Giuseppe and Vanna Cocconi Prize for an outstanding contribution (experimental or theoretical) to particle astrophysics and cosmology went to Paolo de Bernardis of the University of Rome La Sapienza and Paul Richards of the University of California, Berkeley, “for their outstanding contributions to the study of cosmic microwave background anisotropies with the balloon-borne experiments BOOMERanG and MAXIMA”.
Davide Gaiotto of the Institute for Advanced Study, Princeton, received the Gribov Medal for outstanding work by an early-career physicist in theoretical particle physics and/or field theory. He was rewarded “for his work on uncovering new facets of the dynamics of four-dimensional supersymmetric gauge theories and in particular for discovering a large class of four-dimensional superconformal theories and for finding with others important intricate relations between two-dimensional theories of gravity and four-dimensional gauge theories”.
The Young Physicist Prize for outstanding work by one or more young physicists was awarded to Paolo Creminelli of the International Centre for Theoretical Physics, Trieste, and Andrea Rizzi of the Swiss Federal Institute of Technology, Zurich. Creminelli received his share of the award “for his contributions to the development of a solid field-theoretical approach to early-universe cosmology and for his studies of non-gaussianities in the cosmic-microwave background”, while Rizzi was rewarded “for his contributions to the reconstruction software and physics programme of the CMS experiment at the LHC”.
Finally, the Outreach Prize for outstanding outreach achievement connected with high-energy physics and/or particle astrophysics went to Christine Kourkoumelis of the University of Athens and Sofoklis Sotiriou, director of the Ellinogermaniki Agogi Center for Science Teachers Training, for “building educational resources to bring the research process in particle physics and its results to teachers and students, both nationally and across Europe”.
Astrophysical neutrinos are produced in the interactions of cosmic rays with an ambient medium of gas (protons) and photons of different energies. Once produced, these cosmic neutrinos can propagate cosmological distances and reach the Earth practically without interactions. They therefore carry unique information about the sources of cosmic rays, their acceleration and the composition of the most energetic phenomena in the universe.
The neutrino sky “seen” by experiments originates in the atmosphere, which shines day and night in neutrinos. One experiment alone, IceCube at the South Pole, has already detected more than 10^5 atmospheric neutrino events. However, the hope is to see “stars in broad daylight” through this atmospheric flux – that is, to observe neutrinos of cosmic origin. These include neutrinos from various point-like sources and some extended objects, as well as diffuse neutrino fluxes. The main tools for distinguishing cosmic from atmospheric neutrinos are the selection of the energy band – cosmic neutrinos should dominate at high energies – together with directional and timing features, as well as correlations with known objects that emit, for instance, in γ-rays.
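Why a high-energy cut helps can be seen with a toy calculation: the atmospheric flux falls steeply with energy, while generic astrophysical fluxes are expected to be harder, so the cosmic component always wins above some crossover energy. The spectral indices and normalisations below are illustrative assumptions, not values from the workshop.

```python
import numpy as np

# Toy spectra (assumed, for illustration only):
# atmospheric neutrinos fall roughly as E^-3.7 at high energy,
# while a generic astrophysical benchmark flux goes as E^-2.
def atmospheric(E, norm=1.0):
    return norm * E ** -3.7          # steep atmospheric power law (assumed index)

def astrophysical(E, norm=1e-4):
    return norm * E ** -2.0          # harder cosmic benchmark (assumed normalisation)

E = np.logspace(0, 7, 200)           # energies from 1 GeV to 10 PeV
# First energy at which the harder cosmic flux overtakes the atmospheric one:
crossover = E[np.argmax(astrophysical(E) > atmospheric(E))]
print(f"cosmic flux dominates above ~{crossover:.0f} GeV")
```

Whatever the absolute normalisations, only the crossover energy moves; the qualitative picture – select high energies to suppress the atmospheric “daylight” – is unchanged.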
The NUSKY 2011 international workshop on cosmic rays and cosmic neutrinos took place at the Abdus Salam International Centre for Theoretical Physics, Trieste, on 20–24 June. It attracted around 90 participants and featured some 40 talks by the main players in the field, covering all of the important aspects of the production, propagation and detection of high-energy cosmic neutrinos. Numerous discussions ensued, focusing on the implications of the latest experimental results, as well as on the status and perspectives of the field.
The workshop took place during a critical period for a field in which the working experiments have reached the sensitivity necessary to probe realistic theoretical predictions. The results from IceCube – the first cubic-kilometre-scale detector ever built – thus played a prominent role in the discussions. Their preliminary results correspond to 40 and 59 detector strings (IC40 and IC59); data from IC79 are being analysed and the complete detector, IC86, is now running. So far, the various searches have found no cosmic-neutrino events.
Diffuse neutrino fluxes include the cosmogenic neutrinos generated in cosmic-ray interactions with the photons of the cosmic microwave background, as well as the integrated fluxes from remote, faint and unresolved objects. The IceCube collaboration finds no deviation of the reconstructed neutrino-energy spectrum from that for atmospheric neutrinos. This gives an upper bound on the neutrino flux in the 0.1–10 PeV energy range that is already below the Waxman–Bahcall limit, derived from the known cosmic-ray flux above 10^19 eV.
As far as individual sources are concerned, the main suspects are objects that are relatively close, where the acceleration of cosmic rays probably occurs. These include supernova remnants (SNRs) in the Galaxy, as well as active galactic nuclei and gamma-ray bursts (GRBs). The IceCube all-sky maps show no statistically significant signal for steady or transient galactic or extragalactic sources. Nor has any neutrino been detected by IceCube (IC40 + IC59) in the so-called stacking analysis of more than 100 GRBs. The limit on the neutrino flux that emerges from this analysis is a factor of 5 below predictions, thus disfavouring the fireball model of GRBs.
The Pierre Auger Observatory in Argentina and ANITA, the balloon-borne radio-interferometer that flew over Antarctica, are sensitive to the upper end of the cosmic-neutrino spectrum, the most relevant range for cosmogenic neutrinos (around 10^18 eV, i.e. 1 EeV). No neutrino-candidate events have been found in Auger data for periods equivalent to two years of the full array. ANITA-II has one candidate event, with one background event expected; cosmogenic models predict from 0.3 to 25 events.
The predictions for cosmic-neutrino fluxes depend on the properties of cosmic rays and on the physical conditions of the sources. In this connection, there are some new and interesting results. IceCube has found cosmic-ray anisotropies in the 20–400 TeV energy range, with a significant angular structure in the southern hemisphere. Anisotropy at higher energies, above 100 TeV, could reveal some connection to nearby SNRs. In addition, the KASCADE-Grande extensive air-shower array has observed structures in the “knee” region of the all-particle cosmic-ray spectrum.
Cosmic-ray origins
Turning to the question of the composition of ultra-high-energy cosmic rays (UHECRs), there had been somewhat contradictory results from the HiRes experiment and the Pierre Auger Observatory. In this connection, the possibilities for UHECR production by sources in the Galaxy (such as past GRBs), as well as a dominant contribution from Centaurus A, were discussed at the workshop. The basic principles of cosmic-ray acceleration in SNRs are well understood on the basis of the non-linear theory of diffusive acceleration at collisionless Newtonian shocks.
The neutrino−γ-ray connection was at the centre of many discussions as a result of the wealth of new information from γ-ray astronomy. The production of neutrinos should be accompanied by the production of γ-rays from π^0 decay (the hadronic mechanism). However, ultra-high-energy γ-rays from extragalactic sources and γ-rays of cosmogenic origin can interact with the medium (photons, electrons), giving rise to electromagnetic cascades. Hence, the whole γ spectrum shifts to lower energies in the giga- to tera-electron-volt range, where the Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope gives important bounds. The Fermi-LAT results on the extragalactic γ flux can be translated into bounds on cosmic rays and cosmogenic neutrinos – the so-called “cascade” bound, based on the approximate equality of the energy released in neutrino production and in the electromagnetic cascade process. These data challenge a GRB origin of cosmic rays: if GRBs were the source of cosmic rays, around 10 events would be predicted, whereas nothing appears within the diffuse bound.
One open question concerns the mechanism for the production of photons at the source. Tera-electron-volt γ-rays from transparent galactic sources can provide a direct indication of cosmic-ray acceleration sites. However, γ-rays can also be produced by accelerated electrons, via the inverse-Compton effect and by synchrotron radiation (both leptonic mechanisms). Fermi-LAT has measured γ spectra from a large number of SNRs and it turns out that both leptonic and hadronic γ-ray models work for SNRs on a source-by-source basis. In the case of GRBs, only bright GRBs are favoured by the Fermi-LAT data as detectable sources. Nevertheless, bright nearby GRBs seem to be rare.
Features of neutrino propagation become a key element when neutrino flavour is taken into account in the detection process. The flavour composition and its dependence on neutrino energy are determined by conditions at the neutrino sources, in particular by the strength of the magnetic field, the density distribution, etc. Flavour is also affected by neutrino oscillations and therefore depends on the neutrino parameters. The expected composition ratio has the form a : 1 : 1 with a around 1, its precise value depending on the 1–3 mixing, the deviation of the 2–3 mixing from maximal, the neutrino-mass hierarchy and CP violation. Various effects typical of physics beyond the Standard Model, such as neutrino decay, non-standard neutrino interactions or the presence of new neutrino species, can also modify the ratio. Finally, the ratio is extremely sensitive to possible violations of fundamental symmetries, such as Lorentz symmetry or the equivalence principle, which lead to modifications in the dispersion relations.
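The a : 1 : 1 expectation can be sketched numerically. Over cosmological baselines the oscillatory terms average out, leaving P(α→β) = Σᵢ |U_αi|² |U_βi|², with U the PMNS mixing matrix. The mixing angles below are rough 2011-era values, with the 1–3 angle and CP phase set to zero for simplicity; they are illustrative assumptions, not parameters quoted in the article.

```python
import numpy as np

def pmns(theta12, theta23, theta13):
    """Real PMNS matrix in the standard parameterisation (CP phase = 0 for this sketch)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    return np.array([
        [c12 * c13,                     s12 * c13,                    s13      ],
        [-s12 * c23 - c12 * s23 * s13,  c12 * c23 - s12 * s23 * s13,  s23 * c13],
        [s12 * s23 - c12 * c23 * s13,  -c12 * s23 - s12 * c23 * s13,  c23 * c13],
    ])

# Averaged-oscillation conversion: P[a, b] = sum_i |U_ai|^2 |U_bi|^2
A = pmns(np.radians(34.0), np.radians(45.0), 0.0) ** 2
P = A @ A.T

# Pion-decay source: nu_e : nu_mu : nu_tau = 1 : 2 : 0
source = np.array([1.0, 2.0, 0.0]) / 3.0
at_earth = P @ source

print(at_earth / at_earth[1])  # a : 1 : 1 -- exactly [1. 1. 1.] for these angles
```

With maximal 2–3 mixing and zero 1–3 mixing the ratio comes out exactly 1 : 1 : 1; moving a away from 1 requires precisely the ingredients listed above (1–3 mixing, non-maximal 2–3 mixing, CP violation), which is why the ratio is such a sensitive probe.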
Another highlight of the workshop was the report on the first year of data-taking by DeepCore, the inner detector of IceCube, which has an energy threshold as low as 10 GeV. The rate of events, which include cascades induced by electron-neutrinos as well as by neutral-current interactions of muon-neutrinos, was shown. DeepCore will detect around 800 neutrino-induced cascades per year. The physics motivations for the Phased IceCube Next Generation Upgrade (PINGU-I and PINGU-II) were also presented.
Neutrino observatories have now reached sufficient sensitivity to constrain, with minimal assumptions, the multimessenger signals of neutrinos, γ-rays and UHECRs. That there is no evidence as yet for astrophysical neutrinos poses a problem for future projects, because it means that IceCube will only scratch the surface of neutrino astronomy. The prime targets now are the transient sources.
There are several projects already under consideration or in progress. KM3NeT, a detector for neutrino astronomy under the Mediterranean Sea, which will have an instrumented volume of more than 5 km^3, is in its preparatory phase. It will search for neutrino point sources in the energy range 100 GeV – 1 PeV. The Cherenkov Telescope Array is a new instrument for very-high-energy (10 GeV – 10^5 GeV) γ-ray astronomy. JEM-EUSO will detect Cherenkov light coming from the atmosphere using a telescope on the International Space Station that will have an instantaneous aperture of up to 10^6 km^2. ANITA-III, approved to fly in 2013–2014, will search for ultra-high-energy neutrinos with 3–5 times higher sensitivity than ANITA-II. The Askaryan Radio Array is a ground-based antenna array at the South Pole covering an area of 100 km^2. The expected yield is 3–5 neutrinos per year above 10^17 eV, below the bulk of the cosmogenic neutrino predictions.
The NUSKY 2011 workshop was held just as high-energy neutrino astronomy enters a new cubic-kilometre era. Current bounds already have important implications and any further improvement of data will have an impact on the picture of the neutrino sky, with important consequences. The hope is that, with progressively more data from IceCube, a discovery is on the horizon. As Francis Halzen, of the University of Wisconsin-Madison and IceCube, concluded, “Hess 1912… and still no conclusion [on the origins of cosmic rays]; now the instrumentation is in place… SNRs and GRBs are in close range!”