The first prototype telescope for the planned Cherenkov Telescope Array (CTA) has been inaugurated in Berlin. The prototype was designed and built at DESY in an international collaboration involving more than 1000 scientists and engineers from 27 countries.
The CTA is an international €200-million project to observe cosmic γ rays (CERN Courier July/August 2012 p28). Designed to achieve a sensitivity 10 times better than that of existing installations, it will combine three types of telescope, each optimized for its own energy range between a few tens of giga-electron-volts and 300 TeV. The prototype is a full-scale version of the medium-sized telescope (MST) with a tessellated 12-m mirror. Forty MSTs will form the central part of the CTA.
Fully functional mechanically, the prototype will be used to test many aspects, including the drive and safety systems, the understanding of vibrations and deformations, the mirror alignment, telescope pointing and the array control. The results will allow the design of the MSTs to be optimized before their production begins.
Construction of the CTA is expected to commence in 2015 at two sites in the southern and northern hemispheres. The larger southern observatory will include 70–100 telescopes that will be spread over 10 km² and the smaller observatory in the north will have 20–30 telescopes that will be distributed over 1 km². The sites will be chosen at the end of this year.
What is the actual distribution of dark matter? Does it follow the expectations for cold dark matter (CDM)? “Yes, it does,” according to a team of astronomers that has measured the average distribution of dark matter derived from gravitational lensing in 50 clusters of galaxies. The density is observed to decrease outwards from the centre of these cosmic giants in excellent agreement with the predictions of CDM models.
While dark matter still eludes the scrutiny of particle physicists it is becoming more substantial to astrophysicists. Its gravitational role is essential to hold galaxies together in clusters, to account for the high rotational velocity of stars in the outer regions of galaxies and more generally to explain the formation of large-scale structures in the universe. The relatively recent ability to observe the distribution of dark matter in space makes it even more concrete (CERN Courier January/February 2007 p11). The method consists of measuring the slight distortion of background galaxies that is induced by the gravitational deformation of space–time along the line of sight. This so-called weak-lensing effect allows astronomers to locate and measure the amount of dark matter, even though it is transparent on the image.
The fluctuations of the cosmic microwave background as observed by the Planck satellite imply that CDM constitutes about 27% of the energy content of the universe (CERN Courier May 2013 p12). However, it is important to check whether the distribution of mass in galaxy clusters follows the expectations of a medium of dark-matter particles that only interact with each other via gravity and do not emit or absorb photons. An international team led by Nobuhiro Okabe of Academia Sinica, Taiwan, and Graham Smith of Birmingham University used a sample of 50 massive clusters of galaxies to test this. The clusters were selected from an X-ray catalogue in a narrow redshift range of around z = 0.2 and observed by the Prime Focus Camera of the Japanese 8.2-m Subaru Telescope located next to the two Keck Telescopes on Mauna Kea, Hawai’i. For each cluster, the team derived the weak-lensing signal imprinted in the shape of background galaxies and stacked the results to obtain the average density profile.
The obtained map of the mean dark-matter distribution is remarkably symmetrical with a pronounced central peak. The radial profile is found to be consistent with the Navarro-Frenk-White model at the sub-10% precision level. Measurement of the concentration parameter of the mass in the cluster gives c₂₀₀ = 4.22 +0.40/−0.36, which is slightly above but still broadly in line with theoretical predictions for CDM. This result confirms that dark matter behaves in clusters of galaxies as expected by the CDM scenario. It solves a tension between previous observations of individual clusters that found a high central concentration and other studies that included the dynamics of the galaxies in the clusters and suggested a shallower distribution.
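For orientation, the Navarro-Frenk-White profile referred to here is conventionally written as
\rho(r) = \frac{\rho_s}{(r/r_s)\,(1 + r/r_s)^2}, \qquad c_{200} = \frac{r_{200}}{r_s},
where r_s is the scale radius at which the logarithmic slope of the density changes and r_200 is the radius within which the mean density is 200 times the critical density of the universe; a larger concentration c_200 therefore means that a greater fraction of the cluster mass sits near its centre.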
In the future, the team would like to improve this analysis by measuring the dark-matter density on even smaller scales right in the centre of these galaxy clusters. This will be possible with the installation of a new camera called the Hyper-Suprime-Cam on the Subaru Telescope, which will also allow astronomers to study smaller galaxy clusters, which are predicted to have a slightly higher dark-matter concentration. The team notes, however, that significant advances on the precision achieved in the current study on massive low-redshift galaxies are only expected with future facilities such as the Large Synoptic Survey Telescope or ESA’s Euclid Satellite.
The first collisions occurred in Fermilab’s Tevatron in 1985. Over the following years, both the energy and the luminosity increased and by the time operations ceased in 2011 the collision luminosity had reached 7 × 10³² cm⁻² s⁻¹, more than 350 times the original design value. The Tevatron’s unique feature was its collisions of protons with antiprotons. While it requires substantial technical efforts to make antimatter – the Tevatron’s antiproton source was the world’s most powerful producer of antimatter but still incapable by a long way of the destruction imagined in Angels & Demons – the study of proton–antiproton collisions provides the opportunity to study quark–antiquark interactions against low backgrounds. By the final shutdown, a total luminosity of 12 fb⁻¹ had been delivered to each of the two gigantic Tevatron experiments, CDF and DØ, corresponding to around 5 × 10¹⁴ proton–antiproton interactions at a collision energy of 2 TeV.
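As a rough cross-check of these numbers (our estimate, not the experiments’), the number of interactions follows from the integrated luminosity and the proton–antiproton inelastic cross-section, which is of order 50 mb at this energy:
N \approx \sigma_{\mathrm{inel}} \int L\,\mathrm{d}t \approx (50 \times 10^{-3}\ \mathrm{b}) \times (12 \times 10^{15}\ \mathrm{b}^{-1}) \approx 6 \times 10^{14},
in line with the quoted figure of around 5 × 10¹⁴ interactions per experiment.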
Images of the two experiments (figure 1) appeared on the front pages of many magazines, in artworks and on TV shows. These modern engineering marvels were highly innovative and demonstrated, for example, the power of a silicon detector in a hadron-collider environment, multi-level triggering, uranium–liquid-argon calorimetry and the ability to identify b quarks. From the collisions provided, the teams recorded the 2 × 10¹⁰ most interesting events to tape for detailed examination offline. The analysis effort included searches and studies of new particles, such as the Higgs boson, and precision studies of the parameters of the Standard Model. Many of the exciting results obtained before the end of 2011 have already been summarized in CERN Courier. This article presents an update on some of the results obtained by CDF and DØ over the past two years.
The search for the Higgs boson was among the central physics goals of the programme for Tevatron Run II (2001–2011) and the challenge of understanding the origin of mass in the Standard Model attracted world-leading experimentalists to Fermilab. In 2005, the data sets provided by the Tevatron reached the point where the search for a substantial number of Higgs events above backgrounds could start. From then until 2012, the analysis teams not only provided increasingly stringent direct mass exclusions but also indirectly narrowed the mass range where the Higgs boson could exist, using highly precise measurements of the masses of the top quark and the W boson (see below). By early 2011, results from the Tevatron and CERN’s Large Electron–Positron collider had reduced the allowed mass range to 125±10 GeV, so the joke among experimentalists at the time was: “We know the mass of the Higgs, we just don’t know if it exists.”
The CDF and DØ collaborations developed many new experimental methods in their hunt for the Higgs boson, from the combined searches of hundreds of individual channels for the boson’s production and decay to an extremely precise understanding of the backgrounds and a high-efficiency reconstruction of the Higgs-decay objects. The Tevatron’s high luminosity was the key, because only a few events were expected to remain in the signal region following all of the selections. The unique feature of proton–antiproton collisions was critical for the searches, especially in the decay to a pair of b quarks – the most probable channel for Higgs decay at a mass of 125 GeV. While cross-sections for Higgs production increase with energy and are much higher at the LHC, the increase in the main backgrounds is even faster, so the signal-to-background ratio for this main Higgs-decay channel remains favourable at the Tevatron.
By the early summer of 2012, both CDF and DØ had analysed the full Tevatron data set in all sensitive Higgs-decay modes: bb, WW, ττ, γγ and ZZ. The results included not only larger data sets than before but also substantially improved analysis methods. Multivariate analysis was used to take full advantage of the information available in each event, rather than using the more traditional cuts on kinematical parameters. Such techniques optimize the ratio of signal to background in a multi-dimensional phase space and were critical for reaching sensitivity to the Higgs-boson signal.
At the Tevatron, the primary search sensitive to Higgs masses below around 135 GeV comes from the associated production of the Higgs boson with W or Z bosons, with the Higgs decaying to a pair of b quarks. This topology improves the signal-to-background ratio: the decay to a pair of b quarks has the highest probability, while the extra W or Z boson suppresses backgrounds and provides useful features both for triggering and for offline event selection. Nevertheless, reconstructing jets from b quarks – which sometimes consist of hundreds of particles – with high precision is challenging. This is why the expected shape of the Higgs signal is rather wide, with a mass resolution of around 15 GeV, in comparison with searches in channels where precisely measured single particles, such as a pair of photons or leptons, are used to reconstruct the mass of the Higgs.
The CDF and DØ collaborations then combined their search results that summer. The excess observed around a mass of 125 GeV, which the experiments had seen for the previous two years, became even more pronounced (figure 2). The significance of the excess was close to 3σ. What became even more exciting was that in the search channels where the Higgs decays to a pair of b quarks only, the significance of the excess exceeded 3σ, indicating evidence for the production and decay of a Higgs boson at 3.1σ (Aaltonen et al. 2012). It was an extremely exciting summer. As the Tevatron passed the baton for Higgs searches (and now studies) to the LHC, its experiments had established evidence of the production and decay of a Higgs boson in the most-probable decay channel to a pair of fermions.
The Standard Model is one of the most fundamental and accurate theories of nature, so precision measurements of its parameters figure among the major goals and results of the Tevatron’s physics programme. Those perfected over the past two years include the determination of the masses of the top quark and the W boson, both of which are fundamental parameters of the Standard Model.
Since the discovery of the top quark at the Tevatron in 1995, measurements of its mass have improved by more than an order of magnitude. In addition to the larger data sets, from some 10 events in 1995 to many thousands in 2012, the analysis methods have also been improved dramatically. One of the innovations developed for precision determination of the top mass – the matrix-element method – is now used in many other studies in particle physics.
In the channel that allows the most accurate mass measurement, the top quark’s final decay products are: a lepton (electron or muon); missing energy from the escaping neutrino; a pair of light quark jets from the decay of the W boson; and two b-quark jets. Determination of the energy of the jets is the most challenging task for precision measurement. In addition to using complex methods to determine the jet energy based on energy conservation in di-jet and γ+jet events, the fact that a pair of light jets come from the decay of a W boson with extremely well known mass (see below) is critical in obtaining high precision for the top-quark mass.
Using a large fraction of the Tevatron data, CDF and DØ reached a precision in the measurement of the top-quark mass of less than 1 GeV (figure 3), i.e. a relative accuracy of 0.5% (Tevatron Electroweak Working Group 2013). This is based on the combination of results from both experiments in many different channels. All of the results are in good agreement, demonstrating the validity of the methods that were developed and used to measure the top-quark mass at the Tevatron. Analyses of the full Tevatron data set are in progress and these should improve the accuracy by a further 20–30%. Experiment collaborations at both the LHC (ATLAS and CMS) and the Tevatron have formed a group to combine the results of the top-quark mass measurements from all four experiments. Such a combination will have a precision that is substantially better than individual measurements, because many of the uncertainties are not correlated between the experiments.
The measurement of the mass of the W boson requires even higher precision. By the end of the Tevatron’s operation, the uncertainty on the combined Tevatron measurement of this particle, which has a mass of around 80 GeV, had reached 31 MeV, or 0.04%. A precise value of the mass of the W boson is critical for understanding the Standard Model; in addition to being closely related to the masses of the Higgs boson and the top quark, it defines the parameters of many important processes. The main decay channel used to measure the W mass is the decay to a lepton (muon or electron) and a neutrino (“missing energy”). The precision calibration of the lepton energy is obtained from resonances with well known masses, such as the J/ψ or the Z boson, while the measurement of missing energy is calibrated using different methods for cross-checks. The calibration of the lepton energy is the most difficult part of the measurement; larger data sets provide more events and improve the accuracy of the measurement.
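Because the neutrino’s longitudinal momentum escapes measurement, the W mass is extracted at hadron colliders from transverse quantities. A standard observable – given here for orientation rather than as the experiments’ exact procedure – is the transverse mass built from the lepton and missing transverse momenta and their opening angle Δφ,
m_T = \sqrt{2\,p_T^{\ell}\,p_T^{\nu}\,(1 - \cos\Delta\phi)},
whose distribution has a kinematic edge near the W mass that is fitted against precise templates, typically together with the lepton transverse-momentum spectrum.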
With up to around 50% of the Tevatron data set, the combined analysis of CDF and DØ gives the mass of the W boson to be 80.387 GeV with an accuracy of 16 MeV – twice as good as only a year previously (Tevatron Electroweak Working Group 2013). The accuracy is now driven by systematic uncertainties. In order to reduce them, careful work and analysis of more data are needed; a precision of around 10 MeV should be reachable using the full data set. Such accuracies were once thought to be impossible to achieve in a “dirty” hadron-collider environment.
In the Standard Model, the masses of the Higgs boson, W boson and top quark are closely related and a cross-check of the relationship is one of the model’s most stringent tests. Figure 4 shows recent results for the top-quark mass (from the Tevatron), the W-boson mass (dominated by the Tevatron, with a world-average accuracy of 15 MeV vs 16 MeV Tevatron only) and the mass of the Higgs boson, as measured by the LHC experiments. The good agreement demonstrates the validity of the Standard Model with high precision.
At its inception, researchers had not expected the Tevatron to be the precision b factory that it became. However, with the success of the silicon vertex detectors in identifying the vertexes of the decays of mesons and baryons containing b quarks, the copious production of these b hadrons and the extremely well understood triggers, detectors and advanced analysis techniques, the Tevatron has proved to be extremely productive in this arena. A large number of new mesons and baryons have been discovered there and the properties of particles containing b quarks have been studied with high precision, including the measurement of the oscillation frequency of the Bs mesons.
Studies of particles with b quarks provide an indirect way to look for physics beyond the Standard Model. The rate of the rare decay of the Bs meson to a pair of muons is tiny in the Standard Model but new physics models, including some versions of supersymmetry, predict enhancements. Figure 5 shows how the steady improvements in the Tevatron limits on the decay rate reached around 10⁻⁸ by 2012, as more data and more elaborate analysis methods were developed by CDF and DØ.
In late 2011, the ATLAS collaboration presented results indicating the existence of a new particle, which was interpreted as an excited state of a bb pair, χb(3P). It is always important to confirm observations of a new particle with independent measurements and even more important to see such a particle at another accelerator and detector. Within just a couple of months, the DØ collaboration confirmed the observation by ATLAS (Abazov et al. 2012). This was the first time that a discovery at the LHC was confirmed using data already collected at the Tevatron.
Many important studies performed at the Tevatron measure properties of the strong force, which holds together protons and neutrons in the nucleus and is described by the theory of QCD. These include extremely accurate studies of the production of jets and of W and Z bosons accompanied by jets. The Tevatron articles that provide input for the development of QCD run to tens of pages and contain dozens of plots and tables documenting – with extremely high precision – the details of interactions between strongly interacting particles.
One unusual property of the strong interaction is that, contrary to the electromagnetic and gravitational interactions, where the force increases as objects come closer together, the interaction of quarks becomes stronger as they move apart. The experiments at the Tevatron studied the strength of the strong force versus the distance between quarks – the running of the strong coupling constant – and verified that the strong force steadily decreases down to a distance between particles of around 5 × 10⁻¹⁶ cm (figure 6).
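For orientation, this running is described at leading (one-loop) order by the standard QCD expression
\alpha_s(Q^2) \approx \frac{12\pi}{(33 - 2n_f)\,\ln(Q^2/\Lambda_{\mathrm{QCD}}^2)},
where n_f is the number of active quark flavours and Λ_QCD is around 0.2 GeV. Since a momentum transfer Q probes distances r ≈ ħc/Q, larger Q – and hence shorter distance – corresponds to a smaller coupling, which is the trend the Tevatron data verify.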
During the last month of the Tevatron run in September 2011, the CDF and DØ experiments collected data at energies below 2 TeV, going all of the way down to 0.3 TeV in the centre of mass. Such data are useful for studies of the energy dependence of the strong interaction and for comparisons with results from previous colliders, such as the Spp̄S proton–antiproton collider at CERN. An interesting recent measurement is the energy dependence of the “underlying event” in the hard scattering of the proton and antiproton – that is, everything except the two outgoing jets from a pair of hard-scattered quarks (figure 7).
There are many instances when the course of physics changed when experimental results did not fit the current theoretical predictions. Quantum theory and relativity were both born from such “clouds” on the clear horizon of classical physics. Several puzzles remain in the Tevatron data, which are leading to analysis and re-analysis of the full data set. These include the observed anomalous dimuon asymmetry, where the production of negative muon pairs exceeds positive pairs, in contradiction with expectations from the Standard Model (Abazov et al. 2011). This result has attracted much attention, because it could relate to the observed matter–antimatter asymmetry in the universe.
There is also a puzzling effect in the production of the heaviest known elementary particle, the top quark. When top–antitop pairs are produced, more top quarks follow the direction of the colliding proton than is predicted in the Standard Model (Aaltonen et al. 2013, Abazov et al. 2013). Some of the models of new physics predict such abnormal behaviour.
Both of these “clouds” have a significance of 2–3σ and both are easier to study in the collisions of protons and antiprotons at the Tevatron. Will these measurements point to new physics or will the discrepancies be resolved with the further development of analysis tools or more elaborate theoretical descriptions based on the Standard Model? In any scenario, exciting physics from the Tevatron data is set to continue.
The Tevatron was at the leading edge of the energy frontier in particle-physics research for more than a quarter of a century. More than 1000 students received their doctorates based on data analysis in the Tevatron’s physics programme, which as a result trained generations of particle physicists. So far, in excess of 1000 scientific publications have come out of the programme, helping to shape the understanding of the subnuclear world. Analysis of the Tevatron’s unique data set continues and efforts to preserve the data for future access are in progress. There are sure to be many more exciting results in the coming years.
Breast cancer is the most frequent type of cancer among women and accounts for up to 23% of all cancer cases in female patients. The chance of a full recovery is high if the cancer is detected while it is still sufficiently small and has not had time to spread to other parts of the body. Routine breast-cancer screening is therefore part of health-care policies in many advanced countries. Conventional imaging techniques, such as X-ray, ultrasound or magnetic resonance imaging (MRI), rely on anatomical differences between healthy and cancerous tissue. For most patients, the information provided by these different modalities is sufficient to establish a clear diagnosis. For some patients, however, the examination will be inconclusive – for example, because their breast tissue is too dense to allow for a clear image – so these people will require further exams. Others may be diagnosed with a suspicious lesion that requires a biopsy for confirmation. Yet, once this biopsy is over, it might turn out to have been a false alarm.
Patients in this latter category can benefit from nuclear medicine. Positron-emission tomography (PET), for example, offers an entirely different approach to medical imaging by focusing on differences in the body’s metabolism. PET uses molecules involved in metabolic processes, which are labelled by a positron-emitting radioisotope. The molecule, once injected, is taken up in different proportions by healthy and cancerous cells. The emitted positrons annihilate with electrons in the surrounding atoms and produce a back-to-back pair of γ rays of 511 keV. The γ radiation is detected to reveal the distribution of the isotope in the patient’s body. However, whole-body PET suffers from a low spatial resolution of 5–10 mm for most machines, which is too coarse to allow for a precise breast examination. Several research groups are therefore aiming to produce dedicated systems, known as positron-emission mammographs (PEM), that have a resolution better than 2 mm.
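The 511 keV figure follows directly from the annihilation kinematics: the positron and electron are essentially at rest when they annihilate, so each photon carries away one electron rest energy,
E_\gamma = m_e c^2 \approx 511\ \mathrm{keV},
and momentum conservation forces the two photons to be emitted back to back, which is what allows each decay to be constrained to the line joining the two detector hits.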
One of these groups is the Crystal Clear collaboration (CCC), which is developing a system called ClearPEM. Founded in 1990 as project RD-18 within CERN’s Detector Research and Development Committee’s programme, the CCC aimed at R&D on fast, radiation-hard scintillating crystals for calorimetry at the LHC (Lecoq 1991). In this context, the collaboration contributed to the successful development of the lead tungstate (PbWO₄) crystals now used in the electromagnetic calorimeters in the CMS and ALICE experiments at the LHC (Breskin and Voss 2009).
Building on this experience, the CCC has transferred its knowledge to medical applications – initially through the development of a preclinical scanner for small animals, the ClearPET (Auffray et al. 2004). Indeed, the technical requirements for PET are close to those of applications in high-energy physics. Both require fast scintillators with high light-output and good energy resolution. They need compact and efficient photodetectors that are read by highly integrated, low-noise electronics that can treat the signals from thousands of channels. The CCC also has expertise in co-ordinating an international collaboration to develop leading-edge scientific devices.
Recently, the collaboration has used the experience gained with ClearPET to develop a dedicated PET system for human medicine – the ClearPEM, shown in figure 1 (Lecoq and Varela 2002). The breast was chosen as a target organ because of the benefits related to precise diagnosis of breast cancer. With the ClearPEM, the patient lies in a prone position on a bed designed such that the breast hangs through a hole. A robot moves the bed into position over two parallel detector-plates that rotate around the breast to acquire a full 3D image. In addition, ClearPEM also performs examinations of the armpit – the axilla – by rotating its detector arm by 90 degrees, thereby shifting the plates to be on each side of it.
Each detector plate contains 96 detector matrices, where one matrix consists of an 8 × 4 array of cerium-doped lutetium–yttrium oxyorthosilicate (LYSO:Ce) crystals, each 2 × 2 × 20 mm³ in size. As figure 2 shows, each crystal matrix is coupled to two 8 × 4 Hamamatsu S8550 avalanche-photodiode (APD) arrays, such that every 2 × 2 mm² read-out face is coupled to a dedicated APD. This configuration allows the depth of interaction (DOI) in the crystals to be measured and reduces the parallax error of the lines of response, contributing to better spatial resolution in the reconstructed image. The DOI can be measured with an uncertainty of around 2 mm on the exact position of the γ interaction in the crystal. Each signal channel is coupled to one input of a dedicated 192-channel ASIC, developed by the Portuguese Laboratory for Particle Physics and Instrumentation (LIP). It provides front-end treatment of the signal before handing it over to a 10-bit sampling ADC for digitization (Varela et al. 2007). The image is reconstructed with a dedicated iterative algorithm.
Two ClearPEM prototypes have been built. The first is currently installed at the Instituto de Ciências Nucleares Aplicadas à Saúde in Coimbra, Portugal. The second, installed at Hôpital Nord in Marseilles, France, is used for ClearPEM-Sonic, a project within the European Centre for Research in Medical Imaging (CERIMED) initiative. While ClearPEM provides high-resolution metabolic information, it lacks anatomical details. ClearPEM-Sonic, however, extends the second prototype with an ultrasound elastography device, which images strain in soft tissue (Frisch 2011). The aim is to provide multimodal information that reveals the exact location of potential lesions in the surrounding anatomy. The availability of elastographic information further improves the specificity of the examination by identifying non-cancerous diseases – such as benign inflammatory diseases of the breast – that also exhibit increased uptake of the ¹⁸F-labelled radioactive tracer, fluorodeoxyglucose (FDG), used in PET imaging.
Both prototypes have been tested extensively. The electronic noise level is under 2%, with an interchannel noise dispersion of below 8%. The front-end trigger accepts signals at a rate of 2.5 MHz, while the overall acquisition rate reaches 0.8 MHz. The detector has been properly calibrated and gives an energy resolution of 14.6% FWHM for 511 keV photons, which allows for efficient rejection of photons that have lost energy during a scattering process. The coincidence-time resolution of 4.6 ns FWHM reduces the number of random coincidences. The global detection efficiency in the centre of the plates has been determined to be 1.5% at a plate distance of 100 mm. The image resolution measured with a dedicated Jaszczak phantom is 1.3 mm.
The competent French authority has approved ClearPEM-Sonic for a first clinical trial on 20 patients. The goal of this trial is to study the feasibility and safety of PEM examinations. In parallel, the results of ClearPEM are being compared with other modalities, such as classical B-mode ultrasound, X-ray mammography, whole-body combined PET and computerized tomography (PET/CT) imaging and MRI, which all patients participating in this trial will have undergone. The ClearPEM image is acquired immediately after the whole-body PET/CT, which avoids the need for a second injection of FDG for the patient. The histological assessment of the biopsy is used as the gold standard.
The sample case study shown in figure 3 is a patient who was diagnosed with multifocal breast cancer during the initial examination. The whole-body PET/CT reveals a first lesion in the left breast and a second close to the axilla. Before deciding on the best therapy, it was crucial to find out whether the cancer had spread to the whole breast or was still confined to two individual lesions. An extended examination with MRI shows small lesions around the first one. The whole-body PET/CT image, however, does not show any small lesions. The standard procedure is to obtain biopsy samples of the suspicious tissue. However, the availability of a high-resolution PET can give the same information. Indeed, when the patient was imaged with ClearPEM, the lesions visible with MRI were confirmed to be metabolically hyperactive, i.e. potentially cancerous. The biopsy subsequently conducted confirmed this indication. This clinical case study, together with several others, hints at how ClearPEM could improve the diagnostic process.
This project successfully demonstrates the value of fundamental research in high-energy physics in applications to wider society. The knowledge gained by an international collaboration in the development of particle detectors for the LHC has been put to use in the construction of a new medical device – a dedicated breast PET scanner, ClearPEM. It provides excellent image resolution that allows the detection of small lesions. Its high detection efficiency allows a reduction in the total examination time and in the amount of radioactive tracer that has to be injected. Finally, the first clinical results hint at the medical value of this device.
• The members of the ClearPEM-Sonic collaboration are: CERN; the University of Aix-Marseille; the Vrije Universiteit Brussel; the Portuguese Laboratory for Particle Physics and Instrumentation, Lisbon; the Laboratoire de Mécanique et d’Acoustique, Marseille; the University of Milano-Bicocca; PETsys, Lisbon; SuperSonic Imagine, Aix-en-Provence; Assistance Publique – Hôpitaux de Marseille; and the Institut Paoli-Calmettes, Marseille.
The discovery of a Higgs boson by the ATLAS and CMS collaborations at the LHC has opened new perspectives on accelerator-based particle physics. While much else might well be discovered at the LHC as its energy and luminosity are increased, one item on the agenda of future accelerators is surely a Higgs factory capable of studying this new particle in as much detail as possible. Various options for such a facility are under active consideration and circular electron–positron (e+e–) colliders are now among them.
In a real sense, a Higgs factory already exists in the form of the LHC, which has already produced millions of Higgs bosons and could produce hundreds of millions more with the high-luminosity upgrade planned for the 2020s. However, the experimental conditions at the LHC restrict the range of Higgs decay modes that can be observed directly and measured accurately. For example, decays of the Higgs boson into charm quarks are unlikely to be measurable at the LHC. On the one hand, decays into gluons can be measured only indirectly via the rate of Higgs production by gluon–gluon collisions and it will be difficult to quantify accurately invisible Higgs decays at the LHC. On the other hand, the large statistics at the LHC will enable accurate measurements of distinctive subdominant Higgs decays such as those into photon pairs or ZZ*. The rare decay of the Higgs into muon pairs will also be accessible. The task for a Higgs factory will be to make measurements that complement or are even more precise than those possible with the LHC.
Attractive options
Cleaner experimental conditions are offered by e+e– collisions. Prominent among other contenders for a future Higgs factory are the design studies for a linear e+e– collider: the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). In addition to running at the centre-of-mass energy of 240 GeV that is desirable for Higgs production, these also offer prospects for higher-energy collisions, e.g. at the top–antitop threshold of 350 GeV and at 500 GeV or 1000 GeV in the case of the ILC, or even higher energies at CLIC. These would become particularly attractive options if future, higher-energy LHC running reveals additional new physics within their energy reach. High-energy e+e– collisions would also offer prospects for determining the triple-Higgs coupling, something that could be measured at the LHC only if it is operated at the highest possible luminosity.
There has recently been a resurgence of interest in the capabilities of circular e+e– colliders being used as Higgs factories following a suggestion by Alain Blondel and Frank Zimmermann in December 2011 (Blondel and Zimmermann 2011). It used to be thought that the Large Electron–Positron (LEP) collider would be the largest and highest-energy circular e+e– collider and that linear colliders would be more cost-efficient at higher energies. However, advances in accelerator technology since LEP was designed have challenged this view. In particular, the development of top-up injection at B factories and synchrotron radiation sources, as well as advances in superconducting RF and in beam-focusing techniques at interaction points, raise the possibility of achieving collision rates at each interaction point at a circular Higgs factory that could be more than two orders of magnitude larger than those achieved at LEP. Moreover, it would be possible to operate such a collider with as many as four interaction points simultaneously, as at LEP.
The concept for a circular e+e– collider that has been most studied is TLEP, which would be installed in a tunnel some 80–100 km in circumference. This would be capable of collisions at 350 GeV in the centre of mass, while the specifications call for a luminosity of 10³⁴ cm⁻² s⁻¹ at this energy at each of the four interaction points. With conservative technical assumptions, the corresponding luminosity at a centre-of-mass energy of 240 GeV would exceed 4 × 10³⁴ cm⁻² s⁻¹ at each interaction point, as figure 1 shows (Koratzinos et al. 2013). It is encouraging that previous circular e+e– colliders – such as LEP and the B factories – have established a track record of exceeding their design luminosities and that there are no obvious show-stoppers to achieving these targets at TLEP.
The design luminosity of TLEP would enable millions of Higgs bosons to be produced under clean experimental conditions. The Higgs mass could then be measured with a statistical precision below 10 MeV and the total decay width with an accuracy of better than 1.5%. Many decay modes, such as those into gluon pairs, WW*, ZZ* and invisible decays, could be measured with an accuracy of better than 0.2%, and γγ decays to better than 1.5%. This would challenge the predictions of reclusive supersymmetric models, which predict only small deviations of Higgs properties from those expected in the Standard Model, as figure 2 shows.
One essential limitation on the ambition for such a collider is the overall power consumption. The largest single energy requirement is for the RF acceleration system. Fortunately, because it would operate in continuous rather than pulsed mode, experience with LEP suggests that an overall efficiency above 50% should be attainable. The collision performance quoted here would require an RF power consumption of around 200 MW, to which should be added some 100 MW for cooling, ventilation, other services and the experiments. This is similar to the requirements of other major future accelerators at the energy frontier, such as the ILC and CLIC.
One attractive feature of circular e+e– colliders is that they could offer significantly higher luminosities at lower energies. For example, a total luminosity of 2 × 10³⁶ cm⁻² s⁻¹ should be possible with TLEP running at the Z peak, and 5 × 10³⁵ cm⁻² s⁻¹ at the W+W– threshold, which would offer prospects of data samples of the order of 10¹² Z and 10⁸ W events. The statistical precision and the sensitivity to rare decays provided by such samples extend far beyond those envisaged in previous studies of Z and W physics, corresponding, for example, to δsin²θW ≈ 10⁻⁶ and δmW ≈ 1 MeV. It will require both a major experimental effort to understand how to control systematic errors and a major theoretical effort to optimize the interpretation of the information obtainable from such data sets.
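A back-of-the-envelope estimate (ours, not from the study) shows why samples of this size shift the burden onto systematics: the raw statistical uncertainty on an asymmetry measured with N events scales as
\delta A_{\mathrm{stat}} \sim \frac{1}{\sqrt{N}} \approx \frac{1}{\sqrt{10^{12}}} = 10^{-6},
which is already at the level of the quoted target δsin²θW ≈ 10⁻⁶, so the control of systematic errors, rather than statistics, becomes the limiting factor.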
Although the baseline for TLEP is a tunnel with a circumference of 80–100 km, it is interesting to consider how the performance of a circular e+e– collider would scale with its circumference. Generally speaking, a smaller ring would be expected to have a lower maximum centre-of-mass energy, as well as lower luminosities at the energies within its reach. The lower limit of the range of ring sizes under consideration is represented by the LHC tunnel, with its circumference of 27 km. An e+e– collider in an LHC-sized tunnel could reach 240 GeV in the centre of mass – for Higgs studies with a luminosity above 10³⁴ cm⁻² s⁻¹ at each interaction point – and could produce impressive quantities of Z bosons and WW pairs. It is difficult to imagine installing an e+e– collider in the LHC tunnel before the LHC experimental programme runs its full course but installation in a new tunnel would not be subject to such a restriction and interest in such a project has been expressed in various regions of the world.
One attractive option would be to envisage a future circular e+e– collider as part of a future, very large collider complex. For example, a tunnel with a circumference of 80–100 km could also accommodate a proton–proton collider capable of collisions at 80–100 TeV in the centre of mass, which would also open up the option of very-high-energy electron–proton collisions. This could be an appealing vision for accelerator particle physics at the energy frontier for much of the 21st century. Such a complex would fit naturally into the updated European Strategy for Particle Physics, which has recently been approved.
Ernst Messerschmid first arrived at CERN as a summer student in 1970, in the midst of preparations for the start-up of the Intersecting Storage Rings (ISR). The studies that he did on beam pick-ups in this ground-breaking particle collider formed the topic of his diploma thesis – but he was soon back at the laboratory as a fellow, sharing in the excitement of seeing injection of the first beams on 29 October that same year. This time he worked on simulations of longitudinal beam instabilities, the subject of his PhD thesis. By the time he came back to CERN some 40 years later to give a colloquium in May this year, he was one of the few people to have worked in space and he had even had a hand in training another former CERN fellow, Christer Fuglesang, before Fuglesang also flew into space.
A self-confessed “country boy”, Messerschmid grew up in Reutlingen in south-western Germany, training first as a plumber and then attending the Technisches Gymnasium in Stuttgart. An aptitude for mathematics turned him towards more academic pursuits and after military service he enrolled at the universities of Tübingen and Bonn. Coming to CERN then as a summer student opened up a new world – an international lab set among the French-speaking communities on the Franco-Swiss border near Geneva. He was on the first steps of the journey that would take him much further afield – into space.
Following his fellowship at the ISR, Messerschmid gained his doctorate from the University of Freiburg. After further experience in accelerators at Brookhaven National Laboratory, he started work at DESY in 1977 optimizing the alternating-gradient magnets for the PETRA electron–positron collider. He looked set for a career in accelerator physics but while deciding on his future he spotted an advert in the newspaper Die Zeit: “Astronauts wanted.”
Messerschmid had by chance come across ESA’s first astronaut selection campaign. “There were five boxes to tick,” he recalls. “Scientific training, good health, psychological stability, language skills and experience in an international environment. Being prepared by my time at CERN, I could tick them all.” Out of some 7000 applicants, he was among five selected in Germany, of whom three eventually went into space: Ulf Merbold was first, as an ESA astronaut, with Reinhard Furrer and Messerschmid following. “The questions were easy for a ‘CERNois’ to answer,” he says. “The challenge came afterwards.”
So, Messerschmid left the world of particle accelerators and in 1978 went to work on satellite-based, search-and-rescue communication systems at the German Aerospace Test and Research Institute (DFVLR), the precursor of the German Aerospace Centre (DLR). He was selected for training as a science astronaut in 1983. Two years later, after scientific training at various universities and industrial laboratories, as well as flight training at ESA and NASA, he was assigned as a payload specialist on the first German Spacelab mission, D1, aboard the Challenger space shuttle. With two NASA pilots, three NASA mission specialists and three payload specialists from Europe, this was the largest crew to fly on the space shuttle. Joining Messerschmid from Europe were his fellow German, Reinhard Furrer, and Wubbo Ockels, from the Netherlands. It was the first Spacelab mission in which payload activities were controlled from outside the US. It was also the last flight of the Challenger before the disaster in January 1986.
Working in space
Between 30 October and 6 November 1985, Ernst and his colleagues performed more than 70 experiments. This was the first series to take full advantage of the “weightless” conditions on Spacelab, covering a range of topics in physical, engineering and life-science disciplines. “These were not just ‘look and see’ experiments,” Messerschmid explains. “We studied critical phenomena. With no gravity-driven convection and no sedimentation we could make different alloys – for example, mixing a heavy metal with aluminium. In other experiments we grew uniform, large single crystals of pharmaceuticals for X-ray crystallography studies.”
It was the experiments – more than the launch and distance from Earth – that proved to be the most stressful. “There were 100 or so professors and some 200 students who were dependent on the data we were collecting,” Messerschmid says. “We worked 15–18 hours a day. There was not much time to look out of the window!” But look out of the window he did, and the view of Earth was to leave a lasting impression, not only because of its beauty but also because of the cautionary signs of exploitation. He saw smoke from fires in the rainforests and the bright lights at night over huge urban areas. “Our planet is overcrowded,” he observes. “We can’t continue like this.”
After his spaceflight, Messerschmid moved to the University of Stuttgart, where he became director of the Space Systems Institute, doing research and lecturing on space and transportation systems, re-entry vehicles and experiments in weightlessness. He went on to be dean of the faculty of aerospace at Stuttgart and the university’s vice-president for science and technology before becoming head of the European Astronaut Centre in Cologne in 2000. There, he was involved in training Fuglesang, who has since flown twice aboard a space shuttle to the International Space Station. Since 2005, Messerschmid has been back at Stuttgart as chair of astronautics and space stations.
In the meantime, he renewed contact with CERN when he joined Gerald Smith from Pennsylvania State University and others in 1996 on a proposal for a general-purpose, low-energy antiproton facility at the Antiproton Decelerator, based on a Penning trap. Messerschmid and Smith were interested in using antiprotons and in particular the decay chain of the annihilation products for plasma heating in a concept for an antimatter engine. A letter-of-intent described an experiment to measure the energy deposit of proton–antiproton annihilation products in gaseous hydrogen or xenon and compare it with numerical simulations. Messerschmid’s student, Felix Huber, worked at CERN for several months but in the end nothing came of the proposal.
Back in Stuttgart, Messerschmid continues to teach astronautics and – as in the colloquium at CERN – to spread the word on the value of spaceflight for knowledge and innovation. “We fly a mission,” he says, “and afterwards, as professors, we become ‘missionaries’ – ambassadors for science and innovation.” His “mission statement” for spaceflight has much in common with that of CERN, with three imperatives: to explore – the cultural imperative; to understand – the scientific imperative; and to unify – the political imperative. While the scientific imperative is probably self-evident, the cultural imperative recognizes the human desire to learn and the need to inspire the next generation, and the political imperative touches on the value of global endeavours without national boundaries – all aspects that are close to the hearts of the founders of CERN and their successors.
So what advice would he give a young person, perhaps coming to CERN as a summer student, as he did 40 years ago? The plumber-turned-accelerator physicist who became an astronaut reflects for a few moments. “Thinking more in terms of jobs,” he replies, “consider engineering – physicists can also become engineers.” Then he adds: “Physicists live on the technologies that engineers produce. Engineers solve the differential equations, they make theories a reality.” Who knows, one day the antimatter plasma-heating for propulsion might become a reality.
Since the revolutionary discovery of the J/ψ meson, quarkonia – bound states of heavy quark–antiquark pairs – have played a crucial role in understanding fundamental interactions. Being the hadronic-physics equivalent of positronium, they allow detailed study of some of the basic properties of quantum chromodynamics (QCD), the theory of strong interactions. Yet, despite the apparent simplicity of these states, the mechanism behind their production remains a mystery, after decades of experimental and theoretical effort (Brambilla et al. 2011). In particular, the angular decay distributions of the quarkonium states produced in hadron collisions – which should provide detailed information on their formation and quantum properties – remain challenging and present a seemingly irreconcilable disagreement between the measurements and the QCD predictions.
Given the success of the Standard Model, why has this intriguing situation not captivated more attention in the high-energy-physics community? The reason may be that this problem belongs to the notoriously obscure and computationally cumbersome “non-perturbative” side of the Standard Model. While the failure to reproduce an experimental observable that is perturbatively calculable in the electroweak or strong sector would be interpreted as a sign of new physics, phenomena requiring a non-perturbative treatment – such as those related to the long-distance regime of the strong force – are less likely to trigger an immediate reaction.
It can also be argued that, until recently, doubts existed regarding the reliability of the experimental data, given some contradictions among results and the incompleteness of the analysis strategies (Faccioli et al. 2010). Similar doubts also existed about the usefulness of the data as a test of theory, given their limited extension into the “interesting” region of high transverse-momentum (pT). The recently published, precise and exhaustive polarization measurements of Υ from the CDF and CMS experiments (CDF collaboration 2012 and CMS collaboration 2013a), which extend to pT of around 40 GeV, have significantly changed this picture, building a robust and unambiguous set of results to challenge the theoretical predictions.
Quarkonium production has been the subject of ambitious theoretical efforts aimed at fully and systematically calculating how an intrinsically non-perturbative system (the cc̄ or bb̄ state) is produced in high-energy collisions and – potentially – at providing Standard Model references for fully fledged precision studies. The nonrelativistic QCD (NRQCD) framework consistently fuses perturbative and non-perturbative aspects of the quarkonium production process, exploiting the notion that the heavy quark and antiquark move relatively slowly when bound as a quarkonium state (Bodwin et al. 1995). This approach introduces into the calculations a mathematical expansion in the quark’s velocity squared, v², supplementing the usual expansion in the strong coupling constant αs of the hard-scattering processes.
The non-perturbative ingredients in these calculations are the long-distance matrix elements (LDME) that describe the transitions from point-like quark–antiquark objects, which can also be coloured (“colour-octet” states), to the colourless observable quarkonia. In principle these could be calculated using non-perturbative models but the current approach leaves them as free parameters of a global fit to some kinematic spectra of quarkonium production. This approach successfully reproduces the differential pT cross-sections, which has been interpreted as a plausible indication that the underlying assumptions are correct.
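Schematically – this compact form is standard in the NRQCD literature rather than taken from the article – the production cross-section of a quarkonium H factorizes as
\sigma(H + X) = \sum_n \hat{\sigma}\big(Q\bar{Q}[n] + X\big)\,\langle \mathcal{O}^H[n]\rangle,
where the short-distance coefficients σ̂ for producing a heavy-quark pair in colour and angular-momentum state n are calculated perturbatively in αs, and the LDMEs ⟨O^H[n]⟩, ordered in powers of v², parametrize the non-perturbative transition to the observable quarkonium.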
The next step in the validation of the NRQCD framework is to make other predictions without changing the previously fitted matrix elements and compare them with independent measurements. The framework clearly predicts that S-wave quarkonia (J/ψ, ψ(2S) and the Υ(nS) states) directly produced in parton–parton scattering at pT much higher than their mass are transversely polarized – that is, their angular-momentum vectors are aligned like the spin of a real photon. Specifically, considering their decay into μ+μ–, this means that the decay leptons are preferentially emitted along the meson’s direction of motion. The measurements made by CDF and CMS contradict this picture dramatically: the Υ states always decay almost isotropically, meaning that they are produced with no preferred orientation of their angular momentum vectors.
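In terms of the measured quantities – and neglecting the azimuthal terms for simplicity – the dilepton decay distribution is usually parametrized as
\frac{\mathrm{d}N}{\mathrm{d}\cos\theta} \propto 1 + \lambda_\theta \cos^2\theta,
where θ is the emission angle of the positive lepton with respect to the chosen polarization axis (for example, the meson’s direction of motion in the helicity frame). Fully transverse polarization corresponds to λθ = +1, fully longitudinal to λθ = −1, while the CDF and CMS results for the Υ states cluster around λθ ≈ 0, i.e. nearly isotropic decays.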
One aspect to keep in mind is that sizeable but not yet well measured fractions of the S-wave quarkonia (orbital angular momentum L=0) are produced from feed-down decays of P-wave states (L=1) leading to more complex polarization patterns. In particular, it is conceivable that the transverse polarization of the directly produced Υ(1S) mesons, say, is washed away by a suitable level of longitudinal polarization brought by the Υ(1S) mesons produced in χb decays. Such potential “conspiracies” illustrate how intertwined the studies of S- and P-wave states are, showing that a complete understanding of the underlying physics requires a global analysis of the whole family.
Few measurements are so far available on the production and polarization of P-wave quarkonia (χc and χb), which are experimentally challenging because the main detection channels involve radiative decays producing low-energy photons. In this respect the Υ(3S) resonance, only affected by feed-down decays from the recently discovered χb(3P) state, a presumably small contribution, offers a clearer comparison between predictions and measurements: the verdict is that there is striking disagreement, as the left-hand figure above shows.
A more decisive assessment of the seriousness of the theory difficulties is provided by measurements of the polarization of high-pT charmonia. Such data probe a domain of high values of the ratio of pT to mass, where the NRQCD prediction is supposed to rest on firmer ground. Furthermore, the heavier charmonium state, ψ(2S), is free from feed-down decays and so its decay angular distribution exclusively reflects the polarization of S-wave quarkonia directly produced in parton–parton scattering, therefore representing a cleaner test of theory. The results for the ψ(2S) shown recently by the CMS collaboration at the Large Hadron Collider Physics Conference, reaching up to pT of 50 GeV, are in disagreement with the theoretically expected transverse polarization, as the right-hand figure indicates (CMS collaboration 2013b). This challenges the assumed hypothesis that long- and short-distance aspects of the strong force can be separated in calculations on these QCD phenomena. The ultimate “smoking-gun signal” will come from measurements of the polarization of directly produced J/ψ mesons. These probe higher pT/mass ratios and lower heavy-quark velocities than the studies of ψ(2S) but at additional cost in the necessary experimental discrimination of the J/ψ mesons from χc decays.
Definite judgements will have to wait for more thorough scrutiny of the theoretical ingredients. An explicit proof that perturbative and non-perturbative effects can be factorized – already existing for several hard-scattering processes in QCD – has yet to be formally provided for the case of quarkonium production. At the same time, the method to determine the colour-octet transition-matrix elements using measured pT spectra must be improved. For example, the existing NRQCD global fits use differential cross-sections measured with acceptance corrections that are evaluated assuming unpolarized production, ignoring the large uncertainty that the experiments assign to the lack of prior knowledge about quarkonium polarization (the acceptance determinations strongly depend on the shape of the dilepton decay distributions). Paradoxically, the fit results lead to the prediction of strong transverse polarization. Moreover, while the NRQCD predictions are considered robust only at sufficiently high pT, the fits assign equal weight to data collected at high pT and those collected at pT values that are similar to the quarkonium mass, which drive the results because of their higher precision. Finally, it could be that the higher-order corrections in the perturbative part of the calculations (currently performed at next-to-leading order in αs) are sizable and not yet well accounted for in current theoretical uncertainties, or that the LDME expansion in the heavy-quark velocity should be reconsidered.
The solution to the quarkonium-polarization problem remains unknown but it seems a safe bet that it will open new perspectives over a whole class of processes. It could unveil an improved Standard Model capable of providing testable predictions for high-momentum production of a large category of non-perturbative hadronic objects. In any case, it will surely stimulate profound rethinking of how such phenomena can be described and predicted.
Some 100 physicists, including experts from all around the world, converged on Bologna on 8–12 April for the 14th International Conference on B Physics at Hadron Machines (Beauty 2013). Hosted by the Istituto Nazionale di Fisica Nucleare (INFN) and by the local physics department, the meeting took place in the prestigious Giorgio Prodi lecture hall, at the heart of a magnificent complex in the city centre.
The Beauty conference series aims to review results in the field of heavy-flavour physics and address the physics potential of existing and upcoming B-physics experiments. The major goal of this research at the high-precision frontier is to perform stringent tests of the flavour structure of the Standard Model and search for new physics, where strongly suppressed, “rare” decays and the phenomenon of CP violation – i.e. the non-invariance of weak interactions under combined charge-conjugation (C) and parity (P) transformations – play central roles. New particles may manifest themselves in the corresponding observables through their contributions to loop processes and may lead to flavour-changing neutral currents that are forbidden at tree level in the Standard Model. These studies are complementary to research at the high-energy frontier conducted by the general-purpose experiments ATLAS and CMS at the LHC, which aim to produce and detect new particles directly.
During the past decade e+e– B factories have been the main pioneers in the field of B physics, complemented by the CDF and DØ experiments at the Tevatron, which made giant leaps in the exploration of decays of the Bs0 meson. Exploiting the highly successful operation of the LHC, the experimental field of quark-flavour physics is being advanced by the purpose-built LHCb experiment, as well as by ATLAS and CMS. In the coming years, a new e+e– machine will join the B-physics programme, following the upgrade of the KEKB collider in Japan and the Belle detector there. This field of research will therefore continue to be lively for many years, with the exciting perspective of reaching the ultimate precision in various key measurements.
The participants at Beauty 2013 were treated to reports on a variety of impressive new results. CP violation has recently been established by LHCb in the Bs0 system with a significance exceeding 5σ by means of the Bs0 → K–π+ channel. The ATLAS collaboration reported its first flavour-tagged study of Bs0 → J/ψφ decays and the corresponding result for the Bs0 – Bs0 mixing phase φs, which is in agreement with previous LHCb analyses. LHCb presented the first combination of several measurements of the angle γ of the unitarity triangle from pure tree-level decays. In the field of charm physics, a new LHCb analysis of the difference of the CP asymmetries in the D0 → π+π– and D0 → K+K– channels does not support previous measurements that pointed towards a surprisingly large asymmetry. The CDF collaboration reported on the observation of D0 – D0 mixing, confirming the previous measurement by LHCb. Concerning the exploration of rare decays, LHCb presented the first angular analysis of Bs0 → φμ+μ–.
Di-muons and more
In addition to this selection of highlights, one of the most prominent and rare B decays, the Bs0 → μ+μ– channel, was the focus of various discussions and presentations at Beauty 2013. In the Standard Model, this decay originates from quantum-loop effects and is strongly suppressed. Recently, LHCb was able, for the first time, to observe evidence of Bs0 → μ+μ– at the 3.5σ level. The reported (time-integrated) branching ratio of 3.2+1.5–1.2 × 10–9 agrees with the Standard-Model prediction. Although the current experimental error is still large, this measurement places important constraints on physics beyond the Standard Model. It will be interesting to monitor future experimental progress.
With recently proposed new observables complementing the branching ratio, the measurement of Bs0 → μ+μ– with increased precision will continue to be vital in the era of the LHC upgrade. ATLAS and CMS can also make significant contributions in the exploration of this decay. Important information will additionally come from stronger experimental constraints on B0 → μ+μ–, which has a Standard-Model branching ratio about 30 times smaller than that for Bs0 → μ+μ–; the current experimental upper bound is about one order of magnitude above the Standard-Model expectation. Assuming no suppression through new physics, B0 → μ+μ– should be observable at the upgraded LHC.
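For orientation, the quoted numbers can be checked with simple arithmetic. The short Python sketch below uses the measured Bs0 → μ+μ– branching ratio as a stand-in for the Standard-Model value and an assumed round number of 10–9 for the B0 → μ+μ– upper bound; both choices are illustrative rather than taken from the experimental papers.

```python
# Back-of-the-envelope check of the quoted branching-ratio hierarchy (illustrative only).
br_bs = 3.2e-9          # measured time-integrated BR(Bs0 -> mu+ mu-), used here as a proxy for the SM value
br_b0_sm = br_bs / 30   # B0 -> mu+ mu- is roughly 30 times more suppressed in the Standard Model
bound_b0 = 1e-9         # assumed experimental upper bound on BR(B0 -> mu+ mu-), a round illustrative number

print(f"Implied SM-level BR(B0 -> mu+ mu-) ~ {br_b0_sm:.1e}")
print(f"Assumed bound / SM-level expectation ~ {bound_b0 / br_b0_sm:.0f}")  # roughly one order of magnitude
```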
Altogether, there were 60 invited talks at Beauty 2013 in 12 topical sessions and 11 posters were displayed. In addition to the searches for new physics in the so-called “golden channels”, the talks covered many other interesting measurements, as well as progress in theory. Results on heavy-flavour production and spectroscopy at the B factories, the Tevatron and the ALICE, ATLAS, CMS and LHCb experiments were presented. Although the primary focus of the conference was B physics, two sessions were devoted entirely to CP violation in top, charm and kaon physics. There were also presentations on the status of lepton-flavour violation and on models of physics beyond the Standard Model, as well as talks on the status and prospects of future B-physics experiments, SuperKEKB/Belle II and the LHCb upgrade. Moreover, each session featured a theoretical review talk. Guy Wilkinson of Oxford University gave the exciting summary talk that concluded the conference.
The charming environment of the old city centre, dating from the Middle Ages, inspired informal physics discussions during tours through beautiful squares and churches. The programme included a visit to the Bologna Museum of History, followed by the conference dinner, with some jazz music to liven up the evening. The food lived up to the reputation of the traditional Bolognese cuisine and was particularly appreciated.
The 14th Beauty conference marked, for the first time, the dominance of the LHC experiments in the heavy-flavour sector. The field is now entering a high-precision phase for B physics, with the LHC and SuperKEKB promising to enrich it with many new measurements throughout this decade. The forthcoming increase in the beam energy of the LHC will double the b-quark production rate, strengthening its role in the exciting quest for physics beyond the Standard Model.
Quarks and Beauty: An Encounter at the Airport
Ten years ago, the Beauty 2003 conference took place in Pittsburgh. I had already been working on B physics for some years and I thought this would be an opportunity to learn what was happening in the field and talk to some of the experts. In particular, the programme included a talk by Ed Witten that I was keen to hear. Above all, the conference was being hosted by Carnegie-Mellon University, where I had studied physics in the 1960s. I was looking forward to visiting the campus after decades and meeting my mentor, Lincoln Wolfenstein, who was one of the organizers.
I was based at the University of Aachen but found out that there was a convenient flight from Brussels to Pittsburgh and, as a courtesy, the university proposed that one of its cars could drop me at the airport. On arrival in Brussels, I checked in and proceeded towards immigration. There was a long line of passengers heading to the US, who had to wait for special security clearance. After some time, a young woman representing the airline came to me to ask some questions. I told her I was going to Pittsburgh for a conference. She checked my papers confirming my conference registration and hotel reservation. Then she asked me what the conference was about. To avoid going into detailed explanations, I just said: “It is about elementary particles. About quarks.” She looked at me with raised eyebrows that suggested a degree of scepticism, so I decided to explain more about quarks.
“All of the matter you see around you is made of atoms. The centre of the atom is a tiny nucleus. The nucleus itself contains tinier constituents called quarks. There are two main varieties, called up-quark and down-quark. There are some rare varieties, too, which are heavier and unstable. One of them is called the beauty-quark. That is the one the conference is about.” I paused to see if she was registering what I said. She had a bemused look, not sure if I was being serious. I thought it was the nomenclature of quarks that confused her. So I said, helpfully: “These names up, down, beauty are sort of arbitrary. There are some people who call the beauty-quark bottom. Not a nice name, in my opinion. I much prefer beauty.”
At this stage she was distinctly nervous and went to fetch one of her superiors. This was an older woman with a no-nonsense manner. She asked to see the conference papers that I had in my hand. She glanced at the first page, which was a copy of the conference poster with the name “Beauty 2003” printed in bold letters. She immediately exclaimed: “It’s a conference on cosmetics! Why didn’t you say so?” Without waiting for my reaction, she picked up my hand-baggage and hustled me past the line of waiting passengers to the top of the queue, where I could proceed to passport control. She wished me a pleasant flight and disappeared.
I did not have the chance to tell her that the beauty-quark is not a cosmetic but rather a laboratory that might shed light on some of the deep mysteries of nature, such as why we exist and why time runs forwards.
• Lalit M Sehgal, Aachen.
For Lincoln Wolfenstein, an expert in the phenomenology of weak interactions, who turned 90 in February.
It is more than six years since Uppsala University was host to the first Workshop on Exotic Physics with Neutrino Telescopes. Since then, the large neutrino telescopes IceCube and ANTARES have been completed and indirect searches for dark matter, monopoles and other aspects of physics beyond the Standard Model are proceeding at full strength. Indeed, some theoretical models have already been called into question by recent results from these detectors. Meanwhile, searches for dark-matter candidates and indications of physics beyond the Standard Model in experiments at the LHC have set stringent constraints on many models, complementing those derived from the neutrino telescopes. The time was therefore ripe for a second workshop, with the Centre for Particle Physics of Marseilles (CPPM) as host, bringing together 47 experts on 3–6 April.
Dark matter
Review talks on supersymmetric dark-matter candidates and the status of experimental searches opened the first day’s sessions. Supersymmetry – still a well-motivated candidate for physics beyond the Standard Model – has been put to its first serious tests at the LHC. The discovery there of a Higgs boson at a mass of 126 GeV can be seen either as just another confirmation of the Standard Model or as a first glimpse of physics beyond it. The lack of evidence so far for supersymmetry from direct searches at the LHC pushes the lower limits on supersymmetric-particle masses towards the tera-electron-volt scale and has implications for the dark-matter candidates arising in supersymmetric models. The currently preferred mass range for the lightest, stable neutralino is in the region of hundreds of giga-electron-volts. This is good news for neutrino telescopes, which – by design – are sensitive to high-energy particles. The downside is that the predicted rates from annihilation of neutralinos accumulated in heavy celestial objects are low once the constraints from the Wilkinson Microwave Anisotropy Probe and the LHC are taken into account. Only a handful of minimal supersymmetric Standard Model variants predict rates in cubic-kilometre neutrino telescopes of the order of 100 events per year or higher.
However, the neutralino in minimal supersymmetry is not the only viable candidate for dark matter. In models with R-parity violation, a long-lived but unstable gravitino with a mass between a few and a few hundred giga-electron-volts could be a component of the dark matter in the halo of galaxies. Neutrinos of any flavour can be produced in gravitino decay and can be detected by neutrino telescopes. A feature of gravitino dark matter is that it would leave no signal in direct-detection experiments because the cross-section for the interaction between a gravitino and normal matter is suppressed by the Planck mass to the fourth power.
Models with extra dimensions of sizes in the range 10–3–10–15 m can also provide dark-matter candidates. Extra dimensions can be accommodated (or even required) in supersymmetry, string-theory or M-theory, where they give rise to branons – weakly interacting and massive fluctuations of the field that represents the 3D brane on which the standard world lives. As stable and weakly interacting objects, branons make a good candidate for dark matter, following the usual scenario: relic branons left over after a freeze-out period during the evolution of the universe accumulate gravitationally in the halos of galaxies, where they annihilate into Standard Model particles that can be detected by gamma-ray telescopes, surface arrays or neutrino telescopes.
From the experimental side, the IceCube, ANTARES, Baikal, Baksan and Super-Kamiokande collaborations presented their latest results on the search for dark matter from different potential sources – the Sun, the Galaxy or nearby galaxies. Their search techniques are similar and based on looking for an excess of neutrinos over the known atmospheric-neutrino background from the direction of the sources. DeepCore, the denser array in the centre of IceCube, which was not part of the original design, has proved extremely useful in lowering the energy reach of the detector. It has opened up the possibility of pursuing physics topics that would otherwise be impossible with a detector designed for tera-electron-volt neutrino astrophysics. Using the surrounding strings of IceCube as a veto, starting and contained tracks can be defined, therefore turning IceCube into a 4π detector with an energy threshold of around 10 GeV, with access to the Galactic centre and the whole Southern Sky.
However, none of the experiments reports any excess, and upper limits on the neutrino flux and on the cross-section for interactions between weakly interacting massive particles (WIMPs) and nucleons have been derived over an ample range of WIMP masses, from about 1 GeV (Super-Kamiokande) to 10 TeV (IceCube). An example of long-term search capability, as well as of consistency in data analysis, was presented by the Baksan experiment. Although it is the smallest of the detectors mentioned above, it has accumulated data corresponding to about 24 years of live time between 1978 and 2009.
Monopoles, nuclearites and more
Monopoles and heavy, highly ionizing particles leave a unique signal in a neutrino telescope: a strong light-yield along the path of the particle, which is much more intense than the usual track-pattern of a minimum-ionizing muon. If the particle is nonrelativistic, then the separation of such a signal from relativistic muons traversing the detector is even easier. However, dedicated online or offline triggers are needed because for a nonrelativistic particle, light is deposited in the detector over a time of up to tens of milliseconds, instead of a few microseconds for a relativistic muon.
The best limit for fast (β > 0.75) monopoles, at a level of about 3 × 10–18 cm–2 s–1, was presented by the IceCube collaboration using data from its 40-string configuration, although the ANTARES limit – at a level of around 7 × 10–17 cm–2 s–1 – remains the best so far in the range 0.65 < β < 0.75. However, the sensitivity of the full IceCube detector could extend down to β = 0.60 and reach a level of between 2 × 10–18 cm–2 s–1 and 10–17 cm–2 s–1 in a one-year exposure, depending on the assumptions made about the monopole spin. Results are expected soon, once the ongoing data analysis is finalized.
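Flux limits of this kind scale essentially with the detector exposure. The following is a minimal sketch of that scaling, assuming zero observed events over negligible background and using round, illustrative values for the effective area, livetime and solid angle rather than the collaborations’ actual detector responses.

```python
import math

# Minimal sketch of how a particle-flux upper limit scales with detector exposure.
# All input values are illustrative round numbers, not those of IceCube or ANTARES.
n90 = 2.44                    # 90% CL Poisson upper limit on the mean for zero observed events, zero background
a_eff_cm2 = 1.0e10            # assumed effective area of about 1 km^2, expressed in cm^2
livetime_s = 3.15e7           # about one year of livetime, in seconds
solid_angle_sr = 2 * math.pi  # assume sensitivity to one hemisphere only

flux_limit = n90 / (a_eff_cm2 * livetime_s * solid_angle_sr)
print(f"Illustrative 90% CL flux limit: {flux_limit:.1e} cm^-2 s^-1 sr^-1")
# about 1e-18 cm^-2 s^-1 sr^-1, the same order of magnitude as the limits quoted above
```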
The Super-Kamiokande collaboration presented a novel way to search for monopoles using the Sun as the target. The idea is that super-heavy monopoles that have been gravitationally trapped in the Sun will induce proton decay along their orbits. Neutrinos with an energy of tens of mega-electron-volts will then be emitted by the decays of the muons and pions produced as the protons decay. This is a low-energy signal that is well below the threshold of large-scale neutrino telescopes but for which Super-Kamiokande has sensitivity. Indeed, this experiment provides the best limit so far on the flux of super-heavy monopoles in the range 10–5 < β < 10–2. At the other end of the kinematic spectrum, radio-Cherenkov detectors such as RICE and ANITA provide the best limits for ultrarelativistic monopoles of intermediate mass, at the level of 10–19 cm–2 s–1.
Another bright signature, although from a different process, is produced by slowly moving heavy nuclearites. These massive stable lumps of up, down and strange quarks could be detected in neutrino telescopes through the blackbody radiation emitted by the overheated matter along their path. From the analysis of 310 days of live time in the years 2007–2008, the ANTARES collaboration reported a flux limit at the level of 10–17 cm–2 s–1 sr–1 for nuclearite masses larger than 1014 GeV and β around 10–3. Indeed, the limit improves previous results from the MACRO experiment by a factor of between three and an order of magnitude, depending on the nuclearite mass.
The atmosphere, acting as a target for ultra-high-energy cosmic rays, can be a useful source for searches of physics beyond the Standard Model. The interaction of a cosmic ray of energy around 1011 GeV with a nucleon in the atmosphere takes place at a much higher centre-of-mass energy than is achievable in accelerator laboratories and a wealth of physics can be extracted from such collisions. Supersymmetric particles can be produced in pairs and, except for the lightest, they can be charged. Even if unstable, they can, because of the boost in the interaction, reach the depths of a detector and emit Cherenkov light as they traverse an array. The signature is two minimum-ionizing, parallel, coincident tracks separated by more than 100 m. These types of searches are being carried out by the two large neutrino-telescope collaborations, IceCube and ANTARES.
The same interactions of cosmic rays with the atmosphere can also be used to probe non-standard neutrino interactions arising from the effects of tera-electron-volt gravity and/or extra dimensions. At high energies, neutrino interactions with matter may become stronger and the atmosphere can become opaque to neutrinos with energies of peta-electron-volts. A signature in a neutrino telescope would be an absence of regular neutrinos with ultra-high energies accompanied by an excess of muon bundles at horizontal zenith angles. The same effect would take place with a cosmogenic neutrino flux – that is, the flux of neutrinos produced by the interactions of ultra-high-energy cosmic rays with the cosmic microwave background radiation. In the absence of a discovery so far, this flux can be assumed to be at a level compatible with gamma-ray constraints from the Fermi Gamma-ray Space Telescope. The neutrino-nucleon cross-section will depend on the number of extra dimensions, ND, and a lack of events over the expected flux can be transformed into a limit on ND. However, the effect in neutrino telescopes with volumes of a cubic kilometre or so is not big. For values of ND not excluded by the LHC, fewer than one event a year is estimated for IceCube. Only with the larger radio arrays is the expected number of events of the order of 10 per year.
The two recent peta-electron-volt events announced by the IceCube collaboration have already been used to set stringent limits on the violation of Lorentz invariance. If strict Lorentz invariance does not hold, then neutrino bremsstrahlung of electron–positron pairs (ν → νe+e–) is possible, so extragalactic neutrinos would rapidly lose energy via such a process. This would lead to a depletion of the ultra-high-energy neutrino flux at the Earth. Assuming that the IceCube events are, indeed, extragalactic (that is, they have travelled distances of the order of megaparsecs from the sources to the Earth) and that the extragalactic high-energy neutrino flux is at most at the level of the current IceCube limit of 2 × 10–8 cm–2 s–1 sr–1, a limit can be set on Lorentz-invariance violation, parameterized by the factor δ = (dE/dp)2 – 1. Under these assumptions, the bound obtained from the two IceCube events is δ < 10–18, which is orders of magnitude smaller than the current best limit of 10–13.
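For readers unfamiliar with this parameterization, the definition can be written out as follows, assuming a neutrino dispersion relation that is linear in momentum (a sketch of the convention, with the neutrino velocity vν expressed in units of the speed of light):

\[
  \delta \;\equiv\; \left(\frac{\mathrm{d}E}{\mathrm{d}p}\right)^{2} - 1
  \;\simeq\; v_{\nu}^{2} - 1
  \;\approx\; 2\,(v_{\nu} - 1),
  \qquad |\delta| \ll 1 .
\]

A positive δ corresponds to slightly superluminal neutrinos, for which the bremsstrahlung channel ν → νe+e– opens above a threshold energy set by δ and the electron mass; the survival of peta-electron-volt neutrinos over extragalactic distances is what bounds δ from above.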
High-energy atmospheric muons and neutrinos present a background to many of the topics discussed at the workshop. Even for conventional production, the absolute normalization of the atmospheric lepton spectrum is not well understood – in particular the contribution from prompt charm decays. Calculations of a rarely considered atmospheric-lepton component, arising from the decays of unflavoured mesons (η, η’, ρ, ω, φ), were presented at the workshop. These mesons decay promptly to μ+μ– pairs and, in very-high-energy cosmic-ray interactions, their decay products can dominate the muon flux at energies above 106 GeV, forming a background that must be taken into account in exotic searches.
One of the unexpected developments in the field since the first ideas of building neutrino telescopes has been their use in neutrino-oscillation physics. On one hand, the detectors can probe oscillation physics at energies beyond the reach of smaller detectors. On the other, an aggressive plan to lower the energy threshold of IceCube and the proposed KM3NeT array to the few-giga-electron-volt region is underway, and IceCube has already produced physics results with its low-energy subarray, DeepCore. Plans to build megatonne water-Cherenkov detectors with a giga-electron-volt energy threshold – PINGU at the South Pole and ORCA in the Mediterranean – were also discussed at the workshop. These detectors would consist of about 20–50 strings of optical modules with an inter-string separation of around 20 m, compared, for example, with the 125 m inter-string separation of IceCube or the 70 m of DeepCore. Such detectors could address the issue of the neutrino mass hierarchy at relatively low cost and on a short timescale, because the technology already exists and the deployment techniques are the same as for IceCube and ANTARES.
Most atomic nuclei that exist naturally are not spherical but have the shape of a rugby ball. While state-of-the-art theories are able to predict this behaviour, the same theories have predicted that for some particular combinations of protons and neutrons, nuclei can also assume an asymmetrical shape like a pear, with more mass at one end of the nucleus than the other. Now an international team studying radium isotopes at CERN’s ISOLDE facility has found that some atomic nuclei can indeed take on this unusual shape.
Most nuclear isotopes predicted to have pear shapes have for a long time been out of reach of experimental techniques. In recent years, however, the ISOLDE facility has demonstrated that heavy, radioactive nuclei, produced in high-energy proton collisions with a uranium-carbide target, can be selectively extracted before being accelerated to 8% of the speed of light. The beam of nuclei is directed onto a foil of isotopically pure nickel, cadmium or tin where the relative motion of the heavy accelerated nucleus and the target nucleus creates an electromagnetic impulse that excites the nuclei.
By studying the details of this excitation process it is possible to infer the nuclear shape. This method has now been used successfully to study the shape of the short-lived isotopes 220Rn and 224Ra. The data show that while 224Ra is pear shaped, 220Rn does not assume the fixed shape of a pear but rather vibrates about this shape.
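As a rough consistency check, the quoted beam velocity of 8% of the speed of light corresponds to a kinetic energy of roughly 3 MeV per nucleon; the short Python sketch below, which assumes nothing beyond standard relativistic kinematics and the atomic mass unit, makes the arithmetic explicit.

```python
import math

# Kinetic energy per nucleon for a beam travelling at 8% of the speed of light (illustrative check).
beta = 0.08                               # quoted beam velocity in units of c
amu_mev = 931.494                         # atomic mass unit in MeV/c^2
gamma = 1.0 / math.sqrt(1.0 - beta**2)    # Lorentz factor
ke_per_nucleon = (gamma - 1.0) * amu_mev  # kinetic energy per nucleon in MeV

print(f"gamma = {gamma:.5f}")
print(f"kinetic energy per nucleon ~ {ke_per_nucleon:.1f} MeV/u")
# about 3 MeV per nucleon, i.e. roughly 0.7 GeV in total for a mass-224 nucleus such as 224Ra
```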
The findings from the teams at ISOLDE contradict some nuclear theories and will help others to be refined. The experimental observation of nuclear pear shapes is also important because it can guide experimental searches for atomic electric dipole moments (EDMs). The Standard Model of particle physics predicts that the value of the atomic EDM is so small that it lies well below the current observational limit. However, many theories that try to extend the model predict values of EDMs that should be measurable. Testing these theories requires improved measurements, the most sensitive approach being to use exotic atoms whose nuclei are pear-shaped.
The new measurements will help to direct the searches for EDMs currently being carried out in North America and in Europe, where new techniques are being developed to exploit the special properties of radon and radium isotopes. The expectation is that the data from the nuclear-physics experiment at ISOLDE can be combined with results from atomic-trapping experiments that measure EDMs to make the most stringent tests of the Standard Model.