
CERN pulls Strings together

The annual “Strings” conference draws together a large number of active researchers in the field from all over the world. As the largest and most important event on string theory, it aims to review the recent developments for experts, rather than give a comprehensive overview of the field. CERN was an attractive venue for the conference this year, with the imminent start-up of the LHC together with the longer-term Theory Institutes on string phenomenology and black holes taking place just before and after the event. Organized by CERN’s Theory Unit, the universities of Geneva and Neuchâtel, and the ETH Zurich, Strings 2008 attracted more than 400 participants from 36 countries. It opened in the presence of CERN’s management and the rector of the University of Geneva, who also represented the state of Geneva. Appropriately, the first talk was by Gabriele Veneziano, formerly of CERN and one of the initiators of string theory, who invented his famous formula 40 years ago. There was a welcome reception at the United Nations in Geneva, and the conference banquet was held in the Unimail building at the university.

A framework for unified particle physics

String theory can be seen as a framework for generalizing conventional particle quantum field theory, with applications stretching across a broad range of areas, such as quantum gravity, grand unification, gauge theories, heavy-ion physics, cosmology and black holes. It allows the systematic investigation of many of the important features of such theories by providing a coherent and consistent way of formulating the problems at hand. As Hirosi Ooguri from the California Institute of Technology so aptly said in his summary talk, string theory can be viewed, depending on the application, as a candidate, a model, a tool and/or a language.

The richness of string theory makes it a candidate for a consistent framework that truly unifies all of particle physics, including gravity. It also provides a stage for analysing complicated problems, such as quantum black holes and strongly coupled systems, as in quark–gluon plasma, through the means of idealized, often supersymmetric, models. Moreover, string theory has proved to be an invaluable tool for doing computations in particle physics in an extremely efficient manner. It also often provides a novel language, with which it miraculously transforms seemingly hard problems into simple ones by reformulating them in a “dual” way. This also includes certain hard problems in mathematics that become simple when translated into the language of string theory.

The talks displayed all of these four facets of string theory effectively. Essentially there were five key areas on which the conference focused, roughly reflecting the fields of highest activity and progress during the past year. In addition, there were three talks on the LHC and its physics by the project leader, Lyn Evans; CERN’s chief scientific officer, Jos Engelen; and Oliver Buchmuller from CERN and the CMS experiment. These were intended to educate the string community in down-to-earth physics.

The first area covered was string phenomenology, which uses string theory as a model and candidate for the unification of all particles and forces. The various approaches to model building reviewed were mostly of a geometrical nature. That is, many properties of the Standard Model can be translated into geometrical properties of the compactification space that is used to make strings look four-dimensional at low energies. While this translation can be pushed a long way qualitatively, it seems exceedingly difficult technically to go much beyond this stage and obtain predictions that would be testable at the LHC. On the other hand, for the most optimistic case in which the string scale is low (namely of the order of the scale of the weak interactions), concrete predictions of string theory are fully possible, as reported in one of the talks.

Another area, which has become highly visible during the past year, is the computation of certain scattering amplitudes, often in theories with extended supersymmetries and notably in N = 8 supergravity. Extensive computations based on string-inspired methods suggest that this theory may be finite, owing to unexpected cancellations of Feynman diagrams. However, some researchers have suggested that Feynman diagrams might not provide the most efficient way to perform quantum field theory; the results may instead point to the existence of a yet-to-be-discovered dual formulation of the theory that would be much simpler. Other related results concern theories with less supersymmetry, as well as amplitudes of phenomenological relevance, such as multi-gluon scattering amplitudes.

It is well known that string theory is a theory not only of strings but also of membranes and other extended objects. A hot topic of the past year has been the “M-brane mini-revolution”. This deals with a novel description of M-theory membranes and has created some controversy about the meaning of the results. Several talks duly reviewed this subject and it became apparent that the issues had not yet been completely settled.

A key topic of every string conference within the last 10 years has been the gauge theory/gravity duality, which maps ordinary gauge theories to gravitational – i.e., string – theories. This year’s focus was mainly on the connection between systems that are strongly coupled – and in a sense hydrodynamical – and gravity. This leads to a stringy, dual interpretation of certain states in heavy-ion physics, such as the quark–gluon plasma. In particular, a link can be made between the decay of glueball states in QCD and the decay of black holes by Hawking radiation. While these ideas seem to work well on a qualitative level, quantitatively solid results are much harder to obtain because of the strongly coupled nature of the physics involved. The significance of this approach is the subject of ongoing debate and collaboration between heavy-ion physicists and string theorists.

A field of permanent activity and conceptual importance is that of black hole physics, to which string theory has made extremely important contributions during the past few years. As reviews at the conference showed, the identification and counting of microscopic quantum states in stringy toy models has been refined and made more precise, even to the level of quantum corrections. Moreover, fascinating connections between black holes and topological strings have been proposed, and testing those connections has been an important field of activity during the past few years. The results of topological string theory have also had a considerable impact on certain areas of mathematics, and have led to fruitful interactions with mathematicians.

Apart from these five focus areas, other subjects were reviewed at the conference. For example, there was a lecture on loop quantum gravity so that the string community could judge whether there might be connections to this seemingly different approach to quantum gravity.

Both during the conference and afterwards, many participants expressed the view that string theory continues to be a healthy, fascinating and important subject for theoretical work. This is despite the fact that the original main goal, namely to explain the Standard Model of particle physics, appears to be much harder to achieve (if, indeed, achievable at all) than initially hoped. In the final outlook talk, David Gross of the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara, presented a picture of string theory as an umbrella that covers most of theoretical physics, similar to the way in which CERN has emerged as an umbrella for the worldwide community of particle physicists.

Getting heavy on Capri

The Second Workshop on Theory, Phenomenology and Experiments in Heavy Flavour Physics took place on 16–18 June in Anacapri, Capri, Italy – the same location as the first meeting in the series in May 2006. The aim of the series is to bring together theoreticians and experimentalists to develop a dialogue on phenomenological issues. The focus is on discussion and interaction among physicists who are active in the field. This year the topics centred on recent results, especially in B-physics, as well as on the potential for heavy flavour physics in both current and future experiments. With participation by invitation only, owing to space constraints at the venue, the 60 or so attendees took part in many fruitful and lively discussions after the seminars and during the free time.

During the past decade flavour physics has witnessed unprecedented experimental and theoretical progress, opening up the era of precision tests of the Standard Model. The quark and lepton sectors of the Standard Model have been subjected to a series of stringent tests, and it has become customary to look for violations of the Standard Model by using unitarity triangles. Updates of the unitarity triangle analyses were presented by Marco Ciuchini from Rome III/INFN and Jérôme Charles from the Centre de Physique Théorique, Marseille, for the UTfit and CKMfitter collaborations, respectively.

Such updates have been possible, not only because of the large amount of data now available, but also through theoretical progress (e.g. in lattice calculations). As Chris Sachrajda from the University of Southampton and Fermilab’s Paul Mackenzie reported, many approximations in typical lattice calculations have been overcome in recent years. Numerous simulations are now being performed that include dynamical quark–antiquark pairs, with pion masses of 300 MeV or less. Moreover, it is now becoming possible to generate configurations on lattices that are sufficiently fine and large to allow direct simulation of the charm quark.

An impressive amount of data has come from the two asymmetric e+e– B factories, PEP-II and KEKB, and their respective detectors, BaBar and Belle. The BaBar experiment concluded data taking in April 2008, having collected a total of 531 fb–1, of which 433 fb–1 was on the Υ(4S) peak, corresponding to about 470 × 10⁶ BB pairs. In 2008, BaBar also collected 30 fb–1 on the Υ(3S) resonance and 14 fb–1 on the Υ(2S) resonance, with interesting results, such as the first evidence of the ηb – the long-sought bottomonium ground state. By the time of the workshop, Belle had accumulated about 850 fb–1, with 730 fb–1 on the Υ(4S) resonance.
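
As a rough cross-check (my arithmetic, not from the article, taking a Υ(4S) production cross-section of roughly 1.1 nb in e+e– collisions at this energy):

\[
N_{B\bar{B}} \;\approx\; \sigma_{\Upsilon(4S)}\,\mathcal{L}_{\mathrm{int}} \;\approx\; 1.1\times10^{6}\ \mathrm{fb} \times 433\ \mathrm{fb}^{-1} \;\approx\; 4.8\times10^{8},
\]

consistent with the quoted figure of about 470 × 10⁶ BB pairs.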

The B factories have led the recent progress in knowledge of the unitarity triangle related to the B system, with angles α, β and γ (or φ2, φ1 and φ3). Christoph Schwanda from the Austrian Academy of Sciences and Giuseppe Finocchiaro from the Frascati National Laboratory/INFN reported on measurements of the angles and sides of the unitarity triangle. Paolo Gambino of Torino and Francesca Di Lodovico from Queen Mary, University of London, reviewed the main theoretical problems on the way to the long-sought precise inclusive determination of |Vub| in the Cabibbo–Kobayashi–Maskawa (CKM) matrix.

Golden modes and penguins

The B factories’ golden modes for the extraction of sin2β are b → cc decays, and the latest measurement from BaBar gives sin2β = 0.714 ± 0.032 ± 0.018, in agreement with the results from Belle. Other interesting decays are the b → sqq “penguin dominated” decays, the study of which is motivated not only by the measurement of sin2β, but mostly by the search for new physics.
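
For orientation (a standard relation rather than something quoted at the workshop), sin2β is extracted from the time-dependent CP asymmetry of decays such as B0 → J/ψKS:

\[
A_{CP}(t) \;=\; \frac{\Gamma\!\left(\bar{B}^0(t)\to f_{CP}\right)-\Gamma\!\left(B^0(t)\to f_{CP}\right)}{\Gamma\!\left(\bar{B}^0(t)\to f_{CP}\right)+\Gamma\!\left(B^0(t)\to f_{CP}\right)} \;\simeq\; \sin 2\beta\,\sin(\Delta m_d\,t),
\]

where Δmd is the B0 mixing frequency and the approximation holds when direct CP violation is negligible, as expected for these tree-dominated modes.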

The angle α can be studied in b → uud modes, and its determination is made complicated by the addition of the b → d penguin amplitude to the b → uud tree one. The first measurements of the B0 → ρ0ρ0 decay confirm the indication that the effect of penguin amplitudes is relatively small in ρρ decays, which in fact yield the most stringent constraints on α.

A precise measurement of the unitarity angle, γ, unthinkable when the B factories started, is now becoming possible with the large statistics accumulated by the B factories. Several new measurements of B± → D0 K± transitions have appeared recently, and strong evidence for direct CP violation in these decays is building up.

Radiative penguin and leptonic B meson decays are another area of interest at B factories because they constitute a powerful probe of new physics. John Walsh of INFN/Pisa reported on recent experimental results in this field.

On the theory side, Luca Trentadue from Parma/INFN discussed the resummation of large logarithmic terms, which otherwise spoil the convergence of the perturbative series in the threshold region, in semileptonic charmed B decays. Cai-Dian Lü of the Institute of High Energy Physics (IHEP), Beijing, analysed charmless two-body decays of the B mesons to light vector and pseudoscalar mesons in the soft-collinear effective theory.

With the first runs of the LHC on the horizon, heavy flavour physics is entering a new phase. Natalia Panikashvili of Michigan, Andrei Starodumov of PSI and Stefania Vecchi of INFN/Ferrara described the role of flavour physics at the LHC for the ATLAS, CMS and LHCb experiments, respectively. Tobias Hurth of CERN/SLAC stressed that a large increase in statistics at LHCb for Bd → K*0μ+μ– will make measurements possible with much greater precision, allowing for an indirect search for new physics.

A main goal at the LHC is to measure CP-violating parameters, such as the Bs0 mixing phase, which in the Standard Model is predicted to be small, and could be another way of evincing new physics. Luca Silvestrini of INFN/Roma presented a new analysis that claims evidence of new physics through a Bs0 mixing phase that is much larger than expected in the Standard Model. However, this had not been confirmed by independent analyses at the time of the workshop. Future data from the Tevatron or an extended Υ(5S) run of Belle may be of help in assessing the new results. Joaquim Matias of the Institute for High Energy Physics, Barcelona, discussed possible strategies to measure the weak mixing phase of the Bs0 system using B mesons decaying into vector mesons.

Charm physics, charm spectroscopy, CP violation in charm decays and searches for new physics were all discussed at length during the workshop. Ikaros Bigi of Notre Dame stated that comprehensive and detailed CP studies of charm decays provide a unique window into flavour dynamics. He emphasized the importance of LHCb in achieving the statistics required to find evidence of new physics in the charm sector. There is a powerful programme at LHCb for charm physics, which includes studying D0 mixing, observed for the first time at B factories in spring of last year, and CP violation in some specific decays. Pietro Colangelo of INFN/Bari discussed aspects of new charm spectroscopy and David Miller of Purdue presented the latest results from CLEO-c on W annihilation decays of the D+ and Ds+ mesons. The HERA electron–proton storage ring at DESY had come to the end of its scheduled operation roughly a year before the workshop, but data are still being analysed. Luis Labarga of the Universidad Autónoma de Madrid presented recent results from the H1 and ZEUS collaborations on charm production and fragmentation, in addition to results on beauty production.

The BESII experiment at the Beijing Electron–Positron Collider has accumulated 5.8 × 10⁷ J/ψ events, 1.4 × 10⁷ ψ(2S) events and 35.5 pb–1 of ψ(3770) data. Xiaoyan Shen of IHEP reported on recent results, which include the observation of the Y(2175) in the J/ψ → ηφf0(980) decay. In the past few years a wealth of new experimental results on heavy quarkonia and exotic states has become available. Riccardo Faccini of Rome “La Sapienza”/INFN summarized recent developments in the search for excited states of the scalar nonet among the light mesons, and reviewed the experimental evidence for new states. Ruslan Chistov of the Institute of Theoretical and Experimental Physics, Moscow, discussed the experimental status of X, Y and Z states at B factories. More new states, decays and production mechanisms have been discovered in the past few years than in the previous 30 years. Besides the regular quarkonium states (mostly quark–antiquark states), new exotic states have been found, the composition of which (molecule, tetraquark, hybrids…) is currently the centre of a lively debate. It was the subject of a dedicated round table at the workshop, chaired by Nora Brambilla of Milan, with Antonello Polosa of INFN/Rome, Joan Soto of Barcelona and Antonio Vairo of Milan.

Flavour for all

There were many lively discussions on the role of flavour to evince new physics. Andrzej Buras of the Technical University of Munich reviewed several main results on flavour physics beyond the Standard Model, analysing in particular flavour and CP-violating processes in models with supersymmetry, “littlest Higgs” and extra dimensions. Ulrich Nierste of Karlsruhe summarized the role of the decays B0 → μ+μ–, B+ → τ+ντ and B → D τ ντ in the hunt for new Higgs effects in the minimal supersymmetric Standard Model. Flavour and precision physics in the Randall–Sundrum model were the topics of discussion for Matthias Neubert of the Johannes Gutenberg University, Mainz, while Jernej Kamenik of INFN/Frascati talked about phenomenology in the context of minimal flavour violation.

While waiting for the LHC, Fermilab’s Tevatron has not only demonstrated the possibility of B-physics at hadron machines but also produced measurements that are highly competitive and complementary to those of the B factories, in particular through the unique access that hadron machines have to the Bs sector. Vaia Papadimitriou of Fermilab and the CDF experiment, and Brad Abbott of Oklahoma and D0, presented the latest measurements on the production, spectroscopy, lifetimes and branching fractions for B mesons, B baryons, and quarkonia. Theoretical issues for the measurement of the top mass using jets, and implications for measurements of the top mass at the Tevatron, were discussed by Iain Stewart of the Massachusetts Institute of Technology. Monte Carlo programs have already proved indispensable for making exclusive theoretical predictions at the Tevatron. Christian Bauer from Lawrence Berkeley National Laboratory presented an improved Monte Carlo framework (GenEvA) mainly based on a new notion of phase space.

Tom Browder of Hawaii and Marcello Giorgi of INFN/Pisa made a strong science case for continued heavy flavour physics measurements at future Super-B machines, underlining their complementarity to the LHC programme. One focus for the Super-B factories would be studying, and possibly discovering, new sources of flavour-changing neutral currents and CP violation. There are plans for a Super-B factory at KEK in Japan based on the existing KEK accelerator and Belle detector, as well as a proposal for a laboratory in Italy, the SuperB project. In both cases the goal is a luminosity of around 1 × 10³⁶ cm⁻²s⁻¹, approximately 60 times as high as achieved at present B factories.
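
To see where the factor of 60 comes from (taking roughly 1.7 × 10³⁴ cm⁻²s⁻¹ for the peak luminosity of the present B factories – a figure assumed here, not quoted in the article):

\[
\frac{1\times10^{36}\ \mathrm{cm^{-2}\,s^{-1}}}{1.7\times10^{34}\ \mathrm{cm^{-2}\,s^{-1}}} \;\approx\; 60 .
\]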

The conference was extremely lively, and most goals of the workshop, such as promoting co-operation and a fruitful exchange of ideas among theoreticians and experimentalists, were fulfilled. Up to now, the data agree globally with the CKM picture, but there are also hints of discrepancies, which if confirmed could signify new physics. With the advent of the new machines, it will be feasible to investigate possible flavour structure and new sources of CP violation beyond the Standard Model through studies of flavour processes. Heavy flavour physics therefore continues to play a fundamental role in particle physics and has an exciting future.

The Higgs and the LHC

• Dedicated to the memory of Francisco (Paco) Ynduráin, a good friend and excellent physicist (1940–2008).

The LHC is gearing up to do real physics, and after all the astrophysical nonsense about the Big Bang and black holes we now face the cold reality of experiment. In this context, it may be useful to summarize our knowledge of the Higgs system to date, which is the purpose of this article.

First, let me make a clear statement. Our present knowledge of the Standard Model is of course way beyond the knowledge of, say, 1959, when the PS at CERN started up. Clearly there have been many unanticipated discoveries, not to mention theoretical evolution. The Standard Model is chock full of facts crying out for explanation, such as the existence of three generations of quark–lepton families, or the many unexplained parameters of the model, such as the particle masses. That latter problem has, in today’s Standard Model, shifted to the many different particle–Higgs couplings, and is still not totally understood. Consider this: between the neutrino masses (10⁻³ eV) and the top quark mass (1.75 × 10¹¹ eV) there is a difference of 14 unexplained orders of magnitude. Why? How? We can say nothing meaningful about these things and we have no idea if the LHC will illuminate the problem; at the very least, realizing all this, we should not have the arrogance to think that we know what is going to happen.
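
To spell out the arithmetic behind those 14 orders of magnitude (a trivial check, not part of the original argument):

\[
\log_{10}\!\left(\frac{m_t}{m_\nu}\right) \;\approx\; \log_{10}\!\left(\frac{1.75\times10^{11}\ \mathrm{eV}}{10^{-3}\ \mathrm{eV}}\right) \;=\; \log_{10}\!\left(1.75\times10^{14}\right) \;\approx\; 14 .
\]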

That being said, let’s see what we know. Our knowledge of the Higgs sector derives from the measurement of radiative corrections (plus the lower limit on the Higgs mass from direct experimentation at LEP), and the only quantities that depend on the details of the Higgs sector are radiative corrections to the masses of the vector bosons, including the photon. The masslessness of the photon is not automatic within the Standard Model, which provides a serious constraint.

Thus the measured radiative corrections are those affecting the W and Z masses, which come about – theoretically – through self-energy diagrams such as illustrated in figure 1. We really do not know what the X-line in the figure represents. It could be the propagator of one or more particles of distinct mass, or even some smeared-out mass (if the Higgs is heavy and strongly interacting), or of some continuous distribution. These various possibilities have been scrutinized for quite some time, but no definite view has emerged. Even so, it is useful to take a specific model, namely the simplest Higgs sector with one physical Higgs.

In the first instance, in a renormalizable theory, masses are free parameters – to be renormalized and taken from experiment – and therefore radiative corrections are invisible. Nonetheless there are two facts that allow us to come to some conclusions.

The first fact is that the photon mass is zero. Such a mass is not subject to renormalization and we may thus ask under what circumstances the Higgs sector produces a photon mass of zero. It happens that the simplest Higgs model (with just one physical Higgs) produces a massless photon, while adding degrees of freedom to the Higgs sector in general destroys this prediction. To get a zero photon mass in more complicated Higgs systems requires tuning of parameters; in other words, the prediction of zero photon mass is lost. This, in my opinion, is a strong argument against more complicated Higgs systems. To abandon a prediction that agrees with experiment is not something one should do lightly. However, this is not without some nasty consequences.

Here I must mention the strong CP violation contained in the strong interactions in QCD. This effect, which indeed is not observed, is commonly explained away (in the Peccei–Quinn approach) by using two Higgs systems. While it is easy within such a model to tune the photon mass to zero, it is nonetheless a fact that the prediction of zero mass is lost. On top of that, in these models there is normally a particle of very small mass (the axion) of which there is no evidence experimentally. This is a worrying problem, for which there is no generally accepted solution, although there are some attempts at resolving it.

In addition, there is supersymmetry. In supersymmetry one inevitably has more than one Higgs system, so a priori that ruins the prediction of zero photon mass of the simplest Higgs model. The simplest supersymmetric model accidentally escapes this problem and predicts a zero photon mass; however there are other difficulties with this model.

The second fact concerns the vector-boson masses. The simplest Higgs model predicts a certain ratio between the W and Z masses, which is not subject to renormalization. The associated parameter is the ρ parameter and, assuming that the only things that change the ratio arise from radiative corrections, one obtains a prediction for the Higgs mass (assuming the simplest model). This is the source of the predictions on the limits on the Higgs mass that are commonly quoted (figure 2). It should be pointed out that the corrections to the mass ratio also contain a prediction for the top quark mass that agrees very well with the observed value. So, indeed, we must assume that the Higgs sector is such that the prediction for the mass ratio of the W and Z bosons is that given by the simplest Higgs sector. This puts severe limits on theoretical models for the Higgs sector.
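
For readers who want the relation made explicit (standard electroweak theory rather than anything specific to this article), the ρ parameter is built from the W and Z masses and the weak mixing angle, and equals unity at tree level in the simplest one-doublet Higgs model:

\[
\rho \;\equiv\; \frac{m_W^2}{m_Z^2\cos^2\theta_W} \;=\; 1 \quad \text{(tree level, simplest Higgs sector)},
\]

with radiative corrections that depend quadratically on the top mass and only logarithmically on the Higgs mass – which is how the precision data yield both a successful top-mass prediction and the quoted limits on the Higgs mass.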

It is clear that our knowledge of the Higgs sector is scanty, and in particular a Higgs system with a very heavy Higgs is quite possible. The latter would probably produce a wide resonance for the X in figure 1, and it would be hard to make precise statements on the decays of such a resonance. Well, let us hope that the LHC clarifies the matter.

Exclusive events give new window on LHC physics

Most reactions at Fermilab’s Tevatron occur when the colliding proton and antiproton break apart into quarks and gluons that hadronize to form the particles that fly off into the detector. In exclusive interactions, however, the proton and antiproton avoid the breakup, glancing off each other in a process where the underlying interaction involves some combination of photons and/or gluons.

In 2006, the CDF collaboration at the Tevatron obtained the first clear evidence for exclusive interactions at a proton–(anti)proton collider, when they observed high-energy photon pairs in the central rapidity (barrel) region of the detector, but with nothing else, down to an angle of around 0.1° from the beam (±7.4 units of pseudorapidity). They found only three events initially, against a small background predicted to be at most 0.2 events (Aaltonen et al. 2007). These events were consistent with being produced via gluon–gluon “fusion” via a quark “box” where the gluons originate from the beam particles, as shown in figure 1a. An additional “screening” gluon is exchanged to cancel the colour of the interacting gluons and allow the leading hadrons to stay intact. The collaboration has since observed more exclusive two-photon final state candidates in new data.
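
For reference (a standard kinematic relation, not spelled out in the original), pseudorapidity η and the polar angle θ measured from the beam are related by

\[
\eta \;=\; -\ln\tan\frac{\theta}{2}, \qquad\text{so that}\qquad \theta \;\approx\; 2e^{-\eta} \;\approx\; 1.2\ \mathrm{mrad} \;\approx\; 0.07^{\circ} \quad\text{for } \eta = 7.4,
\]

consistent with the “around 0.1° from the beam” quoted above.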

The search for this unusual two-photon process at the Tevatron was originally proposed in 2001, when CDF physicists first explored the possibility that the Higgs boson could be produced by gluon–gluon fusion as described in figure 1b (Albrow et al. 2005). The idea is that if the Higgs field fills the vacuum, it should be possible to “excite the vacuum” into a real Higgs particle in a glancing collision of a proton and antiproton. Theorists had various estimates for the probability of this happening, but their predictions varied widely.

The two-photon process measured in the CDF detector is produced in much the same way as the Higgs would be, but much more prolifically, so making it a “standard candle” for the production of Higgs bosons. Theorists from the Centre for Particle Theory at the University of Durham predicted that there should be only about one clean two-photon event of this kind in data corresponding to 532 pb–1 of integrated luminosity taken by CDF in Run II at the Tevatron (Khoze et al. 2006). The three events that the CDF collaboration found confirmed this prediction. Thus, the similar process that could produce the elusive Higgs boson must also happen, and could be measured at the LHC, thereby providing measurements of the particle’s mass, spin and other properties.

In the process of checking this measurement, the CDF physicists came across another exclusive physics process that had never been seen before at a proton–(anti)proton collider. They found 16 events that are consistent with the QED prediction that photons travelling with the proton and antiproton can interact to produce only an electron–positron pair (γγ → e+e–) without breaking up the proton and antiproton (Abulencia et al. 2007). In this case the Tevatron acts as a photon “collider”. As the backgrounds to this process are similar to the final state discussed above, the CDF team gained further confidence in their exclusive two-photon final state analysis. To date, they have found many more exclusive electron–positron candidate events. QED two-photon processes such as this, which have previously been observed in electron–positron, electron–proton and nuclear collisions, should provide a means of calibrating the momentum scale and resolution of forward proton spectrometers proposed for the ATLAS and CMS experiments at the LHC.

The CDF team then reasoned that they should also see exclusive muon-pair events produced by the same QED interaction, as in figure 2a. Apart from an indication of exclusive pair production at the ISR at CERN (Antreasyan et al. 1980), this would be another “first” at a proton–(anti)proton collider. In 2007 their supposition was confirmed, but with an added bonus. The expected process, γγ → μ+μ–, was indeed detected according to QED expectations.

In addition, the CDF physicists recorded, for the first time in hadron–hadron collisions, exclusive photoproduction of the J/ψ and ψ(2S) decaying to a pair of muons (figure 2b). Figure 3 shows the clear, clean signals observed. The team also detected the contribution from exclusive production via gluon–gluon fusion of the χc0, decaying to a muon pair and a soft photon (figure 1c). Evidence for this state in CDF data had also been reported earlier, in 2003 (Wyatt 2003).

An analysis aimed at higher muon-pair masses also revealed the upsilon (Υ). The Υ(1S) and Υ(2S) have been clearly seen in CDF, with the Υ(3S) emerging, to be revealed by the higher statistics that are now available. In the case of the photoproduction of these bottomonium (Υ(1S), Υ(2S)) and the charmonium (ψ(1S), ψ(2S)) states, the Tevatron is acting as a “photon–pomeron collider” (figure 2b). The pomeron is well known from diffractive reactions, which are characterized by the exchange of a quark/gluon construct – the pomeron – with the quantum numbers of the vacuum. Because the exchange is colourless, in these reactions a large region in pseudorapidity space is left empty of particles (the “rapidity gap”). In perturbative QCD, the lowest order prototype of the pomeron is a colour-neutral system of two gluons.

This photoproduction of charmonium and bottomonium was previously studied in collisions at DESY’s electron–proton collider, HERA, with similar kinematics (√s = 100 GeV) and the cross sections are in agreement. A comparison of the J/ψ and ψ(2S) cross sections with predictions from HERA data is sensitive to a possible contribution from the elusive and enigmatic odderon. This is a partner of the pomeron with charge conjugation odd (C-odd) and in QCD is formed from three gluons in a colour-neutral state. Unfortunately these predictions have a spread, weakening the sensitivity of CDF’s search for odderon exchange, but still allowing a useful limit to be set on the production of odderons in this mode.

After publishing results on exclusive lepton-pair and photon-pair production, the CDF collaboration scored a hat-trick in 2008 when it published results on exclusive di-jet production, as in figure 1d (Aaltonen et al. 2008). Using a tracker deployed in a Roman pot some 66 m from the interaction point to tag the unbroken antiproton, in conjunction with a large rapidity gap on the deflected proton side, the team defined a sample of potentially exclusive events. The greater the fraction of the central-system mass carried by the two jets, the “more exclusive” the events were expected to be. This expectation was borne out by the Monte Carlo simulation for central exclusive production (Monk and Pilkington 2005), in agreement with the predictions of the Durham Group (Khoze et al. 2007). Figure 4 shows an event display of an exclusive di-jet candidate. Also, as the di-jet fraction of the overall central mass of the event tended to one – and the exclusive di-jet sample became purer and purer – evidence for b-jet suppression was seen, as theoretically expected. As in the case of exclusive gamma-gamma and χc0 production, this is an example of the Tevatron acting as a gluon–gluon collider. The detection at the Tevatron of these exclusive processes, resulting from gluon–gluon interactions, strongly suggests that exclusive production of the Higgs boson by the similar process would be detected at the LHC.

Although forward proton detectors have been used to study Standard Model physics for a couple of decades, the new landscape revealed by exclusive physics at hadron colliders has been fully realized only in the past few years. In this arena, the LHC is not only preparing to take the baton from the Tevatron, but also to enter the race with greatly improved tools. The FP420 R&D project is planning to provide the means to measure the displacement and angle of the outgoing protons from exclusive interactions by deploying high precision “edgeless” silicon trackers less than a centimetre from the beam, at ±420 m from the beam intersection points of the ATLAS and CMS experiments at the LHC (Albrow et al. 2008). This gives these experiments the ability to calculate the proton momentum loss and transverse momentum, allowing the mass of the centrally produced system to be reconstructed with a resolution of a few GeV/c2 per event whatever the central system. Broadly speaking then, in the exclusive physics arena, the LHC becomes a “multi-collider”, where the gluon–gluon, photon–photon, or photon–pomeron beam energy is known.
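
As a sketch of why tagging both protons fixes the central mass (the standard relation for central exclusive production, stated here for completeness rather than taken from the article): if ξ1 and ξ2 are the fractional momentum losses of the two outgoing protons measured by the forward detectors, the mass of the centrally produced system X is

\[
M_X \;\simeq\; \sqrt{\xi_1\,\xi_2\,s},
\]

where √s is the proton–proton collision energy, so an accurate measurement of the two momentum losses determines the central mass event by event, independently of how the system X decays.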

The ability of the FP420 detectors to measure intact protons from an exclusive interaction, in conjunction with the associated centrally produced system using the current ATLAS and/or CMS detector, will provide rich new perspectives at the LHC on studies in QCD, electroweak physics, the Higgs sector and beyond Standard Model physics. In some scenarios, these detectors may be the primary means of discovering new particles at the LHC, with unique ability to measure their quantum numbers. The addition of the FP420 detectors will thus, for a relatively small cost, significantly enhance the discovery and physics potential of the ATLAS and CMS experiments. The existence proof provided by the exclusive physics results from the Tevatron shows that such a programme is feasible.

Gamma-ray astronomers convene in Heidelberg

Over the past few years, the quality and diversity of data from modern imaging atmospheric Cherenkov telescopes (IACTs) has revolutionized gamma-ray astronomy. With ground-based instruments, detailed imaging of the gamma-ray sky at 100 GeV to 100 TeV has become a reality and a wealth of information is currently being gathered about the universe. The 4th Heidelberg International Symposium on High-Energy Gamma-Ray Astronomy (γ 2008) was a timely opportunity to review the status and perspectives of this young field of astroparticle physics.

The Heidelberg Symposium is a well established series of conferences organized by the Max-Planck-Institute for Nuclear Physics (MPIK) in Heidelberg, a leading institute of the H.E.S.S. collaboration, which operates an array of four IACTs in Namibia. After fruitful meetings in 1996, 2000 and 2004, the 4th symposium, which took place in July this year, celebrated an important breakthrough in gamma-ray astronomy. More than 50 very-high-energy (VHE) gamma-ray sources – with energies above 100 GeV – have been discovered since the last symposium, when only about 20 such sources were known.

This tremendous progress was reflected in the high-quality contributions at γ 2008. Twenty-six invited speakers reviewed the status of the field and related disciplines, and discussed the perspectives for gamma-ray astronomy and astroparticle physics in general. In addition, 56 spoken contributions and some 200 poster presentations addressed a range of topics. The number of abstracts submitted to the conference was significantly higher than for the 2004 symposium, reflecting again the growing interest in gamma-ray astronomy round the world. Talks were given in plenary sessions, allowing for lively discussions among the 300 experts from different fields of astroparticle physics. A significant amount of time was also devoted to the poster sessions, which took place in the relaxing atmosphere of “coffee and cake”, a typical German tradition.

VHE gamma-ray astronomy is currently being driven by four large installations of Cherenkov telescopes: the MAGIC telescope (La Palma, Canary Islands) and the VERITAS telescope array (Arizona, US) in the northern hemisphere; and the H.E.S.S. (Khomas Highlands, Namibia) and CANGAROO-III (Woomera, Australia) arrays in the southern hemisphere. While the northern instruments focus mainly on the observation of extragalactic objects and transient phenomena such as gamma-ray bursts, the southern arrays provide an excellent view of the inner Milky Way and are therefore also devoted to observations of Galactic objects.

As testified in short status reports at the symposium, MAGIC, VERITAS and H.E.S.S. are operating successfully. However, as Ryoji Enomoto of Tokyo University pointed out, the CANGAROO-III array is currently operating only two of its four telescopes, owing to severe mirror deterioration and lack of funding. There were also reports on results from joint observation campaigns on various targets, such as the nuclei of the active galaxies Mkn 421 and M 87. These campaigns provide a way of cross-calibrating the instruments and result in an enhanced energy coverage. Upgrades of MAGIC (with the installation of a second 17 m telescope) and H.E.S.S. (with the installation of a single 28 m telescope in the centre of the existing four 12 m telescopes) to increase their sensitivity are well underway, and first light is expected in late 2008 and 2009, respectively.

After almost a decade of successful operation, the Milagro experiment – a 2000 m², large field-of-view water Cherenkov detector in New Mexico – has stopped data taking after mapping the northern gamma-ray sky at multi-tera-electron-volt energies. Compared with the Cherenkov telescopes that point to regions of the sky, Milagro’s wide field of view allowed it to monitor the sky continuously, albeit at a higher energy threshold and with rather worse angular resolution. Although energy estimation is difficult for Milagro, Petra Hüntemeyer of the Los Alamos National Laboratory reported on the experiment’s recent success in measuring the energy spectra of sources up to 100 TeV. Plans to replace the instrument with the High Altitude Water Cherenkov (HAWC) project, which would be 10 times larger and more sensitive, were presented in a special session dedicated to future instruments. This session also included discussion of the science issues related to the next generation of gamma-ray instruments.

The key European future project in VHE gamma-ray astronomy is the Cherenkov Telescope Array (CTA). Several tens of medium-sized Cherenkov telescopes will form the core of the CTA observatory, providing a 10-fold boost in sensitivity in the tera-electron-volt energy range compared with current instruments, as well as improved angular resolution. Additional large telescopes at the centre of the array will extend the energy range down to some tens of giga-electron-volts and a widespread halo of telescopes should add enough detection area to reach well into the 100 TeV range. CTA sites in the northern and southern hemispheres should allow full-sky coverage. In this context, the symposium served to foster the already intense communication between CTA and the project for the Advanced Gamma-ray Imaging System in the US, which has similar goals.

Just a few weeks before the conference, the astrophysics community celebrated the successful launch of the Fermi Gamma-ray Space Telescope (formerly GLAST) satellite, a gamma-ray observatory that will provide data in the energy range of approximately 20 MeV to 300 GeV (see Fermi Gamma-ray Space Telescope sees first light). Together with future ground-based instruments, this instrument will enable gamma-ray observations over seven decades of energy and a direct cross-calibration of ground-based and space-borne instruments for the first time. The prospects for joint observations between Fermi and the Cherenkov telescopes were an important topic at the meeting, discussed by Stefan Wagner of the Landessternwarte Königstuhl in Heidelberg and Stefan Funk of SLAC, among others.

Physics highlights at γ 2008 included the discovery by the H.E.S.S. collaboration of the remnant of the historical supernova SN 1006 in VHE gamma rays. After more than 100 hours of observing time, H.E.S.S. now sees the remains of a massive stellar explosion, which Chinese astronomers reported in 1006, with a statistical significance of six standard deviations above the background. As Melitta Naumann-Godo of the Laboratoire Leprince-Ringuet pointed out, the preliminary image of the object seen by H.E.S.S. resembles the morphology of non-thermal X-ray filaments in the north-west and south-east part of the supernova remnant shell (see figure 1). Because these filaments are produced by synchrotron radiation of electrons that have been accelerated to an energy of about 100 TeV, SN 1006 has long been a prime target for Cherenkov telescopes; it is only the improved sensitivity of the current instruments that has made its discovery possible.

The detection of pulsed emission from the Crab pulsar by the MAGIC collaboration provided another highlight at the symposium. Many of the VHE gamma-ray sources in our galaxy can be identified with pulsar wind nebulae, but no VHE gamma-ray emission had previously been observed from a pulsar itself. The search for pulsed emission – which is well established at energies up to the giga-electron-volt range – matches the continuous efforts to minimize the energy threshold of Cherenkov telescopes. Using a special trigger setup, the MAGIC collaboration succeeded in lowering the threshold of their telescope to 25 GeV, making the detection of pulsed emission possible. Thomas Schweizer of the Max-Planck-Institute for Physics in Munich presented a VHE gamma-ray phasogram from 22 hours of observations of the Crab pulsar, which shows two distinct peaks corresponding to the main pulse and the interpulse. The data are in phase with observations at lower energies and with simultaneous measurements in the optical waveband carried out by the MAGIC collaboration.

Overall, the symposium showed that the stage is set for a bright future in gamma-ray astronomy. As Felix Aharonian of MPIK said in his concluding remarks: “Gamma-ray astronomy has evolved into a new astronomical discipline. Our observations meet all the key features usually attributed to astronomy: imaging, energy spectra, light curves, surveys…”. The community is now looking forward to seeing many new results at the next symposium, which will take place around 2012.

Fermi Gamma-ray Space Telescope sees first light

On 26 August, NASA and the US Department of Energy announced the first-light results of the Gamma-Ray Large Area Space Telescope (GLAST). At the same time GLAST changed its name to the Fermi Gamma-ray Space Telescope. Built in an international collaboration of astrophysicists and particle physicists with important contributions from research institutions in France, Germany, Italy, Japan, Sweden and the US, Fermi is expected to discover thousands of new sources of different classes, thus tackling many unresolved questions of fundamental physics, astronomy and cosmology. The telescope is already detecting high-energy gamma-rays from a wealth of cosmic sources – including super-massive black holes in active galactic nuclei, supernova remnants, neutron stars, galactic and solar system sources, and gamma-ray bursts (GRBs) – with more than 30 times the sensitivity of its successful predecessor, the Energetic Gamma Ray Experiment Telescope (EGRET).

Shedding light on many fundamental physics questions

Gamma-rays are produced by the interaction of high-energy charged particles with local matter, magnetic fields or ambient photons, and thus give insight into the physical conditions prevailing in these exotic sources. The physics of the particle acceleration mechanisms believed to be operational in many of these objects was first proposed by Enrico Fermi, who is now honoured with the new name of the telescope. Through investigation of the most extreme places in the universe, Fermi will shed light on many fundamental physics questions, such as the nature of the ubiquitous dark matter. Dark matter particles could decay or annihilate into gamma-rays and possibly give rise to unambiguous signatures in gamma-ray spectra, which could be used to infer or constrain the properties of the original particles. In understanding dark matter, observations with Fermi will therefore be an essential complement to searches for new particles at CERN.

The main instrument on board Fermi is the Large Area Telescope (LAT), which detects gamma-rays between 20 MeV and 300 GeV. The addition of the secondary GLAST Burst Monitor (GBM) – an instrument primarily dedicated to the detection of GRBs between 8 keV and 30 MeV – gives Fermi a total coverage of seven decades in energy. The aspect ratio of the LAT allows for a large field of view, observing 20% of the whole visible sky at any instant, while the GBM provides complete sky coverage for the detection of GRBs.

Fermi was launched by NASA on 11 June from the Cape Canaveral Air Station in Florida, for a 5–10 year long mission. The first 60 days of data taking constituted the commissioning phase, which went smoothly thanks to the thorough preparatory work undertaken by the whole international Fermi Mission team. During this period, teams undertook the calibration and verification of the performance of the different subsystems. Background rejection, a key element to the success of the mission, proved very satisfactory. Then, on 14 August Fermi entered the phase of nominal science operations, surveying the complete gamma-ray sky every three hours.

The figure below shows the LAT all-sky image released on 26 August. Created using only 95 hours of “first light” observations from the early commissioning phase, this corresponds in source sensitivity to a whole year of observations by EGRET. The map shows gas and dust in the plane of the Milky Way emitting gamma rays owing to collisions with cosmic rays. Other clearly visible sources include the Crab, Geminga and Vela pulsars in our own Galaxy as well as the blazar 3C454.3, an active galaxy located 7.1 billion light-years away. This source appears particularly bright in the map as it was in a flaring state at the time of the acquisition, as the Fermi/GLAST collaboration announced through the Astronomer’s Telegram.

Fermi has since witnessed the intrinsically dynamic nature of the gamma-ray sky with the detection of another three active galactic nuclei in a high flaring state and the detection of two GRBs with giga-electron-volt energy emission. These bursts were detected by the LAT in coincidence with the GBM, which has also detected another 30 lower-energy bursts since its turn-on on 25 June.

The LAT is a pair-conversion telescope, which consists of an array of 4 × 4 towers, each comprising a precision converter/tracker and a calorimeter. Each tracker module has 18 x-y tracking planes, which contain single-sided silicon strip detectors (400 μm thick with a 228 μm pitch) interleaved with a high-Z converter material (tungsten). The tracker has an active surface of 70 m² (comparable to the Inner Tracker of the ATLAS detector at CERN, with just over 60 m²) and 900,000 digital channels.

Each calorimeter module consists of 96 CsI(Tl) crystals, which are 2.7 cm × 2.0 cm × 32.6 cm in size and are arranged in eight layers of 12 crystals, each layer forming a hodoscope (x-y) array. The total depth of the calorimeter is 8.6 radiation lengths (out of 10.1 radiation lengths for the whole instrument). The transverse dimensions of the crystals are comparable to the CsI radiation length (1.86 cm) and the Molière radius for electromagnetic showers (3.8 cm). The segmentation allows for spatial imaging of the shower profile and accurate reconstruction of the shower direction, thus making possible the high energy reach of the LAT and improving background rejection.
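
As a quick consistency check (my arithmetic, on the assumption that the 2.0 cm crystal dimension is the vertical stacking depth):

\[
8\ \text{layers} \times 2.0\ \mathrm{cm} \;=\; 16\ \mathrm{cm}, \qquad \frac{16\ \mathrm{cm}}{1.86\ \mathrm{cm}} \;\approx\; 8.6\ \text{radiation lengths},
\]

which reproduces the quoted depth of the calorimeter.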

The tracker is surrounded by an anticoincidence detector (ACD), consisting of 89 plastic scintillator tiles of different sizes, which are read out by wavelength-shifting fibres coupled to photomultiplier tubes. The ACD is used to reject charged cosmic rays and therefore must have a high efficiency for charged-particle detection (at least 0.9997). The segmentation is optimized to limit the effect of “backsplash” (secondaries produced in the interaction of high-energy photons with the heavy calorimeter, giving a signal in the ACD), which reduced the efficiency of EGRET by at least a factor of two at energies above 10 GeV. The calibration of the LAT is based on a combination of in-orbit and ground-based cosmic-ray data, together with beam tests performed at CERN (at the PS and SPS) and at GSI, and Monte Carlo simulations using Geant4.

The GBM, which is dedicated to the detection of GRBs, includes 12 sodium iodide (NaI) scintillation detectors and two bismuth germanate (BGO) scintillation detectors. The NaI detectors cover the lower part of the energy range, from a few kilo-electron-volts to about 1 MeV, and provide burst triggers and locations. The BGO detectors cover the energy range from about 150 keV to around 30 MeV, providing a good overlap with the NaI at the lower end, and with the LAT at the high end.

Within only a few days of turn-on, using data originally planned for observatory calibration, Fermi has already corroborated many of the great discoveries both of EGRET and of AGILE. The LAT instrument is already finding new sources. Such spectacular results have only been achieved thanks to an advanced design for the observatory, which makes use of state-of-the-art particle-physics instrumentation that gives Fermi exceptional resolution and sensitivity. As a result, understanding of the high-energy universe is sure to grow tremendously, but even more exciting could be the unexpected, as history shows that opening new observational windows often yields completely unanticipated discoveries.

The institutions participating in the collaboration built and qualified the LAT subsystems which then were integrated at SLAC. The detectors for the GBM were produced at the Max-Planck-Institute for Extraterrestrial Physics in Garching, and were integrated at the Marshall Space Flight Center in Huntsville, Alabama. Both instruments were integrated with the spacecraft at General Dynamics, in Phoenix, Arizona, to form the Fermi observatory. Environmental testing was then performed both at General Dynamics and at the Naval Research Lab in Washington DC.

INTEGRAL pinpoints acceleration

Several sources of very high-energy gamma-rays are associated with pulsars, revealing that these spinning neutron stars are extremely powerful particle accelerators. The discovery with ESA’s International Gamma-ray Astronomical Laboratory (INTEGRAL) satellite that the gamma-ray emission of the Crab Nebula is strongly polarized along the direction of its spin axis locates the acceleration site in the close vicinity of the pulsar.

The Crab Nebula is the aftermath of a supernova explosion witnessed by Chinese and Arab astronomers in the year 1054. The core of the dying star collapsed to form a neutron star while the outer layers were expelled; their on-going interaction with the interstellar medium produces the beautiful remnant seen nowadays. A neutron star can be thought of as a giant atomic nucleus about 20–30 km across, in which each cubic millimetre weighs about 100,000 tonnes. The neutron star at the centre of the Crab Nebula is actually a pulsar sending radiation pulses 30 times per second, each time the magnetic pole of the spinning neutron star points towards the Earth.

The high-resolution X-ray image of the Crab Nebula obtained by NASA’s Chandra satellite revealed a complex geometry with a collimated jet, thought to be aligned with the spin axis of the neutron star, surrounded by a toroidal, doughnut-shaped structure. However, the much lower resolution of current hard X-ray and gamma-ray instruments cannot locate precisely the site of high-energy emission within the Crab Nebula.

A possible clue comes from the study of the polarization properties of the high-energy radiation, a difficult task that has now been achieved for the first time by European astronomers analysing data from INTEGRAL’s spectrometer. The study, led by Anthony Dean of the University of Southampton, is based on more than 600 individual observations of the Crab taken between February 2003 and April 2006.

The polarization of a gamma-ray photon can be derived if it is scattered off an electron from one detector element to another. This Compton scattering has a preferred direction related to the polarization angle of the incoming photon. About half a million such events were detected from the Crab Nebula during the quiescent phase of the pulsar cycle, with photon energies between 0.1 and 1 MeV. These data were then fitted to the results of intensive Monte Carlo simulations using GEANT4. The best fit was obtained for a polarization of 46 ± 10% and a polarization angle of 123° ± 11°, closely aligned with the direction of the pulsar spin and the X-ray jet.
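
Schematically (the standard Compton-polarimetry relation, given for orientation rather than taken from the paper), the azimuthal distribution of the scattered photons is modulated with respect to the polarization angle φ0 of the incoming radiation:

\[
N(\phi) \;=\; N_0\left[\,1 + a\,\Pi\,\cos 2\!\left(\phi-\phi_0+\tfrac{\pi}{2}\right)\right],
\]

where Π is the polarization fraction and a is the instrumental modulation factor determined from the Monte Carlo simulations; fitting the measured distribution yields the quoted values of Π and the polarization angle.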

This large fraction of polarized photons implies that the high-energy electrons emitting them are accelerated with a high degree of order in a structure apparently closely linked to the spin axis of the pulsar. By considering either synchrotron radiation or curvature radiation, Dean and colleagues estimate a typical electron energy of 10¹⁴ to 10¹⁵ eV. This is about 1000 times the energy reached by CERN’s LEP collider and is enough to explain the production – by interactions with visible or microwave photons within the Crab Nebula – of the very high-energy gamma-rays detected by Cherenkov telescopes.

D0 observes b-version omega

The D0 collaboration at Fermilab’s Tevatron has made the first observation of the Ωb, consisting of two s quarks and a b quark. This follows the discovery at Fermilab of the strange b baryon, Ξb, in 2007, and echoes that of the original Ω particle.

The prediction of the original Ω dates back to the early 1960s, when assigning the known baryons to symmetry groups according to properties including spin, isospin and strangeness hinted at the existence of a new, triply strange spin-3/2 baryon with a charge of –1. In a triumphant interplay between experiment and theory, the particle was discovered in 1964 in a photograph taken at the 80-inch bubble chamber at Brookhaven National Laboratory. Subsequent events turned up soon after at CERN. The success of the symmetry group structure led to the quark model, with three initial types or “flavours” of quark, u, d and s, where the s quark endows the property of strangeness. The Ω is a baryon consisting of three s quarks (sss).

The subsequent decades revealed three additional flavours of quark, c, b and t, and the quark model now predicts the existence of baryons made of quarks of all flavours but t. (The heavy top quark, t, decays too quickly to form bound states.) This leads to new multiplets of spin-1/2 and spin-3/2 baryons of u, d, s and b quarks. The newly discovered Ωb baryon is a heavy cousin of the Ω, with a b quark replacing one of the s quarks, and occupies the position indicated in figure 1 for the spin-1/2 baryons.

Sifting through the data collected from proton–antiproton collisions at the Tevatron during 2002–2006, the D0 collaboration identified 18 Ωb candidate events at a mass of 6.165 ± 0.017 GeV/c2, approximately six times as great as the proton mass (Abazov et al. 2008). This makes it the heaviest baryon observed so far. The Ωb candidates were reconstructed from decay daughter particles: Ωb → J/ψΩ, J/ψ → μ+μ–, Ω → ΛK– and Λ → pπ–. While the Ω and Λ have decay lengths of a few centimetres, the Ωb travels only a millimetre or so before decaying. The analysis uses a sample of events with muon pairs from J/ψ decays, followed by successive reconstructions of Λ and Ω particles from charged tracks before a final combination of J/ψ and Ω candidates. Figure 2 shows the effective mass spectrum of the J/ψ and Ω combinations, with a peak of more than 5 σ significance and the observation of the Ωb.

The Ωb now joins the Σb± and Ξb baryons recently observed at the Tevatron. These new states allow detailed study of the strong force, which holds quarks together to form all baryons, and of the weak force, which is responsible for their decays.

Will the LHC surprise us?

Will the LHC surprise us? I hope so. Having failed to find any completely unexpected new physics for more than 30 years, we clearly need nature’s help to progress, and the case for expecting surprises is good.

CCvie1_10_08

The last really big surprise in particle physics was the discovery of the third charged lepton (the tau) in 1975. There have of course been many extremely important discoveries since then, and our understanding of particle physics has advanced enormously. But the only real surprises have been how well the Standard Model has worked, the accuracy with which experiments have been able to check its predictions, and the failure to find its missing ingredient (the mechanism that gives particles their masses: Higgs?), or any other physics beyond the Standard Model, apart from the major discovery of neutrino masses (which, however, was not a huge surprise as no principle required zero mass).

By the time of the major LEP summer study in 1978 the Standard Model was accepted by many, but by no means all, theorists and gaining supporters among experimenters. It was thought that “the (CERN) proton–antiproton collider [which had just been launched] should discover the Z, but apart from measuring its mass (with considerable errors) it will not allow us to investigate its properties in detail (it may also discover the W but this looks more difficult)”. It was argued that LEP1 would be needed to study the Z in detail (or, if it did not exist, discover what else damps the rising weak cross section at LEP energies, where the phenomenological low energy theory had to be wrong), and measure the number of neutrinos into which it can decay; LEP2 would be needed to study the W, and find the Higgs boson (or whatever else generates masses) if it had not been found at LEP1. The surprises (at least for theorists like me) were how easy it was to detect the W (which was discovered in 1983, shortly before the Z) and the accuracy of the LEP results, which led to the exciting discovery that the strengths of the electromagnetic and strong forces converge at high energies, supporting the idea that they are different manifestations of a single “grand unified” force.

At the 1978 LEP summer study the importance of insisting on a relatively long tunnel, so as not to compromise the energy of a later proton accelerator such as the LHC, was discussed, and this argument was used when LEP was approved in 1981. The first serious discussion of LHC physics took place in 1984. It was obvious that the time had come to launch R&D on LHC magnets but “less clear whether it is sensible to discuss (LHC) physics…without more complete results from the SPS collider, let alone data from LEP, SLC and HERA…crystal gazing is unusually hazardous following recent tantalizing hints of new discoveries from UA1 and UA2”. These hints, which turned out to be spurious (along with other hints of non-standard physics from Fermilab neutrino experiments, LEP and other experiments), remind us of the difficulty of exploring the frontier: we should not be surprised if there are false dawns at the LHC.

CCVie2_10_08

In 1984 it was stressed that the physics of mass generation was almost certain to be discovered at the LHC, if the question had not been settled at LEP, and that there are good reasons for expecting physics beyond the Standard Model in the LHC energy range – perhaps supersymmetry, which was discussed in some detail (it was only mentioned briefly at the 1978 summer study, although in the event a huge effort went into unsuccessful searches for supersymmetry at LEP). The case for the LHC was developed in more detail during the 1980s, but its essence has not changed.

The formal proposal to build the LHC, presented to the CERN Council in 1993, was introduced with the statement that it will “provide an unparalleled ‘reach’ in the search for new fundamental particles and interactions between them, and is expected to lead to new, unique insights into the structure of matter and the nature of the universe”. The LHC will take us a factor of 10 further in energy (at the level of the proton’s constituents) or, equivalently, to a tenth of the distance scale that has been explored so far. This alone is enough to whet scientific appetites. But pulses are really set racing by the knowledge that the LHC has a good chance of finding what generates masses (a single elementary Higgs field? Multiple or composite Higgs fields?…?) and may cast light on other mysteries, including why the mass of the W is so small compared with the scale of the proposed grand unification of the electroweak and strong interactions, the magnitude of the asymmetry between matter and antimatter in the universe, the number of quarks and leptons, and the origin of the dark matter and dark energy that pervade the universe.

What do I expect? I am fairly confident that Higgs, in some form, will show up. If the LHC finds the standard Higgs boson and nothing else I would be extremely disappointed as we would learn essentially nothing. (The biggest surprise would be to find nothing, which would take us nowhere, while making the case for going to much higher energies compelling but probably impossible to sell.) I think there is a reasonable probability that supersymmetry will be found, and I hope this happens: the most convincing arguments are that it is the only possible symmetry allowed by quantum field theory (the mathematical language of particle physics) that has not been found (why would nature utilise all possibilities but one?); “local” supersymmetry (and all the other “continuous” symmetries are local) requires the existence of gravity; and the idea of connecting matter (fermions) with force carriers (bosons) is very appealing, although against this must be set the extravagant proliferation of particles (none found, yet?) that this implies. I am somewhat less impressed by the fact that supersymmetry would stabilize the mass of the W, which is one of the arguments that could put supersymmetry in reach of the LHC.

Thanks to the dedication of the CERN staff the LHC is now starting, and thanks to the community of users around the world, the experiments are ready to take data. It is a fantastic project. I am confident that it will work superbly. I am almost certain that it will make important discoveries, and I hope they will include surprises.

A mechanism for mass

There’s a famous photograph of a young Nepalese climber standing on top of Everest in 1953. It’s the only picture there is, but Tenzing Norgay was not alone. Edmund Hillary, who declined to be photographed, accompanied him to the top. Who got there first? For a while, the two climbers refused to be drawn, saying that what matters is the achievement. And so it is with a mechanism developed in the 1960s to account for the difference between long and short-range interactions in physics.

CCInt1_10_08

In the early 1960s, particle physics had a problem. Long-range interactions, such as electromagnetism and gravity, could be explained by the theories of the day, but the short-range weak interaction, whose influence is limited to the scale of the atomic nucleus, could not. The idea that the carriers of the weak force must be heavy, while the carriers of long-range forces would be massless could account for the difference. Conceptually it made sense, but theoretically it couldn’t be done: where would the heavy carriers get their mass? There was no way to reconcile massive and massless force carriers in the same theoretical framework.

Inspired by the new theory of superconductivity put forward in the late 1950s by John Bardeen, Leon Cooper and Robert Schrieffer, theorist Yoichiro Nambu paved the way to a solution by postulating that a broken symmetry could generate mass. In doing so he in turn inspired three young physicists in Europe to take the next step.

A modest beginning

I met one of those physicists, Peter Higgs, in autumn 2007 in his apartment on the top floor of a walk-up block in Edinburgh’s New Town, with views over a leafy square. A slice from an LHC magnet greets visitors to the apartment, where the style is 1970s chic. Copies of Physics World and Scientific American are piled high on the coffee table, topped off with a copy of the satirical magazine Private Eye. Bound copies of The Gramophone line the shelves, and the living room’s prominent feature is a chair, optimally placed to make best use of the audiophile Leak hi-fi system.

A few months later, I met Robert Brout and François Englert in a spartanly furnished office, of the kind frequently occupied by professors emeriti, at the Université Libre de Bruxelles. “Do we speak English or French?” was my first question. “Robert will be happier with English,” came the reply. I hadn’t realised that Brout was a naturalized Belgian, and that the two had first worked together in 1959, when Brout hired Englert to join him at Cornell University to work on statistical mechanics.

CCInt2_10_08

As is so often the way with good ideas, the concept of the generation of particle mass through symmetry breaking was developed in more than one place at around the same time, two of those places being Brussels and Edinburgh. It was a modest beginning for a scientific revolution: just two short pages published on 31 August 1964 by Brout and Englert, and little more than a page from Higgs on 15 September. But those two papers were set to influence profoundly the development of particle physics right to this day.

All three scientists are careful to attribute credit to their forerunners, Nambu most strongly. Hints of other influences come from the fact that Higgs has been known to call spontaneous symmetry breaking in particle physics the relativistic Anderson mechanism, a reference to the Nobel prize-winning physicist Philip Anderson who published on the subject in 1963; and in lectures at Imperial College London students are told about the Kibble–Higgs mechanism, in a reference to a later paper published by Gerald Guralnik, Carl Hagen and Tom Kibble.

Brout’s inspiration goes back much further, to another place where symmetry is broken spontaneously in nature with macroscopic effects. “Ferromagnetism was a puzzle in 1900,” he told me, and it was solved by the French physicist Pierre Weiss in 1907. Essentially, symmetry is broken by the Brout–Englert–Higgs (BEH) mechanism because the ground state of the vacuum is asymmetric, rather like the alignment of the electrons’ magnetic moments in a ferromagnetic material. In the case of the BEH mechanism, however, it is structure in the vacuum itself that gives rise to particle masses. In the words of CERN’s Alvaro de Rújula: “The vacuum is not empty; there is a difference between vacuum and emptiness.”

The thing that fills the vacuum is a scalar field commonly known as the Higgs field. Some particles interact strongly with this field, others don’t, and it is the strength of the interaction with the field that determines the masses of certain particles. In other words, the carriers of the weak interaction, the W and Z particles, are sensitive to the structure of empty space. This is how the BEH mechanism can accommodate short and long-range interactions in a single theory. The long-awaited confirmation of the mechanism is expected in the form of excitations of the field appearing as scalar bosons (Higgs particles).
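
To make this a little more concrete, here is a minimal sketch in standard electroweak notation, assumed textbook relations rather than anything given in the interviews: the masses follow from each particle’s coupling to the vacuum value v (about 246 GeV) of the scalar field.

```latex
% Standard textbook relations (a sketch, not something quoted in the article):
% g and g' are the electroweak gauge couplings, y_f the Yukawa coupling of a
% fermion f, and v ~ 246 GeV the vacuum value of the scalar (Higgs) field.
\[
  m_W = \tfrac{1}{2}\,g\,v, \qquad
  m_Z = \tfrac{1}{2}\sqrt{g^2 + g'^2}\,v, \qquad
  m_f = \frac{y_f\,v}{\sqrt{2}},
\]
% while the photon, which does not couple to v, remains massless.
```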

Esoteric as this may seem, there are potential astronomical implications, since what particle physicists call the Higgs field is related to what cosmologists call the cosmological constant, or dark energy. Appearing to make up some 70% of the universe’s matter and energy, dark energy made itself apparent in the late 1990s, in observations of supernovae in the farthest reaches of the universe.

Renormalization

Despite the emergence of the BEH mechanism, particle physics still had a problem in the mid-1960s, because the underlying theory was literally not normal: it had not been shown to be renormalizable, and taken at face value it predicted abnormal results, such as probabilities of more than 100% for given outcomes. It needed to be renormalized, and that would take the best part of a decade. Brout and Englert toyed with the idea in 1966, but a rigorous renormalization had to wait until 1971, when Gerardus ’t Hooft, a student of Martinus Veltman at Utrecht University, published the first of a series of papers by student and supervisor that would rigorously prove the renormalizability of the theory. They were rewarded with a trip to Stockholm in 1999 to collect the Nobel Prize in Physics.

CCInt3_10_08

If Brout, Englert and Higgs had provided a cornerstone of the Standard Model, ‘t Hooft and Veltman gave it its foundations. From then, theoretical and experimental progress was rapid, and accompanied by a rich harvest of Nobel Prizes. In 1973, a team at CERN led by André Lagarrigue found the first evidence for heavy carriers of the weak interaction. In 1979, Sheldon Glashow, Steven Weinberg and Abdus Salam received the Nobel Prize for Physics for their work on unifying the electromagnetic and weak interactions, the theory in which the BEH mechanism plays its crucial role. Then in 1984, Carlo Rubbia and Simon van der Meer received the Nobel Prize for their decisive contributions to the programme that discovered the carriers of the weak force, the W and Z particles, at CERN in 1982–1983.

“The experimental discovery of the W and Z particles confirmed both the validity of the electroweak model,” explained François Englert, “and of the BEH mechanism.” There remained, however, a missing ingredient. A machine was needed that could shake the scalar boson of the BEH mechanism out of its hiding place in the vacuum of space. That machine is the LHC. Many scientists would bet, and indeed have bet, on the discovery of the particle, but however elegant and enticing the work of Brout, Englert and Higgs, no one can be sure that it is right until the scalar boson has been seen. Nature might have chosen to endow particles with mass in a different way, so until the particle is found, the BEH mechanism remains no more than speculation. Whatever the case, the LHC will give us the answer.

There are many stories as to how the BEH mechanism and its associated particle came to be named after Higgs. The one Higgs told me involves a meeting that he had with fellow theorist Ben Lee at a conference in 1967, at which they discussed Higgs’s work. Then along came renormalization, making field theory fashionable, and another conference. “The conference at which my name was attached to pretty well everything connected with spontaneous symmetry breaking in particle physics was in ’72,” explained Higgs. It was a conference at which Lee delivered the summary talk.

Brout, Englert and Higgs have rarely met, but they have much in common. All three came to field theory, then unfashionable with particle theorists, from different areas of science. “Sometimes you do things in a domain in which you are not an expert and it plays a big role,” explained Englert. “We had no reason to dismiss field theory because people didn’t use it.” The three also agree on many things – their inspiration, for one. “What was interesting me back in the early 1960s was the work of Nambu, who was proposing field theories of elementary particles in which symmetries were broken spontaneously in analogy to the way that it happens in a superconductor,” said Higgs. Englert put it slightly differently: “We were very impressed by the fact that Nambu transcribed superconductivity in terms of field theory,” he said. “That’s a beautiful paper.”

The three are in agreement about the results that the LHC might bring. “The most uninteresting result would be if we find nothing other than that which we’re most expecting,” said Englert. According to Higgs: “The most uninteresting result would be if they found the Higgs boson and nothing much else.” “If the Standard Model works, then we’re in trouble,” said Brout. “We’ll have to rely on human intelligence to go further,” said Englert, completing the thought. And the most interesting direction for physics? Gravity, they all concur. “Any crumbs that fall off it would have major effects on the world of elementary particles,” said Brout. “In my heart, gravity is the secret to everything.”

Physicists and mountaineers have much in common. They are on the whole fiercely competitive, yet collaborative at the same time, and they can be magnanimous to an extraordinary degree. “I was delighted to discover that we are sharing the prize,” Higgs said on being informed that the European Physical Society had awarded him a prestigious prize in 1997. “I get a lot of publicity for this work, but (Brout and Englert) were clearly ahead of me.”

So who did get there first? At Everest, it turns out to have been Hillary who put his foot on the summit first. In physics Brout and Englert were first to publish, but that’s not what matters. In physics, as in mountaineering, it’s the achievement that counts.
