
The CMS experiment puts physics onto the menu


When the LHC operates at peak luminosity, about 1000 million interactions will be produced and detected each second at the heart of the CMS experiment. However, only a tiny fraction of these events will be of major importance. As in many particle-physics experiments, a trigger system selects the most interesting physics in real time so that data from just a few of the collisions are recorded. The remaining events – the vast majority – are discarded and cannot be recovered later. The trigger system, therefore, in effect determines the physics potential of the experiment forever.

The traditional trigger system in a hadron-collider experiment comprises three tiers. Level 1 (L1) is mostly hardware and low-level firmware that selects about 100,000 interactions from the 1000 million or so produced each second. Level 2 (L2), which is typically a combination of custom-built hardware and software, then filters a few thousand interactions to be sent to the next level. Level 3 (L3), in turn, invokes higher-level algorithms to select the couple of hundred events per second that require detailed study.

At the LHC, proton bunches cross in the experiments at a rate of up to 40 million times a second – with up to 20 or so interactions per crossing. At CMS, each crossing can produce around 1 MB of data. The aim of the trigger system is to reduce the data rate to about 1 GB/s, which is the speed at which the data-acquisition system can record data. This implies reducing the event rate to around 100 Hz.

The novelty of the CMS trigger system is that the traditional L2 and L3 components are merged into a single system – the high-level trigger (HLT). This is a commercial PC farm that takes all of the interactions from L1 and selects the best 200–300 events each second. Therefore, at CMS the reduction in data rate is carried out in two steps. The L1 trigger, based on custom-built electronics, first reduces the number of events by a factor of around 400, while another factor of about 1000 comes from the HLT.
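
The two-step rate reduction described above can be sketched as a back-of-the-envelope check. The sketch below uses only the rounded figures quoted in the text; treat them as illustrative values, not exact design parameters.

```python
# Back-of-the-envelope check of the CMS trigger cascade, using only the
# rounded figures quoted in the text (treat them as illustrative values).

CROSSING_RATE_HZ = 40e6         # bunch crossings per second
INTERACTIONS_PER_CROSSING = 20  # pile-up interactions per crossing

L1_REDUCTION = 400    # factor applied by the Level-1 trigger
HLT_REDUCTION = 1000  # factor applied by the high-level trigger

interaction_rate = CROSSING_RATE_HZ * INTERACTIONS_PER_CROSSING
l1_output_rate = CROSSING_RATE_HZ / L1_REDUCTION
hlt_output_rate = l1_output_rate / HLT_REDUCTION

print(f"interactions produced per second: {interaction_rate:.0e}")  # 8e+08, i.e. roughly 1000 million
print(f"crossings per second after L1:    {l1_output_rate:.0f}")    # 100000
print(f"events per second after HLT:      {hlt_output_rate:.0f}")   # 100
```

The factors of 400 and 1000 multiply to the overall rejection of roughly 400,000, which is what brings 40 million crossings per second down to the order of 100 recorded events per second.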

The data from the collisions are initially stored in buffers, but the L1 electronics still has less than 3 μs to make a decision and transfer the data on to the HLT. Given this short time frame, the L1 trigger acts only on information with coarse granularity from the muon detectors and the calorimeters, which is used to identify important objects, such as muons and jets. By contrast, the HLT works with a modified version of the CMS offline event-reconstruction software, with full granularity for all of the sub-detectors, including the central tracker. To reduce the time taken, usually only the regions identified by the L1 trigger are read out and reconstructed, in a “regional reconstruction” process.

Such a system has never before operated at a particle collider. The advantage that this design buys is additional flexibility in the online selection system: the CMS experiment can run the more sophisticated L3 algorithms on a larger fraction of the collisions. In a three-tier system, experiments do this only on events that have been filtered through the second stage. With a two-tier trigger, CMS can do the more sophisticated filtering earlier in the game, so the experiment can look for more exotic events that might not have been recorded in a traditional trigger system. The price that CMS pays for this flexibility is a higher-capacity network switch and a larger “filter farm” of around 5000 CPUs.

Events à la carte

Running the trigger for a large experiment is a complex process because there are typically many conflicting needs coming from different detector and physics groups within the collaboration. As far as possible, everyone’s needs have to be covered – but this is no easy task. The CMS experiment is sophisticated and can do a great deal of different physics, but it all comes down to whether or not the events have been selected by the trigger. There is a constant struggle to make sure that the collaboration can maximize the physics potential of the experiment as a whole, while at the same time catering to the assorted tastes of the various groups.


The trigger “menu” can be thought of as a selection of triggers to suit all tastes. Some groups order just the entrée of established Standard Model physics, while others look to tuck in to the main course of Higgs particles, supersymmetry (SUSY), heavy-ion physics, CP-violation and so on. Those with a sweet tooth come with their minds set predominantly on the dessert of exotica – all of the new physics that is not related to the main course.

At a practical level, the menu consists of various paths that fall into one of three categories. First, inclusive trigger paths look at overall properties, such as total energy or missing transverse energy, which are particularly important for detector studies. Second, single-object paths identify objects, for example, an electron or a jet. These are valuable for physics studies, particularly for Standard Model processes. Third, multi-object paths contain a combination of single objects. The trigger menu pulls the various paths together and the filter farm executes the HLT algorithms as much as possible in parallel – the HLT has less than 100 ms to make a decision for an L1 rate of about 50 kHz. Figure 1 shows rates for several HLT paths for an instantaneous luminosity of 8 × 10³¹ cm⁻² s⁻¹.
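
A quick sanity check of the 100 ms figure: when events are dispatched to the farm in parallel, the average wall time available per event is simply the number of CPUs divided by the L1 output rate. The ~5000-CPU farm size is the figure quoted earlier in the article; this is an illustrative estimate, not the experiment's actual scheduling model.

```python
# Average per-event time budget for a parallel filter farm:
# each of N CPUs processes one event at a time, so the farm keeps up
# with an input rate R as long as the mean processing time <= N / R.

N_CPUS = 5000               # approximate filter-farm size quoted in the text
L1_OUTPUT_RATE_HZ = 50_000  # L1 accept rate quoted in the text

budget_s = N_CPUS / L1_OUTPUT_RATE_HZ
print(f"average HLT time budget per event: {budget_s * 1e3:.0f} ms")  # 100 ms
```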

The menu has to cover a range of physics: it must be as inclusive as possible, not only to accommodate more physics needs but also to make room for things that had not been considered when the experiment was running. For example, some theorists might come up with a new idea only after CMS has finished collecting data, but the experiment may have already captured what is needed if it has run with an “inclusive” trigger.


As the luminosity of the LHC increases, so does the collision rate, which means that tighter selection criteria need to be applied and the menu must constantly evolve to accommodate these needs. At CMS, physics groups – as well as detector groups – regularly submit proposals for triggers that they would like to have implemented. Requests are merged whenever possible into common triggers to simplify the menu. This makes it easier to maintain the menu as well as to spot mistakes and fix them. In addition, the bandwidth can be maximized if two groups share a trigger. For example, instead of two groups receiving a rate of 2 Hz each, they could devote 4 Hz to a common, more economical, trigger.

Once the proposals have been made, the Trigger Menu Development and HLT Code Integration Groups come up with a menu prototype, rather like a “tasting menu”. This takes all of the proposals into account and tries to implement them in a coherent trigger menu that stays within the overall rate and timing constraints while satisfying all appetites. While attempts are made to accommodate as many triggers as possible, if there are conflicting needs from different groups then the Physics and Trigger co-ordination has the final word.

The Trigger Performance Group then takes the prototype and runs it on “signal” events – using either real or simulated data – from all of the physics groups to test whether the menu picks out what it is supposed to select. If problems are found – and they often are – then the teams go back and fix them to produce the next prototype. At some point, the prototype will appear to be good enough to be deployed by the Trigger Menu Integration Group. This team then puts the menu online to test it, making sure that everything functions as expected. One important aspect of this validation is to verify that the full menu can run at the HLT within the budgeted time (figure 2).

Ever-changing ingredients

The CMS experiment has evolved since the early running period, when it was in commissioning mode, so that by the end of 2010 the collaboration could maximize the physics output. The trigger system has adjusted in parallel to reflect this changing reality. During the 2010 proton run, the Trigger Studies Group produced more than a dozen menus of L1 and HLT “dishes”, which successfully filtered CMS physics data over five orders of magnitude in luminosity, over the range 1 × 10²⁷–2 × 10³² cm⁻² s⁻¹.

Most of the triggers for the LHC start-up in March 2010 covered what was needed to understand the detector, such as calibration, alignment, noise studies and commissioning in general. Since then, these triggers have been gradually reduced to a minimum. The menu is now dominated by physics triggers, including a whole suite of new SUSY triggers that were deployed last September.


As mentioned above, the complexity of the trigger menu increases as a function of luminosity. Because the early interactions were at low luminosities, it was possible to be inclusive – to record as many events as possible. As the luminosity has increased, however, certain triggers have had to be sacrificed. Triggers for Standard Model physics have been the first to be reduced because the priority is to discover new physics. However, a fraction of the trigger bandwidth always goes to Standard Model physics, which is used as a reference.

Sometimes, triggers are removed because they are no longer needed or they have been replaced by more advanced versions. At other times, there is an overlap period to understand what the new trigger does compared with the old one.

The incredible performance of the LHC – which reached the luminosity target for the 2010 proton–proton collision run of 10³² cm⁻² s⁻¹ several weeks earlier than expected – has kept the trigger-system team on its toes. Over the next few years, the evolution of luminosity will continue to require the trigger “chefs” to produce creative menus to cope with the ever-changing range of ingredients on offer.

CMS studies energy imbalance in jets in heavy-ion collisions


The CMS experiment has released results of a new study that sheds more light on the phenomenon known as di-jet energy imbalance, which was recently observed in lead–lead collisions at the LHC. Indeed, during the first days of the heavy-ion run in November last year both the ATLAS and CMS experiments observed collisions with the production of jets – streams of particles collimated in a small cone around a given direction. In particular, they saw collisions containing two high-energy jets (di-jets), produced more or less back to back, in which there is an unusually large imbalance in the jet energy. In other words, the energy of the jet on one side was much less than that of the jet on the other side.

This energy imbalance could result from a modification of the energy and showering properties of the partons (quarks and gluons) created in the hard scattering collision, as they traverse quark–gluon plasma that may have formed in the head-on collisions. The results on this large di-jet asymmetry, shown in figure 1 for the CMS experiment, were presented publicly by the LHC experiments at a special seminar on 2 December. The measurements were based on the detection of high-energy deposits in the calorimeters by particles emerging from the collision, which were used to characterize the jets. The momentum imbalances observed in the data are significantly larger than those predicted by the simulations, especially for collisions that have a large “centrality”, i.e. for the most violent head-on collisions.

Since then the CMS collaboration has continued its efforts to try to understand this phenomenon in more detail, in particular by also studying the tracks of charged particles produced in head-on lead–lead collisions. Such an analysis can address basic questions. For example, how does the energy redistribution in the lowest-energy jet work? Does the energy flow sideways, out of the jet cone? Or does it end up as low-energy particles that remain within the jet cone, but become difficult for the calorimeters to detect efficiently?

The new data analysis suggests that in fact both effects are present.


Based on the analysis of the charged particles correlated with the jets, CMS observes that the lowest-energy jet indeed becomes wider and the particles in the jet become softer in energy. An important question is then how the energy of the most energetic jet becomes exactly balanced in these collisions. Figure 2 shows the result of the energy-balance study, with the total missing transverse momentum projected onto the axis of the leading jet, as a function of the di-jet energy asymmetry, for the most central type of collisions. The missing momentum is decomposed into contributions from particles in different intervals of particle momentum, to gain insight into what sort of particles contribute.

The top row shows Monte Carlo predictions, which do not include any physics effects that lead to asymmetries in jet energy; the bottom row shows the CMS data. The left-hand plots sum only the momenta of particles within the jet cones; the right-hand ones, the momenta of particles outside the cones. These distributions show clearly that part of the energy of the most energetic jet becomes balanced by particles on the opposite side, outside the jet cone. They also reveal that in the data – but not in the simulation – a large fraction of the balancing momentum is carried by particles with rather low momenta.

These results provide qualitative constraints on the nature of the jet modification in lead–lead collisions and a quantitative input to models of the transport properties of the medium created in these collisions. However, this is just the proverbial tip of the iceberg towards a detailed understanding of this phenomenon and many more studies can be expected soon.

LHCb makes first observations of interesting B0s decays


Using data collected in proton–proton collisions at the LHC at a centre-of-mass energy of 7 TeV, the LHCb experiment has observed new rare decay modes of B0s mesons for the first time. The decay B0s → J/ψ f0(980) will be important for studying CP violation in the B0s system, while the semileptonic decay B0s → D*–s2μ+ν will be valuable for testing QCD-based theoretical predictions.

The first new decay mode observed is of the hadronic decay B0s → J/ψ f0(980). This is particularly interesting because it is to a CP eigenstate, which means that it can be used in measuring mixing-induced CP violation. The B0s consists of a b antiquark (b̄) bound with an s quark, and can decay to a J/ψ (cc̄) together with an ss̄ state, which can be a φ or, more rarely, an f0. While the φ decays to K+K–, the f0 decays to π+π–. The collaboration analysed J/ψK+K– and J/ψπ+π– events to search for the relevant decay candidates. Finally, using a fit to the π+π– mass spectrum with two interfering f0 resonances (f0(980) and f0(1370)), they measured a ratio of the B0s decays to J/ψ f0(980) and J/ψ φ of 0.252 +0.046/–0.032 (stat.) +0.027/–0.033 (syst.) (LHCb collaboration 2011a). The events close to the f0(980) could be used to measure the CP-violating phase for B0s decays, which is some 20 times smaller than in B0 mixing and hence much more sensitive to physics beyond the Standard Model.


The LHCb collaboration has also made the first observation of another decay, B0s → D*–s2μ+ν. The most frequent decays of the B0s involve the b quark changing into a c quark, resulting in a cs charm hadron, such as a Ds or D*–s, or other excited states. The relative proportion of such final states provides valuable information for testing theoretical models based on QCD. To investigate decays of this kind, the collaboration looked for final states in which the decay D0 → K+π– formed a vertex with a K– and a μ+. The analysis revealed two structures in the D0K– mass spectrum at masses consistent with the Ds1(2536) and D*s2(2573) mesons (LHCb collaboration 2011b). While the Ds1(2536) has been observed previously in B0s decays by the DØ collaboration at Fermilab’s Tevatron, LHCb’s result marks the first observation of the D*s2(2573) state in B0s decays. The measured branching fraction relative to the total B0s semileptonic rate for the D*s2(2573) comes out at 3.3±1.0(stat.)±0.4(syst.)%, while the value for the Ds1(2536) is measured to be 5.4±1.2(stat.)±0.5(syst.)%. These values agree well with the prediction of the updated Isgur–Scora–Grinstein–Wise quark model, ISGW2.

The observation of these two new decay modes demonstrates that the LHCb experiment is already competitive in the field of heavy flavour physics. Great progress is expected with the larger data sample due from the coming run, with the potential to constrain, or even observe, new physics.

ALPHA collaboration gets antihydrogen in the trap


On 17 November 2010 the ALPHA collaboration at CERN’s Antiproton Decelerator (AD) reported online in the journal Nature that they had observed trapped antihydrogen atoms by releasing them quickly from the magnetic trap in which they were produced and detecting the annihilation of the antiproton – the nucleus of the antihydrogen atom (Andresen et al. 2010a). This exciting result from a proof-of-principle experiment paves the way to detailed study of antimatter atoms.

Do matter and antimatter obey the same laws of physics? One intriguing way to test this would be to compare the spectra of hydrogen and its antimatter twin: antihydrogen. Such studies would build on almost a century of detailed theoretical and experimental investigation of the hydrogen atom, from the Bohr model to the ultraprecise measurements of Nobel laureate Theodor Hänsch and colleagues. The frequency of the 1s–2s transition in hydrogen has been measured with a precision of about 2 parts in 10¹⁴. The CPT theorem requires that this frequency must be exactly the same in antihydrogen. The goal of the ALPHA experiment is to test this claim – at least from the high-energy physics point of view. To the atomic physicist, for whom hydrogen is the basic, elegant workhorse of the evolution of quantum mechanics, the question is perhaps: “How could you possibly have access to antihydrogen and not try to measure that?”


While our colleagues at the LHC have been busily setting new records for the highest-energy stored hadrons, we at the AD have been headed in the other direction – setting a new record for the lowest-energy anti-hadrons. The antihydrogen atoms in ALPHA can be trapped only if their kinetic energy, in temperature units, is less than 0.5 K. This corresponds to about 9 × 10⁻⁵ eV, or 3 × 10⁻¹⁷ times the energy of protons in the LHC, which represents quite a dynamic range for CERN.
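
That "dynamic range" can be made concrete with a one-line calculation. The trap depth below is the ~9 × 10⁻⁵ eV figure quoted above; the 3.5 TeV per-proton LHC beam energy for that running period is an assumption, not stated in the text.

```python
# Compare the antihydrogen trapping depth with the LHC proton energy.
# trap_depth_eV is the ~9e-5 eV figure quoted in the article; the
# 3.5e12 eV (3.5 TeV) beam energy is an assumed 2010 LHC value.

trap_depth_eV = 9e-5
lhc_proton_energy_eV = 3.5e12

ratio = trap_depth_eV / lhc_proton_energy_eV
print(f"trap depth / LHC proton energy: {ratio:.0e}")  # 3e-17
```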

The low temperature necessary has been a daunting challenge for the ALPHA experimenters. Antihydrogen is formed by mixing antiprotons from the AD with positrons from a special accumulator fuelled by a 22Na positron emitter. The particles are mixed in cryogenic Penning traps, which feature strong solenoidal magnetic fields for transverse confinement and electrostatic fields for longitudinal confinement (figure 1). The resultant antihydrogen, which is electrically neutral, can be confined only by the weak interaction of its magnetic dipole moment with an external magnetic trapping field. The strength of this dipole interaction is such that, for ground-state antihydrogen, a 1 T-deep magnetic well can confine atoms with kinetic energy up to 0.7 K.

The atom trap in ALPHA comprises an octupole magnet and two solenoidal “mirror coils” (figure 1). These produce a magnetic minimum at the position at which the antihydrogen atoms are formed. If the atoms are formed with a kinetic energy of less than about 0.5 K (in temperature units), they are trapped. (This is for the ground state; excited atoms can have a larger magnetic moment and experience a deeper well.)

The difficulty lies in the transition from plasmas of charged particles to neutral atoms. The space-charge potential energies in the plasmas can be of order 10 eV – about 120 000 K in temperature equivalent. So one of the experimental challenges for antihydrogen trapping has been to learn how to cool and carefully manipulate the charged species to produce cold, trappable atoms.

At ALPHA, we mix about 30 000 antiprotons with about two million positrons in each attempt to trap antihydrogen. The two plasmas are placed in adjacent potential wells, as in figure 1, and the antiprotons are then driven into the positron plasma using a frequency-swept, axial electric field (Andresen et al. 2011). This drive is “autoresonant”, i.e. the oscillation frequency of the antiprotons in the nonlinear potential well locks automatically to the drive frequency. The idea is to control the energy of the antiprotons precisely by carefully tailoring the drive frequencies. The antiprotons enter the positron cloud with low relative energy and do not heat the positron cloud on entry.


The positrons themselves are self-cooling: they lose energy by radiation in the 1 T magnetic field in the Penning trap. We supplement this process using evaporative cooling. Starting with an equilibrated positron plasma in a potential well, we lower one side of the well, allowing the hottest positrons to escape. The remaining positrons re-equilibrate through collisions, settling to a lower temperature. The technique, which is well known in the field of Bose-Einstein condensation for neutral atoms, was also demonstrated by ALPHA on antiprotons in 2009 (Andresen et al. 2010b). After evaporative cooling, the positrons in ALPHA are at about 40 K. Under ALPHA conditions, the antiprotons can enter the positron plasma and come into thermal equilibrium before making antihydrogen. Thus, only a small fraction of the antihydrogen atoms produced will have a kinetic energy equivalent to less than 0.5 K.

Antiprotons and positrons are allowed to interact or “mix” for 1 s to produce antihydrogen, after which we remove any charged particles that remain trapped in the potential wells and then ground the electrodes of the Penning trap. The decisive step is to shut down the magnetic atom trap quickly to see if there are any trapped antihydrogen atoms that escape and annihilate on the walls of the device. However, even with the Penning trap’s electric fields turned off, there is still a small chance that antiprotons could be magnetically trapped due to the mirror effect in the strong magnetic field gradients in the atom trap. To eliminate this possibility, we apply pulsed electric fields along the axis of the trap, in alternating directions, so as to kick any stubborn antiprotons out of the trapping volume.

The ALPHA experiment’s superconducting atom-trap magnets, manufactured at Brookhaven National Laboratory, can be turned off with a time constant of about 9 ms. This fast shutdown helps to discriminate between antihydrogen annihilations and cosmic rays.

Antiproton annihilations are detected by an imaging, three-layer silicon vertex detector (see figure 2) that surrounds the cryostat for the traps and magnets. To be absolutely sure that any annihilations observed come from neutral antimatter and not from charged antiprotons, we apply an axial electric “bias” field to the trap while it is shutting off. While antiprotons would be deflected by this field, antihydrogen is not, and we can see the result using the position-sensitive silicon vertex detector. The silicon detector is also extremely useful in topologically rejecting cosmic rays.

The result of many trapping attempts is shown in figure 3, reproduced from the article in Nature. Each trapping attempt takes about 20 minutes of real time. In 335 trapping attempts, we observed 38 annihilations consistent with the controlled release of trapped antihydrogen atoms. The spatial distribution of these annihilations is not consistent with the expected behaviour of charged particles (figure 3). We can conclude that neutral antihydrogen atoms were trapped for at least 172 ms, which is the time it took to eject the charged particles from the trap and to apply the multiple field pulses to ensure the clearing of mirror-trapped antiprotons.

In subsequent experiments, we made good progress on improving the trapping probability and investigated the storage lifetime of antihydrogen atoms in the trap. At holding times of up to 1000 s, we still see the signal for the release of trapped atoms. This is an encouraging result that leads us to be optimistic about the future of spectroscopic studies with trapped antihydrogen.

When the AD starts up again in 2011, we hope to pick up where we left off in 2010. The first step is to continue to improve the trapping probability for produced antihydrogen atoms, by, for example, working on reducing the positron temperature and studying improvements in the mixing manipulations to make colder antihydrogen. As regards the spectrum of antihydrogen, the 1s–2s laser transition described above is not the only game in town. Microwaves can interact with antiatoms in the magnetic trap, either with the positron spin (positron spin resonance) or with the antiproton spin (antinuclear magnetic resonance). Paradoxically, using rare atoms of antimatter can offer a detection bonus for such experiments, as a resonant interaction can lead to loss and annihilation of the trapped atom – an event that can be detected with high efficiency. At ALPHA we hope to take the first steps towards microwave spectroscopy – the first resonant look at the inner workings of an antiatom – in 2011. At the same time we will be working on a new atom-trapping device that is optimized for precision measurements with both lasers and microwaves.

Having demonstrated trapping of antihydrogen atoms, the ALPHA collaboration was able to finish off the year by celebrating the honour of being recognized as the Physics Breakthrough of the Year for 2010 by Physics World magazine. We shared this honour with our friends across the wall at the AD in the ASACUSA collaboration, who produced antihydrogen in a new type of device that could lead to in-flight studies of the antiatoms. Finally, the American Physical Society news staff named our trapping of antihydrogen as one of the top ten physics-related news stories of 2010. All in all, 2010 was a vintage year for antimatter at the AD.

At the cusp in ASACUSA


Last December, the cusp-trap group of the Japanese–European ASACUSA collaboration demonstrated for the first time the efficient synthesis of antihydrogen, in a major step towards the production of a spin-polarized antihydrogen beam. Such a beam will allow, for the first time, high-precision microwave spectroscopy of ground-state hyperfine transitions in antihydrogen atoms, enabling tests of CPT symmetry (the combination of charge conjugation, C, parity, P, and time reversal, T) – the most fundamental symmetry of nature. The new experiment may also shed light on one of the most profound mysteries of our universe: the asymmetry between matter and antimatter. Why is it that the universe today is made up almost exclusively of matter, and not antimatter? Scientists believe that the answer may lie in tiny differences between the properties of matter and antimatter, manifested in violations of CPT symmetry.

Testing CPT symmetry

Antihydrogen, made up of an antiproton and a positron, is attractive for testing CPT symmetry given its simple structure. In particular, comparisons of antihydrogen’s transition frequencies with those of ordinary hydrogen atoms will provide stringent tests of CPT symmetry. For this purpose, the ATRAP and ALPHA experiments under way at CERN’s Antiproton Decelerator (AD) aim to make high-precision measurements of the transition frequency between the ground state (1s) and first excited state (2s) of antihydrogen, which is close to 2466 THz, in the realm of laser spectroscopy. The ALPHA collaboration made an essential breakthrough in this approach when they successfully trapped antihydrogen for the first time in November.

The ASACUSA experiment, also at the AD, is taking the complementary approach of measuring precisely the transition frequency between the two substates of the ground state that arise from hyperfine splitting as a result of the interaction between the two magnetic moments associated with the spins of the antiproton and the positron. The collaboration aims to measure the ground-state hyperfine transition frequency, which is about 1420 MHz in the microwave region, by extracting a spin-polarized antihydrogen beam in a field-free region. Last December, the cusp-trap group of ASACUSA reported that the cusp trap, which is designed not to trap antihydrogen but to concentrate spin-polarized antiatoms into a beam, succeeded in synthesizing antihydrogen atoms with an efficiency as high as 7%. This is a big step towards the realization of high-precision microwave spectroscopy of the ground-state hyperfine transition in antihydrogen.


The cusp trap uses anti-Helmholtz coils, which are like Helmholtz coils but with the excitation currents antiparallel rather than parallel to each other. This arrangement yields a magnetic quadrupole field that has axial symmetry about the coil axis: a so-called cusp magnetic field (figure 1). In addition, an axially symmetric electric field is generated by an assembly of multi-ring electrodes (MREs) that is coaxially arranged with respect to the coils. Having axial symmetry, these magnetic and electric fields guarantee the stable storage and manipulation of a large number of antiprotons and positrons simultaneously – one of the unique features of the cusp trap. Furthermore, the magnetic field distribution of the cusp trap can produce an intensified antihydrogen beam with high spin-polarization in low-field-seeking (LFS) states. In other words, antihydrogen atoms can be tested for CPT symmetry in a field-free (or weak field) region – a vital condition for making high-precision spectroscopy a reality. These properties are exclusive to the cusp-trap scheme.

As figure 1 shows, the extracted beam is injected into a microwave cavity, followed by a sextupole magnet and a spin analyser, and then focused on an antihydrogen detector (shown in red). When the microwave frequency is in resonance with one of the hyperfine transition frequencies, it induces a spin flip, which converts the LFS state into a high-field-seeking (HFS) state. In this case, the antihydrogen beam becomes defocused (shown in purple), a transition that is easily monitored by an intensity drop in the antihydrogen detector. As is evident from this description, the cusp trap scheme does not need to trap antihydrogen atoms, but it can do so if necessary. The big advantage is that a large number of antihydrogen atoms with higher temperatures can participate in the measurements.


The AD at CERN supplies a pulsed antiproton beam of around 3 × 10⁷ particles per pulse at 5.3 MeV, which is slowed down to 120 keV in ASACUSA by the radio-frequency quadrupole decelerator. For the antihydrogen experiments the beam is then injected into an antiproton catching trap (called the MUSASHI trap) through two layers of thin degrader foil. In this way, about 1.5 × 10⁶ antiprotons per AD shot are accumulated in the trap, where they are cooled with preloaded electrons. The antiproton cloud is then radially compressed by a “rotating wall” technique to allow efficient transportation into the cusp trap. The positrons that make up the antihydrogen are supplied via a compact all-in-one positron accumulator that was designed and developed for this research. Both antiprotons and positrons are then injected into the cusp trap to synthesize cold antihydrogen atoms. A 3D track detector monitors the cusp trap to determine the annihilation position of antiprotons by tracking charged pions. The detector comprises two pairs of two modules, each with 64 horizontal and 64 vertical scintillator bars that are 1.5 cm wide.
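
From the numbers just quoted, one can read off the fraction of each AD pulse that survives the degrading and is caught in the MUSASHI trap; the ~5% figure below is derived here, not stated in the article.

```python
# Antiproton bookkeeping for the ASACUSA beamline, using the numbers
# quoted in the text: ~3e7 antiprotons arrive per AD pulse, and ~1.5e6
# are accumulated in the MUSASHI catching trap after degrading.

antiprotons_per_pulse = 3e7
antiprotons_trapped = 1.5e6

capture_fraction = antiprotons_trapped / antiprotons_per_pulse
print(f"capture efficiency per AD shot: {capture_fraction:.0%}")  # 5%
```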

Inside the cusp trap

Figure 2 shows schematically the structure of the central part of the cusp trap. The MRE is housed in a cryogenic ultrahigh-vacuum bore tube held at a temperature of a few kelvin, in good thermal contact with the tube while remaining electrically insulated from it. Thermal shields at 30 K located at both ends of the MRE prevent room-temperature radiation from creeping in from the beamline. Outside the MRE part of the bore tube, five superconducting coils installed symmetrically with respect to the MRE centre provide the cusp magnetic field. On the downstream side, the bore diameter is expanded for efficient extraction of the antihydrogen beam.

In the recent experiment, antihydrogen atoms were synthesized by mixing antiprotons and positrons in the nested trap region, as shown by the blue solid line (φ1) in figure 3. Because antihydrogen atoms are neutral, they are not trapped and move more or less freely, so some of them reached the field-ionization trap (FIT). Antihydrogen atoms formed via a three-body-recombination process in high Rydberg states, i.e. relatively loosely bound, are field-ionized there and their antiprotons accumulate in the FIT. During the experiment, the FIT was opened (as indicated by the dash-dotted line, φ2) every 5 s and the accumulated antiprotons were released and counted by the 3D tracker through their annihilations. This gave the antihydrogen synthesis rate as a function of time since the start of the mixing process. Figure 4 shows an example of the evolution of the synthesis rate for 3 × 10⁵ antiprotons and 3 × 10⁶ positrons, in which the rate grew in the first 20–30 s and then gradually decreased. In this case, a total of around 7 × 10³ antihydrogen atoms was synthesized.

The ASACUSA collaboration is now looking forward to starting the microwave spectroscopy of hyperfine transition frequencies – which may lead to groundbreaking insights into the nature of antimatter and symmetry.

Exploring the low-energy precision frontier

PSI2010, the 2nd International Workshop on the Physics of Fundamental Symmetries and Interactions at low energies and at the precision frontier, brought together experimentalists and theoreticians, united by a common quest for experimental precision using probes as diverse as neutrons, antiprotons, muons, atoms, molecules and even condensed-matter samples. The meeting, which was aimed at consolidating recent results and planning future directions in the field, took place at the Paul Scherrer Institut (PSI) on 11–14 October and was supported by PSI and the Swiss Institute for Particle Physics (CHIPP).

With 146 participants from 17 countries, the format of the workshop encouraged lively discussions, helping to promote the transfer of information within the community. Results were presented in 65 plenary talks and some 30 posters, most of which related to experiments. With PSI being a world-leading centre for muon, pion and neutron physics, many presentations concerned investigations with neutrons (40%) and pions or muons (30%). This reflected both the high local interest and the strength of the worldwide community – about three-quarters of the presentations were on work at facilities other than PSI.

Gearing up for new physics

The workshop began with a talk on “How to look at low-energy precision physics in the era of the LHC” given by Daniel Wyler of the University of Zurich. He described how low-energy precision physics is complementary to the search for new physics at the LHC and how it can even answer specific questions that reach beyond the LHC – a theme that was highlighted in other talks. The final results from the TRIUMF Weak Interaction Symmetry Test (TWIST) experiment on muon decay demonstrate the impact of precision results on, for example, left–right symmetric models or sterile neutrinos, as TRIUMF’s Glen Marshall explained.

Fundamental neutron physics, introduced by Torsten Soldner of Institut Laue-Langevin (ILL), cropped up in several sessions. These covered recent controversial results on neutron-lifetime measurements in storage bottles and results on neutron decay at ILL and the Los Alamos National Laboratory (LANL), as well as new proposals for measurements with higher sensitivity. Peter Geltenbort of ILL provided a special twist to the topic with his results on the efficient guiding capabilities for ultracold neutrons (UCNs) using coated commercial Russian water hoses.

The search for permanent electric dipole moments (EDMs) of fundamental particles was discussed by several speakers, who covered the majority of the present worldwide efforts. Michael Ramsey-Musolf of the University of Wisconsin discussed the paramount importance of permanent EDMs and their cosmological implications, and he set the scene for several talks on the experimental searches for a neutron EDM at ILL, the Spallation Neutron Source, PSI, Osaka and TRIUMF. Ben Sauer of Imperial College showed new data on the search for the electron EDM in ytterbium fluoride, while Blayne Heckel reported on activities to improve on the present world record in the experiment on mercury at the University of Washington. Future directions for EDM searches and co-magnetometers using ¹²⁹Xe or neutron crystal diffraction were introduced in further talks, as well as in posters.

Part of the workshop was devoted to violations of space–time symmetry. Ralf Lehnert of Universidad Nacional Autónoma de México outlined the theoretical framework of the extension to the Standard Model that causes oriented universal fields, which could typically manifest themselves in daily or yearly time variations of physics observables. On the experimental side, Michael Romalis of Princeton University and Werner Heil of the University of Mainz presented impressive new limits from searches for violations of Lorentz symmetry in clock-comparison experiments using, respectively, the K–³He system and ¹²⁹Xe and ³He.

Searches for extra forces were introduced by Hartmut Abele of the Technical University of Vienna, who described using gravitational states of UCNs, while Anatoli Serebrov of the Petersburg Nuclear Physics Institute (PNPI) discussed the potential of stored UCNs for detecting dark matter. Several presentations also covered the search for tensor-type weak currents in nuclear beta-decay, using the WITCH experiment at ISOLDE at CERN and the LPCTrap facility at GANIL. Seth Hoedl of the University of Washington showed new results of an axion search based on a torsion pendulum. There were also reports on the status of the ALPHA and ASACUSA experiments at CERN, which aim at atomic spectroscopy of antihydrogen and related CPT tests, and CERN’s Michael Doser explained the AEGIS experiment to probe gravity with antihydrogen.

On the facilities side, a special session provided an excellent overview of the present status of UCNs – a flourishing global area. This included reports on the performance of UCN sources in operation at LANL and the University of Mainz, as well as on the status of construction at the Technical University Munich and commissioning at PSI. Proposals for future UCN sources at the Japan Proton Accelerator Research Complex (J-PARC), TRIUMF and the PNPI were also shown at the workshop.

Several sessions were devoted to muon physics. Peter-Raymond Kettle of PSI reported on the latest results of the MEG experiment searching for the lepton-flavour violating μ → e + γ decay. The community is currently planning ahead for the next generation of searches for rare muon decays, as became clear when Bob Bernstein from Fermilab explained the Mu2e proposal, which will search for the neutrinoless conversion of muons to electrons, and Andre Schöning of Heidelberg University suggested a new μ → 3e search at PSI. Efforts towards considerably higher muon beam intensities were presented for the Research Centre for Nuclear Physics at Osaka, J-PARC and PSI, and Harry Van der Graaf of Nikhef presented new silicon-gas detectors that could be used at such future facilities.

In one of the highlights, Dave Hertzog of the University of Washington presented the newly released final result on the muon lifetime from the MuLan experiment at PSI, which gives a new determination of the Fermi weak coupling constant to 0.6 ppm. The competing muon-lifetime experiment at PSI, FAST, was presented by Eusebio Sanchez of CIEMAT, who showed the current status of the analysis and gave the outlook for results expected soon.

Laura Marcucci of the University of Pisa explained the motivations for precision measurements of muon capture in the context of theoretical efforts in effective field theory, while Peter Winter of the University of Washington detailed the on-going MuSun experiment to determine precisely the rate of muon capture in deuterium. Results and opportunities from pion decays were discussed by Dinko Počanić of the University of Virginia.

The new proton charge-radius result from the muonic-hydrogen Lamb-shift experiment, presented by Aldo Antognini of the Swiss Federal Institute of Technology (ETH) Zurich, revived a heated discussion about the results published earlier in 2010. Theory still struggles to explain the discrepancy between the muonic and ordinary hydrogen Lamb-shift results, both of which involve QED calculations. While optical hydrogen spectroscopy and QED appear to be in agreement with electron-scattering data, the muonic-hydrogen result, which is far more precise, is 5σ from the CODATA value. Antognini went on to explain how all systematic errors in the muonic experiment are found to be far below the observed difference.
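
The size of the tension can be reproduced from the published numbers; the values below (muonic result 0.84184(67) fm, CODATA-2006 value 0.8768(69) fm) are quoted from memory here, so treat them as illustrative rather than as inputs from the talk:

```python
# Illustrative check of the quoted ~5-sigma tension in the proton charge radius.
# Values are recalled from the 2010 publications, not taken from this article.
r_muonic, err_muonic = 0.84184, 0.00067   # fm, muonic hydrogen Lamb shift
r_codata, err_codata = 0.8768, 0.0069     # fm, CODATA-2006 (mostly e-H data)

diff = abs(r_codata - r_muonic)
sigma = (err_muonic**2 + err_codata**2) ** 0.5   # uncertainties added in quadrature
print(f"discrepancy ≈ {diff / sigma:.1f} sigma")  # ≈ 5.0
```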

Aside from the programme of talks, the poster session provoked lively discussions among participants, enhanced by locally brewed draught beer and grilled specialities. There was also the opportunity to gather at organized evening events. In particular, a special trumpet concert linked music to physics through the performance of modern interpretations of Baroque masterworks and through the demonstration of acoustic phenomena in a special quadraphonic opus composed by one of the performers, Eckhard Kopetzki. The workshop dinner took place at the local historic grape-pressing cellar (Trotte), an easy stroll from the workshop site. The Swiss speciality of raclette cheese was served freshly melted, accompanied by the sounds of alphorns.

Many participants expressed their wish for a repeat of this low-energy precision physics workshop at PSI – the best indication of the workshop’s success. This also showed the growing interest in the field, in which various experiments and particle sources will soon come online.

ATLAS observes striking imbalance of jet energies in heavy ion collisions

The ATLAS experiment has made the first observation of an unexpectedly large imbalance of energy in pairs of jets created in lead-ion collisions at the LHC (G Aad et al. 2010). This striking effect, which is not seen in proton–proton collisions, may be a sign of strong interactions between jets and a hot, dense medium (quark-gluon plasma) formed by the colliding ions.

Concentrated jets of particles are formed in the head-on (central) collisions of lead ions at the LHC. The jets materialize from the hadronization of quarks and gluons scattered from the protons and neutrons in the colliding ions. If a quark-gluon plasma is formed in the collisions of the high-energy ions, then as the jets materialize they will traverse this hot, dense medium. In so doing they should lose energy to the medium through multiple interactions, in a process called jet quenching.

The jets are most often produced in pairs (dijets) travelling in opposite directions with equal transverse energies, but if the jets travel different distances before escaping the medium, then their energies will no longer be equal. Experiments at the Relativistic Heavy Ion Collider at Brookhaven observed signs of this effect in single-particle distributions; however, the result from ATLAS represents the first direct observation of energy loss by jets, and the first in which the effect is visible on an event-by-event basis (figure 1).

The excellent angular coverage, segmentation and energy resolution of its calorimeters make ATLAS well suited to measuring jets. For this analysis, the collaboration looked at a sample of 1693 events with at least one jet having transverse energy greater than 100 GeV. They then characterized the energy imbalance in the dijets by the ratio of the difference of the jet transverse energies to their sum. This dijet asymmetry ratio turns out to vary with the centrality of the colliding nuclei, as figure 2 shows: the fraction of events with a given asymmetry is plotted versus the measured asymmetry for four ranges of centrality, with the most central events in the plot at the right and the least central at the left.
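
The asymmetry variable described above is conventionally written A_J = (E_T1 − E_T2)/(E_T1 + E_T2) for the leading and subleading jets; a minimal sketch (the function name is ours, not ATLAS analysis code):

```python
def dijet_asymmetry(et_leading: float, et_subleading: float) -> float:
    """Ratio of the difference of the jet transverse energies to their sum.

    Returns 0 for a perfectly balanced dijet pair and approaches 1 as the
    subleading jet loses most of its energy (e.g. through quenching in
    the hot, dense medium).
    """
    return (et_leading - et_subleading) / (et_leading + et_subleading)

# A balanced pair and a strongly imbalanced pair (illustrative values):
print(dijet_asymmetry(100.0, 100.0))  # 0.0
print(dijet_asymmetry(100.0, 50.0))   # ~0.33
```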

The plots show the asymmetry for lead-ion collisions at a centre-of-mass energy of 2.76 TeV per nucleon pair and for 7 TeV proton–proton collisions, together with the prediction from a Monte Carlo simulation that does not include interactions between the jets and the medium. The measured asymmetry clearly increases with centrality: the distribution broadens and the mean shifts to higher values. To confirm the effect, the collaboration performed numerous studies to verify that events with large asymmetry are not produced by energy fluctuations, background or detector effects.

The observation of this centrality-dependent dijet asymmetry by ATLAS has a natural interpretation in terms of QCD energy loss and may point to a strong energy loss by the jets in the quark-gluon plasma. The asymmetry has also been reported by the CMS collaboration and a related effect in single particle distributions has been reported by the ALICE collaboration, at a seminar at CERN together with ATLAS on 2 December. The result, together with others presented at the seminar, marks the beginning of a broad and exciting programme of heavy-ion physics at the LHC.

CMS announces first results of search for SUSY

At the “LHC end-of-year jamboree” at CERN on 17 December, the CMS collaboration announced the first results of its search for supersymmetry (SUSY) at the LHC.

SUSY is one of the strongest candidates for physics beyond the Standard Model that could be detected in proton–proton collisions at the LHC. If it exists in nature, it could solve many of the outstanding issues in particle physics, such as the gauge-hierarchy problem. SUSY would reveal itself through the production of new heavy particles, and it could also deliver a natural candidate particle to explain the large density of dark matter in the universe.

This first result is based on proton–proton collision events with multiple jets and missing transverse energy. The dataset corresponded to an integrated luminosity of 35 pb⁻¹ collected between March and October 2010 at a centre-of-mass energy of 7 TeV. Large missing transverse energy is a key characteristic of SUSY event candidates, reflecting the supposition that the lightest SUSY particle is expected to be neutral, stable and weakly interacting – thereby escaping detection.

After stringent cuts to reduce the background arising from Standard Model processes that can fake missing transverse energy or that may contain escaping neutrinos, 13 events remained. The collision data also allowed estimates of the expected numbers of background events from Standard Model processes and these are consistent with the number of observed events. As a consequence, the present data do not yet show evidence for SUSY; however, they significantly extend previous search results.

The figure illustrates the reach of the CMS analysis with respect to other experiments in the plane of the universal scalar and gaugino masses (m0 and m1/2, respectively) at the grand unified theory scale of the constrained minimal supersymmetric extension of the Standard Model (CMSSM), after just one year of LHC data-taking. The observed limit significantly improves those set previously by other experiments, thus further constraining the masses of SUSY particles.

Physicists are now looking forward to the 2011 physics run at the LHC, which is expected to bring a data sample that could be as much as two orders of magnitude larger than the present one.

Planck reveals a stellar first year

The cosmic microwave background (CMB) is one of the most powerful resources that cosmologists have to investigate the evolution of the universe since its earliest moments. Like a “fabric” that permeates the cosmos, it holds information about the temperature distribution, keeping a permanent memory of all of the events that the universe has gone through. In particular, its anisotropies – deviations from the isotropic distribution that characterizes the universe – contain the signatures of the primordial perturbations that gave birth to the large-scale structure of the universe observed today.

Reading among these ripples in the CMB is by no means easy because they appear as tenuous fluctuations (1 part in 100,000) in a cold background at 3 K. In May 2009, ESA’s Planck spacecraft was launched into space to prise out the secrets hidden there. The result of about 20 years of work by the international Planck collaboration, it is a third-generation satellite that follows on from the Cosmic Background Explorer (COBE) and the Wilkinson Microwave Anisotropy Probe (WMAP). Since mid-July 2009, Planck has been orbiting at the second Lagrangian point (L2) of the Earth–Sun system, 1.5 million kilometres from Earth. It carries on board a Low Frequency Instrument (LFI), consisting of an array of 22 radiometers, and a High Frequency Instrument (HFI), which has 48 bolometric detectors. Since its launch, Planck has performed extremely well. The two instruments have so far scanned the whole sky almost three times in nine different frequency channels, with a sensitivity that is up to 10 times better and an angular resolution up to 3 times better than that of its most recent predecessor, WMAP (figure 1).

On 11 January, the Planck collaboration released its first catalogue of compact astrophysical sources. This is the first full-sky source catalogue to cover the frequency range 30–857 GHz at nine different frequencies. It includes a wide variety of source types, from nearby objects in our galaxy, through various classes of radio and dusty galaxies, to distant clusters of galaxies.

Because Planck is optimized to measure the CMB, the catalogue turns out to be an extremely powerful tool for identifying the cold objects that populate the interstellar medium (ISM) and measuring their temperature accurately. In this task Planck is allied with the Herschel space observatory, which ESA launched on the same rocket. Herschel, designed to study cold objects, is not a survey telescope; rather, its purpose is to look closely at one part of the sky at a time. Planck and Herschel are thus good companions, whereby Planck provides the whole-sky survey and points Herschel to interesting locations that it can focus on.

Among the sources detected by Planck are “protostellar objects”, that is, clusters of matter that could give rise to a star. The complex processes at the origin of stars are among the hottest topics for astronomers, who carefully investigate the properties of the ISM to identify the trigger factors for star formation. Researchers at many Earth-based observatories will be able to use data from Planck to improve our understanding of these processes.

After only a few months of observation, Planck is also shedding light on another component of the ISM: namely, spinning dust grains. These are tiny aggregates of matter that appear to be slightly bigger than molecules such as CO₂. They spin and radiate with a particular spectrum. Planck has for the first time been able to reconstruct this spectrum at high frequencies and so confirm that the spinning dust grains really do exist. This opens up a completely new field of study for astronomers, who will now have to understand the exact nature and behaviour of this intriguing component of the ISM.

Moving away from the interior of the Galaxy, one of the major contributions of the first part of Planck’s scientific programme is the identification of clusters of galaxies and the study of their properties through the signature that they leave in the CMB when its photons travel through the hot gas of the cluster. This is the Sunyaev-Zel’dovich effect, in which photons in the CMB increase in energy through inverse Compton scattering off hot electrons in the galaxy clusters. As a consequence, along the cluster direction, the CMB temperature increases at high frequency (>217 GHz) and decreases at low frequency (<217 GHz) with a well defined frequency spectrum, observable by Planck thanks to its wide frequency coverage.
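
The 217 GHz crossover quoted above follows from the non-relativistic thermal Sunyaev-Zel'dovich spectral shape g(x) = x·coth(x/2) − 4, with x = hν/kT_CMB: the distortion is negative below the zero of g and positive above it. A numerical sketch using standard constants (our own illustration, not Planck collaboration code):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
T_CMB = 2.725        # CMB temperature, K

def g(x: float) -> float:
    """Non-relativistic thermal SZ spectral shape: negative below the null
    (temperature decrement), positive above it (increment)."""
    return x / math.tanh(x / 2) - 4

# Find the zero of g by bisection; g(1) < 0 and g(10) > 0 bracket the root.
lo, hi = 1.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

x0 = 0.5 * (lo + hi)                      # dimensionless null, ~3.83
nu_null_ghz = x0 * K * T_CMB / H / 1e9    # convert back to frequency
print(f"x0 ≈ {x0:.3f}, null frequency ≈ {nu_null_ghz:.0f} GHz")  # ≈ 217 GHz
```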

Matter in the universe is grouped in enormous clusters surrounded by vast, empty spaces. These clusters can contain hundreds of galaxies and large amounts of dark matter. Dark matter consists of particles observed so far only through their gravitational effect; their exact nature remains unknown. Observing clusters of galaxies is crucial to understanding why matter of any kind aggregates in this fashion. The Sunyaev-Zel’dovich effect can be used to estimate the total mass of the cluster, which, when combined with X-ray observations, can in turn provide evaluations of the proportion of dark matter. The list of sources of this type that Planck has identified through the Sunyaev-Zel’dovich effect is 2–3 times larger than those published so far by the best observatories on Earth.

Planck has also been able to extend the spectrum of conventional radio sources. Previously this was known up to about 100 GHz but Planck has now pushed this to 857 GHz, giving new insight into the behaviour of these sources and the physical processes involved.

This first set of results is just the beginning of the Planck adventure. There will be more accurate catalogues and further findings in astrophysics, followed in early 2013 by Planck’s crucial contributions to cosmology. While the theoretical models used at present in cosmology seem to fit the current observations well, they require important components whose nature is not yet known – dark matter and dark energy. A major aim of the Planck mission is to cast light on both of these enigmatic components.

Dark energy is yet another contribution to the energy density of the universe, being different from dark and ordinary matter. It is presumed to provide the current acceleration to the expansion of the universe and its existence is inferred from observations of Type Ia supernovae, of the CMB and of the baryon acoustic oscillations that are determined by surveying galaxies at different cosmic epochs. The equation-of-state of dark energy characterizes the late and future evolution of the universe. Planck will be able to measure the parameters ρ (energy density) and w (ratio of pressure to ρ) of the equation-of-state with an accuracy that is expected to be an order of magnitude greater than for the previous data from WMAP. Moreover, studies of CMB anisotropies will allow the Planck collaboration to distinguish between various theoretical models that do not consider new ingredients in the energy-budget of the universe (such as dark energy and dark matter) but, rather, change the Einstein equations (as for example in “modified gravity” models).
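
In the standard Friedmann cosmology, the equation-of-state parameter w mentioned above fixes how the dark-energy density dilutes with the scale factor a (a textbook relation, not a Planck-specific result):

```latex
\rho_{\mathrm{DE}}(a) \propto a^{-3(1+w)}, \qquad w \equiv \frac{p}{\rho}
```

so w = −1 (a cosmological constant) gives a density that stays constant as the universe expands, while any measured deviation from −1 would point to dynamical dark energy.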

As far as gravity is concerned, Planck’s contribution will depend on which theoretical model best describes the evolution of the universe. Among the many models that try to explain the initial conditions of the Big Bang, two have gained particular prominence: one is the inflationary model, in which the early universe underwent a period of exponential expansion; the other is the “bouncing model”, in which the universe was contracting, “bounced” at the time when quantum gravity was important and began to re-expand. Inflationary models generally generate gravitational waves that can in principle be detected by Planck, depending on their amplitude, the value of which is a feature of the specific inflationary model. By contrast, the bouncing models do not predict gravitational waves. Planck will also constrain the expected deviation from a Gaussian distribution of the primordial fluctuations that are imprinted in the CMB. This feature is more characteristic of the bouncing models than of the inflationary ones. While Planck will not have the final say in this field, it will indeed have the opportunity to rule out several models.

In addition to questions directly related to cosmology and astrophysics, Planck will also address a number of problems that are linked to particle physics and the Standard Model. It will improve by at least a factor of three the accuracy of the limits on the mass and number of neutrino species, which WMAP currently sets at 0.56 eV and 4.3 ± 0.8, respectively. Planck may also provide limits on the mass of the Higgs boson in certain theoretical models in which a Higgs-inflaton field is non-minimally coupled to gravity.

According to current theories, the conditions of the universe today were set at the time of inflation, about 10⁻³⁵ s after the Big Bang. The LHC below ground and Planck in deep space are allies in probing these first moments of the universe’s evolution. While the physicists at CERN are seeking to reproduce the conditions of the early universe, with Planck we observe the first light that came out of this “soup” of matter and radiation. Particle physicists, as well as astrophysicists and cosmologists, must work towards a concordant description of this early epoch in which data from the different sources fit together to give a consistent picture of the universe that we all inhabit.

Warwick hosts a feast of flavour

In September 2010, the University of Warwick played host to CKM2010, the 6th International Workshop on the CKM Unitarity Triangle. The CKM workshops, named after the Cabibbo-Kobayashi-Maskawa matrix that describes quark mixing in the Standard Model, date from 2002 when the first meeting took place at CERN. The workshop has since established itself as one of the most important meetings in the field.

With a two-year gap since the previous meeting, there was much at CKM2010 for theorists and experimentalists alike to discuss. This was the first time since the inauguration of the series that the workshop occurred with neither of the B-factory experiments – BaBar and Belle – being operational. A generation of experiments in charm and kaon physics have also completed data-taking. While much is being done to archive the knowledge that has been accumulated from this era, the organizers of CKM2010 chose instead to look to the future.

Uncharted territory

Only by looking forwards is it possible to address the many open questions in flavour physics, which Paride Paradisi of the Technische Universität München presented in the first of the opening plenary sessions. The biggest issue, perhaps, concerns the fact that there is still no real understanding of the underlying reason for the flavour structure of the Standard Model. More pressing, however, is the so-called “new-physics flavour puzzle”: how is the need for physics beyond the Standard Model at the tera-electron-volt scale – to resolve the hierarchy problem – to be reconciled with the absence of such new physics in precision flavour measurements? The most popular solution is the “minimal flavour violation” hypothesis, which can be tested by observables that are either highly suppressed or precisely predicted in the Standard Model.

Two sectors where the experimental measurements do not yet reach the desired sensitivity are those of the D0 and Bs mesons. Guy Wilkinson of the University of Oxford described the progress made at Fermilab’s Tevatron over the past few years, emphasizing the potential of the LHC experiments at CERN – particularly LHCb – to explore uncharted territory. It will be interesting to see if the datasets with larger statistics confirm the hints of contributions from new physics to Bs mixing that have been seen by the CDF and DØ experiments at the Tevatron. The large yields of D, J/ψ, B and ϒ mesons already observed by the LHC experiments augur well for exciting results in the near future.

However, the LHC will not be the only player in flavour physics in the next decade. Yangheng Zheng of the Graduate University of the Chinese Academy of Sciences and Marco Sozzi of the Università di Pisa and INFN described the new facilities and experiments that are coming online in the charm and kaon sectors, respectively. The BEPCII collider in Beijing has achieved an instantaneous luminosity above 3 × 10³² cm⁻² s⁻¹, and the BES III collaboration has already published the first results from the world’s largest datasets of electron–positron collisions in the charmonium resonance region. The kaon experiments NA62 at CERN and KOTO at J-PARC are well on the way towards studies of the ultra-rare decays K⁺ → π⁺νν̄ and KL → π⁰νν̄.

Meanwhile, there are plans for a new generation of B factories, which Peter Križan of the University of Ljubljana and the J Stefan Institute described. The clean environment of electron–positron colliders provides a unique capability for various measurements, such as B⁺ → τ⁺ντ. The upgrade of the KEKB facility and the Belle detector to allow operation with a peak luminosity of 8 × 10³⁵ cm⁻² s⁻¹ (40 times higher than achieved to date) has been approved and construction is now ongoing, with commissioning due to start in 2014. The design shares many common features – most notably the “crab-waist” collision scheme – with the SuperB project, recently approved by the Italian government (Italian government approves SuperB).

Maximizing the impact of these new experiments will require progress in lattice QCD calculations. Junko Shigemitsu of Ohio State University described recent developments in this field, showing that accuracy below a per cent has been reached for several parameters in the kaon sector, with calculations using different lattice actions giving consistent results. In the charm sector, determinations of constants are approaching the per cent level of precision; this advance, when combined with new measurements, appears to have resolved the apparent discrepancy in the value of the Ds decay constant. Further work is needed to reach the desired level of precision in B physics but excellent progress is being made by several groups around the world.

The main body of the workshop consisted of parallel meetings of six working groups, which provided opportunities for detailed discussions between experts. The summaries from these working groups were presented in two plenary sessions on the final day.

Working group I, convened by Federico Mescia of the Universitat de Barcelona, Albert Young of the University of North Carolina and Tommaso Spadaro of INFN Frascati, focused on the precise determination of |Vud| and |Vus|. A measurement of the muon lifetime at a precision of one part per million by the MuLan collaboration determines the reference value of the Fermi coupling. Improved measurements of |Vud| and |Vus|, mainly from nuclear β-decay and (semi-)leptonic kaon decay, respectively, constrain the unitarity of the first row of the CKM matrix to better than one per mille. Interesting discrepancies in the measurements of the neutron lifetime and of |Vus| demand further studies.
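
The first-row unitarity test discussed by this working group is the statement |Vud|² + |Vus|² + |Vub|² = 1. With representative magnitudes of roughly 0.9742, 0.2243 and 0.0039 (illustrative values of the era, not the working group's actual inputs), the sum closes to well within a per cent:

```python
# Hypothetical illustrative CKM magnitudes; the fitted inputs differ in detail.
v_ud, v_us, v_ub = 0.9742, 0.2243, 0.0039

first_row = v_ud**2 + v_us**2 + v_ub**2
print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {first_row:.5f}")  # close to 1
```

Note that |Vub|² contributes only at the 10⁻⁵ level, which is why the test hinges almost entirely on |Vud| (nuclear β-decay) and |Vus| (kaon decays).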

Hint of new physics?

Working group II, convened by Jack Laiho of the University of Glasgow, Ben Pecjak of the University of Mainz and Christoph Schwanda of the Institute of High Energy Physics in Vienna, had as its subject the determination of |Vub|, |Vcb|, |Vcs| and |Vcd|. This is an area where dialogue between theorists and experimentalists has been extremely fruitful in driving down the uncertainties. Lively discussions continue, stimulated in part by the apparent discrepancies between inclusive and exclusive determinations of both |Vcb| and |Vub|. The latest data on the leptonic decay B⁺ → τ⁺ντ, which is sensitive to contributions from charged Higgs bosons, show an interesting discrepancy that may prove to be a first hint of new physics.

Working group III, convened by Martin Gorbahn of the Technische Universität München, Mitesh Patel of Imperial College London and Steven Robertson of the Canadian Institute for Particle Physics, at McGill University and SLAC, tackled rare B, D and K decays. One particularly interesting decay is B→K*l+l−, where first measurements of the forward-backward asymmetry by BaBar, Belle and CDF hint at non-standard contributions. This is exciting for LHCb, where additional kinematic variables will be studied. Inclusive rare decays, such as b→sγ, and those with missing energy in the final state are better studied in electron–positron collisions and help to motivate the next generation of B factories. Among other golden modes, improved results on Bs→μ+μ− and K→πνν̄ remain eagerly anticipated by theorists, who continue to refine the expectations for these decays in various models.

The fourth working group, convened by Alexander Lenz of the Technische Universität Dortmund and Universität Regensburg, Olivier Leroy of the Centre de Physique des Particules de Marseille and Michal Kreps of the University of Warwick, was concerned with the determination of the magnitudes and relative phases of Vtd, Vts and Vtb. While the Tevatron experiments have started to set constraints on these quantities from direct top production, with further improvement anticipated at the LHC, the strongest tests at present come from studies of the oscillations of charm and beauty mesons. Hints of new-physics contributions in the Bs sector provided the main talking point, but the potential for, and the importance of, improved searches for CP violation in charm oscillations were also noted.

Measurements of the angles of the unitarity triangle were the subject of the remaining two working groups. Working group V, convened by Robert Fleischer of NIKHEF and Stefania Ricciardi of the Rutherford Appleton Laboratory, focused on determinations of the angle γ using B→DK decays, while working group VI, convened by Matt Graham of SLAC, Diego Tonelli of Fermilab and Jure Zupan of the University of Ljubljana and the J Stefan Institute, covered measurements using charmless B decays. The angle γ plays a special role because it has negligible theoretical uncertainty. The precision of the measurements is not yet below 10°, leaving room for results from LHCb – combined with measurements from charm decays – to have a big impact on the unitarity-triangle fits. The measurements based on charmless decays, which are dominated by loop ("penguin") amplitudes, tend to have significant theoretical uncertainties that must be tamed to isolate any new-physics contribution. The main issue concerns developing methods to understand whether existing anomalous results (such as the pattern of CP asymmetries in B→Kπ decays) are caused by QCD corrections or by something more exotic.

A common feature of all working groups was the strong emphasis on the sensitivity to new physics and the utility of flavour observables to distinguish different extensions of the Standard Model. Less than two years after the award of the Nobel prize to Kobayashi and Maskawa “for the discovery of the origin of the broken symmetry which predicts the existence of at least three families of quarks in nature”, their greatest legacy – and that of Nicola Cabibbo (see box) – will perhaps be a discovery that finally goes beyond the paradigm of the Standard Model.

• CKM2010 was generously supported by the University of Warwick, the Science and Technology Facilities Council, the Institute for Particle Physics Phenomenology and the Institute of Physics.
