
Experiment recreates ‘seeds’ of the universe’s magnetic fields


How did magnetic fields arise in the universe? An experiment using a high-power laser to create plasma instabilities may have glimpsed the processes that created magnetic fields during the period of galaxy formation.

Magnetic fields pervade the cosmos. Measurements of synchrotron emission at radio frequencies from cosmic rays and of Faraday rotation reveal that they exist in galaxy clusters on the megaparsec scale, with strengths that vary from a few nanogauss to a few microgauss. Intergalactic magnetic fields weave through clusters of galaxies, forming even larger-scale structures. In these clusters the temperatures can often exceed 10⁸ K, making them strong X-ray emitters. It is possible that the energy to heat the plasma comes from the magnetic field through some plasma instability. In general, wherever intergalactic hot matter is detected, magnetic fields with strengths greater than 10⁻⁹ G are also observed – with weaker magnetic fields tending to occur outside galaxy clusters. The magnetic field therefore appears to play a role in the structure of the universe.

The only way to explain the observed magnetization is through a magnetic dynamo mechanism, in which it is necessary to invoke a “seed” field – but the origin of this seed field remains a puzzle. Prior to galaxy formation, density inhomogeneities would drive violent motions in the universe, forming shock waves that would generate vorticity on all scales. In 1997 Russell Kulsrud suggested that the “Biermann battery effect” could create seed magnetic fields as small as 10⁻²¹ G that would be amplified by the protogalaxy dynamo. In this effect, proposed by astrophysicist Ludwig Biermann in 1950, electric fields can arise in a plasma as the electrons and the heavier protons respond differently to pressure forces and tend to separate. The Biermann battery acts to create seed magnetic fields whenever the pressure and density gradients are not parallel.
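In schematic form (a sketch only, with the sign and the constant of proportionality omitted), the Biermann source term in the magnetic induction equation is a cross product of the electron pressure and density gradients, which makes explicit why no field is generated when the two gradients are parallel:

```latex
% Biermann battery source term (schematic; prefactor and sign convention omitted)
\left.\frac{\partial \mathbf{B}}{\partial t}\right|_{\mathrm{Biermann}}
\;\propto\; \frac{\nabla p_e \times \nabla n_e}{e\,n_e^{2}}
```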

Now, an international team of scientists has performed an experiment to recreate conditions similar to those in the pregalactic epoch, when shocks and turbulent motions formed. They used a high-power laser at the Laboratoire pour l’Utilisation des Lasers Intenses in Paris to explode a rod of carbon surrounded by helium gas in a field-free environment. Magnetic induction coils monitored the magnetic fields created in the resulting shock waves. The team found that the explosion generated strong shock waves, around which large electric currents and magnetic fields formed through the Biermann battery effect, with fields of 10–30 G persisting for 1–2 μs at 3 cm from the blast. When scaled through 22 orders of magnitude, the measurements matched the predynamo magnetic seeds predicted by theory prior to galaxy formation.
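As a rough consistency check using only the numbers quoted here, scaling the measured fields down by the stated 22 orders of magnitude does indeed land in the regime of the predicted seeds:

```latex
% order-of-magnitude check based on the figures quoted in the text
B_{\mathrm{seed}} \sim (10\text{--}30\ \mathrm{G}) \times 10^{-22} \approx 10^{-21}\ \mathrm{G}
```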

Looking at the top for new physics

Last year one of the properties of top-quark production, the tt̄ charge asymmetry, attracted much interest with the publication of measurements by the CDF and DØ collaborations at Fermilab’s Tevatron (T Aaltonen et al. 2011, V M Abazov et al. 2011). They reported results that were 2σ above the predicted values. This deviation can be explained by several theories that go beyond the Standard Model by introducing new particles that contribute to top-quark production. The CMS collaboration has now measured this top-quark property for the first time at the LHC – and finds a different result.


In the Standard Model, a difference in the angular distributions of top quarks and antiquarks (commonly referred to as the charge asymmetry) in tt̄ production through quark–antiquark annihilation appears in QCD calculations at next-to-leading order. It leads to more top quarks being produced at small angles to the beam pipe, while top antiquarks are produced more centrally (figure 1). As a consequence, the pseudorapidity distribution of top quarks is broader than that of top antiquarks, which makes the difference of the respective absolute pseudorapidities, Δ|η| = |η_t| – |η_t̄|, a suitable observable for measuring the charge asymmetry (figure 2). The Standard Model predicts a small asymmetry of A_C = 0.0136 ± 0.0008, which translates into an excess of about 1% of events with Δ|η| > 0 compared with events with Δ|η| < 0 (Kühn and Rodrigo 2011). The existence of new sources of physics could enhance this asymmetry.
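In terms of event counts, the asymmetry quoted here is the standard counting definition built from the sign of Δ|η|:

```latex
% charge asymmetry from the sign of Delta|eta|
A_C = \frac{N(\Delta|\eta| > 0) - N(\Delta|\eta| < 0)}{N(\Delta|\eta| > 0) + N(\Delta|\eta| < 0)}
```

so the predicted A_C ≈ 0.014 corresponds to a difference N(Δ|η| > 0) – N(Δ|η| < 0) of just over 1% of all events, as quoted above.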


CMS has measured the tt̄ charge asymmetry using data corresponding to an integrated luminosity of 1 fb⁻¹ (CMS collaboration 2011). A total of 12 757 events were selected in the lepton+jets channel, where one top quark decays into a b quark, a charged lepton (electron or muon) and the corresponding neutrino, while the other top quark decays into three quarks. The background contribution to this dataset is about 20%. The measurement of the charge asymmetry is based on the full reconstruction of the four-momenta of the top quarks, which have to be reconstructed from the observed leptons, jets and missing transverse energy. The dependence of the selection efficiency on the Δ|η| value, as well as the smearing of the momenta of the top-quark decay products caused by the finite detector resolution, are accounted for when calculating the final result.

The measured value of A_C = –0.017 ± 0.032 (stat.) +0.025/–0.036 (syst.) is consistent with the Standard Model prediction and does not provide any indication of a new physics contribution. CMS also measured the uncorrected charge asymmetry as a function of the invariant mass of the top-quark pair, m(tt̄). Previous measurements by the CDF collaboration had found an asymmetry that was more than 3σ above the predicted value for large values of m(tt̄). However, the current analysis by CMS dampens the excitement that the CDF result caused, because it reveals no hints of a deviation from the Standard Model predictions.

The hunt for long-lived exotic beasts

The hunt for exotic massive long-lived particles is an important element in the ATLAS collaboration’s programme of searches. The signatures associated with such long-lived objects are particularly striking and experimentally challenging. At the LHC they could appear as slow-moving and highly ionizing objects that could slip into the next bunch-crossing, saturate the read-out electronics and confound the event reconstruction software. An alternative approach to the direct detection of moving long-lived particles is to search for those that stop in the detector and subsequently decay. This is the method used in a recent search by the ATLAS collaboration.


The new search looks for metastable R-hadrons that would be formed from gluinos and light quarks (ATLAS collaboration 2011a). If produced, some R-hadrons would stop in the dense calorimeter material following electromagnetic and hadronic interactions. Within the scenario of split-supersymmetry, an R-hadron could decay to a final state of jets and a neutralino. During 2010, the experiment used jet triggers to record candidate decays in empty bunch crossings, when no proton–proton collisions were intended. With the subsequent analysis, which required estimates of cosmic-ray and beam-related backgrounds along with the uncertainties on R-hadron stopping rates, ATLAS has set upper limits on the pair-production cross-section for gluinos with lifetimes in the range 10⁻⁵–10³ s. From this, the collaboration has obtained a lower mass limit for the gluino of around 340 GeV at the 95% CL (see figure). Although the search was inspired by split-supersymmetry, the results are generally applicable to any heavy object decaying to jets.

This complex work complements other, more conventional, searches for long-lived particles that interact or decay in the ATLAS detector. These results allow stringent limits to be set on topical models of new physics. Moreover, the collaboration is performing experimentally driven searches up to the limits of the detector’s capability to detect long-lived objects. For example, a search based on early collision data sought exotic particles with large electric charge (up to 17e).

With more data and a continually improving knowledge of the detector response, the ATLAS collaboration is aiming at a set of comprehensive searches for long-lived objects, which possess a range of colour, electric and magnetic charges, and appear as stable objects or decay to a variety of final states.

Einstein ring reveals dwarf dark-matter galaxy

If a remote galaxy lies exactly behind another galaxy, gravitational lensing can deform its image into an “Einstein ring”. Careful analysis of such an effect has recently revealed the presence of an invisible dwarf galaxy contributing to the gravitational lens. It could be one of the many dark-matter satellites predicted by simulations of cold dark matter.

Playing hide-and-seek is not easy for galaxies: they appear even bigger and brighter when they hide behind each other. When two galaxies are perfectly aligned along the line of sight, the image of the more distant one can even be deformed into a complete annulus around the nearer galaxy. This strong gravitational-lensing effect is called an “Einstein ring” because it is described by Albert Einstein’s general theory of relativity. The gravity of the closer galaxy acts like a magnifying glass: it bends the light paths around it by deforming space–time locally.

According to standard cosmology, the universe is composed of about 5% ordinary matter (baryons), 23% dark matter and 72% dark energy. This result is derived from fluctuations of the cosmic microwave background (CMB) with additional constraints from type Ia supernovae and from the large-scale distribution of galaxies (CERN Courier May 2008 p8). Because dark-matter particles are still of unknown nature, they are distinguished as “cold” or “hot”, where “hot” means that they are moving with a highly relativistic speed, like neutrinos, for instance.

If dark matter is cold, small galaxies should form first and subsequently merge to form larger galaxies; if it is hot, large halos would form first and then fragment into galaxies of various sizes. Numerical simulations of the formation of large-scale structures from CMB fluctuations suggest that dark matter should be dominated by a cold component to account for the observed distribution of galaxies. There are, however, several problems with cold dark matter. One of them is that such simulations greatly over-predict the number of small galaxies orbiting fully fledged spiral or elliptical galaxies. While cold dark matter should result in thousands of dwarf satellite galaxies around the Milky Way, astronomers have so far observed only about 30 of them. So either the models are missing a key ingredient or there should be many dark-matter galaxies around the Milky Way – but too sparsely populated with stars to be detectable.

While some astronomers search for dark galaxies around the Milky Way, others try to detect their presence around distant galaxies using gravitational lensing. Simona Vegetti, a postdoctoral researcher at the Massachusetts Institute of Technology, belongs to the second group and was lucky enough to detect the signature of one such elusive galaxy at cosmological distance. Together with her colleagues in the Netherlands and US, she studied the B1938+666 lens system, in which the gravitational lensing effect of a massive elliptical galaxy at a redshift of z≃0.9 is making a background galaxy (z≃2.1) appear as an almost perfect Einstein ring. The study is based on infrared observations obtained both with the Keck 10 m telescope on Mauna Kea, Hawaii, and with the Hubble Space Telescope.

The team found a slight deformation of the Einstein ring that would be the imprint, according to their modelling, of a small companion to the lensing elliptical galaxy. The dwarf galaxy’s mass was estimated at 190 million solar masses and its luminosity has an upper limit of 54 million solar luminosities. This means that its mass-to-light ratio is at least 3.5 times that of the Sun. Such a ratio is typical of dwarf galaxies orbiting the Milky Way, such as Fornax and Sagittarius, but it is still an order of magnitude below the expectation from numerical simulations. As long as the galaxy remains invisible even to future facilities, the discovery of this object will support the cold dark-matter scenario, although many more such small dark galaxies must be discovered to be consistent with simulations.
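The quoted mass-to-light ratio follows directly from the two numbers above, expressed in solar units:

```latex
% mass-to-light ratio from the quoted mass and luminosity limit
\frac{M}{L} \gtrsim \frac{1.9\times10^{8}\,M_{\odot}}{5.4\times10^{7}\,L_{\odot}} \approx 3.5\,\left(\frac{M}{L}\right)_{\odot}
```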

ALICE unveils mysteries of the J/ψ

J/ψ suppression

The J/ψ meson, a bound state of a charm (c) and an anticharm (c̄) quark, is unique in the long list of particles that physicists have discovered over the past 50 years. Found almost simultaneously in 1974 – at Brookhaven, in proton–nucleus collisions, and at SLAC, in e⁺e⁻ collisions – this particle is the only one with two names, given to it by the two teams. With a mass greater than 3 GeV it was by far the heaviest known particle at the time and it opened a new field in particle physics, namely the study of “heavy” quarks.

The charm quark and its heavier partners, the bottom and top quarks (the latter discovered more than 20 years later, in 1995), have proved to be a source of both inspiration and problems for particle physicists. By now, thousands of experimental and theoretical papers have been published on these quarks and the production, decay and spectroscopy of particles containing heavy quarks have been the focus of intense and fruitful investigations.

In conclusion, a particle that has been known for almost half a century continues to be a source of inspiration and progress

However, despite a history of almost 40 years, the production of the J/ψ itself still represents a puzzle for QCD, the standard theory of strong interactions between quarks and gluons. On the one hand, the creation of a pair of quarks as “heavy” as charm (m_c ≈ 1.3 GeV/c²) in a gluon–gluon or quark–antiquark interaction is a process that is “hard” enough to be treated perturbatively and is therefore well understood by theory. On the other hand, the binding of the pair is essentially a “soft” process – the relative velocity of the two quarks in a J/ψ is “only” about 0.5 c – and this proves to be much more difficult to model.

J/ψ production

About 15 years ago, results obtained at Fermilab’s Tevatron collider first showed a clear inconsistency with the theoretical approach adopted at the time to model J/ψ production, the so-called colour-singlet model. This unsatisfactory situation led to the formulation of the more refined approach of nonrelativistic QCD (NRQCD), which brought better agreement with data. However, other quantities, such as the polarization of the produced J/ψ – i.e. the extent to which the intrinsic angular momentum of the particle is aligned with its momentum – were poorly reproduced. This uncomfortable situation also arose partly because of controversial experimental results from the Tevatron, where the CDF experiment’s results on polarization from Run 1 disagreed with those from Run 2. Considerable hope is therefore placed on the results that the LHC can obtain for this observable (more on this later).

Nevertheless, despite these unresolved mysteries surrounding its production, the J/ψ has an important “application” in high-energy nuclear physics and more precisely in the branch that studies the formation of the state of (nuclear) matter where quarks and gluons are no longer confined into hadrons: the quark–gluon plasma (QGP). If such a state is created, it can be thought of as a hot “soup” of coloured quarks and gluons, where colour is the “charge” of the strong interaction. In the usual world, quarks and gluons are confined within hadrons and colour cannot fly over large distances. However, in certain situations, as when ultrarelativistic heavy-ion collisions take place, a QGP state could be formed and studied. Indeed, such studies form the bulk of the physics programme of the ALICE experiment at the LHC.

The J/ψ is composed of a heavy quark–antiquark pair with the two objects orbiting at a relative distance of about 0.5 fm, held together by the strong colour interaction. However, if such a state were to be placed inside a QGP, it turns out that its binding could be screened by the huge number of colour charges (quarks and gluons) that make up the QGP freely roaming around it. This causes the binding of the quark and antiquark in the J/ψ to become weaker so that ultimately the pair disintegrates and the J/ψ disappears – i.e. it is “suppressed”. Theory has shown that the probability of dissociation depends on the temperature of the QGP, so that the observation of a suppression of the J/ψ can be seen as a way to place a “thermometer” in the medium itself.

Such a screening of the colour interaction, and the consequent J/ψ suppression, was first predicted by Helmut Satz and Tetsuo Matsui in 1986 and was thoroughly investigated over the following years in experiments with heavy-ion collisions. In particular, Pb–Pb interactions were studied at CERN’s Super Proton Synchrotron (SPS) at a centre-of-mass energy, √s, of around 17 GeV per nucleon pair and then Au–Au collisions were studied at √s=200 GeV at Brookhaven’s Relativistic Heavy-Ion Collider (RHIC).

The J/ψ and its suppression can be seen as a thermometer in the medium created in the collision

As predicted by the theory, a suppression of the J/ψ yield was observed with respect to what would be expected from a mere superposition of production from elementary nucleon–nucleon collisions. However, the experiments also made some puzzling observations. In particular, the size of the suppression (about 60–70% for central, i.e. head-on nucleus–nucleus collisions) was found to be approximately the same at the SPS and RHIC, despite the jump in the centre-of-mass energy of more than one order of magnitude, which would suggest higher QGP temperatures at RHIC. Ingenious explanations were suggested but a clear-cut explanation of this puzzle proved impossible.

At the LHC, however, extremely interesting developments are expected. In particular, a much higher number of charm–anticharm pairs is produced in the nuclear interaction, thanks to the unprecedented centre-of-mass energies. As a consequence, even a suppression of the J/ψ yield in the hot QGP phase could be more than counter-balanced by a statistical combination of charm–anticharm pairs when the system, after expansion and cooling, finally crosses the temperature boundary between the QGP and a hot gas of particles. If the density of heavy-quark pairs is large enough, this regeneration process may even lead to an enhancement of the J/ψ yield – or at least to a much weaker suppression with respect to the experiments at lower energies. The observation of the fate of the J/ψ in nuclear collisions at the LHC constitutes one of the goals of the ALICE experiment and was among its main priorities during the first run of the LHC with lead beams in November/December 2010.

The ALICE experiment is particularly suited to observing a J/ψ regeneration process. For simple kinematic reasons, regeneration can be more easily observed for charm quarks with low transverse momentum. In contrast to the other LHC experiments, both detector systems in which ALICE detects the J/ψ – the central barrel (where the J/ψ → e⁺e⁻ decay is studied) and the forward muon spectrometer (for J/ψ → μ⁺μ⁻) – can detect J/ψ particles down to zero transverse momentum.

As the luminosity of the LHC was still low during its first nucleus–nucleus run, the overall J/ψ statistics collected in 2010 were not huge, of the order of 2000 signal events. Nevertheless, it was possible to study the J/ψ yield as a function of the centrality of the collisions in five intervals from peripheral (grazing) to central (head-on) interactions.

Clearly, suppression or enhancement of a signal must be established with respect to a reference process. For such a study, the most appropriate reference is the J/ψ yield in elementary proton–proton collisions at the same energy as the nucleus–nucleus data-taking. However, in the first proton run of the LHC the centre-of-mass energy of 7 TeV was more than twice the energy of 2.76 TeV per nucleon–nucleon collision in the Pb–Pb run. To provide an unbiased reference, the LHC was therefore run for a few days at the beginning of 2011 with lower-energy protons, and J/ψ production was studied at the same centre-of-mass energy as the Pb–Pb interactions.

The two parameters

The Pb–Pb and p–p results are compared using a standard quantity, the nuclear modification factor, R_AA. This is essentially the ratio of the J/ψ yield in Pb–Pb collisions, normalized to the average number of nucleon–nucleon collisions that take place in the interaction of the two nuclei, to the yield in proton–proton collisions. Values of R_AA smaller than 1 therefore indicate a suppression of the J/ψ yield, while values larger than 1 represent an enhancement.
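Written out, the nuclear modification factor described here takes the schematic form below, with ⟨N_coll⟩ the average number of binary nucleon–nucleon collisions estimated for a given centrality class (a sketch of the standard definition rather than the exact ALICE prescription):

```latex
% nuclear modification factor (schematic)
R_{AA} = \frac{N_{J/\psi}^{\mathrm{Pb\text{--}Pb}}}{\langle N_{\mathrm{coll}}\rangle \, N_{J/\psi}^{pp}}
```

Here R_AA = 1 corresponds to Pb–Pb production being a simple superposition of independent nucleon–nucleon collisions.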

The results from the first ALICE run are rather striking when compared with the observations at lower energies (figure 1). While a similar suppression is observed at LHC energies for peripheral collisions, the suppression no longer increases when moving towards more head-on collisions – as quantified by the increasing number of nucleons in the lead nuclei participating in the interaction. Therefore, despite the higher temperatures attained in the nuclear collisions at the LHC, relatively more J/ψ mesons survive in Pb–Pb collisions (with respect to the p–p reference) than at lower energies. Such an effect is likely to be related to a regeneration process occurring at the temperature boundary between the QGP and a hot gas of hadrons (T ≈ 160 MeV).

The picture that arises from these observations is consistent with the formation, in Pb–Pb collisions at the LHC, of a deconfined system (QGP) that can suppress the J/ψ meson, followed by a hadronic system in which a fraction of the charm–anticharm pairs coalesce and ultimately give a J/ψ yield larger than that observed at lower energies. This picture should be clarified by the Pb–Pb data that were collected in autumn 2011. Thanks to an integrated luminosity for such studies that was 20 times larger than in 2010, a final answer on the fate of the J/ψ inside the hot QGP produced at the LHC seems to be within reach.

ALICE is also working hard to help solve other puzzles in J/ψ production in proton–proton collisions, in particular by studying, as described above, the degree of polarization. A first result, recently published in Physical Review Letters, shows that J/ψ mesons produced at not too high a transverse momentum are essentially unpolarized, i.e. the angular distribution of the decay muons in the J/ψ → μ⁺μ⁻ process is nearly isotropic (figure 2). Theorists are now working to establish whether such behaviour is compatible with the NRQCD approach, which up to now is the best available tool for understanding the physics related to J/ψ production.

In conclusion, a particle that has been known for almost half a century continues to be a source of inspiration and progress. However, even if particle and nuclear physicists working at the LHC are confident of being able finally to understand its multifaceted aspects, the future often brings the unexpected. So stay tuned and be ready for surprises.

Designs on higher luminosity

Long-term programme of the LHC

The Large Hadron Collider (LHC) has been exploring the new high-energy frontier since 2009, attracting a global user community of more than 7000 scientists. At the start of 2011, the long-term programme for the LHC had a minimum goal of an integrated luminosity (a measure of the number of recorded collisions) of at least 1 fb⁻¹. Thanks to better-than-anticipated performance, the year ended with almost six times this amount delivered to each of the two general-purpose experiments, ATLAS and CMS.

The LHC is the pinnacle of 30 years of technological development. Set to remain the most powerful accelerator in the world for at least two decades, its full exploitation is the highest priority in the European Strategy for Particle Physics, adopted by the CERN Council and integrated into the European Strategy Forum on Research Infrastructures (ESFRI) Roadmap. However, beyond the run in 2019–2021, halving the statistical error in the measurements will require more than 10 years of running – unless the nominal luminosity is increased by a considerable amount. The LHC will need a major upgrade after 2020 to maintain scientific progress and exploit its full capacity. The aim is to increase its luminosity by a factor of 5–10 beyond the original design value and provide 3000 fb⁻¹ in 10 to 12 years.
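The case for higher luminosity rests on simple counting statistics: for a statistics-limited measurement the relative uncertainty scales as the inverse square root of the integrated luminosity, so halving the error requires roughly four times the data:

```latex
% statistical scaling with integrated luminosity
\frac{\delta\sigma}{\sigma} \propto \frac{1}{\sqrt{N}} \propto \frac{1}{\sqrt{L_{\mathrm{int}}}}
\qquad\Longrightarrow\qquad
\text{halving } \frac{\delta\sigma}{\sigma} \text{ requires } L_{\mathrm{int}} \to 4\,L_{\mathrm{int}}
```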

From a physics perspective, operating at a higher luminosity has three main purposes: to perform more accurate measurements on the new particles discovered at the LHC; to observe rare processes that occur at rates below the current sensitivity, whether predicted by the Standard Model or by the new physics scenarios unveiled by the LHC; and to extend exploration of the energy frontier, to increase the discovery reach with rare events in which most of the proton momentum is concentrated in a single quark or gluon.

Technological challenges

The LHC will also need technical consolidation and improvement. For example, radiation sensitivity of electronics may already be a limiting factor for the LHC in its current form. Transferring equipment such as power supplies from the tunnel to the surface requires a completely new scheme for “cold powering”, with a superconducting link to carry some 150 kA over 300 m with a vertical step of 100 m – a great challenge for superconducting cables and cryogenics.

With such a highly complex and optimized machine, an upgrade must be studied carefully and will require about 10 years to implement (figure 1). This has given rise to the High-Luminosity LHC (HL-LHC) project, which relies on a number of key innovative technologies, representing exceptional technological challenges, such as cutting-edge 12 T superconducting magnets with large aperture, compact and ultraprecise superconducting cavities for beam rotation, new types of collimators and 300-m long, high-power superconducting links with almost zero energy dissipation.

The high-luminosity upgrade represents a leap forward for key hardware components

The high-luminosity upgrade therefore represents a leap forward for key hardware components. The most technically challenging aspects of these cannot be done by CERN alone but will instead require strong collaboration involving external expertise. For this reason part of the HL-LHC project is grouped under the HiLumi LHC Design Study, which is supported in part by funding from the Seventh Framework programme (FP7) of the European Commission (EC).

Six work packages

HiLumi LHC comprises six work packages (WPs), all overseen by the project management and technical co-ordination (WP1). Accelerator physics (WP2) is at the heart of the design study and relates closely to the WPs that are organized around the main equipment on which the performance of the upgrade relies (figure 2). The first aim is to reduce β* (the beam focal length at the collision point), so the insertion-region magnets (WP3) that accomplish this function are the first set of hardware to consider. Crab cavities (WP4) will then make the decreased β* really effective by eliminating the reduction caused by geometrical factors; they will also provide levelling of the luminosity during a fill. Collimators (WP5) are necessary to protect the magnets from the 500 MJ of energy stored in the beam – a technical stop to change a magnet would take 2–3 months. Superconducting links (WP6) will avoid radiation damage to electronics and ease installation and integration in what is a crowded zone of the tunnel. The remaining WPs of the HL-LHC are not included in the FP7 Design Study because they refer to accelerator functions or processes that will be carried out within CERN (with the exception of the 11 T dipole project for collimation in the cold region of the dispersion suppressor, which is the subject of close collaboration with Fermilab).
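As a sketch of why the insertion magnets (WP3) and the crab cavities (WP4) are paired, one common approximate form of the peak luminosity for round beams exhibits both the 1/β* dependence exploited by the new large-aperture magnets and the geometric reduction factor F that crab cavities are designed to recover. Here N is the number of particles per bunch, n_b the number of bunches, f_rev the revolution frequency, γ the relativistic factor, ε_n the normalized emittance, θ_c the full crossing angle, σ_z the bunch length and σ* the transverse beam size at the interaction point; this is an illustrative textbook expression, not the HL-LHC baseline parameter set:

```latex
% approximate peak luminosity with crossing-angle (geometric) reduction factor
\mathcal{L} \simeq \frac{N^{2}\, n_b\, f_{\mathrm{rev}}\, \gamma}{4\pi\, \varepsilon_n\, \beta^{*}}\; F,
\qquad
F = \left[\,1 + \left(\frac{\theta_c\, \sigma_z}{2\,\sigma^{*}}\right)^{2}\right]^{-1/2}
```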

Work Packages of the HiLumi LHC FP7

The 20 participants within the HiLumi LHC Design Study include institutes from France, Germany, Italy, Spain, Switzerland and the UK, as well as organizations from outside the European Research Area, such as Russia, Japan and the US. As well as providing resources, participants are sharing expertise and responsibilities for the intellectual challenges.

The Japanese and US contributions constitute roughly one third of the manpower for the design study and are well anchored in existing partnerships formed during the construction of the LHC, namely the CERN-KEK collaboration and the US LHC Accelerator Research Program (LARP). Japan participates as a beneficiary without funding and the US laboratories are associates connected to the project via a memorandum of understanding. The participation of leading US and Japanese laboratories enables the implementation of the construction phase as a global project. The proposed governance model is tailored accordingly and could pave the way for the organization of other global research infrastructures.

The four-year HiLumi LHC Design Study was launched last November with a meeting attended by almost 160 participants, half of whom were from institutes beyond CERN. The meeting was held jointly with LARP because HL-LHC builds on both US and European activities. It included a meeting of the collaboration board, during which Michel Spiro, president of the CERN Council, presented the necessary steps for inclusion in the updated European Strategy for Particle Physics. CERN Council will discuss the updated strategy in March 2013 and plans to adopt it in a special session in Brussels in early summer 2013. Spiro’s presentation showed that with respect to the initially proposed timeline of HiLumi LHC, the Preliminary Design Report will now need to advance by one year to be ready by the end of 2012.

The FP7 HiLumi LHC Design Study thus combines and structures the efforts and R&D of a large community towards the ambitious HL-LHC objectives. It acts as a catalyst for ideas, helping to streamline plans and formalize collaborations. When evaluated by the EC, the design study proposal scored 15 out of 15 and was ranked top of its category, receiving funding of €4.9 million. “The appeal of the HiLumi LHC Design Study is that it goes beyond CERN and Europe to a worldwide collaboration,” stated Christian Kurrer, EC project officer of HiLumi LHC at the meeting in November. “This will further strengthen scientific excellence in Europe.”

• For more details about the High Luminosity upgrade and the HiLumi LHC Design Study, see http://cern.ch/HiLumiLHC.

Luis Alvarez: the ideas man


Luis Alvarez – one of the greatest experimental physicists of the 20th century – combined the interests of a scientist, an inventor, a detective and an explorer. He left his mark on areas that ranged from radar through to cosmic rays, nuclear physics, particle accelerators, detectors and large-scale data analysis, as well as particle physics and astrophysics. On 19 November, some 200 people gathered at Berkeley to commemorate the 100th anniversary of his birth. Alumni of the Alvarez group – among them physicists, engineers, programmers and bubble-chamber film scanners – were joined by his collaborators, family, present-day students and admirers, as well as scientists whose professional lineage traces back to him. Hosted by the Lawrence Berkeley National Laboratory (LBNL) and the University of California at Berkeley, the symposium reviewed his long career and lasting legacy.

A recurring theme of the symposium was, as one speaker put it, a “Shakespeare-type dilemma”: how could one person have accomplished all of that in one lifetime?

Beyond his own initiatives, Alvarez created a culture around him that inspired others to, as George Smoot put it, “think big,” as well as to “think broadly and then deep” and to take risks. Combined with Alvarez’s strong scientific standards and great care in executing them, these principles led directly to the awarding of two Nobel prizes in physics to scientists at Berkeley – George Smoot in 2006 and Saul Perlmutter in 2011 – in addition to Alvarez’s own Nobel prize in 1968.

Invaluable talents

Rich Muller, who was Alvarez’s last graduate student, described some of his mentor’s work during the Second World War. Alvarez’s talents as an inventor made him invaluable to the war effort. Among his contributions was the ground-controlled approach (GCA) that allowed planes to land at night and in poor visibility. For the rest of his life, at least once a year, Alvarez would bump into someone who thanked him for GCA, explaining: “I was a pilot in the Second World War and you saved my life.” In 1948, when the Soviets imposed a blockade of Berlin, GCA allowed the Berlin Airlift to succeed by assuring the cargo planes’ safe landing in difficult circumstances.

In the early post-war period, Alvarez’s inventions included the proton linear accelerator (with Wolfgang Panofsky) and a tandem Van de Graaff accelerator. Over his lifetime he was granted more than 40 US patents. He applied for his first in 1943 and his last in 1988, the year he died. He was one of the first inductees to the Inventor’s Hall of Fame. He loved thinking and, heeding the advice of his physiologist father, frequently made time to sit and think. Muller recalled that Alvarez told him that only one idea in 10 is worth pursuing, and only one in 100 might lead to a discovery. Considering how many of his ideas bore remarkable fruit, Muller concluded that Alvarez must have had thousands of them.

One such idea originated from his interactions in 1953 with Don Glaser, who invented the bubble chamber. Alvarez thought that a large liquid-hydrogen bubble chamber was needed to solve all of the puzzles generated by the many particles that had been recently discovered. He immediately put his two graduate students, Frank Crawford and Lynn Stevenson, and a number of his technicians to work on the project. The first tracks in a hydrogen bubble chamber were seen in the summer of 1954.

A succession of chambers then culminated in the 72-inch bubble chamber, which began operating in 1959. Jack Lloyd, an engineer in the Alvarez group at the time, believes that it was probably one of the largest tanks of liquid hydrogen ever made (400 l), interfaced with the most enormous piece of optical glass ever made. As Lloyd recalled: “It had to work at around 60 or 70 psi and it pulsed every 6 s, with about a 10 psi pulse, which is a frightening thing to an engineer because of potential fatigue problems.”

Observation of the traces left by particles in the bubble chamber needed additional equipment for scanning the photographs and measuring the tracks. In the end, it was necessary to use computers to handle the wealth of information coming from the measurements. The latter task was assigned to Art Rosenfeld, who came to the Alvarez group as a post-doc on the recommendation of Enrico Fermi. Fermi said that, given the politics of the two men, they would be on speaking terms about 80% of the time. “That was right,” Rosenfeld recalled. Keeping the peace was not among Alvarez’s strengths.

By 1967 the Alvarez group was analysing more than a million events a year. An army of scanners examined the films for events of interest and a battalion of computer programmers wrote code to analyse them. The bubble-chamber team was at that time the largest high-energy physics group in the world, totalling several hundred people. The development of the chamber and the analysis systems resulted in an explosion of new particle discoveries that helped to establish the quark model. It was this work that earned Alvarez the Nobel Prize in Physics in 1968.

Among his other attention-grabbing ideas was the use of cosmic rays to search for secret chambers in Chephren’s pyramid in Giza. Jerry Anderson, who collaborated on the project in the late 1960s, told of Alvarez’s work in assembling a team of Egyptian and US scientists to design and carry out the experiment. After pointing detectors in several different directions they found no evidence of any voids in the solid pyramid. Afterwards, if anyone commented that the team had not found any hidden chambers, Alvarez would respond, “We found that there were no hidden chambers” – an important distinction.

Around that same time he became interested in a film taken with a home-movie camera that captured the assassination of President John F Kennedy. Charles Wohl, a student in his group, described Alvarez’s careful analysis of the film and his conclusions, which clarified some of the uncertainties in the official investigation of the assassination.

The Alvarez philosophy

More than one speaker quoted this passage from Alvarez’s autobiography: “I’m convinced that a controlled disrespect for authority is essential to a scientist. All of the good experimental physicists I have known have had an intense curiosity that no ‘Keep Out’ sign could mute.” Yet, Alvarez had a perfect safety record while building and operating the huge bubble chamber. Stanley Wojcicki, who was a graduate student in the group, said that when Alvarez was retiring as the head of the bubble-chamber group and a new head was about to be appointed, someone asked him about the replacement’s responsibilities. “He’s the person who talks to the widows after an accident,” was Alvarez’s response.


Saul Perlmutter, who was Muller’s graduate student, felt Alvarez’s influence keenly when he came to LBNL in 1982. He characterized it as a “can-do, cowboy spirit”. He explained: “As a physicist, you had the hunting licence to look at any problem whatsoever and also you had the arrogance to think you were going to be able to solve that problem – or at least be able to make a measurement that might be relevant to the problem. And if there was equipment around, you would use it; and if there was not equipment around, you would invent it. And you had the wealth of talent around you, of the engineers, mechanical and electrical, who knew how to put these things together and how to make them work.” That was fertile ground for discovery.

Perlmutter also benefited from Alvarez’s philosophy in another way. When he and Carl Pennypacker wanted to start looking for supernovae at greater distances, Muller was sceptical about their prospects for success. However, he had learnt from Alvarez that part of the job of a group leader is to support people in trying things even if you are not sure that they are the right things to do. This is what he did – and Perlmutter’s Nobel prize for that work attests to Muller’s (and Alvarez’s) wisdom.


In the final decade of his life, Alvarez’s “hunting licence” led him into geology and palaeontology. His son, Walter Alvarez, a geologist studying the Cretaceous–Tertiary boundary in an outcrop in Italy, gave his father a rock showing the clay boundary separating a layer of limestone with abundant fossils of diverse species from a layer largely devoid of signs of life. Luis was intrigued and eventually conceived of a way of determining how long it took for this clay layer to accumulate. This would signal whether the mass extinction was sudden or gradual. Neutron activation analysis showed an anomalous abundance of iridium in the clay layer, a result that led eventually to the Alvarezes’ theory that an impact from a comet or asteroid caused the dinosaurs and other species to die out. The announcement stirred a controversy that was only recently settled in their favour.

Other speakers during the day attested to Luis Alvarez’s inventive spirit and his fearlessness in asking original and important questions in far-flung fields in which he had no previous experience. He eagerly embraced new ideas and unhesitatingly took on new challenges throughout his career. The lively and enlightening day ended with a reception and dinner, where more family members and colleagues related their recollections of this icon of 20th-century science and technology.

DIRAC observes dimeson atoms and measures their lifetime


The study of nonstandard atoms has a long tradition in particle physics. Such exotic atoms include positronium, muonic atoms, antihydrogen and hadronic atoms. In this last category, mesonic hydrogen in particular has been investigated extensively in different experiments at CERN, PSI and Frascati. Dimeson atoms also belong to this category. These electromagnetically bound mesonic pairs, such as the π⁺π⁻ atom (pionium, A) or the πK atom (AπK), offer the opportunity to study the theory of the strong interaction, QCD, at low energy, i.e. in the confinement region.


This strong interaction leads to a broadening and a shift of atomic levels, and dominates the lifetime of these exotic atoms in their s-states. The ππ interaction at low energy, constrained by the approximate chiral symmetry SU(2) for two flavours (u and d quarks), is the simplest and best understood hadron–hadron process. Since the bound-state physics is well known, a measurement of the A lifetime provides information on hadron properties in the form of scattering lengths – the basic parameters in low-energy ππ scattering.


Moreover, the low-energy interaction between the pion and the next lightest, and strange, meson – the kaon – provides a promising probe for learning about the more general three-flavour SU(3) structure (u, d and s quarks) of hadronic interactions, which is a matter not directly accessible in pion–pion interactions. Hence, data on πK atoms are valuable because they provide insights into the role played by the strange quarks in the QCD vacuum.

The experiment

The mesonic atoms A (AπK) are produced by final-state Coulomb interactions between oppositely charged ππ (πK) pairs that are generated in proton–target reactions (Nemenov 1985). In the DImeson Relativistic Atom Complex (DIRAC) experiment at CERN, they are formed when a 24 GeV/c proton beam from the Proton Synchrotron hits a thin target, typically a 100 μm-thick nickel foil (figure 1). After production, the mesonic atoms travel through the target and some of them are broken up (ionized) as they interact with matter. This produces “atomic pairs”, which are characterized by their small relative momenta, Q < 3 MeV/c, in the centre of mass of the pair. These pairs are detected in the DIRAC apparatus. The remaining atoms mainly annihilate into π⁰π⁰ pairs, which are not detected, or they survive and annihilate later. The number of “atomic pairs” from the break-up of atoms, n_A, depends on the annihilation mean free path, which is determined by the atom’s lifetime, τ, and its momentum. Thus, the break-up probability, P_br, is a function of the A’s lifetime, τ.


The interactions between the protons and the target also produce oppositely charged free ππ pairs, both with and without final-state Coulomb interactions, depending on whether or not the pairs are produced close to each other. This gives rise to “Coulomb pairs” and “non-Coulomb pairs”. The latter include meson pairs in which one or both mesons come from the decay of long-lived sources. Furthermore, two mesons from different interactions can contribute as “accidental pairs”. The total number of atoms produced, N_A, is proportional to N_C, the number of Coulomb pairs with low relative momenta: N_A = kN_C, where the coefficient k is precisely calculable. DIRAC measures the break-up probability for the A, which is defined as the ratio of the observed number of “atomic pairs” to the number of produced atoms: P_br(τ) = n_A/N_A. Here, N_A is calculated from the number of “Coulomb pairs”, N_C, obtained from fits to the data.
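The chain from measured pair counts to a lifetime can be illustrated with a short sketch. The event numbers, the coefficient k and the tabulated P_br(τ) curve below are invented placeholders (apart from the 21 200 atomic pairs quoted later in the text); in the real analysis, k and P_br(τ) come from precise calculations of atom production and break-up in the target:

```python
import numpy as np

# Illustrative inputs only (placeholder values, not DIRAC's published numbers
# except for the quoted 21 200 atomic pairs).
n_atomic = 21200.0      # observed "atomic pairs" from atom break-up
n_coulomb = 7.7e4       # fitted number of low-Q "Coulomb pairs" (invented)
k = 0.615               # calculable coefficient in N_A = k * N_C (invented)

n_atoms_produced = k * n_coulomb              # N_A = k * N_C
p_br_measured = n_atomic / n_atoms_produced   # P_br = n_A / N_A

# Assumed monotonic break-up-probability curve P_br(tau) for this target,
# tabulated here with made-up values standing in for the real calculation.
tau_grid = np.array([1.0, 2.0, 3.0, 4.0, 5.0]) * 1e-15   # lifetime in seconds
p_br_grid = np.array([0.25, 0.38, 0.46, 0.52, 0.57])     # break-up probability

# Invert the curve by interpolation to extract the lifetime.
tau_extracted = np.interp(p_br_measured, p_br_grid, tau_grid)
print(f"P_br = {p_br_measured:.3f}  ->  tau ~ {tau_extracted:.2e} s")
```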

The purpose of the DIRAC set-up is to record oppositely charged ππ (πK) pairs with small relative momenta, Q. As figure 2 shows, the emerging pairs of charged pions travel in vacuum through the upstream part where co-ordinate and ionization detectors provide initial track data, before they are split by the 2.3 Tm bending magnet into the “positive” (T1) and “negative” (T2) arms. Both arms are equipped with high-precision drift chambers and trigger/time-of-flight detectors, as well as Cherenkov, preshower and muon counters. The relative time resolution between the two arms is around 200 ps.


The momentum reconstruction in the double-arm spectrometer uses the drift-chamber information from both arms as well as the measured hits in the upstream co-ordinate detectors. The resolution on the longitudinal (Q_L) and transverse (Q_T) components of the relative momentum of the pair, Q, defined with respect to the direction of the total momentum of the pair in the laboratory, is 0.55 MeV/c and 0.1 MeV/c, respectively. A system of fast-trigger processors selects the all-important events with small Q.

Observing and measuring lifetimes

The observation of the A was reported from an experiment at Serpukhov nearly 20 years ago. This was followed at CERN 10 years later with a measurement of the A lifetime by DIRAC (Adeva et al. 2005). Last autumn, DIRAC presented its most recent value for the A lifetime in the ground state, τ = 3.15 × 10⁻¹⁵ s, with a total uncertainty of around 9%, based on the statistics of 21 200 “atomic pairs” collected with the nickel target in 2001–2003 (Adeva et al. 2011). Figure 3 (overleaf) shows the characteristic accumulation of events at low Q_L from the break-up of the π⁺π⁻ atom: the A signal appears as an excess of pairs over the background spectrum in the low-Q region.

S-wave ππ scattering is isospin-dependent, so this lifetime can be used to calculate a scattering-length difference, |a₀–a₂|, where a₀ and a₂ are the S-wave ππ scattering lengths for isospin 0 and 2, respectively. The measured lifetime yields a result of |a₀–a₂| = 0.253 m_π⁻¹ with around 4% precision, in agreement with the result obtained by the NA48/2 experiment at CERN (Batley et al. 2009). The corresponding theoretical values are 0.265 ± 0.004 m_π⁻¹ for the scattering-length difference (Colangelo et al. 2000) and (2.9 ± 0.1) × 10⁻¹⁵ s for the lifetime (Gasser et al. 2001). These results demonstrate the high precision that can be reached in low-energy hadron interactions, in both experiment and theory.
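The step from a 9% lifetime measurement to a roughly 4% determination of the scattering lengths follows from the leading-order relation between the decay width and |a₀–a₂| (stated schematically here; the full expression contains known numerical factors and small corrections):

```latex
% leading-order relation between the pionium width and the scattering-length difference
\Gamma = \frac{1}{\tau} \;\propto\; |a_0 - a_2|^{2}
\qquad\Longrightarrow\qquad
\frac{\delta|a_0 - a_2|}{|a_0 - a_2|} \;\approx\; \frac{1}{2}\,\frac{\delta\tau}{\tau}
```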

The first evidence for the observation of the πK atom, AπK, was published by the DIRAC collaboration in 2009 (Adeva et al. 2009). In this case, the mesonic atoms were produced in a 26 μm-thick platinum target and the DIRAC spectrometer had been upgraded for kaon identification with heavy-gas (C₄F₁₀) and aerogel Cherenkov detectors. An enhancement observed at low relative momentum corresponds to the production of 173 ± 54 πK “atomic pairs”. From this first data sample, the collaboration derived a lower limit on the πK-atom lifetime of τ_πK > 0.8 × 10⁻¹⁵ s (90% CL), to be compared with the theoretical prediction of (3.7 ± 0.4) × 10⁻¹⁵ s (Schweizer 2004). The ongoing detailed analysis of a much larger data sample aims first to extract a clear signal for the production of the AπK atom and then to deduce from these data a value for the AπK lifetime.

Future investigations

Pionium is an atom like hydrogen and the properties of its states vary strongly with their quantum numbers. An illustration of this is the 2s→1s two-photon de-excitation in hydrogen (τ_2s ≈ 0.1 s), which is many orders of magnitude slower than the 2p→1s radiative transition (τ_2p = 1.6 ns). In pionium, the situation is similar but opposite: the decay of the A 2s-state into π⁰π⁰ (τ_2s = 23.2 fs) is roughly three orders of magnitude faster than the 2p→1s radiative transition (τ_2p = 11.7 ps). The DIRAC collaboration aims to measure Lamb shifts in pionium by exploiting the properties of these specific states and in 2010 started to study the possibility of observing long-lived A states (Nemenov et al. 2002).


The energy shifts, ΔE(ns–np), for levels with principal quantum number n and orbital quantum number l are another valuable source of information. These shifts contain a dominant strong contribution, ΔE_strong(ns–np), together with minor QED contributions, ΔE_QED(ns–np), from vacuum polarization and self-energy effects. The strong s-state energy shift, ΔE_strong(ns–np), is proportional to (2a₀+a₂), i.e. it depends on the same scattering lengths, a₀ and a₂, as the pionium lifetime. As figure 4 shows, for the principal quantum number n = 2, the strong and electromagnetic interactions shift the 2s level below the 2p level by ΔE(2s–2p) = ΔE_strong(2s–2p) + ΔE_QED(2s–2p) = –0.47 eV – 0.12 eV = –0.59 eV (Schweizer 2004).

By studying the dependence of the lifetime of long-lived A (with l≥1) on an applied electric field – the Stark-mixing effect – the DIRAC experiment is in a unique position to investigate the splitting of pionium energy levels. This will allow another combination of pion-scattering lengths to be extracted, so that a0 and a2 can finally be determined individually.

Micropattern detector conference goes east


Micropattern gaseous detectors (MPGDs) have opened a new era in state-of-the-art technologies and are the benchmark for gas-detector developments beyond the LHC. They could eventually enable a plethora of new radiation-detector concepts in fundamental science, medical imaging, security inspection and industry. Given the ever-growing interest in this rapidly developing field, an international conference series on MPGDs was founded in 2009 to provide a scientific forum to review current highlights, new results and concepts, applications and future trends, with the first conference organized in Crete. The second in the series, MPGD2011, took place in Kobe on 29 August – 1 September. Two years on from the first meeting, there were many new developments to discuss at MPGD2011.

The conference was held at the Maiko Villa Kobe hotel, located near the Akashi Strait Bridge. Connecting the Japanese mainland with Awaji island, this is the world’s longest suspension bridge. Clearly visible from the venue, it symbolically emphasized the connection and synergy of the worldwide communities. Half of the 120 participants were from overseas, visiting from 16 countries. Attendance was clearly unaffected by the Great East Japan Earthquake of 11 March 2011, in contrast to many other international conferences and events in Japan in 2011 that were cancelled owing to low participation from foreign countries following the disaster.

Japan is the most advanced country in terms of successful partnership between academia and industry in the development of particle-physics detectors. MPGD development has been an active field in the country since the early 1990s, shortly after the invention of the micro-strip gas chamber (MSGC). However, in the Asian region and especially in Japan, most MPGD R&D has been carried out independently from other countries. Elsewhere, worldwide interest in the technological development and use of the novel MPGD technologies led to the establishment of the international research collaboration RD51 at CERN in 2008. By 2011, 80 institutes from 25 countries had joined the collaboration. Only one institute from Japan – Kobe University – has so far joined RD51, although there is an annual domestic MPGD workshop with some 80 participants and around 30 presentations. Holding the international MPGD conference in Japan, followed by a meeting of the RD51 collaboration on 2–3 September, was therefore highly important for improving communication and enhancing the synergy between the worldwide MPGD communities.

MPGDs are a relatively novel kind of particle detector, based on gaseous multiplication using micro-pattern electrodes instead of the thin wires of a multiwire proportional chamber (MWPC). By using a pitch of a few hundred micrometres – an order-of-magnitude improvement in granularity over wire chambers – these detectors offer an intrinsically high rate capability (> 10⁶ Hz/mm²), excellent spatial resolution (around 30–50 μm) and single-photoelectron time resolution in the nanosecond range. The MSGC, a concept invented by Anton Oed in 1988, was the first of the microstructure gas detectors. Further advances in photolithography techniques gave rise to more powerful devices, in particular the gas-electron multiplier (GEM) of Fabio Sauli in 1997 and the micromesh gaseous structure (Micromegas) of Ioannis Giomataris and colleagues in 1996. Both of these devices exhibit improved operational stability and increased radiation hardness. During their evolution, many types of MPGD have arisen from the initial ideas, such as the thick GEM (THGEM), the resistive thick GEM (RETGEM), the microhole and strip plate (MHSP) and the micropixel gas chamber (μ-PIC).

Today, a large number of groups worldwide are developing MPGDs for:

• future experiments at particle accelerators (upgrades of muon detectors at the LHC and novel MPGD-based devices for time-projection chambers (TPCs) and digital hadron calorimetry at a future linear collider);

• experiments in nuclear and hadron physics (KLOE2 at DAΦNE, the PANDA and CBM experiments at the Facility for Antiproton and Ion Research, STAR at the Relativistic Heavy Ion Collider, SBS at Jefferson Lab and many others);

• experiments in astroparticle physics and neutrino physics;

• and industrial applications such as medical imaging, material science and security inspection.

This report cannot summarize all of the interesting developments in the MPGD field but it illustrates the richness with a few conference highlights and their implications.

During the three days of MPGD2011, results were presented in 39 plenary talks – including three review talks – and some 30 posters. Five industrial companies linked closely to MPGD technologies also exhibited their products.

Marcel Demarteau of Argonne National Laboratory discussed the paramount importance of the interplay between future physics challenges and the development of advanced detector concepts, with instrumentation being the enabler of science, both pure and applied. The greatest payoffs will come from fundamentally reinventing mainstream technologies under a new paradigm of integrating electronics and detectors, as well as integrating functionality. As an example, several conference talks discussed recent progress in the development of an integrated Micromegas (InGrid) built directly on top of a CMOS micropixel anode (the Timepix chip), which offers a novel and fully integrated read-out solution. These detectors will be used in searching for solar axions in the CAST experiment at CERN and are also under study for a TPC at the International Linear Collider and for a pixellized tracker (the “gas on slimmed silicon pixels”, or GOSSIP, detector) for the upgrades of the LHC experiments.

A key issue that must be solved to advance with MPGDs is the industrialization of the production and manufacturing of large-size detectors. Rui de Oliveira of CERN discussed the current status of the new facility for large-size MPGD production at CERN, which will be able to produce 2 m × 0.6 m GEMs, 1.5 m × 0.6 m Micromegas and 0.8 m × 0.4 m THGEMs. He also presented recent developments and improvements in fabrication techniques – single-mask GEMs and resistive “bulk Micromegas”. GEM and Micromegas prototypes with a size of nearly 1 m² have been produced in the CERN workshop for the ATLAS and CMS muon upgrades for the future high-luminosity LHC (the HL-LHC project, Designs on higher luminosity). Large-area cylindrical GEMs are currently being manufactured for the KLOE2 inner tracker.


Moving away from applications in particle physics, large-area MPGDs are being developed for muon tomography to detect nuclear contraband and for tomographic densitometry of the Earth. Industry has also become interested in manufacturing MPGD structures; technology-transfer activities and collaboration have been actively pursued during the past year with several companies in Europe, Japan, Korea and the US.

One of the highlights of MPGD2011 was the recent trend in the development of MPGDs with resistive electrodes. This technique is an attractive way to quench discharges, thus improving the robustness of the detector against sparks. There were more than 10 presentations devoted to resistive MPGDs. The resistive bulk Micromegas for the ATLAS muon upgrade (MAMMA) employs a 2D read-out board utilizing resistive strips on top of the insulator, covering copper strips (figure 2). This industrial-assembly process allows regular production of large, robust and inexpensive detector modules. The design has achieved stable operation in the presence of heavily ionizing particles and neutron background, similar to the conditions expected in the ATLAS cavern in the HL-LHC upgrade. There were also other presentations describing basic developments of the GEM, THGEM and μ-PIC using resistive materials.

Alexey Buzulutskov of the Budker Institute of Nuclear Physics, Novosibirsk, reviewed recent advances in cryogenic avalanche detectors, operated at low temperatures (from a few tens of kelvin down to a few kelvin). Recent progress in the operation of cascaded MPGDs at cryogenic temperatures could pave the road towards their potential application in next-generation neutrino-physics and proton-decay experiments, liquid-argon TPCs for direct dark-matter searches, positron-emission tomography (PET) and a noble-liquid Compton telescope combined with a micro-PET camera.


The MPGD2011 conference also featured a physics presentation announcing the observation of electron-neutrino appearance events using the beam from the Japan Proton Accelerator Research Complex to the Super-Kamiokande detector. The results relied in large part on the three large-volume TPCs, instrumented with bulk Micromegas detectors and read out via some 80 000 channels. This is a good example of the interplay between physics and technology. Last, but not least, interesting results on gaseous photomultipliers with caesium-iodide and bialkali photocathodes coupled to GEM, THGEM and Micromegas structures were reported at the conference. A sealed prototype of an MPGD sensitive to visible light has been produced by Hamamatsu.

• For more information about the conference and for the presentations, see http://ppwww.phys.sci.kobe-u.ac.jp/~upic/mpgd2011. The contributions will be submitted for publication in the open-access journal JINST, http://jinst.sissa.it.

Weaving the Universe: Is Modern Cosmology Discovered or Invented?

By Paul S Wesson
World Scientific
Hardback: £45 $65
E-book: $85


Aimed at a broad audience, Weaving the Universe provides a thorough but short review of the history and current status of ideas in cosmology. The coverage of cosmological ideas begins in the early 1900s, when Einstein formulated relativity and Sir Arthur Eddington was creating relativistic models of the universe. It ends with the completion of the LHC in late 2008, after surveying modern ideas of particle physics and astrophysics – woven together to form a whole account of the universe.
