Pursuit of deconfinement returns to the SPS

The quark model of hadron classification, proposed by Murray Gell-Mann and George Zweig in 1964, motivated the idea that a new state of matter – strongly interacting matter composed of subhadronic constituents – may exist. Soon thereafter, quantum chromodynamics (QCD) was formulated as the theory of strong interactions, with quarks and gluons as elementary constituents. As a natural consequence, the existence of a state of quasi-free quarks and gluons – the QCD quark–gluon plasma (QGP) – was suggested by Edward Shuryak in 1975. These events, together with the rapid development of particle-accelerator and detector techniques, marked the beginning of the experimental search for this hypothetical, subhadronic form of matter in nature.

First indications

The experimental efforts received a boost from the first acceleration of oxygen and sulphur nuclei at CERN’s Super Proton Synchrotron (SPS) in 1986 (√sNN ≈ 20 GeV) and of lead nuclei in 1994 (√sNN ≈ 17 GeV). Measurements from an array of experiments were surprisingly well described by statistical and hydrodynamical models. They indicated that the created system of strongly interacting particles is close to at least local equilibrium (Heinz and Jacob 2000). Thus, a necessary condition for QGP creation in heavy-ion collisions was found to be fulfilled. The “only” remaining problem was the identification of unique experimental signatures of QGP.
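
As a quick check of the quoted energies, standard fixed-target kinematics gives √sNN = √(2mN·Elab + 2mN²) for a beam of energy Elab per nucleon striking a nucleon at rest. A minimal sketch, taking the 200A GeV oxygen/sulphur and 158A GeV lead beam energies of those runs:

```python
import math

M_N = 0.9389  # average nucleon mass, GeV/c^2

def sqrt_s_nn(e_lab):
    """sqrt(s_NN) in GeV for a fixed-target collision with beam energy
    e_lab per nucleon (GeV) on a nucleon at rest."""
    return math.sqrt(2.0 * M_N * e_lab + 2.0 * M_N ** 2)

for e_lab in (158, 200):
    print(f"E_lab = {e_lab}A GeV  ->  sqrt(s_NN) = {sqrt_s_nn(e_lab):.1f} GeV")
# E_lab = 158A GeV  ->  sqrt(s_NN) = 17.3 GeV  (Pb beams, 1994)
# E_lab = 200A GeV  ->  sqrt(s_NN) = 19.4 GeV  (O and S beams, 1986)
```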

Unfortunately, precise quantitative predictions cannot currently be calculated within QCD, and the predictions of phenomenological models suffer from large uncertainties. Therefore, the results of the measurements were only suggestive of the production of QGP in heavy-ion collisions at the top SPS energy. The same situation persisted at the top energies of Brookhaven’s Relativistic Heavy Ion Collider (RHIC) and seems to be repeated at the LHC. Despite many arguments in favour of the creation of QGP at these energies, its discovery cannot be claimed from these data alone.

A different strategy for identifying the creation of QGP was followed by the NA49 experiment at the SPS and is now being continued by its successor NA61/SHINE, as well as by the STAR experiment at RHIC. The idea is to measure quantities that are sensitive to the state of strongly interacting matter as a function of collision energy in, for example, central lead–lead collisions.

The reasoning is based on simple arguments. First, the energy density of the matter created at the early stage of heavy-ion collisions increases monotonically with collision energy. Thus, if two phases of matter exist, the low-energy-density phase is created in collisions at low energies and the high-energy-density phase in collisions at high energies. Second, the properties of the two phases differ significantly; some of the differences survive until freeze-out into hadrons and so can be measured in experiments. The search strategy is therefore clear: look for a rapid change of the energy dependence of hadron-production properties that are sensitive to QGP, because these will signal the transition to the new state of matter and indicate its existence.

This strategy, and the corresponding NA49 energy-scan programme, were motivated in particular by a statistical model of the early stage of nucleus–nucleus collisions (Gazdzicki and Gorenstein 1999). It predicted that the onset of deconfinement should lead to rapid changes of the collision-energy dependence of bulk properties of produced hadrons, all appearing in a common energy domain. Data from 1999 to 2002 on central Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV were recorded and the predicted features were observed at low SPS energies.

An independent verification of NA49’s discovery is vital and calls for further measurements in the SPS energy range. Two new experimental programmes are already in operation: the ion programme of NA61/SHINE at the SPS; and the beam-energy scan at RHIC. Elsewhere, the construction of the Nuclotron-based Ion Collider at JINR, Dubna, is in preparation. The basic goals of this experimental effort are the confirmation and the study of the details of the onset of deconfinement and the investigation of the transition line between the two phases of strongly interacting matter. In particular, the discovery of the hypothesized second-order critical end-point would be a milestone in uncovering properties of strongly interacting matter.

Four pointers

Last year rich data from the RHIC beam-energy scan programme were released (Kumar 2011, Mohanty 2011). Furthermore, the first results from Pb+Pb collisions at the LHC were revealed (Schukraft et al. 2011, Toia et al. 2011). It is therefore time to review the status of the observation of the onset of deconfinement. The plots in figure 1 summarize relevant results that became available in June 2011. They show the energy dependence of four hadron-production properties measured in central Pb+Pb (Au+Au) collisions, which reveal structures referred to as the “horn”, “kink”, “step” and “dale” – all located in the SPS energy range.

The horn. The most dramatic change of the energy dependence is seen for the ratio of the yields of kaons and pions (figure 1a). The steep threshold rise of the ratio, characteristic of confined matter, changes at high energy into a constant value at the level expected for deconfined matter. In the transition region (at low SPS energies) a sharp maximum is observed, caused by the strangeness-to-entropy production ratio being higher in confined matter than in deconfined matter.

The kink. Most particles produced in high-energy interactions are pions. Thus, pions carry basic information on the entropy created in the collisions. On the other hand, entropy production depends on the form of matter present at the early stage of collisions. Deconfined matter is expected to lead to a final state with higher entropy than confined matter. Consequently, the entropy increase at the onset of deconfinement is expected to lead to a steeper increase with collision energy of the pion yield per participating nucleon. This effect is observed for central Pb+Pb collisions (figure 1b). When passing through low SPS energies, the slope of the rise in the ratio <π>/<NP> with the Fermi energy measure F ≈ √√sNN increases by a factor of about 1.3. Within the statistical model of the early stage, this corresponds to an increase of the effective number of degrees of freedom by a factor of about 3.
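
The factor of about 3 follows from the factor of 1.3 by simple arithmetic: assuming the scaling of the statistical model of the early stage, in which the pion yield per participant grows as g^(1/4)·F with g the effective number of degrees of freedom, the slope goes as the fourth root of g. A sketch of that arithmetic (illustrative, not a fit to data):

```python
# F = ((sqrt(s_NN) - 2 m_N)^3 / sqrt(s_NN))^(1/4) is the Fermi measure;
# in the SMES the slope of <pi>/<N_P> versus F scales as g^(1/4).
def fermi_measure(sqrt_s_nn, m_n=0.9389):
    """Fermi energy measure F in GeV^(1/2)."""
    return ((sqrt_s_nn - 2.0 * m_n) ** 3 / sqrt_s_nn) ** 0.25

slope_ratio = 1.3             # measured increase of the slope at the onset
dof_ratio = slope_ratio ** 4  # slope ~ g^(1/4)  =>  g ~ slope^4
print(f"F at the top SPS energy: {fermi_measure(17.3):.1f} GeV^0.5")
print(f"degrees of freedom increase by ~{dof_ratio:.1f}")  # ~2.9, i.e. about 3
```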

The step. The experimental results on the energy dependence of the inverse slope parameter, T, of kaon transverse-mass spectra for central Pb+Pb (Au+Au) collisions are shown in figure 1c. The striking features of these data can be summarized and interpreted as follows (Gorenstein et al. 2003). The T parameter increases strongly with collision energy up to low SPS energies, where the creation of confined matter at the early stage of the collisions takes place. In a pure phase, increasing collision energy leads to an increase of the early-stage temperature and pressure. Consequently, the transverse momenta of produced hadrons, measured by the inverse slope parameter, increase with collision energy. This rise is followed by a region of approximately constant T in the SPS energy range, where the transition between confined and deconfined matter, with the creation of a mixed phase, is located. The resulting softening of the equation of state “suppresses” the hydrodynamical transverse expansion and leads to the observed plateau, or even a minimum, in the energy dependence of T. At higher energies (RHIC data), T again increases with collision energy: the equation of state at the early stage again becomes stiff and the early-stage pressure increases with collision energy, resulting in a resumed increase of T.
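
For orientation, the inverse slope parameter is defined through an exponential fit to the transverse-mass spectrum, dN/(mT dmT) ∝ exp(−mT/T). A toy sketch with synthetic, noise-free numbers (not NA49 data) showing how T is recovered:

```python
import numpy as np

M_K = 0.4937    # kaon mass, GeV/c^2
T_TRUE = 0.230  # assumed inverse slope, GeV

# Idealized spectrum dN/(m_T dm_T) ~ exp(-m_T/T) over 1 GeV of m_T.
m_t = np.linspace(M_K, M_K + 1.0, 50)
spectrum = np.exp(-m_t / T_TRUE)

# T is minus the inverse of the slope of log(spectrum) versus m_T.
slope, _ = np.polyfit(m_t, np.log(spectrum), 1)
print(f"fitted T = {-1000.0 / slope:.0f} MeV")  # recovers 230 MeV
```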

The dale. As discussed above, a weakening of the transverse expansion is expected to result from the onset of deconfinement because of the softening of the equation of state at the early stage. Clearly, the latter should also weaken the longitudinal expansion (Petersen and Bleicher 2006). This expectation is confirmed in figure 1d, where the width of the π rapidity spectra in central Pb+Pb collisions relative to the prediction of ideal Landau hydrodynamics is plotted as a function of the collision energy. In fact, the ratio has a clear minimum at low SPS energies.

The results shown in figure 1 include new results on central Pb+Pb collisions at the LHC and data on central Au+Au collisions from the RHIC beam-energy scan. The RHIC results confirm the NA49 measurements at the onset energies while the LHC data demonstrate that the energy dependence of hadron-production properties shows rapid changes only at low SPS energies. A smooth evolution is observed between the top SPS energy (17.2 GeV) and the current energy of the LHC (2.76 TeV). This strongly supports the interpretation of the NA49 structures as arising from the onset of deconfinement. Above the onset energy only a smooth change of the QGP properties with increasing collision energy is expected.

The first LHC data thus confirm the following expected effects:

• an approximate energy independence of the K⁺/π⁺ ratio above the top SPS energy (figure 1a);

• a linear increase of the pion yield per participant with F ≈ √√sNN with the slope defined by the top SPS data (figure 1b);

• a monotonic increase of the kaon inverse-slope parameter with energy above the top SPS energy (figure 1c).

The width of the π rapidity spectra in central Pb+Pb collisions should increase continuously from the top SPS energies to the LHC energies, as predicted by ideal gas Landau hydrodynamics. LHC data on rapidity spectra are required to verify this expectation.

The NA61/SHINE experiment

The confirmation of the NA49 measurements, and of their interpretation in terms of the onset of deconfinement, by the new LHC and RHIC data strengthens the arguments for the NA61/SHINE experiment, which will use secondary proton and Be beams, as well as primary Ar and Xe beams, in the SPS beam-momentum range of 13A–158A GeV/c. The basic components of the NA61 detector were inherited from NA49. Several important upgrades – in particular, the new and faster read-out of the time-projection chambers, the new Projectile Spectator Detector with state-of-the-art resolution and the installation of background-reducing helium beam pipes – allow the collection of data of high statistical and systematic accuracy. In parallel with the ion programme, the NA61/SHINE experiment is also making precision measurements of hadron production in proton– and pion–nucleus collisions for the Pierre Auger Observatory’s studies of cosmic rays and for the T2K long-baseline neutrino experiment.

NA61 has already begun a two-dimensional scan in collision energy and the size of the colliding nuclei (figure 2). Data on proton–proton interactions at six collision energies were recorded in 2009–2011 and a successful test of secondary ion beams took place in 2010. The first physics run with secondary Be beams came in November/December 2011. Most important for the programme are runs with primary Ar and Xe beams, expected for 2014 and 2015, respectively. The collaboration between CERN and the iThemba Laboratory in South Africa is ensuring a timely optimization of the ion-source parameters. This all adds up to a future where the results from NA61 will allow a detailed study of the properties of the onset of deconfinement and a systematic search for the critical point of strongly interacting matter.

SuperKEKB goes in hunt of flavour at the terascale

A schematic view of SuperKEKB.
Image credit: KEK.

What is dark matter? Why is there more matter than antimatter in the universe? These are some of the most fundamental puzzles in modern particle physics. The clues to the answers might reside in the physics of electroweak symmetry breaking at the tera-electron-volt scale. Projects at the energy frontier, such as CERN’s LHC, are complementary to the approach of flavour physics, where measurements of rare processes at lower energies can be sensitive to new types of interaction. SuperKEKB will provide a promising path to such new physics by enabling detailed studies of the decay processes of heavy quarks and leptons.

From 1998 to 2010, KEK, the Japanese High-Energy Accelerator Research Organization, operated its B factory, KEKB – an asymmetric electron–positron collider, 3 km in circumference – and achieved the world’s highest luminosity of 2.11 × 10³⁴ cm⁻² s⁻¹, more than double the design value. There, the Belle experiment precisely analysed the characteristics of pairs of B and B̄ mesons produced in the collisions and confirmed the effect of CP violation as indicated by the theory of Makoto Kobayashi and Toshihide Maskawa, who both received the Nobel prize in physics in 2008.

The antechamber reduces the effect of electron cloud in the positron beam pipe.
Image credit: Rey Hori.

KEK has played a central role in flavour physics not only with the record-breaking KEKB but also with the world’s first long-baseline neutrino-oscillation experiment, KEK-to-Kamioka (K2K). Now, because large facilities are increasingly required for experiments in particle physics, a new style of international competition and co-operation has become necessary between institutes around the world. Together with the Japan Proton Accelerator Research Complex, a high-intensity proton facility built jointly with the Japan Atomic Energy Agency, KEK is continuing to play a leading role as one of the international focal points in flavour physics. As an international collaboration, SuperKEKB is open to the world and currently more than 400 physicists from 60 institutions in 17 countries and regions are working together to upgrade the Belle experiment to Belle II.

Forty times more collisions

SuperKEKB is essentially an upgrade project to increase the luminosity to 8 × 10³⁵ cm⁻² s⁻¹, or 40 times greater than at KEKB. Engineering a higher luminosity in a collider involves both increasing the beam current and reducing the beam size at the interaction point. The original approach for the upgrade was to increase the beam current and the beam–beam parameter – the “high current option”.

In March 2009 the SuperKEKB design changed course, based on ideas from SuperB: using a large crossing-angle at the interaction point and beams squeezed to nanometre scale to increase the luminosity – the “nano-beam option”. The scheme has the advantage of reaching 8 × 10³⁵ cm⁻² s⁻¹ with only double the current, but that is not to say that there are no other challenges.
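
As a rough illustration of how the factor of 40 arises – luminosity scales approximately as the product of beam current and vertical beam–beam parameter divided by β*y at the interaction point – the sketch below combines the doubled current with an assumed factor-20 squeeze of β*y; the 20 is a round number chosen for illustration, not a quoted machine parameter:

```python
L_KEKB = 2.11e34       # cm^-2 s^-1, the KEKB record
current_factor = 2.0   # roughly double the KEKB beam current
beta_y_squeeze = 20.0  # assumed reduction of beta*_y (illustrative)

# L ~ I * xi_y / beta*_y, with the beam-beam parameter xi_y held similar
l_superkekb = L_KEKB * current_factor * beta_y_squeeze
print(f"L ~ {l_superkekb:.1e} cm^-2 s^-1")  # ~8.4e35, near the 8e35 design
```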

Artistic impression of Belle II.
Image credit: Rey Hori.

The high current (3.6 A in the 4 GeV positron ring, 2.6 A for the 7 GeV electron ring) will lead to “electron cloud” effects in the positron ring when synchrotron radiation hits the walls of the beam pipe. To counteract such phenomena, SuperKEKB will use a special vacuum chamber for the positron ring with two small “antechambers”, one on the outside of the main beam pipe and one on the inner side. One suppresses the formation of electron clouds and the other contains vacuum pumps. In addition, the chambers between magnets will be wrapped with solenoid coils to reduce the effect of secondary electrons. Test antechambers developed at KEK from a prototype produced in collaboration with a team at the Budker Institute of Nuclear Physics have already been tried out in sections of the KEKB ring.

Further changes

The new design will be based on a larger crossing-angle (4.8°), requiring a redesign of the final focus, with eight higher-gradient quadrupole magnets placed nearer the interaction region to squeeze the beam to nanometre scale. These are unusually small superconducting quadrupole magnets, with inner diameters of only 4–8 cm, or one-sixth the diameter of the KEKB quadrupoles. The magnets require current controls to protect them from quenches, because the current density in the superconductor can go beyond 2000 A/mm² and on quenching the temperature would rise to more than 1000 K within around 50 ms. In addition, the accelerator design demands much smaller fabrication errors to ensure a quadrupole field quality of a few parts in 10⁴.

Therefore, while SuperKEKB will be based on KEKB, various parts of the accelerator will be replacements. Others will be new. A new positron source and a flux concentrator will help in generating a high-current positron beam and a new damping ring (135 m in circumference) will be added to reduce the emittance of the positron beam. Moreover, as well as changing the beam pipe in the main positron ring for one with “antechambers” as described above, the dipole magnets in this ring will be replaced. A new RF system will also help in accelerating high-current beam.

Because the SuperKEKB accelerator will produce electron–positron collisions at a much higher rate, the detector will also need to be upgraded. The aim is to accumulate an integrated luminosity of 50 ab⁻¹ by 2021, which is 50 times more data than the previous Belle detector acquired. Thus the Belle II detector will also be an upgrade. The data-acquisition system will be redesigned with a network of optical fibres. Trigger electronics will be replaced with a new system. A pixel detector will be added for better resolution in particle tracking and a silicon vertex detector will cover a larger solid angle. A central tracking chamber, a time-of-propagation chamber and an aerogel ring-imaging Cherenkov detector are also being newly built.

The first beam of SuperKEKB is expected in 2014 and the physics run will start in 2015. Ultimately, Belle II should collect 40 times more B-meson samples per second than its predecessor – roughly 800 BB̄ pairs per second. This will allow the Belle II collaboration to examine the effects of unknown particles in a higher energy region in the search for clues to new physics. Belle II at SuperKEKB will form one of the international focal points for particle physics in the coming decade.
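
The quoted rate is consistent with a back-of-the-envelope estimate, rate = L × σ, taking the approximate e⁺e⁻ → Υ(4S) → BB̄ cross-section of about 1.1 nb:

```python
L_DESIGN = 8e35     # design luminosity, cm^-2 s^-1
SIGMA_BB = 1.1e-33  # ~1.1 nb for e+e- -> Y(4S) -> B-Bbar, in cm^2

rate = L_DESIGN * SIGMA_BB
print(f"~{rate:.0f} B-Bbar pairs per second")  # ~880, consistent with ~800
```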

The SuperB approach to high luminosity

In 2010 the Italian government gave the green light for SuperB – a next-generation B factory based on an asymmetric electron–positron (e⁺e⁻) collider, which is to be constructed on the Tor Vergata campus of Rome University (figure 1). The intention is to deliver a peak luminosity of 10³⁶ cm⁻² s⁻¹ to allow the indirect exploration of new effects in the physics of heavy quarks and flavours through studies of large samples of B, D and τ decays. Building on the wealth of results produced by the previous two B factories, PEP-II and KEKB, and their associated detectors, BaBar and Belle, SuperB will produce an unprecedented amount of data and make accessible a range of new investigations.

The SuperB concept represents a real breakthrough in collider design. The low-emittance ring has its roots in R&D for the International Linear Collider (ILC) and could be used as a system-test for the design of the ILC damping ring. The invention of the crab-waist final focus could also have an impact on the current generation of circular colliders.

The SuperB e⁺e⁻ collider will have two rings of 1.25 km circumference, one for electrons at 4.18 GeV and one for positrons at 6.7 GeV. There will be one interaction point (IP), where the beams will be squeezed down to a vertical size of only 36 nm rms. The design results from a combination of the knowledge acquired at the previous B factories and the concepts developed for linear colliders.

The innovative crab-waist principle, which has been successfully tested at Frascati’s Φ factory – the DAΦNE e⁺e⁻ collider – will allow SuperB to overcome some of the requirements that have proved problematic in previous e⁺e⁻ collider designs, such as high beam currents and short bunches. While SuperB will have beam currents and bunch lengths similar to those of its predecessors, the use of smaller emittances and the crab-waist scheme for the collision region should produce a leap in luminosity from some 10³⁴ cm⁻² s⁻¹ to an unprecedented level of 10³⁶ cm⁻² s⁻¹, without increasing the background levels in the experiments or the machine’s power consumption.

High luminosity in particle colliders depends not only on high beam intensity; it also requires a small horizontal beam size and horizontal emittance (a measure of the beam phase space), together with a very small value of the vertical β function at the IP, β*y. (The β function in effect gives the envelope of the possible particle trajectories and has a parabolic behaviour around the IP.) However, β*y cannot be made much smaller than the bunch length without running into trouble with the “hourglass” effect, in which particles in the bunch tails experience a much higher β*y and a loss in luminosity.

Unfortunately, it is difficult to shorten the bunch length in a high-current ring without exciting instabilities and therefore paying in radio-frequency voltage. One way to overcome this is to make the beam crossing-angle relatively large and the horizontal beam size small, so that the region where the two colliding beams overlap is much smaller than the bunch length. In addition, in the crab-waist scheme, two sextupoles at suitable phase-advances from the IP are used to rotate the waist in the β function of one beam such that its minimum value is aligned along the trajectory of the other beam, so maximizing the number of collisions occurring at the minimum β (figure 2). This technique can substantially increase the luminosity without having to decrease the bunch length. A crab-waist scheme was tested at DAΦNE in 2008, allowing a peak luminosity three times higher than the previous record for similar currents in the two rings.
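
The geometry can be sketched with illustrative numbers (not the actual SuperB parameters): with half crossing-angle θ and horizontal beam size σx, the bunches overlap only over a length of order σx/θ, and it is this length – rather than the full bunch length – that β*y has to match:

```python
sigma_x = 10e-6  # assumed horizontal beam size at the IP: 10 um
theta = 30e-3    # assumed half crossing-angle: 30 mrad
sigma_z = 5e-3   # assumed bunch length: 5 mm

overlap = sigma_x / theta  # length over which the two bunches actually meet
print(f"overlap ~ {overlap * 1e3:.2f} mm, bunch length = {sigma_z * 1e3:.0f} mm")
# beta*_y needs only to be comparable to the ~0.3 mm overlap region, not to
# the 5 mm bunch length, which is what defeats the hourglass limit.
```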

The combination of large crossing-angle and small beam sizes, emittances and beam angular divergences at the IP in the SuperB design will also be effective in decreasing the backgrounds present at the IP with respect to the previous B factories. A limited beam current also contributes to keeping these levels very low at SuperB. However, luminosity-related backgrounds are still relevant and impose serious shielding requirements.

The high luminosity of SuperB, representing an increase of nearly two orders of magnitude over the current generation of B factories, will allow exploration of the contributions of physics beyond the Standard Model to the decays of heavy quarks and heavy leptons. Indeed, new physics can affect rare B-decay modes through observables such as branching fractions, CP-violating asymmetries and kinematic distributions. These decays do not typically occur at tree level, so their rates are strongly suppressed in the Standard Model. Substantial variations in the rates and/or in angular distributions of final-state particles could result from the presence of new heavy particles in loop diagrams, providing clear evidence of new physics. Moreover, because the pattern of observable effects is highly model-dependent, measurements of several rare decay modes can provide information regarding the source of the new physics.

The SuperB data sample will contain unprecedented numbers of charm-quark and τ-lepton decays. Such data are of great interest, both for improving the precision of existing measurements and for their sensitivity to new physics. This interest extends beyond weak decays; the detailed exploration of new charmonium states is also an important objective. Limits on rare τ decays, particularly lepton-flavour-violating decays, already provide important constraints on models of new physics and SuperB may have the sensitivity to observe such decays. The accelerator design will allow for longitudinal polarization of the electron beam, making possible uniquely sensitive searches for a τ electric dipole moment, as well as for CP-violating τ decays.

Studies of CP-violating asymmetries are among the primary goals of SuperB. In addition to the known sources of CP violation, new CP-violating phases arise naturally in many extensions of the Standard Model. These extra phases produce measurable effects in the weak decays of heavy-flavour particles. The detailed pattern of these effects, as well as of rare-decay branching fractions and kinematic distributions, will be made accessible by SuperB’s high luminosity. Such studies will provide unique constraints in, for example, ascertaining the type of supersymmetry breaking or the kind of extra-dimensional model behind the new phenomena. A natural consequence of such detailed studies will be an improved knowledge of the unitarity triangle, to the limit allowed by theoretical uncertainties.

In addition to pursuing important research in fundamental physics, SuperB is also taking up the challenge to combine it with a rich programme of applied physics: the synchrotron light emitted by the machine will have a high brightness and will be suitable for studies in life sciences and material science. Current proposals include: the creation and exploitation of beamlines for laser ablation on biomaterials (a technique that, by modifying the surface of the material with a laser, allows the creation of patterns of biological systems); femtochemistry studies (a field that includes the structural study of small numbers of molecules); and the development of new phase-contrast imaging techniques to improve the reconstruction of morphological information related to tissues and organs.

The construction of SuperB, which is funded by the Italian government and supported by a large international collaboration that includes scientists from Europe, the US and Canada, is planned to take about six years. The newly established “Nicola Cabibbo Laboratory” Consortium will provide the necessary infrastructure for the exploitation of the new accelerator. In November, the Consortium appointed Roberto Petronzio as director with an initial three-year mandate. The machine will reuse several components from PEP-II, such as the magnets, the magnet power-supplies, the RF system, the digital feedback-system and many vacuum components. This will reduce the cost and engineering effort needed to bring the project to fruition.

The exciting physics programme foreseen for SuperB can only be accomplished with a large sample of heavy-quark and heavy-lepton decays produced in the clean environment of an e⁺e⁻ collider. The programme is complementary to that of an experiment such as LHCb at a hadron collider. Indeed, a “super” flavour factory such as SuperB will, perforce, be a partner with experiments at the LHC, and eventually at an ILC, in ascertaining exactly what kind of new physics nature has in store.

Accelerating sustainability in large-scale facilities

The United Nations General Assembly has designated 2012 the international year of sustainable energy for all. With leadership from UN Secretary-General Ban Ki-moon, a coordinating group of 20 UN agencies (UN-Energy) will tackle the crucial challenges of sustainable access to energy, energy efficiency and renewable energy at the local, national, regional and international levels. So what can big science do for global climate and energy challenges?

Catherine Césarsky, High Commissioner for Atomic Energy and member of the CERN Council, believes that research infrastructures (RIs) in particular are appropriate tools for addressing these challenges scientifically, validating and providing scientific knowledge and in this way contributing to the decision-making process. When it comes to technical solutions, large-scale RIs, being intrinsically energy intensive, can provide their know-how in improving energy management and share their mid- and long-term strategies for reliable, affordable and sustainable carbon-neutral energy supply.

Act now, save later

Research infrastructures have considerable expertise regarding energy savings and efficiency approaches.

It was with this message that Césarsky opened the first Joint Workshop on Energy Management in Large Scale Research Infrastructures, which was organized by CERN, the European Spallation Source (ESS) and the European Association of National Research Facilities (ERF). It took place in Lund, where the ESS will be built as the first carbon-neutral research facility, and brought international experts on energy together with representatives from research laboratories and future large-scale research projects all over the world. The objective was to identify the challenges and best practice for energy efficiency, optimization and supply at large research facilities and to consider how these capabilities could be better oriented to respond to this general challenge for society.

The quality of energy required and the levels of consumption mean that RIs have considerable, unique expertise regarding energy savings and efficiency approaches, ranging from research in materials sciences to demonstrators and prototypes of energy efficiency. In particular, the workshop helped to identify several key points:

• The development and demonstration of co-generation (combined heat and power) plus renewable energy go hand in hand with the improvement in the quality of electrical power and a better use of transmission lines (in peak-shaving methods to reduce power drawn at peak times and in storage), while decreasing instrumental black-outs.

• It is important to maximize the re-use of thermal energy generated in various systems, both for heating and cooling (e.g. with heat pumps and absorption refrigerators), thus decreasing the use of primary energy.

• The design of systems should allow the recovery of heat at higher temperatures than in usual design standards, to allow a better re-use and an interaction with local communities to develop district heating if not yet available.

• While new RIs are in the position to introduce energy-saving approaches, there is a need for special support to allow existing RIs to re-fit and increase efficiency; this could be a driver for improved returns to the hosting territory, through increased technology and knowledge transfer.

RIs employ some of the best technicians and applied researchers in the world, who are trained continuously in cutting-edge technology by responding to the technical challenges brought to them by the best researchers. RIs could be the test-bed for completely innovative research-based solutions, such as the use of superconducting lines to manage different energy flows, the installation of superconducting magnetic-energy storage for energy quality control, the transformation of energy between radio frequency and direct current, and other novel schemes involving advanced concepts.

An increase in efficiency in the use of energy will be the major contributor to limiting carbon emissions at large-scale facilities. Energy efficiency will be driven by introducing and demonstrating appropriate methods and breakthrough technologies, including the recycling of waste heat into useful applications.

A recurrent theme of discussion during the workshop was the importance of evaluating the different energy options both in the design of new research facilities and in the upgrade of existing ones. The inclusion of energy-efficiency and recycling requirements at the design stage opens many possibilities and initiatives to all of the stakeholders. For example, high-temperature waste water can be recovered with high efficiency, but equipment manufacturers are rarely asked if high-temperature cooling water can be used to cool their equipment.

As recommendations, the workshop proposed that:

• The design and the construction of facilities should aim at optimizing scientific performance while including the best approach to energy use.

• The optimal balance between investment and operation costs must have a long-term view. A total “cost of ownership” approach is required.

• A clear and objective assessment of overall energy consumption – equipment, buildings and associated information and communication technologies – must be available.

• Relatively fine-grained monitoring and active feedback-control tools (including modelling), as well as the specific role of an energy manager, are required.

Towards renewables

In addition to technical aspects, the workshop tackled socioeconomic issues in parallel sessions. These advised investigation and a long-term approach in matters such as: government legislation (tax exemptions, permits and licenses), contracts with energy suppliers, innovative financing, understanding of the energy-load profile, contracts for steady-state and peaks, socioeconomic and environmental impacts and benefits at the host site.

Renewable energies will be important as future sustainable energy sources for RIs. In turn, the RIs can be instrumental in supporting renewable-energy research and technological development through, for example, new and improved materials (for photovoltaics, fuel cells, improved motors, turbines etc.), the development of environmentally friendly biofuels, and new and safe methods of carbon capture.

Large-scale RIs are able to generate innovative solutions that can be used profitably elsewhere and be at the base of “win-win” partnerships with industries. Their capabilities and staff could also be mobilized for large international projects, e.g., the development of solar power generated in the sun-rich regions of North Africa and the Middle East (MENA). This could supply up to 15% of Europe’s energy needs by 2050 as advocated by the DESERTEC foundation. Technologies to exploit this potential, such as concentrated solar power, exist and are proven. Realizing such ambitious projects, however, will require a new energy and science partnership between Europe and MENA and a closer integration of MENA into the European Research Area.

The workshop showed that several RIs are already mobilizing their unique resources and technical skills to respond to the “energy grand challenge”. They can act as a test-bed for implementing appropriate energy-supply and procurement schemes as well as efficient energy-use. RIs can also be particularly effective in training young researchers, operators and managers to face the upcoming energy challenges in order to co-operate on R&D, exchange best practices and provide know-how. Planned by Frédérick Bordry (head of CERN’s Technology Department), Thomas Parker (ESS energy manager) and Carlo Rizzuto (chair of ERF and president of Sincrotrone Trieste – Italy), the workshop attracted 150 participants, indicating a clear requirement for this type of initiative. The unanimous consensus on such a need was supported by CERN with the proposal to host a second workshop in 2013.

NEMO 3: the goals, results and legacy

Located under 1700 m of rock in the Modane Underground Laboratory (LSM) at the middle of the Fréjus Rail Tunnel, the NEMO 3 experiment was designed to search for neutrinoless double beta decay, with the aim of discovering the nature of the neutrino – whether it is a Majorana or Dirac particle – and measuring its mass. The experiment ran for seven years before it finally stopped taking data in January 2010. While the sought-after decay mode remained elusive, NEMO 3 nevertheless made impressive headway in the study of double beta decay, providing new limits on a number of processes beyond the Standard Model.

Standard double beta decay (ββ2ν) involves the simultaneous disintegration of two neutrons in a nucleus into two protons, with the emission of two electrons accompanied by two antineutrinos: (A,Z) → (A,Z+2) + 2e⁻ + 2ν̄. It is a second-order Standard Model process and for it to occur the transition to the intermediate nucleus accessible by normal beta decay, (A,Z) → (A,Z+1) + e⁻ + ν̄, must be forbidden by conservation of either energy or angular momentum. In nature, there are 70 isotopes that can decay by ββ2ν and experiments have observed this process in 10 of these, with half-lives ranging from 10¹⁸ to 10²¹ years. However, ββ2ν decay is not sensitive to the nature or mass of the neutrino, unlike double beta decay with no emitted neutrinos (ββ0ν). This process, (A,Z) → (A,Z+2) + 2e⁻, is forbidden by the Standard Model electroweak interaction because it violates the conservation of lepton number (ΔL = 2). Such a decay can occur only if the neutrino is a Majorana particle (a fermion that is its own antiparticle). Non-Standard Model processes that can lead to ββ0ν decay include the exchange of a light neutrino, in which case the inverse of the ββ0ν half-life depends on the square of the effective neutrino mass. Other possible processes involve a right-handed neutrino current, a Majoron coupling or supersymmetric-particle exchange.

The experimental signature for double beta-decay processes appears in the sum of the energy of the two electrons. For ββ0ν decay, this would have a peak at the Qββ transition energy (typically 2–4 MeV), while for ββ2ν decay it takes the form of a continuous spectrum from zero to Qββ. There are also two other observables: the angular distribution between the two electrons and the individual energy of the electrons. These two variables can distinguish which process is responsible for ββ0ν decay, if it is observed.

The NEMO collaboration – where NEMO stands for the Neutrino Ettore Majorana Observatory – has been working on ββ0ν decay since 1989. The design of the NEMO 3 detector, which evolved from two prototypes, NEMO 1 and NEMO 2, began in 1994 and construction started three years later. The method uses a number of thin source foils of enriched double beta-decay emitters surrounded by two tracking volumes and a calorimeter.

The challenge for any search for ββ0ν decay is the control of the backgrounds from cosmic rays, natural radioactivity, neutrons and radon. The background comes from any particle interactions or radioactive decays that can produce two electrons in the source foils. Because the signal level is so low, even third- and fourth-order processes can be a problem. Cosmic rays are suppressed by installing the experiment in a deep underground laboratory, as at the LSM. Natural radioactivity is reduced by material selection and purification of the source isotopes: the source foils in NEMO 3 had a radioactivity level a million times less than the natural level of radioactivity (around 100 Bq/kg). Neutrons and high-energy γ-rays are suppressed by specially designed and adapted shielding.

The NEMO 3 detector

The principle of NEMO 3 was to detect the two emitted electrons and to measure their summed energy, their angular distribution and their individual energies. The identification of the electrons drastically reduces the background compared with the calorimetric techniques of other experiments. The price of this advantage is a rather modest energy resolution, partly as a result of the electrons’ energy loss in the source foils. However, the experimental sensitivity for ββ0ν depends on the product of the energy resolution and the number of background events. The source foils in NEMO 3 had a thickness of around 100 μm, a compromise between the amount of radioactive isotope and the electrons’ energy losses.

Another advantage of this experimental technique is the possibility of using different isotopes. The double beta-decay source inside NEMO 3 had a total mass of 10 kg, shared as follows: 6.914 kg of 100Mo, 0.932 kg of 82Se, 0.405 kg of 116Cd, 0.454 kg of 130Te, 37.0 g of 150Nd, 9.4 g of 96Zr and 7.0 g of 48Ca. These isotopes were enriched in Russia. In addition, two ultrapure sources of copper (0.621 kg) and natural tellurium (0.491 kg) were used to measure the external background. This was the first time that a detector had measured seven different double beta-decay emitters at the same time.

The NEMO 3 detector was made of 20 identical sectors. The tracking volume consisted of 8000 drift chambers working in Geiger mode. The volume was filled with a mixture of helium, 4% alcohol, 1% argon and a few parts per million of water to ensure the stable behaviour of the chambers. Electrons with energy down to 100 keV could be tracked with an efficiency greater than 99%. The calorimeter was made of 2000 plastic scintillators coupled to low-radioactivity Hamamatsu phototubes. The choice of plastic scintillator was driven by its low Z (to reduce back-scattering), its low radioactivity and its cost. The calorimeter allowed measurement of both the energy (σ = 3.6% at 3 MeV) and the time of flight (σ = 300 ps at 1 MeV).

A coil created a magnetic field of 0.003 T to enable identification of the charge sign of the electrons. The shielding was made of 20 cm of iron, to reduce the γ-ray background, and 30 cm of water, to reduce the neutron background. A tent flushed with air containing just 15 mBq/m³ of radon surrounded the whole detector.

The unique feature of the NEMO 3 experiment was its ability to identify electrons, positrons, γ-rays and delayed α-particles. Figure 2 shows a typical double beta-decay event in NEMO 3 with two electrons emitted from a source foil, with the track curvature in the magnetic field identifying the charge and the struck scintillator blocks measuring the energy and the time of flight. The timing is important for distinguishing a background electron crossing the detector (Δt = 4 ns) from two electrons emitted together from a source foil (Δt = 0 ns).
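
The 4 ns figure is what one expects for a relativistic electron crossing a detector of this scale. A quick order-of-magnitude check, assuming an extra flight path of roughly 1.2 m (the geometry is an assumption for illustration):

```python
C = 0.2998  # speed of light, m/ns

path_difference = 1.2  # m, assumed extra flight path for a crossing track
dt = path_difference / C
print(f"delta t ~ {dt:.0f} ns")  # ~4 ns, versus ~0 ns for two electrons
                                 # leaving the source foil together
```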

The experiment has measured the background through various analysis channels: single e, e+γ, e+α, e+α+γ, e+γ+γ, e+e+ and so on. This allows measurements to be made of the actual backgrounds from residual contamination of the source foils as well as from the surrounding materials. Figure 3 demonstrates the ability of the experiment to identify the many sources of external background in the eγ channel (as an example) for the 100Mo source foil.

NEMO 3 has produced an impressive list of results. The main result is, of course, related to the search for ββ0ν decay. Figure 4 shows the sum of the electron energies for 7 kg of 100Mo after 4.5 years of data-taking, zoomed into the region where the signal for ββ0ν decay is expected. The measurement of all of the kinematic parameters and the identification of all of the sources of background allow a 3D likelihood analysis to be performed. The result is a limit on the half-life of T1/2 > 1 × 10²⁴ years, corresponding to a neutrino-mass limit <mν> < 0.3–0.9 eV. The range corresponds to the spread associated with the different nuclear matrix-element calculations that must be used to extract the effective neutrino mass. This limit, obtained with 7 kg of 100Mo, is one of the best, together with the limits of <mν> < 0.3–0.7 eV from the Cuoricino experiment (12 kg of 130Te) and <mν> < 0.3–1.0 eV from the Heidelberg–Moscow experiment (11 kg of 76Ge).
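
For light Majorana-neutrino exchange, the inverse half-life is proportional to the square of the effective neutrino mass, 1/T1/2 = G|M|²(<mν>/me)², with G a phase-space factor and M the nuclear matrix element, so the mass limit scales as 1/√T1/2. A sketch reusing the quoted numbers to show how the limit would tighten for hypothetical longer half-life limits:

```python
import math

T_LIMIT = 1e24        # years: the NEMO 3 half-life limit for 100Mo
M_RANGE = (0.3, 0.9)  # eV: the NME-dependent mass range at that limit

# <m_nu> scales as 1/sqrt(T_half); project the range to longer limits.
for t_new in (1e25, 1e26):  # hypothetical future limits
    scale = math.sqrt(T_LIMIT / t_new)
    lo, hi = (m * scale for m in M_RANGE)
    print(f"T_half > {t_new:.0e} y  ->  <m_nu> < {lo:.2f}-{hi:.2f} eV")
```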

One possible scenario for ββ0ν involves the emission of a Majoron, the hypothetical massless boson associated with the spontaneous breaking of baryon-number minus lepton-number (B−L) symmetry. NEMO 3 has obtained the best limit so far on the Majoron–neutrino coupling, gM < (0.4–1.8) × 10⁻⁴. The experiment has also set a limit on the λ parameter in models where a right-handed current exists for neutrinos, λ < 1.4 × 10⁻⁶. These limits were obtained by analysing the angular distributions of the decay electrons and are therefore unique to NEMO 3.

In addition, NEMO 3 has measured the half-lives for seven ββ2ν decays, providing a high-precision test of the Standard Model and nuclear data that can be used in theoretical calculations. In seven years, more than 700,000 ββ2ν events from 100Mo were recorded. Figure 5 shows the energy spectrum, angular distribution and single-electron energies measured for 100Mo. The ββ2ν decay of this nucleus to the 0+ excited state has also been directly detected for the first time, and the first limit on a bosonic component of the neutrino has been obtained.

The NEMO 3 detector has demonstrated a powerful method for searching for neutrinoless double beta decay, with the unique capability of measuring all kinematic parameters of the decay. The next step for the NEMO collaboration is to build the SuperNEMO detector, which will accommodate 100 kg of source foil (82Se, 150Nd or 48Ca) to reach a sensitivity of 50 meV on the effective mass of the neutrino. A demonstrator module is under construction in several laboratories around the world and will start operation in 2013 in the LSM, with 7 kg of 82Se. The main improvements of this larger detector over NEMO 3 will be the energy resolution (σ = 1.7% at 3 MeV) and a reduction of the background by a factor of 10. The demonstrator will improve the current limit on the effective neutrino mass and is expected to reach the goal – never achieved before – of a zero-background experiment for 7 kg of source and two years of data-taking. With this demonstration, the collaboration will be ready to build more SuperNEMO modules, up to the maximum source mass possible.

• The NEMO and SuperNEMO collaboration is formed by laboratories from France, the UK, Russia, the US, Japan, the Czech Republic, Slovakia, Ukraine, Chile and Korea. The LSM is operated by the CNRS and the CEA.

Isotope toolbox turns 10

The evolution of nuclear structure through the list of stable nuclear isotopes was well established by the late 1960s. During the following decades, however, the discovery of more and more short-lived nuclei expanded the nuclear chart – revealing several surprises. For example, the nuclear shells, which give the classical “magic numbers” along the line of stability, have been seen to change position and sometimes even to dissolve in highly unstable (exotic) nuclei. Only now is the field approaching a fundamental understanding of how nuclear shells evolve. To follow these changes in nuclear structure, nuclei must be probed in many complementary ways. Therefore the leading nuclear-physics facilities not only give access to many different isotopes but also allow a variety of experiments to be performed.

The introduction of REX-ISOLDE at CERN’s ISOLDE facility a decade ago (Kester et al. 2000) allowed a major step forward, as ions produced at the Isotope Separator On-Line (ISOL) facility could be accelerated to a completely new energy region. Before the introduction of REX-ISOLDE, the experiments at ISOLDE took place at low energy (up to 60 keV) via decay studies, ion-beam measurements or manipulation. The natural extension of these techniques was to include reaction studies such as Coulomb excitation, capture reactions and transfer reactions. The challenge was to devise a universal, fast, efficient and cost-effective acceleration scheme that would take full advantage of the large range of isotopes available at ISOLDE.

The idea for the REX-ISOLDE “post-accelerating” scheme emerged in 1994, the acronym coming from “Radioactive beam EXperiments at ISOLDE”. The added accelerator had to increase the beam energy to a few million electron-volts per atomic mass unit (MeV/u). Its key ingredient is an innovative scheme for preparing ions that combines a Penning trap with an electron-beam ion source (EBIS) – REXTRAP and REXEBIS, as illustrated in figure 1. The semi-continuously released radioactive 1+ ions from the ISOLDE target, produced by the impact of 1.4 GeV protons from the Proton Synchrotron Booster, are accumulated and phase-space cooled in the buffer-gas-filled Penning trap, before being sent in a bunch to REXEBIS. So-called “charge breeding” takes place inside the EBIS, i.e. the conversion of the ions from 1+ to q+ by bombardment with a dense, energetic electron beam. The highly charged ions, now with a reduced mass-to-charge ratio (A/q < 4.5), are extracted and separated before being post-accelerated in a room-temperature linear accelerator (linac). The high charge state allows for efficient acceleration in the compact linac. REX-ISOLDE pioneered this charge-breeding scheme for radioactive ions and now several facilities around the world are replicating the concept (Wenander 2010).

Versatile acceleration

Although REX-ISOLDE has a modest final beam energy compared with other CERN accelerators, it compensates by being agile and flexible. It was initially designed to perform post-acceleration of neutron-rich Na and K isotopes, all with masses below A = 50. Since then the mass range has been extended and radioactive elements from light 8Li to heavy 224Ra have been accelerated for experiments. To accelerate the heavier elements, high charge states – for example, above 50+ for Ra – have to be achieved to fulfil the A/q requirement of the linac. Neither stripping foils nor gas-jet stripping can be used to obtain such charge states at low energies, so the challenging task falls entirely on the charge breeder. By increasing the breeding time, sometimes up to 300 ms, REXEBIS can nevertheless efficiently convert the ions to the required high charge states.
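
The required charge states follow directly from the linac acceptance A/q < 4.5: the EBIS must deliver at least q = ⌈A/4.5⌉. A minimal sketch (the isotopes are chosen for illustration):

```python
import math

A_OVER_Q_MAX = 4.5  # linac acceptance

for name, a in (("8Li", 8), ("129Cs", 129), ("224Ra", 224)):
    q_min = math.ceil(a / A_OVER_Q_MAX)
    print(f"{name:>6}: q >= {q_min}+  (A/q = {a / q_min:.2f})")
# 224Ra needs q >= 50+, matching the "above 50+ for Ra" quoted above
```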

Because REX-ISOLDE can also cover the high mass-range, it is possible to make full use of ISOLDE’s capability to produce heavy radioactive elements by spallation processes in targets of uranium carbide. This is a unique feature that, so far, no other radioactive ion-beam facility can match. The combination of a Penning trap and an EBIS has also proved capable of accepting almost all chemical elements produced by ISOLDE, because the ions are kept within the traps without any surface contact.

The duration of the cooling and the charge breeding is of secondary importance for radioactive elements with long half-lives. On the other hand, some radioactive isotopes of interest have short half-lives, potentially leading to decay losses of the rare ions. By optimizing the cooling and the breeding, even elements such as 11Li (t1/2 = 8.5 ms) and 12Be (t1/2 = 23.6 ms) have been post-accelerated successfully. To reduce the decay losses further, continuous injection from the ISOLDE target-ion source into REXEBIS – without prior bunching and cooling in REXTRAP – can be performed at the expense of a slightly lower transmission efficiency.

The purity of the radioactive ion beams is an important factor. Because there are often only a few thousand ions per second, corresponding to subfemtoampere beams, there is a real interest in suppressing as many contaminating beam components as possible. The excellent vacuum of REXEBIS is one of the requirements for good beam purity. Still, even with a vacuum better than 10⁻¹¹ mbar, residual-gas ions – such as C, N, O, Ar and Ne (the last being the buffer gas in the Penning trap) – usually dominate the beam extracted from the EBIS. In the A/q spectrum shown in figure 2, the residual-gas ions appear in discrete peaks, while the background between the peaks is very clean. Thus, by correctly choosing the A/q value of the radioactive beam – in this case abundantly injected 129Cs – the contaminating beam components can be avoided. By adjusting the time the radioactive ions are trapped within the EBIS, and therefore the time the ions are exposed to the electron-impact ionization process, the charge state and hence the A/q of the extracted ions is changed.

The low-energy side of REX is a toolbox full of means for beam-manipulation exercises. One of the latest tools to be added is the “in-trap decay” method, used for producing elements that are not readily available from ISOLDE for chemical reasons, such as Fe. A short-lived isotope of abundantly produced Mn is taken from ISOLDE, injected into the EBIS and kept there for a few hundred milliseconds until the major fraction has decayed to Fe, before being accelerated to the experiment. This method can be used to access isotopes of several elements new to ISOLDE, such as B, Si, Ti and Zr.

Another tool, aimed at improving the beam purity and suppressing isobaric contaminants from ISOLDE, for instance Rb superimposed on Sr, is the molecular sideband method. Instead of extracting the ions of interest as atomic 1+ ions from ISOLDE, a carrier gas – in this case CF4 – is introduced and the ions are extracted as SrF+. Because of the electron configuration, the Rb contaminant does not form an RbF+ molecule and can therefore be suppressed in the ISOLDE separator. Inside the EBIS, the SrF+ is broken up and the Sr charge bred in the normal way before being accelerated to the experiment.

Yet another method of beam purification is to make use of the inherent mass-selectivity of the Penning trap. The injected ion cloud from ISOLDE, containing both the ion of interest and the isobaric contamination, is excited to a large radius inside the 3 T magnetic field of REXTRAP. Thereafter, a mass-selective cooling mechanism is applied, re-centring only the ions of interest, while the contamination is lost on extraction. A mass resolution of the order of 30,000 – a factor of six higher than with the High-Resolution Separator at ISOLDE – has been demonstrated for ions with mass number A in the range 30 to 40.

Selected results

At the start, REX-ISOLDE made use of five room-temperature accelerating cavities to reach a maximum energy of 2.2 MeV/u. In 2004, a 9-gap interdigital H-type cavity was added to the linac, which boosted the final energy to approximately 2.9 MeV/u. Through stepped activation of the six accelerating cavities, the energy of the ion beam can be varied from 300 keV/u (the energy of the RF quadrupole cavity) up to the maximum energy.

The demand for beam time by the experiments is high and many different beams – up to 10 elements and 20 isotopes, plus several stable calibration beams – have to be delivered every year. This has to be done efficiently, in terms of both set-up time and beam transmission, because the exotic ions are difficult to produce. Until now, REX-ISOLDE has accelerated close to 100 isotopes of 30 different elements (Van Duppen and Riisager 2011).

A major theme in the experiments performed so far has been the tracking of the evolution of nuclear shells. The first hints for the breaking of the classical magic numbers came from experiments at CERN on neutron-rich nuclei with about 20 neutrons, in what is now called the “island of inversion”. Several REX experiments have contributed to clarifying the structure in this region. One of the latest combined a radioactive 30Mg beam with a radioactive tritium target to do two-neutron transfer to two 0+ states in 32Mg, the data being consistent with the closed-shell configuration in the excited state rather than the ground state.

In the region of the classical neutron magic number 50, a campaign of experiments has probed how shells evolve towards 78Ni, presumably still a double magic nucleus. Both transfer- and Coulomb-excitation experiments have been performed, the latter including one that made use of another speciality of ISOLDE: isomeric beams. ISOLDE’s laser ion source can in certain cases – such as the heavy Cu isotopes – produce beams of an isotope that is mainly in either its ground state or in a long-lived (isomeric) state. This extra selectivity helps greatly in interpreting complex spectra that result when these beams react.

In the light-mass region, REX experiments are also testing the extent of shell-breaking in the nucleus 12Be (neutron number 8), but the physics implications are broader because the accelerated isotope in this case, 11Be, is a halo nucleus with an unusually large spatial extent. The halo structure implies that continuum degrees of freedom play an important role. An extreme example of this is the neighbouring halo nucleus 11Li, which is bound although its subsystem 10Li is particle-unbound. The structure of 10Li has also been studied in transfer experiments at REX.

As a final example, at the other end of the nuclear chart several experiments are tracking the sizeable changes in shape that are known to take place systematically among light isotopes of elements around Pb. Coulomb excitation favours transitions among collective states and has allowed the identification of different shapes in nuclei from 182Hg to 204Rn.

Apart from these nuclear-physics examples, the REX-ISOLDE accelerator has also been used for physics-application studies. These include the calibration of plastic foils of polyethylene terephthalate (PET) for use as solid-state nuclear-track detectors, and development work on diamond detectors.

REX-ISOLDE is a machine undergoing constant development to fulfil the changing requests of the experiments. Currently, the possibility of producing polarized nuclear beams with the tilted-foil method is being investigated. The beam energy will also be increased in a few years’ time – first to 5 MeV/u and ultimately to 10 MeV/u – within the framework of the High Intensity and Energy ISOLDE project, HIE-ISOLDE. This will be achieved by adding superconducting cavities to the accelerating linac (Pasini et al. 2008). The increased energy range will open up a wide field of reaction studies and keep REX-ISOLDE fully booked for at least another decade.
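A rough, textbook estimate suggests why the upgrade matters for reaction studies: around 5 MeV/u is roughly the lab energy at which a light beam can surmount the Coulomb barrier even of a heavy target. The Python sketch below uses the standard touching-spheres barrier formula with the conventional 1.2 fm radius parameter; the 30Mg-on-208Pb case is purely illustrative and is not taken from the HIE-ISOLDE design work.

# Sketch: touching-spheres Coulomb-barrier estimate, r_i = 1.2 * A_i^(1/3) fm.
# Purely illustrative back-of-the-envelope numbers.

E2 = 1.44  # e^2/(4*pi*eps0) in MeV*fm

def coulomb_barrier(z1, a1, z2, a2):
    """Barrier height (MeV, centre of mass) for two touching spherical nuclei."""
    r_sum = 1.2 * (a1 ** (1 / 3) + a2 ** (1 / 3))  # fm
    return E2 * z1 * z2 / r_sum

# Illustrative case: a 30Mg beam (Z = 12) on a 208Pb target (Z = 82)
z1, a1, z2, a2 = 12, 30, 82, 208
vb = coulomb_barrier(z1, a1, z2, a2)
# Non-relativistic conversion from centre-of-mass to lab beam energy per nucleon
e_lab_per_u = vb * (a1 + a2) / a2 / a1
print(f"barrier ~{vb:.0f} MeV (c.m.), ~{e_lab_per_u:.1f} MeV/u in the lab")

For this case the estimate gives a barrier of about 131 MeV, corresponding to roughly 5 MeV/u in the lab – just within reach of the first HIE-ISOLDE stage.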

Towards a more confident future


Gloom and despondency about the economic climate fill the newspapers to saturation point. The eurozone is, once again, destined to fragment and disappear – there is a depressingly strong sensation that many people never wanted it to thrive in the first place and would be delighted (not openly, of course) to see it go. Interestingly, the issue of the real climate has fallen off the radar screens for the moment. There is no doubt that these issues and many others – most notably the atrocious inequality of living standards worldwide – are of immense importance as we enter another new year. As one year follows another, are we, as a society, content to accept that these problems are insoluble and, if not, how can we – international scientists, engineers and administrators – best contribute?

In my view, a pervasive lack of confidence is floating around, but confidence is exactly what is needed now. Science has often been – in the past and increasingly so today – a source of inspiration, meaning and confidence in people’s lives. The Moon landing is an obvious example: vision creates the confidence and the confidence creates the reality. The all-important and necessary details follow from the vision and not – as many people today believe – the other way around.

When we stand back and look at big-science facilities today we are justified in feeling awed. The LHC has been a lesson in determination and belief. Well done to all! The space telescopes as well as the ground-based telescopes provide breathtaking images and insights, and move the whole human race away from superstition towards a more realistic and healthy view of our place in the universe.

Today, the fragmentation of science into different disciplines, which was very evident in the second and third quarters of the previous century, is in reverse. Collaboration across these boundaries is now evident and productive, and this will surely continue. When I went to the Institut Laue-Langevin in Grenoble in 1999 I came into contact with the incipient grouping of the then seven large European science laboratories, which became EIROforum. Not the snappiest name in the universe, it has to be said, but a surprisingly effective and egalitarian organization in which not only the minnows but also the big-hitters benefited. In many ways the international organizations mirror the national/international dilemma of the EU’s Directorate-General for Research, and there was a natural affinity, which opened doors, increased influence and created togetherness between the different players. I was a big fan of EIROforum and remain so. It has added an extra dimension.

However, the European Spallation Source (ESS), which I now head, is not yet ready to join this group of eight laboratories. The ESS sits plumb in the middle of the size-scale between the cosmic scale of the telescopes and the submicroscopic scale of the LHC; it deals with materials science in all of its complexity and diversity. The ESS is not yet operational and will not be until the end of this decade, when it will become the world’s most intense source of slow neutrons for the investigation of materials – from biomembranes and drug-delivery mechanisms to magnetic structures and metallurgical properties. But, crucially, the ESS is driven by the same engine that drives the LHC: a high-intensity proton linear accelerator with superconducting niobium accelerating cavities. And collaboration between the ESS and CERN is thriving.

Costs are important, however, and the spending of taxpayers’ contributions on scientific endeavours carries with it immense responsibility. The same, it must be said, applies to the spending of investors’ contributions in private companies. Not before time, it is becoming increasingly recognized that there are not two distinct colours of money in our economies. The capital cost of the ESS (€1.5 bn) could be funded comfortably from the bonuses awarded to US bankers – for 24 days! (All in 2008 values.)

To put this figure into context with other public building projects, the proposed 160 km high-speed rail link between London and Birmingham is expected to cost €20 bn – roughly €125 m per kilometre. At that rate, travellers would barely have reached the outskirts of London – some 12 km out – before the cash registers had passed the €1.5 bn figure for the entire ESS.

So let us keep matters in perspective and press our governments to stand by the promises made in Lisbon in 2000 and in Barcelona in 2002 to lift the percentage of European GDP spent on science from the then 1.85% to 3% by 2010. The figure remains below 1.9% today. Science has a role in society!

Constructing Reality: Quantum Theory and Particle Physics

By John Marburger
Cambridge University Press
Hardback: £17.99 $29
E-book: $23


This is easily the best introduction to quantum theory and particle physics that I have ever seen. The book is remarkable both for what it covers and for what it does not. Unlike many recent popular books, this one avoids references to unproven hypotheses such as grand unification, supersymmetry, strings and extra dimensions; the total space devoted to these ideas is under two pages. Rather, the book tells the story of the development of physical theory from Newtonian mechanics through the changes required by relativity and quantum mechanics, continuing all the way to a lucid description of the Standard Model, nuclear physics and the periodic table. It conveys tremendous excitement at how far physics has advanced while sticking to what is really known, and presents a clear and deep account of the physicist’s view of the basic bits that make up the world and how they interact.

The author provides a great deal of mathematical detail, but this never requires anything beyond what would be expected of a high-school student or first-year university undergraduate. Even concepts such as complex numbers, vectors, matrices and Hilbert space are introduced just enough to make the basic ideas clear without getting bogged down in detail. If I hadn’t just read the book, I would have doubted that such a presentation would even be possible.

Each chapter has detailed notes and references at the end; these could easily lead a serious reader a good way into an undergraduate physics education. Without “dumbing things down”, the author presents the mathematical concepts with clear physical insight, motivated by their necessity for understanding observed reality.

One caveat is that there is little detail on experiments, but I think this sacrifice is worthwhile to maintain focus and keep the book to under 300 readable pages; certainly, the key role played by experiments in physics is made extremely clear. Perhaps the best single feature of the book, from the view of a practising particle physicist, is that you can give it to any bright person to convey a good idea of the field without leaving them wondering which parts correspond to tested ideas and which are purely speculative.

Many friends and colleagues have asked me to point them to something that could give them a clear picture of what’s actually known and this is, in every way, just the sort of book I’ve wanted. Sadly, the author died this past July after having been director of Brookhaven National Laboratory and also director of the Office of Science and Technology Policy under US President George W Bush. I’d like to think of the book as a parting gift to those he left behind. He has done a real service to all of us in the field and I recommend the book heartily to everyone. I’ll certainly be buying quite a few for Christmas.

The Beautiful Invisible: Creativity, Imagination, and Theoretical Physics

By Giovanni Vignale
Oxford University Press
Hardback: £16.99 $22


Most things in life are not “invariants”. Consider two identical glasses of good wine. A thirsty person quickly drinks the first one and then complains about the ensuing headache. The other glass has its aromas and textures slowly appreciated, mixed with the whispering sounds of waves breaking on a nearby shore, dimly illuminated by the crimson shades of the late-afternoon sun – still bright but so tired from the long day’s journey that its descent behind the shallow mountains can be followed directly, triggering an everlasting memory associating the wine’s flavours with a pleasant feeling. The Beautiful Invisible is a truly remarkable opus, better appreciated if read in a slow and relaxed mood, savouring each sentence, each paragraph. I wonder if I have ever read another book with so few misprints, unclear sentences or misplaced arguments. Each word is the right word, in the right place. And yet, as if to contradict the poet Stéphane Mallarmé (“We do not write poems with ideas, but with words”), the continuum of great ideas is, at least for physicists, what makes this book such a wonderful “poem”.

Giovanni Vignale, besides being a professor at the University of Missouri and a condensed-matter theorist, is a connoisseur of literature, art, theatre and cinema, and seems to have spent plenty of time on transatlantic flights conceiving this “travel guide”. It takes the interested reader on a journey through invisible fields and virtual characters, intertwined with the reality of surreal but beautiful landscapes that surpass the most imaginative creations of the human mind. As every condensed-matter theorist knows, “more is different”, and if you dive into this book, your mind will be filled with much more than just physics. Saint-Exupéry, Musil, Bulgakov, Borges, García Márquez, Eliot, Poe, Shakespeare, Magritte, Vermeer and many others will walk along with you on this path to enlightenment. Some of the scenery is impressive and breathtaking. Maxwell’s discovery of electromagnetic waves by a purely theoretical argument, Dirac’s bringing together of quantum mechanics and special relativity, and other magnificent viewpoints welcome you along this incredible journey, which connects mechanics, thermodynamics, relativity, electrodynamics and quantum mechanics, and ends on superconductivity – “one of the highest achievements of the physics of the 20th century” – a natural stop for a book published in 2011.

Along the way, casually dropped here and there by the side of the path, you might find some pearls of wisdom: “we must already know what we are looking for, in order to see it”; “theory grows at the confluence of fantasy and truth”; “there is no better way to test a theory than to apply it to a scenario different from the one that initially prompted its development”. We are also reminded of the fascinating and paradoxical mysteries of quantum mechanics: “there is nothing I can say to demystify it, words attempt the task and come back defeated”. And we are given some good advice: “complex calculations often simplify dramatically when approached from the right angle”; “different representations stimulate our imagination in different ways, producing vastly different results”; certain issues are “ignorable in the limit of interest”. At the end, after almost 300 pages, the pilgrim is offered some final take-home souvenirs: “there are no final truths at the frontier, only an inexhaustible activity that creates and continuously destroys its own creations” and “the search for the truth has more value than the truth itself”.

Vignale shows, convincingly, that no one should think that physicists are any less imaginative than novelists or poets. In summary, this book is the best Christmas reading for physicists this year (even better than Dirac’s Principles of Quantum Mechanics), at least for physicists who manage to relax for a week or two. I will surely enjoy reading it again, some day. But first I would like to follow up some of the many “suggestions for further reading”. Maybe I will start by reminding myself of Le Petit Prince: “anything essential is invisible to the eye and one sees clearly only with the heart”.

LHC

By Peter Ginter, Rolf-Dieter Heuer, Franzobel
Edition Lammerhuber
Hardback: €64


This large-format, lavishly produced volume in its psychedelic slipcase is a fitting celebration of the “world machine” that is the LHC. To describe it as a coffee-table book is to demean it. Long after the LHC has been superseded, it will remain as a beautiful record of the astonishing complexity and achievement of what currently hums and whirls beneath placid Swiss and French fields.

LHC is built around the photographs of Peter Ginter, master of the demanding craft – and art – of photographing technology and industry. The pictures are complemented by an interview with Rolf-Dieter Heuer, director-general of CERN, and an essay by Franzobel (pseudonym of the Austrian writer Stefan Griebl). Together with the explanatory picture captions, the text (in English, German and French) builds up to provide an excellent layperson’s introduction to the LHC, how it works and what it aims to achieve.

The book divides into sections on the collider, the four big detectors (ALICE, ATLAS, CMS, LHCb), event displays and computing (“from www to grid”), together with a brief history of CERN and the LHC project.

The photographs are magnificent. To any child or adult unfamiliar with particle physics, and even to people who visit CERN frequently or work there every day, they reveal the LHC and its detectors as a soaring pinnacle of research, built with the precision, coordination and search for truth that informed the great medieval cathedrals, updated to the 21st century.

One photograph shows four Russian workmen perched on a mound of artillery shell casings; a second shows the brass from the casings turned into a wheel of giant golden segments arranged like the iris diaphragm of a camera; and a third shows this huge “wheel” – designed to cause showers of secondary particles after an initial collision – being installed as one of the elements of the CMS detector.

And there is more. A physicist abseils into the gleaming innards of LHCb, like a mountaineer into a crevasse. Pakistani workmen pose beside one of the “feet” on which the 14,000-tonne CMS will sit. Engineers are dwarfed as one of the giant coils of the ATLAS toroid is manoeuvred into position. ALICE’s innermost detector gleams with myriad silicon faces like a futuristic Fabergé egg.

This book is a great photographic feat by Ginter, the result of endless visits to CERN over many years. Each picture has been planned, negotiated, composed and lit, representing many hours of work and inspiration.

Edition Lammerhuber has produced a magnificent volume to the highest publishing standards. Everyone concerned with or interested in CERN should have a copy. I also urge the publishers to produce an e-book version that could reach a mass audience worldwide. These pictures would look glorious on a tablet computer.
