
From the Tevatron to Project X

Pier Oddone

The end of September marks the end of an era at Fermilab, with the shutdown of the Tevatron after 28 years of operation at the frontiers of particle physics. The Tevatron’s far-reaching legacy spans particle physics, accelerator science and industry. The collider established Fermilab as a world leader in particle-physics research, a role that will be strengthened with a new set of facilities, programmes and projects in neutrino and rare-process physics, astroparticle physics and accelerator and detector technologies.

The Tevatron exceeded every expectation ever set for it. This remarkable machine achieved luminosities with antiprotons once considered impossible, reaching more than 4 × 10³² cm⁻² s⁻¹ instantaneous luminosity and delivering more than 11 fb⁻¹ of data to the two collider experiments, CDF and DØ. Such luminosity required the development of the world’s most intense, consistent source of antiprotons. The complex process of making, capturing, storing, cooling and colliding antiprotons stands as one of the great achievements by Fermilab’s accelerator team.
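The relation between instantaneous luminosity, integrated luminosity and event yield can be sketched with a short calculation. The live time and cross-section below are illustrative round numbers, not official Tevatron figures:

```python
# Toy luminosity arithmetic (illustrative numbers only).
# Integrated luminosity: L_int = instantaneous luminosity x live time.
# Event yield for a process: N = L_int x cross-section.

INST_LUMI = 4e32      # cm^-2 s^-1, peak instantaneous luminosity
LIVE_SECONDS = 1e7    # assumed "physics seconds" in a good running year
CM2_PER_PB = 1e36     # 1 pb^-1 = 1e36 cm^-2, so 1 fb^-1 = 1e39 cm^-2

l_int_cm2 = INST_LUMI * LIVE_SECONDS          # integrated luminosity, cm^-2
l_int_fb = l_int_cm2 / 1e39                   # the same, in fb^-1

sigma_pb = 7.0                                # assumed cross-section, pb
n_events = l_int_cm2 * sigma_pb / CM2_PER_PB  # expected number of events

print(f"Integrated luminosity: {l_int_fb:.1f} fb^-1")
print(f"Expected events at {sigma_pb} pb: {n_events:.0f}")
```

At these assumed values, a year at peak luminosity corresponds to about 4 fb⁻¹, which makes the delivered 11 fb⁻¹ a multi-year achievement in sustained running.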

As the world’s first large superconducting accelerator, the Tevatron developed the technology that allowed later accelerators – including CERN’s LHC – to push beam energy and intensity even higher. But beyond its scientific contributions, an enduring legacy to mankind is the role it played in the development of the superconducting-wire industry. The construction of the accelerator required 135,000 lb of niobium-titanium-based superconducting wire and cable at a time when annual world production of these materials was only a few hundred pounds. Fermilab brought together scientists, engineers and manufacturers who developed a large-scale manufacturing capability that quickly found huge demand in another emerging field: MRI machines.

The life of the Tevatron is marked by historic discoveries that established the Standard Model. Tevatron experiments discovered the top quark, five B baryons and the Bc meson, and observed the first τ neutrino, direct CP violation in kaon decays, and single top production. The CDF and DØ experiments measured top-quark and W-boson masses, as well as di-boson production cross-sections. Limits placed by CDF and DØ on many new phenomena and the Higgs boson guide searches elsewhere – and continuing analysis of Tevatron data may yet reveal evidence for processes beyond our current understanding. Chris Quigg’s article in this issue gives further details on the Tevatron’s scientific legacy and results still to come (Long live the Tevatron).

As we bid farewell to the Tevatron, what’s next for Fermilab? Over the next decades, we will develop into the foremost laboratory for the study of neutrinos and rare processes – leading the world at the intensity frontier of particle physics.

Fermilab’s accelerator complex already produces the most intense high-energy beam of neutrinos in the world. Upgrades in 2012 will allow the NOνA experiment to push neutrino oscillation measurements even further. The Long-Baseline Neutrino Experiment, which will send neutrinos 1300 km from Fermilab to South Dakota, will be another leap forward in the quest to demystify the neutrino sector and search for the origins of a matter-dominated universe.

The cornerstone for Fermilab’s leadership at the intensity frontier will be a multimegawatt continuous-beam proton-accelerator facility known as Project X. This unique facility is ideal for neutrino studies and rare-process experiments using beams of muons and kaons; it will also produce copious quantities of rare nuclear isotopes for the study of fundamental symmetries. Coupled to the existing Main Injector synchrotron, Project X will deliver megawatt beams to the Long-Baseline Neutrino Experiment. A strong programme in rare processes is developing now at Fermilab with the muon-to-electron conversion and muon g-2 experiments. A strong foundation for Project X exists at Fermilab, with expertise in high-power beams, neutrino beamlines, and superconducting RF technology.

Project X’s rare-process physics programme is complementary to the LHC. If the LHC produces a host of new phenomena, then Project X experiments will help elucidate the physics behind them. Different models postulated to explain the new phenomena will have different consequences for very rare processes that will be measured with high accuracy using Project X. If no new phenomena are discovered at the LHC, the study of rare transitions at Project X may show effects beyond the direct reach of particle colliders. Project X could also serve as a foundation for the world’s first neutrino factory, or – even further in the future – as the front end of a muon collider.

In parallel with the development of its intensity frontier programme, Fermilab will remain a strong part of the LHC programme as the host US laboratory and a Tier-1 centre for the CMS experiment, as well as through participation in upgrades of the LHC accelerator and detectors. Fermilab will also continue to build on its legacy as the birthplace of the understanding of the deep connection between cosmological observations and particle physics. The Dark Energy Survey, which contains the Fermilab-built Dark Energy Camera, will see first light in 2012. Better detectors are in development for the Cryogenic Dark Matter Search, and the COUPP dark-matter search is now operating a 60 kg prototype at Fermilab.

As Fermilab’s staff and users say goodbye to the Tevatron, we look forward to working with the world community to address the field’s most critical and exciting questions at facilities in the US, at CERN and around the world.

The Poetry of Physics and the Physics of Poetry

By Robert K Logan
World Scientific
Hardback: £42 $64
Paperback: £30 $43
E-book: $83


Robert Logan is a physicist who since 1971 has taught an interdisciplinary course, “The Poetry of Physics and the Physics of Poetry”, at the University of Toronto. In this book, which grew out of the course, he introduces the evolution of ideas in physics by first briefly recalling the ancient science of Mesopotamia, Egypt and China before addressing in detail the revolutions that started in the 16th century and the more modern advances, including the birth of the Standard Model of particle physics. Sprinkled with quotations from leading physicists of the respective times, the book engagingly recounts the historical connections that led from one discovery to another and the impact physics had on (and received from) other branches of science, philosophy, the arts, theology and so on. Thus he hopes to convey not only the poetry or beauty of physics but also how physics has influenced the humanities.

The word “physics” derives from the Greek word phusis, meaning “nature”, and Logan wonders what physics would be without the ancient Greek philosophers. However, even with them, interest in science declined as theology became the dominant concern of the day. It was mainly thanks to René Descartes, who refused to accept past philosophical truths that he could not verify for himself (“Cartesian doubt”), and to other contemporary philosophers, that a change in attitude towards science began to develop in the beginning of the 17th century. During that period, Galileo Galilei, Johannes Kepler and several other scientists uncovered many mysteries of nature, which eventually led to Isaac Newton’s breakthroughs. In return, the philosophy of the British (Locke, Berkeley, Hume) and French (Voltaire, Condillac, Diderot, Condorcet) movements was heavily influenced by Newton’s physics: their reflections were based directly on the scientific method.

Moving on, the scientific advances of the 20th century would not have been possible without the abstract mathematical concepts developed in the 19th century or technological breakthroughs such as the invention of the vacuum pump, which paved the way for the study of all gas-discharge experiments and led to the discovery of X-rays and the electron. Logan connects these and other discoveries very naturally, claiming along the way that the distinction between physics and chemistry is artificial and a “historic accident”.

Breakthroughs in science are based on the gift of abstract thinking, astronomy being one of the earliest examples. It is interesting to realize that the structure of certain languages is intimately connected to abstract thinking. According to the Toronto school of thought in communication theory, to which Logan has contributed, “the use of a phonetic alphabet and its particular coding led the Greeks to deductive logic and abstract theoretical science”. This was probably one of the main reasons that “abstract theoretical science is a particular outgrowth of Western culture” – as opposed to Eastern cultures, which use much more complex writing systems.

Apart from discussing major physics discoveries, Logan also prompts readers (or at least his students) to adopt a critical attitude, quoting thinkers such as Thomas Kuhn and Karl Popper: “Science cannot prove that a hypothesis is correct. It can only verify that the hypothesis explains all observed facts and has passed all experimental tests of its validity.” After all, a physics course should be more than the transmission of acquired knowledge.

I can gladly recommend this book to anyone wanting to refresh their physics basics or who would like to learn about the implications that physics has for other disciplines, and vice versa. I certainly enjoyed reading it and nostalgically recalled several moments from my undergraduate studies. It is a pity that there are many misprints and some unclear sentences.

Introduction to the Theory of the Early Universe: Hot Big Bang Theory and Introduction to the Theory of the Early Universe: Cosmological Perturbations and Inflationary Theory

Introduction to the Theory of the Early Universe: Hot Big Bang Theory
By Dmitry S Gorbunov and Valery A Rubakov
World Scientific
Hardback: £103 $158
Paperback: £51 $78
E-book: $200

Introduction to the Theory of the Early Universe: Cosmological Perturbations and Inflationary Theory
By Dmitry S Gorbunov and Valery A Rubakov
World Scientific
Hardback: £101 $156
Paperback: £49 $76
E-book: $203


When a field is developing as fast as modern particle astrophysics and cosmology, and in as many exciting and unexpected ways, it is difficult for textbooks to keep up. The two-volume Introduction to the Theory of the Early Universe by Dmitry Gorbunov and Valery Rubakov is an excellent addition to the field of theoretical cosmology that goes a long way towards filling the need for a fully modern pedagogical text. Rubakov, one of the outstanding masters of beyond-the-Standard Model physics, and his younger collaborator give an introduction to almost the entire field over the course of the two books.

The first book covers the basic physics of the early universe, including thorough discussions of famous successes, such as big bang nucleosynthesis, as well as more speculative topics, such as theories of dark matter and its genesis, baryogenesis, phase transitions and soliton physics – all of which receive much more coverage than is usual. As the choice of topics indicates, the approach in this volume tends to be from the perspective of particle theory, usefully complementing some of the more astrophysically and observationally oriented texts.


The second volume focuses on cosmological perturbations – where the vast amounts of data coming from cosmic-microwave background and large-scale structure observations have transformed cosmology into a precision science – and the related theory of inflation, which is our best guess for the dynamics that generate the perturbations. Both volumes contain notably insightful treatments of many topics and there is a large variety of problems for the student distributed throughout the text, in addition to extensive appendices on background material.

Naturally, there are some missing topics, particularly on the observational side, for example a discussion of direct and indirect detection of dark matter or of weak gravitational lensing. There are also some infelicities of language that a good editor would have corrected. However, for those wanting a modern successor to The Early Universe by Edward Kolb and Michael Turner (Perseus 1994) or John Peacock’s Cosmological Physics (CUP 1999), either for study of an unfamiliar topic or to recommend to PhD students to prepare them for research, the two volumes of Theory of the Early Universe are a fine choice and an excellent alternative to Steven Weinberg’s more formal Cosmology (OUP 2008).

Maîtriser le nucléaire : Que sait-on et que peut-on faire après Fukushima ?

By Jean Louis Basdevant

Editions Eyrolles

Paperback: €17.50


Jean Louis Basdevant, a former professor at the Ecole Polytechnique, where he gave excellent courses, and the author of numerous pedagogical books, has pulled off a tour de force in writing a book on the problems of nuclear power less than a month after the Fukushima disaster. It is a highly pedagogical book, which begins with a history of radioactivity, then presents the ABC of nuclear physics, followed by a description of the advantages and dangers of radioactivity, whose effects he quantifies. Next comes a description of fission, then of present and future nuclear-energy production (if people so choose!), including the proposal for accelerator-driven fission such as that put forward by Carlo Rubbia. Then come the accidents: Lucens (negligible), Three Mile Island, Chernobyl (including a clarification of the doses received in France, in fact small) and Fukushima, with an analysis of the errors, and even the faults, that led to these disasters.

The book then moves on to fusion, by magnetic and also inertial confinement. The diagnosis is not very optimistic: the ITER schedule keeps being pushed back, and ITER will not produce electrical energy. All of this is followed by some data on energy.

It then turns to nuclear and thermonuclear weapons and how they work, including the dreadful neutron bombs, as well as the fight against proliferation and the dangers of terrorism, given the ease of building improvised bombs and also conventional bombs containing radioactive materials.

Finally, the last chapter is entitled “What to think and what to do after Fukushima?”. The author confines himself to offering elements of an answer, without explicitly taking sides. Decision-makers should certainly read this book to form a serious opinion, instead of giving in to uncontrolled emotional reactions. I strongly recommend it.

Although it is in French, I also recommend this book to English speakers: the French is simple and easy to understand.

Numerical Relativity. Solving Einstein’s Equations on the Computer

By Thomas W Baumgarte and Stuart L Shapiro

Cambridge University Press

Hardback: £55 $90

E-book: $72


Symmetries are a powerful tool for solving specific problems in all areas of physics. However, there are situations where both exact and approximate symmetries are lacking and, therefore, it is necessary to employ numerical methods. This, in essence, is the main motivation invoked for the use of large-scale simulations in relativistic systems where gravity plays a key role, such as black-hole formation, rotating stars, binary neutron-star evolution and even binary black-hole evolution.

Numerical Relativity by Thomas Baumgarte and Stuart Shapiro is an interesting and valuable contribution to the literature on this subject. Both authors are well known in the field. Shapiro, together with Saul Teukolsky, wrote a monograph on a related subject – Black Holes, White Dwarfs and Neutron Stars (John Wiley & Sons 1983) – that is familiar to students and researchers. The careful reader will recognize various similarities in the overall style of the presentation, with systematic attention to the details of the mathematical apparatus. In Numerical Relativity, 18 chapters are supplemented by a rich appendix. The first part could be used by students and practitioners for tutorials on the so-called Arnowitt-Deser-Misner (ADM) formalism and, ultimately, on the correct formulation of the Cauchy problem in general relativity.
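For orientation, the ADM formalism splits spacetime into space and time by writing the line element in terms of a lapse function α, a shift vector βⁱ and a spatial metric γ_ij (standard textbook notation, not taken from the book under review):

```latex
ds^2 = -\alpha^2\,dt^2 + \gamma_{ij}\left(dx^i + \beta^i\,dt\right)\left(dx^j + \beta^j\,dt\right)
```

Evolving γ_ij forward in t, with α and βⁱ encoding the coordinate freedom, is what turns Einstein’s equations into a Cauchy problem that a computer can integrate.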

It seems that the authors implicitly suggest that the future of numerical relativity is closely linked to our experimental ability to observe directly general relativistic effects at work. While astrophysics and gravitational waves have so far provided a rich arena for the applications, the intrinsic difficulties in detecting high-frequency gravitational waves with wide-band interferometers, such as LIGO and VIRGO, might suggest new cosmological applications of numerical techniques in the years to come. This book will take you into an exciting world populated by binary neutron stars and binary black holes.

Still, the achievements of numerical relativity (as well as those of all of the other areas of physics where large-scale computer simulations are extensively used) cannot be reduced simply to the quest for the most efficient algorithm. At the end of nearly 700 pages, the reader is led to reflect: is it wise to commit the research programme of young students and post-docs solely to the development of a complex code? After all, the lack of symmetry in a problem might just reflect the inability of physicists to see the right symmetries for the problem. A balanced perspective for potential readers can be summarized in the words of Victor Weisskopf, when talking about the proliferation of numerical methods in all areas of physics: “[…] We should not be content with computer data. It is important to find more direct insights into what a theory says, even if such insights are insufficient to yield the numerical results obtained by computers” (The Joy of Insight: Passions of a Physicist, Basic Books 1991).

SuperB Factory set to be built at the University of Rome ‘Tor Vergata’

Plans for SuperB

Roberto Petronzio, president of INFN, has announced that the SuperB Factory will be built at the University of Rome ‘Tor Vergata’. The facility tops the list of 14 flagship projects of the National Research Plan of the Italian Ministry for Education, Universities and Research.

The SuperB project involves the underground construction of a new asymmetric high-luminosity electron–positron collider. It will occupy approximately 30 hectares on the campus of the University of Rome ‘Tor Vergata’ and be closely linked to the INFN Frascati National Laboratories, located nearby. The project, which will ultimately cost a few hundred million euros, obtained funding approval for €250 million in the Italian government’s CIPE Economic Planning Document. It has also attracted interest from physicists in many other countries. At the end of May, some 300 physicists from all over the world gathered on the island of Elba for a meeting that began the formal formation of the SuperB collaboration, a crucial milestone on the road towards realization of the accelerator.

SuperB will be a major international research centre for fundamental and applied physics. The high design luminosity of 10³⁶ cm⁻² s⁻¹ will allow the indirect exploration of new effects in the physics of heavy quarks and flavours through studies of large samples of B, D and τ decays. The same infrastructure will also provide new technologies and advanced experimental instruments for research in solid-state physics, biology, nanotechnologies and biomedicine.

EuCARD reviews progress at annual meeting

The second annual meeting of the European Co-ordination for Accelerator Research and Development (EuCARD) project took place on 11–13 May at the headquarters of the Centre National de Recherche Scientifique in Paris, attended by more than 120 participants. EuCARD is a four-year project co-funded by the European Union’s Framework Programme 7 and involves 37 European partners.

Among the many results and issues discussed was the progress of the engineering design for a 13 T niobium-tin (Nb3Sn) dipole. The first results on its high-temperature superconductor coil insert showed the need for a second iteration; the Nb3Sn undulator also requires optimization with respect to instabilities. New materials have been identified for more robust collimators; intelligent collimators for the LHC and cold collimators for the Facility for Antiproton and Ion Research are undergoing beam tests.

Linear collider technologies are on the move as well and new findings were reported on the origin of breakdowns in cavities. Stabilization to below 0.5 nm at 1 Hz has been demonstrated and there have been advances in instrumentation and femtosecond synchronization. Several superconducting bulk or coated cavities are in either final design, construction or test stages. These include crab cavities for both the LHC and the Compact Linear Collider study. Finally, novel concepts are progressing, including the new crab-waist crossing being tested at DAFNE, the commissioning of EMMA (the fixed-field alternating gradient machine at Daresbury) and the emittance measurement of tiny laser-driven, plasma-accelerated beams.

The networking activities in neutrino facilities, accelerator performance and RF technologies have confirmed their efficiency as exchange platforms. They have made the case for their expansion in the EuCARD2 proposal, which is under preparation and will be submitted by November 2011. Transnational access to the UK Science and Technology Facilities Council’s MICE facility (precision beams and muon-ionization cooling equipment) is continuing. HiRadMat at CERN, which offers pulsed irradiation, will open this autumn. Potential external users can benefit from financial support from the European Commission.

This year, the meeting dedicated one day to accelerator research and development in France, as well as to topics outside the scope of EuCARD, including the SuperB project (SuperB Factory set to be built at the University of Rome ‘Tor Vergata’), neutrino facilities and Siemens medical accelerators. There was also a visit to the large accelerator platforms at the Institut de Physique Nucléaire d’Orsay and CEA-Saclay.

New European novel accelerator network formed

The European Network for Novel Accelerators (EuroNNAc) was formally launched at a workshop held at CERN on 3–6 May as part of the EuCARD project. The aim was to form the network and define the work towards a significant Framework Programme 8 proposal for novel accelerator facilities in Europe.

The workshop was widely supported, with 90 participants from 51 different institutes, including 10 from outside Europe, and had high-level CERN support, with talks by Rolf Heuer, Steve Myers and Sergio Bertolucci. There were also contributions from leading experts such as Gérard Mourou of the Institut de la Lumière Extrême and Toshi Tajima of Ludwig-Maximilians-Universität, two senior pioneers in the field.

The field of plasma wakefield acceleration, which the new network plans to develop, is changing fast. Interesting beams of 0.3–1 GeV, with 1.5–2.5% energy spread, have now been produced in several places including France, Germany, the UK and the US, with promising reproducibility. Conventional accelerator laboratories are now interested to see if an operational accelerator can be built with these parameters. To avoid replication of work, a distributed test facility spread across many labs is envisaged for creating such a new device.

If a compact, 1 GeV test accelerator were pioneered, it could be copied for use around the world. Possible applications include tests in photon science or as a test beam for particle detectors. This could ease the present restrictions on beam time experienced by many researchers. These developments are currently being restricted to electron accelerators because they can be useful even when not fully reliable. Proton machines for medical purposes would, however, need to be more reliable.

In addition to the R&D aspects, the network discussed plans to create a school on Conventional to Advanced Accelerators – possibly linked to the CERN Accelerator School – and to establish a European Advanced Accelerator Conference.

The network activities will be closely co-ordinated with the TIARA and ELI projects. There is currently high funding support for laser science in Europe – about €4 billion in the next decade. EuroNNAc will help in defining the optimal way towards a compact, ultra-high-gradient linac. CERN will co-ordinate this work with help from the École Polytechnique and the University of Hamburg/DESY.

LHC physics meets philosophy


At the end of March, the first Spring School on Philosophy and Particle Physics took place in Maria in der Aue, a conference resort of the archbishopric of Cologne in the rolling hills of the area called Bergisches Land, between Cologne and Wuppertal. It was organized by the members of the Deutsche Forschungsgemeinschaft’s interdisciplinary research project, “Epistemology of the Large Hadron Collider”, which is based at the Bergische Universität Wuppertal. Part of the time was reserved for lecture series by distinguished representatives of each field, including: Wilfried Buchmüller, Gerardus ’t Hooft, Peter Jenni and Chris Quigg from physics; Jeremy Butterfield, Doreen Fraser and Paul Hoyningen-Huene from philosophy; and Helge Kragh from the history of science. The afternoons were devoted to five working groups of philosophy and physics students who discussed specific topics such as the reality of quarks and grand unification. The students then presented their results at the end of the school.

The large number of applications – more than 100 for 30 available places – from PhD students and young post-docs from all over the world demonstrated the strong interest in this interdisciplinary dialogue. There was an almost equal share of applicants from physics and philosophy. The pairing of students and lecturers from such different backgrounds made the school a great success. Almost all of the students rated it “very good” or “excellent” in their evaluations.

Theory and reality

The diverse academic backgrounds of the participants stimulated plenty of discussions during the lectures and working groups, as well as late into the night over beer. They centred on the main lecture topics: the reality of physical theories and concepts, experimental and theoretical methods in particle physics, and the history and philosophy of science.

For example, one of the working groups was concerned with the question, “Are quarks real?” Most physicists would, of course, answer “yes”. But then again, the existence of quarks is inferred in a way that is indirect and theory-laden – much more so than for, say, chairs and tables. Are there different levels of reality? Or are quarks just auxiliary constructs that will be superseded by other concepts in the future, as happened with the ether in the 19th century, for example? A comprehensive picture of philosophical attitudes towards the reality content of physical theories was presented by the philosopher Hoyningen-Huene of the University of Hannover. His lecture series also took a critical look at other aspects of the philosophy of science, focusing on the classic ideas of Karl Popper and Thomas Kuhn: What qualifies as a scientific theory? Are physical theories verifiable? Are they falsifiable? How do physical theories evolve over time?

Fraser, of the University of Waterloo, and Butterfield, of the University of Cambridge, discussed the scope and applicability of particle and field concepts in the interpretation of quantum field theory (QFT), certainly one of the most successful achievements in physics. However, Fraser pointed out that the need for renormalization in QFT, as used in particle physics, reflects a conceptual problem. On the other hand, the more rigorous algebraic QFT does not allow for an interpretation in terms of particles, at least in the traditional sense.

Another topic that has attracted the attention of philosophers in recent years concerns gauge theories and spontaneous symmetry breaking, as Holger Lyre, of Otto-von-Guericke-Universität Magdeburg, discussed in his lecture. He asked whether it is justified to speak of “spontaneous breaking of a gauge symmetry” given that gauge symmetries are unobservable, a theme that was also discussed in a working group. Again, most physicists would take the pragmatic view that it is justified as long as all physical predictions are observed. Philosophers, however, look for the aspects of gauge theories that can count as being “objectively real”.

The contrasting attitudes of physicists and philosophers were put in a nutshell when a renowned physicist, asked whether he considers the electron to be a field or a particle, replied: “Well, I usually think of it as a small yellow ball.” Pragmatism – motivated by a remarkably successful theoretical and experimental description of particle physics – clashed with the attempt to find unambiguous definitions for its basic theoretical constructs. It was one of the goals of the school to understand each other’s viewpoints in this context.

The physics lectures covered both experiment and theory. On the experimental side, Jenni, of CERN, and Peter Mättig, of the University of Wuppertal, discussed the methods and basic assumptions that allow us to deduce the existence of new particles from electronic detector signals. As one of the working groups also discussed, the inference from basic (raw) detector signals to claiming evidence for a theory is a long reach. The related philosophical question concerns the justification of the various steps and their theory-ladenness, i.e. in what sense theoretical concepts bias experimentation, and vice versa. Closely related is the question, addressed in the discussion, of to what extent the LHC experiments are equipped to find any new particle or interaction that may occur.

The theory lectures of Robert Harlander, of the University of Wuppertal, Michael Krämer, of RWTH Aachen, and Quigg, of Fermilab, focused on the driving forces for new theories beyond the Standard Model. Apart from cosmological indications – comprehensively reviewed by DESY’s Buchmüller in one of the evening sessions – there is no inherent need for such a theory. Yet, almost everyone expects the LHC to open the door to a more encompassing theory. Why are physicists not happy with the Standard Model and what are the aims and criteria of a “better” theory? One of the working groups discussed specifically the quest for unification as one of the driving forces for a more aesthetic theory.

A current, highly valued guiding principle for model building is the concept of “naturalness”. To what extent are small ratios of natural parameters acceptable, such as the size of an atom compared with the size of the universe? As Nobel laureate ’t Hooft discussed in an evening talk, again there is no direct physics contradiction in having arbitrarily small parameters. But the physicists’ attitude is that large hierarchies are crying out for an explanation. Naturalness requires that a small ratio can arise only from a slightly broken symmetry. This is the background for many models that increase the symmetry of the Standard Model to justify the smallness of the weak scale relative to the Planck scale. Another idea that ’t Hooft discussed is to invoke anthropic arguments fuelled, for example, by the discovery of the string landscape consisting of something like 10⁵⁰⁰ different vacua.
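The hierarchy in question can be made concrete with standard numbers (textbook values, not figures quoted in the talk): the weak scale sits some 17 orders of magnitude below the Planck scale,

```latex
\frac{M_W}{M_{\mathrm{Pl}}} \approx \frac{8\times 10^{1}\ \mathrm{GeV}}{1.2\times 10^{19}\ \mathrm{GeV}} \sim 10^{-17}
```

and naturalness asks what slightly broken symmetry protects such a tiny ratio against quantum corrections.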

Closely related to the philosophy of science is the history of science. The development of the Standard Model was the subject of one of the working groups and was also comprehensively discussed by Kragh, of the University of Aarhus. Looking at the sometimes controversial emergence of the Standard Model revealed lessons that may well shape the future. Kragh reminded the audience that what is considered “certain” today only emerged after a long struggle against some “certain facts” of former times.

At first glance, philosophical questions may not be directly relevant for our day-to-day work as physicists. Nevertheless, communication between the two fields can be fruitful for both sides. Philosophy reminds us to retain a healthy scepticism towards concepts that appear too successful to be questioned. In return, the development of new experimental and theoretical methods and ideas may help to sharpen philosophical concepts. Looking into the history of physics may teach us how suddenly perspectives can change. Coming on the brink of a possible discovery of new physics at the LHC, the school was a great opportunity to reflect on what we as physicists take for granted. The plan is to hold another school in two years.

ICARUS takes flight beneath the Gran Sasso

Historically, imaging detectors have played a crucial role in particle physics. In particular, bubble-chamber detectors – such as Gargamelle at CERN – were an incredibly fruitful tool, permitting the visualization and measurement of particle interactions in an unprecedented way and providing fundamental contributions, in particular in neutrino physics. However, in the search for rare phenomena, bubble chambers are limited mainly by the impossibility of scaling their size to larger masses and by their duty cycle, which is intrinsically constrained by the mechanics of the expansion system.

The concept of the liquid-argon time-projection chamber (LAr-TPC) was conceived more than 30 years ago: it allows the calorimetric measurement of particle energy together with 3D track reconstruction from the electrons drifting in an electric field in sufficiently pure liquid argon (Rubbia 1977). The LAr-TPC not only reproduces the imaging features of the bubble chamber – with a medium density and spatial resolution similar to those of heavy-liquid bubble chambers – but it also has the further advantage of being a fully electronic detector, which is potentially scalable to multikilotonne masses. In addition, it provides excellent calorimetric measurements, while being continuously sensitive and self-triggering.

The ICARUS LAr-TPC

The ICARUS T600, the largest LAr-TPC ever built, contains 760 tonnes of liquid argon (LAr). It represents the state of the art of this technique and marks a major milestone in the practical realization of large-scale LAr detectors. Installed in Hall B of the underground Gran Sasso National Laboratory (LNGS) of the Istituto Nazionale di Fisica Nucleare (INFN), it is collecting neutrino events from the beam of the CERN Neutrinos to Gran Sasso (CNGS) project. Produced at CERN, the neutrinos reach Gran Sasso after a journey of around 730 km. The detector also acts as an underground observatory for atmospheric, solar and supernova neutrinos. In addition, it will search for proton decay (in particular into exotic channels) among its 3 × 10³² nucleons, with zero background.

The ICARUS T600 detector consists of a large cryostat that is split into two identical, adjacent half-modules (with internal dimensions of 3.6 × 3.9 × 19.6 m³), which are filled with ultrapure liquid argon (Amoruso et al. 2004). Each half-module houses two TPCs separated by a common cathode, with a drift length of 1.5 m. Ionization electrons, produced by charged particles along their paths, are drifted under a uniform electric field (ED = 500 V/cm) towards the TPC anode made of three parallel wire planes that face the drift volume (figure 1). A total of approximately 54,000 wires are deployed with 3 mm pitch, orientated on each plane at a different angle (0°, +60° and –60°) with respect to the horizontal direction. By appropriate voltage biasing, the first two planes (the induction-1 and induction-2 planes) provide signals in a non-destructive way; finally, the ionization charge is collected and measured on the last plane (the collection plane).

The relative time of each ionization signal, combined with the electron drift-velocity information (vD ≈ 1.6 mm/μs), provides the position of the track along the drift coordinate. Combining the wire coordinate on each plane at a given drift time, a 3D image of the ionizing event can be reconstructed with a remarkable resolution of about 1 mm³. The absolute time of the ionizing event is provided by the prompt UV-scintillation light emitted in the LAr and measured through arrays of photomultiplier tubes (PMTs), installed in the LAr behind the wire planes.
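The conversion from signal time to drift coordinate can be illustrated with a short, hedged sketch (not ICARUS code): the drift velocity quoted above turns the signal time, measured relative to the absolute event time given by the PMT scintillation flash, into a distance from the wire planes.

```python
# Illustrative sketch of the drift-coordinate reconstruction in a LAr-TPC,
# using the drift velocity quoted in the text. Names and numbers are
# assumptions for illustration only.
V_DRIFT_MM_PER_US = 1.6  # electron drift velocity at ED = 500 V/cm

def drift_coordinate_mm(t_signal_us, t0_us):
    """Distance of the ionization from the wire planes, in mm,
    from the signal time relative to the scintillation-light time t0."""
    return V_DRIFT_MM_PER_US * (t_signal_us - t0_us)

# A wire signal arriving 500 us after the scintillation flash originated
# about 800 mm from the wire planes.
print(round(drift_coordinate_mm(500.0, 0.0), 1))  # -> 800.0
```

The wire coordinates on the three planes at that drift time then fix the other two coordinates of the space point.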

The electronics for data acquisition allow continuous read-out, digitization and independent waveform recording of the signals from each wire of the TPCs. The electronic noise is 1500 electrons r.m.s., to be compared with the roughly 15,000 free electrons produced by a minimum-ionizing particle crossing 3 mm.

To permit electrons produced by ionizing particles to travel “unperturbed” from the point of production to the wire planes, electronegative impurities (mainly O2, H2O and CO2) in the LAr must be kept at a low concentration level (below 0.1 ppb). Therefore, both gaseous and liquid argon are continuously purified by recirculation through standard Hydrosorb/Oxysorb filters.

Preassembly of the ICARUS T600 detector began in 1999 in Pavia, and one of the two 300-tonne half-modules was brought into operation in 2001 and tested with cosmic rays at the Earth’s surface. To meet safety and reliability requirements for underground operation in Hall B at LNGS, the ICARUS T600 module – illustrated in figure 2 – was equipped with dedicated technical infrastructure. Assembly of the complete detector was finished in the first months of 2010, followed by commissioning and operation.

Operation at LNGS

In the spring of 2010, the detector was filled with ultrapure LAr and activated immediately. Events from the CNGS neutrino beam and cosmic rays were observed with a trigger system that relied on both the scintillation-light signals provided by the internal PMTs and the CNGS proton-extraction time. The “early warning” signal, sent from CERN to LNGS some 80 ms before the first proton-spill extraction, allows the opening of two gates of around 50 μs, corresponding to the predicted extraction times. The first observed CNGS neutrino event is shown in figure 3 (top); other beautiful events – a muon crossing both chambers of a module, and two neutral pions – are shown in the middle and bottom parts of figure 3, respectively.

LAr purity is monitored continuously by measuring the charge attenuation along the tracks of ionizing cosmic muons that cross the full drift path. With the liquid recirculation turned on, the LAr purity steadily increased, the free-electron lifetime exceeding 6 ms in both half-modules after a few months of operation (figure 4). This corresponds to a maximum attenuation of the free-electron yield of 16%. Sudden degradations of purity, owing to periodic pump stops for maintenance, are always recovered within a few days.
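A back-of-the-envelope check, assuming the charge attenuation is exponential in drift time, shows how the quoted figures fit together: a 1.5 m drift at 1.6 mm/μs takes about 940 μs, so a 6 ms lifetime attenuates the charge from the far end of the drift by roughly 15%, of the order of the quoted 16%.

```python
import math

# Rough consistency check of the quoted purity figures (assumed exponential
# attenuation, Q = Q0 * exp(-t/tau)); all numbers come from the text.
DRIFT_LENGTH_MM = 1500.0    # maximum drift path
V_DRIFT_MM_PER_US = 1.6     # drift velocity
LIFETIME_US = 6000.0        # free-electron lifetime of 6 ms

t_max_us = DRIFT_LENGTH_MM / V_DRIFT_MM_PER_US  # ~937 us for the full drift
attenuation = 1.0 - math.exp(-t_max_us / LIFETIME_US)
print(round(100.0 * attenuation, 1))  # -> 14.5 (per cent)
```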

The performance of LAr-TPCs has been studied progressively over the past two decades by exposing different detectors to cosmic rays and neutrino beams, culminating in the successful operation of the T600. The high resolution and granularity of the detector imaging allow the precise reconstruction of event topology, which is completed by a calorimetric measurement.

Particles are identified by studying both dE/dx versus range and the decay/interaction topology. Electrons are identified by their characteristic electromagnetic showering and are separated from π0s – via γγ reconstruction, dE/dx comparison and the π0 invariant-mass measurement – at the level of 10⁻³. This feature guarantees a powerful identification of charged-current (CC) electron-neutrino interactions, while rejecting neutral-current (NC) interactions to a negligible level. The electromagnetic energy resolution, σ(E)/E = 0.03/√(E(GeV)) ⊕ 0.01, is in agreement with the π0 → γγ invariant-mass measurements in the sub-giga-electron-volt energy range, while σ(E)/E = 0.30/√(E(GeV)) has been inferred for hadronic showers.
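The ⊕ in the resolution formula denotes addition in quadrature, a standard calorimeter parametrization: a stochastic term that shrinks with energy plus a constant term. A minimal sketch of evaluating it (illustrative only):

```python
import math

# Evaluating the quoted e.m. energy resolution,
# sigma(E)/E = 0.03/sqrt(E) (+) 0.01, with the two terms added in
# quadrature and E in GeV. The sample energy is an arbitrary choice.
def em_resolution(E_GeV):
    return math.hypot(0.03 / math.sqrt(E_GeV), 0.01)

# At 0.5 GeV the resolution is about 4.4%.
print(round(em_resolution(0.5), 3))  # -> 0.044
```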

For long muon tracks that escape the detector, the momentum is determined by measuring the displacements arising from multiple scattering along the track. The procedure, implemented through a Kalman-filter algorithm and validated on stopping muons, achieves a momentum resolution Δp/p as good as 10%.
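The physics behind this measurement can be sketched with the standard Highland formula, which relates the r.m.s. multiple-scattering angle of a track segment to the particle momentum; inverting it yields a momentum estimate from the observed track deflections. This is a hedged illustration of the principle only – the actual ICARUS analysis combines many segments in a Kalman filter – and the numbers below are assumptions.

```python
import math

# Momentum estimate from multiple scattering via the Highland formula:
# theta0 = (13.6 MeV / (beta*c*p)) * sqrt(x/X0) * (1 + 0.038*ln(x/X0)),
# inverted here to give p from a measured r.m.s. angle over one segment.
X0_LAR_CM = 14.0  # radiation length of liquid argon, ~14 cm

def momentum_from_scattering(theta0_rad, segment_cm, beta=1.0):
    """Momentum in MeV/c from the r.m.s. scattering angle of one segment."""
    t = segment_cm / X0_LAR_CM  # segment length in radiation lengths
    return (13.6 / (beta * theta0_rad)) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))
```

For example, an r.m.s. deflection of 1 mrad per 1 m segment corresponds to a momentum of a few tens of GeV/c; larger deflections mean lower momentum.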

During the 2010 CNGS run, the T600 acquired neutrino-interaction events with steadily increasing efficiency and data quality, reaching a live time of up to 90%. In the final 2010 period, about 100 neutrino CC events were collected and classified, in agreement with expectations.

As an example of the detector capabilities, figure 5 shows a CNGS νμ CC event with a 13 m-long muon track, together with zoomed projections on the collection and induction-2 planes. The use of two different views allows the recognition of two distinct electromagnetic showers pointing to – but detached from – the primary vertex. Even though the two photons overlap in the collection view, it was possible to determine the associated invariant mass, mγγ = 125 ± 15 MeV/c², which is compatible with the π0 mass. The initial ionization of the closer photon amounts to 2.2 minimum-ionizing particles – a clear signature of pair conversion, confirming the expected e/π0 identification capabilities of the detector.
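The π0 candidate mass quoted here follows from the standard two-photon invariant mass, m = √(2 E1 E2 (1 − cos θ)), computed from the two shower energies and their opening angle. A minimal sketch, with purely hypothetical energies and angle chosen to give a π0-like value:

```python
import math

# Two-photon invariant mass; the inputs below are illustrative numbers,
# not the reconstructed values from the event in figure 5.
def diphoton_mass_MeV(E1_MeV, E2_MeV, theta_rad):
    """Invariant mass in MeV/c^2 from photon energies and opening angle."""
    return math.sqrt(2.0 * E1_MeV * E2_MeV * (1.0 - math.cos(theta_rad)))

# e.g. photons of 300 and 150 MeV with a ~0.65 rad opening angle give
# a mass near the pi0 value of ~135 MeV/c^2.
print(round(diphoton_mass_MeV(300.0, 150.0, 0.65)))
```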

The momentum of the long muon track in figure 5 has been measured via the multiple-scattering method to be pμ = 10.5 ± 1.1 GeV/c. The other primary long track is identified as a pion that interacts to give a secondary vertex. A short track from the secondary vertex is identified as a kaon, decaying in flight into a muon. From the decay topology and energy deposition, the kaon momentum is evaluated as 672 ± 44 MeV/c.

The capability of identifying and reconstructing low-energy kaons is a major advantage of the LAr-TPC technique for proton-decay searches. In the event described, the kaon momentum is not far from the average momentum (around 300 MeV/c) expected, for instance, in the p → νK+ channel. The ability to identify π0s, as in this event, is likewise effective for many nucleon-decay channels, as well as for the discrimination of NC events when looking for νμ → νe oscillations.

The reconstructed missing transverse momentum is 250 MeV/c. Despite the incomplete containment of the event, this value is consistent with the expectation from the Fermi motion of the target nucleons. The reconstructed total energy is 12.6 ± 1.2 GeV, well within the energy range of the CNGS beam (Bailey et al. 1999).
