An extremely bright and long-lived afterglow of an otherwise rather ordinary gamma-ray burst (GRB) was observed for four months by the X-ray telescope of NASA’s Swift satellite. Such long-lasting X-ray emission challenges theoreticians, who now suggest that some bursts are powered by a neutron star rather than by a black hole.
With the detection of about 100 GRBs each year, NASA’s Swift spacecraft – launched in November 2004 – is by far the most efficient hunter of these brief gamma-ray flashes. The strength of Swift is its ability to repoint its X-ray and ultraviolet/optical telescopes to the location of a burst within two minutes. This allows it to study in detail the early phases of the GRB afterglow emission (CERN Courier December 2005 p20). From all of these observations, researchers have derived a general pattern for the evolution of the X-ray brightness. The X-ray emission usually starts with a rapid decay that extends that of the GRB itself. After a few minutes, the X-ray decay rate slows down and remains moderate for at most a few hours before accelerating again towards a final decay at an intermediate rate. After several weeks the source generally becomes undetectable by Swift. In some cases, additional bright X-ray flares have been observed during the first 15–20 minutes after the burst (CERN Courier October 2005 p11).
Not all GRBs follow this general trend. Two recent studies describe unusual afterglow emissions. Dirk Grupe from the Pennsylvania State University is leading an analysis of the GRB of 29 July 2006 (GRB 060729). At a redshift of z = 0.54, this not-too-remote burst was exceptionally bright in X-rays and, thanks to its slow fading, Swift’s X-ray telescope detected it for 125 days: a record for Swift, bettered only by the closer “Rosetta stone” GRB of 29 March 2003 (CERN Courier September 2003 p15). A phase of very slow X-ray decline, starting about 10 minutes after the burst and lasting almost half a day, characterized the lightcurve of GRB 060729. This requires continuous energy injection from the central engine over this long period. One possibility is that the collapsed core of the dying star at the origin of the GRB is a magnetar rather than a black hole. The ultra-powerful magnetic field of such a neutron star would force it to spin down rapidly, and the associated energy would be continuously injected into the X-ray-emitting blast wave.
The study of another burst that Swift detected on 10 January 2007 also suggests that magnetars power some GRBs. The afterglow of GRB 070110 is slightly shorter in duration, but its time evolution shows a plateau of almost constant X-ray flux lasting five hours, followed by an extremely fast decay, before returning to more typical late-afterglow behaviour. This unusual plateau phase is again interpreted as being due to a magnetar by an international team led by Eleonora Troja, working at the Palermo division of the Italian National Institute for Astrophysics (INAF).
These results shake the general belief that GRBs are always powered by a black hole; less energetic X-ray flashes might likewise be explained by the presence of a magnetar instead of a black hole (CERN Courier October 2006 p13). It seems that GRBs intend to keep some of their mystery and will continue to challenge theoreticians for years to come.
For four years the Genoa Festival of Science, held in 2006 from 26 October to 7 November, has been one of the best-attended events in European scientific communication. Its aim is to create a crossroads where people and ideas can meet.
One of the many influential speakers at the 2006 festival was Fritjof Capra, founding director of the Center for Ecoliteracy in Berkeley, CA, which promotes ecology and systems thinking in primary and secondary education. Capra is a physicist and systems theorist, who received his PhD from the University of Vienna in 1965 before spending 20 years in particle-physics research. He is the author of several international bestsellers, including The Tao of Physics, The Turning Point, The Web of Life and The Hidden Connections, and at the festival he gave a talk entitled “Leonardo da Vinci: the unity of science and art”.
You started your career as a researcher in particle physics and became well known for writing a very popular book in 1975, The Tao of Physics, which linked 20th-century physics with mystical traditions. Did you expect such a success when you wrote the book?
During the late 1960s I noticed some striking parallels between the concepts of modern physics and the fundamental ideas in Eastern mystical traditions. At that time, I felt very strongly that these parallels would some day be common knowledge and that I should write a book about it. The subsequent success of the book surpassed all my expectations.
Recently, I was especially gratified to learn that my work as a writer was acknowledged by CERN. When CERN was given a statue of Shiva Nataraja, the Lord of Dance, by the Indian government to celebrate the organization’s long association with India, a special plaque was installed to explain the connection between the metaphor of Shiva’s cosmic dance and the “dance” of subatomic matter with several quotations from The Tao of Physics.
Particle physics can be seen as a reductionist approach, but you moved towards advocating viewing systems as a whole. When did you begin to move into systems theory and what guided your thoughts?
In the epilogue to The Tao of Physics, I argued that “the world view implied by modern physics is inconsistent with our present society, which does not reflect the harmonious interrelatedness we observe in nature”. To connect the conceptual changes in science with the broader change of world view and values in society, I had to go beyond physics and look for a broader conceptual framework. In doing so, I realized that our major social issues – health, education, human rights, social justice, political power, protection of the environment, the management of business enterprises, the economy, and so on – all have to do with living systems: with individual human beings, social systems and ecosystems. With this realization, my research interests shifted and in the mid-1980s I stopped doing research in particle physics.
This now seems to be becoming a popular approach with increasing interest in the ideas of complexity. Are you pleased to see how complexity is developing?
Yes, I am. I think the development of nonlinear dynamics, popularly known as complexity theory, in the 1970s and 1980s marks a watershed in our understanding of living systems. The key concepts of this new language – chaos, attractors, fractals, bifurcations, and so on – did not exist 25 years ago.
Now we know what kinds of questions to ask when we deal with nonlinear systems. This has led to some significant breakthroughs in our understanding of life. In my own work, I developed a conceptual framework that integrates three dimensions of life: the biological, the cognitive and the social dimension. I presented this framework in my book The Hidden Connections.
How did you become involved in the Center for Ecoliteracy at Berkeley?
For the past 30 years I have worked as a scientist and science writer, and also as an environmental educator and activist. In 1995, some colleagues and I founded the Center for Ecoliteracy to promote ecology and systems thinking in public schools. Over the past 10 years, we developed a special pedagogy, which we call “education for sustainable living”. To create sustainable human communities means, first of all, to understand the inherent ability of nature to sustain life, and then to redesign our physical structures, technologies and social institutions accordingly. This is what we mean by being “ecologically literate”.
How successful would you say your projects are, and how do you measure their success?
I am happy to say that our work has had a tremendous response from educators. There is an intense debate about educational standards and reforms, but it is based on the belief that the goal of education is to prepare our youth only to compete successfully in the global economy. The fact that this economy is not life-preserving but life-destroying is usually ignored, and so are the real educational challenges of our time – to understand the ecological context of our lives, to appreciate scales and limits, to recognize the long-term effects of human actions and, above all, to “connect the dots”.
Our pedagogy, “education for sustainable living”, is experiential, systemic and multidisciplinary. It transforms schools into learning communities, makes young people ecologically literate and gives them an ethical view of the world and the skills to live as whole persons.
From what you know of education on both sides of the Atlantic, do you think there are major differences between the education systems in Europe and the US, and do you think they can learn from each other?
The educators attending our seminars include people from many parts of the world. These dialogues have made us realize that, although our pedagogy has inspired people in many countries – in Europe as well as in Latin America, Africa and Asia – it cannot be used as a model in those countries in a straightforward way.
The principles of ecology are the same everywhere, but the ecosystems in which we practice experiential learning are different, as are the cultural and political contexts of education in different countries. This means that education for sustainability needs to be re-created each time.
Can physics contribute to the vision of sustainable living?
Absolutely. Ecology is inherently multidisciplinary because ecosystems connect the living and non-living world. Ecology, therefore, is grounded not only in biology, but also in many other sciences, including thermodynamics and other branches of physics.
The flow of energy, in particular, is an important principle of ecology, and the challenge of moving from fossil fuels to renewable energy sources is one in which physicists can make significant contributions. It is no accident that one of the world’s foremost experts on energy, Amory Lovins, director of the Rocky Mountain Institute, is a physicist.
You are currently working on a new book about the science of Leonardo da Vinci. In your seminar at the Genoa Festival of Science you explained that what we need today is exactly the kind of science that Da Vinci anticipated. How do you think physics should – or could – evolve in the future? Is there, in your opinion, a future for physics?
Well, you are asking several questions here, all of them very substantial. I’m not sure whether I can do them justice in this short space. We can indeed learn a lot from Leonardo’s science. As our sciences and technologies become increasingly narrow in their focus, unable to understand the problems of our time from an interdisciplinary perspective, and dominated by corporations with little interest in the well-being of humanity, we urgently need a science that honours and respects the unity of all life, recognizes the fundamental interdependence of all natural phenomena, and reconnects us with the living Earth. This is exactly the kind of science that Leonardo da Vinci anticipated and outlined 500 years ago.
Physicists have a lot to contribute to the development of such a new scientific paradigm. In modern science, the fundamental interdependence of all natural phenomena was first recognized in quantum theory, and various branches of physics are essential for a full understanding of ecology.
However, to contribute significantly to the great challenge of creating a sustainable future, physicists will need to acknowledge that their science can never provide a “theory of everything”, but is only one of many scientific disciplines needed to understand the biological, ecological, cognitive and social dimensions of life.
As in many other countries and regions, the Canadian subatomic-physics community has recently completed an in-depth study of its strengths in particle and nuclear physics, and has developed a focused Long Range Plan (LRP) for the coming decade. While primarily focusing on the community’s scientific goals, the planning process compiled a list of the economic and training benefits that have resulted from research in subatomic physics and took stock of the extraordinary financial resources that have been available over the past decade. Operating with a budget surplus for much of that time, the Canadian government has invested heavily in all areas of fundamental research, including subatomic physics. Recent studies by the Organisation for Economic Co-operation and Development (OECD) show that these investments have moved Canada to the top of the G8 in public funding per capita for scientific research (OECD 2003). Some of this funding has targeted the hiring of top researchers at Canadian universities, but much of it has rejuvenated research infrastructure in Canada – including the construction of the Sudbury Neutrino Observatory (SNO) and the funding of Canada’s Tier-1 LHC computing centre.
While there are many similarities between the Canadian LRP and others recently released, there is one important difference. Particle and nuclear physics receive joint funding in Canada not only for university-based researchers – who are funded by the Natural Sciences and Engineering Research Council (NSERC), the sponsor of the LRP process – but also for TRIUMF, the national laboratory for particle and nuclear physics. The LRP balances Canadian priorities for particle and nuclear physics in the coming decade. The five priorities that the plan identifies are seen as crucial if the Canadian subatomic-physics community is to build on its recent successes (see box 1). These five priorities encompass the main research activities of more than three-quarters of the experimental subatomic-physics community in Canada.
Canadian particle physicists were founding members of the ATLAS experiment at CERN’s LHC in the 1990s. In addition to contributing major pieces of the hadronic endcap and forward calorimeters, Canada, through TRIUMF, has made important in-kind contributions to refurbishing the CERN proton-injector complex. Canadians are now leading commissioning efforts for the ATLAS calorimeter and are preparing for in situ calibrations using the initial data expected later this year. A growing contingent of recently hired faculty, bringing their experience from Fermilab’s Tevatron, is contributing to the ATLAS high-level trigger system – crucial to the extraction of LHC physics. At home, researchers are taking full advantage of the state-of-the-art Canadian computer network infrastructure, integrating the operations of our Tier-1 centre at TRIUMF with those of our Tier-2 centres in Toronto/Montreal and Vancouver/Victoria. The high profile of ATLAS attracts the best graduate students and also serves as a focal point, bringing together Canadian theorists and experimentalists as they prepare to unravel the LHC phenomenology. The LRP prioritizes the support of these researchers to capitalize on Canada’s investment in the LHC programme. In addition to preparations for initial ATLAS physics, the LRP anticipates continued involvement and proposes that significant funding be made available for upgrades to the LHC and ATLAS in the second half of the plan.
One of the great Canadian successes of the past decade has been SNO, which has provided unequivocal evidence that electron-neutrinos produced in solar fusion oscillate into muon- and τ-neutrinos at a sufficient rate to explain the long-standing solar-neutrino deficit (see SNO: solving the mystery of the missing neutrinos). As a result of this great success the Canadian government has funded the expansion of the SNO experimental facilities. The new SNOLAB infrastructure is almost complete, nearly tripling the floor space for experiments and generating significant interest from researchers in underground physics from around the world. Out of twenty expressions of interest for SNOLAB experiments, nine are still being vetted for first-round space in the new laboratory.
The main scientific goals include searches for dark matter and neutrino-less double beta decay, and the study of lower-energy solar neutrinos and geo-neutrinos. With such a world-class facility in Canada, the LRP prioritizes support for Canadian researchers to lead the construction of one or more major experiments. The SNO+ experiment has an advanced engineering design to replace the heavy water in SNO with liquid scintillator, allowing the study of neutrinos from the solar “pep” reaction. It may also be possible to dope the scintillator with enriched neodymium, making SNO+ a competitive neutrino-less double beta-decay detector. The DEAP/CLEAN experiment, now at the prototype stage, exploits the novel signal properties of dark-matter interactions in liquid argon and neon. First-round experiments are expected to begin before the end of the decade.
Canadian subatomic physicists are also at the forefront of the study of nuclear astrophysics and of the quest to understand the basic hadronic building block of nature – the nucleus – using radioactive beams at TRIUMF’s Isotope Separator and Accelerator Complex (ISAC). The ISAC facility delivers some of the world’s most intense rare-isotope beams using the world’s highest power on target (up to 50 kW). One highlight was an experiment with ²¹Na that provided incisive measurements, refining our understanding of stellar evolution and the modelling of nucleosynthesis. The new ISAC-II facility extends the accelerator to 12 MeV per nucleon using superconducting RF cavities. The first experiment, using ¹¹Li (t½ = 8 ms), was carried out in December 2006: a European, US and Canadian collaboration investigated the unexpected behaviour of this halo nucleus.
The unique capabilities of ISAC and ISAC-II, including state-of-the-art instrumentation, make this the prime location for a worldwide user network; however, it is configured as a single-user facility. There is contention for beam time between the first-rate science programme and the development of new targets and ion sources. To alleviate this, the LRP prioritizes the full exploitation of ISAC and ISAC-II and the development of a second isotope production line.
TRIUMF is also the nexus for Canada’s contribution to the Tokai-to-Kamioka (T2K) project in Japan. With its expertise in remote target handling, developed at ISAC, TRIUMF is consulting on the T2K neutrino-beam target station. Canadian researchers are leading the construction of the T2K near detector, building modules of the time projection chamber tracker, as well as the fine-grained calorimeter.
The LRP identifies a further priority for the future: fully fledged Canadian participation in the International Linear Collider (ILC). TRIUMF accelerator physicists are already engaged in the ILC Global Design Effort. Members of the Canadian subatomic-physics community are working to identify industrial partners and are encouraging them to become full participants in the North American ILC industrial forum. Canadian university-based researchers have a long history of important contributions to electron–positron collider experiments, including the OPAL experiment at CERN’s LEP and, more recently, the BaBar experiment at SLAC. These researchers have been actively engaged in Canadian R&D efforts for ILC detectors.
The Canadian subatomic-physics community has seen significant growth this century. As a result of targeted hiring and the replacement of retiring faculty, 35% of the subatomic-physics faculty in Canada has been hired in the past six years. A 45% surge in the number of graduate students has accompanied this faculty renewal. Further growth is anticipated as the new faculty members establish their research programmes and recruit their full complement of students and postdoctoral researchers. This growth in subatomic-physics graduate-student numbers appears to run counter to the experience of other OECD nations, and bodes well for subatomic physics in Canada.
The LRP Committee has therefore found that subatomic physics in Canada is strong and healthy, but the news is not all good. Despite the significant infusion of capital from the government’s novel funding mechanisms, support from the traditional funding sources for subatomic physics in Canada has not kept pace with inflation over the past 10 years. The growth and renewal in the community have put ever-increasing pressure on ongoing operational support. One main goal of the LRP exercise was to identify and quantify these pressures, so as to provide a firmer basis for requests for increased operational support for fundamental research in general and subatomic physics in particular.
In 2001, GSI, together with a large international science community, presented the Conceptual Design Report (CDR) for a major new accelerator facility for beams of ions and antiprotons in Darmstadt (Henning et al. 2001). The following years saw the consolidation of the proposal for the project, which was named the Facility for Antiproton and Ion Research (FAIR). During that process high-level national and international science committees evaluated the project’s feasibility, scientific merit and discovery potential, as well as the estimated costs. About 2500 scientists and engineers from 45 countries contributed to this effort, which resulted last year in the FAIR Baseline Technical Report (BTR) (Gutbrod et al. 2006).
The International Steering Committee has accepted the BTR as the basis for international negotiations on funding for FAIR. The plan is to found a company, FAIR GmbH, as project owner for the construction and operation of the FAIR research facility under international ownership. Currently 14 countries (Austria, China, Finland, France, Germany, Greece, India, Italy, Poland, Romania, Russia, Spain, Sweden and the UK) have signed the Memorandum of Understanding for FAIR, indicating their intention to participate in the FAIR project; the European Union, Hungary and the US have observer status. The investment cost for the project will be about €1000 million, and about 2400 man-years will be required to execute the project. Negotiations at governmental level to secure the funding started in summer 2006. The aim is to complete this process in summer 2007 and begin construction in autumn. The construction plan foresees a staged completion of the facility in which the first experimental programmes commence as early as 2012 while the entire facility will be completed in 2015 (figure 1).
The research programme of FAIR can be grouped into the following fields:
• Nuclear structure and nuclear astrophysics, using beams of stable and short-lived (radioactive) nuclei far from stability.
• Hadron structure, in particular quantum chromodynamics (QCD) – the theory of the strong interaction – and the QCD vacuum, using primarily beams of antiprotons.
• The nuclear-matter phase diagram and quark–gluon plasma, using beams of high-energy heavy ions.
• Physics of very dense plasmas, using highly compressed heavy-ion beams in unique combination with a petawatt laser.
• Atomic physics, quantum electrodynamics (QED) and ultra-high electromagnetic fields, using beams of highly charged heavy ions and antimatter.
• Technical developments and applied research, using ion beams for materials science and biology.
The BTR lists 14 experimental proposals as elements of the core research programme. However, additional experiments as future options are already being considered and evaluated. In particular, experiments with polarized antiprotons could add an entirely new research field to the FAIR programme. One addition to the core research programme, as presented in 2001, is the Facility for Low-Energy Antiproton and Ion Research (FLAIR), which will exploit the high flux of antiprotons at FAIR. Here cooled beams of antiprotons with energies well below 100 keV can be captured efficiently in charged-particle traps or stopped in low-density gas.
The new SIS100/300 double synchrotron, with a circumference of about 1100 m and with magnetic rigidities of 100 and 300 Tm in the two rings, will meet experimental requirements concerning particle intensities and energies. This constitutes the central part of the FAIR accelerator facility (figure 2). The two synchrotrons will be built on top of each other in a subterranean tunnel. They will be equipped with rapidly cycling superconducting magnets to minimize both construction and operating costs.
For the highest intensities, the 100 Tm synchrotron will operate at a repetition rate of 1 Hz, i.e. with ramp rates for the bending magnets of up to 4 T/s. The goal of the SIS100 is to achieve intense pulsed uranium beams (5 × 10¹¹ ions per pulse, charge state q = 28+) at 1 GeV/u and intense pulsed proton beams (4 × 10¹³ per pulse) at 29 GeV. A separate proton linac will be constructed as injector to the SIS18 synchrotron to supply the high-intensity proton beams required for antiproton production. It will be possible to compress both the heavy-ion and the proton beams to the very short bunch lengths required for the production and subsequent storage and efficient cooling of exotic nuclei (around 60 ns) and antiprotons (around 25 ns). These short, intense ion bunches are also needed for plasma-physics experiments.
The double-ring facility will provide continuous beams with high average intensities of up to 3 × 10¹¹ ions per second at energies of 1 GeV/u for heavy ions, either directly from the SIS100 or by slow extraction from the 300 Tm ring. The SIS300 will provide high-energy ion beams with maximum energies of around 45 GeV/u for Ne¹⁰⁺ beams and close to 35 GeV/u for fully stripped U⁹²⁺ beams. The maximum intensities in this mode will be close to 1.5 × 10¹⁰ ions per spill. These high-energy beams will be extracted over periods of 10–100 s in quasi-continuous mode, the limit that the detectors used for nucleus–nucleus collision experiments can accept.
A complex system of storage rings adjacent to the SIS100/300 double-ring synchrotron, together with the production targets and separators for antiproton beams and radioactive secondary beams (the Super Fragment Separator), will provide an unprecedented variety of particle beams at FAIR. These rings will be equipped with beam-cooling facilities, internal targets and in-ring experiments.
The Collector Ring (CR) serves for stochastic cooling of radioactive and antiproton beams, and will allow mass measurements of short-lived nuclei using the time-of-flight method when in isochronous operation mode. The Accumulator Ring (RESR) will accumulate antiproton beams after stochastic pre-cooling in the CR and will also provide fast deceleration of radioactive secondary beams, with a ramp rate of up to 1 T/s.
The New Experimental Storage Ring (NESR) will be dedicated to experiments with exotic ions and with antiproton beams. The NESR is to be equipped with stochastic cooling and electron cooling, and additional instrumentation will include precision mass spectrometry using the Schottky frequency-spectroscopy method, internal-target experiments with atoms and electrons, an electron–nucleus scattering facility, and collinear laser spectroscopy. Moreover, the NESR will serve to cool and decelerate stable and radioactive ions, as well as antiprotons, for low-energy experiments and trap experiments at the FLAIR facility.
The High-Energy Storage Ring (HESR) will be optimized for antiproton beams at energies from 3 GeV up to a maximum of 14.5 GeV. The ring is to be equipped with electron cooling up to a beam energy of 8 GeV (5 MeV maximum electron energy) and with stochastic cooling up to 14.5 GeV. The experimental equipment includes an internal pellet target and the large in-ring detector PANDA, as well as an option for experiments with polarized antiproton beams.
The design of the FAIR facility has incorporated parallel operation of the different research programmes from the beginning. The proposed scheme of synchrotrons and storage rings, with their intrinsic cycle times for beam acceleration, accumulation, storage and cooling, respectively, has the potential to optimize parallel and highly synergetic operation. This means that for the different programmes the facility will operate more or less like a dedicated facility, without the reduction in luminosity that would occur with simple beam splitting or steering to different experiments.
The realization of the facility involves some technological challenges. For example, it will be necessary to control the dynamic vacuum pressure. The synchrotrons will need to operate close to the space-charge limits with small beam losses, of the order of a few per cent; in this respect, the control of collective instabilities and the reduction of the ring impedances are subjects of the present R&D phase. Fast acceleration and compression of the intense heavy-ion beams require compact RF systems. The SIS100 requires superconducting magnets with a maximum field of 2 T and a ramp rate of 4 T/s, while the SIS300 will operate at 4.5 T with a ramp rate of 1 T/s in the dipole magnets – technology that will benefit other accelerators. Lastly, electron and stochastic cooling at medium and high energies will be essential for experiments with exotic ions and with antiprotons.
The past five years have seen substantial R&D effort dedicated to the various technological aspects. This has been funded by the German BMBF and by FAIR member states, as well as by the European Union. The work has made considerable progress and has demonstrated the feasibility of the proposed technical solutions. Now the next stage is underway and prototyping of components has started.
The end of an era came on 28 November 2006 when the Sudbury Neutrino Observatory (SNO) stopped data-taking after eight years of exciting discoveries. During this time the observatory saw evidence that neutrinos, produced in the fusion of hydrogen in the solar core, change type – or flavour – while passing through the Sun on their way to Earth. This observation explained the long-standing puzzle as to why previous experiments had seen fewer solar neutrinos than predicted and also confirmed that these elusive particles have mass.
Ray Davis’s radiochemical experiment first detected solar neutrinos in 1967, a discovery for which he shared the 2002 Nobel Prize in Physics (CERN Courier December 2002 p15). Surprisingly, he found only about a third of the number predicted from models of the Sun’s output. The Kamiokande II experiment in Japan confirmed this deficit, which became known as the solar-neutrino problem, while other detectors saw related shortfalls in the number of solar neutrinos. A possible explanation, suggested by Vladimir Gribov and Bruno Pontecorvo in 1969, was that some of the electron-neutrinos, which are produced in the Sun, “oscillated” into neutrinos that could not be detected in Davis’s detector. This oscillation mechanism requires that neutrinos have non-zero mass.
In 1985, the late Herb Chen pointed out that heavy water (D2O) has a unique advantage when it comes to detecting the neutrinos from 8B decays in the solar-fusion process, as it enables both the number of electron neutrinos and the number of all types of neutrinos to be measured. In heavy water neutrinos of all types can break a deuteron into its constituent proton and neutron (the neutral-current reaction), while only electron neutrinos can change the deuteron into two protons and release an electron (the charged-current reaction). A comparison of the flux of electron neutrinos with that of all flavours can then reveal whether flavour transformation is the cause of the solar-neutrino deficit. This is the principle behind the SNO experiment.
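For reference – these standard reaction equations are not spelled out in the article – the two deuteron break-up channels that Chen proposed exploiting can be written as

\[
\nu_x + d \;\rightarrow\; p + n + \nu_x \quad \text{(neutral current, any flavour }x\text{)}, \qquad
\nu_e + d \;\rightarrow\; p + p + e^- \quad \text{(charged current)}.
\]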
Scientists from Canada, the US and the UK designed SNO to attain a detection rate of about 10 solar neutrinos a day using 1000 tonnes of heavy water. Neutrino interactions were detected by 9456 photomultiplier tubes surrounding the heavy water, which was contained in a 12 m diameter acrylic sphere. This sphere was surrounded by 7000 tonnes of ultra-pure water to shield against radioactivity. Figure 1 shows the layout of the SNO detector, which is located about 2 km underground in Inco’s Creighton nickel mine near Sudbury, Canada, so as to all but eliminate cosmic rays reaching the detector. Figure 2 shows what the detector “sees”: the photomultiplier tubes that were hit following the creation of an electron by an electron neutrino.
It was crucial to the success of this experiment to make the components of SNO very clean and, in particular, to reduce the radioactivity within the heavy water to exceedingly low levels. To achieve this the team constructed the detector in a Class-2000 cleanroom, and entry to SNO was via a shower and changing rooms to reduce the chance of any dust contamination from the mine. The fraction of natural thorium in the D2O had to be less than a few parts in 10¹⁵, roughly equivalent to a small teaspoonful of rock dust added to the 1000 tonnes of heavy water. Such purity was necessary to reduce the break-up of deuterons by gamma rays from natural uranium and thorium radioactivity to a small fraction of the rate from solar neutrinos. This required complex water-purification and assay systems to reduce and measure the radioactivity. Great care in handling the heavy water was also needed, as it is on loan from Atomic Energy of Canada Ltd (AECL) and is worth about C$300 million.
SNO’s results from the first phase of data-taking with unadulterated D2O were published in 2001 and 2002, and provided strong evidence that electron neutrinos do transform into other types of neutrino (CERN Courier June 2002 p5). The second phase of SNO involved adding 2 tonnes of table salt (NaCl) to the D2O to enhance the detection efficiency for neutrons. This large “pinch of salt” enabled SNO to make the most direct and precise measurement of the total number of solar neutrinos, which is in excellent agreement with solar-model calculations (CERN Courier November 2003 p5). The results to date reject the null hypothesis of no neutrino flavour change by more than 7 σ.
Together with other solar-neutrino measurements, the SNO results are best described by neutrino oscillation enhanced by neutrinos interacting with matter as they pass through the Sun – a resonant effect that Stanislav Mikheyev, Alexei Smirnov and Lincoln Wolfenstein predicted in 1985. To a good approximation, the electron-neutrino flavour eigenstate is a linear combination of two mass eigenstates with masses m₁ and m₂. The mixing angle between these two mass eigenstates, constrained by the ratio (measured by SNO) of the electron-neutrino flux to the total neutrino flux, is found to be large (around 34°), but maximal mixing (45°) is excluded by more than 5 σ. The matter enhancement enables the ordering (hierarchy) of the two mass eigenstates to be determined, with m₂ > m₁ and a mass difference of around 0.01 eV/c². The KamLAND experiment, which uses 1000 tonnes of liquid scintillator to detect anti-neutrinos from Japan’s nuclear reactors, confirmed in 2003 that neutrino mixing occurs and is large, as seen for solar neutrinos.
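In the two-flavour approximation used above, and in standard notation (the numbers are simply the indicative values quoted in the text), the electron neutrino produced in the Sun is the superposition

\[
|\nu_e\rangle \;=\; \cos\theta_{12}\,|\nu_1\rangle \;+\; \sin\theta_{12}\,|\nu_2\rangle ,
\qquad \theta_{12} \approx 34^\circ ,
\qquad m_2 > m_1 \;\text{with}\; m_2 - m_1 \approx 0.01~\mathrm{eV}/c^2 ,
\]

with maximal mixing corresponding to \(\theta_{12} = 45^\circ\).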
After the removal of salt from the heavy water, the third and final phase of SNO used an array of proportional counters in the heavy water to improve further the neutrino detection. Researchers filled 36 counters with ³He and four with ⁴He gas. Figure 3 shows part of this array during its deployment with a remotely operated submarine. The additional information from this phase will enable the SNO collaboration to determine better the oscillation parameters that describe the neutrino mixing. Data analysis is still in progress.
SNO’s scientific achievements were marked at the end of data-taking when the collaboration received the inaugural John C Polanyi Award (figure 4) of the Canadian funding agency, the Natural Sciences and Engineering Research Council (NSERC). The completion of SNO does not mark the end of experiments in Sudbury, however, as SNOLAB, a new international underground laboratory, is nearly complete, with expanded space to accommodate four or more experiments (see Canada looks to future of subatomic physics). SNOLAB has received a number of letters of interest from experiments on dark matter, double beta decay, supernovae and solar neutrinos. In addition, a new collaboration is planning to put 1000 tonnes of scintillator in the SNO acrylic vessel once the heavy water is returned to the AECL by the end of 2007. This experiment, called SNO+, aims to study lower-energy solar neutrinos from the “pep” reaction in the proton–proton chain, and to study the double beta decay of ¹⁵⁰Nd by the addition of a metallo-organic compound.
As a historical anecdote, SNO was not the first heavy-water solar-neutrino experiment. In 1965, Tom Jenkins, along with other members of Fred Reines’ neutrino group at what was then the Case Institute of Technology, began the construction of a 2 tonne heavy-water Cherenkov detector, complete with 55 photomultiplier tubes, in the Morton salt mine in Ohio. Unlike Chen’s later proposal, Jenkins’ experiment considered only the detection of electron neutrinos through the charged-current reaction, as other flavours were not expected and the neutral-current reaction had not yet been discovered. This experiment finished in 1968, after Davis had obtained a much lower ⁸B solar-neutrino flux than had been predicted.
This article was adapted from text in CERN Courier vol. 47, May 2007, pp26–28
The principal goal of the experimental programme at the LHC is to make the first direct exploration of a completely new region of energies and distances, to the tera-electron-volt scale and beyond. The main objectives include the search for the Higgs boson and whatever new physics may accompany it, such as supersymmetry or extra dimensions, and also – perhaps above all – to find something that the theorists have not predicted.
The Standard Model of particles and forces summarizes our present knowledge of particle physics. It extends and generalizes the quantum theory of electromagnetism to include the weak nuclear forces responsible for radioactivity in a single unified framework; it also provides an equally successful analogous theory of the strong nuclear forces.
The conceptual basis for the Standard Model was confirmed by the discovery at CERN of the predicted weak neutral-current form of radioactivity and, subsequently, of the quantum particles responsible for the weak and strong forces, at CERN and DESY respectively. Detailed calculations of the properties of these particles, confirmed in particular by experiments at the Large Electron–Positron (LEP) collider, have since enabled us to establish the complete structure of the Standard Model. Data taken at LEP agreed with the calculations at the per mille level, and recent precise measurements of the masses of the intermediate vector boson W and the top quark at Fermilab’s Tevatron agree very well with predictions.
These successes raise deeper problems, however. The Standard Model does not explain the origin of mass, nor why some particles are very heavy while others have no mass at all; it does not explain why there are so many different types of matter particles in the universe; and it does not offer a unified description of all the fundamental forces. Indeed, the deepest problem in fundamental physics may be how to extend the successes of quantum physics to the force of gravity. It is the search for solutions to these problems that defines the current objectives of particle physics – and the programme for the LHC.
Understanding the origin of mass will unlock some of the basic mysteries of the universe: the mass of the electron determines the sizes of atoms, while radioactivity is weak because the W boson weighs as much as a medium-sized nucleus. Within the Standard Model the key to mass lies with an essential ingredient that has not yet been observed, the Higgs boson; without it the calculations would yield incomprehensible infinite results. The agreement of the data with the calculations implies not only that the Higgs boson (or something equivalent) must exist, but also suggests that its mass should be well within the reach of the LHC.
Experiments at LEP at one time found a hint for the existence of the Higgs boson, but these searches proved unsuccessful and told us only that it must weigh at least 114 GeV. At the LHC, the ATLAS and CMS experiments will be looking for the Higgs boson in several ways. The particle is predicted to be unstable, decaying for example to photons, bottom quarks, tau leptons, W or Z bosons (figure 1 and figure 2). It may well be necessary to combine several different decay modes to uncover a convincing signal, but the LHC experiments should be able to find the Higgs boson even if it weighs as much as 1 TeV.
While resolving the Higgs question will set the seal on the Standard Model, there are plenty of reasons to expect other, related new physics, within reach of experiments at the LHC. In particular, the elementary Higgs boson of the Standard Model seems unlikely to exist in isolation. Specifically, difficulties arise in calculating quantum corrections to the mass of the Higgs boson. Not only are these corrections infinite in the Standard Model, but, if the usual procedure is adopted of controlling them by cutting the theory off at some high energy or short distance, the net result depends on the square of the cut-off scale. This implies that, if the Standard Model is embedded in some more complete theory that kicks in at high energy, the mass of the Higgs boson would be very sensitive to the details of this high-energy theory. This would make it difficult to understand why the Higgs boson has a (relatively) low mass and, by extension, why the scale of the weak interactions is so much smaller than that of grand unification, say, or quantum gravity.
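Schematically – a textbook illustration rather than a formula from the article – a fermion f with coupling λ_f to the Higgs contributes a one-loop correction to the Higgs mass-squared of the form

\[
\delta m_H^2 \;\sim\; -\,\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2 \;+\;\dots
\]

where Λ is the cut-off scale. If Λ is of the order of the grand-unification or Planck scale, this correction exceeds the required electroweak-scale value of \(m_H^2\) by many orders of magnitude unless it is very finely cancelled.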
This is known as the “hierarchy problem”. One could try to resolve it simply by postulating that the underlying parameters of the theory are tuned very finely, so that the net value of the Higgs boson mass after adding in the quantum corrections is small, owing to some suitable cancellation. However, it would be more satisfactory either to abolish the extreme sensitivity to the quantum corrections, or to cancel them in some systematic manner.
One way to achieve this would be if the Higgs boson is composite and so has a finite size, which would cut the quantum corrections off at a relatively low energy scale. In this case, the LHC might uncover a cornucopia of other new composite particles with masses around this cut-off scale, near 1 TeV.
The alternative, more elegant, and in my opinion more plausible, solution is to cancel the quantum corrections systematically, which is where supersymmetry could come in. Supersymmetry would pair up fermions, such as the quarks and leptons, with bosons, such as the photon, gluon, W and Z, or even the Higgs boson itself. In a supersymmetric theory, the quantum corrections due to the pairs of virtual fermions and bosons cancel each other systematically, and a low-mass Higgs boson no longer appears unnatural. Indeed, supersymmetry predicts a mass for the Higgs boson probably below 130 GeV, in line with the global fit to precision electroweak data.
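In a supersymmetric theory each such fermion loop is accompanied by loops of its scalar partners, and the quadratically divergent pieces cancel; the leftover correction is only logarithmic in the cut-off, schematically (again a standard textbook expression, not one given in the article)

\[
\delta m_H^2 \;\sim\; \frac{\lambda^2}{16\pi^2}\,\bigl(m_S^2 - m_f^2\bigr)\,\ln\frac{\Lambda}{m_S},
\]

which remains modest provided the sparticle masses \(m_S\) are not far above 1 TeV – the origin of the expectation quoted above.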
The fermions and bosons of the Standard Model, however, do not pair up with each other in a neat supersymmetric manner. The theory, therefore, requires that a supersymmetric partner, or sparticle, as yet unseen, accompanies each of the Standard Model particles. Thus, this scenario predicts a “scornucopia” of new particles that should weigh less than about 1 TeV and could be produced at the LHC (figure 3).
Another attraction of supersymmetry is that it facilitates the unification of the fundamental forces. Extrapolating the strengths of the strong, weak and electromagnetic interactions measured at low energies does not give a common value at any energy, in the absence of supersymmetry. However, there would be a common value, at an energy around 10¹⁶ GeV, in the presence of supersymmetry. Moreover, supersymmetry provides a natural candidate, in the form of the lightest supersymmetric particle (LSP), for the cold dark matter required by astrophysicists and cosmologists to explain the amount of matter in the universe and the formation of structures within it, such as galaxies. In this case, the LSP should have neither strong nor electromagnetic interactions, since otherwise it would bind to conventional matter and be detectable. Data from LEP and direct searches have already excluded sneutrinos as LSPs. Nowadays, the “scandidates” most considered are the lightest neutralino and (to a lesser extent) the gravitino.
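The extrapolation mentioned here uses the renormalization-group running of the three gauge couplings; at one loop (a standard result, quoted only for illustration) each inverse coupling evolves linearly with the logarithm of the energy scale Q:

\[
\frac{1}{\alpha_i(Q)} \;=\; \frac{1}{\alpha_i(M_Z)} \;-\; \frac{b_i}{2\pi}\,\ln\frac{Q}{M_Z}, \qquad i = 1, 2, 3,
\]

where the slope coefficients \(b_i\) depend on the particle content of the theory. With the Standard Model spectrum the three lines fail to meet at a single point, whereas adding the supersymmetric partners changes the \(b_i\) so that the couplings converge near 10¹⁶ GeV.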
Assuming that the LSP is the lightest neutralino, the parameter space of the constrained minimal supersymmetric extension of the Standard Model (CMSSM) is restricted by the need to avoid the stau being the LSP, by the measurements of b → sγ decay that agree with the Standard Model, by the range of cold dark-matter density allowed by astrophysical observations, and by the measurement of the anomalous magnetic moment of the muon (gμ–2). These requirements are consistent with relatively large masses for the lightest and next-to-lightest visible supersymmetric particles, as figure 4 indicates. The figure also shows that the LHC can detect most of the models that provide cosmological dark matter (though this is not guaranteed), whereas the astrophysical dark matter itself may be detectable directly for only a smaller fraction of models.
Within the overall range allowed by the experimental constraints, are there any hints at what the supersymmetric mass scale might be? The high-precision measurements of the W mass tend to favour a relatively small mass scale for sparticles. On the other hand, the rate for b → sγ shows no evidence for light sparticles, and the experimental upper limit on Bs → μ⁺μ⁻ begins to exclude very small masses. The strongest indication for new low-energy physics, for which supersymmetry is just one possibility, is offered by gμ–2. Putting this together with the other precision observables gives a preference for light sparticles.
Other proposals for additional new physics postulate the existence of new dimensions of space, which might also help to deal with the hierarchy problem. Clearly, space is three-dimensional on the distance scales that we know so far, but the suggestion is that there might be additional dimensions curled up so small as to be invisible. This idea, which dates back to the work of Theodor Kaluza and Oskar Klein in the 1920s, has gained currency in recent years with the realization that string theory predicts the existence of extra dimensions and that some of these might be large enough to have consequences observable at the LHC. One possibility that has emerged is that gravity might become strong when these extra dimensions appear, possibly at energies close to 1 TeV. In this case, some variants of string theory predict that microscopic black holes might be produced in the LHC collisions. These would decay rapidly via Hawking radiation, but measurements of this radiation would offer a unique window onto the mysteries of quantum gravity.
If the extra dimensions are curled up on a sufficiently large scale, ATLAS and CMS might be able to see Kaluza–Klein excitations of Standard Model particles, or even the graviton. Indeed, the spectroscopy of some extra-dimensional theories might be as rich as that of supersymmetry while, in some theories, the lightest Kaluza–Klein particle might be stable, rather like the LSP in supersymmetric models.
Back to the beginning
By colliding particles at very high energies we can recreate the conditions that existed a fraction of a second after the Big Bang, which allows us to probe the origins of matter. This may be linked to the question of why there are so many different types of matter particles in the universe. Experiments at LEP revealed that there are just three “families” of elementary particles: one that makes up normal stable matter, and two heavier unstable families that were revealed in cosmic rays and accelerator experiments. The Standard Model does not explain why there are three and only three families, but it may be that their existence in the early universe was necessary for matter to emerge from the Big Bang, with little or no antimatter. It seems likely that the answers to these questions are linked at a fundamental level.
Andrei Sakharov was the first to point out that particle physics could explain the origin of matter in the universe by the fact that matter and antimatter have slightly different properties, as discovered in the decays of K and B mesons, which contain strange and bottom quarks, members of the heavier families. These differences are manifest in the phenomenon of CP violation. Present data are in good agreement with the amount of CP violation allowed by the Standard Model, but this would be insufficient to generate the matter seen in the universe.
The Standard Model accounts for CP violation within the context of the Cabibbo–Kobayashi–Maskawa (CKM) matrix, which links the interactions between quarks of different type (or flavour). Experiments at the B-factories at KEK and SLAC have established that the CKM mechanism is dominant, so the question is no longer whether this is “right”. The task is rather to look for additional sources of CP violation that must surely exist, to create the cosmological matter–antimatter asymmetry via baryogenesis in the early universe. It is an open question whether these may provide new physics at the tera-electron-volt scale accessible to the LHC. On the other hand, if the LHC does observe any new physics, such as the Higgs boson and/or supersymmetry, it will become urgent to understand its flavour and CP properties.
The LHCb experiment will be dedicated to probing the differences between matter and antimatter, notably looking for discrepancies with the Standard Model. The experiment has unique capabilities for probing the decays of mesons containing both bottom and strange quarks. It will be able to measure subtle CP-violating effects in Bs decays, and will also improve measurements of all the angles of the unitarity triangle, which expresses the amount of CP violation in the Standard Model. The LHC will also provide high sensitivity to rare B decays, to which the ATLAS and CMS experiments will contribute, in particular, and which may open another window on CP violation beyond the CKM model.
In addition to the studies of proton–proton collisions, heavy-ion collisions at the LHC will provide a window onto the state of matter that would have existed in the early universe at times before quarks and gluons “condensed” into hadrons, and ultimately into the protons and neutrons of the primordial elements. When heavy ions collide at high energies they form for an instant a “fireball” of hot, dense matter. Studies, in particular by the ALICE experiment, may resolve some of the puzzles posed by the data already obtained at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven. These data indicate that there is very rapid thermalization in the collisions, after which a fluid with very low viscosity and large transport coefficients seems to be produced. One of the surprises is that the medium produced at RHIC seems to be strongly interacting (see Theory ties strings round jet suppression). The final state exhibits jet quenching and the semblance of cones of energy deposition akin to Mach shock waves or Cherenkov radiation patterns, indicative of very fast particles moving through the medium faster than the speed of sound or of light within it.
Experiments at the LHC will enter a new range of temperatures and pressures, thought to be far into the quark–gluon plasma regime, which should test the various ideas developed to explain results from RHIC. The experiments will probably not see a real phase transition between the hadronic and quark–gluon descriptions; it is more likely to be a cross-over that may not have a distinctive experimental signature at high energies. However, it may well be possible to see quark–gluon matter in its weakly interacting high-temperature phase. The larger kinematic range should also enable ideas about jet quenching and radiation cones to be tested.
First expectations
The first step for the experimenters will be to understand the minimum-bias events and compare measurements of jets with the predictions of QCD. The next Standard Model processes to be measured and understood will be those producing the W and Z vector bosons, followed by top-quark physics. Each of these steps will allow the experimental teams to understand and calibrate their detectors, and only after these steps will the search for the Higgs boson start in earnest. The Higgs will not jump out in the same way as did the W and Z bosons, or even the top quark, and the search for it will demand an excellent understanding of the detectors. Around the time that Higgs searches get underway, the first searches for supersymmetry or other new physics beyond the Standard Model will also start.
In practice, the teams will look for generic signatures of new physics that could be due to several different scenarios. For example, missing-energy events could be due to supersymmetry, extra dimensions, black holes or the radiation of gravitons into extra dimensions. The challenge will then be to distinguish between the different scenarios. For example, in the case of distinguishing between supersymmetry and universal extra dimensions, the spectra of higher excitations would be different in the two scenarios, the different spins of particles in cascade decays would yield distinctive spin correlations, and the spectra and asymmetries of, for instance, dileptons, would be distinguishable.
What is the discovery potential of this initial period of LHC running? Figure 5a shows that a Standard Model Higgs boson could be discovered with 5 σ significance with 5 fb⁻¹ of integrated and well-understood luminosity, whereas 1 fb⁻¹ would already suffice to exclude a Standard Model Higgs boson at the 95% confidence level over a large range of possible masses. However, as mentioned above, this Higgs signal would receive contributions from many different decay signatures, so the search for the Higgs boson will require researchers to understand the detectors very well to find each of these signatures with good efficiency and low background. Therefore, announcement of the Higgs discovery may not come the day after the accelerator produces the required integrated luminosity!
Paradoxically, some new physics scenarios such as supersymmetry may be easier to spot, if their mass scale is not too high. For example, figure 5b shows that 0.1 fb⁻¹ of luminosity should be enough to detect the gluino at the 5 σ level if its mass is less than 1.2 TeV, and to exclude its existence below 1.5 TeV at the 95% confidence level. This amount of integrated luminosity could be gathered with an ideal month’s running at 1% of the design instantaneous luminosity.
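As a rough cross-check of this last statement (assuming the nominal LHC design luminosity of 10³⁴ cm⁻² s⁻¹ and of the order of 10⁶ s of effective beam time in such a month):

\[
\int L\,\mathrm{d}t \;\approx\; 0.01 \times 10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \times 10^{6}\,\mathrm{s}
\;=\; 10^{38}\,\mathrm{cm^{-2}} \;=\; 0.1~\mathrm{fb^{-1}},
\]

using 1 fb⁻¹ = 10³⁹ cm⁻².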
We do not know which, if any, of the theories that I have mentioned nature has chosen, but one thing is sure: once the LHC starts delivering data, our hazy view of this new energy scale will begin to clear dramatically. Particle physics stands on the threshold of a new era, in which the LHC will answer some of our deepest questions. The answers will set the agenda for future generations of particle-physics experiments.
The gas electron multiplier (GEM) detector developed at CERN by Fabio Sauli has several unique features. For example, it can operate at relatively high gains in pure noble gases, and can be combined with other devices of the same kind to operate in a cascade mode. Indeed, cascaded GEM structures now feature in several large-scale high-energy physics experiments, such as COMPASS, TOTEM and LHCb at CERN. The basic device consists of a metallized polymer foil chemically pierced to form a dense array of microscopic holes. Applying a voltage across the foil creates a high electric field in the holes, which then act as tiny proportional counters, amplifying the ionization charge. However, despite great progress in its development and optimization, the GEM is still a rather fragile detector. It requires very clean and dust-free conditions during its manufacture and assembly, and it can be easily damaged by sparks, which are almost unavoidable when operating at high gain.
To try to overcome these problems, a few years ago a team of physicists from CERN and the Royal Institute of Technology in Stockholm developed a more robust version of the GEM, which was further improved by a team at the Weizmann Institute of Science in Rehovot. Called the thick GEM (TGEM), it is based on printed circuit boards (PCBs) metallized on both sides, with an array of tiny holes drilled through (figure 1). Typically 0.5–1.0 mm thick, it is manufactured using the standard industrial PCB processing techniques for precise drilling and etching. The TGEM has excellent rate characteristics and can operate at higher gains than the GEM, but it can still be damaged by sparks.
Now a small team from CERN and INFN has developed a new, more spark-resistant version of the GEM in which the metallic electrode layers are replaced with electrodes of resistive material. We built the first prototypes from a standard PCB 0.4 mm thick. We glued sheets of resistive Kapton (100XC10E5), 50 μm thick, onto both surfaces of the PCB to form the resistive electrode structures, and drilled holes 0.3 mm in diameter with a pitch of 0.6 mm using a CNC machine. The surface resistivity of the material created in this way varied from 500 to 800 kΩ/square, depending on the particular sample. After the drilling was finished, the copper foils were etched away from the active area of the detector (30 mm × 30 mm), leaving only a copper frame for the connection of the high-voltage wires in the circular part of the detector (figure 2). We call this the resistive-electrode thick GEM (RETGEM).
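From the numbers above, the hole pattern can be characterized with a few lines of arithmetic. The sketch below assumes a simple square lattice (the lattice arrangement is not specified here) and computes the open-area fraction and the approximate number of holes in the 30 mm × 30 mm active area:

```python
import math

# Geometry of the prototype hole pattern: 0.3 mm holes on a 0.6 mm pitch,
# 30 mm x 30 mm active area. A square lattice is assumed for illustration.
diameter, pitch, side = 0.3, 0.6, 30.0          # all in mm
hole_area = math.pi * (diameter / 2.0) ** 2     # area of one hole
cell_area = pitch ** 2                          # one hole per square cell
open_fraction = hole_area / cell_area           # fraction of the surface that is open
n_holes = (side / pitch) ** 2                   # holes in the active area
print(f"open area ~{open_fraction:.0%}, ~{n_holes:.0f} holes")
# ~20% open area and ~2500 holes under these assumptions.
```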
The detector operates in the following way. When a high voltage is applied to the copper frames, the Kapton electrodes act as equipotential layers, owing to their finite resistivity, and the same electric field forms inside and outside the holes as in a TGEM with metallic electrodes. So at low counting rates the detector should operate as a conventional TGEM, while at high counting rates, and in the case of discharges, its behaviour should be more like that of a resistive-plate chamber. The RETGEM is only seven times thicker than the conventional GEM structures and could easily be bent to form a semi-cylindrical shape, as is preferred in some cases, such as in the future NA49 experiment at CERN.
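Why the behaviour changes at high rate can be seen from an order-of-magnitude estimate: the avalanche current drawn through the resistive layer produces a local voltage drop that lowers the effective gain, the same mechanism that limits resistive-plate chambers. All parameter values in the sketch below are illustrative assumptions, not measurements from this work:

```python
# Crude estimate of the voltage sag across the resistive electrode.
e_charge  = 1.6e-19    # C, electron charge
rate      = 1.0e4      # counts per cm^2 per s (assumed benchmark, ~10 kHz/cm^2)
gain      = 1.0e5      # assumed avalanche gain
n_primary = 100        # assumed primary electrons per count
area      = 3.0 * 3.0  # cm^2, the 30 mm x 30 mm active area
r_sheet   = 7.0e5      # ohm/square, middle of the 500-800 kOhm/square range

current = rate * area * n_primary * gain * e_charge   # total current, in A
sag = current * r_sheet   # drop towards the grounded frame, geometry factors ignored
print(f"current ~ {current:.1e} A, voltage sag ~ {sag:.2f} V")
# ~0.1 V against an operating voltage of order a kilovolt (assumed): negligible
# here, but the sag grows linearly with rate, so the gain eventually drops.
```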
We have made systematic studies and further developments of the RETGEM in collaboration with the High Momentum Particle Identification (HMPID) group of the ALICE Collaboration and the ICARUS research group from INFN Padova. These investigations show that the maximum achievable gain before sparks appear in the RETGEM is at least 10 times higher than in the case of the conventional GEM (figure 3). Moreover, when sparks do appear at higher gains, the current in these discharges is an order of magnitude less than in the case of the TGEMs, so they do not damage either the detector or the front-end electronics.
We have since manufactured RETGEMs 1 and 2 mm thick with active areas of 30 mm × 30 mm and 70 mm × 70 mm in the TS/DEM/PMT workshop at CERN and successfully tested the devices. The maximum gain achieved was 2–3 times higher than with the device that was only about 0.4 mm thick, reaching a value of close to 10⁵; as before, sparks did not damage the detector. The RETGEMs could operate at up to 10 kHz/cm² without a noticeable drop in the signal amplitude, while at higher counting rates the signal amplitude began dropping, as happens with resistive-plate chambers. We also found that double RETGEMs can operate stably in a cascade mode; we observed no charging-up effect despite the high resistivity of the electrodes and achieved gains close to 10⁶ with the double-step RETGEMs.
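As a reminder of why cascading helps, the overall gain of a double-step structure is simply the product of the per-stage gains, so each stage can be run well below its sparking limit. The per-stage values in the sketch below are assumptions chosen only to reproduce the order of magnitude quoted above:

```python
import math

# The overall gain of a cascaded structure is the product of the per-stage
# gains; the stage values here are illustrative assumptions, not measurements.
def cascade_gain(stage_gains):
    return math.prod(stage_gains)

print(f"{cascade_gain([1.0e3, 1.0e3]):.0e}")
# ~1e6 from two stages at a modest ~1e3 each, the order quoted for double RETGEMs.
```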
The most interesting discovery was that if we coat the cathode of the RETGEM with a caesium iodide (CsI) photosensitive layer, the detector acquires a high sensitivity to ultraviolet light – an approach that has already been used with the conventional GEM with metallic electrodes. In contrast to these earlier attempts, however, in our case the CsI was deposited directly onto the dielectric layer, that is, with no metallic substrate present. Surprisingly enough, this detector worked very stably in the pulse-counting mode, easily achieving gains of 6 × 10⁵ in double-step operation. The measured quantum efficiency was 34% at a wavelength of 120 nm, which is sufficient for some applications, such as ring-imaging Cherenkov (RICH) detectors or the detection of scintillation light from noble liquids.
These studies have shown that RETGEMs can compete with the GEM in many applications that do not require very fine position resolution. Indeed the RETGEM offers a maximum achievable gain that is 10 times higher, is intrinsically protected against sparks and is thus very robust, can be assembled in ordinary laboratory conditions without using a clean room, and can operate in poorly quenched gases and gas mixtures. Other resistive coatings could also be used and the resistivity optimized for each application.
We believe that the new detector will have a great future and will find a wide range of applications in many areas. In high-energy physics it could be used, for example, in RICH detectors, muon systems, calorimetry and noble-liquid time projection chambers.
• The RETGEM team comprises Rui de Oliveira (CERN TS/DEM/PMT workshop), Paolo Martinengo (ALICE HMPID group), Vladimir Peskov (ALICE HMPID group), Francesco Pietropaolo (INFN Padova) and Pio Picchi (INFN Frascati).
Particle physics often describes itself, and correctly so, as having brought countries and people together that previously had been unable to co-operate with each other. In Europe, CERN was born out of a desire for co-operation. This was evident later, for example, when Russian and Chinese scientists worked well together within the US throughout the Cold War. This spirit of connection across national boundaries led to success for our science – and for us all as scientists. The strong innate desire to understand our universe transcends our differences. Our field was in many ways, or so we like to say, the first and most successful model in modern international relations. CERN embodies this co-operation.
Nowadays, however, we cannot rest on our laurels. This co-operation is happening in almost every other field of research; international facilities and multinational teams of researchers are no longer unique to particle physics. So what is the next level of co-operation for us? To some it might be obvious. We should continue to strive for a seamless global vision of science projects, and we should distribute those projects around the world so as to maximize the benefits of science in all countries, large or small, rich or poor. The ITER and LHC projects perhaps exemplify global projects: the world unites to select, design, build and operate a project. Particle physicists, as everyone knows, are considering another one, an International Linear Collider (ILC).
The Global Design Effort (GDE) for an ILC is not “flat” globally, but is a merging of regions. The world has been divided into three geographical areas: Asia, the Americas and Europe. In this mixture, Canada is an interesting case study. TRIUMF, Canada’s National Laboratory for Particle and Nuclear Physics, is located in Vancouver, on the Asia–Pacific rim, yet only a few miles north of the US border. TRIUMF, though a small laboratory, hosts more than 550 scientists, engineers, technicians, postdoctoral fellows and students, and more than 1000 active users from Canada, the US and around the world. Historically, TRIUMF and the Canadian particle-physics community have made significant intellectual contributions to major projects – both on the accelerator side and the detector-physics side – in Europe at DESY with HERA and ZEUS, at CERN with the LHC and ATLAS, and most recently in Japan with T2K at J-PARC. Canadian particle physicists have also been active in experiments in the US, such as SLD and BaBar at SLAC, CDF and D0 at Fermilab and rare-kaon experiments at Brookhaven National Laboratory.
TRIUMF also has a world-leading in-house radioactive-beam programme using the ISOL technique, familiar from CERN’s ISOLDE. TRIUMF’s nuclear physicists are collaborating with China and India and have strong ties to France (GANIL), Germany (GSI), the UK and Japan. TRIUMF is truly global, reflecting that Canada is close to Europe in culture, close to the US geographically and culturally, and is on the Asia–Pacific rim. Canada also continues to merge the cultures of nuclear and particle physics, just as CERN is doing at the LHC with ALICE, ATLAS and CMS. A good example is the Sudbury Neutrino Observatory (SNO), where particle and nuclear physicists came together and did great science. SNOLAB will also merge nuclear and particle physics to pursue neutrino and dark-matter searches (see Canada looks to future of subatomic physics). TRIUMF’s infrastructure and technical resources allowed Canadian physicists to help build SNO and will be important in the future for experiments at SNOLAB.
TRIUMF is not yet fully engaged in the ILC effort, though given its history it is clear that it will want to participate significantly. Canadian particle physicists are strong proponents of an ILC and believe that it is a great opportunity with tremendous discovery potential. However, the areas in which TRIUMF will be involved, and the regions with which it will partner, are still under discussion.
One fact remains: involvement in any international science project must also feed back to strengthen national programmes. Advances in accelerator technology and detector development for the LHC help the entire national science programme, including nuclear physics, life sciences and condensed-matter physics. ILC and superconducting radio-frequency (SRF) development will also be important for Canada and TRIUMF’s in-house programmes. The latest ILC technology will bootstrap other vanguard technical developments in each country, just as we hope that the globally distributed computing for the LHC, such as TRIUMF’s Tier-1 centre, will have a similar impact.
A strong national science programme supports educational advances and is necessary for innovation and economic prosperity. We should keep this in mind as the world considers the ILC and other large projects, such as next-generation neutrino observatories or underground laboratories. TRIUMF’s and Canada’s strategy is to develop niches of national expertise while participating in exciting international science projects such as the LHC and ILC. The development of such niches is essential to the future prosperity of our field.
All of this will require strategic regional and global planning in particle and nuclear physics. Surely, we are up for this challenge!
After investing in ATLAS and LHC for many years, Canada and TRIUMF are looking forward to a decade or more of great discoveries.
Giuseppe Paulo Stanislao Occhialini, or Beppo to his many friends across the world, was a charismatic, dynamic leader of discovery in particle and astrophysics for more than 50 years from the 1930s. These essays and reminiscences, by 30 colleagues and others who knew him, review his life to celebrate the centenary of his birth in 1907.
The early years of Occhialini’s career were remarkable for two close encounters with the Nobel Prize: through his work on cosmic rays with Patrick Blackett and a decade later with Cecil Powell. His interest in cosmic rays began while studying at the Institute of Physics at Arcetri, part of the University of Florence, where he learnt to use new coincidence circuitry for Geiger–Müller counters from its developer, Bruno Rossi. After graduating in 1929 Occhialini stayed in research and in 1931 Rossi sent him to Cambridge to learn about Wilson cloud chambers from Blackett – who in turn learnt from Occhialini the advantages of using counters in coincidence to trigger the chamber. Soon, although unluckily a week or so after Carl Anderson at Caltech, they saw their first “positive electrons”, but, unlike Anderson, they observed e⁺e⁻ pairs and recognized that Paul Dirac’s new relativistic quantum theory predicted this. Occhialini was a keen member of the “Kapitza Club” at Cambridge’s Cavendish Laboratory, where he met Dirac.
Returning to Arcetri in 1934, Occhialini found that things had changed. Fascism was taking power in Italy, so he left for an appointment at the University of São Paulo in Brazil, where he stayed throughout the Second World War. He built a strong group there using counters in cosmic-ray research before leaving at the end of 1944 for England at the invitation of Blackett, who thought that his help would be valuable in the work on an atomic bomb. Since Occhialini was Italian, this was not allowed, and in autumn 1945 he went to Bristol to join Powell, who was using photographic emulsions to study low-energy nuclear reactions. Occhialini was immediately intrigued and impressed by the elegance and power of the method, but saw the need to improve the emulsions’ sensitivity, so he contacted the technical staff at Kodak and Ilford to press his case. Ilford then produced the C2 emulsion, with eight times the silver halide concentration, which Powell and Occhialini “warmly welcomed”, according to Ilford’s man in charge, Cecil Waller.
Occhialini proposed exposing C2 plates to cosmic radiation at the top of the Pic du Midi (2800 m) in the Pyrenees, and did so in summer 1946. In January 1947 Occhialini and Powell published in Nature the first of a series of papers from the Bristol group establishing the discovery of the π-meson, its decay to the μ-meson and, after Kodak produced the first emulsions able to detect minimum ionization, the μ’s decay to an electron.
It was at Bristol that Beppo met Connie: Constance Charlotte Dilworth, who was born in 1924 in Streatham, London. She started postgraduate studies in theoretical solid-state physics at Bristol in about 1946, then switched to join Powell’s group. Together with Occhialini and others, she contributed significantly to processing thick photographic emulsions. In 1948, when Occhialini was invited to Brussels to start a new nuclear emulsion group, Connie went with him. They were married in 1950 and their daughter, Etra, who contributed to this book, was born the next year. Connie and Beppo became a very effective team, a formidable duo who would provide strong leadership in Italian and European science. Beppo’s excitable Italian temperament was complemented by the calm, organized approach of Connie, a notable scientist herself who always understood how Beppo’s aspirations could be realized.
In 1950 the Occhialinis moved to Genoa and in 1952 to Milan University where Beppo was director until he retired in 1974. He built up a strong emulsion group at Milan, making major contributions to the “G-stack” and other collaborations flying emulsions on balloons. He was always looking for new challenges in physics and advances in experimental techniques. On returning from a visit to Rossi at MIT in 1960, he showed his group a new detector made of silicon, saying “think what you can do with this”. They did, and established an expertise that later became the basis for Milan’s major contribution to the central detector for the DELPHI experiment at LEP.
As machines replaced cosmic rays as a source for particle physics, and while maintaining a major presence for his group at CERN, Beppo turned to other techniques to continue his interest in cosmic rays, first with balloon-borne spark chambers and then adapting these to flights on satellites. Both Beppo and Connie were influential members of advisory and scheduling bodies for the European Space Research Organisation and together, as one contributor puts it, they pushed Italy into a leading position in astronomy. Milan was a “power house” for space research, with leading roles in two satellite experiments that mapped the sky for X-ray and gamma-ray sources: COS-B, launched in 1975, and Beppo-SAX in 1996. Beppo maintained his interest in the design of the latter until his death in 1993, and the satellite was subsequently named in his honour. Connie died in 2004.
Research into the origins of intense gamma-ray bursts (GRBs) – by far the brightest events known – is a scientific legacy of Beppo still very much alive. Until Beppo-SAX made the first accurate locations in 1997, no GRB had been associated with a visible galaxy. His most long-lasting legacies, however, are the young scientists who entered research in his care: his irrepressible enthusiasm inspired them; his lateral, dialectical probing tested their ideas; and his quick wit, wide cultural interests in art, literature and thoughts on “the film I saw last night” entertained them. This collection of essays portrays a complex personality for whom life was never dull, who was always ready to “brain storm” with colleagues, and who experienced the excitement of discovery in his research.
One question remains: why didn’t he share one of the two Nobel prizes, Blackett’s in 1948 or Powell’s in 1950?
Regular readers of CERN Courier will be familiar with the Particle Physics and Astronomy Research Council (PPARC) which has supported the UK’s research in particle physics for the past decade. Now it is time to say goodbye (and thanks) to PPARC, and to welcome its successor, the Science and Technology Facilities Council (STFC). The new council will be formed by merging PPARC with the Council for the Central Laboratory of the Research Councils, which operates the Rutherford Appleton Laboratory (RAL) and Daresbury Laboratory in the UK.
These laboratories have long been a key component in the UK’s particle-physics programme, particularly through their capabilities in engineering and instrumentation. “Rutherford cable” is well known in superconducting magnets worldwide. For the LHC, RAL has taken on important roles in engineering for the ATLAS endcap toroids, and in constructing the ATLAS silicon tracker and the CMS endcap calorimetry in conjunction with UK universities. Daresbury Laboratory hosts a strong accelerator group who have, among other things, assumed major responsibilities within the International Linear Collider global design effort.
Responsibility for nuclear physics will also transfer to STFC, so the new research council will combine support for particle physics, nuclear physics and astronomy with responsibility for large science facilities, such as synchrotron light sources, high-power lasers and the ISIS spallation neutron source at RAL. Overall STFC will be responsible for a budget of more than £500 million (including international subscriptions), will have about 2000 employees and more than 10,000 scientific users. The new council formally takes over on 1 April 2007 and Keith Mason, previously in charge of PPARC, will be its chief executive.
Among the motivations for the new council is a desire to create a more integrated approach within the UK to large scientific facilities, especially for long-term projects involving several countries acting together, and to deliver increased economic impact and knowledge exchange between industry, universities and the STFC’s national laboratories. We want to promote new and innovative ideas that cut across entrenched domains and benefit from cross-fertilization.
As part of this aim, new Science and Innovation Campuses have been set up at Daresbury and Harwell (adjacent to RAL) with the goal of promoting connections with industry and universities. STFC will develop a single science strategy across its programme, which will be used to inform its investment choices. Ownership of this strategy will be shared with the research communities and will involve both university and in-house expertise. As now, independent advisory and peer-review panels will guarantee that the best scientific advice is available.
Readers will likely be asking what this means for particle physics. In the short term, continuity is assured. Support for university groups and experiments will be maintained at the currently planned levels and the broad physics strategy developed over the past few years will continue. In the longer term, however, the new larger council offers the possibility to exploit new synergies and connections between particle-physics activities and other areas of STFC’s responsibility.
An interesting example is in accelerator R&D, where the technologies developed and needed for particle physics also underpin the development of new synchrotrons or free-electron light sources and of new high-power neutron-scattering facilities. Projects that develop competencies in these areas will thus benefit both particle physics machines and user facilities for the physical and life sciences. The price to be paid for having broader opportunities is, of course, that future particle-physics projects will necessarily be tensioned against a wider range of future options in STFC. Particle physicists will need to be able to make a compelling case for their aspirations in a broad forum, and I am confident that they will be able to do so.
I am pleased that the UK particle-physics community has shown support for the creation of the new council, and has focused on the opportunities that it brings. We in STFC look forward to working with the science community, both nationally and internationally, and with our colleagues at CERN and elsewhere, as part of our mission to enable world-class research and deliver access to state-of-the-art facilities.