Some years ago, it was customary to divide work in the exact sciences of physics, chemistry and biology into three categories: experimental, theoretical and computational. Those of us breathing the rarefied air of pure theory often considered numerical calculations and computer simulations as second-class science, in sharp contrast to our highbrow, elaborate analytical work.
Nowadays, such an attitude is obsolete. Practically all theoreticians use computers as an essential everyday tool and find it hard to imagine life in science without the glow of a monitor in front of their eyes. Today an opposite sort of prejudice seems to hold sway. A referee might reject an article demonstrating the nearly forgotten fine art of rigorous theoretical thought and reasoning if the text is not also full of plots showing numerous results of computer calculations.
Sometimes it seems that the only role that remains for theoreticians – at least in nuclear physics, which I know best – is to write down a computer code, plug in numerical values, wait for the results and finally insert them into a prewritten text. However, any perception of theorists as mere data-entry drones misses the mark.
First, to write reasonable code one needs to have sound ideas about the underlying nature of physical processes. This requires clear formulation of a problem and deep thinking about possible solutions.
Second, building a model of physical phenomena means making hard choices about including only the most relevant building blocks and parameters and neglecting the rest.
Third, the computer results themselves need to be correctly interpreted, a point made by the now-famous quip of theoretical physicist Eugene Wigner. “It is nice to know that the computer understands the problem,” said Wigner when confronted with the computer-generated results of a quantum-mechanics calculation. “But I would like to understand it, too.”
We live in an era of fast microprocessors and high-speed internet connections. This means that building robust high-performance computing centres is now within reach of far more universities and laboratories. However, physics remains full of problems of sufficient complexity to tax even the most powerful computer systems. These problems, many of which are also among the most interesting in physics, require appropriate symbiosis of human and computer brains.
Consider the nuclear-shell model, which has evolved into a powerful tool for the detailed description of the properties of complex nuclei. The model describes the nucleus as a self-sustaining collection of protons and neutrons moving in a mean field created by the particles’ co-operative action. On top of the mean field there is a residual interaction between the particles.
Applying the model means immediately facing a fundamental question: what is the best way to restrict, within reason, the number of particle orbits plugged into the computer? The answer is important because information about the orbits is encoded in matrices that must subsequently be diagonalized. For relatively heavy nuclei these matrices are so huge – with dimensions running into the billions – that they are intractable even for the best computers. This is why, at least until a few years ago, the shell model was relegated to the description of relatively light nuclei.
The breakthrough came by combining the blunt power of contemporary computing with the nuanced theoretical intellect of physicists. It was theorists who determined that a full solution of the shell-model problem is unnecessary and that it is sufficient to calculate detailed information for a limited number of low-lying states; theorists who came up with a statistical means to average the higher-level states by applying principles of many-body quantum chaos; and theorists who figured out how to use such averages to determine the impact on low-lying states.
Today physicists have refined techniques for truncating shell-model matrices to a tractable size, getting approximate results, and then adding the influence of the higher-energy orbits with the help of the theory of quantum chaos. The ability to apply the shell model to heavier nuclei may eventually advance efforts to understand nucleosynthesis in the cosmos, determine rates of stellar nuclear reactions, solve condensed-matter problems in the study of mesoscopic systems, and perform lattice QCD calculations in the theory of elementary particles. Eventually, that is, because many answers to basic physics questions remain beyond the ken of even the most innovative human–computer methods of inquiry.
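As a rough illustration of what truncation does, here is a toy numpy sketch of my own devising – not an actual shell-model code, and it omits the quantum-chaos correction entirely. It diagonalizes a large model Hamiltonian exactly and again in a low-energy subspace, and compares the lowest states:

# Toy illustration only (not a real shell-model calculation): diagonalize a
# large random Hamiltonian exactly, then again in a truncated low-energy
# subspace, and compare the lowest eigenvalues. Sizes and couplings are invented.
import numpy as np

rng = np.random.default_rng(0)
dim, dim_trunc = 2000, 400           # full and truncated basis sizes (arbitrary)

# Unperturbed configuration energies, sorted so low-lying states come first,
# plus a weak random residual interaction coupling them.
diag = np.sort(rng.uniform(0.0, 50.0, dim))
residual = rng.normal(0.0, 0.05, (dim, dim))
H = np.diag(diag) + (residual + residual.T) / 2.0

exact = np.linalg.eigvalsh(H)[:5]                              # "full" calculation
truncated = np.linalg.eigvalsh(H[:dim_trunc, :dim_trunc])[:5]  # truncated one

for e_full, e_trunc in zip(exact, truncated):
    print(f"exact {e_full:8.4f}   truncated {e_trunc:8.4f}   shift {e_trunc - e_full:+.4f}")

The small shifts of the low-lying eigenvalues are precisely the contribution of the excluded high-lying orbits – the part that the statistical averaging based on many-body quantum chaos is designed to restore.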
So yes, one can grieve over the fading pre-eminence of theory. However, few among us would want to revert to the old days, despite our occasional annoyance with the rise of computer-centric physics and the omnipresent glow of the monitor on our desks. As for my opinion, I happen to agree with the chess grandmaster who, when recently complaining about regular defeats of even the best human players by modern chess computers, said: “Glasses spoil your eyes, crutches spoil your legs and computers your brain. But we can’t do without them.”
Physics has always had a relatively low proportion of female students and researchers. In the EU there are on average 33% female PhD graduates in the physical sciences, while the percentage of female professors amounts to 9% (ECDGR 2006). At CERN the proportion is even less, with only 6.6% of the research staff in experimental and theoretical physics being women (Schinzel 2006). The fact that there is no proportional relationship between the number of PhD graduates and professors also suggests that women are less likely to succeed in an academic career than men.
Before examining the findings of various studies, it is worth asking if this low representation of women in physics is a problem – do we actually need more female physicists? In my opinion this question has to be answered from three perspectives: the perspective of society, the perspective of science and the perspective of women.
Starting from the viewpoint of society, there are several issues to consider. First, physics is a field of innovation. Many technological advancements that have a huge impact on society and everyday life come directly or indirectly from physics. Being a physicist therefore means having access to people and knowledge that set the technological agenda.
Second, in many countries research and academic positions are regarded as high-status jobs. Academic staff are often appointed to committees that fund research projects or advise governments on issues that are closely related to their field of expertise. As such, scientists influence the focus of research and the general development of society.
Finally, it is a democratic principle that power and influence should be distributed equally and proportionally among different groups in society. An EU average of 9% female physics professors does not even come close to equal representation in this field. The fact that women fund research through tax payments adds to the demand for more female scientists.
From a scientific point of view, the lack of women represents a huge waste of talent. For physics to develop further as a science, it needs more people with excellent analytical, communication and social skills. There are also reports that departments without women suffer in many ways (Main 2005).
From the perspective of women, they will of course benefit from increased influence in society, but contributing to physics is not only about struggling for influence and power. Fundamental questions have been asked throughout history by men and women alike. Contributing to physics is to participate in a human project, driven by curiosity and wonder that seeks to understand the world around us.
What the studies find
So why do women fail to advance to the top levels in academia? Some reports state that it is because women are less likely to give priority to their career (Pinker and Spelke 2005), while others cite an inferior ability to do science compared with men, or the lack of some of the abilities necessary to be successful in science. For example, one report suggests that men are on average more aggressive than women, and that this characteristic (among others) is necessary to succeed in academic work (Lawrence 2006). What these reports have in common is that they all conclude that there will never be as many women as men in academia because of innate differences between the genders, and also that these differences are the main reason for the under-representation of women.
Other reports state that women do not succeed in physics because of prejudice, discrimination and unfriendly attitudes towards them. Studies have shown that women need to be twice as productive as men to be considered equally competent (Wennerås and Wold 1997). In fact both men and women rate men’s work higher than that of women (Goldberg 1968). There is also the psychological mechanism called “stereotype threat”, which causes individuals who are made aware of the negative stereotypes connected to the social group to which they belong – such as age, gender, ethnicity and religion – to underperform in a manner consistent with the stereotype. White male engineering students will for instance perform significantly worse on tests when they are told that Asian students usually outperform them on the same tests (Steele 2004). It is important to remember that these prejudices are present in most human beings and do not necessarily arise from bad will or conscious hostility.
A survey designed to identify issues that are important to female physicists also reported on their negative experiences as a minority group owing to the male domination of the field (Ivie and Guo 2005). In this survey 80% stated that attitudes towards women in physics need to be improved, while 65% believed that discrimination is a problem that needs to be dealt with. The survey also reported on positive experiences among female physicists, in particular their love for their field and the support that they have received from others.
To produce an exhaustive list of reasons for why so few women are able to reach the highest positions in academia would be a tedious endeavour with many conflicting opinions. However, if we agree that we need more women in physics, it is clear that we need to take action. In this regard it is important to recognize that some of these actions will also be beneficial to men, improving their ability to succeed in a scientific career.
In academia several things can be changed to eliminate discrimination and hostile attitudes towards women (and men):
• Transparency in selection processes for scholarships, funding and positions, i.e. making all evaluations by the selection committees public so that any discriminatory mechanism can be uncovered. This will also benefit men, since they too are subject to discrimination (Wennerås and Wold 1997).
• Investigate hostile attitudes in institutes and laboratories. Those who discriminate tend not to see how their behaviour affects their environment, and those discriminated against are usually reluctant to admit it. The Institute of Physics in London visits institutes, on invitation only, to investigate their attitudes towards women (Main 2005).
• Make the career path more predictable. Both genders suffer from the unpredictability and requirement of mobility in an academic physics career, and this can also conflict with the desire to start a family (Ivie and Guo 2005).
• Awareness of discrimination. Nobody wants to discriminate against others, yet the use of stereotypes and prejudice is part of the human mind. It is therefore important to be aware of how these traits affect the way that we evaluate and treat others. Awareness of discriminatory procedures has brought about change: both the US National Institutes of Health (Carnes 2006) and the Swedish Medical Research Council (Wennerås and Wold 1997) changed their routines after being made aware that their evaluation and recruitment schemes were prejudiced against women.
There is no doubt that the under-representation of women in physics is a sensitive issue. Women and men who have never experienced discrimination or bias against their gender often feel repelled when the issue is discussed. However, I believe the numbers speak for themselves: women do not have the same opportunity to succeed in academia as men. As individuals we would like to think that we can all approach any branch of society without being met with hostility or bias, no matter what ethnic group, social class, religion or gender we might belong to. In the end most women would just like to be able to make the same mistakes, produce the same number of papers and be respected, accepted or rejected on the same terms as their male colleagues – not more, not less.
• With thanks to the ATLAS Women Group, David Milstead, Robindra Prabhu, Helene Rickhard, Josi Schinzel, Jonas Strandberg and Sara Strandberg.
In 1948 Giampietro Puppi, Gianni to his friends, published a paper in Il Nuovo Cimento where he distinguished the neutral counterpart of the muon – now known as the muon neutrino, νμ – from the neutral counterpart of the electron, now called the electron neutrino, νe (Puppi 1948). Fourteen years later, what Puppi had proposed in his famous paper was demonstrated experimentally by a team led by Leon Lederman, Mel Schwartz and Jack Steinberger (Danby et al. 1962). Puppi had calculated three weak processes – pion decay, muon capture and muon decay – and was able to prove that these three different processes were described by “approximately” the same fundamental weak coupling. The coupling of the three vertices of the “Puppi triangle” described all weak processes known at the time with the same strength, represented by the sides of his equilateral triangle.
This work was the first step towards the universality of the weak forces and indeed attracted the attention of Enrico Fermi, since it was the first proof that all weak processes could be described by the same coupling. It came just a year after the discovery by Marcello Conversi, Ettore Pancini and Oreste Piccioni that the negative cosmic-ray “mesons” (now known to be the leptons called muons) were disintegrating as if they were not strongly coupled to the nuclear forces (Conversi et al. 1947).
Fermi, together with Edward Teller and Victor Weisskopf, pointed out that the lifetime of this meson was 12 powers of 10 longer than the time needed for the long-sought Yukawa meson to be captured by a nucleus via the nuclear forces (Fermi et al. 1947). The solution of the puzzle was soon found by Cesare Lattes, Giuseppe Occhialini and Cecil Powell, who discovered that the cosmic-ray muon was the decay product of a particle, now known as the π meson or pion, which the authors considered to be the “primary meson” (the origin of the symbol π, for primary; Lattes et al. 1947).
To prove that rates for pion decay, muon decay and muon capture were “approximately” equal as expected by the universality of the Fermi coupling was indeed remarkable. These were great times for the understanding of the weak forces, and at the time the topic of the universality of the weak interaction was a central focus of the physics community (Klein 1948, Lee et al. 1949 and Tiomno and Wheeler 1948). The Puppi triangle played a crucial role in revealing the basic property of the new fundamental force of nature, the strength of which appeared to be so much weaker than that of the electromagnetic and of the nuclear forces.
At this time, cosmic rays provided the only source of high-energy particles, and Puppi made another valuable contribution to the field with his paper on the energy balance of cosmic rays (Puppi 1953). However, the direction of research was soon to change with the advent of particle accelerators and the newly invented bubble-chamber technology. In 1953 Puppi established the first group of the Bologna section of INFN, which gave rise to a large collaboration in the field of bubble-chamber physics and led to the observation of parity non-conservation in hyperon decays in 1957.
I have a personal reason for being grateful to Puppi around this time. When he was research director at CERN (1962–63) and later chair of the Experimental Committee (1964–65), he played a crucial role by being a strong supporter of my Non-Bubble Chamber (NBC) project. Physics was at the time dominated by bubble-chamber technology, and Puppi himself had been fully engaged in promoting the National Hydrogen Bubble Chamber in Italy, and in establishing large international collaborations for the analysis of bubble-chamber pictures. It was the need for powerful computing for this analysis that led him to establish the first computing facility in Bologna, the development of which through the subsequent decades produced what is now the largest computing centre in Italy.
Bubble-chamber technology had revealed an enormous number of baryons and mesons and Puppi was interested in what this could mean. The question arose of whether to encourage other technologies and, if so, what to do with them. In a meeting in his office as research director at CERN, the subject came up of studying the rare decay modes of mesons – especially the electromagnetic decay modes. This needed NBC technology. As a typical exponent of the classical culture of Venice, Puppi was open to new horizons and made the point that new technologies had to be encouraged; and this is how the NBC project began. He was no longer at CERN when, in 1968, thanks to the NBC set-up, a new decay mode of the X0 meson (now η’) into two photons was discovered, thus establishing that this heavy meson could not be the missing member of the tensor octet. This was the first step in determining directly the correct value of the pseudoscalar meson mixing.
During a meeting on Meson Resonances and Related Electromagnetic Phenomena at the European Physical Society conference in Bologna in 1971, Dick Dalitz pointed out that it was thanks to physics leaders of the calibre and vision of Puppi that new horizons in the physics of mesons had been opened. The problem of the vector and pseudoscalar meson mixings needed NBC technology in order to be investigated experimentally. These were times when no data on vector mesons existed from electron–positron colliders and direct measurements of the pseudoscalar and vector-meson mixings did not exist. As we now know, to understand the mesonic mixings, it was necessary first to discover the theory of quantum chromodynamics and then to discover instantons. No-one could have imagined these developments, rooted in the physics of mesons, when, in the 1960s, CERN’s research director had encouraged the young fellows to propose new ways to go beyond bubble-chamber technology, and knowledge of the meson mixings was based only on their masses, which Puppi correctly considered a tautology.
Puppi’s scientific interests extended beyond particle physics to space physics, which led him to become president of the European Space Research Organization (ESRO) and a co-founder of the European Space Agency (ESA). In the field of ecology and the protection of the treasures of civilization, he also founded the Istituto delle Grandi Masse to study, on a rigorous scientific basis, the sea-water dynamics so vital for the future of his beloved Venice.
The last time I had the pleasure and the privilege to meet my teacher was a few weeks before he departed this life in December 2006. He never stopped pursuing a multitude of interests, including the future of CERN, having been not only a research director but also a member of the CERN Council. He was very concerned when he learned that the Council now does not always express its full support for the laboratory’s activities.
“During my time, the CERN Council was a strong supporter of the decisions taken, always, for the strengthening of the scientific excellence of the results, to be obtained in the most civilized competition mankind can put forward: physics. No one should underestimate the fact that CERN has the remarkable property of being unique in the world.” These were his last words.
Experiments at accelerators have produced many key breakthroughs in particle physics during the past 50 years. Today, as exploration begins of physics at the “terascale”, the machines needed are extremely large, costly and time-consuming to build. In 1982, however, recognizing that this was how the field would evolve, the US Department of Energy (DOE) began a programme to develop new ideas for particle acceleration, a programme that has since become extremely active. From the outset it was clear that developing an entirely new concept for accelerating charged particles would be a multi-disciplinary endeavour, requiring a sustained research effort of several decades to come to fruition (HEPAP 1980). Here I would like to examine just how far one advanced concept – plasma-based particle accelerators – has come after 20 or so years of research, and to indicate how it is likely to develop in the next decade.
Historical background
The first suggestions for using “collective fields” generated by a medium-energy electron beam to accelerate ions to high energies can be traced to Gersh Budker and Vladimir Veksler. However, plasma-based accelerators did not take off until John Dawson and his co-workers at the University of California, Los Angeles (UCLA) proposed the use of a space-charge disturbance, or a “wakefield”, to accelerate electrons (Joshi 2006; Joshi and Katsouleas 2003). Serendipitously, the ideas that Dawson developed between 1978 and 1985 coincided with the DOE’s initiative on advanced accelerator techniques and were supported first in the US and then in other countries.
Wakefields in a plasma can be driven by an intense laser pulse (the laser-wakefield accelerator) or an electron-beam pulse (the plasma-wakefield accelerator) that is about half a plasma wavelength long. In the former it is the radiation pressure of the laser pulse that pushes away the plasma electrons, whereas in the latter this is achieved by the space-charge force of the (highly relativistic and therefore stiff) electron beam. The plasma electrons are predominantly blown out radially, but because of the space-charge attraction of the plasma ions, they are attracted back towards the rear of the laser (or the particle) beam where they overshoot the beam axis and set up a wakefield oscillation. In a 1D picture the wake resembles a series of capacitors where the mostly transverse electric field of the laser (particle) beam has been transformed into a longitudinal electric field of the wake. Charged particles in an appropriately phased trailing pulse can then extract energy from the wakefield (figure 1a).
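A useful yardstick for the fields such a wake can sustain – not quoted in the article, but standard in the field – is the cold, nonrelativistic wave-breaking field set by the plasma frequency:

\[ E_0 = \frac{m_e c\,\omega_p}{e}, \qquad \omega_p = \sqrt{\frac{n_e e^2}{\varepsilon_0 m_e}}, \qquad E_0\,[\mathrm{V/m}] \approx 96\sqrt{n_e\,[\mathrm{cm^{-3}}]} . \]

For a plasma density of 10¹⁸ cm⁻³ this gives roughly 100 GV/m, which is where the often-quoted gradients of the order of 100 GeV/m come from.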
The mixture of physics disciplines involved meant that even proof-of-concept experiments on plasma accelerators required expertise in plasma physics, lasers and beam physics. Since such expertise resided in universities, most of the early work was carried out by small university groups. By the 1990s many teams around the world had confirmed that plasma wakes did indeed have accelerating gradients of the order of 100 GeV/m and could accelerate electrons, often trapped from the plasma itself, with a continuous energy spectrum up to 100 MeV. However, two important goals remained if the proponents of plasma-based accelerators were to provide beams of interest to the end user of this technology – the high-energy physics community. They needed to show that plasma accelerators could produce a “monoenergetic beam” of electrons and that the high-gradient acceleration could be maintained over scales of a metre. There has been significant progress in achieving both of these goals in the past couple of years.
The “plasma bubble” accelerator
Most laser-driven and particle-driven plasma-wakefield accelerators now operate in the “bubble regime”. Here the drive pulse is so intense that it expels all of the plasma electrons, whose subsequent trajectories enclose a “bubble” of ions. The resulting wakefield structure is 3D and the longitudinal wakefield is highly nonlinear (figure 1b). The phase velocity of the wakefield is tied to the group velocity of the drive beam, which is approximately the velocity of light, c.
In present laser-wakefield accelerator experiments, even though the phase velocity is relativistic, the accelerating particles eventually outrun the wave in a relatively short distance, of the order of a few millimetres to a centimetre – this is called the dephasing limit. While this dephasing limits the maximum energy gain, it has the benefit of generating a monoenergetic electron beam. How does this happen? First, as the radially blown-out plasma electrons rush back toward the axis, a significant number of them are trapped by the longitudinal field of the wake. Second, this self-trapping is severe enough to load the wake with so many electrons that the energy they extract reduces its amplitude, thereby turning off any further trapping – an effect known as beam loading. As the trapped electrons are accelerated their energy initially increases monotonically. However, eventually the electrons in the front dephase and begin to lose energy, while the electrons behind them continue to gain energy (phase-space rotation). This produces a quasi-monoenergetic bunch.
Research groups have now seen such monoenergetic bunches in at least half a dozen laser-wakefield accelerator experiments around the world. Recently the group at Lawrence Berkeley National Laboratory, in collaboration with Oxford University, has used a plasma discharge in a 3.3 cm long capillary tube to produce a hydrogen plasma channel. When the team guided a 40 TW laser pulse through this channel, they produced a monoenergetic beam with an energy up to 1 GeV (Leemans et al. 2006). To go to higher particle energies, laser pulses of even higher power need to be propagated over longer distances in plasma channels. In the next few years we will see if 100 TW class pulses can be guided through plasma channels 10–30 cm long with a plasma density in the range of 10¹⁷ cm⁻³, to produce 10 GeV pulses of high beam quality.
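A back-of-envelope check of those numbers (my own estimate using the wave-breaking field introduced above, ignoring dephasing, beam loading and pump depletion, so it is only a rough upper-bound scaling):

# Rough scaling estimate, not taken from the article: the cold wave-breaking
# field E0 ~ 96*sqrt(n_e[cm^-3]) V/m sets the gradient scale; multiplying by
# the channel length gives a crude upper bound on the energy gain.
import math

def wave_breaking_field(n_e_cm3):
    """Cold, nonrelativistic wave-breaking field in V/m."""
    return 96.0 * math.sqrt(n_e_cm3)

# Gradient scale at two densities relevant to the text
for n_e in (1e18, 1e17):
    print(f"n_e = {n_e:.0e} cm^-3 -> E0 ~ {wave_breaking_field(n_e) / 1e9:.0f} GV/m")

# Projected stage: roughly 30 GV/m sustained over ~30 cm of plasma channel
gain_GeV = wave_breaking_field(1e17) * 0.30 / 1e9
print(f"energy gain over 30 cm at 1e17 cm^-3 ~ {gain_GeV:.0f} GeV")

The result – about 30 GV/m and of order 10 GeV over 30 cm – is consistent with the goal stated above, even though the real, loaded gradient sits somewhat below the wave-breaking value.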
Plasma-wakefield accelerator
There are fewer particle-beam-driven plasma acceleration experiments than laser-driven ones, because there are fewer suitable beam facilities in the world than facilities that can deliver ultra-short laser pulses. The first beam-driven plasma-wakefield experiments were carried out at the Argonne Wakefield Accelerator Facility in the 1980s. Now, however, a series of elegant experiments done at SLAC by the UCLA/USC/SLAC collaboration has mapped the physics of electron and positron beam-driven wakes and shown acceleration gradients of 40 GeV/m using electron beams with metre-scale plasmas.
In the SLAC experiments only one electron pulse was used to excite the wakefield (Blumenfeld et al. 2007). Since the energy of the drive pulse is nominally 42 GeV, both the electrons and the wake are moving at a velocity close to c, so there is no relative motion between the electrons and the wakefield. Most of the electrons in the drive pulse lose energy in exciting the wake, but some electrons in the back of the same pulse can gain energy from the wakefield as the wakefield changes its sign.
When the 42 GeV SLAC electron beam passed through a column of lithium vapour 85 cm long, the head of the beam created a fully ionized plasma and the remainder of the beam excited a strong wakefield. Figure 2a (p29) shows the energy spectrum of the beam measured after the plasma. The electrons in the bulk of the pulse that lost energy in driving the wake are mostly dispersed out of the field of view of the spectrometer camera and so are not seen in the spectrum. However, electrons in the back of the same pulse are accelerated and reach energies up to 85 GeV. The measured spectrum of the accelerated particles was in good agreement with the spectrum obtained from computer simulations of the experiment, as figure 2b shows. This is a remarkable result when one realises that while it takes the full 3 km length of the SLAC linac to accelerate electrons to 42 GeV, some of these electrons can be made to double their energy in less than a metre.
Over the past 25 years, a relatively small number of dedicated researchers have solved many technical problems to reach a point where plasma-based accelerators are producing energy gains of interest to high-energy physics, but there are still many challenges ahead of us. The one that is often brought up is the energy spread and emittance of the accelerated electrons. Laser experiments have already shown self-trapped electron beams with an energy spread of a few per cent. In a beam-driven plasma accelerator a different plasma-electron trapping mechanism, called ionization trapping, could generate a perfectly phased sub-micrometre beam suitable for multi-stage acceleration, with an extremely low emittance and a narrow energy spread. Then there is the issue of the possible degradation of the beam quality because of collisions and, possibly, ion motion. If these are shown to be important effects then, like beam “hosing” and beam “head erosion”, they will represent a design constraint on a plasma accelerator (Blumenfeld et al. 2007).
The next key challenge for plasma-based acceleration is to realise high-gradient acceleration of positrons. Positron acceleration is different from electron acceleration in the sense that the focusing forces of positron pulse-generated wakes have nonlinear longitudinal and transverse variation. It may be worthwhile accelerating positrons in linear plasma wakes generated by an electron pulse or by wakefields induced in a hollow channel, but this needs to be demonstrated.
Once electron and positron acceleration issues, including energy spread and emittance, have been addressed, the next key development is the “staging” of two plasma-accelerator modules. Again, for high-energy physics applications each module should be designed to add of the order of 100 GeV to the energy of the accelerating beam. Given the microscopic physical size of the accelerating structure (the wavelength is about 100 μm), it is probably wise to minimize the number of plasma acceleration stages. In fact in the proposed energy doubler for the SLAC linac, only a single plasma-wakefield accelerator module was deemed necessary (Lee et al. 2002). In scaling this concept to 1 TeV centre-of-mass energy, one can envision a superconducting linac producing a train of five 100 GeV drive pulses, separated by about 1 μs, but containing three times the charge of the beam pulse that is being accelerated (figure 4). The drive pulses are first separated from one another and subsequently brought back to be colinear with the accelerating beam. Each pulse drives one stage of the plasma-wakefield accelerator, from which the accelerating beam gains 100 GeV in energy (Yakimenko and Ischebeck 2006). Both electrons and positrons can be accelerated in this manner. Alternatively, one can imagine an e⁻–e⁻ or a γ–γ collider instead of an e⁻–e⁺ collider, which could greatly reduce the cost of such a machine.
Key challenges
I have described the many fine accomplishments of the advanced acceleration-research community by using the example of plasma-based accelerators. How will this and other concepts for advanced acceleration progress in the next decade? Will they continue to make progress to stay on track for a prototype demonstration of a new accelerator technology in the early 21st century? The answer depends on the availability of one or more suitable experimental facilities to do the next phase of research that I have outlined.
There are several 100 TW class laser facilities in Europe, the US and Asia that should advance the laser-wakefield accelerator to give multi-giga-electron-volt beams. To go beyond this, a high repetition rate, 10 PW class laser facility is needed to demonstrate a 100 GeV prototype of a laser-driven plasma accelerator.
All advanced acceleration schemes will eventually have to face positron acceleration. How and where will experiments on high-gradient positron acceleration be done? The plasma-wakefield accelerator experiments that led to the energy doubling of 42 GeV electrons were carried out at the Final Focus Test Beam (FFTB) at SLAC, which has been recently decommissioned to make way for the Linac Coherent Light Source. SLAC has proposed a “replacement FFTB” beam line called SABER, which will provide experimenters with 30 GeV electron and positron beams. If adequately supported, SABER could become the premier facility not only for plasma-acceleration research but also for other advanced acceleration concepts.
There are about 40 groups worldwide working in plasma-based acceleration with a critical mass of trained scientists and students who are attracted to the field because it offers many chances to make unexpected discoveries. The time is now ripe to invest in appropriate facilities to take this field to the next level. It could be the critical factor that makes the difference to the future of high-energy physics in the 21st century.
When acclaimed playwright, novelist and translator Michael Frayn visited CERN in March there was a distinct air of humility about him. This Tony award winner, who is best known for such plays as Copenhagen and Noises Off, is genuinely honest and enthusiastic about science – a subject he openly claims he knows little about.
Frayn studied philosophy at Cambridge and went on to study in Russia during the Cold War, eventually becoming one of the leading translators of Russian literature. So, how did he find himself exploring science in his works? It all started in a rather serendipitous way: “When I was six years old there was a gang at school led by a fierce, Amazon-like girl, and because I wore spectacles I was appointed ‘gang scientist’. My job was to make explosives for the gang, using chalky soil, sawdust and elderberries.” Although he failed in his mission, this sparked his interest in science.
During his twenties Frayn served in the army and made friends with a fellow soldier who was fascinated by science and went on to become a zoologist. It was this friendship that introduced him to quantum theory and the uncertainty principle. As a student of philosophy, he also encountered extraordinary applications for these ideas, but it was not until the 1990s when he wrote Copenhagen that he began exploring science on a deeper level. “My only access to science is through the wonderful books that some scientists and science writers produce for the benefit of the lay people. Science becomes a very specialized subject and any non-scientist who ventures into it is a fool. But at the same time, I think you have to be an even bigger fool not to try to understand something, because it’s so important.”
Frayn indicates that modern, experimental science is possibly the greatest human achievement, affecting everything about the world and our general philosophical understanding of it. However, as Richard Feynman pointed out, to truly understand science, especially physics, one needs to understand mathematics. Despite this struggle to understand, Frayn says that it is important to let a little science into one’s life.
After bursting onto the scientific scene with his play Copenhagen, to much critical acclaim, Frayn was greatly struck by the generosity with which scientists treated it. He points out that there were a great many mistakes in the play, despite all of his efforts, and he was surprised by the graciousness of the letters suggesting that he take another look at certain aspects. By contrast, “people in the arts, I think, take a great malicious pleasure in correcting each other,” he says.
Frayn first heard of Niels Bohr and Werner Heisenberg while on the services’ Russian course at Cambridge with his friend. Yet, he was only introduced to the story of Heisenberg’s visit to Bohr in Copenhagen in 1941 when he read a book by Thomas Powers called Heisenberg’s War. It was an unusual situation with old friends meeting under tremendously difficult circumstances as the Nazi regime had occupied the city. Many questions remain about the discussions that took place during this tense visit and the idea of such uncertainty sparked Frayn’s imagination.
“What fascinated me about the story are the questions it raises: Why did Heisenberg go to Copenhagen? What were his motives? And you can never really know the answer.” There is an uncertainty with human motivation and an uncertainty with the behaviour of a particle, and though the reasons are completely different, Frayn indicates that both have a theoretical barrier beyond which the human mind cannot reach, although he does encourage debate on this issue.
As for the scientists that he admires, he says: “All of them! I think Ernest Rutherford was a very interesting figure, and Niels Bohr because he was just a wonderful person. Science writer Richard Dawkins writes so well and tries to reach out to the lay readers, he truly appreciates the beauty of science.” He is also sympathetic to Werner Heisenberg, who he feels was put in a difficult position.
Touching the universe
Covering a wide range of disciplines, including linguistics, literature, neuroscience, philosophy and quantum physics, Frayn’s latest book, The Human Touch, asks whether the world has meaning or order other than what we give to it. The book also explores the similarities between science and fiction, both of which deal with narrative. He suggests that even the most abstract science is trying to tell a story and keep the interest of its audience. The Human Touch questions our place in the universe, a recurring theme in Frayn’s works. “The plays I’ve written are about how we organize the world around us, how we try to make sense out of it and try to make sense out of each other. I have been writing The Human Touch on and off for the past 30 years and I’ve tried to confront some of these questions.”
During his visit to CERN, Frayn gave a colloquium on his new book to CERN staff. Although a bit intimidated by the level of knowledge at CERN, he seemed to enjoy the opportunity to listen and debate with physicists about some of these philosophical questions.
After touring the ATLAS and CMS experiments, Frayn was stunned by CERN’s huge efforts to understand the universe more precisely. “I had no real concept of the sheer scale, the amount of effort and political skill needed, and the extraordinary technological complexity of it all, quite apart from the theories it will be testing. It really is quite amazing.” The desire to know our world better is something this philosophical writer appreciates, especially considering that these experiments are being constructed and performed in the sole interest of science, and not for monetary gain or military power.
The cold testing of 1706 superconducting magnets for the LHC came to a successful completion early this year. This important milestone for the project marked the end of an operation that had begun in 2001, meeting considerable challenges along the way. By the end of 2003 only 95 dipole magnets had been tested, but the effort and innovative ideas that came from the Operation Team eventually enabled the target to be met. The majority of the personnel for the tests came from India, for a year at a time, as part of the CERN–India Collaboration for the LHC. Their success provides a unique example of international collaboration in the accelerator domain on an unprecedented scale.
The LHC consists of two interleaved synchrotron rings, 26.7 km in circumference. The main elements of the rings are the 2-in-1 superconducting dipole and quadrupole magnets operating in superfluid helium at 1.9 K. The total number of cryogenic magnet assemblies – or cryomagnets – includes 1232 dipoles with correctors, 360 short straight sections (SSS) for the arcs with quadrupoles and integrated high-order poles, and 114 special SSS for the insertion regions (IR-SSS) with magnets for matching and dispersion suppression. All of these magnets had to be tested at low temperatures before they could be installed in the tunnel, and for this purpose a superconducting magnet test facility, equipped with 12 test benches and the necessary cryogenic infrastructure, was set up in building SM18 just across the border in France from CERN’s Meyrin site.
The magnet testing had several aspects. For each magnet the tests had to verify the integrity of the cryogenics, mechanics and electrical insulation; qualify the performance of the protection systems; train the magnet up to the nominal field or higher; characterize the field; ensure that the magnet met the design criteria; and finally accept the magnet according to its performance in quenches and in training. The workforce to do this consisted of three main teams – the Operation Team, who performed the tests and measurements, supported by the Cryogenics Team and the Magnet Connect/Disconnect Team (known as ICS). In addition, a team known as Equipment Support looked after improvements and on-call trouble-shooting of hardware and software, and a sub-team of ICS handled the movement of the magnets with a remote-controlled vehicle (the ROCLA).
The complexity of the magnets implied a high level of complexity in the test facility, which required its own specific infrastructure, from cryogenic feed boxes and high-current circuits to data acquisition for measurement and control. Assembling the full facility involved several groups from CERN’s AB, AT and TS departments working over many years, with the final test bench commissioned in June 2004.
The first testing of series production magnets began in 2001, with two test benches and a limited cryogenic infrastructure. Undertaken by specialists in the magnets and the related equipment, the operation at this stage was more laboratory R&D than a well-defined, structured approach to the test procedure. The first sets of dipoles, comprising around 30 samples from each of the three suppliers, had to be thoroughly tested, with full magnetic and other measurements. This extensive testing, together with the limited operational experience and support tools, meant that some 20–30 days were required to test a magnet during 2001–2002, and only 21 magnets were tested during this period.
To increase throughput, the test facility began to operate round the clock early in 2003. With a final set-up of 12 test benches and a minimum of 4 people a shift, this required a minimum team of 24. The initial plan had been to outsource, but by early 2002 it was clear that this was no longer an option, and also that only a few non-expert CERN staff were available to run the test facility. It was at this time that the Department of Atomic Energy (DAE), India, offered technical human resources for SM18. A collaboration agreement between India and CERN had been in place since the 1990s, including a 10 man-year arrangement for tests and measurements during the magnet prototyping phase. This eventually allowed more than 90 qualified personnel from four different Indian establishments to participate in the magnet tests on a one-year rotational basis (a condition requested by India) starting around 2002.
It was also clear that proper strategies and support tools were needed to meet the target of testing all magnets by the end of 2006. In addition to several reviews aimed at streamlining the process, an extensive study took place to define a selective, reduced set of magnetic measurements needed to qualify and accept a magnet.
A testing renaissance
The overall magnet-test operation involved significant manual effort, and while this remained the case throughout the tests, from mid-2003 the operational process underwent a renaissance, from basic manual data-logging to an efficient, sophisticated and highly automated test-management system. The Operation Team, and in particular the Indian personnel, provided essential feedback for framing new strategies and made significant proposals for improving the throughput. In addition, CERN’s web-based network backbone and computer facilities were widely used to develop supporting tools.
A “to-do list” of the minimum set of tests for a magnet was created, with methods developed to reduce human error in the process as much as possible. The list of tests was reflected in templates for a magnet test report. A new website included all of the important documentation, from manuals and templates to troubleshooting procedures and the shift plan. This proved immensely helpful in training new staff as well as in managing the daily activities.
A web-based system called the SM18 Test Management System (SMTMS), based on the to-do list, was developed to generate test-sequences and reports automatically and to store all of the relevant data. This enabled fast, reliable and error-free generation of crucial data. It also made it possible to keep track of the times taken for various phases of the tests, and everyone concerned could keep track of the tests from different locations both within CERN and further afield. An electronic log-book approach, using the network backbone at CERN, ensured easy access and helped to categorize and record faults that occurred during the tests.
Another web-based tool, the e-traveller, ensured a smooth interaction between the teams during setting up and at the end of the tests for a particular magnet. This tool informed the relevant teams about the need for their services on the magnet, using mobile-phone alerts in the appropriate language. This helped the Indian personnel to overcome difficulties in verbal communication with the exclusively French-speaking teams, while maintaining the work rhythm, as well as automatically recording the phases in the tests.
The SM18 community celebrated the Hindu festival of Diwali with a party in October 2006, and lit candles to represent the remaining magnets still to be tested. Diwali takes place on the date of the new moon, between the months of Asvina and Kartika on the Hindu calendar (usually in October or November).
To fulfil their roles at CERN, the technical engineers, all specialists in their own fields, had to take a leave of absence from their regular jobs. For example, Praveen Deshpande who was at CERN in 2006 designs instruments for accelerators and lasers, while Sampathkumar Raghunathan and Charudatta Kulkarni work on the R&D of nuclear instrumentation. For Deshpande, the best part about working at CERN was the interaction with different people. “I’m exposed to many people and agencies here, which I don’t get at home working in a small design section,” he explained in an interview for the CERN Bulletin. The logistics of implementing large-scale projects at CERN proved an eye opener. “The template approach with documentation and foresight is important in a big job like this,” said Raghunathan. Kulkarni agreed, and further identified excellent leadership skills as an important lesson that he learned.
The technical engineers could also use their free time to further their own knowledge. “Everyone was given the name of someone at CERN who works in a similar field to contact, if we wished. This is for our intellectual development. Because we work in R&D, it is important that we are up to date with new developments, so we are not left behind when we return a year later,” explained Kulkarni.
On their days off, the SM18 community organized group visits to tourist destinations, celebrated Indian festivals, such as Diwali, and even formed teams for cricket matches. Many of the technical engineers brought their families over, and some of their children attended local schools. For example, Raghunathan’s wife, two children and his mother moved to Switzerland for nine months. His seven-year-old daughter attended a French school for a full academic year. “They all enjoyed living here. My daughter also learned a lot of French. It’s an added asset.”
At the end of their year at CERN, when the technical engineers returned home, enriched by their experiences, they hoped to incorporate what they had learnt at CERN into their work, in particular the methods of coordinating and managing large-scale projects. However, the experience gained extends far beyond a professional level. The magnet test has facilitated international friendships and even rekindled lost ones from home. Deshpande met friends he had known from 13 years earlier, when he was undergoing training, but with whom he had since lost touch, as well as colleagues from the same establishment in India whom he had never met owing to the size of the organization.
To attain a high throughput it was also necessary to reduce the time and cryogenic resources taken in testing the magnets, including the “training” required to reach the operational magnetic field. In this, the current is increased until the magnet quenches (reverts from its superconducting state to a normally conducting state) and then the process is repeated. In the early stages, each dipole was trained to reach a field about 8% higher than required for LHC operation – a major time-consuming activity. During 2003, the Operation Team observed that the majority of magnets cross their nominal field (8.33 T or 11,850 A) on the second attempt, and that not much additional information on the quality of a magnet came from a third quench or more. This led to a “two-quench rule” being agreed by the magnet experts, in which a magnet was accepted after two quenches providing it crossed the nominal field by a small margin. Later a “three-quench rule” allowed a magnet to be accepted if it had failed the two-quench rule, but crossed a field of 8.6 T (12,250 A) in the third quench. This strategy drastically reduced the overall time for the cold tests.
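The acceptance logic can be summarized in a few lines of Python (a schematic sketch of the rules as described above, using the thresholds quoted in the text; it is not the actual SM18 software, and the small margin above nominal required by the two-quench rule is omitted for simplicity):

# Schematic of the quench-acceptance rules described above (illustrative only).
# quench_fields lists the field, in tesla, reached at each successive training quench.
NOMINAL_FIELD_T = 8.33        # nominal LHC dipole field (11,850 A)
THIRD_QUENCH_FIELD_T = 8.60   # threshold for the relaxed three-quench rule (12,250 A)

def accept_dipole(quench_fields):
    """Return the acceptance decision for a dipole given its training quenches."""
    # Two-quench rule: accept if the nominal field is crossed within the first two quenches.
    if any(b >= NOMINAL_FIELD_T for b in quench_fields[:2]):
        return "accepted (two-quench rule)"
    # Three-quench rule: accept if 8.6 T is reached on the third quench.
    if len(quench_fields) >= 3 and quench_fields[2] >= THIRD_QUENCH_FIELD_T:
        return "accepted (three-quench rule)"
    return "further training or expert review needed"

print(accept_dipole([8.21, 8.41]))        # crosses nominal on the second quench
print(accept_dipole([8.05, 8.25, 8.65]))  # accepted only via the three-quench rule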
Another important step towards reducing the overall test time was the introduction of a rapid on-bench thermal cycle for magnets that had a poor performance in the first run. Further time-saving came from the round-the-clock decision-making on the performance of a magnet by the operator, based on the results in the Magnet Appraisal and Performance Sheet, provided by the web-based SMTMS.
Figure 1 shows the cumulative number of magnet tests, including repeats, since 2003, both for dipoles and for SSS. While throughput was low until the end of 2003, it increased sharply after the introduction of the new tools and strategies. The flat regions at the end of each year are due to the annual shutdown of the cryogenic infrastructure, typically for seven weeks. More details are shown in table 1.
Testing the SSS magnets was a challenging task until the end of 2004, when all of the necessary information had finally been gathered and collated. The special IR-SSS magnets were even more of a challenge as they have a wide variety of types, structures and temperature regimes, and required the collection of a large amount of information for the tests. Each of the 114 magnets needed its own dedicated to-do list. As the table shows, the majority of the special SSS magnets were tested in 2006, together with significant numbers of standard SSS and dipole magnets, marking altogether a remarkable achievement for the year. While delays in delivery of the magnets to SM18 meant that not all of the magnets had been tested by the end of 2006, the target was achieved only a few weeks later by 23 February 2007.
Around 9% of the dipoles required the test procedure to be repeated, with repeat rates of a little over 12% for the SSS and IR-SSS magnets. In addition, some 3% of dipoles and 6% of the SSS had to be repaired or were rejected after the cold tests. These results alone justify the effort required for testing all of the magnets under the real cryogenic conditions. Moreover, the successful completion of this huge operation has been a unique example of international collaboration on an unprecedented scale in the accelerator domain.
A rocky planet only five times the mass of the Earth was discovered around the nearby low-mass star Gliese 581. This is the most Earth-like planet known to date and furthermore its orbit is at the “warm” edge of the habitable zone around that star, thus allowing speculation that this planet could harbour life.
The quest to detect new planets orbiting stars other than the Sun is boosted by the huge impact that this research field has on the general public (CERN Courier October 2004 p19). For one week, the Observatory of the University of Geneva was overwhelmed with phone calls from all over the world concerning the discovery and its implications for life in the universe. This mass interest was beyond the expectations of the discovery team led by Stéphane Udry, which simply published its findings in a specialized, rather than an interdisciplinary, scientific journal.
The faint star Gliese 581 is only 20 light-years away and is thus among the 100 closest stars. It is one of some 100 “M dwarf” stars monitored by the High Accuracy Radial-velocity Planet Searcher (HARPS) mounted on the 3.6 m telescope of the European Southern Observatory (ESO) at La Silla, Chile. This high-precision spectrometer previously found a Neptune-mass planet around Gliese 581, but with more observations and refined analysis the astronomers discovered two additional planets by detecting the wobbling that their gravitational pull exerts on the star. The two new planets have masses of five and eight times the mass of the Earth and orbit the red dwarf star every 13 and 84 days, respectively. Although these planets are much closer to Gliese 581 than the Earth is to the Sun, they are located approximately at the warm and cold edges of the habitable zone of such a low-luminosity star. Among the more than 200 extra-solar planets detected so far, some others have been found to be in the habitable zone of their parent star, but they were all bigger gaseous planets.
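To get a feel for the size of the “wobbling” involved, one can estimate the radial-velocity semi-amplitude induced by such a planet. This is my own back-of-envelope sketch: the stellar mass of about 0.31 solar masses, a circular orbit and an edge-on inclination are assumptions, not numbers from the article.

# Rough estimate of the stellar wobble that HARPS must detect (illustrative;
# stellar mass ~0.31 M_sun, circular orbit and sin(i)=1 are assumed).
import math

G       = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN   = 1.989e30      # kg
M_EARTH = 5.972e24      # kg

def rv_semi_amplitude(m_planet, m_star, period_s):
    """Radial-velocity semi-amplitude K (m/s) for a circular, edge-on orbit."""
    return ((2 * math.pi * G / period_s) ** (1 / 3)
            * m_planet / (m_star + m_planet) ** (2 / 3))

K = rv_semi_amplitude(5 * M_EARTH, 0.31 * M_SUN, 13 * 86400)
print(f"stellar wobble K ~ {K:.1f} m/s")   # ~3 m/s

A signal of a few metres per second is exactly the regime that HARPS was built to measure.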
Apart from very exotic low-mass planets orbiting pulsars, the new planet of only five times the mass of the Earth is the lightest extra-solar planet found to date. With a diameter estimated to exceed that of the Earth by only 50%, and an orbit lying in or just at the boundary of the habitable zone, it is the prime subject of interest. According to Udry, the surface temperature of the planet depends on the highly uncertain composition and thickness of its atmosphere. Nevertheless, an equilibrium temperature between –3 °C and +40 °C was estimated for a Venus-like and an Earth-like albedo, respectively. It is therefore likely that water could be liquid on the surface of this planet, although the strength of the greenhouse effect remains an important unknown.
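The temperature range quoted follows from the standard equilibrium-temperature relation; the stellar parameters are not given in the article, so only the form of the argument is shown here:

\[ T_\mathrm{eq} = T_\star\,(1 - A)^{1/4}\sqrt{\frac{R_\star}{2a}} , \]

where T⋆ and R⋆ are the star’s effective temperature and radius, a is the orbital distance and A the Bond albedo. A high, Venus-like albedo (A ≈ 0.75) gives the cooler end of the quoted range, while a lower, Earth-like albedo (A ≈ 0.3) gives the warmer end.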
The existence of liquid water and the possibility of life on this planet cannot be probed directly at the moment, and we can only consider it the best candidate known to date for future space-mission projects, such as NASA’s Terrestrial Planet Finder and ESA’s Darwin, to search for the signature of water and oxygen in its atmosphere. However, the relatively old and quiet star Gliese 581 has already become one of the most famous stars in the universe and is currently the focus of hopes of finding a nearby Earth twin.
The Japan Lattice QCD Collaboration has used numerical simulations to reproduce spontaneous chiral symmetry breaking (SCSB) in quantum chromodynamics (QCD). This idea underlies the widely accepted explanation for the masses of particles made from the lighter quarks, but it has not yet been proven theoretically starting from QCD. Now, using a new supercomputer and an appropriate formulation of lattice QCD, Shoji Hashimoto from KEK and colleagues have realized an exact chiral symmetry on the lattice and observed the effects of symmetry breaking.
Chiral symmetry distinguishes right-handed (right-hand spinning) quarks from left-handed ones, and is exact only if the quarks move at c and are therefore massless. In 1961 Yoichiro Nambu and Giovanni Jona-Lasinio proposed the idea of SCSB, inspired by the Bardeen–Cooper–Schrieffer mechanism of superconductivity in which spin-up and spin-down electrons pair up and condense into a lower energy level. In QCD a quark and an antiquark pair up, leading to a vacuum full of condensed quark–antiquark pairs. The result is that chiral symmetry is broken, so that the quarks – and the particles they form – acquire masses.
In their simulation the group employed the overlap fermion formulation for quarks on the lattice, proposed by Herbert Neuberger in 1998. While this is an ideal formulation theoretically, it is numerically difficult to implement, requiring more than 100 times the computer power of other fermion formulations. However, the group used the new IBM System BlueGene Solution supercomputer installed at KEK in March 2006, as well as steady improvements in numerical algorithms.
The group’s simulation included extremely light quarks and extracted the low-lying eigenvalues of the quark Dirac operator. The results reproduce theoretical predictions (see figure), indicating that chiral symmetry breaking gives rise to light pions that behave as expected.
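The article does not spell out which prediction the eigenvalues are compared with, but the standard bridge between the low-lying Dirac spectrum and chiral symmetry breaking is the Banks–Casher relation, quoted here as background rather than as a statement of what the figure shows:

\[ \Sigma \;=\; \lim_{\lambda \to 0}\,\lim_{V \to \infty}\, \frac{\pi\,\rho(\lambda)}{V}, \]

where ρ(λ) is the density of Dirac-operator eigenvalues, V is the lattice volume and Σ = −⟨q̄q⟩ is the chiral condensate: a non-zero density of eigenvalues near zero signals a broken chiral symmetry.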
A team working at CERN has detected the phenomenon of volume reflection using bent silicon crystals with a 400 GeV proton beam at the Super Proton Synchrotron. The efficiency achieved was greater than 95%, over a much wider angular acceptance than is possible with particle channelling in bent crystals. This effect could prove valuable in manipulating beams at the next generation of high-energy particle accelerators.
Using the ordered structure of a crystal lattice to guide high-energy particle beams is already finding applications through the effect of particle channelling. In channelling, a charged particle becomes confined in the potential well between planes of the crystal lattice, and if the crystal is bent, the effect can be used to change the particle's direction (figure 1). However, to be channelled in this way, the particle must have a small transverse energy, less than the depth of the confining potential well. In a bent crystal, a particle with higher transverse energy may also change direction: it may lose some transverse energy and then become captured, or it may have its transverse direction reversed in an elastic interaction with the potential barrier. This latter process, which changes the particle's direction, is known as volume reflection – and it is this effect that dominates, and therefore becomes more interesting, at higher energies.
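To see why the angular acceptance for channelling is so narrow, one can estimate the Lindhard critical angle, θ_c ≈ √(2U0/pv), below which a particle's transverse energy stays within the planar potential well. The sketch below assumes a planar potential-well depth of roughly 20 eV for the silicon (110) planes; this value is not given in the article, so the result is only indicative.

```python
import math

def critical_angle(pv_GeV, well_depth_eV):
    """Lindhard critical angle for planar channelling, theta_c ~ sqrt(2*U0/pv).
    pv_GeV: momentum times velocity in GeV (≈ beam energy for ultrarelativistic protons).
    well_depth_eV: depth of the planar potential well in eV (assumed value)."""
    return math.sqrt(2.0 * well_depth_eV * 1e-9 / pv_GeV)   # radians

theta_c = critical_angle(pv_GeV=400.0, well_depth_eV=20.0)  # 400 GeV protons, Si (110), assumed U0
print(f"theta_c ≈ {theta_c * 1e6:.0f} μrad")                # of order 10 μrad
```

A channelling acceptance of order 10 μrad explains why only particles entering within a very small angle of the crystal planes are captured, whereas volume reflection, as the measurements below show, works over the full 162 μrad bending angle of the crystal.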
In the research at CERN, a team from institutes in Italy, Russia and the US mounted a silicon-strip crystal on a high-precision goniometer. A specially designed holder kept the (110) crystal planes bent at an angle of 162 μrad along the crystal’s 3 mm length in the beam direction. Various detectors mapped the trajectory of the particles along the beam line and measured their fluxes.
Figure 2 shows the horizontal deflection of particles, as measured 64.8 m downstream, for a range of crystal orientations. The effect of channelling is clearly visible when the crystal orientation is about 0.06 mrad, giving a deflection of 165 μrad, which corresponds to the bending angle of the crystal. Here about 55% of the particles were deflected. At larger orientations, this effect disappears as the beam can no longer enter the silicon between the crystal planes. Instead a smaller beam deflection, in the opposite direction, is seen. Here the measured deflection angle of 13.9 ± 0.2 (stat.) ± 1.5 (syst.) μrad agrees well with the calculated prediction for volume reflection of 14.5 μrad. This deflection occurs over a wide range of crystal orientations, corresponding to the bending angle of the crystal; beyond this the crystal appears amorphous and the beam no longer “sees” the (110) layers.
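As a quick check on the quoted agreement, the statistical and systematic uncertainties can be combined in quadrature and compared with the difference between measurement and prediction; this is only the usual back-of-the-envelope comparison, not part of the team's own analysis.

```python
import math

measured, stat, syst = 13.9, 0.2, 1.5   # μrad, values quoted in the text
predicted = 14.5                        # μrad, calculated volume-reflection angle

total_uncertainty = math.sqrt(stat**2 + syst**2)
pull = abs(predicted - measured) / total_uncertainty
print(f"total uncertainty ≈ {total_uncertainty:.1f} μrad, "
      f"difference ≈ {pull:.1f} standard deviations")
```

The difference is only about 0.4 standard deviations, consistent with the statement that the measurement agrees well with the prediction.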
A preliminary analysis indicates an efficiency greater than 95% for volume reflection, which occurs over a far greater range of angles than channelling. This, the team says, suggests new perspectives for the manipulation of high-energy beams, for example for collimation and extraction in high-energy hadron colliders such as the LHC. A short bent crystal could serve as a “smart” deflector to aid halo collimation in a high-intensity hadron collider, or as a device to separate low-angle scattering events in diffractive physics close to the beam line.
Particle detectors developed for high-energy and nuclear physics often find uses in many other fields. Now silicon detectors with thin entrance contacts have been launched into space aboard the five spacecraft of NASA's THEMIS (Time History of Events and Macroscale Interactions during Substorms) mission. Fabricated at the Lawrence Berkeley National Laboratory (LBNL), the detectors form the heart of the solid-state telescopes (SSTs), which will study electrons and ions with energies between 25 keV and 6 MeV.
THEMIS will study the Aurora Borealis. Typically, the aurora is seen as a steady greenish-white band of light. Occasionally the band will move south and become brighter. Then, the auroral band may break up into many bands, some of which will move back towards the north, dancing rapidly and turning red, purple and white. This display is caused by an auroral substorm. The THEMIS mission will study the origin of these substorms. The five separate satellites were launched into highly elliptical orbits using a single Delta 2 rocket. The craft are strategically positioned to determine the location and sequence of the events that lead to these colourful displays.
Two SSTs are on board each of the spacecraft. Their purpose is to measure the distribution of energies of the electrons and ions arriving at each spacecraft from different parts of the magnetosphere. LBNL’s Microsystems Laboratory fabricated the silicon-diode detectors. They are large-area detectors that have very thin entrance contacts, only a few tens of nanometres thick. This allows them to detect electrons and ions with energies much lower than those that can be detected with standard silicon detectors. The detectors themselves can detect 2 keV electrons and 5 keV protons. However, the low energy threshold of the SSTs is determined not by the detectors, but by the noise performance of the electronics, which is limited by the available power.
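The link between electronics noise and the energy threshold can be made concrete: in silicon each electron–hole pair costs about 3.6 eV, so an equivalent noise charge (ENC) translates directly into an energy resolution, and the threshold is typically set a few standard deviations above the noise floor. The ENC values below are purely illustrative; the article gives no noise figures for the SST electronics.

```python
W_SILICON_EV = 3.6          # average energy per electron-hole pair in silicon, eV

def energy_threshold_keV(enc_electrons_rms, n_sigma=5.0):
    """Approximate energy threshold set n_sigma above the electronic noise floor,
    for an equivalent noise charge given in electrons rms (illustrative model)."""
    noise_ev = enc_electrons_rms * W_SILICON_EV     # rms noise in energy units
    return n_sigma * noise_ev / 1000.0              # threshold in keV

# Illustrative ENC values: low-noise lab electronics vs a power-limited spacecraft front end
for enc in (150, 1400):
    print(f"ENC = {enc} e- rms -> threshold ≈ {energy_threshold_keV(enc):.0f} keV")
```

With these assumed numbers, low-noise laboratory electronics would reach a threshold of a few keV, consistent with the detectors' intrinsic capability, while a noisier, power-limited front end ends up near the 25 keV threshold quoted for the SSTs.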
Because the detector contacts are so thin, making enough large detectors posed a significant challenge: the project required 80 flight detectors. However, the Microsystems Laboratory provided advanced equipment and processes in an ultra-clean environment that enabled the fabrication of these detectors with high yield.
The SSTs have been commissioned and are now returning scientific data on the magnetosphere during the current “Coast Phase”. In December, when the satellites will be in their required orbits, the primary task of studying the auroral substorms will begin.