The ISR in the time of Jentschke

The beginning of Willibald Jentschke’s mandate at CERN coincided with the start-up of the world’s first hadron collider, the Intersecting Storage Rings (ISR). Moreover, Jentschke had already been involved with the ISR as chairman of the ISR Committee, so it is appropriate to look back at the machine during his mandate.


Let us first take a look at the working principle of the ISR. The idea was to build two slightly distorted rings so that they could intersect at eight different places. You would take the beams out of the Proton Synchrotron (PS), bring them to one of these rings and accumulate them in its vacuum chamber.

The tunnel in which these two magnet rings were built is about 15 m wide and has an average radius of 150 m, compared with 100 m for the PS, despite the fact that the energy of the two machines was more or less the same. The main challenge for the ISR was to accumulate high-enough currents and maintain small-enough beam dimensions to achieve a luminosity that would be interesting for physics. This could be realized only after the invention of various ways of accumulating beams. We built on the idea developed by the MURA group of accelerator physicists from the US Midwestern Universities Research Association to accumulate a high-current beam from hundreds of injected pulses by using an RF stacking process. In our initial design, this would lead to ISR beams 60 mm wide and about 10 mm high.
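The role of current and beam height can be made explicit. For two coasting beams crossing at an angle α in the horizontal plane (about 15° at the ISR), a standard expression for the luminosity is:

```latex
L \;=\; \frac{I_1 \, I_2}{e^2 \, c \; h_{\mathrm{eff}} \, \tan(\alpha/2)}
```

where I₁ and I₂ are the circulating currents and h_eff is the effective beam height. High currents and flat beams enter the luminosity directly, which is why the design effort concentrated on stacking amps of current while keeping the beam height small.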

High currents, and therefore high instantaneous luminosity, are important. An equally important design goal, however, was long beam lifetimes so that the average luminosity would also be high. That required a very good vacuum compared with what was normal in those days. The planned vacuum for the ISR was 10⁻⁹ to 10⁻¹⁰ Torr. As we will see, we went far beyond that. Good vacuum, of course, was also important for the experiments to have manageable backgrounds.

Authorization for the ISR project was given at the end of Viki Weisskopf’s mandate and construction took place during Bernard Gregory’s time as director-general, starting in 1966 and finishing early in 1971. When Jentschke arrived, both rings had been installed and one of the rings – ring one – had been tested at the end of October 1970. We had been able to accumulate several amps of protons and we had found very promising lifetimes. But we had also met the first beam instabilities that limited the beam currents to around 3 A.


In January 1971 simple detectors had been installed to observe the first collisions. We tested ring two so that both rings had been tested independently, and on 27 January 1971 we operated the two rings together and observed the first collisions. This was a great event for many of us. We had achieved proton-proton collisions in colliding beams for the first time ever.

Jentschke was present for the test. He brought champagne with him, and I recall that someone was not too happy that this was served rather late in the night. Some of us felt that it was more important to watch the lifetimes of the two beams before popping the corks. When we were satisfied that the lifetime was good we opened the bottles. The champagne may have been late, but it was well deserved! Physics started about one month later, and Jentschke hosted the official inauguration in October 1971.

The ISR in action

So much for the early history of the ISR. Let us turn to the characteristics of the machine and how its performance developed during the first few years of operation. Luminosity climbed rapidly, reaching the design goal of 4 × 10³⁰ cm⁻² s⁻¹ in early 1973. During Jentschke’s mandate it increased by a factor of 10, and it ultimately went on to reach 1.4 × 10³² cm⁻² s⁻¹, some 35 times the design goal, before the ISR was closed in 1984.

I’d like to explain a little how the luminosity developed. I’ve already mentioned that we encountered instability in the beams very early. That was both expected and unexpected. We knew we were aiming for very intense beams and so it would not be strange to have instabilities. But the first instability that we met was one that we thought we had taken care of. The current came up to about 3 A, when suddenly we lost part of the beam. Then it started building up again until we again lost part of it and so on. This turned out to come from instability arising from the interaction between the beam and the vacuum walls – the resistive wall instability as it was called, although it was not due only to resistance.

Coping with instability

This effect turned out to require extremely careful beam handling to remove. We found, for example, that it didn’t take current away from the whole beam uniformly; instead, it more or less punched holes in the beam. In other words, the instability was much more local than the beam itself, and this meant that we had to have the right field shape not only globally for the beam, but also locally. So we had to fine-tune the field shape in the apertures with the help of the pole-face windings. This allowed us to improve the situation and build up higher currents.


We applied the cure step by step, but as we went up in current we suddenly found that we were again losing part of the beam, but this time the time-scale for the loss was much longer. Not only that, after careful investigation, we saw a pressure bump building up. In other words, we had moved into a completely different kind of instability – not a beam instability but an instability due to pressure in the vacuum chamber caused by beam protons hitting the residual gas. Resulting secondary particles striking the vacuum-chamber wall released more particles, and this had a runaway effect that led to a pressure bump.

This effect required completely different cures – in particular, a tremendously increased pumping capacity and exceedingly clean walls. Again, we took it step by step. Where we found pressure build-up, we improved pumping in that area during the next available scheduled shutdown. We also improved the cleaning of the vacuum walls by using higher temperatures for the bake-out, and also by employing techniques such as gas discharge cleaning. Progress came in a series of jumps. We got a few more amps after we cured the first problem, then came the second and we cured that. Then we went back to the first, but at a higher current, and so on. In the end we had rather sophisticated beam tuning and vacuum treatment, and by solving these two problems we’d found the way to high luminosity.

We also manipulated the geometry of the beam cross-section. The height of the beam had a direct impact on luminosity, so we tried to reduce the height as much as possible, reaching about half what we had originally foreseen. We also put in a squeezing system – a low-beta section as it was called – to further reduce the beam height considerably in the interaction regions. This required a lot of inter-laboratory collaboration. We did not have the quadrupoles that were needed, so we got them from DESY, from the Rutherford Laboratory and from the PS Division at CERN.


The improvements in the vacuum had the added benefit that they led to great improvements in beam lifetimes, which were typically 50-60 hours. We had very small beam losses, which meant that we had very low background in the interaction regions except for when halo built up. Backgrounds were so low that at one point when the experiments were asking for further improvements we had to point out that what they were seeing was coming from proton-proton interactions and not from collisions with residual gas in the rings!

The invention of stochastic cooling

These were the main features of the ISR during Jentschke’s time, but we also had some special things happening, and there’s one that I can’t resist mentioning: stochastic cooling. To me, the stochastic cooling story happened in the following way. Simon van der Meer had a fit of pessimism about the planned performance of the ISR. He was afraid that the machine wouldn’t work as promised, and he put all his mental energy into finding a way of saving it if this came to pass. Happily, he turned out to be wrong about the ISR, but he nevertheless invented stochastic cooling.

When he had worked out the theory, he concluded that it would not help the ISR and he put the idea to one side. Fortunately, he told his colleagues about the idea first. Later on, as the ISR got going, people realised that although stochastic cooling wouldn’t help the ISR much, it was a wonderful invention and we’d better take a look at it. So we managed to build stochastic cooling into the ISR. When we switched it on we saw a reduction in beam height, so we had a clear demonstration that stochastic cooling worked.


An ideal application for stochastic cooling would be in a machine where beam currents are not as high as in the ISR, and such an application was not long in coming. Stochastic cooling came into its own, as you all know, in the proton-antiproton project that ultimately led to the Nobel prize for Van der Meer and Carlo Rubbia. It was mainly applied in the antiproton accumulator, where stacking and cooling was the mechanism whereby the antiproton currents were accumulated.
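Why the same technique mattered so much more for a sparse antiproton beam than for the ISR can be seen from a rough scaling argument: the achievable cooling rate is bounded by roughly W/N, the system bandwidth divided by the number of stored particles. The sketch below uses illustrative numbers only – the bandwidth and beam parameters are assumptions for the purpose of the estimate, not the actual ISR or Antiproton Accumulator values:

```python
# Back-of-envelope scaling of stochastic cooling times.
# All parameter values are illustrative assumptions.

E_CHARGE = 1.602e-19   # proton charge, C
T_REV = 3.2e-6         # ISR revolution period, s (~943 m circumference at ~c)
BANDWIDTH = 1e8        # assumed cooling-system bandwidth, Hz (100 MHz)

def n_particles(current_amps, t_rev=T_REV):
    """Number of stored particles in a coasting beam of given current."""
    return current_amps * t_rev / E_CHARGE

def cooling_time_s(n, bandwidth=BANDWIDTH):
    """Optimistic cooling-time scale: tau ~ N / W."""
    return n / bandwidth

# A 10 A ISR proton beam: N ~ 2e14, so tau is hundreds of hours --
# cooling barely helps an intense stacked beam.
n_isr = n_particles(10.0)

# A sparse antiproton beam (say 1e7 particles): tau ~ a tenth of a second.
n_pbar = 1e7

print(f"ISR:  N = {n_isr:.1e}, tau ~ {cooling_time_s(n_isr) / 3600:.0f} h")
print(f"pbar: N = {n_pbar:.1e}, tau ~ {cooling_time_s(n_pbar):.1f} s")
```

The six-orders-of-magnitude gap in particle number is what turned a marginal tool at the ISR into the enabling technology of the antiproton accumulator.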

Comments on the exploitation

Not everything always went well at the ISR. For example, on more than one occasion part of the beam went astray and punched holes all the way along a bellows structure, which made a mess not only of the bellows but also of the vacuum. On another occasion a spectrometer arm went out of control and broke the beam pipe, leading to a similar effect.

But these were isolated incidents, and didn’t harm the excellent physics programme at the ISR. One result that caused a stir was the observed rise of the proton-proton total cross-section with energy. I remember that shortly after this had been announced, I was sitting at the dinner table with a theorist who pointed at me and said “I will eat my hat if you machine people don’t find that you are making a mistake with your measurement of the effective height”. We didn’t find a mistake in our measurement, but since I did not meet the lady again I don’t know what became of her hat.


One of the ISR’s important contributions to particle physics was that it provided a place where experimentalists learned how to do physics with colliding beam machines, which are so different from fixed-target machines. This was extremely useful for the proton-antiproton programme some years later. During a very productive life, the ISR reached an energy of 63 GeV in the centre of mass, or in other words the equivalent of 2 TeV for a fixed-target machine. The average vacuum reached an impressive 3 × 10⁻¹² Torr, and for one fill with antiprotons, using stochastic cooling, a beam lifetime of 345 hours was achieved.
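The quoted fixed-target equivalent follows from relativistic kinematics. For a beam of energy E striking a stationary proton of mass m_p, the squared centre-of-mass energy is s ≈ 2m_pE when E ≫ m_p, so matching the ISR’s √s = 63 GeV would require:

```latex
E \;\approx\; \frac{s}{2 m_p}
  \;=\; \frac{(63\ \mathrm{GeV})^2}{2 \times 0.938\ \mathrm{GeV}}
  \;\approx\; 2.1\ \mathrm{TeV}.
```

This quadratic-versus-linear scaling is the fundamental argument for colliders over fixed-target machines at high energy.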

Concluding remarks

To conclude, it must have been very satisfying for Jentschke to be in charge of CERN during these pioneering years of hadron colliders. It must also have been satisfying later for him to follow the development not only of the ISR but the whole approach to particle physics with its shift from fixed target to almost entirely colliders. He was fortunate to be present at the beginning and then to follow it for a long time. His life as a scientist was rich and varied, and the ISR was an important part of it.

A heartfelt tribute

As we stand in the CERN Council chamber with your friends, your presence is so alive and strong. I would like to talk to you, Willi, not about you. I have many personal memories about how you helped in many ways. I choose to speak about your physics research interests.


The secret of your success was your personality – a unique blend of knowledge, competence, vision, ideas, Viennese charm, courage and the talent to recognize and attract excellent people.

Among these were Hans Frauenfelder, a friend from the Urbana time who joined you in Hamburg creating a group looking for parity violation, the theoretician Harry Lehmann, your friend Peter Stählin, first research director at DESY, and J S Allen, who had developed an ion detector enabling the study of electron-neutrino correlation. You became interested in establishing the famous V-A interaction.


Among your early collaborators I would like to mention Paul Söding, who was your first student in Hamburg, and Samuel C C Ting, who made DESY famous through his experiments on vector particles. At CERN, you very much enjoyed the challenge of commissioning the ISR, a unique research tool, in collaboration with Kjell Johnsen and with Bernard Gregory, who preceded you as director-general and took over from you the chair of the ISR Committee. You were also proud of the discovery of neutral currents at CERN during your term of office. You were deeply convinced that the future plans of CERN must be based on international collaboration – a vision that has led CERN into the 21st century.

Remembering Willi

It was early 1978 and the group was very busy. In those days, it seems we were always very busy, but the winter of 1978 was especially so. A new polarized electron source had been installed on the linac, and we were commissioning it, testing a new spectrometer in the end station, and also learning how to use new beam-monitoring and beam-steering apparatus. We were preparing to look for parity violation in deeply inelastic electron scattering, an effect that was predicted by the Weinberg-Salam Model.


Willi had just finished his term as CERN director-general and came to SLAC for a sabbatical visit. We welcomed our distinguished visitor and invited him to join in the experimental activity, something that suited us and seemed to suit him as well. Willi’s first action was to purchase his first pair of blue jeans. After all, this was a necessary part of one’s wardrobe when preparing to work shifts. Willi never pretended he would contribute anything technical to the experiment, and we didn’t really expect that or ask. We were happy to have someone around with his experience and perspective on the field. We did tease him, however, as being our oldest graduate student.

Willi became particularly interested in one aspect of the experiment. The spin-polarized electrons had to have their spins rapidly flipped in order to measure the small parity-violating asymmetries that arise from weak-electromagnetic interference. This was done by rapidly reversing the circular polarization of the laser beam that drove the photoemission source and polarized the electrons. At the heart of the experiment was a Pockels cell, a commonly used optical component that, when biased by a voltage, imposes a quarter-wave retardation on the laser beam. Willi was fascinated by the Pockels cell, invented by Friedrich Pockels in 1893. He descended on our library staff for help in locating the original turn-of-the-century scientific papers (in German, of course) so he could learn about these devices. We assume he found the papers. He was always delighted to lecture anyone who would listen about the physics and history of Pockels cells.
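The helicity flip can be sketched with Jones calculus. This is a generic textbook illustration of quarter-wave retardation, not a model of the actual SLAC optics: linearly polarized light at 45° to the cell’s axes emerges with one circular helicity, and reversing the sign of the bias voltage reverses the sign of the retardation and hence the helicity.

```python
import numpy as np

# Jones vector for linearly polarized light at 45 degrees to the cell axes
lin45 = np.array([1.0, 1.0]) / np.sqrt(2)

def pockels_cell(sign):
    """Quarter-wave retarder with fast axis horizontal.
    sign = +1 or -1 models reversing the bias voltage, which flips
    the sign of the lambda/4 retardation (phase of +-pi/2 on one axis)."""
    return np.array([[1.0, 0.0], [0.0, sign * 1j]])

out_plus = pockels_cell(+1) @ lin45    # one circular helicity
out_minus = pockels_cell(-1) @ lin45   # the opposite helicity

# Circular-polarization basis states
lcp = np.array([1.0, +1j]) / np.sqrt(2)
rcp = np.array([1.0, -1j]) / np.sqrt(2)

# out_plus projects entirely onto one helicity, out_minus onto the other
print(abs(np.vdot(lcp, out_plus)))    # ~1
print(abs(np.vdot(lcp, out_minus)))   # ~0
```

Flipping `sign` is the optical analogue of reversing the cell voltage: the two output states are orthogonal circular polarizations, which is exactly what the rapid spin reversal of the photoemitted electrons required.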

Willi participated in the shifts through the spring 1978 runs, culminating in the observation of a parity violating signal in the electron-scattering process. He didn’t take evening or owl shifts, but was usually around during the days, particularly in the afternoons around 4 p.m. at shift change. That was the busiest time. The collaboration was small enough to meet in the counting house, and at shift change at four we would meet informally to discuss progress and plans for the next day or so. Willi enjoyed those somewhat disorganized meetings and discussions.

We released our first results in the early summer of 1978, and Willi was present for that event. Happily we were allowed to include his name on the publication. We believe he returned home later, satisfied with his experimental sojourn and his visit to SLAC. A gentleman, a great physicist, and a great friend, we will miss him.

With gratitude to Willi

Willibald Jentschke had a significant influence on the course of my early scientific career, for which I will always be grateful.


In March 1963, I had just obtained my PhD from the University of Michigan and came to CERN where I had the good fortune to start working with Giuseppe Cocconi, Klaus Winter, Gustav Weber and Marcel Vivargent. After returning to the US, I worked with Leon Lederman at Columbia University and also wrote a paper in quantum electrodynamics with Stanley J Brodsky on higher-order Bethe-Heitler pairs.

At that time, a very important experimental result was announced at the Cambridge Electron Accelerator (CEA), which showed a large violation of first-order quantum electrodynamics (QED). In this experiment, the yield of wide-angle electron-positron pairs produced in the reaction γ + carbon → e⁺ + e⁻ + carbon was measured in order to test the validity of QED at small distances. This experiment generated a great deal of interest and was at the centre of discussions in the community of high-energy physicists. My previous work with Stan Brodsky spurred my interest in this result and compelled me to redo this experiment. Klaus Winter introduced me to Prof. Jentschke, director-general of the Deutsches Elektronen-Synchrotron (DESY) in Hamburg, and this proved to be a major event in my career as an experimental physicist.

As a young physicist, I had never proposed nor led an experiment. My previous work at CERN and my PhD thesis (under the direction of Lawrence W Jones and Martin Perl) were on high-rate πp and pp interactions. I had no experience of the difficulties of measuring rare e⁺e⁻ pairs with intense (≈10¹¹ equivalent quanta per second) photon beams on nuclear targets, which always produce large numbers of pion pairs. However, together with Arthur J S Smith, Ulrich J Becker and the late Peter Joos, we designed a detector that was quite different from the CEA design. After a long conversation in which he asked me many questions on backgrounds, acceptance, electronics, trigger and experimental redundancies, Jentschke decided to support our carrying out this experiment at DESY. Thus began my career in the study of lepton pairs, including tests of electrodynamics, photoproduction and leptonic decays of vector mesons at DESY, which ultimately led to the discovery of the “J” particle at Brookhaven.

A friend and mentor

Jentschke showed an abiding interest in our work and often visited us on weekends or late at night to discuss our results. He also introduced me to many leading German physicists – Wolfgang Paul, Herwig Schopper, Max Born and others. He often invited my family and me to his home when we were not taking data. From discussions with him, I learned of the tremendous efforts he had made in founding DESY and his desire to make it a world-class laboratory. His wisdom and inspiration were of great help to me, such as when he advised me to accept an offer from MIT where I have worked ever since. At that time, I had received many attractive offers. MIT’s was the only one that was not tenured, but Willi’s advice turned out to be correct in the long run.

I remember Willi Jentschke as a person of insight and dedication to physics and I will always be grateful for his support and encouragement.

ACFA unveils plans for linear collider

The Asian Committee for Future Accelerators (ACFA), together with the Japan Association of High Energy Physics (JAHEP) and the High Energy Accelerator Research Organization (KEK), has published a “roadmap report” for their linear collider project. The report was made public at the ACFA Linear Collider Symposium, held on 12 February at the international congress centre of Tsukuba in Japan. Nearly 400 people attended, not only from laboratories and universities around the world, but also from industry.


The linear collider project is an important one for ACFA. In statements in 1997 and 2001 ACFA strongly recommended that a linear collider should be constructed in the Asia-Pacific region with Japan as host for the worldwide international project, which should be operated concurrently with the Large Hadron Collider at CERN. The objective of the symposium was to explore the scope of the ACFA Linear Collider Project, including the overall design, cost, site and organizational aspects. The programme also included presentations on the viewpoints from the US, Europe and various ACFA countries, as well as from industry.

The initial goal of the ACFA linear collider is to perform experiments at a centre-of-mass energy (Ecm) of up to 500 GeV, with a luminosity of more than 10³⁴ cm⁻² s⁻¹. The design, as presented by Kaoru Yokoya of KEK, is based on a pair of linear accelerators installed in a straight tunnel about 30 km long. The main linacs will use X-band (11.424 GHz) RF technology, which has been developed in close collaboration with the NLC group in the US. This allows the electrons and positrons to be accelerated at gradients of 50 MeV/m or more.
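A quick consistency check of these numbers (illustrative arithmetic only, not the official design breakdown): accelerating each beam to 250 GeV at 50 MeV/m requires about 5 km of active structure per linac, which fits comfortably inside a 30 km tunnel that must also house the injection systems, final focus, and room for the energy upgrade.

```python
# Rough consistency check of the quoted linac parameters (illustrative only)
ecm_gev = 500.0             # initial centre-of-mass energy
gradient_mev_per_m = 50.0   # X-band accelerating gradient

per_beam_gev = ecm_gev / 2.0                               # 250 GeV per linac
active_length_m = per_beam_gev * 1e3 / gradient_mev_per_m  # MeV / (MeV/m)

print(f"Active structure per linac: {active_length_m / 1e3:.1f} km")
```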

An important feature of the project is its energy-upgradability. The tunnel will be long enough for a machine eventually to reach Ecm = 1 TeV, but initially it would be only half-filled with RF accelerating structures. The energy could also reach beyond 1 TeV using the same technology – for instance, 1.25 TeV with one-third of the full luminosity.

Another option would use C-band (5.712 GHz) RF technology up to about 400 GeV, with X-band accelerating structures filling the remaining tunnel space in a future upgrade.

A working group formed in 2001 listed eight candidate sites in Japan with the appropriate geology; an additional four sites are of interest because they are already national bases of scientific R&D. Atsushi Enomoto presented these options together with a description of the facility, including the underground tunnel structure, civil engineering processes, and systems for electric power and cooling. To allow continuous maintenance of the accelerator complex, the design foresees a double-tunnel structure: one tunnel for the klystrons and other support equipment, and a parallel tunnel for the accelerating structures.

Hirotaka Sugawara, KEK’s director-general until the end of March, revealed that the total construction cost of the linear collider is estimated to be ¥495.1 billion (€3.86 billion) for the baseline case, in which the main linacs to support operation at Ecm = 500 GeV are built within tunnels that can eventually support operation at Ecm = 1 TeV. The cost also includes payment of all human resources other than accelerator scientists.

ACFA recommended in their statements that the linear collider should be built as an international facility open to all interested parties. Based on this recommendation, a committee formed in July 2001 has recently issued a report describing how the linear collider might be organized as a truly global project. As Sakue Yamada of KEK explained, the proposal is for a new international laboratory, the Global Linear Collider Centre (GLCC), to be created in Japan to facilitate the long-term commitment of participating partners, as well as open and transparent management. All partners would be on an equal footing although the contributions in financial and/or human resources may vary widely. In order to realize the GLCC quickly, the formation of a Pre-GLCC was proposed. A worldwide team would work together, irrespective of their preferences concerning the host, site or accelerator technology.

N Ozaki, the secretary-general of the Linear Collider Forum of Japan – a collaboration between the academic side and industrial companies formed in 2002 – discussed the linear collider from the industry point of view. Industry has a strong interest in the linear collider because its research may lead to business innovation.

Ozaki clearly described the importance of co-operation between researchers and industry from the beginning of the project. He emphasized that industry wants an early start for the linear collider project, and stressed that Japanese industry hopes to have industrial partners in other countries. The forum has plans to visit them in order to build up international collaboration.

In concluding remarks, Sachio Komamiya from the University of Tokyo and the chairman of JAHEP spelt out the steps needed to realize the project. He emphasized that the final engineering design should be carried out by a global team under the Pre-GLCC, and should be completed by 2007. The construction of the machine is expected to take five years, including the excavation of tunnels and the installation of the accelerator, so commissioning could start in 2012.

CERN and Saclay: 40 years of co-operation


The Saclay Research Centre near Paris is the largest of the French Atomic Energy Commission’s research centres. Conceived from the start as a multi-purpose centre to bring together fundamental research and technical innovation, it has since expanded to explore many different aspects of nuclear physics and its applications, including, of course, particle physics. In 1963, when physicists from Saclay began experiments on the PS at CERN, it marked the start of a long and fruitful collaboration between the two laboratories. By the time the LHC is commissioned, this collaboration will have encompassed more than 30 different experiments, to which Saclay has brought its expertise in instrumentation, data acquisition and data analysis.

The bubble-chamber era

When researchers from Saclay first came to CERN in the 1960s, the majority of experiments in particle physics involved bubble chambers. Saclay was one of the pioneers of their construction and use at a number of accelerators: first Saturne at Saclay, then Nimrod – the Rutherford Laboratory’s 8 GeV proton synchrotron – and then the PS at CERN and the 70 GeV accelerator at Serpukhov. The first series of experiments at the PS by physicists from Saclay’s SPCHE (see “The early days of Saclay” box) involved the use of an 81 cm hydrogen bubble chamber. This was developed by the technical services at Saturne for the laboratory of the Ecole Polytechnique under the leadership of Bernard Gregory, who later became director-general of CERN. The chamber was used to study K⁻p, K⁻n, π⁻p, π⁺n and p̄p scattering at energies of a few GeV, with the aim of understanding the collision mechanisms. The same theme was repeated, but at higher energies, in a second series of experiments at the PS on CERN’s 2 m hydrogen bubble chamber.


The 1970s saw a further increase in collision energy, and bubble chambers became bigger and bigger in line with the increasing multiplicity and energy of the final particles. In 1971, André Lagarrigue led the construction of the Gargamelle bubble chamber at the Saturne laboratory. Exposed to the neutrino beam from CERN’s PS, Gargamelle led to the discovery of neutral currents in 1973. This was followed by the Big European Bubble Chamber (BEBC) at the SPS, which involved energies approximately 10 times higher than at the PS. In addition to investigating strong interaction mechanisms and resonances, these experiments also explored neutrino-nucleon and antineutrino-nucleon scattering, the first stage in a better understanding of neutrino physics.


During its 30 years of bubble-chamber experiments, Saclay’s DPhPE (Département de physique des particules élémentaires) – which the SPCHE had become in 1966 – built up strong teams of around 150 people that specialized in the scanning and measurement of images, before going on to develop automatic scanning techniques that allowed more than 10 million images to be analysed. As a result the DPhPE, together with CERN, had the greatest measurement and data-handling capacity of any European laboratory, and so was able to play a major role in the collaborations in which it participated. In developing bubble chambers, the DPhPE’s technical services also acquired skills in the fields of magnetism, cryogenics and control systems, as well as experience in the design, construction and running of large projects. This expertise was to come in useful in the new generation of experiments at CERN in which the DPhPE took part.

The first electronic experiments

The spark chamber was invented at the beginning of the 1960s, and when used in conjunction with counters equipped with fast electronic read-out systems, it allowed events to be pre-selected – something that is impossible in bubble chambers. Spark chambers are also able to record signals much more quickly than bubble chambers, and their use became widespread in high-energy physics, marking the start of the “electronic” detector era. The SPCHE soon turned to this new technology. Its first electronic experiment at CERN, performed in 1964 by the team of Paul Falk-Vairant, involved the measurement of the high-energy charge exchange reaction π⁻p → π⁰n at the PS, in an extension of an earlier experiment at Saturne. The equipment designed at the SPCHE consisted of scintillation counters and optical spark chambers to detect electromagnetic showers. Working first with a liquid hydrogen target, the experiment seemed to confirm the simple Regge pole theory favoured at the time; but when carried out with a polarised target, the results showed that a more complex interpretation was needed.


The DPhPE continued its extensive study of strong interaction mechanisms at the PS and also began to study strange particles following the 1964 demonstration of CP violation in the neutral kaon system, to which René Turlay, later a key figure at Saclay, contributed. In 1971, the start-up of the Intersecting Storage Rings (ISR) at CERN allowed matter to be explored at much higher energies, and physicists from the DPhPE took part in two experiments there. One of these was R702, whose purpose was to measure the production of particles with large transverse momentum, and whose results corroborated the theory of the granular structure of protons. In these early electronic experiments, the DPhPE contributed detector elements and the associated electronics commonly used at the time: scintillation counters for the trigger, Cerenkov counters for identifying particles, spark chambers for measuring trajectories, and polarised targets. The end of the 1960s saw the appearance of wire chambers for tracking, which were faster and more precise than spark chambers, and which allowed larger, easier-to-operate detectors to be built.

By 1977 when CERN’s 200-400 GeV proton synchrotron, the SPS, was commissioned, the results from the previous 15 years had changed the perception of particle physics. In particular, the discoveries of neutral currents at Gargamelle in 1973 and of charmed particles in 1974 represented an initial experimental validation of the Glashow-Weinberg-Salam theory of the electroweak force and of quantum chromodynamics (QCD), the theory of strong interaction. The various experiments at the SPS set out to test these theories in more depth. The DPhPE played an active role, taking part in the deep inelastic scattering experiments with neutrinos (CDHS) and muons (BCDMS), as well as experiments in hadroproduction (WA11, NA3) and photoproduction (NA14). These brought a large haul of results to which the DPhPE’s physicists made significant contributions: measurement of nucleon structure functions, confirmation of the violations of scale invariance predicted by QCD, precise measurements of sin²θW and αs, and charm studies.


The DPhPE built proportional chambers or drift chambers of various sizes and geometries for all of these experiments at the SPS. Given the large number of wire chambers that were needed, the assembly lines the laboratory had at that time were a valuable asset. In particular, large hexagonal chambers with 4 m sides were designed for CDHS, and were subsequently used in numerous detector tests under beam conditions before being integrated in the recent CHORUS experiment. The DPhPE also built its first large-scale calorimeter for the NA3 experiment. Comprising lead plates and scintillating tiles, its 5 m × 2 m size necessitated the development of a new low-cost type of scintillator with a high attenuation length. Again the skills acquired through participation in these projects were put to good use in the next generation of experiments.


In the 1980s, CERN took the major step of converting its SPS into a 540 GeV proton-antiproton collider, which later ran at 630 GeV. Commissioned in 1981, the Sp̄pS and its two general-purpose experiments, UA1 and UA2, led to the discovery of the W± and Z bosons, bringing resounding proof of the Glashow-Weinberg-Salam model for electroweak interactions (“When CERN saw the end of the alphabet”). The DPhPE took part in both experiments, contributing not only through technical achievements but also in obtaining physics results, in particular regarding the W± and Z bosons, jets, and the search for the top quark. Building on its experience with NA3, the DPhPE became involved in calorimetry in both UA1 and UA2, with lead-sandwich electromagnetic calorimeters and scintillators for the UA1 and UA2 endcaps, followed by scintillating optical-fibre detectors for the fibre tracker of the second phase of UA2. The scintillator “gondolas” of the UA1 calorimeter, a key component in identifying and reconstructing the decays of the W± and Z bosons into electrons, were one of the DPhPE’s most significant achievements in terms of the specific developments and equipment needed, including the development of a new extruded polystyrene scintillator that allowed large thin sheets of uniform thickness to be made.

The LEP era

The DPhPE was involved in LEP right from the outset, and participated in the ALEPH, DELPHI and OPAL experiments. In ALEPH, the contributions involved the superconducting solenoid – which was 5 m in diameter, 7 m long, with a field of 1.5 T – the lead-sandwich electromagnetic calorimeter that incorporated proportional tubes, and the silicon-tungsten luminosity calorimeter. For DELPHI, the DPhPE was involved with the tracker – a time projection chamber – and its associated data acquisition and read-out electronics. For OPAL, the contributions included the scintillator hodoscope for time-of-flight measurement and general trigger electronics. The 12 years of data taking at LEP contributed in many essential ways to refining the Standard Model. Physicists from DAPNIA, which the DPhPE became in 1991, were involved in this progress, taking part in, for example, beauty studies, the accurate measurement of the W boson mass, and the search for the Higgs boson and supersymmetric particles.

At the end of the 1980s, as the experiments at LEP progressed, the fixed-target programme began to focus again on the subjects for which this kind of experiment is still the most suited. The DPhPE decided to be involved in four experiments, namely CP violation in the neutral-kaon system (CPLEAR, then NA48), nucleon spin structure (NMC, SMC, then COMPASS), neutrino oscillation (NOMAD) and quark-gluon plasma (NA34). While most of these experiments have been completed, NA48 and COMPASS are still taking data.

Into the 21st century

Chambers are still the dominant instrument in particle physics but technologies have evolved, resulting for example in the “Micromegas” (micromesh gaseous structure) chambers, which are able to absorb particle fluxes 1000 times more intense than conventional chambers, and which are also faster and more accurate. Developed by the DPhPE, together with the necessary state-of-the-art electronics, these chambers are used in COMPASS and have recently been adopted by the CAST experiment.


At the same time, the make-up of the Saclay teams has also evolved. The particle physicists no longer have a monopoly on experiments at CERN. They have been joined by teams of nuclear physicists from DAPNIA, studying the structure of the nucleus in the SMC and COMPASS experiments, for example, and in the neutron time-of-flight programme (nTOF). The future of the fixed-target programme at CERN also concerns DAPNIA, whose physicists and engineers are contributing to the proposal for a future Superconducting Proton Linac (SPL) accelerator complex at CERN, and to the design of experiments that would use the SPL’s intense neutrino beams for the study of CP violation in the lepton sector.

From LEP to the LHC

CERN’s latest machine, the Large Hadron Collider (LHC), will open up a new high-energy domain, and its experiments should clarify the precise nature of the electroweak symmetry breaking mechanism once and for all. DAPNIA is investing heavily in this future, with its particle physicists taking part in the ATLAS and CMS experiments and its nuclear physicists participating in ALICE. It is also involved in designing and monitoring the manufacture of the quadrupoles for the machine itself. Participation in ATLAS involves the design of the superconducting air-cored toroid magnet system, and the construction of the central electromagnetic liquid argon calorimeter. Involvement in CMS covers the on-line calibration system for the crystal electromagnetic calorimeter, which is based on the injection of laser light, as well as the general design and monitoring of certain components of the experiment’s superconducting solenoid magnet, which is 6 m in diameter, 12.5 m long, and has a field of 4 T. In ALICE, DAPNIA is contributing the design and production of the wire chambers for the muon spectrometer. Muons, electrons and photons are all among the signatures that these experiments hope to discover or measure, whether it be the Higgs boson at ATLAS and CMS, or the quark-gluon plasma at ALICE. What more promising subjects for the continuation of the 40-year-long co-operation between Saclay and CERN could one hope for?

Let the data free!

Making astronomical data from telescopes in space or on Earth freely available is common practice. A first step in this direction for particle physics was recently taken with QUAERO, a scheme developed at Fermilab to make high-energy data from the D0 experiment generally available (Abazov et al. 2001). This kind of “experimental transparency” allows any physicist in the world to test a new theoretical idea or evaluation algorithm. However, the practice does not exist for data taken from dark-matter experiments, although the most natural approach for this relatively new cross-disciplinary field of astroparticle physics should be that the data do not remain the private property of each experimental collaboration, but become public, as in the case of astronomical data.


We do not believe that the continuing secrecy in experimental astroparticle physics has been introduced intentionally. On the contrary, the reason most probably lies in the lack, as yet, of any direct signature for dark-matter particles, which are believed to strongly dominate the gravitational mass of the universe. This situation has existed for decades, but despite this the challenging experimental question of the nature of dark matter is now fascinating more and more physicists across different disciplines. To our knowledge, there is no other similar example in the past.

As long as dark-matter physicists believe they have a null result with their data, they will focus on improving detector performance to stay at the forefront of their field of research. Who, then, has the time and the courage to consider releasing data collected over several years, which have been downgraded at best to measurements of background? There is no lack of data coming out of the underground dark-matter experiments worldwide, but these data have already been all but disqualified because they do not fit the widely accepted picture of dark-matter interactions on Earth.

However, in the past even dark-matter data have been re-evaluated following a new (theoretical) approach from inside as well as outside the collaboration, and this is exactly why astroparticle physicists should release their data. Most, if not all, dark-matter experiments are not complex, and their data can easily be formatted for non-experts. Scientific problems know no frontiers, and certainly not those defined by a collaboration, even an international one. The dark-matter problem itself might also require some kind of synergism, or even a cross-correlation, between different experiments, whether or not they have been declared as dark-matter experiments.

As in particle physics, astroparticle physics theory is far ahead of experimental performance. However, it could be that the generally accepted theoretical picture does not point the experimentalists in the right direction. After all, there have been plenty of unanticipated discoveries in the past. For example, if the recently widely discussed theory of extra dimensions reflects reality, at least some of the approaches of the dark-matter searches must be revised because the particles they are aiming to detect have completely different properties from those assumed so far. Obviously, we must be sure that a signature in dark-matter data from previous experiments has not been overlooked, otherwise the broken dreams of dark-matter physicists will become their nightmares.

Making the data from astroparticle physics public will certainly promote scientific collaboration and will increase the number of “amateurs” working in this field. Scientific transparency can only be beneficial to the science we are supposed to serve, and we have therefore suggested to the astroparticle physics community that it release its data (Hoffmann et al. 2003). CERN, with its astroparticle physics programme, could once more be the pioneer of a new approach.

A Brazilian feast of cosmology and gravitation

The Brazilian School of Cosmology and Gravitation celebrated its 25th anniversary in 2002 by launching a website that contains all 93 lectures and seminars of the nine schools that have been organized since the first school in 1977. The site, set up by the Cosmology and Gravitation Group at the Brazilian Center of Scientific Research (CBPF), which organizes the schools, contains an impressive collection of talks by many of the most important scientists in the areas of cosmology, gravitation, astrophysics and field theory. It is an important resource for students and researchers, which also shows the evolution of these areas of physics during the past 25 years. The material, which is in PDF format, can be accessed via the website of the Cosmology and Gravitation Group at the CBPF.

The proceedings of the 10th school, which was held from 29 July – 9 August 2002, will be published this year by AIP.

Theory of Optical Processes in Semiconductors

by P K Basu, Oxford University Press. Paperback ISBN 0198526292, £39.95.


Now out in paperback, this book, aimed at graduate students in physics and engineering and other beginners in the field, provides a simple quantum mechanical theory of important optical processes in semiconductors.

Plasma Waves: Second Edition

by D G Swanson, Institute of Physics Publishing. Hardback ISBN 075030927X, £48 ($75).


This extended and revised edition encompasses waves in cold, warm and hot plasmas and relativistic plasmas. Written as a textbook for students, it also provides essential reference material for researchers.
