Electrons signal a world first for EMMA

On Tuesday 22 June at 10.37 p.m. an electron beam passed for the first time through four sectors of EMMA, the prototype built at the UK’s Daresbury Laboratory to test the concept of the nonscaling fixed-field alternating gradient accelerator (FFAG). As the beam travelled more than halfway round the accelerator’s circumference, it marked a world “first” and demonstrated the underlying soundness of the most technically demanding aspects of the design.

FFAGs have a ring of magnets to focus the beam strongly, as in an alternating-gradient synchrotron, but the fixed magnetic field means that the beam spirals outwards as it is accelerated, as in a cyclotron. The result is more compact than a cyclotron, however, and although the concept is some 50 years old such machines are now being considered worldwide for a variety of applications. EMMA, the Electron Model for Many Applications, is a 20 MeV accelerator, which will test for the first time the concept of the nonscaling FFAG, in which the betatron tunes (the frequencies of the transverse oscillations) are allowed to vary during the acceleration process. Nonscaling FFAGs are attractive because they tend to have much smaller transverse apertures than scaling FFAGs (where the beam optics remain constant during acceleration), which were first demonstrated at KEK in 2000.

EMMA has been built at the Daresbury Laboratory of the Science and Technology Facilities Council, under the auspices of the project for the COnstruction of a Nonscaling FFAG for Oncology, Research and Medicine (CONFORM). The electron beam injected into EMMA is generated by another novel accelerator system at Daresbury, the Accelerators and Lasers in Combined Experiments (ALICE). ALICE is the first accelerator in Europe to operate using energy recovery, where the energy used to create its high-energy beam is captured and reused after each circuit of the accelerator for further acceleration of fresh particles.

The next steps for EMMA, which are now underway, are to complete commissioning of the full ring, followed by the world’s first demonstration of the new acceleration technique.

Copernicium enters the periodic table

On 12 July, a ceremony at GSI celebrated the entry of copernicium into the periodic table of elements with a symbolic christening for the new element. Copernicium is 277 times heavier than hydrogen and the heaviest element officially recognized in the periodic table. It is named in honour of the astronomer Nicolaus Copernicus.

Element 112 was discovered at GSI in 1996 by an international team of scientists led by Sigurd Hofmann. The element has officially carried the name copernicium and the symbol Cn since 19 February 2010. Naming the element after Copernicus follows the long-standing tradition of choosing an accomplished scientist as eponym.

The team of scientists at GSI, from Germany, Finland, Russia and Slovakia, produced the new element for the first time in February 1996, by firing zinc ions onto a lead foil. The fusion of the nuclei of the two elements produced one atom of element 112. Although it is stable for only a fraction of a second, the team identified the new element through the radiation emitted during its decay. Further independent experiments at other research facilities confirmed the discovery of element 112 and in 2009 the International Union of Pure and Applied Chemistry officially recognized the existence of element 112 and acknowledged the GSI team’s discovery by inviting them to propose a name.

Multibunch injection provides a quick fill

Beam commissioning at the LHC continues to result in increasing luminosity for the experiments. The end of the first week of August saw data-taking pass another milestone, with integrated luminosity reaching 1 pb⁻¹ – that is, a thousandfold increase since the end of June.

A major factor has been the implementation of multibunch injection from the Super Proton Synchrotron (SPS). This involves sending several bunches to the LHC in one SPS cycle, thus reducing the time needed to fill the collider. In tests on 27 July, using this scheme for the first time, the operators sent four bunches at a time into the LHC to give a total of 25 bunches (including one pilot bunch) in each direction.

Then, from around midnight on 30 July, the machine ran with stable beams of 25 bunches, providing 16 colliding pairs per experiment and delivering a peak instantaneous luminosity of around 2.6 × 10³⁰ cm⁻² s⁻¹. This corresponds to a total stored beam energy of 1.2 MJ. Further long fills with 25 bunches per beam followed in the first week of August, with peak luminosities of up to 3 × 10³⁰ cm⁻² s⁻¹ providing up to 120 nb⁻¹ of integrated luminosity per fill. By Friday 6 August, with the milestone of 1 pb⁻¹ on the horizon, there was a small and well-deserved celebration in the CERN Control Room for the operations and commissioning teams whose hard work makes this progress possible.
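The quoted stored beam energy can be cross-checked with simple arithmetic. This sketch assumes a bunch intensity of about 9 × 10¹⁰ protons – a plausible value for these early fills, but not stated in the text:

```python
# Back-of-the-envelope check of the energy stored in one LHC beam.
# Assumption: ~9e10 protons per bunch (typical early-2010 intensity).
E_PROTON_EV = 3.5e12     # energy per proton at 3.5 TeV, in eV
EV_TO_J = 1.602e-19      # joules per electronvolt
N_BUNCHES = 25           # bunches per beam in this fill
N_PER_BUNCH = 9e10       # assumed protons per bunch

stored_energy_J = N_BUNCHES * N_PER_BUNCH * E_PROTON_EV * EV_TO_J
print(f"{stored_energy_J / 1e6:.2f} MJ")  # → 1.26 MJ
```

The result is consistent with the 1.2 MJ quoted above; the exact figure depends on the actual bunch intensities in the fill.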

TOTEM sees elastic scattering and LHCf completes first run

While the large experiments at the LHC have been collecting the first inverse picobarn of integrated luminosity at 7 TeV in the centre of mass, the two smaller experiments installed at the collider have also passed milestones.

TOTEM, which stands for TOTal cross-section, Elastic scattering and diffraction dissociation Measurement at the LHC, is designed to measure elastic scattering over a range of momentum transfer, as well as a variety of diffractive processes. To make these observations, the experiment needs to detect particles at angles of less than 1 mrad relative to the beam line, so TOTEM includes detectors in Roman Pots at a distance of 220 m on either side of the CMS collision point (Point 5). Because the Roman Pots move in close to the beam line, the collaboration has to work closely with the LHC collimation experts. The collaboration has now succeeded in moving the detectors close to the beam and locating its position with very high precision, first at the injection energy of 450 GeV, and then at the normal running energy of 3.5 TeV per beam. This led to TOTEM’s sighting not only of the first candidates for proton–proton elastic scattering at 7 TeV, but also of the first candidate for the diffractive process of double-Pomeron exchange – the first time that such an interaction has been seen at 7 TeV.

LHCf, which stands for “LHC forward”, has meanwhile already completed its first run at the LHC. This experiment consists of two independent detectors located 140 m on either side of the ATLAS collision point (Point 1). It studies the secondary particles created in the head-on collisions in the LHC, the goal being to compare the various models used to estimate the primary energy of ultrahigh-energy cosmic rays from the showers of particles that the primaries create in the atmosphere. LHCf was designed to work with high-energy particles but at low luminosity, and the experiment has now collected sufficient data to complete the first phase of its research programme at 450 GeV and 3.5 TeV per beam. The results of the data analysis at 450 GeV will be available by the end of the year, while the data at 3.5 TeV will be analysed in 2011. The UA7 experiment, carried out at the SPS proton–antiproton collider in the 1980s, has already provided information for collisions at beam energies of 450 GeV; LHCf will be the first to provide results at 3.5 TeV and beyond.

The detectors used for this first run, which consisted mainly of plastic scintillators, were removed during a short technical stop of the LHC at the end of July. The collaboration will now work on replacing them with more radiation-resistant crystal scintillators, to be ready by 2013 when the LHC will run at 7 TeV per beam. The collaboration also plans to change the position of the silicon detectors to improve the experiment’s performance in measuring the energy of the interacting particles.

Herschel delivers first results

The first scientific results obtained by the Herschel infrared observatory have been published in a special issue of the journal Astronomy & Astrophysics. Based on data collected during the first few months of operation, the papers cover a range of sources, from planets through newly forming stars in our Galaxy to distant galaxies.

The Herschel spacecraft was launched together with Planck by an Ariane 5 rocket on 14 May 2009. Both Herschel and Planck have been placed in orbits around the second Lagrange point of the Sun–Earth system, located 1.5 million km from Earth on Earth’s night side. While Planck spins, slowly scanning the whole sky to map the cosmic microwave background with higher sensitivity and resolution than the Wilkinson Microwave Anisotropy Probe, Herschel performs pointed observations in the far-infrared to submillimetre range. The light collected by the 3.5 m mirror – the largest ever sent into space – can be directed towards three instruments on the focal plane, which is cooled to 2 K by superfluid helium. The temperature of the instruments’ detectors is further reduced to 300 mK by a complex system of cryo-coolers. The cooling is crucial to prevent the heat radiation of the instruments from interfering with the observations, but it limits the lifetime of the mission to only 3.5–4 years.

The 152 new publications demonstrate the good performance of this challenging mission. They are all based on observations with two of the three instruments, because measurements with the Heterodyne Instrument for the Far Infrared started only in February 2010 owing to a technical problem. The two other instruments have excellent imaging capabilities in different wavebands. Longer wavelengths probe cooler material, which allows Herschel to map the temperature of cold gas and dust structures in the interstellar medium. Prime targets are star-forming regions, where large clouds of cold gas and dust slowly collapse to form protostars, which mature into stars within a few million years. The detection by Herschel of new populations of young stars and protostars is one of the highlights and is a key to a better understanding of the processes that lead to the birth of stars. In particular, the detection of high-mass protostars at the edge of regions of ionised hydrogen seems to confirm a complex formation scenario first proposed in 1977.

Looking at more distant targets, away from the Milky Way, Herschel can probe the evolution of star-forming galaxies over several thousand million years of cosmic history. A “must” for the early mission was to look at two prime fields already extensively observed by space telescopes such as Spitzer in the infrared, Hubble in the visible and Chandra in X-rays. Herschel opens a new wavelength window in these Great Observatories Origins Deep Survey (GOODS) fields, which have been carefully selected to study galaxies out to very high redshift. Its highly sensitive observations have pinpointed about half of the galaxies contributing to the cosmic infrared background. Herschel further finds that the main contribution to this diffuse emission comes from faint galaxies at relatively low redshift (z < 1). In the future, by probing the contribution at different redshifts, Herschel will allow researchers to deduce the cosmic history of star formation in the universe.

A call for open time observations with Herschel closed on 22 July and resulted in 585 proposals requesting about 900 days of continuous observing time. There is thus much more science to come from Herschel. The batch of first papers is just the proverbial tip of the iceberg, as the project scientist Göran Pilbratt points out.

IPAC ’10: accelerating to an international level

Résumé

IPAC ’10: accelerating to an international level

A new era began this year with the inauguration of the International Particle Accelerator Conference (IPAC), which merges the various regional conferences in the field. IPAC will be held every year, alternating between America, Asia and Europe. The first edition (IPAC ’10), held on 23–28 May at the Kyoto International Conference Centre, was truly international, bringing together nearly 1250 full-time participants from more than 30 countries across Africa, the Americas, Australasia and Europe. The start-up of the Large Hadron Collider, progress on high-luminosity electron–positron colliders and developments in light sources were among the highlights of the conference.

For many years, members of the particle-accelerator community have come together on a more or less regional basis to review progress in their field and present the state of the art. North America has held the Particle Accelerator Conference (PAC) in “odd” years since the 1960s; the European Particle Accelerator Conference (EPAC) has run in “even” years since 1988; and the Asian Particle Accelerator Conference (APAC) started on a three-year cycle in 1998. Now, with particle accelerators in operation worldwide, a new era has begun with the merging of these conferences into the International Particle Accelerator Conference (IPAC), which will be held annually, rotating between America, Asia and Europe so that each region hosts once every three years.

The first in the series, IPAC ’10, took place at the Kyoto International Conference Centre on 23–28 May. It was an international affair, with close to 1250 full-time participants coming from more than 30 different countries in Africa, Asia, the Americas, Australasia and Europe. The conference was organized by the Science Council of Japan, the Physical Society of Japan, the Particle Accelerator Society of Japan and the Atomic Energy Society of Japan under the auspices of the Asian Committee for Future Accelerators (ACFA), the European Physical Society Accelerator Group, the American Physical Society Division of Physics of Beams and institutes in North America involved in PAC.

Shin-ichi Kurokawa and Katsunobu Oide, respectively honorary chair and chair of the Organizing Committee, Akira Noda, chair of the Scientific Programme Committee (SPC), and Toshiyuki Shirai, chair of the Local Organizing Committee, had the honour of opening the inaugural conference. Then, setting an appropriate international flavour, Albrecht Wagner, previously director of DESY and a former chair of the International Committee for Future Accelerators, opened the scientific programme with a presentation on “International Collaboration with High Energy Accelerators”.

The scientific programme

The programme itself spanned four and a half days, with plenary sessions on Monday and Friday mornings and on Thursday afternoon. The remaining sessions each comprised two parallel oral sessions, while lively poster sessions took place at the end of each afternoon. The scientific programme was developed by the IPAC ’10 SPC, a truly international body with membership drawn 50% from Asia and 50% from Europe and North America. Altogether, there were 54 invited talks, 45 contributed oral presentations and 1800 posters. An industrial exhibition also took place during the first three days of the conference, in which 87 companies presented their high-tech products and services.

CERN’s LHC had a central role in many of the presentations, both in talks and in posters. This was the first major accelerator conference to follow the machine’s successful restart in November 2009 and hence the first opportunity to present measurements from commissioning with beam. CERN’s Steve Myers set the scene in the opening plenary session, beginning with the repairs and consolidation following the incident that had brought commissioning to an early standstill in September 2008 (The subtle side of superconductivity). Since the end of March the machine has been operating at 7 TeV in the centre of mass, with alternating periods of beam commissioning and physics data-taking. In the first two months, the peak luminosity increased by two orders of magnitude, with a goal of reaching 10³² cm⁻² s⁻¹ before the end of 2010. Myers paid tribute to the high quality and reliability of many of the systems, which had made the commissioning fast, as well as to the dedication of the many teams and collaborating institutes that had made it all possible.

Other talks and poster presentations focused on more specific aspects of the LHC. The cryogenic system, for example, is the largest in the world in terms of refrigeration capacity (about 140 kW at 4.5 K and 20 kW at 1.8 K), distributed superfluid helium (24 km of superconducting magnets, for a total mass of 36 000 tonnes below 2 K) and cryogen inventory (more than 130 tonnes of helium). The first operating experience with beams has already shown an availability of 90%, and the learning process continues as the cryogenics teams work out how to improve the weaker points in the system, reducing both the number of unplanned stops and the recovery time after a stop. Another success story has been the control of the beam optics around all 27 km of each of the two rings, together with the precise alignment (to a tenth of a millimetre) of each individual element of the machine. The optics (which determine the beam size at any point of the machine) are based on a calculated magnetic model that provides the expected relationship between current and field in the magnets. The early phase of commissioning immediately showed the high quality of both the optics modelling and the magnetic model, which were based on measurements taken during five years of magnet production and testing. The orbit is within specifications thanks to the alignment accuracy, and its correction has raised no significant issues.

Another vitally important aspect of LHC operation is the machine-protection system. Safe operation requires many systems for beam dumping, beam interlocks, beam instrumentation, equipment monitoring, collimators and absorbers, etc. The stored energy in the LHC beams is therefore being increased in steps, with operation for physics in between. At each step the machine protection is validated for operational conditions. By the time of the conference the energy stored in each of the beams already exceeded 100 kJ. By the end of 2010 this energy will have increased up to 30 MJ – well above the stored beam energy at Fermilab’s Tevatron, which amounts to about 1 MJ.

Pushing back the luminosity frontier is the task of the much smaller electron–positron storage rings that form the particle “factories”, in particular those for studying the physics of B particles, such as PEP-II at SLAC and KEKB at KEK. Nobel laureate Makoto Kobayashi emphasized the importance of the B factories in establishing the mechanism of CP violation in the six-quark model, for which he and Toshihide Maskawa shared the Nobel prize in 2008 (CERN Courier November 2008 p6). In the following talk, Mika Masuzawa of KEK described how the physics community has now set a target of an integrated luminosity of 50–75 ab⁻¹ for the exploration of new physics beyond the Standard Model. At a luminosity of 2 × 10³⁴ cm⁻² s⁻¹ (near the KEKB peak) this would take more than 100 years, so there is a need for machines of much higher luminosity. Two projects currently in advanced stages of design, SuperKEKB in Japan and SuperB in Italy, aim to deliver luminosities some 40–50 times higher (0.8 × 10³⁶ cm⁻² s⁻¹ and 1 × 10³⁶ cm⁻² s⁻¹, respectively).
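The 100-year figure follows directly from the units. A rough check, assuming uninterrupted running at the KEKB-like peak luminosity (real machines have duty factors well below 100%, which only lengthens the estimate):

```python
# How long would 50 ab^-1 take at 2e34 cm^-2 s^-1?
# 1 barn = 1e-24 cm^2, so 1 ab^-1 = 1e42 cm^-2.
TARGET_AB = 50
PEAK_LUMI = 2e34                 # cm^-2 s^-1
SECONDS_PER_YEAR = 3.156e7

target_cm2 = TARGET_AB * 1e42    # integrated luminosity in cm^-2
years = target_cm2 / PEAK_LUMI / SECONDS_PER_YEAR
print(f"{years:.0f} years of continuous running")  # → 79 years
```

Since no collider runs continuously at peak luminosity, the practical figure is indeed well over a century, hence the push for machines 40–50 times more luminous.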

Historically, particle accelerators developed as tools for high-energy particle physics and it was these that originally formed the basis for the associated conferences. The past decade, however, has seen an explosion in the number of accelerators used as light sources. There are now some 70 light sources in the world, serving a community of more than 100,000 users. In general, these users are demanding smaller beams and shorter pulses. One result is that light sources are now based not only on electron storage rings, but increasingly on free-electron lasers (FELs). These can provide short pulses of coherent radiation, which are valuable for probing rapidly changing processes.

John Galayda of the SLAC National Accelerator Laboratory presented the field’s equivalent of the LHC – the Linac Coherent Light Source (LCLS), the world’s first X-ray FEL to operate in the photon energy range 820–8200 eV. The LCLS uses an electron beam produced by a 1 km section of SLAC’s famous linac. It produced its first X-ray laser beam on 10 April 2009 and the first run for users took place between October and December 2009. Eleven successful experiments in atomic, molecular and optical science collected some 200 TB of data. Today it is routinely producing X-ray pulses with an energy of a few millijoules across the operating wavelength range.

Femtosecond X-ray pulses at linac-driven FELs such as the LCLS have a great scientific potential. “Pump-probe” experiments at these facilities aim to use X-rays (the probe) to produce time-resolved information at the atomic scale on a sample that has been excited (pumped) with a laser. Studies at the LCLS have already demonstrated an optical timing system based on stabilized fibre links that provide the required synchronization at less than 20 fs.

In Europe, FLASH, the FEL-user facility at DESY, covers a wavelength range of 6.5–50 nm and pulse durations of 10–50 fs. An upgrade started in autumn 2009 to increase the beam energy to 1.2 GeV – and reduce the shortest photon wavelength to 4.5 nm – by installing a seventh superconducting accelerating module, which is also a prototype for the European X-ray FEL project, XFEL.
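The quoted wavelength reduction is consistent with the FEL resonance condition, in which, for a fixed undulator, the radiated wavelength scales inversely with the square of the beam energy. A quick check, taking the 6.5 nm shortest wavelength at the pre-upgrade energy of 1 GeV as the starting point:

```python
# FEL resonance scaling: wavelength ∝ 1/E² for a fixed undulator
# period and undulator parameter K.
E_OLD, E_NEW = 1.0, 1.2    # beam energy before/after upgrade, GeV
LAMBDA_OLD = 6.5           # shortest photon wavelength before upgrade, nm

lambda_new = LAMBDA_OLD * (E_OLD / E_NEW) ** 2
print(f"{lambda_new:.1f} nm")  # → 4.5 nm, matching the quoted value
```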

Globally, there has been continued growth in the number of synchrotron light sources, with new machines coming on line to satisfy the demands of a rapidly expanding user community, particularly in the areas of biology and life sciences. New facilities include PETRA III at DESY, the Shanghai Synchrotron Radiation Facility in China, the Taiwan Photon Source and the SESAME facility under construction in the Middle East. Older facilities are also responding to increasing demands from users. In Japan, for example, a team at the SPring-8 storage ring is developing a system to provide short-pulse X-rays, using crab cavities to tilt the beam pulses and produce X-ray bunches that are sliced to give sub-picosecond pulses. A new XFEL machine is also under construction at the SPring-8 laboratory and, in Italy, FERMI@Elettra is a new FEL being built at the home of the Elettra synchrotron-radiation source to supply photons at wavelengths from 100 nm down to 4 nm. In Russia, the Budker Institute of Nuclear Physics in Novosibirsk has begun operating the world’s first multiorbit energy-recovery linac, at present serving two FELs (CERN Courier July/August 2010 p10).

Developing applications

Accelerators play a key role not only in research in life sciences, but also in applications in medicine. Accelerator pioneer Robert Wilson first proposed using protons and light ions for cancer therapy more than 60 years ago. Today there are five carbon-ion therapy facilities around the world, with more to come. The Japanese, in particular, are making impressive headway in the field, with more than 10 years of experience at the Heavy-Ion Medical Accelerator in Chiba (HIMAC), where a new treatment facility is being constructed (CERN Courier June 2010 p22). The IPAC conference followed a week after a meeting of the Particle Therapy Co-Operative Group in Chiba.

Other developing areas for the application of particle accelerators figured in the closing plenary sessions, with talks by Norbert Holtkamp, principal deputy director-general of ITER, on energy-related developments, and by Jasper Kirkby of CERN, on the use of particle beams in climate research. Finally, Johannes Bluemer, of the Karlsruhe Institute of Technology, brought a different perspective when he looked to the most powerful accelerators – those that exist in nature, providing cosmic rays with energies up to 100 EeV. These energies – and the multikilometre scale of the Pierre Auger Observatory built to study these ultrahigh-energy cosmic rays – extend even further the superlatives reached by the LHC.

In addition to the standard sessions, the conference provided the opportunity to present a number of prizes. ACFA and IPAC together awarded prizes for outstanding work in the field of accelerators to Steve Myers of CERN, Jie Wei of Tsinghua University, and Mei Bai of Brookhaven National Laboratory (CERN Courier March 2010 p30). There were also prizes for the best student posters (The student factor). A further prize, decided by the IPAC ’10 organizing committee, was presented to the JACoW collaboration for its service to the accelerator community (JACoW: accelerating proceedings). Lastly, Caterina Biscari of INFN-LNF received a plaque on behalf of her colleague Claudio Federici, who won the competition to design a logo for the new IPAC series.

The first conference in the new international series has thus set the scene for future meetings. The second IPAC will take place in San Sebastián, Spain (September 2011), and the third in New Orleans, in the US (May 2012). In this way the collaboration between the three regions, steadily enhanced in recent years, should continue to grow to the benefit of the worldwide accelerator community.

The student factor

Students are an essential factor in sustaining any research field – and particle-accelerator R&D is no exception. The inclusion of students was an important part of the biennial EPAC meetings, where their attendance at a major international conference in their field was supported through contributions from the various European accelerator laboratories. While EPAC’s two-year cycle did not fit ideally with the typical three-year doctoral term, IPAC’s annual cycle now provides a perfect fit.

Nearly 100 young scientists from around the world were able to attend IPAC ’10 thanks to sponsorship from societies, institutes and laboratories. One important aspect was their contribution to the poster sessions. Prizes for the best student posters were awarded to Houjun Qian, of Brookhaven National Laboratory and Tsinghua University in Beijing, for the contribution “Experimental Studies of Thermal Emittance of the Mg Cathode at the NSLS SDL”, and to Marcel Ruf, of the University of Erlangen-Nürnberg, for the contribution “Beam Position Monitoring Based on Higher Beam Harmonics for Application in Compact Medical and Industrial Linear Electron Accelerators”.

The IPAC ’10 organizers thank the following for their sponsorship for the students: APS-DPB, CEA, CELLS-ALBA, CERN, CIEMAT, CNRS/IN2P3, DESY, DIAMOND, EPS-AG, ESRF, Foundation for High Energy Accelerator Science, GSI, ICR Kyoto University, INFN, Italian Physical Society, IUPAP, JAEA, Japan Society for the Promotion of Science, JASRI, Kyoto Chamber of Commerce and Industry, Kyoto City, Kyoto Prefecture, KEK, Kyoto University, MAX-lab, MSL, NIRS, Osaka University, POSTECH, PSI, RCNP, RIKEN, STFC, Synchrotron SOLEIL and UST.

JACoW: accelerating proceedings

The proceedings of IPAC ’10 are published on the Joint Accelerator Conferences Website (JACoW) located and maintained at CERN and mirrored at KEK. Originally created for the publication of the proceedings of APAC, EPAC, and PAC, the site now hosts the proceedings of 16 conferences related to accelerator research (CERN Courier March 2003 p9). Over 100 sets of proceedings have been published to date, and more than half a million contributions are downloaded from the European site each year.

JACoW’s goal of publishing speedily and efficiently was achieved in magnificent style for IPAC ’10. With a strong international team, the “pre-press” version of the proceedings was available on-line on the last day of the conference via the Scientific Programme Management System (SPMS), the tool developed by the collaboration for the management of all contributions to the scientific programme. Exactly three weeks later, 1569 papers were published in final form on the JACoW site.

This year, ACFA and the IPAC ’10 organizing committee recognized the JACoW collaboration’s achievement with an award for services to the accelerator community. On receiving the prize on behalf of the collaboration, Volker Schaa and Christine Petit-Jean-Genaz, chair and deputy-chair of the JACoW collaboration, respectively, underlined that without laboratory support for the JACoW effort, such results would not be possible. The JACoW site is fully accessible and free of charge for the community. There is, however, a cost, albeit a small one, for the laboratories where a few members of staff dedicate a percentage of their time to JACoW, namely CERN, DESY, Fermilab, GSI and KEK.

• See www.jacow.org.

Europe charts future for radioactive beams

CCeur1_07_10

These are exciting times for European nuclear physics, with several new facilities under study or under construction, including HIE-ISOLDE at CERN, the NuSTAR experiments at the Facility for Antiproton and Ion Research (FAIR) in Germany, the Selective Production of Exotic Species (SPES) project at the Legnaro National Laboratory in Italy and the Système de Production d’Ions Radioactifs en Ligne 2 (SPIRAL2) at the French national heavy-ion accelerator laboratory, GANIL. In addition, the nuclear-physics community is working on a common design for a future European isotope-separation on-line facility, EURISOL. To discuss the open issues and promote synergies among the different projects, the community met recently at the second European Radioactive Ion Beam Conference (EURORIB ’10), held in Lamoura, France, on 6–11 June.

The new-generation facilities will all produce more intense radioactive ion beams (RIBs) to probe nuclear structure. “The use of RIBs to understand the nature of the nucleus started about 50 years ago and many unexpected discoveries have been made with this technique over the past two decades,” explains Yorick Blumenfeld, spokesperson for the ISOLDE collaboration at CERN and chair of the EURORIB ’10 conference.

One of the main goals of the EURORIB ’10 conference was to allow the European nuclear-physics community to share experiences and information about all of the various projects. “At the conference we had very active working groups on subjects like instrumentation and data acquisition,” says Blumenfeld. “This was also done to try to develop common technical solutions, which could be used by several facilities and be moved between them. We think that it will be more efficient to be able to move instrumentation between the different sites than to design similar detectors for all of them and then use them only occasionally.”

Two techniques

Both the present and future facilities for the production of RIBs are based on two basic techniques: the “in-flight fragmentation” and the “isotope separation on-line (ISOL)” types. For in-flight fragmentation, heavy nuclei are accelerated before hitting a thin target where fragmentation or fission reactions take place and produce many kinds of nuclei, including exotic ones. A magnetic separator selects the species that the various experiments want to study. “The advantage of this type of technique is that beams have high energy and the production process is fast. In this way, one can produce very short lived nuclei that are used to study the extremes of the nuclear chart,” explains Navin Alahari, deputy scientific co-ordinator at SPIRAL2. “On the other hand, the yields are low because the interactions happen in a thin target. Moreover, the beam has a large angular and energy spread.”

In the ISOL technique, a beam of light ions impinges on a thick target. In this case, fission or spallation reactions are induced, with the exotic nuclei diffusing out of the heated target. After that, they are ionized and the species of interest are selected. Some experiments use such beams at rest; others use them after they are post-accelerated. “The advantage is that in this case we use the full power of the beam, thus obtaining high intensities,” continues Navin. “Because the beams are post-accelerated, they have a well-defined energy and a small angular spread. The disadvantage is that the process of diffusion is slow and therefore one can only produce nuclei that have a relatively long lifetime, the lower limit being of the order of milliseconds. Some types of elements don’t even come out of the target; they get stuck because of their chemical properties.”

The EURORIB ’10 conference provided the opportunity to promote collaboration and exchange between the communities using these techniques. “The two methods for producing radioactive ion beams complement each other in that precision studies – such as the investigation of the nuclear levels and the studies of the correlations between decay particles – need large intensities (therefore the ISOL-type facilities) and good beam quality, while the ‘in-flight fragmentation’ technique is really good for exploring high-energy nuclear excitations and the confines of the nuclear chart where lifetimes are very short,” confirms Valentina Ricciardi of GSI.

For the longer term, the ultimate goal of the ISOL community is to build the EURISOL facility. “We need so many intermediate facilities on the way to EURISOL because there are many new techniques that need to be explored,” says Faisal Azaiez, recently named director of l’Institut de Physique Nucléaire, Orsay. “After all of this preliminary work we will be able to converge and put together the best ideas in order to optimize the various processes and give birth to a common, state-of-the-art system.” A detailed design for this “ultimate” ISOL facility for Europe was devised during the EURISOL Design Study, funded partially by the European Commission, which lasted four and a half years and involved 20 European laboratories, including CERN.

Multiuser capability is an essential ingredient of the EURISOL concept, which is based on a superconducting continuous-wave linac capable of accelerating 5 mA of H ions to 1 GeV (5 MW beam power). The major part of the beam will be sent to a mercury converter-target where the neutrons produced will induce fission in six uranium-carbide targets surrounding the converter. An innovative magnetic beam-splitting system will create up to three additional 100 kW beams. These will impinge directly on solid targets to induce spallation reactions, which can populate the regions of the nuclear chart that are unattainable in fission reactions. After selection, the radioactive beams can either be used at low energies or post-accelerated in another superconducting linac with continuous energy variation up to 150 MeV/A for 132Sn, for example. The high-energy neutron-rich beams such as 132Sn, which will reach intensities of up to 10¹² particles a second, can then be fragmented to produce high intensities of many otherwise inaccessible neutron-rich nuclei.
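The driver-beam figures quoted above fit together with simple arithmetic: for singly charged ions, the beam power in watts is the current in amperes times the energy in electronvolts. A minimal sketch (the function name is illustrative, and the 0.1 mA figure for a split beam is simply what 100 kW at 1 GeV implies, not a number from the article):

```python
# Quick consistency check of the EURISOL driver-beam figures:
# for singly charged ions, beam power (W) = current (A) x energy (eV).

def beam_power_watts(current_amperes, energy_eV):
    """Power carried by a beam of singly charged particles."""
    return current_amperes * energy_eV

main_beam = beam_power_watts(5e-3, 1e9)     # 5 mA of H ions at 1 GeV
print(main_beam / 1e6)                      # → 5.0, the quoted 5 MW

split_beam = beam_power_watts(0.1e-3, 1e9)  # current implied by a 100 kW split beam
print(split_beam / 1e3)                     # → 100.0 kW
```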

“A choice for the location for EURISOL will have to be made in the coming years,” says Blumenfeld, who led the EURISOL design study. “The natural course would be to choose one of the sites of the new ISOL facilities, but a green-field site could also be considered.”

While Europe is on its way towards combining efforts and converging on the EURISOL project, Japan and North America (the US and Canada) are also very active. Speakers from these countries presented their new facilities at EURORIB ’10 and, in some cases, even the first results. Seung-Woo Hong, at Sungkyunkwan University, Korea, for example, is leading the conceptual design project for a heavy-ion accelerator facility for producing RIBs using both the ISOL and the in-flight fragmentation techniques. “This shows that there is a lot of interest in this field all across the world, which is good news for our community,” comments Blumenfeld.

It is clear that nuclear physics is currently a lively field of research. This is easy to understand considering the many links it has with other disciplines and the variety of practical applications, which are becoming increasingly common. “Nuclear physics has a strong and direct link with astrophysics,” says Giovanni La Rana of the Italian National Institute of Nuclear Physics (INFN), describing one example. “Stars live because of nuclear reactions, and to understand how elements are synthesized in the stars scientists need information about the nuclei involved in the reactions. Exotic nuclei are involved in supernova explosions, and to study these processes one needs the masses, cross-sections and lifetimes of these nuclei. So far, the information is extrapolated from stable nuclei through theoretical models, but the new facilities should give us access to these new exotic nuclei.”

By contrast, Alexander Herlert, ISOLDE physics co-ordinator, cites applications closer to Earth. “Researchers implant radioactive nuclei into materials,” he says. “Observing the decay, they can study the properties of the materials. This technique, complementary to other solid-state techniques, allows them to understand the structure of semiconductors and new types of materials.” Nuclear-physics techniques also find application in the medical field, particularly in what is known as isotope harvesting. “Our facilities can produce all sorts of isotopes, basically anything,” says Ulli Köster of the Institut Laue-Langevin, who presented the future perspectives for the field at the conference. “If the doctors need to test new isotopes for medical imaging or treatment, then we can produce them.”

EURORIB ’10 certainly revealed the present vitality in nuclear physics. The talks and discussion sessions underlined the close scientific and technical collaboration between the different RIB facilities, which is propelling the field towards a unified European perspective.

Canada’s bright future in subatomic physics

In this era of fiscal uncertainty, several key agencies in Canada have stepped up and made firm commitments to TRIUMF and the future of particle and nuclear physics in Canada. In March, TRIUMF’s five-year core operating budget was renewed at the level of C$222.3 million for the 2010–2015 period. Then, in mid-June, the final pieces of the funding puzzle were put into place for the launch of the new flagship Advanced Rare IsotopE Laboratory (ARIEL) facility at TRIUMF, when the Province of British Columbia announced its C$30.7 million investment, completing the C$63 million package. The project includes a new, high-power, superconducting radio-frequency electron linear accelerator for isotope production.

As Canada’s national laboratory for particle and nuclear physics, TRIUMF is owned and operated by a consortium of 15 Canadian universities. Its core operating funds are supplied in five-year blocks by the federal Ministry of Industry through the National Research Council Canada. The previous five-year cycle ended on 31 March 2010; new funding for the laboratory for the 2010–2015 period was unveiled in March as part of the federal budget. The announcement completes a process of more than two years’ effort to secure the funding. This included both an international review by some of the world’s most accomplished scientists and an economic-impact study that analysed the direct, indirect and induced impacts on the provincial and federal economies of public spending at TRIUMF.

TRIUMF celebrated its 40th anniversary last year. Over the years it has evolved from covering only medium-energy nuclear physics to include high-energy physics, materials science, rare-isotope beam physics, accelerator science and technology, and most recently, nuclear medicine. TRIUMF regularly produces intense beams of exotic isotopes using proton beams of up to 50 kW extracted from the main 500 MeV cyclotron. These isotopes are produced and studied in the Isotope Separator and Accelerator (ISAC) facility, which includes an impressive suite of experiments and detectors for research in nuclear structure and nuclear astrophysics, and for tests of fundamental symmetries. TRIUMF recently completed an upgrade of the ISAC-II facility to provide acceleration of radioactive ions of up to 5 MeV/u. This linear accelerator was developed using superconducting radio-frequency cavities manufactured in Canada by PAVAC Industries in co-operation with TRIUMF. In nuclear medicine, TRIUMF has a 30-year history of producing medical isotopes using small cyclotrons in partnership with MDS Nordion for global sales and distribution.

The five-year vision

The federal contribution for operations will not support all of the TRIUMF community’s aspirations (nor should it), but it does support and strengthen key initiatives in particle physics, nuclear physics, materials science, nuclear medicine and accelerator science and technology. In nuclear physics, the programme will focus on exploiting the existing ISAC-I and ISAC-II facilities. An aggressive programme in target development will continue and deliver beams of novel isotopes from actinide targets for physics experiments in the next 2–3 years. A programme for the production and characterization of uranium-carbide foils for use in ISAC has begun and the first physics run using novel isotopes from actinides is scheduled for December 2010.

In materials science, construction work on additional muon beamlines will be completed to offer greater flexibility and more time for scientific usage. A new initiative in nuclear medicine is being launched that expands TRIUMF’s historic activities in medical-isotope production into radiochemistry for the development and preclinical qualification of new radiotracers. The nuclear-medicine programme will include new equipment, full-time personnel and stronger partnerships across Canada.

In particle physics, the ATLAS Canada Tier-1 Data Centre will continue its operations; it serves as one of the 10 global data-storage and distribution centres for physics data from the ATLAS experiment at CERN’s LHC. Canada’s involvement in the Japan-based Tokai-to-Kamioka neutrino experiment will continue to receive support from TRIUMF as the research moves into the data-collection and analysis phase.

ARIEL takes off

The ARIEL facility will be the new flagship of the TRIUMF programme. It includes a new underground beam tunnel housing a next-generation linear accelerator – the e-linac, a project led by the University of Victoria. This facility substantially expands TRIUMF’s isotope-production capabilities by adding the technique of photo-fission to the suite of available technologies. Canada will be unique in having electron- and proton-based capabilities for isotope production within the same laboratory. Moreover, for the first time in 35 years, TRIUMF’s main cyclotron will have a fully fledged younger sibling to drive the breadth of the laboratory’s research.

The lower floors of ARIEL will house the e-linac, which will produce an intense beam of electrons up to 50 MeV. An underground beam tunnel will connect the accelerator to the isotope-production area, where the beam of electrons will strike a convertor to create an intense beam of photons via bremsstrahlung. This beam will in turn be directed at targets made of beryllium, tantalum or actinide materials, for example. The isotopes will be extracted, separated and accelerated in real time and sent to the ISAC experimental areas.

The focus of ARIEL will be on “isotopes for physics and medicine”. In terms of nuclear physics with rare isotopes, ARIEL is expected to increase TRIUMF’s annual scientific productivity by a factor of 2–3 above current levels by providing a second primary “engine” for producing isotopes. ISAC will move from being a “one-at-a-time” facility to running several experiments simultaneously. The e-linac will expand the materials-science capabilities at TRIUMF by enabling high-volume production of lithium-8 for β-NMR studies using a beryllium target. In terms of isotopes for medicine, the facility is intended to develop and study next-generation medical isotopes that may have applications in therapy (e.g. via alpha emission). ARIEL will also be used to demonstrate and benchmark the use of photo-fission technology for larger-scale production of key medical isotopes that are currently only produced in reactors, such as 99Mo/99mTc. Photo-fission at ARIEL could produce at least one six-day Curie of 99Mo per gram of natural uranium target material for a 100 kW irradiation period.

Construction of the ARIEL facility and e-linac began on 1 July, providing immediate stimulus to the civil-construction and technical communities in British Columbia and Canada. The facility will be completed in 2013 and the e-linac will then be installed. Isotope production for physics and medicine will be commissioned in 2014 and round-the-clock operations will become routine in 2015. ARIEL is designed to support two target stations – one initially for electrons and a future one for a new proton beamline extracted from the main 500 MeV cyclotron.

The e-linac will begin with a 30 MeV, 100 kW beam by 2014, with plans for it to be upgraded to a full 500 kW beam in the 2015–2020 era. The superconducting radio-frequency technology selected for the accelerator expands an emerging core competency at TRIUMF in partnership with a local electron-beam welding company, PAVAC Industries. The e-linac will be built using 1.3 GHz technology, recognizing the global move to parameters similar to those of the TESLA and International Linear Collider projects. The injector cryomodule is being designed and constructed in collaboration with India’s Variable Energy Cyclotron Centre in Kolkata.

With a broad set of opportunities and programmes facing it, TRIUMF is optimistic about the next decade of scientific activity. Together with its national and international partners, the laboratory hopes to bring a “gold medal” home to Canada in subatomic physics.

The subtle side of superconductivity

The LHC is probably the largest and most complex scientific instrument ever built. It relies on superconductivity, which plays a fundamental role because it allows magnetic fields in excess of 8 T to be reached. Combined with the radius of curvature of 2.804 km in the dipole (bending) magnets, this field enables proton beams to reach energies of 7 TeV, almost an order of magnitude higher than in previous accelerators. In total there are 1734 large, twin-aperture superconducting magnets, which include the backbone of 1232 main dipoles, each 15 m long. There are also 7724 smaller superconducting corrector magnets. To reach the design performance nearly all of the magnets are cooled with superfluid helium to 1.9 K. The total stored magnetic energy will be about 9000 MJ when running with the dipoles at 8.3 T and a beam energy of 7 TeV.

After 25 years, from conception through R&D and construction to commissioning, the LHC started up in spectacular fashion on 10 September 2008. The success of this first commissioning with beam demonstrated the excellent field quality and geometry of the magnets, their precise alignment and good stability, the accuracy of the power supply and the successful operation of the highly complex 1.9 K cryogenic system. Only nine days later, however, in the course of hardware commissioning, a severe incident occurred in sector 3-4 during a ramp of the main dipole current to 9.3 kA (corresponding to a magnetic field of about 6.5 T). It was the final ramp before definitive commissioning of all eight sectors of the machine for operation at 8.6 kA and, hence, an energy of 5 TeV. Many magnets quenched and eventually helium was released into the tunnel and general power was lost in the sector. The incident led to a delay of more than a year before the physics programme began successfully in November 2009.

Collateral damage

The first inspection of the LHC tunnel after the incident revealed considerable damage along a zone about 750 m long. There was deformation of connections, electrical faults, perforation of the helium vessel, local destruction of the beam tube with heavy pollution by debris including fragments of multilayer insulation, breakage or damage of cold support posts, breaches in the interconnection bellows, damage to the warm jacks that support the magnets and cracks in the tunnel floor. The pollution of the beam tubes from tiny confetti-like fragments of insulation extended much further, spanning the sector’s full 3 km-long arc. A task force led by Philippe Lebrun was immediately set up to analyse the incident and propose remedies. Within a month, CERN published the first interim report, followed by a more detailed second report in December 2008. The final report was published at the end of March 2009 (Bajko et al. 2009).

It soon became clear that the root of the incident lay with a single fault in an electrical connection between two adjacent magnets, which had led to extensive collateral damage. A defective joint had created a small resistive zone in a superconducting busbar designed to carry a maximum current of 13 kA. It was a small fault in a relatively low-tech system, but it had dramatic consequences, thanks to the subtleties of superconductivity.

The protection scheme

Before discussing this in more detail, it is worth describing the magnet powering and the scheme designed to protect the magnets when a quench occurs. In a quench, a conductor rapidly changes from being superconducting (with no resistance) to being normally conducting (resistive). This transition creates sudden heating in the resistive region, which must be controlled swiftly to avoid permanent damage to the magnet: the conductor can no longer sustain the high current, and the magnetic energy – about 7 MJ per dipole magnet – is converted into heat.
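The 7 MJ figure follows from the stored energy of an inductor, E = LI²/2. A rough check, taking a per-dipole inductance of about 0.1 H (an assumed round number, not from the article) and a nominal current of about 11.85 kA (consistent with the 9.3 kA ↔ 6.5 T scaling quoted earlier, extrapolated to 8.3 T):

```python
# Order-of-magnitude check of the ~7 MJ stored per dipole.
# The inductance (~0.1 H per dipole) is an assumed illustrative value;
# ~11.85 kA is the current implied for 8.3 T by the 9.3 kA ↔ 6.5 T scaling.

def stored_energy_joules(inductance_henries, current_amperes):
    """Magnetic energy stored in an inductor: E = L * I^2 / 2."""
    return 0.5 * inductance_henries * current_amperes**2

e_dipole = stored_energy_joules(0.1, 11850.0)
print(round(e_dipole / 1e6, 1))  # → 7.0 MJ, consistent with the text
```

Note that 1232 dipoles at about 7 MJ each also recovers the roughly 9000 MJ total stored energy quoted at the start of the article.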

Busbars and splices

The main magnets of the LHC are connected electrically in a series via 13 kA superconducting busbars in eight main circuits, one per sector. Figure 1 shows a simplified version of the powering and protection scheme for one sector. The 154 dipoles in the sector are powered in series from one 13 kA power convertor – with a dump resistance connected in parallel. The quench-detection system (QDS) monitors for resistive transitions in a magnet by comparing the voltages across the two apertures. When the onset of a quench is detected the system switches in the dump resistor. The inductance, L, of the whole circuit and its resistance, R (determined by the current and maximum voltage), give a 1/e discharge time, L/R, of 104 s, which is far too long for the magnet to survive. Each magnet therefore has a cold bypass-diode and heaters on the coils. As soon as a resistive transition is detected the heaters are fired so as to quench the coils in less than 50 ms. The subsequent sudden rise in voltage turns on the diodes so that they conduct and the current in the quenched coils decays to almost zero in less than 1 s. Meanwhile, all of the unquenched magnets in the sector and the busbars that bypass the quenched coils continue to carry the full current.
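The 104 s discharge time can be related to the circuit parameters with τ = L/R. A minimal sketch, again assuming roughly 0.1 H per dipole (an illustrative value), which shows the order of dump resistance such a time constant implies:

```python
# Sketch of the sector discharge time constant quoted above (L/R ≈ 104 s).
# The per-dipole inductance of ~0.1 H is an assumed round number;
# 154 dipoles are powered in series in one sector.

L_dipole = 0.1             # H, assumed per-dipole inductance
L_sector = 154 * L_dipole  # total series inductance, ~15.4 H
tau = 104.0                # s, the quoted 1/e discharge time

R_dump = L_sector / tau    # dump resistance implied by tau = L/R
print(round(R_dump, 3))    # → 0.148 ohm, i.e. of order 0.15 ohm
```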

Components of an interconnection

The busbars, in which the diodes are inserted, not only bypass any quenched magnet(s) electrically but also serve as a connection between adjacent magnets. So during a magnet quench the busbars carry the overall circuit current, decaying with a time constant of 104 s at the interconnections as well as in the quenched magnet(s). These busbars consist of a superconducting cable that is thermally and electrically coupled to a copper stabilizer along its whole length. The copper cross-section of the stabilizer is designed to be sufficient to carry the current safely, with no damage to the busbar, for the 104-second long discharge even if its superconducting cable is driven into the normal state.

In the case of the incident on 19 September 2008, analysis revealed that a sudden increase of the voltage occurred in the main dipole circuit in sector 3-4, such that the power supply could not deliver the required current. This initiated a fast de-ramp of the magnets, discharging their energy in the dumping system. The discharge was faster than the nominal time constant of 104 s and the circuit quickly became divided into two branches, indicating the presence of a short-circuit. Several magnets quenched.

The basic fault appears to have been a defective joint in the 13 kA connection between superconducting cables in two adjacent magnets. As figure 2 shows, soft soldering based on tin-silver alloy is used not only to splice the superconducting cable but also to connect the copper stabilizer of the interconnection to both the cable joint and the stabilizing copper of the busbar. When finished, the connection looks like a continuation of the busbars that run along the whole length of the magnet system. The splice between superconducting cables is specified to have a resistance below 0.6 nΩ at 1.9 K. The actual results on samples during production showed an average of 0.2 nΩ with a variance of less than 0.1 nΩ. The resistance of the splice that failed was later evaluated to have been around 220 nΩ.

As they are superconducting, the busbars also have a QDS. This did not intercept the fault, however, because it was not sensitive enough to detect the approximately 2 mV voltage of the resistive zone; the sensitivity was, in fact, 300 mV with an intervention threshold of 1 V. It was subsequently found that, during a current plateau at 7 kA the previous day, sensors on the magnet had indicated a small but distinct increase in temperature of 40 mK above 1.9 K. This was a clear sign of the existence of an abnormal heat dissipation of 10.7 ± 2.1 W, corresponding to a resistance of 180–260 nΩ. (We now know, a posteriori, that we can use this “calorimetric” technique to detect these types of faults.) Had the resistance remained as small as this there would have been no major problem. However, because the current was ramped up to 8.7 kA on 19 September, localized heating increased the resistance, leading to thermal runaway. The heat dissipation was nearly 9 kW by the time the quench-detection threshold of 1 V was reached. Within a second, an electrical arc developed, puncturing the helium enclosure. This led to a release of helium into the insulation vacuum of the cryostat and the subsequent collateral damage described above.
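The dissipation figures above follow directly from Ohmic heating. A short check (function name illustrative) reproducing both the plateau measurement and the power at the quench-detection threshold:

```python
# Reproducing the dissipation figures in the text from Ohmic heating.

def joule_power(current_A, resistance_ohm):
    """Heat dissipated in a resistive zone: P = I^2 * R."""
    return current_A**2 * resistance_ohm

# A ~220 nOhm splice at the 7 kA plateau:
p_plateau = joule_power(7000.0, 220e-9)
print(round(p_plateau, 1))  # → 10.8 W, matching the measured 10.7 ± 2.1 W

# At the 1 V detection threshold during the 8.7 kA ramp, P = I * V:
p_runaway = 8700.0 * 1.0
print(p_runaway)            # → 8700.0 W, the "nearly 9 kW" quoted above
```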

Defective joint

So what had happened? A thermoelectrical model was able to simulate the thermal runaway of the resistive zone in the splice at 8.7 kA, based on the hypothesis of a resistance of 220 nΩ together with a lack of contact between the superconducting cable and copper stabilizer at the joint, as well as the existence of a longitudinal gap in the stabilizer as in figure 3 (Verweij 2009). This discontinuity in the stabilizing copper is important because it impedes the sharing of current between cable and stabilizer. The time constant of the current decay in the busbar is 104 s and the copper there is designed to cope with the heat generated as the current decays in the whole circuit. By contrast, the copper matrix of the superconducting cable is of a size that is sufficient to withstand a discharge time in a resistive state of less than 1 s – the decay time for a single magnet. If there is a discontinuity in the copper stabilizer as well as no contact between the cable and stabilizer, the joint in the superconducting cable cannot sustain the 104 s-long discharge and it melts away.

A subtle enemy

Thus, while the incident was triggered by a bad splice – that is, a bad superconductor-to-superconductor joint – the analysis revealed a more subtle possibility. Although the splice between superconducting cables may be good, the surrounding copper stabilizer may not be in contact with the cable, as shown in figure 4. In fact, if the stabilizer is in good contact with the superconducting cable and just has a short longitudinal gap – a few millimetres, say – there is no danger: in a quench of the joint the current can pass through the copper matrix of the superconducting cable and the small amount of heat generated can escape easily via conduction in helium or the busbar.

However, if this gap is coupled with a lack of tin-silver soldering, i.e. the cable at the splice-to-busbar transition is not in good contact with the stabilizing copper for a certain length, then the situation can diverge. The current has to flow through the cable for the whole distance that the cable is isolated and the heat may become too large to escape before a large rise in temperature occurs, initiating thermal runaway and rapidly reaching the melting point in a few seconds. An interconnection joint can be quenched by external heating, for example by warm helium coming from a nearby quenching magnet. The lack of stabilizer continuity could thus cause thermal runaway in the busbar and it turns out to be a more subtle enemy than a bad splice, because it is more difficult to detect.

A defective interconnection

The task force that investigated the incident proposed a number of remedies, mitigation measures and points to study to improve safety and reliability of the LHC. These included the implementation of a new QDS on the busbars and interconnection line, with a sensitivity threshold of 0.3 mV during a ramp. In a steady state the new QDS can detect a bad splice with a resistance above 1 nΩ. Indeed, the worst interconnection splices have turned out to be about 3 nΩ, far below the runaway threshold, which is estimated to lie well above 50 nΩ.

Moreover, while hunting for bad interconnection splices in October 2008, we realized that the “old” QDS can be used in a measuring mode (rather than the usual active mode) to detect bad splices inside magnets that are in a superconducting state (i.e. at 1.9 K). Although not precise, these (and calorimetric) measurements quickly revealed three magnets (two in the LHC and one in reserve) with defective internal splices of 100, 50 and 25 nΩ. The two installed magnets were replaced, an action that meant that four sectors in total had to be warmed up during the shutdown in 2008–2009. More precise, dedicated tests made in recent months with the QDS system in measuring mode found no further bad internal splices, although the system did find 12 dipoles with an internal resistance well above the specification but below 25 nΩ. Internal splices are much less dangerous than interconnection splices because they are covered by the QDS of the magnets, where the current is cut off in less than 1 s. Moreover, all internal splices had been checked during cold-acceptance tests of the magnets at 8.6–9.0 T.

The danger of the lack of stabilizer continuity in the busbar required a separate diagnostic method. By measuring a busbar in its resistive state (i.e. warm) over a minimum length (two or three magnets, i.e. 30 or 45 m), one can infer whether there are zones where the cable is out of contact with the stabilizer in conjunction with a gap in the stabilizer. So far this has been done for the four sectors that were warmed up during the long shutdown. In these, all of the bad joints where the defect was longer than 20–25 mm were fixed by resoldering. The other four sectors were measured at 80 K with much less accuracy. As a result one of these sectors was warmed up and three bad joints were repaired, although some defects of almost 40 mm remain, and will be fixed in future.

Copper shunts

In the three sectors that were not warmed up, the inherent uncertainty in the cold measurements means that defects up to 70 mm long have not been excluded. This limits the maximum safe current for powering the magnets with no risk of thermal runaway in the joints. Different studies based on different models have been made to evaluate the critical defect length, based on input from an experiment performed with a cable insulated from the busbar stabilizing copper for 50 mm. The results of these studies led to the decision to limit the field of the magnets to 4.5 T to begin with, and so allow commissioning with collisions at 3.5 TeV per beam, half of the maximum energy (Myers 2010). The LHC has been operating successfully in this manner since the end of March and will continue to do so throughout 2010 and 2011, allowing the experiments to gather significant amounts of data.

To exploit the full potential of the accelerator by pushing the magnets to 8.3 T, all bad interconnections with the cable detached from the stabilizer copper will have to be fixed. Experience with the sectors that were raised to room temperature during the shutdown suggests that around 10–15% of the joints will need to be resoldered. In addition, we have devised a system that will stabilize all of the interconnections. This involves a relatively simple copper shunt that will be soldered across all of the 10,000 or so interconnections (figure 5). This shunt will definitely cure the issue of the possible lack of continuity of the stabilizer. The aim is to ensure the complete electrical stability of the superconducting magnet system for the LHC’s foreseen lifetime of 25 years (Bertinelli et al. 2010). This will in turn allow the fullest possible returns in terms of new physics in a previously unexplored energy region.

• This article is based on the longer report, Superconductivity: its role, its success and its setbacks in the Large Hadron Collider of CERN (Rossi 2010).

OPERA catches its first tau-neutrino

The OPERA collaboration has announced the observation of the first candidate tau-neutrino (ντ) in the muon-neutrino (νμ) beam sent through the Earth from CERN to the INFN’s Gran Sasso Laboratory 730 km away. The result is an important final piece in a puzzle that has challenged science for almost half a century.

The puzzle surrounding neutrinos originated in the 1960s when the pioneering experiment by Ray Davis detected fewer neutrinos arriving at the Earth from the Sun than solar models predicted. A possible solution, proposed in 1969 by Bruno Pontecorvo and Vladimir Gribov, was that oscillatory changes between different types of neutrinos could be responsible for the apparent neutrino deficit. Conclusive evidence that electron-neutrinos, νe, from the Sun change type en route to the Earth came from the Sudbury Neutrino Observatory in 2002, a few years after the Super-Kamiokande experiment found the first evidence for oscillations in νμ created by cosmic rays in the atmosphere. Accelerator-based experiments have since observed the disappearance of νμ, confirming the oscillation hypothesis, but until now there have been no observations of the appearance of a ντ in a νμ beam.

OPERA’s result follows seven years of preparation and more than three years of beam provided by CERN. The neutrinos are generated at CERN when a proton beam from the Super Proton Synchrotron strikes a target, producing pions and kaons. These quickly decay, giving rise mainly to νμ that pass unhindered through the Earth’s crust towards Gran Sasso. The appearance and subsequent decay of a τ in the OPERA experiment would provide the telltale sign of νμ to ντ oscillation through a charged-current interaction.

Detecting the τ decay is a challenging task, demanding particle tracking at micrometre resolution to reconstruct the topology: either a kink – a sharp change (>20 mrad) in direction occurring after about 1 mm – as the original τ decays into a charged particle together with one or more neutrinos, or the vertex for the decay mode into three charged particles plus a neutrino.
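The roughly 1 mm flight length quoted above follows from relativistic time dilation of the tau lifetime, L = γβcτ = (p/m)cτ. A rough check, taking a tau energy of about 20 GeV (an assumed typical value for this beam, not a figure from the article) and the standard tau mass and cτ values:

```python
import math

# Rough check of the ~1 mm tau flight length. The ~20 GeV tau energy is an
# assumed illustrative value; mass and c*tau are the standard PDG figures.

M_TAU_GEV = 1.777   # tau mass in GeV
CTAU_M = 87.03e-6   # c times the tau lifetime, in metres

def mean_flight_length(energy_GeV):
    """Mean decay length L = gamma*beta*c*tau = (p/m)*c*tau."""
    momentum = math.sqrt(energy_GeV**2 - M_TAU_GEV**2)
    return (momentum / M_TAU_GEV) * CTAU_M

print(round(mean_flight_length(20.0) * 1000, 2))  # → 0.98 mm, i.e. about 1 mm
```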

The OPERA apparatus has two identical Super Modules, each containing a target section and a large-aperture muon spectrometer. The target consists of alternate walls of lead/emulsion bricks – 150,000 bricks in total – and modules of scintillator strips for the target tracker. The nuclear-emulsion technique allows the collaboration to measure the neutrino-interaction vertices with high precision. The scintillators provide an electronic trigger for neutrino interactions, localize the particular brick in which the neutrino has interacted, and perform a first tracking of muons within the target. The relevant bricks are then extracted from the walls so that the film can be developed and scanned using computer-controlled scanning microscopes.

The collaboration has identified the first candidate ντ in a sample of events from data taken in 2008–2009, corresponding to 1.89 × 10¹⁹ protons on the target at CERN. The sample contains 1088 events, including 901 that appear to be charged-current interactions. The search through these has yielded one candidate with the characteristics expected for the decay of a τ into a charged hadron (h), neutral pions (π0) and a ντ. Indeed, the kinematical analysis is consistent with the decay τ → h(π0)ντ. The significance of the event not being a background fluctuation for this decay channel is 2.36 σ.
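The 2.36 σ figure can be translated into a background-fluctuation probability using the one-sided Gaussian tail integral, which the standard error-function routine provides:

```python
import math

# Converting the 2.36 sigma significance quoted above into a one-sided
# probability that the candidate is a background fluctuation.

def one_sided_p_value(n_sigma):
    """P(X > n_sigma) for a standard normal variable, via the Gaussian tail."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

print(round(one_sided_p_value(2.36), 4))  # → 0.0091, i.e. below a 1% chance
```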

This candidate event is an important first step towards the observation of νμ → ντ oscillations through the direct appearance of the ντ. That claim will require the detection of a few more events, but so far the collaboration has analysed only 35% of the data taken in 2008 and 2009, and ultimately should have five times as much data as at present.
