

T2K beamline starts operation


On 23 April, the Tokai-to-Kamioka (T2K) long-baseline neutrino oscillation experiment confirmed the first production of its neutrino beam by observing the muons produced by the proton beam in the neutrino facility at the Japan Proton Accelerator Research Complex (J-PARC).

The T2K experiment uses a high-intensity proton beam at J-PARC at Tokai to generate neutrinos that will travel 295 km to the 50 kt water Cherenkov detector, Super-Kamiokande, which is located about 1000 m underground in the Kamioka mine (CERN Courier July/August 2008 p19). The experiment follows in the footsteps of KEK-to-Kamioka (K2K), which generated muon neutrinos at the 12 GeV proton synchrotron at KEK.

With the beam generated at the J-PARC facility, T2K will have a muon-neutrino beam 100 times more intense than in K2K. This should allow the experimenters to measure θ13, the smallest and least well known of the angles in the neutrino mixing matrix that underlies the description of neutrino oscillations. Experiments with atmospheric neutrinos have found the mixing angle θ23 to be close to its maximal value of 45°, while the long-standing solar neutrino problem has been solved by neutrino oscillations with a large value of θ12 (Borexino homes in on neutrino oscillations).
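For orientation, the size of the effect that T2K is after can be seen in a simplified two-flavour approximation (a sketch only, not the full three-flavour treatment the experiment will use), in which the probability for a muon neutrino of energy E to appear as an electron neutrino after travelling a distance L is governed directly by θ13 and the atmospheric mass-squared splitting:

\[
P(\nu_\mu \rightarrow \nu_e) \;\simeq\; \sin^2 2\theta_{13}\,\sin^2\!\left(\frac{1.27\,\Delta m^2_{31}\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right).
\]

Because the appearance probability is small if θ13 is small, the factor-of-100 gain in beam intensity over K2K translates directly into better sensitivity to this angle.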

PETRA III stores its first positron beam


DESY’s new third-generation synchrotron radiation source, PETRA III, accelerated its first beam on 16 April. At 10.14 a.m. the positron bunches were injected and stored in the 2.3 km accelerator for the first time. The start of operation with the beam concludes a two-year upgrade that converted the storage ring PETRA into a world-class X-ray radiation source.

As the most powerful light source of its kind, PETRA III will offer excellent possibilities to researchers who require tightly focused, very short-wavelength X-rays for their experiments. In particular, PETRA III will have the lowest emittance – 1 nm rad – of all high-energy (6 GeV) storage rings worldwide.

Following the stable storage of the particle beam achieved on 16 April, the accelerator is now being set up for the production of synchrotron radiation. The undulators – the magnets that ensure the machine’s high brilliance in X-rays – will be positioned to force the beam to oscillate and emit the desired intense, short-wavelength radiation. At the same time, the mounting of the 14 beamlines to be used for experiments continues in the newly constructed experiment hall. A first test run with synchrotron radiation is planned for this summer; regular user operation of the new synchrotron radiation source will start in 2010.

The PETRA storage ring began life as a leading electron–positron collider in the 1980s and later became a pre-accelerator for the electron/positron–proton collider, HERA. The remodelling into PETRA III at a cost of €225 million was funded mainly by the Federal Ministry of Education and Research, the City of Hamburg and the Helmholtz Association.

FFAGs enter the applications era


After a series of preliminary tests on 26–27 February, the Research Reactor Institute of Kyoto University (KURRI) received a national licence to conduct experiments for an accelerator-driven subcritical reactor (ADSR) using the Kyoto University Critical Assembly (KUCA). The first ADSR experiment began on 4 March, using a newly developed fixed-field alternating-gradient (FFAG) proton accelerator connected to the KUCA. This marks the first use of an FFAG accelerator built for a specific application rather than as a prototype, and it heralds the start of a new era.

The project, Development of an Accelerator-Driven Subcritical Reactor using an FFAG Proton Accelerator, which is now reaching its goal, was initiated in 2002 under a contract with the Ministry of Education, Culture, Sports, Science and Technology (MEXT) as part of the Technology Development Project for Innovative Nuclear Energy Systems. In the experiment the FFAG accelerator provides a high-energy proton beam to a heavy-metal target in the KUCA to produce spallation neutrons, which in turn drive fission chain reactions in the KUCA A-core.

The aim is to examine the feasibility of an ADSR and to lay the foundations for its development. The fact that the reactor is driven slightly below criticality makes this system intrinsically safe: as soon as the external neutron supply is stopped, fission chain reactions cease. ADSRs may also be useful for the transmutation of long-lived transuranic elements into shorter-lived or stable elements. They therefore have the potential to be used as energy amplifiers, neutron sources and transmutation systems.

First sector is closer to cool down…

Installation of the new helium pressure-release system for the LHC is progressing well. The first sector to be fully completed is 5-6, with all 168 individual pressure-release ports now in place. These ports will allow a greater rate of helium escape in the event of a sudden increase in temperature.

To install the pressure-release ports teams had to cut and open the “bellows” – the large accordion-shaped sleeves that cover the interconnections between two magnets. Once all of the ports were fitted, work on closing the bellows could begin. This marked the end of the consolidation work on this sector and the start of preparations to cool it down. By the end of March the first three vacuum subsectors had been sealed. Each subsector is a 200 m long section of the insulating vacuum chamber that surrounds the magnet cold mass. Once sealed, each subsector is tested for leaks before the air is pumped out.

Meanwhile, teams are working through the night and on weekends to install the replacement magnets in the damaged area of sector 3-4 at a rate of six to seven per week. At the same time, the pace of interconnection work has increased sharply over the past few weeks. For example, within a fortnight, the number of joints being soldered rose from two to eight a week. Elsewhere, a magnet in sector 1-2 that was found to have high internal resistance has now been replaced.

• For up-to-date news, see The Bulletin at http://cdsweb.cern.ch/journal/.

…while the injection chain sees beam again


On 18 March beam commissioning started in Linac 2, the first link in CERN’s accelerator complex. This marks the start of what will be the longest period of beam operations in the laboratory’s history, with the accelerators remaining operational throughout winter 2009/2010 to supply the LHC. This will limit the opportunities for maintenance, so teams are bringing forward the work they would normally do in the winter shutdown and completing as much of it as possible during the consolidation work on the LHC.

The injection chain for the LHC also contains more venerable accelerators, which have undergone considerable refurbishment in recent years. At 50 years old this year, the PS was starting to show its age back in 2003, when long-term radiation damage to electrical insulation caused a fault in two magnets and a busbar connection. Since then there has been a huge campaign to refurbish more than half of the PS magnets, with the 51st and final refurbished magnet installed in the tunnel on 3 February this year. In addition, the power supplies for the auxiliary magnets have been completely replaced and new cables have been laid this year.

The Magnet Group has also thermally tested almost every part of the machine – the first thorough survey of its kind in the history of the PS. After leaving the magnets to run for several hours the team used a thermal camera to check for poor connections, which would lead to slight heating.

The SPS has also undergone considerable refurbishment on top of the normal shutdown activities over the past few years. The final 90 dipole magnets have been repaired this year, ending the three-year project to refurbish the cooling pipes in 250 of the dipole magnets. Also, most of the cabling to the short straight sections has been replaced.

ALICE prepares for jet measurements

The ALICE experiment has reached another milestone with the successful installation of the first two supermodules of the electromagnetic calorimeter (EMCal).


ALICE is designed to study matter produced in high-energy nuclear collisions at the LHC, in particular using lead ions. The goal is to investigate thoroughly the characteristics of hot, dense matter as it is thought to have existed in the early universe. Experiments at RHIC at Brookhaven have shown that an important way to probe the matter formed in heavy-ion collisions is to study its effect on high-energy partons (quarks and gluons) produced early in the collision. As the partons propagate through the resulting “fireball” their energy loss depends on the density and interaction strength of the matter they encounter. The high-energy partons become observable as jets of hadrons when they hadronize and the energy loss becomes evident through the decreased energy of the products that emerge from the fragmentation process.

Although the ALICE experiment has excellent momentum-measurement and identification capabilities for charged hadrons, it previously lacked a large-acceptance electromagnetic calorimeter to measure the neutral energy component of jets. The EMCal, a lead-scintillator sampling calorimeter with “Shashlik”-style optical-fibre read-out, will provide ALICE with this capability. It consists of identical modules each comprising four independent read-out towers of 6 cm × 6 cm. Twelve modules attached to a back-plate form one strip-module, and 24 strip-modules inserted into a crate comprise one EMCal supermodule with a weight of about 8 t.
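A quick count from the figures quoted above (a simple multiplication, not a number given in the article) indicates the channel scale of each supermodule:

\[
4~\text{towers/module} \times 12~\text{modules/strip-module} \times 24~\text{strip-modules} = 1152~\text{towers per supermodule}.
\]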

The EMCal is a late addition to ALICE, arriving in effect as a first upgrade; indeed, full approval, with construction funds, did not come until early 2008. The calorimeter covers about one-third of the acceptance of the central part of ALICE, where it must fit within the existing detector – between the magnet coil and the layer of time-of-flight counters – by means of a novel independent support structure. Installation of the 8 t supermodules requires a system of rails with a sophisticated insertion device to bridge across to the support structure. The full EMCal will consist of 10 full supermodules and two partial supermodules.

PAMELA finds an anomalous cosmic positron abundance


The collaboration for the Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics (PAMELA) experiment has published evidence of an anomalous abundance of cosmic positrons in the 1.5–100 GeV range. This high-energy excess, which they identify with better statistics than any previous observation, could arise from nearby pulsars or from dark-matter annihilation.

PAMELA, which went into space on a Russian satellite launched from the Baikonur cosmodrome in June 2006, uses a spectrometer – based on a permanent magnet coupled to a calorimeter – to determine the energy spectra of cosmic electrons, positrons, antiprotons and light nuclei. The experiment is a collaboration between several Italian institutes with additional participation from Germany, Russia and Sweden.

Preliminary, unofficial results from the PAMELA mission appeared last autumn on preprint servers together with speculation that PAMELA had found the signature of dark-matter annihilation. The paper by Oscar Adriani from the University of Florence and collaborators now published in Nature is more cautious with the dark-matter interpretation of the positron excess, identifying pulsars as plausible alternatives. The data presented include more than a thousand million triggers collected between July 2006 and February 2008. Fine tuning of the particle identification allowed the team to reject 99.9% of the protons, while selecting more than 95% of the electrons and positrons. The resulting spectrum of the positron abundance relative to the sum of electrons and positrons represents the highest statistics to date.
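The quantity in question is the positron fraction, the ratio of the positron flux to the combined electron and positron flux:

\[
\frac{\phi(e^{+})}{\phi(e^{+}) + \phi(e^{-})}\,.
\]

It is the energy dependence of this ratio, rather than of the absolute fluxes, that carries the signature discussed below.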

Below 5 GeV, the measured spectrum is significantly lower than in previous measurements. This discrepancy is believed to arise from solar modulation of the cosmic rays – the influence of the solar wind, which changes periodically through the solar cycle. At higher energies the new data unambiguously confirm the rising trend of the positron fraction that previous measurements had suggested. This is highly incompatible with the usual scenario, in which positrons are produced by cosmic-ray nuclei interacting with atoms in the interstellar medium. The additional source of positrons that dominates at higher energies could be the signature of dark-matter decay or annihilation. In this case, PAMELA has already shown that the dark matter would have a preference for leptonic final states: Adriani and colleagues deduce this from the absence of a similar excess in the antiproton-to-proton ratio, a result that they published earlier this year. As the alternative origin of the high-energy positron excess, they suggest particle acceleration in the magnetospheres of nearby pulsars, which would produce electromagnetic cascades.

The authors state that the PAMELA results presented here are insufficient to distinguish between the two possibilities. They seem, however, confident that various positron-production scenarios will soon be testable. This will be possible once additional PAMELA results on electrons, protons and light nuclei are published in the near future, together with the extension of the positron spectrum up to 300 GeV thanks to on-going data acquisition. Complementary information will also come from the survey of the gamma-ray sky by the Fermi satellite.

Study group considers how to preserve data

High-energy-physics experiments collect data over long time periods, while the associated collaborations of experimentalists exploit these data to produce their physics publications. The scientific potential of an experiment is in principle defined and exhausted within the lifetime of such collaborations. However, the continuous improvement in areas of theory, experiment and simulation – as well as the advent of new ideas or unexpected discoveries – may reveal the need to re-analyse old data. Examples of such analyses already exist and they are likely to become more frequent in the future. As experimental complexity and the associated costs continue to increase, many present-day experiments, especially those based at colliders, will provide unique data sets that are unlikely to be improved upon in the short term. The close of the current decade will see the end of data-taking at several large experiments and scientists are now confronted with the question of how to preserve the scientific heritage of this valuable pool of acquired data.

To address this specific issue in a systematic way, the Study Group on Data Preservation and Long Term Analysis in High Energy Physics formed at the end of 2008. Its aim is to clarify the objectives and the means of preserving data in high-energy physics. The collider experiments BaBar, Belle, BES-III, CLEO, CDF, D0, H1 and ZEUS, as well as the associated computing centres at SLAC, KEK, the Institute of High Energy Physics in Beijing, Fermilab and DESY, are all represented, together with CERN, in the group’s steering committee.

Digital gold mine

The group’s inaugural workshop took place on 26–28 January at DESY, Hamburg. To form a quantitative view of the data landscape in high-energy physics, each of the participating experimental collaborations presented their computing models to the workshop, including the applicability and adaptability of the models to long-term analysis. Not surprisingly, the data models are similar – reflecting the nature of colliding-beam experiments.

The data are organized by events, with increasing levels of abstraction from raw detector-level quantities to N-tuple-like data for physics analysis, and are supported by large samples of simulated Monte Carlo events. The software is organized in a similar manner, with a more conservative part for reconstruction, reflecting the complexity of the hardware, and a more dynamic part closer to the analysis level. Data analysis is in most cases done in C++ using the ROOT analysis environment and is mainly performed on local computing farms. Monte Carlo simulation also uses a farm-based approach, but it is striking how popular the Grid is for the mass production of simulated events. The amount of data that should be preserved for analysis varies between 0.5 PB and 10 PB per experiment – modest by today’s standards but nonetheless substantial. The degree of preparation for long-term preservation varies between experiments, but it is clear that none of the programmes foresaw it at an early stage; any conservation initiatives will have to proceed in parallel with the end of the data analysis.
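To make the analysis style concrete, the following is a minimal sketch of a ROOT-based event loop in C++, of the kind run on the local farms mentioned above; the file, tree and branch names are hypothetical stand-ins for an experiment’s real event data.

```cpp
// Minimal ROOT event-loop sketch (illustrative only).
// "events.root", "EventTree" and "jet_energy" are hypothetical placeholders.
#include "TFile.h"
#include "TTree.h"
#include "TH1F.h"

void read_events()
{
    TFile *file = TFile::Open("events.root");        // open the (hypothetical) data file
    if (!file || file->IsZombie()) return;

    TTree *tree = nullptr;
    file->GetObject("EventTree", tree);               // fetch the event tree
    if (!tree) return;

    float jetEnergy = 0.f;
    tree->SetBranchAddress("jet_energy", &jetEnergy); // bind the analysis-level branch

    TH1F hist("h_jet_energy", "Jet energy;E [GeV];Events", 100, 0., 200.);

    const Long64_t nEntries = tree->GetEntries();
    for (Long64_t i = 0; i < nEntries; ++i) {         // loop over stored events
        tree->GetEntry(i);
        hist.Fill(jetEnergy);                         // accumulate the quantity of interest
    }

    hist.DrawCopy();                                  // display the result
    file->Close();
}
```

Long-term preservation then amounts to keeping such files, the libraries the macro links against and enough documentation to know what a branch such as “jet_energy” actually meant.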


From a long-term perspective, digital data are widely recognized as fragile objects. Speakers from several notable computing centres – including Fabio Hernandez of the Centre de Calcul de l’Institut National de Physique Nucléaire et de Physique des Particules, Stephen Wolbers of Fermilab, Martin Gasthuber of DESY and Erik Mattias Wadenstein of the Nordic DataGrid Facility – showed that storage technology should not pose problems for the amount of data under discussion. Instead, the main issue will be communication between the experimental collaborations and the computing centres after the final analyses, and/or within the collaborations themselves, where roles have not been clearly defined in the past. The current preservation model, in which the data are simply saved on tapes, runs the risk that the data will disappear into cupboards while the hardware needed to read them is lost, becomes impractical or becomes obsolete. It is important to define a clear protocol for data preservation, the items of which should be transparent enough to ensure that the digital content of an experiment (data and software) remains accessible.

On the software side, the most popular analysis framework is ROOT, the object-oriented software and library originally developed at CERN. It offers many possibilities for storing and documenting high-energy-physics data and has the advantage of a large existing user community and a long-term commitment to support, as CERN’s René Brun explained at the workshop. One example of software dependence is the use of legacy libraries (e.g. CERNLIB or GEANT3) and of commercial software and/or packages that are no longer officially maintained but remain crucial to most running experiments. Removing such vulnerabilities from the experiments’ software models would be an advantageous first step towards the long-term stability of any analysis framework. Modern techniques of software emulation, such as virtualization, may also offer promising features, as Yves Kemp of DESY explained. Exploring such solutions should be part of future investigations.

Examples of previous experience with data from old experiments show clearly that a complete re-analysis has been possible only when all of the ingredients could be accounted for. Siggi Bethke of the Max Planck Institute of Physics in Munich showed how a re-analysis of data from the JADE experiment (1979–1986), using refined theoretical input and better simulation, led to a significant improvement in the determination of the strong coupling constant as a function of energy. While the usual assumption is that higher-energy experiments supersede older, lower-energy ones, this example shows that measurements at lower energies can play a unique role in the global physics picture.

The experience at the Large Electron–Positron (LEP) collider, which Peter Igo-Kemenes, André Holzner and Matthias Schroeder of CERN described, suggested once more that the definition of the preserved data should include all of the tools necessary to retrieve and understand the information, so that it can be used for future analyses. The general status of the LEP data is of concern, and recovery of the information – to cross-check a signal of new physics, for example – may become impossible within a few years if no effort is made to define consistent and clear stewardship of the data. This demonstrates that both early preparation and sufficient resources are vital to maintaining the capability to reinvestigate older data samples.


The modus operandi in high-energy physics can also profit from the rich experience accumulated in other fields. Fabio Pasian of Trieste told the workshop how the European Virtual Observatory project has developed a framework for common data storage of astrophysical measurements. More general initiatives to investigate the persistency of digital data also exist and provide useful hints as to the critical points in the organization of such projects.

There is also increasing awareness among funding agencies of the need to preserve scientific data, as David Corney of the UK’s Science and Technology Facilities Council, Salvatore Mele of CERN and Amber Boehnlein of the US Department of Energy described. In particular, the Alliance for Permanent Access and the EU-funded Framework Programme 7 project on Permanent Access to the Records of Science in Europe recently conducted a survey of the high-energy-physics community, which found that the majority of scientists strongly support the preservation of high-energy-physics data. The survey responses were also positive on the question of open access to the data, alongside the organizational and technical matters – an issue that deserves careful consideration. The next-generation publications database, INSPIRE, offers extended data-storage capabilities that could be used immediately to enhance public or private information related to scientific articles, including tables, macros, explanatory notes and potentially even analysis software and data, as Travis Brooks of SLAC explained.

While this first workshop compiled a great deal of information, the work to synthesize it remains to be completed and further input in many areas is still needed. In addition, the raison d’être for data preservation should be clearly and convincingly formulated, together with a viable economic model. All high-energy-physics experiments have the capability of taking some concrete action now to propose models for data preservation. A survey of technology is also important, because one of the crucial factors may indeed be the evolution of hardware. Moreover, the whole process must be supervised by well defined structures and steered by clear specifications that are endorsed by the major laboratories and computing centres. A second workshop is planned to take place at SLAC in summer 2009 with the aim of producing a preliminary report for further reference, so that the “future of the past” will become clearer in high-energy physics.

The age of citizen cyberscience

I first met Rytis Slatkevicius in 2006, when he was 18. At the time, he had assembled the world’s largest database of prime numbers. He had done this by harnessing the spare processing power of computers belonging to thousands of prime-number enthusiasts, using the internet.


Today, Rytis is a mild-mannered MBA student by day and an avid prime-number sleuth by night. His project, called PrimeGrid, is tackling a host of numerical challenges, such as finding the longest arithmetic progression of prime numbers (the current record is 25). Professional mathematicians now eagerly collaborate with Rytis, to analyse the gems that his volunteers dig up. Yet he funds his project by selling PrimeGrid mugs and t-shirts. In short, Rytis and his online volunteers are a web-enabled version of a venerable tradition: they are citizen scientists.

There are nearly 100 science projects using such volunteer computing. Like PrimeGrid, most are based on an open-source software platform called BOINC. Many address topical themes, such as modelling climate change (ClimatePrediction.net), developing drugs for AIDS (FightAids@home), or simulating the spread of malaria (MalariaControl.net).

Fundamental science projects are also well represented. Einstein@Home analyses data from gravitational wave detectors, MilkyWay@Home simulates galactic evolution, and LHC@home studies accelerator beam dynamics. Each of these projects has easily attracted tens of thousands of volunteers.

Just what motivates people to participate in projects like these? One reason is community. BOINC provides enthusiastic volunteers with message boards to chat with each other, and share information about the science behind the project. This is strikingly similar to the sort of social networking that happens on websites such as Facebook, but with a scientific twist.

Another incentive is BOINC’s credit system, which measures how much processing each volunteer has done – turning the project into an online game where they can compete as individuals or in teams. Again, there are obvious analogies with popular online games such as Second Life.

Brains vs processors

A new wave of online science projects, which can be described as volunteer thinking, takes the idea of participative science to a higher level. A popular example is the project GalaxyZoo, where volunteers classify images of galaxies from the Sloan Digital Sky Survey as either elliptical or spiral via a simple web interface. In a matter of months, some 100,000 volunteers classified more than 1 million galaxies. People do this sort of pattern recognition more accurately than any computer algorithm, and by asking many volunteers to classify the same image, the statistical average of their answers proves more accurate than even a professional astronomer’s.
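The aggregation behind that statistical gain is conceptually simple. The toy sketch below (illustrative only, not GalaxyZoo’s actual code) reduces many independent volunteer votes on one image to a consensus label, which is right far more often than any single vote when the volunteers err independently.

```cpp
// Toy consensus vote for a single galaxy image (illustrative only).
// Each volunteer submits 'S' (spiral) or 'E' (elliptical).
#include <iostream>
#include <vector>

const char *consensus(const std::vector<char> &votes)
{
    int spiral = 0;
    for (char v : votes)
        if (v == 'S') ++spiral;
    // Majority vote: the class chosen by more than half of the volunteers wins.
    return 2 * spiral > static_cast<int>(votes.size()) ? "spiral" : "elliptical";
}

int main()
{
    std::vector<char> votes{'S', 'S', 'E', 'S', 'E', 'S', 'S'};  // seven hypothetical volunteers
    std::cout << "Consensus: " << consensus(votes) << '\n';      // prints "spiral"
    return 0;
}
```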

When I mentioned this project to a seasoned high-energy physicist, he remarked wistfully, “Ah, yes, reminds me of the scanning girls”. High-energy physics data analysis used to involve teams of young women manually analysing particle tracks. But these were salaried workers who required office space. Volunteer thinking expands this kind of assistance to millions of enthusiasts on the web at no cost.

Going one step further in interactivity, the project Foldit is an online game that scores a player’s ability to fold a protein molecule into a minimal-energy structure. Through a nifty web interface, players can shake, wiggle and stretch different parts of the molecule. Again, people are often much faster at this task than computers because of their aptitude for reasoning in three dimensions. And the best protein folders are usually teenage gaming enthusiasts rather than trained biochemists.

Who can benefit from this web-based boom in citizen science? In my view, scientists in the developing world stand to gain most by effectively plugging in to philanthropic resources: the computers and brains of supportive citizens, primarily those in industrialized countries with the necessary equipment and leisure time. A project called Africa@home, which I’ve been involved in, has trained dozens of African scientists to use BOINC. Some are already developing new volunteer-thinking projects, and a first African BOINC server is running at the University of Cape Town.

A new initiative called Asia@home was launched last month with a workshop at Academia Sinica in Taipei and a seminar at the Institute of High Energy Physics in Beijing, to drum up interest in the region. Asia represents enormous potential, in terms of both the number of people with internet access (more Chinese are now online than Americans) and the high levels of education and interest in science.

To encourage such initiatives further, CERN, the United Nations Institute for Training and Research and the University of Geneva are planning to establish a Citizen Cyberscience Centre. This will help disseminate volunteer computing in the developing world and encourage new technical approaches. For example, as mobile phones become more powerful they, too, can surely be harnessed. There are about one billion internet connections on the planet and three billion mobile phones. That represents a huge opportunity for citizen science.

MAGIC becomes twice as good


Since the Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope began operation in 2003 it has made a host of discoveries about sources of high-energy cosmic gamma rays. This is in large part thanks to having the largest reflector in the world, a tessellated mirror with an area of 240 m2. Now the MAGIC-I telescope is being joined by a sibling, MAGIC-II, which has similar characteristics together with several improvements.

Cosmic gamma rays are important because they serve as direct messengers of distant events, unaffected by magnetic fields. Studies usually rely on satellite experiments, while ground-based Cherenkov telescopes are mainly used for the highest energies. The latter make use of the fact that charged secondary particles, generated in electromagnetic showers in the atmosphere, may emit Cherenkov radiation – photons with energies that are in the visible and UV ranges. Such photons pass through the atmosphere and are observable on the surface of the Earth using sufficiently sensitive instruments.
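For reference (a standard relation, not quoted in the article): a charged particle moving with velocity βc emits Cherenkov light in a medium of refractive index n only if β > 1/n, and the light is emitted at the characteristic angle

\[
\cos\theta_c = \frac{1}{n\beta}\,,
\]

which in air is only about a degree, so the light from a shower arrives in a thin, forward-directed cone that a sensitive ground-based reflector can image.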

The MAGIC telescopes on the Canary Island of La Palma are specifically designed to detect Cherenkov radiation resulting from electromagnetic air showers generated by cosmic gamma rays. MAGIC-I was built with emphasis on the best light collection, making gamma rays accessible down to an energy threshold of 25 GeV, which is lower than for any other existing ground-based gamma-ray detector. It now provides an ideal overlap with the Large Area Telescope on the recently launched Fermi Gamma-Ray Space Telescope, which has an energy range from 20 MeV up to 300 GeV.

The new MAGIC-II telescope has the same reflector size as MAGIC-I, although it is made with larger mirrors. Its camera has been modified to increase spatial resolution and sensitivity by hosting a larger number of photomultipliers of higher efficiency, and it also incorporates a new read-out system. The gain in sensitivity will be between a factor of two and a factor of three compared with MAGIC-I, depending on energy. A lower energy threshold means higher sensitivity to particular phenomena – and a deeper view into the universe. It was characteristics such as these that in 2008 enabled MAGIC-I to discover the most distant very-high-energy gamma-ray sources, including the quasar 3C279 at a record distance of 6000 million light-years – an observation that challenges theories on the transparency of the universe to gamma rays (MAGIC Collaboration 2008a). In addition, MAGIC-I revealed pulsed gamma rays above 25 GeV emanating from the Crab pulsar; this is the highest-energy pulsed emission detected so far (MAGIC Collaboration 2008b).

The new MAGIC-II telescope will couple a low threshold with higher sensitivity and resolution, as well as improve further the view of the highest energy phenomena in the universe. A “first-light” ceremony is to take place on 24–25 April at the site on La Palma.
