The Quantum Frontier: The Large Hadron Collider

by Don Lincoln, foreword by Leon Lederman, Johns Hopkins University Press. Hardback ISBN 9780801891441, $25.

As I write this review, in less than one week’s time I will be starting the second year of my physics-undergraduate degree at McGill University, Montreal. During the past summer I was granted the chance to spend time at CERN, an aspiration for every young physicist. Working as a student journalist for the CERN Bulletin, I was able to get away with asking enough questions to drive everyone mad, and I learnt a great deal about the various experiments, past and present, conducted at CERN, in particular at the LHC. However, even after two months of constant probing, the LHC still held many more secrets and fantastic intricacies that I sought to understand.

It was only during my final weeks, reading The Quantum Frontier, that I found the answers to these questions. Don Lincoln’s playful, energetic style took me from the fundamentals of contemporary physics through to the extremely complex and sophisticated guts of the LHC experiments, touching on everything from the Earth’s “inevitable” destruction by black holes to speculative future physics experiments in a post-LHC era.

Cracking it open for the first time, I worried that a book taking under 200 pages to cover such an ambitious topic would be riddled with sterile facts listed one after the other. I found quite the contrary. Lincoln starts by addressing the misconception uppermost in the watching world’s mind: will the LHC destroy the planet and all of us with it? Tackling this issue first with an overview of basic material often covered in high-school science classes (the components of the atom, and so on), Lincoln goes on to peer deeper and deeper into the world of particle physics, laying out the basic building blocks of matter and what the LHC hopes to discover.

As a student of the subject, I found some of the material familiar, while a great deal of the new ideas and theories were elegantly explained. Lincoln kept me happily engaged with apt and often funny analogies that eased the explanations and made for a concise understanding. Like any serious science book, it uses diagrams and graphs to elaborate ideas, but their inclusion is never daunting.

A particle physicist himself, Lincoln gives us the chance to see the world from that perspective and conveys the excitement and awe of working in this field.

VECC’s superconducting success

A beam of Ne3+ ions has been accelerated by the K-500 superconducting cyclotron at the Variable Energy Cyclotron Centre (VECC), Kolkata, out to the extraction radius of 650 mm. At 3.00 a.m. on 25 August, the beam probe monitored a beam current of around 40 nA. This measurement was confirmed by a beam viewer that uses a borescope, and through the observation of neutron and gamma radiation by the radiation monitor located outside the superconducting magnet. The energy of the Ne3+ beam at a radius of 650 mm is calculated to be 88 MeV. The presence of the beam was further confirmed by activation analysis of an aluminium target probe in tests at a radiochemistry laboratory.
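
The quoted energy follows from standard circular-orbit kinematics. Below is a minimal cross-check sketch; the ~3.1 T average bending field is an assumption inferred to reproduce the quoted figure, not a published VECC parameter.

```python
# Minimal cross-check of the quoted beam energy (illustrative only).
# For an ion of charge state q on an orbit of radius r in an average
# bending field B, the momentum is p = q*e*B*r, and relativistically
# T = sqrt((pc)^2 + (mc^2)^2) - mc^2.
import math

C = 299_792_458.0            # speed of light, m/s
E_CHARGE = 1.602176634e-19   # elementary charge, C

def orbit_kinetic_energy_mev(q, b_tesla, r_m, mass_mev):
    """Kinetic energy (MeV) of an ion of charge state q on a circular orbit."""
    p_si = q * E_CHARGE * b_tesla * r_m   # momentum, kg m/s
    pc_mev = p_si * C / E_CHARGE / 1e6    # pc converted to MeV
    return math.hypot(pc_mev, mass_mev) - mass_mev

# 20Ne(3+): mass ~ 20 x 931.5 MeV. An assumed average field of ~3.1 T
# at r = 0.65 m reproduces the quoted ~88 MeV.
print(orbit_kinetic_energy_mev(q=3, b_tesla=3.1, r_m=0.65, mass_mev=20 * 931.5))
```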

VECC’s superconducting cyclotron is the most advanced, hi-tech accelerator ever constructed in India. The cyclotron’s main structure, a 100-tonne iron-core superconducting magnet, is the largest in the country and has been operating virtually non-stop for more than three years. It produces a magnetic field of around 5 T over an area of about 1.3 m². Its cold mass of about 8 t, consisting of the niobium–titanium superconducting coil and stainless-steel bobbin, has been kept continuously cooled at –269 °C inside a sophisticated cryostat. More than 35 km of superconducting wire was used to construct the coil at VECC. About 300 litres of liquid helium, together with hundreds of litres of liquid nitrogen at –195 °C, are required every day to keep the coil at its operating temperature. It is the first large-scale iron-core superconducting system in India.

The “3-Dee” RF system, which provides the acceleration kicks to the beam, has also functioned very satisfactorily. This system delivers more than 100 kW of radiofrequency power per cavity.

The main control room of the accelerator has been a hive of activity since 11 May when the first beam of low-energy charged particles was injected for acceleration. Since then all of the cyclotron systems have undergone continuous endurance tests. At the same time, the cyclotron team has also carried out all possible critical tests to ensure that the cyclotron is functioning with a circulating internal beam.

With very few such superconducting cyclotrons in the world, VECC has joined an exclusive club. The accelerator’s high-energy beams will be used for frontline basic and applied research in nuclear sciences. The facility will soon be dedicated to the nation and will open for research to the international community.

Final sector starts cool down

On 2 September the cool down of LHC sector 6-7 got underway, following some minor repair work. Because 6-7 is the last sector to be cooled down, this marked a major milestone towards the restart of the collider later this year.

The cool down of sector 6-7 had begun two weeks earlier but was interrupted by the detection of a short-circuit on the main dipole circuit. The cause of the short-circuit was later tracked down to poor insulation in a magnet busbar, which had been degraded by friction against a screw as the structure contracted during the cool down. After repair work followed by electrical and vacuum validation, the sector was once again ready to cool down.

In sector 8-1, the flexible hose that caused the helium leak into the insulation vacuum has been replaced and the sector is now being cooled down again. The cool down of sector 3-4, the one affected by the incident on 19 September 2008, also began at the end of August, thus marking the end of a complex phase of repair work.

Meanwhile consolidation work on the LHC has continued. On 26 August, the first two fully tested crates for the new quench-protection system (QPS) were installed in sector 1-2. These are the first of a total of 436 crates to be installed around the ring. The two crates include detectors for both the enhanced busbar protection and the symmetric quench protection.

Training quenches on magnets in sector 5-6 in June 2008 revealed that heat transfer to a neighbouring magnet can cause a quench that develops identically in two magnet coils. The original detection system compared voltage signals from two coils to detect a resistive build-up in either one, but if the signals develop in the same way, the quench would go unnoticed. The new protection system monitors the voltage across four adjacent dipoles (or two adjacent quadrupoles), allowing a symmetric quench to be detected, as well as providing a back-up detection method for normal, asymmetric quenches.
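
A schematic toy model makes the blind spot clear (the logic, thresholds and voltages below are made up for illustration; this is not the real QPS implementation):

```python
# Toy illustration of why a symmetric quench evades the original
# two-coil comparison (schematic only, not the real LHC QPS logic).

THRESHOLD_V = 0.1  # arbitrary toy threshold, volts

def old_detector(v_coil_a: float, v_coil_b: float) -> bool:
    """Fires on a resistive imbalance between two coils of one magnet."""
    return abs(v_coil_a - v_coil_b) > THRESHOLD_V

def new_detector(v_magnet: float, v_neighbour_avg: float) -> bool:
    """Fires when a magnet's voltage departs from its neighbours',
    mimicking the comparison across four adjacent dipoles."""
    return abs(v_magnet - v_neighbour_avg) > THRESHOLD_V

# Symmetric quench: both coils develop the same resistive voltage.
v_a = v_b = 0.5                 # coils of the quenching dipole
v_quenching_magnet = v_a + v_b  # total voltage across that dipole
v_healthy_neighbours = 0.0      # neighbours show no resistive voltage

print(old_detector(v_a, v_b))                                  # False: missed
print(new_detector(v_quenching_magnet, v_healthy_neighbours))  # True: caught
```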

To test the crates before installation, a dedicated test bed has been created, capable of simulating all of the conditions in the LHC, from a symmetric quench to an increase in busbar resistance. The teams are working two shifts a day, including weekends, to test the new crates. Two more test benches are also being built to increase the production rate. The whole task is on target for completion in mid-October.

Another important new task for the QPS team is to speed up the energy extraction from the magnets. The quicker the energy can be extracted, the lower the risk of dangerously high temperatures in the event of a quench. The time constant for the dipoles will be halved to about 50 s. The decision to run at 3.5 TeV, and therefore with lower current in the magnets, has made this task relatively straightforward. Switching two of the three “dump” resistors into a series circuit, instead of having all three resistors in parallel, allows the energy to be converted into heat much faster. In the quadrupole circuits, the task is more complex. Reducing the time constant to the desired 10 s, from a previous 35 s, requires adding extra, newly designed resistors.
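
One consistent reading of the dipole numbers (a back-of-the-envelope sketch, assuming three identical dump resistors of value R and that the third resistor stays in parallel with the new series pair) is that the extraction time constant τ = L/R_dump halves because the effective resistance doubles:

```latex
R_\mathrm{old} = \frac{R}{3} \quad (\text{three in parallel}), \qquad
R_\mathrm{new} = (2R) \parallel R = \frac{2R}{3} = 2R_\mathrm{old}
\;\;\Rightarrow\;\;
\tau_\mathrm{new} = \frac{L}{R_\mathrm{new}} = \frac{\tau_\mathrm{old}}{2}.
```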

The new QPS system will also allow accurate resistance measurements to be taken remotely. This will save a huge amount of time and effort for the next rounds of interventions – for example when the energy of the LHC is increased.

• CERN is publishing regular updates on the LHC in its internal Bulletin, available at www.cern.ch/bulletin, as well as via Twitter and YouTube at www.twitter.com/cern and www.youtube.com/cern.

DOE allocates Fermilab an additional $60.2 million

In the latest instalment of funding from the US Department of Energy’s (DOE) Office of Science under the American Recovery and Reinvestment Act, Fermilab is to receive an additional $60.2 million to support research towards next-generation particle accelerators and preliminary designs for a future neutrino experiment.

The new funds are part of more than $327 million announced by Energy Secretary Steven Chu on 4 August from funding allocated under the Recovery Act to DOE’s Office of Science. Of these funds, $220 million will go towards projects at DOE national laboratories. While many of the physics-related projects are associated with fusion research or light sources, Fermilab and the Brookhaven National Laboratory have both received support for activities in high-energy physics.

Taking the stimulus funds announced earlier this year into account, the Recovery Act is allocating more than $100 million to Fermilab. Out of the additional $60.2 million announced in August, the laboratory will devote $52.7 million to research on next-generation accelerators using superconducting RF technology. The remaining $7.5 million will go to fund a preliminary design for a future neutrino experiment, in collaboration with Brookhaven, which has received $6.5 million for neutrino research in addition to $3 million for improvements to its light source.

AMS gets its slot on a space shuttle in 2010

AMS-02, the experiment that will seek dark matter, missing matter and antimatter in space aboard the International Space Station (ISS), has recently received the green light to be part of the STS-134 NASA mission in 2010.

NASA has announced that the last, or last-but-one, mission of the space-shuttle programme will be the one to deliver the Alpha Magnetic Spectrometer (AMS) to the ISS. The space shuttle Discovery is due to lift off in July 2010 and its mission will include the installation of AMS on the exterior of the space station, using robotic arms on both the shuttle and the station. Last year both the US House of Representatives and the Senate unanimously approved a bill requesting NASA to install AMS on the ISS, which was signed by President George W Bush a month later.

AMS is a cosmic-ray detector built on technologies developed at CERN, where it is currently housed. The installation of the detector on the right side of the space station’s truss will be a delicate operation: AMS will be lifted out by the shuttle’s robotic arm and handed over to the station’s robotic arm, which will then install it in its final location.

The astronauts selected for this flight include the European astronaut Roberto Vittori, a colonel in the Italian air force with a degree in physics. He will come to CERN in October with the rest of the crew to learn more about the experiment. The data collected by AMS will be transmitted instantly from the ISS to the Marshall Space Flight Center in Huntsville, Alabama, and finally to CERN, where all of the detector controls and physics analyses will be performed.

Belle finds a hint of new physics in extremely rare B decays

The Belle collaboration at KEK has recently analysed the angular distribution of leptons in the decays of B mesons into a K* meson and a lepton–antilepton pair, where the lepton is an electron or a muon. The team finds that the measured asymmetry, presented in August at the Lepton–Photon International Symposium in Hamburg, is larger than expected from the Standard Model.

The figure shows the forward–backward asymmetry of the positively charged lepton with respect to the direction of the K* in B → K*ℓ+ℓ–, based on the analysis of 660 million pairs of B and anti-B mesons. The measured data points lie above the Standard Model expectation (solid blue curve). In the Standard Model this decay mode proceeds via a “penguin diagram” involving intermediate virtual particles, such as a Z boson or a W boson, which are much heavier than the B meson. New heavy particles beyond those already known in the Standard Model should also participate in a similar way. The difference between the measurements and the Standard Model expectation might indicate that such new particles are indeed produced in addition to Z and W bosons. Indeed, the data points are closer to a prediction that includes supersymmetric particles (green curve).
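
For reference, the quantity plotted is the standard forward–backward asymmetry (a textbook definition, not spelled out in the original article), computed in bins of dilepton invariant mass:

```latex
A_{FB} = \frac{N(\cos\theta > 0) - N(\cos\theta < 0)}
              {N(\cos\theta > 0) + N(\cos\theta < 0)},
```

where θ is the angle of the positive lepton with respect to the K* direction and the counts N are signal events in each bin.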

This rare decay process was discovered by Belle in 2002. However, the measurement of the lepton forward–backward asymmetry has been quite difficult owing to the small decay rates. It has become possible only with the increased data samples that the experiment has gathered, thanks to the improved performance of the KEKB accelerator. The analysis so far has yielded about 250 signal events. To clarify whether the results are indeed hinting at new physics, the Belle collaboration is continuing its measurements with a larger sample of accumulated data.

XMM-Newton sees fast-spinning white dwarf

Using observations by ESA’s X-ray Multi-Mirror satellite, XMM-Newton, a group of Italian astronomers has derived the characteristics of a peculiar binary system consisting of a white dwarf and an evolved companion star. The mass of the compact object is found to be close to the maximum mass for a white dwarf. Continued accretion over the next million years could make it explode as a Type Ia supernova.

White dwarfs are the cores that remain of Sun-like stars after they have ejected their gas envelopes, creating beautiful planetary nebulae (CERN Courier July/August 2003 p13, April 2007 p10 and September 2009 p11). They are analogous to neutron stars, which are produced in the core-collapse (Type II) supernova explosions of stars more than eight times as massive as the Sun. While neutron stars are only about 20 km across, white dwarfs are typically the size of the Earth and are sustained by the quantum-degeneracy pressure of their electrons, rather than of neutrons.

The shallower gravitational potential well of a white dwarf compared with a neutron star cannot heat the gas accreted from a companion star to very high temperatures. X-ray binaries in which the compact object is a white dwarf are therefore much less luminous and only detectable in the soft X-ray waveband. The feeble X-ray emission from a binary system first detected by the ROSAT satellite in the 1990s was re-observed in 2008 by XMM-Newton. The observation of RX J0648.0-4418, proposed by a group of Italian astronomers led by Sandro Mereghetti from the INAF-IASF institute in Milan, was scheduled to catch a possible X-ray eclipse of the white dwarf as it passed behind the companion star. The aim was also to analyse the X-ray pulsation induced by the spin of the white dwarf, which has a period of 13.2 s.

The 12-hour observation by XMM-Newton was successful, providing a clear detection of the eclipse as well as allowing relative time lags in the pulses to be measured from one hour to the next. The researchers used these delays to gauge the size of the orbit of the white dwarf around the companion star. With the corresponding information already available for the optical star through spectroscopic analysis, the only remaining unknown relating to the masses of the two objects is the inclination of the orbital plane. However, this is well constrained by the observed eclipse of the X-ray source to be close to edge-on, and the study yields masses, in units of solar masses, of 1.28 ± 0.05 for the white dwarf and 1.50 ± 0.05 for the companion star.
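
The geometry behind the time lags is the classic light-travel-time (Rømer) delay, a textbook relation rather than anything specific to this study: pulses arrive early or late by up to the light-crossing time of the projected orbit,

```latex
\Delta t(\phi) \simeq \frac{a_{\mathrm{x}} \sin i}{c}\,\sin\phi ,
```

where a_x is the semi-major axis of the white dwarf’s orbit, i the orbital inclination and φ the orbital phase. Fitting Δt over the 12-hour observation yields a_x sin i, while the eclipse pins i close to 90°, breaking the degeneracy in the mass determination.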

For the compact object, this is about twice the usual mass of a white dwarf and close to the limit of 1.4 solar masses that Subrahmanyan Chandrasekhar derived in the early 1930s. It means that the accretion expected over the next million years could push the white dwarf to this limit, with two possible consequences: either the dwarf star explodes as a supernova of Type Ia or it collapses into a neutron star. Type Ia supernovae are bright events that are used as standard beacons to measure the expansion rate of the universe (CERN Courier September 2003 p23, October 2007 p13). If this were to happen at the derived distance of only about 2800 light-years from Earth, the supernova would appear as bright as the full Moon for several days. However, the fast spin of the dwarf star is likely to increase its stability and should thus delay this celestial show, or even prevent it from happening.

Krakow welcomes 2009 EPS-HEP conference

Krakow, a 13th-century merchants’ town and former capital, is now one of the largest and oldest cities in Poland. The scenic city centre – a UNESCO World Heritage Site – with its fascinating history and pleasant climate, provided the perfect setting for discussing new results and future developments at the biennial European Physical Society (EPS) conference on High Energy Physics (HEP). The event was held on 16–22 July at the new conference centre of the Jagiellonian University – the Auditorium Maximum.

The conference began with 35 parallel sessions and more than 350 contributions over two and a half days. Then, as is tradition, the EPS and the European Committee for Future Accelerators scheduled a joint plenary meeting for Saturday afternoon. This focused on a number of talks concerning the future of the field: Christian Spiering of DESY on “Astroparticle physics and relations with the LHC”; CERN’s director-general, Rolf Heuer, on “The high-energy frontier”; Alain Blondel of Geneva University on “The future of accelerator-based neutrino physics”; and Tatsuya Nakada of the École Polytechnique Fédérale de Lausanne on “Super-B factories”. Each presentation was followed by a lively discussion.

Sunday provided the opportunity for several sightseeing trips in and around Krakow. Monday saw a fresh start to the week and time for another tradition: the presentation of the EPS awards. For the first time, the EPS High Energy and Particle Physics (HEPP) prize was awarded to an experimental collaboration, Gargamelle, for the observation of the weak neutral current. After the awards ceremony, Frank Wilczek, the 2004 Nobel laureate, gave a special talk on “some ideas and hopes for fundamental physics”. This provided an excellent start to three days of plenary sessions, with around 35 presentations.

The Standard Model still reigns

The Tevatron proton–antiproton collider continues its smooth operation. With more than 6 fb⁻¹ of integrated luminosity delivered and peak luminosities exceeding 3.5 × 10³² cm⁻²s⁻¹, the CDF and DØ experiments are steadily increasing their statistics. Both collaborations are pushing forward on the analysis of their latest data in a joint effort to confirm and enlarge the previously reported exclusion region for the Higgs mass of around 160–170 GeV. At the same time, several new ideas are emerging on how to improve the sensitivity of these experiments to more challenging Higgs decay channels. In addition to the direct search for the Higgs boson, both collaborations reported on new mass measurements of the W boson (M_W = 80.399 ± 0.023 GeV) and confirmed the combined experimental result for the top-quark mass (m_t = 173.1 ± 1.3 GeV), pushing the error below the 1% level. These values lead to a further reduction of the preferred mass region for the Standard Model Higgs, as John Conway of the University of California Davis pointed out in his plenary presentation. Moreover, these and other precision measurements of the weak parameters (sin²θ_W = 0.2326 ± 0.0018 (stat.) ± 0.0006 (sys.), compared with the theoretical prediction of sin²θ_W = 0.23149 ± 0.00013) show growing evidence that the Standard Model prefers a light Higgs, which, as Conway concluded, will make life difficult. Even for the large LHC experiments, ATLAS and CMS, this region of the Higgs-mass window will require high statistics, the combination of different decay modes and sophisticated analyses.

A number of sophisticated statistical procedures are being developed and becoming available as complete software packages – for example, GFITTER – to simplify or fine-tune multidimensional analyses of experimental data. At the same time, there is impressive progress in calculating amplitudes for multileg processes and loops. A rather complete set of automatically derived “2 → 4 particle” cross-sections (the “Les Houches 2007 wish list”) demonstrates that higher-order corrections to important physics processes at the LHC cannot be ignored.

Increasing statistics at the Tevatron are also consolidating the observation of single top production, but at the same time the parameter space for new physics at or below the 1 TeV scale is becoming smaller, as Volker Büscher of Mainz explained. CDF and DØ have conducted studies that probe mass values for the charginos of supersymmetry up to 176 GeV; they find no evidence for neutralino production in their current data sample. In addition, the studies shift the possibility of quark compositeness or large extra dimensions further towards a higher energy scale.

While the latest updates on analyses of data from RHIC and the SPS were presented in the parallel sessions, Urs Wiedemann of CERN covered theoretical aspects of collective phenomena in his plenary talk. He summarized the motivation for experiments at RHIC (√s_NN = 200 GeV) and the LHC (√s_NN = 5500 GeV) to study the QCD properties of dense matter at the 150 MeV scale, which will be accessible at these high collision energies.

A wealth of new data is also emerging from the experimental analysis of B-physics – from both hadron colliders and e+e– machines – ranging from analyses of rare exclusive decay modes to spectroscopy and physics related to the Cabibbo–Kobayashi–Maskawa (CKM) matrix. The results further confirm oscillations in the neutral D and B_s sectors. This is another area where the Standard Model seems not to be seriously challenged: the CKM triangle appears to remain “closed” (within experimental errors). Nevertheless, as Andrzej Buras of TU Munich pointed out in his talk on “20 goals in flavour physics for the next decade”, there are still many challenges ahead. A breakthrough could come with firm experimental evidence for flavour-changing neutral currents in excess of Standard Model predictions. Buras’s message is clear: stay focused on the many observables that are not yet well measured and the decay modes that remain unstudied or poorly studied; spectacular deviations from the Standard Model remain possible.

With a new series of experiments under construction and several experiments producing new data, neutrino physics remains an experimentally driven enterprise. The neutrino sessions were – not surprisingly – very well attended. Better mass measurements are coming within reach, be it upper limits from measuring time shifts in neutrinos from supernovae (m_ν < 30 eV), from measuring the tritium β-decay spectrum (m_ν < 2 eV), or mass differences from oscillations (all Δm² < 1 eV²). Because neutrinos are abundant in the universe, even a small neutrino mass will have implications in astrophysics. Dave Wark of Imperial College summarized the broad spectrum of neutrino-physics experiments and their discovery potential. Surveying the various experimental approaches and their progress, he explained under what conditions the Majorana phases, for example, could be determined.

New frontiers – on Earth and in space

While the large LHC experiments are commissioning their triggers, new ideas on the future of the LHC machine are being explored. These include high-luminosity schemes and higher beam energies, which will have different implications for future upgrades of both machine and experiments. R&D on accelerators is focusing not only on higher-energy frontiers and currents, but also on more efficient beam-crossing (“crab”) scenarios.

In a worldwide effort, the International Linear Collider collaboration aims to present a Technical Design Report in 2012 for a high-energy e+e– machine. The Compact Linear Collider Study (CLIC) based at CERN, which aims for a Conceptual Design Report at the end of 2010, investigates a different approach and may reach a higher beam energy (3 TeV vs 1 TeV). However, the physics simulations and detector designs for the two schemes face equal challenges.

The development of “super factories” is an ongoing effort that is complementary to the high-energy machines. These facilities should provide high-statistics experiments in, for example, the neutrino, charm and bottom sectors, with the necessary infrastructure for high-precision measurements. Caterina Biscari of Frascati presented a comprehensive overview of existing machines and (possible) future accelerators, in which she compared their main parameters.

The conference saw substantial contributions from astroparticle physics. The Auger experiment, probing the highest-energy cosmic rays (10²⁰ eV), shows growing evidence for the Greisen–Zatsepin–Kuzmin cut-off. The energy spectrum agrees well (within the 25% calibration uncertainty on the energy scale) with results from the HiRes collaboration. Active galactic nuclei are now also observed by the High Energy Stereoscopic System (HESS) and the Large Area Telescope on the Fermi Gamma-ray Space Telescope. In particular, the core of Centaurus A appears extremely interesting owing to the bright radio source at its centre. High-energy cosmic rays are predominantly produced by “nearby” (< 100 Mpc) sources, and there is a slight indication that the composition changes with increasing energy, towards heavier nuclei.

PAMELA (launched in 2006), the Advanced Thin Ionization Calorimeter balloon experiment (2008), Fermi (launched in 2008) and HESS all show some excess in the e± spectrum. The interpretation of these signals remains uncertain. Are they related to non-baryonic dark matter, or can the spectra be explained by astrophysical phenomena such as pulsars or supernova remnants? The PAMELA data have generated huge theoretical interest, resulting in a multitude of dark-matter models. However, much more data are needed, from both space-based experiments and ground-based searches for decaying weakly interacting massive particles. The Alpha Magnetic Spectrometer, finally scheduled for launch in 2010, should at least provide much-improved limits on the antiproton flux.

The next international Europhysics conference on high-energy physics will take place in Grenoble on 21–27 July 2011. After last year’s successful injection of proton beams into the LHC, followed by the unfortunate incident and subsequent repairs and consolidation, the starting date for high-energy collisions at the LHC is now rapidly approaching. At the meeting in Grenoble there will be lively discussions of Tevatron data – perhaps with surprises – and extensive reports on, among other topics, dark-matter searches. Of course, we all look forward to reports on the first data analyses by the LHC experiments.

• The local organization of EPS-HEP 2009 by the Institute of Nuclear Physics PAN, Jagiellonian University, the AGH University of Science and Technology and the Polish Physical Society is acknowledged.

Working for the world: UNOSAT and CERN

Much of the interesting work that happens at CERN is underground – but not all. Since 2002, the team that runs UNOSAT, the Operational Satellite Applications Programme of the United Nations Institute for Training and Research (UNITAR), has been based at the laboratory’s Meyrin site. This hosting arrangement, which has support from the Swiss government, resulted from a pioneering institutional agreement between CERN and the UN. The programme demonstrates the potential for collaboration between these two international bodies in areas of mutual interest.

The mission of UNITAR, established by the UN General Assembly, is to deliver innovative training and to conduct research on knowledge systems and methodologies. Through adult professional training and technical support, the institute contributes towards developing the capacities of tens of thousands of professionals around the world using face-to-face and distance learning.

UNOSAT is a technology-based initiative supported by a team of specialists in remote sensing and geographic-information systems. It is part of UNITAR’s Department of Research, mainly because of its groundbreaking innovations in the use of satellite-derived solutions in the context of UN work. As a result of its research and applications, UNOSAT offers very high-resolution imagery to enhance humanitarian actions; monitors piracy using geospatial information; connects the world of the UN to Grid technology; and has introduced objective satellite images into the assessment of human-rights violations.

A vital source of information

Initially created to explore the potential of satellite Earth observation for the international community, this programme has developed specific mapping and analysis services that are used by various UN agencies and by national experts worldwide. UNOSAT’s mission is to deliver integrated satellite-based solutions for human security, peace and socioeconomic development. Its most important goal, however, is to make satellite data and geographic information easily accessible to an increasing number of UN and national experts who work with geographic information systems (GIS).

The UNOSAT team combines the experience of satellite imagery analysts, database programmers and geographic-information experts with that of fieldworkers and development experts. This unique set of skills gives the UNOSAT team the ability to understand the needs of a variety of international and national users and to provide them with suitable information anywhere and anytime. Anywhere, because – thanks to CERN’s IT support – UNOSAT can handle and store large amounts of data and transfer maps as needed directly via the web; anytime, because UNOSAT is available 24 hours a day, every day of the year.

In simple terms, UNOSAT acquires and processes satellite data to produce and deliver information, analysis and observations, which are used by the UN and national entities for emergency response and to assess the impact of a disaster or conflict, or to plan sustainable development. The main difference between this programme and other UN undertakings is that UNOSAT uses high-end technology to develop innovative solutions. It does this in partnership with the main space agencies and commercial satellite-data providers.

One such innovation was the creation in 2003 of a new humanitarian rapid-mapping service. Now fully developed, the service has been used in more than 100 major disasters and conflict situations, and has produced more than 900 satellite-derived analyses and maps. The work requires the rapid acquisition and processing of satellite imagery and data for the creation of map and GIS layers. These are then used by the headquarters of UN agencies to make decisions, and in the field during an emergency response to co-ordinate rescue teams and assess the impact of a given emergency. This type of map was of great use in the aftermath of the Asian tsunami of 2004 and in response to the 2005 earthquake in Pakistan. Similar maps have been used to monitor the impact of the conflict between Israel and Hezbollah in southern Lebanon and during the Middle East crisis in Gaza. They have also been valuable in monitoring the flow of displaced populations, most recently during the conflict this year in Sri Lanka (figure 1).

There are tens of less-publicized crises every year in which the UN is involved because of their humanitarian consequences for thousands of innocent civilians in developing countries. UNOSAT supports the work of relief workers and NGO volunteers with timely and accurate analysis of the situation on the ground, and responds to requests from the field for particular geographic information.

The work of UNOSAT is not solely related to emergencies, although the maps available on the website all refer to humanitarian assistance. This publication policy enables humanitarian workers in various field locations to download maps prepared by UNOSAT at CERN via the internet or satellite telecommunications. In addition, there are a large number of maps and analyses that are not publicly available on the UNOSAT website because they are part of project activities requested by UN agencies, such as the UN Development Programme, the International Organization for Migration and the World Health Organization.

Once an emergency is over, the work of the UN continues with assistance to governments in rehabilitation and reconstruction. UNOSAT remains engaged beyond the emergency phase by supporting early recovery activities that are undertaken to help local populations get back to normality following a disaster or conflict. Satellites are helpful in these circumstances: think of the work required to reconstruct an entire cadastre, for example, without appropriate geographic information; or to plan the re-establishment of road and rail networks without accurate information on the extent of damage suffered.

UNOSAT’s experience in mapping and analysis – and its innovative methodologies – are regularly transferred to the world beyond, thanks to training modules and information events that are organized by the UN or directly by UNITAR. At CERN, for example, UNOSAT hosts and trains national experts from Indonesia, Nicaragua and Nigeria, to mention a few recent cases. These experts receive intensive two-week training sessions, during which they stay at CERN. In other cases, UNOSAT sends its trainers abroad to train and provide technical support to fieldworkers in developing countries. All of the experts trained by UNOSAT then become part of a global network of skilled staff who can be connected to work together when needed.

The technical work of UNOSAT is made possible by the agreement between UNITAR and CERN, so CERN’s support is of fundamental importance. The recognition – and even the awards – that UNOSAT enjoys in return for its relentless work go in part also to all those at CERN who help and support UNOSAT’s work.

Conscious of the potential held by this success story, CERN and UNITAR took the opportunity of the renewal of their agreement in December 2008 to begin a series of consultations to strengthen their collaboration in areas of mutual interest. The realm of scientific applications to advance international agendas that guide the work of the UN is being discussed at senior level and ideas for joint undertakings are currently being considered.

• For more information, visit www.unitar.org and www.unitar.org/unosat.

The future is together

One step at a time

This meeting at CERN [on 12 June] represented another step in bringing the CLIC and ILC efforts closer together. CERN’s director-general, Rolf Heuer, the CERN research director, Sergio Bertolucci, and the secretary of the CERN Council strategy group, Steinar Stapnes, attended part of the meeting and expressed their support. The meeting itself was constructive and productive in that we agreed on several important new initiatives, including plans to combine future workshops and to begin discussions on developing and articulating a joint strategy towards a linear collider.

In one sense, these efforts are in competition with each other. Each has dedicated proponents and teams, and works hard to develop its technology. But in a more overriding sense, we are all working towards the same goal: to prepare for the next energy-frontier machine for our field. Independent physics studies in Asia, the Americas and Europe have each given the highest priority for the future of the field to a lepton collider that complements the LHC while fully exploiting the terascale. These parallel R&D programmes are all needed to determine the technical capabilities, readiness, risks and costs of the options, while the LHC discoveries will determine the desired technical requirements and energy range.

Although the CLIC technology for the main linac is totally different from the superconducting RF technology of the ILC, other aspects of the design – including the sources, damping rings, beam delivery and detectors, as well as civil engineering and conventional facilities, and cost and schedule – have large overlaps. For that reason, last year we initiated a set of seven joint working groups in those areas where we can pool our resources and work together for the benefit of both teams.

During the meeting we reviewed the progress of the joint working groups and discussed future plans and ideas for specific work and deliverables for these groups. In addition to the obvious benefits of combining resources on joint problems, we have agreed on some other, longer-term goals. In particular, we have agreed to take a step towards bringing our managements closer together by adding a CERN/CLIC representative as a member of the GDE executive committee, and vice versa for the equivalent CLIC steering committee. Another step we have agreed on is to investigate the integration of our major CLIC and ILC workshops into common Linear Collider Workshops, of which the first is tentatively foreseen at CERN on 20–24 September 2010.

Barry Barish, director of the ILC GDE.

• Extracted with permission from Director’s Corner, www.linearcollider.org/cms/?pid=1000644.

There are two major efforts underway to develop a linear electron–positron collider to complement CERN’s LHC in the exploration of physics in the region of the “terascale” – energies of around 1 tera-electron-volt (TeV) and higher. The concept pursued by the International Linear Collider (ILC) Global Design Effort (GDE) is based on superconducting RF technology for collisions at up to 1 TeV in energy. The Compact Linear Collider Study (CLIC), on the other hand, is developing a novel technological approach based on two-beam acceleration, which is potentially capable of achieving collisions at multi-tera-electron-volt energies. Now these two efforts are coming closer together, with the aim of combining resources in areas of common interest.

On 12 June the first joint meeting of the ILC GDE executive committee, the CLIC steering committee and the CERN directorate took place at CERN. The GDE’s executive committee consists of the GDE’s director, Barry Barish, together with three regional directors (for the Americas, Asia and Europe), three project managers and three accelerator experts, who include the chairman of the CLIC steering committee (Jean-Pierre Delahaye). The CLIC steering committee comprises accelerator, detector and particle-physics experts as well as the chairman of the CLIC/CTF3 collaboration board (Ken Peach) and the ILC representative (Brian Foster).

The meeting proved to be a successful start to bringing the ILC and CLIC efforts closer together, particularly in areas linked to the construction and implementation of a future collider (see box). Following the meeting, a statement of common CLIC/ILC intent is under discussion. The aim is to promote and develop scientific and technical preparations for a linear collider as well as exploit possible synergies that enable the design concepts for the ILC and CLIC to be prepared efficiently in the best interest of linear colliders and more generally of high-energy physics.

Higher energies

One area of common ground is the development of suitable detectors for the particular environment of a terascale e+e– linear collider. CERN joined this worldwide detector-development effort through its newly established Linear Collider Detector (LCD) project, which targets physics and detectors at a future collider, be it ILC or CLIC.

Currently most of the effort at CERN is going into the preparation of the conceptual design report for CLIC, which will be delivered by the end of 2010. Earlier studies have shown that the layout of an experiment exploiting the physics potential of a 3 TeV CLIC machine is in many ways similar to an experiment designed for sub-tera-electron-volt energies. Therefore, the ILC detector concepts (named ILD and SiD) form an excellent starting point for the CLIC study. Adaptations concentrate on a few essential differences: the higher CLIC energy, the increased beam-induced background rates and the ultra-fast 0.5 ns bunch spacing.

Compared with the ILC, outgoing particles will generally have higher energies at CLIC and will often group closely together in highly boosted jets. Preserving a good performance level therefore normally calls for an increase in the lever arm of the tracking system and more depth for the calorimeters. In practice this increase in size can be limited by optimizing the choice of detector materials and granularity, thereby restricting the corresponding increase in the inner radius of the detector’s solenoid coil.

The electron- and positron-beam bunches in CLIC are extremely small, just 40 nm wide and 1 nm high. Their close encounter gives rise to strong electric fields, leading to the emission of numerous beamstrahlung photons, most of which will leave the detector through the outgoing beam pipe. Some secondary beamstrahlung products will nonetheless enter the main detector volume. Because bunch crossings take place every 0.5 ns, the resulting background hits will pile up in the detector quickly, while genuine high-energy e+e– physics interactions will take place at a much smaller rate. This means that to preserve the capability to recognize the physics signatures with good precision, the electronic-signal readout of most detectors at CLIC will require time stamping. Current studies indicate that a time-stamping resolution in the 20 ns range will be sufficient.
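
A rough feel for the numbers (the train length below is an assumed CLIC-like value of roughly 312 bunches at 0.5 ns spacing, not an official parameter):

```python
# Toy estimate of the background suppression that time stamping buys
# (illustrative numbers, not a CLIC detector simulation).

BUNCH_SPACING_NS = 0.5   # bunch spacing quoted in the article
WINDOW_NS = 20.0         # time-stamping resolution quoted in the article
TRAIN_LENGTH_NS = 156.0  # assumed bunch-train length (~312 bunches)

crossings_in_window = WINDOW_NS / BUNCH_SPACING_NS
crossings_in_train = TRAIN_LENGTH_NS / BUNCH_SPACING_NS

# A 20 ns stamp still overlays ~40 crossings, but it rejects the rest of
# the train's background around any candidate physics event.
print(f"crossings within one window: {crossings_in_window:.0f}")
print(f"fraction of train-integrated background kept: "
      f"{crossings_in_window / crossings_in_train:.0%}")
```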

Low power consumption will be a “must” for all future linear-collider detectors because it allows for low-mass detectors and, therefore, excellent track and vertex precision. Turning the power of the detectors on and off at the pace of the incoming bunch trains can potentially reduce the on-detector power dissipation by nearly two orders of magnitude. The corresponding power-pulsing rate will be 5 Hz for the ILC and 50 Hz for CLIC.
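
The expected gain follows directly from the duty cycle. As an order-of-magnitude sketch (the ~1 ms ILC train length is an assumed, typical design value, not taken from the article):

```latex
\frac{P_\mathrm{avg}}{P_\mathrm{on}} \approx f_\mathrm{rep}\, t_\mathrm{train}
\sim 5\,\mathrm{Hz} \times 1\,\mathrm{ms} = 0.5\% ,
```

i.e. powering the front-end electronics only during the bunch trains cuts the average dissipation by roughly a factor of 200 – the “nearly two orders of magnitude” quoted above.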

Given the similarity between the ILC and CLIC detectors, the new LCD-physics and detector project is an important cornerstone of the ILC–CLIC collaboration. It integrates fully into the worldwide detector and physics studies – and profits from the tremendous developments made for the ILC and its predecessors. It uses the ILC-experiment concepts and detector technologies as a basis and makes use of the same simulation tools. As of 2010, hardware R&D will start in a number of critical areas for a CLIC detector, such as very dense hadron calorimetry, time stamping of tracking and calorimeter signals, power pulsing of detector electronics and reinforced conductors for a large solenoid.

Motivated by the case for an e+e– collider as the next machine to explore particle physics at the terascale, work towards a common linear-collider-physics community is underway. The LHC results will tell us whether this will be a sub-tera-electron-volt machine (ILC) or whether an energy reach of multi-tera-electron-volts is needed (CLIC).

• For further information about ILC, see www.linearcollider.org/. For details about CLIC, see http://clic-study.web.cern.ch.
