by Anil Ananthaswamy, Houghton Mifflin Harcourt. Hardback ISBN 9780547394527, $25. Paperback ISBN 9780547394527, $15.95.
In his recent book The Edge of Physics, Anil Ananthaswamy, a science writer for New Scientist, covers the most extreme physics and astronomy experiments that are set to uncover the secrets of neutrinos, dark matter and dark energy, galaxy formation, supersymmetry and extra dimensions. The author takes us on an extraordinary journey over five continents to tour the best telescopes and particle detectors, from the summits of the Andes to deep down in the Soudan mine, stopping by the South Pole and paying a visit to CERN. Following him on this trip is already exciting, but reading his account of discussions with physicists, astronomers and engineers along the way is simply fascinating. He tells us about each experiment as he discovered it through discussions with the scientists involved. For example, he writes about the ATLAS experiment through the eyes of Peter Jenni, Fabiola Gianotti and François Butin, with added insight from a meeting with Peter Higgs.
This makes for lively reading about all of these experiments. He not only tells us the most striking details about how each one was built, but he also includes accurate information about the science and technology behind them, avoiding clichés in his efforts to make it understandable to all. You read about his stay at Lake Baikal, discussing neutrino physics with enthusiastic and dedicated physicists such as Igor Belolaptikov and Ralf Wischnewski, and sharing stories and vodka with them in the midst of winter – the only time that the photomultiplier tubes of the underwater neutrino experiment can be serviced, from the frozen surface of the lake. The reader learns about the scientific research at all of these places through personal accounts from the scientists involved. At times, it felt as if I were meeting old friends at a conference and hearing their best stories about their experiments, sharing their enthusiasm and discovering unknown details about their research.
The Edge of Physics also allowed me to learn more about the best astronomy instruments, some located in idyllic places such as Hawaii, while others are under construction in some of the world’s most inhospitable spots, such as the Karoo desert in South Africa or the Hanle Valley in India. Ananthaswamy’s book is as much a tribute to the science as it is to the dedicated scientists pushing the limits of knowledge. His clear explanations and entertaining style will appeal to scientists and non-scientists alike. A book not to be missed.
The weeks following the first collisions at 7 TeV in the centre of mass have seen the LHC pass important milestones in delivering higher instantaneous luminosity to the experiments. With days dedicated mainly to beam-commissioning studies and nights given over to the preparation and delivery of collisions for experiments, the progress is clear to see.
The weekend of 23–24 April saw not only a tenfold increase in instantaneous luminosity to above 1.1 × 10^28 cm^-2 s^-1 in all four experiments but also a record physics fill, with the machine in “stable-beam” mode for 30 hours. This allowed the experiments to more than double the total number of events recorded at 7 TeV. The successful weekend had been preceded by work to commission the “squeeze” on the beams at an energy of 3.5 TeV per beam at all four interaction points. This process, one of the most complex stages in the operation of the accelerator, was followed by a number of collimation and beam-dump tests to ensure sufficient protection of the experiments. The first physics fill with squeezed-beam optics led to a factor of five improvement in luminosity. A new bunch scheme with three bunches per beam then provided a further improvement by another factor of two.
Sunday 2 May saw another major step towards higher intensities with the first fill with two bunches per beam at the nominal (design) intensity of 1 × 10^11 protons per bunch, at the injection energy of 450 GeV per beam. Within an hour of injection the team had removed the “separation bumps”, which keep the beams separated during the ramp, at all four interaction points simultaneously, thus providing collisions. After some further adjustments the operators were ready to prepare a second fill, this time with collisions in stable beam conditions, for the first time with bunches at nominal intensity.
The following two weeks saw further steps in a two-pronged approach to deliver higher luminosity to the experiments, either with more bunches or more protons per bunch. With two bunches per beam providing a total of up to 4 × 10^10 protons per beam, the LHC was already delivering an instantaneous luminosity of 2 × 10^28 cm^-2 s^-1 in some long periods of stable running at 3.5 TeV per beam on the weekend of 8–9 May.
After further tests – on the beam-dump system and aspects of machine protection, for example – a physics fill began on 14 May with squeezed beams and four bunches per beam, giving a total of 8 × 10^10 protons per beam. The next day, the first test took place to ramp a beam at nominal intensity to 3.5 TeV, with 60% of the beam surviving the ramp. This was followed by a long fill of stable beams for nearly 24 hours, now with six bunches per beam. It was then time to try ramping again with one bunch at nominal intensity, and this eventually succeeded for beam 2, with 1.2 × 10^11 protons – in fact, 10% above nominal – and without losses.
By the end of the long weekend of 14–16 May, the LHC had doubled the integrated luminosity previously delivered since the restart in March. Then, on the evening of 17 May, both beams were successfully ramped at nominal intensity, marking the passage of another milestone in the progress towards the final targets for the year.
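As a rough cross-check of these figures (an illustrative estimate, not from the article; real fills decay in luminosity, so this is an upper bound), the integrated luminosity of the record 30-hour fill can be estimated by multiplying the instantaneous luminosity by the live time:

```python
# Rough estimate of integrated luminosity from a single LHC fill.
# Figures from the text: ~1.1e28 cm^-2 s^-1 sustained over a 30-hour
# stable-beam fill (luminosity decay during the fill is ignored).

INST_LUMI = 1.1e28             # instantaneous luminosity, cm^-2 s^-1
FILL_HOURS = 30                # duration of the record fill, hours

CM2_PER_INV_MICROBARN = 1e30   # 1 ub^-1 = 1e30 cm^-2

seconds = FILL_HOURS * 3600
integrated_cm2 = INST_LUMI * seconds
integrated_ub = integrated_cm2 / CM2_PER_INV_MICROBARN

print(f"Integrated luminosity: {integrated_ub:.0f} ub^-1")  # roughly 1.2 nb^-1
```

Even a record-length fill at this early-running luminosity therefore delivers only about a nanobarn-inverse, which puts the subsequent push towards nominal bunch intensities into perspective.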
Twenty years ago, an amateur scuba diver swimming off the coast of Oristano in Sardinia found a navis oneraria magna – a 36-m Roman ship dating back more than 2000 years, to between 80 and 50 BC – whose cargo consisted of about 1000 lead bricks. These were recovered with help from Italy’s National Institute of Nuclear Physics (INFN), which at the time received 150 of the lead bricks. Now, INFN is to receive a further 120 bricks to complete the shielding for the Cryogenic Underground Observatory for Rare Events (CUORE), in INFN’s Gran Sasso National Laboratory (LNGS).
INFN has received the lead bricks from the National Archaeological Museum of Cagliari in Sardinia. The bricks, together with the ship that transported them, had remained in the sea for two millennia, during which time the already low original radioactivity of one of the radionuclides, 210Pb, decreased by a factor of approximately 100,000. The 210Pb, which has a half-life of 22 years, has by now practically disappeared from the ancient Roman lead.
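The disappearance follows directly from the exponential decay law; a minimal sketch (the 22-year half-life is from the text, the 2000-year age is approximate):

```python
# Surviving fraction of Pb-210 after ~2000 years at sea, using the
# exponential decay law N/N0 = 2**(-t / t_half).

T_HALF = 22.0    # Pb-210 half-life in years (from the text)
AGE = 2000.0     # approximate age of the wreck in years

fraction = 2.0 ** (-AGE / T_HALF)
print(f"Surviving Pb-210 fraction: {fraction:.1e}")
```

Two millennia is roughly 90 half-lives, so the surviving fraction is of order 10^-28 – far below any measurable activity, which is exactly why such lead is prized for low-background shielding.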
The parts of the bricks that contain inscriptions will be removed and conserved, whereas the remainder will be cleaned of incrustations and melted to construct a shield for the international CUORE experiment. Moreover, researchers from INFN will perform precise measurements on the lead (and possibly on the copper that was also found on the ship) to study the materials used in the Bronze Age.
The lead bricks were made available as the result of a collaboration involving INFN, its facilities in Cagliari and the Archaeological Superintendency of Cagliari, with the support of the General Directorate of Antiquity. As part of this joint operation 20 years ago, INFN contributed 300 million lira towards the excavation of the ship and the recovery of its cargo.
The bricks, which weigh about 33 kg each and are 46 cm long and 9 cm wide, will be used to shield the CUORE experiment. This collaboration is seeking to discover the extremely rare process of neutrinoless double-beta decay, which would allow researchers not only to measure directly the mass of neutrinos but also to determine whether or not they are Majorana particles (i.e. particles and antiparticles are one and the same). The detector will be based on an array of nearly 1000 tellurium-dioxide bolometers, cooled to about 10 mK.
The ALICE collaboration has submitted its first paper with results from LHC proton collisions at a centre-of-mass energy of 7 TeV. The results confirm that the charged-particle multiplicity appears to be rising with energy faster than expected.
The results are based on the analysis of a sample of 300,000 proton–proton collisions that the ALICE experiment collected during the first runs of the LHC with stable beams at a centre-of-mass energy, √s, of 7 TeV, following the first collisions at this energy on 30 March. The collaboration compares them with data collected earlier at √s = 0.9 TeV and √s = 2.36 TeV, which it has re-analysed since the earlier publication, using the same normalization as for the new data.
The events used in the analysis have at least one charged particle in the central pseudorapidity region, |η| < 1. The selection leaves 47,000, 35,000 and 240,000 events for analysis at 0.9 TeV, 2.36 TeV and 7 TeV, respectively. At 7 TeV, the collaboration measures a pseudorapidity density of primary charged particles, dNch/dη = 6.01 ± 0.01 (stat.) +0.20/–0.12 (syst.). This corresponds to an increase of 57.6% ± 0.4% (stat.) +3.6/–1.8% (syst.) relative to collisions at 0.9 TeV.
This increase is significantly higher than expected from calculations with the commonly used models, confirming the observations made earlier at 2.36 TeV. In addition, the ALICE collaboration finds that the shape of the multiplicity distribution is not reproduced well by the standard simulations. These results have already triggered interest in the cosmic-ray community.
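As a quick consistency check on the quoted central values (uncertainties ignored; the variable names are ours), the 7 TeV density and the 57.6% increase together imply a 0.9 TeV density of about 3.8 charged particles per unit pseudorapidity:

```python
# Back out the 0.9 TeV pseudorapidity density from the ALICE numbers
# quoted above (central values only, uncertainties ignored).

dn_deta_7tev = 6.01    # dNch/deta measured at 7 TeV
increase = 0.576       # 57.6% increase relative to 0.9 TeV

dn_deta_09tev = dn_deta_7tev / (1.0 + increase)
print(f"Implied dNch/deta at 0.9 TeV: {dn_deta_09tev:.2f}")  # ~3.81
```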
A meeting at GSI Darmstadt on 8–10 April provided a first opportunity to formulate a strategy on the laser technology needed to meet the challenge of future accelerators that will use or rely on lasers with very high average power. Hartmut Eickhoff, technical director of GSI, and Wim Leemans of Lawrence Berkeley National Laboratory opened the event. Leemans is chairman of the newly established Joint Task Force on Future Applications of Laser Acceleration, which operates under the umbrella of the International Committee for Future Accelerators (ICFA) and the International Committee on Ultra-High Intensity Lasers (ICUIL). The task force had invited experts on high-power laser technology and accelerator technology and their applications to this first meeting. Altogether, there were 47 participants from countries around the world, including China (1), France (4), Germany (18), Japan (4), Switzerland (2), the UK (4) and the US (14).
The main topics of discussion were the laser performance needed for accelerator technology to support the most challenging present and future needs, as well as questions of laser architecture, laser material and optical components. Representatives from accelerators and light sources outlined the top-level laser requirements for potential laser-based accelerator applications – that is, for colliders, light sources and medical applications.
The biggest challenge for laser technology is a laser-plasma e+e– collider with a top energy of 10 TeV. The consensus in the global high-energy physics community is that the next large collider after the LHC should be a tera-electron-volt-scale lepton collider. Options currently under study include an International Linear Collider (ILC) at 0.5–1 TeV, a Compact Linear Collider (CLIC) at up to 3 TeV and a muon collider at up to 4 TeV, all using RF technology. On the other hand, the very high gradients of around 10 GeV/m that are possible with laser acceleration offer new avenues to reach even higher energies with more compact machines.
The workshop investigated the beam and laser parameters of a 1–10 TeV e+e– collider, with a luminosity of 10^36 cm^-2 s^-1, based on two different technologies – laser plasma acceleration and direct laser acceleration. The main challenges to the practical achievement of laser acceleration are high average power (around 100 MW), high repetition rate (kilohertz to megahertz) and high efficiency (around 40–60%) at a cost that ideally would be an order of magnitude lower than using technology based on RF. The workshop also studied the laser requirements for a 200 GeV γγ collider, proposed as the first stage of a full-scale ILC or CLIC. The laser systems required for such a collider may be within reach of today’s technology.
For light sources, lasers already play a significant role in existing facilities but they face new challenges with future projects that aim at much higher repetition frequencies. Ultrafast (femtosecond) lasers reaching levels of 1–10 kW will be required for use as “seed lasers” and for user-driven experiments. The third area of application is the use in medicine of laser acceleration of protons or ions and its potential to replace technology currently used in tumour therapy. Such lasers typically have very high peak-power (petawatt class) and require special pulse shapes with very high temporal contrast. Again, compact multi-kilowatt lasers will be needed.
Laser requirements for these applications are often many orders of magnitude beyond the capabilities of the lasers that are used in today’s scientific work, i.e. they require megawatts instead of tens of watts. Representatives from laser science at the meeting discussed and outlined how, with appropriate R&D, emerging 100-kW-class industrial lasers, 10-MW-class laser technologies for fusion energy and megawatt-class laser systems for defence work might be adapted to meet these challenging requirements.
Results from the workshop, including tables of the parameters required for laser technology and the goals, will be compiled in a report and submitted to ICFA and ICUIL for their approval, prior to public release.
Under a cloud of volcanic ash, the first annual meeting of the European Co-ordination for Accelerator Research & Development (EuCARD) project took place at the Rutherford Appleton Laboratory of the UK’s Science and Technology Facilities Council (STFC). From 13–16 April, this melting pot of ideas saw neutrino physicists mix with collimator designers, RF experts and magnet specialists to discuss progress with the EuCARD project as well as cutting-edge topics in accelerator sciences. EuCARD is a four-year project co-funded by the EU’s Framework Programme 7 (FP7), which involves 37 partners from 45 European accelerator laboratories, universities, research centres and industries.
The attendees, more than 100 in number, heard how the EuCARD networks (neutrino facilities, accelerator performance and RF technologies) had successfully oriented themselves towards efficient topical meetings, including the successful mini-workshop on LHC crab cavities that took place in October 2009. Two of these networks are currently considering increasing their scope to include plasma-wave acceleration and medical accelerators.
The collaborative R&D studies, whether on magnets, collimation, linear-collider technologies or advanced concepts, have demonstrated effective collaborations with promising progress. Out of many examples, the highlights presented at the meeting included an implementation strategy for crab-crossing at the LHC described by Rama Calaga of Brookhaven and progress in the design of a new compact crab cavity presented by Graeme Burt of Lancaster University. Many success stories provided food for thought, including impressive results on crab-waist luminosity at Frascati reported by Catia Milardi of INFN and prototyping of cryogenic collimators at the Facility for Antiproton and Ion Research, as Peter Spiller of GSI described. Whetting everyone’s appetite for a new type of acceleration was the talk by Allen Caldwell, of the Max Planck Institute for Physics, on proton-driven plasma-wave acceleration, following on from a recent EuCARD workshop. Future facilities for neutrinos were also part of the event with animated discussions about superbeams, beta beams and neutrino factories.
The EU strongly promotes access to European facilities, and within EuCARD opportunities are now open to external researchers. Four teams have already received EU support for access to the MICE facility at the Rutherford Appleton Laboratory, while the HiRadMat facility is at a design stage at CERN.
Enlarging the vision beyond EuCARD, guest speakers from related projects included Roland Garoby from CERN on the preparatory phase for an LHC upgrade (SLHC-PP); Eckhard Elsen from DESY on high-gradient superconducting RF cavities for an International Linear Collider (ILC-HiGrade); and Brigitte Cros of the French National Center for Scientific Research (CNRS) on the EuroLEAP project on laser-driven plasma-wave acceleration. Eric Prebys of Fermilab and the US LHC Accelerator Research Program showcased the strong R&D collaborations between the US and Europe, as well as exceptional advances in magnet design. Tord Ekelöf of Uppsala University and Roy Aleksan of the French Atomic Energy Commission (CEA) and the European Steering Group on Accelerator R&D (ESGARD) put EuCARD’s contribution towards the global accelerator R&D effort into perspective. A natural outcome was a discussion, under the auspices of ESGARD, of ways and means to tighten European and global collaborations.
Holding the meeting at the Rutherford Appleton Laboratory allowed attendees to visit the ISIS neutron source and the Diamond Light Source facility. In addition, staff from STFC presented aspects of the UK programme, notably Susan Smith with a summary of the ALICE and EMMA facilities at Daresbury and Mike Poole with an overview of STFC’s programme of accelerator R&D.
In his concluding remarks, CERN’s Jean-Pierre Koutchouk, the EuCARD project co-ordinator, acknowledged the quality and interest of the presentations, and the promising first results of this 4-year project. He thanked the 37 European partners for their dedication and dynamism and the STFC for the outstanding organization of the meeting at the Rutherford Appleton Laboratory.
Astronomers have found evidence that the magnetic field between galaxies cannot be negligible. Otherwise, a blazar observed by the High Energy Stereoscopic System (HESS, CERN Courier January/February 2005 p30) at tera-electron-volt (TeV) energies should also have been detected by the Fermi Gamma-Ray Space Telescope in the giga-electron-volt (GeV) range (CERN Courier November 2008 p13). The non-detection by Fermi sets a lower limit of the order of 10^-19 T on the magnetic-field strength in intergalactic space, which is consistent with a cosmological origin of the field.
In the universe, the bigger the object, the weaker its magnetic field. The strongest fields are found around magnetars, neutron stars with a field of up to 10^11 T (CERN Courier June 2005 p12). The Earth, being almost 1000 times larger than a neutron star, has a modest field of 10^-4 T, and the Milky Way’s field is again about 20,000 times less, even at its centre. Measuring the extremely low magnetic field of intergalactic space – away from any galaxy – is very challenging. Until now, only upper limits could be set, but thanks to Fermi’s observations, the situation is changing. Two recent, independent papers give a lower limit on the magnetic field based on very high-energy observations of the blazar 1ES 0229+200. Andrii Neronov and Ievgen Vovk from the ISDC data centre of the University of Geneva constrain the field to be higher than 3 × 10^-20 T, while Fabrizio Tavecchio and colleagues from the Observatory of Brera, Italy, derive a lower limit of 5 × 10^-19 T. The difference comes from differing analyses and assumptions, but both studies are based on the same dataset and the same argument.
The blazar – a powerful jet source in a distant galaxy – emits gamma-rays at TeV energies in a narrow cone that happens to be pointing towards the Earth. Along their journey, some of the gamma-rays interact with the optical-infrared background light to produce electron–positron pairs (CERN Courier June 2006 p14). These pairs will then rapidly cool on photons of the cosmic microwave background and Compton scatter them to GeV energies. The strength of the intergalactic magnetic field can have an influence on the intensity of the GeV photons observed by Fermi. A weak field will not significantly deflect electrons and positrons and hence the GeV photons will be predominantly emitted in the same direction as the primary TeV photons, whereas a strong field will result in an isotropically distributed GeV emission, which will then be undetectable by Fermi. The measurement of the TeV spectrum of the blazar by HESS, together with the upper limit on the GeV emission from the source direction set by Fermi, thus results in the determination of a lower limit on the magnetic field.
This very indirect determination is subject to many uncertainties, in particular on the opening angle of the jet emission, the intrinsic spectrum of the blazar and the intensity of the optical-infrared background light. Nevertheless, the estimated magnetic field is strong enough to favour a “top-down” scenario for its origin. The idea is that the accretion of matter within stars and galaxies amplifies a preexisting magnetic field that permeates the universe and would have been produced soon after the Big Bang. The alternative “bottom-up” scenario, where the magnetic fields are first produced in stars and then propagate outwards to galaxies and eventually intergalactic space, is disfavoured. This is good news for cosmologists because the field might help to identify and constrain processes at work in the very early universe.
More than half of the world’s people live in Asia. Even putting aside the two titans India and China, there are some 600 million inhabitants – 100 million more than in the entire EU – in the region that is commonly referred to as South-East Asia. From Myanmar in the west to Indonesia’s Papua province in the east, the territory is nearly twice the width of the continental US. Most of the Asian partners in EUAsiaGrid hail from this region, which has more than its fair share of natural disasters in the form of earthquakes, volcano eruptions, typhoons and tsunamis, not to mention enduring political tensions.
Despite these challenging circumstances, EUAsiaGrid has managed to make a significant impact in a relatively short time. This has been driven by increased sharing of data storage and processing power between participating institutions in the region. It was achieved through a concerted effort by the project leaders to encourage the adoption across the region of the gLite middleware of Enabling Grids for E-sciencE (EGEE), which is the same middleware used by the Worldwide LHC Computing Grid (WLCG).
As the head of EUAsiaGrid, Marco Paganoni, who is based at INFN and the University of Milan-Bicocca, points out: “This technological push has enabled researchers in some of the participating countries to become involved in international science initiatives that they otherwise might not be able to afford to participate in.”
Like many other international Grid projects, EUAsiaGrid owes its origins to the pioneering efforts of the global high-energy physics community to promote Grid technology for science, and to the nurturing role of the European Commission in spreading Grid technical know-how throughout the world through joint projects. In addition, a key catalyst for EUAsiaGrid has been Simon Lin, project director of Academia Sinica Grid Computing (ASGC). His efforts established ASGC as the Asian Tier-1 data centre for WLCG. He and his team have been bringing Asian researchers together for nine years at the annual International Symposium on Grid Computing (ISGC) held each spring in Taipei.
The EUAsiaGrid project, launched as a “support action” by the European Commission within Framework Programme 7 in April 2008, focuses on discovering regional research benefits for Grid computing. “We realized that identifying and addressing local needs was the key to success in this region,” says Paganoni. From the outset, capturing local e-science requirements was an important component of the project’s objectives. Moreover, comparing those requirements revealed a great deal of common ground amid all of the regional diversity.
One common theme was the region’s propensity for natural disasters and the ability of Grid technology and related information technology solutions to help mitigate the consequences of such events. For example, EUAsiaGrid researchers have helped build links between different national sensor-networks, such as those of Vietnam and Indonesia. Researchers in the Philippines are now benefiting from the Grid-based seismic modelling experience of their Taiwanese partners. Sharing data and Grid know-how in this manner means that the scientists involved can better tune local models of earthquake and tsunami propagation.
At the most recent ISGC, which was held in March, a special EUAsiaGrid Disaster Mitigation Workshop devoted a day to the latest technological progress in monitoring and simulating both earthquakes and tsunamis. In a talk about Taiwan’s early-warning system, Nai-Chi Hsiao of the Central Weather Bureau in Taipei explained that seismic waves take just 60 s to travel from the south to the north of the island, leaving precious little time to decide whether to shut down nuclear reactors or bring high-speed trains to a grinding halt and so avoid the worst consequences of a large earthquake.
Where could Grid technology fit into this picture? The island is rocked by earthquakes, both large and small, all of the time. It is simply not viable to shut down power plants and stop trains every time that a tremor is detected. What is needed is a quick prediction of the impact that a particular earthquake may have on key infrastructure across the island. However, the level of shaking that an earthquake produces 100 km away can depend strongly on, for example, the depth at which it occurs.
There is certainly no time to do a full simulation once an earthquake is detected. According to Li Zhao of the Institute of Earth Sciences at Academia Sinica, it might instead be possible to pull out a pre-processed simulation from a database and make a quick decision based on what it predicts. This would require processing and storing the results of simulations for a huge number of possible earthquake epicentres – a task that is well suited to Grid computing.
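A toy sketch of how such a lookup might work (all names, grid points and numbers here are hypothetical illustrations, not taken from the actual system):

```python
# Toy sketch of a pre-computed earthquake-scenario lookup: simulations
# are run offline for a grid of (lat, lon, depth) epicentres and stored;
# when a quake is detected, the nearest stored scenario is retrieved
# instead of running a full simulation in real time.

# Hypothetical database: grid key -> predicted peak ground acceleration
# (in g) at a key site, e.g. a reactor or a high-speed rail line.
scenario_db = {
    (24.0, 121.5, 10): 0.32,
    (24.0, 121.5, 30): 0.11,   # deeper events shake the surface less
    (23.5, 122.0, 10): 0.05,
}

def nearest_scenario(lat, lon, depth_km):
    """Return the stored prediction for the closest grid point."""
    key = min(scenario_db,
              key=lambda k: (k[0] - lat) ** 2 + (k[1] - lon) ** 2
                            + ((k[2] - depth_km) / 10.0) ** 2)
    return key, scenario_db[key]

# A detected event at 24.1 N, 121.6 E, 28 km deep maps onto the deep
# scenario, predicting only mild shaking at the site.
grid_key, pga = nearest_scenario(24.1, 121.6, 28.0)
print(grid_key, pga)
```

A production system would of course use a far denser grid, interpolate between scenarios and account for local geology; the point is only that the expensive simulation work moves offline, where Grid computing excels.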
Neglected diseases
Another common thread of the research sponsored by EUAsiaGrid has been searching for cures to diseases that plague the region but which have been largely neglected by pharmaceuticals companies because they do not affect more lucrative markets in the industrialized world.
Consider dengue fever, for example. For most sufferers, the fever and pain produced by the disease pass after a very unpleasant week, but for some it leads to dengue haemorrhagic fever, which is often fatal. Like malaria, dengue is borne by mosquitoes. But unlike malaria, it affects people as much in the cities as it does in the countryside. As a result, it has a particularly high incidence in heavily populated parts of South-East Asia and it is a significant source of infant mortality in several countries.
As yet there are no drugs designed to specifically target the dengue virus. So EUAsiaGrid partners launched an initiative last July called Dengue Fever Drug Discovery, which will start a systematic search for such drugs by harnessing Grid computing to model how huge databases of chemical compounds would interact with key sites on the dengue virus, potentially disabling it.
This is not the first time that Grid technology has been used to amplify the computing power that can be harnessed for such ambitious challenges. Malaria and avian influenza have been targets of previous massive search efforts, dubbed by experts “in-silico high-throughput screening”.
Leading the effort on dengue at Academia Sinica in Taipei is researcher Ying-Ta Wu of the Genomics Research Centre. He and colleagues prepared some 300,000 virtual compounds to be tested in a couple of months, using the equivalent of more than 12 years of the processing power of a single PC. The goal of this exercise was not just to get the processing done quickly but also to encourage partners in Asia to collaborate on sharing the necessary hardware, including institutes in Malaysia, Vietnam and Thailand.
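These figures imply a substantial degree of parallelism; a back-of-the-envelope sketch (taking "a couple of months" as exactly two, purely for illustration):

```python
# Rough parallelism implied by the dengue screening figures above:
# ~12 years of single-PC processing compressed into about two months.

cpu_years = 12.0
wall_months = 2.0          # "a couple of months"

wall_years = wall_months / 12.0
avg_concurrent_cpus = cpu_years / wall_years
print(f"Average concurrent PCs needed: {avg_concurrent_cpus:.0f}")  # 72
```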
It is not just hard sciences such as geology and biology that benefit from Grid know-how. Indeed, as Paganoni notes: “Modelling the social and economic impacts of major disasters and diseases is a Grid-computing challenge in itself, and is often top of the agenda when EUAsiaGrid researchers have discussions with government representatives in the region.”
Even the humanities have benefited from these efforts. Capturing culture in a digital form can lead to impressive demands for storage and processing. Grid technology has a role to play in providing those resources. For instance, it can take more than a week using a single desktop computer to render a 10-minute recording of the movements of a Malay dancer performing the classical Mak Yong dance into a virtual 3D image of the dancer, using motion-capture equipment attached to the dancer’s body. Once this is done, though, every detail of the dance movement is permanently digitized, and hence preserved for posterity, as well as being available for “edutainment” applications.
The problem, however, is that a complete Mak Yong dance carried out for ceremonial purposes could last a whole night, not just 10 minutes. Rendering and storing all of the data necessary for this calls for Grid computing.
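To put numbers on this (assuming, purely for illustration, an 8-hour all-night performance; the article says rendering takes "more than a week" per 10 minutes, so this is a lower bound):

```python
# Scaling the rendering figure quoted above to a full ceremonial
# performance. The 8-hour duration is an assumption, not from the text.

WEEKS_PER_10_MIN = 1.0      # "more than a week" -> lower bound
performance_min = 8 * 60    # assumed all-night performance, minutes

segments = performance_min / 10.0
weeks_single_pc = segments * WEEKS_PER_10_MIN
print(f"At least {weeks_single_pc:.0f} weeks on a single PC")  # 48
```

Nearly a year of single-desktop rendering for one night's dance is precisely the kind of workload that justifies farming the segments out across a Grid.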
Faridah Noor, an associate professor at the University of Malaya, became involved in the EUAsiaGrid project because she saw great potential for Grid-enabled digital preservation of traditional dances and artefacts for posterity. She and her colleagues are working on several projects to capture and preserve digitally even the most ephemeral cultural relics, such as masks carved by shamans of the Mah Meri tribe used to help cure people of their ailments or to ward off evil. The particular challenge here is that the shamans deliberately throw the masks into the sea as part of the ritual, to cast away bad spirits.
As Noor, who works in the area of sociolinguistics and ethnolinguistics, points out: “We have to capture the story behind the mask.” Each mask is made for an individual and his or her illness, so capturing the inspiration that guides the shaman while preparing the mask is as important as recording the way in which he carves the wood, and rendering 3D images of the resulting mask.
An important legacy of the EUAsiaGrid project, Paganoni says, will be the links that it has helped to establish between researchers in the natural sciences, the social sciences and the humanities, both within South-East Asia and with European institutions. These links trace their origin to a common interest in exploiting Grid technology.
• Based on articles previously published in International Science Grid This Week, with permission.
The National Synchrotron Radiation Research Center (NSRRC), situated about one hour’s drive from Taipei, has begun the construction of its second synchrotron-light source, the Taiwan Photon Source (TPS), with a ground-breaking ceremony that took place on 7 February. Like any other large-scale project, reaching this milestone involved years of preparation and intense decision-making. The project requirements left little room for even small deviations from delivery timetables or for cost increases. To meet its mandate on time, the NSRRC has relied on its experienced staff members, many of whom had previously participated in the construction of the Taiwan Light Source (TLS) in 1983 – the first accelerator at NSRRC. This is allowing the project to meet challenging deadlines and to transfer expertise to younger engineers.
The TPS is a $210 million project involving, at various times, more than 150 staff in charge of design, construction, administration and management of day-to-day operations. The official proposal for the TPS was submitted in 2006 and primary funding was provided by the National Science Council over a seven-year period, with $54 million for civil construction backed by the Council for Economic Planning and Development. Conceptual designs of the major systems were completed in 2009 and key systems are currently under construction. These include the linac, the cryogenic system, the magnets and the RF transmitters.
The TPS will be equipped with a 3 GeV electron accelerator and a low-emittance synchrotron-storage ring 518.4 m in circumference (see table). This will be housed in a doughnut-shaped building, 659.7 m in outer circumference, next to the smaller circular building that houses the existing 1.5 GeV accelerator, the TLS. The dual rings will serve scientists from South-East Asia and beyond who require an advanced research facility for conducting experiments with both soft and hard X-rays.
The storage ring
The TPS storage ring comprises 24 bending sections, 6 long straight sections and 18 short ones. A mock-up of a unit cell representing 1/24 of the storage ring has been constructed to test all systems before mass production, including the 14-m long vacuum pipe, prototype magnets and girders. This mock-up will be useful for evaluating and correcting – if necessary – specific design decisions. It has also served as a case study for the Machine Advisory Committee that reviewed the status of the TPS from technical and scheduling standpoints. One significant benefit gained from such a mock-up is that it allows for the spatial study of components that fit closely together, as well as of the cables and piping.
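The figures quoted above can be cross-checked with a line of arithmetic: a 1/24 mock-up of a 518.4 m ring implies a unit cell of 21.6 m, comfortably accommodating the 14-m vacuum pipe. A minimal sketch (the calculation is my own check, not from the article):

```python
# Length of one unit cell implied by the storage-ring circumference
# quoted in the article (518.4 m, 24 unit cells).
circumference_m = 518.4
cells = 24

cell_length_m = circumference_m / cells
print(cell_length_m)  # 21.6 m per unit cell; the 14-m vacuum pipe fits within it
```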
The vacuum chambers are made of aluminium alloy, chosen for its lower impedance, lower thermal resistance and low outgassing rate. There are two bending chambers per unit cell, each 4 m in length, with in some places only a 1 mm gap to the adjoining sextupole magnet in a bending section. In total there are 48 such units in the storage ring, with walls typically 4 mm thick in the straight sections. The beam pipes are made from aluminium extrusions with two cooling channels on each side. There are also several long vacuum chambers to cope with undulators installed between the magnet poles.
From vacuum to RF
A 14-m long vacuum pipe was produced as part of the 1/24 mock-up. Foreseeable production challenges include the development of machining and cleaning procedures, of welding and cooling systems for the bending chambers, and of a means to transport the finished product from the assembly site to the TPS storage ring. To minimize the mechanical distortion caused by thermal irradiation of the vacuum chambers, cooling-water channels are attached on both sides of the pipe and where the beam-position monitors (BPMs) are located. To transport the 14-m long vacuum pipe, a “hanger” of equivalent size was built to carry the assembled unit. A successful rehearsal, moving the transportation gear along 8 km of busy streets, took place in March. The next step will be to ensure that no damage occurs to the vacuum pipe during the actual transport.
To achieve optimal performance, the TPS accelerator will be mounted on metal girders placed on pedestals that can be adjusted via remote control. The mock-up has demonstrated the sophistication reached in the design of these girders. Metal girders often suffer from rather low eigenfrequencies compared with concrete girders, especially when heavy magnets are placed on them. The TPS girders, however, are very stiff, which pushes up the eigenfrequencies. Measurements so far are in close agreement with the predicted performance.
The TPS is designed for “top-up” operation, which is the standard operation mode in the TLS. The TPS injector complex will consist of a 150 MeV linear accelerator and a full-energy booster that will share the tunnel with the storage ring. Because this is a new facility with a low-emittance injector, the opportunity exists for using pulsed multipole injection, which may have significant benefits for quiet top-up. To allow acceptance tests of the linac before the storage-ring tunnel becomes available, construction work is under way on a bunker that will see future use for a Free-Electron-Laser (FEL) injector test facility.
Each of the 24 achromatic bending sections (unit cells) in the TPS contains 2 dipoles, 10 quadrupoles and 7 sextupoles. A further 168 skew quadrupoles, 1 injection septum magnet and 4 kicker magnets bring the total number of magnets to be installed to 629. All of the magnetic cores are made of silicon-steel sheet. The iron laminations are shaped by wire cutting with computer-numerical-control machines to within 10 μm accuracy and are shuffled to ensure uniform magnetic properties. Accuracy in the magnet assembly is to be controlled to within 15 μm. The upper half of each magnet can be removed to install the vacuum chamber, and the whole magnet can be detached without removing the vacuum chamber. The entire magnet design was carried out in house, with prototypes produced during phase I for thorough testing and measurement.
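As a quick sanity check of the inventory quoted above (my own tally, using only the counts stated in the text), the per-cell magnets plus the extras do indeed sum to 629:

```python
# Tally of the TPS storage-ring magnet inventory, using the counts
# quoted in the text: 24 unit cells of 2 + 10 + 7 magnets each,
# plus 168 skew quadrupoles, 1 septum magnet and 4 kickers.
cells = 24
per_cell = {"dipoles": 2, "quadrupoles": 10, "sextupoles": 7}

in_cells = cells * sum(per_cell.values())  # 24 * 19 = 456
extras = 168 + 1 + 4                       # skew quads + septum + kickers

total = in_cells + extras
print(total)  # 629, matching the figure quoted in the text
```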
The TPS adopts the KEK approach to superconducting RF (SRF) to cope with future operational modes. Collaboration with KEK on the 500 MHz SRF module used for KEKB is effectively a requirement to ensure the timely development of the SRF modules (including the 1.8 K cryostat for the harmonic SRF modules) and the transfer of technology for a higher-order-mode damped superconducting cavity suited to high-intensity storage rings. Conventional PETRA-type cavities will be considered as an alternative for commissioning in case the SRF cavities are not available in time.
The complexity and cost of constructing a new accelerator facility adjoined to an existing one is much higher than for one built on undeveloped land. However, to optimize resources and personnel, and the use of common equipment, as well as to allow a versatile research facility for users of both accelerators, the decision was taken to build the TPS at the NSRRC home base.
The site slopes down from south to north and abruptly descends 5 to 10 m at the northern edge, where the TPS will be built. The geology around the site is simple, with gravel as the main formation. Ideally, the platform for the storage ring would be created either above ground or by digging underground. The former approach is expensive and risks instability in an area known for frequent earthquakes; the latter would magnify the humidity problems in land soaked with rain and might cause partial, if not total, subsidence of the existing TLS. To keep the civil-construction cost within budget, the solution has been to meet the two alternatives halfway. The TPS storage-ring building will have its floor at the beamline area 12.5 m underground near the south side and 4 m above ground on the north side. A beamline for medical imaging will be located on the west side, next to the busiest traffic of the Hsinchu Science Park, while beamlines demanding nanoscale resolution will be located away from possible sources of vibration.
Building a new accelerator next to an existing one involves continual challenges. Because the TPS building cuts into the edge of the TLS, the prevention of instability and vibration in the TLS caused by the construction work is a critical issue. To prepare for this daunting task, the NSRRC held workshops on ambient ground motion and civil engineering for the TPS in 2005 and 2008, so as to study the methods and strategic solutions used at other synchrotron facilities. These resulted in mechanical approaches to eliminate or reduce amplification of the floor motion by the girder system for the TPS, while also adding steel piles to prevent the adjacent TLS foundations from gradually crumbling.
Various methods to protect the TLS foundations and building centre on supporting the ground soil with in situ reinforcement and on shoring up the longitudinal sections exposed by the excavation work. Taking advantage of the fact that the site is mainly of gravel formation, the TLS beam columns were reinforced with additional frames. In addition, seven H-beam, Type-L steel piles, 17.5 m long, were inserted in places where parts of the walls of the TLS storage ring previously stood. Each pile was also equipped with a 200 cm × 120 cm × 60 cm concrete beam laid horizontally against the TLS foundations. These piles apply pressure to prevent the TLS from rising through elastic rebound as the overburden load is removed by the 10 m-deep excavation.
To meet the target milestone of commissioning by the end of 2013, civil construction and accelerator installation will proceed concurrently. Partial occupancy of the linac building and ring tunnel needs to occur by the beginning of 2012 to meet the installation timetable for the ring components. Power and other utilities will be brought in once pedestal paving and the installation of piping and cable trays begin. This will allow the setting up of the booster ring and of subsystems in the storage ring. The SRF cavity will be the final component to move in, and tests for TPS commissioning will follow accordingly.
Thanks to the expertise accumulated over the years, the design of the TPS has been carried out entirely by the NSRRC’s own staff. With their capability in developing insertion devices for the TLS, and the systems to operate them, established since 1993, the photon energy of the TPS should reach 30 keV. With a maximum brightness of 10²¹ photons/s/0.1% BW/mm²/mrad² at 10 keV, it will be among the brightest light sources available.
The INFN’s Gran Sasso National Laboratory provides the world’s largest underground infrastructure for astroparticle physics. It currently hosts four operational dark-matter experiments – CRESST, DAMA-LIBRA, WArP and XENON – and was therefore a fitting venue for WONDER, the Workshop On Next Dark-matter Experimental Research. Designed to generate fruitful discussions about the future of the exciting field of dark-matter physics, the workshop was held on 22–23 March and attracted around 100 participants.
As is well known, “dark matter” is the name given to some 23% of the “inventory” of the universe, the existence of which is indicated by several experimental facts, the first and most famous being the anomalous rotation curves of galaxies. Although some alternative models survive to explain these unexpected effects, the most fascinating explanation – at least for particle physicists – is the existence of stable, massive particles that interact only weakly with ordinary matter and permeate all galaxies, including our own. Supersymmetry provides a nice theoretical framework for such an explanation, and the lightest supersymmetric particle, the neutralino, could be a viable candidate for dark matter. First, however, someone has to observe some experimental evidence to pin down the characteristics of the “dark” particles, which are often referred to as WIMPs – weakly interacting massive particles. The question is: how to identify these particles?
One way is to look for the production of WIMPs in collisions at the LHC at CERN. Other “indirect” techniques look for likely signatures of annihilations of WIMPs occurring in the Sun, Earth or galactic halo; these could appear, for example, as anomalous neutrino or gamma-ray fluxes. A third method is to observe the direct interactions of WIMPs with ordinary matter. Underground laboratories are the ideal place to carry out this quest. Anywhere else on the surface of the Earth, the overwhelming cosmic radiation would drown out the tiny signal (if it exists), making the search as hopeless as trying to spot a distant star in daylight.
Even amid the “cosmic silence” at the heart of a mountain (as at Gran Sasso), dark-matter experiments struggle to attain the best sensitivity with elaborate techniques and, above all, by trying to reduce the residual gamma and neutron backgrounds to unprecedentedly low levels.
DAMA-LIBRA, one of the first experiments at Gran Sasso, does in fact observe a significant annual-modulation signal in its high-purity sodium-iodide scintillators, consistent with the modulation that the motion of the Earth through the dark-matter halo should produce. The DAMA collaboration presents this signal as evidence for the discovery of dark matter, and the scientific community awaits a confirmation, possibly with new, different techniques. The problem is that, until now, the other experiments seem to rule out DAMA-LIBRA’s result, although the comparison between different techniques is far from straightforward. Theoretical models still survive that reconcile all current experimental results with a positive discovery by DAMA-LIBRA.
Among today’s technologies, detectors employing cryogenic noble liquids occupy a pre-eminent position. These seem to allow for excellent signal-to-background discrimination, coupled with the possibility of building massive detectors. The Gran Sasso National Laboratory provided a natural location to discuss the future of these searches because it hosts three experiments, other than DAMA-LIBRA, that are competing for the discovery of dark matter, namely CRESST, WArP and XENON. The race is particularly interesting between the latter two because they use the same “double-phase” technique, but with different targets. XENON employs 160 kg of its eponymous noble element in liquid form, while WArP has a similar amount of liquid argon, a medium with which research groups at INFN have considerable expertise.
Carlo Rubbia, the spokesperson of the WIMP Argon Programme (WArP), opened the workshop with an excellent and comprehensive overview of the experimental landscape. This was followed by theoretical talks that helped to set up the general framework of the field. With regard to experimental activities, preliminary results from the XENON100 detector provided a highlight of the workshop. About 11 days of data have been analysed and were presented by the XENON spokesperson Elena Aprile, from Columbia University. The data show an extremely low background – the lowest ever reached – and raise even stronger expectations for future results.
Claudio Montanari of INFN presented the status of WArP, which has just started data-taking, while Wolfgang Seidel of the Max Planck Institute talked about interesting results from CRESST, a detector made from scintillating calcium-tungstate crystals. Activities beyond Gran Sasso were also discussed. Masaki Yamashita of Kamioka/Tokyo presented Xmass, a particularly promising detector based on liquid xenon, which is close to its commissioning phase in the Kamioka mine in Japan. Newer techniques also seem to be interesting and promising. These include the directional detectors that Neil Spooner of Sheffield University described and, in particular, the bubble chambers COUPP and PICASSO, which Nigel Smith of SNOLAB discussed in his extensive overview of dark-matter activities around the world.
Two stimulating talks were dedicated to the problem of backgrounds, especially from neutrons. Frank Calaprice of Princeton University and Vitaly Kudryavtsev of Sheffield University described these issues. The final session covered, in depth and in a critical manner, the issues of backgrounds, sensitivity and stability for each group of techniques.
Overall, the workshop revealed an extremely lively field, with existing detectors producing new results, others about to enter their commissioning phase, advanced projects being proposed for new underground facilities and intense theoretical activity. We all “wonder” if a discovery is just round the corner.