The dark side of computing power

On a recent visit to CERN, I had the chance to see how the high-energy physics (HEP) community was struggling with many of the same sorts of computing problems that we have to deal with at Google. So here are some thoughts on where commodity computing may be going, and how organizations like CERN and Google could influence things in the right direction.

First a few words about what we do at Google. The Web consists of more than 10 billion pages of information. With an average of 10 kB of textual information per page, this adds up to around 100 TB. This is our data-set at Google. It is big, but tractable – it is apparently just a few days’ worth of data production from the Large Hadron Collider. So just like particle physicists have already found out, we need a lot of computers, disks, networking and software. And we need them to be cheap.

The switch to commodity computing began many years ago. The rationale is that single machine performance is not that interesting any more, since price goes up non-linearly with performance. As long as your problem can be easily partitioned – which is the case for processing Web pages or particle events – then you might as well use cheaper, simpler machines.

But even with cheap commodity computers, keeping costs down is a challenge. And increasingly, the challenge is not just hardware costs, but also reducing energy consumption. In the early days at Google – just five years ago – you would have been amazed to see cheap household fans around our data centre, being used just to keep things cool. Saving power is still the name of the game in our data centres today, even to the extent that we shut off the lights in them when no-one is there.

Let’s look more closely at the hidden electrical power costs of a data centre. Although chip performance keeps going up, and performance per dollar, too, performance per watt is stagnant. In other words, the total power consumed in data centres is rising. Worse, the operational costs of commercial data centres are almost directly proportional to how much power is consumed by the PCs. And unfortunately, a lot of that is wasted.

For example, while the system power of a dual-processor PC is around 265 W, cooling overhead adds another 135 W. Over four years, the power costs of running a PC can add up to half of the hardware cost. Yet this is a gross underestimate of real energy costs, because it ignores issues such as the inefficiencies of power distribution within the data centre. Overall, even ignoring cooling, roughly a factor of two in power is lost between the point where electricity is fed into a data centre and the motherboards of its servers.
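
As a back-of-envelope check, the short sketch below adds the quoted system and cooling power and works out the four-year electricity bill of one machine running around the clock. The tariff, and hence the implied hardware price, are illustrative assumptions rather than figures from this article.

```python
# Rough sketch of the arithmetic above. Only the 265 W system power and the
# 135 W cooling overhead come from the text; the tariff is an assumption.
HOURS_PER_YEAR = 24 * 365


def four_year_energy_cost(system_w=265, cooling_w=135,
                          tariff_usd_per_kwh=0.10, years=4):
    """Electricity cost of one PC running continuously, cooling included."""
    total_kw = (system_w + cooling_w) / 1000.0
    kwh = total_kw * HOURS_PER_YEAR * years
    return kwh * tariff_usd_per_kwh


if __name__ == "__main__":
    cost = four_year_energy_cost()
    print(f"Four-year electricity cost: ${cost:,.0f}")        # roughly $1400
    print(f"Half the hardware cost if the machine cost ${2 * cost:,.0f}")
```
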
Since I’m from a dotcom, an obvious business model has occurred to me: an electricity company could give PCs away – provided users agreed to run the PCs continuously for several years on the power from that company. Such companies could make a handsome profit!

A major inefficiency in the data centre comes from the DC power supplies, which are typically only about 70% efficient. At Google ours are 90% efficient, and the extra cost of this higher efficiency is easily recouped through the reduced power consumption over the lifetime of the power supply.
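
Treating the quoted efficiencies as constant across load – an assumption made here only for a quick estimate – the saving at the wall for a server drawing a given DC load is

\[
P_{\rm wall} = \frac{P_{\rm load}}{\eta}, \qquad
\Delta P = P_{\rm load}\left(\frac{1}{0.7} - \frac{1}{0.9}\right) \approx 0.32\,P_{\rm load},
\]

or roughly 85 W for the 265 W system quoted earlier – more than a fifth of what a 70%-efficient supply would draw from the mains.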

Part of Google’s strategy has been to work with our component vendors to get more energy-efficient equipment to market earlier. For example, most motherboards have three DC voltage inputs, for historical reasons. Since the processor actually works at a voltage different from all three of these, this is very inefficient. Reducing this to one DC voltage produces savings, even if there are initial costs involved in getting the vendor to make the necessary changes to their production. The HEP community ought to be in a similar position to squeeze extra mileage out of equipment from established vendors.

Tackling power-distribution losses and cooling inefficiencies in conventional data centres also means improving the physical design of the centre. We employ mechanical engineers at Google to help with this, and yes, the improvements they make in reducing energy costs amply justify their wages.

While I’ve focused on some negative trends in power consumption, there are also positive ones. The recent switch to multicore processors was a successful attempt to reduce processors’ runaway energy consumption. But Moore’s law keeps gnawing away at any ingenious improvement of this kind. Ultimately, power consumption is likely to become the most critical cost factor for data-centre budgets, as energy prices continue to rise worldwide and concerns about global warming put increasing pressure on organizations to use electrical power more efficiently.

Of course, there are other areas where the cost of running data centres can be greatly optimized. For example, networking equipment lacks commodity solutions, at least at the data-centre scale. And better software to turn unreliable PCs into efficient computing platforms can surely be devised.

In general, Google’s needs and those of the HEP community are similar. So I hope we can continue to exchange experiences and learn from each other.

LHC project passes several milestones

Progress on the construction of the Large Hadron Collider (LHC) at CERN has passed several important milestones in recent weeks. In mid-September the first 600 m of the cryogenic distribution line that will supply superfluid helium to the superconducting magnets passed initial testing at room and cryogenic temperatures. At the same time, the number of magnets installed in the tunnel passed the 100 mark, and several major contracts related to their construction have been successfully completed.

The tests of the cryogenic line, which were the first to be implemented at close to the eventual operating conditions in the LHC tunnel, took place in sector 7-8. This is where technical problems were discovered during the initial installation in summer 2004, so that the system had to be redesigned, repaired and reinstalled.

After several days of testing and cleaning at room temperature, the cool-down itself took 15 hours. This is a two-stage process using a 4.5 K helium refrigerator and a nitrogen pre-cooler. After the initial 10 hours of cool-down, the system reached the first temperature plateau of 80 K. Then, by the evening of 14 September, the cryogenic line had been brought down to around 5 K, about 3 K above the eventual operating temperature. The complete cold-commissioning process takes about five weeks. Once the thermal design has been validated, the magnets can then be connected to the cryogenic line.

Meanwhile, by the end of September, 102 of the LHC’s 1232 superconducting dipoles had been put in position in the tunnel. At the same time one of the most important contracts for the LHC was successfully concluded, with the supply of all 7000 km of the superconducting cable that forms the heart of the machine’s magnets. This cable has been provided by four companies in Europe – Alstom-MSA (France), EAS (Germany) and Outokumpu (Finland/Italy) – together with Furukawa in Japan and OKAS in the US.

This was the latest in a series of contracts for the LHC that have recently come to completion. At the end of May, Belgian firm Cockerill Sambre of the Arcelor Group cast the last batch of steel sheets for the superconducting magnet yokes, which constitute around 50% of the accelerator’s weight. This was the first major contract to be concluded for the LHC; worth 60 million Swiss francs, it was signed just after CERN Council approved the LHC project in December 1996.

October saw the completion of the 60 km of vacuum pipes for the LHC beams by a single firm, DMV of Bergamo, Italy. These 16 m long pipes, made from austenitic steel, had to be extruded in one continuous piece, without a single weld, to ensure perfect leak tightness between the vacuum inside and the superfluid helium outside. In the first week of September, the last rolls of austenitic steel for the collars of the dipole magnets arrived at CERN from NSSC (Nippon Steel) in Japan. The collars are designed to contain most of the magnetic forces created in the eight layers of superconducting coil that provide the magnetic field.

The production of the collared coils is also well on track. On 8 August Babcock Noell Nuclear (BNN) delivered their last collared coil, completing their contract for one-third of the dipole magnet coils. The contracts with the two other suppliers will also come to an end during the autumn of 2006.

Silicon trackers begin to take shape for CMS and ATLAS

Over the past few months the silicon microstrip tracker of the CMS experiment has been making steady, and rapid, progress towards meeting its next major target – installation of the complete detector in its site at intersection point 5 on the Large Hadron Collider in November 2006.

This has been especially encouraging to the CMS collaboration as the past year has seen significant problems with relatively small details in a few key components, delaying the assembly of modules and their subsequent integration into the mechanical superstructure. However, these problems have now been overcome and the subsequent assembly speed of several inner layers of the tracker has demonstrated the readiness of the teams of engineers and physicists, who had used some of the time during the pauses to refine their procedures.

The CMS tracker will be the largest silicon system ever built, with more than 200 m2 of silicon microstrips surrounding three layers of pixel detectors in a cylindrical barrel-like layout, with end-caps completing the tracking in the forward and backward regions. The construction involves teams from all over Europe and the US, who have developed components and pioneered automated techniques to manufacture modules that must withstand the stringent conditions at the heart of the CMS.

The inner barrel (see cover picture) is the responsibility of an Italian consortium. The delivery of the first half to CERN is expected this month, followed by the second half in January 2006. While tests begin on the inner barrel in a brand new integration facility, which is currently being erected at CERN, it will be joined by, and later inserted inside, the outer barrel system. This is largely the responsibility of CERN, and consists of modules arranged in rods that are being manufactured in the US by teams who have experience from Fermilab experiments. The two end-caps will complete the assembly in mid-2006; one will be built by a French team in the facility at CERN, the other by a German team in Aachen.

The remaining off-detector electronics and cooling systems are also beginning to arrive at CERN. These will allow the completed tracker to be studied for several months before it is moved to its final underground location at the centre of the CMS. Once in operation it will provide precise radiation-hard tracking for many years.

Meanwhile, September saw an important milestone for the ATLAS inner detector project with the delivery of the fourth and final Semiconductor Tracker (SCT) barrel to CERN. A few days after delivery, on 20 September, the barrel was integrated into the final configuration of the full barrel assembly.

The SCT has a silicon surface area of 61 m2 with about six million channels and is part of the ATLAS inner detector, where charged tracks will be measured with high precision. More than 30 institutes from around the world have contributed to building the component parts and structure of the SCT.

Moving outwards from the interaction region, the ATLAS inner detector comprises the pixel detector (consisting of three pixel layers), the SCT (four silicon strip layers) and the transition radiation tracker, or TRT (consisting of about 52,000 straw tubes).

During 2004 a team of physicists, engineers and technicians from several SCT institutes set up one of the largest silicon quality-assurance systems ever built (corresponding to about 15% of the final ATLAS readout system), which was capable of analysing the performance of one million sensor elements on nearly 10 m2 of silicon detectors simultaneously. Using this system to test barrels prior to their integration, the team found that more than 99.6% of the SCT channels were fully functional, an exceptionally good performance that exceeded specifications. The work is taking place in the SR1 facility at CERN, which was purpose-built by the ATLAS inner detector collaboration and houses a 700 m2 cleanroom.

This month the ATLAS inner detector teams will integrate the silicon tracker with the barrel TRT and test their combined operation in SR1. At the end of this year the SCT end-caps will arrive at CERN, and then be inserted into the TRT end-caps during spring 2006. In March 2006 the inner detector team will then place the barrel inner tracker in a steel frame and transport it to the ATLAS underground cavern. The entire integration process is scheduled to be finished at the end of 2006, when the all-important pixel detector will be inserted in the tracker.

The whole assembly of the inner detector will sit in the 2 T magnetic field of the central superconducting solenoid, which has a diameter of about 2.5 m and will deflect the tracks of charged particles passing through the inner detector. The much larger air-core toroid magnet system (see the CERN Courier cover picture, September 2005) will deflect the tracks of muons, which penetrate to the outer reaches of the huge ATLAS detector.

Rewards for optics in theory and practice

The 2005 Nobel prize in physics has been awarded to three physicists working in the field of optics, in recognition of past advances in the understanding of light as well as the present-day potential of laser-based precision spectroscopy. Roy Glauber of Harvard University receives half the prize for “his contribution to the quantum theory of optical coherence”, while John Hall of the University of Colorado and Theodor Hänsch of the Max-Planck-Institut für Quantenoptik in Garching share the other half for “their contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique”.

The recognition of Glauber’s work comes appropriately enough in 2005, the centenary of Albert Einstein’s work on the photoelectric effect, in which he described radiation in terms of quanta, later termed photons. Glauber’s aim in his seminal paper of 1963 was to move from a semi-classical description of the photon field in a light beam towards a full quantum theoretical description, in particular to describe correlation effects. In Glauber’s words, “There is ultimately no substitute for the quantum theory in describing quanta.”

Glauber’s name is also familiar in particle physics, however, where he is widely known for his “Glauber model”, which nowadays has a range of applications in understanding heavy-ion interactions. In August 2005 he gave an opening talk at the Quark Matter 2005 conference in Budapest, 50 years after his original paper using diffraction theory to develop a formalism for calculating cross-sections in nuclear collisions. Glauber himself has regularly spent time as a visiting researcher in CERN’s theory division, from 1967 until the mid-1980s.

The work of Hall and Hänsch is by contrast a tour de force in experimentation. In developing a measurement technique known as the optical frequency comb, they have made it possible to measure light frequencies to an accuracy of 15 digits. The “comb” exploits the interference of lasers of different frequencies, which produces sharp, femtosecond pulses of light at extremely precise and regular intervals. This allows precise measurements to be made of light of all frequencies and has many applications in both fundamental and applied fields.
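
In outline – this is the standard textbook picture rather than anything specific to the laureates’ papers – the comb lines sit at

\[
f_n = f_{\rm CEO} + n\,f_{\rm rep},
\]

where f_rep is the pulse repetition rate, f_CEO the carrier-envelope offset frequency and n an integer of order 10^5–10^6. An unknown optical frequency is then determined from its beat note with the nearest comb line, with both f_rep and f_CEO locked to a radio-frequency standard.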

In particular, in particle physics the technique is allowing precise measurements of asymmetries between matter and antimatter, and of possible drifts in the fundamental constants. Hänsch himself is a member of the ATRAP collaboration, which has successfully made antihydrogen at CERN’s Antiproton Decelerator (AD). Moreover, the frequency comb technique is being used in the ASACUSA experiment at the AD, which studies the spectroscopic properties of antiprotonic helium.

Barish presents plans for the ILC

The schedule of the Global Design Effort (GDE) for the future International Linear Collider (ILC) was an important topic at the meeting in September of CERN’s Scientific Policy Committee. Barry Barish, head of the GDE, presented a report on the progress made since the International Technology Review Panel announced the technology choice for the ILC in August 2004.

Since the first ILC workshop, which was held at KEK in November 2004, work has been progressing towards a reference design. This year a second workshop was held in August at Snowmass in the US to refine the ideas. The reference design should be completed by the end of 2006, to be followed by a technical design report two years later. By 2010 the technical design report, together with the scientific results from the Large Hadron Collider and input from the CLIC Test Facility (CTF3) at CERN, will allow a decision on the future of the ILC.

KEDR adds new precision to meson mass measurements

In October 2005 the VEPP-4M collider at the Budker Institute of Nuclear Physics started its latest run with the KEDR detector. This continues a series of experiments that exploit the method of resonant depolarization (which was proposed and developed at the Budker Institute) to make precise measurements of particle masses in the region from the Ψ to the Υ mesons (Skrinsky and Shatunov 1989).

Progress in understanding the resonant depolarization technique, as well as a new detection system for Touschek electron pairs (produced by intrabeam scattering), has resulted in a significant improvement in the accuracy of the beam-energy determination with KEDR. The error in a single measurement of the beam energy has reached a level of 1 keV, corresponding to a relative accuracy of 0.7 ppm. Figure 1, for example, illustrates a very clear jump in the counting rate of Touschek pairs, allowing a precise measurement of the depolarization frequency, which is directly related to the beam energy. In 2002 this led to a measurement of the mass of the J/Ψ with a relative accuracy of 4 ppm: M(J/Ψ) = 3096.917 ± 0.010 ± 0.007 MeV (Aulchenko et al. 2003). Compared with the previous experiment in 1980, this represented a sevenfold decrease in the uncertainty in the mass.
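
For orientation – taking a beam energy of about 1.5 GeV, roughly half the J/Ψ mass, purely as an illustrative scale – the quoted figure follows directly:

\[
\frac{\sigma_E}{E_{\rm beam}} \approx \frac{1\ {\rm keV}}{1.5\ {\rm GeV}} \approx 0.7 \times 10^{-6}.
\]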

In 2004 the masses of the Ψ’ and Ψ(3770) were measured in a second run in KEDR. The results, which are shown in figure 2, were presented recently at the HEP2005 conference in Lisbon in July. The preliminary values of the masses of the Ψ’ and Ψ(3770) are 3686.117 ± 0.012 ± 0.015 MeV and 3773.5 ± 0.9 ± 0.6 MeV, respectively.

The precise measurement of the masses of the J/Ψ and Ψ’ mesons provides a mass scale in the energy region around 3 GeV, which forms the basis for an accurate determination of the masses of all charmed particles and of the τ lepton. Since the width of the τ is proportional to the fifth power of its mass, high-precision tests of the Standard Model are very sensitive to the accuracy of this mass. At present the accuracy of the τ mass is dominated by the measurement made with the Beijing Spectrometer (Bai et al. 1996).
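
To spell out the sensitivity – a one-line propagation of errors added here for clarity rather than taken from the article –

\[
\Gamma_\tau \propto m_\tau^5 \quad\Longrightarrow\quad
\frac{\delta\Gamma_\tau}{\Gamma_\tau} = 5\,\frac{\delta m_\tau}{m_\tau},
\]

so any relative uncertainty in the τ mass enters the predicted width five times over.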

KEDR began the measurement of the mass of the τ in spring 2005. Using the same method as the Beijing Spectrometer, the collaboration plans to determine the mass by measuring the energy dependence of the cross-section near threshold. The aim is also to improve the statistics of τ decays, benefiting from the precise knowledge of the beam energy. In KEDR the energy is measured by two methods: resonant depolarization for a high-accuracy measurement once a day, and Compton backscattering for monitoring the beam-energy drift during data collection. Data processing is currently in progress.

VLT astronomers discover new population of distant galaxies

A team using the Very Large Telescope (VLT) of the European Southern Observatory (ESO) has identified a much larger population of distant galaxies than previously estimated. The new population mainly consists of galaxies forming stars at a very high rate.

The determination of the number of galaxies in the universe at different epochs is crucial for constraining models of the formation and evolution of galaxies. Counting galaxies in deep astronomical images is relatively simple, but measuring their redshift – hence, their distance and the epoch in the history of the universe when we see them – requires taking a spectrum of each galaxy.

Until now, measuring the spectrum of distant and therefore faint galaxies required a great deal of observing time on the largest telescopes. Astronomers therefore had to select candidate high-redshift galaxies carefully, based on their brightness and colour. However, it now seems that these selection criteria have been too restrictive, missing a large population of distant galaxies with strong ultraviolet emission.

The discovery of this population of bright and distant galaxies was made possible by the Visible Multi-Object Spectrograph (VIMOS) on Melipal, one of the four 8.2 m telescopes of the VLT. Instead of measuring the spectrum of one galaxy at a time, VIMOS can measure simultaneously the spectra of about 1000 galaxies in a single field.

The unique capabilities of VIMOS allowed a team of French and Italian astronomers to determine systematically the redshift of all the galaxies in a given sky area and a given range of brightness. From a total of about 8000 galaxies, almost 1000 were found to lie at redshifts between 1.4 and 5, corresponding to look-back times of 9-12 billion years.
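
The conversion from redshift to look-back time depends on the assumed cosmology. The minimal sketch below uses astropy with a present-day parameter set (Planck18), chosen purely for illustration; it is not the cosmology adopted in the original analysis.

```python
# Minimal sketch of the redshift-to-look-back-time conversion quoted above.
# Planck18 is an illustrative choice of cosmological parameters, not the
# set used by the VIMOS team.
from astropy.cosmology import Planck18 as cosmo

for z in (1.4, 5.0):
    print(f"z = {z}: look-back time ~ {cosmo.lookback_time(z):.1f}")
# Gives roughly 9.2 Gyr and 12.6 Gyr - i.e. "looking back 9-12 billion years".
```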

The results published by O Le Fèvre and collaborators show that the number density of galaxies at a redshift of around 3 exceeds previous estimates by a factor of 1.6 for the faintest galaxies and of 6.2 for the brightest ones. Around a redshift of 4 the number of galaxies was underestimated by a factor of 2-3.5. It seems, therefore, that a large population of bright galaxies at high redshift was completely overlooked.

The newly identified population escaped previous studies mainly because of its relatively strong ultraviolet emission, which comes from massive young stars. The ultraviolet luminosity of these galaxies indicates star-formation rates in the region of 10-100 solar masses per year; by comparison, only a few solar masses of gas and dust are converted into stars in the Milky Way every year. This discovery has profound implications for the history of star formation in the universe and for current theories of the formation and evolution of galaxies.

Further reading

O Le Fèvre et al. 2005 Nature 437 519.

MAGIC and Swift capture GRB

The Major Atmospheric Gamma Imaging Cherenkov telescope (MAGIC) at La Palma, Canary Islands, has observed a gamma-ray burst seconds after its explosion was detected by NASA’s Swift satellite. It is the first time that a gamma-ray burst has been observed simultaneously in the X-ray and very-high-energy gamma-ray bands.

MAGIC detects cosmic gamma rays through the showers of charged particles they create in the atmosphere. With a tessellated mirror surface area of nearly 240 m2, it is the largest air Cherenkov telescope ever built and has been designed to be more sensitive to lower-energy gamma rays than other ground-based instruments. In this case, it was the ability to track rapidly – and the prompt action of the operators – that allowed the telescope to observe GRB050713A, a long-duration gamma-ray burst, only 40 s after its explosion on 13 July. MAGIC’s lightweight and precise mechanics allow it to rotate completely in 22 s.

Observations of GRB050713A began only 20 s after an alert from Swift, a member of the Gamma-ray Burst Coordinates Network, which distributes the locations of bursts detected by spacecraft. In the case of Swift this happens in real time, so MAGIC was able to slew to the burst while it was still active in the X-ray range.

A first look at the MAGIC data did not reveal strong gamma-ray emission above 175 GeV, and indeed the flux limit derived at very high energies by MAGIC is extremely low, two to three orders of magnitude below the extrapolation from lower energies. The upper limit for the flux of energetic gamma rays is consistent with the expected flux from a gamma-ray burst at high redshift, strongly attenuated by cosmological pair production. These observations were reported at the 29th International Cosmic Ray Conference held in Pune, India, on 3-10 August; a detailed analysis of the data is in progress.

• MAGIC is managed by 17 institutes from Germany, Italy, Spain, Switzerland, Finland, the US, Poland, Bulgaria and Armenia.

CERN and Poland sign agreement

On 29 July, the rector of the AGH University of Science and Technology in Cracow, Ryszard Tadeusiewicz, and CERN’s director-general, Robert Aymar, signed a collaboration agreement relating to the commissioning of the instrumentation and monitoring equipment for the cryogenic system of the Large Hadron Collider (LHC). A team consisting of 12 physicists, engineers and technicians from the AGH University will assist teams at CERN in commissioning the cryogenic system in the tunnel.

This is the first in a series of agreements that will relate to the commissioning of the LHC’s various systems. From the end of this year until the summer of 2007, CERN will enlist the aid of physicists, engineers and technicians from many different institutes in order to complete the tasks associated with the start-up of the accelerator.

Neutrino project on target for Gran Sasso

The CERN Neutrinos to Gran Sasso project (CNGS) has reached an important milestone with the successful first assembly of the target in a laboratory on the surface. Now the target is being dismantled prior to installation in its final location in the underground chamber.

On schedule for start-up in May 2006, CNGS will send a beam of neutrinos through the Earth to the Gran Sasso laboratory 730 km away in Italy, north-east of Rome, in a bid to unravel the mysteries of these elusive particles. To create the beam, a 400 GeV/c proton beam will be extracted from CERN’s Super Proton Synchrotron and directed towards the CNGS target, which consists of a series of graphite rods installed in a sealed container filled with helium. Positively charged pions and kaons produced by the proton interactions in the target will then be focused into a parallel beam by a system of two pulsed magnetic lenses – the horn and the reflector.

A 1 km-long evacuated decay pipe allows the pions and kaons to decay, in particular into muon-neutrinos and muons. The remaining hadrons (protons, pions and kaons) are absorbed in an iron beam dump with a graphite core. The muons will be monitored in two sets of detectors downstream of the dump, and then absorbed further downstream in the rock, while the neutrinos continue on towards Gran Sasso.

The target itself consists of 13 graphite rods, each 10 cm long and 4 or 5 mm in diameter. The first nine rods are separated by 9 cm air gaps, while the last four rods are mounted with no gaps between them; the 13 rods are installed together in a target unit. The CNGS target station contains five such units – one in use and four spares – held in a rotatable target magazine. Together with a novel beam-position monitor (an electromagnetic coupler operated in air), the target magazine is installed on an alignment table. The four jacks that adjust the position of this table are fixed to a base table, and the entire assembly is installed inside an array of massive iron shielding blocks.

The neutrino beam will be completely installed by the end of 2005, and the first beam of neutrinos should head off next May.
