by Ernest M Henley & Alejandro Garcia, World Scientific. Hardback ISBN 9789812700568 £56 ($98). Paperback ISBN 9789812700575 £33 ($58).
This is the third and fully updated edition of a classic textbook, which provides an up-to-date and lucid introduction to both particle and nuclear physics. Topics are introduced with key experiments and their background, encouraging students to think and allowing them to do back-of-the-envelope calculations in a diversity of situations. Suitable for experimental and theoretical physics students at the senior undergraduate and beginning graduate levels, the book covers earlier important experiments and concepts as well as topics of current interest, with extensive use of photographs and figures to convey concepts and experimental data.
This autumn, commissioning should be in full swing on the LHC at CERN, the world’s largest laboratory for the study of subnuclear physics. So it is entirely appropriate that the 46th Course of the International School of Subnuclear Physics, the oldest of the 123 schools of the Ettore Majorana Foundation and Centre for Scientific Culture (EMFCSC) in Erice, will look at what may come from the LHC – both the expected and the unexpected.
This year’s course, directed by Antonino Zichichi and Gerardus ’t Hooft, is to be held in Erice in September. It will provide the perfect opportunity to focus on the highlights from CERN, and in particular the goals of the LHC. This was also the theme of the 45th in the series, held in 2007, when CERN’s director-general, Robert Aymar, stated that these goals “could determine the future course of high-energy physics and should allow us to go beyond the Standard Model”.
Physics beyond the Standard Model first appeared before the Standard Model itself, when Raymond Davis observed neutrinos from the Sun in the 1960s. At Erice last year, Alessandro Bettini from the Galileo Galilei physics department at Padua University pointed out: “From 1962 neutrinos were used to look into the Sun’s core, but their behaviour was totally unexpected.” This led to the case for neutrino oscillations – a phenomenon that the Italian Laboratori Nazionali del Gran Sasso (LNGS) is studying through the CERN Neutrinos to Gran Sasso project, which started in August 2006. “The observation of neutrino oscillations has now established beyond doubt that neutrinos have mass and mix,” claimed Eugenio Coccia, director of LNGS, during his talk. “The existence of neutrino masses is the first solid experimental fact requiring physics beyond the Standard Model.”
The physics of neutrinos is also linked to the unseen matter of the universe. In 1933, Fritz Zwicky, on measuring the mean-square velocity of galaxies, proposed the existence of a kind of “invisible matter” – he named it dark matter – that could have neither electromagnetic nor strong nuclear interactions. Neutrinos became the obvious candidates for dark-matter particles, but the study of the evolution of large-scale structures in the universe has unexpectedly shown that the contribution of neutrinos must be extremely small, if it exists at all. Indeed, no Standard Model particle can be considered as the dominant component of dark matter. One new particle candidate is the sterile neutrino, as Lisa Randall from Harvard University explained last year. “This new ‘flavour’ of neutrino could be trapped, like gravitons, in a different brane from the one we live on,” she said. “For this reason we have not observed it directly so far. But the LHC should manage to see many particles that were created during the dawn of the universe and disappeared soon after the Big Bang.”
There are many questions in particle physics that the LHC could help to solve, and the 46th course will again discuss these this year. A key question is whether what the LHC will find is predictable.
To answer this, during his talk at the 45th course, Zichichi recalled a front-line scientist of the 20th century, whose birth centenary was celebrated last year at the World Nuclear Physics Conference in Tokyo. In 1935 Hideki Yukawa proposed the existence of a particle with a mass between that of the light electron and the heavy nucleon – the first meson. “No-one was able to predict the ‘gold mine’ hidden in the production, decay and intrinsic structure of the Yukawa particle,” said Zichichi. “This gold mine is still being explored today, and its present frontier is the quark–gluon-coloured world.” Zichichi also pointed out: “It is considered standard wisdom that nuclear physics is based on perfect theoretical predictions, but people forget the impressive series of unexpected events with enormous consequences [UEEC] discovered inside the Yukawa gold mine.”
Such UEEC events are a common feature of the greatest scientific discoveries and the most important historical facts. However, there is a difference. Analysing history on the basis of “what if?” leads historians to conclude that the world would not be as it is if one or any number of “what if?” events had not occurred. This is not the case for science, as Zichichi underlines: “The world would have exactly the same laws and regularities, even if Galileo Galilei or somebody else had not made their discoveries.”
UEEC events will be crucial evidence for understanding the existence of complexity at the elementary level. “No one could predict a UEEC event on the basis of present knowledge,” Zichichi pointed out. “In fact predictions are based on the mathematical description of UEEC events, so they come only after a UEEC event. Moreover, we should be prepared with powerful experimental instruments, technologically at the frontier of our knowledge, to discover all the pieces of the Yukawa gold mine.”
With the LHC, CERN now has a supercollider that will study the properties of a “new world” produced in collisions between heavy nuclei (²⁰⁸Pb⁸²⁺) at the maximum energy so far available (1150 TeV). This world is the quark–gluon-coloured world, totally different from everything that we have dealt with so far.
As Aymar underlined: “If new physics is there, the LHC should find it.” There is nothing left for us but to await the unexpected.
Lisa Randall is the first female tenured theoretical physicist at Harvard University. This alone would probably be enough to raise the interest of most science journalists who are all too often confronted with the endless search for a female face who would look good in their newspapers, and make science somehow more human to non-scientific readers. Search her name in Google and read articles about her, then read her most recent book, and you realize that she is also one of the small band of physicists who can write popular science books. Then meet her, as I did at CERN, and you discover a no-nonsense person who finds it “normal” to deal with extra dimensions and parallel universes, as well as hidden gravitons and quantum gravity.
Randall has visited CERN many times, staying for several months in 1992–1993, when she worked on B physics and also on ideas in supersymmetry and supersymmetry breaking. These ideas have since evolved, and she is now one of the world’s experts in the theory of extra dimensions, one of the solutions proposed for the puzzling question of quantum gravity. According to these theories, our universe could have extra dimensions beyond the four that we experience – three of space and one of time.
The idea of an extra dimension is simple to state, but how can we picture extra dimensions in our three-dimensional minds? As Randall concedes, extra dimensions can be explained primarily through analogies, such as Edwin Abbott’s Flatland. If you lived on a two-dimensional surface and could see only two dimensions, what would a three-dimensional object become for you? “In order to answer, you would have to explore your object in your two-dimensional view,” she explains. “The slice would be two dimensional but the object would still be three dimensional.” This is to say that, although extra dimensions are difficult to imagine in our limited three-dimensional world, we can nevertheless explore them.
Warping in a universe with extra dimensions would be an amazing discovery, but does Randall expect to find any evidence? The LHC, she explains, could hold the key. “The LHC will allow us to explore an energy scale never reached before – the TeV scale. We know there are questions about this particular scale. We know the simple Higgs theory is incomplete, so there should be something else around. That’s why people think it should be supersymmetry or extra dimensions, something just explaining why the Higgs boson is as light as it is,” she explains. Randall works in particular on the idea of warped geometry. If this is true, experiments at the LHC should see particles that travel in extra dimensions, the mass of which is around the tera-electron-volt scale that the LHC is studying.
One fascinating area of modern physics linked to extra dimensions is that of quantum gravity. Gravity is the best known among the forces that we experience every day, yet there is no theory that can describe it at the quantum level. Gravity also still holds secrets experimentally, because its force-carrying particle, the graviton, remains hidden from view, but Randall’s theories of extra dimensions could shed light here, too.
Could the graviton be found in the additional dimensions, and therefore in the proton–proton collisions at the LHC? “We don’t know for sure,” says Randall, “but the Kaluza–Klein partner of the graviton – the partner of the graviton that travels in extra dimensions – might be accessible.” It seems that even for the theorists leading the field, the theory is a little tricky to understand. “You have one graviton that doesn’t have any mass,” she explains, “and it acts just as a graviton is supposed to act in four dimensions. And you have another graviton that has momentum in the extra dimensions: it will look like a massive graviton according to four-dimensional physics. The particle will have momentum in the fifth dimension and this is the part that we will be able to see.”
The quantum effects of gravity have also led theorists to talk of the possibility that black holes could be formed at the LHC, but Randall remains sceptical. “I don’t really think we will find black holes at the LHC,” she says. “I think you’d have to get to even higher energy.” It is more likely in her opinion that experiments will see signs of quantum gravity emerging from a low-energy quantum gravity scale in higher dimensions. However, she admits: “If we really were able to have enough energy to see a black hole, it would be exciting. A black hole that you could study would be very interesting.”
Interesting, indeed, but also scary, because black holes have always been described as “matter-eaters”. However, there is nothing to fear. Massive black holes can only be created in the universe by the collapse of massive stars. These contain enormous amounts of gravitational energy, which pull in surrounding matter. Given the collision energy at the LHC, only microscopic and rapidly evaporating black holes can be produced in the collisions. Even if this does occur, the black holes will not be harmful: cosmic rays with energies much higher than at the LHC would already have produced many more black holes in their collisions with Earth and other astrophysical objects. The state of our universe is therefore the most powerful proof that there will be no danger from these high-energy collisions, which occur continuously on Earth.
So much for black holes, but I am still full of curiosity about Randall. What, for example, originally sparked her interest in physics? “I actually liked math first more than physics,” she says, “because when I was younger that is what you got introduced to first. I loved applying math a little bit more to the real world – at least what I hope is the real world.” Now, as a leading woman in a male-dominated research field, and as the author of a popular book, Warped Passages, she is the focus of media attention. She finds some of this surprising but notes that it’s not just attention to her but to the field in general. One of the motivations she had for writing her book was that people are excited about the LHC. She saw the chance to give them the opportunity to find out more about what it will do. “These are difficult concepts to express. You could give an easy explanation or you could try to do it more carefully in a book. One of the very rewarding things is that a lot of people who have read my book have said they can’t wait for the LHC; they can’t wait to see what they are going to find. So it is exciting when you give a lecture and thousands of people are there – it’s exciting because you know that so many people are interested.” On the other hand, she finds some of the specific types of reporting disturbing, because it shows how far society still has to go: “We haven’t reached the point where it’s usual for women to be in the field.”
In addition to her work on black holes, gravity and so on, Randall is currently working on ideas of how to look for different models at the LHC, and how to look for heavier objects, such as the graviton, that might decay into energetic top quarks. She is also trying to explore alternative theories. “I’m not sure how far we’ll go in things like supersymmetry,” she says, “I’m playing around with models and ways to search for it at the LHC.”
Yes, physics is about playing around with ideas – ideas that nobody has ever had before but that have to be tested experimentally. The LHC will shed light on some of the current mysteries, and Randall, who like many others has played around with ideas for years, can’t wait for this machine to produce the experimental answers.
The ALICE experiment at the LHC is optimized for the study of heavy-ion collisions to investigate the behaviour of strongly interacting matter under extreme conditions of compression and heat. The interpretation of the data will rely on a systematic comparison of measurements with the same observables in proton–proton (pp) and proton–nucleus (pA) collisions, as well as in collisions of lighter ions under the same experimental conditions. The tracking and particle-identification capabilities of ALICE are designed to allow a precise study of these benchmarking processes and to perform efficiently in the particularly demanding conditions of the heavy-ion programme.
A key characteristic of heavy-ion collisions at the LHC energy is the high number of particles produced per event, more than two orders of magnitude higher than in a typical proton–proton collision in the central region. The design of ALICE is optimized for a charged-particle multiplicity of around 4000 and has been tested with simulations up to double this number. The use of mainly 3D hit information with many points (up to 150) in a moderate magnetic field of 0.5 T makes the tracking particularly safe and robust.
Tracking in the central barrel of ALICE is divided into a six-layer silicon-vertex detector, which forms the inner tracking system (ITS) surrounding the beam pipe, and the time-projection chamber (TPC). The main functions of the ITS are the localization of the primary vertex (with a resolution better than 100 μm), the reconstruction of the secondary vertices from the decays of D and B mesons and hyperons, the tracking and identification of particles with momentum below 200 MeV/c, and the improvement of the momentum and angle resolution for particles reconstructed by the TPC. The silicon pixel detector (SPD) forms the innermost two layers of the ITS, which is surrounded by two layers of drift detectors and two layers of double-sided microstrips. The drift and microstrip layers are equipped with analogue readout for independent particle identification via energy loss, dE/dx, in the non-relativistic region, thus providing the ITS with stand-alone capability as a spectrometer for particles with low transverse momentum, pt.
The SPD will operate in a region where the track density could be as high as 50 tracks/cm2. It has a key role in the determination of the position of the primary vertex and in the measurement of the impact parameter of secondary tracks originating from the weak decays of strange, charm and beauty particles. The active length of the two SPD layers is about 28 cm, with an acceptance coverage in pseudorapidity of η = ±2.0 for the inner layer and η = ±1.4 for the outer one, located around the beam pipe at average distances of 39 mm and 76 mm from the beam axis, respectively. The smallest clearance between the inner layer and the wall of the beam pipe – an 800 μm thick beryllium cylinder with an outer diameter of 59.6 mm – is less than 5 mm.
A distinctive feature of the SPD is the reduced amount of material seen by traversing particles. The resolutions in momentum and impact parameter for low-momentum particles are dominated by multiple scattering in the material of the detector. To keep the pt cutoff as low as possible, the SPD design uses several specific solutions to minimize the amount of material in the active volume. The result is that a straight track perpendicular to the detector surface traverses on average an amount of material per layer corresponding to only about 1% of a radiation length.
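To get a feel for why a 1% radiation-length budget matters for low-momentum tracks, here is a minimal Python sketch using the standard Highland (PDG) multiple-scattering formula; the 200 MeV/c pion and its β are our illustrative choices, not numbers taken from the article.

```python
import math

def scattering_angle_mrad(p_mev, beta, x_over_X0):
    """Highland (PDG) formula for the RMS projected multiple-scattering
    angle of a singly charged particle; returns milliradians."""
    theta0 = (13.6 / (beta * p_mev)) * math.sqrt(x_over_X0) \
             * (1 + 0.038 * math.log(x_over_X0))
    return theta0 * 1e3

# A 200 MeV/c pion (beta ~ 0.82) crossing one SPD layer (~1% of X0):
print(scattering_angle_mrad(200.0, 0.82, 0.01))  # ~7 mrad per layer
```

Doubling the material per layer would increase this deflection by roughly √2, degrading the impact-parameter resolution accordingly.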
Another important consideration is the amount of radiation to which the SPD will be exposed during LHC operation. For 10 years of a standard running scenario, the integrated radiation levels of total dose and fluence on the inner layer are estimated to be 2.7 kGy and 3 × 10¹² n/cm² (1 MeV neutron equivalent), respectively. While this is lower than for other LHC detectors, the on-detector ASICs for the SPD have nevertheless been implemented in radiation-hard, deep-submicron technology, like the other more demanding cases at the LHC. The relatively modest radiation levels allow the detector to operate at ambient temperature without the risk of significant long-term degradation of the sensor characteristics. The total power dissipation in the on-detector electronics is around 1.35 kW. This is not high; however, because the mass of the detector is low, if the cooling system were to fail, the temperature would rise at a rate of about 1 °C/s. For this reason, the SPD has fast-acting, redundant temperature safety systems.
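The quoted heating rate follows from simple calorimetry, dT/dt = P/(mc). A back-of-the-envelope sketch; the article gives only the power, so the thermal mass and specific heat below are assumptions chosen to illustrate the point.

```python
# Why a cooling failure heats the SPD so quickly: dT/dt = P / (m * c).
P = 1350.0         # W, total on-detector power dissipation (from the article)
c_silicon = 700.0  # J/(kg K), specific heat of silicon (textbook value)
m = 1.9            # kg, assumed effective mass of the heated structure

print(P / (m * c_silicon))  # ~1 K/s, matching the quoted rate
```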
The basic components of the SPD are hybrid silicon pixels in the form of a two-dimensional matrix of reverse-biased silicon detector diodes. Each diode is connected through a conductive solder bump to a contact on a readout chip that corresponds to the input of a readout cell. The readout is binary: in each cell, the pre-amplified and shaped signal is compared with a threshold, and the digital output changes level when the signal exceeds it.
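In code, this binary readout amounts to a per-cell comparison against a threshold, with no amplitude information retained. A minimal sketch; the matrix size and signal values are invented for illustration.

```python
import numpy as np

def binary_readout(shaped_signals, threshold):
    """Binary pixel readout: each cell outputs 1 if its pre-amplified,
    shaped signal exceeds the discriminator threshold, else 0."""
    return (shaped_signals > threshold).astype(np.uint8)

# Toy 4x4 matrix of shaped signal amplitudes (arbitrary units):
signals = np.array([[0.1, 2.3, 0.0, 0.4],
                    [0.2, 3.1, 2.8, 0.1],
                    [0.0, 0.3, 0.2, 0.0],
                    [1.9, 0.1, 0.0, 0.2]])
print(binary_readout(signals, threshold=1.0))  # 1s mark the hit cells
```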
The SPD contains 1200 readout pixel chips and a total of about 10⁷ cells. The detector element is called a ladder, which consists of a silicon-sensor matrix bump-bonded to five readout pixel chips. The ladder sensor matrix contains 256×160 cells measuring 50 μm (rφ) by 425 μm (z), with longer sensor cells in the boundary region to ensure coverage between readout chips. The ladders are attached in pairs to an interconnect (the pixel bus) that carries data/control bus lines and power/ground planes; a multi-chip module (MCM), located at one end of the pixel bus, controls the front-end electronics and is connected to the off-detector readout system via optical-fibre links.
Two ladders, the pixel bus and the MCM together form the basic detector module, known as a “half stave”. Two half staves, attached head-to-head along the z direction to a carbon-fibre support sector, with the MCMs at the two ends, form a stave. Each sector of the SPD supports six staves: two on the inner layer and four on the outer. Ten sectors mounted together in a closed geometry around the beam pipe form the full two-layer barrel. Each half stave generates an 800 Mb/s output serial-data stream. The 120 half staves that form the SPD are all read out in parallel, with a full detector readout taking around 256 μs.
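These numbers fit together with simple arithmetic. In the sketch below, the per-chip cell count is derived from the ladder geometry given above; the remark about protocol overhead is our reading, not a statement from the article.

```python
cells_per_chip = (256 * 160) // 5          # one ladder matrix shared by 5 chips -> 8192
total_cells = 1200 * cells_per_chip        # ~9.8e6, i.e. the "about 10^7" cells
bits_per_half_stave = 10 * cells_per_chip  # 2 ladders x 5 chips, 1 bit per cell

# Raw hit-map payload per half stave at 800 Mb/s (all 120 read out in parallel):
t_raw_us = bits_per_half_stave / 800e6 * 1e6
print(total_cells, round(t_raw_us, 1))     # 9830400 cells, ~102.4 us
# The ~256 us quoted for a full readout presumably includes headers and
# protocol overhead on top of this raw payload (assumption, not from the text).
```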
Although small in physical size, the SPD is packed with advanced and novel technical solutions, including the following few examples. To obtain the lowest material budget, the pixel ASIC wafers were thinned down to 150 μm after deposition of the solder bumps, which are about 20 μm in diameter. They were then diced and the die flip-chip bonded to 200 μm thick silicon sensors to form a ladder. This whole process was challenging and required specific developments by the industrial partners (VTT in Finland, Canberra in Belgium and ITC-irst, now FBK, in Italy).
Material budget considerations also led to the development of the pixel bus, a high-density aluminium/polyimide multi-layer flex. This technology, in which aluminium is used in place of copper, is not an industry standard and was made possible by the expertise available in the TS-DEM workshop at CERN. The optical transceiver module (one on each MCM), housed in a silicon package barely 2 mm thick, is a custom development by the same company that produced the optical links for the two larger LHC detectors.
The cooling system is of the evaporative type, based on C4F10, and required a dedicated development. Each sector is equipped with cooling tubes and capillaries embedded in the carbon-fibre support, running underneath the staves (one tube per stave). The cooling tubes are made from a corrosion-free metal alloy (Phynox) with walls only 40 μm thick.
A unique feature of the SPD is its capability to generate a prompt trigger based on an internal Fast-OR. Each pixel chip provides a Fast-OR digital pulse when one or more of the pixels in the matrix are hit. This was originally included for self-test purposes, but it became clear that it could be adapted to generate a multiplicity trigger of considerable interest for physics.
The Fast-OR signals of the 10 chips on each of the 120 half staves are transmitted every 100 ns on the 120 optical links that are also used for the data readout. They are processed in a separate processor unit according to a variety of predefined trigger algorithms to generate a signal that can contribute to the Level 0 (L0) trigger decision in the ALICE central trigger processor (CTP). Simulations have shown that using the Fast-OR information in the L0-trigger decision significantly improves background rejection in proton–proton interactions and event selection in heavy-ion runs.
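The article does not spell out the trigger algorithms themselves, but a simple multiplicity requirement on the Fast-OR bits might look like the following hypothetical sketch; the chip counts follow from the stave layout described above, while the thresholds are invented.

```python
def multiplicity_trigger(inner_bits, outer_bits, min_inner=1, min_outer=1):
    """Toy Fast-OR algorithm: require a minimum number of fired pixel
    chips on each SPD layer. Thresholds here are invented, not ALICE's."""
    return sum(inner_bits) >= min_inner and sum(outer_bits) >= min_outer

# 400 chips on the inner layer and 800 on the outer (from the stave layout);
# the processor evaluates predicates like this every 100 ns and feeds the
# result to the L0 decision in the central trigger processor.
inner = [0] * 390 + [1] * 10
outer = [0] * 790 + [1] * 10
print(multiplicity_trigger(inner, outer, min_inner=5, min_outer=5))  # True
```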
The pixel-trigger signal generated by the Fast-OR processor must reach the CTP within about 800 ns of the interaction to meet the latency requirements of the L0-trigger electronics. In tests, the trigger processor has met this challenging requirement, and the full system is now being commissioned in ALICE with cosmic rays.
A major challenge for the LHC collaborations has been to bring together components and subsystems developed at institutes and production laboratories in different countries and locations. The SPD is no exception. The laboratories that took part in the design, development and construction of the SPD are: CERN, INFN and the University and Politecnico of Bari, INFN and the University of Catania, INFN/Laboratori Nazionali di Legnaro, INFN and the University of Padova, INFN and the University of Salerno, INFN and the University of Udine, and the Slovak Academy of Sciences of Košice.
The final integration of the SPD took place at CERN’s Departmental Silicon Facility, which was equipped to test the individual sectors and for the integration and pre-commissioning of the full detector, including the cooling plant and the full-scale configuration of power supplies and services. This strategy proved invaluable for debugging the system before installation in the experimental area.
In June 2007 the SPD was finally installed in the ALICE experimental area at Point 2 in the LHC ring, with connection to services possible in November, when the mini-frame carrying service interfaces was put in place.
Commissioning of the SPD started in January 2008 with the aim of being fully ready by the time that the LHC delivers the first proton–proton collisions, scheduled for later this summer. Indeed, the SPD is one of the ALICE sub-detectors to contribute to the measurement of the charged-particle multiplicity, which is the common objective of “day-one” studies for all of the LHC experiments.
Since May the SPD has been collecting cosmic data triggered by the Fast-OR trigger signal produced by the SPD itself. The events are selected by requiring at least one hit in the outer layer of the top half-barrel in coincidence with at least one hit in the outer layer of the bottom half-barrel. These data samples are being used for a preliminary alignment of the SPD components. More recently, the same signal has been used to trigger other ALICE subdetectors: first the other two ITS systems – the drift and double-sided microstrip detectors – and then the TPC, thus exercising the combined TPC-plus-ITS tracking.
On the evening of 15 June, while preparing the detector for the cosmic run, the triggered events from the SPD showed a puzzling pattern never seen before. It took a while for the team working in the control room to realize that the SPD was observing one of the first signs of life of the LHC at Point 2: muon tracks produced in the beam dump during the injection test in transfer line TI 2.
• It is unfortunately impossible in this short article to give well deserved credit to all of those who have worked relentlessly over nearly a decade to bring this complex and exciting detector successfully to completion.
Krienen, who died on 20 March, began his long association with particle physics in 1952, before the European Organization for Nuclear Research officially came into being, in the days of the provisional council that gave CERN its name. He had spent the first few years of his professional life in the research laboratories of Philips at Hilversum. Combined with his academic background as an engineer-physicist at the University of Delft, this gave him the thorough training in the basics of materials and electromagnetism (radar) that was to manifest itself so clearly when he joined CERN.
Already an assistant to Cornelis Bakker at the Zeeman Laboratory at Amsterdam University, Krienen was perfectly suited to be one of the first recruits to the accelerator programme for the new European laboratory. The first project was the 600 MeV proton Synchrocyclotron (SC), with Bakker in charge. This was to be one of the highest-energy accelerators in the world, with the aim of providing a source of particles to initiate an experimental programme in pion and muon physics. The speed of construction was an important element, because CERN needed to become a focus for attracting many of the physicists who had migrated away from Europe. In the meantime, the planning and construction of the much more ambitious proton synchrotron (which became the 25 GeV PS) had also begun, although this was a longer enterprise by necessity.
A small team of young, enthusiastic people was established for the SC, led and advised by experts with more experience. Initially (1952 to 1954) they were scattered in small groups at European universities and laboratories that already had activities in particle physics research (Liverpool, Paris, Uppsala, Stockholm). They all moved to Geneva when the final choice of the CERN site was made.
Krienen had essentially two responsibilities: a specific one for the accelerating RF component of the machine and a more general one for keeping overall control and ensuring the necessary connections between the various groups. His competence made him the undisputed guide and mentor: critical at times, but always enthusiastic and forward-looking. His leadership in dealing at the highest level with industrial firms – some of them the largest in Europe – was an important contribution. Very soon the younger members of the SC team, some only 25 years old, learned enough to feel confident and carry on alone.
For the RF, most other high-energy synchrocyclotrons had adopted mechanical, rotating capacitors (reminiscent of the tuning capacitors of old-fashioned radio receivers). This allowed for the frequency modulation needed to accompany the relativistic energy increase that occurs in all circular accelerators with energies higher than a few tens of millions of electron-volts. To avoid the recurrent difficulties encountered with rotating capacitors (arising from operation in high vacuum, overheating, bearings, sparking, etc), Krienen adopted a bold, elegant solution: a vibrating, light alloy capacitor in the form of a tuning fork. The self-oscillating operation (at 50 Hz) was driven from the base of the fork, via an electromagnet. Special feedback circuits assured the control of the amplitude. Krienen studied the possible problems, such as parasitic vibration modes, metallurgy and fatigue, and came up with brilliant solutions. A spare, twin tuning fork was provided for the SC, but in many years of operation it was never needed.
For the firm in charge of the construction of the SC, Philips, this was both a new adventure and a successful collaboration with the world of particle-physics research. Thanks to the dedication and efforts of all, and of Krienen in particular, the machine was built in less than three years and first operated in August 1957. It subsequently proved to be a reliable workhorse and after many additions and improvements, it completed its career in 1990, following 33 years of very successful experiments in particle and nuclear physics.
Krienen also turned his talents to the benefit of particle-tracking detectors. In the early 1960s many experiments used spark chambers for this purpose. The sparks formed between metal planes in a gas when a high voltage was applied after the passage of a particle. The tracks that the chambers revealed in this way were recorded optically, initially on photographic films that were scanned offline to digitize the track co-ordinates. Later, TV cameras were used, allowing digital information about the co-ordinates of the tracks to be written onto magnetic tape. Acoustical methods, where transducers measured the arrival time of the sound wave of a spark, were also used successfully to measure the co-ordinates online.
In 1961, participants at a symposium on spark chambers at Argonne National Laboratory heard of some ideas for improving spark chambers by replacing the metal plates with wire planes. However, it was at the 1962 Conference on Instrumentation for High Energy Physics at CERN that Krienen presented the first extensive work on chambers with wire planes. He proposed the digital wire spark chamber, employing a novel method to read out wire planes with ferrite-ring core memories, as used in computers in those days. Each wire in a detector plane passed through a ferrite-ring core to ground or even to high voltage. The current through the wires touched by a spark, which was controlled and relatively low, set the magnetic cores, thus directly storing the track co-ordinate. This could be read out conveniently at high speed using the same procedures as used in computers.
The device marked a real breakthrough in the field of detectors. In the subsequent years, a large number were constructed and used in experiments at CERN, DESY, Brookhaven National Laboratory (BNL), Saclay and many other laboratories worldwide. However, a drawback of magnetic core read-out was that it could not be used in magnetic fields. This is one reason why spark chambers gradually became less popular. They were replaced by further developments of the wire chamber, such as multiwire proportional chambers, drift chambers, time-projection chambers, microstrip gas chambers, and finally by silicon trackers.
Krienen in the meantime continued to apply his inventiveness in the field of accelerators at CERN. In 1972 he made a major contribution to the 14 m diameter muon storage ring designed to measure the anomalous magnetic moment of the muon, g-2, to a few parts per million. This required a uniform magnetic field of 1.5 T with the vertical focusing provided by electric quadrupoles almost all of the way round the ring, operating at about 25 kV. Krienen, assisted by Wilfried Flegel, designed the quadrupole system and soon discovered that high-voltage quadrupoles in a magnetic field regularly spark over, even in the best vacuum. Studying the phenomenon, he realized that electrons were trapped in the combined fields (which resemble a Penning gauge) and that the breakdown occurred when the trapped charge had built up to a threshold value, which took a few milliseconds. However, the muon lifetime in the ring (lengthened by relativistic time dilation) was to be only 64 μs, and all would be gone in 800 μs, after which the quadrupoles could be switched off. So Krienen provided pulsed modulators to drive the electric plates and there was no significant breakdown.
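The 64 μs figure is simply relativistic time dilation applied to the muon's rest-frame lifetime. A quick check, using the PDG value for the muon lifetime and the γ quoted later in this article (which gives ~65 μs, consistent with the rounded figure here):

```python
import math

tau0 = 2.197e-6          # s, muon lifetime at rest (PDG value)
gamma = 29.6             # Lorentz factor quoted in the article
tau_lab = gamma * tau0
print(round(tau_lab * 1e6, 1))        # ~65 us, the dilated lifetime in the ring

# After the 800 us pulse length, essentially no muons survive:
print(math.exp(-800e-6 / tau_lab))    # ~4.5e-06 of the initial population
```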
The muons were injected by pion decay in flight inside the ring and filled all of the available phase space. Some passed close to the limiting apertures, so inevitably a small fraction (<1%) were lost per muon lifetime. This had a small effect on the g-2 measurement and limited the measurement of the time-dilated muon lifetime. Krienen then invented “electric scraping” to remove the muons at the edge of the population, which were the ones most likely to be lost. This was accomplished in a simple way by pulsing the quadrupoles asymmetrically at the beginning of the fill and then slowly bringing them up to the fiducial value. The loss was reduced to 0.1% per lifetime, and could be measured and a correction applied. Finally time dilation in a circular orbit was verified to 1 part in 1000 at a γ of 29.6. This remains one of the most precise tests of Einstein’s special theory of relativity.
In 1977 Krienen took charge of the design and development of the electron-cooling apparatus for CERN’s Initial Cooling Experiment (ICE) ring. Electron cooling, suggested by Gersh Itskovich Budker in 1966 and experimentally demonstrated in his laboratory in Novosibirsk in 1974–1976, consists of reducing the phase spread of ion beams circulating in a storage ring through Coulomb interactions with cooler electrons. Ions and electrons are mixed together along a straight section of the ring where they travel at the same average speed, the electrons being constantly renewed. At the limit, neglecting noise and instabilities, the temperature of the ions should equal that of the electrons, that is, Tᵢ ≃ Tₑ, which implies θᵢ ≃ θₑ√(mₑ/mᵢ), where θᵢ and θₑ are the angular divergences of the ion and electron beams respectively. The very small mass ratio mₑ/mᵢ makes this method extraordinarily favourable.
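For protons, the lightest ion of interest here, the favourable mass ratio is easy to quantify:

```python
import math

m_e = 0.511      # MeV/c^2, electron mass
m_p = 938.27     # MeV/c^2, proton mass

# At equal temperatures the ion beam's angular divergence shrinks to a
# fraction sqrt(m_e/m_i) of the electron beam's divergence:
print(math.sqrt(m_e / m_p))   # ~0.023, i.e. a roughly 40-fold reduction
```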
ICE was an alternating gradient storage ring and was constructed at CERN in less than a year, metamorphosing the existing g-2 zero-gradient ring, which had just finished its task. The idea was to demonstrate the feasibility of intense antiproton beams with the aim of using them in the Super Proton Synchrotron (SPS), operating as a proton–antiproton collider. Carlo Rubbia was the initiator and strenuous supporter of the whole project, which was to produce and thereby discover the intermediate bosons W± and Z⁰ predicted by the Standard Model.
The decision was taken that the ICE ring should also incorporate the appropriate equipment for the stochastic cooling system that Simon van der Meer had invented at CERN in 1974, which had already been successfully partially tested at the Intersecting Storage Rings. Between late 1977 and spring 1978, the potential of stochastic cooling became so evident that this system was adopted alone in the proton–antiproton complex, ultimately with great success.
Krienen, however, was pursuing the hard work of completing the electron-cooling system. He could not go fast because he had to design every part of the apparatus from scratch, and then construct and adjust it, as well as develop the detailed theory. In 1979, about two years after the start of ICE, his apparatus worked properly, achieving a factor of 10⁷ in the six-dimensional phase-space density of the circulating protons. It was too late for the proton–antiproton project at the SPS, but new aims appeared. Krienen’s device was moved, with minor modifications, to the Low Energy Antiproton Ring. After a few years and more substantial improvements, it was moved again to the Antiproton Decelerator.
After his retirement from CERN, Krienen moved to the US, where he often returned to this cooling method, suggesting improvements and new applications in several papers. In 1986 he joined Boston University as professor of engineering and applied physics, and set to work on the new muon g-2 experiment at BNL. This was broadly similar to the CERN machine but had many improvements: the magnet aperture and yoke were wider, and particles were to be injected many times for each cycle of the Alternating Gradient Synchrotron. Krienen realized that a pulsed inflector, as had been used at CERN, would need to be longer, therefore requiring more energy, and that it would be impossible to recharge the capacitors in time for them to be triggered many times per second. So he devised instead a superconducting inflector that would cancel the main field along the desired track, but have no leakage field outside, using no iron or ferrite, which would have perturbed the main field.
To achieve this with a steady current was a tour de force. Krienen used two cosθ windings of different diameters, one inside the other, carrying equal and opposite currents. Outside the device the magnetic field was strictly zero, but inside the inner winding the field was uniform and the return flux was confined to the space between the windings. Working with his PhD student Wuhzeng Meng, Krienen proved the concept with model windings and the final superconducting version was made by Akira Yamamoto at KEK in Japan. This invention was crucial to the success of the g-2 experiment at BNL.
Krienen was an inventive and original thinker with the ability to make his ideas work in detail. His work ranged from accelerator and beam optics through superconducting injection devices and slow extraction methods, to ion sources, RF and klystron technology, and many kinds of particle detector. His motivation was always the advancement of physics. He impressed his colleagues and friends with his vast knowledge of theory and practice, and with his enthusiasm and creativity, which he maintained until the end of his long life. He was a good team player with strong loyalty to colleagues. We will remember him with warm affection.
The Beam Test Facility (BTF) is part of the DAΦNE Φ-factory complex, the most recent of the electron–positron colliders in the long history of the INFN Laboratori Nazionali di Frascati (LNF). The facility features a high-intensity linac that provides electrons and positrons up to 750 MeV and 550 MeV respectively, a damping ring to improve injection efficiency and two main rings designed for the abundant production of K mesons coming from the decay of the Φ resonance at 1.02 GeV (Mazzitelli et al. 2003). The main research goal is to study matter–antimatter asymmetry and the interactions of “s” quarks, but K mesons are also useful tools in nuclear and atomic physics.
Before the high-intensity electron or positron beam pulses produced by the 60 m long linac are injected into the double storage ring, they can be extracted to a transfer line that is dedicated to the calibration and ageing of particle detectors, the characterization and calibration of beam diagnostics, and the study of low-energy electromagnetic interactions (figure 1). Here, the number of particles can be reduced to a single electron or positron per pulse by means of a variable thickness copper target. The particle momentum is then selected, with an accuracy better than 1%, using a dipole magnet and a set of tungsten collimators. The energy range is typically 25–500 MeV, and up to 49 pulses per second can be extracted (20 ms repetition time), with a bunch length of 10 or 1 ns. When not operating in conjunction with the collider, the linac’s maximum beam energy can be raised to 750 MeV (for electrons) and the intensity increased to a maximum of 10¹⁰ particles per second, limited by radiation safety.
When operating in low-intensity mode, particles are selected from the secondary showers emerging from the target, so either electron or positron beams can be chosen. The final intensity can easily be tuned (by adjusting the tungsten collimators) over a range of several orders of magnitude – from 10⁴–10⁵ particles per pulse, down to a single particle (Poisson distributed). An optical system of four quadrupoles along the BTF transfer line allows the transverse distribution of the beam to be tuned. A typical beam spot of 2×2 mm² transverse section (1σ profile), with an angular divergence of about 2 mrad, is produced at 500 MeV in single-particle mode (figure 2). Two different beamlines are available, depending on the configuration of the final dipole magnet in the experimental hall (figure 3). When needed, the full-intensity beam can be extracted to the BTF area by removing the copper target from the beamline.
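“Single particle (Poisson distributed)” has a concrete meaning worth unpacking: when the collimators are tuned to a mean of one particle per pulse, the pulse-by-pulse counts follow Poisson statistics. A short sketch; the interpretation in the final comment is our reading, not a statement from the article.

```python
import math

def poisson(k, mu):
    """Probability of k particles in a pulse, given a mean mu per pulse."""
    return math.exp(-mu) * mu**k / math.factorial(k)

mu = 1.0   # collimators tuned for a mean of one particle per pulse
for k in range(4):
    print(k, round(poisson(k, mu), 3))   # P(0)=P(1)=0.368, P(2)=0.184, ...
# About 26% of pulses carry two or more particles, so single-particle
# running still implies identifying and rejecting multi-particle pulses.
```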
The commissioning of the transfer line, the two BTF exit lines and all the diagnostic devices needed for a reliable operation of a test-beam facility was completed in autumn 2002. The facility has since hosted dozens of groups from all over Europe, who have run a variety of experiments and tests with electron and positron beams.
The applications of the BTF beam – with its intensity and energy range, good spatial and excellent timing properties – are extremely wide ranging. Typical uses of the facility in its single-electron mode of operation include testing the ring-imaging Cherenkov system for the LHCb experiment at CERN’s LHC and using electrons at 500 MeV to make highly accurate measurements of the efficiency of OPAL lead glass used in the NA62 experiment at the SPS. A more unusual investigation concerned the thermoacoustic detection of particles by the type of ultracryogenic resonant antenna used for gravitational-wave detection. Such antennae are sensitive enough to detect the impact of cosmic rays, and indeed the tests could observe the vibration occurring when the full force of the high-intensity BTF beam struck the antenna (figures 4 and 5).
An important upgrade of the BTF line was completed in 2005, with the installation of dedicated devices for the production of a beam of tagged photons. An active target made of four layers of single-sided silicon microstrip detector planes can now be inserted just before the last dipole magnet that selects one of the two exit lines; it intercepts the BTF beam, giving a small, but not negligible, probability that an electron emits a bremsstrahlung photon. In this case the electron is not transported through the dipole but instead hits the inner wall of the vacuum pipe inside the magnet. Its energy is then detected by a series of silicon microstrip detector modules installed outside the beam pipe, thus allowing the reconstruction of its bending radius. Combined with the measurement of position and angle in the active target, this yields the energy of the radiated photon with a resolution of 7% in the 200–500 MeV range, at a typical production rate of 0.5 Hz. This photon-tagging system has been used successfully for the calibration of the scientific payload (in particular the tungsten/silicon detector minicalorimeter) of the gamma-ray astronomy satellite AGILE, launched by the Italian Space Agency in summer 2007 (figure 6).
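The tagging principle is energy conservation: the photon energy is the beam energy minus the energy the electron retains, with the latter inferred from the electron's bending radius in the dipole. A minimal sketch; the dipole field value is an assumption for illustration (the article does not quote it).

```python
# E_gamma = E_beam - E_e, with the electron momentum inferred from its
# bending radius in the dipole field: p [GeV/c] ~ 0.3 * B [T] * r [m].
B = 1.0            # T, assumed dipole field (illustrative value only)
E_beam = 0.500     # GeV, incoming electron energy

def tagged_photon_energy(radius_m):
    E_e = 0.3 * B * radius_m   # ultra-relativistic electron: E ~ pc
    return E_beam - E_e

print(tagged_photon_energy(0.8))   # electron keeps 240 MeV -> 260 MeV photon
```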
Since March 2007 the duty factor of the facility has been improved from 40% up to 90% of the operation time of DAΦNE, thanks to the installation of a new dedicated pulsed-dipole magnet, designed by CERN in collaboration with Maurizio Incurvati and Claudio Sanelli at INFN-LNF (Maccaferri and Chiusano 2006). This is capable of driving any of the 50 linac pulses per second to either the accumulator ring or the BTF transfer line. The BTF, operated by the Frascati Accelerator Division staff, typically provides beam for an average period of 250 days a year.
The BTF is already equipped with instrumentation and diagnostics capable of covering the entire energy and intensity range. It is also continuously being improved to satisfy the growing interest of the broad scientific community that it serves. The support and collaboration of the users is crucial for better operation and development of the facility. Many improvements of the BTF diagnostic tools have been introduced in collaboration with hosted groups. The requests and proposals made by the user community are important in pushing exploration towards new operating schemes and new possibilities, including projects for the production of a low-intensity neutron beam and R&D studies for high-precision diagnostics at high intensities, all of which are under way.
T2K is a second-generation, long-baseline, neutrino-oscillation experiment that will study the nature of neutrinos. A neutrino beam generated by the high-intensity proton accelerator of the Japan Proton Accelerator Research Complex (J-PARC) at Tokai will travel 295 km to the 50 kilotonne water Cherenkov detector, Super-Kamiokande, which is located about 1000 m underground in the Kamioka mine.
The J-PARC neutrino facility will follow the standard route for making a neutrino beam. This begins with an intense proton beam that strikes an appropriate target to create many secondary particles, including pions and kaons, which in turn decay to muons and the desired muon-neutrinos. The secondary particles pass through a decay volume followed by an absorber, or beam dump, which removes all but the muons and neutrinos from the beam. A further absorber – the rock in the Earth between the beam dump and the detector – removes the muons to leave only the neutrinos.
At J-PARC the primary beam line will consist of superconducting combined-function magnets for the arc section, with normal conducting magnets for fast extraction and the final focus. The target will form part of the secondary beam line, which will also contain the magnetic horn system to focus the pions and kaons into a beam, the decay volume, and the beam dump and muon monitors. The horn system being used consists of three horns, the first being combined with the target system. In addition, buildings for services such as power supplies and cooling water systems for the beam lines are under construction, as well as the building and underground pit for the near neutrino detector, ND280, which will monitor the neutrinos.
Work on the primary beam line is making good progress. The normal conducting magnets are all in place, and the installation of cabling and piping is under way. In the arc section, 12 of the 14 doublets of superconducting magnets have been installed, together with beam position monitors. The survey and alignment took place in April, with remaining work carried out after the commissioning of the main proton ring at 3 GeV. This saw the successful injection of 3 GeV protons from the rapid cycling proton synchrotron into the main ring on 22 May. Commissioning to 30 GeV will take place from December 2008 to February 2009, and the commissioning of the fast extraction for the neutrino beam should start in April 2009.
For the neutrino beam line, both the helium vessel for the decay volume and the target station (where the target and horn system will be installed) have been completed. Civil engineering around them continues on the target station building and the pit for the beam dump and muon monitors. The installation of the neutrino equipment into the target station should begin in July. The complete arrangement for the third horn was assembled at KEK in Tsukuba to debug the remote handling system that will be used for installation and maintenance. Tests on the operation of this horn began in April, while tests on assembling the target system and the first horn are scheduled for completion by July.
The beam dump consists of 14 core modules composed of graphite blocks and aluminium cooling plates. The modules were completed by the beginning of April, and by November they should be assembled together to form the beam dump, prior to installation in the beam-dump pit. Construction of the muon monitors is also under way and they are scheduled for testing with beam in July.
The pit for the neutrino monitor became available in April, so installation work could begin on the large magnet for the ND280 near detector, which is being assembled below ground before construction work begins on the surface building. The magnet has been donated by CERN, having been used in the UA1 experiment, for which it was built, and subsequently in the NOMAD neutrino experiment. It consists of 16 C-shaped yoke pieces, together with two carriages for the yokes, rails and other components. For the journey to Japan the yokes were disassembled into 32 short pieces and 16 long pieces, so as to fit into standard containers.
The various pieces travelled to Japan in three shipments, mainly by sea. The first and second shipments were for the yokes, carriages, jigs and other equipment, while the third contained the delicate coils. The first shipment arrived at Japan’s Hitachinaka port on 18 March, bringing 24 short yoke pieces in 12 containers each 20 ft long, together with the two carriages each in a 40 ft container, and a third 40 ft container with items such as jigs. The two carriages, jigs and other items were transported from the port to J-PARC on 28 March in readiness for installing the magnet in the neutrino monitor pit. The task of unloading the 24 short yoke pieces at the area began on 1 April and on 3–4 April they were moved to the neutrino monitor area.
The second shipment arrived at the port on 10 April, bringing the remaining eight short yoke pieces and the 16 long yoke pieces. The short yoke pieces were taken into the neutrino monitor area on 19 April, and by the end of the month, the mobile crane had unloaded the long yoke pieces and carried them to the area, ready for re-assembling the short and long pieces into the 16 yokes prior to installation in the pit. The coils, in the third and last shipment, were due to be delivered to the neutrino monitor area in the middle of June, for subsequent installation in the magnet yoke.
The survey to put reference lines on the floor of the neutrino monitor pit was carried out soon after the site became available, and by 14 April the rails for the yoke carriages were in position. The carriages were then lowered into the pit and mounted on the rails. The system for aligning the yokes was also set up, ready for when the yokes are installed on the carriages.
The 16 full yokes are being assembled at a rate of one per day. After they are all assembled, they will be lowered into the pit and mounted on the carriages using the alignment system. The plan is to complete installation of the yokes by the beginning of June. By this time, the coils should have been delivered from the port to the neutrino monitor area, in time for installation into the magnet yokes. Complete installation of the magnet in the neutrino monitor pit is scheduled for the end of June. The complete J-PARC neutrino beam facility and the near detector ND280 should then be ready by March 2009 so that the T2K experiment can start in April 2009.
• The T2K collaboration thanks the CERN management and European colleagues for their generosity in donating the UA1 magnet and their hard work in its preparation at CERN and J-PARC.
Exploiting the full potential of physics at the LHC, which includes R&D that is focused on upgrading luminosity, is the highest priority of the European Strategy for Particle Physics, which was adopted unanimously by the CERN Council in July 2006. The first LHC physics approaches ever nearer as the LHC hardware commissioning makes steady progress towards providing beams to the experiments later this year (see “Protons knock on the LHC’s door”). Meanwhile, accelerator and detector experts are already looking farther into the future. They have begun preparatory work for the luminosity upgrade, known as the Super-LHC (SLHC), which was announced in April with an event at CERN to “kick off” R&D (see ‘April event kicks off for the SLHC’, below).
The current LHC configuration is set up to produce proton–proton collisions at a centre-of-mass energy of 14 TeV and a luminosity of up to 10³⁴ cm⁻²s⁻¹. It will also provide high-energy lead–lead ion collisions at a centre-of-mass energy as high as 1.15 PeV (1150 TeV). The SLHC project, however, aims for a tenfold increase in luminosity for 14 TeV proton–proton collisions, achieved through the successive implementation of several new elements and technical improvements that are scheduled for 2012–2017. These include the major replacement of several accelerators in the LHC proton-injector chain, upgrades of the LHC-interaction regions and enhancements to the general-purpose experiments ATLAS and CMS.
Understanding how to improve the luminosity yield of the LHC has required careful scrutiny of the whole proton-injection and accelerator chain to seek out bottlenecks, inherent weaknesses and reliability problems. The findings are that more luminosity gain can be obtained from improvements to the injector chain than from changes in the LHC machine itself. This is no surprise, considering that some elements of the injector chain date from as early as 1959, when no one would even have dreamed of a superconducting accelerator the size of the LHC. In the current chain, protons pass successively from the source through Linac2, the Booster, the PS and the SPS before final injection into the LHC. The SLHC plans propose a future sequence of Linac4, the Low-Power Superconducting Proton Linac (LPSPL), PS2 (a new machine) and the SPS. Figure 1 shows both present and future schemes, while the aerial view shows more directly how the injectors will be positioned on the site at CERN.
The first bottleneck in the present layout occurs with the injection of proton bunches from Linac2 into the Booster. Protons are injected at 50 MeV by a multiturn injection process that inherently dilutes the beam brightness (the current within a given emittance). Much can be gained from using H⁻ ions in the linac followed by injection into the Booster using a charge-exchange technique that strips the excess electrons. This method avoids the dilution of beam brightness and translates directly into a luminosity increase in the LHC. Capturing and accelerating this brighter beam requires a higher linac energy, which in turn reduces the beam’s self-repulsion (space charge) at injection into the Booster. This justifies replacing the present 50 MeV proton linac (Linac2) with a new 160 MeV linac (Linac4) operated with H⁻ ions. Plans for Linac4 are well advanced and its construction will begin soon, aiming for commissioning by 2012. This will result in a doubling of peak LHC luminosity.
Next in the present injector chain are the Booster and the PS, both of which suffer from inherent intensity limitations and reliability problems after many years of service. These both call for new injectors designed for the needs of the LHC, which also take into account potential future projects like a neutrino factory or a next-generation nuclear-isotope facility. Current plans concentrate on extending the energy of Linac4 to several giga-electron-volts (i.e. the LPSPL) and a new 50 GeV synchrotron called PS2, resulting in a higher performance – in particular, an increased SPS injection energy of 50 GeV and another doubling of the proton flux. Both machines are now entering the R&D and design-optimization stage, aiming for a decision on construction by 2011. The building of these new injectors can take place in parallel with operation of the LHC, with a changeover expected in 2017 after an extended shutdown.
In the LHC itself, major elements of the interaction regions in the two high-luminosity insertions can be replaced to give yet another luminosity gain of a factor of two. In particular, new focusing triplets based on Nb-Ti superconducting technology are foreseen. Compared to the present systems they will have a larger aperture and will allow the beam size at the interaction point to be reduced. The new triplets will require parallel improvements in the LHC collimation system and the separation elements near the interaction regions, to be implemented before the 2013 physics run.
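Compounding the gains quoted above shows how the individual steps approach the tenfold target; the comment on the residual factor is our reading, not a statement in the article.

```python
# Compounding the luminosity gains quoted in the article:
gains = {
    "Linac4 (H- charge-exchange injection)": 2.0,
    "LPSPL + PS2 (doubled proton flux)":     2.0,
    "New interaction-region triplets":       2.0,
}
total = 1.0
for step, factor in gains.items():
    total *= factor
    print(f"{step}: x{factor:g} (cumulative x{total:g})")
# Cumulative factor of 8 -- of the order of the tenfold SLHC goal; any
# remainder would come from beam parameters not itemized here.
```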
The ATLAS and CMS experiments will also require upgrades to increase their sensitivity limits in the presence of the higher interaction rates and increased radiation levels. At the SLHC, “pile-up” will amount to as many as 400 events per bunch crossing. This requires adapting trigger and data-acquisition schemes, as well as the complete replacement of the central tracking detectors. The new trackers will have finer granularity and an increased radiation hardness, while particular emphasis will be placed on minimizing their material budget. The forward muon regions will need major modifications, complemented by new beam-pipe elements and reinforced shielding.
While all of these technical developments towards the SLHC take place, LHC operation will continue uninterrupted and the LHC experiments will pursue their quest for new discoveries in the head-on collisions of protons and ions at extraordinarily high energies. In particular, these are expected to increase our knowledge of the origin of mass, the formation of matter, matter–antimatter asymmetries and issues such as extra dimensions of space, microscopic black holes and dark matter in the universe. Profiting from the successive luminosity increases, the SLHC will allow further probing of phenomena first detected at the LHC. It will also open up the detection of low-rate phenomena beyond the reach of the LHC, and will push the sensitivity limits for new physics processes to higher mass scales.
Many R&D activities for the SLHC are now starting, thanks to several national funding sources, additional funding made available to CERN from its member states, and funding from the European Commission. The corresponding collaboration frameworks with worldwide partners are being established for the accelerators and for the experiments.
• April event kicks off for the SLHC
A public “kick-off” event, marking the start of SLHC developments, was held at CERN on 9 April. The aim was to inform a wide audience about the SLHC project. In a packed auditorium, the event began with a speech by CERN’s chief scientific officer, Jos Engelen. He emphasized the importance of developments towards the SLHC within the European particle-physics strategy and welcomed the place that SLHC activities have taken within CERN’s overall programme. These activities will maintain the momentum of the many innovations accomplished in the design and implementation of the LHC, while working towards full exploitation of the LHC’s new physics. Engelen also reported on the LHC Committee’s role in peer-reviewing the R&D for the upgrade, and he encouraged the audience to submit their proposals.
The event continued with three overview talks. Michelangelo Mangano of CERN’s Theory Unit gave a talk on the present views on the physics potential of the SLHC – physics options that will become more refined as the LHC begins to reveal its secrets. Lyn Evans, LHC project leader, outlined the accelerator upgrade plans and associated timescales. Jordan Nash of Imperial College reported on the upgrade plans of both the ATLAS and CMS experiments. The event concluded with lively discussions on the impact of the announced gradual luminosity increases on the present physics, operation and upgrade plans of these experiments.
by Fabio Toscano, Sironi Editore. Paperback ISBN 9788851800963, €18.
Il fisico che visse due volte – the physicist who lived twice – is Lev Davidovich Landau, the iconoclastic physicist and 1962 Nobel laureate. One of the greatest theorists of the Soviet Union, he made significant contributions to almost every field of physics, from superfluidity to the properties of ferromagnetic bodies, from the absorption of sound in solids to the theory of phase transitions. This biography by Fabio Toscano, an Italian theorist with broad experience in communicating science, guides the reader nicely through all aspects of this rich scientific output, never neglecting to present it first and foremost as a human adventure.
The main focus, as I expected, is on Landau’s attitude to physicists and to people in general, and thanks to this book I discovered his rather peculiar personality. Unpleasant to most of the people with whom he interacted, he was nevertheless loved by those colleagues and friends who greatly admired his broad knowledge and his courage always to say what he thought, regardless of constraints from politics, society or academic authority. This straight-talking attitude caused serious problems for both his career and his private life (he spent one year in prison) at a time when the Soviet Union was under Stalin’s dictatorship.
In addition to his written contributions and original articles, one of Landau’s main legacies for Russian science is the “Landau school”. To be admitted, students had to pass a comprehensive exam, the “Theoretical Minimum”, designed personally by Landau. As Toscano explains, Landau kept in personal contact with all his students until he died in 1968, six years after a car accident that brought him close to death. In the accident “not even the eggs Vera [the driver’s wife] had in her hands broke”, but Landau suffered serious brain injuries that left him in a coma for three months. He never fully recovered, and was afterwards much less creative.
This book certainly shows Landau in all his humanity, even highlighting some of the scientific traps into which he fell. However, the details of Russia’s history and social situation, of which the author is so fond, sometimes make for hard reading and pull the focus too far from the subject. When “stuck” in such pages, I was eager to get back to Landau’s real life in Moscow, Baku or Kharkov and to follow him, for example, as he encountered Bohr and quantum mechanics. Having studied some of the volumes of the Course of Theoretical Physics that Landau wrote with Evgeny Lifshitz and other colleagues, I appreciated this biography. Toscano’s account is very accurate – even scientific – and describes Landau’s personality well, which is the raison d’être of the book.
by Andrew Robinson, Thames & Hudson. Hardback ISBN 9780500513675, £13.97 ($25.51).
Try to imagine civilization without measurement. In addition to length, weight, height and the other obvious scalar quantities that we use in our daily lives, time and language also require standards to make sense. Current quantification includes concepts inconceivable to the earliest humans – gigabytes, body-mass index, radioactivity and even beam intensity … Without accurate measurements our society would descend into chaos. On the other hand, some measurements are far from accurate but still give a very clear idea of the quantity described: “a scourge of mosquitoes”, “a run of salmon” or “a handful of children” are all things we can easily visualize.
The Story of Measurement by Andrew Robinson, former literary editor of The Times Higher Education Supplement and author of the bestselling The Story of Writing, consists of a series of chapters that can be read independently, although it can equally be read from cover to cover. Alternatively, by simply leaving the book on your coffee table you can enjoy it in silent moments, in small doses every evening – and, I am willing to wager, most of your guests will do the same as they wait for the coffee to brew.
Most people – with the exception, perhaps, of particle physicists used to aiming for a “5σ detection” in their measurements – do not necessarily think about how measurements are going to be interpreted. Or, more subtly: how many of us are clear about what accuracy means as opposed to precision, or error as opposed to uncertainty? (A measurement can be precise, in the sense of tightly repeatable, yet inaccurate, in the sense of systematically offset from the true value.) One chapter is devoted to this interesting issue, and having originally trained as a survey engineer, I found that this discussion brought back a lot of good memories.
The book has received mixed reviews, but it is not obvious which scale was used to measure its quality – after all, it remains a coffee-table book and should be judged as such. I found it entertaining. Its popularity is also reflected in the several language editions that already exist: Das Abenteuer der Vermessung and La storia della misurazione are in the bookshops. Read it yourself and make your own judgement – while, of course, applying all the rules that have to be taken into account for making a good measurement.