

Neutrinos and nucleons

The Neutrino Paper

On 7 April 1934, the journal Nature published a paper – The “Neutrino” – in which Hans Bethe and Rudolf Peierls considered some of the consequences of Wolfgang Pauli’s proposal that a lightweight, neutral, spin-1/2 particle is emitted in beta decay together with an electron (Bethe and Peierls 1934). Enrico Fermi had only recently put forward his theory of beta decay, in which he considered both the electron and the neutral particle – the neutrino – not as pre-existing in the nucleus, but as created at the time of the decay. As Bethe and Peierls pointed out, such a creation process implies annihilation processes, in particular one in which a neutrino interacts with a nucleus and disappears, giving rise to an electron (or positron) and a different nucleus with a charge changed by one unit. They went on to estimate the cross-section for such a reaction and argued that for a neutrino energy of 2.3 MeV it would be less than 10⁻⁴⁴ cm² – “corresponding to a penetrating power of 10¹⁴ km in solid matter”. This led them to conclude that even with a cross-section rising with energy as expected in Fermi’s theory, “it seems highly improbable that, even for cosmic ray energies, the cross-section becomes large enough to allow the process to be observed”.
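
As a rough cross-check of that penetrating power (a back-of-the-envelope sketch, not a calculation from the original paper), the mean free path is λ = 1/(nσ), with n the nucleon number density of the target. The short Python snippet below assumes a density of about 8 g/cm³ for “solid matter” and recovers a path length of the order of 10¹⁴ km.

```python
# Back-of-the-envelope neutrino mean free path in solid matter.
# Assumed inputs (illustrative, not from Bethe and Peierls): density ~8 g/cm^3.
N_A = 6.022e23          # nucleons per gram of ordinary matter (Avogadro's number)
rho = 8.0               # g/cm^3, assumed density of "solid matter"
sigma = 1e-44           # cm^2, the Bethe-Peierls upper limit on the cross-section

n = N_A * rho                                  # nucleon number density, cm^-3
mean_free_path_cm = 1.0 / (n * sigma)
mean_free_path_km = mean_free_path_cm * 1e-5   # 1 km = 1e5 cm

print(f"mean free path ~ {mean_free_path_km:.1e} km")   # roughly 2e14 km
```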

However, as Peierls commented 50 years later, they had not allowed for “the existence of nuclear reactors producing neutrinos in vast quantities” or for the “ingenuity of experimentalists” (Peierls 1983). These two factors combined to underpin the first observation of neutrinos by Clyde Cowan and Fred Reines at the Savannah River nuclear reactor in 1956, and during the following years the continuing ingenuity of the particle-physics community led to the production of neutrinos with much higher energies at particle accelerators. With the reasonably large numbers of neutrinos that could be produced at accelerators, and cross-sections increasing with energy, measurements of their interactions became a respectable line of research, and ingenious experimentalists began to turn neutrinos into a tool to investigate different aspects of particle physics. Following the idea of Mel Schwartz, studies with neutrino beams began at the Alternating Gradient Synchrotron at Brookhaven and at the Proton Synchrotron (PS) at CERN in the early 1960s, and were taken to higher energies at Fermilab and at CERN’s Super Proton Synchrotron in the 1970s. They continue today, using high-intensity beams produced at Fermilab and the Japan Proton Accelerator Research Complex.

Dirac, Pauli and Peierls

At CERN, the story began in earnest in 1963 with an intense neutrino beam provided courtesy of Simon van der Meer’s invention of the neutrino horn – a magnetic device that focuses the charged particles (pions and kaons) whose decays give rise to the neutrinos – coupled with a scheme for fast ejection of the proton beam from the PS devised by Berend Kuiper and Günther Plass in 1959. First in line to receive the neutrinos was the 500-litre Heavy-Liquid Bubble Chamber (HLBC) built by a team led by Colin Ramm.

The combination worked well, allowing the measurement of neutrino cross-sections for various kinds of interactions. Studies of quasi-elastic scattering, such as ν + n → μ⁻ + p, mirrored for the weak interaction – the only way that neutrinos can interact – measurements that had been made for several years in elastic electron–nucleon scattering at Stanford. The cross-sections measured in electron scattering were used to derive electromagnetic “form factors” – an expression of how much the scattering is “smeared out” by an extended object, in comparison with the expectation from point-like scattering. The early results from the HLBC showed the weak form factors to be similar to those measured in electron scattering. Electrons and neutrinos were apparently “seeing” the same thing in (quasi-)elastic scattering.

Less easy to understand at the time were the “deep” inelastic events where the nucleus was more severely disrupted and several pions produced, as in ν + N → μ + N + nπ. The measurements of such events revealed a cross-section that increased with neutrino energy, rising to more than 10 times the quasi-elastic cross-section. Don Perkins of Oxford University reported on these results at a conference in Siena in 1963. “They were clearly trying to tell us a very simple thing,” he recalled nearly 40 years later, “but unfortunately, we were just not listening!” (Perkins 2001)


The following year, Murray Gell-Mann and George Zweig put forward their ideas about a new substructure to matter – the “quarks” or “aces” that made up the hadrons, including the protons and neutrons of the nucleus. Today, this sub-structure is a fundamental part of the Standard Model of particle physics, and many young people learn about quarks as basic building blocks of matter while still at school. At the time, however, it was a different story because there was no evidence for real particles with charges of 1/3 and 2/3 that the proposals required. Indeed, most physicists thought that this sub-structure was more of a mathematical convenience.

The picture began to change at a conference in Vienna in 1968, when deep-inelastic electron-scattering measurements at SLAC’s 3 km linear accelerator by the SLAC-MIT experiment – the direct descendant of the earlier experiments at Stanford – made people sit up and listen. The deep-inelastic cross-section divided by the cross-section expected from a point charge (Mott scattering) showed a surprisingly flat dependence on the square of the momentum transfer (q²). This was consistent with scattering from points within the nucleons rather than the smeared-out structure seen in elastic scattering, which gives a cross-section that falls away rapidly with q². Moreover, the measurements yielded a structure function – akin to the form factor of elastic scattering – that depended very little on q² at large values of the energy transfer, ν. Indeed, the data appeared consistent with a proposal by James Bjorken that in the limit of high q², the deep-inelastic structure functions would depend only on a dimensionless variable, x = q²/2Mν, for a target nucleon of mass M. This behaviour, called “scaling”, implied point-like scattering.
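
In the notation above (q² for the squared four-momentum transfer and ν for the energy transfer to the nucleon), the scaling hypothesis can be stated compactly; a minimal rendering of the statement is:

```latex
% Bjorken scaling: in the deep-inelastic limit the structure function
% depends on q^2 and \nu only through the dimensionless ratio x.
x = \frac{q^2}{2M\nu}, \qquad
F_2(q^2,\nu) \;\longrightarrow\; F_2(x)
\quad \text{as } q^2,\ \nu \to \infty \ \text{with } x \text{ fixed.}
```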

F₂(x) per nucleon

What did this imply for neutrinos? If they really were seeing the same structure as electrons – if the deep-inelastic structure function depended only on the dimensionless variable x – then the total cross-section should simply rise linearly with neutrino energy. As soon as Perkins saw the first results from SLAC in 1968, he quickly revisited the data from the Heavy-Liquid Bubble Chamber and found that this was indeed the case (Perkins 2001).

The “points” in the nucleons became known as partons – a name coined by Richard Feynman, who had been trying to understand high-energy proton–proton collisions in terms of point-like constituents. A key question to be resolved was whether the partons had the attributes of quarks, such as spin 1/2 and the predicted fractional charges. The SLAC-MIT group went on to make an outstanding series of systematic measurements over the next couple of years, which provided indisputable evidence for the point-like structure within the nucleon – and led in 1990 to the award of the Nobel Prize in Physics to Jerome Friedman, Henry Kendall and Richard Taylor. This wealth of data included results that clearly indicated that the partons must have spin 1/2.

In the meantime, a new heavy-liquid bubble chamber had been installed at the PS at CERN. Gargamelle was 4.8 m long and contained 18 tonnes of Freon, and had been designed and built at Orsay under the inspired leadership of André Lagarrigue, of the École Polytechnique. It was to become famous for the first observation of weak neutral currents in 1973 (CERN Courier September 2009 p25). The same year saw the first publication of total cross-sections measured in Gargamelle, based on a few thousand events, not only with neutrinos but also antineutrinos. The results had in fact been aired first the previous year at Fermilab, at the 16th International Conference on High-Energy Physics (ICHEP). They showed clearly the linear rise with energy consistent with point-like scattering. Moreover, the neutrino cross-section was around three times larger than that for antineutrinos, which confirmed that neutrinos and antineutrinos were also seeing structure with spin 1/2.
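
Both features follow from point-like scattering on spin-1/2 constituents. In the quark–parton picture (a textbook sketch rather than the Gargamelle analysis itself), the point-like cross-section grows in proportion to the centre-of-mass energy squared, and the characteristic (1 − y)² angular suppression for antineutrinos on quarks integrates to the factor of about three:

```latex
% Point-like weak scattering: total cross-section rises linearly with energy,
% since \sigma \propto G_F^2 s and s \simeq 2 M E_\nu for a fixed target.
\sigma_{\mathrm{tot}} \;\propto\; G_F^{2}\, s \;\simeq\; 2\, G_F^{2} M E_\nu

% Neutrinos on quarks scatter isotropically (flat in the inelasticity y),
% antineutrinos on quarks carry a (1-y)^2 factor, so for a quark-only target
\frac{\sigma^{\bar\nu}}{\sigma^{\nu}} \;\simeq\;
\frac{\int_0^1 (1-y)^{2}\,dy}{\int_0^1 dy} \;=\; \frac{1}{3}
```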

André Lagarrigue in front of the Gargamelle bubble chamber at CERN

However, there was still more. With data from both neutrinos and antineutrinos, the team could derive one of the structure functions that was also measured in deep-inelastic electron scattering. Electrons scatter electromagnetically in proportion to the square of the charge of whatever is doing the scattering. Neutrinos, by contrast, are blind to charge and scatter only weakly. A comparison of the two structure functions should depend only on the mean charge squared seen by the electrons, which for quarks of charges 2/3 and –1/3 in equal numbers in the deuterium target used in the experiment at SLAC would be 5/18. So, the structure function from neutrino scattering, with no charge dependence, should be 18/5 of that for electron scattering. As Feynman himself said: “If you never did believe that ‘nonsense’ that quarks have non-integral charges, we have a chance now, in comparing neutrino to electron scattering, to finally discover for the first time whether the idea…is physically sensible, physically sound; that’s exciting.” (Feynman 1974)
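
The arithmetic behind those factors is short enough to spell out (the standard quark-model estimate, assuming equal numbers of up and down quarks in an isoscalar target and neglecting strange quarks):

```latex
% Mean squared quark charge seen by an electron in an isoscalar target:
\langle e_q^{2}\rangle \;=\;
\tfrac{1}{2}\Bigl[\bigl(\tfrac{2}{3}\bigr)^{2} + \bigl(\tfrac{1}{3}\bigr)^{2}\Bigr]
\;=\; \tfrac{1}{2}\Bigl[\tfrac{4}{9} + \tfrac{1}{9}\Bigr] \;=\; \tfrac{5}{18}

% Neutrinos are blind to charge, hence the expected relation
F_2^{eN}(x) \;\simeq\; \tfrac{5}{18}\,F_2^{\nu N}(x)
\quad\Longleftrightarrow\quad
F_2^{\nu N}(x) \;\simeq\; \tfrac{18}{5}\,F_2^{eN}(x)
```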

At the 17th ICHEP held in London in 1974, particle physicists from around the world were able to see the results from Gargamelle for themselves – the neutrino structure function, when multiplied by 18/5, did indeed fit closely with the data from the SLAC-MIT experiment (see figure). Forty years on from the paper by Bethe and Peierls, neutrino cross-sections were not only being measured, they were revealing a more fundamental layer to nature – the quarks.

These early experiments were just the beginning of what became a prodigious effort, mainly at CERN and Fermilab, using neutrinos to probe the structure of the nucleon within the context of quantum chromodynamics, the theory of quarks and the gluons that bind them together. And the effort is not finished, because neutrinos are still being used to understand puzzles that remain in the structure of the nucleus. But that is another story.

MINERvA searches for wisdom among neutrinos


Neutrino physicists enjoy a challenge, and the members of the MINERvA (Main INjector ExpeRiment for ν-A) collaboration at Fermilab are no exception. MINERvA seeks to make precise measurements of neutrino reactions using the Neutrinos at the Main Injector (NuMI) beam on both light and heavy nuclei. Does this goal reflect the wisdom of the collaboration’s namesake? Current and future accelerator-based neutrino-oscillation experiments must predict neutrino reactions on nuclei precisely if they are to search successfully for CP violation in oscillations. Understanding matter–antimatter asymmetries might in turn lead to a microphysical mechanism to answer the most existential of questions: why are we here? Although MINERvA might provide vital assistance in meeting this worthy goal, neutrinos never yield answers easily. Moreover, using neutrinos to probe the dynamics of reactions on complicated nuclei convolutes two challenges.

The history of neutrinos is fraught with theorists underestimating the persistence of experimentalists (Close 2010). Wolfgang Pauli’s quip about the prediction of the neutrino, “I have done a terrible thing. I have postulated a particle that cannot be detected,” is a famous example. Nature rejected Enrico Fermi’s 1933 paper explaining β decay, saying it “contained speculations too remote from reality to be of interest to readers”. Eighty years ago, when Hans Bethe and Rudolf Peierls calculated the first prediction for the neutrino cross-section, they said, “there is no practical way of detecting a neutrino” (p23). But when does practicality ever stop physicists? The theoretical framework developed during the following two decades pointed to numerous measurements of great interest that could be made with neutrinos, but the technology of the time was not sufficient to enable them. The story of neutrinos across the ensuing decades is that of many dedicated experimentalists overcoming these barriers. Today, the MINERvA experiment continues Fermilab’s rich history of difficult neutrino measurements.

Neutrinos at Fermilab

Fermilab’s research on neutrinos is as old as the lab itself. While it was still being built, the first director, Robert Wilson, said in 1971 that the initial aim of experiments on the accelerator system was to detect a neutrino. “I feel that we then will be in business to do experiments on our accelerator…[Experiment E1A collaborators’] enthusiasm and improvisation gives us a real incentive to provide them with the neutrinos they are waiting for.” The first experiment, E1A, was designed to study the weak interaction using neutrinos, and was one of the first experiments to see evidence of the weak neutral current. In the early years, neutrino detectors at Fermilab included both the “15 foot” (4.6 m) bubble chamber filled with neon or hydrogen, and coarse-grained calorimeters. As the lab grew, the detector technologies expanded to include emulsion, oil-based Cherenkov detectors, totally active scintillator detectors, and liquid-argon time-projection chambers. The physics programme expanded as well, to include 42 neutrino experiments either completed (37), running (3) or being commissioned (2). The NuTeV experiment collected an unprecedented million high-energy neutrino and antineutrino interactions, of both charged and neutral currents. It provided precise measurements of structure functions and a measurement of the weak mixing angle in an off-shell process with precision comparable to contemporary W-mass measurements (Formaggio and Zeller 2013). Then in 2001, the DONuT experiment observed the τ neutrino – the last of the fundamental fermions to be detected.

neutrino event

While much of the progress of particle physics has come by making proton beams of higher and higher energies, the most recent progress at Fermilab has come from making neutrino beams of lower energies but higher intensities. This shift reflects the new focus on neutrino oscillations, where the small neutrino mass demands low-energy beams sent over long distances. While NuTeV and DONuT used beams of 100 GeV neutrinos in the 1990s, the MiniBooNE experiment, started in 2001, used a 1 GeV neutrino beam to search for oscillations over a short distance. The MINOS experiment, which started in 2005, used 3 GeV neutrinos and measured them both at Fermilab and in a detector 735 km away, to study the oscillations that were first seen in atmospheric neutrinos. MicroBooNE and NOvA – two experiments completing construction at the time of this article – will place yet more sensitive detectors in these neutrino beamlines. Fermilab is also planning the Long-Baseline Neutrino Experiment, designed to be sensitive enough to resolve CP violation in neutrino oscillations.

A spectrum of interactions

Depending on the energy of the neutrino, different types of interactions will take place (Formaggio and Zeller 2013, Kopeliovich et al. 2012). In low-energy interactions, the neutrino will scatter from the entire nucleus, perhaps ejecting one or more of the constituent nucleons in a process referred to as quasi-elastic scattering. At slightly higher energies, the neutrinos interact with nucleons and can excite a nucleon into a baryon resonance that typically decays to create new final-state hadrons. In the high-energy limit, much of the scattering can be described as neutrinos scattering from individual quarks in the familiar deep-inelastic scattering framework. MINERvA seeks to study this entire spectrum of interactions.

To measure CP violation in neutrino-oscillation experiments, quasi-elastic scattering is an important channel. In a simple model where the nucleons of the nucleus live in a nuclear binding potential, the reaction rate can be predicted. In addition, an accurate estimate of the energy of the incoming neutrino can be made using only the final-state charged lepton’s energy and angle, which are easy to measure even in a massive neutrino-oscillation experiment. However, the MiniBooNE experiment at Fermilab and the NOMAD experiment at CERN both measured the quasi-elastic cross-section and found contradictory results in the framework of this simple model (Formaggio and Zeller 2013, Kopeliovich et al. 2012).
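
For the simplest case – charged-current quasi-elastic scattering ν_μ + n → μ⁻ + p on a free neutron at rest, with binding energy and Fermi motion neglected – four-momentum conservation gives the neutrino energy from the measured muon energy E_μ, momentum p_μ and scattering angle θ_μ alone (a textbook sketch of the estimator described above, not MINERvA’s exact formula):

```latex
% Quasi-elastic neutrino-energy estimator, free neutron at rest:
E_\nu \;=\; \frac{m_p^{2} - m_n^{2} - m_\mu^{2} + 2\,m_n E_\mu}
                 {2\,\bigl(m_n - E_\mu + p_\mu\cos\theta_\mu\bigr)}
```

In practice, experiments working with nuclear targets typically replace m_n by m_n − E_B, with E_B an effective binding energy – one of the nuclear-model dependences discussed below.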

The neutrino quasi-elastic cross-section

One possible explanation of this discrepancy can be found in more sophisticated treatments of the environment in which the interaction occurs (Formaggio and Zeller 2013, Kopeliovich et al. 2012). The simple relativistic Fermi-gas model treats the nucleus as quasi-free independent nucleons with Fermi motion in a uniform binding potential. The spectral-function model includes more correlation among the nucleons in the nucleus. However, more complete models that include the interactions among the many nucleons in the nucleus modify the quasi-elastic reaction significantly. In addition to modelling the effect of the nuclear environment on the initial interaction, final-state interactions of the produced hadrons inside the nucleus must also be modelled. For example, if a pion is created inside the nucleus, it might be absorbed through interactions with other nucleons before leaving the nucleus. Experimentalists must provide sufficient data to distinguish between the models.
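
As an illustration of the first ingredient mentioned above, the sketch below samples an initial-state nucleon from a simple relativistic Fermi gas: momenta drawn uniformly from a Fermi sphere, with a constant removal energy standing in for the binding potential. The Fermi momentum and removal energy are illustrative carbon-like values of the sort used in neutrino event generators, not parameters from MINERvA.

```python
import math
import random

# Illustrative relativistic Fermi-gas parameters (typical carbon-like values,
# not tuned to any particular experiment):
K_FERMI = 0.221    # GeV/c, Fermi momentum
E_BIND  = 0.025    # GeV, constant removal (binding) energy
M_N     = 0.939    # GeV, nucleon mass

def sample_nucleon():
    """Draw an initial-state nucleon uniformly from the Fermi sphere |p| < K_FERMI."""
    while True:
        px, py, pz = (random.uniform(-K_FERMI, K_FERMI) for _ in range(3))
        if px * px + py * py + pz * pz < K_FERMI ** 2:
            break
    p = math.sqrt(px * px + py * py + pz * pz)
    # On-shell energy minus the removal energy stands in for the binding potential.
    energy = math.sqrt(p * p + M_N * M_N) - E_BIND
    return energy, px, py, pz

if __name__ == "__main__":
    e, px, py, pz = sample_nucleon()
    print(f"E = {e:.3f} GeV, p = ({px:+.3f}, {py:+.3f}, {pz:+.3f}) GeV/c")
```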

The ever-elusive neutrino has forced experimentalists to develop clever ways to measure neutrino cross-sections, and this is exactly what MINERvA is designed to do with precision. The experiment uses the NuMI beam – a highly intense neutrino beam. The MINERvA detector is made of finely segmented scintillators, allowing the measurement of the angles and energies of the particles within. Figures 1 and 2 show the detector and a typical event in the nuclear targets. The MINOS near-detector, located just behind MINERvA, is used to measure the momentum and charge of the muons. With this information, MINERvA can measure precise cross-sections of different types of neutrino interactions: quasi-elastic, resonance production, and deep-inelastic scatters, among others.

ratio of charged-current cross-section

The MINERvA collaboration began by studying quasi-elastic scattering of muon neutrinos (MINERvA 2013b) and muon antineutrinos (MINERvA 2013a). By measuring the muon kinematics to estimate the neutrino energies, they were able to measure the neutrino and antineutrino cross-sections. The data, shown in figure 3, suggest that the nucleons do spend some time in the nucleus joined together in pairs. When the neutrino interacts with such a pair, the pair is kicked out of the nucleus. Examining the visible energy around the interaction vertex allowed a search for evidence of the pair of nucleons. Experience from electron quasi-elastic scattering leads to an expectation of final-state proton–proton pairs for neutrino quasi-elastic scattering and neutron–neutron pairs for antineutrino scattering. MINERvA’s measurements of the energy around the vertex in both neutrino and antineutrino quasi-elastic scattering support this expectation (figure 3, right).

A 30-year-old puzzle

Another surprise beyond the standard picture in lepton–nucleus scattering emerged 30 years ago in deep-inelastic muon scattering. The European Muon Collaboration (EMC) observed a modification of the structure functions in heavy nuclei that is still theoretically unresolved, in part because there is no other reaction in which an analogous effect is observed. Neutrino and antineutrino deep-inelastic scattering might see related effects with different leptonic currents, and therefore different couplings to the constituents of the nucleus (Gallagher et al. 2010, Kopeliovich et al. 2012). MINERvA has begun this study using large targets of active scintillator and passive graphite, iron and lead (MINERvA 2014). Figure 4 shows the ratio of lead to scintillator and illustrates behaviour that is not in agreement with a model based on charged-lepton-scattering modifications of deep-inelastic scattering and the elastic physics described above. Similar behaviour, but with smaller deviations from the model, is observed in the ratio of iron to scintillator. MINERvA’s investigation of this effect will benefit greatly from its current operation in the upgraded NuMI beam for the NOvA experiment, which is more intense and higher in energy on the beamline’s axis. Both features will allow more access to the kinematic regions where deep-inelastic scattering dominates. By including a long period of antineutrino operation needed for NOvA’s oscillation studies, an even more complete survey of the nucleons can be made. The end result of these investigations will be a data set that can offer a new window on the process behind the EMC effect.


Initially in the history of the neutrino, theory led experiment by several decades. Now, experiment leads theory. Neutrino physics has repeatedly identified interesting and unexpected physics. Currently, physicists are trying to understand how the most abundant matter particle in the universe interacts in the simplest of situations. MINERvA is just getting started on answering these types of questions, and there are many more interactions to study. The collaboration is also looking at what happens when neutrinos produce pions or kaons when they hit a nucleus, and at how well it can measure the rate at which neutrinos scatter off electrons – the only “standard candle” in this business.

Time after time, models fail to predict what is seen in neutrino physics. The MINERvA experiment, among others, has shown that quasi-elastic scattering is a wonderful tool for studying the nuclear environment. Perhaps the use of neutrinos – once thought impossible to detect – as a probe of the inside of the nucleus would make Pauli, Fermi, Bethe, Peierls and the rest chuckle.

Advanced radiation detectors in industry

 

The European Physical Society’s Technology and Innovation Group (EPS-TIG) was set up in 2011 to work at the boundary between basic and applied sciences, with annual workshops organized in collaboration with CERN as its main workhorse (CERN Courier April 2013 p31). The second workshop, organized in conjunction with the department of physics and astronomy and the “Fondazione Flaminia” of Bologna University, took place in Ravenna on 11–12 November 2013. The subject – advanced radiation detectors for industrial use – brought experts involved in the research and development of advanced sensors, together with representatives from related spin-off companies.

The first session, on technology-transfer topics, opened with a keynote speech by Karsten Buse, director of the Fraunhofer Institute for Physical Measurement Technique (IPM), Freiburg. In the spirit of Joseph von Fraunhofer (1787–1826) – a researcher, inventor and entrepreneur – the Fraunhofer Gesellschaft promotes innovation and applied research that is of direct use for industry. Outlining the IPM’s mission and the specific competences and services it provides, Buse presented an impressive overview of technology projects that have been initiated and developed or improved and supported by the institute. He also emphasized the need to build up and secure intellectual property, and explained contract matters. The success stories include the MP3 audio-compression algorithm, white LEDs to replace conventional light bulbs, and all-solid-state widely tunable lasers. Buse concluded by observing that bridging the gap between academia and industry requires some attention, but is less difficult than often thought and also highly rewarding. A lively discussion followed in the audience of students, researchers and partners from industry.

The second talk focused on knowledge transfer (KT) from the perspective of CERN’s KT Group. First, Giovanni Anelli described the KT activities based on CERN’s technology portfolio and on people – that is, students and fellows. In the second part, Manjit Dosanjh presented the organization’s successful and continued transfer to medical applications of advanced technologies in the fields of accelerators, detectors and informatics technologies. Catalysing and facilitating collaborations between medical doctors, physicists and engineers, CERN plays an important role in “physics for health” projects at the European level via conferences and networks such as ENLIGHT, set up to bring medical doctors and physics researchers together (CERN Courier December 2012 p19).

Andrea Vacchi of INFN/Trieste reviewed the INFN’s KT activities. He emphasized that awareness of the value of the technology assets developed inside INFN is growing. In the past, technology transfer between INFN and industry happened mostly through the involvement of suppliers in the development of technologies. In future, INFN will take more proactive measures to encourage technology transfer between INFN research institutions and industry.

From lab to industry

The first afternoon was rounded off by Colin Latimer of Queen’s University Belfast, a member of the EPS Executive Committee. He illustrated the varying timescales between invention and mass application in multi-billion-dollar markets with a number of example technologies, including optical fibres (1928), liquid-crystal displays (1936), magnetic-resonance imaging (MRI) scanners (1945) and lasers (1958), with high-temperature superconductors (1986) and graphene (2004) still waiting to make a major impact. Latimer went on to present results from the recent study commissioned by the EPS from the Centre for Economics and Business Research, which has shown the importance of physics to the European economy (EPS/Cebr 2013).

The second part of the workshop was devoted to sensors and innovation in instrumentation and industrial applications, starting with a series of talks that reviewed the latest developments. This was followed by presentations from industry on various sensor products, application markets and technological developments.

Erik Heijne, a pioneer of silicon and silicon-pixel detectors at CERN, started by discussing innovation in instrumentation through the use of microelectronics technology. Miniaturization to sub-micron silicon technologies allows many functions to be compacted into a small volume. This has led in turn to the integration of sensors and processing electronics in powerful devices, and has opened up new fields of applications (CERN Courier March 2014 p26). In high-energy particle physics, the new experiments at the LHC have been based on sophisticated chips that allow unprecedented event rates of up to 40 MHz. Some of the chips – or at least the underlying ideas – have found applications in materials analysis, medical imaging and other types of industrial equipment. The radiation imaging matrix, for example, based on silicon-pixel and integrated read-out chips, has many applications already.

Detector applications

Julia Jungmann of PSI emphasized the use of active pixel detectors for imaging in mass spectrometry in molecular pathology, in research done at the FOM Institute AMOLF in Amsterdam. The devices have promising features for fast, sensitive ion-imaging with time and space information from the same detector, high spatial resolution, direct imaging acquisition and highly parallel detection. The technique, which is based on the family of Medipix/Timepix devices, provides detailed information on molecular identity and localization – vital, for example in detecting the molecular basis of a pathology without the need to label bio-molecules. Applications include disease studies, drug-distribution studies and forensics. The wish list is now for chips with 100 ps time bins, a 1 ms measurement interval, multi-hit capabilities at the pixel level, higher read-out rates and high fluence tolerance.

In a similar vein, Alberto Del Guerra of the University of Pisa presented the technique of positron-emission tomography (PET) and its applications. Outlining the physics and technology of PET, he showed improved variants of PET systems and applications to molecular imaging, which also allow the visual representation, characterization and quantification of biological processes at the cellular and subcellular levels within living organisms. Clinical systems of hybrid PET and computerized tomography (CT) for application in oncology and neurology, human PET and micro-PET equipment, combined with small-animal CT, are available from industry, and today there are also systems where PET and magnetic resonance imaging (MRI) are combined. Such systems are being used in hadron therapy in Italy for monitoring purposes at the 62 MeV proton cyclotron of the CATANA facility in Catania, and at the proton and carbon synchrotron of the CNAO centre in Pavia. An optimized tri-modality imaging tool for schizophrenia is even being developed, combining PET with MRI and electroencephalography measurements. Del Guerra’s take-home message was that technology transfer in the medical field needs long-term investment – industry can withdraw halfway if a technology is not profitable (for example, Siemens in the case of proton therapy). In future, applications will be multimodal with PET combined with other imaging techniques (CT, MRI, optical projection tomography), for applications to specific organs such as the brain, breast, prostate and more.

The next topic related to recent developments in the silicon drift detector (SDD) and its applications. Chiara Guazzoni, of the Politecnico di Milano and INFN Milan, gave an excellent overview of SDDs, which were invented by Emilio Gatti and Pavel Rehak 30 years ago. These detectors are now widely used in X-ray spectroscopy and are commercially available. Conventional and non-conventional applications include the non-destructive analysis of cultural heritage and biomedical imaging based on X-ray fluorescence, proton-induced X-ray emission studies, gamma-ray imaging and spectroscopy, X-ray scatter imaging, etc. As Gatti and Rehak stated in their first patent, “additional objects and advantages of the invention will become apparent to those skilled in the art,” and Guazzoni hopes that the art will keep “drifting on” towards new horizons.

Moving on to presentations from industry and start-up companies, Jürgen Knobloch of KETEK GmbH in Munich presented new high-throughput, large-area SDDs, starting with a historical review of the work of Josef Kemmer, who in 1970 started to develop planar silicon technology for semiconductor detectors. Collaborating with Rehak and the Max-Planck Institute in Munich, Kemmer went on to produce the first SDDs with a homogeneous entrance window, with depleted field-effect transistor (DEPFET) and MOS-type DEPFET (DEPMOS) technologies. In 1989 he founded the start-up company KETEK, which is now the global commercial market leader in SDD technology. Knobloch presented the range of products from KETEK and concluded with a list of recommendations for better collaboration between research and industry. KETEK’s view on how science and industry can better collaborate includes: workshops of the kind organized by EPS-TIG; meetings between scientists and technology companies to set out practical needs and future requirements; involvement of technology-transfer offices to resolve intellectual-property issues; encouragement of industry to accept longer times for returns on investments; and the strengthening of synergies between basic research and industry R&D.

Knobloch’s colleague at KETEK, Werner Hartinger, then described new silicon photomultipliers (SiPMs) with high photon-detection efficiency, and listed the characteristics of a series of KETEK’s SiPM sensors, which also feature a large gain (>10⁶) with low excess noise and a low temperature coefficient. KETEK has off-the-shelf SiPM devices and also customizes devices for CERN. The next steps will be continuous noise reduction (in both dark rate and cross-talk) by enhancing the KETEK “trench” technology, enhancement of the pulse-shape and timing properties by optimizing parasitic elements and read-out, and the production of chip-size packages and arrays at the package level.

New start-ups

PIXIRAD, a new X-ray imaging system based on chromatic photon-counting technology, was presented by Ronaldo Bellazzini of PIXIRAD Imaging Counters srl – a recently constituted INFN spin-off company. The detector can deliver extremely clear and highly detailed X-ray images for medical, biological, industrial and scientific applications in the energy range 1–100 keV. Photon counting, colour mode and high spatial resolution lead to an optimal ratio of image quality to absorbed dose. Modules with units of 1, 2, 4 and 8 tiles have been built with almost zero dead space between the blocks. A complete X-ray camera based on the PIXIRAD-1 single-module assembly is available for customers in scientific and industrial markets for X-ray diffraction, micro-CT, etc. A dedicated machine to perform X-ray slot-scanning imaging has been designed and built and is currently under test. This system, which uses the PIXIRAD-8 module and is able to produce large-area images with fine position resolution, has been designed for digital mammography, which is one of the most demanding X-ray imaging applications.

CIVIDEC Instrumentation – another start-up company – was founded in 2009 by Erich Griesmayer. He presented several examples of applications of the company’s products, which are based on diamond-detector technology. They have found use at the LHC and other accelerator beamlines as beam-loss and beam-position monitors, for time measurements, high-radiation-level measurements and neutron time-of-flight, and as low-temperature detectors in superconducting quadrupoles. The company provides turn-key solutions that connect via the internet, supplying clients worldwide.

Nicola Tartoni, head of the detector group at the Diamond Light Source, outlined the layout of the facility and its diversified programmes. He presented an overview of the detector development and beamlines of this outstanding user facility in partnership with industry, with diverse R&D projects of increasing complexity.

Last, Carlos Granja, of the Institute of Experimental and Applied Physics (IEAP) at the Czech Technical University (CTU) in Prague, described the research carried out with the European Space Agency (ESA) demonstrating the impressive development in detection and particle tracking of individual radiation quanta in space. This has used the Timepix hybrid semiconductor pixel-detector developed by the Medipix collaboration at CERN. The Timepix-based space-qualified payload, produced by IEAP CTU in collaboration with the CSRC company of the Czech Republic, has been operating continuously on board ESA’s Proba-V satellite in low-Earth orbit at 820 km altitude, since being launched in May 2013. Highly miniaturized devices produced by IEAP CTU are also flying on board the International Space Station for the University of Houston and NASA for high-sensitivity quantum dosimetry of the space-station crew.

In other work, IEAP CTU has developed a micro-tracker particle telescope in which particle tracking and directional sensitivity are enhanced by the stacked layers of the Timepix device. For improved and wide-application radiation imaging, edgeless Timepix sensors developed at VTT and Advacam in Finland, with advanced read-out instrumentation and micrometre-precision tiling technology (available at IEAP CTU and the WIDEPIX spin-off company, of the Czech Republic), enable large sensitive areas up to 14 cm square to be covered by up to 100 Timepix sensors. This development allows the extension of high-resolution X-ray and neutron imaging at the micrometre level to a range of scientific and industrial applications.

• For more about the workshop, visit www.emrg.it/TIG_Workshop_2013/program.php?language=en. For the presentations, see http://indico.cern.ch/event/284070/.

The 1980s: spurring collaboration

The 1980s were characterized by two outstanding achievements that were to influence the long-term future of CERN. First came the discovery of the W and Z particles, the carriers of the weak force, produced in proton–antiproton collisions at the Super Proton Synchrotron (SPS) and detected by the UA1 and UA2 experiments. These were the first of the now-typical collider experiments, covering the full solid angle and requiring large groups of collaborators from many countries. The production of a sufficient number of antiprotons and their handling in the SPS underlay these successes, which were crowned by the Nobel Prize awarded to Carlo Rubbia and Simon van der Meer in 1984.


Then came the construction and commissioning of the Large Electron–Positron (LEP) collider. With its 27 km tunnel, it is still the largest collider of this kind ever built. Four experiments were approved – ALEPH, DELPHI, L3 and OPAL – representing again a new step in international co-operation. More than 2000 physicists and engineers from 12 member states and 22 non-member states participated in the experiments. Moreover, most of the funding of several hundred million Swiss francs had to come from outside the organization. CERN contributed only about 10% and had practically no reserves in case of financial overruns. Therefore the collaborations had to achieve a certain independence, and had to learn to accept common responsibilities. A new “sociology” for international scientific co-operation was born, which later became a model for the LHC experiments.

A result of the worldwide attraction of LEP was that from 1987 onwards, more US physicists worked at CERN than particle physicists from CERN member states at US laboratories. In Europe, two more states joined CERN: Spain, which had left CERN in 1968, came back in 1983, and Portugal joined in 1985. However, negotiations at the time with Israel and Turkey failed, for different reasons.

But the 1980s also saw “anti-growth”. Previously, CERN had received special allocations to the budget for each new project, leading to a peak around 1974 and declining afterwards. When LEP was proposed in 1981, the budget was 629 million Swiss francs. After long and painful discussions, Council approved a constant yearly budget of 617 million Swiss francs for the construction of LEP, under the condition that any increase – including automatic compensation for inflation – across the construction period of eight years was excluded. The unavoidable consequence of these thorny conditions was the termination of many non-LEP programmes (e.g. the Intersecting Storage Rings and the bubble-chamber programme) and a “stripped down” LEP project. The circumference of the tunnel had to be reduced, but was maintained at 27 km in view of a possible proton–proton collider in the same tunnel – which indeed proved to be a valuable asset.

A precondition to building LEP with decreasing resources was the unification of CERN. CERN II had been established in 1971 for construction of the SPS, with its own director-general, staff and management. From 1981, CERN was united under one director-general, but staff tended to adhere to their old groups, showing solidarity with their previous superiors and colleagues. However, for the construction of LEP, all of CERN’s resources had to be mobilized, and about 1000 staff were transferred to new assignments.

Another element of “anti-growth” had long-term consequences. Council was convinced that the scientific programme was first class, but had doubts about the efficiency of management. An evaluation committee was established to assess the human and material resources, with a view to reducing the CERN budget. In the end, the committee declined to consider a lower material budget because this would undoubtedly jeopardize the excellent scientific record of CERN. They proposed instead a reduction of staff from about 3500 to 2500, through an early retirement programme, and during the construction of the LHC this was even lowered to 2000. However, to cope with the increasing tasks and the rising number of outside users, many activities had to be outsourced, so considerable reduction of the budget was not achieved.

Yet despite these limiting conditions, LEP was built within the foreseen time and budget, thanks to the motivation and ingenuity of the CERN staff. First collisions were observed on 13 August 1989.

The theme of CERN’s 60th anniversary is “science for peace” – from its foundation, CERN has had the task not only of promoting science but also of fostering peace. This was emphasized at a ceremony for the 30th anniversary in 1984 by the American physicist and co-founder of CERN, Isidor Rabi: “I hope that the scientists of CERN will remember…[their role] as guardians of this flame of European unity so that Europe can help preserve the peace of the world.” Indeed, during the 1980s CERN continued to fulfil this obligation, with many examples such as co-operation with East European countries (in particular via JINR, Dubna) and with countries from the Far East (physicists from Mainland China and Taiwan were allowed to work together in the same experiment, L3, on LEP). Later, CERN became the cradle of SESAME, an international laboratory in the Middle East.

Unavoidably, CERN’s growth into a world laboratory is changing how it functions at all levels. However, we can be confident that it will perform its tasks in the future with the same enthusiasm, dedication and efficiency as in the past.

The Theory of the Quantum World: Proceedings of the 25th Solvay Conference on Physics

By David Gross, Marc Henneaux and Alexander Sevrin (eds.)
World Scientific
Hardback: £58
Paperback: £32
E-book: £24


Since 1911, the Solvay Conferences have helped shape modern physics. The 25th edition in October 2011, chaired by David Gross, continued this tradition, while also celebrating the conferences’ first centennial. The development and applications of quantum mechanics have been the main threads throughout the series, and the 25th Solvay Conference gathered leading figures working on a variety of problems in which quantum-mechanical effects play a central role.

In his opening address, Gross emphasized the success of quantum mechanics: “It works, it makes sense, and it is hard to modify.” In the century since the first Solvay Conference, the worry expressed by H A Lorentz in his opening address in 1911 – “we have reached an impasse; the old theories have been shown to be powerless to pierce the darkness surrounding us on all sides” – has been resolved. Physics is not in crisis today, but as Gross says there is “confusion at the frontiers of knowledge”. The 25th conference therefore addressed some of the most pressing open questions in the field of physics. As Gross admits, the participants were “unlikely to come to a resolution during this meeting….[but] in any case it should be lots of fun”.

The proceedings contain the rapporteur talks and, in the Solvay tradition, they also include the prepared comments to these talks. The discussions among the participants – some involving dramatically divergent points of view – have been carefully edited and are reproduced in full.

The reports cover the seven sessions: “History and reflections” (John L Heilbron and Murray Gell-Mann); “Foundations of quantum mechanics and quantum computation” (Anthony Leggett and John Preskill); “Control of quantum systems” (Ignacio Cirac and Steven Girvin); “Quantum condensed matter” (Subir Sachdev); “Particles and fields” (Frank Wilczek); and “Quantum gravity and string theory” (Juan Maldacena and Alan Guth). The proceedings end – as did the conference – with a general discussion attempting to arrive at a synthesis, where the reader can judge if it fulfilled the prediction by Gross and was indeed “lots of fun”.

Mathematics of Quantization and Quantum Fields

By Jan Dereziński and Christian Gérard
Cambridge University Press
Hardback: £90 $140
Also available as an e-book


Unifying a range of topics currently scattered throughout the literature, this book offers a unique review of mathematical aspects of quantization and quantum field theory. The authors present both basic and more advanced topics in a mathematically consistent way, focusing on canonical commutation and anti-commutation relations. They begin with a discussion of the mathematical structures underlying free bosonic or fermionic fields, such as tensors, algebras, Fock spaces, and CCR and CAR representations. Applications of these topics to physical problems are discussed in later chapters.

Three-Particle Physics and Dispersion Relation Theory

By A V Anisovich, V V Anisovich, M A Matveev, V A Nikonov, J Nyiri and A V Sarantsev
World Scientific
Hardback: £65
E-book: £49


The necessity of describing three-nucleon and three-quark systems has led to continuing interest in the problem of three particles. The question of including relativistic effects appeared together with the consideration of the decay amplitude in the dispersion technique. The relativistic dispersion description of amplitudes always takes into account processes that are connected to the reaction in question by the unitarity condition or by virtual transitions. In the case of three-particle processes they are, as a rule, those where other many-particle states and resonances are produced. The description of these interconnected reactions and ways of handling them is the main subject of the book.

Science, Religion, and the Search for Extraterrestrial Intelligence

By David Wilkinson
Oxford University Press
Hardback: £25
Also available as an e-book


With doctorates in both astrophysics and theology, David Wilkinson is well qualified to discuss the subject matter of this book. He provides a captivating narrative on the scientific basis for the search for extraterrestrial intelligence and the religious implications of finding it. However, the academic nature of the writing might hinder the casual reader, with nearly every paragraph citing at least one reference.

Scientific and religious speculation on the possibility of life elsewhere in the universe is age-old. Wilkinson charts its history from the era of Plato and Democritus, where the existence of worlds besides our own was up for debate, to the latest data from telescopes and observatories, which paint vivid pictures of the many new worlds discovered around alien suns.

Readers familiar with astrophysics and evolutionary biology might find themselves skipping sections of the book that go into the specific conditions that need to be met for Earth-like life to evolve and attain intelligence. Wilkinson, however, is able to tie these varied threads together, presenting both the pessimism and optimism towards the presence of extraterrestrial life exhibited by scientists from different fields.

Despite referring to religion in the title, Wilkinson states early on that his work mainly discusses the relationship of Christianity and SETI. In this regard, the book provided me with much insight into Christian doctrine and its many – often contradictory – views on the universe. For example, despite the shaking of the geocentric perspective with the so-called Copernican Revolution, some Christian scholars from the era maintained that the special relationship of humans with God dictated that only Earth could harbour God-fearing life forms. Earth, therefore, retained its central position in the universe in a symbolic if not a literal sense. Other views held that nothing could be beyond the ability of an omnipotent, omnipresent God, who to showcase his glory might well have created other worlds with their own unique creatures.

After covering everything from science fiction to Christian creation beliefs, Wilkinson concludes with his personal views on the value of involving theology in searches for alien life. I leave you to draw your own conclusions about this! Overall, the book is a fascinating read and is recommended for those pondering the place of humanity in our vast universe.

Einstein’s Physics: Atoms, Quanta, and Relativity – Derived, Explained, and Appraised

By Ta-Pei Cheng
Oxford University Press
Hardback: £29.99
Also available as an e-book


Being familiar with the work of Ta-Pei Cheng, I started this book with considerable expectations – and I enjoyed the first two sections. I found many delightful discussions of topics in the physics that came after Albert Einstein, as well as an instructive discussion on his contributions to quantum theory, where the author shares Einstein’s reservations about quantum mechanics. However, the remainder of the text dedicated to relativity and related disciplines has problems. The two pivotal issues of special relativity, the aether and the proper time, provide examples of what I mean.

On p140, the author writes “…keep firmly in mind that Einstein was writing for a community of physicists who were deeply inculcated in the aether theoretical framework”, and continues “(Einstein, 1905) was precisely advocating that the whole concept of aether should be abolished”. Of course, Einstein was himself a member of the community “inculcated in the aether” and, indeed, aether was central in his contemplation of the form and meaning of physical laws. His position was cemented by the publication in 1920 of a public address on “Aether and the Theory of Relativity” and its final paragraph “…there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time…”. This view superseded the one expressed in 1905, yet that is where the discussion in the book ends.

The last paragraph on p141 states that “…the key idea of special relativity is the new conception of time.” Einstein is generally credited with the pivotal discovery of “body time”, or in Hermann Minkowski’s terminology, a body’s “proper time”. The central element of special relativity is the understanding of the invariant proper time. Bits and pieces of “time” appear in sections 9–12 of the book, but the term “proper time” is mentioned only incidentally. Then on p152 I read “A moving clock appears to run slow.” This is repeated on p191, with the addition “appears to this observer”. However, the word “appears” cannot be part of an unambiguous explanation. A student of Einstein’s physics would say “A clock attached to a material body will measure a proper-time lifespan independent of the state of inertial motion of the body. This proper time is the same as laboratory time only for bodies that remain always at rest in the laboratory.” That said, I must add that I have never heard of doubts about the reality of time dilation, which is verified when unstable particles are observed.

Once the book progresses into a discussion of Riemannian geometry and, ultimately, of general relativity, gauge theories and higher-dimensional Kaluza–Klein unification, it works through modern topics of only marginal connection to Einstein’s physics. However, I am stunned by several comments about Einstein. On p223, the author explains how “inept” Einstein’s long proof of general relativity was, and instead of praise for Einstein’s persistence, which ultimately led him to the right formulation of general relativity, we read about “erroneous detours”. On p293, the section on “Einstein and mathematics” concludes with a paragraph that explains the author’s view as to why “…Einstein had not made more advances…”. Finally, near the end, the author writes on p327 that Einstein “could possibly have made more progress had he been as great a mathematician as he was a great physicist”. This is a stinging criticism of someone who did so much, for things he did not do.

The book presents historical context and dates, but the dates of Einstein’s birth and death are found only in the index entry “Einstein”, and there is little more about him to be found in the text. A listing of 30 cited papers appears in appendix B1 and includes only three papers published after 1918. The book addresses mainly the academic work of Einstein’s first 15 years, 1902–1917, but I have read masterful papers that he wrote during the following 35 years, such as “Solution of the field of a star in an expanding universe” (Einstein and Straus 1945 Rev. Mod. Phys. 17 120 and 1946 Rev. Mod. Phys. 18 148).

I would strongly discourage the target group – undergraduate students and their lecturers – from using this book, because in the part on special relativity the harm far exceeds the good. To experts, I recommend Einstein’s original papers.

ASACUSA produces first beam of antihydrogen atoms for hyperfine study

A beam of antihydrogen atoms has for the first time been successfully produced by an experiment at CERN’s Antiproton Decelerator (AD). The ASACUSA collaboration reports the unambiguous detection of antihydrogen atoms 2.7 m downstream from their production, where the perturbing influence of the magnetic fields used to produce the antiatoms is negligibly small. This result is a significant step towards precise hyperfine spectroscopy of antihydrogen atoms.

High-precision microwave spectroscopy of ground-state hyperfine transitions in antihydrogen atoms is a main focus of the Japanese-European ASACUSA collaboration. The research aims at investigating differences between matter and antimatter to test CPT symmetry (the combination of charge conjugation, C, parity, P, and time reversal, T) by comparing the spectra of antihydrogen with those of hydrogen, one of the most precisely investigated and best understood systems in modern physics.

One of the key challenges in studying antiatoms is to keep them away from ordinary matter. To do so, other collaborations take advantage of antihydrogen’s magnetic properties and use strong, non-uniform magnetic fields to trap the antiatoms long enough to study them. However, the strong magnetic-field gradients degrade the spectroscopic properties of the antihydrogen. To allow for clean, high-resolution spectroscopy, the ASACUSA collaboration has developed an innovative set-up to transfer antihydrogen atoms to a region where they can be studied in flight, far from the strong magnetic field regions.

In ASACUSA, the antihydrogen atoms are formed by loading antiprotons and positrons into the so-called cusp trap, which combines the magnetic field of a pair of superconducting anti-Helmholtz coils (i.e., coils with antiparallel excitation currents) with the electrostatic potential of an assembly of multi-ring electrodes (CERN Courier March 2011 p17). The magnetic-field gradient allows the flow of spin-polarized antihydrogen atoms along the axis of the cusp trap. Downstream there is a spectrometer consisting of a microwave cavity to induce spin-flips in the antiatoms, a superconducting sextupole magnet to focus the neutral beam and an antihydrogen detector. (The microwave cavity was not installed in the 2012 experiment.)

The detector, located 2.7 m from the antihydrogen-production region, consists of single-crystal bismuth germanium oxide (BGO) surrounded by five plates of plastic scintillator. Antihydrogen atoms annihilating in the crystal emit three charged pions on average, so the signal required consists of a coincidence between the crystal and at least two plastic scintillators. Simulations show that this requirement reduces the background, from antiprotons annihilating upstream and from cosmic rays, by three orders of magnitude.

The ASACUSA researchers investigate the principal quantum number, n, of the antihydrogen atoms that reach the detector, because their goal is to perform hyperfine spectroscopy on the ground state, n = 1. For these measurements, field-ionization electrodes were positioned in front of the BGO, so that only antihydrogen atoms with n < 43 or n < 29 reached the detector, depending on the average electric field. The analysis indicates that 80 antihydrogen atoms were unambiguously detected with n < 43, with a significant number having n < 29.
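
To relate the quoted principal quantum numbers to the applied fields, a rough guide (not the collaboration’s own calculation) is the classical saddle-point estimate for field ionization of a hydrogen-like atom; it suggests threshold fields of very roughly 90 V/cm for n = 43 and 450 V/cm for n = 29.

```latex
% Classical saddle-point estimate of the ionization field for a
% (anti)hydrogen atom in a state of principal quantum number n:
F_{\mathrm{ion}}(n) \;\approx\; \frac{F_{\mathrm{at}}}{16\,n^{4}}
\;\approx\; \frac{3.2\times10^{8}\ \mathrm{V\,cm^{-1}}}{n^{4}},
\qquad
F_{\mathrm{at}} = \frac{m_e^{2}e^{5}}{(4\pi\varepsilon_0)^{3}\hbar^{4}}
\;\approx\; 5.14\times10^{9}\ \mathrm{V\,cm^{-1}}
```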

This analysis was based on data collected in 2012, before the accelerator complex at CERN entered its current long shutdown. Since then, the collaboration has been preparing for the restart of the experiment at the AD in October this year. A new cusp magnet is under construction, which will provide a much stronger focusing force on the spin-polarized antihydrogen beam. A cylindrical high-resolution tracker and a new antihydrogen-beam detector are also under development. In addition, the positron accumulator will deliver an order of magnitude more positrons. The team eventually needs a beam of antihydrogen in its ground state (n = 1), so the updated experiment will employ an ionizer with higher fields to extract antihydrogen atoms that are effectively in the ground state.
