
Helping CERN to benefit society

I first came to CERN as a student in the mid 1980s, and spent an entrancing summer learning the extent of my lack of knowledge in the field of physics (considerable!) and meeting fellow students from across Europe and further afield. It was a life-changing experience and the beginning of my love affair with CERN. On graduation I returned as a research fellow working on the Large Electron–Positron collider, but at the end of three wonderful years I reluctantly came to the realization that the world of research was not for me. I moved into a more commercial world, and have been working in the field of investments for more than 20 years.

However, as the saying goes, you can take the girl out of CERN but you can’t take CERN out of the girl. I stayed in touch, and when, a few years ago, I met Rolf Heuer, the current director-general, and heard his vision of creating a foundation that would expand CERN’s ability to reach a wider audience, I was keen to be involved.

Science is, in some respects, a field of study open largely to the most privileged. To do it well requires resources – trained educators, good facilities, textbooks, access to research and, of course, opportunity. These are not universally available. I was fortunate to become a summer student at CERN, but that is possible for only a lucky few, and there are many places in the world where even basic access to textbooks or research libraries is limited or non-existent.

Among those outside the field of science, there is not always a good understanding of why these things matter. The return on a country’s investment in science will come years into the future, beyond short-term electoral cycles. There can appear to be more immediate and pressing concerns competing for limited spending, so advocacy of the wider benefits to society of investment in science is important.

The case for pure scientific research is sometimes difficult to make. This is not just down to the concepts themselves, which most of us can grasp at only a superficial level. It is also because the most fundamental research does not necessarily know in advance what its ultimate usefulness or practicality might be. “Trust me, there will be some” does not sound convincing, even if experience shows that this generally turns out to be the case.

Communication of the tangible benefits of scientific discovery, which can occur a long time after the initial research, is an important part of securing the ongoing support of society for research endeavours, particularly in times of strained financial resources.

After many months of hard work, the CERN & Society Foundation was established in June 2014. Its purpose is “to spread the CERN spirit of scientific curiosity for the inspiration and benefit of society”. It aims to excite young people in the understanding and pursuit of science; to provide researchers in less privileged parts of the world with the tools and access they need to enable them to engage with the wider scientific community; to advocate the benefit of pure scientific research to key influencers; to inspire cultural activities and the arts; and to further the development of science in practical applications for the wider benefit of society as a whole, whether in medicine, technology or the environment. The excitement generated by the LHC gives us a unique opportunity to contribute to society in ways that cannot be done within the constraints of dedicated member-state funding.

To translate this vision into reality will, of course, take time. The foundation currently has a three-person board, made up of myself, Peter Jenni and the director-general. It has benefited from some initial generous donations to get it off the ground and allow us to fund our first projects.

The foundation benefits from the advice of the Fundraising Advisory Board (FAB), which ensures compliance with CERN’s Ethical Policy for Fundraising. It filters through ideas for projects looking for support, and recommends those that are likely to have the highest impact. The FAB, chaired by Markus Nordberg, consists of CERN staff who help us to prioritize the areas on which to focus. In our early years, we have three main themes where we are looking for support: education and outreach; innovation and knowledge exchange; and culture and the arts. With the help of CERN’s Development Office, we are seeking support from foundations, corporate donors and individuals. No donation is too large or small.

Matteo Castoldi, who heads the Development Office, has been instrumental on the practical side of the foundation, and is a good person to contact if you have ideas for a project, want help in formalizing a proposal for the FAB or would like to discuss any aspect of the CERN & Society Foundation. Our website is up and running – please take a look to find out more, and if you would like to make a donation just click on the link. Thank you in advance for your support.

Time in Powers of Ten: Natural Phenomena and Their Timescales

By Gerard ’t Hooft and Stefan Vandoren (translated by Saskia Eisberg-’t Hooft)
World Scientific
Hardback: £31
Paperback: £16
E-book: £12
Also available at the CERN bookshop

With powers of 10, one cannot fail to think of the iconic 1970s film made by Charles and Ray Eames – a journey through the universe departing from a picnic blanket somewhere in Chicago. This book, however, is about time rather than distance scales. And the universe it reveals is one of constant turmoil and evolution. No vast empty wastelands here, where nothing changes across many powers of 10. Journeying across the time scales, we discover a universe teeming with activity at every stage – processing, ticking, cycling, continuously moving, changing, surprising.

Every page brims with the authors’ evident enthusiasm for the workings of the universe, be it the esoteric or the more mundane. I would never have expected to read a book where cosmic microwave background radiation sits side by side with the problems of traffic congestion in the US (time = 10 trillion seconds).

Leaping in powers of 10, the book races through stories of life, the Earth and the solar system, and on to physical processes quadrillions of times the age of the universe itself. The largest and smallest of time scales transport the reader to the strange and fascinating. Just as with distance scales, the very small and the very large are intimately entwined.

There is a gap between the more anecdotal and the more scientific. Record sprint times (time = 10 seconds) and the rhythm of our biological clock (time = 100,000 seconds) are light interludes in contrast with the decay modes of the ηc meson (time = 10 yoctoseconds) and the Lamb shift (time = 1 nanosecond). While this eclecticism is part of the book’s charm, some scientific background is required to enjoy the contents fully.

Where the book fails is in the design. Visually, it is a little dull. With disparate styles of graphic illustration, many taken from Wikipedia, the image quality is not up to that of the text. A clever design could take readers on a visual voyage, adding to the impact of the writing. The story warrants this effort.

It is striking that mysteries exist at every time scale, not only at the extremes – be it the high magnetic field of pulsars (time = 1 second), the explanation of high-temperature superconductors (time = 10 million seconds) or the origin of water on Earth (time = 100 quadrillion seconds). The book reveals the extraordinary complexity of our universe – it is a fascinating journey.

 

Behind the Scenes of the Universe: From the Higgs to Dark Matter

By Gianfranco Bertone
Oxford University Press
Hardback: £19.99
Also available as an e-book, and at the CERN bookshop

With the discovery of a Higgs boson by the ATLAS and CMS experiments, the concept of mass has changed from an intrinsic property of each particle to the result of an interaction between the particles and the omnipresent Higgs field: the stronger that interaction is, the more it slows down the particle, which effectively behaves as if it is massive. This experimental validation of a theoretical idea born 50 years ago is a major achievement in elementary particle physics, and confirms the Standard Model as the cornerstone in our understanding of the universe. However, as is often the case in science, there is more to mass than meets the eye: most of the mass of the universe is currently believed to exist in a form that has, so far, remained hidden from our best detectors.

Gianfranco Bertone seems to have been travelling through the dark side of the universe for quite a while, and I am glad that he has taken the time to write this beautiful account of his journey. The book is easy to read, the scientific observations, puzzles and discussions being interspersed with interesting short annotations from history, art, poetry, etc. Readers should enjoy the non-technical tour through general relativity, gravitational lensing, cosmology, particle physics, etc. In particular, one learns that space–time bends light rays travelling through the universe, and that we can deduce the properties of a lens by studying the images it distorts. At the end of this learning curve we reach the conclusion that “we have a problem”: no matter where we look, and how we look, we always infer the existence of much more mass than we can see. Bertone expresses it poetically: “The cosmic scaffolding that grew the galaxies we live in and keeps them together is made of a form of matter that is unknown to us, and far more abundant in the universe than any form of matter we have touched, seen, or experienced in any way.”

The second half of the book wanders through the efforts devised to identify the nature of dark matter via the direct or indirect detection of dark-matter particles, with the LHC experiments, deep underground detectors, or detectors orbiting the Earth. As more data are collected and interpreted, more regions of the parameter space defining the properties of dark-matter particles are excluded. In a few years, the data accumulated at the LHC and in astroparticle experiments will be such that, for many dark-matter candidates, “we must either discover them or rule them out”. The book is an excellent guide for anyone interested in witnessing that important step in the progress of fundamental physics.

Publishing and the Advancement of Science: From Selfish Genes to Galileo’s Finger

By Michael Rodgers
World Scientific
Hardback: £50
Paperback: £25
E-book: £19

In Publishing and the Advancement of Science, retired science editor Michael Rodgers takes us on an autobiographical tour of the world of science publishing, taking in textbooks, trade paperbacks and popular-science books along the way. The narrative is detailed and chronological: a blow-by-blow account of Rodgers’ career at various publishing houses, with the challenges, differences of opinion and downright arguments that it takes to get a science book to press.

Rodgers was part of the revolution in popular-science publishing that started in the 1970s, and he conveys with palpable excitement the experience of discovering great authors or reading brilliant typescripts for the first time. Readers with an interest in science will recognize such titles as Richard Dawkins’ The Selfish Gene or Peter Atkins’ Physical Chemistry, both of which Rodgers worked on. Frustratingly, he falls short of providing real insight into what makes a popular-science book great. There is a niggling sense of “I know one when I see one”, but a lack of analysis of the writing.

Rodgers’ first job in publishing – as “field editor” for Oxford University Press (OUP), starting in 1969 – had him visiting universities around the UK, commissioning academics to write books. Anecdotes about the inner workings of OUP at the time take the reader back to a charming, pre-web way of working: telephone calls and letters rather than e-mails and attachments, and responding to authors in days rather than minutes. The culture of publishing at the time is conveyed with wry humour. OUP sent memos about the proper use of the semicolon, and had a puzzlingly arcane filing system, which added to the sense of mustiness.

A section on the development of Dawkins’ seminal The Selfish Gene threw up interesting tidbits – altercations about the nature of the gene, and a discussion about what makes a good title – but I was less interested in the analysis of the US market for chemistry textbooks, or such tips as “The best time to publish a mainstream coursebook is in January, to allow maximum time for promotion.”

At times, the level of autobiographical detail dilutes Rodgers’ sense of intellectual excitement about the scientific ideas in his books. The measure of a book’s success in terms of copies sold and years in print makes publishing a commercial rather than intellectual exercise, which to some extent left me disappointed. And although Rodgers worked part-time, went freelance and was made redundant at various points in his career, he seems, apart from a brief section in the epilogue, rather blind to the changes sweeping the publishing industry with the advent of free online content.

Those interested in the world of publishing, with a special interest in science, will find much to like about this book. But although Rodgers provides quirky tidbits about how some famous books came to be, it falls short of telling us what makes them great.

Faraday, Maxwell, and the Electromagnetic Field: How Two Men Revolutionized Physics

By Nancy Forbes and Basil Mahon
Prometheus Books
Hardback: $25.92

The birth of modern physics coincides with the lifespans of Michael Faraday (1791–1867) and James Clerk Maxwell (1831–1879). During these years, electric, magnetic and optical phenomena were unified in a single description by introducing the concept of the field – a word coined by Faraday himself while vividly summarizing an amazing series of observations in his Experimental Researches in Electricity. Faraday – a mathematical illiterate – was the first to intuit that, thanks to the field concept, the foundations of the physical world are imperceptible to our senses. All that we know about these foundations – Maxwell would add – are their mathematical relationships to things that we can feel and touch.

Today, the field concept – both classically and quantum mechanically – is unavoidable, and this recent book by Nancy Forbes and Basil Mahon sheds fresh light on the origins of electromagnetism by scrutinizing the mutual interactions of Victorian scientists living through a period characterized by great social and scientific mobility. Faraday started as a chemist, became an experimental physicist, then later a businessman and even an inspector of lighthouses – an important job at that time. Maxwell began his career as a mathematician, became what we would call today a theoretical physicist, and then founded the Cavendish Laboratory while holding the chair of experimental physics at the University of Cambridge.

The first seven chapters focus on Faraday’s contributions, while the remainder are more directly related to Maxwell and his scientific descendants or, as the authors like to say, the Maxwellians. The reader encounters not only the ideas and original texts of Faraday and Maxwell, but also a series of amazing scientists, such as the chemist Humphry Davy (Faraday’s mentor), as well as an assorted bunch of mathematicians and physicists including James David Forbes (Maxwell’s teacher), John Tyndall, Peter Tait, George Airy, William Thomson (Lord Kelvin) and Oliver Heaviside. All of these names are engraved in the memories of students for contributions sometimes not directly related to electromagnetism, and it is therefore interesting to read the opinions of these leading scientists on the newly born field theory.

The historical account might at first seem a little biased, but it is nonetheless undeniable that the field concept took shape essentially between England and Scotland. The first hints for the unification of magnetic and electric phenomena can be traced back to William Gilbert, who in 1600 described electric and magnetic phenomena in a single treatise called De Magnete. More than 200 years later, the Maxwell equations (together with the Hertz experiment) finally laid to rest the theory of “action at a distance” of André-Marie Ampère and Charles-Augustin de Coulomb.

The last speculative paper written by Faraday (and sent to Maxwell for advice) dealt with the gravitational field itself. Maxwell replied that the gravitational lines of force could “weave a web across the sky” and “guide the stars in their courses”. General relativity was on the doorstep.

CERN Council selects next director-general

At its 173rd closed session on 4 November, CERN Council selected the Italian physicist Fabiola Gianotti as the organization’s next director-general. The appointment will be formalized at the December session of Council, and Gianotti’s mandate will begin on 1 January 2016 and run for a period of five years. She will be the first woman to hold the position of director-general at CERN.

Council rapidly converged in favour of Gianotti. “We were extremely impressed with all three candidates put forward by the search committee,” said Agnieszka Zalewska, the president of Council, on the announcement of the decision. “It was Dr Gianotti’s vision for CERN’s future as a world-leading accelerator laboratory, coupled with her in-depth knowledge of both CERN and the field of experimental particle physics, that led us to this outcome.”

Gianotti received a PhD in experimental particle physics from the University of Milan in 1989, working on the UA2 experiment at CERN for her thesis on supersymmetry. She has been a research physicist in the physics department at CERN since 1994, being involved in detector R&D and construction, software development and data analysis, for example for supersymmetry searches by the ALEPH experiment at the Large Electron–Positron (LEP) collider.

However, it is for her contributions to the ATLAS experiment at the LHC that Gianotti has become particularly well known. She was leader of the ATLAS collaboration from March 2009 to February 2013, covering the period in which the LHC experiments ATLAS and CMS announced the long-awaited discovery of a Higgs boson, recognized by the award of the Nobel Prize to François Englert and Peter Higgs in 2013. Since August 2013, Gianotti has been an honorary professor at the University of Edinburgh.

In their own words

CERN appeared a gigantic enterprise to the young people who started to work for the fledgling organization from 1952 onwards, even before its official foundation in 1954. The adventure is traced here via some recollections recorded in interviews carried out by Marilena Streit-Bianchi for the CERN Archives between 1993 and 1997. These edited extracts cover some of the different evolving facets of the organization from the early 1950s to the late 1970s, and pay tribute to some of those who passed away before the 60th anniversary of the young CERN they describe so vividly. Their enthusiasm and competence brought the organization to the level of excellence that has now become familiar.

A first recruit and the first machine

I was working at Philips Hilversum where I met Professor Bakker, and I became an assistant at the Zeeman Laboratory in Amsterdam. In 1951 I was asked to join his team, which was working on building CERN. I attended the Copenhagen meeting in 1952, when the alternating-gradient principle became known – I believe that it was at this very moment that CERN was born. Although many of us, including myself, did not completely understand it, we immediately believed that an interesting new machine could be built, going from weak focusing to strong focusing. UNESCO provided money for our salaries. It was not much that we earned, but it was a terrific experience to arrive at a place where French and English had to be spoken.

Building the Synchrocyclotron (SC)
The alternating gradient machine was in development, and in the meantime it was decided to build a weak-focusing 600 MeV proton synchrocyclotron. I was asked to look after its set-up. We were a nice small team of 12 people sitting in barracks near the airport, whereas the [Proton Synchrotron] PS team, much larger, was staying at the University of Geneva. The construction of the site started, and it took 3–4 years. The six of us that really built the SC machine were working in a free atmosphere. It could look wild from the outside, but among us [there] was a strong discipline based on trust. We developed the RF system and conceived the tuning fork called the vibrating capacitor, made out of parts of very soft aluminium alloy. I made the design…and it was published in Nuclear Instruments and Methods. It worked well. The high-frequency system did not take too much time, whereas the magnet being 3000 tonnes of steel 5 m [in] diameter was not simple. After the war, all of the big countries wanted to contribute to it [France, Germany, Italy, England]. Each of them had some capability but didn’t have them all, so there was a choice to be made, and it was decided to make it based on technical and not on political grounds.

Frank Krienen 1917–2008.

Frank Krienen was one of the first recruits for CERN’s 600 MeV Synchrocyclotron, in 1952. In the 1960s, he turned to developing particle detectors, in particular wire spark chambers using different types of read-out. Later, he worked on the construction and operation of the electric quadrupoles for the muon storage ring, for the third and last g-2 experiment at CERN in the years 1969–1977, followed by the design and development of the electron-cooling apparatus for CERN’s Initial Cooling Experiment ring. 

Accelerating expertise

I went to Imperial College where Professor Dennis Gabor was, because I wanted to study beyond what university had given me. He had excellent courses in advanced particle dynamics, statistical physics, etc. It was an extremely important year for me, although [his] lectures were not on accelerators. The work I did for myself was on accelerators, and more specifically on linear accelerators, and the reason is simple. In Norway…a small country with few accelerators…the idea came up that perhaps it was possible to make accelerators in…the low-energy field. But in my stay with Gabor it was more the general knowledge I gained that was very much useful later in life.

The Proton Synchrotron (PS) Group
I was involved already a bit with Odd Dahl in 1951, then I became part of CERN full time from summer 1952. At that time, Dahl was leading the PS Group, as it was called in those days, from Bergen, [where] a group was working on the study of possible accelerators for what was called CERN. We were sitting in the home institute. It was an interesting experience – we had to communicate by letters and by travelling. There was no computer connection like now. I have never written so much in my life as I did in the first two years, or travelled for meetings as I did during 1952. I must admit that a very good spirit was struck…we were a good group, elected in a very specific way. Senior people like Dahl, [Wolfgang] Gentner, [John] Cockcroft, [Edoardo] Amaldi, etc, selected very good young people among their collaborators and students. We were enthusiastic, and we had the fortunate happening that the [alternating] gradient principle was invented at the beginning of the study. I have ever since admired Dahl for having the courage to switch the whole activity onto this new principle, only weeks after it was, shall we say, invented. It was a tremendous challenge to study if the energy could go to 30 GeV, instead of the 10 GeV we were talking of before.

The Intersecting Storage Rings (ISR)
I never thought of becoming the project head. I thought my ability beautifully fit the study-preparation phase, and then what I would have considered a more practical person could take over and lead the project. I hoped that he would ask me as deputy. So it was a surprise when I was asked to become the head of the ISR project.

A difficult thing to achieve technologically for the machine was to have an RF system that could do the job. The next most difficult big problem was the vacuum system, because we realized that it was tremendously important for the lifetime of the beam, an essential element for having efficient operation. The vacuum was improved continuously, far beyond what we first thought was necessary. There were many aspects related to vacuum that had to be solved. Indeed, to achieve the vacuum that we needed, most improvements were done after we put the machine into operation. I think for 10 years the vacuum was gradually improved, and improved and improved.

Kjell Johnsen 1921–2007.

Kjell Johnsen joined CERN in 1952, and became a world-leading accelerator expert through his work on the design of the PS. He went on to lead the ISR project, CERN’s first hadron collider and forerunner of the LHC.

Cosmic neutrinos and more: IceCube’s first three years

For the past four years, the IceCube Neutrino Observatory, located at the South Pole, has been collecting data on some of the most violent collisions in the universe. Fulfilling its pre-construction aspirations, the detector has observed astrophysical neutrinos with energies above 60 TeV, at the “magic” 5σ significance. The most energetic neutrino observed had an energy of about 2 PeV (2 × 10¹⁵ eV) – 250 times higher than the beam energy of the LHC.

These neutrinos are just one highlight of IceCube’s broad physics programme, which encompasses searches for astrophysical neutrinos, searches for neutrinos from dark matter, studies of neutrino oscillations, cosmic-ray physics, and searches for supernovae and a variety of exotica. All of these studies take advantage of a unique detector at a unique location: the South Pole.

IceCube observes the Cherenkov light emitted by charged particles produced in neutrino interactions in 1 km³ of transparent Antarctic ice. The detector is the ice itself, and is read out by 5160 optical sensors. Figure 1 shows how the optical sensors are distributed throughout the 1 km³ of ice, 1.5 km beneath the geographic South Pole. They are deployed 17 m apart, on 86 vertical cables or “strings”. Seventy-eight of the strings are spaced horizontally, 125 m apart in a grid of equilateral triangles forming a hexagonal array across an area of a square kilometre. The remaining eight strings form a more densely instrumented sub-array called DeepCore. In DeepCore, most of the sensors are concentrated in the lower 350 m of the detector.
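
These numbers hang together (a quick consistency check of ours, not a figure from the collaboration): 5160 sensors shared among 86 strings is 5160/86 = 60 DOMs per string, and 60 sensors at 17 m spacing instrument roughly 1 km of each string, matching the cubic-kilometre scale of the array.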

Each sensor, or digital optical module (DOM), is like a miniature satellite made up of a 10 inch (25 cm) photomultiplier tube together with data-acquisition and control electronics. These include a custom 300 megasample/s waveform digitizer with 14 bits of dynamic range, plus light sources for calibrations, all consuming a power of less than 5 W. The hardware is protected by a centimetre-thick pressure vessel.

The ice in IceCube formed from compacted snow that fell on Antarctica 100,000 years ago. Its properties vary with depth, with layers reflecting the atmospheric conditions when the snow first fell. Measuring the optical properties of this ice has been one of the major challenges of IceCube, involving custom “dust loggers”, studies with LED “flashers” and cosmic-ray muons. During the past decade, the collaboration has found that the ice is layered, that the layers are not perfectly flat and, most recently, that the light scattering is somewhat anisotropic. Each insight has led to a better understanding of the detector and to smaller systematic uncertainties. Fortunately, advances in computing technology have allowed IceCube’s simulations to keep up, more or less, with the increasingly complex models of light propagation in the ice.

The distributed sensors give IceCube strong pattern-recognition capabilities. The three neutrino flavours – νe, νμ and ντ – each leave different signatures in the detector. Charged-current νμ produce high-energy muons, which leave long tracks. All νe interactions, and all neutral-current interactions, produce hadronic or electromagnetic showers. High-energy ντ produce a characteristic “double-bang” signature – one shower when the ντ interacts and a second when the τ decays. More complex topologies have also been studied, including tracks that start in the detector as well as pairs of parallel tracks.

Despite past doubts, IceCube works and works well. More than 98% of the sensors are fully operational, and another 1% are usable – most of the failures occurred during deployment. The post-deployment attrition rate is a few DOMs per year, so IceCube will be able to operate for as long as required. The “live” times are also impressive – in the range of 99%.

IceCube has excellent reconstruction capabilities. For kilometre-long muon tracks, the angular resolution is better than 0.4°, verified by studying the shadow of the Moon cast by cosmic rays. For high-energy contained events, the angular resolution can reach 15°, and at high energies the visible energy can be determined to better than 15%.

Cosmic neutrinos

The detector’s dynamic range covers from 10 GeV to infinity. The higher the energy of a neutrino, the easier it is to detect. Every six minutes, IceCube records an atmospheric neutrino, from the decay of pions, kaons and heavier particles produced in cosmic-ray air showers. These 100,000 neutrinos collected every year are interesting in their own right, but they are also the background to any search for cosmic neutrinos. On top of this, the detector records about 3000 atmospheric muons every second. This is a painful background for neutrino searches, but a gold mine for cosmic-ray physics.
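
The quoted rates are easy to verify (a back-of-envelope check on our part): one neutrino every six minutes is 10 per hour, or 8760 × 10 ≈ 90,000 per year, consistent with the 100,000 quoted; and 3000 muons per second over a year of live time is about 3000 × 3.2 × 10⁷ s ≈ 10¹¹ events, the scale of the cosmic-ray sample mentioned below.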

Although IceCube has an extremely rich physics programme, the centrepiece is clearly the search for cosmic neutrinos. Many signatures have been proposed for these neutrinos: point sources, a high-energy diffuse flux, identified ντ, and others. IceCube has looked for all of them.

Point-source searches are the simplest strategy conceptually – just create a sky map showing the arrival directions of all of the detected neutrinos. Figure 2 shows the IceCube sky map containing 400,000 events gathered across four years (Aartsen et al. 2014c). In the southern hemisphere, the large background of downgoing muons is only partially counteracted by selecting high-energy muons, which are less likely to be of atmospheric origin. The 177,544 events in the northern-hemisphere sample are mostly from νμ. So far, there is no statistically significant evidence for any hot spots, even in searches for spatially extended sources. IceCube has also looked for variable sources, whether episodic or periodic, with similar results. These limits constrain theoretical models, especially those involving gamma-ray bursts.

If there are enough weak sources in the cosmos, they should be visible as an aggregate, diffuse flux. This diffuse flux is expected to have a harder energy spectrum than do atmospheric neutrinos. Calculations have indicated that IceCube would be more sensitive to this diffuse flux than to point sources, which is indeed the case. Several early searches, using the partially completed detector, turned up intriguing hints of an excess over the expected atmospheric neutrino flux. Then the search diverged from the anticipated script.

One of the first searches for diffuse neutrinos with the complete detector looked for ultra-high-energy cosmogenic neutrinos – neutrinos produced when ultra-high-energy cosmic-ray protons (E > 4 × 10¹⁹ eV) interact with photons of around 10⁻⁴ eV in the cosmic microwave background, exciting the protons to a Δ⁺ resonance. The decay products of the pion produced in the Δ⁺ decay include a neutrino with a typical energy of 10¹⁸ eV (1 EeV). The search found two spectacular events, one of which is shown in figure 3. Both events were well contained within the detector – clearly neutrinos. Both had energies around 1 PeV – spectacular, but too low to be produced by cosmic rays interacting with CMB photons. Such events were completely unexpected.
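
The energy scale follows from simple kinematics (a back-of-envelope estimate of ours, not a number from the search). For a head-on collision with a CMB photon of energy Eγ, a proton reaches the Δ⁺ resonance when

Ep ≈ (mΔ² − mp²)/(4Eγ) = (1.232² − 0.938²) GeV²/(4 × 6 × 10⁻¹³ GeV) ≈ 3 × 10²⁰ eV,

taking a typical CMB photon energy of about 6 × 10⁻⁴ eV. The neutrino from the subsequent π⁺ decay chain carries only a few per cent of the proton energy, which lands it in the EeV range quoted above.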

Inspired by these events, the IceCube collaboration instigated a follow-up search that used two powerful techniques (Aartsen et al. 2013). The first was a filter to identify neutrino interactions that originate inside the detector, as distinct from events originating outside it. The filter divides the instrumented volume into an outer-veto shield and a 420 megatonne inner active volume. Figure 4 shows how this veto works: by rejecting events with significant in-time energy deposition in the veto region, neutrino interactions within the detector’s fiducial volume can be separated from backgrounds. For neutrinos that are contained within the instrumented volume of ice, the detector functions as a total absorption calorimeter, measuring energy with 15% resolution. It is flavour-blind, equally sensitive to hadronic or electromagnetic showers and to muon tracks. This veto analysis also used a “tagging” approach to estimate the atmospheric-muon background using the data, rather than relying on simulations. Because of the veto, the analysis could observe neutrinos from all directions in the sky.
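
Expressed as code, the veto reduces to a simple selection. The sketch below is purely illustrative (it is not IceCube software; the event structure, field names and thresholds are invented for the example):

    # Containment-veto sketch: keep an event only if the in-time light seen in
    # the outer veto layer is negligible, i.e. the event starts inside the
    # fiducial volume rather than entering from outside.
    def is_contained(hits, veto_charge_limit_pe=3.0, window_ns=2000.0):
        """hits: iterable of (time_ns, charge_pe, in_veto_layer) for one event."""
        hits = list(hits)
        if not hits:
            return False
        t0 = min(t for t, _, _ in hits)  # time of the earliest recorded light
        veto_charge = sum(q for t, q, in_veto in hits
                          if in_veto and t - t0 <= window_ns)
        return veto_charge < veto_charge_limit_pe

    # A neutrino interacting inside the fiducial volume leaves almost no
    # early light in the veto layer, so it passes the selection:
    event = [(100.0, 1.2, False), (180.0, 8.5, False), (900.0, 0.4, True)]
    print(is_contained(event))  # True

The real analysis is of course more elaborate – it weights charge and timing, and, as described above, estimates the remaining muon background from the data – but the logic of “little early light in the shield” is the core of it.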

The second innovation was to take advantage of the fact that downgoing atmospheric neutrinos should be accompanied by a cosmic-ray air shower depositing one or more muons inside IceCube. In contrast, cosmic neutrinos should be unaccompanied. A very high-energy, isolated downgoing neutrino is highly likely to be cosmic.

The follow-up search found 26 additional events. Although no new events had an energy near 1 PeV, the analysis produced evidence for cosmic neutrinos at the 4σ level. To clinch the case, the collaboration added a third year of data, pushing the significance above the “magic” 5σ level (Aartsen et al. 2014a). One of the new events had an energy above 2 PeV, making it the most energetic neutrino ever seen.

The observation of a flux of cosmic neutrinos was soon confirmed by the independent and more traditional analysis recording the diffuse flux of muon neutrinos penetrating the Earth. Both observations are consistent with a diffuse flux composed equally of the three neutrino flavours. No statistically significant hot spots were seen. The observed flux is consistent with that expected from cosmic accelerators producing equal energies in gamma rays, neutrinos and, possibly, cosmic rays.

Newer studies are shedding more light on these events, extending contained-event studies down to lower energies and adding flavour identification. At energies above 10 TeV, the astrophysical neutrino flux can be fit by a single power-law spectrum that is significantly harder than the background cosmic-ray muon spectrum:
φν(Eν) = 2.06 (+0.4/−0.3) × 10⁻¹⁸ (Eν/100 TeV)^(−2.46 ± 0.12) GeV⁻¹ cm⁻² sr⁻¹ s⁻¹ (Aartsen et al. 2014d).
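
To get a feeling for how steeply this spectrum falls, the central values can be evaluated directly (an illustrative Python snippet of ours; the normalization and index are read off the fit above):

    # Best-fit per-flavour astrophysical neutrino flux, central values only.
    PHI_100TEV = 2.06e-18  # GeV^-1 cm^-2 sr^-1 s^-1 at E = 100 TeV
    GAMMA = 2.46           # best-fit spectral index

    def astro_flux(e_gev):
        """dN/dE in GeV^-1 cm^-2 sr^-1 s^-1 at neutrino energy e_gev (GeV)."""
        return PHI_100TEV * (e_gev / 1.0e5) ** (-GAMMA)

    for e_gev in (1.0e4, 1.0e5, 1.0e6):  # 10 TeV, 100 TeV, 1 PeV
        print(f"E = {e_gev:.0e} GeV -> {astro_flux(e_gev):.2e} per GeV cm^2 sr s")

Between 10 TeV and 1 PeV the flux drops by about five orders of magnitude, which is why the handful of PeV events carries so much statistical weight.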

Within the limited statistics, the flux appears isotropic and consistent with the νe:νμ:ντ ratio of 1:1:1 that is expected for cosmic neutrinos. The majority of the events appear to be extragalactic. Some might originate in the Galaxy, but there is no compelling statistical evidence for that at this point.

Many explanations have been proposed for the IceCube observations, ranging from the relativistic particle jets emitted by active galactic nuclei, to gamma-ray bursts, starburst galaxies and magnetars. IceCube’s dedicated searches do, however, disfavour gamma-ray bursts as the source. A spectral index of −2 (dNν/dE ~ E⁻²), predicted by Fermi shock-acceleration models, is also disfavoured, but many other scenarios are possible. Of course, the answer is clear: more data are needed.

Other physics

The 100,000 neutrinos and 85 × 10⁹ cosmic-ray events recorded each year provide ample opportunities to search for dark matter and to study cosmic rays as well as neutrinos themselves. IceCube has measured the cosmic-ray spectrum and composition, and observed anisotropies in the cosmic-ray arrival directions at the 10⁻⁴ level that have thus far defied explanation. It has also studied atypical events, such as the muon-free showers expected from photons with peta-electron-volt energies produced in the Galaxy, and investigated isolated muons produced in air showers. The separation spectrum of the latter shifts from an exponential decrease to a power law, as predicted by perturbative QCD.

IceCube observes atmospheric neutrinos across an energy range from 10 GeV to 100 TeV – at higher energies, the atmospheric flux is swamped by the flux of cosmic neutrinos. As figure 5 shows, the flux is consistent with expectations across a large energy range. Lower-energy neutrinos are of particular interest because they are sensitive to neutrino oscillations. For neutrinos passing vertically through the Earth, the νμ flux develops a first minimum at 28 GeV.
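
The 28 GeV figure can be checked with the standard two-flavour vacuum-oscillation formula (a rough estimate of ours; the full treatment includes matter effects):

P(νμ → νμ) ≈ 1 − sin²(2θ₂₃) sin²(1.27 Δm²₃₂[eV²] L[km]/E[GeV]).

The first minimum occurs where the argument of the last sine equals π/2. With Δm²₃₂ ≈ 2.4 × 10⁻³ eV² and L ≈ 12,700 km for vertical passage through the Earth, this gives E ≈ 25 GeV, in line with the quoted value.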

Figure 6 shows the observed νμ flux, seen in one year of data, using well-reconstructed events contained within DeepCore. The change in flux with distance travelled/energy (L/E) is consistent with neutrino oscillations and inconsistent with a no-oscillation scenario. IceCube constraints on the mixing angle θ₂₃ and on |Δm²₃₂| are comparable to constraints from other experiments.

IceCube has also searched for neutrinos from dark-matter annihilation. Dark matter can be captured gravitationally by the Earth or the Sun, or accumulate in the centre or halo of the Galaxy. The accumulated dark-matter particles then annihilate, producing neutrinos. IceCube has searched for signatures of this annihilation, and has set limits. The Sun is a particularly interesting target, producing a characteristic dark-matter signature that cannot be explained by any astrophysical scenario. Because the Sun is made mostly of protons, IceCube can also set the world’s best limits on the spin-dependent cross-section for the interaction of dark-matter particles with ordinary matter.

The collaboration has also looked for even more exotic signatures, such as magnetic monopoles and pairs of upgoing particles. One particularly spectacular and interesting signature could come from the next supernova in the Galaxy. These explosions produce a blast of neutrinos with 10–50 MeV energy. This energy level is far too low to trigger IceCube directly, but the neutrinos would be visible as a collective increase in the singles rate in the buried IceCube photomultipliers. Moreover, IceCube has a huge effective area, which will allow measurements of the time structure of the supernova-neutrino pulse with millisecond precision.

IceCube is still a novel instrument, unlikely to have exhausted its discovery potential. However, at high energies, it might not be big enough. Doing neutrino astronomy could require samples of 1000 or more high-energy neutrino events. In addition, some key physics questions require a detector with a lower energy threshold. These two considerations are driving two different upgrade projects.

DeepCore has demonstrated that IceCube is capable of making precise measurements of neutrino-oscillation parameters. If precision studies can be extended to neutrino energies below 10 GeV, it will be possible to determine the neutrino-mass hierarchy. Neutrinos passing through the Earth interact coherently with matter electrons, modifying the oscillation pattern in a way that differs for normal and inverted hierarchies. In addition to a threshold of a few giga-electron-volts, this measurement requires improved control of systematic uncertainties. An expanded collaboration has come together to pursue the construction of a high-density infill array called Precision In Ice Next-Generation Upgrade, or PINGU (Aartsen et al. 2014b). The present design consists of 40 additional high-sensitivity strings equipped with improved calibration devices. PINGU should be able to determine the mass hierarchy with 3σ significance within about three years, independent of the value of the CP-violation phase.

The IceCube high-energy extension (IceCube-gen2) aims for a detector with a 10-times-larger instrumented volume, albeit with a higher energy threshold. It will explore the observed cosmic neutrino flux and pin down its origin. With a sample of more than 100 cosmic neutrinos per year, it will be possible to observe multiple neutrinos from the same sources, and so do astronomy. The instrument will also have an improved sensitivity to study the ultra-high-energy neutrinos produced in the interactions of cosmic rays with microwave photons.

Of course, IceCube is not the only collaboration studying high-energy neutrinos. Projects on the cubic-kilometre scale are also being prepared in the Mediterranean Sea (KM3NeT) and in Lake Baikal (GVD), with a field of view complementary to that of IceCube. Within KM3NeT, ORCA, a proposed low-threshold detector, would pursue the same physics as PINGU. And the radio-detection experiments ANITA, ARA, GNO and ARIANNA are beginning to explore the neutrino sky at energies above 1017 eV.

After a decade of construction, the completed IceCube detector came on line in December 2010. It has achieved the outstanding goal of observing cosmic neutrinos and has produced important results in diverse areas: cosmic-ray physics, dark-matter searches and neutrino oscillations, not to mention its contributions to glaciology and solar physics. The observation of cosmic neutrinos at the peta-electron-volt energy scale has attracted enormous attention, with many suggestions about the location of the requisite cosmic accelerators.

Looking ahead, IceCube anticipates two important extensions: PINGU, which will determine the neutrino-mass hierarchy, and IceCube-gen2, which will expand a discovery instrument into an astronomical telescope.

ICTP: theorists in the developing world

Fernando Quevedo

Fernando Quevedo, director of ICTP since 2009, came to CERN in September to take part in the colloquium “From physics to daily life”, organized for the launch of two books of the same name, to which he is one of the contributors. His participation in such an initiative is not just a fortunate coincidence, but testimony to his willingness to explain the prominent role that theoretical and fundamental physics play in human development. “Theory is the driving force behind the creation of a culture of science, and this is of paramount importance to developing societies,” he explains. “Abdus Salam founded the ICTP because he believed in this strong potential, which comes at a very low cost to the countries that cannot afford expensive experimental infrastructures.”

Unfortunately, theorists are not usually credited properly for their contributions to the development of society. “The reason is that a lot of time separates the theoretical advancement from the practical application,” says Quevedo. “People and policy makers at some point stop seeing the link, and do not see the primary origin of it anymore.” However, although these links are often lost in the complicated ripples of history, it is often the case that when people are asked to recall names of famous scientists, most likely they are theorists. Examples include Albert Einstein, Richard Feynman, James Clerk Maxwell and, of course, Stephen Hawking. More importantly, theories such as quantum mechanics or relativity have changed not just the way that scientists understand the universe but also, years later, everyday life, with applications that range from lasers and global-positioning systems to quantum computation. For Quevedo, “The example I like best is Dirac’s story. He was a purist. He wanted to see the beauty in the mathematical equations. He predicted the existence of antimatter because it came out of his equations. Today, we use positrons – the first antimatter particle predicted by Dirac – in PET scanners, but people never go back to remember his contribution.”

Theorists often have an impact that is difficult to predict, even by their fellow colleagues. “When I was a student in Texas,” recalls Quevedo, “we were studying supersymmetry and string theory for high-energy physics, and we saw that some colleagues were working on even more theoretical subjects. At that time, we thought that they were not on the right track because they were trying to develop a new interpretation of quantum mechanics. Two decades later, some of those people had become the leaders of quantum-information theory and had given birth to quantum computing. Today, this field is booming!” Perhaps surprisingly, there is also an extremely practical “application” of string theory: the arXiv project. This online repository of electronic preprints of scientific papers was invented by string theorist Paul Ginsparg. Perhaps this will be the only practical application of string theory.

While Quevedo considers it important to credit the role of the theorists in the development of society and in creating the culture of science, at the same time, he recognizes an equivalent need for the theorists to open their research horizon and accept the challenge of the present time to tackle more applied topics. “Theorists are very versatile scientists,” he says. “They are trained to be problem solvers, and their skills can be applied to a variety of fields, not just physics.” This year, ICTP is launching a new Master’s course in high-performance computing, which will use a new cluster of computers. In line with Quevedo’s thinking, during the first year, the students will be trained in general matters related to computing techniques. Then, during the second year, they will have the opportunity to specialize not only in physics but also in other subjects, including climate change, astrophysics, renewable energy and mathematical modelling.

None of these arguments, however, should be needed to justify support for theoretical physics. Rather, wondering about the universe and its functioning should be a recognized right for anyone. “I come from Guatemala and have the same rights as Americans and Europeans to address the big questions,” confirms Quevedo. “If you are from a poor country, why should you be limited to do agriculture, health, etc? As human beings, we have the right to dream about becoming scientists and understanding the world around us. We have the right to be curious. After all, politicians decide where to put the money, but the person who is spending his/her life on scientific projects is the scientist.”

ICTP has the specific mandate to focus on supporting scientists from developing countries. Across its long history, the institute has proudly welcomed visitors from 188 countries – that is, almost the entire planet. While CERN’s activities are concentrated mainly in developed countries, the activity map of ICTP spreads across all continents more uniformly, including Africa and the whole of Latin America. “Some countries do not have the right level of development for science to get involved in CERN yet. ICTP can play the role of being an intermediate point to attract the participation of scientists from the least developed countries to then get involved with CERN’s projects,” Quevedo comments.

Quevedo’s relationship with CERN goes beyond his role as ICTP’s director. CERN was his first employer when he was a young postdoc, coming from the University of Texas. He still comes to CERN every year, and thinks of it not only as a model but, more importantly, as a “home away from home” for any scientist. Like two friends, CERN and ICTP have a variety of projects that they are developing together. “CERN’s director-general, Rolf Heuer, and myself recently signed a new memorandum of understanding,” he explains. “ICTP scientists collaborate directly in the ATLAS computing working groups. With CERN we are also involved in the EPLANET project (CERN Courier June 2014 p58), and in the organization of the African School of Physics (CERN Courier November 2014 p37). More recently, we are developing new collaborations in teacher training and the field of medical physics.”

Does Quevedo have a dream about the future of CERN? “Yes, I would like to see more Africans, Asians and Latin Americans here,” he says. “Imagine a more coloured cafeteria, with people really coming from all corners of the planet. This could be the CERN of the future.”

ICTP’s 50th anniversary

In June 1960, the Department of Physics at the University of Trieste organized a seminar on elementary particle physics in the Castelletto in Miramare Park. The notion of creating an institute of theoretical physics open to scientists from around the world was discussed at that meeting. That proposal became a reality in Trieste in 1964. Pakistani-born physicist Abdus Salam, who spearheaded the drive for the creation of ICTP by working through the International Atomic Energy Agency, became the centre’s director, and Paolo Budinich, who worked tirelessly to bring the centre to Trieste, became ICTP’s deputy director.

From 6 to 9 October this year, ICTP celebrated its 50 years of success in international scientific co-operation, and the promotion of scientific excellence in the developing world. More than 250 distinguished scientists, ministers and others attended the anniversary celebration. In parallel, the programme included exhibitions, lectures and special initiatives for schools and the general public.

• For the whole programme of events with photos and videos, visit www.ictp.it/ictp-50th-anniversary.aspx.

 

Cosmic particles meet the LHC at ISVHECRI

In August this year, CERN hosted the International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI), the 18th meeting in the series that started in 1980 in Nakhodka, Russia, and is supported by the International Union of Pure and Applied Physics. In the early years, the symposia focused mainly on studying hadronic interactions of cosmic rays in the atmosphere and in emulsion chambers, which were the main cosmic-ray detectors at the time. The scope of the series has since widened, and it has become a forum for scientists from both the cosmic-ray and high-energy physics communities to discuss hadronic interactions as a common research subject of the two fields.

At this year’s symposium, which was organized jointly by high-energy and cosmic-ray physicists – Albert de Roeck, Michelangelo Mangano and Bryan Pattison of CERN, and David Berge of NIKHEF – the participants focused on the latest data on hadron production from CERN’s LHC, and the implications for interpreting cosmic-ray measurements. The LHC is the first collider to provide data at an equivalent proton–nucleon energy that exceeds that of the so-called “knee” – the observed change in cosmic-ray flux at 3 × 10¹⁵ eV, which is still to be explained. A series of review talks provided a comprehensive, cross-experiment overview of the latest LHC data, ranging from dedicated measurements of hadron production in the forward direction to a multitude of minimum-bias measurements in proton–proton and heavy-ion collisions. In addition, presentations showed how the forward measurements made at the HERA electron–proton collider at DESY have proved to be very useful for cosmic-ray studies. These reviews were complemented by an evening lecture on Higgs physics by John Ellis of King’s College London.
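
The collider-to-cosmic-ray comparison rests on a standard kinematic conversion (our illustration, assuming the LHC’s 8 TeV proton–proton running). A collider with centre-of-mass energy √s probes the same proton–nucleon collisions as a cosmic ray striking a stationary nucleon with laboratory energy

Elab = s/(2mp) = (8 TeV)²/(2 × 0.938 GeV) ≈ 3.4 × 10⁷ GeV ≈ 3.4 × 10¹⁶ eV,

an order of magnitude above the knee at 3 × 10¹⁵ eV.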

Tanguy Pierog of Karlsruhe Institute of Technology (KIT) and CERN’s Peter Skands reviewed the different approaches chosen for developing hadronic-interaction models for applications in cosmic-ray and high-energy physics. Even though the predictions of such models that were developed for cosmic-ray interactions turned out to cover the first LHC data rather well, some retuning was necessary, both to improve the description of the measurements at the LHC and to obtain more reliable high-energy extrapolations. The predictions of the models show an increasing convergence after such tuning, and lead to a more consistent description of air-shower data.

However, even the latest generation of interaction models does not solve the discrepancies found for the production of muons in extensive air showers at very high energy. A discrepancy in the number of muons at giga-electron-volt energies is seen, for example, in the data from the Pierre Auger Observatory on inclined showers whose electromagnetic component is absorbed in the atmosphere before reaching the detectors at the Earth’s surface (figure 1). Furthermore, data from the KASCADE-Grande experiment presented by Juan Carlos Arteaga of Universidad Michoacana, Morelia, indicate a much weaker attenuation of the muonic-shower component than expected from simulations. KIT’s Ralf Ulrich pointed out that, in contrast to the electromagnetic-shower profile, which depends on neutral-pion production in high-energy interactions only, both high- and low-energy interactions are important for understanding the production of muons in air showers. Therefore, measurements from fixed-target experiments such as NA61/SHINE at CERN and the Main Injector Particle Production experiment at Fermilab, which Boris Popov of JINR reviewed, are also important for obtaining a better understanding of muon production in air showers. Alternative scenarios for enhancing this muon production, involving extensions of the Standard Model, were discussed by Glennys Farrar of New York University.

Many talks at the symposium illustrated the importance of multimessenger observations in astroparticle physics, for understanding not only the sources and the mass composition of cosmic rays but also a plethora of astrophysical phenomena. Examples are the review by Eli Waxman of the Weizmann Institute on different cosmic-particle accelerators, and the discussion of the propagation of ultra-high-energy cosmic rays by Andrew Taylor of the Dublin Institute for Advanced Studies.

One highlight of the meeting was the discussion of high-energy neutrinos from astrophysical sources recently detected by IceCube (figure 2). Kota Murase of the Institute for Advanced Study, Princeton, reviewed different theoretical scenarios for the production of neutrinos in the tera- to peta-electron-volt energy range (10¹²–10¹⁵ eV). Tom Gaisser of the University of Delaware summarized the knowledge on neutrinos produced in interactions of cosmic rays in the atmosphere, which constitute the dominant background of non-astrophysical origin in the IceCube data. At peta-electron-volt and higher neutrino energies, the atmospheric lepton flux is dominated by the decay of charm particles, and LHC measurements on the production of heavy flavours are the only experimental data that reach the equivalent relevant energies. Given the limited acceptance in the forward direction at the LHC, QCD calculations and models are still of central importance for understanding high-energy neutrino production, as Victor Gonzalez of Universidade Federal de Pelotas and others discussed. Similarly, as Ina Sarcevic of the University of Arizona pointed out, calculating the interaction cross-section of neutrinos of energies up to 10¹⁹ eV is a challenge in perturbative QCD because of the need for parton densities at very low x. Anna Stasto of Penn State presented different theoretical approaches to understanding low-x QCD phenomena, concluding that there is no multipurpose framework of general applicability.

The remaining uncertainties in predicting hadron production in high-energy interactions were one of the central questions discussed at the meeting, and highlighted by Paolo Lipari of INFN/Roma in his concluding remarks. There was general agreement that, in addition to ongoing theoretical and experimental efforts, the measurement of particle production in LHC collisions of protons with light nuclei, for example oxygen, would be the next step needed to reduce the uncertainties further.
