Deciphering the Cosmic Number: The Strange Friendship of Wolfgang Pauli and Carl Jung, by Arthur I Miller, W W Norton. Hardback ISBN 9780393065329, £18.99 ($27.95). Paperback, published as 137: Jung, Pauli, and the Pursuit of a Scientific Obsession. ISBN 9780393338645, £11.99 ($16.95).
Do you think there is a meaning beyond numbers? Do they carry any special significance? Are some more powerful than others? Many great minds through the centuries have exercised themselves over these questions. In his latest book, the distinguished historian of science Arthur I Miller investigates one possible answer through the unique blend of two extraordinary lives, those of Carl Jung and Wolfgang Pauli.
The book tells the story of the fruitful friendship between two of the greatest thinkers of our times, both obsessed with the power of certain numbers. The two personalities are central to the narrative, and the author tells their story with a wealth of interesting detail and enough humour to hold the reader's attention. We sometimes encounter complex physics along the way, but Miller expertly translates it into terms that novices can understand.
Alongside an accurate account of their enormous and lasting contributions to their respective fields, such as Pauli's hypothesis of the neutrino in physics and Jung's theory of a collective unconscious in psychoanalysis, we find indeed "the" number: 137. This pure number, essentially the inverse of the fine-structure constant, may appear harmless and meaningless to a layman, yet it was the "step toward the great goal of finding a theory that would unite the domains of relativity and quantum theory, the large and the small, the macrocosm and the microcosm". But it is not only that. Through the unfolding of dreams, mandalas, archetypes and symbols, this number turns out to be the golden gate between the rational and the emotional, creativity and intelligence, science and belief. The tale provides a window, across time and space, into moments of genius.
Deciphering the Cosmic Number is a revelation of something beyond intuition, one that compels us to share in the torment of those whose lives are marked by the quest to answer questions that transcend centuries and ages. As if through a magnifying glass, it describes the lives of two human beings who achieved so much in their fields through a "strange friendship" during the difficult period of the Second World War.
The Chamonix workshop, held on 25–29 January, once again proved its worth as a place where all of the stakeholders in the LHC can come together, take difficult decisions and reach a consensus on important issues. This time the most important decision was to run the LHC for 18 to 24 months at a collision energy of 7 TeV (3.5 TeV per beam) before a long shutdown, which will allow time for all of the work necessary for the machine to reach the design collision energy of 14 TeV. The return of beam to the LHC this February marks the start of the longest phase of accelerator operation in CERN's history, running into summer or autumn 2011.
What is the reasoning behind this decision? First, the LHC is a cryogenic facility, so each run is accompanied by lengthy cool-down and warm-up phases. Second, there is still essential work to be done to prepare the LHC for running at energies significantly higher than the collision energy of 7 TeV chosen for the first physics run. These facts led to a simple choice: run for a few months now and programme successive short shutdowns to step up in energy, or run for a long time now and schedule a single long shutdown before moving to the design collision energy of 14 TeV (7 TeV per beam). A long run gives the machine teams time to prepare carefully for the work that will be needed before running at 14 TeV. For the experiments, 18 to 24 months will bring enough data across all of the potential discovery areas.
Before the 2009 running period began, all of the necessary preparations had been carried out to run the LHC at 1.18 TeV per beam. The goal of the technical stop, scheduled to end in mid-February, was to prepare the machine for running at 3.5 TeV per beam, which requires a current of 6 kA in the LHC magnets.
The main work during the stop was on the new quench-protection system (nQPS), which is designed to improve the electrical reliability of the connection between the instrumentation feedthrough systems on the magnets and the nQPS equipment. There are around 500 of these connectors for each of the eight sectors in the LHC. An intensive effort ensured that this work was undertaken and completed in the first three weeks of January, so that the hardware-commissioning teams could proceed with testing the magnets up to 6 kA.
Several other teams took advantage of the stop to carry out other technical verifications and efficiency tests, for example, on some vacuum pumping units, the kicker system, the oxygen-deficiency hazard detectors, and on some ventilation components. At the same time as this work on the LHC, repairs took place on the water-cooling system of the CMS experiment.
All work, both in the LHC and in the CMS experiment, was scheduled to be completed by mid-February. The machine operations team will then begin to re-commission the LHC at 450 GeV per beam, building on the experience gained after the restart last year and completing investigations of machine parameters at this energy (CERN Courier January/February 2010 p24). The team will then prepare for the first ramps to 3.5 TeV per beam. Collisions at 3.5 TeV will follow, but only after the operators have established the appropriate running conditions.
Almost a decade after the experiments at CERN's Large Electron–Positron (LEP) collider set a lower limit of 114.4 GeV/c² on the mass of the Higgs boson, the two experiments at Fermilab's Tevatron, CDF and DØ, have been able to reduce further the allowed mass range for the missing particle in their first joint Run II publication.
In the proton–antiproton collisions observed at the Tevatron, the Higgs boson could be produced in the fusion of two gluons. If its mass is more than 140 GeV/c², it will usually decay into a pair of W bosons. The decay of the W bosons into a charged lepton (electron or muon) and a neutrino leads to three different signatures in the detectors: two electrons, two muons or an electron and a muon, in addition to "missing energy" from the undetected neutrinos. This is the key to reducing the background from the jets, which are copiously produced in hadronic collisions.
The Higgs boson is a scalar particle, i.e. it carries no spin. This fundamental property helps to distinguish the decays of the Higgs to two W bosons from other events that contain pairs of W bosons. The two charged leptons from the W boson decays in Higgs events are more likely to be close together than back-to-back in the detector. As a final step in seeking the Higgs, artificial neural networks are trained to distinguish a Higgs signal from background using a large number of kinematic variables.
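As a rough illustration of this final step (not the procedure actually used by CDF and DØ, which employs many more input variables, full simulation and data-driven background estimates), the sketch below trains a small neural network in Python on two entirely synthetic kinematic variables, a dilepton opening angle and a missing transverse energy; every number in it is invented for illustration.

```python
# Toy sketch only: synthetic distributions standing in for Higgs -> WW "signal"
# (small dilepton opening angle, as the text explains for a spin-0 parent)
# and generic "background" (leptons closer to back-to-back).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
# columns: [dilepton opening angle in radians, missing transverse energy in GeV]
sig = np.column_stack([rng.normal(1.0, 0.5, n).clip(0, np.pi),
                       rng.normal(45.0, 15.0, n)])
bkg = np.column_stack([rng.normal(2.4, 0.5, n).clip(0, np.pi),
                       rng.normal(30.0, 15.0, n)])
X = np.vstack([sig, bkg])
y = np.concatenate([np.ones(n), np.zeros(n)])

# A small network whose output can serve as a single discriminant variable,
# in the spirit of (but far simpler than) the real analyses.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500).fit(X, y)
print("classification accuracy on the toy sample:", clf.score(X, y))
```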
Both Tevatron experiments have their best sensitivity at a Higgs mass of about 165 GeV/c², i.e. just around the combined mass of the two W bosons. With about 5 fb⁻¹ of collision data analysed, each experiment alone does not yet have sensitivity to exclude a Higgs boson if it is produced at the rate predicted by the Standard Model. Putting their data together, CDF and DØ can double the number of collisions used, breaking the "Standard Model barrier" for the first time since LEP.
Together, the experiments would expect about 70 Higgs events for a mass of around 165 GeV/c² but their combined data are consistent with the assumption that no Higgs events have been produced. This observation is translated into a limit that excludes a Higgs boson in the mass range 162–166 GeV/c² at the 95% confidence level.
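To illustrate how an expected signal of about 70 events can translate into a 95% confidence-level exclusion, the sketch below uses a simplified single-bin Poisson counting argument. The background and observed counts are hypothetical placeholders; the real combination uses many channels, neural-network output distributions and a full treatment of systematic uncertainties, so this shows the logic only.

```python
# Toy single-bin counting experiment (not the CDF/DO statistical procedure).
from scipy.stats import poisson

b = 1000.0   # assumed background expectation (hypothetical)
s = 70.0     # expected Standard Model Higgs signal near 165 GeV/c^2 (from the text)
n_obs = 1000 # assumed observed count, consistent with background only (hypothetical)

# Probability of observing n_obs or fewer events if signal plus background were present.
p_sb = poisson.cdf(n_obs, b + s)
print(f"p-value for the signal-plus-background hypothesis: {p_sb:.3f}")
if p_sb < 0.05:
    print("-> a Higgs boson at this mass would be excluded at the 95% confidence level")
```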
The paper describing the combination is the first joint publication of the two collaborations using data from Run II of the Tevatron, which started in 2001. The publication, with 1042 authors, will appear in Physical Review Letters together with the individual results in separate letters. The data used represent about half the number of collisions that will eventually be recorded by CDF and DØ. This will give them the opportunity to increase significantly the sensitivity of Higgs searches in the future.
Together with the precision electroweak data that favour a low-mass Higgs boson, these new results indicate that the most likely mass for the Higgs boson – if it exists – is somewhere between the LEP and Tevatron limits of 114 and 162 GeV/c².
PPA09, a workshop held at CERN on proton-driven plasma wakefield acceleration, has launched discussions about a first demonstration experiment using a proton beam. Steve Myers, CERN’s director for Accelerators and Technology, opened the event and described its underlying motivation. Reaching higher-energy collisions for future particle-physics experiments beyond the LHC requires a novel accelerator technology, and “shooting a high-energy proton beam into a plasma” could be a promising first step. The workshop, which brought together participants from Germany, Russia, Switzerland, the UK and the US, was supported by the EuCARD AccNet accelerator-science network.
Plasmas, which are gases of free ions and electrons, can support large electric fields – a property that can be exploited to accelerate particles to relativistic energies over much shorter distances than is possible with current technologies. Past research has focused on creating large-amplitude plasma waves by injecting a short, intense laser pulse or an electron bunch into the plasma. Indeed, accelerating gradients up to 100 GV/m have been established over a centimetre with laser excitation and up to 50 GV/m over a metre with a short electron bunch as driver.
A recent proposal is to excite the plasma through a more energetic proton bunch. The maximum energy gain of electrons accelerated in a single plasma wake is limited to roughly twice the energy of the particles in the driving bunch. Given that protons can be accelerated to tera-electron-volt energies in conventional accelerators, it should be possible to accelerate electron bunches in the wake of a proton driving-bunch to energies up to the tera-electron-volt regime in one pass through the plasma.
The plasma wake produced by a 1 TeV proton bunch has already been investigated in computer simulations (Caldwell et al. 2009). The simulated electric fields are a factor of 100 higher than those considered for the International Linear Collider and could lead to the acceleration of a bunch of electrons to several hundred giga-electron-volts within a few hundred metres, starting from a short 1 TeV proton bunch as driver.
So far there have been no beam tests with proton-driven plasmas. The primary goal of the PPA09 workshop was, therefore, to start the discussion on a pioneering experiment – using a proton beam from CERN’s Proton Synchrotron or Super Proton Synchrotron to demonstrate the generation of strong wakefields by a proton bunch. The preparation of a letter of intent for such experimentation at CERN was discussed in the workshop. One of the questions left open is the method for generating the required long, dense plasma. The workshop identified two options, which are now being pursued in parallel.
The workshop concluded that a first round of beam measurements, possibly in 2012, would search for modulations of a long proton bunch (rms bunch length around 15 cm). This effect is predicted by particle-in-cell simulations and its observation would provide an excellent benchmarking test. The goals for subsequent rounds of experimentation would include generating stronger electric fields in the plasmas by first longitudinally compressing or otherwise pre-modulating the proton bunch, and eventually, in 2014 or later, demonstrating the acceleration of an electron bunch in the wake of the proton bunch.
Chan Joshi, from the University of California, Los Angeles, and one of the prominent researchers participating in PPA09, defined the medium-term goal of a CERN proton-driven plasma wakefield experiment as the demonstration of 1 GeV of proton-driven acceleration in less than 5 m; its ultimate goal would be to accomplish 100 GeV of acceleration over a distance of 100 m.
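A back-of-envelope conversion of these goals into average accelerating gradients (energy gain divided by length) helps put them alongside the demonstrated gradients quoted earlier in this article; the sketch below simply restates the numbers given in the text.

```python
# Average accelerating gradient in GV/m: energy gain (GeV) divided by length (m).
def avg_gradient_gv_per_m(energy_gain_gev, length_m):
    return energy_gain_gev / length_m

print(avg_gradient_gv_per_m(1.0, 5.0))     # medium-term goal: 1 GeV in <5 m  -> ~0.2 GV/m
print(avg_gradient_gv_per_m(100.0, 100.0)) # ultimate goal: 100 GeV in 100 m -> 1 GV/m
# For comparison, the text quotes demonstrated gradients of ~100 GV/m over a
# centimetre (laser drivers) and ~50 GV/m over a metre (electron drivers);
# the proton-driven goals trade peak gradient for a much longer single stage.
```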
Long-duration gamma-ray bursts are associated with peculiar supernova explosions, and long-lived radio afterglow emission has been detected for some of them. So the discovery of relativistic radio ejecta from two supernovae not associated with detected gamma-ray bursts comes as a surprise. Did the gamma rays point away from Earth or were they trapped inside the star?
Gamma-ray bursts lasting more than several seconds are associated with peculiar supernova explosions of massive stars. They are thought to be generated by a central engine that is likely to be a newborn black hole at the heart of the dying star (CERN Courier September 2003 p15). These supernovae are all of spectral type Ibc (i.e. either Ib or Ic), which means that they are core-collapse supernovae with no evidence of hydrogen lines in the spectrum, suggesting that the massive star has previously blown away its outer envelope of hydrogen. Only about 1% of type Ibc supernovae display gamma-ray bursts and evidence for relativistic ejecta inferred from radio observations.
Previously, supernovae with relativistic outflows have been found only via their prompt gamma-ray emission. Now, however, A M Soderberg of the Harvard-Smithsonian Center for Astrophysics and collaborators report in Nature the discovery of mildly relativistic outflows in a supernova without a detected gamma-ray burst. They deduce the velocity of the blast wave in this supernova, SN 2009bb, from the luminosity and frequency of the synchrotron spectral turnover, which gives a measure of the size of the radio source at a given time after the onset of the supernova. They obtain a velocity of more than about 0.6 times the speed of light, which is interpreted as evidence for a central engine. The latter should have powered a gamma-ray burst coincident in time and position with SN 2009bb, but none was detected by the Interplanetary Network of spacecraft sensitive to gamma-ray bursts.
The radio luminosity of SN 2009bb and its decay over several months closely match those of SN 1998bw, the first supernova that could be associated with a gamma-ray burst. That burst, GRB 980425, is still the nearest detected gamma-ray burst, at a distance of about 120 million light-years, and the new event in the spiral galaxy NGC 3278 lies at a similar distance. The absence of an associated gamma-ray burst could indicate that there were no gamma rays or that they were directed away from the line of sight. However, even if the gamma-ray signal had been as bright as GRB 980425, it would not have been detectable by the Interplanetary Network. It could thus be that SN 2009bb is simply a twin of SN 1998bw. Only two other supernovae have so far been found in association with dim, nearby gamma-ray bursts, namely GRB 031203 (CERN Courier September 2004 p13) and the X-ray flash GRB 060218 (CERN Courier October 2006 p13).
SN 2007gr is another supernova that can now be added to the list: it, too, has a mildly relativistic outflow detected without gamma-ray emission. This independent discovery was reported in the same issue of Nature by a group of astronomers led by Z Paragi from the Joint Institute for Very Long Baseline Interferometry in Europe, based in the Netherlands, and from the MTA Research Group in Budapest. This time, the expansion of the supernova was measured directly from two high-resolution radio interferometric observations separated by 60 days.
The most sophisticated physics experiments today take place at large facilities, such as accelerators, which in turn need financial investments too large to be provided by a single country, however highly developed. Such investigations can be carried out only by collaborations between scientific centres in several countries, each bringing financial and intellectual contributions to the development of the cutting-edge facilities that make it possible to penetrate deeper into the secrets of matter. The LHC at CERN, for example, is the result of contributions from 20 member states and many other countries worldwide. Similarly, the Joint Institute for Nuclear Research (JINR) in Dubna, with 18 member states, is home to several accelerators. This internationally based research leads to new information not only in physics but also in related fields, such as astronomy and condensed-matter physics, and in modern technology. The methods used are also of great importance for interdisciplinary fields, such as nanotechnology, medicine and microelectronics.
The physics of nuclei in exotic states is one of the most important and rapidly developing areas of nuclear physics. Researchers can now produce nuclei in extreme states: nuclei with high angular momentum (rapidly rotating nuclei), with high excitation energy ("hot" nuclei), with unusual super- and hyper-deformed shapes, with extremely large numbers of neutrons or protons (neutron-rich and proton-rich nuclei), and superheavy nuclei with proton number, Z, above 110. Investigations of nuclear matter in such extreme states provide important information about the properties of the microcosm and make it possible to model a variety of processes taking place in the universe.
These studies, which rely on collaborative effort between countries, were the subject of the international symposium EXON 2009, held in the Black Sea resort of Sochi, Russia, on 28 September – 2 October. The symposium was organized by the four largest centres involved in the investigation of exotic nuclear states: JINR in Russia; the Grand Accélérateur National d'Ions Lourds (GANIL) in France; the GSI Helmholtzzentrum für Schwerionenforschung in Germany; and the research centre RIKEN in Japan. Some 140 scientists from institutes in 24 countries attended, including about 40 participants from JINR and 16 from other institutes across Russia; the largest delegations from outside Russia came from Germany (20 participants), France (16), Japan (12) and the US (8).
EXON 2009 was the fifth in this series of symposia on exotic nuclei, all of which have been held in Russia, starting with the first in 1991. The symposia have proved of interest not only to the organizing laboratories but also to participants from other research centres. In addition to discussing scientific problems and the collaboration needed to address them, participants have the opportunity to become acquainted with some of the most remarkable places in Russia, while the local research authorities and universities find out about the latest results in nuclear physics and the possibilities for applications in interdisciplinary fields of science and technology.
The scientific programme included invited talks about pressing problems in the physics of exotic nuclei as well as about new projects for large accelerator complexes and experimental facilities. The main discussions about the properties of nuclei at the limits of nucleon stability took place on the first day, with reports on newly observed unusual states at extreme ratios of proton to neutron number. The topics included: the change of the "accepted" magic numbers when approaching the limit of neutron stability; the coexistence in the same nucleus of two or more types of deformation; and the increase of nuclear stability resulting from deformation, which is important for understanding the stability of pure neutron matter. As UNESCO had declared 2009 the International Year of Astronomy, one talk was dedicated to investigations in this field: Shigeru Kubono from the University of Tokyo discussed the possibilities of studying important astrophysical problems with radioactive secondary beams.
Superheavy elements
In addition to the talks on light exotic nuclei, other reports covered the results of the latest experiments on the synthesis and properties of superheavy elements. Joint experiments by JINR's Flerov Laboratory of Nuclear Reactions (FLNR), GSI and the Paul Scherrer Institute have yielded interesting results on the chemical identification of elements 112 and 114 at the FLNR U400 cyclotron, as Heinz Gäggeler of PSI described. Speakers from different countries reported on a range of investigations of the properties of the superheavy elements using different methods. These reports underlined the importance of the investigations of superheavy elements that are carried out in Dubna by existing collaborations. One striking example is the experiment aimed at the synthesis of element 117 that is currently being performed at the U400 cyclotron by a large group of physicists and chemists under the guidance of Yuri Oganessian and Sergei Dmitriev, in collaboration with scientists from different laboratories in the US, who provided the target material of ²⁴⁹Bk. In addition, theoretical presentations included predictions of possible reactions for synthesizing superheavy elements and of their chemical properties.
A second day was dedicated to reports on the current and future heavy-ion and radioactive beam accelerator complexes in different scientific centres. The four laboratories that co-organized the symposium are currently creating a new generation of accelerators that will make it possible to improve considerably the work on the synthesis and studies of the properties of new exotic nuclei. There were detailed talks on the SPIRAL project at GANIL, the RI Beam Factory at RIKEN, the Facility for Antiproton and Ion Research (FAIR) at GSI and the DRIBs project at JINR. In his talk, Mikhail Itkis, JINR’s vice-director, presented plans for the development of the institute’s accelerator facilities, including the new complex, NICA. Georg Bollen of Michigan State University reported on the project for the Facility for Rare Isotope Beams (FRIB), now funded and to be built at the university. In this way, more centres are joining the group of institutes that are developing a new generation of accelerator complexes.
There were also presentations about other facilities for the production of radioactive beams, including ALTO in Orsay, EXCYT in Catania, RIBRAS at the University of Sao Paulo and the radioactive beams project at the Cyclotron Institute of Texas A&M University. The discussions around these talks showed that beams of radioactive nuclei are fundamental to investigations of the properties of nuclear matter in extreme states.
Round-table discussions also took place during the symposium to consider the results obtained in joint work and possible future collaborations. Bollen, a leader of the FRIB project, suggested including Michigan State University as a co-organizer of the next symposium, EXON 2012, which could take place in the city of Vladivostok in the Russian Federation.
• For full details about the scientific programme and speakers, see http://exon2009.jinr.ru/. There were about 80 talks in total and some 40 posters shown, all of which will be published in conference proceedings by the American Institute of Physics.
In his pioneering work on supersaturated vapours, which began in the 1890s, C T R Wilson found that droplets condensed on the ionization trails left by charged particles. This led to many advances, among the most significant of which was the cloud chamber's role in ushering in the field of elementary-particle physics. Just over 50 years ago, Edward Ney at the University of Minnesota suggested that cosmic rays might have an influence on the climate (Ney 1959). He proposed that ions from cosmic rays act as condensation centres for cloud droplets. This, however, is not the phenomenon that Wilson discovered: to make ionization trails visible, cloud chambers need very clean conditions and supersaturation at a level about four times greater than saturation. By contrast, such clean conditions are rare in the atmosphere and supersaturation levels are almost never more than 1% above saturation.
More recently, in 1997 Henrik Svensmark and Eigil Friis-Christensen reported a link between clouds and elementary particles (specifically cosmic rays) that has been used to claim that changes in cosmic-ray intensity cause changes in cloud cover and could affect global warming (Svensmark 2007). In this article we describe work that examines this claim critically. We also touch on the fascinating topic of lightning initiated by cosmic-ray showers and its possible role in the origin and evolution of life.
Figure 1 shows the basic evidence used to claim a causal correlation between cosmic-ray intensity (measured by the neutron monitor operated by the University of Chicago at Climax, Colorado) and low cloud cover (at altitudes below 3.2 km as measured by satellites). The data in the figure are for the period 1985 to 2008, which covers two solar cycles. In the 22nd cycle (1985–1996) the correlation between cosmic-ray intensity and low cloud cover was strong and this was the origin of the claim. However, the next (23rd) solar cycle has now passed (1996–2007) and the correlation is much more difficult to see. This suggests that the 1985–1996 observation might have been “accidental” and the effect of something completely different (such as temperature). Nevertheless, we will put this to one side and consider whether the apparent correlation is causal.
A test of the causal hypothesis is to examine the correlation as a function of geomagnetic latitude. The 11-year cosmic-ray variation becomes bigger at higher magnetic latitudes because of the effect of the Earth's magnetic field: fewer low-energy cosmic rays enter the Earth's atmosphere near the magnetic equator than near the poles. This effect is measured by the vertical rigidity cut-off (VRCO), the minimum rigidity that a primary cosmic ray needs to reach the Earth's atmosphere, which is computed from the local value of the planet's magnetic field. Our analysis looked at the differences between the low cloud cover at the solar minima of 1985 and 1996 and that at the solar maximum of 1990, as a function of the VRCO (Sloan and Wolfendale 2008). These were then compared with the changes in the cosmic-ray rate as measured by neutron monitors located around the world (figure 2). If the dip in the low cloud cover observed in 1990 was caused by the decrease in ionization from cosmic rays, then all of the points in figure 2 would follow the line of the cosmic-ray variation, marked NM. They do not.
Cosmic rays are not the only source of ionization in the atmosphere. We have looked for changes in cloud cover associated with a variety of other sources. The ionization released from nuclear weapon tests in the atmosphere was one example that we examined. At large distances from the test centre, radiation levels are high but other effects of the blast are negligible. For example, measurements showed that the Bravo test (the largest of the US tests), which exploded a 15-megatonne device at Bikini Atoll on 1 March 1954, produced radiation levels of 100 R/h at a distance of 480 km from the explosion. This corresponds to 5 × 10⁷ ion pairs/cm³, i.e. seven orders of magnitude more ionization than that produced by cosmic rays. However, no effects on cloud cover were observed. This shows that the efficiency for conversion of ions to cloud droplets must be low. Similarly, we examined radon concentrations in various parts of the world to see if high-radon regions had more cloud cover than their neighbours with low-radon concentrations. We also examined the ionization released in the Chernobyl disaster in 1986. Again, we did not find any significant effects of ionization on cloud cover.
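As a rough cross-check of the quoted figure, the roentgen (a fixed amount of ionization charge per kilogram of air) can be converted into ion pairs per cubic centimetre of air. The interpretation of the figure as a rate per second, and the sea-level air density used, are assumptions of this sketch.

```python
# Rough check of the quoted 5 x 10^7 ion pairs/cm^3 from a dose rate of 100 R/h,
# read here as an ionization rate per second (an assumption of this sketch).
ROENTGEN_C_PER_KG = 2.58e-4       # definition of the roentgen: charge liberated per kg of air
AIR_DENSITY_KG_PER_CM3 = 1.29e-6  # dry air at roughly sea-level conditions (assumed)
ELEMENTARY_CHARGE_C = 1.602e-19

dose_rate_R_per_s = 100.0 / 3600.0  # 100 R/h expressed per second
charge_per_cm3_per_s = dose_rate_R_per_s * ROENTGEN_C_PER_KG * AIR_DENSITY_KG_PER_CM3
ion_pairs_per_cm3_per_s = charge_per_cm3_per_s / ELEMENTARY_CHARGE_C

print(f"{ion_pairs_per_cm3_per_s:.1e} ion pairs/cm^3 per second")  # ~6e7, the same order as quoted
```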
Recently, Svensmark's group examined the so-called Forbush decreases in cosmic-ray intensity, which are caused by solar coronal mass ejections. The group found that the six strongest decreases of the past 20 years are followed by significant drops in low cloud cover and in other indicators of atmospheric water content. We have examined the evidence in detail and concluded that it is not only statistically weak but that it also requires unphysically long delays (6–9 days) for the change in cosmic-ray flux to manifest itself as a change in cloud cover or cloud water content.
The correlation between low cloud cover and cosmic rays in figure 1 is therefore presumably not causal, because we have found that ionization is not efficient at producing cloud cover. A more likely cause relates to solar irradiance, not least because the change in the energy content of solar irradiance is about 10⁸ times that of cosmic rays. In this context, Mirela Voiculescu of Dunarea de Jos University in Romania and colleagues showed correlations between low cloud cover and either the cosmic-ray rate or the solar irradiance in limited geographical areas (Voiculescu et al. 2006). Such areas cover less than 20% of the globe. A close examination of these areas reveals that only the correlation between solar irradiance and cloud cover is seen in both solar cycles; any correlation with cosmic rays does not appear in both.
Variation in solar irradiance over the 11-year solar cycle is a much more plausible cause of any correlation with cloud cover than cosmic rays; indeed, Joanna Haigh of Imperial College London has modelled such an effect (Haigh 1996). A comparison of the long-term variation of the global average surface temperature with the long-term solar activity shows that less than 14% of the observed global warming in the past 50 years comes from variations in the solar activity.
This is not to say that ionization has no effect on climate at all. There may well be an interesting effect involving the terrestrial electrical circuit, which seems to be affected by cosmic rays. No doubt the CLOUD experiment under way at CERN will throw further light on this problem and tell us just how large any such effect might be.
Lightning and the origin of life
One fall-out of the work described above has been interest in the role of cosmic rays in a particularly dramatic cloud effect: lightning. Alexander Gurevich and Kirill Zybin of the P N Lebedev Physical Institute in Moscow suggested in 2002 that extensive air showers (EAS) created by cosmic rays play a key role in initiating the leader strokes in lightning. This has been confirmed in more recent observations at the Lebedev Institute’s Tien-Shan Mountain Cosmic Ray Station by Gurevich and colleagues.
This phenomenon has a possible relevance to the origin of life on Earth. The current favourite models for this origin place it either on comets in outer space, as Fred Hoyle and Chandra Wickramasinghe suggested, or in the black smokers or alkaline vents that result from volcanic activity in the deep oceans. Another possibility, however, follows from the famous early experiments of Stanley Miller and Harold Urey, in which they passed a spark through a "prebiotic soup" of water, methane, ammonia and other simple compounds. This resulted in the appearance of basic building blocks of life, such as amino acids and RNA monomers. One problem, however, was that the available spark energy, from lightning, was thought to be inadequate.
This is where the long-term variability of EAS rates may be relevant. We have shown that there should have been periods during which the EAS rate was higher by orders of magnitude than at present (Erlykin and Wolfendale 2001). Our argument is based on the statistical nature of supernova explosions, which are thought to be the originators of high-energy cosmic rays. Figure 3 shows how, from time to time, periods of high cosmic-ray intensity lasting tens of thousands of years will occur when a nearby supernova explodes, leading to high lightning rates. One such period, occurring around 4 Gy before the present (a not unlikely occurrence), could have led to the formation of the building blocks of life via the Miller-Urey mechanism. Life could then have evolved from such a start.
Perhaps less speculative is the role of NOx (NO + NO₂) generated by lightning strokes. It seems that nearly 20% of the contemporary concentration of NOx is produced by lightning, and its rate of production would certainly have varied considerably. NOx is poisonous to mammals but promotes growth in plants; an effect on the evolution of species, both positive and negative, is therefore likely.
In conclusion, the interaction of cosmic rays with the Earth’s atmosphere is a topic of considerable interest. Although it is unlikely that cosmic rays are a significant contributor to global warming, their contribution to the pool of aerosol-cloud condensation nuclei could be non-negligible; the CLOUD experiment has a big role to play in elucidating the interesting science involved. On a wider canvas it would not be surprising if electrical effects in the atmosphere, initiated by cosmic rays, played a role in the evolution of the Earth’s inhabitants.
• The authors are grateful to the John Taylor Foundation for supporting this work.
When the International Symposium on Lepton and Photon Interactions at High Energies first took place in Hamburg, in 1965, it was in an earlier incarnation that referred to electrons rather than to all leptons. The DESY electron synchrotron had started up the previous year, so the young laboratory was an obvious host for what was then a relatively specialized conference. Since then, high-energy electrons have revealed the reality of quarks and the complex nature of the proton; muons have provided signatures of new states of matter, from charmonium to the quark–gluon plasma; neutrinos from beyond the Earth have given glimpses of physics beyond the Standard Model; and photons have begun to offer a new view of the high-energy universe. "Lepton Photon" has thus grown to encompass all of particle physics, and the 24th symposium, held in Hamburg during DESY's 50th anniversary year, was no exception.
Within its standard format of invited plenary sessions only, Lepton Photon 2009 presented a clear and concise overview of particle physics today. Expectations for the future formed a recurrent theme, not only in view of the imminent start-up of the LHC but also looking towards upgrades, new experiments and facilities that push the frontiers of energy and luminosity. This report focuses mainly on recent results presented at the conference, on topics ranging from QCD and heavy ions to neutrinos and dark matter.
When the conference was originally planned, it seemed likely that it would be dominated by the first collisions at the LHC. That news will now fall to the summer conferences of 2010, but the LHC still loomed large in the presentations at Lepton Photon 2009. The first scientific session heard the latest news about the steady progress towards the restart, following the incident of September 2008. The four major experiments, ALICE, ATLAS, CMS and LHCb, took advantage of the prolonged shutdown to complete installation work, implement improvements and make thorough tests with cosmic rays, efforts that led to a highly successful restart in November and December last year.
The harvest of data from HERA – the world’s only electron–proton collider, which ran from 1992 until 2007 – continues to paint a remarkably clear picture of the internal workings of the proton within the context of QCD, the theory of the strong force. The precision that comes from combining HERA-I data (1996–2000) from the H1 and ZEUS experiments yields impressively accurate distribution functions for the gluons and the quark–antiquark sea in the latest QCD analysis at next-to-leading order (NLO), especially at low values of the momentum fraction. Both H1 and ZEUS have made the first measurements of the structure function, FL, at low x and there are also new results from ZEUS with improved precision at high values of momentum-transfer-squared (high Q2).
The HERMES collaboration at HERA took a different approach by observing the collisions of the electron beam with a gas target. The analysis of kaon production from deuterons indicates that the density of strange quarks – present in the quark–antiquark sea in protons and neutrons – varies differently with x than does that of the sea of lighter quarks. H1, meanwhile, has new measurements for charm and bottom quarks, which agree with QCD analyses.
The main aim of HERMES was to learn more about contributions to the nucleon’s spin, the goal also of COMPASS at CERN (using muons), fixed-target experiments with electrons at Jefferson Lab and polarized proton–proton collisions at RHIC at Brookhaven. Results from these studies have fed the first global NLO QCD analyses of both polarized deep-inelastic lepton–nucleon and proton–proton scattering. The results reinforce the puzzling discovery that the quarks and antiquarks contribute only 25–35% of the nucleon’s spin. They also indicate a large negative contribution from the strange quark at low x, with small contributions so far from the gluon, derived for the first time from the polarized proton–proton data, but subject to large uncertainties.
While parton distribution functions (PDFs) give a picture of the momentum fraction carried by the constituents in a nucleon, generalized PDFs (GPDFs) give a fuller view that includes information on longitudinal and transverse momentum, which should allow the contribution of orbital angular momentum to the nucleon’s spin to be derived. GPDFs can be extracted from measurements of deeply virtual Compton scattering. The e1-dvcs experiment with the CLAS detector at Jefferson Lab has made an extensive set of high-quality measurements of the beam-spin asymmetry, which will constrain the GPDFs. Also at Jefferson Lab and elsewhere, experiments have studied transversity, which gives a measure of helicity-flip. Last summer the HERMES collaboration reported clear evidence for a non-zero “Sivers effect” in semi-inclusive deep-inelastic scattering from a transversely polarized target, which suggests a non-zero orbital angular momentum for the quarks in a nucleon. A recent fit to data from both HERMES and COMPASS to determine the Sivers function indicates that the orbital angular momentum is mainly from the valence quarks.
Fermilab's Tevatron continues its Run II, which began in 2001, with proton–antiproton collisions at a total energy of 1.96 TeV. Here the study of jets of particles reveals the hard scattering of the quarks. The DØ collaboration has now measured the angular distribution of pairs of jets (dijets) at this collision energy, for dijet masses ranging from 0.25 TeV to more than 1.1 TeV: in effect, the first "Rutherford" experiment to go above 1 TeV, a century after Hans Geiger and Ernest Marsden published their results on alpha-particle scattering, which gave the first evidence for Rutherford scattering. This sets the most stringent limits to date on the scale of possible quark substructure, Λ > 2.9 TeV, and also on the scale of extra dimensions.
The large amounts of data accumulated in Run II provide a major test-bed for QCD and an important hunting ground for new particles and new physics. By the time of the conference, the collider had delivered 7 fb⁻¹ and the collaborations had analysed 2.7 fb⁻¹. The CDF and DØ experiments have high-precision results that agree well with NLO perturbative QCD for inclusive jets and dijets, setting limits on new particles with masses up to more than 1.2 TeV. By contrast, there are discrepancies that are still to be understood in the production of isolated photons.
While results such as these from HERA and the Tevatron continue to consolidate QCD, there has also been impressive theoretical progress in making more precise predictions, in particular in higher-order calculations in readiness for the LHC. Leading-order calculations are already automated and are beginning to include more final-state particles, as in 2→6 body processes. There are important breakthroughs at NLO, with the first calculation of a 2→4 body cross-section, qq̄ → tt̄bb̄, in 2008 and developments in automation, for example in calculating W + 3 jets, in 2009. These are important for estimating backgrounds to searches at the LHC. At next-to-NLO, there is progress in calculations on processes that will provide "standard candles" at the LHC.
At the same time, lattice QCD is moving from simulation to the calculation of real physical quantities, a quarter of a century after its invention. Improved algorithms with light quarks have led to new results on the hadron spectrum, with masses agreeing well with experiment. Contributions to flavour physics are also progressing with improved inputs for the Cabibbo–Kobayashi–Maskawa (CKM) matrix. A steady increase in computing power, to the petaflop scale, should lead to further improvements through simulations with smaller spacings (from 0.1 to 0.05 fm) on larger volumes (3–6 fm scale), which will be better suited for studies of QCD in hot, dense matter.
The ultimate test for QCD arguably lies in the hot and dense matter that forms in relativistic heavy-ion collisions and in determining its equation of state and bulk thermodynamic properties. Lattice QCD provides access to this extreme state through simulation, while RHIC at Brookhaven has been the main scene for such studies since 2000. The elliptic flow observed at RHIC is consistent with a phase transition, as recent lattice-QCD simulations also clearly indicate. It is also consistent with the formation of an almost perfect fluid, with a ratio of viscosity to entropy density almost 10 times lower than in superfluid helium. Intriguing puzzles remain, however. The BRAHMS and PHOBOS experiments at RHIC shut down in 2006, but STAR and PHENIX are being upgraded. The LHC will also target hot QCD matter and perhaps observe the kinds of shockwave described in hydrodynamical calculations of a fluid-like medium.
In the electroweak sector, the Tevatron continues to make inroads into areas that were out of reach for the experiments at the Large Electron–Positron (LEP) collider and the SLAC Linear Collider, in particular measuring the W boson and the top quark as never before. Fourteen years after the discovery of the top quark through tt̄ pair production, both CDF and DØ finally observed the electroweak production of single top quarks in 2009. The combined results presented at the conference yield σt = 2.76 +0.58/−0.47 pb; they also allow a measurement of the CKM matrix element, |Vtb| = 0.91 ± 0.08. Together, the experiments now know the top-quark mass to 0.7%, with a combined measurement of 173.1 ± 0.6 (stat.) ± 1.1 (syst.) GeV. Other results include a new world average for the mass of the W boson of 80399 ± 23 MeV, incorporating Tevatron data that give an average of 80420 ± 31 MeV.
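Such world averages are typically formed as inverse-variance weighted means. The sketch below reproduces a number close to the quoted average, but only under two assumptions not stated in the text: that the non-Tevatron input is the LEP value of about 80376 ± 33 MeV and that correlations between the measurements can be neglected (the official combination treats correlated systematic uncertainties properly).

```python
# Sketch of an inverse-variance weighted average of two W-mass measurements.
# The LEP number is an assumption for illustration; correlations are ignored.
measurements = [
    ("LEP (assumed)", 80376.0, 33.0),   # value, uncertainty in MeV
    ("Tevatron",      80420.0, 31.0),   # Tevatron average quoted in the text
]
weights = [1.0 / err**2 for _, _, err in measurements]
mean = sum(w * val for (_, val, _), w in zip(measurements, weights)) / sum(weights)
error = (1.0 / sum(weights)) ** 0.5
print(f"combined m_W = {mean:.0f} +/- {error:.0f} MeV")  # roughly 80399 +/- 23 MeV
```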
The Tevatron experiments also continue to chip away at channels that are difficult to pull out of the data, but which will be important in searches for the Higgs boson at the LHC. For example, both DØ and CDF have now observed the production and decay of a pair of Z bosons to four leptons – the smallest cross-section of diboson states in the Standard Model – at significances of 5.3σ and 5.7σ, respectively.
For real progress, the Standard Model is still screaming out for hard evidence for (or against) the Higgs boson. Direct searches at the Tevatron now exclude a Standard Model Higgs with a mass in the range 160–170 GeV (at 95% CL), while precision measurements, including the Tevatron’s masses for the W boson and top quark, push the mass below 163 GeV. By 2011, or soon after, the Tevatron should provide sufficient luminosity to exclude the Standard Model Higgs directly – or provide evidence for it.
In addition to squeezing the Higgs, CDF and DØ continue to search for new phenomena, but so far without success. At the same time, a variety of experiments are putting pressure on the Standard Model, searching for cracks that might lead to new physics. The Standard Model, meanwhile, remains so impervious that effects not much bigger than 2σ seem hopeful: at HERA, combined data from H1 and ZEUS show a slight excess (2.6σ) of high-momentum-transfer events in e+p interactions with multilepton final states.
Low-energy experiments also offer a route to new physics, for example through measurements of finite electric dipole moments (EDMs) and searches for rare muon decays. Here, the experiment at the University of Washington in Seattle delivered an important result in 2009, with a new limit on the EDM of ¹⁹⁹Hg of less than 3.1 × 10⁻²⁹ e cm (at 95% CL), a factor-of-seven reduction of the previous upper limit. The collaboration has further improvements in the pipeline, which should increase the experimental sensitivity by a factor of 3 to 5. In the search for rare muon decays, the MEG experiment at PSI found a preliminary limit on the branching ratio of μ+ → e+γ of less than 3.0 × 10⁻¹¹ from data collected in 2008.
Flavour physics offers a different line of attack, in particular through the CKM matrix, which links the different quark flavours. Testing the unitarity of the matrix ultimately tests the integrity of the Standard Model. New measurements of nuclear β-decays and from the KLOE, CLEO-c, Belle and BaBar experiments, as well as from CDF and DØ, continue to probe the matrix with increasing precision. The magnitudes of the matrix elements agree well with unitarity, although there are some small (up to 2σ) inconsistencies between results from different analyses. The global fit to the unitarity triangle is also good, with the angles summing to (185 ± 13)°, although again there is some tension concerning sin 2β at the 2σ level. High luminosity at the B factories at KEK and SLAC is making possible an impressive series of measurements of rare B decays with the potential to expose new physics. The decay B → τν, for example, which constrains a possible charged Higgs particle, disagrees with the CKM fit at the 2.4σ level.
Neutrinos have so far been the only particles to provide a playground outside the Standard Model, with the discovery of neutrino oscillations, and hence neutrino mass, in atmospheric and solar neutrinos some 10 years ago. Since then, various experiments have pinned down the oscillation parameters to the level of a few per cent, with different types of experiment suited to different parameters. For example, experiments with solar electron-neutrinos and reactor electron-antineutrinos give access to the mass-squared difference Δm²₂₁ and the mixing angle θ₁₂. The reactor experiment KamLAND finds Δm²₂₁ = 7.58 × 10⁻⁵ eV² (to a level of 2.7%) and tan²θ₁₂ = 0.56 (to ~25%), compared with the global result from the solar-neutrino experiments of Δm²₂₁ = 4.90 × 10⁻⁵ eV² (~34%) and tan²θ₁₂ = 0.437 (~10%). The Borexino experiment is now producing interesting results for solar neutrinos over an energy range that includes electron-neutrinos from ⁷Be and the carbon-nitrogen-oxygen cycle as well as from ⁸B.
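These parameters enter through the standard two-flavour survival probability. The sketch below evaluates it with the KamLAND values quoted above; the baseline and antineutrino energy are illustrative assumptions typical of a reactor experiment, not numbers from the article.

```python
import math

# Standard two-flavour approximation behind such parameter determinations:
# P(survival) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]).
def survival_probability(dm2_ev2, sin2_2theta, L_km, E_gev):
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_gev) ** 2

tan2_theta12 = 0.56                                    # KamLAND value quoted in the text
sin2_2theta12 = 4 * tan2_theta12 / (1 + tan2_theta12) ** 2
# ~180 km baseline and 4 MeV antineutrino energy are assumed for illustration.
print(survival_probability(7.58e-5, sin2_2theta12, 180.0, 0.004))
```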
Using the muon-neutrino beam of the Neutrinos at the Main Injector facility at Fermilab, the MINOS experiment has measured the disappearance of muon-neutrinos, observing 848 events against an expectation of 1060 ± 60 for no oscillations and disfavouring other theoretical possibilities at a level of 6σ. MiniBooNE, by contrast, is investigating oscillations at lower energies with neutrinos from the Fermilab Booster neutrino beam. Set up to investigate the excess of electron-antineutrino events seen in a muon-antineutrino beam by the LSND experiment at Los Alamos, MiniBooNE finds no significant excess across an energy range of 200–1250 MeV, but the results are as yet inconclusive regarding oscillations with Δm² at the 1 eV² scale suggested by the LSND result. Intriguingly, however, MiniBooNE does continue to observe an excess of electron-like events in the muon-neutrino beam, in the energy region between 200 and 475 MeV, as first reported in 2007.
Neutrinos from outer space have the potential to provide a new view of the cosmos, but their sources continue to elude discovery. There is more success with charged cosmic rays, where the Pierre Auger Observatory is making headway in the study of ultra-high-energy cosmic rays, with as many as 58 events above 55 EeV (55 × 10¹⁸ eV). The latest results confirm the extragalactic origin of these ultra-high-energy particles and their anisotropic distribution and underpin the collaboration's enthusiasm for an Auger North array in the Northern Hemisphere to complement the existing Auger South array in Argentina.
The greatest success in pinning down sources comes from the cosmic gamma-ray experiments, with Cherenkov arrays such as HESS and MAGIC complemented by the new spacecraft Fermi and AGILE. At very high energies the number of identified sources has risen from 12 in 2003 to an impressive 96 in 2009, which includes new categories such as starburst galaxies (2) and Wolf-Rayet objects (3) as well as the more familiar active galactic nuclei (24) and pulsar wind nebulae (23). The FermiLAT collaboration has also significantly increased the number of identified sources of high-energy gamma rays, finding 205 with a significance of more than 10σ.
Cosmic radiation is also offering a tantalizing window on dark matter to complement the direct laboratory-based searches for dark-matter particles. The direct searches have seen much progress in looking for the hypothesized axions and weakly interacting massive particles, but confirmed detection remains elusive. Similarly, cosmic rays provide conflicting and unconfirmed evidence. The FermiLAT collaboration and the PAMELA experiment find increases in electrons and positrons, respectively, which could indicate dark matter but are probably effects from nearby pulsars.
From QCD to dark matter, Lepton Photon 2009 took a sweeping view across the whole range of particle physics today. While the Standard Model stands firm, many questions remain unanswered. In the closing talk, Guido Altarelli raised the spectre of the anthropic solution, that perhaps we live in a universe that is very unlikely but that allows our existence, though he swiftly added that he did not consider this appropriate. In his view, supersymmetry remains the best solution to difficulties such as the hierarchy problem and, if this is the case, the LHC should find the light supersymmetric particles. The LHC is thus heavily charged with the expectations of the worldwide particle-physics community. If all goes well, results from the new collider should indeed dominate the next meeting in the Lepton Photon series, to be held in Mumbai in 2011.
It should come as no surprise, then, that the work at the Large Hadron Collider at CERN has been the jumping-off point for some of Keith's most critically acclaimed work. His 2002 exhibition "Supercollider", shown at the South London Gallery in the UK, took its title from the goings-on at CERN. The title piece of the exhibition was a giant studio drawing with the subtitle "From the Action of Four Forces on 103 elements within four dimensions, we get…" and needs no explanation for any scientist. Random quotes drawn from everything from planetary charts to entries in anonymous diaries, combined with splashes of colour and pictures of a red-haired model wearing an itsy-bitsy green bikini, are some of the myriad miscellaneous items that collide on this giant painting and reflect the wonderful diversity of the world created "from the action of four forces…". Another mixed-media piece, Bubble Chambers: 2 Discrete Molecules of Simultaneity, bursts with random quotes bearing random dates from 1325 to 2002, dotted across a surface crammed with molecules represented as bubbles in reds, blacks, blues, pinks and whites.
Both of these pieces are like a mirror held up to the viewer. Look at them and, inevitably, the temptation kicks in to start drawing conclusions or making narratives out of the random juxtapositions: the mind's processes writ large. Like much of Keith's work, they reflect his interest in the way that we make sense of the world, as the observer and the observed, the viewer and the artist, and in the ways that we use logic, intuition and counter-intuition to make these discoveries. In essence, we are all in this artistic experiment together, discovering who and what we are in the act of looking, the artist included.
But if looking is important to Keith, it wasn’t until 2009 that he finally came to take his own look at CERN. Talk to him about his visit and he says that what impressed him above everything was “not so much the LHC or the machines themselves, as the way in which the scientists at CERN meet ideas head on and change the way we think about ourselves”.
He came as part of a private party of artists, including fellow British artist Cerith Wynn-Jones and the German experimentalist Ali Janka, who were shown round the CERN complex by the communication team in September 2009. In many ways, visiting CERN was a homecoming for Keith, and one that he found profoundly moving. He encountered an international community dedicated to breaking the boundaries of knowledge and challenging the world of appearances, ideals that are close to his own heart and mind.
For someone so omnivorous in his wish to gain knowledge, physics isn't the only science that fascinates him; chemistry and mathematics engage him too. Some of his most famous pieces include the fractal dice, part of the Geno Pheno series (2005), which explores the worlds of cause and effect and takes its title from genetics. The work explores the idea of what constitutes a starting point (an artwork's DNA, so to speak), its physical manifestation and where it leads. The fractal dice pieces are three-dimensional aluminium and plastic sculptures in vibrant primary colours: reds, blacks, greens and yellows. They are assembled in galleries around the world, where they are shown according to a mathematical system of random iterative functions supplied to the curators by the artist. The form of each piece (sometimes as many as 14 are shown at any one time, sometimes fewer) is determined by rolls of the dice and by rules set out by the artist. For example, rule number one determines which colour a particular side of the sculpture should be. Complexity and unpredictability are both shown to be crucial components of the creative process, which involves both decisions and chance.
This love of engaging with different sciences and their processes shows how wary Keith is of being enslaved by any one knowledge system. His sculptural piece Teleological Accelerator (2003) shows this clearly. It is a massive wall installation measuring 5 m across, with two interlocking metal discs made of aluminium and steel that carry a diagram of words and concepts written in pencil, ranging over all kinds of human achievement as well as an accumulation of scientific definitions. The flexible indicators can be twisted by the viewer, so that the artist playfully conveys his idea that teleology is whatever you make of it. Meaning is not a fixed point: it is always changing.
If much of Keith's work shows a great indebtedness to science, and a love of it as a knowledge system and form of enquiry about the world, some of his latest work also shows an awe-inspired sense of nature. After all, as Keith so eloquently says, "Science and art are the ways in which we describe the world. Nature is the world." The 2009 work Mathematical Nature Painting Nested, currently on show at the Royal Academy of Arts in London, is a portrait of original transformations. Paints and chemicals have been poured onto a primed aluminium sheet and a painting takes shape thanks to the hydrophobic reaction that forms the basis of the work. This is the first phase. In the second phase, Keith determines the appearance of the painting, as far as he can, to make it resemble cell structures or geological strata, according to the way that he dries the paint over the following month.
Like particle physics itself, Keith is pushing boundaries, working within limits and constraints and outside them too: “I am not interested in the role of the artist as creator. Art is a vehicle of enquiry and the role of the artist is much more like that of Christopher Columbus – we are navigators and discoverers of what is already out there in the world but has yet to be discovered.”
He could just as easily be talking about the role of the scientist, but he is clear about how different artists and scientists are, as well as the ways in which the arts and science could and should interact: “Artists, unlike scientists, are not attempting to model the world. They are trying to engage the viewer with the wonder of it. If you attempt to marry and equate art with science, then you fail. If you allow what is not similar about art and science, and their different methods and processes to co-exist and thrive, then a real art/science collaboration and aesthetic will emerge. But at the end of the day, both art and science are united by one logic and one impulse – both are attempts to understand what it is to be human and the world around us.”
• For more information about Keith Tyson’s latest creations, see his official website at www.keithtyson.com.
In June 2008 the European Commission awarded €5.6 million to the PARTNER project (Particle Training Network for European Radiotherapy), with the aim of training young researchers in aspects of hadron therapy that touch on both particle physics and therapeutic applications. Co-ordinated by CERN, this four-year project brings together 10 institutes and research centres in Europe. It is an exceptional multidisciplinary network in which 25 students receive training on subjects ranging from particle physics to epidemiology. In its first year, the network has already organized training courses on detectors and accelerators, teamwork and practical radiobiology.
In June 2008 the European Commission awarded €5.6 million to the Particle Training Network for European Radiotherapy (PARTNER), with the objective of training young researchers in aspects of hadron therapy that involve particle physics and applications in medicine. Co-ordinated by CERN, the four-year project involves 10 institutes and research centres in Europe. It represents a unique multidisciplinary network in which 25 students, 4 of whom are based at CERN under the fellowship programme, receive training in fields that range from elementary particle physics through the design of gantries for hadron therapy to epidemiology. Such specially trained students are vital now that hadron therapy is becoming established as an important procedure for the treatment of cancer (CERN Courier December 2006 p24).
PARTNER’s activities got under way in October 2008 with the network’s initial meeting at CERN (CERN Courier December 2008 p6). Since then the network has designed a strategy for efficient project management and the timely delivery of the agreed deliverables, as well as developing a training programme and recruiting its key element – the researchers. The project is funded under the EC’s Marie Curie Initial Training Network (MC-ITN) scheme, which seeks to improve the career prospects of young researchers. One of the first activities for PARTNER was thus an MC-ITN Administrators Course for Marie Curie Projects, organized at CERN in January 2009 together with external consultants, to help all of the collaborating institutes understand what is required in managing such projects.
A successful collaboration
The student training programme started the following June with a course on Detectors and Accelerators Applied in Medicine, held at the Instituto de Física Corpuscular (IFIC) in Valencia. The workshop provided an overview of several topics involved in the biomedical applications of detectors and accelerators, from Monte Carlo simulation and Grid computing to image science. CERN’s input into the PARTNER training programme is closely linked to its strengths not only in accelerator research but also in Grid technologies.
By adopting Grid technologies, the PARTNER project hopes to solve some of the multilayered problems of sharing data for referring patients, optimizing treatment planning and making use of the experience gained from managing databases. This involves exchanging large diagnostic images, sharing cancer databases and merging referral systems across European borders. In this first course, the students were able to reinforce their knowledge as well as learn about new subjects. They also had the opportunity to meet each other for the first time and to find out what others in the project are working on.
The second course was a workshop designed to familiarize young researchers with relevant aspects of leadership and team-building in the research environment. This took place in September at the University of Surrey and was led by David Faraday from Evolve Leadteam Ltd. The topics covered included communication, assertiveness, negotiation and time-management skills. Practical exercises and discussion groups complemented the theoretical presentations. Group activities included projects on technical problem solving, financial planning and analysis, risk assessment, product availability, implementation and construction. Those attending the workshop suggested that such a course would be valuable for all of the researchers in PARTNER, including those with more experience, because they come from differing disciplines and tend to have different skill levels.
A hands-on radiobiology course came next, held on 25–27 November 2009 at GSI, Darmstadt. This offered an introduction to radiobiology and lectures on experimental results, covering topics such as DNA experiments, chromosome analysis, cell-survival curves, DNA damage repair and DNA fragmentation after irradiation. In the context of the specific training offered by PARTNER, researchers also participated in the Particle Therapy Co-Operative Group Congress held in Heidelberg on 28 September – 3 October 2009 and the ESTRO course on “Radiation Therapy with Protons and Ions”, held at Pfäffikon on 10–14 May 2009.
In all, PARTNER has had a successful first year. Students have given positive and constructive feedback on all of the activities, and they particularly seemed to appreciate the opportunity to network with students and experts in the field from all over Europe. The training programme continues in 2010 with the first course of the year, “Hadron therapy: past, present and future”. Held at CERN on 21–26 February, the course was co-organized by two medical experts, from the MedAustron project and from Oxford University. Their aim was to teach what has been learnt in the past, to define clearly the relevant disciplines from physics and technology and to examine the potential of proton- and ion-beam therapy.
A valuable PARTNER
Lara Barazzuol (Italian), Surrey Materials Institute, University of Surrey.
“The ‘Radiobiology hands-on’ training was an intensive and active time in which we all learnt a lot, actually practising what was taught during the first session. Personally, I think it was very useful for my studies, as I am involved in radiobiological experiments. In particular, I have learnt new techniques that I am currently applying in my own research. Overall, I can say that the PARTNER project has immense potential to grow in the next two years, providing training that helps us to improve and develop our skills.”
David Watts (British/Canadian), CERN and TERA.
“One very positive aspect of PARTNER is that the researchers, mostly young people like myself, are encouraged to network and share ideas and tackle problems together. We are given these opportunities during training sessions and courses, and during the leadership training we began to make bonds that should help us stay in contact during our careers in the field, thus keeping open the channels of communication between biologists, physicists, computer scientists and doctors. In short, we are kept very busy with our research, we are given all the opportunities to get training within and beyond our field, and I should not forget that though we have much to do, we are having fun doing it!”
Silvia Verdu Andres (Spanish), TERA Foundation (Therapy with Hadronic Radiations).
“PARTNER has succeeded in bringing together researchers from the different fields involved in hadron therapy and encouraging the exchange of knowledge between them. From a young researcher’s point of view, training is one of the most important things. The training courses have been carefully selected to introduce us to the complex world of hadron therapy and to help us become future professionals in our field, giving special importance to the way that we communicate and work in groups. Last but not least, I like PARTNER because it is composed of very enthusiastic people from all over the world who love the work they are performing for society.”
Faustin Roman (Romanian), CERN.
“I feel very privileged to be part of the PARTNER network because of the challenges of the project and the people involved. The programme is amazing: multidisciplinary courses, soft-skills training, practice and access to state-of-the-art technology. All of this is well organized, which says a lot about the quality of the supervisors and the co-ordination of the project. Within the PARTNER project I have had the opportunity to build a software platform to share medical data. I know that by sharing medical information we can greatly improve patients’ treatment and quality of life.”