

Carbon-ion radiotherapy flourishes in Japan

Résumé

Carbon-ion hadron therapy expanding rapidly in Japan

Having treated its first patient in 1994, Japan has established itself as a pioneer in the use of carbon ions for treating tumours. On 16 March, a patient suffering from prostate cancer was treated for the first time with a carbon-ion beam at a new facility of GHMC, Gunma University’s centre for treatment and research dedicated to heavy-ion therapy, three years after construction began in February 2007. The facility is a scaled-down version of HIMAC, the first heavy-ion accelerator used for therapeutic purposes, located in Chiba. By 2014, Japan will have five carbon-ion treatment centres and eight proton-therapy centres.

The first prostate cancer patient was successfully treated with a 380 MeV/u carbon-ion beam at a new facility of the Gunma University Heavy Ion Medical Center (GHMC) on 16 March, three years after construction began in February 2007. This facility, which is a pilot project to boost carbon-ion radiotherapy in Japan, has been developed in collaboration with the National Institute of Radiological Sciences (NIRS).

The GHMC facility, shown schematically in figure 2, can deliver carbon-ion beams with energies ranging from 140 to 400 MeV/u. It consists of a compact injector, a synchrotron ring, three treatment rooms and an irradiation room for the development of new beam-delivery technology. The first beam was obtained from the accelerator system on 30 August 2009. Beam commissioning was followed by three months of pre-clinical research before the first treatment began. Each week since has seen two patients added to the schedule. The facility is a pilot plant based on a downsized version of the Heavy-Ion Medical Accelerator in Chiba (HIMAC), which means that the clinical instrumentation, as well as the accelerator system, is better tailored for clinical use and comes at a much lower cost – about a third less than that of the HIMAC facility.

Clinical trials

Heavy-ion beams are particularly suitable for treating deep-seated cancers, not only because of their high dose-localization at the Bragg peak but also because of the high biological effect around the peak (CERN Courier December 2006 p5). NIRS decided to construct the HIMAC facility in 1984, encouraged by the promising results from the pioneering work at the Lawrence Berkeley National Laboratory in the 1970s. Completed in October 1993, HIMAC was the world’s first heavy-ion accelerator facility dedicated to medical use. A carbon-ion beam was chosen for HIMAC, based on the fast-neutron radiotherapy experience at NIRS. It uses the single beam-wobbling method – a passive beam-delivery method – because it is robust with respect to beam errors and offers easy dose management.

Since the first clinical trial in June 1994, the total number of treatments with HIMAC had reached more than 5500 by February 2010, including about 750 treatments in 2009 alone, with single-shift operation on an average of 180 days a year. The treatment has been applied to various types of malignant tumour. On the strength of the accumulated results from these clinical protocols, in 2003 the Japanese government approved carbon-ion radiotherapy with HIMAC as a highly advanced medical technology.

The development of beam-delivery and accelerator technologies has significantly contributed to improved treatments at HIMAC. For example, NIRS developed methods for treating tumours that move with a patient’s breathing and for reducing the undesired extra dose on healthy tissue around tumours that occurs in the single beam-wobbling method, routinely used at HIMAC.

Carbon-ion radiotherapy with HIMAC has so far proved significantly effective not only against many kinds of tumour that have been treated by low-linear-energy-transfer (LET) radiotherapy but also against radio-resistant tumours, while keeping the quality of life high without any serious side effects. It has also reduced the number of treatment fractions, leading to a shorter course of treatment compared with low-LET radiotherapy. For example, single-fraction irradiation from four directions has been used in treating lung cancer, which means only one day of treatment. NIRS therefore proposed a new facility to boost the application of carbon-ion radiotherapy, with the emphasis on a downsized system to reduce costs.

The design of the proposed facility was based on more than 10 years of experience with treatments at HIMAC. The key technologies for the accelerator and beam-delivery systems that needed to be downsized for the new facility were developed from April 2004 to March 2006, and their performances were verified by beam tests with HIMAC. In particular, the prototype injector system comprised a compact 10 GHz electron-cyclotron-resonance ion source based on permanent magnets, together with radio-frequency quadrupole (RFQ) and alternating-phase-focused interdigital H-mode (APF-IH) linear accelerators. It was constructed and tested at HIMAC because an APF-IH linear accelerator had never before been built for practical use anywhere else. Tests verified that the injector system could deliver 4 MeV/u C4+ with an intensity of more than 400 eµA and a transmission efficiency of around 80% from the RFQ to the APF-IH. This R&D work bore fruit in the recently opened GHMC facility.
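To put the quoted intensity in perspective, 400 eµA of C4+ corresponds to roughly 6 × 10^14 ions per second, since each ion carries four elementary charges. A minimal Python sketch of this conversion, assuming that “eµA” denotes electrical microamperes (the numbers are those quoted above; the function name is illustrative):

    # Convert an electrical beam current of multiply charged ions into a particle rate.
    ELEMENTARY_CHARGE = 1.602e-19   # coulombs

    def ion_rate(current_amperes, charge_state):
        """Ions per second carried by a beam of the given electrical current."""
        return current_amperes / (charge_state * ELEMENTARY_CHARGE)

    print(f"{ion_rate(400e-6, 4):.1e} ions per second")   # about 6.2e14 for 400 eµA of C4+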

NIRS, on the other hand, has been engaged in research on new treatments since April 2006 with a view to further development of the treatments at HIMAC. One of the most important aims of this project is to realize an “adaptive cancer radiotherapy” that can treat tumours accurately according to their changing size and shape during a treatment period. A beam-scanning method with a pencil beam, which is an active beam-delivery method, is recognized as being suitable for adaptive cancer radiotherapy.

NIRS, which treats both fixed and moving tumours, has proposed a fast 3D rescanning method with gated irradiation as a move towards the goal of adaptive cancer radiotherapy for treating both kinds of tumour. There are three essential technologies used in this method: new treatment planning that takes account of the extra dose when an irradiation spot moves; an extended flattop-operation of the HIMAC synchrotron, which reduces dead-time in synchrotron operation; and high-speed scanning magnets. These allow for 3D scanning that is about 100 times faster than conventional systems. Experiments using a test bench at the HIMAC facility (figure 3) have verified that the desired physical dose distribution is successfully obtained for both fixed and moving targets, with the expected result for the survival of human salivary-gland cells.

Using this technology, a new treatment research facility is being constructed at HIMAC (figure 4). The facility, which is connected to the HIMAC accelerator system, will have three treatment rooms: two rooms equipped with both horizontal and vertical beam-delivery systems and one room with a rotating gantry. The facility building was completed in March and one treatment room will be finished in September. Following beam commissioning and pre-clinical research, the first treatment should take place in March 2011.

Future outlook

Following on from the pilot facility at GHMC, two additional projects for carbon-ion radiotherapy have been initiated in Japan: the Saga Heavy Ion Medical Accelerator in Tosu (Saga-HIMAT) and the Kanagawa Prefectural project. The Saga-HIMAT project started construction of a carbon-ion radiotherapy facility in February 2010. This is based on the design of the GHMC facility and will open in 2013. Although this facility has three treatment rooms, two will be opened initially and use the spiral beam-wobbling method. Of these, one will be equipped with horizontal and vertical beam-delivery systems and the other with horizontal and 45° beam-delivery systems. In the next stage, the third room will be opened and use horizontal and vertical beam-delivery systems with the fast 3D rescanning method developed by NIRS. The Kanagawa Prefectural Government has decided to construct a carbon-ion radiotherapy facility in the Kanagawa Prefectural Cancer Center. This will also be based on the design of the GHMC facility. Design work on the facility building began in April 2010 and it is expected to open in 2014.

More than 500,000 people in Japan are diagnosed with cancer every year and it is forecast that this number will continue to rise. In such a situation, the newly opened GHMC facility – following those at HIMAC and the Hyogo Ion Beam Medical Center – is expected to boost applications of carbon-ion radiotherapy in Japan. By 2014, five carbon-ion facilities and eight proton facilities will be operating. These are sure to play a key role in cancer radiotherapy treatment.

Collide – a cultural revolution

Would you employ me to run the LHC? Or perhaps to run an experiment at CERN with antimatter? After all, I have an abiding interest in physics – ever since an inspiring science teacher sparked my imagination with the Van de Graaff generator and the laws of gravity. I have no expertise and little experience in physics – just a schoolgirl’s love of equations combined with joyful enthusiasm and a wish to understand and engage with what it is that makes the world work.


Now turn this question round: would you ask a physicist to devise an arts programme or CERN’s first cultural policy for engaging with the arts? What would your answer be? All right, I admit it. This is deliberate provocation. So let me explain.

Much has been written about the two cultures – art and science. It is a false distinction, which was imposed in the Age of Enlightenment and which in the 21st century we are finally beginning to shake off. Leonardo da Vinci made no such distinction between art and science. Aristotle most definitely did not. As the physicist-turned-poet Mario Petrucci says: “I have found that the rigour and precision of the scientist is not foreign to the poet, just as the faith-leaps of poetry are not excluded from the drawing boards of science.” The arts and science are kissing cousins. Their practitioners love knowledge and discovering how and why we exist in the world. They just express it in different ways.

Where there is a distinction between art and science – one that has contributed to this misunderstanding of how intimately related they really are – it is in the ways in which people’s work is judged and evaluated. Cultural knowledge and expertise in the arts can seem totally mystifying. Why is one artist judged as great, and another not? There are no equations to evaluate and therefore no absolutes. The arts seem to be muddy waters of individual will, taste and whimsical patronage. But this is a simplistic distortion of a more complex and nuanced picture.

Arts specialism is all about knowledge and understanding. It is about knowing inside-out the history of art forms – whether dance, music, literature, the visual arts or film – and possessing the expertise to evaluate contemporary work; to spot the innovative and the boundary-bursting, as well as the great and exceptional. History lies at its heart – arts knowledge exists on a space–time continuum of reflection and understanding of the creative processes. Moreover, at the heart of this is what every scientist understands: peer review. Experts who are used to working with artists, who know what they are realistically capable of, as well as understanding the past and therefore the present and the future, choose and select projects and individuals for everything from exhibitions and showings, to competitions and grants, for example.

Which takes us to a bold and brave new experiment at CERN and my presence there. Don’t worry. I am not tinkering with the LHC. The collisions and interactions that I will be working with are all of the cultural kind. My expertise, knowledge and experience are in the field of arts and culture – 25 years of working in that arena, working with science too. The director-general, Rolf Heuer, has the vision and the wish to express the crucial inter-relationship of arts and science that makes culture. To do this, I am raising the partnerships and funds for “Collide” – an international arts residency programme at CERN – in which artists will come every year from different art forms to engage with scientists in a mutual exchange of knowledge and understanding through workshops, lectures and informal talks, and to begin to make new work. Who knows what the artists will create? Or the scientists for that matter? A spark chamber of poetry? A dance that defies gravity? Light sculptures that tunnel into the sky? Who knows? That depends on the serendipity of who applies and how they interact with whom and what is at CERN.

Crucially, the artists in residence will be selected by a panel of leading scientists working alongside leading arts specialists – directors, producers, curators, artists – so that mutual understanding and appreciation of how cultural knowledge works, and how expert judgements are made, can develop and be exchanged. This is one of the key strategies of a new cultural policy for engaging with the arts that I have devised for CERN. After all, great science deserves great art. Nothing less will do for the place that pushes knowledge to beyond known limits.

Nevertheless, at the heart of the arts at CERN is the critical connection between the lateral and logical minds that artists and scientists both have. “Collide” will be a way of showing this, of encouraging scientists and artists to work together in a structured programme of interplay and exchange. It will also be a way of presenting an all-encompassing vision of CERN to the outside world on different platforms – from stage and screen to canvas and the orchestra – showing CERN’s status as a major force in culture, as befits the home of the LHC and what some consider to be the biggest, most significant experiment on Earth.

The Standard Model and Beyond

by Paul Langacker, CRC Press. Hardback ISBN 9781420079067, £49.99 ($79.95).


The Standard Model of elementary particles and their electromagnetic, weak and strong interactions is a fabulously successful theory. Tests of quantum electrodynamics have been made to a precision of better than one part in a billion; electroweak tests approach the level of one part in a hundred thousand; and even tests of quantum chromodynamics, which are intrinsically more challenging, are being made at the per-cent level.

Yet, despite this, we are still sure that the Standard Model cannot be the “ultimate” theory. We have yet to account theoretically for the exciting observations of the recent decades, namely, massive neutrinos, dark matter and dark energy, which provide direct evidence for new physics processes. We cannot account for the observed patterns of the masses of the fermion building blocks of matter, their manifestation in three generations or “families”, the “mixing” between the generations, or why the universe seems to contain almost no antimatter. And we don’t yet understand how to incorporate gravity in terms of a quantum-field theory.

Theoreticians have not been idle in developing models of the new physics that could underlie the Standard Model and that ought to manifest itself at the tera-electron-volt energy scale, such as alternative spontaneous electroweak symmetry-breaking mechanisms, supersymmetry and string theories, for example. However, within the framework of the Standard Model itself, we have yet to observe the Higgs boson, the presence of which is required to account for the generation of the masses of the W and Z particles.

This substantial book – at more than 600 pages – gives a detailed and lucid summary of the theoretical foundations of the Standard Model, and possible extensions beyond it.

Chapter 1 sets up the required notations and conventions needed for the ensuing theoretical survey. Chapter 2 reviews the basics of perturbative field theory and leads, via an introduction to discrete symmetry principles, to quantum electrodynamics. Group theory, global symmetries and symmetry breaking are reviewed in Chapter 3, which forms the foundation for the presentation of local symmetries and gauge theories in Chapter 4, where the Higgs mechanism is first introduced.

The heart of the book lies in Chapters 5 (strong interactions), 6 (weak interactions) and 7 (the electroweak theory), which at more than 170 pages is the most substantial. These chapters present a clear theoretical discussion of key physical processes, along with the phenomenology required for a comparison with data, and a brief summary of the relevant experimental results. Precision tests of the Standard Model are summarized, and the framework is introduced for parametrizing the head-room for new physics effects that go beyond it.

The final chapter summarizes the known deficiencies of the Standard Model and introduces the well-developed extensions: supersymmetry, extended gauge groups and grand unified theories. Fortunately, now that the LHC is up and running, we should expect to start to address experimentally at least some of these theoretical speculations. LHC results will provide the sieve for filtering the profound and accurate, versus the merely beautiful and mathematically seductive, models of nature.

The book ranges over huge swathes of theoretical territory and is self-consciously broad, rather than deep, in terms of coverage. I heartily recommend it to particle physicists as a great single-volume reference, especially useful to experimentalists. It also provides a firm, graduate-level foundation for theoretical physicists who plan to pursue concepts beyond the Standard Model to a greater depth.

 

Let the physics begin at the LHC


The big LHC experiments have been 20 years in the making; the meeting at which the proto-collaborations first presented their ideas publicly took place in Evian-les-Bains in March 1992. Over the past few years, as the huge and complex apparatus neared completion, they have gathered data from cosmic rays. While this was important for testing and aligning the multilayered detectors, as well as for exercising data-acquisition systems, it was only in November and December last year that the collaborations had their first sight of the long-awaited collisions at the LHC, first at 900 GeV in the centre of mass and then at 2.36 TeV. Collision data at 7 TeV are now beginning to roll in (The LHC’s new frontier). In the meantime the collaborations have been eager to make the most of the data obtained last year and the first LHC physics publications have appeared.

The ALICE collaboration was first off the mark in 2009, with the submission of a paper on the analysis of the 284 events recorded during the first burst of collisions on 23 November. The paper, which presents the measurement of the pseudorapidity density of charged primary particles in the central region at 900 GeV in the centre of mass, was accepted for publication in the European Physical Journal C on 3 December (ALICE Collaboration 2010). It compares the measurement for proton–proton collisions at the LHC with those from earlier experiments, including UA1 and UA5 at CERN, which collected data for proton–antiproton collisions at 900 GeV in the centre of mass.

On 4 February the CMS collaboration followed suit with a submission to the Journal of High Energy Physics, which was refereed and accepted for publication three days later. This paper presents measurements of inclusive charged-hadron transverse-momentum and pseudorapidity distributions for proton–proton collisions at both 900 GeV and 2.36 TeV, based on data collected in December (CMS collaboration 2010). The results at 900 GeV are in agreement with previous measurements by UA5 and UA1, as well as with those from ALICE, and they confirm the expectation of near-equal hadron production in proton–antiproton and proton–proton collisions. The results at 2.36 TeV are in a new high-energy region, however, and they indicate an increase of charged-hadron multiplicity with energy that is steeper than expected.


On 16 March, it was the turn of ATLAS, with a paper submitted to Physics Letters B entitled “Charged-particle multiplicities in pp interactions at √s = 900 GeV measured with the ATLAS detector at the LHC”. This details the collaboration’s first measurements with some 300,000 inelastic events collected in December using a minimum-bias trigger during collisions at 900 GeV (ATLAS collaboration 2010). It presents results for the charged-particle multiplicity, its dependence on transverse momentum and pseudorapidity, and the relationship between mean transverse momentum and charged-particle multiplicity, measured for events with at least one charged particle in the kinematic range |η| < 2.5 and pT > 500 MeV. The results indicate that the charged-particle multiplicity per event and unit of pseudorapidity at η = 0 is some 5–15% higher than the Monte Carlo models predict.

These papers are just the first glimpses of physics at the LHC. To support what is set to be an extensive programme of physics, the LHC Physics Centre at CERN has recently started up. It aims to collect together a variety of initiatives to support the LHC physics programme, from the organization of workshops to the development of physics tools (see http://cern.ch/lpcc).

Borexino gets a first look inside the Earth


The Borexino Collaboration has announced the observation of geoneutrinos at the underground Gran Sasso National Laboratory of the Italian Institute for Nuclear Physics (INFN). The data reveal, for the first time, an antineutrino signal well above background with the energy spectrum expected for radioactive decays of uranium and thorium in the Earth.

The Borexino Collaboration, comprising institutes from Italy, the US, Germany, Russia, Poland and France, operates a 300-tonne liquid-scintillator detector designed to observe and study low-energy solar neutrinos. Technologies developed by the collaboration have enabled them to achieve very low background levels in the detector, which were crucial in making the first measurements of solar neutrinos below 1 MeV. The central core of Borexino now has the lowest background available for such observations and this has been key to the detection of geoneutrinos.

Geoneutrinos are antineutrinos produced in the radioactive decays of naturally occurring uranium, thorium, potassium and rubidium. Decays from these radioactive elements are believed to contribute a significant but unknown fraction of the heat generated inside the Earth. This heat produces convective movements in the mantle, which influence volcanic activity and the tectonic-plate movements that induce seismic activity, as well as the geo-dynamo that creates the Earth’s magnetic field.

The importance of geoneutrinos was pointed out by Gernot Eder and George Marx in the 1960s, and in 1984 a seminal study by Lawrence Krauss, Sheldon Glashow and David Schramm laid the foundation for the field. In 2005, the KamLAND Collaboration reported an excess of low-energy antineutrinos above background in their detector in the Kamioka mine in Japan. Owing to a high background from internal radioactivity and antineutrinos emitted from nearby nuclear power plants, the KamLAND Collaboration reported that the excess events were an “indication” of geoneutrinos.

With 100 times lower background than KamLAND, the Borexino data reveal a clear low-background signal for antineutrinos, which matches the energy spectrum of uranium and thorium geoneutrinos. The lower background is a consequence both of the scintillator purification and the construction methods developed by the Borexino Collaboration to optimize radio-purity, and of the absence of nearby nuclear-reactor plants.

The origin of the known 40 TW of power produced within the Earth is one of the fundamental questions of geology. The definitive detection of geoneutrinos by Borexino confirms that radioactivity contributes a significant fraction, possibly most, of this power. Other sources of power are possible, the main one being cooling from the primordial condensation of the hot Earth. A powerful natural geo-nuclear reactor at the centre of the Earth has also been suggested, but it is ruled out as a significant energy source because the high rate of antineutrinos associated with such a geo-reactor is not seen in the Borexino data.

Although radioactivity can account for a significant part of the Earth’s internal heat, measurements with a global array of geoneutrino detectors above continental and oceanic crust will be needed for a detailed understanding. By exploiting the unique features of the geoneutrino probe, future data from Borexino, KamLAND and the upcoming SNO+ detector in Canada should provide a more complete understanding of the Earth’s interior and the source of its internal heat.

Canada explores cyclotron solution to isotope shortage


The world’s most in-demand isotope for medical-imaging purposes is 99mTc, a daughter of the isotope 99Mo. 99Mo has been produced in plentiful supplies for the entire world chiefly by two research reactors: one in Canada and the other in the Netherlands. Both of these reactors are currently down for difficult repairs related to their age – the younger one is 47 years old.

One mitigating factor in maintaining the supply of 99Mo has been the immense co-operation among medical-isotope suppliers and consumers around the world, primarily brokered through working groups of the International Atomic Energy Agency and several industrial associations. However, in the face of the supply shortages – the pair of reactors produced 65% of the world’s 99Mo – Canada has been examining alternatives.

At the end of March the government of Canada released its policy response to an expert advisory panel that analysed the situation in autumn 2009. The report highlights two main alternative routes to the isotope that is currently in so much demand: cyclotrons (producing 99mTc directly, with new target materials) and linear accelerators (producing 99Mo using photo-neutron processes on 100Mo or photo-fission of 238U).

Cyclotrons have been used around the world for four decades to produce isotopes useful for medical-imaging purposes ranging from 11C and 18F to 82Sr. The primary method to be explored for the cyclotron approach to the manufacture of 99mTc utilizes the 100Mo(p,2n)99mTc reaction. When bombarding the 100Mo target foil with an energetic proton beam, 99mTc is produced in direct reactions and can then be extracted. High yields of 99mTc from this reaction depend on three things: high-energy cyclotrons, high-intensity beams and high-efficiency 100Mo targets – all of which will be developed and tested in the next year or so.
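Written out explicitly, the production channel quoted above is

\[
{}^{100}_{42}\mathrm{Mo} + p \;\longrightarrow\; {}^{99\mathrm{m}}_{43}\mathrm{Tc} + 2n,
\]

with the charge (42 + 1 = 43) and the number of nucleons (100 + 1 = 99 + 2) balancing on the two sides.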

Along with a team of researchers and clinicians from across Canada, TRIUMF, the University of British Columbia and BC Cancer Agency have received initial Canadian government support to begin benchmarking and then optimizing the 99mTc yield from this process. Other groups are following suit along with several private companies.

If the technology pans out, and the contamination by ground-state 99Tc is controllable in the extracted 99mTc samples, it will be a new “killer app” for medical-isotope cyclotrons. Fine-tuning will be needed to select the optimal beam energy of the protons as well as the target geometries and the extraction and separation procedures. 99mTc produced directly at cyclotrons would be limited to local use because its six-hour half-life prevents it from being shipped round the world as 99Mo currently is (with its 66-hour half-life). However, this technology could provide an important supplement in major urban centres where cyclotron capacity exists for burgeoning nuclear-medicine departments. Cyclotron-produced 99mTc would reduce the need for 99Mo from reactors.
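The effect of the half-lives on distribution is easy to quantify: after a 24-hour shipment only about 6% of directly produced 99mTc would remain, whereas about 78% of a 99Mo consignment would survive. A minimal Python sketch of this comparison (the half-lives are those quoted above; the function name is illustrative):

    # Fraction of a radioactive sample remaining after a given time,
    # from N(t) = N0 * 0.5**(t / half_life).
    def remaining_fraction(elapsed_hours, half_life_hours):
        return 0.5 ** (elapsed_hours / half_life_hours)

    print(remaining_fraction(24, 6))    # 99mTc after 24 h: ~0.06
    print(remaining_fraction(24, 66))   # 99Mo after 24 h: ~0.78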

Independent of this innovation, cyclotrons have a bright future in nuclear medicine. The new isotopes and radiopharmaceuticals being developed using the so-called PET isotopes could eventually overtake the market dominance of 99Mo, so that cyclotrons will be everywhere.

Astronomers miss 90% of distant galaxies

The rate of star formation in the early universe is mainly deduced from a specific hydrogen emission line observed in remote galaxies. It was already suspected that this Lyman-α line is strongly absorbed by dust, but not to the extent now found by a careful study of the effect using the Very Large Telescope (VLT) of the European Southern Observatory (ESO). It turns out that on average only about 5% of the emitted radiation escapes the galaxies, which in turn means that almost 90% of remote star-forming galaxies cannot be detected by current methods.

The formation of the first galaxies and stars started in the first 100 million years after the Big Bang. Star-forming galaxies are characterized by the presence of short-lived, massive stars that emit predominantly ultraviolet light, which ionizes the gas in their neighbourhood. The recombination of ionized hydrogen results in a series of emission lines corresponding to the transition between different excitation levels of the atoms. The strongest lines are the Lyman-α emission at an ultraviolet wavelength of 121.6 nm and the Balmer H-α line visible in red at 656.3 nm. For a distant galaxy, these lines are observed at longer wavelengths because of the expansion of the universe. A galaxy at a redshift of z = 2 will have the lines shifted towards longer wavelengths by a factor of z + 1 = 3, making the Lyman-α line almost visible and moving the H-α line to the near infrared.
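As a worked example of the shift described above, a line emitted at rest wavelength λ_rest by a galaxy at redshift z is observed at

\[
\lambda_{\mathrm{obs}} = (1 + z)\,\lambda_{\mathrm{rest}} ,
\]

so at z = 2 the Lyman-α line appears at 3 × 121.6 nm ≈ 365 nm, just shortward of the visible range, while H-α appears at 3 × 656.3 nm ≈ 1969 nm, in the near infrared. At the z = 2.2 of the survey described below, the corresponding values are about 389 nm and 2100 nm.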

The Lyman-α line has an ideal wavelength to identify ionized hydrogen gas in high-redshift galaxies with telescopes operating in visible light. It is furthermore typically 8.7 times brighter than the Balmer H-α line, which makes it the prime tracer of star formation at high redshift. Lyman-α is, however, also a resonant line, which means that its photons scatter on neutral hydrogen. This is a problem because it keeps the Lyman-α photons inside the galaxy for a long time, greatly increasing their chance of being absorbed by dust before they eventually escape the galaxy.

The escape fraction of Lyman-α photons from a galaxy is difficult to determine. Model-dependent estimations based on galaxies observed at high redshift (z = 2–3) previously suggested an escape fraction between 30% and 60% on average. This is far above the measurement obtained now by an international group of astronomers led by Matthew Hayes from the Observatory of the University of Geneva. They have obtained an escape fraction of 5.3 ± 3.8% with a firm model-independent upper limit of 10.7 ± 2.8% at a redshift of z = 2.2, which corresponds to galaxies whose light took 10 thousand million years to reach Earth (Hayes et al. 2010).

The result was obtained by looking at a field of galaxies with dedicated narrow-band filters to get the Lyman-α and H-α line emission at this particular redshift. The GOODS-South (Great Observatories Origins Deep Survey) field of view was chosen because it was observed previously by different instruments, which had already characterized the properties of its galaxies. A custom-built filter for Lyman-α was mounted on the FOcal Reducer and low-dispersion Spectrograph (FORS) camera on one of the four 8.2-m telescopes of the VLT, while the H-α line emission was recorded by the new High Acuity Wide field K-band Imaging (HAWK-I) camera attached to another VLT telescope.

The analysis of these unique observations shows that the Lyman-α line is undetectable in most star-forming galaxies. Indeed, 90% of the galaxies remain unnoticed: for every 10 galaxies detected, there should be about 100 in total. The determination of this huge proportion of missed galaxies will allow astronomers to obtain a far more accurate description of the history of star formation in the universe.

Further reading:

M Hayes et al. 2010 Nature 464 562.

Black holes and qubits


Quantum entanglement lies at the heart of quantum information theory (QIT), with applications to quantum computing, teleportation, cryptography and communication. In the apparently separate world of quantum gravity, the Hawking effect of radiating black holes has also occupied centre stage. Despite their apparent differences it turns out that there is a correspondence between the two (Duff 2007; Kallosh and Linde 2006).

Whenever two disparate areas of theoretical physics are found to share the same mathematics, it frequently leads to new insights on both sides. Indeed, this correspondence turned out to be the tip of an iceberg: knowledge of string theory and M-theory leads to new discoveries about QIT, and vice versa.

Bekenstein-Hawking entropy

Every object, such as a star, has a critical size that is determined by its mass, which is called the Schwarzschild radius. A black hole is any object smaller than this. Once something falls inside the Schwarzschild radius, it can never escape. This boundary in space–time is called the event horizon. So the classical picture of a black hole is that of a compact object whose gravitational field is so strong that nothing – not even light – can escape.

Yet in 1974 Stephen Hawking showed that quantum black holes are not entirely black but may radiate energy. In that case, they must possess the thermodynamic quantity called entropy. Entropy is a measure of how disorganized a system is and, according to the second law of thermodynamics, it can never decrease. Noting that the area of a black hole’s event horizon can never decrease, Jacob Bekenstein had earlier suggested such a thermodynamic interpretation implying that black holes must have entropy. This Bekenstein–Hawking black-hole entropy is given by one quarter of the area of the event horizon.
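Written as a formula (the standard expression, not spelled out in the article), the Bekenstein–Hawking entropy of a black hole with event-horizon area A is

\[
S_{\mathrm{BH}} = \frac{k_B c^3}{4 G \hbar}\,A = k_B\,\frac{A}{4\,l_P^2} ,
\]

where \(l_P = \sqrt{G\hbar/c^3}\) is the Planck length – so “one quarter of the area” is to be understood in Planck units.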

Entropy also has a statistical interpretation as a measure of the number of quantum states available. However, it was not until 20 years later that string theory provided a microscopic explanation of this kind for black holes.

Bits and pieces

A bit in the classical sense is the basic unit of computer information and takes the value of either 0 or 1. A light switch provides a good analogy; it can either be off, denoted 0, or on, denoted 1. A quantum bit or “qubit” can also have two states but whereas a classical bit is either 0 or 1, a qubit can be both 0 and 1 until we make a measurement. In quantum mechanics, this is called a superposition of states. When we actually perform a measurement, we will find either 0 or 1 but we cannot predict with certainty what the outcome will be; the best we can do is to assign a probability to each outcome.

There are many different ways to realize a qubit physically. Elementary particles can carry an intrinsic spin. So one example of a qubit would be a superposition of an electron with spin up, denoted 0, and an electron with spin down, denoted 1. Another example of a qubit would be the superposition of the left and right polarizations of a photon. So a single qubit state, usually called Alice, is a superposition of Alice-spin-up 0 and Alice-spin-down 1, represented by the line in figure 1. The most general two-qubit state, Alice and Bob, is a superposition of Alice-spin-up–Bob-spin-up 00, Alice-spin-up–Bob-spin-down 01, Alice-spin-down–Bob-spin-up 10 and Alice-spin-down–Bob-spin-down 11, represented by the square in figure 1.

Consider a special two-qubit state that is just 00 + 01. Alice can only measure spin up but Bob can measure either spin up or spin down. This is called a separable state; Bob’s measurement is uncorrelated with that of Alice. By contrast, consider 00 + 11. If Alice measures spin up, so too must Bob, and if she measures spin down so must he. This is called an entangled state; Bob cannot help making the same measurement. Mathematically, the square in figure 1 forms a 2 × 2 matrix and a state is entangled if the matrix has a nonzero determinant.
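To spell out the determinant criterion (a standard result, restating the matrix condition above), write the general two-qubit state as

\[
|\psi\rangle = a_{00}\,|00\rangle + a_{01}\,|01\rangle + a_{10}\,|10\rangle + a_{11}\,|11\rangle ;
\]

the state is entangled precisely when \(\det a = a_{00}a_{11} - a_{01}a_{10} \neq 0\). For the two examples above, 00 + 01 gives \(\det a = 1\cdot 0 - 1\cdot 0 = 0\) (separable), whereas 00 + 11 gives \(\det a = 1\cdot 1 - 0\cdot 0 = 1\) (entangled).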

This is the origin of the famous Einstein–Podolsky–Rosen (EPR) paradox put forward in 1935. Even if Alice is in Geneva and Bob is millions of miles away in Alpha Centauri, Bob’s measurement will still be determined by that of Alice. No wonder Albert Einstein called it “spooky action at a distance”. EPR concluded rightly that if quantum mechanics is correct then nature is nonlocal, and if we insist on local “realism” then quantum mechanics must be incomplete. Einstein himself favoured the latter hypothesis. However, it was not until 1964 that CERN theorist John Bell proposed an experiment that could decide which version was correct – and it was not until 1982 that Alain Aspect actually performed the experiment. Quantum mechanics was right, Einstein was wrong and local realism went out the window. As QIT developed, the impact of entanglement went far beyond the testing of the conceptual foundations of quantum mechanics. Entanglement is now essential to numerous quantum-information tasks such as quantum cryptography, teleportation and quantum computation.

Cayley’s hyperdeterminant

As a high-energy theorist involved in research on quantum gravity, string theory and M-theory, I paid little attention to any of this, even though, as a member of staff at CERN in the 1980s, my office was just down the hall from Bell’s.

My interest was not aroused until 2006, when I attended a lecture by the Hungarian physicist Peter Levay at a conference in Tasmania. He was talking about three qubits, Alice, Bob and Charlie, where we have eight possibilities – 000, 001, 010, 011, 100, 101, 110 and 111 – represented by the cube in figure 1. Wolfgang Dür and colleagues at the University of Innsbruck have shown that three qubits can be entangled in several physically distinct ways: tripartite GHZ (Greenberger–Horne–Zeilinger), tripartite W, biseparable A-BC, separable A-B-C and null, as shown in the left-hand diagram of figure 2 (Dür et al. 2000).


The GHZ state is distinguished by a nonzero quantity known as the 3-tangle, which measures genuine tripartite entanglement. Mathematically, the cube in figure 1 forms what in 1845 the mathematician Arthur Cayley called a “2 × 2 × 2 hypermatrix” and the 3-tangle is given by the generalization of a determinant called Cayley’s hyperdeterminant.
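For reference, the standard expression for Cayley’s hyperdeterminant of a 2 × 2 × 2 hypermatrix \(a_{ABC}\) (with A, B, C = 0, 1) is

\[
\begin{aligned}
\mathrm{Det}\,a ={}& a_{000}^2 a_{111}^2 + a_{001}^2 a_{110}^2 + a_{010}^2 a_{101}^2 + a_{100}^2 a_{011}^2 \\
&- 2\,\bigl(a_{000}a_{001}a_{110}a_{111} + a_{000}a_{010}a_{101}a_{111} + a_{000}a_{100}a_{011}a_{111} \\
&\qquad + a_{001}a_{010}a_{101}a_{110} + a_{001}a_{100}a_{011}a_{110} + a_{010}a_{100}a_{011}a_{101}\bigr) \\
&+ 4\,\bigl(a_{000}a_{011}a_{101}a_{110} + a_{001}a_{010}a_{100}a_{111}\bigr) ,
\end{aligned}
\]

and the 3-tangle is \(\tau_{ABC} = 4\,|\mathrm{Det}\,a|\) (up to the overall sign convention chosen for Det a).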

The reason this sparked my interest was that Levay’s equations reminded me of some work I had been doing on a completely different topic in the mid-1990s with my collaborators Joachim Rahmfeld and Jim Liu (Duff et al. 1996). We found a particular black-hole solution that carries eight charges (four electric and four magnetic) and involves three fields called S, T and U. When I got back to London from Tasmania I checked my old notes and asked what would happen if I identified S, T and U with Alice, Bob and Charlie so that the eight black-hole charges were identified with the eight numbers that fix the three-qubit state. I was pleasantly surprised to find that the Bekenstein–Hawking entropy of the black holes was given by the 3-tangle: both were described by Cayley’s hyperdeterminant.

Octonions and super qubits

According to supersymmetry, for each known boson (integer spin 0, 1, 2 and so on) there is a fermion (half-integer spin 1/2, 3/2, 5/2 and so on), and vice versa. CERN’s Large Hadron Collider will be looking for these superparticles. The number of supersymmetries is denoted by N and ranges from 1 to 8 in four space–time dimensions.

CERN’s Sergio Ferrara and I have extended the STU model example, which has N = 2, to the most general case of black holes in N = 8 supergravity. We have shown that the corresponding system in quantum-information theory is that of seven qubits (Alice, Bob, Charlie, Daisy, Emma, Fred and George), undergoing at most a tripartite entanglement of a specific kind as depicted by the Fano plane of figure 3.


The Fano plane has a strange mathematical property: it describes the multiplication table of a particular kind of number: the octonion. Mathematicians classify numbers into four types: real numbers, complex numbers (with one imaginary part A), quaternions (with three imaginary parts A, B, D) and octonions (with seven imaginary parts A, B, C, D, E, F, G). Quaternions are noncommutative because AB does not equal BA. Octonions are both noncommutative and nonassociative because (AB)C does not equal A(BC).

Real, complex and quaternion numbers show up in many physical contexts. Quantum mechanics, for example, is based on complex numbers and Pauli’s electron-spin operators are quaternionic. Octonions have fascinated mathematicians and physicists for decades but have yet to find any physical application. In recent books, both Roger Penrose and Ray Streater have characterized octonions as one of the great “lost causes” in physics. So we hope that the tripartite entanglement of seven qubits (which is just at the limit of what can be reached experimentally) will prove them wrong and provide a way of seeing the effects of octonions in the laboratory (Duff and Ferrara 2007; Borsten et al. 2009a).

In another development, QIT has been extended to super-QIT with the introduction of the superqubit, which can take on three values: 0 or 1 or $. Here 0 and 1 are “bosonic” and $ is “fermionic” (Borsten et al. 2009b). Such values can be realized in condensed-matter physics, such as the excitations of the t-J model of strongly correlated electrons, known as spinons and holons. The superqubits promise totally new effects. For example, despite appearances, the two-superqubit state $$ is entangled. Superquantum computing is already being investigated (Castellani et al. 2010).

Strings, branes and M-theory

If current ideas are correct, a unified theory of all physical phenomena will require some radical ingredients in addition to supersymmetry. For example, there should be extra dimensions: supersymmetry places an upper limit of 11 on the dimension of space–time. The kind of real, four-dimensional world that supergravity ultimately predicts depends on how the extra seven dimensions are rolled up, in a way suggested by Oskar Kaluza and Theodor Klein in the 1920s. In 1984, however, 11-dimensional supergravity was knocked off its pedestal by superstring theory in 10 dimensions. There were five competing theories: the E8 × E8 heterotic, the SO(32) heterotic, the SO(32) Type I, and the Type IIA and Type IIB strings. The E8 × E8 seemed – at least in principle – capable of explaining the elementary particles and forces, including their handedness. Moreover, strings seemed to provide a theory of gravity that is consistent with quantum effects.

However, the space–time of 11 dimensions allows for a membrane, which may take the form of a bubble or a two-dimensional sheet. In 1987 Paul Howe, Takeo Inami, Kelly Stelle and I showed that if one of the 11 dimensions were a circle, we could wrap the sheet round it once, pasting the edges together to form a tube. If the radius becomes sufficiently small, the rolled-up membrane ends up looking like a string in 10 dimensions; it yields precisely the Type IIA superstring. In a landmark talk at the University of Southern California in 1995, Ed Witten drew together all of this work on strings, branes and 11 dimensions under the umbrella of M-theory in 11 dimensions. Branes now occupy centre stage as the microscopic constituents of M-theory, as the higher-dimensional progenitors of black holes and as entire universes in their own right.

Such breakthroughs have led to a new interpretation of black holes as intersecting black-branes wrapped round the seven curled dimensions of M-theory or six of string theory. Moreover, the microscopic origin of the Bekenstein-Hawking entropy is now demystified. Using Polchinski’s D-branes, Andrew Strominger and Cumrun Vafa were able to count the number of quantum states of these wrapped branes (Strominger and Vafa 1996). A p-dimensional D-brane (or Dp-brane) wrapped round some number p of the compact directions (x4, x5, x6, x7, x8, x9) looks like a black hole (or D0-brane) from the four-dimensional (x0, x1, x2, x3) perspective. Strominger and Vafa found an entropy that agrees with Hawking’s prediction, placing another feather in the cap of M-theory. Yet despite all of these successes, physicists are glimpsing only small corners of M-theory; the big picture is still lacking. Over the next few years we hope to discover what M-theory really is. Understanding black holes will be an essential prerequisite.
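As an illustration of the kind of matching involved – using the familiar three-charge configuration of D1-branes, D5-branes and momentum, essentially the case that Strominger and Vafa analysed – the count of brane microstates gives, at leading order,

\[
S_{\mathrm{micro}} = 2\pi \sqrt{N_1 N_5 N_p} ,
\]

where \(N_1\), \(N_5\) and \(N_p\) are the numbers of D1-branes, D5-branes and units of momentum; this reproduces the Bekenstein–Hawking value of one quarter of the horizon area computed from the corresponding black-hole geometry.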

Falsifiable predictions?

The partial nature of our understanding of string/M-theory has so far prevented any kind of smoking-gun experimental test. This has led some critics of string theory to suggest that it is not true science. This is easily refuted by studying the history of scientific discovery; the 30-year time lag between the EPR idea and Bell’s falsifiable prediction provides a nice example (see Further reading). Nevertheless it cannot be denied that such a prediction in string theory would be welcome.


In the string literature one may find D-brane intersection rules that tell us how N branes can intersect over one another and the fraction of supersymmetry (susy) that they preserve (Bergshoeff et al. 1997). In our black hole/qubit correspondence, my students Leron Borsten, Duminda Dahanayake, Hajar Ebrahim, William Rubens and I showed that the microscopic description of the GHZ state, 000 + 011 + 101 + 110, is that of the N = 4, 1/8-susy case of D3-branes of Type IIB string theory (Borsten et al. 2008). We denoted the wrapped circles by crosses and the unwrapped circles by noughts; 0 corresponds to XO and 1 to OX, as in table 1. So the number of qubits here is three because the number of extra dimensions is six. This also explains where the two-valuedness enters on the black-hole side. To wrap or not to wrap; that is the qubit.

Repeating the exercise for the N < 4 cases and using our dictionary, we see that string theory predicts the three-qubit entanglement classification of figure 2, which is in complete agreement with the standard results of QIT. Allowing for different p-branes wrapping different dimensions, we can also describe “qutrits” (three-state systems) and more generally “qudits” (d-state systems). Furthermore, for the well documented cases of 2 × 2, 2 × 3, 3 × 3, 2 × 2 × 3 and 2 × 2 × 4, our D-brane intersection rules are also in complete agreement. However, for higher entanglements, such as 2 × 2 × 2 × 2, the QIT results are partial or not known, or else contradictory. This is currently an active area of research in QIT because the experimentalists can now control entanglement with a greater number of qubits. One of our goals is to use the allowed wrapping configurations and D-brane intersection rules to predict new qubit-entanglement classifications.

So the esoteric mathematics of string and M-theory might yet find practical applications.

 

MoEDAL becomes the LHC’s magnificent seventh


On 2 December 2009 the CERN Research Board approved the LHC’s seventh experiment: the Monopole and Exotics Detector At the LHC (MoEDAL). The prime motivation of this experiment is to search for the direct production of the magnetic monopole at the LHC. Another physics aim is the search for exotic, highly ionizing, stable (or pseudo-stable) massive particles (SMPs) with conventional electric charge. Although MoEDAL is a small experiment by LHC standards it has a huge physics potential that complements the already wide vista of the existing LHC experiments.

The scientific quest for the magnetic monopole – a single magnetic charge, or pole – began during the siege of Lucera in 1269 with the Picard Magister, Petrus Peregrinus. He was a Franciscan monk, a soldier, a scientist and a former tutor to Roger Bacon, who considered him the foremost experimentalist of his day. It was during this siege that Peregrinus put the finishing touches to a long letter entitled the Epistole de Magnete, which is his only surviving work. In this document, Peregrinus scientifically established that magnets have two poles, which he called the north and south poles.

In 1864 the Scottish physicist James Clerk Maxwell published the 19th-century equivalent of a grand unified theory, which encompassed the separate electric and magnetic forces into a single electromagnetic force (Maxwell 1864). Maxwell banished isolated magnetic charges from his four equations because no isolated magnetic pole had ever been observed. This brilliant simplification, however, led to asymmetric equations, which called for the aesthetically more attractive symmetric theory that would result if a magnetic charge did exist. Thirty years later, Pierre Curie looked into the possibility of free magnetic charges and found no grounds why they should not exist, although he added that it would be bold to deduce that such objects therefore existed (Curie 1894).

Paul Dirac, in a paper published in 1931, proved that the existence of the magnetic monopole was consistent with quantum theory (Dirac 1931 and 1948). In this paper, he showed that the existence of the magnetic monopole not only symmetrized Maxwell’s equations, but also explained the quantization of electric charge. To Dirac the beauty of mathematical reasoning and physical argument were instruments for discovery that, if used fearlessly, would lead to unexpected but valid conclusions. Perhaps the single contribution that best illustrates Dirac’s courage is his work on the magnetic monopole. Today, magnetic-monopole solutions are found in many modern theories such as grand unified theories, string theory and M-theory. The big mystery is, where are they?

In the 1980s, two experiments found signals induced in single superconducting loops that could have indicated the passage of monopoles, but firmer evidence with coincidences in two loops was never found. Cosmic-ray experiments have also searched for monopoles but so far to no avail. For example, the Monopole, Astrophysics and Cosmic Ray Observatory (MACRO) detector in the Gran Sasso National Laboratory has set stringent upper limits. High-energy collisions at particle accelerators offer another obvious hunting ground for monopoles. Searches for their direct production have usually figured at any machine entering a new high-energy regime – and the LHC will be no exception.

New limits

At CERN, the search for magnetic monopoles – using dedicated detectors – began in 1961 with a counter experiment to sift through the secondary particles produced in proton–nucleus collisions at the PS (Fidecaro 1961). Over the following years, searches took place at the Intersecting Storage Rings and at the SPS. At the Large Electron–Positron (LEP) collider, the hunt for monopoles in e+e− collisions was carried out in two experiments: MODAL (the Monopole Detector at LEP), deployed at intersection point I6 on the LEP ring (Kinoshita et al. 1992); and the OPAL monopole detector, positioned around the beam pipe at the OPAL intersection point (Pinfold et al. 1993). These established new limits on the direct production of monopoles.

The international MoEDAL collaboration, made up of physicists from Canada, CERN, the Czech Republic, Germany, Italy, Romania and the US, is preparing to deploy the MoEDAL detector during the next long shutdown of the LHC, which will start late in 2011. The full detector comprises an array of approximately 400 nuclear track detectors (NTDs). Each NTD consists of a 10-layer stack of plastic (CR-39 and MAKROFOL) and altogether they have a total surface area of 250 m2. The detectors are deployed at the intersection region at Point 8 on the LHC ring, around the VErtex LOcator (VELO) of the LHCb detector, as figure 1 indicates. The MoEDAL collaboration positioned 1 m2 of test detectors before the LHC was closed for operation in November 2009. Figure 2 shows the detectors being installed. If feasible, they will be removed for analysis during the planned short shutdown at the end of 2010 and a substantial subset of the full detector system will be deployed for the run in 2011.


The MoEDAL detector is like a giant camera for photographing new physics in the form of highly ionizing particles, and the plastic NTDs are its “photographic film”. When a relativistic magnetic monopole – which has approximately 4700 times more ionizing power than a conventional charged minimum-ionizing particle – crosses the NTD stack it damages polymeric bonds in the plastic in a small cylindrical region around its trajectory. The subsequent etching of the NTDs leads to the formation of etch-pit cones around these trails of microscopic damage. These conical pits are typically of micrometre dimensions and can be observed with an optical microscope. Their size, shape and alignment yield accurate information about the effective Z/β ratio, where Z is the charge and β the speed, as well as the directional motion of the highly ionizing particle.

The main LHC experiments are designed to detect conventionally charged particles produced with a velocity high enough for them to travel through the detector within the LHC’s trigger window of 25 ns – the time between bunch crossings. Any exotic, highly ionizing SMPs produced at the LHC might not travel through the detector within this trigger window and so will have a low efficiency for detection. Also, the sampling time and reconstruction software of each sub-detector is optimized assuming that particles are travelling at close to the velocity of light. Hence, the quality of the read-out signal, reconstructed track or cluster may be degraded for an SMP, especially for subsystems at some distance from the interaction point.

Another challenge is that very highly ionizing particles can be absorbed before they penetrate the detector fully. Additionally, the read-out electronics of conventional LHC detector systems are usually not designed to have a wide enough dynamic range to measure the very large dE/dx of highly ionizing particles properly. In the case of the magnetic monopole there is also the problem of understanding the response of conventional LHC detector systems to particles with magnetic charge.

The MoEDAL experiment bypasses these experimental challenges by using a passive plastic NTD technique that does not require a trigger. Also, track-etch detectors provide a tried-and-tested method to detect and measure accurately the track of a very highly ionizing particle and its effective Z/β. Importantly, heavy-ion beams provide a demonstrated calibration technique because they leave energy depositions very similar to those of the hypothetical particles sought. If it exists, a magnetic monopole will leave a characteristic set of 20 collinear etch-pits. There is no other conventional particle that could produce such a distinctive signature – thus, even one event would herald a discovery.

One of the world’s leading string theorists, Joseph Polchinski, has reversed Dirac’s connection between magnetic monopoles and charge quantization. He has posited that in any theoretical framework that requires charge to be quantized, there will exist magnetic monopoles. He also maintains that in any fully unified theory, for every gauge field there will exist electric and magnetic sources. Speaking at the Dirac Centennial Symposium at Tallahassee in 2002, he commented that “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen” (Polchinski 2003). The MoEDAL collaboration is working to prove him right.

CERN@school brings real research to life


School is where students study what is in textbooks and university is where they start doing research. Or so most people think. It therefore comes as a surprise to discover that teenagers still at school can participate in a research programme that allies space science and earth science. While sceptical educators would argue that in a normal situation teachers have no time, energy, motivation or money for such projects, Becky Parker at Simon Langton Grammar School in the UK has proved that the opposite can be true.

Inspired during a visit to CERN in 2007, she decided to bring leading-edge research to her school. Instead of going back with a simple presentation about how CERN works, Becky took back a real detector and started sowing ideas about how to set up a real research programme, which she called CERN@school. Her ideas fell on fertile ground as her school in Canterbury, in the county of Kent, is one of the most active in implementing innovative ways of teaching science in the UK. One of the school’s declared goals is to “provide learning experiences which are enjoyable, stimulating and challenging and which encourage critical and innovative thinking”. Students at Simon Langton Grammar School do not just study science, they do it.

“During one of my visits to CERN, I had the opportunity to meet Michael Campbell of the Medipix collaboration, and his young enthusiastic team,” recalls Becky. “They showed me the Timepix chip that they were developing for particle and medical physics. I thought that something like this could be used in schools for conducting experiments with cosmic rays and radioactivity.”

Cross-collaboration

The Timepix chip is derived from Medipix2, a device developed at CERN that can accurately measure the position and energy of single photons hitting an associated detector. The most recent success of the Medipix collaboration is the Medipix3 chip, which is being used in a project to deliver the first X-ray images with colour (energy) information. Initially designed for use in medical physics and particle physics, the Timepix chip now has applications that include beta- and gamma-radiography of biological samples, materials analysis, monitoring of nuclear power-plant decommissioning and electron microscopy, as well as the adaptive optics that are used in large, ground-based telescopes.

The students at Simon Langton Grammar School use the Timepix chip by connecting it directly to their computer via a USB interface box. “The box was developed by the Institute of Experimental and Applied Physics in Prague,” explains Becky. “They also developed the Pixelman software that we use to read out the data.” The chip and the interface box carry a modest hardware cost, but the software is made available free of charge by the Medipix collaboration.
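As a minimal sketch of the kind of offline analysis the students can then do, assuming a frame has been exported from the read-out software as a plain-text 256 × 256 matrix of per-pixel counts (the file name below is hypothetical):

    def read_frame(path):
        """Return a Timepix frame as 256 rows of 256 integer pixel counts."""
        with open(path) as f:
            frame = [[int(v) for v in line.split()] for line in f if line.strip()]
        if len(frame) != 256 or any(len(row) != 256 for row in frame):
            raise ValueError("expected a 256 x 256 matrix")
        return frame

    def hit_pixels(frame):
        """List (x, y, counts) for every pixel that registered a hit."""
        return [(x, y, c)
                for y, row in enumerate(frame)
                for x, c in enumerate(row) if c > 0]

    frame = read_frame("cosmic_frame_001.txt")  # hypothetical exported frame
    print(f"{len(hit_pixels(frame))} pixels hit in this frame")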

Given the simple set-up and its relatively low cost, Becky’s idea can potentially be transferred to many other schools across the UK and elsewhere in Europe. “Collaboration is a key factor in modern research,” confirms Becky. “And, like in a real scientific collaboration, we are going to involve as many schools as possible in our project. We have received funding from Kent to put 10 Timepix chips into the county’s schools to create a network. This will allow us to show students how you do things at CERN and in other big laboratories.”

By setting up a network, schools will collect large amounts of data on cosmic rays. “In the future we hope to have Timepix detectors in schools across the world. Participating schools will be able to send data back to us because we have powerful IT facilities and we can store large quantities of data,” says Becky. “We know that in other countries, such as Canada, Italy and the Netherlands, there are similar school programmes that collect data on cosmic rays. It would be ideal if we could all join our efforts and integrate all of the collected data together.”

Timepix in space

Nothing is out of reach for Becky’s ambitious teaching methods, not even space. In 2008 the school’s students decided to enter a national competition run by the British National Space Centre to design experiments to fly in space. Next year, Surrey Satellite Technology Ltd will fly the Langton Ultimate Cosmic ray Intensity Detector (LUCID), a cosmic-ray detector array designed by Langton’s sixth-form students, on one of its satellites. “The students are learning so much from working on LUCID with David Cooke at Surrey Satellite Technology Limited and Professor Larry Pinsky from the University of Houston,” says Becky.

In LUCID, four Timepix chips are mounted on the sides of a cube (figure 1). The students have demonstrated that four chips give the largest active area achievable without exceeding the power and data-transmission limits. A fifth chip, mounted horizontally on the base of the cube, will be modified to detect neutrons. LUCID’s electronics, including a field-programmable gate array for read-out, will sit on printed circuit boards attached to the chips.
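The sketch below illustrates the style of that trade-off study; every budget and per-chip figure in it is invented for illustration and none is a real LUCID number:

    # Illustrative check of how many chips fit within assumed power and
    # telemetry budgets. All figures below are invented, not LUCID's.
    POWER_BUDGET_W = 2.0             # assumed power available to the payload
    DATA_BUDGET_MB_PER_DAY = 50.0    # assumed downlink allocation
    POWER_PER_CHIP_W = 0.4           # assumed consumption of one Timepix chip
    DATA_PER_CHIP_MB_PER_DAY = 10.0  # assumed data produced by one chip

    for n_chips in range(1, 9):
        power_ok = n_chips * POWER_PER_CHIP_W <= POWER_BUDGET_W
        data_ok = n_chips * DATA_PER_CHIP_MB_PER_DAY <= DATA_BUDGET_MB_PER_DAY
        print(f"{n_chips} chips: power {'OK' if power_ok else 'over budget'}, "
              f"telemetry {'OK' if data_ok else 'over budget'}")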


The Timepix detectors produced at CERN are not qualified for use in space as standard. “At one of the last stages of the competition, we were told that our project would go through if we could raise the additional £60,000 needed to qualify the Timepix detectors for space,” Becky recalls. Thanks to the support of the South East England Development Agency and Kent County Council, the money was found and LUCID could go into space. LUCID will be mounted outside the spacecraft’s fuselage, housed in a 3 mm (0.81 g cm⁻²) or 4 mm (1.08 g cm⁻²) enclosure. Components will mostly sit on an inside face of the circuit board, offering a further 0.25 g cm⁻² of shielding. The detector will also have to be qualified to withstand a vibration level of 20 g (rms).
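The quoted areal densities are consistent with an aluminium enclosure of density about 2.7 g/cm³ (an assumption; the enclosure material is not stated here), as this quick check shows:

    # Consistency check of the quoted shielding figures, assuming an
    # aluminium enclosure (density 2.7 g/cm^3 is an assumption).
    RHO_ALUMINIUM = 2.70  # g/cm^3

    for thickness_mm, quoted in ((3, 0.81), (4, 1.08)):
        areal_density = RHO_ALUMINIUM * thickness_mm / 10.0  # g/cm^2
        print(f"{thickness_mm} mm: {areal_density:.2f} g/cm^2 (quoted {quoted})")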

Under Becky’s plans, data from LUCID will be compared with data collected by detectors on the ground, providing complementary information about cosmic rays in orbit and at ground level. “We expect terabytes of data each year from space. We will receive support from the UK Particle Physics Grid (GridPP) to use the Grid. It is a whole research package!” she says.
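As a back-of-envelope illustration of how terabyte-scale volumes can arise from a five-chip Timepix payload (the frame rate and per-pixel encoding below are assumptions, not project figures):

    # Rough estimate of annual data volume from five Timepix chips; the frame
    # rate and bytes-per-pixel values are assumed for illustration only.
    CHIPS = 5
    PIXELS_PER_CHIP = 256 * 256
    BYTES_PER_PIXEL = 2       # assumed raw frame encoding
    FRAMES_PER_SECOND = 1.0   # assumed continuous frame rate
    SECONDS_PER_YEAR = 3600 * 24 * 365

    bytes_per_year = (CHIPS * PIXELS_PER_CHIP * BYTES_PER_PIXEL
                      * FRAMES_PER_SECOND * SECONDS_PER_YEAR)
    print(f"~{bytes_per_year / 1e12:.0f} TB per year before compression")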

The CERN@school project is not the only scientific project that Simon Langton Grammar School students are carrying out. “We collaborate with Imperial College on a research project in plasma physics. One of our students won the ‘Young Scientist of the Year’ prize and published a paper in a proper scientific journal. Others participate in a scientific project for the observation of exoplanets using the Faulkes Telescopes in Hawaii and Australia,” says Becky.

In addition, the school hosts special projects in biology and other branches of science, and has its own research centre, the Langton Star Centre. This facility, still under construction, will have laboratories and training and seminar rooms. “We will be able to train teachers and students from other schools who want to take part in CERN@school and our other projects,” explains Becky. The centre’s website will include pages where data and analysis results from the network of participating schools will be shared.

These innovative teaching approaches benefit both students and teachers. The school’s philosophy is that 30% of the activities carried out must be beyond the official syllabus. The outcome is that the school provides about 1% of the total number of students studying for physics and engineering degrees at British universities. At the same time, motivating the teachers becomes much easier when they have the prospect of participating in real research programmes in collaboration with CERN, for example.


Many young people at school do not know what it would be like to study physics or engineering at university and do research at the forefront. However, when they get to work with real scientists, they discover how exciting it is and readily jump aboard ambitious programmes. “If teachers let students take control in these kinds of projects, they will not mess around – they are going to do all of this properly because they know that this is serious stuff,” assures Becky. “With my students, I am quite rigorous. I tell them that they are going to do it like real scientists. And because this is really an amazing thing to be involved with, they do it properly and with a lot of enthusiasm.”

Becky’s attitude to “her” students, whom she calls “sweethearts”, is a far cry from that of teachers who say how difficult it is to control behaviour in schools and motivate students every day. So why is Becky’s experience so different? “I am in a lovely school,” she explains. “The cool thing to do at my school is physics. A 12-year-old came to me last year and said: ‘Miss, we would like you to teach us quantum physics’ and so I did it.”

Becky’s initiative to spread “cool” physics in the region includes the “Langton Guide to the Universe”, in which parents and their children are invited to attend lectures on modern and exciting physics. “Families come and receive a first input on things like quantum mechanics. Some kids who attended those lectures when they were very young later joined the school and set up the ‘quantum working group’, which produced a guide to how to teach quantum mechanics to the youngest. They have entered a national competition and reached the final.” These are the sorts of expectations that you can have when you go to Simon Langton Grammar School. As Becky explains: “Our philosophy is that if students are interested in doing a scientific project, however ambitious, they can come and talk to us. This is your world, take the initiative and make it successful!”
