We can measure and analyse accumulated superconducting RF (SRF) operating experience in broad, high-level terms using the “cryomodule century”, or CC. Ten cryomodules operating for a decade, or 50 of them operating for two years, yield 1 CC. In the past, Tristan at KEK and HERA at DESY each accumulated more than 1 CC, and LEP-II accumulated nearly 4 CC. KEK-B, Cornell, and the Tesla Test Facility/FLASH have each accumulated a large fraction of 1 CC. In addition, well over half of the world’s SRF operating experience has taken place at two US Department of Energy nuclear-physics facilities: ATLAS at Argonne National Laboratory and the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Lab.
Although a mere 25 years old – including more than 15 years of running SRF in CEBAF – Jefferson Lab has, in this sense, accumulated many centuries’ worth of operating experience, about 6 CC. This experience has made it possible for CEBAF to operate at energies exceeding 6 GeV, 50% above its design energy. These energies resulted from a refurbishment programme that sought to improve the 10 lowest-performing cryomodules of the 42¼ installed at CEBAF. Refurbishment involved fixing problems in the SRF accelerating cavities inside the cryomodules, applying the latest advances in SRF science and technology. Each of the 10 refurbished cryomodules has performed at least 50% better than the best of the original complement – at two and a half times the original specification.
In Hamburg, the European XFEL project will in its first decade yield more than 10 CC, roughly comparable to today’s combined world total. The main linacs of the International Linear Collider (ILC), however, will require cryomodules for some 16,000 SRF cavities. ILC’s first decade will yield some 186 CC – more than an order of magnitude greater than the world’s present total or XFEL’s projected total. What challenges will confront those who seek to operate the ILC and other future machines over long periods?
At Jefferson Lab, ILC’s order-of-magnitude scale-up calls to mind the SRF pioneering of CEBAF, itself an order-of-magnitude scale-up from seminal SRF R&D that was conducted mainly at Cornell, KEK, DESY, and earlier at Stanford University. In the effort to head off or pre-compensate for operational difficulties, CEBAF’s scale-up challenges included higher-order modes, overall reliability in a many-cryomodule system and the fact that the beams to be accelerated had distinct properties not previously engaged. Yet, even though these and countless other pre-operational questions were attacked, actual practice, year in and year out, has turned up much that was simply unforeseen, and was probably unforeseeable. As a result, in CEBAF’s decade and a half of operating, about 1.5 refurbishments have been necessary per CC. Extrapolated, that would imply about 30 per year for the ILC.
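To make the arithmetic behind such extrapolations explicit, here is a minimal sketch in Python; the figure of roughly nine cavities per ILC cryomodule is an assumption adopted purely for illustration, not a design value taken from this article.

# Minimal sketch of the "cryomodule century" (CC) arithmetic described above.
def cryomodule_centuries(n_cryomodules, years):
    # 1 CC = 100 cryomodule-years of operation.
    return n_cryomodules * years / 100.0

print(cryomodule_centuries(42.25, 15))   # CEBAF: about 6 CC after 15 years of SRF running
ilc_cryomodules = 16000 / 9              # assumes roughly 9 cavities per cryomodule
ilc_cc = cryomodule_centuries(ilc_cryomodules, 10)
print(ilc_cc)                            # roughly 180 CC in the ILC's first decade
print(1.5 * ilc_cc / 10)                 # at ~1.5 refurbishments per CC: tens per year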
Of course, extrapolations about the ILC and other future SRF machines are inevitably subject to errors. For one thing, experience to date involves operating gradients significantly lower than those planned for the ILC (and for the XFEL, as well). And at CEBAF and other operating SRF machines, most of the post-construction problems have already been corrected. For example, in SRF cavity processing, future accelerator builders won’t have to re-learn the value of high-pressure rinsing, which removes the performance limitation of field emission – and which is helping the ILC high-gradient R&D programme to achieve significantly higher accelerating gradients than past machines have reached.
But both the XFEL and the ILC will push the current state of the art just as CEBAF pushed the state of the art of its day. So the problems that these future SRF machines encounter are sure to be new and different. Nevertheless, past experience is all that we have, and we should try to learn from it. Despite the uncertainties, strategies for spares will need to be developed, and failure rates will need to be estimated if the operating gradient is to be maintained. CEBAF had one cryomodule failure per CC, but the failures appeared only after the first seven years, or the first 3 CC. Those failures exposed flaws, but new problems are surely coming. CEBAF has also suffered gradient degradation of 1% per year from new field-emission sites caused by particulates inside the vacuum system. In sum, from our experience, any SRF machine needs to plan for refurbishments at a rate of 1–2 per CC.
In current SRF accelerators, cryomodules are independent, standalone entities that can (with some difficulty) be pulled out for refurbishment. In future SRF accelerators, the need to minimize static heat losses pushes the design toward more integrated accelerator systems, even at the cost of making replacement harder. Yet if extrapolation from current operating experience is valid, it will be important to have the ability to refurbish, which means that it will be necessary to avoid having cryomodules that are difficult to extract. It’s the continuation of a longstanding design conflict: tight integration of systems improves performance, but makes repair harder.
SRF operating experience now has a long standing – many cryomodule centuries of it, in fact. This experience base constitutes an imperfect yet vital tool. And for all of us, there’s profit in looking back in order to see forward.
by Eugenie Samuel Reich, Palgrave Macmillan. Hardback ISBN 9780230224674, £15.99 ($26.95). Paperback ISBN 9780230623842, £12.99 ($17).
This book devotes 266 dense pages – 20 of them listing hundreds of notes – to a case of scientific misconduct staged at Bell Labs between 2000 and 2002, with Jan Hendrik Schön as the central figure. The plot follows the path leading up to the discovery that Schön’s breakthroughs on “molecular electronics” (which included lasers and superconductors made of organic plastics) were fraudulent.
Reich argues convincingly that the economic situation at Bell Labs, and the need to justify keeping a strong basic-research department in the company, made the ground fertile for an ambitious young person to flourish and enchant (fool) the senior people. It is quite amazing to see that the co-authors of Schön’s papers knew so little about important points of the reported work, and that the fabrication of data was not uncovered earlier than it was, given the frequent questions asked by many Bell people, including close collaborators, managers and other staff. It helped that many of his papers presented “measurements” that matched predictions. He seemed to write his papers backwards: first the conclusions, then the “data” that supported them, often generated from equations rather than from the apparatus.
In hindsight, it looks preposterous to think that Schön could possibly have written more than 20 groundbreaking publications in such a short time, including seven papers in a single month, November 2001. This alone should have alerted people to the possibility that the reported results had been fabricated. The journals Nature and Science emerge from this book as not being very careful about reviewing the articles that they publish, placing the emphasis on selecting papers that will make the headlines (the “breakthrough of the year”) rather than on ensuring that they provide enough technical detail to allow proper scrutiny of their plausibility and efficient verification by other labs. Many people wasted time and money trying to replicate the fabricated results. Schön’s publication “success” surely benefited from the papers being co-signed by a senior co-author, a well-known expert who gave further credibility to the fraudulent results by giving a multitude of seminars on the subject, to the point of being awarded, and accepting, prizes for the “discoveries”.
This is an interesting book and Reich clearly convinces the reader that, despite our natural tendency to think that scientists can be trusted (honest people, who might make mistakes), some of them deliberately fudge measurements to fit preconceived ideas, old or new. The scientific method needs to be learned, sometimes through years of careful training guided by sceptical professors (who can notice patterns that look “too good to be true”). However, I would gladly have exchanged many of the specific details about this single case for more information about other cases, together with a global discussion of the factors that lead to such frauds. Are they caused by young people with inadequate training and supervision? Or by ambitious senior people desperately looking for an important prize and pushing their young partners to search for anomalies and “new physics”, neglecting the importance of time-consuming validation checks? Are there branches of science where such frauds are more frequent?
Reich is very meticulous and gives all sorts of details that interrupt the fluidity of the reading. She could have tightened the narrative to avoid some repetition, placed the introductory text of chapter 9 (!) at the start of the book and removed a few of the lines and paragraphs containing little information. Without an introductory chapter preceding the main plot and giving a broad overview of the field, most readers will lack the minimum background knowledge needed to appreciate the reported saga. As a side remark, it is curious to learn that Nobel laureate Bob Laughlin repeatedly claimed that Schön’s results had to be fraudulent, but that his opinion “didn’t count because he was known to be too sceptical”.
Perspectives on LHC Physics, by G Kane and A Pierce (eds), World Scientific. Hardback ISBN 9789812779755, $99/£55. Paperback ISBN 9789812833891, $54/£30. E-book ISBN 9789812779762, $129.
This book could hardly seem more timely, with the Large Hadron Collider (LHC) having started operations and new discoveries being eagerly awaited (but quite possibly a few years off yet). It consists of 17 chapters, each on a different topic, ranging from a description of the detectors to discussions of naturalness in quantum-field theories of particle physics.
The contributors are particle physicists, several of whom are prominent in the field. However, each chapter has different authors, so the result is inevitably a little patchy. The chapters differ widely in scope, in character and in the level of expertise assumed for the reader. For instance, the chapter on dark matter at the LHC is very basic and could be read by undergraduates, whereas the informative chapter on top physics is of a graduate level. There are also some much more general expansive essays, such as one that explores similarities between the BCS theory of superconductivity and particle physics, and the introductory chapter. The introduction assumes a fair amount of prior knowledge and is much too optimistic for my taste about the chance of discovering supersymmetry at the LHC. The author asserts that supersymmetry must be correct because of several pieces of circumstantial evidence, but I really think that other such a posteriori scraps could be used to prop up the evidence for competing theories.
There are a couple of obvious omissions, for example quark-gluon plasma physics and the ALICE detector. After all, the LHC will spend some of its time providing collisions between heavy ions, rather than protons, and ALICE will be trying to divine the properties of the resulting soup of quarks and gluons. The other missing topic is that of diffractive physics. It is likely that both the ATLAS and CMS experiments will eventually have forward detectors to measure protons that have just grazed another one in a collision. Under certain theoretical circumstances, it is even possible to produce Higgs bosons in the central detector during these collisions. Such rare events could provide useful experimental constraints on the properties of Higgs bosons. The chapter about the ATLAS and CMS detectors is welcome, but it could benefit from some basics about how particles interact as they travel through matter. This important link in the logical chain is missing from the discussion.
Perspectives on LHC Physics is a timely, heterogeneous offering, with some interesting gems and informative parts, as well as some fairly off-the-wall speculation. I think that there should be sections of it to interest most readers in the physical sciences, but that they may well wish to choose particular chapters to read. Luckily, the format of the book makes this easy to achieve.
The field of exotic atoms has a long history and it is currently experiencing a renaissance, from both the experimental and theoretical points of view. On the experimental side, new hadronic beams are either already available, with kaons at the DAΦNE facility at Frascati, or will soon become available with the start-up of the Japan Proton Accelerator Research Complex (J-PARC). New detectors, with improved performance in energy resolution, stability, efficiency, trigger capability and so on, are also starting to operate. On the theoretical side, the field has advanced significantly through recent developments in chiral effective-field theories and their applications to hadron–nuclear systems. In light of these developments, it was appropriate for the international workshop “Hadronic atoms and nuclei – solved puzzles, open problems and future challenges in theory and experiment”, held on 12–16 October 2009 at the European Centre for Theoretical Studies in Nuclear Physics and Related Areas (ECT*) in Trento, to address these topics.
Unique methods
So what are hadronic atoms and why is there a growing interest in studying them? An exotic hadronic atom is formed whenever a hadron (pion, kaon, antiproton) from a beam enters a target, is stopped inside and replaces an orbiting electron. Such an exotic atom is usually formed in a highly excited state; a process of de-excitation through the respective atomic levels then follows. The X-ray transitions to the lowest orbits (1s) are affected by the presence of the strong interaction between the nucleus and the hadron, which shifts the 1s level with respect to the value calculated on a purely electromagnetic basis and limits the lifetime (increases the width) of the level. Extracting these quantities via the measurement of the X-ray transitions provides fundamental information on the low-energy hadron–hadron and hadron–nuclear interactions, which is impossible to obtain by any other method. Quantities such as kaon–nucleon scattering lengths, for example, turn out to be directly accessible by measuring the properties of exotic atoms. These are key quantities for dealing in a unique way with important aspects of low-energy QCD in the strangeness sector, such as chiral-symmetry breaking.
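For kaonic hydrogen, for instance, the measured shift ε and width Γ of the 1s level are related at leading order to the complex K–p scattering length by a Deser-type formula – quoted here only as a guide, since sign conventions for the shift vary in the literature and isospin-breaking corrections modify the relation at roughly the 10% level: ε(1s) + iΓ(1s)/2 ≈ 2α³μ²a(K–p), where α is the fine-structure constant and μ is the reduced mass of the K–p system (in natural units).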
The DAΦNE Exotic Atoms Research (DEAR) experiment measured kaonic hydrogen with unprecedented precision, which led to a lively debate at the workshop on the procedure for extracting the kaon–proton scattering length, as well as on its compatibility with existing kaon–nucleon scattering data. The SIDDHARTA collaboration, also at DAΦNE, presented the results of an even more precise measurement of kaonic hydrogen performed in 2009, which will be complemented with an exploratory measurement of kaonic deuterium. The E570 experiment at KEK and SIDDHARTA have both measured kaonic helium and found agreement with theory, thereby solving the “kaonic helium puzzle” – a long-standing discrepancy between measured and theoretical values for the 2p level in ⁴He. The new E17 experiment planned at J-PARC will in the near future measure the X-ray spectrum of kaonic ³He with the highest precision. With other experiments already in the pipeline at existing and future machines at GSI, J-PARC and DAΦNE, the field of hadronic atoms will extend its horizons, both in precision and in dealing with new types of exotic atoms not previously measured, such as kaonic deuterium or sigmonic atoms (where a sigma replaces an electron).
Another hot topic intensively discussed at the workshop concerns recent studies of K–-mediated bound nuclear systems. Theory originally suggested that the (strongly attractive) isospin I=0 K–N interaction in few-body nuclear systems could favour the formation of discrete and narrow K–-nuclear bound states with large binding energies (100 MeV or even more). However, recent work suggests that such deeply bound kaonic nuclear states do not exist: antikaon–nuclear systems might be only weakly bound and short-lived. There are different interpretations of the existing experimental results based, for example, on the interaction of negative kaons with two or more nucleons. This topic is related to a new puzzle in the physics of kaon–nucleon interactions: the nature of the Λ(1405) – does it have a single- or double-pole structure? There were long discussions about this at the workshop.
New frameworks
All of these topics have important consequences in astrophysics, for example, in the physics of neutron stars. The workshop reviews of experimental results covered experiments at KEK, Brookhaven and Dubna, as well as FINUDA at DAΦNE, the FOPI detector at GSI, OBELIX at the former Low-Energy Antiproton Ring at CERN, and the DISTO detector at the former Saturne laboratory in France. There was also a critical review of current theories and models. Discussions about future perspectives centred on an integrated strategy in which complementary facilities should bring together the various pieces of the overall puzzle. Among these are experiments proposed at J-PARC (E15, E17), GSI (upgrades of the FOPI and HADES detectors) and DAΦNE (AMADEUS), together with the possibility of using antiprotons to create single- and double-strangeness nuclei at CERN, J-PARC or the Facility for Low-energy Antiproton and Ion Research at GSI.
The workshop proved that the field of hadronic atoms and kaonic nuclei is active. While some puzzles, such as those concerning kaonic hydrogen and kaonic helium, are now solved thanks to the newer experiments (E570 at KEK, DEAR and SIDDHARTA at DAΦNE), many problems remain unresolved, or “open”. The workshop formulated and targeted important questions that still need experimental results and deeper theoretical understanding. There are many future challenges in both the experimental and theoretical sectors, which were formulated within a single framework for the first time.
There was also a round-table discussion, led by Avraham Gal from the Hebrew University of Jerusalem, that dealt with the search for the K–-nuclear bound state. This proved extremely useful because it established common ground on what information (i.e. experimental results) could shed light on the field in future. This is important because new experiments are about to start, including the upgrades to AMADEUS, E15, HADES and FOPI.
The five-day workshop also included a visit to the Fondazione Bruno Kessler (FBK) centre for scientific and technological research. This gave FBK the opportunity to demonstrate its capacity for research on frontier detectors for future experiments and to establish contacts with experiments that are potentially interested in such developments. In addition, there were presentations on the EU Seventh Framework Programme (FP7), with Carlo Guaraldo of LNF-INFN Frascati describing the HadronPhysics2 project. In particular, experimentalists and theoreticians came together in a session dedicated to the LEANNIS Network in HadronPhysics2 FP7 – a network that focuses on low-energy antikaon–nucleon and nucleus interactions – in which topics and perspectives in the field were presented and discussed.
One important success of the workshop was that young people made up around half of the participants and that researchers from many countries took part, including Israel and Iran. This made it an occasion not only for scientific exchanges but for cultural and social ones as well, proving once again that scientists are part of society, with an important role.
The moment that particle physicists – and many others – around the world had been waiting for finally arrived on 20 November 2009. Bunches of protons circulated once again round CERN’s Large Hadron Collider (LHC), a little more than a year after a damaging incident brought commissioning to a standstill in September 2008. As the operators put the machine through its initial paces, the collider passed a number of milestones – from the first collisions in the LHC detectors at 450 GeV per beam to collisions with “squeezed” multibunch beams at the world-record energy of 1.18 TeV. In addition, the collaborations collected sufficient data to calibrate their detectors and assess how well they perform before the real attack on high-energy physics begins later this year.
“It has been remarkable,” Steve Myers, CERN’s director for accelerators and technology, commented in a presentation to CERN Council and staff on 18 December. “Things have moved so quickly that it has been hard to keep up with the progress.” It was also the tip of an iceberg – a pinnacle of highly visible success built on a year of unstinting effort on repairs and consolidation work, painstaking hardware commissioning and the final preparation for operation with beam.
The restart finally got underway with the injection of both beams into the LHC on Friday 20 November and their careful threading round the machine, step by step, as on the famous start-up day in September 2008. There was jubilation in the CERN Control Centre as Beam 1 made its first clockwise circuits of the machine at 8.40 p.m. A little over an hour later, it had made several hundred circuits, captured by the RF. It was then the turn of Beam 2, which completed the first anticlockwise circuit at 11.40 p.m. and had also been captured successfully by the RF at a little after midnight.
During the following hours the four experiments were treated to special “splash” events, in which a single beam strikes a collimator nearby. These events produce an avalanche of particles that leave a host of tracks and allow the collaborations to check the relative timing of the detectors, for example.
The first day already demonstrated that vital elements of beam instrumentation, such as the beam-position monitors and beam-loss monitors, were working well. Over the following weekend, the operators continued commissioning, in particular on Beam 1, including fine-tuning of the RF. This work already led to a good beam lifetime of around 10 hours, as measured from the decay of the beam current. Other key studies included measurements and refinements of the betatron tune (the frequency of transverse oscillations about the nominal orbit) and chromaticity (variations in the tune as a function of the momentum deviation). The tune of the machine immediately showed itself to be remarkably good, a testament to the many years of effort involved in the design and construction of the thousands of magnets that guide the beams round the 27 km ring.
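For orientation, in the standard accelerator-physics conventions the tune Q is the number of betatron oscillations completed per revolution of the ring, and the chromaticity is usually quoted either as Q′ = ΔQ/(Δp/p) or in the relative form ξ = Q′/Q – compact definitions of the two quantities described in words above.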
Monday 23 November saw the LHC reach a brand-new milestone when the two beams circulated simultaneously for the first time at 1.25 p.m. – just in time for an announcement at a press conference about the restart that was held at CERN at 2.00 p.m. The operators then adjusted machine parameters to provide the experiments with the first, real beam–beam collisions, each in turn.
ATLAS was first, with a collision event recorded at around 2.22 p.m. Four hours later it was the turn of ALICE, which immediately saw the trigger rate rise from about 0.001 to 0.1 Hz. Over the next 40 minutes the experiment recorded nearly 300 events. LHCb followed at about 5.45 p.m. This experiment found it less easy to confirm collisions because only the larger and more distant parts of the detector were switched on, but nevertheless the events collected showed indications of good-looking vertices. Soon after 7.00 p.m. the operators tried again for collisions in ATLAS and CMS, this time at a slightly higher intensity and with improved beam steering. CMS bagged its first collision at 7.40 p.m.
These first collisions were all obtained with a low-intensity “probe” beam, so called because it allows the operators to probe the limits of safe operation of the LHC with a single bunch per beam of only about 3 × 10⁹ protons. Over the following days, probe beams were used in continued commissioning to ensure that higher intensities could be safely handled and stable conditions could be guaranteed for the experiments over sustained periods. Higher intensities would be needed for the experiments to acquire a meaningful amount of data but nevertheless the first period of collisions provided plenty to report on in presentations to a packed main auditorium at CERN on 26 November, just six days after the restart. There were measurements of timings, tracking, calorimetry, missing energy and plenty more from all four of the big LHC experiments, as well as reconstructions, including π⁰ peaks from LHCb and CMS.
During the first three days the LHC operated as a storage ring and as a collider, but at a beam energy of only 450 GeV – the injection energy from the SPS. An important next step was to begin tests to ramp the current and hence the field in the dipole magnets in synchrony with increasing beam energy (supplied by the RF). On 24 November, Beam 1 underwent the first ramp, reaching 560 GeV before it died away after encountering resonances in the betatron oscillations. Nevertheless, the LHC had worked as an accelerator for the first time.
Further commissioning ensued, including energy matching between the SPS and the LHC on 27 November. Two days later, the operators were ready to try the first ramp to a world-record energy and at 9.48 p.m. on 29 November they accelerated Beam 1 from 450 GeV to 1.04 TeV. This exceeded the previous world-record beam energy of 0.98 TeV, which had been held by Fermilab’s Tevatron collider since 2001. Within three hours, the LHC had broken its own record, as both beams were successfully accelerated to 1.18 TeV at 0.44 a.m. on 30 November. This was the maximum energy for this first LHC run, corresponding to 2 kA in the dipole magnets – the limit to which the safety systems had been tested before the restart.
Later that same day tests began to study any effects that the solenoid magnets in the experiments might have on the beam orbit, which would need compensatory adjustments. ALICE was the first to ramp the solenoid field, followed by ATLAS and finally the biggest of the three, the “S” in CMS with its full field of 3.8 T. The effects were all small; indeed, changes in the orbit arising from earth tides at the time of the ramp in CMS proved to have a bigger effect than the field of the giant solenoid.
December began with a “first” of a different kind, when the ALICE collaboration, having analysed the 284 events recorded on 23 November, submitted the first paper based on collision data at the LHC for publication in the European Physical Journal C. The collaboration analysed the events to measure the pseudorapidity density of charged primary particles in the central region. The results are consistent with previous measurements made at the same centre-of-mass energy a quarter of a century ago, when CERN’s SPS ran as a pulsed proton–antiproton collider. The paper was accepted for publication two days later.
From 1 to 6 December the operations team continued with beam commissioning at 450 GeV, in particular with aperture scans to determine the operational space for beam manoeuvres and collimator scans to indicate the best settings for these devices, which are used to “clean” the beam by removing particles forming a halo around the main core. These studies are important for setting the parameters for the safe running of the machine – safe in the sense that the halo particles do not go off course into the LHC magnets or sensitive parts of the experiments.
Other studies concern aborting a run safely and depositing the beams in the beam dump near Point 6 on the ring. During normal running, if the beam becomes unstable the beam-loss monitors should sense this and trigger a set of fast pulsed magnets to eject the beams along a tunnel to the beam stop. To avoid dumping all of the energy in a single spot on the dump face – which at full intensity would be around 360 MJ per beam – magnets along the tunnel spread out the beam so that it “paints” a circle when it arrives at the stop.
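As a rough cross-check of that figure, the energy stored in one beam at the LHC’s nominal design parameters can be computed in a few lines of Python; the bunch population of 1.15 × 10¹¹ protons is the nominal design value, quoted here as an assumption for the estimate.

# Back-of-the-envelope check of the ~360 MJ stored in one nominal LHC beam.
E_PROTON_EV = 7e12            # nominal proton energy, in eV
PROTONS_PER_BUNCH = 1.15e11   # nominal bunch population (design value)
N_BUNCHES = 2808              # nominal number of bunches per beam
EV_TO_JOULE = 1.602e-19       # conversion factor from eV to joules

stored_energy_J = N_BUNCHES * PROTONS_PER_BUNCH * E_PROTON_EV * EV_TO_JOULE
print(f"{stored_energy_J / 1e6:.0f} MJ per beam")   # prints about 362 MJ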
Preliminary investigations of this kind are all undertaken at low intensities with the probe beam. On 5 December the operators took a first small but significant step to higher intensity when they injected two bunches per beam into the LHC. Beams with four bunches each followed in the early hours of 6 December and, at 6.46 a.m., the operators declared the first period of “stable beams” at 450 GeV, with some 10¹⁰ protons per beam. This meant that the collaborations could switch on all parts of their detectors, including the most sensitive, collecting data at a rate of about 0.5 Hz. Ultimately, the LHC will run with 2808 bunches per beam.
While the operators continued to take steps to increase the intensity – both through more bunches and with more protons per bunch injected from the SPS – stable running at 1.18 TeV also remained an important goal. A test ramp with two bunches per beam on 8 December gave ATLAS the chance to record a first collision at a total energy of 2.36 TeV, although at the time the experiment was in “safe” mode and many parts were turned off.
The continued careful studies with higher intensities led to a first period of stable beams at 450 GeV with higher bunch intensities on 11 December, this time with four bunches per beam and 2 × 10¹⁰ protons per bunch. This increased the event rates in the experiments to about 10 Hz, some 100 times higher than in the first tests on 23 November. Ultimately, at 9.00 p.m. on 14 December, the LHC began to run with stable beams with 16 bunches, providing some 1.85 × 10¹¹ protons per beam – and trigger rates of around 50 Hz.
The four big experiments were eventually able to observe significant numbers of collisions with all of the subdetectors operational at a beam energy of 450 GeV under stable conditions, accumulating a grand total of 1.6 million events. LHCf, the small experiment that sits in the forward direction close to the ATLAS detector, amassed enough events for the collaboration to begin the first physics. This experiment, which is to study the production of showers of particles similar to those created in cosmic-ray showers, collected some 6000 showers at 900 GeV in the centre of mass.
In addition, progress with ramping on 14 December allowed the experiments to record collisions at a total energy of 2.36 TeV for the first time during a 90-minute period of stable beams, with two bunches per beam. Altogether, the four big experiments recorded some 125,000 events in this new energy region.
With the LHC run scheduled to end on the evening of 16 December for a shutdown for further consolidation work in preparation for running at higher energies, the last two days saw the machine revert to the operators for further commissioning studies. First there were tests on 15 December in which one of the TOTEM experiment’s delicate Roman pots was moved closer towards the beam to record the first track in the “edgeless” silicon detectors (CERN Courier September 2009 p19).
Finally, in the early hours of 16 December the beam experts were able to test the “squeeze” at the interaction regions. A squeeze involves reducing the beam size at the collision points by reducing (“squeezing”) the betatron function, β, which describes the amplitude of the betatron oscillations. With four bunches per beam, the machine ramped once again to 1.18 TeV and a squeeze to β = 7 m was successfully applied at interaction region 5, where the CMS experiment is located.
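The reason the squeeze matters is that the transverse beam size at a collision point, σ*, scales as the square root of the betatron function there: in the usual notation σ* = √(εβ*), where ε is the transverse emittance. Reducing β* from its un-squeezed injection value (of order 10 m) to 7 m – and eventually to well below 1 m in nominal high-luminosity running – therefore shrinks the beams at the interaction point and increases the luminosity.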
After further beam studies, at 6.00 p.m. the operators prepared to dump the beam for the last time in 2009, just as planned. This ended the first, highly successful full commissioning run for the LHC, which is being followed by a technical stop until February. While the LHC remains on stand-by, work continues to implement protection systems to allow high-energy running at up to 3.5 TeV per beam, as well as to make other modifications and repairs in the machine and the experiments. The first four weeks of running had brought plenty of success, auguring well for the future. After some time for celebrations over the festive season, it would be time to prepare for the next step in this great adventure.
High-energy physics experiments address fundamental questions using large facilities and complex detectors, which often use innovative detection techniques. It is usual to build and operate more than one such detector at the same accelerator – to confront, compare and eventually merge the measurements. Combining measurements made by similar detectors becomes feasible and ultimately mandatory when these detectors are well understood and tested with many physics analyses. This step was achieved recently by the H1 and ZEUS experiments, which took data at DESY’s HERA collider from 1992 until 2007.
HERA was the only electron–proton collider ever built, providing collisions between electrons or positrons of 27.5 GeV and protons of up to 920 GeV to give a centre-of-mass energy of 320 GeV. The data collected at HERA are unique and have led to precise measurements of the proton structure, in particular in the region of low Bjorken-x, below 0.01, where no other measurement exists. At HERA, the point-like electron probes the gluon-fabric of the proton down to scales as small as 1/1000 of the proton’s radius. These measurements provide a clean testing ground for the Standard Model. Furthermore, searches for new physics signals at HERA are complementary to searches made at other colliders.
So far, H1 and ZEUS have published individual measurements investigating a plethora of different processes in more than 400 scientific articles. Now, however, for the first time, three joint publications have been submitted to the Journal of High Energy Physics. These combinations of the H1 and ZEUS data into coherent analyses mark a new paradigm in this field of research.
Universal structure functions
Combining data sets improves individual measurements because the amount of information increases. Statistics tells us that statistical uncertainties diminish by a factor of √2 when the amount of data is doubled. When systematic uncertainties are taken into account, however, the effects are more subtle: there is no gain for errors that are correlated between the experiments. Typical examples of this kind are the theoretical calculations needed to extract the experimental results. Uncertainties that are fully uncorrelated (not only between the experiments but also from one measurement point to the next) behave like statistical errors and are reduced by the magical factor of √2. Finally, the most interesting case comes from errors that are correlated within each experiment but uncorrelated between the experiments. One example is the energy scale of the calorimetric measurements: the technologies of the calorimeters in the two experiments are different and they are calibrated using independent procedures. Hence the respective errors are independent between the experiments but are nevertheless correlated from one measurement to the next within each experiment. These uncertainties are reduced by more than the usual factor of √2. This can basically be seen as the effect of cross-calibrating the detectors against each other using the large number of independent measurements.
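A minimal numerical sketch in Python of the first two cases (an illustration, not the averaging procedure actually used by H1 and ZEUS): two hypothetical measurements of the same quantity are combined with inverse-covariance weights, first with purely uncorrelated errors and then with an additional systematic that is fully correlated between the two experiments.

import numpy as np

def combine(x, cov):
    # Best linear unbiased combination of measurements x with covariance matrix cov.
    w = np.linalg.solve(cov, np.ones(len(x)))   # inverse-covariance weights (unnormalized)
    w /= w.sum()
    return w @ x, np.sqrt(w @ cov @ w)          # combined value and its uncertainty

x = np.array([1.00, 1.02])       # two hypothetical measurements of the same quantity
stat = np.array([0.03, 0.03])    # uncorrelated, statistical-like errors
syst = np.array([0.02, 0.02])    # systematic fully correlated between the experiments

# Uncorrelated errors only: the combined uncertainty shrinks by about 1/sqrt(2).
print(combine(x, np.diag(stat**2)))
# With the correlated systematic added: that contribution is not reduced by averaging.
print(combine(x, np.diag(stat**2) + np.outer(syst, syst)))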
The paper on the measurement of inclusive deep inelastic scattering cross-sections, submitted for publication by the H1 and ZEUS collaborations, combines 1402 individual measurements from 14 publications into 741 cross-section measurements of unprecedented precision. All of the available data on neutral- and charged-current interactions taken during the first phase of HERA running, from 1992 to 2000, are used. The data cover virtualities, Q², of the exchanged bosons from 0.2 GeV² up to the highest values reachable at HERA, of around 30,000 GeV², and values of Bjorken-x as small as 0.2 × 10⁻⁶. These data extend into the electroweak regime from regions where perturbative QCD has never been tested. At small values of Bjorken-x, x < 10⁻², no other measurements exist. In this region the gain from the combination is impressive: the individual measurements are dominated by systematic errors, which become drastically reduced, down to as little as 1%. These cross-sections depend on the universal proton-structure functions F2, xF3 and FL, which encapsulate the parton content of the proton. The structure function F2 dominates over most of the phase space, except at high Q², where parity-violating weak effects lead to a non-zero contribution from xF3, and at large y, where the longitudinal part of the cross-section arising from gluon radiation leads to a sizeable FL.
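For orientation, these structure functions enter the unpolarized neutral-current cross-section through the standard expression (written here without QED radiative corrections): d²σ(e±p)/dxdQ² = (2πα²/xQ⁴)[Y₊F2 ∓ Y₋xF3 − y²FL], with Y± = 1 ± (1 − y)² and y the inelasticity; the charged-current cross-section has an analogous decomposition in terms of its own structure functions.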
Figure 1 shows parts of the universal structure function, F2, as a function of the variable x, for various values of the photon virtuality, Q². The increase of F2 towards low x, discovered in the first years of HERA, is confirmed with a precision approaching 1%. As Q² grows, F2 becomes steeper towards low x, reflecting the contribution to the quark component from gluons fluctuating into quark–antiquark pairs. This rise is a fundamental discovery and reveals the role of gluons in binding nuclear matter. It is possible to decompose this structure function into one part that arises from “hard scattering” (the so-called coefficient functions) and a non-perturbative part, which reflects the partonic content of the proton. Using the new data, the collaborations have extracted a new set of parton-distribution functions (HERAPDF 1.0), shown on the left in figure 2 for a high photon virtuality, Q² = 10,000 GeV². This partonic content is universal and can be used to make predictions for other processes involving protons – for example, cross-sections in proton–proton collisions at the LHC.
One example within the Standard Model is the production of single weak bosons at the LHC. This process can be regarded as a “standard candle” and can even be used to determine the luminosity of the collider because the measurement can be done with great accuracy. The precision of the corresponding theoretical predictions is dominated by uncertainties that originate from knowledge of the proton’s parton distributions, which in turn comes from the measurements at HERA (figure 2, right).
Ultimately, the H1 and ZEUS measurements provide the standard candle against which any new phenomenon at the LHC in the mass range of up to a few hundred giga-electron-volts will have to be compared. The new physics may well be in this range – in which case a precise knowledge of the production cross-section would be crucial in order to explore the properties of these new particles.
New physics arises in many theoretical extensions of the Standard Model. According to these extensions, a peak in a mass spectrum or a deviation in a certain variable should be observable. However, new physics can also manifest itself beyond the “standard predictions” and show up as spectacular events in regions of phase space where, according to the Standard Model, only a few events should be seen. Events with energetic isolated leptons are an example of such a golden channel. Experimentally, they provide a clean signature; theoretically, they benefit from robust predictions.
The experiments at HERA reported the observation of events with isolated leptons (electrons or muons) and missing transverse momentum as early as 10 years ago. In the Standard Model this topology is explained by the production of a W boson, which decays to an energetic charged lepton and a neutrino. The neutrino escapes undetected, leading to “missing” momentum. The observation of such a rare process (typically one such event is recorded in 10 million other events) is a challenge and requires the full experimental information of the multilayer/multipurpose H1 and ZEUS detectors. Some of the observed events also contain a prominent hadronic jet – which makes them unlikely as W candidates because any hadronic recoil would typically be produced at low transverse momentum.
H1 observed a discrepancy with the Standard Model amounting to as much as three standard deviations, whereas no such effect was seen in ZEUS. To clarify this point, the collaborations undertook a joint analysis, carefully investigating all of the differences and studying all of the systematic effects. The individual results stand up to this scrutiny. Interpreting the difference as a statistical fluctuation, the two experiments can perform a common analysis. This reduces the significance of the observed excess for events with large hadronic transverse momentum to below two standard deviations; it also significantly improves the measurement of the W cross-section (figure 3). Thus, this measurement becomes an important confirmation of the weak sector of the Standard Model in a unique configuration.
The third joint paper deals with events with more than one charged lepton that are dominantly produced by photon–photon collisions, the photons originating from the colliding electrons and protons. In individual analyses, H1 and ZEUS found a few hundred events containing several leptons, both electrons and muons, at high transverse momentum, including some events where the scalar sum of the lepton momenta exceeds 100 GeV. In a combined analysis, seven events are observed in this region in positron–proton collisions for an expected number of 1.9 ± 0.2, while no such event is observed in electron–proton collisions for a similar expectation (figure 4). The observation of the excess in positron–proton collisions is still compatible with the Standard Model and is interpreted as a statistical fluctuation. However, this observation stimulates discussion because it is also possible to attribute the excess to a bilepton resonance, such as a doubly charged Higgs boson, H++, produced in electroweak interactions.
These combined measurements from H1 and ZEUS are the first in a series of legacy results from the unique electron–proton collider, HERA. More than 20 years after the start of the facility and two years after the end of data-taking, the harvest is at its richest. This is also good news for the LHC.
The 13th International Conference on Elastic and Diffractive Scattering – the “Blois Workshop” – dates back to 1985, when the first meeting was held in the picturesque, old French town of Blois, famous for the 14th-century Royal Château de Blois. The conference series continues to focus on progress towards understanding the physics of hadronic interactions at high energy. A major strength of the meetings is the way in which they facilitate detailed discussion between theorists and experimentalists, thereby motivating new ways of formulating theoretical approaches and confronting them with experimental measurements – past, present and future.
More than 100 participants from 18 countries attended the latest meeting in the series, held at CERN on 29 June – 3 July 2009. The relatively informal manner of the 70 talks encouraged discussion. Appropriately, given the imminent start-up of the LHC, the following topics featured prominently: the total proton–proton (pp) cross-section; elastic pp scattering; inelastic diffractive scattering in electron–proton (ep), pp and heavy-ion collisions; central exclusive production; photon-induced processes; forward physics and low-x QCD; and cosmic-ray physics.
Theoretical developments
On the theoretical side, important aspects of soft diffraction were nicely introduced by Alexei Kaidalov of the Institute of Theoretical and Experimental Physics (ITEP) in Moscow, who emphasized factorization effects and unitarization in the framework of Reggeon calculus. Although everyone anticipates that the total pp cross-section will continue to rise with increasing energy – following the pioneering prediction of H Cheng and T T Wu in 1970 – a number of contributions made distinct predictions for its value at LHC energies – typically ranging between 90 mb and 140 mb, with surprising predictions as high as 250 mb. Several other features of elastic scattering at LHC energies were also considered within the framework of different models that were successful at lower energies. André Martin of CERN, with his long-established theoretical rigour, reported on a new limit for the inelastic cross-section.
The central production of various exclusive final states with one or two “leading protons” – Higgs production at the LHC, in particular – was also a source of much debate. This subject challenges different approaches in QCD, notably the “gluon ladder”, and how these approaches relate to the long-standing theoretical construct, the Pomeron. Douglas Ross of Southampton University presented an interesting treatment of the Balitsky–Fadin–Kuraev–Lipatov (BFKL) kernel of such a ladder, based on the extraction of the low-x gluon distribution in experiments at the HERA ep collider. The issue of the “rapidity-gap survival probability” as an explanation for substantial factorization-breaking in inelastic diffraction in hadron–hadron collisions (as opposed to ep collisions) continues to challenge theory and is important when developing models for central Higgs production. Mark Strikman of Penn State University presented a notable proposal of a new sum rule.
The workshop devoted a full day to contributions dealing with the physics of QCD at various extremes, such as at the lowest parton fractional momenta (low-x QCD) and at the highest densities achievable, e.g. in heavy-ion collisions. Emil Avsar and Tuomas Lappi of CEA/Saclay and Francesco Hautmann of Oxford University reviewed the physics of gluon saturation and possible modifications of the standard QCD evolution equations at tiny values of Bjorken-x. A second topic, summarized by Raphael Granier de Cassagnac of the Laboratoire Leprince-Ringuet, Gines Martinez of SUBATECH, and Jean-Yves Ollitrault of Saclay, concerned studies of the collective behaviour of a multiparton system in a hot, dense state such as a quark–gluon plasma. Various other talks covered the latest experimental and theoretical developments in each of these two active research areas of the strong interaction, all with prospects at the LHC very much in mind.
Experimental highlights
Presentations on experimental developments highlighted the challenge of diffractive physics and the way that it relies on a particularly close symbiosis of measurement and theory. The phenomenology of elastic pp scattering, based on long-standing measurements at the Intersecting Storage Rings at CERN and later experiments at CERN and Fermilab, continues within either “classic Regge” or “geometrical” approaches. The latter is now beginning to produce a “transverse” view of the proton’s structure, as Richard Luddy of Connecticut University explained. Such understanding will be testable in the near future in deep-exclusive lepton-scattering experiments, for example in COMPASS at CERN where, as Oleg Selyugin of JINR described, such measurements may be interpreted in terms of generalized parton distributions.
As at all meetings since EDS returned to Blois in 1995, there were reports from the experiments at HERA on the status of the deep-inelastic structure of the diffractive interaction, this time by Henri Kowalski of DESY and Alexander Proskuryakov of Moscow State University. The impressive precision of the data reveals beautiful features that demonstrate the quark and gluon components of the t-channel (i.e. the leading) exchange mechanism. Put differently, the data are sensitive to the parton structure of the proton’s diffractive interaction. Results on the scale dependence of these leading exchanges, measured at HERA in exclusive meson production, now provide precise data with which QCD theory has to be reconciled, as Pierre Marage of the Université Libre de Bruxelles explained.
The main experimental highlights came, arguably, from the CDF experiment at Fermilab’s Tevatron with the measurements of the central exclusive two-photon production (pp → ppγγ) and di-jet production (pp → pp+2 jets), presented by James Pinfold of Alberta University, Christina Mesropian and Konstantin Goulianos of Rockefeller University and Michael Albrow from Fermilab. Both processes are important as precursors for the exclusive Higgs search at the LHC; the agreement of the predictions, made prior to the measurements, with the data is an important milestone in the preparation for exclusive Higgs hunting – appropriately christened “Higgs with no mess” by the experimentalists concerned.
A session dedicated to ultrahigh-energy cosmic-ray observations underlined their complementarity to collider measurements in view of understanding hadronic interactions, as Jörg Hörandel of Radboud University, Nijmegen, explained. Alessia Tricomi of INFN/Catania University pointed out that, in particular, forward experiments can contribute valuable data to the development of models of air showers.
Other topics at the meeting included photon-induced processes from the BaBar and Belle experiments, with reviews of relevant heavy-ion results from RHIC at Brookhaven and prospects for the LHC. Looking further into the future, Paul Newman of Birmingham University reported on possibilities for ep and electron–ion interactions at an LHeC.
Given the venue of EDS ’09, perhaps the most appropriate session was the one concerned with new experiments. Taking advantage of the unique breadth of expertise present at an EDS meeting, a panel discussion took place between representatives of theory and experiments, moderated by Karsten Eggert of Case Western Reserve University and CERN. It provided the opportunity to exchange ideas about which measurements to carry out first at the LHC, how to create synergies between different experiments and about future upgrade possibilities for the forward proton detectors. Several new ideas for possible measurements at the LHC were proposed and discussed. With its first data, the LHC will already provide new measurements that are crucial to this active field. The meeting ended with a strong sense of anticipation, given the imminent arrival of diffractive data at a new energy scale from the first run of the LHC.
Dynamic scientific projects with daring research programmes involving high technology can often trigger breakthroughs in innovation and industrial development. A team at the Joint Institute for Nuclear Research (JINR) at Dubna has conceived of one such project: the Nuclotron-based Ion Collider fAcility (NICA), a superconducting accelerator complex for colliding beams of heavy ions in the energy range of 4–11 GeV per nucleon in the centre of mass. It is this kind of project that is vital if Russia is to become a leader in innovation development.
The aim of NICA is to study an intricate and mysterious phenomenon: the mixed phase of quark–gluon matter. Conceived by the research group led by Alexei Sissakian, head of the NICA project, the facility is based on the Nuclotron, the superconducting ion synchrotron that already operates at JINR’s Veksler and Baldin Laboratory of High-Energy Physics. This latest project builds on the scientific schools and traditions of the scientists who founded this international centre for research in nuclear physics on Russian territory. The result is a collaboration between physicists at Dubna and other Russian scientific centres: the Institute for Nuclear Research of the Russian Academy of Sciences (RAS); the State Scientific Centre – Institute for High-Energy Physics (IHEP) in Protvino; the Budker Institute of Nuclear Physics (BINP); the Scientific Research Institute for Nuclear Physics of Moscow State University; and the Institute for Theoretical and Experimental Physics in Moscow.
New lease of life
The NICA project has been under development since 2006, in close co-operation with leading institutions of the RAS, the Rosatom State Atomic Energy Corporation, the Federal Agency for Science and Innovation, the Federal Agency for Education, Moscow State University and the Russian Scientific Centre “Kurchatov Institute”. It will culminate in a unique accelerator complex – a cascade of four accelerators that includes the existing Nuclotron – which should be completed by 2015. Constructed at JINR with much effort and hardship during the period of change in Russia in the 1990s, the Nuclotron has been useful for world science but, owing to insufficient financing, this superconducting accelerator has not achieved its planned beam parameters. The capacity of the vacuum and cryogenic equipment that was affordable a decade ago did not allow further increases in energy. Today, however, the NICA project is breathing new life into the Nuclotron and has opened up new prospects for high-energy physics.
Studying the properties of nuclear matter is a fundamental task for modern high-energy physics, with experimental research conducted at an extremely small scale – around a millionth of a nanometre. Achieving this task not only opens new horizons in our perception of the world and enables researchers to decipher the evolution of the universe, but also lays the foundation for the development of new techniques on the super-small scale.
According to modern ideas, quark–gluon matter has a mixed phase – like boiling water that exists simultaneously with vapour. The mixed phase of hadronic matter should include free quarks and gluons simultaneously with protons and neutrons, inside which quarks remain confined – or “glued” – by gluons. In the phase diagram of temperature and baryon density, the border between the hadronic state and the quark–gluon plasma is not a thin line but a domain, the size and shape of which are still difficult to determine. It is here, in what we call “the Dubna meadow”, that the mixed phase of hadronic matter should exist.
NICA begins with the heavy-ion source, KRION, which feeds nuclei into the linear accelerator that will be constructed by specialists from IHEP in Protvino. The beam then enters the booster synchrotron, where the particles are accelerated to the required energy. Thirty-four bunches, each consisting of 10,000 million nuclei, are transported into the Nuclotron. Once aligned by the superconducting magnets to form a thin thread approximately 30 cm long, they are split into two colliding beams of 17 bunches, each circulating in its own ring of the 251-m-circumference ion collider.
These two collider rings intersect at two points equipped with detectors. At one collision point, the MultiPurpose Detector (MPD) will search for the mixed phase and a number of other features in this energy range, such as chiral-symmetry restoration, critical phenomena and the modification of hadron properties in the hot, dense quark–hadron medium. The MPD is designed to spot particles that shoot out from the collision point in every direction. Developing a device with a sufficiently high level of sensitivity will require largely new technological approaches. Another detector, the Spin Physics Detector (SPD), is planned for the spin programme and will be located at the second collision point. Particle polarization is another mystery of the universe, which Dubna’s theoreticians hope to unravel through experiments for NICA designed together with specialists from BINP in Novosibirsk, pioneers in colliding-beam-accelerator technology.
The upgrade of the Nuclotron in Dubna is fully underway. The vacuum in the ring has been improved and the cryogenic complex – the heart of the superconducting accelerator – has been completely upgraded, as has the power system. Modern diagnostic equipment is currently being installed and a new ion source is under development. The technical design for the NICA accelerator complex and the project concept are being developed in parallel.
Several groups of highly qualified specialists from different JINR laboratories work in the NICA/MPD centre, where they are implementing the project for the new accelerator complex and experimental facilities. These include theoreticians, computer programmers, accelerator technologists, co-ordinators and experimentalists. Alexander Sorin, co-supervisor of the NICA project, is the centre’s overall leader. Igor Meshkov heads the activities on the development of the accelerator complex and his former student, Grigory Trubnikov, now deputy chief engineer of JINR, is leading the Nuclotron upgrade. Vladimir Kekelidze, the director of the Laboratory of High-Energy Physics, heads the team designing the MPD.
The construction of any modern experimental facility is impossible without detailed technical planning, so JINR has sought to involve the best-qualified engineers and designers in the process. Nikolai Topilin has returned to Dubna from CERN – where he was responsible for the development of the front-end calorimetry for the ATLAS experiment at the LHC – to become chief designer of the NICA complex. It is a good sign for Dubna that engineering designers who left for the West when Russian science was in decline are now returning. Their high-level abilities have always been – and still are – in demand in western countries, so the fact that physicists and engineers are coming back to Dubna shows that JINR has chosen the right way forward.
The development of an accelerator never happens in isolation: the physics programme for such a facility and the concept of its construction elements evolve together from the outset of building a large-scale machine. The NICA project’s White Book, published in spring 2009, contains the physics basis of the experimental programme at the accelerator complex. It is continually being expanded with new pages and is open to everyone who wants to contribute to the project.
Because Dubna is an integral part of the worldwide scientific community, both the research and the quality of the facilities must be of the highest level if it is to attract partners. On 9–12 September 2009, the Laboratory of Theoretical Physics held the fourth round-table discussion on the programme, “Physics at the NICA Collider”, with 82 experts in heavy-ion physics from leading nuclear centres in 16 countries (including six JINR member states and four JINR associate members) invited to take part. Representatives of experimental collaborations at leading facilities conducting similar research – JINR’s friends and scientific rivals, including RHIC at Brookhaven in the US, the Super Proton Synchrotron at CERN and the future Facility for Antiproton and Ion Research (FAIR) at GSI in Germany – also showed interest in the programme for NICA. The delegation from Germany was the largest, with nine experts, including Boris Sharkov, director-designate of FAIR, and Peter Senger, leader of the Compressed Baryonic Matter collaboration at FAIR.
The specifications of the NICA collider formed the main topic of discussion. Experts analysed the main aspects: nuclear-matter research in experiments with relativistic heavy-ion collisions; new states of nuclear matter at high baryonic densities; local P- and CP-violation in hot nuclear matter (the chiral magnetic effect); electromagnetic interactions and restoration of the chiral symmetry; mechanisms of multiparticle production; correlation femtoscopy and fluctuations; and polarization effects and spin physics at the NICA accelerator. Participants also discussed details of the strategy to develop the MPD and the SPD, based on the physics programme. Representatives from institutes in Russia and elsewhere took an active part in developing the programme. Russian scientists working abroad, including those originally from Dubna, proved to be eager supporters of the NICA project – for example, Brookhaven was represented by the leader of the Nuclear Theory Group, Dmitri Kharzeev.
In summary, the expediency and feasibility of the NICA project have now received considered evaluation at the level of world scientific expertise. “We strongly support the implementation of the NICA collider project and we are sure that if the project is completed in time it will make an outstanding contribution to our knowledge about the properties of the superdense matter… The unique opportunity to put the NICA project into action in Dubna must not be missed,” reads the joint memorandum on the results of the round-table discussions.
The coming year will see further contributions from Dubna to the heavy-ion scene. A new scientific journal, Heavy Ion, will accompany the research in heavy-ion physics at JINR, with the first issue scheduled for this year. On 23–29 August, Dubna will take the baton from Brookhaven when it hosts an important international conference on heavy-ion collisions at high energies, the “6th International Workshop on Critical Point and Onset of Deconfinement”.
The two currently operating high-energy gamma-ray satellites have both detected Cygnus X-3 (Cyg X-3) during episodes of strong radio flaring. These first gamma-ray detections of a genuine microquasar in our galaxy demonstrate that even small-scale relativistic jets are powerful particle accelerators.
Cyg X-3 is a peculiar binary system in our galaxy that might end its life as a gamma-ray burst (CERN Courier November 2009 p10). What makes this X-ray binary special is not the compact object (a black hole or a neutron star) but the companion, a rare Wolf-Rayet star. Such massive stars are in a late stage of their evolution and are characterized by a strong stellar wind that blows away the outer layers of gas, already enriched with heavy elements such as nitrogen, carbon and oxygen.
With a short orbital period of less than 5 hours, the black hole or neutron star of Cyg X-3 moves very close to the hot surface of the Wolf-Rayet star, deep inside its wind, which blows at about 1000 km/s. Inhomogeneities in the wind hitting the compact object are likely to be the cause of the extreme variability of Cyg X-3. The chaotic accretion of gas from the wind sometimes leads to the formation of relativistic jets, which have been resolved by radio-telescope arrays. This characteristic makes Cyg X-3 a microquasar, by analogy with the powerful jets of quasars, the active hearts of remote galaxies (CERN Courier July/August 2006 p10).
The Italian Astro-rivelatore Gamma ad Immagini Leggero (AGILE) satellite detected four major gamma-ray flares of Cyg X-3 at photon energies above 100 MeV. As M Tavani and colleagues reported in Nature, the flares lasted only a couple of days and were found during a long-term observing campaign of the Cygnus region between mid-2007 and mid-2009. They were all observed at epochs when the hard X-ray flux monitored by NASA’s Swift satellite was low. Furthermore, three of the four gamma-ray flares preceded a radio flare by less than 10 days. As the radio flares are known to be emitted by relativistic particles in the jet, this coincidence strongly suggests that the gamma-ray flare is also emitted by the jet or is related to the jet-formation process.
NASA’s Fermi satellite was launched in 2008, one year after AGILE, and also detected several flares of Cyg X-3 at energies above 100 MeV. Thanks to the superior sensitivity of its Large Area Telescope (LAT), the Fermi LAT collaboration was able to detect a periodicity in the gamma-ray signal corresponding to the 4.8-hour orbital period of the Cyg X-3 binary system. This detection, reported in Science, pins down the origin of the gamma rays to Cyg X-3 itself within the complex emission region that surrounds it. The Fermi data also confirm the link found by AGILE between gamma-ray flares and flaring activity observed at radio frequencies.
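To illustrate the kind of analysis involved in pulling an orbital period out of an irregularly sampled satellite light curve, the sketch below runs a standard Lomb-Scargle periodogram on simulated data. It is not the Fermi LAT collaboration’s actual analysis; the 4.8-hour modulation, the sampling and the noise levels are all invented purely for illustration.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Simulated, purely illustrative light curve: a weak 4.8-hour modulation on
# top of noise, sampled irregularly over a month (all values invented here).
rng = np.random.default_rng(42)
true_period_h = 4.8
t = np.sort(rng.uniform(0, 30 * 24, 2000))                    # times in hours
flux = (1.0 + 0.2 * np.sin(2 * np.pi * t / true_period_h)
        + rng.normal(0.0, 0.5, t.size))

# A Lomb-Scargle periodogram copes with the uneven sampling typical of
# satellite data; the highest peak gives the candidate orbital period.
frequency, power = LombScargle(t, flux).autopower(minimum_frequency=1 / 24,
                                                  maximum_frequency=1.0)
best_period_h = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period_h:.2f} h (true value {true_period_h} h)")
```

The real analysis is considerably more involved, working with individual photon arrival times and the instrument’s exposure, but the underlying idea of searching for a coherent modulation is the same.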
The corroborating results on Cyg X-3 by the two missions provide the first evidence that a genuine microquasar can emit high-energy gamma rays. This detection has important implications for the jet acceleration mechanism, although the actual emission process, in particular whether the emission comes from electrons or protons, is still the subject of debate.
Lawrence Berkeley National Laboratory is set to explore further the high-gradient acceleration of electron beams using ultra-short pulse lasers with the construction of a new facility – BELLA, the Berkeley Lab Laser Accelerator. The primary goal is to provide researchers within the laboratory’s Laser Optical Systems Integrated Studies (LOASIS) programme with a petawatt-class, ultra-short pulse laser system for experiments aimed at demonstrating a 10 GeV electron beam from a metre-long plasma channel.
A laser plasma accelerator (LPA) of this kind relies on creating an electron-density wave in an ionized medium (i.e. plasma) by displacing the electrons away from the ions with an intense laser pulse. The charge separation results in a strong electric field (up to 10¹⁰ V/m) that co-propagates with the laser pulse (like a wake behind a boat) and is capable of accelerating electrons to very high energies in a short distance. Electrons pulled out of the background plasma into the wake can then “surf” on it to reach high energies. Typical electric fields generated in an LPA can be more than 1000 times larger than in conventional RF accelerators, enabling the acceleration of electrons to giga-electron-volt energies in distances of centimetres instead of tens of metres.
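The quoted numbers make the appeal of the scheme easy to check with simple arithmetic: the energy an electron gains riding the wake over a length L is roughly the field times the length. The sketch below uses only the 10¹⁰ V/m figure given above, together with an assumed “typical” conventional RF gradient of about 20 MV/m for comparison.

```python
# Back-of-the-envelope energy-gain arithmetic for a laser plasma accelerator,
# using the round numbers quoted above (a sketch, not a BELLA design figure).
wake_field = 1e10   # V/m, wakefield amplitude quoted in the text
rf_field = 2e7      # V/m, ~20 MV/m, assumed typical conventional RF gradient

def gain_gev(field_v_per_m, length_m):
    """Energy gain (GeV) of an electron seeing a constant field over a length."""
    return field_v_per_m * length_m / 1e9

print(f"1 GeV needs ~{1e9 / wake_field * 100:.0f} cm of plasma "
      f"versus ~{1e9 / rf_field:.0f} m of conventional RF")
print(f"a 1 m plasma channel at this field gives ~{gain_gev(wake_field, 1.0):.0f} GeV")
```

On this simple scaling, BELLA’s goal of 10 GeV from a metre-long plasma channel corresponds to sustaining a field of the order of 10¹⁰ V/m over the full metre.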
BELLA will build on previous results from the LOASIS programme, which is led by Wim Leemans, one of six recipients of the US Department of Energy’s Ernest Orlando Lawrence Award for 2009. In 2004, researchers with LOASIS showed that high-quality electron beams with an energy spread of a few per cent could be produced at energies of 100 MeV from a structure only 2 mm long. Two years later, the team demonstrated that beams of 1 GeV could be produced from a 3 cm-long plasma structure. One of the key elements of these experiments was the guiding of the laser beams in plasma channels over distances that are long compared with their natural diffraction distance, much as an optical fibre guides a low-power beam.
The aim with the BELLA facility is to scale up these experiments to produce electron beams with energies exceeding 10 GeV in a metre-scale plasma channel. Such devices could form the building blocks of a future-generation linear collider for particle physics, provided that technology is developed to cascade many of these modules and to produce high-quality electron beams with high efficiency. While it could take decades to match the output of the highest-energy RF-based machines, BELLA represents an essential step in investigating how the more powerful accelerators of the future might become not only more compact but also much less expensive. Such systems also hold the promise of making possible a table-top accelerator operating in the range of tens of giga-electron-volts, which would be small and cheap enough for universities and hospitals.
The development of a compact linear accelerator with the BELLA project will also have several short-term applications. Among the unique features of LPA-produced electron beams are their duration of a few femtoseconds and their intrinsic synchronization to a conventional laser. A high-quality 10 GeV electron beam could be used to build a soft X-ray free-electron laser, which would be a valuable tool for biologists, chemists, materials scientists and biomedical researchers, allowing them to observe and time-resolve ultrashort (femtosecond) phenomena. A multi-giga-electron-volt electron beam could also be used to produce highly collimated, mega-electron-volt photons, which could penetrate cargo nondestructively and would be highly useful for the remote detection of nuclear material. Such high-energy photon beams can be produced by scattering an intense (low-photon-energy) laser pulse off the high-energy electron beam.
BELLA will be housed in an existing building at Berkeley. The space will be reconfigured and upgraded to include a clean room, new laser laboratory space and additional shielding. The project is funded largely by the American Recovery and Reinvestment Act (commonly known as economic stimulus funding), which is providing $20 million towards BELLA’s construction. The facility will be completed in about three and a half years.