
CEBAF: a fruitful past and a promising future


On 18 May, the US Department of Energy’s Jefferson Lab shut down its Continuous Electron Beam Accelerator Facility (CEBAF) after a long and highly successful 17-year run, which saw the completion of more than 175 experiments exploring the nature of matter. At 8.17 a.m., Jefferson Lab’s director, Hugh Montgomery, terminated the last 6 GeV beam, and the Accelerator Division’s associate director, Andrew Hutton, together with the director of operations, Arne Freyberger, threw the switches on the superconducting RF zones that power CEBAF’s two linear accelerators. Coming up next – the return of CEBAF, with double the energy and a host of other enhancements designed to delve even deeper into the structure of matter.

Jefferson Lab has been preparing for the 12 GeV upgrade of CEBAF for more than a decade. In fact, discussions of CEBAF’s upgrade potential began soon after it became the first large-scale accelerator built with superconducting RF technology. Its unique design features two sections of superconducting linear accelerator, joined by magnetic arcs so that a continuous-wave electron beam can be accelerated by multiple passes through the linacs. The final layout took account of CEBAF’s future, allowing extra space for an expansion.
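
As a rough illustration of how a recirculating linac such as CEBAF reaches its final energy – the beam gains energy in both linacs on every pass – the sketch below multiplies an assumed per-linac energy gain by the number of passes. The injector energies, gains and pass counts are approximate round numbers chosen for illustration, not figures taken from the article.

```python
# Back-of-the-envelope sketch of a recirculating linac's energy reach.
# All numbers are illustrative assumptions, not quoted specifications.

def final_energy(e_injector_gev, gain_per_linac_gev, passes):
    """Each pass traverses both linacs, so the gain per pass is doubled."""
    return e_injector_gev + passes * 2 * gain_per_linac_gev

# Roughly the 6 GeV era: ~0.6 GeV per linac, five passes.
print(final_energy(0.06, 0.6, 5))    # ~6 GeV

# Roughly the 12 GeV era: ~1.1 GeV per linac, with an extra half pass
# taking beam to the new Hall D.
print(final_energy(0.12, 1.1, 5.5))  # ~12 GeV
```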

Designed originally as a 4 GeV machine, CEBAF exceeded that target by half as much again to deliver high-luminosity, continuous-wave electron beams at more than 6 GeV to targets in three experimental halls simultaneously. Each beam was fully independent in current, with a dynamic range from picoamps to hundreds of microamps. Exploiting the new technology of gallium-arsenide strained-layer photocathodes provided beam polarizations topping 85%, with sufficient quality for parity-violation experiments.

Inside the nucleon

CEBAF began serving experiments in 1995, bombarding nuclei with the 4 GeV electron beam. Its physics reach soon far outstripped the initially planned experimental programme, which was traditionally classified into three broad categories: the structure of the nuclear building blocks; the structure of nuclei; and tests of fundamental symmetry.


Experiments exploring the structure of the proton led to the discovery that its distribution of magnetization is more compact than its distribution of charge. This surprising result, which contradicted previous data, generated many spin-off experiments and prompted renewed interest in the basic structure of the proton. Other studies confirmed the concept of quark–hadron duality, reinforcing the predicted relationship between these two descriptions of nucleon structure. Further measurements found that the contribution of strange quarks to the properties of the proton is small – also something of a surprise.

Turning to the neutron, CEBAF’s experiments made groundbreaking measurements of the distribution of electric charge, which revealed that up quarks congregate towards the centre while down quarks gather towards the periphery. Precision measurements of the neutron’s spin structure were also made for the first time, as Jefferson Lab demonstrated the power of its highly polarized deuteron and polarized helium-3 targets.

Studies conducted with CEBAF revealed new information about the structure of the nucleon in terms of quark flavour, while others measured the excited states of the nucleon and found new states that were long predicted in quark models of the nucleon. High-precision data on the Δ resonance – the first excited state of the proton – demonstrated that its formation is not described by perturbative QCD, as some theorists had proposed. Researchers also used CEBAF to make precise measurements of the charged pion form-factor to probe its distribution of electric charge. New measurements of the lifetime of the neutral pion were also performed to test the low-energy effective field theory of QCD.

Following the development of generalized parton distributions (GPDs), a novel framework for studying nucleon structure, CEBAF provided an early experimental demonstration that they can be measured using high-luminosity electron beams. Following the upgrade, it will be possible to make measurements that can combine GPDs with transverse momentum distribution functions to provide 3D representations of nucleon structure.

From nucleons to nuclei

In its explorations of the structure of nuclei, research with CEBAF bridges two descriptions of nuclear structure: that of the nucleus built of protons and neutrons and that of the nucleus built of quarks. The first high-precision determination of the distribution of charge inside the deuteron (built of one proton and one neutron) at short distances revealed information about how the deuteron’s charge and magnetization – quantities related to its quark structure – are arranged.


Systematic deep-inelastic experiments with CEBAF have shed light on the EMC effect. Discovered by the European Muon collaboration at CERN, this is an unexpected dip in per-nucleon cross-section ratios of heavy-to-light nuclei, which indicates that the quark distributions in heavy nuclei are not simply the sum of those of the constituent protons and neutrons. The CEBAF studies indicated that the effect could be generated by high-density local clusters of nucleons in the nucleus, rather than by the average density.

Related studies provided experimental evidence of nucleons that move so close together in the nucleus that they overlap, with their quarks and gluons interacting with each other in nucleon short-range correlations. Further explorations revealed that neutron–proton short-range correlations are 20 times more common than proton–proton short-range correlations. New experiments planned for the upgraded CEBAF will further probe the interactions of protons, neutrons, quarks and gluons to improve understanding of the origin of the nucleon–nucleon force.

High-precision data from CEBAF are also helping researchers to probe nuclei in other ways. Hypernuclear spectroscopy, which exploits the “strangeness” degree of freedom by introducing a strange quark into nucleons and nuclei, is being used to study the structure and properties of baryons in the nucleus, as well as the structure of nuclei. Also, the recent measurement of the “neutron skin” of lead using parity-violation techniques will be used to constrain the calculations of the fate of neutron stars.


CEBAF’s highly polarized, high-luminosity, highly stable electron beams have exhibited excellent quality in energy and position. Coupled with state-of-the-art cryotargets and large-acceptance precision detectors, this has allowed exploration of physics beyond the Standard Model through parity-violating electron-scattering experiments. The teams are now eagerly awaiting the results of the analysis of the experimental determination of the weak charge of the proton.

A bright future

Although the era of CEBAF at 6 GeV is over, the future is still bright. Jefferson Lab’s Users Group has swelled to more than 1350 physicists. They are eager to take advantage of the upgraded CEBAF when it comes back online, with 52 experiments – totalling some six years of beam time – already approved by the laboratory’s Program Advisory Committee (Dudek et al. 2012).


Jefferson Lab is now shut down for the installation of the new and upgraded components needed to complete the 12 GeV project. At a cost of $310 million, this will enhance the research capabilities of the CEBAF accelerator by doubling its energy and adding a fourth experimental hall, as well as by improving the existing halls, along with other upgrades and additions.

Preliminary commissioning of an upgrade cryomodule has produced good results. The unit was installed in 2011 and commissioned with a new RF system during CEBAF’s final months of running at 6 GeV. The cryomodule ran at its full specification, providing 108 MV of accelerating voltage, for more than an hour while delivering beam to two experimental halls. Commissioning of the 12 GeV machine is scheduled to begin in November 2013. Beam will be directed first to Hall A and its existing spectrometers, followed by the new experimental facility, Hall D.

Berkeley welcomes real-time enthusiasts


The IEEE-NPSS Real-Time Conference is devoted to the latest developments in real-time techniques in particle physics, nuclear physics and astrophysics, plasma physics and nuclear fusion, medical physics, space science, accelerators, and general nuclear power and radiation instrumentation. Taking place every second year, it is sponsored by the Computer Applications in Nuclear and Plasma Sciences technical committee of the IEEE Nuclear and Plasma Sciences Society (NPSS). This year, the 18th conference in the series, RT2012, was organized by the Lawrence Berkeley National Laboratory (LBNL) under the chair of Sergio Zimmermann and took place on 11–15 June at the Shattuck Plaza Hotel in downtown Berkeley, California.

The conference returned to the US after being held in Lisbon for RT2010 and in Beijing in 2009, when the first Asian conference of the series took place at the Institute of High-Energy Physics. RT2012 attracted 207 registrants, with a large proportion of young researchers and engineers. Following the meetings in Beijing and Lisbon, there is now significant attendance from Asia, as well as from the fusion and medical communities, making the conference an excellent place to meet real-time specialists with diverse interests from around the world.

Presentations and posters

As in the past, the 2012 conference consisted of plenary oral sessions. This format encourages participants to look at real-time developments in sectors other than their own and greatly fosters the necessary interdisciplinary exchange of ideas between the various fields. Following a long tradition, each poster session was associated with a “mini-oral” session, in which presenters could opt for a two-minute talk to emphasize the highlights of their posters. This is also an excellent educational opportunity for young participants to present and promote their work. With a mini-oral presentation still fresh in mind, delegates could then seek out the appropriate author during the following poster session, an approach that stimulates lively and intensive discussions.

The conference began as usual with an opening session of five invited speakers surveying hot topics in physics and innovative technical developments. First, David Schlegel of LBNL introduced the physics of dark energy as learnt from the largest galaxy maps. Christopher Marshall of the Lawrence Livermore National Laboratory introduced the National Ignition Facility and its integrated computer system. CERN’s Niko Neufeld gave an overview of the trigger and data acquisition (DAQ) at the LHC, which served as an introduction to the large number of detailed presentations that followed during the week. Henry Frisch of the University of Chicago presented news from the Large Area Photodetectors project, which aims for submillimetre and subnanosecond resolution in space and time, respectively. Last, Fermilab’s Ted Liu spoke about triggering in high-energy physics, with selected topics for young experimentalists.

The technical programme, organized by Réjean Fontaine of the Université de Sherbrooke, Canada, brought together real-time computing and DAQ applications from a wide range of fields. About half of the topics came from high-energy physics, with the rest mainly from astrophysics, nuclear fusion, medical applications and accelerators.

Some important sessions, such as that on Data Acquisition and Intelligent Signal Processing, started with an invited introductory or review talk. Ealgoo Kim of Stanford University reviewed trends in data-path structures for DAQ in positron-emission tomography systems, showing how the electronics and DAQ are similar to those for detectors in high-energy physics. Bruno Gonçalves of the Instituto Superior Técnico in Lisbon spoke about trends in controls and DAQ in fusion devices such as ITER, particularly towards reaching the necessary high availability. Riccardo Paoletti of the University of Siena and INFN Pisa presented the status of, and perspectives on, fast waveform digitizers, with many examples given in later presentations.

Rapid evolution

This year the conference saw the rapid and systematic evolution of intelligent signal processing as it moves ever closer to the front end, at the start of the DAQ chain. This approach incorporates ultrafast analogue and timing converters that apply waveform analysis, together with powerful digital signal-processing architectures, which are necessary to compress and extract data in real time in a quasi “deadtime-less” process. Read-out systems are now made of programmable devices and include hardware and software techniques and tools for programming the reconfigurable hardware, such as field-programmable gate arrays, graphics processing units (GPUs) and digital signal processors.
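
As a purely illustrative sketch of the kind of front-end waveform processing described above – baseline subtraction, digital filtering and reduction of a sampled pulse to a few quantities – the following Python fragment processes a simulated waveform. The pulse shape, noise level and threshold are invented for the example and do not correspond to any particular system.

```python
import numpy as np

# Toy front-end pulse processing: baseline-subtract a sampled waveform,
# smooth it with a moving average and reduce it to a few numbers.

rng = np.random.default_rng(0)
t = np.arange(1024)                                    # sample index
pulse = 50.0 * np.exp(-(t - 400) / 80.0) * (t >= 400)  # idealized pulse
waveform = 100.0 + pulse + rng.normal(0, 2, t.size)    # baseline + noise

baseline = waveform[:200].mean()                       # pre-trigger baseline
signal = waveform - baseline

kernel = np.ones(16) / 16                              # simple moving average
smoothed = np.convolve(signal, kernel, mode="same")

threshold = 10.0
over = np.flatnonzero(smoothed > threshold)
if over.size:
    t0 = over[0]                                       # leading-edge time
    amplitude = smoothed.max()
    charge = signal[t0:t0 + 400].sum()                 # pulse integral
    print(t0, round(amplitude, 1), round(charge, 1))
```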


Participants saw the evolution of many new projects that include architectures dealing with fully real-time signal processing, digital data extraction, compression and storage at the front-end, such as the PANDA antiproton-annihilation experiment for the Facility for Antiproton and Ion Research being built at Darmstadt. For the read-out and data-collection systems, the conceptual model is based on fast data transfer, now with multigigabit parallel links from the front-end data buffers up to terabit networks with their associated hardware (routers, switches, etc.). Low-level trigger systems are becoming fully programmable and in some experiments, such as LHCb at CERN, challenging upgrades of the level-0 selection scheme are planned, with trigger processing taking place in real time at large computer farms. There is an ongoing integration of processing farms for high-level triggers and filter farms for online selection of interesting events at the LHC. Experiences with real data were reported at the conference, providing feedback on the improvement of the event selection process.

A survey of control, monitoring and test systems for small and large instruments, as well as for new machines – such as the X-ray Free-Electron Laser at DESY – was presented, showing the increasing similarities between these systems and standard DAQ systems, and the possibilities for integrating them. A new track at the conference this year dealt with upgrades of existing systems, mainly related to the LHC experiments at CERN, to Belle II at KEK and to the SuperB project.

The conference saw an increasing number of applications and projects using new standards and emerging technologies, such as the Advanced Telecommunications Computing Architecture (ATCA), as well as feedback on the experience and lessons learnt from successes and failures. This last topic, in particular, was new at this conference: rather than showing only great achievements in glossy presentations, it can also be helpful to learn from other people’s difficulties, problems and even mistakes.

CANPS Prize awarded

A highlight of the Real-Time conference is the presentation of the CANPS prize, which is given to individuals who have made outstanding contributions in the application of computers in nuclear and plasma sciences. This year the award went to Christopher Parkman, now retired from CERN, for the “outstanding development and user support of modular electronics for the instrumentation in physics applications”. Special efforts were also made to stimulate student contributions and awards were given for the three best student papers, selected by a committee chaired by Michael Levine of Brookhaven National Laboratory.

Last, an industrial exhibit by a few relevant companies ran through the week (CAEN, National Instruments, Schroff, Struck, Wiener and ZNYX). There was also the traditional two-day workshop on ATCA and MicroTCA – the latest DAQ standards to follow CAMAC, Fastbus and VME, this time adopted from the telecommunications industry. The workshop, with tutorials, was organized by Ray Larsen and Zheqiao Geng of SLAC and Sergio Zimmermann of LBNL and took place during the weekend before the conference. Two short courses were also held that same weekend: one by Mariano Ruiz of the Technical University of Madrid on DAQ systems and one by Hemant Shukla of LBNL on data analysis with fast graphics cards (GPUs).

The 19th Real-Time Conference will take place in May 2014 in the deer park inside the city of Nara, Japan. It will be organized jointly by KEK, Osaka University and RIKEN under the chair of Masaharu Nomachi. A one-week Asian summer school on advanced techniques in electronics, trigger, DAQ and read-out systems will be organized jointly with the conference.

• More details about the Real-Time Conference are available online. A special edition of IEEE Transactions on Nuclear Science will include all eligible contributions from the RT2012 conference, with Sascha Schmeling of CERN as senior editor.

Charting the future of European particle physics

Tatsuya Nakada

The original CERN convention, which was drafted nearly 60 years ago, foresaw that the organization should have a role as co-ordinator for European particle physics, as well as operating international accelerator laboratories. Today, this role is more appropriate than ever: the long lead times usually required to prepare and construct facilities and experiments for modern high-energy physics, together with the increased costs of these activities, underline the need for a general European strategy in the field. So it was natural for CERN Council to initiate the creation of a European Strategy for Particle Physics in June 2005 and to establish dedicated groups for reviewing the scientific status and producing a proposal. They consulted widely with the community, funding agencies and policy makers in preparing the strategy document, which was adopted by Council in July 2006 during a dedicated session in Lisbon.

The strategy consists of 17 concise descriptions, with action statements. It addresses not only scientific issues but also subjects such as the organization and social relevance of high-energy physics. The highest priority on the scientific programme was given to the LHC, followed by accelerator R&D for possible future high-energy machines, including the luminosity and energy upgrades of the LHC, linear e⁺e⁻ colliders and neutrino facilities.


CERN Council adopted this strategy in 2006 with the understanding that it would be brought up to date at intervals of typically five years. The first update is now being prepared for presentation to Council in 2013, the process having been postponed for two years to wait for data from the LHC at energies of 7 and 8 TeV in the centre of mass. As a result, in addition to the recent discovery at the LHC of a new boson that is compatible with the Standard Model Higgs particle, the third mixing angle of the neutrino mixing matrix has become known through experiments elsewhere.

These new results raise scientific questions that go beyond those of 2006, such as:

• How far can the properties of the Higgs(-like) particle be explored at the LHC, with the 300 fb⁻¹ of data expected for Phase 1 and with the 1000–3000 fb⁻¹ (1–3 ab⁻¹) that the high-luminosity upgrade should yield? Do we need other machines to study the particle’s properties? If so, after taking into account factors such as technical maturity, energy expandability, cost and location, what is the optimal machine: a linear or circular e⁺e⁻ collider, a photon collider or a muon collider? As a more concrete question, what should the European reaction be towards the linear collider that is being considered in Japan? (A simple illustration of how statistical precision scales with integrated luminosity follows this list.)

• The European neutrino community is putting forward a short-baseline neutrino programme to search for sterile neutrinos, as well as a long-baseline programme to measure neutrino mass and mixing parameters, to take place in Europe. In addition, R&D studies are under way for a “neutrino factory” as an eventual facility. But what should the European neutrino programme be, and where does the global aspect start to play a role?

• What are the options for a future machine in Europe after the LHC? Will this be a machine to address physics at the 10 TeV energy scale? Will data from the LHC at the full design energy provide enough justification for this? When will be the right moment to take a decision, and what kind of R&D must be done to be ready for such a decision in the future?
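
As a simple illustration of the statistics behind the luminosity figures in the first question above, the sketch below applies the usual 1/√L scaling of a statistics-limited measurement. The “reference precision” is an arbitrary placeholder; this is not an ATLAS or CMS projection and it ignores systematic uncertainties.

```python
from math import sqrt

# Illustrative scaling only: for a statistics-limited measurement the
# relative uncertainty falls roughly as 1/sqrt(integrated luminosity).

def relative_precision(lumi_fb, ref_lumi_fb=300.0, ref_precision=1.0):
    return ref_precision * sqrt(ref_lumi_fb / lumi_fb)

for lumi in (300.0, 1000.0, 3000.0):
    print(f"{lumi:6.0f} fb^-1 -> {relative_precision(lumi):.2f} x reference")
# 3000 fb^-1 improves a statistics-limited precision by about a factor of 3
# relative to 300 fb^-1.
```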

Breakthroughs in science can emerge from unexpected corners. Therefore, the strategy must also have some flexibility to allow the fostering of unconventional ideas.

The process of updating the European strategy began formally in the summer of 2011 when Council set up a new European Strategy Group, which is assisted by the European Strategy Preparatory Group for scientific matters in preparing the proposal for the update. As with the process that led to the original strategy, the proposal will be based on the maximum input from the particle-physics community, as well as from other stakeholders – both inside and outside Europe. An important part of this consultation process was the Open Symposium recently held in Krakow, where the community expressed their opinions on the subjects outlined above, as well as on flavour physics, strong-interaction physics, non-accelerator-based particle physics and theoretical physics. Issues important for carrying out the research programme, such as accelerator science, detector R&D, computing and infrastructure for large detector construction, were also addressed. The meeting demonstrated that there is an emerging consensus that new physics must be studied both by direct searches at the highest-energy accelerator possible, as well as by precision experiments with and without accelerators.

The Preparatory Group is in the process of producing a summary document on the scientific status. The European Strategy Group will meet in January 2013 in Erice to draft the updated strategy – which must also take global aspects into account – for discussion by CERN Council in March. The aim is that Council will adopt the updated strategy during a special session to be held in Brussels in May.

• Further information on the update of the European Strategy of Particle Physics may be found at https://europeanstrategygroup.web.cern.ch/EuropeanStrategyGroup/.

The Quantum Exodus: Jewish Fugitives, the Atomic Bomb, and the Holocaust

By Gordon Fraser
Oxford University Press
Hardback: £25


Don’t be misled by the title of this book. It contains a surprising amount of information, going well beyond the exodus of Jewish scientists from Germany after the rise of the Nazi Party. The book puts anti-Semitism into a broad historical perspective, starting with the destruction of the Temple in Jerusalem, the expulsion of Jews across Europe and the growth of a mild and sometimes hidden anti-Semitism. This existed in Germany in the 19th century and even, to some extent, under the Nazis, whose initial objective was to cleanse German culture of all non-Aryan influences; various phases, however, led eventually to the Holocaust. Adolf Hitler became Chancellor in January 1933 and a political spark was ignited the following month, when the parliamentary building in Berlin went up in flames. The Civil Service Law, which forbade Jews from being employed by the state, soon followed, as did the burning of books and the Kristallnacht, during which Jewish shops were destroyed – all further steps towards the “final solution”.

In parallel with these political developments, Quantum Exodus describes the rise of German physics during the 19th century and of quantum physics in the early 20th, with protagonists such as Alexander von Humboldt, Wilhelm Röntgen, Hermann von Helmholtz, Max Planck, Walther Nernst and Arnold Sommerfeld. They attracted many Jewish scientists from all over Europe, among them Hans Bethe, Max Born, Peter Debye, Albert Einstein, Lise Meitner, Leó Szilárd, Edward Teller and Eugene Wigner, who went on to become key players in 20th-century physics. Most of them left Germany, some early on, others escaping at the last moment, and the majority went to the UK or the US, often via Denmark, with Niels Bohr’s institute as a temporary shelter. An exodus also started from other countries, such as Austria and Italy. The book recounts the adventurous and often disheartening fates of many of these physicists: arriving as refugees, they were initially often considered aliens and, during the war, sometimes even spies. The author adds spice to his narrative with amusing details from the private lives of some of the protagonists.

A detailed account is given of the Manhattan Project and of how the famous letter from Einstein to President Franklin Roosevelt initiated the building of the fission bomb. It was written as a result of pressure from Szilárd, the main mover behind the scenes. Less well known is the crucial importance of a paper by Otto Frisch and Rudolf Peierls in the UK, which already contained the detailed ideas of the fission bomb. Robert Oppenheimer, an American Jew, became scientific director of the Manhattan Project after his studies in Europe, bringing the European mindset to the US. He attracted many émigrés to the project, such as Bethe, Teller, Felix Bloch and Victor Weisskopf. The book relates vividly how Teller, because of his stubborn character, could not be well integrated into the project; rather, he pushed in parallel for the H-bomb.

The author implies, although somewhat indirectly, that the rise of Nazism and the development of the nuclear bomb have a deeper correlation, without giving convincing details. However, the interaction of science (and its stars) with politics is well described. Although Bohr was at the centre of nuclear physics, his influence was limited – partly because of his mumbling and bad English (something that I witnessed at the Geneva Atoms for Peace Conference in 1957, where his address in English had to be translated simultaneously – into English).

Many of the exiled physicists who worked on the Manhattan Project developed considerable remorse after the events of Hiroshima and Nagasaki. When I invited Isidor Rabi to speak at the 30th anniversary of CERN, he considered his involvement in the foundation of CERN to be a kind of recompense for his wartime activities.

The descriptive account of science in the US and Europe after the Second World War is interesting. In the US, politicians’ interest in science decreased substantially and a change was introduced only when the shock of Sputnik led eventually to the “space race”. Basic science also benefited from this change, leading for example to the foundation of various national laboratories such as Fermilab. In Europe, a new stage for science emerged when a pan-European centre to provide resources on a continental rather than a national scale was proposed and CERN was founded in 1954.

The book benefits from the fact that the author is competent in physics, which he sometimes describes poetically, but never wrongly. He has done extremely careful research, giving many references and a long list of Jewish emigrants. I found few points to criticise. Minor objections concern passages about CERN, even though the author knows the organization well. For example, CERN’s response to the Superconducting Super Collider was the final choice of the circumference of the LEP tunnel (27 km), made in view of the possibility of a later proton–proton or proton–electron collider in the same tunnel, while the definite LHC proposal came only in 1987; and the LHC magnets are superconducting to achieve the necessary high magnetic fields, not so much to save electricity.

The various chapters are not written in chronological order, and political or scientific developments are interwoven with human destinies. This makes for easy and entertaining reading. Older readers who, like me, have known many of the protagonists will not avoid poignant emotions. For young readers, the book is recommended because they will learn many historical facts that should not be forgotten.

One intriguing question (probably unanswerable) that was not considered is: what would have happened to US science without the contribution of Jewish immigrants?

Deflector shields protect the lunar surface


The origin of the enigmatic “lunar swirls” – patches of relatively pale lunar soil, some measuring several tens of kilometres across – has been an unresolved mystery since the mid-1960s, when NASA’s Lunar Orbiter spacecraft mapped the surface of the Moon in preparation for the Apollo landings. Now, a team of physicists has used a combination of satellite data, plasma-physics theory and laboratory experiments to show how the features can arise when the hot plasma of the solar wind is deflected around “mini-magnetospheres” associated with magnetic anomalies at the surface of the Moon.

The swirls were initially thought to be smeared-out craters, but close-range photographs from Lunar Orbiter II showed that at least one large swirl – named Reiner Gamma, after the nearby Reiner impact crater – could not be one. Studies from subsequent Apollo missions revealed that the swirls are associated with localized magnetic fields in the lunar crust. Because the Moon today has no overall magnetic field, these “magnetic anomalies” seem to be remnants of a field that existed in the past.

In 1998–1999, the Lunar Prospector mission discovered that the magnetic anomalies create miniature magnetospheres above the Moon’s surface, just as the Earth’s planetary magnetic field does on a much larger scale when it deflects the charged particles of the solar wind around the planet. Could the mini-magnetospheres on the Moon, which are only a few hundred kilometres in size, somehow shield the crust from the solar wind and so prevent the surface from darkening as a result of constant bombardment by incoming particles?


One problem with this idea has been that the magnetic fields – of the order of nanotesla – seem to be too weak to affect the energetic particles of the solar wind on the scales observed. However, a team led by Ruth Bamford of the Rutherford Appleton Laboratory and York University has shown that it is the electric field associated with the shock formed when the solar wind interacts with the magnetic field that deflects the particles bombarding the Moon.

Data from various lunar-orbiting spacecraft suggested a picture in which the solar wind is deflected round a magnetic “bubble”, creating a cavity in the plasma density enclosed by a “skin” only kilometres thick. This skin effectively reflects incoming protons, increasing their energy.

To explain these observations, Bamford and colleagues invoke a two-fluid model of the plasma, with unmagnetized ions and magnetized electrons. The electrons are slowed down and deflected by the magnetic barrier that forms when the magnetic field of the solar wind encounters the magnetic anomaly – but the much heavier ions do not respond as quickly. This leads to a space-charge separation and hence an electric field.
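
A rough order-of-magnitude sketch, not taken from the paper, of why a narrow electrostatic skin can turn back solar-wind protons: the potential drop across the skin only needs to exceed the protons’ kinetic energy. The solar-wind speed and skin width below are typical assumed values.

```python
# Potential needed to reflect a solar-wind proton, and the corresponding
# electric field across an assumed km-scale skin. Illustrative numbers only.

m_p = 1.67e-27        # proton mass, kg
e = 1.60e-19          # elementary charge, C
v_sw = 400e3          # assumed solar-wind speed, m/s

kinetic_energy_J = 0.5 * m_p * v_sw**2
potential_V = kinetic_energy_J / e            # ~0.8 kV
skin_m = 1e3                                  # assumed skin width, m
field_V_per_m = potential_V / skin_m          # ~1 V/m

print(round(potential_V), "V over", skin_m, "m ->",
      round(field_V_per_m, 2), "V/m")
```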

The team confirmed the principle of their theoretical model by using a plasma wind tunnel with a supersonic stream of hydrogen plasma and the dipole field of a magnet. The experiment showed that the plasma particles were indeed “corralled” by a narrow electrostatic field to form a cavity in the plasma, so protecting areas of the surface towards which the particles were flowing. Translated to the more irregular magnetic fields on the lunar surface, with a range of overlapping cavities, this can provide the long-awaited explanation of the light and dark patterns – protected and unprotected regions, respectively – that make up the swirls.

Baryon oscillation spectra for all


By professional astronomy standards, the 2.5 m telescope at Apache Point Observatory is quite small. More than 50 research telescopes are larger and many are located at much better sites. Apache Point Observatory is also a little too close to city lights – the atmospheric turbulence that dominates the sharpness of focus is about twice as bad as at the best sites on Earth – and summer monsoons shut down the observatory for two months each year.

Yet the Sloan Digital Sky Survey (SDSS), using this telescope, has produced the most highly cited data set in the history of astronomy (Trimble and Ceja 2008; Madrid and Macchetto 2009). Its success is rooted in the combination of high-quality, multipurpose data and open access for everyone: SDSS has obtained five-filter images of about a quarter of the sky and spectra of 2.4 million objects, and it has made them publicly available in yearly releases, even as the survey continues.

SDSS-III launched its ninth data release (DR9) on 31 July. This is the first release to include data from the upgraded spectrographs of the Baryon Oscillation Spectroscopic Survey (BOSS) – the largest of the four subsurveys of SDSS-III. By measuring more distant galaxies, these spectra probe a larger volume of the universe than all previous surveys combined.

BOSS has already published its flagship measurement of baryon acoustic oscillations (BAO) to constrain dark energy using these data (Anderson et al. 2012). BAO are the leftover imprint of primordial matter-density fluctuations that froze out as the universe expanded, leaving correlations in the distances between galaxies. The size scale of these correlations acts as a “standard ruler” to measure the expansion of the universe, complementing the “standard candles” of Type Ia supernovae that led to the discovery of the accelerating expansion of the universe.
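
The “standard ruler” idea can be illustrated with a short calculation: in an assumed fiducial flat ΛCDM cosmology, the comoving sound-horizon scale corresponds to a characteristic angle on the sky at a representative galaxy redshift. All parameter values below are assumed fiducial numbers chosen for illustration, not BOSS results.

```python
from math import sqrt, pi

# Fiducial cosmology and ruler length (illustrative assumptions).
H0 = 67.7            # km/s/Mpc
omega_m = 0.31
c = 299792.458       # km/s
r_d = 147.0          # comoving sound horizon, Mpc
z_gal = 0.57         # representative galaxy redshift

def E(z):
    """Dimensionless Hubble rate H(z)/H0 for flat LambdaCDM."""
    return sqrt(omega_m * (1 + z)**3 + (1 - omega_m))

# Comoving distance by trapezoidal integration of c/H(z).
n = 1000
dz = z_gal / n
D_C = sum(0.5 * (c / (H0 * E(i * dz)) + c / (H0 * E((i + 1) * dz))) * dz
          for i in range(n))

theta_deg = (r_d / D_C) * 180.0 / pi
print(f"comoving distance ~ {D_C:.0f} Mpc, BAO angular scale ~ {theta_deg:.1f} deg")
```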

Another major BOSS analysis using these data is still in progress. In principle, BAO can also be measured by using bright, distant quasars as backlights and measuring the “Lyman alpha forest” absorption in the spectra as intervening neutral hydrogen absorbs the quasars’ light. The wavelength of the absorption traces the red shift of the hydrogen and the amount of absorption traces its density. Thus, this also measures the structure of matter – including BAO – but at much further distances than is possible with galaxies. BOSS has the first data set with enough quasars to make this measurement and the collaboration is nearing completion of this analysis. However, the final results are not yet published and now the data are public for anyone else to try this.

Are there any surprises in the results? Not yet. BOSS has the most accurate BAO measurements yet, with distances measured to 1.7%, but the results are consistent with the “ΛCDM” cosmological standard model, which includes a dark-energy cosmological constant (Λ) and cold dark matter (CDM). But DR9 contains only about a third of the full BOSS survey and BOSS has already finished observations for data release 10 (DR10), due to be released in July 2013. DR10 will also include the first data from APOGEE, another SDSS-III subsurvey that probes the dynamical structure and chemical history of the Milky Way.

Summer running at the LHC

The LHC has delivered more than twice as many collisions to the ATLAS and CMS experiments this year as it did in all of 2011. On 4 August, the integrated luminosity recorded by each of the experiments passed the 10 fb⁻¹ mark. Last year, they each recorded data corresponding to around 5.6 fb⁻¹. On 22 August this year, the more specialized LHCb experiment passed 1.11 fb⁻¹, the same as its entire data sample for 2011.

The LHC’s peak luminosity had been running 5–10% lower following June’s technical stop. This was mainly owing to a slight degradation in beam quality from the injectors – an issue that was resolved at the beginning of August. The LHC had also been suffering from occasional beam instabilities, which resulted in significant beam losses. A solution to this second problem lay in finding new optimum machine settings with the polarity of the octupole magnets reversed relative to that of recent years. (The octupole magnets are used to correct beam instabilities.)

This reversal, accompanied by an adjustment of the settings of the sextupole magnets, was studied over several days in August. These changes paid off and the beams became more stable when brought into collision, so the bunch intensity could be increased from 1.5 × 10¹¹ to 1.6 × 10¹¹ protons per bunch. With this increased bunch intensity, the peak luminosity in ATLAS and CMS reached more than 7.5 × 10³³ cm⁻² s⁻¹, compared with the maximum of 3.6 × 10³³ cm⁻² s⁻¹ in 2011.
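
As a naive cross-check of the numbers quoted above (a scaling argument, not machine modelling), the peak luminosity grows roughly with the square of the bunch population when everything else is held fixed:

```python
# Rough luminosity scaling with bunch population, other parameters fixed.
N_before, N_after = 1.5e11, 1.6e11
print((N_after / N_before)**2)   # ~1.14, i.e. ~14% from intensity alone

# The larger 2011-to-2012 jump quoted in the text therefore also reflects
# other changes (beam energy, beta*, emittance), not bunch intensity alone.
```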

In addition, successful commissioning of injection and RF-capture using new Super Proton Synchrotron optics (called Q20 optics) has opened the way for even higher bunch intensities. This new optics system has yet to be used operationally.

During the summer runs, the machine regularly enjoyed long fills in the 12- to 15-hour range. This showed the benefits of the extensive consolidation work to mitigate the effects of radiation on electronics in the LHC tunnel and of the continuing efforts to improve overall reliability. The LHC is well on its way towards its goal of delivering of the order of 15 fb⁻¹ in 2012. Indeed, at the beginning of September, CMS and ATLAS had already recorded more than 13 fb⁻¹.

Illuminating extra dimensions with photons

Photons are a critical tool at the LHC, and the ATLAS detector has been carefully designed to measure photons precisely. In addition to playing a central role in the recent discovery of a new particle resembling the Higgs boson, final states with photons are used both to make sensitive tests of the Standard Model and to search for physics beyond it.

Recent results from the ATLAS experiment using the full 2011 data set are shining new light – in more than one sense – on theoretical models that propose the existence of extra dimensions. In these models, which were originally inspired by string theory, the extra dimensions are “compactified” – finite in extent, they are curled up on themselves and so small that they have not yet been observed. Such models could answer a major mystery in particle physics, namely the weakness of gravity as compared with the other forces. The basic idea is that gravity’s influence could be diluted by the presence of the extra dimensions. Different variants of these models exist, with corresponding differences in how they could be detected experimentally.


Events with two energetic photons provide a good place to search. In the Randall–Sundrum (RS) models of extra dimensions, a new heavy particle could decay to a pair of photons. A plot of the diphoton mass should then reveal a narrow peak above the smooth distribution expected from Standard Model backgrounds. In Arkani-Hamed–Dimopoulos–Dvali (ADD) models, on the other hand, the influence of extra dimensions should lead to a broad excess of events at large diphoton masses.

The figure shows the diphoton mass spectrum measured by ATLAS. The Standard Model background expectation has been superimposed, as have contributions expected for examples of RS or ADD signals. The data agree well with the background expectation and provide stringent constraints on the extra-dimension models. For instance, the mass of the RS graviton must be larger than 1–2 TeV, depending on the strength of the graviton’s couplings to Standard Model particles.

ADD models can also be probed via the single-photon final state. The ATLAS collaboration has searched for single photons accompanied by a large apparent imbalance in the energy measured in the event, which would result from a particle escaping into the extra dimensions and taking its energy with it. The ATLAS analysis found a total number of such events in agreement with the expectation for the small Standard Model backgrounds. The final result, therefore, was used to establish new constraints on the fundamental scale parameter MD of the so-called ADD Large Extra Dimension (LED) model. The lower limits set on the scale, which improve on previous limits, lie in the range 1.74–1.87 TeV, depending upon the number of extra dimensions.
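
To illustrate how such a search translates into a limit, the sketch below computes a simple Poisson counting-experiment upper limit. The observed count and background expectation are invented placeholders, and the real ATLAS analyses use more sophisticated (CLs-based) statistical methods.

```python
from math import exp

# Toy counting-experiment upper limit: smallest signal s such that
# P(N <= n_obs | s + background) falls below 1 - CL.

def poisson_cdf(n, mu):
    term, total = exp(-mu), exp(-mu)
    for k in range(1, n + 1):
        term *= mu / k
        total += term
    return total

def upper_limit(n_obs, bkg, cl=0.95):
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1 - cl:
        s += 0.01
    return s

print(upper_limit(n_obs=5, bkg=4.2))   # ~6.3 signal events at 95% CL
```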

As expected, photons are proving to be an extremely useful probe for new physics at the LHC, providing important tests of many models. With the higher LHC energy in 2012 and the larger data set being accumulated, photon analyses will continue to provide an ever greater potential for discovery.

Can heavy-ion collisions cast light on strong CP?

The symmetries of parity (P) and its combination with charge conjugation (C) are known to be broken in the weak interaction. However, in the strong interaction the P and CP invariances are respected – although QCD provides no reason for their conservation. This is the “strong CP problem”, one of the remaining puzzles of the Standard Model.

The possibility of observing parity violation in the hot and dense hadronic matter formed in relativistic heavy-ion collisions has been discussed for many years. Various theoretical approaches suggest that in the vicinity of the deconfinement phase transition, the QCD vacuum could create domains – local in space and time – that could lead to CP-violating effects. These could manifest themselves via a separation of charge along the direction of the system’s angular momentum – or, equivalently, along the direction of the strong, approximately 10¹⁴ T, magnetic field that is created in non-central heavy-ion collisions and perpendicular to the reaction plane (i.e. the plane of symmetry of a collision, defined by the impact-parameter vector and the beam direction). This phenomenon is called the chiral magnetic effect (CME). Fluctuations in the sign of the topological charge of these domains cause the resulting charge separation to be zero when averaged over many events. This makes the observation of the CME possible only via P-even observables, expressed in terms of two- and multi-particle correlations.

The ALICE collaboration has studied the charge-dependent azimuthal particle correlations at mid-rapidity in lead–lead collisions at the centre-of-mass energy per nucleon pair, √sNN = 2.76 TeV. The analysis was performed over the entire event sample recorded with a minimum-bias trigger in 2010 (about 13 million events). A multi-particle correlator was used to probe the magnitude of the potential signal while at the same time suppressing any background correlations unrelated to the reaction plane. This correlator has the form 〈cos(φα + φβ – 2ΨRP)〉, where φ is the azimuthal angle of the particles and the subscript indicates the charge or the particle type. The orientation of the reaction plane angle is represented by ΨRP; it is not known experimentally but is instead estimated by constructing the event plane using azimuthal particle distributions.
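
The correlator described above can be illustrated with a toy Monte Carlo in which the event plane is estimated from the particles’ second harmonic, as in the sketch below. This is purely pedagogical code, not ALICE analysis software, and the toy contains no charge separation, so the correlator comes out consistent with zero.

```python
import numpy as np

# Toy study of <cos(phi_a + phi_b - 2*Psi)> with an event plane estimated
# from the particles themselves. All parameters are invented for the toy.

rng = np.random.default_rng(1)

def toy_event(n=500, v2=0.1):
    """Sample azimuthal angles with a small elliptic (2nd-harmonic) modulation."""
    psi_rp = rng.uniform(0, np.pi)
    phis = []
    while len(phis) < n:
        phi = rng.uniform(0, 2 * np.pi)
        # accept-reject against 1 + 2*v2*cos(2*(phi - psi_rp))
        if rng.uniform(0, 1 + 2 * v2) < 1 + 2 * v2 * np.cos(2 * (phi - psi_rp)):
            phis.append(phi)
    return np.array(phis), psi_rp

def event_plane(phis):
    """Second-harmonic event plane estimated from the particles."""
    return 0.5 * np.arctan2(np.sin(2 * phis).sum(), np.cos(2 * phis).sum())

vals = []
for _ in range(200):
    phis, _ = toy_event()
    psi_ep = event_plane(phis)
    a, b = phis[::2], phis[1::2]              # pair particles arbitrarily
    vals.append(np.mean(np.cos(a[:len(b)] + b[:len(a)] - 2 * psi_ep)))

print(np.mean(vals))  # consistent with zero for this charge-blind toy
```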

The figure shows the correlator as a function of the collision centrality compared with model calculations, together with results from the Relativistic Heavy-Ion Collider (RHIC). The points from ALICE, shown as full and open red markers for pairs with the same and opposite charge, respectively, indicate a significant difference not only in the magnitude but also in the sign of the correlations for different charge combinations, which is consistent with the qualitative expectations for the CME. The effect becomes more pronounced moving from central to peripheral collisions, i.e. moving from left to right along the x-axis. The previous measurement of charge separation by the STAR collaboration at RHIC in gold–gold collisions at √sNN = 0.2 TeV, also shown in the figure (blue stars), is in both qualitative and quantitative agreement with the measurement at the LHC.


The thick solid line in the figure shows a prediction for the same-sign correlations caused by the CME at LHC energies, based on a model that makes certain assumptions about the duration and time-evolution of the magnetic field. This model underestimates the observed magnitude of the same-sign correlations seen at the LHC. However, parallel calculations based on arguments related to the initial time at which the magnetic field develops, as well as the same value of the magnetic flux for both energies, suggest that the CME might have the same magnitude at the energies of both colliders. Conventional event generators, such as HIJING, which do not include P-violating effects, do not exhibit any significant difference between correlations of pairs with the same and opposite charge; the two are averaged in the figure (green triangles).

An alternative explanation to the CME was recently provided by a hydrodynamical calculation, which suggests that the correlator being studied may have a negative (i.e. out-of-plane), charge-independent, dipole-flow contribution that originates from fluctuations in the initial conditions of a heavy-ion collision. This could shift the baseline which, when coupled to the well known effect whereby local charge conservation in the medium exhibits strong azimuthal (i.e. elliptic) modulations, could potentially give a quantitative description of the centrality dependence observed by both ALICE and STAR. The results from ALICE for the charge-independent correlations are indicated by the blue band in the figure.

The measurements are supplemented by a differential analysis and will be extended with a study of higher harmonics, which will also investigate the correlations of identified particles. These studies are expected to shed light on one of the remaining fundamental questions of the Standard Model.

Searching for new physics in rare kaon-decays

The LHCb experiment was originally conceived to study particles containing the beauty-flavoured b quark. However, there are many other possibilities for interesting measurements that exploit the unique forward acceptance of the detector. For example, the physics programme has already been extended to include the study of particles containing charm quarks, as well as electroweak physics. Now, a new result from LHCb on a search for a rare kaon decay has further increased the breadth of the experiment’s physics goals.

This search is for the decay K⁰S → μ⁺μ⁻, which is predicted to be greatly suppressed in the Standard Model. The branching ratio is expected to be 5 × 10⁻¹², while the current experimental upper limit (dating from 1973) is 3.2 × 10⁻⁷ at 90% confidence level (CL). Although the dimuon decay of the K⁰L has been observed, with a branching fraction of the order of 10⁻⁸, searches for the counterpart decay of the K⁰S meson are well motivated because it can be mediated by contributions independent of those responsible for the K⁰L decay.


The analysis is based on the 1.0 fb⁻¹ of data collected by LHCb in 2011. To suppress the background most efficiently, it involves several techniques that were originally developed for the search for B⁰s → μ⁺μ⁻, for which LHCb has set the best limit in the world. The analysis also benefits from knowledge of K⁰S production and reconstruction that has been developed in several previous measurements (including LHCb’s first published paper, on the production of K⁰S mesons in 900 GeV proton–proton collisions).

To extract an upper limit on the branching fraction, the yield is normalized relative to that in the copious K⁰S → π⁺π⁻ decay mode. The 90% CL upper limit on the branching ratio B(K⁰S → μ⁺μ⁻) is determined to be less than 9 × 10⁻⁹, a factor of 30 improvement over the previous most restrictive limit. As the figure shows, no significant evidence of the decay is seen.
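
The normalization logic described above can be written schematically as B(signal) = (N_sig/N_norm) × (ε_norm/ε_sig) × B(norm). The sketch below implements this relation with invented placeholder yields and efficiencies; only the K⁰S → π⁺π⁻ branching fraction (about 0.69) is a real number.

```python
# Schematic branching-fraction normalization; all yields and efficiencies
# below are invented placeholders, not LHCb results.

def branching_fraction(n_sig, n_norm, eff_sig, eff_norm, bf_norm):
    return (n_sig / n_norm) * (eff_norm / eff_sig) * bf_norm

# B(K0S -> pi+ pi-) ~ 0.69; everything else is made up for illustration.
print(branching_fraction(n_sig=10, n_norm=1e8,
                         eff_sig=0.01, eff_norm=0.005, bf_norm=0.69))
```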

Although the new limit is still three orders of magnitude above the Standard Model prediction, it starts to approach the level where new physics effects might begin to appear. Moreover, the data collected by LHCb in 2012 already exceed the sample from 2011 and by the end of the year the total data set should have more than trebled. The collaboration is continuing to search for ways to broaden its physics reach further to make the best use of this unprecedented amount of data and to tune the trigger algorithms for future data-taking and for the LHCb upgrade.
