The Belle collaboration, working at the KEKB facility at KEK, has observed a difference in the direct CP asymmetry for decays of charged and neutral B mesons into a kaon and a pion. This result is consistent with previous measurements from Belle and BaBar, but is more precise.
Two types of CP violation have previously been observed in two neutral meson systems, the K0 and B0. In these, the CP violation arises either in the mixing between the K0 or B0 and its antiparticle or – in direct CP violation – in the decay of these neutral mesons. The observed effects in both cases are larger for the B0 system than for the K0 system, but they are consistent with the Standard Model and the mechanism for CP violation first proposed by Makoto Kobayashi and Toshihide Maskawa.
Now the Belle collaboration has found that direct CP violation differs between the charged decay B± → K±π0 and the related neutral decay B0 → K±π∓. In 535 million BB̄ pairs observed at KEKB, Belle found 2241±157 K+π– and 1856±52 K–π+ events, leading to an asymmetry for B̄0 → K–π+ versus B0 → K+π– of A0 = –0.094±0.018±0.008, which favours B0 → K+π–. For the final states expected for the corresponding charged decays, the collaboration found 1600 +57/–55 K±π0 events, giving an asymmetry A± = +0.07±0.03±0.01, with more K–π0 events. The opposite signs of these two asymmetries suggest that different CP-violating effects are at work in charged and neutral B mesons.
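As a check on the numbers, taking the standard definition of the partial-rate asymmetry (in which a negative value means more K+π– than K–π+ events), the quoted yields reproduce the central value:

\[
A_0 \;=\; \frac{N(\bar{B}{}^0\to K^-\pi^+)-N(B^0\to K^+\pi^-)}{N(\bar{B}{}^0\to K^-\pi^+)+N(B^0\to K^+\pi^-)} \;=\; \frac{1856-2241}{1856+2241} \;\approx\; -0.094 .
\]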
The cause of this difference in CP asymmetry is uncertain. The large observed deviation might be explained either by strong-interaction effects or by new physics – specifically a new source of CP violation, something that is needed to explain the dominance of matter over antimatter in the universe. To understand whether new physics is indeed involved in B→Kπ decays, further study of CP violation in other modes is needed. Direct CP violation in B0→K0π0 and mixing in the Bs–B̄s system would be good candidates, but experimental measurements of these systems are not yet precise enough, and much more data are needed. The search for new physics in CP violation will be one of the major goals of the B-factory upgrade at KEK, as well as of other future B-physics facilities.
By Thea Derado, Kaufmann Verlag 2007. Hardback ISBN 9783780630599, €19.95.
Using letters, articles and biographies, the author of this book paints a lively and personal picture of the tragic life of Lise Meitner, which was thorny for two reasons: she was Jewish and a woman. Born in 1878, Meitner attended school in Vienna but could pass her school-leaving examination only after expensive private lessons. After studying in Vienna, she moved to Berlin in 1907, and for many years had to earn her living by giving private lessons. Eventually she was accepted by the radiochemist Otto Hahn as a physicist collaborator at the Kaiser Wilhelm Institute in Berlin (but without pay), and so began a creative co-operation and a lifelong friendship. Since women were not allowed to enter the institute, Meitner had to do her experiments in a wood workshop in the basement, accessible from a side entrance.
Derado describes Meitner’s scientific achievements in an understandable way, particularly experiments leading to nuclear fission and the discovery of protactinium. The technical terms are explained in an appendix.
During her stay in Berlin, Meitner met all of the celebrities of physics at the time, such as Max Planck, James Franck, Emil Fischer and Albert Einstein, whose characters are all described in a colourful fashion. She developed warm relations with Max von Laue, one of the few German physicists who bravely stood up to the Nazi regime. Apart from science, music played an important role in her life, and through music Meitner made friends with Planck’s family and happily sang Brahms’ songs with Hahn in the wood workshop.
During the First World War Meitner volunteered for the Austrian army as a radiologist. Working in a military hospital, she came to know the horrors of war. These experiences, and discussions with Hahn and Einstein, led to some inner conflicts. During the persecution of the Jews by the Nazis, Meitner enjoyed a certain protection thanks to her Austrian passport, yet after the annexation of Austria it became impossible for her to leave Germany legally. Ignoring her colleagues’ warnings, she hesitated too long; in July 1938 she finally saved herself by escaping to Holland. As a farewell present, Hahn gave her a diamond ring that he had inherited from his mother.
After a short stay in Holland, Meitner moved on to Stockholm, where Niels Bohr had arranged for her to work at the Nobel Institute, directed by Manne Siegbahn; finally, in 1947, she obtained a research professorship at the Technical University in Stockholm. I was able to work with her there for a year and can confirm many of the episodes mentioned in the book. She was a graceful little person, combining Austrian charm with Prussian orderliness and a sense of duty; she was also very kind and motherly.
Derado discusses, of course, why Meitner did not share the Nobel Prize with Hahn in 1944. Her merits were uncontested, and even after the publication of the Nobel Prize documents questions remain unanswered. It seems that being a woman counted against her. However, numerous German and international honours and awards, as well as an overwhelming reception in the US, compensated for this to a certain extent.
Meitner never married, but various family ties played an important role in her life. She was particularly attached to her nephew Otto Frisch, with whom she interpreted nuclear fission. She spent her last days with him in Cambridge, where she died in 1968.
In all, this book provides a historically accurate account, written from a female perspective, of the turbulent life of one of the greatest scientists of the 20th century. It is worth reading, not only for those interested in history, but perhaps also as encouragement for young women scientists.
• This is an abridged version of a review originally published in German in Spektrum der Wissenschaft, March 2008.
Commissioning of the LHC is making steady progress towards the target of achieving a complete cool-down by the middle of June, allowing the first injection of beams soon after. This will come almost exactly 19 years after the start-up of LEP, the machine that previously occupied the same tunnel. The LHC’s first collisions will follow later.
Half of the LHC ring – between point 5 and point 1 – was below room temperature by the first week of April, with sectors 5-6 and 7-8 fully cooled. The next step for these sectors will be the electrical tests and powering up of the various circuits for the magnets. From late April onwards, every two weeks the LHC commissioning teams will have a new sector cooled to 1.9 K and ready for testing.
Sector 7-8 was the first to be cooled to 1.9 K in April 2007, and the quadrupole circuits in the sector were powered up to 6500 A during the summer. The valuable experience gained here allowed the hardware commissioning team to validate and improve its procedures and tools so that electrical tests on further sectors could be completed faster and more efficiently. Each sector has 200 circuits to test.
The next electrical tests were carried out on sector 4-5 from November 2007 to mid-February 2008. Once the temperature had been stabilized at 1.9 K by the beginning of December, the circuits were powered up to an initial 8.5 kA. The main dipole circuit was then gradually brought up to 10.2 kA during the last week of January 2008, with the main quadrupole circuits reaching 10.8 kA in February. At this current the magnets are capable of guiding a 6 TeV proton beam.
During this testing of sector 4-5, however, a number of magnet-training quenches occurred for both dipole and quadrupole circuits. Three dipoles in particular quenched at below 10.3 kA, despite having earlier been tested to the nominal LHC operating current of 11.8 kA. It appears that retraining of some magnets will be necessary, which is likely to take a few more weeks. CERN’s management, with the agreement of all of the experiments and after having informed Council at the March session, decided to push for collisions at an energy of around 10 TeV as soon as possible this year, with full commissioning to 14 TeV expected to follow over the winter shutdown. Past experience indicates that commissioning to 10 TeV should be achieved rapidly, with no quenches anticipated.
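As a rough cross-check, if the beam energy is taken to scale approximately linearly with the main dipole current (ignoring saturation effects), and using the nominal figure quoted above of 11.8 kA for 7 TeV, then

\[
E \;\approx\; 7\,\mathrm{TeV}\times\frac{10.2\;\mathrm{kA}}{11.8\;\mathrm{kA}} \;\approx\; 6\,\mathrm{TeV},
\qquad
E \;\approx\; 7\,\mathrm{TeV}\times\frac{8.5\;\mathrm{kA}}{11.8\;\mathrm{kA}} \;\approx\; 5\,\mathrm{TeV},
\]

consistent with the 6 TeV capability reached in sector 4-5, while the 8.5 kA reached initially corresponds, on the same scaling, to roughly the 5 TeV per beam needed for 10 TeV collisions.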
Sector 5-6 will be the next to cross the 10 kA threshold; electrical tests here began in April. Sector 4-5, meanwhile, was warmed up again to allow mechanics to connect the inner triplet magnets, which were modified after a problem arose during pressure testing last year.
India’s accelerator pioneers began to build the Calcutta cyclotron in the early 1970s but soon found that the industrial infrastructure was not geared up to provide the necessary level of technology. They had, for instance, difficulty in finding a suitable manufacturer for the essential resonator tank. The Garden Reach Ship Builders said they could build watertight ships but had no experience in making tanks that had to be airtight, maintaining a high vacuum (10⁻⁶ torr). But build it they did, and the cyclotron was commissioned in June 1977. It is still working well and has catered for almost two generations of experimental nuclear physicists.
In the early 1980s some of us wanted to build a detector to be used at CERN’s SPS to register photons as signals of quark–gluon plasma (QGP), formed in the collision of two nuclei at laboratory energies of typically 200 GeV/nucleon. The protons and neutrons of an atomic nucleus at this energy should “melt” into their fundamental constituents, the quarks and gluons, rather as in the primordial universe a few microseconds after the Big Bang.
The adventure of building a photon multiplicity detector (PMD) was inspiring but not easy. The sheer size and complexity was daunting; the required precision even more so. All of the pundits (i.e. distinguished elderly scientists on funding committees) unanimously declared the project impossible and too ambitious, and generally questioned our credibility. Undaunted, we refused to accept their verdict and against all odds received our initial modest funding.
We did the design in India. The Cyclotron Centre in Calcutta, the Institute of Physics in Bhubaneswar, the universities of Rajasthan and Jammu, and Panjab University all joined in. In a short time the group built the PMD, with 55,000 pads, each consisting of a 1–2 cm² plastic scintillator. Optical fibres inserted diagonally in each pad picked up the light from the photons – the possible signals of QGP.
The PMD was a great success, unprecedented in modern India, particularly on this scale. It required all kinds of creative innovation, with the best suggestions coming from the youngest members of the team. Almost overnight, India became a key player on the world stage in this field. This kind of science and the associated precision technology, which had so far remained dormant, began to flourish; there was no looking back.
The PMD later went through a basic design change with the introduction of a “honeycomb” design, and it was shipped across the Atlantic for the STAR experiment at RHIC at the Brookhaven National Laboratory. With a reputation already established at CERN, entry into RHIC posed no problem whatsoever. The PMD has already accumulated a vast quantity of data at RHIC, with good statistics. Complemented by the results from the PHENIX detector, photons look promising as signals of QGP.
Meanwhile, in India we moved from the room-temperature cyclotron to a superconducting cyclotron using niobium–titanium superconducting wire. Hunting for a suitable company to build our cryostat was an adventure. We searched all of India but drew a blank; despite our determination, no suitable manufacturer could be found, and we eventually turned to Air Liquide in France, not without hiccups. We learned how to manage large-scale liquid helium and maintain a steady liquid-helium temperature.
Finally, on 11 January 2005, the magnet became superconducting at a temperature of around 4.2 K and maintained the superconducting state for months. It was a fantastic experience. All of January felt like a carnival. We had made the leap from room-temperature technology to large-scale cryogenic technology.
Meanwhile at CERN, the LHC was looming large on the horizon, with a heavy-ion programme and the ALICE detector. We were thrilled – here was scope for the old workhorse, the PMD, in a more sophisticated guise. Aligarh Muslim University and IIT, Bombay, also joined in our quest. My colleagues at the Saha Institute went further and wanted to participate in the dimuon spectrometer. Colleagues who had remained dormant or busy with routine jobs were suddenly inspired. They went on to design the MANAS chip for the muon arm. An Indian company, the Semi Conductor Complex Ltd in Chandigarh, enthusiastically offered to build the hardware. After much debate, the MANAS chip became central to the ALICE muon arm and was accepted worldwide.
Last time at CERN, walking in the shadow of ALICE, marvelling at its size and the immaculate precision with which the work was done, I felt like Alice in Wonderland, with “quarkland” beckoning on the horizon.
In the 1970s, India was still a spectator in the world theatre of high science. Individuals who migrated to other parts of the world sometimes excelled. In India, people were proud of them but remained convinced that such feats could not be accomplished back home. In the 1980s, however, there was a major paradigm shift in our mind set. We began to dream of competing with the world from India.
By the beginning of the 21st century, India was no longer a spectator but a significant player on the world stage. The glamour of individual excellence had been replaced by the wisdom of collective effort. We had turned mature and ambitious. What I have presented is a chronicle of that evolution. I am proud and grateful to be a witness and indeed a participant in this evolving panorama.
The voyage that started almost 30 years ago continues with resolve from LHC to FAIR to an ILC and further, making the impossible possible and turning dreams to reality.
Ring imaging Cherenkov (RICH) counters provide a unique tool to identify charged particles by measuring their velocity, even when it differs from the velocity of light by only one part in 10 million. The devices detect the visible and UV photons emitted through the Cherenkov effect, measuring the angle of the Cherenkov radiation with an imaging technique, and hence the velocity. This method offers the possibility of measuring particle velocity in domains where others fail, opening the way to particle identification over a wide range of particle momenta. In the current era of high-resolution and high-precision experiments, the domain of applications is becoming even larger, and more challenging requirements for the design of new detectors are arising.
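The underlying relation is simple: a particle with velocity β = v/c in a medium of refractive index n emits Cherenkov light at an angle θ_C to its direction of flight given by

\[
\cos\theta_C \;=\; \frac{1}{n\beta}, \qquad\text{so that}\qquad \beta \;=\; \frac{1}{n\cos\theta_C},
\]

and the measured ring angle, combined with an independent momentum measurement from a tracker, determines the particle’s mass and hence its identity.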
The 6th International Workshop on Ring Imaging Cherenkov Counters, which took place in Trieste on 15–20 October 2007, covered all of this and more, bringing together experts from around the world to analyse both the state of the art and novel perspectives in the field. Hosted by the Sezione di Trieste of INFN, this was the latest in a series of meetings that have become a reference point in the field of Cherenkov imaging detectors (CERN Courier May 2005 p33). The tradition continued at the Trieste meeting, which attracted 120 participants – a quarter coming from outside Europe – with its scientific programme of invited and contributed talks and poster contributions. The workshop also recognized young researchers in the field with the RICH2007–NIM A Young Scientists’ Award, offered by Elsevier for young scientists (under the age of 32) attending the workshop and contributing a talk or a poster. Nine young people were eligible, and the RICH2007 International Scientific Advisory Committee, chaired by Eugenio Nappi of Bari, awarded the prize to Federica Sozzi, a PhD physics student from the University of Trieste.
From RICH to DIRC
The workshop provided an opportunity to confirm the central role that RICH detectors play in particle and nuclear physics, where they form key systems in current and future experiments in a variety of fields: light and heavy quark spectroscopy, K and B physics, nucleon structure, quark–gluon plasma, heavy-ion physics, hadronic matter and hypernuclei. This was evident in the comprehensive review by David Websdale of Imperial College London and in a number of contributions, several dedicated to RICH counters in experiments at CERN, such as the successful upgrade of RICH-1 in COMPASS, the high-momentum particle-identification detector for ALICE and the RICH detectors for LHCb (CERN Courier July/August 2007 p30). In experimental astroparticle physics, RICH detectors are indispensable in balloon- and satellite-borne experiments studying the composition of cosmic rays. They also form the complete apparatus in telescopes to detect solar and cosmic neutrinos and in high-energy gamma-ray astronomy, as Eckhart Lorenz of the Max Planck Institute for Physics, Munich, and ETH Zurich emphasized in his exciting review.
The most innovative approaches in Cherenkov ring-imaging techniques presented at the meeting centred on concepts derived from the detection of internally reflected Cherenkov light (DIRC) technique. Pioneered in the BaBar experiment at SLAC, this uses quartz as both a radiator and a light guide. Kenji Inami of Nagoya described the time-of-propagation approach, illustrating it with recent results from a test beam. In this technique the measurement of a space co-ordinate is replaced by the high-resolution measurement of a time co-ordinate, resulting in a much smaller photon-detector array. A further development of the DIRC concept is to use fast pixelated photon detectors, which provide high-resolution timing, to correct for the chromatic dispersion of Cherenkov photons generated in the quartz radiator bars. The focusing-DIRC approach applies this technique and adds a focusing mirror to reduce the spread in the measured Cherenkov angles caused by the thickness of the quartz bars. Jochen Schwiening of SLAC showed how a focusing-DIRC prototype operated in a test beam has demonstrated for the first time the possibility of correcting the chromatic effect, thereby making possible a substantial gain in detector resolution.
The DIRC-derived detectors rely largely on the exceptional time resolution of a few tens of picoseconds that can be obtained with microchannel-plate photomultipliers. The characteristics of these photon detectors open the way to other frontier applications, such as the compact time-of-flight detector that Jerry Va’vra of SLAC presented. In the field of vacuum-based photon detectors, which CERN’s Thierry Gys reviewed, traditional photomultipliers can today be produced with greatly increased quantum efficiency, with peak values of more than 40%. Furthermore, hybrid photodetectors – vacuum-based detectors in which the photoelectrons are detected in silicon – have become a firm reality, thanks to mass production for the RICH detectors in LHCb.
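To give a feel for the numbers, the short sketch below (with an assumed 1 m flight path and 2 GeV/c momentum, values chosen purely for illustration) estimates the pion–kaon time-of-flight difference, which comes out at roughly 90 ps – large compared with a timing resolution of a few tens of picoseconds:

```python
# Illustrative only: the time-of-flight difference between pions and kaons of the
# same momentum, to show why tens-of-picosecond timing (as quoted above for
# microchannel-plate photomultipliers) is attractive for a compact TOF detector.
# The flight path L and momentum p below are arbitrary example values.

import math

C = 0.299792458        # speed of light in m/ns
M_PI = 0.13957         # charged-pion mass in GeV/c^2
M_K = 0.49368          # charged-kaon mass in GeV/c^2

def time_of_flight(p_gev, mass_gev, length_m):
    """Flight time in nanoseconds for momentum p (GeV/c) over length_m metres."""
    beta = p_gev / math.sqrt(p_gev**2 + mass_gev**2)
    return length_m / (beta * C)

p, L = 2.0, 1.0        # 2 GeV/c particles over a 1 m flight path (example values)
dt_ps = (time_of_flight(p, M_K, L) - time_of_flight(p, M_PI, L)) * 1e3
print(f"pi/K time-of-flight difference: {dt_ps:.0f} ps")   # roughly 90 ps
```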
Innovative approaches
Solid-state and gaseous photon detectors were reviewed by Junji Haba of KEK and Rachel Chechik of the Weizmann Institute, respectively. The first tests of Cherenkov ring imaging with silicon photomultipliers were reported by Samo Korpar of Ljubljana. In gaseous photon counters, despite the success of the first photon detectors with a solid-state photocathode – namely, multiwire proportional chambers equipped with a layer of caesium iodide as photoconverter – there are problems in exposing a photocathode to a gaseous atmosphere. The bombardment of the photocathode by ions flowing back from the amplification region results in performance limitations. Closed geometries, such as those achieved in multistage micropattern gaseous detectors, represent the new frontier, making possible drastic reductions of the ion backflow, down to less than one part in a thousand. The Hadron Blind Detector of the PHENIX experiment at RHIC at Brookhaven, a new Cherenkov detector that has been in operation for several months, is the first counter to follow this innovative approach.
The optimal performance of sophisticated detectors such as RICH counters requires excellence in a variety of technical sectors, often achieved using innovative and challenging approaches. This was confirmed by the large number of new developments and ideas discussed at RICH2007. Groups in Novosibirsk and Japan have attained new goals in the production of aerogel, in particular in terms of improved transparency and the production of tiles with a variable refractive index. Optical components in Cherenkov imaging detectors are becoming increasingly important, and the requirements now include mirrors, lenses and systems to control and monitor the alignment of huge optical arrangements. The new emphasis on high-resolution time measurements requires extended electronic readout systems to preserve the detector time resolution, such as those already in use for the multi-anode photomultiplier tubes of RICH-1 in COMPASS. Sophisticated detector-control systems guarantee optimal performance of the detectors in operation, as in the LHCb experiment, for example.
Last but certainly not least, as Guy Wilkinson of Oxford explained, effective algorithms for image-pattern recognition and particle identification are key elements in optimizing the response of Cherenkov imaging counters. Even though the algorithms are applied to different arrangements, there are some common choices: Hough transformations convert the measured quantities into co-ordinates in a space more naturally related to the Cherenkov effect; and likelihood-based algorithms maximize the amount of information extracted from the data.
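As a schematic illustration of the first of these ideas (a toy sketch only, not any experiment’s reconstruction code), each detected photon can "vote" for a ring radius around the known track impact point; the most popular radius bin then gives the Cherenkov ring radius, from which the Cherenkov angle and the likelihoods for different mass hypotheses can be derived:

```python
# A minimal Hough-style ring finder: each photon hit votes for a radius bin
# around the track impact point; the most populated bin is taken as the ring radius.

import math
from collections import Counter

def hough_ring_radius(hits, centre, bin_mm=2.0):
    """hits: list of (x, y) photon positions in mm; centre: (x, y) of the track."""
    votes = Counter()
    cx, cy = centre
    for x, y in hits:
        r = math.hypot(x - cx, y - cy)
        votes[round(r / bin_mm)] += 1          # vote for the radius bin
    best_bin, _ = votes.most_common(1)[0]
    return best_bin * bin_mm                   # most likely ring radius in mm

# Toy usage: six photons on a 100 mm ring plus one noise hit.
ring = [(100 * math.cos(a), 100 * math.sin(a)) for a in (0.3, 1.1, 2.0, 2.9, 4.2, 5.5)]
print(hough_ring_radius(ring + [(13.0, 42.0)], centre=(0.0, 0.0)))   # ~100
```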
More exotic applications include the detection of Cherenkov radiation in the radio wavelength region for astroparticle research, as Amy Connolly of University College London discussed. The use of Cherenkov light for calorimetric applications also attracted a great deal of interest at the workshop.
In summary, the intense scientific sessions and the numerous talks and contributions at RICH2007 resulted in a picture of great vitality. This was clear from the inspiring introductory talk by Blair Ratcliff of SLAC and in the summary of highlights by Silvia Dalla Torre of Trieste. The participants also benefited from the workshop’s location in the centre of Trieste, directly on a pier, and a social programme that included a visit to the Roman ruins of Aquileia’s river harbour and basilica, a walk through the huge Karst cave Grotta Gigante and a choir recital in Trieste’s cathedral. RICH practitioners already look forward to the next meeting, where appealing host sites have been proposed, such as Marseille and KEK.
• RICH2007 was sponsored by several Italian and other European institutions and private companies. These included INFN, Hadron Physics I3, CERN, the University of Trieste, the Consorzio per la Fisica and Sincrotrone Trieste. For more information, see http://rich2007.ts.infn.it/sponsors.php.
Traditionally at CERN, teams on each experiment, and in some cases each subdetector, have independently developed a detector-control system (DCS) – sometimes known as “slow controls”. This was still the case for the experiments at LEP. However, several factors – the number and geographical distribution of development teams, the size and complexity of the systems, limited resources, the long lifetime (20 years) and the perceived similarity between the required systems – led to a change in philosophy. CERN and the experiments’ management jointly decided to develop, as much as possible, a common DCS for the LHC experiments. This led to the setting up in 1997 of the Joint Controls Project (JCOP) as a collaboration between the controls teams on the LHC experiments and the support groups in CERN’s information technology and physics departments.
The early emphasis in JCOP was on the difficult task of acquiring an understanding of the needs of the experiments and agreeing on common developments and activities. This was the period when disagreements were most prevalent. However, with time the collaboration improved and so did progress. Part of this early effort was to develop a common overall architecture that would become the basis of many of the later activities.
The role of JCOP
In parallel, the JCOP team undertook evaluations to assess the suitability of a number of technologies, primarily commercial ones, such as OLE for Process Control (OPC), the field buses CANBus and ProfiBus, commercial programmable logic controllers (PLCs), as well as supervisory control and data acquisition (SCADA) products. The evaluation of SCADA products eventually led to the selection of the Prozessvisualisierungs und Steuerungs System (PVSS) tool as a major building block for the DCS for experiments. The CERN Controls Board subsequently selected PVSS as the recommended SCADA system for CERN. In addition, and where suitable commercial solutions were not available, products developed at CERN were also evaluated. This led to JCOP’s adoption and support of CERN’s distributed information manager (DIM) middleware system and the SMI++ finite-state machine (FSM) toolkit. Furthermore, developments made in one experiment were also adopted by other experiments. The best example of this is the embedded local monitor board (ELMB) developed by ATLAS. This is a small, low-cost, high-density radiation-tolerant input/output card that is now used extensively in all LHC experiments, as well as in some others.
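To illustrate the kind of behaviour that such an FSM toolkit provides (a minimal Python sketch of the concept only, not the SMI++ or JCOP FW programming interface), commands propagate down a tree of control units while states are summarized upwards:

```python
# A schematic sketch of hierarchical finite-state-machine control of a detector:
# commands flow down the tree, states are summarized back up. Illustrative only.

class ControlUnit:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "OFF"

    def command(self, cmd):
        """Propagate a command (e.g. 'GO_READY') down the hierarchy."""
        for child in self.children:
            child.command(cmd)
        if not self.children:                    # device-level node acts on it
            self.state = {"GO_READY": "READY", "GO_OFF": "OFF"}.get(cmd, self.state)

    def summarized_state(self):
        """A parent is READY only if all of its children are READY."""
        if not self.children:
            return self.state
        states = {c.summarized_state() for c in self.children}
        return "READY" if states == {"READY"} else "NOT_READY"

hv = ControlUnit("HV")
lv = ControlUnit("LV")
subdetector = ControlUnit("Subdetector", [hv, lv])
subdetector.command("GO_READY")
print(subdetector.summarized_state())   # READY
```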
One major thrust has been the development of the so-called JCOP framework (FW) (figure 1). Based on specifications agreed with the experiments, this provides a customized layer on top of the technologies chosen, such as PVSS, SMI++ and DIM. It offers many ready-to-use components for the control and monitoring of standard devices in the experiments (e.g. CAEN high voltage, Wiener and CAEN low voltage, the ELMB and racks). The FW also extends the functionality of the underlying tools, such as the configuration database tool and installation tool.
These developments were not only the work of the CERN support groups but also depended on contributions from the experiments. In this way, development and maintenance are done once and the results used by many. This centralized development has not only significantly reduced the overall development effort but will also ease long-term maintenance – an issue typically encountered by experiments in high-energy physics, where short-term collaborators do much of the development work.
As figure 1 shows, the JCOP FW has been developed in layers based on a component model. In this way each layer builds on the facilities offered by the layer below, allowing subdetector groups to pick and choose between the components on offer, taking only those that they require. The figure also illustrates how the JCOP FW, although originally designed and implemented for the LHC experiments, can be used by other experiments and applications owing to the approach adopted. Some components in particular have been incorporated into the unified industrial control system (UNICOS) FW, developed within the CERN accelerator controls group (Under control: keeping the LHC beams on track). The UNICOS FW, initially developed for the LHC cryogenics control system, is now used for many applications in the accelerator domain and as the basis for the gas-control systems (GCS) for the LHC experiments.
In addition to these development and support activities, JCOP provides an excellent forum for technical discussions and the sharing of experience across experiments. There are regular meetings, both at the managerial and the technical levels, to exchange information and discuss issues of concern for all experiments. A number of more formal workshops and reviews have also taken place involving experts from non-LHC experiments to ensure the relevance and quality of the products developed. Moreover, to maximize the efficiency and use of PVSS and the JCOP FW, JCOP offers tailor-made training courses. This is particularly important because the subdetector-development teams have a high turnover of staff for their controls applications. To date, several hundred people have attended these courses.
As experiments have not always tackled issues at the same time, this common approach has allowed them to benefit from the experience of the first experiment to address a particular problem. In addition, JCOP has conducted a number of test activities, which cover the testing of commonly used commercial applications, such as various OPC servers, as well as the scalability of many of the supported tools. Where the tests indicated problems, this provided feedback for the tool developers, including the commercial suppliers. This in turn resulted in significant improvements in the products.
Although JCOP provides the basic building blocks and plenty of support, there is still considerable work left for the subdetector teams around the world who build the final applications. This is a complex process because there are often several geographically distributed groups working on a single subdetector application, and all of the applications must eventually be brought together and integrated into a single homogeneous DCS. For this to be possible, the often small central experiment controls teams play a significant role (figure 2). They not only participate extensively in the activities of JCOP, but also have other important tasks to perform, including: development of guidelines and recommendations for the subdetector developments, to ensure easy integration; customization and extension of the JCOP FW for the experiment’s specific needs (e.g. specific hardware used in their experiment but not in the others); support and consultation for the subdetector teams; and development of applications for the monitoring and control of the general experiment infrastructure, e.g. for the control of racks and environmental monitoring.
As well as selecting, developing and supporting tools to ease the development of the DCSs, there have been two areas where complete applications have been developed. These are the detector safety systems (DSS) and the gas control systems (GCS). The DSS, which is based on redundant Siemens PLCs and PVSS, has been developed in a data-driven manner that allows all four LHC experiments to configure it to their individual needs. Although not yet fully configured, the DSS is now deployed in the four experiments and has been running successfully in some for more than a year.
The approach for the GCS goes one step further. It is also based on PLCs (Schneider Premium) and PVSS, but the PLC and PVSS code of the 23 GCSs is generated automatically using a model-based development technique. In simple terms, there is a generic GCS model that includes all possible modules and options, and each GCS application is defined by a particular combination of these modules and options. The final GCS application is created by selecting from a set of predefined components and configuring them appropriately using an application builder created for the purpose. All 23 GCSs have been generated in this way and are now deployed.
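The following minimal sketch conveys the idea in Python (the module names and options are invented for illustration and are not the real GCS components): a generic model lists the available building blocks, each gas system is defined by a particular selection, and the application configuration is generated automatically from that selection.

```python
# A much-simplified sketch of model-based generation: a generic model of modules
# and options, plus a per-system selection, yields a generated configuration.
# Module and option names here are invented purely for illustration.

GENERIC_MODEL = {
    "mixer":       ["two_component", "three_component"],
    "circulation": ["open_loop", "closed_loop"],
    "purifier":    [None, "standard"],
    "analysis":    [None, "standard"],
}

def generate_gcs(name, selection):
    """Check a selection against the generic model and emit a configuration."""
    config = {}
    for module, options in GENERIC_MODEL.items():
        choice = selection.get(module)
        if choice not in options:
            raise ValueError(f"{name}: invalid option {choice!r} for module {module}")
        if choice is not None:
            config[module] = choice
    return {"system": name, "modules": config}

# Example: one detector's gas system built from the generic model.
print(generate_gcs("example_gas_system", {
    "mixer": "three_component",
    "circulation": "closed_loop",
    "purifier": "standard",
    "analysis": None,
}))
```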
At the time of writing, the four LHC experiment collaborations were all heavily engaged in the commissioning of their detectors and control systems. To date, the integration of the various subdetector-control systems has proceeded relatively smoothly, owing to the homogeneous nature of the subdetector implementations. However, that is not to say that it has been problem free. Some issues of scaling and performance have emerged as the systems have increased in size, with more and more of the detectors being commissioned. However, thanks to the JCOP collaboration, it has been possible to address these issues in common for all experiments.
Despite some initial difficulties, the players involved see the approach described in this article, as well as the JCOP collaboration, as a success. The key here has been the building of confidence between the central team and its clients through the transparency of the procedures used to manage the project. All of the partners need to understand what is being done, what resources are available and that the milestones will be adhered to. The benefits of this collaborative approach include less overall effort, through the avoidance of duplicate development; the ability of each central DCS and subdetector team to concentrate on its own specific issues; easier integration between developments; the sharing of knowledge and experience between the various teams; and greater commonality between the experiment systems, which enables the provision of central support. In addition, it is easier to guarantee long-term maintenance with CERN-based central support. Compared with previous projects, JCOP has led to a great deal of commonality between the LHC experiments’ DCSs, and it seems likely that with more centralized resources even more could have been achieved in common.
Could the JCOP approach be applied more widely to other experiment systems? If the project has strong management, then I believe so. Indeed, the control system based on this approach for the LHCb experiment is not limited to the DCS but also covers the complete experiment-control system, which includes the trigger, data-acquisition and readout systems as well as the overall run control. Only time will tell if this approach can, and will, be applied more extensively in future projects.
The scale and complexity of the Large Hadron Collider (LHC) under construction at CERN are unprecedented in the field of particle accelerators. It has the largest number of components and the widest diversity of systems of any accelerator in the world. As many as 500 objects around the 27 km ring, from passive valves to complex experimental detectors, could in principle move into the beam path in either the LHC ring or the transfer lines. Operation of the machine will be extremely complicated for a number of reasons, including critical technical subsystems, a large parameter space, real-time feedback loops and the need for online magnetic and beam measurements. In addition, the LHC is the first superconducting accelerator built at CERN and will use four large-scale cryoplants with 1.8 K refrigeration capability.
The complexity means that repairs of any damaged equipment will take a long time. For example, it will take about 30 days to change a superconducting magnet. Then there is the question of damage if systems go wrong. The energy stored in the beams and magnets is more than twice the levels of other machines. That accumulated in the beam could, for example, melt 500 kg of copper. All of this means that the LHC machine must be protected at all costs. If an incident occurs during operation, it is critical that it is possible to determine what has happened and trace the cause. Moreover, operation should not resume if the machine is not back in a good working state.
The accelerator controls group at CERN has spent the past four years developing a new software and hardware control system architecture based on the many years of experience in controlling the particle injector chain at CERN. The resulting LHC controls infrastructure is based on a classic three-tier architecture: a basic resource tier that gathers all of the controls equipment located close to the accelerators; a middle tier of servers; and a top tier that interfaces with the operators (figure 1).
The LHC Software Application (LSA) system covers all of the most important aspects of accelerator controls: optics (Twiss parameters, machine layout); parameter space; settings generation and management (generation of functions based on optics, functions and scalar values for all parameters); trim (coherent modifications of settings, translation from physics to hardware parameters); operational exploitation; hardware exploitation (equipment control, measurements); and beam-based measurements. The software architecture is based on three main principles (figure 2). It is modular (each module has high cohesion and provides a clear application program interface to its functionality), layered (with three isolated logical layers: database and hardware access layer, business layer, user applications) and distributed (when deployed in the three-tier configuration). It provides homogeneous application software to operate the SPS accelerator, its transfer lines and the LHC, and it was already used successfully in 2005 and 2006 to operate the Low Energy Ion Ring (LEIR) accelerator, the SPS and the LHC transfer lines.
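To illustrate what a "trim" means in practice, the sketch below (a schematic Python illustration, not the Java-based LSA system itself, with an invented parameter and response matrix) translates a coherent change of a physics-level parameter into new settings for the underlying hardware parameters:

```python
# A schematic illustration of the "trim" idea: a physics-level change is mapped
# coherently onto hardware settings via a response matrix. The linear relation
# and the parameter names used here are invented purely for illustration.

def trim(settings, physics_deltas, response):
    """Apply physics-level deltas to hardware settings via a response matrix
    (a dict mapping physics parameter -> {hardware parameter: coefficient})."""
    new_settings = dict(settings)
    for phys, delta in physics_deltas.items():
        for hw, coeff in response[phys].items():
            new_settings[hw] += coeff * delta
    return new_settings

response = {"horizontal_tune": {"QF_current_A": 12.0, "QD_current_A": -3.0}}
settings = {"QF_current_A": 1500.0, "QD_current_A": 1480.0}

# Trim the horizontal tune by +0.01: both quadrupole circuits move coherently.
print(trim(settings, {"horizontal_tune": +0.01}, response))
# -> QF ~1500.12 A, QD ~1479.97 A
```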
The front-end hardware of the resource tier consists of 250 VMEbus64x sub-racks and 120 industrial PCs distributed in the surface buildings around the 27 km ring of the LHC. The mission of these systems is to perform direct real-time measurements and data acquisition close to the machine, and to deliver this information to the application software running in the upper levels of the control system. These embedded systems use home-made hardware and commercial off-the-shelf technology modules, and they serve as managers for various types of fieldbus such as WorldFIP, a deterministic bus used for the real-time control of the LHC power converters and the quench-protection system. All front ends in the LHC have a built-in timing receiver that guarantees synchronization to within 1 μs. This is required for time tagging of post-mortem data. The tier also covers programmable logic controllers, which drive various kinds of industrial actuator and sensor for systems, such as the LHC cryogenics systems and the LHC vacuum system.
The middle tier of the LHC controls system is mostly located in the Central Computer Room, close to the CERN Control Centre (CCC). This tier consists of various servers: application servers, which host the software required to operate the LHC beams and run the supervisory control and data acquisition (SCADA) systems; data servers that contain the LHC layout and the controls configuration, as well as all of the machine settings needed to operate the machine or to diagnose machine behaviours; and file servers containing the operational applications. More than 100 servers provide all of these services. The middle tier also includes the central timing that provides the information for cycling the whole complex of machines involved in the production of the LHC beam, from the linacs onwards.
At the top level – the presentation tier – consoles in the CCC run GUIs that will allow machine operators to control and optimize the LHC beams and supervise the state of key systems. Dedicated displays provide real-time summaries of key machine parameters. The CCC is divided into four “islands”, each devoted to a specific task: CERN’s PS complex; the SPS; technical services; and the LHC. Each island is made of five operational consoles and a typical LHC console is composed of five computers (figure 3). These are PCs running interactive applications, fixed displays and video displays, and they include a dedicated PC connected only to the public network. This can be used for general office activities such as e-mail and web browsing, leaving the LHC control system isolated from exterior networks.
Failsafe mechanisms
In building the infrastructure for the LHC controls, the controls groups developed a number of technical solutions to the many challenges facing them. Security was of paramount concern: the LHC control system must be protected, not only from external hackers, but also from inadvertent errors by operators and failures in the system. The Computing and Network Infrastructure for Controls is a CERN-wide working group set up in 2004 to define a security policy for all of CERN, including networking aspects, operating systems configuration (Windows and Linux), services and support (Lüders 2007). One of the group’s major outcomes is the formal separation of the general-purpose network and the technical network, where connection to the latter requires the appropriate authorization.
Another solution has been to deploy, in close collaboration with Fermilab, “role-based” access (RBAC) to equipment in the communication infrastructure. The main motivation to have RBAC in a control system is to prevent unauthorized access and provide an inexpensive way to protect the accelerator. A user is prevented from entering the wrong settings – or from even logging into the application at all. RBAC works by giving people roles and assigning permissions to those roles to make settings. An RBAC token – containing information about the user, the application, the location, the role and so on – is obtained during the authentication phase (figure 4). This is then attached to any subsequent access to equipment and is used to grant or deny the action. Depending on the action made, who is making the call and from where, and when it is executed, access will be either granted or denied. This allows for filtering, control and traceability of modifications to the equipment.
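The sketch below illustrates the principle (it is a minimal illustration only; the real RBAC token and checks are more elaborate): permissions are attached to roles rather than to individual users, and every equipment access carries a token that is checked before the action is allowed.

```python
# A minimal sketch of role-based access: roles carry permissions, a token carries
# the user's role, and each equipment action is granted or denied on that basis.
# Role names, device classes and actions below are invented for illustration.

from dataclasses import dataclass

ROLE_PERMISSIONS = {
    "lhc_operator":  {("power_converter", "set_current"), ("collimator", "move")},
    "piquet_expert": {("power_converter", "set_current")},
    "observer":      set(),
}

@dataclass
class Token:
    user: str
    application: str
    location: str
    role: str

def check_access(token, device_class, action):
    """Grant or deny an action on a class of equipment, based on the token's role."""
    allowed = (device_class, action) in ROLE_PERMISSIONS.get(token.role, set())
    print(f"{token.user} ({token.role}) -> {device_class}.{action}: "
          f"{'granted' if allowed else 'denied'}")
    return allowed

token = Token("jsmith", "trim application", "CCC console", "observer")
check_access(token, "power_converter", "set_current")   # denied
```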
An alarm service for the operation of all of the CERN accelerator chain and technical infrastructure exists in the form of the LHC Alarm SERvice (LASER). This is used operationally for the transfer lines, the SPS, the CERN Neutrinos to Gran Sasso (CNGS) project, the experiments and the LHC, and it has recently been adapted for the PS Complex (Sigerud et al. 2005). LASER provides the collection, analysis, distribution, definition and archiving of information about abnormal situations – fault states – either for dedicated alarm consoles, running mainly in the control rooms, or for specialized applications.
LASER does not actually detect the fault states. This is done by user surveillance programs, which run either on distributed front-end computers or on central servers. The service processes about 180,000 alarm events each day and currently has more than 120,000 definitions. It is relatively simple for equipment specialists to define and send alarms, so one challenge has been to keep the number of events and definitions to a practical limit for human operations, according to recommended best practice.
The controls infrastructure of the LHC and its whole injector chain spans large distances and is based on a diversity of equipment, all of which needs to be constantly monitored. When a problem is detected, the CCC is notified and an appropriate repair has to be proposed. The purpose of the diagnostics and monitoring (DIAMON) project is to provide the operators and equipment groups with tools to monitor the accelerator and beam controls infrastructure with easy-to-use first-line diagnostics, as well as to solve problems or help to decide on responsibilities for the first line of intervention.
The scope of DIAMON covers some 3000 “agents”. These are pieces of code, each of which monitors a part of the infrastructure, from the fieldbuses and frontends to the hardware of the control-room consoles. It uses LASER and works in two main parts: the monitoring part constantly checks all items of the controls infrastructure and reports on problems; while the diagnostic part displays the overall status of the controls infrastructure and proposes support for repairs.
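A minimal sketch of these two parts might look as follows (illustrative only, with invented item names; this is not the actual DIAMON code): each agent reports the status of one piece of the infrastructure, and the diagnostic view aggregates the reports into an overall picture.

```python
# Illustrative sketch: agents report per-item status; a diagnostic view aggregates.

def front_end_agent(name, heartbeat_age_s):
    """One agent: declare a front end OK if its last heartbeat is recent enough."""
    return {"item": name, "ok": heartbeat_age_s < 30,
            "detail": f"last heartbeat {heartbeat_age_s}s ago"}

def aggregate(reports):
    """Diagnostic view: overall status plus the list of faulty items."""
    faulty = [r for r in reports if not r["ok"]]
    return {"overall": "OK" if not faulty else "FAULT", "faulty": faulty}

reports = [front_end_agent("front_end_A", 5), front_end_agent("front_end_B", 120)]
print(aggregate(reports))   # overall FAULT, front_end_B listed as faulty
```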
The front end of the controls system has its own dedicated real-time front-end software architecture (FESA). This framework offers a complete environment for equipment specialists to design, develop, deploy and test equipment software. Despite the diversity of devices – such as beam-loss monitors, power converters, kickers, cryogenic systems and pick-ups – FESA has successfully standardized a high-level language and an object-oriented framework for describing and developing portable equipment software, at least across CERN’s accelerators. This reduces the time spent developing and maintaining equipment software and brings consistency to the equipment software deployed across all accelerators at CERN.
This article illustrates only some of the technical solutions that have been studied, developed and deployed in the controls infrastructure in the effort to cope with the stringent and demanding challenges of the LHC. This infrastructure has now been tested almost completely on machines and facilities that are already operational, from LEIR to the SPS and CNGS, as well as in LHC hardware commissioning. The estimated collective effort amounts to some 300 person-years and a cost of SFr21 m. Part of this enormous human resource comes from international collaborations, whose valuable contributions are hugely appreciated. Now the accelerator controls group is confident that it can meet the challenges of the LHC.
Control systems are central to the operation of particle accelerators and other large-scale physics projects. They allow completely integrated operation, including the continuous monitoring of subsystems; the display of statuses and alarms to operators; the preparation and automation of scheduled operations; the archiving of data; and making all of the experimental data available to operators and system experts. The latest news from projects around the world formed the main focus of the 11th International Conference on Accelerator and Large Experimental Physics Control Systems (ICALEPCS), which took place on 13–19 October in Knoxville, Tennessee. More than 360 people from 22 countries attended the meeting, hosted by the Oak Ridge National Laboratory (ORNL) and the Thomas Jefferson National Accelerator Facility at the Knoxville Conference Center. The 260 presentations, including 71 talks, confirmed the use of established technologies and reviewed their consolidation. Excellent poster sessions also provided plenty of opportunity for discussions with the authors during the coffee breaks.
The weekend prior to the conference saw three related meetings. Almost 50 people attended the Control System Cyber-Security workshop, where eight major laboratories presented and discussed current implementations and future prospects for securing control systems. All have acknowledged the risk and all follow a “defence-in-depth” approach, focusing on network protection and segregation, authorization and authentication, centralized PC installation schemes and collaboration between information-technology and controls experts.
Approaches to control systems
In parallel, 200 people attended meetings of the collaborations developing the open-source toolkits EPICS and TANGO. The EPICS collaboration in particular has grown since previous ICALEPCS meetings. The contributions presented at the conference showed that these two toolkits are the most widely used and are the predominant choice for many facilities. For example, EPICS has recently been selected for use at the Spallation Neutron Source (SNS) at ORNL, while the control system of the ALBA light source in Spain will be based on TANGO.
Alternative solutions employ commercial supervisory control and data acquisition (SCADA) products for control systems. This is the case, for example, at CERN, the Laser Mégajoule project and the SOLEIL synchrotron. At CERN, the cryogenics system for the LHC and the LHC experiments, among others, make extensive use of commercial SCADA systems. The combination of their use with appropriate software frameworks developed in common has largely facilitated the design and construction of these control systems. They are currently being scaled up to their final operational size – a task that has gone smoothly so far (Under control: keeping the LHC beams on track and Detector controls for LHC experiments).
Independent of the approach adopted, the controls community has focused strongly on the software-development process, taking an increasing interest in risk reduction, improved productivity and quality assurance, as well as outsourcing. The conference learned of many efforts for standardization and best practice, from the management of requirements to development, implementation and testing. Speakers from CERN, for example, explained the benefits of the adoption of the Agile design and programming methodology in the context of control-system development.
The ITER tokamak project in Cadarache, France, has taken an approach that uses a unified design to deal with the static and dynamic behaviour of subsystems. The operation of ITER requires the orchestration of up to 120 control systems, including all technical and plasma diagnostic systems. ITER will outsource a large fraction of these control systems, which will be procured “in kind” from the participating teams. Outsourcing also played a major role in the Australian Synchrotron and it involved co-operation between research institutions and industrial companies to enhance and optimize the functionality of their control-system products. Such collaboration needs the definition of strict acceptance criteria and deadlines, but it also allows outsourcing of the risk. The Mégajoule project tested its subcontracting process within a small “vertical slice”, before adapting all of the outsourcing and the integration process to the full-scale system. The Atacama Large Millimetric and Submillimetric Array has provided further lessons about the successful organization of a distributed team and integration of different objects. The project enforced a common software framework on all participating teams, and the integration process focused on functionality rather than on the subsystems.
In addition to the software frameworks for control systems, there are many plug-ins, tools and utilities under development, using, in particular, the Java language. For example, EPICS employs Java at all levels, from the front-end Java input/output (I/O) controllers to the supervision layer. Java is now a top candidate for new developments, owing mostly to its productivity and portability, not only for graphical user interfaces (GUIs) and utilities but also for applications that are calculation intensive. The accelerator domain has successfully integrated more advanced Java-related techniques. SLAC, for example, has benefited from the open-source Eclipse technologies, and the Java-based open-source Spring framework is being deployed in the LHC accelerator control system at CERN (Under control: keeping the LHC beams on track). However, somewhat contrary to these common efforts, individual projects have also developed a variety of new electronic logbooks and custom GUI builders.
The flexibility and portability of Java are becoming increasingly combined with the extensible markup language XML. With interoperability in mind, the growing (and correct) usage of XML and associated technologies provides a good basis for openness, data exchange and automation, rather than simply for configuration.
An example of this openness is the adoption of industrial solutions for data management. Modern control systems have seen a rapid growth in the data to be archived, together with rising expectations for performance and scalability. File-based or dedicated solutions for data management are reaching their limits, so these techniques are now being replaced by high-performance databases, such as Oracle and PostgreSQL. These databases not only record the parameters of control systems but are also used for administration, documentation management and equipment management. In addition to these well established technologies, some users have chosen ingenious approaches. For example, SPring-8 in Japan has a geographic information system integrated into its accelerator management (figure 1). The Google Maps-like system allows equipment to be localized, visualized and monitored in real time, and it has opened up interesting perspectives for the control-systems community.
Hardware becomes soft
On the hardware side, VME equipment shows an increased use of embedded controllers, such as digital signal processors and field-programmable gate arrays. Their flexibility brings the controls software directly onto the front end, for example as cross-compiled EPICS I/O controllers. The development of radiation-hard front ends, for example for the Compact Linear Collider study and the LHC at CERN, has presented other challenges. Timing systems have also had to face new challenges: the LHC requires independent and asynchronous timing cycles of arbitrary duration; timing distributions, for the accelerators at SOLEIL or the Los Alamos Neutron Science Center, for example, are based on common networks with broadcasting clocks and event-driven data; and modern free-electron lasers (FELs), such as at SPring-8, depend on timing accuracies of femtoseconds to achieve stable laser beams.
FELs and light sources were the main focus of several status reports at the conference. The X-ray FEL project at SPring-8 has implemented its control system in MADOCA, a framework that follows a three-tier control model. The layers consist of an interface layer based on DeviceNet programmable logic controllers and VME crates; communication middleware based on remote procedure calls; and Linux consoles for the GUIs. The control system for the Free-electron Laser in Hamburg (FLASH) at DESY provides bunch-synchronized data recording using a novel integration of a fast DAQ system. The FLASH collaboration carried out an evaluation of the front-end crates used in the telecoms industry, which suggested that they had more reliable operation and integrated management compared with VME crates. The collaboration for the ALICE experiment at the LHC reported on progress with its control system, which is currently being installed, commissioned and prepared for operation, due to start later this year. Other status reports came from the Facility for Antiproton and Ion Research at GSI and the Diamond Light Source in the UK.
The conference concluded with presentations about the new developments and future steps in the evolution of some of the major controls frameworks. These underlined that the ICALEPCS conference not only confirmed the use of established technologies and designs, in particular EPICS and TANGO, but also showed the success of commercial SCADA solutions. Control systems have become highly developed and the conference reviewed consolidation efforts and extensions thoroughly. The social programme included a dinner with bluegrass music and an excellent tour of the SNS, the world’s most intense pulsed accelerator-based neutron source, which rounded off the meeting nicely. Now the controls community awaits the 12th ICALEPCS conference, to be held in Kobe, Japan, in autumn 2009.
With the title Quantum Chromodynamics – String Theory meets Collider Physics, the 2007 DESY theory workshop brought together a distinguished list of speakers to present and discuss recent advances and novel ideas in both fields. Among them was Juan Maldacena from the Institute for Advanced Study, Princeton, pioneer of the interrelationship between gauge theory and string theory, who also gave the Heinrich Hertz lecture for the general public.
From a dynamical point of view, quantum chromodynamics (QCD), the theory of strong interactions, represents the most difficult sector of the Standard Model. Mastering the complexities of strong interactions is essential for a successful search for new physics at the LHC. In addition, the relevance of the QCD phase transition for the early evolution of our universe has ignited an intense interest in heavy-ion collisions, both at RHIC in Brookhaven and at the LHC at CERN. The QCD community is thus deeply engaged in investigations to further our understanding of QCD, to reach the highest accuracy in its theoretical predictions and to advance existing computational tools.
String theory, initially considered a promising theoretical model for strong interactions, was long believed incapable of capturing, in detail, the correct high-energy behaviour. In 1997, however, Maldacena overcame a prominent obstacle to applications of string theory to gauge physics. He proposed describing strongly coupled four-dimensional (supersymmetric) gauge theories through closed strings in a carefully chosen five-dimensional background. In fact, equivalences (dualities in modern parlance) between gauge and string theories emerge, provided that the strings propagate in a five-dimensional space of constant negative curvature. Such a geometry is called an anti-de Sitter (AdS) space, and the duality involving strings in an AdS background became known as the AdS/CFT correspondence, where CFT denotes conformal field theory. If the duality turns out to be true, string-theory techniques can give access to strongly coupled gauge physics, a regime that only lattice gauge theory has so far been able to access. Though a string theory dual to real QCD has still to be found, AdS/CFT dualities are beginning to bring string theory closer to the “real world” of particle physics.
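For concreteness, one standard way to write the metric of five-dimensional AdS space, in Poincaré coordinates with curvature radius R, is

\[
ds^2 \;=\; \frac{R^2}{z^2}\left( dz^2 + \eta_{\mu\nu}\,dx^{\mu}dx^{\nu} \right),
\]

where the four coordinates x^μ are identified with those of the gauge theory and the fifth coordinate z can be thought of as an energy scale, the boundary z → 0 corresponding to the ultraviolet.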
With the duality conjecture as its focus, the DESY workshop covered the full spectrum of research topics that have entered this interdisciplinary endeavour. Topics ranged from the role of QCD in the evaluation of experimental data and in Monte Carlo simulations to string theory calculations in AdS spaces.
To begin with the more practical side, QCD clearly dominates the daily analysis of data from RHIC, HERA at DESY, and Fermilab’s Tevatron. Tom LeCompte of Argonne presented results from the Tevatron, and Uta Klein of Liverpool looked at what we have learned from HERA. The results relating to parton densities will be of utmost importance for measurements at the LHC, not least in the kinematic region of small x, which was among the highlights of HERA physics. Diffraction – one of the puzzles for the HERA community – continues to demand attention at the LHC, in particular as a clean channel for the discovery of new physics, as Brian Cox of the University of Manchester explained.
Monte Carlo simulations represent an indispensable tool for analysing experimental data, and existing models need steady improvement as we approach the new energy regime at the LHC. Gösta Gustafson of Lund and Stefan Gieseke of Karlsruhe described the progress that is being made in this respect. Topics of particular current interest include a careful treatment of multiple parton interactions and the implementation of next-to-leading-order (NLO) QCD matrix elements in Monte Carlo programs.
At present, lattice calculations still offer the most reliable framework for studies of QCD beyond the weak-coupling limit. Among other issues, the workshop addressed the calculation of low-energy parameters such as hadron masses and decay constants. In this context, Federico Farchioni of Münster noted that the limit of small quark masses calls for careful attention, and Philipp Hägler of Technische Universität München discussed developments in calculating hadron structure from the lattice. Another important direction concerns the QCD phase structure and, in particular, accurate estimates of the phase-transition temperature, Tc, as Akira Ukawa of Tsukuba explained. Lattice gauge theories also allow the investigation of connections with string theory. Michael Teper of Oxford showed how, once the dependence of gauge theory on the number of colours, Nc, is sufficiently well controlled, it may become possible to determine the energy spectrum of closed strings in the limit of large ’t Hooft coupling.
QCD perturbation theory
NLO and next-to-NLO calculations in QCD perturbation theory are needed to derive precise expressions for cross-sections: they are crucial for describing experimental data at existing colliders and provide indispensable input for discriminating new physics from mere QCD background at the LHC. The necessary computations require a detailed understanding of perturbative QCD, as Werner Vogelsang of Brookhaven National Laboratory discussed. For example, the theoretical foundation of kt factorization and of unintegrated parton densities, along with their use in hadron–hadron collisions, is attracting much attention. For higher-order QCD calculations, Alexander Mitov of DESY, Zeuthen, described how advanced algorithms are being developed and applied.
Higher-order computations in QCD are becoming one of the most prominent examples of an extremely profitable bridge between gauge and string theories. Multiparton final states at the LHC have sparked interest in perturbative gauge theory computations of scattering amplitudes that involve a large number of incoming and/or outgoing partons. At the same time there is an urgent need for higher-loop results, which, in view of the rapidly growing number of Feynman diagrams, seem to be out of reach for more conventional approaches. Recent investigations in this direction have unravelled new structures, such as in the perturbative expansion of multigluon amplitudes.
In a few special cases, such as four-gluon amplitudes in N = 4 supersymmetric Yang–Mills theory, these investigations have led to highly non-trivial conjectures for all-loop expressions. This was the topic of talks by David Dunbar of Swansea and Lance Dixon of Stanford. According to the AdS/CFT duality, the strong-coupling behaviour of these amplitudes should be calculable within string theory. Indeed, Maldacena described how the relevant string-theory computation of four-gluon amplitudes has been performed, yielding results that agree with the gauge-theory prediction. On the gauge-theory side, a conjecture for a larger number of gluons has also been formulated; Maldacena noted that this is currently contested both by string-theoretic arguments and by more refined gauge-theory calculations.
The expressions for four-gluon amplitudes contain a certain universal function, the so-called cusp anomalous dimension, which can again be computed at weak (gauge theory) and strong (supergravity) coupling. Gleb Arutyunov of Utrecht showed how this particular quantity is also being investigated using modern techniques of integrable systems. Remarkably, as Niklas Beisert of the Albert Einstein Institute in Golm explained, a formula for the cusp anomalous dimension in N = 4 super-Yang–Mills theory has recently been proposed that interpolates correctly between the known weak- and strong-coupling expansions. In addition, Vladimir Braun of Regensburg and Lev Lipatov of Hamburg and St Petersburg described how integrability features in the high-energy regime of QCD, both in the short-distance and the small-x limits. The integrable structures have immediate applications to data analysis. Yuri Kovchegov of Ohio also pointed out that low-x physics in QCD, with all the complexities appearing in the NLO corrections, might possess close connections with the supersymmetric relatives of QCD. The higher-order generalization of the Balitsky–Fadin–Kuraev–Lipatov pomeron, which is expected to correspond to the graviton, is of particular interest. In this way, studies of the high-energy regime seem to carry the seeds of new relations to string theory.
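Writing λ for the ’t Hooft coupling, the known leading behaviours of this function at the two ends are, schematically,

f(λ) ≈ λ/(2π^2) at weak coupling and f(λ) ≈ √λ/π at strong coupling,

and the proposed interpolating equation reproduces both expansions.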
Another close contact between string theory and QCD appears at temperatures near and above the QCD phase transition. Heavy-ion experiments that probe this kinematic region are currently taking place at RHIC and will soon be carried out at the LHC. CERN’s Urs Wiedemann introduced the topic, and John Harris of Yale presented results and discussed their interpretation. The analysis of RHIC data requires somewhat unusual theoretical concepts, including, for example, QCD hydrodynamics. As in any other system of fluid mechanics, viscosity is an important parameter used to characterize quark–gluon plasmas, but its measured value cannot be explained through perturbative QCD. This suggests that the quark–gluon plasma at RHIC is strongly coupled, so string theory should be able to predict properties such as the plasma’s viscosity through the AdS/CFT correspondence. David Mateos of Santa Barbara and Hong Liu of Boston showed that the string theoretic computation of viscosity and other quantities is indeed possible, based on investigations of gravity in a thermal black-hole background. It leads to values that are intriguingly close to experimental data.
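The best-known quantity of this kind is the ratio of shear viscosity η to entropy density s, which for gauge theories with such a gravity dual takes the universal value

η/s = ħ/(4π k_B) ≈ 0.08,

much smaller than typical weak-coupling estimates and in the range indicated by the RHIC data.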
String theory is often perceived as an abstract theoretical framework, far away from the physics of the real world and experimental verification. When considered as a theory of strongly coupled gauge physics, however, it is beginning to slip into a new role – one that offers novel views of qualitative features of gauge theory and, in some cases, even quantitative predictions. The QCD community, on the other hand, is beginning to realize that its own tremendous efforts may profit from the novel alliance with string theory. The participants of the 2007 DESY Theory workshop witnessed this recent shift, through lively discussions and numerous excellent talks that successfully bridged the two communities.
The joint forces of NASA’s Hubble and Spitzer space telescopes have identified a source that is likely to be the most distant galaxy known to date. If confirmed, this discovery will be a new milestone on the path towards the detection of the earliest galaxies emerging from the dark ages.
In astronomy, looking far is looking in the past and thus the most distant galaxies are seen as they were when the universe was only about 1000 million years old. This is roughly the time when the universe became re-ionized by the collective light of early galaxies and marks the end of the "dark ages". This period was dark not only because stars and galaxies were just starting to form but also because the universe was pervaded by cold clouds of gas that were effective in absorbing radiation at wavelengths below the photoionization threshold of hydrogen at 91.2 nm. This leads to a break in the spectrum of the most distant galaxies, which appears shifted from the ultraviolet to the infrared because of the cosmological redshift. For an object at a redshift of z = 7, this "Lyman break" would be shifted by a factor of 1 + z, from 91.2 nm to 730 nm. A source at such a high redshift should thus be detectable in the infrared, while remaining unseen in the visible range. This is the signature for identifying the most distant galaxies.
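The arithmetic behind this signature is simple enough to sketch; the short Python function below (illustrative only) just applies the redshift factor quoted above.

# Observed position of the Lyman break for a source at redshift z.
# The rest-frame break sits at the hydrogen photoionization threshold, 91.2 nm.

def lyman_break_observed_nm(z, rest_nm=91.2):
    """Return the observed wavelength of the Lyman break in nanometres."""
    return (1.0 + z) * rest_nm

print(lyman_break_observed_nm(7.0))   # about 730 nm, as quoted above
print(lyman_break_observed_nm(7.6))   # about 784 nm, in the near-infrared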
Strangely, the best places to search for these extreme sources are not necessarily the emptiest regions of the night sky (CERN Courier November 2006 p10), but can also be the bright areas covered by huge clusters of galaxies. The overall cluster mass deforms space–time and provides a natural lens that magnifies the light received from galaxies located far beyond the cluster. Furthermore, a detailed mapping of this gravitational-lensing effect shows where the magnification of remote galaxies is greatest and thus tells astronomers where to search. Using this technique, a team of astronomers announced the discovery of a source at a redshift of 10 (CERN Courier May 2004 p13), but this detection turned out to be spurious.
A highly reliable candidate for a galaxy at a redshift of more than seven has now been detected through the galaxy cluster Abell 1689 in a deep exposure with the Hubble Space Telescope. The relatively nearby cluster (redshift z = 0.18) magnifies the light of the remote galaxy by almost a factor of ten. This source, found by Larry Bradley of Johns Hopkins University in Baltimore and colleagues, is much brighter than other high-redshift candidates, making the detection of the Lyman break more significant. The source was unseen with the Hubble Advanced Camera for Surveys at wavelengths shorter than 850 nm, but is detected with high significance (8σ) at 1.1 μm by the Near Infrared Camera and Multi-Object Spectrometer, while becoming dimmer towards longer wavelengths. The authors claim that only a star-forming galaxy at a redshift of 7.6 ± 0.4 can reasonably fit these properties.
Subsequent observations with the infrared Spitzer Space Telescope confirm the presence of the source and better constrain the nature of the object. They suggest a galaxy with a mass of about 3000 million times that of the sun in the form of stars younger than about 300 million years. This is consistent with expectations for such early galaxies, but final confirmation of the discovery will require a redshift determination with near-infrared spectroscopy.