Time Machines

By Stanley Greenberg; Introduction by David C Cassidy
Hirmer Verlag
Hardback: €39.90 SwFr53.90 £39.95 $59.95

The American photographer Stanley Greenberg travelled 130,000 km over five years to create the 82 black-and-white photographs included in this large-format book. They are a record of the extraordinary and sometimes surreal complexity of the machinery of modern particle physics. From a working replica of an early cyclotron to the LHC, Greenberg covers the world’s major accelerators, laboratories and detectors. There are images from Gran Sasso, Super-Kamiokande, Jefferson Lab, DESY and CERN, as well as Fermilab, SLAC and LIGO, Sudbury Neutrino Observatory, IceCube at the South Pole and many more.

The LUNA experiment at Frascati is like a giant steel retort-vessel suspended in the air; a LIDAR installation at the Pierre Auger Cosmic Ray Observatory in Argentina is a fantastically hatted creature from outer space bearing the warning “RADIACION LASER”; and the venerable 15-foot bubble chamber sits on the prairie at Fermilab like a massive space capsule that landed in the 1960s. (Who knows where its occupants might be now?)

Not a single person is seen in these beautiful images. They are clean, almost clinical studies of ingenious experiments and intricate machines and they document a world of pipes, concrete blocks, polished steel, electronics and braided ropes of wires. Greenberg has said that his earlier books, such as Invisible New York – which explores the city’s underbelly, its infrastructure, waterworks and hidden systems – are “about how cities and buildings work”, whereas Time Machines is about “how the universe works”. More accurately, perhaps, it is about the things that we build to help us understand how the universe works – but here the builders are invisible, like the particles that they are studying.

In a book whose photographs clearly demonstrate the global nature of particle physics, David Cassidy, author of an excellent biography of Werner Heisenberg, contributes a one-sided introduction concentrating on US labs and achievements. Accelerators are “prototypically American” and his main comment on the LHC is that the US has contributed half a billion dollars to it and that Americans form its “largest national group”. There are also inaccuracies: electroweak theory was confirmed by the discovery of the W and Z bosons at CERN in 1983, not 1973; and the top-quark discovery was announced in 1995, not 2008. The introduction does not do justice to Greenberg’s excellent and wide-ranging photography but, fortunately, nor does it detract from it.

Pierre-Gilles de Gennes: A Life in Science

By Laurence Plévert
World Scientific
Paperback: £32
E-book: £41

Pierre-Gilles de Gennes received the Nobel Prize in Physics in 1991 “for discovering that methods developed for studying order phenomena in simple systems can be generalized to more complex forms of matter, in particular to liquid crystals and polymers”. Neither an invention nor a discovery – it is a curious citation. The committee seems to be honouring a man rather than an identified contribution. Indeed, de Gennes’s life reads like an epic. He was born in 1932 into a family combining banking and aristocracy. His parents separated and he was doubly cherished; when war broke out, it became the occasion for Alpine holidays. This extraordinary childhood taught him discipline and curiosity, and gave him great self-confidence.

Drawn to science at the age of 15, he got through the difficult years of the classes préparatoires by playing in a jazz band. Admitted first in his year to the Ecole normale supérieure, he began the free life of a normalien, marrying and becoming a father before sitting the agrégation. He developed a passion for quantum mechanics and group theory, which he dissected from books; Feynman was his role model. Intuition had to remain sovereign, and he applied the same principle to politics, rejecting the fashions of the day. The Les Houches summer school, where he met Pauli, Peierls and others, was a revelation – the two most important months of his life, he said. His vocation for physics was confirmed there, but which path to follow? Nuclear physics? “I have the impression that nobody knows how to describe an interaction other than by adding parameters in an ad hoc way.”

On leaving the ENS he joined the theory division at Saclay. After his military service he became a professor at Orsay at the age of 29. Given carte blanche, he tackled superconductivity, building a laboratory from nothing. Letting himself be guided by imagination, he mixed experiment and theory. He instilled enthusiasm in the younger researchers; his charisma worked on everyone.

He left Orsay in 1971, called to the prestigious Collège de France to create his own laboratory there. There he developed the science of “soft matter”, encompassing bubbles and sand, gels, polymers and more. A theorist of the practical, he advocated strong collaboration with industry. Multidisciplinary before the term existed, he exploited the analogies suggested by his broad scientific culture.

In this faultless career, one hesitation appears. “Midway upon the journey of his life”, he felt the challenge of age. He met it cheerfully by founding a second family, while maintaining good relations with the first, home to three grown-up children. His wife did not rebel. His private life was as fertile as his career, and three more children were born.

Then came the hour of honours. He was elected to the Académie des Sciences, he received the CNRS gold medal and the Légion d’honneur, and he was offered a ministerial post. While remaining at the Collège de France, he was appointed to the directorship of the ESPCI, which he remodelled to his own taste, earning himself a reputation there as a despot. He was a great boss who fully assumed his role; indeed, his natural authority inspired an almost sacred awe in his collaborators. The apotheosis of the Nobel prize allowed him to apply his ideas with even less restraint. A great communicator, he popularized his ideas on television.

When cancer struck, he clung to his activities. After retiring from the Collège de France, he continued his life of research at the Institut Curie, in the field of neuroscience. He died in 2007 after a hard battle.

Pierre-Gilles de Gennes was a man of convictions. Sometimes criticized for the positions he took, he was not afraid to shake up old habits by attacking sclerotic structures: “The university needs a revolution.” Another hobby horse was “Big Science”: he opposed the Soleil synchrotron-radiation laboratory and the ITER project. A humanist, he published a delightful gallery of character portraits in the manner of La Bruyère, and admitted: “I tend to believe that our mind has irrational needs as much as rational ones.”

Bubbling with ideas, the author of 550 publications and a man of influence who spoke his mind, he dared to say: “We must hasten the slow death of exhausted fields such as nuclear physics”, and remarked: “When I opened PRL in 1960 I found a revolutionary idea every time; today I get two or three ideas a year, in a journal that has become five times thicker.” It is true that new ideas have become rare. We live on the legacy of earlier theoretical advances, and the recently discovered Higgs boson was postulated 50 years ago – hence the unsettling feeling that progress is advancing more laboriously.

Pierre-Gilles de Gennes was a fertile and passionate mind, but he also lived in a favourable period, one that offered virgin territories in which research could multiply. A career like his seems impossible today, with specialization pushed to the extreme stifling individual initiative.

The biography, very well written by the journalist Laurence Plévert, is packed with anecdotes and reads like a novel, one that fills the reader with renewed optimism about the potential of the human adventure and of fundamental research.

“Renaissance man”, says the back cover; I would venture to compare Pierre-Gilles de Gennes to an enlightened monarch in the manner of a condottiere, which does not contradict the aphorism of a journalist summing up the man’s appeal: “He is someone you would like to have as a friend, to share the privilege of feeling, for a moment, a little more intelligent.”

This book is a translation of the original French edition Pierre-Gilles de Gennes. Gentleman physicien (Belin, 2009).

Tevatron experiments observe evidence for Higgs-like particle

The CDF and DØ collaborations at Fermilab have found evidence for the production of a Higgs-like particle decaying into a pair of bottom and antibottom quarks, independent of the recently announced Higgs-search results from the LHC experiments. The result, accepted for publication in Physical Review Letters, will help in determining whether the new particle discovered at the LHC is the long-sought Higgs particle predicted in the Standard Model.

Fermilab’s Tevatron produced proton–antiproton collisions until its shutdown in 2011; the LHC produces proton–proton collisions. In their analyses, the teams at both colliders search for all potential Higgs decay modes to ensure that no Higgs-boson event is missed. While the Standard Model does not predict the mass of the Higgs boson, it does predict that a Standard Model Higgs boson with a mass below 135 GeV decays preferentially into a pair of b quarks, whereas a heavier Higgs would decay most often into a pair of W bosons.
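
A minimal sketch of this mass dependence, using approximate branching fractions taken from standard Standard Model Higgs calculations and rounded for illustration only, shows how the dominant decay mode switches from b quarks to W bosons as the assumed Higgs mass increases.

```python
# Approximate SM branching fractions, rounded for illustration only; they show
# why the Tevatron concentrated on H -> bb for a light Higgs boson.
approx_branching = {
    115: {"bb": 0.70, "WW": 0.09},   # light Higgs: decays to b quarks dominate
    125: {"bb": 0.58, "WW": 0.21},
    135: {"bb": 0.40, "WW": 0.40},   # roughly the crossover region
    160: {"bb": 0.004, "WW": 0.90},  # heavy Higgs: decays to W bosons dominate
}
for mass, br in approx_branching.items():
    leading = max(br, key=br.get)
    print(f"mH = {mass} GeV: leading decay to {leading} (BR ~ {br[leading]:.2f})")
```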

The CDF and DØ teams have analysed the full Tevatron data set – accumulated over the past 10 years. Both collaborations developed substantially improved signal and background separation methods to optimize their search for the Higgs boson, with hundreds of scientists from 26 countries actively engaged in the search.

After careful analysis and multiple verifications, on 2 July CDF and DØ announced a substantial excess of events in the data beyond the background expectation in the mass region between 120 GeV and 135 GeV, which is consistent with the predicted signal from a Standard Model Higgs boson. Two days later, the ATLAS and CMS collaborations announced the observation in collisions at the LHC of a new boson with a mass of about 125 GeV.

At both the Tevatron and the LHC, b jets are produced in large amounts, drowning out the signal expected when a Standard Model Higgs boson decays into two b quarks. At the Tevatron, the most successful way to search for a Higgs boson in this final state is to look for Higgs bosons produced in association with a W or Z boson. The small signal and large background require that the analysis include every event that is a candidate for a Higgs produced with a W or Z boson. Furthermore, the analysis must separate the events that are most signal-like from the rest.

In the past two years, the CDF and DØ Higgs-search analysis teams improved the expected Higgs sensitivity of these experiments by almost a factor of two by separating the analysis into multiple search channels, increasing the acceptance for the final decay products and developing innovative ways to improve particle identification. Combined with the full Tevatron data set of 10 fb⁻¹, these efforts led to the extraction of about 20 Higgs-like events that are not compatible with background-only predictions. These events are consistent with the production and decay of Higgs bosons in Tevatron collisions. The signal has a statistical significance of 3.1 σ.
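
As an illustration of what a 3.1 σ excess means in a counting experiment, here is a minimal sketch using the standard profile-likelihood approximation for discovery significance. The event counts below are hypothetical and chosen purely for illustration; the actual CDF and DØ result comes from a full multi-channel likelihood fit.

```python
import math

def discovery_significance(s, b):
    """Approximate significance of s signal events on top of an expected
    background b, using Z = sqrt(2 * ((s + b) * ln(1 + s/b) - s))."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical counts chosen only to show how an excess of a few tens of
# events over a well-understood background can reach the ~3 sigma level.
print(discovery_significance(s=20, b=35))   # ~3.1
```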

A day to remember

On 4 July, particle physicists around the world eagerly joined many who had congregated early at CERN to hear the latest news on the search for the Higgs boson at the LHC (4 July: a day to remember). It was a day that many will remember for years to come. The ATLAS and CMS collaborations announced that they had observed clear signs of a new boson consistent with being the Higgs boson, with a mass of around 126 GeV, at a significance of 5 σ. In this issue of CERN Courier the two collaborations present their evidence (Discovery of a new boson – the ATLAS perspective and Inside story: the search in CMS for the Higgs boson) and CERN’s director-general reflects on the broader implications (Viewpoint: an important day for science). There was further good news from Fermilab, with new results on the search for the Higgs at the Tevatron, described above.

Proton run for 2012 extended by seven weeks

An important piece of news that was almost lost in the excitement of the Higgs update seminar on 4 July is that the LHC proton run for 2012 is to be extended. On 3 July, a meeting between CERN management and representatives of the LHC and the experiments discussed the merits of increasing the data target for this year in the light of the announcement to be made the following day (4 July: a day to remember). The conclusion was that an additional seven weeks of running would allow the luminosity goal for the year to be raised from 15 fb⁻¹ to 20 fb⁻¹. This should give the experiments a good supply of data to work on during the LHC’s first long shutdown, as well as allowing them to make progress in determining the properties of the new particle.

The original schedule foresaw proton running ending on 16 October, with a proton–ion run planned for November. In the preliminary new schedule, proton running is planned to continue until 16 December, with the proton–ion run starting after the Christmas stop on 18 January 2013 and continuing until 10 February.

Auger determines pp inelastic cross-section at √s = 57 TeV

Ultra-high-energy cosmic-ray particles constantly bombard the atmosphere at energies far beyond the reach of the LHC. The Pierre Auger Observatory was constructed with the aim of understanding the nature and characteristics of these particles using precise measurements of cosmic-ray-induced extensive air showers up to the highest energies. These studies allow Auger to measure basic particle interactions, recently in an energy range equivalent to a centre-of-mass energy of √s = 57 TeV.

The structure of an air shower is complex and depends in a critical way on the features of hadronic interactions. Detailed observations of air showers in combination with astrophysical interpretations can provide specific information about particle physics up to √s = 500 TeV. This corresponds to an energy of 10²⁰ eV for a primary proton in the laboratory system.
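
The correspondence between laboratory energy and centre-of-mass energy quoted here follows from simple fixed-target kinematics; a quick sketch, assuming a proton striking a nucleon at rest, reproduces the rough numbers.

```python
import math

M_P = 0.938e9  # proton rest energy in eV

def sqrt_s(e_lab_ev):
    """Centre-of-mass energy (eV) for a proton of lab energy e_lab_ev
    striking a nucleon at rest (valid for e_lab_ev >> M_P)."""
    return math.sqrt(2.0 * e_lab_ev * M_P)

def e_lab(sqrt_s_ev):
    """Inverse relation: lab energy needed to reach a given sqrt(s)."""
    return sqrt_s_ev ** 2 / (2.0 * M_P)

print(sqrt_s(1e20) / 1e12)   # ~433 TeV, the order of the ~500 TeV quoted above
print(e_lab(57e12))          # ~1.7e18 eV, inside the energy range used by Auger
```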

The depth in the atmosphere at which a cosmic-ray air shower reaches its maximum size, Xmax, correlates with the atmospheric depth at which the primary cosmic-ray particle interacted. The distribution of the measured Xmax values for the most deeply penetrating showers exhibits an exponential tail, the slope of which can be directly related to the interaction length of the initiating particle. This, in turn, provides the inelastic proton–air cross-section. The proton–proton (pp) cross-section is then inferred using an extended Glauber calculation with parameters derived from accelerator measurements that have been extrapolated to cosmic-ray energies. This Auger analysis is an extension of a method first used in the Fly’s Eye experiment in Utah (Baltrusaitis et al. 1984).
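
A minimal sketch of the core relation, assuming a purely exponential attenuation with length Λ in the atmosphere: the inelastic proton–air cross-section is roughly the mean target mass divided by Λ. This ignores the model-dependent factor relating the observed Xmax tail slope to the true interaction length, as well as the Glauber step from proton–air to pp, both of which the Auger analysis treats in detail; the numbers below are illustrative only.

```python
# Illustrative conversion only: sigma_p-air = <m_air> / Lambda for an
# exponential attenuation length Lambda in g/cm^2.
N_A = 6.022e23        # Avogadro's number (per mole)
MEAN_A_AIR = 14.45    # approximate mean molar mass of air nuclei (g/mol)

def sigma_p_air_mb(lambda_g_cm2):
    """Inelastic proton-air cross-section in millibarn."""
    sigma_cm2 = MEAN_A_AIR / (N_A * lambda_g_cm2)
    return sigma_cm2 * 1e27   # 1 mb = 1e-27 cm^2

# A purely illustrative attenuation length of 45 g/cm^2 corresponds to
# a cross-section of roughly 530 mb.
print(sigma_p_air_mb(45.0))
```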

The composition of the highest-energy cosmic rays – whether they are protons or heavier nuclei – is not known and the purpose of the Auger analysis is to help in understanding it. The analysis targets the most deeply penetrating particles and so is rather insensitive to the nuclear mix. As long as there are at least some primary protons, it is their cross-section that is measured. Moreover, to minimize systematic uncertainties, the Pierre Auger collaboration has chosen the cosmic-ray energy range of 10¹⁸ eV to 2.5 × 10¹⁸ eV (√s ≈ 57 TeV), in which protons appear to constitute a significant contribution to the overall flux. The largest uncertainty arises from a possible helium contamination, which would tend to yield too large a proton inelastic cross-section.

The figure shows the experimental result, which is to be published in Physical Review Letters (Abreu et al. 2012). It confirms the cross-section extrapolations implemented in interaction models that predict a moderate growth of the cross-section beyond LHC energies and is in agreement with the ln²(s) rise of the cross-section expected from the Froissart bound.

Heavy-ion jets go with the flow

The studies of central heavy-ion collisions at the LHC by the ALICE, ATLAS and CMS experiments show that partons traversing the produced hot and dense medium lose a significant fraction of their energy. At the same time, the structure of the jet from the quenched remnant parton is essentially unmodified. The radiated energy reappears mainly at low and intermediate transverse momentum, pT, and at large angles with respect to the centre of the jet cone. The ALICE collaboration has studied this pT region in PbPb collisions at a centre-of-mass energy √sNN = 2.76 TeV by using two-particle angular correlations, with some interesting results.

In the analysis, the associated particles are counted as a function of their difference in azimuth (Δφ) and pseudorapidity (Δη) with respect to a trigger particle in bins of trigger transverse momentum, pT,trig, and associated transverse momentum, pT,assoc. With the aim of studying potential modifications of the near-side peak, correlations independent of Δη are subtracted by an η-gap method: the correlation found in 1 < |Δη| < 1.6 (as a function of Δφ) is subtracted from the region in |Δη| < 1. Figure 1 shows an example in one pT bin: only the near-side peak remains, while by construction the away-side (not shown) is flat.
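
A toy sketch of the η-gap subtraction makes the procedure concrete: the Δφ shape measured at large |Δη| is used as a baseline and removed from the small-|Δη| region. This is not the ALICE analysis code; the binning and variable names are invented for illustration.

```python
import numpy as np

def eta_gap_subtract(deta, dphi, n_eta=32, n_phi=36):
    """deta, dphi: per-pair differences. Returns the background-subtracted
    2D histogram restricted to |delta_eta| < 1 (invented toy binning)."""
    h, eta_edges, _ = np.histogram2d(
        deta, dphi,
        bins=[n_eta, n_phi],
        range=[[-1.6, 1.6], [-0.5 * np.pi, 1.5 * np.pi]],
    )
    centres = 0.5 * (eta_edges[:-1] + eta_edges[1:])
    gap = (np.abs(centres) > 1.0) & (np.abs(centres) < 1.6)
    near = np.abs(centres) < 1.0
    baseline = h[gap].mean(axis=0)   # delta_phi shape of the long-range part
    return h[near] - baseline        # near-side region with the baseline removed

# Shapes only, with random pairs standing in for real trigger-associated pairs:
rng = np.random.default_rng(0)
sub = eta_gap_subtract(rng.uniform(-1.6, 1.6, 100000),
                       rng.uniform(-0.5 * np.pi, 1.5 * np.pi, 100000))
print(sub.shape)
```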

ALICE studies the shape of the near-side peak by extracting both its rms value (which is a standard deviation, σ, for a distribution centred at zero) in the Δη and Δφ directions and the excess kurtosis (a statistical measure of the “peakedness” of a distribution). The near-side peak shows an interesting evolution towards central collisions: it becomes eccentric.
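
For readers unfamiliar with these observables, the following toy sketch computes the rms width and the excess kurtosis from samples of pair differences. ALICE extracts the corresponding quantities from the subtracted correlation itself, so this illustrates the definitions rather than the actual procedure; the widths used are invented.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
deta = rng.normal(0.0, 0.45, 50000)   # toy near-side peak, broader in delta_eta
dphi = rng.normal(0.0, 0.30, 50000)   # toy near-side peak, narrower in delta_phi

for name, sample in (("delta_eta", deta), ("delta_phi", dphi)):
    rms = np.sqrt(np.mean(sample ** 2))        # equals sigma for a peak centred at zero
    excess_k = kurtosis(sample, fisher=True)   # zero for a Gaussian peak
    print(f"{name}: rms = {rms:.3f}, excess kurtosis = {excess_k:.2f}")
```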

Figure 2 presents the rms as a function of centrality in PbPb collisions as well as the one for pp collisions (shown at a centrality of 100). Towards central collisions the σ in Δη (lines) increases significantly, while the σ in Δφ (data points) remains constant within uncertainties. This is found for all of the pT bins studied, from 1 < pT,assoc < 2 GeV/c, 2 < pT,trig < 3 GeV/c to 2 < pT,assoc < 3 GeV/c, 4 < pT,trig < 8 GeV/c (Grosse-Oetringhaus 2012).

The observed behaviour is qualitatively consistent with a picture where longitudinal flow distorts the jet shape in the η-direction (Armesto et al. 2004). The extracted rms and also the kurtosis (not shown here) are quantitatively consistent (within 20%) with Monte Carlo simulations based on A Multi-Phase Transport (AMPT) model (Lin et al. 2005). This Monte Carlo correctly reproduces collective effects such as “flow” at the LHC, which stem from parton–parton and hadron–hadron rescattering in the model.

This observation suggests an interplay of the jet with the flowing bulk in central heavy-ion collisions at the LHC. The further study of the low and intermediate pT region is a promising field for the understanding of jet quenching at the LHC, which in turn is a valuable probe of the fundamental properties of quark–gluon plasma.

XENON100 sets record limits

The XENON collaboration has announced the results of an analysis of data taken with the XENON100 detector during 13 months of operation at INFN’s Gran Sasso National Laboratory. It provides no evidence for the existence of weakly interacting massive particles (WIMPs), the leading candidates for dark matter. The two events observed are statistically consistent with the one event expected from background radiation. Compared with the collaboration’s previous result from 2011, the sensitivity has again been improved, by a factor of 3.5. This constrains models of new physics with WIMP candidates even further and helps to target future WIMP searches.

XENON100 is an ultrasensitive device. It uses 62 kg of ultrapure liquid xenon as a WIMP target and simultaneously measures the ionization and scintillation signals that are expected from rare collisions between WIMPs and xenon nuclei. The detector is operated deep underground at the Gran Sasso National Laboratory to shield it from cosmic rays. To reject spurious events caused by residual radioactivity in the detector’s surroundings, only data from the inner 34 kg of liquid xenon are used to select candidate events. In addition, the detector is shielded by specially designed layers of copper, polyethylene, lead and water to reduce the background even further.

In 2011, the XENON100 collaboration published results from 100 days of data-taking. The achieved sensitivity already pushed the limits for WIMPs by a factor 5 to 10 compared with results from the earlier XENON10 experiment. During the new run, a total of 225 live days of data were accumulated in 2011 and 2012, with lower background and hence improved sensitivity. Again, no signal was found.

The two events observed are statistically consistent with the expected background of one event. The new data improve the bound on the elastic-scattering cross-section for a WIMP with a mass of 50 GeV to 2.0 × 10⁻⁴⁵ cm². This is a further factor of 3.5 improvement compared with the earlier results and cuts significantly into the expected WIMP parameter region. Measurements are continuing with XENON100, and a still more sensitive, tonne-scale experiment, XENON1T, is currently under construction.
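
To see roughly how a limit emerges from such small numbers, here is a toy counting-experiment calculation: a simple Poisson bound, not the profile-likelihood method that XENON100 actually uses. With two events observed and one expected from background, it gives the 90% confidence-level upper limit on the mean number of signal events; converting that into a cross-section would further require the detector exposure and efficiency, which are not included here.

```python
from scipy.optimize import brentq
from scipy.stats import poisson

n_obs, b = 2, 1.0   # observed events and expected background, as quoted above

def excluded(s):
    # Probability of seeing <= n_obs events if the true mean is s + b;
    # the 90% CL upper limit is where this probability drops to 10%.
    return poisson.cdf(n_obs, s + b) - 0.10

s_up = brentq(excluded, 0.0, 50.0)
print(f"90% CL upper limit: about {s_up:.1f} signal events")   # ~4.3
```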

The XENON collaboration consists of scientists from 15 institutions in China, France, Germany, Israel, Italy, the Netherlands, Portugal, Switzerland and the US.

MoEDAL looks to the discovery horizon

MoEDAL, the “magnificent seventh” LHC experiment, held its first Physics Workshop in CERN’s Globe of Science and Innovation on 20 June. This youngest LHC experiment is designed to search for the appearance of new physics signified by highly ionizing particles such as magnetic monopoles and massive long-lived electrically charged particles from a number of theoretical scenarios.

Philippe Bloch of CERN commenced the meeting, stressing CERN’s support for the MoEDAL programme. He spoke of the key role that smaller, well motivated “high-risk” experiments such as MoEDAL play in expanding the physics reach of the LHC and reminded the audience that “one cannot predict with certainty where the next discovery will be made”.

Nobel laureate Gerard ’t Hooft began the morning’s theory talks with a reprise of his work on the monopole in grand unified theories (GUTs), elegantly showing how the beautiful mathematics of the monopole plays an important role in QCD and other fundamental theories. Arttu Rajantie of Imperial College London deftly recounted the story of “Monopoles from the Cosmos and the LHC”, concentrating on more recent theoretical scenarios, such as that of the electroweak “Cho–Maison” monopole, which are detectable at the LHC because they involve particles much lighter than the GUT monopole, with masses around 1 TeV/c².

John Ellis and Nikolaos Mavromatos of King’s College London then changed the emphasis from magnetic to electric charge. Ellis described supersymmetry (SUSY) scenarios with massive stable particles (MSPs), such as sleptons, stops, gluinos and R-hadrons, which should be observable by MoEDAL. Mavromatos characterized the numerous non-SUSY scenarios that could give rise to MSPs, such as D-particles, Q-balls, quirks, doubly charged Higgs etc., all of which MoEDAL could detect.

In the afternoon, Albert de Roeck of CERN and Philippe Mermod of the University of Geneva laid out the significant progress made by CMS and ATLAS, respectively, in the quest for new physics revealed by highly ionizing particles. James Pinfold, of the University of Alberta and MoEDAL spokesperson, made the physics case for MoEDAL. He pointed out how its often-superior sensitivity to monopoles and massive slowly moving charged particles expanded the physics reach of the LHC in a complementary way. The MoEDAL collaboration, with 18 institutes from 10 countries, is still a “David” compared with the LHC “Goliaths” but its potential physics impact is second to none.

No workshop dealing with magnetic monopoles would be complete without an account of the search for cosmic monopoles. The two main experiments in this arena – MACRO, installed underground at the Gran Sasso National Laboratory in Italy, and the SLIM experiment, at the high-altitude Mount Chacaltaya Laboratory in Bolivia – were presented by Zouleikha Sahnoun of the SLIM collaboration. These two experiments still have the world’s best limits for GUT and intermediate-mass monopoles. Returning to Earth, David Milstead of Stockholm University described a project to search for trapped monopoles at the LHC. Importantly, this initiative is complementary to that of both MoEDAL and the main LHC experiments.

Why has the monopole not been seen in previous searches at accelerators? Vicente Vento of the University of Valencia offered an ingenious explanation: monopoles are hiding in monopolium, a bound state of a monopole and an antimonopole, a suggestion that Paul Dirac made in his 1931 paper. Vento went on to describe a couple of ways in which MoEDAL might detect monopolium.

In the last talk of the workshop, John Swain of Northeastern University presented the remarkable speculation that at the LHC the neutral Higgs boson could predominantly decay into a nucleus–antinucleus pair. He sketched, and nimbly defended, a theoretical justification for this surprising suggestion. Certainly, such a decay mode would be easily detectable by MoEDAL.

The clear message of the workshop is that MoEDAL has a potentially revolutionary physics programme aimed exclusively at the search for new physics, with the minimum of theoretical prejudices and the maximum exploitation of experimental search techniques. After all, in the words of J B S Haldane: “… the universe is not only queerer than we suppose, but queerer than we can suppose.”

Dark-matter filament binds galaxy clusters

Numerical simulations of structure formation in the universe reveal how clusters of galaxies form at the intersection of dark-matter filaments. The presence of such a filament connecting the galaxy clusters Abell 222 and Abell 223 has finally been detected through its weak gravitational lensing effect on background galaxies.

With the advent of supercomputers it became possible to simulate the action of gravity over cosmic time starting from a rather uniform distribution of matter in the early universe (CERN Courier September 2007 p11). Time-lapse films based on these simulations show the evolution of structure formation in a large volume of the universe. While the universe expands globally, gravity tends to collapse small initial regions of over-density. Matter is therefore contracting locally while being stretched on large scales. The opposite effects of gravitational collapse and cosmic expansion result in a sponge-like structure with a web of filaments delimiting big voids. The densest regions are at the intersection of filaments and are the formation sites of clusters of galaxies. As time proceeds, matter flows along the filaments to the nearest cluster, making the filaments thinner and thinner.
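
The following is an extremely simplified sketch of what such simulations do at heart: point masses evolving under their mutual gravity, so that an initially slightly over-dense region contracts. Real cosmological codes use vastly more particles, work in comoving coordinates within an expanding background and employ sophisticated force solvers; everything here, from the particle number to the time step, is a toy choice.

```python
import numpy as np

def leapfrog(pos, vel, masses, dt, n_steps, softening=0.05, G=1.0):
    """Evolve N point masses under Newtonian gravity (arbitrary toy units)."""
    def accel(p):
        d = p[None, :, :] - p[:, None, :]          # pairwise separation vectors
        r2 = (d ** 2).sum(axis=-1) + softening ** 2
        np.fill_diagonal(r2, np.inf)               # remove self-interaction
        return G * (d * (masses[None, :, None] / r2[..., None] ** 1.5)).sum(axis=1)

    a = accel(pos)
    for _ in range(n_steps):
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a
    return pos, vel

# A random blob of 200 equal-mass particles, started at rest, contracts under
# its own gravity, mimicking how gravity amplifies initial over-densities
# (cosmic expansion is deliberately ignored in this toy).
rng = np.random.default_rng(2)
pos = rng.normal(0.0, 1.0, (200, 3))
pos, vel = leapfrog(pos, np.zeros_like(pos), np.ones(200), dt=0.005, n_steps=20)
print(np.linalg.norm(pos, axis=1).mean())   # mean radius, reduced from its initial ~1.6
```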

This sponge-like distribution of matter has been confirmed over several decades by mapping the positions of thousands of galaxies in the nearby universe. According to the simulations, it is primarily cold dark matter that collapses to shape the filamentary skeleton of the universe; normal, baryonic matter follows the same route to form galaxies along these filaments. The detection of warm–hot intergalactic gas along walls of galaxy over-densities was another piece of evidence for the validity of this scenario (CERN Courier July/August 2010 p14). What remained to be detected was the actual presence of dark matter in these filaments. This has now been achieved by Jörg P Dietrich of the University of Michigan and collaborators in Germany, the UK and the US, looking at the supercluster formed by the galaxy clusters Abell 222 and Abell 223.

The technique used to map the distribution of dark matter in clusters of galaxies is always the same. It is called weak gravitational lensing and consists of measuring the small distortion of the shape of background galaxies induced by the presence of the invisible matter (CERN Courier January/February 2007 p11). As mass distorts space–time locally, it changes the path of light from remote galaxies and thus alters their shape as observed from Earth. The problem is that the true shape of the individual galaxies is not known, so it is difficult to know how strong their distortion is. However, by analysing tens of thousands of galaxies, a global trend of distortion can emerge with statistical significance.
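
A toy sketch of this statistical argument, with purely illustrative numbers: the intrinsic ellipticities of individual galaxies act as noise, but averaging over many galaxies recovers a small coherent shear with an uncertainty that shrinks as 1/√N.

```python
import numpy as np

rng = np.random.default_rng(3)
true_shear = 0.01        # small coherent distortion imprinted by the lens (illustrative)
sigma_intrinsic = 0.3    # typical spread of intrinsic galaxy ellipticities (illustrative)
n_galaxies = 40000

observed = true_shear + rng.normal(0.0, sigma_intrinsic, n_galaxies)
estimate = observed.mean()
error = sigma_intrinsic / np.sqrt(n_galaxies)
print(f"recovered shear: {estimate:.4f} +/- {error:.4f}")   # ~0.010 +/- 0.0015
```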

Dietrich and colleagues find a bridge of matter between Abell 222 and Abell 223 at the 96% confidence level. The derived surface density of this structure is unexpectedly high compared with dark-matter filaments in numerical simulations. This suggests that the filament is not seen from the side but almost along its major axis, thus increasing its projected mass. The redshift difference between the two galaxy clusters does, indeed, suggest that they are about 60 million light-years apart. The binding filament contributes as much as a complete galaxy cluster to the total mass of the supercluster. It is the site of an over-density of galaxies and includes hot intergalactic gas detected in X-rays. This gas contributes at most about 9% of the total mass of the filament; the remaining mass would essentially be composed of dark matter. This discovery is new evidence that the basic assumptions of numerical simulations are valid; in particular, that cold dark matter is an essential ingredient governing the formation of large-scale structures in the universe.
