Muon g−2 collaboration prepares for first results

The muon g−2 collaboration

The annual “g-2 physics week”, which took place on Elba Island in Italy from 27 May to 1 June, saw almost 100 physicists discuss the latest progress at the muon g−2 experiment at Fermilab. The muon magnetic anomaly, aμ, is one of the few cases where there is a hint of a discrepancy between a Standard Model (SM) prediction and an experimental measurement. Almost 20 years ago, in a sequence of increasingly precise measurements, the E821 collaboration at Brookhaven National Laboratory (BNL) determined aμ = (g–2)/2 with a relative precision of 0.54 parts per million (ppm), providing a rigorous test of the SM. Impressive as it was, the result was limited by statistical uncertainties.

A new muon g−2 experiment currently taking data at Fermilab, called E989, aims to improve the experimental error on aμ by a factor of four. The collaboration took its first dataset in 2018, integrating 40% more statistics than the BNL experiment, and is now coming to the end of a second run that will yield a combined dataset more than three times larger.

The collaboration has conducted a thorough review of the many analysis efforts from the first data run. The muon magnetic anomaly is determined from the ratio of the muon and proton precession frequencies in the same magnetic field. The ultimate aim of experiment E989 is to measure both of these frequencies with a precision of 0.1 ppm by employing techniques and expertise from particle-physics experimentation (straw tracking detectors and calorimetry), nuclear physics (nuclear magnetic resonance) and accelerator science. These frequencies are independently measured by several analysis groups with different methodologies and different susceptibilities to systematic effects.
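Schematically, and omitting the beam-dynamics and magnetic-field corrections that the experiment applies in practice, the anomaly follows from the ratio of the two measured frequencies combined with constants known precisely from independent experiments (a sketch of the relation, not the collaboration's full expression):

```latex
a_\mu \;=\; \frac{\omega_a}{\omega_p}\,
            \frac{\mu_p}{\mu_e}\,
            \frac{m_\mu}{m_e}\,
            \frac{g_e}{2}
```

Here ωa is the muon anomalous precession frequency and ωp the proton precession frequency in the same magnetic field; the remaining factors are the proton-to-electron magnetic-moment ratio, the muon-to-electron mass ratio and the electron g-factor.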

A recent relative unblinding of a subset of the data with a statistical precision of 1.3 ppm showed excellent agreement across the analyses in both frequencies. The absolute values of the two frequencies are still subject to a ~25 ppm hardware blinding offset, so no physics conclusion can yet be drawn. But the exercise has shown that the collaboration is well on the way to publishing its first result with a precision better than E821 towards the end of the year.

ICTP announces next director

Atish Dabholkar

Atish Dabholkar, a theorist from India, has been appointed the next director of the International Centre for Theoretical Physics (ICTP) in Trieste, Italy. Currently head of ICTP’s high-energy, cosmology and astroparticle physics section, Dabholkar will take up his new position in November. He will succeed Fernando Quevedo, who has led the centre since 2009. Dabholkar’s research has focused on string theory and quantum black holes, and his appointment comes at a time of expansion for ICTP. Over the past 10 years, the centre has hired more researchers and created new research initiatives in quantitative life sciences, high-performance computing, renewable energies and quantum technology. In addition, ICTP has increased its presence with the opening of four partner institutes in Brazil, China, Mexico and Rwanda. “Directing ICTP is a once in a lifetime opportunity due to its unique mission and its big impact in developing countries. I am glad that when I leave in November the institute will be in very good hands,” says Quevedo.

Guido Altarelli Award 2019

Jonathan Gaunt
Josh Bendavid

The fourth edition of the Guido Altarelli Award, which recognises exceptional achievements from young scientists in the field of deep inelastic scattering and related subjects, was awarded during the DIS2019 workshop in Torino, Italy, on 8 April. Jonathan Gaunt of CERN was recognised for his pioneering contributions to the theory and phenomenology of double and multiple parton scattering. Josh Bendavid, also of CERN and a member of the CMS collaboration, received the award for his innovative contributions and original tools applied to Higgs physics and proton parton density functions at the LHC. The brother of the late Guido Altarelli, Massimo Altarelli, was present at the ceremony and handed the certificates to the two winners.

2019 Dirac Medal and Prize

Viatcheslav Mukhanov, Alexei Starobinsky and Rashid Sunyaev

The International Centre for Theoretical Physics (ICTP) in Italy has awarded its 2019 Dirac Medal and Prize to three physicists whose research has made a profound impact on modern cosmology. Viatcheslav Mukhanov (Ludwig Maximilian University of Munich), Alexei Starobinsky (Landau Institute for Theoretical Physics) and Rashid Sunyaev (Max Planck Institute for Astrophysics) share the prize for “their outstanding contributions to the physics of the cosmic microwave background with experimentally tested implications that have helped to transform cosmology into a precision scientific discipline by combining microscopic physics with the large-scale structure of the universe”.

Julius Wess Award 2018

Sally Dawson

Sally Dawson of Brookhaven National Laboratory has been granted the 2018 Julius Wess Award by the KIT Elementary Particle and Astroparticle Physics Center of Karlsruhe Institute of Technology. She is recognised for her outstanding scientific contributions to the theoretical description and in-depth understanding of processes in hadron colliders, in particular her work relating to the physics of the Higgs boson and top quark. The Julius Wess Award is endowed with €10,000 and is granted annually to elementary particle and astroparticle physicists for exceptional experimental or theoretical scientific achievements.

Winners of 2019 Beamline for Schools competition

Students from the Praedinius Gymnasium in Groningen

Two teams of high-school students, one from the Praedinius Gymnasium in Groningen, Netherlands (pictured), and one from the West High School in Salt Lake City, US, have won CERN’s 2019 Beamline for Schools competition. In October, the teams will travel to DESY in Germany to carry out their proposed experiments together with scientists from CERN and DESY. The Netherlands team “Particle Peers” will compare the properties of the particle showers originating from electrons with those created from positrons, while the “DESY Chain” team from the US will focus on the properties of scintillators for more efficient particle detectors. Since Beamline for Schools was launched in 2014, almost 10,000 students from 84 countries have participated. This year, 178 teams from 49 countries worldwide submitted a proposal for the sixth edition of the competition. Because of the ongoing long shutdown of CERN’s accelerators for maintenance and upgrades, there is currently no beam at CERN, which has opened up opportunities to explore partnerships with DESY and other laboratories.

Bottomonium elliptic-flow no-show

Diagram of elliptic flow

High-energy heavy-ion collisions at the LHC give rise to a deconfined system of quarks and gluons called the quark–gluon plasma (QGP). One of its most striking features is the emergence of collective motion due to pressure gradients that develop at the centre. Direct experimental evidence for this collective motion is the observation of anisotropic flow, which translates the asymmetry of the initial geometry into a final-state momentum anisotropy. Its magnitude is quantified by harmonic coefficients vn in a Fourier decomposition of the azimuthal distribution of particles. As a result of the almond-shaped geometry of the interaction volume, the largest contribution to the asymmetry is the second coefficient, or “elliptic flow”, v2.
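In the standard notation of the heavy-ion literature, the azimuthal distribution of emitted particles is expanded as:

```latex
\frac{dN}{d\varphi} \;\propto\; 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\big[n(\varphi - \Psi_n)\big]
```

where φ is the particle's azimuthal angle and Ψn the nth-harmonic symmetry plane of the collision; v2 is the elliptic-flow coefficient discussed here.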

A positive v2 has been measured for a large variety of particles, from pions, protons and strange hadrons up to the heavier J/ψ meson. The latter is a curious case as quarkonia such as J/ψ are bound states of a heavy quark (charm or bottom) and its antiquark (CERN Courier December 2017 p11). Quarkonia constitute interesting probes of the QGP because heavy-quark pairs are produced early and experience the full evolution of the collision. In heavy-ion collisions at the LHC, charmonia, such as the J/ψ, dissociate due to screening from free colour charges in the QGP, and regenerate by the recombination of thermalised charm quarks. The bottomonium state ϒ(1S) is more massive still and more tightly bound than charmonium, so its dissociation is expected to be limited to the early stage of the collision, when the temperature of the surrounding QGP medium is highest. Its regeneration is not expected to be significant because of the small number of available bottom quarks.

The ALICE collaboration recently reported the first measurement of the elliptic flow of the ϒ(1S) meson in lead–lead (Pb–Pb) collisions using the full Pb–Pb data set of LHC Run 2 (figure 1). The measured values of the ϒ(1S) v2 are small and consistent with zero, making bottomonia the first hadrons that do not seem to flow in heavy-ion collisions at the LHC. Compared to the measured v2 of inclusive J/ψ in the same centrality and pT intervals, the v2 of ϒ(1S) is lower by 2.6 standard deviations. The results are also consistent with the small, positive values predicted by models that include no or small regeneration of bottomonia by the recombination of bottom quarks interacting in the QGP.

These observations, in combination with earlier measurements of the suppression of ϒ(1S) and J/ψ, support the scenario in which charmonia dissociate and reform in the QGP, while bottomonia are dominantly dissociated at early stages of the collisions. Future datasets, to be collected during LHC runs 3 and 4 after a major upgrade of the ALICE detector, will significantly improve the quality of the present measurements.

Building Balkan bridges in theory

SEENET-MTP logo

Twenty years ago, distinguished Austrian theorist and co-inventor of supersymmetric quantum field theory, Julius Wess, concluded that something must be done to revitalise science in former Yugoslavia. One of the 12 founding members of CERN, Yugoslavia was a middle-sized European country with corresponding moderate activities in high-energy physics. Its breakup resulted in a dramatic deterioration of conditions for science, the loss of connections and an overwhelming sense of isolation inside the region.

Wess strongly believed that science is a powerful means to influence the development of society. From 1999 to 2003, his initiative “Wissenschaftler in Global Verantwortung” (WIGV), which translates to “Scientists in Global Responsibility”, provided a platform to connect and support individual researchers, groups and institutions with a focus on former Yugoslavia. Much was achieved during this short time, such as the granting of scholarships in mathematics and theoretical physics, a revival of interrupted schools and conferences, and the modernisation of intranets at several Serbian institutions. Funding, initially from Germany, provided an opportunity for researchers from former Yugoslavia to establish contacts and cooperation with many excellent researchers from all around the world.

Goran Djordjevic

It was natural to expand the WIGV initiative to bridge the gap between southeastern Europe and the rest of the continent. Countries to the east and south of Yugoslavia – such as Bulgaria, Greece, Romania and Turkey – have a reasonably strong presence in high-energy physics. On the other hand, they share some similar economic and scientific problems, with many research groups facing insufficient financing, isolation and a lack of critical mass.

Therefore, the participants of the UNESCO-sponsored Balkan Workshop 2003, held in Serbia, created the Southeastern European Network in Mathematical and Theoretical Physics (SEENET-MTP). The network has since grown to include 16 full and seven associated member institutions from 11 countries, and more than 450 individual members. There are also 13 partner institutions worldwide. During its 15 years, SEENET-MTP has undertaken more than 18 projects, mostly concerning mobility and training; 30 conferences, workshops and schools; more than 300 researcher and student exchanges and fellowships; and more than 250 joint papers. Following initial support from CERN’s theory department, a formal collaboration agreement resulted in the joint CERN–SEENET-MTP PhD Training Program with at least 80 students taking part in the first cycle from 2015 to 2018. Vital support also came from the European Physical Society and ICTP Trieste.

In total, the investment provided for SEENET-MTP from international funds, its members, national funds and in-kind support amounts to almost €1 million. It is quite an achievement – if we consider that the results rely mostly on the efforts and good will of many individuals – but it is still much less than an average “EU project”. This raises important questions about maintaining SEENET-MTP’s efforts. 

SEENET-MTP has “thermalised” the system – the network has made people in the region interact. Yet today we find ourselves asking the same questions that its founders asked themselves 15 years ago. Is there something specific to southeast Europe that deserves special treatment? Is there something specific in high-energy theoretical physics that merits specific funding? Is the financing of high-energy physics primarily a responsibility of governments? And, if so, can Balkan countries do it properly?

Is there something specific to southeast Europe that deserves special treatment?

If the answers to the first three questions are “yes”, and to the last one “no”, a pressing issue concerns extra funding and the role of the European Union (EU). In the six or seven countries in the region that are not yet members of the EU (and whose prospects of joining remain unclear), we need to work out how to fund fundamental sciences in a similar way that Poland, the Czech Republic, or “older” EU countries do. At the same time, it is important to consider the future roles of non-EU institutions such as CERN and the ICTP. The recent accession of Serbia to CERN as a full member state, and the ongoing accession processes of Croatia and Slovenia, are promising signs of closer European integration.

Networking is the most natural and promising auxiliary mechanism to preserve and build local capacity in fundamental physics in the region. The next SEENET Scientific-Advisory Committee and its Council meeting will take place at ICTP Trieste from 20 to 23 October. It will be the right place, if not the last possibility, to transfer the initial ideas and achieved results to an EU-supported project to bolster best practice in the Balkans.

www.seenet-mtp.info/bridges

Grappling with dark energy

Adam Riess of Johns Hopkins University

Could you tell us a few words about the discovery that won you a share of the 2011 Nobel Prize in Physics?

Back in the 1990s, the assumption was that we live in a dense universe governed by baryonic and dark matter, but astronomers could only account for about 30% of that matter. We wanted to measure the expected deceleration of the universe at larger scales, in the hope that we would find evidence for some kind of extra matter that theorists predicted could be out there. So, from 1994 we started a campaign to measure the distances and redshifts of type-Ia supernova explosions. The shift in a supernova’s spectrum due to the expansion of space gives its redshift, and the relation between redshift and distance is used to determine the expansion rate of the universe. By comparing the expansion rates at two different epochs of the universe, we can determine how the expansion rate changes over time. We made this comparison in 1998 and, to our surprise, we found that instead of decreasing, the expansion rate was speeding up. A stronger confirmation came after combining our measurements with those of the High-z Supernova Search Team. The only consistent interpretation of the result was that the universe, instead of decelerating, is speeding up its expansion.
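For reference, the redshift z described here is defined by the stretching of the observed wavelength, which directly encodes the growth of the cosmic scale factor a(t) between emission and observation:

```latex
1 + z \;=\; \frac{\lambda_{\rm obs}}{\lambda_{\rm emit}} \;=\; \frac{a(t_{\rm obs})}{a(t_{\rm emit})}
```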

What was the reaction from your colleagues when you announced your findings?

That our result was wrong! There were understandably different reactions but the fact that two independent teams were measuring an accelerating expansion rate, plus the independent confirmation from measurements of the Cosmic Microwave Background (CMB), made it clear that the universe is accelerating. We reviewed all possible sources of errors including the presence of some yet unknown astronomical process, but nothing came out. Barring a series of unrelated mistakes, we were looking at a new feature of the universe.

There were other puzzles at that time in cosmology that the idea of an accelerating universe could also solve. The so-called “age crisis” (many stars looked older than the age of the universe) was one of them. This meant that either the stellar ages were overestimated or something was wrong with the age of the universe and its expansion. This discrepancy could be resolved by accounting for an accelerated expansion.

What is driving the accelerated expansion?

One idea is that the cosmological constant, initially introduced by Einstein so that general relativity could accommodate a static universe, is linked to the vacuum energy. Today we know that the vacuum energy can’t be the final answer because summing the contributions from the presumed quantum states in the universe produces a vacuum energy density about 120 orders of magnitude higher than observed. Such a value is so high that it would have ripped apart galaxies, stars and planets before any structure was formed.

The accelerating expansion can be attributed to what we broadly refer to as dark energy, but its source and its physics remain unknown. It is an ongoing area of research. Today we are making further supernova observations to measure the expansion rate even more precisely, which will help us to understand the physics behind it.

By which other methods can we determine the source of the acceleration?

Today there is a vast range of approaches, using both space- and ground-based experiments. A lot of work is ongoing on identifying more supernovae and measuring their distances and redshifts with higher precision. Other experiments are looking at baryonic acoustic oscillations, which would provide a standard ruler for measuring cosmological distances in the universe. There are proposals to use weak gravitational lensing, which is extremely sensitive to the parameters describing dark energy as well as the shape and history of the universe. Redshift-space distortions due to the peculiar velocities of galaxies can also tell us something. We may be able to learn something from these different types of observations in a few years. The hope is to be able to measure the equation of state of dark energy with 1% precision, and its variation over time with about 10% precision. This will offer a better understanding of whether dark energy is the cosmological constant or perhaps some form of energy temporarily stored in a scalar field that could change over time.
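The equation-of-state measurements described here are usually phrased in terms of the ratio of dark-energy pressure to energy density, often with a simple two-parameter time dependence (a common parametrisation in the literature, not one named in the interview):

```latex
w = \frac{p}{\rho c^2}, \qquad w(a) \;=\; w_0 + w_a\,(1 - a)
```

A cosmological constant corresponds to w0 = −1 and wa = 0; the precision goals quoted above translate, roughly, into measuring w0 at the percent level and its time variation wa at the ~10% level.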

Is this one of the topics that you are currently involved with?

Yes, among other things. I am also working on improving the precision of the measurements of the Hubble constant, H0, which characterises the present state and expansion rate of our universe. Refined measurements of H0 could point to potential discrepancies in the cosmological model.

What’s wrong with our current determination of the Hubble constant?

The problem is that even when we account for dark energy (factoring in any uncertainties we are aware of), we get a discrepancy of about 9% between the expansion rate predicted from CMB data using the standard “ΛCDM” cosmological model and the expansion rate measured today. The uncertainty in this measurement has now gone below 2%, leading to a significance of more than 5σ, and future observations from the SH0ES programme should reduce it to 1.5%.
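As a rough illustration of how such a tension is quantified, one can compare two independent H0 determinations by adding their uncertainties in quadrature. The numbers below are illustrative 2019-era values (local distance ladder versus the ΛCDM prediction from CMB data), assumed for this sketch rather than the exact inputs behind the figures quoted above; the significance obtained depends on which measurements are combined.

```python
import math

def tension_sigma(h0_a: float, err_a: float, h0_b: float, err_b: float) -> float:
    """Significance (in standard deviations) of the difference between two
    independent measurements, with uncertainties added in quadrature."""
    return abs(h0_a - h0_b) / math.hypot(err_a, err_b)

# Illustrative values in km/s/Mpc (assumed for this sketch):
local = (74.0, 1.4)   # local distance-ladder measurement
cmb = (67.4, 0.5)     # LambdaCDM prediction from CMB data

print(f"H0 tension: {tension_sigma(*local, *cmb):.1f} sigma")
```

With these inputs the script prints "H0 tension: 4.4 sigma"; combining additional independent local measurements is what pushes the quoted significance higher.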

A new feature in the dark sector of the universe appears increasingly necessary to explain the present tension

There is something more profound in the disagreement of these two measurements. One measures how fast the universe is expanding today, while the other is based on the physics of the early universe – taking into account a specific model – and measuring how fast it should have been expanding. If these values don’t agree, there is a very strong likelihood that we are missing something in our cosmological model that connects the two epochs in the history of our universe. A new feature in the dark sector of the universe appears in my view increasingly necessary to explain the present tension.

When did the seriousness of the H0 discrepancy become clear?

It is hard to pinpoint a date, but it was between the publication of the first results from Planck in 2013, which predicted the value of H0 based on precise CMB measurements, and the publication of our 2016 paper that confirmed the H0 measurement. Since then, the tension has been growing. Various people were convinced along the way as new data came in, while some people are still not convinced. This diversity of opinions is a healthy sign for science: we should take into account alternative viewpoints and continuously reassess the evidence that we have without taking anything for granted.

How can the Hubble discrepancy be interpreted?

The standard cosmological model, which contains just six free parameters, allows us to extrapolate the evolution from the Big Bang to the present cosmos – a period of almost 14 billion years. The model is based on certain assumptions: that space in the early universe was flat; that there are three neutrino species; that dark matter is very nonreactive; that dark energy is similar to the cosmological constant; and that there is no more complex physics. So one of these assumptions, or perhaps a combination of them, could be wrong. Knowing the original content of the universe and the physics, we should be able to calculate how the universe was expanding in the past and what its present expansion rate should be. The fact that there is a discrepancy means that we don’t have the right understanding.

We think that the phenomenon that we call inflation is similar to what we call dark energy, and it is possible that there was another expansion episode in the history of the universe just after the recombination period. Certain theories predict that a form of “early dark energy” became significant around that time, giving a boost to the universe that matches our current observations. Another option is the presence of dark radiation: a term that could account for a new type of neutrino or for another relativistic particle present in the early history of the universe. The presence of dark radiation would change the estimate of the expansion rate before the recombination period and give us a way to address the current Hubble-constant problem. Future measurements could tell us whether other predictions of this theory are correct or not.

Does particle physics have a complementary role to play?

Oh definitely. Both collider and astrophysics experiments could potentially reveal either the properties of dark matter or a new relativistic particle or something new that could change the cosmological calculations. There is an overlap concerning the contributions of these fields in understanding the early universe, a lot of cross-talk and blurring of the lines – and in my view, that’s healthy.

What has it been like to win a Nobel prize at the relatively early age of 42?

It has been a great honour. You can choose whether you want to do science or not, as long as this choice is available. So certainly, the Nobel is not a curse. Our team is continually trying to refine the supernova measurements, and this is a growing community. Hopefully, if you come back in a couple of years, we will have more answers to your questions.

The cutting edge of cancer research

Breast cancer cells

Cancer is a heterogeneous phenomenon that is best viewed as a complex system of cells interacting in a changing micro-environment. Individual experiments may fail to capture this reality, given their spatially and temporally limited scales of observation. In recent years, however, physicists have contributed insights into the interplay of phenomena at different scales: gene regulatory networks and communities of cells or organisms are two examples of systems whose properties emerge from the behaviour of individual components. Unfortunately, such research is usually confined to specialist journals and conferences, hindering the entry of interested physicists into the field. The publication of a new interdisciplinary textbook is therefore most welcome.

La Porta and Zapperi’s The Physics of Cancer, one of the few books devoted to this subject, brings 15 years of exciting and important results in cancer research to a wide audience. The book approaches the subject from the perspective of physics, chemistry, mathematics and computer science. As a result of the vastness of the subject and the brevity of the book, the discussion can occasionally feel superficial, but the main concepts are introduced in a manner accessible to physicists. The authors follow a logical thread within each argument, and furnish the reader with abundant references to the original literature.

The book begins by observing that the “hallmarks” of cancer are not only yet to be understood, but have increased in number. Published at the turn of the millennium, Douglas Hanahan and Robert Weinberg’s seminal paper identified six: sustaining proliferative signalling; evading growth suppressors; enabling replicative immortality; activating invasion and metastasis; inducing angiogenesis; and resisting cell death. Just 11 years later the same authors published an updated review adding four more hallmarks: avoiding immune destruction; promoting inflammation; genome instability and mutation; and deregulating cellular energetics. The amount of research that has been distilled into a handful of concepts is formidable. However, La Porta and Zapperi argue that a more abstract and unifying approach is now needed to gain a deeper understanding. They advocate studying cancer as a complex system with the tools of several disciplines, in particular subfields of physics such as biomechanics, soft-condensed-matter physics and statistical mechanics.

The book is structured in 10 self-contained chapters. The first two present essential notions of cell and cancer biology. The subsequent chapters deal with different features of cancer from an interdisciplinary perspective. A discussion on statistics and computational models of cancer growth is followed by a chapter exploring the generation of vascular networks in its biological, hydrodynamical and statistical aspects. Next comes a mathematical discussion of tumour growth by stem cells – the active and self-differentiating cells thought to drive the growth of cancers. A couple of chapters treat the biomechanics of cancer cells and their migration in the body, before La Porta and Zapperi turn to the dynamics of chromosomes and the origin of the genetic mutations that cause cancer. The final two chapters focus on how to fight tumours, from the perspectives of both the immune system and pharmacological agents.

La Porta and Zapperi’s book isn’t just light reading for curious physicists – it can also serve to guide interested researchers into a rich interdisciplinary area.
