Marcello Giorgi of the University of Pisa and Tatsuya Nakada of the Swiss Federal Institute of Technology in Lausanne (EPFL) have been awarded the Enrico Fermi Prize from the Italian Physical Society for their outstanding contributions to the experimental evidence of CP violation in the heavy-quark sector. Giorgi is cited “for his leading role in experimental high-energy particle physics with particular regard to the BaBar experiment and the discovery of CP symmetry violation in the B meson systems with beauty quarks”, while Nakada is recognised for his conception and crucial leading role in the realisation of the LHCb experiment that led earlier this year to the discovery of CP violation in D mesons with charm quarks. The prize was presented on 23 September during the opening ceremony of the 105th national congress of the Italian Physical Society in L’Aquila, Italy.
Oliver James is chief scientist of the world’s biggest visual effects studio, DNEG, which produced the spectacular visual effects for Interstellar. DNEG’s work, carried out in collaboration with theoretical cosmologist Kip Thorne, led to some of the most physically accurate images of a spinning black hole ever created, earning the firm an Academy Award and a BAFTA. For James, it all began with an undergraduate degree in physics at the University of Oxford in the late 1980s – a period that he describes as one of the most fascinating and intellectually stimulating of his life. “It confronted me with the gap between what you observe and reality. I feel it was the same kind of gap I faced while working on Interstellar. I had to study a lot to understand the physics of black holes and curved space–time.”
A great part of visual effects is understanding how light interacts with surfaces and volumes and eventually enters a camera’s lens. As a student, James was interested in atomic physics, quantum mechanics and modern optics. This, in addition to his two other passions – computing and photography – led him to his first job in a small photographic studio in London, where he became familiar with the technical and operational aspects of the industry. Missing the intellectual challenge offered by physics, in 1995 he secured a role in the R&D team of the Computer Film Company – a niche studio specialising in digital film that was part of the emerging London visual effects industry.
Suddenly these rag-dolls came to life and you’d find yourself wincing in sympathy as they were battered about
Oliver James
A defining moment came in 2001, when an ex-colleague invited him to join Warner Bros’ ESC Entertainment in Alameda, California, to work on The Matrix Reloaded and Revolutions. His main task was to work on rigid-body simulations – far from trivial given the many fight scenes. “There’s a big fight scene, called the Burly Brawl, where hundreds of digital actors get thrown around like skittles,” he says. “We wanted to add realism by simulating the physics of these colliding bodies. The initial tests looked physical, but lifeless, so we enhanced the simulation by introducing torque at every joint, calculated from examples of real locomotion. Suddenly these rag-dolls came to life and you’d find yourself wincing in sympathy as they were battered about.” It took dozens of artists and technicians months of work to create sequences lasting just a few seconds in the finished movie.
Following his work at ESC Entertainment, James moved back to London and, after a short period at the Moving Picture Company, joined Double Negative (renamed DNEG in 2018) in 2004. He had been attracted by Christopher Nolan’s film Batman Begins, for which the firm was creating visual effects, and it was the beginning of a long and creative journey that would culminate in the sci-fi epic Interstellar, which tells the story of an astronaut searching for habitable planets in outer space.
Physics brings the invisible to life
“We had to create new imagery for black holes; a big challenge even for someone with a physics background,” recalls James. Given that he hadn’t studied general relativity as an undergraduate and had only touched upon special relativity, he decided to call Kip Thorne of Caltech for help. “At one point I asked [Kip] a very concrete question: ‘Could you give me an equation that describes the trajectory of light from a distant star, around the black hole and finally into an observer’s eye?’ This must have struck the right note, as the next day I received an email – it was more like a scientific paper – that included the equations answering my questions.” In total, James and Thorne exchanged some 1000 emails, often including detailed mathematical formalism that DNEG could then use in its code. “I often phrased my questions in a rather clumsy way and Kip insisted: ‘What precisely do you mean?’” says James. “This forced me to rethink what was lying at the heart of my questions.”
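Thorne’s actual equations, for the spinning black hole of the film, are set out in the Classical and Quantum Gravity paper cited below. As a rough indication of what such an equation looks like, the textbook case of a non-spinning (Schwarzschild) black hole of mass M reduces the path of a light ray with impact parameter b to a single orbit equation (writing u = 1/r for the inverse radius):

\[
\left(\frac{du}{d\phi}\right)^{2} = \frac{1}{b^{2}} - u^{2}\left(1 - \frac{2GM}{c^{2}}\,u\right).
\]

Integrating this from the distant star to the camera gives the bent trajectory; for a spinning (Kerr) hole like the film’s Gargantua, the corresponding equations are considerably more involved.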
The result for the wormhole was like a crystal ball reflecting each point of the universe
Oliver James
DNEG was soon able to develop new rendering software to visualise black holes and wormholes. The director wanted a wormhole with an adjustable shape and size, so the team designed one with three free parameters – the length and radius of the wormhole’s interior, and a third parameter describing the smoothness of the transition from its interior to its exterior – explains James. “The result for the wormhole was like a crystal ball reflecting each point of the universe; imagine a spherical hole in space–time.” Simulating a black hole represented a bigger challenge as, by definition, it is an object that doesn’t allow light to escape. With his colleagues, James developed a completely new renderer that simulates the path of light through gravitationally warped space–time – including gravitational lensing effects and other physical phenomena that take place around a black hole.
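The geometry behind that “crystal ball” can be written compactly. The general form of a spherically symmetric wormhole metric – of which the film’s three-parameter version, published in the Am. J. Phys. paper cited below, is a particular case – is

\[
ds^{2} = -c^{2}\,dt^{2} + d\ell^{2} + r(\ell)^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right),
\]

where ℓ is the proper distance along the wormhole’s axis and r(ℓ) is the radius of the sphere at position ℓ: roughly constant (the wormhole’s radius) within the interior length, approaching |ℓ| far outside, with the third parameter setting how smoothly the two regimes join.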
Quality standards
On the internet, one can find many images of black holes “eating” other stars, or of stars colliding to form a black hole. But producing an image for a motion picture demands totally different quality standards. The high quality required of an IMAX image meant that the team had to eliminate any artefacts that could show up in the final picture, and consequently rendering times were up to 100 hours, compared with the typical 5–6 hours needed for other films. Whereas most astrophysical visualisations aim primarily for fast throughput, the team’s main goal was to create images that looked like they might really have been filmed. “This goal led us to employ a different set of visualisation techniques from those of the astrophysics community—techniques based on propagation of ray bundles (light beams) instead of discrete light rays, and on carefully designed spatial filtering to smooth the overlaps of neighbouring beams,” says James.
DNEG’s team generated a flat, multicoloured ring representing the accretion disk and positioned it around the spinning black hole. The result was a warped space–time around the black hole, including its accretion disk. Thorne later wrote in his 2014 book The Science of Interstellar: “You cannot imagine how ecstatic I was when Oliver sent me his initial film clips. For the first time ever – and before any other scientist – I saw in ultra-high definition what a fast-spinning black hole looks like. What it does, visually, to its environment.” The following year, James and his DNEG colleagues published two papers with Thorne on the science and visualisation of these objects (Am. J. Phys. 83 486 and Class. Quantum Grav. 32 065001).
Another challenge was to capture the fact that the film camera would be travelling at a substantial fraction of the speed of light. Relativistic aberration, Doppler shifts and gravitational redshifts had to be integrated into the rendering code, influencing how the disk layers would look close to the camera, as well as the colour grading and brightness changes in the final image. Things get even more complicated closer to the black hole, where space–time is more distorted: gravitational lensing gets more extreme and the computation takes more steps. Thorne developed procedures describing how to map a light ray and a ray bundle from the light source to the camera’s local sky, and produced low-quality images in Mathematica to verify his code before passing it to DNEG for fast, high-resolution rendering. This was used to simulate all the images to be lensed: fields of stars, dust clouds and nebulae, and the accretion disk around Gargantua, Interstellar’s gigantic black hole. In total, the movie notched up almost 800 TB of data. To simulate the starry background, DNEG used the European Space Agency’s Tycho-2 star catalogue, containing about 2.5 million stars; more recently the team has adopted the Gaia catalogue, containing 1.7 billion stars.
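The special-relativistic ingredients mentioned here are the standard textbook ones. For a camera moving at speed βc, a star lying at angle θ from the direction of motion (as measured in the rest frame) is seen shifted in frequency and position according to

\[
\nu_{\mathrm{obs}} = \gamma\,(1+\beta\cos\theta)\,\nu_{\mathrm{emit}}, \qquad
\cos\theta_{\mathrm{obs}} = \frac{\cos\theta+\beta}{1+\beta\cos\theta}, \qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}}.
\]

The first expression shifts the colours (the Doppler effect); the second crowds the field of view towards the direction of motion (aberration); the gravitational redshift multiplies in a further factor fixed by the warped space–time geometry.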
Creative industry
With the increased use of visual effects, more and more scientists, including mathematicians and physicists, are working in the field. And visual effects are no longer vital only for sci-fi movies; they are also integrated into drama and historical films. Furthermore, a growing number of companies create tailored simulation packages for specific processes. DNEG alone has grown from 80 people in 2004 to more than 5000 today. At the same time, this increase in numbers means that software needs to be scalable and adaptable enough to serve a wide range of skilled artists, James explains. “Developing specialised simulation software that gets used locally by a small group of skilled artists is one thing, but making it usable by a wide range of artists across the globe calls for a much bigger effort – to make it robust and much more accessible.”
Asked if computational resources are a limiting factor for the future of visual effects, James suggests that any increase in computational power will quickly be swallowed up by artists adding extra detail or creating more complex simulations. The game-changer, he says, will be real-time simulation and rendering. Today, video games are rendered in real time by the computer’s video card, whereas visual effects in movies are almost entirely created as batch processes, with the results cached or pre-rendered so they can be played back in real time. “Moving to real-time rendering means that the workflow will not rely on overnight renders and would allow artists many more iterations during production. We have only scratched the surface and there are plenty of opportunities for scientists.” Machine learning also promises to play a role in the industry, and James is currently involved in R&D to use it to enable more natural body movements and facial expressions. Open data and open access are another growing area in which DNEG is actively involved.
“Visual effects is a fascinating industry where technology and hard-science are used to solve creative problems,” says James. “Occasionally the roles get reversed and our creativity can have a real impact on science.”
Gaurang Yodh, a passionate particle and cosmic-ray physicist and musician, passed away on 3 June at the age of 90. He was born in Ahmedabad, India. After graduating from the University of Bombay in 1948, he was recruited by the University of Chicago to join the group of Enrico Fermi and Herb Anderson. After Fermi’s death in 1954, he finished his PhD with Anderson in 1955, after which he moved to Stanford, where he worked with Wolfgang Panofsky.
He and his wife returned to Bombay (Mumbai) in 1956, where he started accelerator physics programmes at the Tata Institute of Fundamental Research, but he was lured back to the US and took a physics faculty job at the Carnegie Institute of Technology (later Carnegie Mellon University). In 1961 he joined the physics and astronomy department at the University of Maryland and stayed there until 1988, when he moved to the University of California at Irvine, where he finished his career.
Gaurang’s PhD research at Chicago with Anderson and Fermi concerned the interactions of pions with protons and neutrons. With Panofsky he studied electron–nucleon scattering. He continued this work until the late 1960s, when his interests shifted from accelerators to cosmic rays. In 1972, with Yash Pal and James Trefil, he showed that the proton–proton cross section increases with energy – a finding later confirmed at CERN.
Prominent work followed with the development of transition-radiation detectors for particle identification. His 1975 paper “Practical theory of the multilayered transition radiation detector” is still a standard reference in high-energy and cosmic-ray physics. In the 1980s, Gaurang’s interests shifted again, this time to the study of high-energy gamma rays from space. His ideas led to the development of ground-based water-Cherenkov telescopes for the study of gamma rays and searches for sources of cosmic rays. In the 1990s and 2000s, Gaurang and collaborators pursued these detection techniques, and their high-altitude offspring, in two major collaborations – MILAGRO and HAWC – and at UC Irvine Gaurang was a contributor to the IceCube collaboration. He was also a strong advocate for the ARIANNA project, which is developing radio techniques to look for astrophysical neutrinos. Throughout his career, Gaurang mentored many PhD students and postdocs who went on to successful careers.
Gaurang was a renowned sitar player who gave concerts at universities and physics conferences, and in 1956 recorded one of the very first albums of Indian music in the US: Music of India (volumes 1 & 2). He was a gentle and caring man with an infectious optimism and a joy for life. His friends enjoyed his good humour, charm and enthusiasm. He is survived by his three children, eight grandchildren and his sister.
Teeming with radiation and data, the heart of a hadron collider is an inhospitable environment in which to make a tricky decision. Nevertheless, the LHC experiment detectors have only microseconds after each proton–proton collision to make their most critical analysis call: whether to read out the detector or reject the event forever. As a result of limitations in read-out bandwidth, only 0.002% of the terabits per second of data generated by the detectors can be saved for use in physics analyses. Boosts in energy and luminosity – and the accompanying surge in the complexity of the data from the high-luminosity LHC upgrade – mean that the technical challenge is growing rapidly. New techniques are therefore needed to ensure that decisions are made with speed, precision and flexibility so that the subsequent physics measurements are as sharp as possible.
The front-end and read-out systems of most collider detectors include many application-specific integrated circuits (ASICs). These custom-designed chips digitise signals at the interface between the detector and the outside world. The algorithms are baked into silicon at the foundries of some of the biggest companies in the world, with limited prospects for changing their functionality in the light of changing conditions or detector performance. Even minor design changes cost substantial time and money, as the replacement chip must be fabricated from scratch. In the LHC era, the tricky trigger electronics are therefore not implemented with ASICs, as before, but with field-programmable gate arrays (FPGAs). Previously used to prototype the ASICs, FPGAs may be re-programmed “in the field”, without a trip to the foundry. Now also prevalent in high-performance computing, with leading tech companies using them to accelerate critical processing in their data centres, FPGAs offer the benefits of task-specific customisation of the computing architecture without having to set the chip’s functionality in stone – or in this case silicon.
FPGAs can compete with other high-performance computing chips thanks to their massive capacity for parallel processing and relatively low power consumption per operation. The devices contain many millions of programmable logic gates that can be configured and connected together to solve specific problems. Because of the vast numbers of tiny processing units, FPGAs can be programmed to work on many different parts of a task simultaneously, thereby achieving massive throughput and low latency – ideal for increasingly popular machine-learning applications. FPGAs also support high-bandwidth input and output via up to about 100 dedicated high-speed serial links, making them ideal workhorses to process the deluge of data that streams out of particle detectors (see CERN Courier September 2016 p21).
The difficulty is that programming FPGAs has traditionally been the preserve of engineers coding in low-level hardware-description languages such as VHDL and Verilog, where even simple tasks can be tricky. For example, a function to sum two numbers requires several lines of code in VHDL, with the designer even required to define when the operations happen relative to the processor clock (figure 1). Outsourcing the coding is impractical, given the imminent need to implement elaborate algorithms featuring machine learning in the trigger to quickly analyse data from high-granularity detectors in high-luminosity environments. During the past five years, however, tools have matured that allow FPGAs to be programmed in variants of high-level languages such as C++ and Java, bringing FPGA coding within the reach of physicists themselves.
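For contrast with the several-line VHDL adder of figure 1, here is what the same operation might look like in C++ for a high-level synthesis tool (a minimal sketch in the style of the C++-based HLS tools discussed below; the registering of operands relative to the clock is inferred by the compiler rather than written out by hand):

```cpp
#include <cstdint>

// A pipelined 32-bit adder for an HLS compiler. The pragma asks the tool
// to accept a new pair of operands every clock cycle; all clocking and
// register insertion is handled automatically by the compiler.
int32_t add(int32_t a, int32_t b) {
#pragma HLS PIPELINE II=1
    return a + b;
}
```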
But can high-level tools produce FPGA code with low-enough latency for trigger applications? And can their resource usage compete with professionally developed low-level code? During the past couple of years CMS physicists have trialled the use of a Java-based language, MaxJ, and tools from Maxeler Technologies, a leading company in accelerated computing and data-flow engines, which partnered in the studies. More recently the collaboration has also gained experience with the C++-based Vivado high-level synthesis (HLS) tool of the FPGA manufacturer Xilinx. The work has demonstrated the potential for ground-breaking new tools to be used in future triggers without significantly increasing resource usage and latency.
Track and field-programmable
Tasked with finding hadronic jets and calculating missing transverse energy in a few microseconds, the trigger of the CMS calorimeter handles an information throughput of 6.5 terabits per second. Data are read out from the detector into the trigger-system FPGAs in the counting room, in a cavern adjacent to CMS. The official FPGA code was implemented in VHDL over many months of development, debugging and testing. To investigate whether high-level FPGA programming can be practical, the same algorithms were implemented in MaxJ by an inexperienced doctoral student (figure 2), with the low-level clocking and management of high-speed serial links still undertaken by the professionally developed code. The high-level code had comparable latency and resource usage, with one exception: the hand-crafted VHDL was superior when it came to quickly sorting objects by their transverse momentum. With this caveat, the study suggests that high-level development tools can dramatically lower the bar for developing FPGA firmware, to the extent that students and physicists can contribute to large parts of the development of labyrinthine electronics systems.
Kalman filtering is an example of an algorithm that is conventionally used for offline track reconstruction on CPUs, away from the low-latency restrictions of the trigger. The mathematical aspects of the algorithm are difficult to implement in a low-level language, for example requiring trajectory fits to be iteratively optimised using sequential matrix algebra calculations. But the advantages of a high-level language could conceivably make Kalman filtering tractable in the trigger. To test this, the algorithm was implemented for the phase-II upgrade of the CMS tracker in MaxJ. The scheduler of Maxeler’s tool, MaxCompiler, automatically pipelines the operations to achieve the best throughput, keeping the flow of data synchronised. This saves a significant amount of effort in the development of a complicated new algorithm compared to a low-level language, where this must be done by hand. Additionally, MaxCompiler’s support for fixed-point arithmetic allows the developer to make full use of the capability of FPGAs to use custom data types. Tailoring the data representation to the problem at hand results in faster, more lightweight processing, which would be prohibitively labour-intensive in a low-level language. The result of the study was hundreds of simultaneous track fits in a single FPGA in just over a microsecond.
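The study itself was written in MaxJ, but the flavour of fixed-point FPGA arithmetic can be sketched in C++ using the ap_fixed types of Xilinx’s Vivado HLS (the other tool mentioned above). The bit widths, and the reduction of the filter to a single scalar state, are invented here purely for illustration; the real CMS fit updates a multi-parameter track state with matrix algebra:

```cpp
#include "ap_fixed.h"   // Vivado HLS arbitrary-precision fixed-point types

// Hypothetical data type: 18 bits in total, 6 of them integer bits.
// Tailoring the representation to the problem like this is what makes
// FPGA arithmetic cheaper than generic floating point.
typedef ap_fixed<18, 6> fp_t;

struct State { fp_t x; fp_t P; };   // estimate and its variance

// One scalar Kalman update: fold in a measurement z with variance R.
// An HLS scheduler pipelines these dependent operations automatically,
// much as MaxCompiler did for the CMS study.
State kalman_update(State s, fp_t z, fp_t R) {
#pragma HLS PIPELINE
    fp_t K = s.P / (s.P + R);        // Kalman gain
    State out;
    out.x = s.x + K * (z - s.x);     // blend prediction with measurement
    out.P = (fp_t(1) - K) * s.P;     // uncertainty shrinks after the update
    return out;
}
```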
Ghost in the machine
Deep neural networks, which have become increasingly prevalent in offline analysis and event reconstruction thanks to their ability to exploit tangled relationships in data, are another obvious candidate for processing data more efficiently. To find out if such algorithms could be implemented in FPGAs, and executed within the tight latency constraints of the trigger, an example application was developed to identify fake tracks – the inevitable byproducts of overlapping particle trajectories – in the output of the MaxJ Kalman filter described above. Machine learning has the potential to distinguish such bogus tracks better than simple selection cuts, and a boosted decision tree (BDT) proved effective here, with the decision step, which employs many small and independent decision trees, implemented with MaxCompiler. A latency of a few hundredths of a microsecond – much shorter than the iterative Kalman filter, as BDTs are inherently very parallelisable – was achieved using only a small percentage of the silicon area of the FPGA, leaving room for other algorithms. Another tool capable of executing machine-learning models in tens of nanoseconds is the “hls4ml” FPGA inference engine for deep neural networks, built on the Vivado HLS compiler of Xilinx. With such tools, non-FPGA experts can trade off latency against resource usage – two critical metrics of performance that would otherwise require significant extra effort to balance in collaboration with engineers writing low-level code.
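Why BDT inference parallelises so well is easy to see in code: every tree is independent, so in hardware the loops below can be fully unrolled and all trees evaluated in the same few clock cycles. This is a self-contained C++ sketch of scoring a toy ensemble; the two trees, their depth and their weights are invented for illustration and bear no relation to the CMS model:

```cpp
#include <array>
#include <iostream>

// One node of a flattened decision tree (the children of node i sit at
// the indices stored in `left` and `right`; leaves carry a score).
struct Node { int feat; float cut; int left, right; float leaf; };

// A toy forest of two depth-2 trees. All thresholds and leaf weights
// are made up; a real model would be exported from offline training.
const std::array<std::array<Node, 7>, 2> forest = {{
    {{ {0, 0.5f, 1, 2, 0.f},  {1, 0.2f, 3, 4, 0.f},  {1, 0.8f, 5, 6, 0.f},
       {0, 0.f, 0, 0, -1.0f}, {0, 0.f, 0, 0, 0.3f},
       {0, 0.f, 0, 0, -0.2f}, {0, 0.f, 0, 0, 1.0f} }},
    {{ {1, 0.4f, 1, 2, 0.f},  {0, 0.3f, 3, 4, 0.f},  {0, 0.7f, 5, 6, 0.f},
       {0, 0.f, 0, 0, 0.5f},  {0, 0.f, 0, 0, -0.4f},
       {0, 0.f, 0, 0, 0.1f},  {0, 0.f, 0, 0, 0.8f} }},
}};

// Sum the leaf scores of all trees. Each tree descends a fixed number
// of levels, so an FPGA implementation can unroll both loops fully and
// evaluate the whole forest concurrently -- the source of the
// few-tens-of-nanoseconds latencies quoted in the text.
float bdt_score(const std::array<float, 2>& x) {
    float sum = 0.f;
    for (const auto& tree : forest) {
        int i = 0;
        for (int depth = 0; depth < 2; ++depth)   // fixed-depth descent
            i = (x[tree[i].feat] <= tree[i].cut) ? tree[i].left
                                                 : tree[i].right;
        sum += tree[i].leaf;                      // accumulate leaf score
    }
    return sum;
}

int main() {
    std::cout << bdt_score({0.6f, 0.1f}) << "\n";  // prints the toy score
}
```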
Though requiring a little extra learning and some knowledge of the underlying technology, it is now possible for ordinary physicists to program FPGAs in high-level languages such as Maxeler’s MaxJ and Xilinx’s Vivado HLS. Development time can be cut significantly while maintaining latency and resource usage at a similar level to hand-crafted FPGA code, with the fast development of mathematically intricate algorithms an especially promising use case. Opening up FPGA programming to physicists will allow offline approaches such as machine learning to be transferred to real-time detector electronics.
Novel approaches will be critical for all aspects of computing at the high-luminosity LHC. New levels of complexity and throughput will exceed the capability of CPUs alone, and require the extensive use of heterogeneous accelerators such as FPGAs, graphics processing units (GPUs) and perhaps even tensor processing units (TPUs) in offline computing. Recent developments in FPGA interfaces are therefore most welcome, as they will allow particle physicists to execute complex algorithms in the trigger and make the critical initial selection more effective than ever before.
Accelerator experts from around the world met from 30 June to 5 July in Dresden’s historic city centre for six days of intense discussions on superconducting radio-frequency (SRF) science and technology. The Helmholtz-Zentrum Dresden-Rossendorf (HZDR) hosted the 19th conference in the biennial series, which demonstrated that SRF has matured to become the enabling technology for many applications. New SRF-based large-scale facilities throughout the world include the European XFEL in Germany, LCLS-II and FRIB in the US, ESS in Sweden, RAON in Korea, and SHINE in China.
The conference opened on Germany’s hottest day of the year with a “young scientists” session comprising 40 posters. The following week featured a programme packed with 67 invited oral presentations, more than 300 posters and an industrial exhibition. Keynote lecturer Thomas Tschentscher (European XFEL) discussed applications of high-repetition-rate SRF-based X-ray sources, while Andreas Maier (University of Hamburg) reviewed rapidly advancing laser-plasma accelerators, emphasising their complementarity with SRF-based systems.
Much excitement in the community was generated by new, fundamental insights into power dissipation in RF superconductors. A better understanding of the role of magnetic flux trapping and impurities in RF losses has pushed state-of-the-art niobium to near its theoretical limit. Meanwhile, recent advances with Nb3Sn (CERN Courier July/August 2019 p9) have demonstrated performance levels comparable to established niobium systems, but at a much higher operating temperature (≥ 4.2 K rather than ≤ 2 K). Such performance was unthinkable just a few years ago. Coupled with technological developments in tuners, digital control systems and cavity processing, turn-key high-field and continuous-wave operation at 4.2 K and above appears within reach. The potential benefit for both large-scale facilities and compact SRF-based accelerators, in terms of cost and complexity, is enormous.
The SRF conference traditionally plays an important role in attracting new, young researchers and engineers to the field, and provides them with a forum to present their results. In the three days leading up to the conference, HZDR hosted tutorials covering all aspects from superconductivity fundamentals to cryomodule design, which attracted 89 students and young scientists. During the conference, 18 young investigators were invited to give presentations. Bianca Giaccone (Fermilab) and Ryan Porter (Cornell University) received prizes for the best talks, alongside Guilherme Semione (DESY) for best poster.
The SRF conference rotates between Europe, Asia and the Americas. SRF 2021 will be hosted by Michigan State University/FRIB, while SRF 2023 moves on to Japan’s Riken Nishina Center.
Since 1993 the Rencontres du Vietnam have fostered exchanges between scientists in the Asia-Pacific region and colleagues from other parts of the world. The 15th edition, which brought together more than 50 physicists in Quy Nhon, Vietnam, from 4–10 August, celebrated the 30th anniversary of the 1989 start-up of the Large Electron–Positron collider (LEP), which within a mere three weeks of running had established that the number of species of light active neutrinos is three (CERN Courier September/October 2019 p32). This was a great opportunity to emphasise the important role that colliders have played, and will continue to play, in neutrino physics. Before the three-neutrino measurement at LEP, the tau neutrino had been established in the years 1975–1986 by a combination of e⁺e⁻-collider observations of tau decays, pp̄ collisions (the W → τντ decay was observed at CERN in 1985) and neutrino-beam experiments, where it was observed that taus are never produced by electron- or muon-neutrino beams (for example Fermilab’s E531 experiment in 1986).
A neutrino-oscillation industry then sprang into being, following the discovery that neutrinos have mass. An abundance of recent results on oscillation parameters were presented, from accelerator-neutrino beams, nuclear reactors, and atmospheric and astrophysical neutrinos. Interestingly, the data now indicate at > 3σ that neutrinos follow the normal (rather than inverted) mass ordering, in which the most electron-like mass state is lighter than the states dominating the muon and tau flavours. The next 10 years should see this question resolved, as well as a determination of the CP-violating phase of the neutrino-mixing matrix with a precision of 5–10 degrees.
The fact that neutrinos have mass requires an addition to the Standard Model (SM), in which neutrinos are massless by construction. There are several solutions, of which a minimal modification is to introduce right-handed neutrinos in addition to the normal ones, which have left-handed chirality. The properties of these heavy neutral leptons would be very well predicted were it not that their mass can lie anywhere from less than an eV to 10¹⁰ GeV or more. Being sterile, they couple to SM particles only via mixing with the normal neutrinos. Consequently, they should be produced very rarely and have long lifetimes, perhaps allowing a spectacular observation at either fixed-target experiments or high-luminosity colliders at the electroweak scale. One possible low-energy indication could be the existence of neutrinoless double-beta decay. Such decays, currently being searched for directly in dedicated experiments worldwide, violate lepton-number conservation in the case where neutrinos possess a Majorana mass term that transforms neutrinos into antineutrinos.
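In the canonical (type-I seesaw) version of this minimal extension, the scales quoted here follow from two standard relations: a heavy neutrino with Majorana mass M_N and Dirac mass m_D yields a light-neutrino mass and an active–sterile mixing of roughly

\[
m_\nu \simeq \frac{m_D^{2}}{M_N}, \qquad
\theta^{2} \simeq \frac{m_D^{2}}{M_N^{2}} \simeq \frac{m_\nu}{M_N}.
\]

For m_ν ∼ 0.05 eV and M_N at the GeV scale this gives θ² ∼ 10⁻¹⁰, which illustrates why such states interact so feebly and why the collider sensitivities quoted below must reach mixings of order 10⁻¹¹.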
For a fortunate combination of parameters, this could lead to a spectacular signature
The meeting reviewed the status of all aspects of massive neutrinos, from direct mass measurements of the sort successfully executed shortly after the conference by the KATRIN experiment (see KATRIN sets first limit on neutrino mass) to the search for heavy sterile neutrinos in ATLAS and CMS. A new feature of the field is the abundance of experimental projects searching for very weakly – or, to use the newly coined parlance, “feebly” – interacting particles. These range from CERN’s SHiP experiment to future LHC projects such as FASER and Mathusla (for masses from the pion to the B meson); proposed high-luminosity and high-energy colliders such as the Future Circular Collider would extend the search up to the Z mass, for mixings between the heavy and light neutrinos down to 10⁻¹¹. Until recently classified as exotic, these experiments could yield the long-sought explanation for the matter–antimatter asymmetry of the universe, by combining CP violation with an interaction that transforms particles into antiparticles. For a fortunate combination of parameters, this could lead to a spectacular signature: the production of a heavy neutrino in a W decay, tagged by an associated charged lepton, followed by its transformation into its antineutrino, which could then be identified by its decay into a lepton of the same sign as that initially tagged (and possibly of a different flavour).
The meeting thus concluded in the spirit of its opening commemoration: could the physics of neutrinos be one of the highlights of future high-energy colliders?
The 29th International Symposium on Lepton–Photon Interactions at High Energies was held from 5–10 August at the Westin Harbour Castle hotel, right on the Lake Ontario waterfront in downtown Toronto, Canada. Almost 300 delegates were offered a snapshot of the entire field of particle physics and, for the first time in the series, parallel sessions were convened from abstracts submitted by collaborations and individuals.
The symposium opened with a welcome from Chief Laforme of the Mississauga First Nation. It was followed by highlights from the LHC experiments and updates on plans for the CERN accelerator complex, the CEPC project in China and the recently inaugurated Belle II programme in Japan. The Belle II collaboration showed early results from their first 6.5 fb⁻¹ of SuperKEKB data, including measurements of previously studied Standard Model (SM) phenomena and a new limit on dark-photon production near 10 GeV. Further plenary sessions covered dark-matter searches, multi-messenger astronomy, Higgs, electroweak and top-quark physics, heavy-ion physics, QCD, exotic-particle searches, flavour physics and neutrino physics.
Tatsuya Nakada offered his views on flavour factories
The symposium ended with a progress report on the European strategy for particle physics and summaries of advances in particle detection and instrumentation, followed by a presentation on outreach and education initiatives from Kétévi Assamagan (Witwatersrand and BNL), and perspectives on future facilities. In the discussion on future flavour facilities, Tatsuya Nakada (EPFL) offered his views on flavour factories, emphasising their important role in guiding future experiments. He stressed that yesterday’s discoveries (most recently the Higgs boson) become today’s workhorses, providing stringent tests of the SM. In the coming decades we are likely to have W and Higgs factories that will further illuminate the remaining shadows in the SM.
A packed public lecture by 2015 Nobel-Prize winner Art McDonald demonstrated the keen interest of the broader public in the continued developments in particle physics, including those in Canada at the SNOLAB underground laboratory, which now hosts several experiments engaged in neutrino physics and dark-matter searches, following the seminal results from the SNO experiment.
The 16th International Conference on Topics in Astroparticle and Underground Physics (TAUP 2019) was held in Japan from 9–13 September, attracting a record 540 physicists from around 30 countries. The 2019 edition of the series, which covered recent experimental and theoretical developments in astroparticle physics, was hosted by the Institute for Cosmic Ray Research of the University of Tokyo, and held in Toyama – the gateway city to the Kamioka experimental site.
Discussions first focused on gravitational-wave observations. During their first two observing runs, reported Patricia Schmidt of Radboud University, LIGO and Virgo confidently detected gravitational waves from 10 binary black-hole coalescences and one binary neutron-star inspiral – one gravitational-wave event for every 15 days of observation. It was also reported that, during the ongoing third observing run, LIGO and Virgo have already observed 26 candidate events, among them the first signal from a black hole–neutron star merger.
Guido Drexlin revealed the first measurement results on the upper limit of the neutrino mass
The programme continued with presentations from various research fields, a highlight being a report on the first result of the KATRIN experiment (KATRIN sets first limit on neutrino mass). Co-spokesperson Guido Drexlin revealed the first measurement results on the upper limit of the neutrino mass: < 1.1 eV at 90% confidence. This world-leading direct limit – obtained by precisely measuring the kinematics of the electrons emitted in tritium beta decays – was based on only four weeks of data. As the experiment continues, the limit is expected to be reduced further or even – if the neutrino mass is sufficiently large – to give way to a measurement of the actual mass. Thanks to the observation of neutrino oscillations, it has been known since 1998 that neutrinos have tiny but non-zero masses; their absolute values, however, have not yet been measured.
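More precisely, what KATRIN constrains is the effective electron-neutrino mass that governs the shape of the tritium β spectrum near its ~18.6 keV endpoint: the incoherent sum of the neutrino mass states weighted by their electron-flavour content,

\[
m_\beta = \sqrt{\sum_{i=1}^{3} |U_{ei}|^{2}\, m_i^{2}},
\]

where U is the neutrino-mixing matrix. A non-zero m_β rounds off the spectrum just below the endpoint, which is the kinematic signature the experiment measures.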
Diversity is a key feature of the TAUP conference. Topics discussed included cosmology, dark matter, neutrinos, underground laboratories, new technologies, gravitational waves, high-energy astrophysics and cosmic rays. Multi-messenger astronomy – which combines information from gravitational-wave observation, optical astronomy, neutrino detection and other electromagnetic signals – is quickly becoming established and is expected to play an even more important role in the future in gaining a deeper understanding of the universe.
The next TAUP conference will be held in Valencia, Spain, from 30 August to 3 September 2021.
Since joining in 1959, Austria has never stopped contributing to CERN. Austrian groups were associated in bygone days with the UA1 experiment at the SPS, where the W and Z bosons were discovered, and later with LEP’s DELPHI experiment, which helped to put the Standard Model on a solid footing; today hundreds of Austrian scientists contribute to CERN’s experimental programme, with institutes participating in ALICE, ATLAS, CMS and in experiments at the Antiproton Decelerator. Two of the laboratory’s Directors-General, Willibald Jentschke and Victor Frederick Weisskopf, were born in Austria.
To celebrate the 60th anniversary of Austria’s membership, the public were invited to “Meet the Universe” during a series of exhibitions and public events from 5–12 September, organised by the Institute of High Energy Physics (HEPHY) of the Austrian Academy of Sciences. CERN Director-General Fabiola Gianotti opened proceedings by discussing the role of particle colliders as tools for exploration. The following day, 2017 Nobel Prize winner Barry Barish presented his vision for gravitational-wave detectors and the dawn of multi-messenger astronomy. The programme continued with public lectures by Jon Butterworth of University College London, presenting the various experimental paths that could reveal hints for new physics, and Christoph Schwanda of HEPHY discussing the matter–antimatter asymmetry in the universe.
“We’d like to celebrate this important anniversary and continue to contribute to this long-term endeavour together with the other countries that participate in CERN’s research programme,” said Manfred Krammer of HEPHY, head of CERN’s experimental physics department.
The long-standing relationship with CERN has offered broad benefits to the Austrian scientific community, a notable example being the Vienna Conference on Instrumentation. Since 1993 the Austrian doctoral programme, which has now trained more than 200 participants, has been fully integrated with CERN’s PhD programme. Today, Austria’s collaboration with CERN extends far beyond particle physics: business incubation centres were launched in Austria in 2015, and the MedAustron advanced hadron-therapy centre (CERN Courier September/October 2019 p10), developed in collaboration with CERN, is among the world’s leading medical research facilities.
“CERN is the place to push the frontiers, and scientists from Austria will contribute to make the next steps towards the unknown,” said HEPHY director Jochen Schieck.
The 2019 edition of New Trends in High Energy Physics took place in Odessa, Ukraine, from 12 to 18 May, with 84 participants attending from 21 countries. Initiated by the Bogolyubov Institute for Theoretical Physics of the National Academy of Sciences of Ukraine and the Joint Institute for Nuclear Research (JINR) in Dubna, the series focuses on new ideas and hot problems in theory and experiment. The series started in 1992 in Kiev under the name HADRONS, changed its title to “New Trends in High-Energy Physics” at the turn of the millennium, took place for a decade in Crimea, then moved to Natal (Brazil) and Becici (Montenegro), before returning to Ukraine this year.
This year’s conference had an emphasis on heavy-ion physics and strong interactions, with aspects of the QCD phase diagram such as signatures of the transition from quark–gluon plasma to hadrons highlighted in several talks. The interpretation of recent experimental results on collectivity (the bulk motion of nuclear matter at high temperatures) in terms of the formation of a “perfect liquid” was also discussed. Future searches for glueballs and other exotic hadronic states will contribute to an improved understanding of non-perturbative aspects of QCD.
Many problems of low and intermediate energy physics are still unresolved
Parallel to the quest for the highest possible energies, many problems of low- and intermediate-energy physics are still unresolved, such as the critical behaviour of excited baryonic matter, the nature of exotic resonances and puzzles relating to spin. The construction of new facilities will help answer these questions, with high-luminosity collisions of particles ranging from polarised protons to gold ions at JINR–Dubna’s NICA facility, complemented by fixed-target antiproton and ion studies with unprecedented collision rates at FAIR, the new international accelerator complex at GSI Darmstadt.
Talks on general relativity and cosmology, dark matter and black holes explored the many facets of modern astrophysical observations. Future multi-messenger observations, combining the measurements of the electromagnetic radiation spectrum and neutrinos with gravitational wave signals, are expected to contribute significantly to an improved understanding of the dynamics of binary black-hole and neutron-star mergers. Such measurements are of great significance for a variety of open issues, for example, nuclear physics at densities far beyond the regime accessible in laboratory experiments.
The next edition of the conference will be held in Kiev from 27 June to 3 July 2021.