

Winners of 2019 Beamline for Schools competition

Students from the Praedinius Gymnasium in Groningen

Two teams of high-school students, one from the Praedinius Gymnasium in Groningen, Netherlands (pictured), and one from the West High School in Salt Lake City, US, have won CERN’s 2019 Beamline for Schools competition. In October, the teams will travel to DESY in Germany to carry out their proposed experiments together with scientists from CERN and DESY. The Netherlands team “Particle Peers” will compare the properties of the particle showers originating from electrons with those created from positrons, while the “DESY Chain” team from the US will focus on the properties of scintillators for more efficient particle detectors. Since Beamline for Schools was launched in 2014, almost 10,000 students from 84 countries have participated. This year, 178 teams from 49 countries submitted a proposal for the sixth edition of the competition. Because CERN’s accelerators are in a long shutdown for maintenance and upgrades, there is currently no beam at CERN, which has opened up opportunities to explore partnerships with DESY and other laboratories.

Bottomonium elliptic-flow no-show

Diagram of elliptic flow

High-energy heavy-ion collisions at the LHC give rise to a deconfined system of quarks and gluons called the quark–gluon plasma (QGP). One of its most striking features is the emergence of collective motion due to pressure gradients that develop at the centre. Direct experimental evidence for this collective motion is the observation of anisotropic flow, which translates the asymmetry of the initial geometry into a final-state momentum anisotropy. Its magnitude is quantified by harmonic coefficients vn in a Fourier decomposition of the azimuthal distribution of particles. As a result of the almond-shaped geometry of the interaction volume, the largest contribution to the asymmetry is the second coefficient, or “elliptic flow”, v2.
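
For readers who want the notation spelled out, the azimuthal decomposition referred to above is conventionally written as follows (a standard textbook form, not reproduced from the ALICE paper):

% Fourier expansion of the azimuthal particle distribution;
% \Psi_n is the n-th harmonic symmetry-plane angle and v_2 is the elliptic-flow coefficient
\frac{dN}{d\varphi} \propto 1 + 2\sum_{n=1}^{\infty} v_n \cos\!\left[ n\left(\varphi - \Psi_n\right) \right]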

A positive v2 has been measured for a large variety of particles, from pions, protons and strange hadrons up to the heavier J/ψ meson. The latter is a curious case as quarkonia such as J/ψ are bound states of a heavy quark (charm or bottom) and its antiquark (CERN Courier December 2017 p11). Quarkonia constitute interesting probes of the QGP because heavy-quark pairs are produced early and experience the full evolution of the collision. In heavy-ion collisions at the LHC, charmonia, such as the J/ψ, dissociate due to screening from free colour charges in the QGP, and regenerate by the recombination of thermalised charm quarks. The bottomonium state ϒ(1S), being more massive and more tightly bound than charmonium, is expected to dissociate only in the early stage of the collision, when the temperature of the surrounding QGP medium is high. Its regeneration is not expected to be significant because of the small number of available bottom quarks.

The ALICE collaboration recently reported the first measurement of the elliptic flow of the ϒ(1S) meson in lead–lead (Pb–Pb) collisions using the full Pb–Pb data set of LHC Run 2 (figure 1). The measured values of the ϒ(1S) v2 are small and consistent with zero, making bottomonia the first hadrons that do not seem to flow in heavy-ion collisions at the LHC. Compared to the measured v2 of inclusive J/ψ in the same centrality and pT intervals, the v2 of ϒ(1S) is lower by 2.6 standard deviations. The results are also consistent with the small, positive values predicted by models that include no or small regeneration of bottomonia by the recombination of bottom quarks interacting in the QGP.

These observations, in combination with earlier measurements of the suppression of ϒ(1S) and J/ψ, support the scenario in which charmonia dissociate and reform in the QGP, while bottomonia are dominantly dissociated at early stages of the collisions. Future datasets, to be collected during LHC runs 3 and 4 after a major upgrade of the ALICE detector, will significantly improve the quality of the present measurements.

Building Balkan bridges in theory

SEENET-MTP logo

Twenty years ago, the distinguished Austrian theorist Julius Wess, co-inventor of supersymmetric quantum field theory, concluded that something must be done to revitalise science in former Yugoslavia. One of the 12 founding members of CERN, Yugoslavia was a middle-sized European country with correspondingly moderate activities in high-energy physics. Its breakup resulted in a dramatic deterioration of conditions for science, the loss of connections and an overwhelming sense of isolation inside the region.

Wess strongly believed that science is a powerful means to influence the development of society. From 1999 to 2003, his initiative “Wissenschaftler in Globaler Verantwortung” (WIGV), which translates to “Scientists in Global Responsibility”, provided a platform to connect and support individual researchers, groups and institutions with a focus on former Yugoslavia. Much was achieved during this short time, such as the granting of scholarships in mathematics and theoretical physics, a revival of interrupted schools and conferences, and the modernisation of the intranet at several Serbian institutions. Funding, initially from Germany, provided an opportunity for researchers from former Yugoslavia to establish contacts and cooperation with many excellent researchers from all around the world.

Goran Djordjevic

It was natural to expand the WIGV initiative to bridge the gap between southeastern Europe and the rest of the continent. Countries to the east and south of Yugoslavia – such as Bulgaria, Greece, Romania and Turkey – have a reasonably strong presence in high-energy physics. On the other hand, they share similar economic and scientific problems, with many research groups facing insufficient financing, isolation and a lack of critical mass.

Therefore, the participants of the UNESCO-sponsored Balkan Workshop 2003 held in Serbia created the Southeastern European Network in Mathematical and Theoretical Physics (SEENET-MTP). The network has since grown to include 16 full and seven associated member institutions from 11 countries, and more than 450 individual members. There are also 13 partner institutions worldwide. During its 15 years SEENET-MTP has undertaken: more than 18 projects, mostly concerning mobility and training; 30 conferences, workshops and schools; more than 300 researcher and student exchanges and fellowships; and more than 250 joint papers. Following initial support from CERN’s theory department, a formal collaboration agreement resulted in the joint CERN–SEENET-MTP PhD Training Program with at least 80 students taking part in the first cycle from 2015 to 2018. Vital support also came from the European Physical Society and ICTP Trieste.

In total, the investment provided for SEENET-MTP from international funds, its members, national funds and in-kind support amounts to almost €1 million. It is quite an achievement – if we consider that the results rely mostly on the efforts and good will of many individuals – but it is still much less than an average “EU project”. This raises important questions about maintaining SEENET-MTP’s efforts. 

SEENET-MTP has “thermalised” the system – the network has made people in the region interact. Yet today, we find ourselves asking similar questions to those its founders asked themselves 15 years ago. Is there something specific to southeast Europe that deserves special treatment? Is there something specific in high-energy theoretical physics that merits specific funding? Is the financing of high-energy physics primarily a responsibility of governments? And, if so, can Balkan countries do it properly?

Is there something specific to southeast Europe that deserves special treatment?

If the answers to the first three questions are “yes”, and to the last one “no”, a pressing issue concerns extra funding and the role of the European Union (EU). In the six or seven countries in the region that are not yet members of the EU (and whose prospects of joining remain very unclear), we need to work out how to fund fundamental science in a similar way to Poland, the Czech Republic or “older” EU countries. At the same time, it is important to consider the future roles of non-EU institutions such as CERN and the ICTP. The recent accession of Serbia to CERN as a full member state, with Croatia and Slovenia in the process of joining, is a promising sign of closer European integration.

Networking is the most natural and promising auxiliary mechanism to preserve and build local capacity in fundamental physics in the region. The next meeting of the SEENET Scientific Advisory Committee and Council will take place at ICTP Trieste from 20 to 23 October. It will be the right place, if not the last opportunity, to transfer the initial ideas and achieved results into an EU-supported project to bolster best practice in the Balkans.

www.seenet-mtp.info/bridges

Grappling with dark energy

Adam Riess of Johns Hopkins University

Could you tell us a few words about the discovery that won you a share of the 2011 Nobel Prize in Physics?

Back in the 1990s, the assumption was that we live in a dense universe governed by baryonic and dark matter, but astronomers could only account for 30% of matter. We wanted to measure the expected deceleration of the universe at larger scales, in the hope that we would find evidence for some kind of extra matter that theorists predicted could be out there. So, from 1994 we started a campaign to measure the distances and redshifts of type-Ia supernova explosions. The shift in a supernova’s spectrum due to the expansion of space gives its redshift, and the relation between redshift and distance is used to determine the expansion rate of the universe. By comparing the expansion rate at two different epochs of the universe, we can determine how it changes over time. We made this comparison in 1998 and, to our surprise, we found that instead of decreasing, the expansion rate was speeding up. A stronger confirmation came after combining our measurements with those of the High-z Supernova Search Team. The result could only be interpreted if the universe, rather than decelerating, was speeding up its expansion.
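
As a brief aside (standard textbook relations, not taken from the interview itself): the redshift z measures how much the universe has expanded since the light was emitted, and at low redshift the distance–redshift relation is sensitive to the deceleration parameter q_0, whose sign distinguishes decelerating from accelerating expansion:

% Redshift in terms of the scale factor a(t), and the low-z luminosity-distance expansion
1 + z = \frac{a(t_0)}{a(t_e)}, \qquad
d_L(z) \simeq \frac{c}{H_0}\left[ z + \frac{1}{2}\left(1 - q_0\right) z^2 + \dots \right],
\qquad q_0 < 0 \;\Leftrightarrow\; \text{accelerating expansion}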

What was the reaction from your colleagues when you announced your findings?

That our result was wrong! There were understandably different reactions, but the fact that two independent teams were measuring an accelerating expansion rate, plus the independent confirmation from measurements of the Cosmic Microwave Background (CMB), made it clear that the universe is accelerating. We reviewed all possible sources of error, including the presence of some as-yet-unknown astronomical process, but nothing came out. Barring a series of unrelated mistakes, we were looking at a new feature of the universe.

There were other puzzles at that time in cosmology that the idea of an accelerating universe could also solve. The so-called “age crisis” (many stars looked older than the age of the universe) was one of them. This meant that either the stellar ages are too high or that there is something wrong with the age of the universe and its expansion. This discrepancy could be resolved by accounting for an accelerated expansion.

What is driving the accelerated expansion?

One idea is that the cosmological constant, initially introduced by Einstein so that general relativity could accommodate a static universe, is linked to the vacuum energy. Today we know that the vacuum energy can’t be the final answer because summing the contributions from the presumed quantum states in the universe produces an enormous number for the expansion rate that is about 120 orders of magnitude higher than observed. Such a rate is so high that it would have ripped apart galaxies, stars and planets before any structure could form.

The accelerating expansion can be due to what we broadly refer to as dark energy, but its source and its physics remain unknown. It is an ongoing area of research. Today we are making further supernova observations to measure the expansion rate even more precisely, which will help us to understand the physics behind it.

By which other methods can we determine the source of the acceleration?

Today there is a vast range of approaches, using both space and ground experiments. A lot of work is ongoing on identifying more supernovae and measuring their distances and redshifts with higher precision. Other experiments are looking at baryonic acoustic oscillations, which provide a standard ruler for measuring cosmological distances in the universe. There are proposals to use weak gravitational lensing, which is extremely sensitive to the parameters describing dark energy as well as the shape and history of the universe. Redshift-space distortions due to the peculiar velocities of galaxies can also tell us something. We may be able to learn something from these different types of observations in a few years. The hope is to be able to measure the equation of state of dark energy with 1% precision, and its variation over time with about 10% precision. This will offer a better understanding of whether dark energy is the cosmological constant or perhaps some form of energy temporarily stored in a scalar field that could change over time.

Is this one of the topics that you are currently involved with?

Yes, among other things. I am also working on improving the precision of the measurements of the Hubble constant, H0, which characterises the present state and expansion rate of our universe. Refined measurements of H0 could point to potential discrepancies in the cosmological model.

What’s wrong with our current determination of the Hubble constant?

The problem is that even when we account for dark energy (factoring in any uncertainties we are aware of), we get a discrepancy of about 9% when comparing the expansion rate predicted from CMB data using the standard “ΛCDM” cosmological model with the expansion rate measured today. The uncertainty in this measurement has now gone below 2%, leading to a significance of more than 5σ, while future observations from the SH0ES programme should reduce it to 1.5%.
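
One common way to quantify the significance of such a discrepancy, assuming Gaussian and independent uncertainties (a simplification), is the number of standard deviations

% Tension between the locally measured H_0 and the value inferred from the CMB
N_\sigma = \frac{\left| H_0^{\mathrm{local}} - H_0^{\mathrm{CMB}} \right|}
                {\sqrt{\sigma_{\mathrm{local}}^{2} + \sigma_{\mathrm{CMB}}^{2}}}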

A new feature in the dark sector of the universe appears increasingly necessary to explain the present tension

There is something more profound in the disagreement of these two measurements. One measures how fast the universe is expanding today, while the other is based on the physics of the early universe – taking into account a specific model – and measuring how fast it should have been expanding. If these values don’t agree, there is a very strong likelihood that we are missing something in our cosmological model that connects the two epochs in the history of our universe. A new feature in the dark sector of the universe appears in my view increasingly necessary to explain the present tension.

When did the seriousness of the H0 discrepancy become clear?

It is hard to pinpoint a date, but it was between the publication of the first results from Planck in 2013, which predicted the value of H0 based on precise CMB measurements, and the publication of our 2016 paper that confirmed the H0 measurement. Since then, the tension has been growing. Various people became convinced along the way as new data came in, while others are still not convinced. This diversity of opinions is a healthy sign for science: we should take into account alternative viewpoints and continuously reassess the evidence that we have without taking anything for granted.

How can the Hubble discrepancy be interpreted?

The standard cosmological model, which contains just six free parameters, allows us to extrapolate the evolution from the Big Bang to the present cosmos – a period of almost 14 billion years. The model is based on certain assumptions: that space in the early universe was flat; that there are three neutrinos; that dark matter is very nonreactive; that dark energy is similar to the cosmological constant; and that there is no more complex physics. So one, or perhaps a combination, of these can be wrong. Knowing the original content of the universe and the physics, we should be able to work out how the universe was expanding in the past and what its present expansion rate should be. The fact that there is a discrepancy means that we don’t have the right understanding.

We think that the phenomenon that we call inflation is similar to what we call dark energy, and it is possible that there was another expansion episode in the history of the universe just after the recombination period. Certain theories predict that a form of “early dark energy” becomes significant at that time, giving the universe a boost that matches our current observations. Another option is the presence of dark radiation: a term that could account for a new type of neutrino or for another relativistic particle present in the early history of the universe. The presence of dark radiation would change the estimate of the expansion rate before the recombination period and would give us a way to address the current Hubble-constant problem. Future measurements could tell us if other predictions of these theories are correct or not.

Does particle physics have a complementary role to play?

Oh definitely. Both collider and astrophysics experiments could potentially reveal either the properties of dark matter or a new relativistic particle or something new that could change the cosmological calculations. There is an overlap concerning the contributions of these fields in understanding the early universe, a lot of cross-talk and blurring of the lines – and in my view, that’s healthy.

What has it been like to win a Nobel prize at the relatively early age of 42?

It has been a great honour. You can choose whether you want to do science or not, as long as this choice is available. So certainly, the Nobel is not a curse. Our team is continually trying to refine the supernova measurements, and this is a growing community. Hopefully, if you come back in a couple of years, we will have more answers to your questions.

The cutting edge of cancer research

Breast cancer cells

Cancer is a heterogeneous phenomenon that is best viewed as a complex system of cells interacting in a changing micro-environment. Individual experiments may fail to capture this reality, given their spatially and temporally limited scales of observation. In recent years, however, physicists have contributed insights into the interplay of phenomena at different scales: gene regulatory networks and communities of cells or organisms are two examples of systems whose properties emerge from the behaviour of individual components. Unfortunately, such research is usually confined to specialised journals and conferences, hindering the entry of interested physicists into the field. The publication of a new interdisciplinary textbook is therefore most welcome.

La Porta and Zapperi’s The Physics of Cancer, one of the few books devoted to this subject, brings 15 years of exciting and important results in cancer research to a wide audience. The book approaches the subject from the perspective of physics, chemistry, mathematics and computer science. As a result of the vastness of the subject and the brevity of the book, the discussion can occasionally feel superficial, but the main concepts are introduced in a manner accessible to physicists. The authors follow a logical thread within each argument, and furnish the reader with abundant references to the original literature.

The book begins by observing that the “hallmarks” of cancer are not only yet to be understood, but have increased in number. Published at the turn of the millennium, Douglas Hanahan and Robert Weinberg’s seminal paper identified six: sustaining proliferative signalling; evading growth suppressors; enabling replicative immortality; activating invasion and metastasis; inducing angiogenesis; and resisting cell death. Just 11 years later the same authors published an updated review adding four more hallmarks: avoiding immune destruction; promoting inflammation; genome instability and mutation; and deregulating cellular energetics. The amount of research that has been distilled into a handful of concepts is formidable. However, La Porta and Zapperi argue that a more abstract and unifying approach is now needed to gain a deeper understanding. They advocate studying cancer as a complex system with the tools of several disciplines, in particular subfields of physics such as biomechanics, soft-condensed-matter physics and statistical mechanics.

The book is structured in 10 self-contained chapters. The first two present essential notions of cell and cancer biology. The subsequent chapters deal with different features of cancer from an interdisciplinary perspective. A discussion on statistics and computational models of cancer growth is followed by a chapter exploring the generation of vascular networks in its biological, hydrodynamical and statistical aspects. Next comes a mathematical discussion of tumour growth by stem cells – the active and self-differentiating cells thought to drive the growth of cancers. A couple of chapters treat the biomechanics of cancer cells and their migration in the body, before La Porta and Zapperi turn to the dynamics of chromosomes and the origin of the genetic mutations that cause cancer. The final two chapters focus on how to fight tumours, from the perspectives of both the immune system and pharmacological agents.

La Porta and Zapperi’s book isn’t just light reading for curious physicists – it can also serve to guide interested researchers into a rich interdisciplinary area.

Dipole marks path to future collider

Installation of the MDP

Researchers in the US have demonstrated an advanced accelerator dipole magnet with a field of 14.1 T – the highest ever achieved for such a device at an operational temperature of 4.5 K. The milestone is the work of the US Magnet Development Program (MDP), which includes Fermilab, Lawrence Berkeley National Laboratory (LBNL), the National High Magnetic Field Laboratory and Brookhaven National Laboratory. The MDP’s “cos-theta 1” (MDPCT1) dipole, made from Nb3Sn superconductor, beats the 13.8 T at 4.5 K achieved by LBNL magnet “HD2” a decade ago, and follows the 14.6 T at 1.9 K (13.9 T at 4.5 K) reached by “FRESCA 2” at CERN in 2018, which was built as a superconducting-cable test station. Together with other recent advances in accelerator magnets in Europe and elsewhere, the result sends a positive signal for the feasibility of next-generation hadron colliders.

The MDP was established in 2016 by the US Department of Energy to develop magnets that operate as closely as possible to the fundamental limits of superconducting materials while minimising the need for magnet training. The programme aims to integrate domestic accelerator-magnet R&D and position the US in the technology development for future high-energy proton-proton colliders, including a possible 100 km-circumference facility at CERN under study by the Future Circular Collider (FCC) collaboration. In addition to the baseline design of MDPCT1, other design options for such a machine have been studied and will be tested in the coming years.

“The goal for this first magnet test was to limit the coil mechanical pre-load to a safe level, sufficient to produce a 14 T field in the magnet aperture,” explains MDPCT1 project leader Alexander Zlobin of Fermilab. “This goal was achieved after a short magnet training at 1.9 K: in the last quench at 4.5 K the magnet reached 14.1 T. Following this successful test the magnet pre-stress will be increased to reach its design limit of 15 T.”

The result sends a positive signal for the feasibility of next-generation hadron colliders

The development of high-field superconducting accelerator magnets has received a strong boost from high-energy physics in the past decades. The current state of the art is set by the LHC dipole magnets, which operate at 1.9 K to produce a field of around 8 T, enabling proton-proton collisions at an energy of 13 TeV. Exploring higher energies, up to 100 TeV at a possible future circular collider, requires higher magnetic fields to steer the more energetic beams. The goal is to double the field strength compared to the LHC dipole magnets, reaching up to 16 T, which calls for innovative magnet design and a different superconductor compared to the Nb-Ti used in the LHC. Currently, Nb3Sn (niobium tin) is being explored as a viable candidate for reaching this goal. High-temperature superconductors, such as REBCO, MgB2 and iron-based materials, are also being studied.

HL-LHC first

The first accelerator magnets to use Nb3Sn technology are the 11 T dipole magnets and the final-focusing magnets under development for the high-luminosity LHC (HL-LHC), which will be installed around the interaction points. But the FCC would require more than 5000 superconducting dipoles grouped for powering in series and operating continuously over long time periods. A number of critical aspects underlie the design, cost-effective manufacturing and reliable operation of 16 T dipole magnets in future colliders. Among the targets for the Nb3Sn conductor is a critical current density of 1500 A/mm2 at 16 T and 4.2 K – almost a 50% increase compared to the current state of the art. In addition to the conductor, developing an industry-adapted design for 16 T dipoles and other accelerator magnets with higher performance presents a major challenge.

Training quench history for the MDPCT1 demonstrator magnet

The FCC collaboration has launched a rigorous R&D programme towards 16 T magnets. Key components are the global Nb3Sn conductor development programme, featuring a network of academic institutes and industrial partners, and the 16 T magnet-design work package supported by the EU-funded EuroCirCol project. This is now being followed by a 16 T short-model programme aiming at constructing model magnets with several partners worldwide such as the US MDP. Unit lengths of Nb3Sn wires with performance at least comparable to that of the HL-LHC conductor have already been produced by industry and cabled at CERN, while, at Fermilab, multi-filamentary wire produced with an internal oxidation process has already exceeded the critical current density target for the FCC – just two examples of many recent advances in this area. EuroCirCol, which officially wound up this year (see Study comes full EuroCirCol), has also enabled a design and cost model for the magnets of FCC, demonstrating the feasibility of Nb3Sn technology.

“The enthusiasm of the worldwide superconductor community and the achievements are impressive,” says Amalia Ballarino, leader of the conductor activity at CERN. “The FCC conductor development targets are very challenging. The demonstration of a 14 T field in a dipole accelerator magnet and the possibility of reaching the target critical current density in R&D wires are milestones in the history of Nb3Sn conductor and a reassuring achievement for the FCC magnet development programme.”

CERN and the Higgs Boson

CERN and the Higgs Boson, by James Gillies

James Gillies’ slim volume CERN and the Higgs Boson conveys the sheer excitement of the hunt for the eponymous particle. It is a hunt that had its origins at the beginning of the last century, with the discovery of the electron, quantum mechanics and relativity, and which was only completed in the first decades of the next. It is also a hunt throughout which CERN’s science, technology and culture grew in importance. Gillies has produced a lively and enthusiastic text that explores the historical, theoretical, experimental, technical and political aspects of the search for the Higgs boson without going into oppressive scientific detail. It is rare that one comes across a monograph as good as this.

Gillies draws attention to the many interplays and dialectics that led to our present understanding of the Higgs boson. First of all, he brings to light the scientific issues associated with the basic constituents of matter, and the forces and interactions that give rise to the Standard Model. Secondly, he highlights the symbiotic relationship between theoretical and experimental research, each leading the other in turn, and taking the subject forward. Finally, he shows the inter-development of the accelerators, detectors and experimental methods to which massive computing power had eventually to be added. This is all coloured by a liberal sprinkling of anecdotes about the people that made it all possible.

Complementing this is the story of CERN, both as a laboratory and as an institution, traced over the past 60 years or so, through to its current pre-eminent standing. Throughout the book the reader learns just how important the people involved really are to the enterprise: their sheer pleasure, their commitment through the inevitable ups and downs, and their ability to collaborate and compete in the best of ways.

A ripping yarn, then, which it might seem churlish to criticise. But then again, that is the job of a reviewer. There is, perhaps, an excessively glossy presentation of progress, and the exposition continues forward apace without conveying the many downs of cutting-edge research: the technical difficulties and the many immensely hard decisions that have to be made during such enormous endeavours. Doing science is great fun but also very difficult – but then what are challenges for?

There is, perhaps, an excessively glossy presentation of progress

A pertinent example in the Higgs-boson story not emphasised in the book occurred in 2000. The Large Electron–Positron collider (LEP) was due to be closed down to make way for the LHC, but late in the year LEP’s ALEPH detector recorded evidence suggesting that a Higgs boson might have been observed at a mass of 114–115 GeV – a signal that, unfortunately, was not seen by the other experiments (see p32). Exactly this situation had been envisaged when not one but four LEP experiments were approved in the 1980s. After considerable discussion LEP’s closure went ahead, much to the unhappiness and anger of a large group of scientists who believed they were on the verge of a great discovery. This made for a very difficult environment at CERN for a considerable time thereafter. We now know the Higgs was found at the LHC with a mass of 125 GeV, vindicating the original decision of 2000.

A few more pictures might help the text and fix the various contributors in readers’ minds, though clearly the book, part of a series of short volumes by Icon Books called Hot Science, is formatted for brevity. I also found the positioning of the important material on applications such as positron emission tomography and the world wide web to be unfortunate, situated as it is in the final chapter, entitled “What’s the use?” Perhaps instead the book could have ended on a more upbeat note by returning to the excitement of the science and technology, and the enthusiasm of the people who were inspired to make the discovery happen.

CERN and the Higgs Boson is a jolly good read and recommended to everyone. Whilst far from the first book on the Higgs boson, Gillies’ offering distinguishes itself with its concise history and the insider perspective available to him as CERN’s head of communications from 2003 to 2015: the denouement of the hunt for the Higgs.

From SUSY to the boardroom

Former particle physicist Andy Yen has set himself a modest goal: to transform the business model of the internet. In the summer of 2013, following the Snowden security leaks, he and some colleagues at CERN started to become concerned about the lack of data privacy and the growing inability of individuals to control their own data on the internet. It prompted him, at the time a PhD student from Harvard University working on supersymmetry searches in the ATLAS experiment, and two others, to invent “ProtonMail” – an ultra-secure e-mail system based on end-to-end encryption.

The Courier met with Yen and Bart Butler, ProtonMail’s chief technology officer and fellow CERN alumnus, at the company’s Geneva headquarters to find out how a discussion in CERN’s Restaurant 1 was transformed into a company with more than 100 employees serving more than 10 million users.

If you are a Gmail user, then you are not Google’s customer, you are the product that Google sells to its real customer, which is advertisers

“The business model of the internet today really isn’t compatible with privacy,” explains Yen. “It’s all about the relationship between the provider and customer. If you are a Gmail user, then you are not Google’s customer, you are the product that Google sells to its real customer, which is advertisers. With ProtonMail, the people who are paying us are also our users. If we were ever to betray the trust of the user base, which is paying us precisely for reasons of privacy, then the whole business model collapses.”

Anyone can sign up for a ProtonMail account. Doing so generates a pair of public and private keys based on secure RSA-type encryption implementations and open-source cryptographic libraries. User data is encrypted using a key that ProtonMail does not have access to, which means the company cannot decrypt or access a user’s messages (nor offer data recovery if a password is forgotten). The challenge, says Yen, was not so much in developing the underlying algorithms, but in applying this level of security to an e-mail service in a user-friendly way.
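
To illustrate the principle only – this is not ProtonMail’s implementation, which is built on OpenPGP-compatible libraries and hybrid encryption – the following minimal Python sketch using the open-source cryptography package shows how anything encrypted with a recipient’s public key can be read only with the matching private key:

# Minimal public-key encryption sketch (illustrative only, not ProtonMail's code).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The key pair is generated on the recipient's device; only the public key is shared.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

message = b"see you in Restaurant 1"
ciphertext = public_key.encrypt(message, oaep)      # anyone holding the public key can encrypt
recovered = private_key.decrypt(ciphertext, oaep)   # only the private-key holder can decrypt
assert recovered == message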

In 2014 Yen and ProtonMail’s other co-founders, Jason Stockman and Wei Sun, entered a competition at MIT to pitch the idea. They lost, but reasoned that they had already built the thing and got a couple of hundred CERN people using it, so why not open it up to the world and see what happens? Within three days of launching the website 10,000 people had signed up. It was surprising and exciting, says Yen, but also scary. “E-mail has to work. A bank or something might close down their websites for an hour of maintenance once in a while, but you can’t do that with e-mail,” he says.

ProtonMail’s CERN origins (the name came from the fact that its founders were working on the Large Hadron Collider) meant that the technology could first come under the scrutiny of technically minded people – “early adopters”, who play a vital role in the life cycle of new products. But what might be acceptable to tech-minded people is not necessarily what broader users want, says Yen. He quickly realised that the company had to grow, and that he had been forced into a “tough and high-risk” decision between ProtonMail and his academic career. He eventually decided to take the leap: Harvard granted him a period of absence, and Yen set about dealing with the tens of thousands of users who were waiting to get onto the service.

In need of cash, the fledgling software outfit decided to try something unusual: crowdfunding. This approach broke new ground in Switzerland, and ProtonMail soon became a test case in tax law as to whether such payments should be considered revenue or donations (the authorities eventually ruled that they were the former). But the effort was a huge success, raising 0.5 million Swiss Francs in a little over two months. “Venture capital (VC) was a mystery to us,” says Yen. “We didn’t know anybody, we didn’t have a business plan, we were just a few people writing code. But, funnily enough, the crowdfunding, in addition to the money itself, got a lot of attention and this attracted interest from VCs.” A few months later, ProtonMail had received 2 million Swiss Francs in seed funding.

“It is one thing to have an idea – then we had to actually do what we’d promised: build a team, hire people, scale up the product and have some sort of company to run things, with corporate identity, accounting, tax compliance, etc. There wasn’t really a marketing plan… it was more of a technical challenge to build the service,” says Yen. “If I was to give advice to someone in my position five years ago, then there isn’t a lot I could say. Starting a company is something new for almost everybody who does it, and I don’t think physicists are at a disadvantage compared to someone who went to business school. All you have to do is work hard, keep learning and you have to have the right people around you.”

It’s not a traditional company – 10–15% of the staff today is CERN scientists

It was around that time, in 2015, when Butler, also a former ATLAS experimentalist working on supersymmetry and one-time supervisor of Yen, joined ProtonMail. “A lot of that year was based around evolving the product,” he says. “There was a big difference between what the product originally was versus what it needed to be to scale up. It’s not a traditional company – 10–15% of the staff today is CERN scientists. A lot of former physicists have developed into really good software engineers, but we’ve had to bring in properly trained software engineers to add the rigour that we need. At the end of the day, it’s easier to teach a string theorist how to code than it is to teach advanced mathematics and complex cryptographic concepts to someone who codes.”

With the company, Proton Technologies, by then well established – and Yen having found time to hotfoot it back to Harvard for one “very painful and ridiculous” month to write up his PhD thesis – the next milestone came in 2016 when ProtonMail was actually launched. It was time to begin charging for accounts, and to provide those who already had signed up with premium paid-for services. It was the ultimate test of the business model: would enough people be prepared to pay for secure e-mail to make ProtonMail a viable and even profitable business? The answer turned out to be “yes”, says Yen. “2016 was make or break because eventually the funding was going to run out. We discussed whether we should raise money to buy us more time. But we decided just to work our asses off instead. We came very close but we started generating revenue just as the VC cash ran out.”

Since then, ProtonMail has continued to scale up its services, for instance introducing mobile apps, and its user base has grown to more than 10 million. “Our main competitors are the big players, Google and Microsoft,” says Yen. “If you look at what Google offers today, it’s actually really nice to use. So the longer vision is: can we offer what Google provides — services that are secure, private and beneficial to society? There is a lot to build there, ProtonDrive, ProtonCalendar, for example, and we are working to put together that whole ecosystem.”

A big part of the battle ahead is getting people to understand what is happening with the internet and their data, says Butler. “Nobody is saying that when Google or Facebook began they went out to grab people’s data. It’s just the way the internet evolved: people like free things. But the pitfalls of this model are becoming more and more apparent. If you talk to consumers, there is no choice in the market. It was just e-mail that sold your data. So we want to provide that private option online. I think this choice is really important for the world and it’s why we do what we do.”


Cloud services take off in the US and Europe

Fermilab has announced the launch of HEPCloud, a step towards a new computing paradigm in particle physics to deal with the vast quantities of data pouring in from existing and future facilities. The aim is to allow researchers to “rent” high-performance computing centres and commercial clouds at times of peak demand, thus reducing the costs of providing computing capacity. Similar projects are also gaining pace in Europe.

“Traditionally, we would buy enough computers for peak capacity and put them in our local data centre to cover our needs,” says Fermilab’s Panagiotis Spentzouris, one of HEPCloud’s drivers. “However, the needs of experiments are not steady. They have peaks and valleys, so you want an elastic facility.” All Fermilab experiments will soon submit jobs to HEPCloud, which provides a uniform interface so that researchers don’t need expert knowledge about where and how best to run their jobs.
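
The “elastic facility” idea can be sketched in a few lines of hypothetical Python; the function, thresholds and backend names below are purely illustrative and are not part of HEPCloud’s actual interface:

# Hypothetical sketch of elastic job routing (not HEPCloud's real API).
LOCAL_CAPACITY = 10_000  # cores in the local data centre (illustrative number)

def choose_backend(pending_cores, busy_local_cores, cloud_price_per_core_hour,
                   max_price=0.05):
    """Run locally when capacity allows; burst to rented resources at peak demand."""
    free_local = LOCAL_CAPACITY - busy_local_cores
    if pending_cores <= free_local:
        return "local_farm"        # the valley: the local farm covers demand
    if cloud_price_per_core_hour <= max_price:
        return "commercial_cloud"  # the peak: rent commercial cloud capacity while cheap
    return "hpc_allocation"        # otherwise fall back to a supercomputer allocation

# Example: demand for 25,000 cores with 8,000 of the local cores already busy
print(choose_backend(25_000, 8_000, cloud_price_per_core_hour=0.03))  # -> commercial_cloud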

The idea dates back to 2014, when Spentzouris and Fermilab colleague Lothar Bauerdick assessed the volumes of data coming from Fermilab’s neutrino programme and the US participation in CERN’s Large Hadron Collider (LHC) experiments. The first demonstration of HEPCloud on a significant scale came in February 2016, when the CMS experiment used it to run about 60,000 cores on the Amazon cloud, AWS, and, later that year, 160,000 cores on Google Cloud Services. Most recently, in May 2018, the NOvA team at Fermilab was able to execute around 2 million hardware threads on a supercomputer at the National Energy Research Scientific Computing Center of the US Department of Energy’s Office of Science. HEPCloud project members now plan to enable experiments to use the state-of-the-art supercomputing facilities run by the DOE’s Advanced Scientific Computing Research programme at Argonne and Oak Ridge national laboratories.

Europe’s Helix Nebula

CERN is leading a similar project in Europe called the Helix Nebula Science Cloud (HNSciCloud). Launched in 2016 and supported by the European Union (EU), it builds on work initiated by EIROforum in 2010 and aims to bridge cloud computing and open science. Working with IT contractors, HNSciCloud members have so far developed three prototype platforms and made them accessible to experts for testing.

The results and lessons learned are contributing to the implementation of the European Open Science Cloud

“The HNSciCloud pre-commercial procurement finished in December 2018, having shown the integration of commercial cloud services from several providers (including Exoscale and T-Systems) with CERN’s in-house capacity in order to serve the needs of the LHC experiments as well as use cases from life sciences, astronomy, proton and neutron science,” explains project leader Bob Jones of CERN. “The results and lessons learned are contributing to the implementation of the European Open Science Cloud where a common procurement framework is being developed in the context of the new OCRE [Open Clouds for Research Environments] project.”

The European Open Science Cloud, an EU-funded initiative started in 2015, aims to bring efficiencies and make European research data more sharable and reusable. To help European research infrastructures move towards this open-science future, a €16 million EU project called ESCAPE (European Science Cluster of Astronomy & Particle Physics ESFRI) was launched in February. The 3.5-year-long project led by the CNRS will see 31 facilities in astronomy and particle physics collaborate on cloud computing and data science, including CERN, the European Southern Observatory, the Cherenkov Telescope Array, KM3NeT and the Square Kilometre Array (SKA).

In the context of ESCAPE, CERN is leading the effort of prototyping and implementing a FAIR (findable, accessible, interoperable, reusable) data infrastructure based on open-source software, explains Simone Campana of CERN, who is deputy project leader of the Worldwide LHC Computing Grid (WLCG). “This work complements the WLCG R&D activity in the area of data organisation, management and access in preparation for the HL-LHC. In fact, the computing activities of the CERN experiments at HL-LHC and other initiatives such as SKA will be very similar in scale, and will likely coexist on a shared infrastructure.”

Galaxies thrive on new physics

This supercomputer-generated image of a galaxy suggests that general relativity might not be the only way to explain how gravity works. Theorists at Durham University in the UK simulated the universe using hydrodynamical simulations based on “f(R) gravity” – in which a scalar field enhances gravitational forces in low-density regions (such as the outer parts of a galaxy) but is screened by the so-called chameleon mechanism in high-density environments such as our solar system (see C Arnold et al. Nature Astronomy; arXiv:1907.02977).

The left-hand side of the image shows the scalar field of the theory: bright-yellow regions correspond to large scalar-field values, while dark-blue regions correspond to very small scalar-field values, i.e. regions where screening is active and the theory behaves like general relativity. The right-hand side of the image shows the gas density with stars overplotted. The study, which comprised a total of 12 simulations for different model parameters and resolutions and required a total runtime of about 2.5 million core-hours, shows that spiral galaxies like our Milky Way could still form even with different laws of gravity.
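
For reference, f(R) gravity generalises the Einstein–Hilbert action of general relativity by adding a function of the Ricci scalar R; its derivative f_R = df/dR plays the role of the scalar field visualised above (one common convention, not an equation taken from the paper itself):

% f(R) gravity action; general relativity with a cosmological constant is recovered for constant f
S = \int d^4x \, \sqrt{-g}\,\left[ \frac{R + f(R)}{16\pi G} + \mathcal{L}_m \right]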

“Our research definitely does not mean that general relativity is wrong, but it does show that it does not have to be the only way to explain gravity’s role in the evolution of the universe,” says lead author Christian Arnold of Durham University’s Institute for Computational Cosmology.
