Classical Charged Particles (third edition)

by Fritz Rohrlich, World Scientific. Hardback ISBN 9789812700049 £33 ($58).

Originally written in 1964, this text is a study of the classical theory of charged particles. Many applications treat electrons as point particles, but there is nevertheless a widespread belief that the theory is beset with various difficulties, such as an infinite electrostatic self-energy and an equation of motion that allows physically meaningless solutions. The classical theory of charged particles has meanwhile been largely ignored and left incomplete. Despite the efforts of great physicists such as Lorentz, Poincaré and Dirac, it is usually regarded as a “lost cause”. Thanks to more recent progress, however, the author has been able to resolve the various problems and to complete this unfinished theory successfully.

Optical Trapping and Manipulation of Neutral Particles Using Lasers: a Reprint Volume with Commentaries

by Arthur Ashkin, World Scientific. Hardback ISBN 9789810240578 £102 ($187). Paperback ISBN 9789810240585 £58 ($106).

This volume by the pioneer of optical trapping and “optical tweezers” contains selected papers and extensive commentaries on laser trapping and the manipulation of neutral particles using radiation pressure forces. These optical methods have had a revolutionary impact on the fields of atomic and molecular physics, biophysics and many aspects of nanotechnology. With his colleagues, Ashkin first demonstrated optical levitation, the trapping of atoms, and “tweezer” trapping and manipulation of living cells and biological particles. This extensive review should be of interest to researchers and students in atomic physics, molecular physics, biophysics and nanotechnology.

Principles of Phase Structures in Particle Physics

by Hildegard Meyer-Ortmanns and Thomas Reisz, World Scientific. Hardback ISBN 9789810234416 £71 ($131).

The phase structure of particle physics shows up in matter at extremely high densities and/or temperatures as reached in the early universe or in heavy-ion collisions in modern laboratory experiments. This book covers the various analytical and numerical tools needed to study this phase structure. These include convergent and asymptotic expansions in strong and weak couplings, dimensional reduction, renormalization group studies, gap equations, Monte Carlo simulations with and without fermions, finite-size and finite-mass scaling analyses, and the approach of effective actions as a supplement to first-principle calculations.

ICFA calls for stability in science funding

The International Committee for Future Accelerators (ICFA) has issued a statement on the need for continuous and stable funding for large international science projects, such as the proposed International Linear Collider. This statement is a reaction to recent cuts in the science budgets of the UK and the US, and addresses governments and science funding agencies around the world.

In the statement, ICFA expresses its deep concern about the recent decisions in the UK and the US on spending for long-term international science projects. It points out that “frontier science relies increasingly on stable international partnerships, since the scientific and technical challenges can only be met by enabling outstanding men and women of science from around the world to collaborate, and by joining forces to provide the resources they need to succeed”.

The statement continues: “A good example is the proposed International Linear Collider. In order to advance the understanding of the innermost structure of matter and the early development of the universe, several thousand particle physicists and accelerator scientists around the world, during the past 15 years, have co-ordinated their work on developing the technologies necessary to make a linear collider feasible.

“In view of these tightly interlinked efforts, inspired and driven by the scientific potential of the linear collider, the sudden cuts implemented by two partner countries have devastating effects. ICFA feels an obligation to make policy makers aware of the need for stability in the support of major international science efforts. It is important for all governments to find ways to maintain the trust needed to move forward international scientific endeavours.”

Barry Barish and the GDE: mission achievable

Barry Barish likes a challenge. He admits to a definite tendency to go for the difficult in his research – in his view, life is an adventure. Some might say that his most recent challenge would fit well with a certain famous TV series: “Your mission, should you choose to accept it… is to produce a design for the International Linear Collider that includes a detailed design concept, performance assessments, reliable international costing, an industrialization plan, and siting analysis, as well as detector concepts and scope.”

Barish did indeed accept the challenge in March 2005, when he became director of the Global Design Effort (GDE) for a proposed International Linear Collider (ILC). He started in a directorate of one – himself – at the head of a “virtual” laboratory of hundreds of physicists and engineers around the globe. To run the “lab” he has set up a small executive committee, which includes three regional directors (for the Americas, Asia and Europe), three project managers and two leading accelerator experts. There are also boards for R&D, change control and design cost.

Barish operates from his base at Caltech, where he has been since 1962 and ultimately became Linde Professor of Physics (now emeritus). His taste for research challenges became evident in the 1970s, when he was co-spokesperson with Frank Sciulli (also at Caltech) of the “narrow band” neutrino experiment at Fermilab that studied weak neutral currents and the quark substructure of the nucleon. He later became US spokesperson of the collaboration behind the Monopole, Astrophysics and Cosmic Ray Observatory, which operated from 1989 to 2000 in the Gran Sasso National Laboratory (LNGS). The experiment did not find monopoles, but it set the most stringent upper limits so far on their existence.

In 1991 he also began to lead the design of the GEM detector for the Superconducting Super Collider project, together with Bill Willis of Columbia University. In October 1993, however, the US congress infamously shut down the project and Barish found himself in search of a new challenge. He did not have to look far, as Caltech was already involved in the Laser Interferometer Gravitational-wave Observatory (LIGO), conceived to search for effects even more difficult to detect than neutrinos. The project was already approved and just beginning to receive funding. Barish became principal investigator in 1994 and director of the LIGO Laboratory in 1997.

Here was an incredibly challenging project, Barish explains, that was “making the audacious attempt to measure an effect of 1 in 10²¹”. It has indeed achieved this precision, but has not yet detected gravitational waves. “Now it’s down to nature,” says Barish, who found the work on LIGO very satisfying. “There is no way I would have left it except for an exciting new challenge – and the ILC is certainly challenging.” He says that it was hard to move on, “but I felt I could make a difference”. Moreover, he adds: “The likelihood is that the ILC will be important for particle physics.”
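To get a feel for that number: the measured quantity is a dimensionless strain h, and a rough estimate taking LIGO’s 4 km arm length L gives the corresponding change in arm length as

$$\Delta L = hL \approx 10^{-21} \times 4\,\mathrm{km} \approx 4\times10^{-18}\,\mathrm{m},$$

about a thousandth of the diameter of a proton.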

At 72 years old, Barish does not expect to participate in the ILC – the earliest it could start up would be in the 2020s. “The plan is short term. The question was whether I could pull together a worldwide team to conceive of a design that will do the job,” he says. With no background in accelerator physics, Barish may not seem the obvious choice for the task. However, he points out that “coming in from the outside, not being buried in the forest, can be very useful”. In addition he believes that he is a good student, and that a good student can be a good leader: “If you do your homework, if the people you work with respect you, then it’s possible.”

An important factor in building the team behind the GDE is that there is not as much history of collaboration in accelerator physics as there is in experimental particle physics. Barish points out that many of the members of the accelerator community have met only at conferences. There has never been real collaboration on accelerator design, so the GDE is a learning process in more than one sense. There are also interesting sociological issues, as the GDE has no physical central location, and meetings usually take place via video and tele-conferencing. Barish likens his job as director to “conducting the disparate instruments in an orchestra”.

In February 2007, the GDE reached a major milestone with the release of the Reference Design Report (RDR) for a 31 km long electron–positron linear collider, with a peak luminosity of about 2 × 10³⁴ cm⁻² s⁻¹, at a top centre-of-mass energy of 500 GeV, and the possibility of upgrading to 1 TeV. The report contains no detailed engineering; it might state, for example, that a magnet is needed for a certain task, but it does not describe how to build it. The report also contains a preliminary cost estimate, of some $6700 m plus 13,000 person-years of effort.
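For scale, a rough conversion – assuming a canonical 10⁷ s of useful running per year – turns the design luminosity into an annual integrated luminosity of

$$\int\! L\,\mathrm{d}t \approx 2\times10^{34}\,\mathrm{cm^{-2}\,s^{-1}} \times 10^{7}\,\mathrm{s} = 2\times10^{41}\,\mathrm{cm^{-2}} = 200\,\mathrm{fb^{-1}},$$

so a process with a cross-section of 1 fb would yield of the order of 200 events per year.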

The final goal will be to produce a strong engineering design, and to optimize costing to form a serious proposal. An appealing deadline is the 2010 ICHEP meeting in Paris. By then there should be results from the LHC that could justify the project. “The main job,” says Barish, “is to design a good machine and move once it’s justified.”

In the meantime there is important R&D to be done. Two key areas concern the high accelerating gradient proposed for the machine – an average of 31.5 MV/m – and the effects of electron clouds. Electrons from the walls of the beam pipe cause the positron beam to blow up, thereby reducing the luminosity, and ultimately the number of events. The clouds decay naturally and cease to be a problem if there is sufficient time between bunches, but this reduces the collision rate. The conservative option to keep the rate high would be to have two positron rings to inject alternate pulses into the linac. However, this has huge cost implications so, as Barish says: “There is huge motivation to solve the problem.” One attractive possibility that needs further investigation involves grooving and coating the beam pipe, which could reduce the electron cloud a hundredfold.

However, just before the end of 2007, bad news on funding in both the US and the UK struck a major blow to the plan foreseen at the time that the RDR was released. The UK dealt the first strike, stating that it would “cease investment” in the project, while the US reduced funding for the ILC from $60 m to $15 m as part of a hastily agreed compromise budget for FY2008. Barish recalls the complete surprise of the congressional decision on a budget that President Bush had put forward in February 2007. “We went to bed as normal on Friday (14 December), and woke up on Monday to find the project axed out.”

The cuts in the two countries are both quantitatively and qualitatively different. In one sense the UK’s decision is more serious, as it appears to be a policy decision taken with no input from the community (see Particle physics in the UK is facing a severe funding crisis). Barish says that the main loss here to the GDE is in intellectual leadership. He hopes that continued funding in the UK for general accelerator R&D will mean that the project does not lose people that he says are irreplaceable. In contrast, he expects to see the R&D for the ILC revived in the US budget for FY2009 (starting October 2008), albeit at a level lower than the $60 m originally promised for FY2008. Here the problem is how to cope with the loss of people over the coming months, as there is no funding left to support them in the current budget. Where it hurts most, says Barish, is that the US will not be able to develop the same level of home-grown expertise in the technology required for the ILC, compared with Japan or Europe.

A revival of the ILC in the US budget was a key assumption when Barish and the GDE executive committee met for a relatively rare face-to-face meeting at DESY on 12 January to formulate a new plan. At least the collaboration that Barish has forged is “strong enough to give us the ability to adjust and move on, even with reduced goals”. The aim of the new plan that has emerged is to reduce the scope of the R&D work, but maintain the original schedule of completion by 2010 for items with the highest technical risk, while stretching other parts of the programme to 2012.

The work on high gradients, under way globally, and tests at Cornell University on reducing the electron cloud will remain high priorities for part one of the newly defined Technical Design Phase, to be ready for 2010. Part two, which will focus on the detailed engineering and industrialization, should be ready by 2012.

Looking further ahead, Barish acknowledges that an ILC-like machine could be the end of the line for very high-energy accelerators, but he points out that accelerators for other applications have a promising future. The GDE itself is already playing an important role in teaching accelerator physics to a new generation. “There is no better way to train them than on something that is pushing the state of the art,” he says. In fact, he sees training as a limiting factor in breeding new experts – whether young people or “converts” from other areas of physics, as many accelerator physicists now are. One problem that he is aware of is that “accelerator people are not revered – but they should be!”.

Despite the recent setbacks with the GDE, Barish remains determined to achieve his mission. “In these ambitious, long-range projects you are going to hit huge bumps in the road, but you have to persevere,” he says. What is vital, in his view, is that the agenda should remain driven by science, and that this alone should determine if and when the ILC is built on the firm foundations laid by the GDE. Let us hope that those who fund particle physics have the vision to ensure that one day he can say: “Mission accomplished.”

The Void

by Frank Close, Oxford University Press. Hardback ISBN 9780199225903, £9.99 ($19.50).

This is a small book – you can read it in an evening – about the intriguing subject of “nothing”. Close takes us through history from the earliest philosophers, who concluded that “Nature abhors a void”, through the period of arguments about the non-existent Ether, up to the present time, where the void is considered to be a seething quantum-mechanical foam. He describes how the concepts of space and time are linked to the different ideas of “the Void” and ends with current speculations: maybe our entire universe is a quantum fluctuation with near-zero total energy. I learnt that the different forces are influenced by the structure of the void and that some constants of nature may be the random result of spontaneous symmetry breaking, both of which added to my very tenuous non-grasping of the Higgs question.

So far so good. Fortunately, I had already read a number of texts around the subject, for some passages are difficult to grasp because of the sometimes ungrammatical sentences – page 35 gets my all-time prize for totally confusing the reader.

It is unclear to me who the target audience is; the level required to understand the text ranges widely depending on the chapter. Close sometimes uses advanced concepts without explanation and has to rely on more than a little familiarity with the mysteries of quantum mechanics. As with most popular books of this type, those mysteries remain intact, although I must say in favour of The Void that it manages, for once, to leave out Schrödinger’s cat.

I also wonder if the text has been proof-read. Here are just two examples from too large a set: though Close was once head of communications and public education at CERN, he tells us that CERN started in 1955 (it was 1954) in a sentence that cannot be parsed in any language. I share some of his criticisms of CERN’s exhibition centre, but find it difficult to accept that for an entire page he uses “La Globe”, when the correct French is “le Globe” as can be read on CERN’s public website. Did no-one spot this? Fortunately for the author, but unfortunately for the publishing business, this book is not alone in being the victim of such sloppiness.

So, The Void is well worth reading; then send in your corrections.

Why particle physics needs stability

A premature end to SLAC’s B-factory, a stop to UK investment in the International Linear Collider (ILC) project and more than 300 lay-offs at Fermilab and SLAC – 2007 ended on a bad note. Do these cuts in the US and the UK signal a general downturn for particle physics? No! In the UK they are the combined result of organizational changes and an emphasis on national facilities (see Particle physics in the UK is facing a severe funding crisis). In the US they arose from disputes in congress; and there at least, the US president’s budget for FY2009 looks more positive. The reasons for these cuts are thus too specific to call them a trend – all the more since, for example, KEK’s five-year plan strongly endorses ILC research and funding has increased recently in countries such as Germany.

We have seen frequent ups and downs in funding over the decades. So is it business as usual? Not quite. Nowadays, these ups and downs must be seen in the framework of global co-operation and the interdependence of projects in particle physics.

The size and cost of our facilities are so large that they can only be realized in a truly international context: a machine like the LHC will exist only once in the world. Equally, it would be inefficient to clone billion-euro projects for future neutrino physics, super B-factories or astroparticle physics. Moreover, the R&D necessary for high energies and intensities for future accelerators – and for future detectors – can only be performed in a stable and organized worldwide effort.

In theory, everyone agrees that a global distribution of responsibilities is the most cost-effective approach, allowing particle physics to make the best use of worldwide interests and expertise, and guaranteeing a broad and complementary exploration of our field. Making this a reality, however, is another matter. It requires agreements at a transnational level and, in particular, reliability and continuity of support.

Here we evidently have a problem: although international in character, funding of high-energy physics projects is, and will be, largely national. There are few internationally binding treaties like the one for CERN, which generates a stable financial situation. Most agreements are memoranda of understanding or even less formal. Common goals are subject to the “good will” of national funding and therefore to changing economic situations and national political and scientific priorities. Particularly vulnerable are the projects that require significant R&D without clearly defined financial contributions.

In addition, there is the simple fact that supporting national facilities is politically easier than financing international ones. Which representative would lobby for a project that is not in their country or constituency? Surely it is better to cut the ILC than the local research facility.

Consider the example of the ILC more closely. The consensus is that it will be the next big machine, and that there will be only one machine. Even if the ILC is not imminent, R&D is mandatory to optimize costs and come to a technically sound proposal. Following this ideal, the worldwide community formed a global network and began to develop special expertise (see Barry Barish and the GDE: mission achievable). The UK, for example, was leading the effort on damping rings, beam delivery and positron sources. Ceasing support for ILC R&D in the UK therefore cuts a large hole in the international network. Who can take over – and at what cost? Yes, stopping R&D saves money in the short term, but in the long term it will cost more. Maybe even more damaging, a loss of confidence in pursuing projects internationally could result.

What can we conclude? Well, the first point is rather trivial: particle physics is part of society. We are not free from general economic constraints and we have to compete with important social, political, ecological and scientific goals. We can only progress if we repeatedly make it clear what it is we give back to society in return.

However, we really need a more organized way of setting internationally agreed priorities, with more binding definitions of national responsibilities and financial commitments for large-scale projects, including their R&D phase. The CERN Council strategy group, together with the funding agencies in the CERN Council, is an important tool and should be a step towards a transcontinental equivalent. Note, however, that even this is no guarantee of reliability, as is evident from the termination of US contributions to the ITER project.

CERN, like any other laboratory hosting a large-scale facility, should see itself as an important part of a large network, serving the interests of its member and contributing states. Universities and national laboratories should not be seen as an appendix, but as key participants. We should work actively to make it evident in all countries that contributing to CERN eventually feeds back into domestic technological and scientific progress.

An excellent and successful LHC project is key to further international co-operation in particle physics. If nature reveals new effects and causes public excitement, many problems we face now will be easier to solve.

Particle physics in the UK is facing a severe funding crisis

When the UK announced its science budget for 2008–2011 on 11 December, it looked like good news. An additional £1200 m was to be spent on science, and at the end of the period the budget would be 19% higher than at the start – an increase of more than 11% after inflation. The Science and Technology Facilities Council (STFC), which is responsible for the CERN budget as well as for UK particle and nuclear physics, astronomy, space science, the Rutherford Appleton and Daresbury Laboratories, ESA, ESO, ILL, ESRF and much else, received an extra £185 m, representing an increase of 13.6% (6% after inflation).

However, the headlines hid a darker truth. Once the accounting was done correctly, this increase to STFC translated into a deficit of £80 m. Later the same week, Richard Wade, the UK delegate to the CERN Council, was obliged to make the following statement: “whilst we strongly support CERN and the consolidation programme, under the circumstances I cannot vote in favour of the increased budget at this meeting”.

The problem arises because much of the increase is directed to issues such as capital depreciation of STFC facilities and maintenance in the UK’s universities. Of the £185 m, nearly half (£82 m) is in so-called “non-cash”, which is a balance-sheet adjustment to take account, for example, of the cost of capital and depreciation; this is not available for spending on the research programme. Most of the rest goes straight to the universities as a supplement to research grants to pay much of the “full economic cost” of research. What remains is the “flat cash” to pay for the science that STFC does, and this is eaten away as inflation bites.

To make matters worse, STFC has inherited liabilities of about £40 m from previous decisions by ministers to run the Synchrotron Radiation Source (SRS) at Daresbury for a while in parallel with the new Diamond third-generation synchrotron source. The SRS now has to be decommissioned, and there was an unexpected VAT bill from the Treasury on the operation of the new facility by Diamond Light Source Ltd. There are also increased costs for running Diamond and the second target station for the ISIS spallation neutron source, which have been known about for some four years but which were not yet fully funded. As a result, STFC has an £80 m hole in its budget, just to continue with what it does now.

The decisions STFC has made to accommodate the hole are severe: withdrawal from major international programmes, job losses estimated to lie in the hundreds (including probably some compulsory redundancies) and cut-backs across exploitation grants for almost all projects. As a result the UK is withdrawing from important international commitments – the Gemini telescopes, the International Linear Collider and ground-based solar-terrestrial physics. Other programmes are also likely to be affected.

There is widespread anger and dismay in the UK, as these decisions were taken with no proper peer review and no consultation with the community. Concerns are shared not only by the particle physicists and astronomers directly affected by the cuts. The Royal Society, the Institute of Physics and the Royal Astronomical Society have all expressed concern, as have university vice-chancellors.

Members of parliament (MPs) are also concerned. Many have received letters pointing out the damage that the cuts will do to the country’s international reputation, and to the image of physics and astronomy in the eyes of those considering what to study at university – there had been fragile signs of a recovery in the number of UK students wishing to study physics. There have been debates and questions in parliament, and a committee of MPs is now looking into the matter. More than 15,000 people, including Stephen Hawking, Peter Higgs, Sir Patrick Moore and Nobel laureates Sir Paul Nurse and Sir Harry Kroto, have signed a petition calling on the Prime Minister to reverse the decision to cut vital UK contributions to particle physics and astronomy.

Un Improbable Chemin de Vie

by André Krzywicki, L’Harmattan. Paperback ISBN 2296011934, €17.50.

André Krzywicki certainly knows probability theory too well not to realize that the title of his autobiography – “an improbable path through life” – is a contradiction in terms. An event envisaged in the future can be probable or improbable, but what has already happened has happened, full stop. Everyone nevertheless understands perfectly well what the title means: everything that happened was, a priori, highly improbable. Improbable that he should survive the Nazi terror, as our CERN friends Georges Charpak, Jacques Prentki and Marcel Vivargent, for example, did. Improbable that he should survive poliomyelitis. Improbable that he should come through it with a serious but bearable handicap, allowing him a normal emotional life. Improbable, finally, that he should manage to settle in the West, at Orsay (near Paris), where he would end his career as a theoretical physicist of the highest level. Unquestionably, all of this was worth telling.

André Krzywicki was born in Warsaw to a father from the Catholic aristocracy and a Jewish mother, a writer who was already famous before the war. An officer, his father was taken prisoner by the Russians and executed at Kharkov, a massacre perhaps less well known than that of Katyn. The author had an elder brother, his mother’s favourite. His mother understood that agreeing to wear the Star of David meant falling into a trap, and took refuge with her two children in the countryside under a false name. They were spotted by the Germans who, by good fortune, needed two attempts to come and fetch them; the second time, the family had vanished, hidden by neighbours. Back in Warsaw, she witnessed the ghetto uprising (from the outside!) and the Warsaw uprising, crushed thanks to Stalin’s cynicism.

Then came the catastrophic death of his elder brother, followed by adaptation to the communist regime. With great honesty, André Krzywicki admits that he threw himself wholeheartedly into the communist youth movement, while his mother seemed to manoeuvre carefully around the regime; twice she was sent on cultural missions to embassies abroad. He describes his love of sport, brutally cut short by the polio that nearly killed him. Others around him died of it for lack of care. Through sheer effort he overcame part of his paralysis, but, as those who know him are aware, he would use crutches for the rest of his life.

It is perhaps because of his handicap that he turned to theoretical physics, landing at the institute in Hoza Street, on which he passes a judgement a little too severe for my taste. There were good people there – for example my late friend Lukaszuk who, for his part, stayed in Poland and was exiled to the Baltic coast for his participation in Solidarity.

During a first escapade to the West, in Copenhagen, André Krzywicki invited his friend Ziro Koba, who introduced him to his student, the eccentric but brilliant Holger Nielsen, whom we know well at CERN. Then, for ideological and scientific reasons, he left for the West for good. At CERN, of which he speaks in glowing terms, he benefited from the help of Jacques Prentki, while Léon Van Hove tried to persuade him to return to Warsaw (rather as Van Hove had sent Martin Veltman back to Utrecht, which led Veltman to meet Gerard ’t Hooft, with whom he shared the Nobel prize!). Finally, with the help of Louis Leprince-Ringuet and Maurice Lévy, he settled at Orsay. I admire his pulling off this feat, for these two leading figures of French science did not get on.

His accounts of Parisian scientific life are fascinating. He describes, with a critical eye, the workings of research and teaching and, above all, paints a pitiless picture of the events of May 1968. He mocks the spinelessness of most of the teachers and researchers, and describes the confinement of Jean Nuyts, accused of “elitism” because he taught field theory. For him, May 1968 was above all an opportunity for the mediocre to push themselves forward! On the whole this is true, but among the ringleaders were people who had done excellent work beforehand (Jean-Marc Lévy-Leblond, for example). We are also given a realistic description of the scientific milieu, where not everyone is a saint: there are sometimes thieves, operating in various ways, of whom we have all been victims at one time or another. What makes competition between physicists worse than that between businessmen, a former CERN engineer, Pierre Amiot, used to say, is that businessmen fight for money whereas physicists fight for glory. Roy Glauber (well before receiving the Nobel prize) made an interesting remark to him: “Around the age of 50, people suffer from not receiving the recognition they deserve.” He also explains the pros and cons of the citation system, which “pays off” mostly for those who are already well known.

About his own work André Krzywicki is relatively discreet. It is a merit of the book that it contains no formulae; at most one reads “nucleon, quark, colour”. The man can be extremely modest: “it is not impossible that this work (of mathematics for physics) will be the only thing of mine that remains” (p117). But he cannot resist repeating the compliments (and the borrowings) that the great of this world, such as Ken Wilson and Dick Feynman, have paid him.

About his complex emotional life the author is very honest, even giving details of a sexual nature. But it is clear that among all the women he met, only one was the great love: Ela, who died of cancer at Orsay. It is a little like Feynman, who had many affairs but a single great love, Arlene, who died of tuberculosis in Albuquerque while he was working at Los Alamos. One final remark: although he retains a visceral attachment to Poland, one senses that he feels truly at home in France.

My conclusion is that this book is well worth reading, not only by physicists but also by people who know the world of physics – the spouses of physicists, for example, or non-scientific members of the CERN personnel. I think an English translation would be highly desirable.

From BCS to the LHC

It was a little odd for me, a physicist whose work has been mainly on the theory of elementary particles, to be invited to speak at a meeting of condensed-matter physicists celebrating a great achievement in their field. It is not only that there is a difference in the subjects that we explore. There are deep differences in our aims, in the kinds of satisfaction that we hope to get from our work.

Condensed-matter physicists are often motivated to deal with phenomena because the phenomena themselves are intrinsically so interesting. Who would not be fascinated by weird things, such as superconductivity, superfluidity, or the quantum Hall effect? On the other hand, I don’t think that elementary-particle physicists are generally very excited by the phenomena they study. The particles themselves are practically featureless, every electron looking tediously just like every other electron.

Another aim of condensed-matter physics is to make discoveries that are useful. In contrast, although elementary-particle physicists like to point to the technological spin-offs from elementary-particle experimentation, and these are real, this is not the reason that we want these experiments to be done, and the knowledge gained by these experiments has no foreseeable practical applications.

Most of us do elementary-particle physics neither because of the intrinsic interestingness of the phenomena that we study, nor because of the practical importance of what we learn, but because we are pursuing a reductionist vision. All of the properties of ordinary matter are what they are because of the principles of atomic and nuclear physics, which are what they are because of the rules of the Standard Model of elementary particles, which are what they are because…well, we don’t know, this is the reductionist frontier, which we are currently exploring.

I think that the single most important thing accomplished by the theory of John Bardeen, Leon Cooper, and Robert Schrieffer (BCS) was to show that superconductivity is not part of the reductionist frontier (Bardeen et al. 1957). Before BCS this was not so clear. For instance, in 1933 Walter Meissner raised the question of whether electric currents in superconductors are carried by the known charged particles, electrons and ions. The great thing that Bardeen, Cooper, and Schrieffer showed was that no new particles or forces had to be introduced to understand superconductivity. According to a book on superconductivity that Cooper showed me, many physicists were even disappointed that “superconductivity should, on the atomistic scale, be revealed as nothing more than a footling small interaction between electrons and lattice vibrations” (Mendelssohn 1966).

The claim of elementary-particle physicists to be leading the exploration of the reductionist frontier has at times produced resentment among condensed-matter physicists. (This was not helped by a distinguished particle theorist, who was fond of referring to condensed-matter physics as “squalid state physics”.) This resentment surfaced during the debate over the funding of the Superconducting Super Collider (SSC). I remember that Phil Anderson and I testified in the same Senate committee hearing on the issue, he against the SSC and I for it. His testimony was so scrupulously honest that I think it helped the SSC more than it hurt it. What really did hurt was a statement opposing the SSC by a condensed-matter physicist who happened at the time to be the president of the American Physical Society. As everyone knows, the SSC project was cancelled, and now we are waiting for the LHC at CERN to get us moving ahead again in elementary-particle physics.

During the SSC debate, Anderson and other condensed-matter physicists repeatedly made the point that the knowledge gained in elementary-particle physics would be unlikely to help them to understand emergent phenomena like superconductivity. This is certainly true, but I think beside the point, because that is not why we are studying elementary particles; our aim is to push back the reductive frontier, to get closer to whatever simple and general theory accounts for everything in nature. It could be said equally that the knowledge gained by condensed-matter physics is unlikely to give us any direct help in constructing more fundamental theories of nature.

So what business does a particle physicist like me have at a celebration of the BCS theory? (I have written just one paper about superconductivity, a paper of monumental unimportance, which was treated by the condensed-matter community with the indifference it deserved.) Condensed-matter physics and particle physics are relevant to each other, despite everything I have said. This is because, although the knowledge gained in elementary-particle physics is not likely to be useful to condensed-matter physicists, or vice versa, experience shows that the ideas developed in one field can prove very useful in the other. Sometimes these ideas become transformed in translation, so that they even pick up a renewed value to the field in which they were first conceived.

The example that concerns me is an idea that elementary-particle physicists learnt from condensed-matter theory – specifically from the BCS theory. It is the idea of spontaneous symmetry breaking.

Spontaneous symmetry breaking

In particle physics we are particularly interested in the symmetries of the laws of nature. One of these symmetries is invariance of the laws of nature under the symmetry group of three-dimensional rotations, or in other words, invariance of the laws that we discover under changes in the orientation of our measuring apparatus.

When a physical system does not exhibit all the symmetries of the laws by which it is governed, we say that these symmetries are spontaneously broken. A very familiar example is spontaneous magnetization. The laws governing the atoms in a magnet are perfectly invariant under three-dimensional rotations, but at temperatures below a critical value, the spins of these atoms spontaneously line up in some direction, producing a magnetic field. In this case, and as often happens, a subgroup is left invariant: the two-dimensional group of rotations around the direction of magnetization.
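For readers who like the group-theoretic bookkeeping, the magnet realizes the spontaneous breaking pattern

$$SO(3) \;\longrightarrow\; SO(2),$$

leaving dim SO(3) − dim SO(2) = 3 − 1 = 2 broken symmetry directions. (How many low-frequency spin-wave modes go with them is a subtler, non-relativistic question; in the relativistic theories discussed below, each broken direction yields exactly one massless mode.)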

Now to the point. A superconductor of any kind is nothing more or less than a material in which a particular symmetry of the laws of nature, electromagnetic gauge invariance, is spontaneously broken. This is true of high-temperature superconductors, as well as the more familiar superconductors studied by BCS. The symmetry group here is the group of two-dimensional rotations. These rotations act on a two-dimensional vector, whose two components are the real and imaginary parts of the electron field, the quantum mechanical operator that in quantum field theories of matter destroys electrons. The rotation angle of the broken symmetry group can vary with location in the superconductor, and then the symmetry transformations also affect the electromagnetic potentials, a point to which I will return.
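Written out explicitly – in natural units and in one common sign convention – this is the familiar gauge transformation of electrodynamics, acting simultaneously on the electron field ψ and the electromagnetic potential A_μ:

$$\psi(x) \;\to\; e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x) \;\to\; A_\mu(x) - \frac{1}{e}\,\partial_\mu\alpha(x),$$

which shows how a rotation angle α(x) that varies with location necessarily drags the potentials along with it.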

The symmetry breaking in a superconductor leaves unbroken a rotation by 180°, which simply changes the sign of the electron field. In consequence of this spontaneous symmetry breaking, products of any even number of electron fields have non-vanishing expectation values in a superconductor, though a single electron field does not. All of the dramatic exact properties of superconductors – zero electrical resistance, the expelling of magnetic fields from superconductors known as the Meissner effect, the quantization of magnetic flux through a thick superconducting ring, and the Josephson formula for the frequency of the AC current at a junction between two superconductors with different voltages – follow from the assumption that electromagnetic gauge invariance is broken in this way, with no need to inquire into the mechanism by which the symmetry is broken.
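Two of these exact results can be stated in one line each, and both display the characteristic factor of 2e that reflects the unbroken 180° rotation, that is, the pairing of electrons: the magnetic flux through a thick superconducting ring is quantized as

$$\Phi = n\,\frac{h}{2e}, \qquad n = 0,\,\pm1,\,\pm2,\,\ldots,$$

and the AC current at a Josephson junction held at a voltage difference V oscillates at the frequency

$$\nu = \frac{2eV}{h}.$$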

Condensed-matter physicists often trace these phenomena to the appearance of an “order parameter”, the non-vanishing mean value of the product of two electron fields, but I think this is misleading. There is nothing special about two electron fields; one might just as well take the order parameter as the product of three electron fields and the complex conjugate of another electron field. The important thing is the broken symmetry, and the unbroken subgroup.

It may then come as a surprise that spontaneous symmetry breaking is mentioned nowhere in the seminal paper of Bardeen, Cooper and Schrieffer. Their paper describes a mechanism by which electromagnetic gauge invariance is in fact broken, but they derived the properties of superconductors from their dynamical model, not from the mere fact of broken symmetry. I am not saying that Bardeen, Cooper, and Schrieffer did not know of this spontaneous symmetry breaking. Indeed, there was already a large literature on the apparent violation of gauge invariance in phenomenological theories of superconductivity, the fact that the electric current produced by an electromagnetic field in a superconductor depends on a quantity known as the vector potential, which is not gauge invariant. But their attention was focused on the details of the dynamics rather than the symmetry breaking.

This is not just a matter of style. As BCS themselves made clear, their dynamical model was based on an approximation, that a pair of electrons interact only when the magnitude of their momenta is very close to a certain value, known as the Fermi surface. This leaves a question: How can you understand the exact properties of superconductors, like exactly zero resistance and exact flux quantization, on the basis of an approximate dynamical theory? It is only the argument from exact symmetry principles that can fully explain the remarkable exact properties of superconductors.

Though spontaneous symmetry breaking was not emphasized in the BCS paper, the recognition of this phenomenon produced a revolution in elementary-particle physics. The reason is that (with certain qualifications, to which I will return), whenever a symmetry is spontaneously broken, there must exist excitations of the system with a frequency that vanishes in the limit of large wavelength. In elementary-particle physics, this means a particle of zero mass.
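The translation between “vanishing frequency at large wavelength” and “zero mass” is simply the relativistic dispersion relation: for a particle of mass m,

$$\hbar\omega = \sqrt{(\hbar c k)^2 + (m c^2)^2},$$

so the frequency ω goes to zero in the long-wavelength limit k → 0 if and only if m = 0.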

The first clue to this general result was a remark in a 1960 paper by Yoichiro Nambu, that just such collective excitations in superconductors play a crucial role in reconciling the apparent failure of gauge invariance in a superconductor with the exact gauge invariance of the underlying theory governing matter and electromagnetism. Nambu speculated that these collective excitations are a necessary consequence of this exact gauge invariance.

A little later, Nambu put this idea to good use in particle physics. In nuclear beta decay an electron and neutrino (or their antiparticles) are created by currents of two different kinds flowing in the nucleus, known as vector and axial vector currents. It was known that the vector current was conserved, in the same sense as the ordinary electric current. Could the axial current also be conserved?

The conservation of a current is usually a symptom of some symmetry of the underlying theory, and holds whether or not the symmetry is spontaneously broken. For the ordinary electric current, this symmetry is electromagnetic gauge invariance. Likewise, the vector current in beta decay is conserved because of the isotopic spin symmetry of nuclear physics. One could easily imagine several different symmetries, of a sort known as chiral symmetries, that would entail a conserved axial vector current. However, it seemed that any such chiral symmetries would imply either that the nucleon mass is zero, which is certainly not true, or that there must exist a triplet of massless strongly interacting particles of zero spin and negative parity, which isn’t true either. These two possibilities simply correspond to the two possibilities that the symmetry, whatever it is, either is not, or is, spontaneously broken, not just in some material like a superconductor, but even in empty space.
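A one-line sketch shows why a chiral symmetry forbids a nucleon mass. Under a chiral transformation the nucleon field transforms as ψ → e^{iγ₅θ}ψ; the kinetic term is invariant, but a mass term is not:

$$m\,\bar\psi\,\psi \;\longrightarrow\; m\,\bar\psi\,e^{2i\gamma_5\theta}\,\psi \;\neq\; m\,\bar\psi\,\psi.$$

An exact, unbroken chiral symmetry therefore requires a massless nucleon; the alternative is spontaneous breaking, at the price of the massless spin-zero particles just mentioned.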

Nambu proposed that there is indeed such a symmetry, and it is spontaneously broken in empty space, but the symmetry in addition to being spontaneously broken is not exact to begin with, so the particle of zero spin and negative parity required by the symmetry breaking is not massless, only much lighter than other strongly interacting particles. This light particle, he recognized, is nothing but the pion, the lightest and first discovered of all the mesons. In a subsequent paper with Giovanni Jona-Lasinio, Nambu presented an illustrative theory in which, with some drastic approximations, a suitable chiral symmetry was found to be spontaneously broken, and in consequence the light pion appeared as a bound state of a nucleon and an antinucleon.

So far, there was no proof that broken exact symmetries always entail exactly massless particles, just a number of examples of approximate calculations in specific theories. In 1961 Jeffrey Goldstone gave some more examples of this sort, and a hand-waving proof that this was a general result. Such massless particles are today known as Goldstone bosons, or Nambu–Goldstone bosons. Soon after, Goldstone, Abdus Salam and I made this into a rigorous and apparently quite general theorem.

Cosmological fluctuations

This theorem has applications in many branches of physics. One is cosmology. You may know that today observations of fluctuations in the cosmic microwave background are being used to set constraints on the nature of the exponential expansion, known as inflation, that is widely believed to have preceded the radiation-dominated Big Bang. But there is a problem here. In between the end of inflation and the time that the microwave background that we observe was emitted, there intervened a number of events that are not at all understood: the heating of the universe after inflation, the production of baryons, the decoupling of cold dark matter, and so on. So how is it possible to learn anything about inflation by studying radiation that was emitted long after inflation, when we don’t understand what happened in between? The reason that we can get away with this is that the cosmological fluctuations now being studied are of a type, known as adiabatic, that can be regarded as the Goldstone excitations required by a symmetry, related to general co-ordinate invariance, that is spontaneously broken by the space–time geometry. The physical wavelengths of these cosmological fluctuations were stretched out by inflation so much that they were very large during the epochs when things were happening that we don’t understand, so they then had zero frequency. This means that the amplitude of these fluctuations was not changing, so the value of the amplitude relatively close to the present tells us what it was during inflation.

But in particle physics, this theorem was at first seen as a disappointing result. There was a crazy idea going around, which I have to admit that at first I shared, that somehow the phenomenon of spontaneous symmetry breaking would explain why the symmetries being discovered in strong-interaction physics were not exact. Werner Heisenberg continued to believe this into the 1970s, when everyone else had learned better.

The prediction of new massless particles, which were ruled out experimentally, seemed in the early 1960s to close off this hope. But it was a false hope anyway. Except under special circumstances, a spontaneously broken symmetry does not look at all like an approximate unbroken symmetry; it manifests itself in the masslessness of spin-zero bosons, and in details of their interactions. Today we understand approximate symmetries such as isospin and chiral invariance as consequences of the fact that some quark masses, for some unknown reason, happen to be relatively small.

Though based on a false hope, this disappointment had an important consequence. Peter Higgs, Robert Brout and François Englert, and Gerald Guralnik, Dick Hagen and Tom Kibble were all led to look for, and then found, an exception to the theorem of Goldstone, Salam and me. The exception applies to theories in which the underlying physics is invariant under local symmetries, symmetries whose transformations, like electromagnetic gauge transformations, can vary from place to place in space and time. (This is in contrast with the chiral symmetry associated with the axial vector current of beta decay, which applies only when the symmetry transformations are the same throughout space–time.) For each local symmetry there must exist a vector field, like the electromagnetic field, whose quanta would be massless if the symmetry was not spontaneously broken. The quanta of each such field are particles with helicity (the component of angular momentum in the direction of motion) equal in natural units to +1 or –1. But if the symmetry is spontaneously broken, these two helicity states join up with the helicity-zero state of the Goldstone boson to form the three helicity states of a massive particle of spin one. Thus, as shown by Higgs, Brout and Englert, and Guralnik, Hagen and Kibble, when a local symmetry is spontaneously broken, neither the vector particles with which the symmetry is associated nor the Nambu–Goldstone particles produced by the symmetry breaking have zero mass.
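The bookkeeping of physical states in this mechanism is simple arithmetic:

$$2\ (\text{helicities } \pm1) \;+\; 1\ (\text{Goldstone boson}) \;=\; 3\ (\text{states of a massive spin-one particle}),$$

with no massless particle of either kind left over.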

This was actually argued earlier by Anderson, on the basis of the example provided by the BCS theory. But the BCS theory is non-relativistic, and the Lorentz invariance that is characteristic of special relativity had played a crucial role in the theorem of Goldstone, Salam and me, so Anderson’s argument was generally ignored by particle theorists. In fact, Anderson was right: the reason for the exception noted by Higgs et al. is that it is not possible to quantize a theory with a local symmetry in a way that preserves both manifest Lorentz invariance and the usual rules of quantum mechanics, including the requirement that probabilities be positive. In fact, there are two ways to quantize theories with local symmetries: one way that preserves positive probabilities but loses manifest Lorentz invariance, and another that preserves manifest Lorentz invariance but seems to lose positive probabilities, so in fact these theories actually do respect both Lorentz invariance and positive probabilities; they just don’t respect our theorem.

Effective field theories

The appearance of mass for the quanta of the vector bosons in a theory with local symmetry re-opened an old proposal of Chen Ning Yang and Robert Mills, that the strong interactions might be produced by the vector bosons associated with some sort of local symmetry, more complicated than the familiar electromagnetic gauge invariance. This possibility was specially emphasized by Brout and Englert. It took a few years for this idea to mature into a specific theory, which then turned out not to be a theory of strong interactions.

Perhaps the delay was because the earlier idea of Nambu, that the pion was the nearly massless boson associated with an approximate chiral symmetry that is not a local symmetry, was looking better and better. I was very much involved in this work, and would love to go into the details, but that would take me too far from BCS. I’ll just say that, from the effort to understand processes involving any number of low-energy pions beyond the lowest order of perturbation theory, we became comfortable with the use of effective field theories in particle physics. The mathematical techniques developed in this work in particle physics were then used by Joseph Polchinski and others to justify the approximations made by BCS in their work on superconductivity.

The story of the physical application of spontaneously broken local symmetries has often been told, by me and others, and I don’t want to take much time on it here, but I can’t leave it out altogether because I want to make a point about it that will take me back to the BCS theory. Briefly, in 1967 I went back to the idea of a theory of strong interactions based on a spontaneously broken local symmetry group, and right away, I ran into a problem: the subgroup consisting of ordinary isospin transformations is not spontaneously broken, so there would be a massless vector particle associated with these transformations with the spin and charges of the ρ meson. This, of course, was in gross disagreement with observation; the ρ meson is neither massless nor particularly light.

Then it occurred to me that I was working on the wrong problem. What I should have been working on were the weak nuclear interactions, like beta decay. There was just one natural choice for an appropriate local symmetry, and when I looked back at the literature I found that the symmetry group I had decided on was one that had already been proposed in 1961 by Sheldon Glashow, though not in the context of an exact spontaneously broken local symmetry. (I found later that the same group had also been considered by Salam and John Ward.) Even though it was now exact, the symmetry when spontaneously broken would yield massive vector particles, the charged W particles that had been the subject of theoretical speculation for decades, and a neutral particle, which I called the Z particle, to mediate a “neutral current” weak interaction, which had not yet been observed. The same symmetry breaking also gives mass to the electron and other leptons, and in a simple extension of the theory, to the quarks. This symmetry group contained electromagnetic gauge invariance, and since this subgroup is clearly not spontaneously broken (except in superconductors), the theory requires a massless vector particle, but it is not the ρ meson, it is the photon, the quantum of light. This theory, which became known as the electroweak theory, was also proposed independently in 1968 by Salam.

The mathematical consistency of the theory, which Salam and I had suggested but not proved, was shown in 1971 by Gerard ‘t Hooft; neutral current weak interactions were found in 1973; and the W and Z particles were discovered at CERN a decade later. Their detailed properties are just those expected according to the electroweak theory.

There was (and still is) one outstanding issue: just how is the local electroweak symmetry broken? In the BCS theory, the spontaneous breakdown of electromagnetic gauge invariance arises because of attractive forces between electrons near the Fermi surface. These forces don’t have to be strong; the symmetry is broken however weak these forces may be. But this feature occurs only because of the existence of a Fermi surface, so in this respect the BCS theory is a misleading guide for particle physics. In the absence of a Fermi surface, dynamical spontaneous symmetry breakdown requires the action of strong forces. There are no forces acting on the known quarks and leptons that are anywhere strong enough to produce the observed breakdown of the local electroweak symmetry dynamically, so Salam and I did not assume a dynamical symmetry breakdown; instead we introduced elementary scalar fields into the theory, whose vacuum expectation values in the classical approximation would break the symmetry.
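In the standard notation, this symmetry breaking by elementary scalar fields can be sketched with a potential of the form

$$V(\phi) = -\mu^2\,\phi^\dagger\phi + \lambda\,(\phi^\dagger\phi)^2,$$

whose minimum lies not at φ = 0 but at a vacuum expectation value ⟨φ⟩ = v/√2, with v = μ/√λ (about 246 GeV in the electroweak theory); in the classical approximation the vector-boson masses then come out as m_W = gv/2 and m_Z = (v/2)√(g² + g′²).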

This has an important consequence. The only elementary scalar quanta in the theory that are eliminated by spontaneous symmetry breaking are those that become the helicity-zero states of the W and Z vector particles. The other elementary scalars appear as physical particles, now generically known as Higgs bosons. It is the Higgs boson predicted by the electroweak theory of Salam and me that will be the primary target of the new LHC accelerator, to be completed at CERN sometime in 2008.

But there is another possibility, suggested independently in the late 1970s by Leonard Susskind and me. The electroweak symmetry might be broken dynamically after all, as in the BCS theory. For this to be possible, it is necessary to introduce new extra-strong forces, known as technicolour forces, that act on new particles, other than the known quarks and leptons. With these assumptions, it is easy to get the right masses for the W and Z particles and large masses for all the new particles, but there are serious difficulties in giving masses to the ordinary quarks and leptons. Still, it is possible that experiments at the LHC will not find Higgs bosons, but instead will find a great variety of heavy new particles associated with technicolour. Either way, the LHC is likely to settle the question of how the electroweak symmetry is broken.

It would have been nice if we could have settled this question by calculation alone, without the need for the LHC, in the way that Bardeen, Cooper and Schrieffer were able to find how electromagnetic gauge invariance is broken in a superconductor by applying the known principles of electromagnetism. But that is just the price we in particle physics have to pay for working in a field whose underlying principles are not yet known.

• This article is based on the talk given by Steven Weinberg at BCS@50, held on 10–13 October 2007 at the University of Illinois at Urbana–Champaign to celebrate the 50th anniversary of the BCS paper. For more about the conference see www.conferences.uiuc.edu/bcs50/.
