Programming the Universe: A Quantum Computer Scientist Takes on the Cosmos

by Seth Lloyd, Alfred A Knopf. Hardback ISBN 1400040922, $25.95.

I borrowed this book from my local library a couple of months ago and found it so irritating that I gave up after the first few chapters. When I agreed to review it I decided that I’d better read it a little more thoroughly – amazingly, this time I really enjoyed it.

Some of the anecdotes and name-dropping are rather annoying and I’m not sure that I can embrace the central thesis – that our universe is a giant quantum computer (QC) computing itself. (I would have thought that the inherent randomness of things argues against the universe as computer.) However, the book does contain an unusually informative and quirky account of the theory of our surroundings, from small to large, and it is very entertaining and easy to read.

As a sort of theory about theories, this is not real science that we are looking at here. It takes the current theories of particle physics and cosmology, assumes that they are all correct and then constructs a new all-embracing theory. Somewhere in the book a claim is made that there are predictions, but I saw no sign of them (theories that make no predictions are unfortunately becoming more and more common these days).

Perhaps the book is way ahead of its time. The most important force in the universe is surely gravity, so when some future theorist has finally developed a quantum theory of gravity, then we might be ready for it.

I have heard that the intelligent-design people are unhappy with this book, but they shouldn’t be. Lloyd has presented them with a great opportunity: surely the hypothetical intelligent designer and the hypothetical programmer of the big hypothetical QC within which we live might be one and the same.

As alert readers of CERN Courier will already know, a recently published experimental result (O Hosten et al. 2006 Nature 439 949) has confirmed theoretical speculation and demonstrated that QCs compute the same whether they are on or off. So here’s an interesting thought: amazing as our universe is, if Lloyd is right it might not even have been turned on yet.

Indeed, it all reminds me a bit too much of Deep Thought and a misspent youth, but it’s a fun book. I guess it takes a quantum-mechanical engineer to view things in such an odd manner. I do recommend this book, but urge that you don’t take it too seriously. Also make sure that you read it twice and remember that the answer might well be 42.

God Created the Integers. The Mathematical Breakthroughs that Changed History

Edited, with commentary, by Stephen Hawking, Running Press. Hardback ISBN 0762419229, £19.99 ($29.95).

“God created the integers, all the rest is the work of Man,” are the words of the 19th-century mathematician Leopold Kronecker, whose thought-provoking statement makes a fitting title for this collection. Following the path of his previous collection, Standing on the Shoulders of Giants, Hawking has brought together representative works of the most influential mathematicians of the past 2500 years, from Euclid to Alan Turing. (Incidentally, Kronecker did not make the cut to be included, but his best friend Karl Weierstrass did.)

The collection outlines the life of each mathematician before reproducing a selection of original work. The sections are self-contained so the book can be read over time or out of sequence. Reading it in one go has its advantages, however: the beautifully terse ancient Greek text contrasts well with the flowery style of George Boole, for instance. Also, there are recurrent themes, such as the problem of the continuity of a function, or David Hilbert’s challenges, which are tackled by several mathematicians in this volume.

Each section starts with an introduction where Hawking’s delightful pen takes us through the mathematician’s biography and then patiently through the most important points of his works. Hawking’s introductions are informative, understandable and in places amusing – for instance, the hilarious story of Kurt Goedel being taken by his friend Albert Einstein to his US citizenship hearing.

There is a wealth of information in the original works reproduced and I mention here a few points that I found interesting.

Euclid’s The Elements concerns geometry and number theory – geometry is the means of visualizing important proofs, such as the infinitude of prime numbers. Archimedes, although better known for his engineering skills, was one of the best mathematicians of antiquity. One of the works reproduced is The Sand Reckoner, Archimedes’ ambitious attempt to count the grains of sand that would fill the visible universe, using the measured sizes of the Earth and the Sun to arrive at his answer: 10⁶³ grains of sand. What is interesting is his treatment of errors – although he knows the size of the Earth, he conservatively assumes a figure 10 times as big.

The collection covers another Greek, Diophantus, although he is perhaps best known from a note that the mathematician Pierre de Fermat (not covered) wrote in the margin next to one of his problems, which became known as Fermat’s last theorem. It puzzled mathematicians for centuries until Andrew Wiles finally proved it in 1994.

Isaac Newton probably has the shortest space allocated in the book, but quantity is not proportional to importance. Newton was not only a great physicist, but also a brilliant mathematician: he invented calculus (independently of Leibniz). In another link between mathematics and physics, Joseph Fourier derived his trigonometric series while trying to solve a physics problem on heat transfer.

Carl Friedrich Gauss, a mathematical prodigy, is considered by many as the greatest mathematician of all time. Less well known is the fact that he worked for a period as a surveyor and that he achieved international fame when he calculated the position of asteroid Ceres in 1801.

Bernhard Riemann generalized geometry in a way that proved essential to Albert Einstein more than 60 years later. Ironically, Riemann was worried about deviations from the Euclidean model at the infinitesimally small scale.

While Hilbert does not feature in this volume, his statement of the three most important challenges of mathematics inspired mathematicians who do appear. Goedel would prove the incompleteness of mathematics (and that its consistency cannot be proved from within), and Turing would later disprove its decidability.

When reading about the lives of the great men featured a few worrying recurrent themes appear, some in line with caricatures of mathematicians. Mental problems and an unfulfilled private life occur with alarming frequency. Less expected, perhaps, is the struggle for professional recognition.

Unfortunately, the book is not perfect and I do have a few gripes. The Greek text is deeply flawed and quite unacceptable for such a publication; the typesetting is appalling and proofreading evidently was non-existent. Running Press would do well to rectify this in future editions. Throughout the book, the hallmarks of a rushed job are everywhere. There are far too many typing errors, which is especially serious where formulae are concerned; it is not always clear whether a footnote was from the original author or inserted by the editor; and finally, some figures (the ones appearing in Henri Lebesgue’s section) are of surprisingly low quality.

Even so, this is an impressive collection of works that are part of our intellectual heritage. It is an important addition to every library, but bedside (or poolside) reading it is not.

Entangled World. The Fascination of Quantum Information and Computing

by Jürgen Audretsch (ed.), Wiley-VCH. Hardback ISBN 3527404708, £22.50 ($33.80).

Entangled World is a 2006 English translation of “Verschränkte Welt – Faszination der Quanten” (2002). Based on lectures on the physics and philosophy of correlated quantum systems given at the University of Konstanz in the winter semester of 2000/2001, it presents a clear and simple overview of quantum mechanics and its applications (especially via entanglement) to novel technologies such as quantum computing.

The lectures in the book are written in a clear and informal style, but are aimed at a level that is too high for an average non-physicist and too low for a practising physicist. Given that they are based on university lectures, this is perhaps not surprising and this book might best be thought of as supplementary reading for a proper quantum mechanics course.

For example, bra and ket notation are introduced, but you’d have to know what vectors and complex numbers are to follow the explanation. A discussion of Bell’s inequality assumes that the reader knows what a probability density is, as well as what an integral means. Those who do not have a background in mathematics or physics would probably still find this an interesting read, but they would have to be willing to skip over many of the details; finding a simpler book might be a better idea for them.

A practising physicist who has not had time to keep up with recent advances in quantum computation and other fields dealing with quantum information may well find sections of this book useful as quick and painless introductions to emerging quantum technologies – ones that genuinely go beyond the possibilities offered by an arbitrarily large amount of classical hardware. Those with a philosophical bent are likely to find much of interest, as there is a fair amount of historical information and interesting quotes with emphasis on the thoughts of physicists rather than professional philosophers.

Although the lectures are by a number of different authors, the book flows well and the notation is consistent. With the slightly rough translation, one could easily think that this was written by a single author.

With respect to the translation, I am aware that it is all but impossible for anyone who is not a native speaker to make a perfectly smooth translation, and in many ways the translation is quite good. That said, a final pass by a native English speaker would have been useful. The same criticism could be levelled at many publishers, but it doesn’t seem unreasonable to ask for the same level of editing that goes into a book originally written in English. Even the short description of the book on the publisher’s website is written in rather poor English. Come on publishers – there is no shortage of underpaid physicists who would make small corrections to translated texts for you!

Overall, I like the layout. Numerous illustrations help to make the text clear, and Erich Joos’ chapter on decoherence has the useful feature of separating out material “for physicists” into shaded boxes, much as New Scientist used to do many years ago, so that a popular article could include a piece of higher-level information without disturbing the overall flow of the text. This has always seemed a great idea to me and I wish that it would come back into vogue.

A few interesting points are raised that would be of interest even to a relatively advanced physics student, especially with respect to decoherence, which receives little treatment, if any, in most of the classic textbooks. Another noteworthy feature of this book is the inclusion of experimental information in the form of plots and sketches of how pieces of equipment are put together.

All in all the book has much to offer and is reasonably priced, but one should be aware of the level at which it is pitched.

Muon neutrinos vanish on way to Minnesota

Muon neutrinos definitely disappear en route from Fermilab in Illinois to Soudan in Minnesota. This is the conclusion from the first results of the Main Injector Neutrino Oscillation Search (MINOS), presented at a seminar at Fermilab on 30 March, which showed that MINOS has observed the disappearance of a significant fraction of these neutrinos. The observation is consistent with the phenomenon of neutrino oscillation, in which neutrinos change from one kind to another, and corroborates earlier observations of muon-neutrino disappearance, made by the Super-Kamiokande and KEK-to-Kamioka (K2K) experiments in Japan.

The Fermilab side of the MINOS experiment comprises a beam-line in a 1220 m long tunnel pointing towards Soudan. The tunnel holds the carbon target and beam-focusing elements that generate neutrinos from protons accelerated by Fermilab’s Main Injector accelerator. A neutrino detector, the MINOS “near detector” located 100 m underground on the Fermilab site, measures the composition and intensity of the neutrino beam as it leaves the laboratory. The Soudan side of the experiment features the 6000 tonne “far” detector about 700 m underground, which measures the properties of the neutrinos after their 735 km trip to northern Minnesota.

If neutrinos did not change as they travel away from Fermilab, the MINOS detector in Soudan should have recorded 177 ± 11 muon neutrinos. Instead, the collaboration found only 92 muon-neutrino events – a clear observation of muon-neutrino disappearance. The deficit as a function of energy is consistent with the hypothesis of neutrino oscillations, which can occur only if different neutrino types have different masses. The MINOS observations yield a value of Δm², the difference between the squares of the masses of the two neutrino types, equal to 0.0031 ± 0.0006 (statistical uncertainty) ± 0.0001 (systematic uncertainty) eV².
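For the curious, the standard two-flavour oscillation formula behind such a fit can be sketched in a few lines of Python. The Δm² central value is from the result above and 735 km is the Fermilab-to-Soudan distance; the 3 GeV beam energy and maximal mixing (sin²2θ = 1) are illustrative assumptions of mine, since a real fit averages over the measured energy spectrum.

```python
import math

def survival_probability(delta_m2_eV2, L_km, E_GeV, sin2_2theta=1.0):
    # P(numu -> numu) = 1 - sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
    # with dm^2 in eV^2, L in km and E in GeV (the usual shorthand units).
    phase = 1.27 * delta_m2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# MINOS central value, Fermilab-to-Soudan baseline; 3 GeV is an assumed
# typical beam energy chosen purely for illustration.
p = survival_probability(0.0031, 735.0, 3.0)
```

At this single assumed energy the survival probability comes out at roughly a third; the observed ratio of about a half emerges only once the full beam-energy spectrum is taken into account.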

In the oscillation scenario, muon neutrinos can transform into electron neutrinos or tau neutrinos, but alternative models – such as neutrino decay and extra dimensions – are not yet excluded. The MINOS collaboration will need to record much more data to test more precisely the exact nature of the disappearance process. Over the next few years, the experiment should collect about fifteen times more data, yielding more results with higher precision.

The MINOS neutrino experiment follows on from the K2K long-baseline neutrino experiment in Japan. From 1999 to 2001 and from 2003 to 2004, K2K sent neutrinos created at an accelerator at the KEK laboratory to a detector in Kamioka, a distance of about 250 km. Compared with K2K, the distance in the MINOS experiment is three times longer, and both the intensity and the energy of the MINOS neutrino beam are higher. These advantages have enabled the MINOS experiment to observe in less than a year about three times as many neutrinos as K2K did in around four years. Later this year the CERN Neutrinos to Gran Sasso project will start delivering muon neutrinos to the Gran Sasso National Laboratory in Italy.

• The MINOS experiment includes about 150 scientists, engineers, technical specialists and students from 32 institutions in six countries: Brazil, France, Greece, Russia, the UK and the US. The US Department of Energy provides the major share of the funding, with additional funding from the US National Science Foundation and the UK’s Particle Physics and Astronomy Research Council. For more information on the experiment see www-numi.fnal.gov/.

Belle experiment finds evidence for rare missing-energy decay

The Belle experiment has recently revealed evidence for a rare and long-sought missing-energy decay of the B meson, B → τν. This has allowed the Belle Collaboration to measure the B-meson decay constant, fB, for the first time. The results were announced at the Flavor Physics and CP Violation conference in Vancouver, and have been submitted to Physical Review Letters (Ikado et al. 2006).

The Belle experiment is a collaborative effort of scientists from universities and laboratories in America, Asia, Australia and Europe. It operates at the KEK High Energy Physics Laboratory in Japan – home to KEKB, the world’s highest-luminosity particle accelerator, which recently achieved a peak luminosity of 1.6 × 10³⁴ cm⁻²s⁻¹.

In the decay mode B → τν, the B meson (a strongly interacting bound state of a b quark and an anti-u quark) transforms into a final state containing only leptons. Previously, because this decay process had not been seen, researchers had to rely entirely on either calculations in lattice quantum chromodynamics (QCD) or models to obtain the parameter fB, which is needed to interpret many other measurements in particle physics, including the Cabibbo-Kobayashi-Maskawa unitarity-triangle constraints from Bd–B̄d mixing.

The decay mode B → τν is especially hard to find. Not only is it rare – only about 1 in 10,000 charged B mesons decays this way – but tau leptons often decay to an electron or muon together with two neutrinos, which escape the detector unseen. This means that the experimental signature is simply a single charged track accompanied by missing energy, a signature frequently mimicked by less interesting background processes.

The Belle experiment operates at the Υ(4S) resonance, where each B meson is produced accompanied by an anti-B meson partner and nothing else. The experimental breakthrough that allowed the discovery of the missing-energy decay mode involved detecting all the decay products of the B meson accompanying the sought-after decay, thereby constraining the energy and momentum of the missing or undetected particles. This technique has a very low efficiency and is only possible because of the unprecedented luminosity of the KEKB accelerator, which provided the Belle experimenters with 457 million charged B mesons to study. Even so, this was barely sufficient to discover this rare and unusual process.
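A back-of-the-envelope count shows why even 457 million charged B mesons were barely enough. The sample size and the 1-in-10,000 branching fraction are from the text; the combined tagging-and-selection efficiency below is an invented round number, purely for illustration.

```python
# Rough counting sketch for a rare missing-energy decay search.
n_charged_b = 457_000_000   # charged B mesons recorded (from the text)
branching = 1.0e-4          # ~1 in 10,000 charged B decays (from the text)
efficiency = 2.0e-3         # hypothetical combined tagging/selection efficiency

expected_signal = n_charged_b * branching * efficiency
```

With an efficiency at the per-mille level, the expected signal is only of order a hundred events out of nearly half a billion B mesons.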

Based on the events that they have found, the Belle team reports a preliminary value of fB = 176 +28/−23 (stat.) +20/−19 (syst.) MeV, which is compatible with the most recent calculations in unquenched lattice QCD. Conversely, if fB is taken from lattice QCD, the Belle measurement of B → τν gives a tight constraint on charged-Higgs masses at high tanβ in extensions of the Standard Model, where tanβ is the ratio of vacuum expectation values.

This breakthrough in the detection of a rare missing-energy decay is the first step towards the observation of exotic decays such as B → Kνν̄, B → dark matter, and other possible types of unusual and new physics processes. Although the experimental technique for observing a rare missing-energy mode has now been established, about two orders of magnitude more BB̄ pairs are probably needed to find B → Kνν̄ and exploit all of the possibilities this technique has to offer. This will be possible at the proposed KEK Super B-Factory facility.

HAPPEx shows the proton is not so strange

Two more rounds of data taken by the Hall A Proton Parity Experiment (HAPPEx) at the US Department of Energy’s Jefferson Lab have provided the most precise constraint yet on nucleon strangeness. The result, presented at the American Physical Society April meeting in Dallas, reveals that the strange-quark contribution to the proton’s overall charge distribution and magnetic moment is small. It amounts to no more than 1% of the proton’s charge radius and no more than 4% of its magnetic moment – and in both cases, the final value could be consistent with zero.

It may seem unusual that strange quarks should be important in determining the properties of the proton as, unlike up and down quarks, they are not thought of as permanent residents of the proton. However, the strange quark may appear as part of the proton’s quark-gluon sea, the seething mass of particles that constantly blink in and out of existence thanks to the energy of the strong force.

A useful method of accessing strange quarks is through parity-violating electron scattering, in which the interference of the electromagnetic force and neutral weak force is measured by scattering a beam of polarized electrons off target particles. Since the electromagnetic force is parity-symmetric, while the weak force is not, a longitudinally polarized electron beam allows experimenters to separate the electromagnetic and weak components, and by comparing their strengths they can disentangle the contributions of the up, down and strange quarks.
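The quantity actually recorded in such a measurement is a tiny counting asymmetry between the two beam-helicity states. A minimal sketch of the bare counting definition (the function name and yields are illustrative, not HAPPEx's):

```python
def raw_asymmetry(n_right, n_left):
    # Parity-violating asymmetry A = (N_R - N_L) / (N_R + N_L), built from
    # yields recorded with right- and left-handed beam helicity. A real
    # analysis also corrects for beam polarization and systematic effects.
    return (n_right - n_left) / (n_right + n_left)

# Illustrative yields giving a part-per-million-level asymmetry.
a = raw_asymmetry(1_000_001, 999_999)
```

Asymmetries at this level are why the experiment needs rapid helicity flipping and a blinded analysis: any slow drift in beam properties would otherwise swamp the signal.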

The HAPPEx Collaboration measured a combination of strange-quark contributions to the charge distribution and magnetization of the proton, which are represented via GsE and GsM, the strange electric and magnetic form factors, respectively. To disentangle the two form factors, the collaboration took data on two different targets: hydrogen and helium (4He). 4He has no net spin and hence no magnetic moment, and so allowed the researchers to isolate GsE.

HAPPEx took data on both targets during 2005, using a longitudinally polarized 3 GeV electron beam from Jefferson Lab’s Continuous Electron Beam Accelerator Facility. A gallium arsenide superlattice photocathode provided an average beam polarization of 86% with rapidly flipping helicity. The beam was sent into a 20-cm long cryogenic aluminum target vessel containing either hydrogen or 4He in Jefferson Lab’s Hall A. Septum magnets then deflected elastically scattered electrons, which were at a forward angle of 6°, to the Hall A High Resolution Spectrometers (HRS), located at 12.5°.

The HRS allowed a very clean separation of elastic events, with an average momentum transfer squared of Q² = 0.1 (GeV/c)². A Cherenkov electromagnetic shower calorimeter covered the distribution of elastic events in the spectrometer focal plane. The signal was integrated over each period of constant helicity. A blinding factor was placed on the data and removed only a week before the result was presented in Dallas.

The HAPPEx results indicate small values for the strange form factors: GsM = 0.12 ± 0.24 and GsE = −0.002 ± 0.017. While these results are consistent with previous results from HAPPEx (Aniol et al. 2006) and with world data, they show that the large values and radical Q² dependence of the strange form factors suggested by previous data in this kinematic region are highly unlikely. Also, while these new data are accurate enough to eliminate many models of strangeness content, they do not rule out sizable contributions at higher Q². They are also compatible with a new analysis of world data, the result of which is in excellent agreement with modern calculations based on non-perturbative quantum chromodynamics using lattice methods and chiral extrapolation (Young et al. 2006).

FLASH produces the shortest wavelength yet

On 26 April, the vacuum-ultraviolet and soft X-ray free-electron laser (FEL) facility at DESY generated pulses at the shortest wavelength yet, using electron bunches supplied by the TESLA Test Facility (TTF) linac. The facility had already produced the shortest wavelengths achieved with an FEL, with pulses at 32 nm. Now it has set a new record with a wavelength of only 13.1 nm.

Equipped with five superconducting accelerator modules, the TTF linac can accelerate electron bunches to an energy of 700 MeV. This is sufficient for the bunches to emit laser pulses at 13.1 nm as they subsequently traverse the undulator. A sixth module, to be installed in 2007, will allow a further increase in energy to 1 GeV, making it possible to generate wavelengths as low as 6 nm. The pulses produced are shorter than 50 fs, leading appropriately to the new name for the facility, FLASH, which was chosen to be simpler and more attractive than VUV-FEL.

After a successful first data-taking run that ended in February, on 8 May the newly named FLASH began once again to serve its users for a second measuring period.

Neutrinos provide new route to heavy elements in supernovae

During their long lifetimes, stars generate their energy by nuclear fusion in their interiors, which are generally accepted to be the breeding grounds for carbon and heavier elements. The heaviest elements made this way are iron and nickel; heavier elements are thought to be built by slow and/or rapid neutron-capture reactions, the s- and r-processes. Although these mechanisms for nucleosynthesis have been known for some time, the abundances of some heavy elements have remained a mystery. Now Carla Fröhlich of the Universität Basel and Gabriel Martínez-Pinedo of the Gesellschaft für Schwerionenforschung, Darmstadt, and colleagues have proposed a novel nucleosynthesis process that might solve these puzzles.

When a massive star forms a supernova, part of the matter in the stellar interior forms a neutron star, and the liberated energy, mainly in the form of neutrinos, contributes to the ejection of the stellar mantle into the interstellar medium. The temperature of the deepest ejected layers is so hot that nuclei are decomposed into free protons and neutrons. The tremendous flux of neutrinos and antineutrinos, which accompanies the birth of the neutron star, can be absorbed by the nucleons and so determines the relative abundance of protons and neutrons and hence the composition of the nuclei that form when the ejected matter reaches cooler regions.

During the later stages of the explosion the matter is expected to become rich in neutrons, so supernovae are believed to be the site of heavy-element production by the r-process. However, it has been realized very recently that during the first second of the explosion the ejected material is rich in protons.

When Fröhlich and colleagues studied the nucleosynthesis in this proton-rich environment they discovered possible solutions to two long-standing problems. First, they could reproduce the abundances of elements such as scandium, copper and zinc, for which calculations had previously fallen notoriously short. More surprisingly, they also noticed the appearance of heavier elements such as strontium, molybdenum, ruthenium and beyond (C Fröhlich et al. 2006).

This heavy-element production can be attributed to a novel nucleosynthesis process, which Fröhlich and colleagues named the νp process after the two main contributors: proton capture, which transports matter sequentially to higher charges, and (anti)neutrinos, which are captured by free protons and so change the protons to neutrons. This presence of neutrons allows the flow in element creation to circumvent long-lived nuclei such as 56Ni and 64Ge, so enabling the synthesis of heavier elements.

The νp process is a primary process, that is, it should occur in all core-collapse supernovae. As a consequence there should already be fingerprints of νp nucleosynthesis in the earliest and most primitive stars. Indeed, finding strontium in the most metal-poor, and hence oldest, star observed so far came as a big surprise last year. This might now be explained as debris from the νp process that had operated in an earlier supernova. Further observations of elemental abundances in metal-poor stars combined with progress in supernova modelling and improved knowledge of the nuclei involved – as expected from future facilities such as the Facility for Antiproton and Ion Research – will help to disentangle the importance of the νp process for the abundances of the elements in the universe.

CDF measures matter-antimatter B0s transition

The CDF collaboration at Fermilab has announced the precision measurement of the matter-antimatter transitions for the B0s meson, which consists of a bottom quark bound to a strange anti-quark. The announcement came less than a month after the news that the D0 collaboration had measured the first upper and lower bounds on the oscillation frequency.

In a seminar at Fermilab on 10 April, the CDF Collaboration reported on their analysis of 1 fb-1 of proton-antiproton collision data acquired by the CDF detector between February 2002 and January 2006, during Tevatron Run II. Within the 700-member CDF Collaboration – from 61 institutions and 13 countries – a team of 80 researchers from 27 institutions performed the data analysis leading to the precision measurement just one month after the data-taking was completed.

The team used semileptonic and hadronic decays of the B0s and found a signature consistent with B0s–B̄0s oscillations, with a probability of only 0.5% that the data could randomly fluctuate to mimic such a signature. The analysis yielded a preliminary result for the B0s–B̄0s oscillation frequency, Δms, of 17.33 +0.42/−0.21 (stat.) ± 0.07 (syst.) ps⁻¹ – in agreement with D0’s bounds of 17 < Δms < 21 ps⁻¹. The CDF Collaboration also derived a value for the ratio of the related parameters of the Cabibbo-Kobayashi-Maskawa matrix, |Vtd|/|Vts| = 0.208 +0.008/−0.007 (stat.+syst.).
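To get a feel for how fast this oscillation is, the measured frequency can be turned into a period with one line of arithmetic. The mixing-probability function below is the textbook two-state form with decay and CP violation ignored; only the Δms central value is taken from the result.

```python
import math

DELTA_MS = 17.33  # ps^-1, preliminary central value of the measurement

def mixed_probability(t_ps):
    # Probability that a B0s produced at proper time 0 is found as an
    # anti-B0s at time t, in the idealized two-state picture (decay
    # factored out, CP violation in mixing neglected).
    return 0.5 * (1.0 - math.cos(DELTA_MS * t_ps))

period_ps = 2.0 * math.pi / DELTA_MS  # one full matter-antimatter cycle
```

The period comes out at about 0.36 ps, well below the B0s lifetime of roughly 1.5 ps, which is why resolving the oscillation demanded such precise decay-time reconstruction.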

This precision measurement from CDF will immediately be interpreted within different theoretical models, in particular in the context of supersymmetry. General versions of supersymmetry predict an even faster transition rate than has been measured, so some of those theories can be ruled out based upon this result. More information will come from combining precise measurements of B0s-Bbar0s oscillations and searching for the rare decay of B0s mesons into muon pairs. Both the D0 and CDF experiments expect to achieve improved results in these areas in the near future.

Highly ionized uranium produces best test of theory

Researchers at the Lawrence Livermore National Laboratory (LLNL) in California have made the most precise test so far of quantum electrodynamics (QED). In studies of highly ionized, lithium-like uranium, they have measured the two-loop Lamb shift for the first time (Beiersdorfer et al. 2005).

QED is a well-established theory that describes at the quantum level all phenomena involving the electromagnetic force. Its extremely accurate predictions have been tested by various experiments, including measurements of the tiny shift in the energy levels of hydrogen, discovered by Willis Lamb in 1947 and caused by the self-interaction of the electron. Tests of so-called one-loop QED (self-energy and vacuum polarization) confirmed the theoretical predictions with high precision, and theorists and experimentalists are now looking to evaluate higher-order QED processes.

Highly charged ions offer an opportunity for high-accuracy calculations of atomic properties within QED, in that they provide a strong-field environment and relatively simple spectra. These conditions allow high-precision measurements of some transitions. Moreover, measurements of lithium-like systems such as U89+ are more sensitive to higher-order QED terms than those of hydrogen-like systems.

Using the SuperEBIT high-energy electron-beam ion trap at LLNL, the researchers measured the 2s(1/2)-2p(1/2) transition in U89+ with an accuracy that is nearly an order of magnitude better than previously available. The team monitored X-ray emission from the ions with a high-purity germanium detector, and used a spectrometer specifically developed for this experiment for spectroscopy at extreme ultraviolet wavelengths.

The results allow the researchers to infer a two-loop Lamb shift in lithium-like U89+ of 0.20 eV. This is also in excellent agreement with the recent calculation of the two-loop Lamb shift for the 1s level in hydrogen-like U91+.
