High-speed networking over the Internet is becoming increasingly important as CERN and other laboratories around the world gear up for the Grid. Olivier Martin takes a look at the evolution towards CERN’s current involvement in long-distance data-transfer records.
On five separate occasions during 2003, a team led by Harvey Newman of Caltech and Olivier Martin of CERN established new records for long-distance data transfer, earning a place for these renowned academic institutions in the Guinness Book of Records. This year, new records are expected to be set as the performance of single-stream TCP (Transmission Control Protocol) is pushed closer towards 10 Gbps (gigabits per second). In 1980 “high speed” meant data transfers of 9.6 kbps (kilobits per second) over analogue transmission lines, so the achievement of 10 Gbps in 2004 corresponds to an increase by a factor of 1 million in about 25 years. This advance is even more impressive than the classic “Moore’s law” of computer processing, in which the number of transistors per integrated circuit (and hence, roughly, the processing power) grows exponentially, doubling every 18 months – a factor of about 1000 every 15 years.
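As a back-of-the-envelope check, the two growth factors quoted above can be reproduced from the article’s round numbers (an illustrative sketch only):

```python
# Growth factors quoted in the text, from the article's round figures.

kbps_1980 = 9.6e3   # 9.6 kbps analogue line, 1980
gbps_2004 = 10e9    # ~10 Gbps single-stream TCP, the 2004 target

network_factor = gbps_2004 / kbps_1980
print(f"network speed-up: ~{network_factor:.1e}")  # ~1.0e6, a factor of a million

# Moore's law: transistor counts double every 18 months
moore_factor = 2 ** (15 * 12 / 18)  # growth over 15 years
print(f"Moore's law over 15 years: ~{moore_factor:.0f}x")  # ~1024, i.e. roughly 1000
```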
While chasing such records may sound like an irrelevant game, the underlying goal is of great importance for the future of data-intensive computing Grids. In particular, for CERN and all the physicists across the world working on experiments at the Large Hadron Collider (LHC), the LHC Computing Grid will depend critically on sustainable multi-gigabit per second throughput between different sites. The evolution of such long-distance computing capabilities has been an important part of CERN’s development as a laboratory, not only for European users but also for those across the globe.
The early days
Computer networks have been of increasing importance at CERN since the early 1970s, when the first links were set up between experiments and the computer centre. The first external links, for example to the Rutherford Laboratory in the UK, were established only during the late 1970s and served very limited purposes, such as remote job submission and output file retrieval. Then from 1984 onwards, together with the EARN/BITnet and UUCP mail-network initiatives, there was an extraordinary development in electronic mail. However, it was only in the late 1980s that the foundations for today’s high-speed networks were truly laid down. Indeed, the first international 2 Mbps (megabits per second) link was installed by INFN during the summer of 1989, just in time for the start-up of CERN’s Large Electron Positron collider. There was still no Europe-wide consensus on a common protocol, however, and as a consequence multiple backbones had to be maintained, e.g. DECnet, SNA, X.25 and TCP/IP (Transmission Control Protocol/Internet Protocol).
Back in late 1988, the National Science Foundation (NSF) in the US made an all-important choice when it established NSFnet, the first TCP/IP-based nationwide 1.5 Mbps backbone. This was initially used to connect the NSF-sponsored Supercomputer Centers and was later extended to serve regional networks, which in turn connected universities. NSFnet, from which both the academic and the commercial Internet grew, served as the backbone of the emerging Internet until its shutdown in 1995.
In 1990 CERN picked up on this development – not without courage – and together with IBM and other academic partners in Europe developed EASInet (European Academic Supercomputer Initiative Network), a multi-protocol backbone that took account of Europe’s networking idiosyncrasies. EASInet, which also provided a 2 Mbps TCP/IP backbone to European researchers, was linked to NSFnet at 1.5 Mbps through Cornell University and, together with EBONE, was at the origin of the European Internet. These developments established TCP/IP as the major protocol for Internet backbones around the world.
The Internet2 land-speed records
In 2000, to stimulate continuing research and experimentation in TCP transfers, the Internet2 project, a consortium of approximately 200 US universities working in partnership with industry and government, created a contest – the Internet2 land-speed record (I2LSR). This involves sending data across long distances by “terrestrial” means – that is, over underground and undersea fibre-optic networks rather than by satellite – using both the current Internet Protocol standard, IPv4, and its next-generation successor, IPv6. The unit of measurement for the contest is bit-metres per second – a wise and fair choice, as the difficulty of achieving high throughput with standard TCP installations, e.g. on Linux, is indeed proportional to the distance.
In 2003 CERN and its partners were involved in several record-breaking feats. On 27-28 February a team from Caltech, CERN, LANL and SLAC entered the science and technology section of the Guinness Book of Records when they set an IPv4 record with a single 2.38 Gbps stream over a 10,000 km path between Geneva and Sunnyvale, California, by way of Chicago. Less than three months later, a new IPv6 record was established on 6 May by a team from Caltech and CERN, with a single 983 Mbps stream over 7067 km between Geneva and Chicago.
However, thanks to the 10 Gbps DataTAG circuit (see “DataTAG” box), which became available in September 2003, new IPv4 and IPv6 records were established only a few months later, first between Geneva and Chicago, and then between Geneva and sites in California and Arizona. On 1 October a team from Caltech and CERN achieved the remarkable result of 38.42 petabit-metres per second with a single 5.44 Gbps stream over the 7073 km path between Geneva and Chicago. This corresponds to transferring 1.1 terabytes of physics data in less than 30 minutes, or sending a full-length DVD to Los Angeles in about 7 seconds.
Then in November a longer 10 Gbps path to Los Angeles, California and Phoenix, Arizona, became available through Abilene, the US universities’ backbone, and CALREN, the California Research and Education Network. This allowed the IPv4 and IPv6 records to be broken yet again on 6 November, achieving 5.64 Gbps with IPv4 over a path of 10,949 km between CERN and Los Angeles, i.e. 61.7 petabit-metres per second. Five days later, a transfer at 4 Gbps with IPv6 over 11,539 km between CERN and Phoenix through Chicago and Los Angeles established a record of 46.15 petabit-metres per second.
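The quoted figures can be reproduced directly, since the contest metric is simply throughput multiplied by terrestrial path length. A minimal Python sketch, using the rates and distances given in the text (small differences from the quoted records reflect the precisely measured throughputs; the DVD example assumes a ~4.7 GB single-layer disc):

```python
def petabit_metres_per_second(gbps: float, km: float) -> float:
    """Contest metric: throughput (Gbps) times distance (km),
    expressed in petabit-metres per second (1 Pb = 1e15 bits)."""
    return (gbps * 1e9) * (km * 1e3) / 1e15

# 1 October 2003: 5.44 Gbps over 7073 km (Geneva-Chicago)
print(petabit_metres_per_second(5.44, 7073))   # ~38.5 (record quoted as 38.42)

# 6 November 2003: 5.64 Gbps over 10,949 km (CERN-Los Angeles, IPv4)
print(petabit_metres_per_second(5.64, 10949))  # ~61.8 (quoted as 61.7)

# 11 November 2003: 4 Gbps over 11,539 km (CERN-Phoenix, IPv6)
print(petabit_metres_per_second(4.0, 11539))   # ~46.2 (quoted as 46.15)

# Sanity check of the DVD example: ~4.7 GB (assumed capacity) at 5.44 Gbps
dvd_seconds = 4.7e9 * 8 / 5.44e9
print(f"DVD transfer: ~{dvd_seconds:.1f} s")   # ~6.9 s, i.e. about 7 seconds
```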
As with all records, there is still ample room for improvement. With the advent of PCI Express chips, faster processors, improved motherboards and better 10GigE network adapters, there is little doubt that it will be feasible in the near future to push the performance of single-stream TCP transport much closer to 10 Gbps – that is, well above 100 petabit-metres per second.
As Harvey Newman, head of the Caltech team and chair of the ICFA Standing Committee on Inter-Regional Connectivity, has pointed out, these records are a major milestone towards the goal of providing on-demand access to high-energy physics data from around the world, using servers that are affordable to physicists from all regions. Indeed, for the first time in the history of wide-area networking, performance has been limited only by the end systems and not by the network: servers placed side by side achieve the same TCP performance as servers separated by 10,000 km.