

TransLight / StarLight

NSF Cooperative Agreement OCI-0441094 www.startap.net/translight

ANNUAL REPORT February 1, 2005 – January 31, 2006

Submitted February 23, 2006

Thomas A. DeFanti, Maxine Brown, Alan Verlo, Laura Wolf
Electronic Visualization Laboratory
University of Illinois at Chicago
851 S. Morgan St., Room 1120
Chicago, IL 60607-7053
[email protected]

Table of Contents

1. Participants
   1.A. Primary Personnel
   1.B. Other Senior Personnel (Excluding PI and Co-PI)
   1.C. Other Organizations That Have Been Involved as Partners
   1.D. Other Collaborators or Contacts
2. Activities and Findings
   2.A. Research Activities
        2.A.1. Goals and Objectives
        2.A.2. Accomplishments and Milestones
        2.A.3. Infrastructure Status
        2.A.4. NYC/AMS Network Operations and Engineering
        2.A.5. CHI/AMS Network Operations and Engineering
        2.A.6. Project Governance/Management and Oversight
        2.A.7. Meeting and Conference Participation
   2.B. Research Findings
        2.B.1. Lessons Learned
        2.B.2. NSF Interactions
        2.B.3. IRNC Project Interactions and Leveraging of Resources
        2.B.4. International Collaborations: GLIF
        2.B.5. E-Science Support (Quantified Science Drivers)
        2.B.6. Other NSF-Funded Infrastructure Leveraging of Resources
   2.C. Research Training
   2.D. Education/Outreach
3. Publications and Products
   3.A. Journals/Papers
   3.B. Books/Publications
   3.C. Internet Dissemination
   3.D. Other Specific Products
4. Contributions
   4.A. Contributions within Discipline
   4.B. Contributions to Other Disciplines
   4.C. Contributions to Human Resource Development
   4.D. Contributions to Resources for Research and Education
   4.E. Contributions Beyond Science and Engineering
5. Special Requirements
   5.A. Objectives and Scope
   5.B. Special Reporting Requirements
   5.C. Animals, Biohazards, Human Subjects
6. Program Plan
   6.A. Plans and Milestones
   6.B. Proposed Activities
        6.B.1. Capacity Planning and Circuit Upgrades
        6.B.2. Network Operations and Engineering
        6.B.3. Community Support and Satisfaction


1. Participants

1.A. Primary Personnel

Participant’s Name(s)        Project Role(s)               >160 Hours/Yr
Thomas A. DeFanti (1)        Principal Investigator        Yes
Maxine Brown (2)             Co-Principal Investigator     Yes

(1) Tom DeFanti, PI, receives one-month funding and primarily focuses on managing the link procurement process, network engineering, budgets and accounts payable, and interfaces with personnel from Internet2/Abilene and DANTE/GÉANT2. DeFanti also participates in monthly IRNC phone calls and attends meetings as time permits.

(2) Maxine Brown, co-PI, primarily focuses on managing documentation and education and outreach activities, and is responsible for TransLight/StarLight quarterly and annual reports, web pages and events planning. The co-PI also participates in monthly IRNC phone calls and attends meetings as requested.

1.B. Other Senior Personnel (Excluding PI and Co-PI)
Additional people who contributed greatly to the project, and received a salary, wage, stipend or other support from this grant:

Participant’s Name(s)        Project Role(s)               >160 Hours/Yr
Alan Verlo (3)               Professional staff            Yes
Laura Wolf (4)               Professional staff            Yes
Steve Sander (5)             Professional staff            Yes
Pat Hallihan (6)             Professional staff            Yes
Lance Long (7)               Professional staff            Yes
Linda Winkler (8)            Professional staff            Yes
Rick Summerhill (9)          Professional staff            Yes
Roberto Sabatino (10)        Professional staff            Yes
Erik-Jan Bos (11)            Professional staff            Yes
Kees Neggers (12)            Other Senior Personnel        Yes
Joe Mambretti (13)           Other Senior Personnel        Yes

(3) Alan Verlo is the TransLight/StarLight network engineer, and is a member of the StarLight engineering team. For several years, Verlo has also been a member of the SCinet committee, focusing on enabling international SC research demos that have connections in Chicago. He was also co-chair of the iGrid 2005 international cyberinfrastructure team, responsible for clusters and international networking.

(4) Laura Wolf is the TransLight/StarLight technical writer and web developer.

(5) Steve Sander is the TransLight/StarLight budget, accounts payable and equipment procurement person.

(6) Pat Hallihan reports to Alan Verlo and is technical support staff.

(7) Lance Long reports to Alan Verlo and is technical support staff.

(8) Linda Winkler of Argonne National Laboratory, while not compensated by the University of Illinois at Chicago (UIC), serves as part-time StarLight engineer with Alan Verlo, and assists with TransLight/StarLight. For many years, Winkler has been a member of the SCinet committee, focusing on enabling international SC research demos that have connections in Chicago. She was also co-chair of the iGrid 2005 international cyberinfrastructure team, responsible for clusters and international networking.

(9) Rick Summerhill is the Internet2 Director of Network Research, Architecture, and Technologies and, while not compensated by UIC, is one of the stewards of the TransLight/StarLight link that connects Abilene at MAN LAN to GÉANT2 at its POP at the Amsterdam Internet Exchange.

(10) Roberto Sabatino is the DANTE/GÉANT2 chief network engineer and, while not compensated by UIC, is one of the stewards of the TransLight/StarLight link that connects Abilene at MAN LAN to GÉANT2 at its POP at the Amsterdam Internet Exchange.

(11) Erik-Jan Bos is the SURFnet chief network engineer and, while not compensated by UIC, is one of the stewards of the TransLight/StarLight link connecting StarLight in Chicago to NetherLight at the Amsterdam Internet Exchange in Amsterdam.

(12) Kees Neggers is SURFnet Managing Director and a founder and current chair of GLIF. While not compensated by UIC, he does the tenders and procures both TransLight/StarLight links on UIC’s behalf, and is one of the stewards of the TransLight/StarLight link connecting StarLight in Chicago to NetherLight in Amsterdam.


(13) Joe Mambretti is the StarLight managing director and head of the International Center for Advanced Internet Research (iCAIR) at Northwestern University. While not compensated by UIC, Joe has been a strong supporter and advisor regarding our IRNC efforts. Mambretti has assisted with connectivity issues, not only at StarLight, but also at MAN LAN.

1.C. Other Organizations That Have Been Involved as Partners

SURFnet
SURFnet, the national network for research and education in the Netherlands <www.surfnet.nl>, is a TransLight/StarLight “key institutional partner,” responsible for negotiating, procuring and implementing the TransLight OC-192 circuit(s) between Open Exchanges in the USA and in Europe, which UIC pays for upon receipt of an invoice from SURFnet, as has been our practice with our previous NSF HPIIS Euro-Link award.

Mathematics and Computer Science Division, Argonne National Laboratory
Argonne National Laboratory <www.mcs.anl.gov> has been, and continues to be, a strong supporter of US international networking activities. Linda Winkler has facilitated STAR TAP/StarLight engineering since its inception, and is the lead engineer today; her salary comes from ANL.

International Center for Advanced Internet Research (iCAIR), Northwestern University
Joe Mambretti, director of iCAIR <www.icair.org>, also runs the StarLight facility <www.startap.net/starlight>, and is assisting with connectivity issues, not only at StarLight, but also at MAN LAN.

1.D. Other Collaborators or Contacts

ESnet
The Energy Sciences Network (ESnet) <www.es.net> is funded by the DOE Office of Science to provide network and collaboration services in support of the agency's research missions, serving thousands of Department of Energy scientists and collaborators worldwide. ESnet provides direct connections to all major DOE sites with high-performance speeds, as well as fast interconnections to more than 100 other networks. TransLight/StarLight funding makes a 10Gbps link available to Internet2, DOE and DANTE, to connect Abilene and ESnet with the pan-European backbone, GÉANT2.

DANTE
Owned by European NRENs, DANTE <www.dante.net> is an organization that plans, builds and operates pan-European networks for research and education. The GÉANT2 project is a collaboration between 26 National Research & Education Networks representing 30 countries across Europe, the European Commission, and DANTE. Its principal purpose has been to develop the GÉANT2 network, a multi-gigabit pan-European data communications network for research and education; see <www.geant2.net>. TransLight/StarLight funding makes a 10Gbps link available to Internet2, DOE and DANTE, to connect Abilene and ESnet with the pan-European backbone, GÉANT2.

Internet2
Internet2 <www.internet2.edu> is a consortium of leading US research universities working in partnership with industry and government to develop and deploy advanced network applications and technologies. Abilene <http://abilene.internet2.edu> is an Internet2 high-performance backbone network that enables the development of advanced Internet applications and the deployment of leading-edge network services to Internet2 universities and research labs across the country. TransLight/StarLight funding makes a 10Gbps link available to Internet2, DOE and DANTE, to connect Abilene and ESnet with the pan-European backbone, GÉANT2.

National LambdaRail (NLR)
NLR <www.nlr.net> is a major initiative of US research universities and private sector technology companies to provide a national-scale infrastructure for research and experimentation in networking technologies and applications. TransLight/StarLight considers itself, in part, to be the international extension of NLR, and wants to encourage data-intensive e-science drivers needing gigabits of bandwidth to use NLR and international links for schedulable production services not available with “best effort” networks.


2. Activities and Findings

2.A. Research Activities

2.A.1. Goals and Objectives
The NSF International Research Network Connections (IRNC) TransLight/StarLight award is responsible for providing a minimum of OC-192 connectivity between the US and Europe. The goals of the IRNC program in general, and TransLight/StarLight specifically, are to:

• Fund international network links between US and foreign science and engineering communities
• Encourage the use of advanced architectures
• Support advanced science and engineering requirements
• Encourage the development and leveraging of deployed infrastructure to meet current and anticipated needs
• Enable network engineers to engage in system and technology demonstrations and rigorous experimentation

In cooperation with US and European national research and education networks, TransLight/StarLight is implementing a strategy to best serve established production science, including usage by those scientists, engineers and educators who have persistent large-flow, real-time, and other advanced application requirements.

2.A.2. Accomplishments and Milestones
In Year 1, TransLight/StarLight funded two international links, both of which were delivered in July 2005.

• NYC/AMS: OC-192 routed connection between MAN LAN in New York City and NetherLight at the Amsterdam Internet Exchange (AMS-IE) connecting GÉANT2 to the USA Abilene and ESnet networks

• CHI/AMS: OC-192 switched connection between StarLight in Chicago and NetherLight in Amsterdam (also at the AMS-IE) that is part of the GLIF LambdaGrid fabric

Note: The CHI/AMS link was a continuation of the NSF HPIIS Euro-Link connection, and was operational on July 1 when the funded Euro-Link link ended. The NYC/AMS link became operational in July 2005. (See Section 2.A.3.)

We have also worked with our IRNC and TransLight/StarLight partners on various activities, to:

• Support science and engineering research and education applications at major events and activities, including the iGrid 2005 Workshop in San Diego, September 26-30, 2005, and research demonstrations at SC|05 in Seattle, November 12-18, 2005.

• Enable state-of-the-art international network services similar to and interconnected with those currently offered or planned by domestic research networks, including Abilene, National LambdaRail (NLR) and ESnet.

• Share network engineering tools and best practices

2.A.3. Infrastructure Status

Overview and Changes

[Figure: link topology diagram showing MAN LAN, StarLight, NetherLight and the GÉANT PoP @ AMS-IE]


The NYC/AMS TransLight/StarLight link is an L3 connection between GÉANT2 and Abilene and ESnet for both L3 production usage as well as backup usage [1]. The CHI/AMS link is an L1 connection between StarLight and NetherLight that can be configured as 8 1GigEs or 1 10GigE for data-intensive applications requiring lightpaths.

Topology
During the February − April 2005 timeframe, the OC-192 tenders were completed. Vendors were selected for price and path diversity; Tyco (later bought by VSNL) was awarded the NYC/AMS link, and Global Crossing, which previously carried the HPIIS/Euro-Link circuit, was awarded the CHI/AMS link.

NYC/AMS…VSNL International (the new owner of Tyco Global Network and our OC-192 provider) delivered the 10Gbps NYC/AMS link on June 30; however, GÉANT2 was not ready in New York, so testing was postponed until after the July 4th weekend. Subsequently, a problem with cabling at New York’s 32 Avenue of the Americas facility was diagnosed and fixed on July 14 by VSNL and SARA engineers. GÉANT2 waited until after the weekend to bring the connection up, so the link became operational on July 20.

Unfortunately, GÉANT2 did not get delivery of a 10GigE interface for its Juniper M160 router at MAN LAN, so could not connect to the TransLight/StarLight link via the MAN LAN 6513 until mid-August. Instead, GÉANT2 established temporary peering using one of its two GigE connections to New York, maintaining the configuration that had been in place for over a year between Abilene and GÉANT2. In early September, the new interface card arrived and a new BGP session was established.

When the HPIIS Euro-Link connection was in place, most outbound traffic from the US to Europe went over GÉANT’s 2x1GigE connections in New York and Washington DC. Inbound traffic from Europe to the US went primarily to Chicago (Euro-Link) and Washington DC (GÉANT) and then to Abilene. When Chicago (Euro-Link) went away, all points on the Abilene network that were close to New York (as measured in fiber route miles on the backbone circuits) started talking to GÉANT via the New York peering.

Once the TransLight/StarLight link was in place, Abilene engineers reported that it appeared as though GÉANT2 modified its routing policy to move traffic from Europe to peer with Abilene at MAN LAN instead of Washington DC. (Traffic charts showed a sharp drop in Washington DC traffic, from 100Mbps to 25Mbps.) As of September 14, 2005, GÉANT2 moved all its MAN LAN traffic to the new 10 GigE peering and requested that one of its 1GigE connections to the MAN LAN switch be dephased.

CHI/AMS…Global Crossing provides the CHI/AMS link, which is a continuation of the NSF HPIIS Euro-Link setup. Its configuration of one OC-48 (for Abilene peering) and 6x1GE (2 x 1GE for CA*net4 peering) was kept in place until the NYC/AMS link became operational on July 20th. It is matched by a SURFnet-paid OC-192. Global Crossing is the carrier for both, although separate fiber routes have been procured for the two circuits.

We are managing the CHI/AMS and SURFnet circuits as lambdas, tracking GLIF models as they develop. On the IRNC circuit, our express goal is to provide science, engineering, and education production application access to Layer-2 (L2) 1GigE VLANs, 10GigE VLANs, and Layer-1 (L1) OC-192 circuits. As these applications clearly need to be nurtured by our computer scientists and engineers, we are allowed by NSF to encourage the development and testing of new protocols and policies that encourage usage. The challenge is to manage them in a science-friendly, replicable way. Simply put, the condition of use is that the traffic serves science, engineering or education; once an application fully saturates the OC-192 beyond the testing phase, it moves off to its own link. If 10Gbps saturation does not occur (say, only 1-2Gbps are used), we can handle the usage with VLANs. If a dozen such Gbps applications emerge, we will be very pleased. We are working with CANARIE on UCLP and will encourage adoption/evaluation of this technology to achieve GLIF and NSF goals simultaneously on this IRNC link between NetherLight and StarLight.

PoP Connectivity and Peering
Note: The GLIF Technical Engineering Working Group is defining GLIF Open Lambda Exchanges, or GOLEs. MAN LAN, StarLight and NetherLight are all GOLEs. The NYC/AMS and CHI/AMS links are, from an operating perspective, treated similarly, as permanent 10Gbps lightpaths that are either handed off to routers (NYC/AMS) or switches (CHI/AMS).

NYC/AMS…Currently, this OC-192 is connected between GÉANT2 owned-and-operated Juniper M160 routers located at the MAN LAN facility and the NetherLight facility at the AMS-IE (operated by SARA). In New York, the router connects to Abilene’s HDXc; in Amsterdam, the router connects to SURFnet’s HDXc at NetherLight.

[1] Bill Johnston (ESnet) is to talk with Kevin Thompson regarding the appropriate percentage of production and backup usage.


In December 2005, Rick Summerhill met with Roberto Sabatino, DANTE chief network engineer, to discuss re-engineering the NYC/AMS link to plug into switches instead of routers, in support of production IP services and, in the future, potential lightpath services. This second phase will hopefully be completed in May 2006.

CHI/AMS…This OC-192 is connected to Force10s that connect, in turn, to a CANARIE-owned HDXc box at StarLight and a SURFnet-owned HDXc box at NetherLight. The SURFnet OC-192 is also connected to these HDXc boxes. The HDXc installation at StarLight was completed just a few weeks prior to the iGrid 2005 event at the end of September 2005, and the links were heavily used for iGrid and SC events as well as production traffic. Via the Force10, this CHI/AMS link peers with a number of advanced international networks from Canada, Japan, Taiwan, Korea, CERN and the UK, as well as Abilene/HOPI, NLR, ESnet, and regional optical networks.

2.A.4. NYC/AMS Network Operations and Engineering

Usage
Internet2 makes MRTG traffic statistics of Abilene/GÉANT2 peering at MAN LAN available at: http://stryper.uits.iu.edu/abilene/summary.cgi?network=nycm-geant-newyork&data=bits

GÉANT2 statistics are password protected, but we will try to arrange open access for publication on the TransLight/StarLight website. GÉANT2 statistics are located at: http://stats.geant.net/cgi-bin/cricket-1.0.2/grapher.cgi?target=%2Fbandwidth%2Fny1.ny.geant.net%2Fso-7_0_0;list-single-target=so-7_0_0;ranges=d%3Aw;view=Aggregate

[Charts: Internet2’s MRTG monthly bandwidth utilization chart (left) and GÉANT2’s Cricket bandwidth utilization chart (right).]

Routing Policies
The NYC/AMS link is a routed, L3 connection providing connectivity between GÉANT2 in Europe and Internet2’s Abilene network, DoE’s ESnet and CANARIE’s CA*net4 at the MAN LAN exchange point. While other links between Abilene and GÉANT2 exist, this link is preferred for traffic between Abilene and GÉANT2.

The link itself was tendered and is managed by SURFnet. It is a 10Gbps lambda implemented between MAN LAN and NetherLight. One lightpath of 10Gbps is permanently allocated over this lambda, handed off to GÉANT2 routers owned and operated by DANTE of Cambridge, UK.

Peering Policies
Internet2’s Abilene network and the GÉANT2 network follow their established peering policies with respect to accessing and transiting traffic that might flow over this link. Abilene peers with networks listed at <http://abilene.internet2.edu/peernetworks/>. Abilene, by default, transits traffic between all non-US peers. GÉANT2 interconnects participant networks, listed at <http://www.geant.net/server/show/nav.121>, and connects with other research and education networks outside the European region, such as EUMEDCONNECT, TEIN2, CLARA, TENET, and others. GÉANT2 provides Abilene not only with access to all of its participant networks, but also with transit to these other GÉANT2 peer networks. Internet2 and GÉANT2 will continue to work together to address routing issues brought about by the increasing interconnectivity between all regional/continental-scale networks and the transit being provided across them.

Measurement and Performance Tools
For almost two years, Internet2 and GÉANT2 have been jointly developing a network performance, measurement and monitoring infrastructure for deployment across the Abilene and GÉANT2 networks (as well as their connector networks), in order to provide both network engineers and end users access to information about the performance of the networks. The perfSONAR project is the result of this collaboration (and the successor name to the Internet2-initiated piPEs project). Currently, Abilene has deployed this measurement infrastructure as part of its Abilene Observatory project, and deployments are beginning across the GÉANT2 and connector National Research & Education Network infrastructures in Europe.


Security
Internet2 and GÉANT2 work in close coordination on issues related to network operations security. The NYC/AMS link is just one of many monitored by the NOCs of the two networks. The Abilene NOC and GÉANT2 NOC have methods for contacting one another in response to observed incidents. The Abilene NOC and REN-ISAC monitor the Abilene network in the US, including connector and peering connections. As part of semi-annual operational and technical direction coordination meetings, Internet2 and GÉANT2 (along with ESnet and CANARIE) are planning an operational security exercise designed to further highlight areas where communications methods, security incident tools and procedures could be improved.

GÉANT2 security information is at <http://www.geant.net/server/show/nav.599>. Abilene security information is at <http://abilene.internet2.edu/security/>.

It should be noted that both the carrier, VSNL International, and the MAN LAN exchange facility maintain robust procedures to secure their cables and connections.

NOC Operations
NOC operations for Abilene are handled by the Abilene NOC based at Indiana University: <http://abilene.internet2.edu/NOC/>.

NOC operations for the MAN LAN facility (through which Abilene and GÉANT peer in New York) are also handled by the Global NOC at Indiana University: <http://globalnoc.iu.edu/>.

NOC operations for GÉANT are handled by the GÉANT NOC. Information about GÉANT2 operations can be found at <http://www.geant2.net/server/show/nav.759>.

2.A.5. CHI/AMS Network Operations and Engineering

Usage
Multiple lightpaths, varying over time, run over this lambda. Several semi-permanent lightpaths supporting research projects − with bandwidths ranging from 150Mbps to 1Gbps − have made use of this link throughout 2005. The IRNC link has also proven popular as a vehicle for lightpaths required for shorter periods; for example, during the iGrid and Supercomputing events, when usage of the link had to be scheduled.
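To give a sense of the capacity bookkeeping such scheduling implies, here is a minimal, hypothetical sketch; the actual reservations were coordinated by the StarLight and NetherLight engineering teams, not by any such script, and the project names and dates below are made up for illustration.

    # Minimal sketch of tracking lightpath reservations against a 10Gbps lambda.
    # Hypothetical illustration only; actual scheduling was coordinated manually.
    LAMBDA_CAPACITY_MBPS = 10000  # OC-192 payload carried as 1x10GigE or 8x1GigE

    reservations = []  # (project, start_day, end_day, mbps)

    def fits(start, end, mbps):
        """True if a new lightpath fits alongside all overlapping reservations."""
        for day in range(start, end + 1):
            used = sum(r[3] for r in reservations if r[1] <= day <= r[2])
            if used + mbps > LAMBDA_CAPACITY_MBPS:
                return False
        return True

    def reserve(project, start, end, mbps):
        if fits(start, end, mbps):
            reservations.append((project, start, end, mbps))
            return True
        return False

    reserve("semi-permanent research lightpath", 1, 365, 1000)  # e.g., 1Gbps all year
    print(reserve("full-rate demo", 269, 273, 10000))           # False: lambda already partly in use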

While L1 links cannot be monitored for usage, the CHI/AMS link goes from the NetherLight HDXc into the University of Amsterdam’s (UvA’s) Force10, for which Cricket usage diagrams are available: <http://trafficlight.uva.netherlight.nl/cricket/grapher.cgi?target=%2Ff10%2Ften_giga_eth_4_1;view=Octets>.

Similarly, the CHI/AMS link goes from the StarLight HDXc into the StarLight Force10, for which MRTG usage diagrams are available: <http://starlsd.sl.startap.net/mrtg/206.220.241.244_tengigabitethernet_1_1.html>. Note: At the time of this report, the “yearly” chart had incorrect data, implying that the MRTG system had errors and its counters were affected, so the chart is not included here. However, comparisons made only a few days ago by a UvA engineer showed the UvA and StarLight Force10 charts to be in agreement. StarLight engineers are investigating.
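For reference, MRTG and Cricket derive these charts from periodic SNMP octet-counter samples on the switch interfaces, and a counter that wraps or resets between polls produces exactly the kind of bad data seen in the “yearly” chart. A minimal sketch of the underlying calculation (hypothetical sample values; 64-bit counters assumed):

    # How an MRTG/Cricket-style tool turns two interface octet-counter samples
    # into a utilization figure. Hypothetical numbers; 64-bit counters assumed.
    def utilization_mbps(octets_prev, octets_now, interval_s, counter_bits=64):
        delta = octets_now - octets_prev
        if delta < 0:                    # counter wrapped; a reset instead yields a bogus spike
            delta += 2 ** counter_bits
        return delta * 8 / interval_s / 1e6

    # Two samples taken 300 seconds apart on a 10GigE port:
    print(utilization_mbps(1234000000000, 1264000000000, 300))  # 800.0 (Mbps)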

[Chart: UvA’s Cricket bandwidth utilization chart of the CHI/AMS TransLight/StarLight link.]


We maintain that meaningful lambda usage measurements are those taken by the applications themselves. See Research Findings (Section 2.B.6) on performance measurements obtained by a data-file-transfer application as part of our NSF-funded OptIPuter efforts. We compared bandwidth performance between Chicago and San Diego using the 10GigE CAVEwave switched network and a TeraGrid 10Gbps SONET routed network and achieved >9Gbps data transfers. We would like to duplicate these tests over the IRNC links to University of Amsterdam (via MAN LAN and StarLight).
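By “measurements taken by the applications themselves” we mean goodput timed around the transfer the application actually performs, rather than figures read off link counters. A minimal, hypothetical sketch of such a measurement follows; the stand-in transfer below just sleeps, whereas the OptIPuter tests used a real data-file-transfer application.

    # Application-level goodput: time the transfer the application performs.
    # The transfer function below is a stand-in; real tests move a real file.
    import time

    def measure_goodput_gbps(transfer_fn, nbytes):
        start = time.time()
        transfer_fn()                          # e.g., move nbytes over the lightpath
        elapsed = time.time() - start
        return nbytes * 8 / elapsed / 1e9      # Gbps delivered to the application

    # Stand-in: pretend a 10GB file took 9 seconds end to end.
    print(round(measure_goodput_gbps(lambda: time.sleep(9.0), 10 * 1024**3), 2))  # ~9.54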

Routing Policies
The CHI/AMS link is a 10Gbps lambda implemented between StarLight and NetherLight. Since no IP routers are on the lambda, there are no routing policies to report.

Peering Policies
Lightpaths are L1 point-to-point connections, so traditional peering policies don’t apply. Instead, peering is based on the GLIF principle that resources are shared among collaborating participants; the owner of any resource decides on its use.

Measurement and Performance Tools
Lightpaths are L1 transmission pipes that cannot be measured optically; monitoring and measuring can only be done at L2 and above. The CHI/AMS link endpoints are currently monitored on the L2/L3 Force10 switches at StarLight and the University of Amsterdam; see the Usage charts above.

NetherLight is developing reporting tools for lightpaths cross-connected at NetherLight, but these tools are not yet available.

Alan Verlo attended the E2Epi Network Performance Workshop at the Winter 2006 ESCC/Internet2 Joint Techs Workshop (February 2006); the Workshop covered the basics of the Internet2-sponsored performance utilities NDT, OWAMP and BWCTL. Internet2 has these servers at each of its Abilene POPs so that tests can be run between various segments of Abilene for troubleshooting problems.

• NDT (Network Diagnostic Tool by Rich Carlson) enables measuring network performance directly to a user’s desktop PC and diagnosing typical network problems.

• OWAMP (One Way Active Measurement Protocol by Jeff Boote) enables latency measurements in one direction on a network, which aids in isolating where congestion is occurring on the network.

• BWCTL (Bandwidth Test Controller by Jeff Boote) implements a resource allocation and scheduling wrapper for the Iperf bandwidth test utility, and enables both regularly scheduled and on-demand tests to coexist on the same server without colliding with each other.

StarLight has currently deployed an NDT server (ndt.sl.startap.net) and an Iperf server (iperf.sl.startap.net). Both servers are connected to the routed R&E networks as well as to the CAVEwave lambda network and the CHI/AMS TransLight/StarLight link. In the future, StarLight plans to add BWCTL functionality to the existing Iperf server, as well as to deploy an OWAMP server.
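As an illustration only, a test against the StarLight Iperf server could be scripted roughly as follows, assuming the standard iperf (v2) client is installed on a host that has a path to iperf.sl.startap.net; the flags shown are ordinary iperf client options, and duration and stream count would be adjusted to the test at hand.

    # Sketch: run the standard iperf (v2) client against the StarLight Iperf
    # server and return its text report. Assumes iperf is installed locally.
    import subprocess

    def run_iperf(server="iperf.sl.startap.net", seconds=10, streams=4):
        cmd = ["iperf", "-c", server, "-t", str(seconds), "-P", str(streams)]
        result = subprocess.run(cmd, capture_output=True, text=True, check=False)
        return result.stdout

    if __name__ == "__main__":
        print(run_iperf())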

Security
In general, lambda networking is considered a more secure way of networking because, in principle, shared packet devices are not located in the path, but only at the very edges.

It should also be noted that both the carrier, Global Crossing, and the NetherLight facility, located at the Amsterdam Internet Exchange in the SARA Computing and Networking Services building on the Science Park Amsterdam campus, maintain robust procedures to secure their cables and connections.

NOC Operations
For NetherLight NOC operations, SURFnet subcontracts to the NOC Alliance, a consortium of Telindus <www.telindus.nl> and SARA. Active monitoring of NetherLight and its links is done on a 24x7 basis from the highly secured International Service Center of Telindus in Belgium. NOC operations for StarLight are subcontracted to Argonne National Laboratory; see <http://www.startap.net/starlight/ENGINEERING/network_operations.html>.


2.A.6. Project Governance/Management and Oversight

TransLight/StarLight’s governing structure shall evolve during the award period to support all critical or significant project activities. Initially, the structure is very simple and based on mutual cooperation among related groups.

Tom DeFanti is principal investigator and project director of TransLight/StarLight, and serves as the primary point of contact with our NSF program officer. DeFanti is also steward, with Kees Neggers of SURFnet, of the CHI/AMS link, and has appointed Doug Van Houweling of Internet2 and Dai Davies of DANTE as stewards of the NYC/AMS link. StarLight and NetherLight (SURFnet) provide network engineering and operations support for the CHI/AMS link; Abilene/GÉANT2 provide support for the NYC/AMS link. In year 2, TransLight/StarLight will work more closely with NLR, as the CHI/AMS link is an international extension of NLR.

On behalf of TransLight/StarLight, SURFnet negotiates and procures the OC-192 NYC/AMS and CHI/AMS links. SURFnet is a key institutional partner of this IRNC award. Tom DeFanti has been working with Kees Neggers and SURFnet since the beginning of the NSF HPIIS program, and UIC has had long-standing procedures in place to pay invoices from SURFnet for transoceanic connectivity without charging any overhead to the grant.

DeFanti, in addition to overseeing the annual tendering, payment and installation of the links, is responsible for assuring project annual milestones are met; coordinating all ongoing project management and oversight activities with the NSF; serving as the day-to-day project manager; and, serving as a member of the IRNC Program Management Group (of IRNC PIs).

Maxine Brown is co-principal investigator of TransLight/StarLight and is responsible for all documentation, including quarterly and annual reports and web-based materials. Brown has also, in the past year, done more outreach, and has given invited presentations at several meetings and conferences about TransLight/StarLight efforts. Editorial writer Laura Wolf assists Brown with writing and web development, and coordinates meetings, visits, and participation at major conferences.

Alan Verlo is the TransLight/StarLight network engineer and a member of the StarLight engineering team, so is involved in all network engineering and operations support. For several years, Verlo has also been a member of the SCinet committee, focusing on enabling international SC research demos that have connections in Chicago.

2.A.7. Meeting and Conference Participation
To date, TransLight/StarLight principals have participated in several meetings and conferences to promote IRNC activities. Major activities are listed here.

February 8-9, 2006. Alan Verlo and Linda Winkler attended interim meetings of the GLIF Tech and Control Plane Working Groups <http://www.glif.is/meetings/2006-interim/>.

February 2-8, 2006. Alan Verlo and Linda Winkler attended the Winter 2006 ESCC/Internet2 Joint Techs Workshop, Albuquerque, New Mexico <http://jointtechs.es.net/newmexico2006>. They also attended the JET meeting (February 7). Verlo attended the E2Epi Network Performance Workshop (February 4).


February 3, 2006. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

February 2-3, 2006. Jason Leigh of UIC/EVL attended PFLDnet 2006 (the 4th International Workshop on Protocols for Fast Long-Distance Networks) in Nara, Japan, to present the paper “A Case for UDP Offload Engines in LambdaGrids.” <http://www.evl.uic.edu/core.php?mod=4&type=3&indi=285>

January 22-26, 2006. Maxine Brown attended and participated in the 21st APAN meeting in Tokyo, giving the presentation “TransLight/StarLight and the OptIPuter.” This was part of a one-day session organized by John Silvester, PI of TransLight/Pacific Wave, on Next Generation Applications over Next Generation Exchanges.

January 17-20, 2006. Tom DeFanti, Maxine Brown, Alan Verlo, Laura Wolf, Joe Mambretti and Linda Winkler, as well as Cees de Laat (University of Amsterdam) and Bram Stolk and Ronald van der Pol (SARA, in Amsterdam), attended the annual OptIPuter All Hands Meeting and Backplane Workshop at UCSD, San Diego, California. <http://www.optiputer.net/events/presentation_temp.php?id=24>

January 6, 2006. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

December 12-13, 2005. Alan Verlo attended an NSF Cybersecurity Workshop in Washington DC.

December 12, 2005. Dr. Velikhov, a GLORIAD partner, visited Tom DeFanti and Larry Smarr at UCSD, San Diego, California, at the request of Natasha Bulashova of GLORIAD.

November 16, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting, held during SC|05.

November 14-19, 2005. Maxine Brown and Joe Mambretti attended SC|05 in Seattle; Rick Summerhill, Alan Verlo and Linda Winkler participated in SCinet. November 15, Brown attended a dinner hosted by PNWGP; John Silvester of TransLight/Pacific Wave and Dave Reese of CENIC/NLR also attended. November 16, Brown attended a dinner hosted by PRAGMA.

November 14, 2005. Maxine Brown, Ana Preston and Rick Summerhill participated in an IRNC meeting at SC|05 in Seattle to discuss status updates.

October 18, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

October 16-November 7, 2005. Brown gave presentations on TransLight/StarLight and participated in several international Asia-Pacific meetings, notably:

• PRAGMA-9 meeting, Hyderabad, India (October 16-24). Note: During PRAGMA, Brown met S. Ramakrishnan, Director General of the Center for Development of Advanced Computing (C-DAC) of Pune University, who is also with India’s Ministry of Communications and Information Technology, spearheading the GARUDA project on grid computing; he and his staff were very interested in iGrid 2005.

• Sathya Sai Institute of Higher Learning (SSSIHL) in Puttaparthi (near Bangalore), hosted by Radha Nandkumar of UIUC/NCSA (October 25-28)

• GLORIAD/Hong Kong Light (HKLight) meetings in Hong Kong (October 29-31)
• Chinese American Networking Symposium (CANS 2005), Shenzhen, China (November 1-2)
• CANS 2005 Extension program sponsored by Wuhan University, Wuhan and Yichang, China (Nov 3-7)

October 11, 2005. Julio Ibarra visited UIC while in Chicago to attend LHC meetings at FermiLab. He visited UIC to learn more about all our projects, and to discuss our complementary IRNC activities.

October 5, 2005. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

September 30, 2005. Maxine Brown met with Larry Smarr and members of the Chinese Academy of Sciences, particularly the Computer Network Information Facility (including BaoPing Yan, who is part of GLORIAD), at the Calit2 building on the UCSD campus, San Diego, California, at lunch during the GLIF meeting. This visit was organized by Peter Arzberger, head of PRAGMA.

September 26-30, 2005. Tom DeFanti, Maxine Brown, Alan Verlo, Joe Mambretti, Linda Winkler, Kees Neggers and Erik-Jan Bos attended and participated in the iGrid 2005 Workshop (Sept 26-29) and the annual GLIF meeting (Sept 29-30) at UCSD, San Diego, California. It should be noted that all IRNC-funded links were in use for real-time application demonstrations done during iGrid 2005. <www.igrid2005.org>

September 19-22, 2005. On behalf of TransLight/StarLight, Julio Ibarra presented an update of our activities at the International Task Force (ITF) meeting at the Fall 2005 Internet2 Member Meeting in Philadelphia.

September 14, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

September 12-14, 2005. Tom DeFanti, Joe Mambretti, Kees Neggers and Linda Winkler, as well as Maartin Büchli of DANTE, attended the NCO/LSN’s Optical Network Testbeds Workshop 2 (ONT2) at NASA Ames, California. Mambretti, ONT2 co-chair, gave a presentation on “OMNInet Roadmap” and DeFanti gave a presentation on “OptIPuter Roadmap,” both of which involve international Dutch collaborations over the CHI/AMS TransLight/StarLight and SURFnet links. <http://www.nren.nasa.gov/workshop8/agenda.html>

August 31, 2005. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

August 19-23, 2005. Tom DeFanti attended the GigaPort Next Generation project’s Scientific Advisory Committee (SAC) annual meeting in Amsterdam, of which he is a member. (SURFnet receives its funding from the GigaPort project, so the SAC is an oversight committee.)

August 16, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

August 10, 2005. John O'Callaghan (head of the APAC Australian supercomputer centers program), Rhys Francis (Manager of the APAC Grid Program) and Robin Stanton (Australian National University) visited UIC to learn more about our various projects, including TransLight/StarLight.

August 8, 2005. Takahiro Ueno, Yukihisa Osaka and Koichi Hiragami from the Network Testbed Advancement Division of NiCT, the Japanese agency that funds the JGN-II 10Gbps link from Tokyo to Chicago, visited UIC and StarLight to meet with Tom DeFanti and Joe Mambretti to discuss collaboration opportunities.

July 29, 2005. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

July 17-20, 2005. Alan Verlo attended the Summer 2005 ESCC/Internet2/CANARIE Joint Techs Workshop co-hosted by BCNET, the University of British Columbia and the British Columbia Institute of Technology, in Burnaby, Canada. He also attended a Bill Johnston (DOE)-organized BOF for a loosely defined international techs “group” (July 19) as well as the monthly JET meeting (July 18). <http://jointtechs.es.net/Vancouver20051.htm>

June 21, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

June 6, 2005. Tom DeFanti and Maxine Brown participated in an IRNC phone call to discuss status updates.

June 4-5, 2005. The Coordinating Committee for International Research Networks (CCIRN) annual meeting was held in conjunction with the TERENA Conference 2005 in Poznan, Poland. Joe Mambretti, head of the CCIRN Digital Video Working Group (DVWG), could not attend, but Grant Miller offered to give a short presentation in his place; Mambretti provided information on the DVWG as well as StarLight, TransLight, GLIF and iGrid 2005.

May 17, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

May 2, 2005. Joe Mambretti represented TransLight/StarLight at the Internet2 Spring 2005 Member Meeting IRNC panel, held at the International Task Force (ITF) meeting. Presentations are available at <http://international.internet2.edu/resources/events/2005/2005SMM-itf2.html>.

April 19, 2005. Alan Verlo and Linda Winkler participated in the JET Optical Networking Testbed (ONT) Workshop at NSF.

April 15, 2005. An IRNC TransLight/StarLight meeting was held in Chicago to coordinate/engineer trans-Atlantic circuits between North American networks (Abilene, CA*net, ESnet) and GÉANT2. Attendees included Rene Hatem (CANARIE); Dai Davies and Roberto Sabatino (DANTE); Joe Burrescia, Joe Metzger and Bill Johnston (via Polycom) (ESnet); Rick Summerhill, Heather Boyles and Steve Corbató (Internet2); Kevin Thompson via telephone (NSF); Joe Mambretti (StarLight); and Tom DeFanti, Maxine Brown, Alan Verlo and Linda Winkler (TransLight/StarLight). The discussion involved the definition of an open exchange, such as StarLight, and what equipment was being placed at MAN LAN in New York City and in GÉANT2 European Distributed Access locations.

April 8, 2005. Tom DeFanti participated in an IRNC phone call to discuss IRNC status updates and an IRNC session at the Internet2 Spring 2005 Member Meeting on May 2.

March 15, 2005. Alan Verlo and Linda Winkler participated in the monthly JET (Joint Engineering Team) meeting.

March 11, 2005. Tom DeFanti attended the NSF IRNC Kick-Off meeting in Washington DC. Presentations can be found at <http://www.ciara.fiu.edu/whren_kickoff/irnc_kickoff_05.htm>.

February 13-16, 2005. Alan Verlo and Linda Winkler attended the Winter ESCC/Internet2 Techs Workshop, hosted by the University of Utah in Salt Lake City. They also attended a JET meeting held there on February 16.

February 13, 2005. Alan Verlo and Linda Winkler attended the interim GLIF Technical (TECH) working group meeting in Salt Lake City.

September 26, 2004. Tom DeFanti, Maxine Brown and Joe Mambretti attended an IEEAF-facilitated discussion at the Internet2 Fall Member Meeting, with representatives from Internet2, DANTE, WIDE, SURFnet and IRNC. One topic was IRNC awards and collaboration issues, and another major issue was the role and nature of the NYC exchange point (MAN LAN), and how to create an Open Exchange similar to StarLight and the PNWGP.

September 12, 2004. IRNC award recipients, as well as Internet2, NLR and CANARIE, met at UIC to self-organize and discuss how we would work together to engineer links. At one point we called ourselves the 5-TOPS (Trans-Oceanic Production Science). Attending were: John Silvester (USC); Dave Richardson (U Washington/PNWGP); Greg Cole (GLORIAD); Chip Cox and Heidi Alvarez (Florida International U); Jim Williams and Chris Robb (Indiana U); Tom West (NLR); Joe Mambretti (Northwestern U); Linda Winkler (Argonne National Lab); Rene Hatém (CANARIE); Steve Corbató (Internet2); Tom DeFanti, Maxine Brown and Alan Verlo (UIC).


2.B. Research Findings

2.B.1. Lessons Learned

Problems and Issues
The chief problem/issue is a high-level one: How do we attract, manage and retain high-bandwidth users? When do we claim success: (1) when the application runs? (2) when the link is saturated? or (3) when successful users procure their own links?

Avoidable Problems
Tom DeFanti, PI, was limited by NSF to one month of IRNC salary per year, which is equivalent to working approximately two days a month, 11 months a year. The NSF HPIIS Euro-Link award paid him for 4 months per year for most of the award period, or 8 days a month. On a budget of two days/month, most PI conference travel, especially to international meetings, is impractical (for example, a single trip would use up 3 months of time). Currently allocated PI time is insufficient to proactively develop IRNC usage by devoting time to such activities as contacting potential users, working with NLR and/or Internet2 if potential users have connectivity issues, attending and giving presentations at conferences, and maintaining an active relationship with GLIF peers.

The IRNC program does not pay for graduate students or REUs. While we understand this is an infrastructure award, we also run a research laboratory with very talented students who can assist with the installation and implementation of network engineering and measurement tools and techniques. The NSF HPIIS Euro-Link award did provide REU funds for several years, and other, related funding for the StarLight facility, which is now concluding, paid for graduate students.

Unavoidable Problems
The cost of transatlantic (and domestic) OC-192s is within reach of single-domain, large-scale projects, so we expect high-bandwidth users to “graduate” and get their own links. This has negatively impacted, and will continue to negatively impact, our usage statistics.

L1 links from SURFnet, UKLight, GÉANT2, CERN, Abilene/HOPI, and others compete with our IRNC links.

2.B.2. NSF Interactions

Positive
NSF/OCI has been very supportive of the Layer diversity that TransLight/StarLight is attempting to achieve.

Negative
To our knowledge, NSF/OCI has not actively introduced IRNC opportunities to other NSF divisions. NSF does not have a program in place to fund international computational or experimental science that would use IRNC bandwidth, so collaborations are mostly left up to us to attract and retain. We attract international science collaborations, and train technical people, as part of iGrid and SC conference activities, but these collaborations are mostly transient without substantial follow-up funding.

2.B.3. IRNC Project Interactions and Leveraging of Resources

iGrid 2005 and SC|05
Together with our IRNC siblings, we supported science and engineering research and education applications at major events and activities, including the iGrid 2005 Workshop in San Diego, September 26-30, 2005, and research demonstrations at SC|05 in Seattle, November 12-18, 2005. Notably, for iGrid, all IRNC links were used. For information on iGrid, see <www.igrid2005.org>. Specific iGrid US/European collaborations are listed below.

Abilene/GÉANT2 and StarLight/NetherLight Compatibilities
TransLight/StarLight seamlessly connects Abilene to GÉANT2, and the lambda-connected networks at StarLight and NetherLight, thereby assuring that international network services conform to those currently offered or planned by domestic research networks. The US National LambdaRail (NLR) connects to the IRNC CHI/AMS link, connecting users to similar European networks that come into NetherLight, and in Year 2 of this award, we will make a concerted effort to encourage and document usage.


Global Lambda Integrated Facility (GLIF)

IRNC principal investigators and/or individuals from all participating IRNC institutions attended the 2005 GLIF meeting at iGrid 2005. Specifically, Tom DeFanti, Maxine Brown, Alan Verlo, Laura Wolf, Linda Winkler, Rick Summerhill, Erik-Jan Bos, Kees Neggers and Joe Mambretti attended from TransLight/StarLight; John Silvester and Ron Johnson attended from TransLight/Pacific Wave; Julio Ibarra and Heidi Alvarez attended from WHREN; and, Greg Cole, Natasha Bulashova and their Russian, Chinese and Korean collaborators attended from GLORIAD. Steve Wallace from Indiana University also attended, though not an official representative of TransPAC2.

One of GLIF’s major engineering activities is to define GLIF Open Lambda Exchanges (GOLEs) that enable interoperability and interconnectivity of L1, L2 and L3 links. TransLight/StarLight’s hubs in MAN LAN/New York, StarLight/Chicago and NetherLight/Amsterdam are GOLEs, connecting our 2 x 10Gbps lambdas as either permanent or configurable links.

2.B.4. International Collaborations: GLIF
Above, we talk of cooperation among the majority of IRNC members to support and enable GLIF growth. The GLIF map, which appears below and is posted on the GLIF website <www.glif.is>, shows the extent of this international virtual organization that supports persistent data-intensive scientific research and middleware development on LambdaGrids. To create this infrastructure, National Research Networks, countries, consortia, institutions and individual research initiatives are providing the physical layer. The world’s premier research and education networking engineers are defining GLIF Open Lambda Exchanges (GOLEs) to assure the interconnectivity and interoperability of links by specifying equipment, connection requirements and necessary engineering functions and services. Computer scientists are exploring the development of intelligent optical control planes and new transport protocols, building on the wealth of middleware that currently exists. And, e-science teams are the primary drivers for these new application-empowered networks, where e-science represents very-large-scale applications − such as astronomy, bioinformatics, environmental, geoscience, high-energy physics − that study very complex micro- to macro-scale problems over time and space. GLIF is building more than a network − it is building an integrated facility in which broad multidisciplinary teams can work together.

[Figure: GLIF map] The GLIF map was developed by Maxine Brown of UIC (data compilation) and Bob Patterson of NCSA/UIUC (visualization). The NSF IRNC and IRNC-related links (and NSF TeraGrid) are drawn in various shades of orange. They are not the same shade of orange because, in some instances, foreign countries are paying for portions of the links (e.g., Korea, China and Canada are contributing to GLORIAD). The color-coding is subtle, but does somewhat indicate NSF’s contribution to the evolving global LambdaGrid.

2.B.5. E-Science Support (Quantified Science Drivers)
For many years, we have documented international applications on the StarLight website <http://www.startap.net/starlight/APPLICATIONS/> and, more specifically, US/European applications on the Euro-Link website <http://www.startap.net/euro-link/APPLICATIONS/>. However, science has no geographical boundaries; all science is global. As international collaborations become more prevalent, as collaborations expand from two to three or four continents, and as more transoceanic links become operational, it is impossible to identify and document these applications − they are ubiquitous. Of more interest to us is to identify and serve high-end applications − that is, the data-intensive e-science applications requiring advanced networking capabilities − for they are the drivers for new networking tools and services to advance the state-of-the-art of production science.

Below is a subset of the US/European applications demonstrated at iGrid 2005; more complete documentation can be found at <http://www.igrid2005.org/program/demos_list.html>. Quantitative results are being documented in articles to be published in Summer 2006 in the Elsevier journal Future Generation Computer Systems: The International Journal of Grid Computing: Theory, Methods and Applications. Some of these applications subsequently went on to win awards at SC|05 and Internet2, and these awards are indicated if known. It should be noted that NSF provided support for iGrid 2005 (award OISE-0527983, Maxine Brown, principal investigator).

While we could not track all the applications using IRNC links for SC|05, we did, however, have to schedule the IRNC and SURFnet 10Gbps links between Amsterdam and Chicago, and the CAVEwave link from Chicago to Seattle (site of SC|05), for several bandwidth-intensive applications, which were also showcased at iGrid.

iGrid 2005 US/European Applications
Note: All iGrid applications had a US component, as they were run from/to UCSD, San Diego, California.

AMROEBA-EA: Computational Astrophysics Modeling Enabled by Dynamic Lightpath Switching

http://www.igrid2005.org/program/applications/scservices_amroebaea.html

Collaborators:
International Center for Advanced Internet Research, Northwestern University, US
Laboratory for Computational Astrophysics, UCSD, US
Advanced Internet Research Group, University of Amsterdam, NL

Enzo is a grid-based adaptive mesh refinement (AMR) simulation code for cosmological structure formations. The AMROEBA-EA (Adaptive Mesh Refinement Optical Enzo Backplane Architecture Enabled Application) project investigates the potential for conducting data-intensive Enzo simulations on a distributed infrastructure based on dynamic lightpath provisioning.

Collaborative Analysis

http://www.igrid2005.org/program/applications/videoservices_analysis.html

Collaborators:
Sandia, USA
High Performance Computing Center Stuttgart (HLRS), Germany

Sandia and HLRS are developing a high-resolution architectural virtual environment in which physically based characters, such as vehicles and dynamic cognitive human avatars, interact in real-time with human participants.


Data Reservoir on IPv6: 10Gbps Disk Service in a Box

http://www.igrid2005.org/program/applications/dataservices_datareservoir.html

Collaborators:
University of Tokyo, Japan
Fujitsu Computer Technologies, Japan
SARA, The Netherlands
Pacific Northwest GigaPoP, USA

The Data Reservoir project’s goal is to create a global grid infrastructure to enable distributed data sharing and high-speed computing for data analysis and numerical simulations. At the center of this infrastructure is the 2-PFLOPS system being developed as part of the GRAPE-DR project, to be operational in 2008. At iGrid, storage services that use the GRAPE-DR’s single-box high-density 10Gbps storage server and an IPv6 fast TCP data transfer protocol demonstrated, for the first time, 10Gbps TCP utilization in a production environment.

On November 14, 2005, this group won the Internet2 Land Speed Records (I2-LSR) in both the IPv6 single-stream and IPv6 multi-stream categories. The team from the University of Tokyo, the WIDE Project, and Chelsio Communications successfully transferred data at a rate of 5.58 Gbps over a distance of over 30,000 kilometers traversing the WIDE, IEEAF, JGN2 networks. Achieving 167,400 terabit-meters per second (Tb-m/s), the team more than doubled the existing record, surpassing it by 131 percent.
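The I2-LSR metric is simply throughput multiplied by path length, so the quoted figure can be checked directly; the quick verification below is illustrative only and is not part of the official record submission.

    # Quick check of the I2-LSR figure: throughput x distance.
    rate_bps = 5.58e9          # 5.58 Gbps
    distance_m = 30000e3       # 30,000 km
    print(rate_bps * distance_m / 1e12)   # ~167,400 Tb-m/s, matching the quoted record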

“Data Reservoir on very-long-distance IPv6/IPv4 network” received the SC|05 Bandwidth Challenge award for “Fastest IPv6.” They achieved 6.84 Gbps peak traffic on IPv6.

DataWave: Ultra-High-Performance File Transfer Enabled By Dynamic Lightpaths

http://www.igrid2005.org/program/applications/dataservices_datawave.html

Collaborators iCAIR, NU, USA Nortel, USA Advanced Internet Research Group, University of Amsterdam, NL National Center for Data Mining, University of Illinois at Chicago (UIC), USA Electronic Visualization Laboratory (EVL), UIC, USA

Science applications generate extremely large files that must be globally distributed. Most network performance methods focus on memory-to-memory data transfers as opposed to data-file-to-data-file. This project demonstrates the potential for transferring extremely large files using dynamic L1 services.

Dead Cat

http://www.igrid2005.org/program/applications/vizservices_cat.html

Collaborators SCS, UvA, NL Advanced Internet Research Group (AIR), UvA, NL

Using a small handheld display device, viewers in San Diego “see” inside the body of a small panther in a bottle of formaldehyde (or a similar substitute). The handheld device’s location, relative to the animal, is tracked, and its coordinates are transferred to Amsterdam, where a volumetric dataset of a small panther, created with a CT scan, is stored. Information is extracted from the database, rendered, and volumetric pictures/frames are sent back to the display, creating a stunning real-time experience.
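The request/response loop behind this kind of remote volume rendering can be sketched as follows. This is a minimal, hypothetical illustration, not the actual iGrid system: the host name, port, and wire format are invented for the example.

```python
# Hypothetical sketch of the tracked-display / remote-renderer loop described above.
import socket
import struct

RENDER_HOST = ("renderer.example.nl", 9000)   # assumed remote volume-rendering service

def recv_exact(sock, n):
    """Read exactly n bytes from a stream socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("renderer closed the connection")
        buf += chunk
    return buf

def request_frame(sock, x, y, z):
    """Send the handheld display's tracked position; read back one rendered frame."""
    sock.sendall(struct.pack("!3f", x, y, z))              # tracker coordinates
    (frame_len,) = struct.unpack("!I", recv_exact(sock, 4))  # renderer replies with length...
    return recv_exact(sock, frame_len)                       # ...then the frame bytes

# Example usage (assuming such a service is listening):
# with socket.create_connection(RENDER_HOST) as s:
#     frame = request_frame(s, 0.1, 0.2, 0.3)
```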


Dynamic Provisioning Across Administrative Domains

http://www.igrid2005.org/program/applications/lambdaservices_provisioning.html

Collaborators Internet2, USA Argonne National Laboratory, USA Mid Atlantic Crossroads (MAX) GigaPoP, USA Information Sciences Institute, USA MIT Haystack, USA NiCT, Japan Onsala, Sweden JIVE, NL Westerbork Observatory/ASTRON, NL NORDUnet, Nordic countries Hybrid Optical and Packet Infrastructure Project (HOPI) Design Team, USA

Collaborators ESLEA National e-Science Centre, Edinburgh, UK University of Manchester, UK University College London, UK UKLight, UK

Dynamic Resource Allocation via GMPLS Optical Networks (DRAGON) demonstrates dynamic provisioning across multiple administrative domains to enable Very Long Baseline Interferometry (VLBI). Using simultaneous radio-telescope observations in the USA (MIT Haystack), Japan (Kashima) and Europe (Onsala in Sweden, Jodrell in the UK, Westerbork in The Netherlands), scientists collect data to create ultra-high-resolution images of distant objects and make precise measurements of the Earth’s motion in space. The HOPI testbed facilitates connectivity between various domains across the globe to simultaneously transfer VLBI data from these stations to the MIT Haystack Observatory at 512Mbps/station.
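A rough aggregate-bandwidth estimate for this e-VLBI demonstration follows directly from the 512Mbps-per-station figure above; the station list and the assumption that only the four non-Haystack stations traverse the wide-area network are illustrative.

```python
# Back-of-the-envelope aggregate for VLBI data arriving at MIT Haystack.
remote_stations = ["Kashima", "Onsala", "Jodrell Bank", "Westerbork"]  # stations streaming over the WAN (assumed)
per_station_mbps = 512                                                 # rate quoted in the text
aggregate_gbps = len(remote_stations) * per_station_mbps / 1000
print(f"~{aggregate_gbps:.2f} Gbps arriving at Haystack")              # ~2.05 Gbps sustained
```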

ESLEA

http://www.igrid2005.org/program/applications/escience_eslea.html

Collaborators ESLEA National e-Science Centre, Edinburgh, UK University of Manchester, UK University College London, UK Other Collaborators UKERNA/UKLight/ULCC, UK Argonne National Laboratory, USA Stanford Linear Accelerator Center (SLAC), USA JIVE, NL Westerbork Observatory, ASTRON, NL Jodrell Bank, UK MIT Haystack, USA Onsala, Sweden

The UK e-Science ESLEA (Exploitation of Switched Lightpaths for eScience Applications) project demonstrates the benefits of switched lightpaths for scientific applications using UKLight. At iGrid, UKLight is used to move data for three e-science disciplines: high-energy physics, computational science, and radio astronomy.

The SC|05 HPC Analytics Challenge Award was awarded to the ESLEA "SPICE: Simulated Pore Interactive Computing Experiment" demonstration (University College London, University of Manchester, University of Edinburgh, Tufts University, TeraGrid, Nottingham University, NCSA/TeraGrid, Pittsburgh Supercomputing Center, Argonne National Laboratory, CCLRC Daresbury). This prize was tied with one of Bob Grossman's applications, "Real Time Change Detection and Alerts from Highway Traffic Data."

Exploring Remote and Distributed Data Using Teraflows

http://www.igrid2005.org/program/applications/dataservices_teraflows.html

Collaborators NCDM, UIC, USA International Center for Advanced Internet Research, Northwestern University, USA Advanced Internet Research Group, University of Amsterdam, NL CERN Kyushu Institute of Technology, Japan Queen’s University, Canada

High-performance data services for data exploration, data integration and data analysis are demonstrated; these services scale to 1Gbps and 10Gbps networks. These services are layered over the High-Performance Protocols (HPP) toolkit. In turn, these network protocols are layered over services for setting up, monitoring, and tearing down optical paths. Applications built with these data services demonstrate “photonic data services,” i.e., advanced L3 protocols and services that are closely integrated with L2 and L1 services.

From Federal Express to Lambdas: Transporting SDSS Data Using UDT

http://www.igrid2005.org/program/applications/dataservices_lambdas.html

Collaborators NCDM, UIC, USA Johns Hopkins University, USA Korea Astronomy and Space Science Institute, Korea Korea Institute of Science and Technology Information, Korea Department of Astronomy, School of Science, University of Tokyo, Japan National Astronomical Observatory, China Chinese Academy of Sciences, China School of Physics, University of Melbourne, Australia Max-Planck-Institut für Plasmaphysik, Germany

Until now, Sloan Digital Sky Survey (SDSS) datasets have been shared with collaborators using Federal Express. Optical paths and new transport protocols are enabling these datasets to be transported using networks over long distances. The SDSS DR3 multi-terabyte dataset is transported between Chicago and various sites in Europe and Asia using NCDM’s data transport protocol UDT (UDP-based Data Transport Protocol) over the Teraflow Testbed, an infrastructure designed to test new 10Gbps network protocols and data services with high-volume data flows, or teraflows, over both routed and optical networks.
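A simple transfer-time comparison motivates the move from shipping disks to lambdas. The dataset size below is an assumption for illustration (the text says only "multi-terabyte"); the rates are the link speeds discussed above, ignoring protocol and disk overheads.

```python
# Idealized transfer-time estimate for a multi-terabyte SDSS dataset.
dataset_tb = 3.0                       # assumed dataset size in terabytes (illustrative)
for rate_gbps in (1, 10):
    seconds = dataset_tb * 8e12 / (rate_gbps * 1e9)   # bits divided by bits-per-second
    print(f"{dataset_tb} TB at {rate_gbps} Gbps (ideal): {seconds / 3600:.1f} hours")
# ~6.7 hours at 1 Gbps and ~0.7 hours at 10 Gbps, versus days for shipping disks
```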

Global Lambdas for Particle Physics Analysis

http://www.igrid2005.org/program/applications/lambdaservices_physics.html

Collaborators Caltech, USA Stanford Linear Accelerator Center (SLAC), USA Fermi National Accelerator Laboratory (FNAL), USA University of Florida, USA University of Michigan, USA CERN Korea Advanced Institute of Science and Technology, Korea Kyungpook National University, Korea Universidade do Estado do Rio de Janeiro, Brazil University of Manchester, UK GLORIAD, USA Cisco Systems, Inc., USA

Caltech, SLAC and FNAL use advanced networks to demonstrate analysis tools that will enable physicists to control worldwide grid resources when analyzing major high-energy physics events. Components of this “Grid Analysis Environment” are being developed by such projects as UltraLight, FAST, PPDG, GriPhyN and iVDGL.

First prize for the SC|05 Bandwidth Challenge went to the team from Caltech, Fermilab and SLAC for their entry “Distributed TeraByte Particle Physics Data Sample Analysis,” which was measured at a peak of 131.57 Gbps of IP traffic. This entry demonstrated high-speed transfers of particle physics data between host labs and collaborating institutes in the USA and worldwide. Using state-of-the-art WAN infrastructure and Grid Web Services based on the LHC Tiered Architecture, they showed real-time particle event analysis requiring transfers of Terabyte-scale datasets.

Global N-Way Interactive High-Definition Video Conferencing over Long-Pathway, High-Bandwidth Networks

http://www.igrid2005.org/program/applications/videoservices_nwayconf.html

Collaborators ResearchChannel, USA University of Washington and Pacific Northwest GigaPoP, USA AARNet, Australia Australian Partnership for Advanced Computing (APAC), Australia SURFnet, NL University of Wisconsin-Madison, USA WIDE, Japan

This is a demonstration of real-time, high-resolution, high-definition communication with very low latency among multiple sites across the world. People at each site use very-high-quality video conferencing to communicate with the other sites and, in some cases, two-way, completely uncompressed raw HDTV is sent. The University of Washington developed this system in conjunction with the Pacific Northwest GigaPoP, ResearchChannel and AARNet. Other partners include SURFnet, University of Wisconsin-Madison, WIDE and APAC.

A special session is planned with Larry Smarr, extending a welcome from iGrid to the APAC’05 conference in Queensland, Australia. The APAC keynote by Ian Foster is delivered in high definition from Queensland to iGrid in San Diego.

GLVF: An Unreliable Stream of Images

http://www.igrid2005.org/program/applications/vizservices_glvf_stream.html

Collaborators Electronic Visualization Laboratory, University of Illinois at Chicago, US SARA Computing and Networking Services, NL

SARA has developed a light-weight system for visualizing ultra-high-resolution 2D and 3D images. A key design element is the use of UDP, a lossy network protocol, which enables higher transfer rates but with possible resulting visual artifacts. The viewer may tolerate these artifacts more readily than lower throughput, as the artifacts have a short lifespan. At iGrid, this lossy approach is compared to the more robust SAGE system.

Visualization scientists from the USA, Canada, Europe and Asia are creating the Global Lambda Visualization Facility (GLVF), an environment to compare network-intensive visualization techniques on a variety of different display systems.
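The "unreliable stream of images" idea described above can be sketched as a sender that pushes numbered image tiles over UDP and never retransmits, trading occasional short-lived artifacts for throughput. This is a minimal, hypothetical illustration, not SARA's implementation; the host, port, and tile framing are invented.

```python
# Minimal sketch of lossy UDP image streaming: lost datagrams simply become artifacts.
import socket
import struct

VIEWER = ("viewer.example.org", 5004)   # assumed receiving display node
MTU_PAYLOAD = 1400                      # keep datagrams under a typical Ethernet MTU

def stream_frame(sock, frame_id, pixels):
    """Send one frame as a burst of numbered UDP datagrams; no retransmission."""
    for seq, offset in enumerate(range(0, len(pixels), MTU_PAYLOAD)):
        chunk = pixels[offset:offset + MTU_PAYLOAD]
        header = struct.pack("!IIH", frame_id, seq, len(chunk))   # frame id, sequence, length
        sock.sendto(header + chunk, VIEWER)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# stream_frame(sock, 0, b"\x00" * 1_000_000)   # example: one synthetic 1 MB frame
```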


GLVF: Scalable Adaptive Graphics Environment

http://www.igrid2005.org/program/applications/vizservices_glvf_sage.html

Collaborators EVL, UIC, USA US Geological Survey National Center for Earth Resources Observation and Science, USA SARA Computing and Networking Services, NL Korea Institute of Science and Technology Information, Korea Center for the Presentation of Science, University of Chicago, USA

The OptIPuter project’s Scalable Adaptive Graphics Environment (SAGE) system coordinates and displays multiple incoming streams of ultra-high-resolution computer graphics and live high-definition video on the 100Megapixel LambdaVision tiled display system. Network statistics of SAGE streams are portrayed using CytoViz, an artistic network visualization system. UIC participates in the Global Lambda Visualization Facility.

GridON: Grid Video Transcoding using User-Controlled Lightpaths

http://www.igrid2005.org/program/applications/vizservices_gridon.html

Collaborators i2CAT/UPC, Spain Communications Research Centre, Canada

GridON is an application that converts raw SDI video to MPEG-2, using UCLP to create on-demand lightpaths to access appropriate remote computers during the process.

This application is part of the iGrid demo “World’s First Demonstration of X GRID Application Switching using User Controlled LightPaths.”

HD Multipoint Conference

http://www.igrid2005.org/program/applications/videoservices_hdmulticonf.html

Collaborators Masaryk University/CESNET, CZ CESNET, CZ Louisiana State University, USA

High-definition (HD) interactive video requires minimal transmission latency to be truly effective, which is afforded by today’s optical networks. While point-to-point “raw” HD video streams (i.e., streams with no latency due to encoding) have been successfully demonstrated, multicast HD is more demanding. One raw stream requires about 1.5Gbps, so multi-site meetings can easily saturate 10Gbps links. In addition, it is advantageous to do real-time translation of raw HD video into less demanding formats in order to allow sites without adequate bandwidth to participate. A shared “virtual” lecture hall is created among the iGrid site in San Diego, Masaryk University and Louisiana State University. Each site has at least one HD camera; HD streams are multicast to the other sites at the highest bandwidth. Additional USA and European collaborators can receive/send digital video (DV) streams that are translated in real-time from/to raw HD. Synchronously, the lecture is also sent to a distributed storage system for archiving purposes.
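The bandwidth arithmetic behind "multi-site meetings can easily saturate 10Gbps links" is worth making explicit. The sketch below assumes a three-site meeting and counts the pairwise unicast copies that would be needed if multicast were unavailable; both assumptions are illustrative.

```python
# Illustrative bandwidth budget for a multipoint raw-HD meeting.
raw_stream_gbps = 1.5                    # per-stream figure from the text
sites = 3                                # assumed meeting size for the example
unicast_streams = sites * (sites - 1)    # pairwise copies if multicast is unavailable
print(f"{unicast_streams} streams x {raw_stream_gbps} Gbps = "
      f"{unicast_streams * raw_stream_gbps:.1f} Gbps")
# 6 x 1.5 Gbps = 9.0 Gbps -- close to a full 10 Gbps link even with only three sites
```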


Interactive Remote Visualization across the LONI and the National LambdaRail

http://www.igrid2005.org/program/applications/vizservices_loni.html

Collaborators Center for Computation and Technology, Louisiana State University (LSU), USA LSU, USA Masaryk University/CESNET, Czech Republic Zuse Institute Berlin, Germany MCNC, USA National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, USA Lawrence Berkeley National Laboratory, USA Vrije Universiteit, NL

Interactive visualization coupled with computing resources and data storage archives over optical networks enhances the study of complex problems, such as the modeling of black holes and other sources of gravitational waves. At iGrid, scientists visualize distributed, complex datasets in real time. The data to be visualized is transported across the Louisiana Optical Network Initiative (LONI) network to a visualization server at LSU in Baton Rouge using state-of-the-art transport protocols interfaced by the Grid Application Toolkit (GAT) and dynamic optical networking configurations. The data, visualization and network resources are reserved in advance using a co-scheduling service interfaced through the GAT. The data is visualized in Baton Rouge using Amira, and HD video teleconferencing is used to stream the generated images in real time from Baton Rouge to Brno and to San Diego. Tangible interfaces, facilitating both ease-of-use and multi-user collaboration, are used to interact with the remote visualization.

International 10Gbps Line-Speed Security

http://www.igrid2005.org/program/applications/lambdaservices_10gbsecurity.html

Collaborators Nortel, Canada Electronic Visualization Laboratory, University of Illinois at Chicago, USA Argonne National Laboratory, USA Calit2, University of California, San Diego, USA International Center for Advanced Internet Research, Northwestern University, USA SARA Computing and Networking Services, NL

This security demonstration uses AES-256 encryption and OME switching hardware to send encrypted streaming media at 10Gbps speeds with minimal latency from Nortel’s Ottawa facility, and from Amsterdam, through StarLight in Chicago, to iGrid in San Diego. Such capability is desirable for many applications of lightpaths to reduce vulnerability to theft or modification of commercially valuable information. Applications can enable and disable the encryption on demand during the iGrid demonstration.

IPv4 Link-Local IP Addressing for Optical Networks

http://www.igrid2005.org/program/applications/lambdaservices_ipv4.html

Collaborators

Advanced Internet Research Group, Universiteit van Amsterdam, NL

When setting up lightpaths, the end nodes use IP addresses to communicate. Address assignment is now done manually. An automatic solution is demonstrated that uses IPv4 link-local addresses while retaining the authenticity of the hosts.
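One building block of such an automatic scheme is choosing a candidate IPv4 link-local address, in the style of RFC 3927. The sketch below shows only that piece; conflict probing, host authentication, and lightpath signalling (the parts described above) are omitted, and the function name is invented for the example.

```python
# Sketch of picking a candidate IPv4 link-local address (169.254.0.0/16).
import ipaddress
import random

def pick_link_local(seed=None):
    """Choose a pseudo-random address in 169.254.1.0 - 169.254.254.255."""
    rng = random.Random(seed)
    third = rng.randint(1, 254)    # 169.254.0.x and 169.254.255.x are reserved
    fourth = rng.randint(0, 255)
    return ipaddress.IPv4Address(f"169.254.{third}.{fourth}")

print(pick_link_local(seed=42))    # deterministic example pick
```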


Large-Scale Multimedia Content Delivery over Optical Networks for Interactive TV Services

http://www.igrid2005.org/program/applications/videoservices_multimedia.html

Collaborators Poznan Supercomputing and Networking Center (PSNC), Poland

A high-quality multimedia content delivery system is being developed for National Public Television of Poland, for delivery of live TV, Video-on-Demand and Audio-on-Demand with interactive access over broadband IP networks. While terrestrial, satellite and cable platforms are used for digital TV distribution, only a completely IP-based solution offers true interactivity, enabling the creation of new content access services.

PSNC has developed a Content Delivery System (CDS) with a scalable architecture, in which Regional Data Centers (RDCs) can obtain content from content providers and then distribute it among themselves and to lower-level proxy/cache servers. During iGrid, one CDS node is set up in San Diego. Multimedia content is sent over a Gigabit link from RDC Poznan to San Diego, triggered by many simultaneous user requests.

Large-Scale Simulation and Visualization on the Grid with the GridLab Toolkit and Applications

http://www.igrid2005.org/program/applications/scservices_gridlab.html

Collaborators Poznan Supercomputing and Networking Center (PSNC), Poland Louisiana State University, USA Masaryk University, Czech Republic PIONIER National Optical Network, Poland Konrad Zuse Zentrum, Germany Vrije Universiteit, NL SZTAKI, Hungary University of Lecce, Italy Cardiff University, UK

GridLab is a European Commission-funded research project for the development of application tools and middleware for Grid environments, and will include capabilities such as dynamic resource brokering, monitoring, data management, security, information, and adaptive services. GridLab technologies are being tested with Cactus (a framework for scientific numerical simulations) and Triana (a visual workflow-oriented data analysis environment). At iGrid, a Cactus application runs a large-scale black-hole simulation at one site, writes the data to local discs, and then transfers all the data to another site to be post-processed and visualized. In the meantime, the application checkpoints and migrates the computation to another machine, possibly several times, in response to external events such as decreasing system performance or an impending machine shutdown. Every application migration requires a transfer of several gigabytes of checkpoint data, together with the output data for visualization.

LightForce: High-Performance Data Multicast Enabled By Dynamic Lightpaths

http://www.igrid2005.org/program/applications/dataservices_lightforce.html

Collaborators International Center for Advanced Internet Research, Northwestern University, USA Electronic Visualization Laboratory (EVL), University of Illinois at Chicago (UIC), USA Nortel, Canada Nortel, USA Advanced Internet Research Group, University of Amsterdam, NL

Advanced applications require reliable point-to-point transport of extremely large data files among multiple nodes. Currently no communication service meets these data transport needs cost effectively. LightForce demonstrates the potential for transporting multiple gigabits of data with virtually no jitter by exploiting the capabilities of dynamic lightpath switching. It uses dynamic L1 services and rapidly changing topologies that integrate multiple source and destination nodes, including remote nodes. Key technologies demonstrated include transparent mapping, lightpath control, resource allocation, arbitration among contending demands, extended FEC, error control, traffic stream quality, performance monitoring, and new protocols that exploit enhanced methods for low levels of BER.

Real-Time Multi-Scale Brain Data Acquisition, Assembly, and Analysis using an End-to-End OptIPuter

http://www.igrid2005.org/program/applications/dataservices_braindata.html

Collaborators National Center for Microscopy Imaging Research, UCSD, USA Electronic Visualization Laboratory, University of Illinois at Chicago, USA Concurrent Systems Architecture Group, UCSD, USA International Center for Advanced Internet Research, Northwestern University, USA Laboratory for Advanced Computing, UIC, USA Cybermedia Center, Osaka University, Japan Research Center for Ultra High Voltage Electron Microscope, Osaka University, Japan KDDI R&D Laboratories, Japan National Center for High Performance Computing, Taiwan Advanced Internet Research Group, University of Amsterdam, NL Korea Institute of Science and Technology Information, Korea

The operation of a scientific experiment on a global testbed consisting of visualization, storage, computational, and network resources is demonstrated. These resources are bundled into a unified platform using OptIPuter-developed technologies, including dynamic lambda allocation, advanced transport protocols, the Distributed Virtual Computer (DVC) middleware, and a multi-resolution visualization system running over the Scalable Adaptive Graphics Environment (SAGE). With these layered technologies, a multi-scale correlated microscopy experiment is run in which a biologist images a sample and progressively magnifies it, zooming from an entire system, such as a rat cerebellum, to an individual spiny dendrite.

Real-Time Observational Multiple Data Streaming and Machine Learning for Environmental Research using Lightpaths

http://www.igrid2005.org/program/applications/sciservices_datastreaming.html

Collaborators National Center for High Performance Computing (NCHC), Taiwan National Museum of Marine Biology & Aquarium (NMMBA), Taiwan Academia Sinica, Taiwan San Diego Supercomputer Center, USA Calit2, UCSD, USA Nara Institute of Science and Technology, Japan Cybermedia Center, Osaka University, Japan CANARIE, Canada Artificial Intelligence Applications Group, Edinburgh University, UK

The Ecogrid is a grid-based ecological and biological surveillance system that uses sensors and video cameras to monitor Taiwan’s coral reef ecosystem and wildlife behavior. Based on grid technologies, the Ecogrid can be regarded as an open-layered pipeline: the sensors collect raw images and transmit information over IP; grid middleware is used for distributed computing and data storage; and the data is then analyzed, synthesized and archived for scientific discovery. At iGrid, images from high-definition underwater monitoring cameras in the coral reef reserve of Kenting, Taiwan, and from an innovative omni-directional camera for telepresence from Osaka, are streamed to San Diego, demonstrating the technologies that have been developed for real-time data streaming. Also shown is stereo image streaming, which is currently being studied as a way to do machine learning.

This application is part of the iGrid demo “World’s First Demonstration of X GRID Application Switching using User Controlled LightPaths.”

Token-Based Network Element Access Control and Path Selection

http://www.igrid2005.org/program/applications/lambdaservices_tokenbased.html

Collaborators Advanced Internet Research Group (AIR), University of Amsterdam (UvA), NL

For grid computing to successfully interface with control planes and firewalls, new security techniques must be developed. Traditional network access security models use either an “outsourcing” model or an (OGSA-based) “configuration” model. The “push,” or token, model demonstrated here works at lower network levels. In this model, an application’s or user’s access rights are determined by a token issued by an authority. The token is used to signal the opening of the data path. The advantage of using tokens is that a path can be pre-provisioned, and an application or user holding tokens can access the network resource potentially faster than in the other models.
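The token ("push") model can be illustrated with a small sketch: an authority issues a signed token for a pre-provisioned path, and a network element later verifies it before opening the data path. The HMAC construction, key, and field names below are assumptions for illustration, not the AIR/UvA design.

```python
# Conceptual sketch of token issuance and verification for a pre-provisioned path.
import hashlib
import hmac

AUTHORITY_KEY = b"shared-secret-between-authority-and-network-element"  # assumed shared key

def issue_token(path_id, user):
    """Authority side: sign (path, user) so the holder can later open the path."""
    msg = f"{path_id}|{user}".encode()
    return hmac.new(AUTHORITY_KEY, msg, hashlib.sha256).hexdigest()

def verify_token(path_id, user, token):
    """Network-element side: check the token before opening the data path."""
    expected = issue_token(path_id, user)
    return hmac.compare_digest(expected, token)

t = issue_token("lightpath-42", "grid-user")
print(verify_token("lightpath-42", "grid-user", t))   # True -> open the data path
```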

Transfer, Process and Distribution of Mass Cosmic Ray Data from Tibet

http://www.igrid2005.org/program/applications/dataservices_cosmicray.html

Collaborators Chinese Academy of Sciences (CAS), China Istituto Nazionale di Fisica Nucleare, Italy

The Yangbajing (YBJ) International Cosmic Ray Observatory is located in the YBJ valley of the Tibetan highland. The ARGO-YBJ Project is a Sino-Italian Cooperation, which was started in 2000 and will be fully operational in 2007, to research the origin of high-energy cosmic rays. It will generate more than 200 terabytes of raw data each year, which will then be transferred from Tibet to the Beijing Institute of High Energy Physics, processed and made available to physicists worldwide via a web portal. Chinese and Italian scientists are building a grid-based infrastructure to handle this data, which will have about 400 CPUs, mass storage and broadband networking, a preview of which is demonstrated at iGrid.
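The sustained network rate implied by the figure above is modest compared to the peak and reprocessing needs; the estimate below simply spreads the quoted annual raw-data volume evenly over a year.

```python
# Rough sustained-rate estimate for moving >200 TB/year of ARGO-YBJ raw data.
tb_per_year = 200                       # annual raw-data volume quoted in the text
bits = tb_per_year * 8e12               # terabytes -> bits
seconds_per_year = 365 * 24 * 3600
print(f"~{bits / seconds_per_year / 1e6:.0f} Mbps sustained average")
# ~51 Mbps if spread evenly; bursts and reprocessing require far more headroom
```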

Virtual Laboratory on Demand

http://www.igrid2005.org/program/applications/sciservices_virtuallab.html

Collaborators Poznan Supercomputing and Networking Center (PSNC), Poland

While grid computing lets researchers access equipment remotely, there is no way to monitor what’s happening locally. The Virtual Laboratory (VLAB) project is a framework to enable end users to directly access and monitor usage of grid resources (workflow, accounting, job prioritization), particularly remote, expensive instrumentation, such as in chemistry (spectrometer), radio astronomy (radio telescope) and medicine (CAT scanner). At iGrid, VLAB is used to run remote instruments in Poland − spectrometers and a radio telescope − with real-time visualization and video streams sent back to San Diego to provide feedback to local users.


VM Turntable: Making Large-Scale Remote Execution more Efficient and Secure with Virtual Machines Riding on Dynamic Lightpaths

http://www.igrid2005.org/program/applications/lambdaservices_turntable.html

Collaborators Nortel, USA Nortel, Canada Advanced Internet Research Group, University of Amsterdam, NL International Center for Advanced Internet Research, Northwestern University, USA

The VM Turntable is structured around Xen-based Linux Virtual Machines that can be migrated in real time while still supporting live applications - transporting the whole set of memory pages and hard disk contents to various destinations. The live migration of Virtual Machines exploits a high degree of pipelining between the staggered operations of assembling the data to be transferred, verifying its integrity, and finally halting and transferring execution. To maintain lightpath security, the VM Turntable utilizes a token-based approach to efficiently enforce policies at both the bearer and control path levels (see the iGrid application “Token-based Network Element Access Control and Path Selection”).
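The pipelining idea, staggering the assembly of data, integrity verification, and transfer so the stages overlap rather than run strictly in sequence, can be sketched with a toy three-stage pipeline. This is a conceptual illustration only, not the VM Turntable implementation; the stage bodies are placeholders.

```python
# Toy three-stage pipeline: assemble -> verify -> transfer, overlapped via queues.
import hashlib
import queue
import threading

def assemble(pages, out_q):
    for page in pages:                        # stage 1: produce memory/disk chunks
        out_q.put(page)
    out_q.put(None)                           # sentinel: no more data

def verify(in_q, out_q):
    while (page := in_q.get()) is not None:   # stage 2: checksum while stage 1 continues
        out_q.put((hashlib.sha256(page).hexdigest(), page))
    out_q.put(None)

def transfer(in_q, sent):
    while (item := in_q.get()) is not None:   # stage 3: network send would go here
        digest, page = item
        sent.append(len(page))

pages = [bytes([i]) * 4096 for i in range(8)]       # toy stand-in for VM memory pages
q1, q2, sent = queue.Queue(maxsize=4), queue.Queue(maxsize=4), []
threads = [threading.Thread(target=assemble, args=(pages, q1)),
           threading.Thread(target=verify, args=(q1, q2)),
           threading.Thread(target=transfer, args=(q2, sent))]
for t in threads: t.start()
for t in threads: t.join()
print(sum(sent), "bytes moved through the overlapped stages")
```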

World’s First Demonstration of X GRID Application Switching using User Controlled LightPaths

http://www.igrid2005.org/program/applications/lambdaservices_xgrid.html

Collaborators CANARIE, Canada Communications Research Centre (CRC), Canada University of Waterloo, Canada i2CAT/Universitat Politècnica de Catalunya, Spain National Center for High Performance Computing (NCHC), Taiwan Korea Institute of Science and Technology Information (KISTI), Korea Gwangju Institute of Science and Technology, Korea

Researchers from CANARIE and CRC in Canada, NCHC in Taiwan, KISTI in Korea and i2CAT in Spain are collaborating on the world’s first demonstration of multiple end users dynamically controlling the setup and switching of lightpaths to various grid application resources worldwide using CANARIE’s UCLP (User Controlled LightPath) management software. In the future, grid users will be able to quickly configure (and reconfigure) global optical lightpaths among collaborating research institutions so their applications can exchange data in real time.

2.B.6. Other NSF-Funded Infrastructure Leveraging of Resources

Application-Level Performance Measurement

LambdaStream, developed by EVL with NSF OptIPuter funding, is an application-level data transport protocol that supports high-bandwidth multi-gigabit streaming for network-intensive applications. An application-level protocol enables users to select and modify available protocols. Systems are capable of sending data with TCP or UDP protocols, but typically special sys-admin privileges or machine modifications are necessary to change and/or optimize them when sending large datasets over high-performance networks. LambdaStream enables the user, or application, to achieve high throughput over high-speed networks by building on top of existing TCP and UDP protocols without any machine modification or special sys-admin privileges. LambdaStream’s key characteristics include loss recovery, unique adaptive rate control, and configurability as both a reliable and an unreliable transport protocol. LambdaStream is also instrumented, meaning that it keeps track of the amount of data it is sending and the rate at which it is being sent.
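The general idea of application-level, rate-controlled UDP transport, pacing datagrams in user space with no kernel changes or administrative privileges, can be illustrated with a toy sender. This sketch is NOT LambdaStream itself: the names and framing are invented, and loss recovery, adaptive rate control, and instrumentation are omitted.

```python
# Toy illustration of user-space paced UDP sending (concept only, not LambdaStream).
import socket
import struct
import time

def paced_send(dest, payload, rate_gbps, chunk=8192):
    """Send payload over UDP, sleeping between datagrams to hold an average rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = chunk * 8 / (rate_gbps * 1e9)          # seconds per datagram at target rate
    next_send = time.monotonic()
    for seq, off in enumerate(range(0, len(payload), chunk)):
        sock.sendto(struct.pack("!Q", seq) + payload[off:off + chunk], dest)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:
            time.sleep(delay)                          # pace to the requested rate

# paced_send(("receiver.example.net", 6000), b"\x00" * 10_000_000, rate_gbps=0.5)
```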

We maintain that meaningful lambda usage measurements are those taken by the applications themselves. We would like to duplicate the following tests over IRNC links to Amsterdam (via MAN LAN and StarLight).


OptIPuter Cluster-to-Cluster Network Performance Experiments Over CAVEwave and TeraWave

In January 2006, the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago performed file-transfer tests to determine whether the same network performance obtained over L2 switched networks could be obtained with dedicated 10Gbps paths on L3 routed networks using specialized transport protocols, enabling data-intensive science applications to have more options when sending large datasets over wide-area networks. This work was supported by the NSF-funded OptIPuter project.

Specifically, these tests compared throughput over a TeraGrid 10Gbps SONET routed network using MPLS between Chicago and San Diego (herein called “TeraWave”) and a 10Gbps switched LAN PHY network between the same locations (called “CAVEwave”). The file-transfer programs invoked LambdaStream, an EVL-developed, application-level, reliable-UDP protocol, to stream visualization datasets, and the measured performance across the two networks was comparable. In the coming year, we would like to duplicate these tests, or similar tests, over the IRNC links to University of Amsterdam (via MAN LAN and StarLight).

EVL has a 10Gbps connection between its location on the UIC campus and the StarLight facility in downtown Chicago, where the link interconnects with StarLight’s Force10 switch. A 30-node OptIPuter cluster at EVL, with 1Gbps NICs per node, connects to an in-house switch that aggregates traffic and sends it to StarLight. The Force10 connects directly to CAVEwave, which is a 10Gbps wave on the National LambdaRail (NLR); the circuit connects Chicago to San Diego through Seattle, where it connects to a 28-node OptIPuter cluster at the Calit2 building on the University of California, San Diego (UCSD) campus. The Force10 also connects to the TeraGrid Juniper T640, also located at StarLight; its circuit connects Chicago to Los Angeles to the UCSD San Diego Supercomputer Center where, for this test, it was directly connected to the same OptIPuter cluster at Calit2. Given their footprints, the CAVEwave has an average round-trip-time (RTT) of 78.3ms, and the TeraWave has an average RTT of 56.2ms.

The LambdaStream protocol was used to send multiple 1Gbps data streams over each of these networks to saturate the 10Gbps links. The dataset used was a 0.3-meter high-resolution map of 5,000 square miles of the city of Chicago created by the U.S. Geological Survey (USGS) National Center for Earth Resources Observation and Science (EROS). The map consists of 3,000 files of tiled images that are 75MB each, for a total of 220GB of information.

EVL performed three different LambdaStream tests over the two separate TeraWave and CAVEwave networks: moving data from memory to memory, from disk to memory, and from disk to disk. MRTG (Multi Router Traffic Grapher) was used to monitor traffic through StarLight’s Force10 switch, and a similar MRTG-like traffic utilization tool called Cricket was used to monitor traffic through the TeraGrid’s Juniper T640 router at StarLight. To determine the maximum throughput to be expected, researchers first used the iperf bandwidth measurement tool to send the same data using the unreliable UDP protocol. The performance metrics achieved are in the table below.

Using iperf, the maximum throughput achieved over the switched 10Gbps CAVEwave network was 9.75Gbps, and the maximum throughput over the routed 10Gbps TeraWave was 9.45Gbps. Using LambdaStream, CAVEwave achieved 9.23Gbps for reliable memory-to-memory, 9.21Gbps for reliable disk-to-memory, and 9.30Gbps for reliable disk-to-disk transfers; TeraWave achieved 9.16Gbps for reliable memory-to-memory, 9.15Gbps for reliable disk-to-memory, and 9.22Gbps for reliable disk-to-disk transfers. Slight variations in traffic speed, depending on whether the data originated in Chicago or in San Diego, are due to asymmetries in the infrastructure and were negligible.

LambdaStream Throughput Over 10Gbps Links

                                 iperf: Unreliable UDP    Reliable UDP             Reliable UDP             Reliable UDP
                                 (Gb)                     Memory-to-Memory (Gb)    Disk-to-Memory (Gb)      Disk-to-Disk (Gb)
                                 10 streams/direction¹    10 streams/direction²    16 streams/direction³    20 streams/direction³
CAVEwave Chicago to San Diego    9.73                     9.19                     9.08                     9.30
CAVEwave San Diego to Chicago    9.75                     9.23                     9.21                     9.01
TeraWave Chicago to San Diego    9.43                     9.14                     9.03                     9.22
TeraWave San Diego to Chicago    9.45                     9.16                     9.15                     9.02

1: Using iperf, CAVEwave moved 10 UDP streams with 0% packet loss and effective throughputs, as shown. Using iperf, TeraWave moved 10 UDP streams with 1% packet loss and effective throughputs, as shown. 2: Bidirectionally, CAVEwave achieved 18.19Gbps and TeraWave achieved 18.06Gbps doing memory-to-memory transfers. 3: Disk bandwidth is 500Mbps maximum, so more nodes were used to saturate the 10Gbps link.
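Footnote 3 can be made explicit with a short calculation: given the per-node disk and NIC ceilings described above, the number of parallel streams needed to fill a 10Gbps path follows directly.

```python
# Worked version of footnote 3: streams needed to saturate a 10 Gbps path.
import math

link_gbps = 10
disk_mbps = 500     # per-node disk bandwidth ceiling quoted in footnote 3
nic_gbps = 1        # per-node NIC from the cluster description above

print(math.ceil(link_gbps * 1000 / disk_mbps))   # 20 disk-bound streams (disk-to-disk)
print(math.ceil(link_gbps / nic_gbps))           # 10 NIC-bound streams (memory-to-memory)
```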


The MRTG-measured bandwidth speeds were verified by LambdaStream, which is instrumented to monitor the amount of information it sends and receives. Moreover, MRTG measurements are five-minute averages, but LambdaStream measurements are continuous, and therefore more accurate. For several years now, EVL has been designing application-level protocols and instrumenting them to do their own traffic measurement. These protocols have been used for routed (L3) and switched (L2) networks, but are necessary when using L1 optical switches, as bandwidth utilization cannot be measured any other way.

The results of this experiment show that data-intensive science can effectively use hybrid networks (routed and switched) to move large files. Routed networks are typically shared networks, as the cost of routers makes it prohibitive to permanently dedicate links among specific sites. Switched networks are less costly, so they can be dedicated, and they provide applications with known and knowable characteristics, such as guaranteed bandwidth (for data movement), guaranteed latency (for visualization, collaboration and data analysis) and guaranteed scheduling (for remote instruments). While this experiment showed that dedicated (unshared) paths on routed networks using specialized transport protocols can support LambdaStream file transfers as well as switched networks can, the routed approach relies on the availability of large routers with OC-192 ports, which are significantly more expensive than ports on switches. These tests were performed on transcontinental networks; however, we believe that similar results can be achieved using hybrid transoceanic networks.

Weekly graphs showing 30-minute averages for the three tests (memory-to-memory, disk-to-memory and disk-to-disk) show CAVEwave usage (left − week of January 4-11) and TeraGrid usage (right − week of January 6-13). These tests coincided with other experiments on the same links, so time was shared; hence the “spaces” between the test spikes. Green represents traffic into Chicago and blue represents traffic out of Chicago.

CAVEwave maximum into Chicago was 9,295.0Mbps (92.9% utilization) and maximum out of Chicago was 9,124.8Mbps (91.2% utilization). TeraWave maximum into Chicago was 9,129.3Mbps (91.3% utilization) and maximum out of Chicago was 9,275.7Mbps (92.7% utilization).

UDP Offload Engines in LambdaGrids

Though TCP/IP is considered the de facto standard for Internet-related wide-area computing, its failure for LambdaGrids is well documented. UDP/IP-based protocols are emerging as a feasible solution for meeting the performance goals in such environments. While such protocols have been able to avoid most of the drawbacks of TCP/IP, they are still plagued by the drawbacks of UDP/IP, such as a host-based implementation, limiting their performance on high-speed networks. Researchers have attempted to fix this drawback in TCP/IP using hardware-offloaded TCP/IP implementations, such as TCP Offload Engines (TOEs). Given these two orthogonal developments, it is not completely clear which is the better solution; i.e., a rate-controlled UDP/IP-based protocol implemented in the host or a hardware-offloaded TCP/IP solution, such as the TOE. We combined the benefits of both these solutions to design and develop an emulated UDP Offload Engine (UOE) based on the Chelsio T110 TOE, a solution that transparently improves the performance of existing UDP/IP-based applications. Evaluations of our emulated UOE stack show that our design can achieve up to 35% improvement in performance while maintaining significantly lower CPU usage. In the future, we plan to design a version of LambdaStream that utilizes the UOE and benefits from the rate control on the network adapter.

UIC/EVL recently published the paper “A Case for UDP Offload Engines in LambdaGrids” at the Fourth International Workshop on Protocols for Fast Long-Distance Networks (PFLDnet 2006) in Nara, Japan (see Section 3.A).


2.C. Research Training

National Research Network (NRN) management and engineers from Internet2, ESnet, DANTE and NLR work closely with IRNC management and engineers at UIC and SURFnet, as well as at MAN LAN, StarLight, and NetherLight, to facilitate connectivity and greater advances in global networking than a single-investigator effort would afford. In addition, numerous researchers, middleware developers, network engineers and international NRNs are involved as users of TransLight. This global, dedicated community has elected to work together, on a persistent basis, to further the goals of international e-science collaboration.

2.D. Education/Outreach

TransLight/StarLight’s primary education and outreach activities include web documentation, journal articles, and conference presentations and demonstrations. We also provide PowerPoint presentations and other teaching materials to collaborators to give presentations at many conferences, government briefings, etc. Since 1986, EVL has partnered with NCSA, ANL, and more recently NU/iCAIR, in ongoing efforts to develop national/international collaborations at major professional conferences, notably ACM/IEEE Supercomputing (SC), IEEE High Performance Distributed Computing (HPDC), and Internet2 meetings. We have participated in European conferences (e.g., GLIF/LambdaGrid Workshops), NORDUnet annual meetings and a UKERNA seminar on optical networking. Our success has been in the development of teams, tools, hardware, system software, and human interface models on an accelerated schedule to enable multi-site collaborations for complex problem solving. We organized iGrid 2005 in San Diego in September 2005, and participated in the SC|05 conference in Seattle in November 2005, to promote the goals of IRNC and TransLight/StarLight.


3. Publications and Products

3.A. Journals/Papers

V. Vishwanath, P. Balaji, W. Feng, J. Leigh and D.K. Panda, “A Case for UDP Offload Engines in LambdaGrids,” Fourth International Workshop on Protocols for Fast Long-Distance Networks (PFLDnet 2006), Nara, Japan, February 2-3, 2006. <http://www.evl.uic.edu/cavern/rg/20051031_vishwanath>

Tom DeFanti, Maxine Brown, Joe Mambretti, John Silvester, Ron Johnson, “TransLight, a Major US Component of the GLIF,” Cyberinfrastructure Technology Watch (CTWatch) Quarterly, Volume 1, Number 2, May 2005. <www.ctwatch.org/quarterly/articles/2005/05/trans-light>

3.B. Books/Publications

None at this time.

3.C. Internet Dissemination

www.startap.net/translight

3.D. Other Specific Products

Other than the information reported here, we have not developed any other specific product of significance.


4. Contributions

4.A. Contributions within Discipline

TransLight/StarLight, by its very nature, is interdisciplinary. There is clearly a fine team of computer scientists, computational scientists and networking engineers involved with TransLight, facilitating greater advances in global networking than single-investigator efforts could produce. TransLight developed its management team in the Chicago area (UIC/EVL), and leveraged the technical and administrative contacts of national networking groups (Internet2, ESnet and NLR) and international NRNs (DANTE and SURFnet).

4.B. Contributions to Other Disciplines

Within the Computational Science and the Computer Science communities, TransLight/StarLight efforts help lead 21st-century discipline science and computer science innovation. TransLight’s OC-192 L3 circuit among Abilene, ESnet and GÉANT2 provides greater connectivity, and the OC-192 L2 circuit between StarLight and NetherLight provides a unique infrastructure to study the effects of long-distance, high-bandwidth networks on advanced applications.

4.C. Contributions to Human Resource Development

We promote TransLight/StarLight through web documentation, journal articles, demonstrations and presentations at major networking conferences (e.g., Supercomputing, HPDC and Internet2), PowerPoint presentations and other instructional material. We teach the infrastructure, the grid advancements, the technological innovations and the application advancements that global connectivity enables. In fact, thanks to previous NSF funding of STAR TAP, StarLight and Euro-Link, STAR TAP has a mailing list <[email protected]> of ~1,000 individuals, from academia, government and industry, interested in information about international advanced networking developments.

4.D. Contributions to Resources for Research and Education

TransLight/StarLight is a necessary and integral part of application advances and technological innovations for the Computational Science and Computer Science communities, as well as of major interest to network engineers. In particular, the L2 TransLight/StarLight circuit between StarLight and NetherLight represents a major resource for science and technology.

4.E. Contributions Beyond Science and Engineering

Because of TransLight/StarLight’s interest in advanced applications and light-path provisioning, we often get inquiries from network equipment manufacturers and telecommunication providers about partnering with us to create and showcase a marketplace for wavelength-based network services and products. We look forward to working with these companies and introducing them to the Nation’s foremost university and Federal laboratory networking engineers, computer programmers and applications scientists, who are developing and using today’s evolving grid technologies. Our users expect us to grow in capacity and sophistication, and we look forward to the engineering challenges ahead.


5. Special Requirements

5.A. Objectives and Scope

A brief summary of the work to be performed during the next year of support if changed from the original proposal: Our scope of work has not changed.

5.B. Special Reporting Requirements

Do special terms and conditions of your award require you to report any specific information that you have not yet reported? No.

5.C. Animals, Biohazards, Human Subjects

Has there been any significant change in animal care and use, biohazards, or use of human subjects from what was originally approved (or approved later)? No.


6. Program Plan

6.A. Plans and Milestones

In cooperation with US and European national research and education networks, TransLight/StarLight will continue to implement a strategy to best serve established production science, including usage by those scientists, engineers and educators who have persistent large-flow, real-time, and other advanced application requirements.

6.B. Proposed Activities

6.B.1. Capacity Planning and Circuit Upgrades

Working with SURFnet, we will procure and implement multi-10Gbps transatlantic networks. Working with Internet2/Abilene and DANTE/GÉANT2, we will implement new circuit engineering (see Section 2.A.3, PoP Connectivity and Peering, IRNC Circuit Diagram 2).

Based on current best networking practices in Europe, Canada, and the US, and working in cooperation with Rick Summerhill of Internet2/Abilene and Roberto Sabatino of DANTE/GÉANT2, we will determine if there is adequate surplus bandwidth on the NYC/AMS link to groom it into shared L3 routed networking and some number of GigE circuits. If the NYC/AMS link is operating at maximum reasonable capacity, or if measurements indicate that it will achieve saturation in Year 2, we can also move more traffic to the CHI/AMS link.

6.B.2. Network Operations and Engineering

We will continue to work with our IRNC and TransLight/StarLight partners to investigate and provide our users with advanced networking technologies and services.

We will also work with IRNC members on measurement and performance tools, as appropriate, and evaluate metrics, implementation and analysis methods conducted by NSF-designated entities and recommend any required adjustments.

6.B.3. Community Support and Satisfaction

In the coming months, we want to spend time analyzing the ~400 pages of letters of support received for the joint IRNC TransLight/StarLight and TransLight/Pacific Wave proposals we submitted in response to the IRNC solicitation, to identify candidate early adopters of GigE circuits for production science. We will create a report of our analysis and, more importantly, we will work with these users and potential users with large-flow applications to provision persistent transatlantic GigE circuits for their use.

Moreover, we will work with NLR, SURFnet/NetherLight, Internet2/Abilene, DANTE/GÉANT2 and relevant regional entities to identify and encourage lambda usage among US/European computer scientists and discipline scientists who have ongoing large-flow, real-time, and other advanced application requirements. We will work with our partners to connect these scientists with GigE circuits end-to-end, provided that NSF agrees that such effort is within scope and that usage can be adequately measured; thereby meeting NSF’s expectations.

More effort will be spent documenting our activities on the TransLight/StarLight website.

Also, we fully intend to continue our involvement in national and international network meetings, including IRNC, JET, Internet2, NLR and GLIF meetings, as appropriate.