
  • DOCUMENT RESUME

    ED 336 066 IR 015 073

    TITLE High Performance Computing and Networking for Science--Background Paper.

    INSTITUTION Congress of the U.S., Washington, D.C. Office of Technology Assessment.

    REPORT NO OTA-BP-CIT-59
    PUB DATE Sep 89
    NOTE 51p.; For reports and hearings on the High Performance Computing Acts of 1989, 1990, and 1991, see ED 323 244, ED 329 226, and ED 332 693-694.

    AVAILABLE FROM Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402-9325 (Stock No. 052-003-01164-6; $2.25).

    PUB TYPE Information Analyses (070)

    EDRS PRICE MF01/PC03 Plus Postage.

    DESCRIPTORS *Computer Networks; Federal Government; Federal Legislation; *Information Networks; *Information Technology; International Programs; *National Programs; Public Policy; *Research and Development; Telecommunications

    IDENTIFIERS *High Performance Computing; *National Research and Education Network; Supercomputers

    ABSTRACT The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This paper offers a view of the issues and their implications for current discussions about Federal supercomputer initiatives and legislative initiatives concerning a national data communication network. The observations to date emphasize the critical importance of advanced information technology to research and development in the United States, the interconnection of these telecommunications technologies into a national system, and the need for immediate and coordinated Federal action to bring into being an advanced information technology infrastructure to support U.S. research, engineering, and education. High performance computers are discussed in detail using the Cornell Theory Center, the National Center for Supercomputing Applications, the Pittsburgh Supercomputing Center, the San Diego Supercomputer Center, and the John von Neumann National Supercomputer Center as examples. Several high performance computer facilities at the state level are also reviewed, as well as changes in the scientific computing environment, the review and renewal of the National Science Foundation (NSF) Centers, and international programs in Japan and Europe. A detailed discussion of the status of and policy issues surrounding data networking for science, focused on the proposed National Research and Education Network (NREN), concludes the document. A list of reviewers and the names and affiliations of the High Performance Computing and Networking for Science Advisory Panel are included. (DB)

  • HIGH PERFORMANCE COMPUTING & NETWORKING FOR SCIENCE

    U.S. DEPARTMENT OF EDUCATION
    Office of Educational Research and Improvement

    EDUCATIONAL RESOURCES INFORMATION CENTER (ERIC)

    This document has been reproduced as received from the person or organization originating it.

    Minor changes have been made to improve reproduction quality.

    Points of view or opinions stated in this document do not necessarily represent official OERI position or policy.

  • Office of Technology Assessment

    Congressional Board of the 101st Congress

    EDWARD M. KENNEDY, Massachusetts, Chairman

    CLARENCE E. MILLER, Ohio, Vice Chairman

    Senate

    ERNEST F. HOLLINGS, South Carolina
    CLAIBORNE PELL, Rhode Island
    TED STEVENS, Alaska
    ORRIN G. HATCH, Utah
    CHARLES E. GRASSLEY, Iowa

    House

    MORRIS K. UDALL, Arizona
    GEORGE E. BROWN, JR., California
    JOHN D. DINGELL, Michigan
    DON SUNDQUIST, Tennessee
    AMO HOUGHTON, New York

    JOHN H. GIBBONS (Nonvoting)

    Advisory Council

    DAVID S. POTTER, Chairman, General Motors Corp. (Ret.)
    CHASE N. PETERSON, Vice Chairman, University of Utah
    CHARLES A. BOWSHER, General Accounting Office
    MICHEL T. HALBOUTY, Michel T. Halbouty Energy Co.
    NEIL E. HARL, Iowa State University
    JAMES C. HUNT, University of Tennessee
    HENRY KOFFLER, University of Arizona
    JOSHUA LEDERBERG, Rockefeller University
    WILLIAM J. PERRY, H&Q Technology Partners
    SALLY RIDE, California Space Institute
    JOSEPH E. ROSS, Congressional Research Service
    JOHN F.M. SIMS, Usibelli Coal Mine, Inc.

    Director

    JOHN H. GIBBONS

  • HIGH PERFORMANCE COMPUTING & NETWORKING

    FOR SCIENCE

    BACKGROUND PAPER

    CONGRESS OF THE UNITED STATES OFFICE OF TECHNOLOGY ASSESSMENT


  • Recommended Citation:

    U.S. Congress, Office of Technology Assessment, High Performance Computing and Networking for Science--Background Paper, OTA-BP-CIT-59 (Washington, DC: U.S. Government Printing Office, September 1989).

    Library of Congress Catalog Card Number 89-600758

    For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402-9325

    (Order form can be found in the back of this report.)


  • Foreword

    Information technology is fundamental to today's research and development: high performance computers for solving complex problems; high-speed data communication networks for exchanging scientific and engineering information; very large electronic archives for storing scientific and technical data; and new display technologies for visualizing the results of analyses.

    This background paper explores key issues concerning the Federal role in supporting national high performance computing facilities and in developing a national research and education network. It is the first publication from our assessment, Information Technology and Research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation.

    OTA gratefully acknowledges the contributions of the many experts, within and outside the government, who served as panelists, workshop participants, contractors, reviewers, detailees, and advisers for this document. As with all OTA reports, however, the content is solely the responsibility of OTA and does not necessarily constitute the consensus or endorsement of the advisory panel, workshop participants, or the Technology Assessment Board.

    JOHN H. GIBBONS
    Director

  • High Performance Computing and Networking for Science Advisory Panel

    John P. (Pat) Crecine, Chairman, President, Georgia Institute of Technology

    Charles Bender, Director, Ohio Supercomputer Center

    Charles DeLisi, Chairman, Department of Biomathematical Science, Mount Sinai School of Medicine

    Deborah L. Estrin, Assistant Professor, Computer Science Department, University of Southern California

    Robert Ewald, Vice President, Software, Cray Research, Inc.

    Kenneth Flamm, Senior Fellow, The Brookings Institution

    Malcolm Getz, Associate Provost, Information Services & Technology, Vanderbilt University

    Ira Goldstein, Vice President, Research, Open Software Foundation

    Robert E. Kraut, Manager, Interpersonal Communications Group, Bell Communications Research

    Lawrence Landweber, Chairman, Computer Science Department, University of Wisconsin-Madison

    Carl Ledbetter, President/CEO, ETA Systems

    Donald Marsh, Vice President, Technology, Contel Corp.

    Michael J. McGill, Vice President, Technical Assessment & Development, OCLC Online Computer Library Center, Inc.

    Kenneth W. Neves, Manager, Research & Development Program, Boeing Computer Services

    Bernard O'Lear, Manager of Systems, National Center for Atmospheric Research

    William Poduska, Chairman of the Board, Stellar Computer, Inc.

    Elaine Rich, Director, Artificial Intelligence Lab, Microelectronics and Computer Technology Corp.

    Sharon J. Rogers, University Librarian, Gelman Library, The George Washington University

    William Schrader, President, NYSERNET

    Kenneth Toy, Post-Graduate Research Geophysicist, Scripps Institution of Oceanography

    Keith Uncapher, Vice President, Corporation for National Research Initiatives

    Al Weis, Vice President, Engineering & Scientific Computing, Data Systems Division, IBM Corp.

    NOTE: OTA is grateful for the valuable assistance and thoughtful critiques provided by the advisory panel. The views expressed in this OTA background paper, however, are the sole responsibility of the Office of Technology Assessment.

  • OTA Project Staff--High Performance Computing

    John Andelin, Assistant Director, OTA, Science, Information, and Natural Resources Division

    James W. Curlin, Program Manager, Communication and Information Technologies Program

    Fred W. Weingarten, Project Director

    Charles N. Brownstein, Senior Analyst[1]

    Lisa Heinz, Analyst

    Elizabeth I. Miller, Research Assistant

    Administrative Staff

    Elizabeth Emanuel, Administrative Assistant

    Karolyn Swauger, Secretary

    Jo Anne Price, Secretary

    [1] Detailee from NSF

    Other Contributors

    Bill Bartelone, Legislative/Federal Program Manager, Cray Research, Inc.

    Mervin Jones, Program Analyst, Defense Automation Resources Information Center

    Timothy Lynagh, Supervisory Data and Program Analyst, General Services Administration

  • List of Reviewers

    Janice Abraham, Executive Director, Cornell Theory Center, Cornell University

    Lee R. Alley, Assistant Vice President for Information Resources Management, Arizona State University

    James Almond, Director, Center for High Performance Computing, Balcones Research Center

    Julius Archibald, Department Chairman, Department of Computer Science, State University of New York College at Plattsburgh

    J. Gary Augustson, Executive Director, Computer and Information Systems, Pennsylvania State University

    Philip Austin, President, Colorado State University

    Steven C. Beering, President, Purdue University

    Jerry Berkman, Fortran Specialist, Central Computing Services, University of California at Berkeley

    Kathleen Bernard, Director for Science Policy and Technology Programs, Cray Research, Inc.

    Justin L. Bloom, President, Technology International, Inc.

    Charles N. Brownstein, Executive Officer, Computing & Information Science & Engineering, National Science Foundation

    Eloise E. Clark, Vice President, Academic Affairs, Bowling Green University

    Paul Coleman, Professor, Institute of Geophysics and Space Physics, University of California

    Michael R. Dingerson, Associate Vice Chancellor for Research and Dean of the Graduate School, University of Mississippi

    Christopher Eoyang, Director, Institute for Supercomputing Research

    David Farber, Professor, Computer & Information Science Department, University of Pennsylvania

    Sidney Fernbach, Independent Consultant

    Susan Fratkin, Director, Special Programs, NASULGC

    Doug Gale, Director, Computer Research Center, Office of the Chancellor, University of Nebraska-Lincoln

    Robert Gillespie, President, Gillespie, Folkner & Associates, Inc.

    Eiichi Goto, Director, Computer Center, University of Tokyo

    C.K. Gunsalus, Assistant Vice Chancellor for Research, University of Illinois at Urbana-Champaign

    Judson M. Harper, Vice President of Research, Colorado State University

    Gene Hemp, Senior Associate V.P. for Academic Affairs, University of Florida

    Nobuaki Ieda, Senior Vice President, NTT America, Inc.

    Hiroshi Inose, Director General, National Center for Science Information System

    Heidi James, Executive Secretary, United States Activities Board, IEEE

    Russell C. Jones, University Research Professor, University of Delaware

    Brian Kahin, Esq., Research Affiliate on Communications Policy, Massachusetts Institute of Technology

    Robert Kahn, President, Corporation for National Research Initiatives

    Hisao Kanai, Executive Vice President, NEC Corporation

    Hiroshi Kashiwagi, Deputy Director-General, Electrotechnical Laboratory

    Lauren Kelly, Department of Commerce

    Thomas Keyes, Professor of Chemistry, Boston University

    (Continued on next page)

  • List of Reviewers (continued)

    Doyle Knight, President, John von Neumann National Supercomputer Center, Consortium for Scientific Computing

    Mike Levine, Co-director of the Pittsburgh Supercomputing Center, Carnegie Mellon University

    George E. Lindamood, Program Director, Industry Service, Gartner Group, Inc.

    M. Stuart Lynn, Vice President for Information Technologies, Cornell University

    Ikuo Makino, Director, Electrical Machinery & Consumer Electronics Division, Ministry of International Trade and Industry

    Richard Mandelbaum, Vice Provost for Computing, University of Rochester

    Martin Massengale, Chancellor, University of Nebraska-Lincoln

    Gerald W. May, President, University of New Mexico

    Yoshiro Miki, Director, Policy Research Division, Science and Technology Policy Bureau, Science and Technology Agency

    Takeo Miura, Senior Executive Managing Director, Hitachi, Ltd.

    J. Gerald Morgan, Dean of Engineering, New Mexico State University

    V. Rama Murthy, Vice Provost for Academic Affairs, University of Minnesota

    Shoichi Ninomiya, Executive Director, Fujitsu Limited

    Bernard O'Lear, Manager of Systems, National Center for Atmospheric Research

    Ronald Orcutt, Executive Director, Project Athena, MIT

    Tad Pinkerton, Director, Office of Information Technology, University of Wisconsin-Madison

    Harold J. Raveche, President, Stevens Institute of Technology

    Ann Redelfs, Manager of Information Services, Cornell Theory Center, Cornell University

    Glenn Ricart, Director, Computer Science Center, University of Maryland at College Park

    Ira Richer, Program Manager, DARPA/ISTO

    John Riganati, Director of Systems Research, Supercomputer Research Center, Institute for Defense Analyses

    Mike Roberts, Vice President, EDUCOM

    David Roselle, President, University of Kentucky

    Nora Sabelli, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign

    Steven Sample, President, SUNY, Buffalo

    John Sell, President, Minnesota Supercomputer Center

    Hiroshi Shima, Deputy Director-General for Technology Affairs, Agency of Industrial Science and Technology, MITI

    Yoshio Shimamoto, Senior Scientist (Retired), Applied Mathematics Department, Brookhaven National Laboratory

    Charles Sorber, Dean, School of Engineering, University of Pittsburgh

    Harvey Stone, Special Assistant to the President, University of Delaware

    Dan Sulzbach, Manager, User Services, San Diego Supercomputer Center

    Tatsuo Tanaka, Executive Director, Interoperability Technology Association for Information Processing, Japan

    Ray Toland, President, Alabama Supercomputing Network Authority

    Kenneth Tolo, Vice Provost, University of Texas at Austin

    Kenneth Toy, Post-Graduate Research Geophysicist, Scripps Institution of Oceanography

    August B. Turnbull, III, Provost & Vice President, Academic Affairs, Florida State University

    (Continued on next page)

  • List of Reviewers (continued)

    Gerald Turner, Chancellor, University of Mississippi

    Douglas Van Houweling, Vice Provost for Information & Technology, University of Michigan

    Anthony Villasenor, Program Manager, Science Networks, Office of Space Science and Applications, National Aeronautics and Space Administration

    Hugh Walsh, Data Systems Division, IBM

    Richard West, Assistant Vice President, IS&AS, University of California

    Steve Wolff, Program Director for Networking, Computing & Information Science & Engineering, National Science Foundation

    James Woodward, Chancellor, University of North Carolina at Charlotte

    Akihiro Yoshikawa, Research Director, BRIE/IIS, University of California, Berkeley

    NOTE: OTA is grateful for the valuable assistance and thoughtful critiques provided by the advisory panel. The views expressed in this OTA background paper, however, are the sole responsibility of the Office of Technology Assessment.

  • Contents

    Chapter 1: Introduction and Overview 1
      Observations 1
      RESEARCH AND INFORMATION TECHNOLOGY--A FUTURE SCENARIO 1
      MAJOR ISSUES AND PROBLEMS 1
      NATIONAL IMPORTANCE--THE NEED FOR ACTION 3
        Economic Importance 3
        Scientific Importance 4
        Timing 4
        Users 5
        Collaborators 5
        Service Providers 5

    Chapter 2: High Performance Computers 7
      WHAT IS A HIGH PERFORMANCE COMPUTER? 7
      HOW FAST IS FAST? 8
      THE NATIONAL SUPERCOMPUTER CENTERS 9
        The Cornell Theory Center 9
        The National Center for Supercomputing Applications 10
        Pittsburgh Supercomputing Center 10
        San Diego Supercomputer Center 10
        John von Neumann National Supercomputer Center 11
      OTHER HPC FACILITIES 11
        Minnesota Supercomputer Center 11
        The Ohio Supercomputer Center 12
        Center for High Performance Computing, Texas (CHPC) 12
        Alabama Supercomputer Network 13
        Commercial Labs 13
        Federal Centers 13
      CHANGING ENVIRONMENT 13
      REVIEW AND RENEWAL OF THE NSF CENTERS 15
      THE INTERNATIONAL ENVIRONMENT 16
        Japan 16
        Europe 17
        Other Nations 19

    Chapter 3: Networks 21
      THE NATIONAL RESEARCH AND EDUCATION NETWORK (NREN) 22
        The Origins of Research Networking 22
        The Growing Demand for Capability and Connectivity 23
        The Present NREN 23
        Research Networking as a Strategic High Technology Infrastructure 25
        Federal Coordination of the Evolving Internet 25
        Players in the NREN 26
        The NREN in the International Telecommunications Environment 28
      Policy Issues 28
        Planning Amidst Uncertainty 29
        Network Scope and Access 29
        Policy and Management Structure 31
        Financing and Cost Recovery 32
        Network Use 33
        Longer-Term Science Policy Issues 33
        Technical Questions 34
      Federal Agency Plans: FCCSET/FRICC 34
      NREN Management Desiderata 35

    Figures
      1-1. An Information Infrastructure for Research 2
      1-2. Distribution of Federal Supercomputers 14

    Tables
      2-1. Some Key Academic High Performance Computer Installations 12
      3-1. Principal Policy Issues in Network Development 30
      3-2. Proposed NREN Budget 36

  • Chapter 1

    Introduction and Overview

    Observations

    The Office of Technology Assessment is conducting an assessment of the effects of new information technologies--including high performance computing, data networking, and mass data archiving--on research and development. This background paper offers a midcourse view of the issues and discusses their implications for current discussions about Federal supercomputer initiatives and legislative initiatives concerning a national data communication network.

    Our observations to date emphasize the critical importance of advanced information technology to research and development in the United States, the interconnection of these technologies into a national system (and, as a result, the tighter coupling of policy choices regarding them), and the need for immediate and coordinated Federal action to bring into being an advanced information technology infrastructure to support U.S. research, engineering, and education.

    RESEARCH AND INFORMATION TECHNOLOGY--A FUTURE SCENARIO

    Within the next decade, the desks and laboratory benches of most scientists and engineers will be entry points to a complex electronic web of information technologies, resources and information services, connected together by high-speed data communication networks (see figure 1-1). These technologies will be critical to pursuing research in most fields. Through powerful workstation computers on their desks, researchers will access a wide variety of resources, such as:

    - an interconnected assortment of local campus, State and regional, national, and even international data communication networks that link users worldwide;
    - specialized and general-purpose computers including supercomputers, minisupercomputers, mainframes, and a wide variety of special architectures tailored to specific applications;
    - collections of application programs and software tools to help users find, modify, or develop programs to support their research;
    - archival storage systems that contain specialized research databases;
    - experimental apparatus--such as telescopes, environmental monitoring devices, seismographs, and so on--designed to be set up and operated remotely;
    - services that support scientific communication, including electronic mail, computer conferencing systems, bulletin boards, and electronic journals;
    - a "digital library" containing reference material, books, journals, pictures, sound recordings, films, software, and other types of information in electronic form; and
    - specialized output facilities for displaying the results of experiments or calculations in more readily understandable and visualizable ways.

    Many of these resources are already used in some form by some scientists. Thus, the scenario that is drawn is a straightforward extension of current usage. Its importance for the scientific community and for government policy stems from three trends: 1) the rapidly and continually increasing capability of the technologies; 2) the integration of these technologies into what we will refer to as an "information infrastructure"; and 3) the diffusion of information technology into the work of most scientific disciplines.

    Few scientists would use all the resources and facilities listed, at least on a daily basis; and the particular choice of resources eventually made available on the network will depend on how the tastes and needs of research users evolve. However, the basic form, high-speed data networks connecting user workstations with a worldwide assortment of information technologies and services, is becoming a crucial foundation for scientific research in most disciplines.

    MAJOR ISSUES AND PROBLEMS

    Developing this system to its full potential will require considerable thought and effort on the part of government at all levels, industry, research institutions, and the scientific community itself. It will present policymakers with some difficult questions and decisions.


  • Figure 1-1--An Information Infrastructure for Research

    [Figure: user workstations linked by networks and associated services to mainframes, supercomputers, special purpose computers, on-line experiments, electronic journals, electronic mail, bulletin boards, digital electronic libraries, and data archives.]

    SOURCE: Office of Technology Assessment, 1989.

    Scientific applications are demanding on technological capability. A substantial R&D component will need to accompany programs intended to advance R&D use of information technology. To realize the potential benefits of this new infrastructure, research users need advances in such areas as:

    - more powerful computer designs;
    - more powerful and efficient computational techniques and software;
    - very high-speed switched data communications;
    - improved technologies for visualizing data results and interacting with computers; and
    - new methods for storing and accessing information from very large data archives.

    An important characteristic of this system is that different parts of it will be funded and operated by different entities and made available to users in different ways. For example, databases could be operated by government agencies, professional societies, non-profit journals, or commercial firms. Computer facilities could similarly be operated by government, industry, or universities. The network, itself, already is an assemblage of pieces funded or operated by various agencies in the Federal Government; by States and regional authorities; and by local agencies, firms and educational institutions. Keeping these components interconnected technologically and allowing users to move smoothly among the resources they need will present difficult management and policy problems.

    Furthermore, the system will require significant capital investment to build and maintain, as well as specialized technical expertise to manage. How the various components are to be funded, how costs are to be allocated, and how the key components such as the network will be managed over the long term will be important questions.

    Since this system as envisioned would be so widespread and fundamental to the process of research, access to it would be crucial to participation in science. Questions of access and participation are crucial to planning, management, and policymaking for the network and for many of the services attached to it.

    Changes in information law brought about by the electronic revolution will create problems and conflicts for the scientific community and may influence how and by whom these technologies are used. The resolution of broader information issues--such as security and privacy, intellectual property protection, access controls on sensitive information, and government dissemination practices--could affect whether and how information technologies will be used by researchers and who may use them.

    Finally, to the extent that, over the long run, modern information technology becomes so fundamental to the research process, it will transform the very nature of that process and the institutions--libraries, laboratories, universities, and so on--that serve it. These basic changes in science would affect government both in the operation of its own laboratories and in its broader relationship as a supporter and consumer of research. Conflicts may also arise to the extent that government becomes centrally involved, both through funding and through management, with the traditionally independent and uncontrolled communication channels of science.

    NATIONAL IMPORTANCE--THE NEED FOR ACTION

    Over the last 5 years, Congress has become increasingly concerned about information technology and research. The National Science Foundation (NSF) has been authorized to establish supercomputer centers and a science network. Bills (S. 1067, H.R. 3131) are being considered in the Congress to authorize a major effort to plan and develop a national research and education network and to stimulate information technology use in science and education. Interest in the role information technology could play in research and education has stemmed, first, from the government's major role as a funder, user, and participant in research and, secondly, from concern for ensuring the strength and competitiveness of the U.S. economy.

    Observation 1: The Federal Government needs to establish its commitment to the advanced information technology infrastructure necessary for furthering U.S. science and education. This need stems directly from the importance of science and technology to economic growth, the importance of information technology to research and development, and the critical timing for certain policy decisions.

    Economic Importance

    A strong national effort in science and technology is critical to the long-term economic competitiveness, national security, and social well-being of the United States. That, in the modern international economy, technological innovation is concomitant with social and economic growth is a basic assumption held in most political and economic systems in the world these days; and we will take it here as a basic premise. It has been a basic finding in many OTA studies.[1] (This observation is not to suggest that technology is a panacea for all social problems, nor that serious policy problems are not often raised by its use.) Benefits from this infrastructure are expected to flow into the economy in three ways:

    First, the information technology industry can benefit directly.

    [1] For example, U.S. Congress, Office of Technology Assessment, Technology and the American Economic Transition, OTA-TET-283 (Washington, DC: U.S. Government Printing Office, May 1988) and Information Technology R&D: Critical Trends and Issues, OTA-CIT-268 (Washington, DC: U.S. Government Printing Office, February 1985).

  • Scientific use has always been a major source of innovation in computers and communications technology. Packet-switched data communication, now a widely used commercial offering, was first developed by the Defense Advanced Research Projects Agency (DARPA) to support its research community. Department of Energy (DOE) national laboratories have, for many years, made contributions to supercomputer hardware and software. New initiatives to develop higher speed computers and a national science network could similarly feed new concepts back to the computer and communications industry as well as to providers of information services.

    Secondly, by improving the tools and methodologies for R&D, the infrastructure will impact the research process in many critical high technology industries, such as pharmaceuticals, airframes, chemicals, consumer electronics, and many others. Innovation and, hence, international competitiveness in these key R&D-intensive sectors can be improved.

    The economy as a whole stands to benefit from increased technological capabilities of information systems and improved understanding of how to use them. A National Research and Education Network could be the precursor to a much broader high capacity network serving the United States, and many research applications developed for high performance computers result in techniques much more broadly applicable to commercial firms.

    Scientific Importance

    Research and development is, inherently, an information activity. Researchers generate, organize, and interpret information, build models, communicate, and archive results. Not surprisingly, then, they are now dependent on information technology to assist them in these tasks. Many major studies by many scientific and policy organizations over the years--as far back as the President's Science Advisory Committee (PSAC) in the middle 1960s, and as recently as a report by COSEPUP of the National Research Council published in 1988[2]--have noted these trends and analyzed the implications for science support. The key points are as follows:

    - Scientific and technical information is increasingly being generated, stored and distributed in electronic form;
    - Computer-based communications and data handling are becoming essential for accessing, manipulating, analyzing, and communicating data and research results; and
    - In many computationally intensive R&D areas, from climate research to groundwater modeling to airframe design, major advances will depend upon pushing the state of the art in high performance computing, very large databases, visualization, and other related information technologies. Some of these applications have been labeled "Grand Challenges." These projects hold promise of great social benefit, such as designing new vaccines and drugs, understanding global warming, or modeling the world economy. However, for that promise to be realized in those fields, researchers require major advances in available computational power.
    - Many proposed and ongoing "big science" projects, from particle accelerators and large array radio telescopes to the NASA EOS satellite project, will create vast streams of new data that must be captured, analyzed, archived, and made available to the research community. These new demands could well overtax the capability of currently available resources.

    Timing

    Government decisions being made now and in the near future will shape the long-term utility and effectiveness of the information technology infrastructure for science. For example:

    - NSF is renewing its multi-year commitments to all or most of the existing National Supercomputing Centers.
    - Executive agencies, under the informal auspices of the Federal Research Internet Coordinating Committee (FRICC), are developing a national "backbone" network for science. Decisions made now will have long term influence on the nature of the network, its technical characteristics, its cost, its management, services available on it, access, and the information policies that will govern its use.
    - The basic communications industry is in flux, as are the policies and rules by which government regulates it.
    - Congress and the Executive Branch are currently considering, and in some cases have started, several new major scientific projects, including a space station, the Earth Orbiting System, the Hubble space telescope, the superconducting supercollider, human genome mapping, and so on. Technologies and policies are needed to deal with these "firehoses of data." In addition, upgrading the information infrastructure could open these projects and data streams to broad access by the research community.

    [2] Panel on Information Technology and the Conduct of Research, Committee on Science, Engineering, and Public Policy, Information Technology and the Conduct of Research: The User's View (Washington, DC: National Academy Press, 1989).

    Observation 2: Federal policy in this area needs to be more broadly based than has been traditional with Federal science efforts. Planning, building, and managing the information technology infrastructure requires cutting across agency programs and the discipline and mission-oriented approach of science support. In addition, many parties outside the research establishment will have important roles to play and stakes in the outcome of the effort.

    The key information technologies--high performance computing centers, data communication networks, large data archives, along with a wide range of supporting software--are used in all research disciplines and support several different agency missions. In many cases, economies of scale and scope dictate that some of these technologies (e.g., supercomputers) be treated as common resources. Some, such as communication networks, are most efficiently used if shared or interconnected in some way.

    There are additional scientific reasons to treat information resources as a broadly used infrastructure: fostering communication among scientists between disciplines, sharing resources and techniques, and expanding access to databases and software, for instance. However, there are very few models from the history of Federal science support for creating and maintaining infrastructure-like resources for science and technology across agency and disciplinary boundaries. Furthermore, since the networks, computer systems, databases, and so on interconnect and users must move smoothly among them, the system requires a high degree of coordination rather than being treated as simply a conglomeration of independent facilities.

    However, if information technology resources for science are treated as infrastructure, a major policy issue is one of boundaries. Who is it to serve; who are its beneficiaries? Who should participate in designing it, building and operating it, providing services over it, and using it? The answers to these questions will also indicate to Congress who should be part of the policymaking and planning process; they will govern the long term scale, scope, and the technological characteristics of the infrastructure itself; and they will affect the patterns of support for the facilities. Potentially interested parties include the following:

    Users

    Potential users might include academic and industrial researchers, teachers, graduate, undergraduate, and high school students, as well as others such as the press or public interest groups who need access to and make use of scientific information. Institutions, such as universities and colleges, libraries, and schools also have user interests. Furthermore, foreign scientists working as part of international research teams or in firms that operate internationally will wish access to the U.S. system, which, in turn, will need to be connected with other nations' research infrastructures.

    Collaborators

    Another group of interested parties includes State and local governments and parts of the information industry. We have identified them with the term "collaborators" because they will be participating in funding, building, and operating the infrastructure. States are establishing State supercomputer centers and supporting local and regional networking, some computer companies participate in the NSF National Supercomputer Centers, and some telecommunication firms are involved in parts of the science network.

    Service Providers

    Finally, to the extent that the infrastructure serves as a basic tool for most of the research and development community, information service providers will require access to make their products available to scientific users. The service providers may include government agencies (which provide access to government scientific databases, for example), libraries and library utilities, journal and text-book publishers, professional societies, and private software and database providers.

    Observation 3: Several information policy issues will be raised in managing and using the network. Depending on how they are resolved, they could sharply restrict the utility and scope of network use in the scientific community.

    Security and privacy have already become of major concern and will pose a problem. In general, users will want the network and the services on it to be as open as possible; however, they will also want the networks and services to be as robust and dependable as possible--free from deliberate or accidental disruption. Furthermore, different resources will require different levels of security. Some bulletin boards and electronic mail services may want to be as open and public as possible; others may require a high level of privacy. Some databases may be unique and vital resources that will need a very high level of protection, others may not be so critical. Maintaining an open, easily accessible network while protecting privacy and valuable resources will require careful balancing of legal and technological controls.

    Intellectual property protection in an electronic environment may pose difficult problems. Providers will be concerned that electronic databases, software, and even electronic formats of printed journals and other writings will not be adequately protected. In some cases, the product, itself, may not be well protected under existing law. In other cases electronic formats coupled with a communications network erode the ability to control restrictions on copying and disseminating.

    Access controls may be called for on material that is deemed to be sensitive (although unclassified) for reasons of national security or economic competitiveness. Yet, the networks will be accessible worldwide and the ability to identify and control users may be limited.

    The above observations have been broad, looking at the overall collection of information technology resources for science as an integrated system and at the questions raised by it. The remaining portion of this paper will deal specifically with high performance computers and networking.


  • Chapter 2

    High Performance Computers

    An important set of issues has been raised during the last 5 years around the topic of high performance computing (HPC). These issues stem from a growing concern in both the executive branch and in Congress that U.S. science is impeded significantly by lack of access to HPC[1] and by concerns over the competitiveness implications of new foreign technology initiatives, such as the Japanese "Fifth Generation Project." In response to these concerns, policies have been developed and promoted with three goals in mind.

    1. To advance vital research applications currently hampered by lack of access to very high speed computers.

    2. To accelerate the development of new HPC technology, providing enhanced tools for research and stimulating the competitiveness of the U.S. computer industry.

    3. To improve software tools and techniques for using HPC, thereby enhancing their contribution to general U.S. economic competitiveness.

    In 1984, the National Science Foundation (NSF) initiated a group of programs intended to improve the availability and use of high performance computers in scientific research. As the centerpiece of its initiative, after an initial phase of buying and distributing time at existing supercomputer centers, NSF established five National Supercomputer Centers.

    Over the course of this and the next year, the initial multiyear contracts with the National Centers are coming to an end, which has provoked a debate about whether and, if so, in what form they should be renewed. NSF undertook an elaborate review and renewal process and announced that, depending on agency funding, it is prepared to proceed with renewing at least four of the centers.[2] In thinking about the next steps in the evolution of the advanced computing program, the science agencies and Congress have asked some basic questions. Have our perceptions of the needs of research for HPC changed since the centers were started? If so, how?


    Have we learned anything about the effectiveness of the National Centers approach? Should the goals of the Advanced Scientific Computing (ASC) and other related Federal programs be refined or redefined? Should alternative approaches be considered, either to replace or to supplement the contributions of the centers?

    OTA is presently engaged in a broad assessment of the impacts of information technology on research, and as part of that inquiry, is examining the question of scientific computational resources. It has been asked by the requesting committees for an interim paper that might help shed some light on the above questions. The full assessment will not be completed for several months, however; so this paper must confine itself to some tentative observations.

    WHAT IS A HIGH PERFORMANCE COMPUTER?

    The term "supercomputer" is commonly used in the press, but it is not necessarily useful for policy. In the first place, the definition of power in a computer is highly inexact and depends on many factors including processor speed, memory size, and so on. Secondly, there is not a clear lower boundary of supercomputer power. IBM 3090 computers come in a wide range of configurations, some of the largest of which are the basis of supercomputer centers at institutions such as Cornell and the Universities of Utah and Kentucky. Finally, technology is changing rapidly and with it our conceptions of power and capability of various types of machines. We use the more general term, "high performance computers," a term that includes a variety of machine types.

    One class of HPC consists of very large, powerful machines, principally designed for very large numerical applications such as those encountered in science. These computers are the ones often referred to as "supercomputers." They are expensive, costing up to several million dollars each.

    [1] Peter D. Lax, Report of the Panel on Large-Scale Computing in Science and Engineering (Washington, DC: National Science Foundation, 1982).

    [2] One of the five centers, the John von Neumann National Supercomputer Center, has been based on ETA-10 technology. The Center has been asked to resubmit a proposal showing revised plans in reaction to the withdrawal of that machine from the market.


    A large-scale computer's power comes from a combination of very high-speed electronic components and specialized architecture (a term used by computer designers to describe the overall logical arrangement of the computer). Most designs use a combination of "vector processing" and "parallelism." A vector processor is an arithmetic unit of the computer that performs a series of similar calculations in an overlapping, assembly-line fashion. (Many scientific calculations can be set up in this way.)
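
    As a minimal illustration (added here in C; it is a sketch, not an example from the report), consider the classic "SAXPY" loop. Each iteration applies the same multiply-add to the next pair of array elements, which is exactly the regular pattern a vector pipeline can overlap:

        #include <stddef.h>

        /* SAXPY: y[i] = a*x[i] + y[i].  Every iteration performs the same
           pair of floating point operations on consecutive elements, so a
           vector processor can stream operands through its arithmetic
           pipeline assembly-line fashion instead of completing one element
           before starting the next. */
        void saxpy(size_t n, double a, const double *x, double *y)
        {
            for (size_t i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }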

    Parallelism uses several processors, assuming that a problem can be broken into large independent pieces that can be computed on separate processors. Currently, large mainframe HPCs such as those offered by Cray and IBM are only modestly parallel, having as few as two and as many as eight processors.[3] The trend is toward more parallel processors on these large systems. Some experts anticipate machines with as many as 512 processors appearing in the near future. The key problem to date has been to understand how problems can be set up to take advantage of the potential speed advantage of larger scale parallelism.
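
    A sketch of what such a decomposition looks like is given below in C. It is illustrative only: the processor count P is an assumption matching the two-to-eight-processor machines just described, and the program is written serially so it runs anywhere; on a parallel machine each chunk could be handed to a separate processor, leaving only the short recombination step serial.

        #include <stdio.h>

        #define N 1000000
        #define P 8                     /* assumed processor count */

        static double x[N];

        int main(void)
        {
            double partial[P] = {0};

            /* Decomposition: the sum is cut into P independent pieces.
               Each pass of the outer loop touches only its own chunk and
               its own accumulator, so the passes could run simultaneously
               on separate processors. */
            for (int p = 0; p < P; p++) {
                size_t lo = (size_t)p * N / P;
                size_t hi = (size_t)(p + 1) * N / P;
                for (size_t j = lo; j < hi; j++)
                    partial[p] += x[j];
            }

            /* Recombination: the small serial step that remains and that
               ultimately limits the speedup from adding processors. */
            double total = 0.0;
            for (int p = 0; p < P; p++)
                total += partial[p];
            printf("sum = %g\n", total);
            return 0;
        }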

    Several machines are now on the market that are based on the structure and logic of a large supercomputer, but use cheaper, slower electronic components. These systems make some sacrifice in speed, but cost much less to manufacture. Thus, an application that is demanding, but that does not necessarily require the resources of a full-size supercomputer, may be much more cost effective to run on such a "minisuper."

    Other types of specialized systems have also appeared on the market and in the research laboratory. These machines represent attempts to obtain major gains in computation speed by means of fundamentally different architectures. They are known by colorful names such as "Hypercubes," "Connection Machines," "Dataflow Processors," "Butterfly Machines," "Neural Nets," or "Fuzzy Logic Computers." Although they differ in detail, many of these systems are based on large-scale parallelism. That is, their designers attempt to get increases in processing speed by hooking together in some way a large number--hundreds or even thousands--of simpler, slower and, hence, cheaper processors. The problem is that computational mathematicians have not yet developed a good theoretical or experiential framework for understanding in general how to arrange applications to take full advantage of these massively parallel systems. Hence, they are still, by and large, experimental, even though some are now on the market and users have already developed applications software for them. Experimental as these systems may seem now, many experts think that any significantly large increase in computational power eventually must grow out of experimental systems such as these or from some other form of massively parallel architecture.

    Finally, "workstations," the descendants of personal desktop computers, are increasing in power; new chips now in development will offer computing power nearly equivalent to a Cray 1 supercomputer of the late 1970s. Thus, although top-end HPCs will be correspondingly more powerful, scientists who wish to do serious computing will have a much wider selection of options in the near future.

    A few policy-related conclusions flow from this discussion:

    - The term "supercomputer" is a fluid one, potentially covering a wide variety of machine types, and the "supercomputer industry" is similarly increasingly difficult to identify clearly.
    - Scientists need access to a wide range of high performance computers, ranging from desktop workstations to full-scale supercomputers, and they need to move smoothly among these machines as their research needs dictate. Hence, government policy needs to be flexible and broadly based, not overly focused on narrowly defined classes of machines.

    HOW FAST IS FAST?

    Popular comparisons of supercomputer speeds are usually based on processing speed, the measure being "FLOPS," or "Floating Point Operations Per Second." The term "floating point" refers to a particular format for numbers within the computer that is used for scientific calculation; and a floating point "operation" refers to a single arithmetic step, such as adding two numbers, using the floating point format. Thus, FLOPS measure the speed of the arithmetic processor. Currently, the largest supercomputers have processing speeds ranging up to several billion FLOPS.

    However, pure processing speed is not by itself a useful measure of the relative power of computers. To see why, let's look at an analogy.

    [3] To distinguish between this modest level and the larger scale parallelism found on some more experimental machines, some experts refer to this limited parallelism as "multiprocessing."

    In a supermarket checkout counter, the calculation speed of the register does not, by itself, determine how fast customers can purchase their groceries and get out of the store. Rather, the speed of checkout is also affected by the rate at which each purchase can be entered into the register and the overall time it takes to complete a transaction with a customer and start a new one. Of course, ultimately, the length of time the customer must wait in line to get to the clerk may be the biggest determinant of all.

    Similarly, in a computer, how fast calculations can be set up and presented to the processor, and how fast new jobs and their associated data can be moved in and completed work moved out of the computer, determines how much of the processor's speed can actually be harnessed. (Some users refer to this as "solution speed.") In a computer, those speeds are determined by a wide variety of hardware and software characteristics. And, similar to the store checkout, as a fast machine becomes busy, users may have to wait a significant time to get their turn. From a user's perspective, then, a theoretically fast computer can look very slow.

    In order to fully test a machine's speed, experts use what are called "benchmark programs," sample programs that reproduce the actual workload. Since workloads vary, there are several different benchmark programs, and they are constantly being refined and revised. Measuring a supercomputer's speed is, itself, a complex and important area of research. It lends insight not only into what type of computer currently on the market is best to use for particular applications, but carefully structured measurements can also show where bottlenecks occur and, hence, where hardware and software improvements need to be made.
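
    A toy version of the idea is sketched below in C (an illustration under assumed array sizes, not one of the standard benchmark suites). It times a simple floating point kernel and reports sustained FLOPS: the two operations per element are a known quantity, so any gap between the reported figure and the processor's rated peak reflects the memory traffic and loop overhead described above.

        #include <stdio.h>
        #include <time.h>

        #define N    1000000            /* assumed array length */
        #define REPS 100                /* repeats so the timer can resolve */

        static double x[N], y[N];

        int main(void)
        {
            for (size_t i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

            clock_t t0 = clock();
            for (int r = 0; r < REPS; r++)
                for (size_t i = 0; i < N; i++)
                    y[i] = 3.0 * x[i] + y[i];   /* 2 floating point ops */
            clock_t t1 = clock();

            double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
            /* Sustained speed counts operations actually completed per
               second, including time lost moving data in and out of
               memory; printing y[0] also keeps the loop from being
               optimized away as dead code. */
            if (secs > 0.0)
                printf("%.3g FLOPS sustained (y[0] = %g)\n",
                       2.0 * (double)N * REPS / secs, y[0]);
            return 0;
        }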

    One can draw a few policy implications from these observations on speed:

    - Since overall speed improvement is closely linked with how their machines are actually programmed and used, computer designers are critically dependent on feedback from that part of the user community which is pushing their machines to the limit.
    - There is no "fastest" machine. The speed of a high performance computer is too dependent on the skill with which it is used and programmed, and the particular type of job it is being asked to perform.
    - Until machines are available in the market and have been tested for overall performance, policymakers should be skeptical of announcements based purely on processor speeds that some company or country is producing "faster machines."
    - Federal R&D programs for improving high performance computing need to stress software and computational mathematics as well as research on machine architecture.

    THE NATIONAL SUPERCOMPUTER CENTERS

    In February of 1985, NSF selected four sites to establish national supercomputing centers: the University of California at San Diego, the University of Illinois at Urbana-Champaign, Cornell University, and the John von Neumann Center in Princeton. A fifth site, Pittsburgh, was added in early 1986. The five NSF centers are described briefly below.

    The Cornell Theory Center

    The Cornell Theory Center is located on the campus of Cornell University. Over 1,900 users from 125 institutions access the center. Although Cornell does not have a center-oriented network, 55 academic institutions are able to utilize the resources at Cornell through special nodes. A 14-member Corporate Research Institute works within the center in a variety of university-industry cost sharing projects.

    In November of 1985 Cornell received a 3084 computer from IBM, which was upgraded to a four-processor 3090/400VF a year later. The 3090/400VF was replaced by a six-processor 3090/600E in May 1987. In October 1988 a second 3090/600E was added. The Cornell center also operates several other smaller parallel systems, including an Intel iPSC/2, a Transtech NT 1000, and a Topologix T1000. Some 50 percent of the resources of the Northeast Parallel Architectures Center, which include two Connection Machines, an Encore, and an Alliant FX/80, are accessed by the Cornell facility.

    Until October of 1988, all IBM computers were "on loan" to Cornell for as long as Cornell retained its NSF funding. The second IBM 3090/600, procured in October, will be paid for by an NSF grant. Over the past 4 years, corporate support for the Cornell facility accounted for 48 percent of the operating costs. During those same years, NSF and New York State accounted for 37 percent and 5 percent respectively of the facility's budget. This funding has allowed the center to maintain a staff of about 100.

    The National Center for Supercomputing Applications

    The National Center for Supercomputing Applications (NCSA) is operated by the University of Illinois at Urbana-Champaign. The Center has over 2,500 academic users from about 82 academic affiliates. Each affiliate receives a block grant of time on the Cray X-MP/48, training for the Cray, and help using the network to access the Cray.

    The NCSA received its Cray X-MP/24 in October 1985. That machine was upgraded to a Cray X-MP/48 in 1987. In October 1988 a Cray-2S/4-128 was installed, giving the center two Cray machines. This computer is the only Cray-2 now at an NSF national center. The center also houses a Connection Machine 2, an Alliant FX/80 and FX/8, and over 30 graphics workstations.

    In addition to NSF funding, NCSA has solicited industrial support. Amoco, Eastman Kodak, Eli Lilly, FMC Corp., Dow Chemical, and Motorola have each contributed around $3 million over a 3-year period to the NCSA. In fiscal year 1989 corporate support has amounted to 11 percent of NCSA's funding. About 32 percent of NCSA's budget came from NSF, while the State of Illinois and the University of Illinois accounted for the remaining 27 percent of the center's $21.5 million budget. The center has a full-time staff of 198.

    Pittsburgh Supercomputing Center

    The Pittsburgh Supercomputing Center (PSC) is run jointly by the University of Pittsburgh, Carnegie-Mellon University, and Westinghouse Electric Corp. More than 1,400 users from 44 States utilize the center. Twenty-seven universities are affiliated with PSC.

The center received a Cray X-MP/48 in March of 1986. In December of 1988 PSC became the first non-Federal laboratory to possess a Cray Y-MP. Both machines were used simultaneously for a short time; however, the center has since phased out the Cray X-MP. The center's graphics hardware includes a Pixar image computer, an Ardent Titan, and a Silicon Graphics IRIS workstation.

The operating projection at PSC for fiscal year 1990, a "typical year," has NSF supporting 58 percent of the center's budget, while industry and vendors account for 25 percent of the costs. The Commonwealth of Pennsylvania and the National Institutes of Health both support PSC, accounting for 5 percent and 4 percent of the budget respectively. Excluding working students, the center has a staff of around 65.

San Diego Supercomputer Center

The San Diego Supercomputer Center (SDSC) is located on the campus of the University of California at San Diego and is operated by General Atomics. SDSC is linked to 25 consortium members but has a user base in 44 States. At the end of 1988, over 2,700 users were accessing the center. SDSC has 48 industrial partners who use the facility's hardware, software, and support staff.

A Cray X-MP/48 was installed in December 1985. SDSC's first upgrade, a Y-MP8/864, is planned for December 1989. In addition to the Cray, SDSC has 5 Sun workstations, two IRIS workstations, an Evans and Sutherland terminal, 5 Apollo workstations, a Pixar, an Ardent Titan, an SCS-40 minisupercomputer, a Supertek S-1 minisupercomputer, and two Symbolics machines.

The University of California at San Diego spends more than $250,000 a year on utilities and services for SDSC. For fiscal year 1990 the SDSC believes NSF will account for 47 percent of the center's operating budget. The State of California currently

provides $1.25 million per year to the center and in 1988 approved funding of $6 million over 3 years to SDSC for research in scientific visualization. For fiscal year 1990 the State is projected to support 10 percent of the center's costs. Industrial support, which has given the center $12.6 million in donations and in-kind services, is projected to provide 15 percent of the total costs of SDSC in fiscal year 1990.

John von Neumann National Supercomputer Center

The John von Neumann National Supercomputer Center (JvNC), located in Princeton, New Jersey, is managed by the Consortium for Scientific Computing Inc., an organization of 13 institutions from New Jersey, Pennsylvania, Massachusetts, New York, Rhode Island, Colorado, and Arizona. Currently there are over 1,400 researchers from 100 institutes accessing the center. Eight industrial corporations utilize the JvNC facilities.

At present there are two Cyber 205s and two ETA-10s in use at the JvNC. The first ETA-10 was installed, after a 1-year delay, in March of 1988. In addition to these machines there are a Pixar II, two Silicon Graphics IRIS workstations, and video animation capabilities.

When the center was established in 1985 by NSF, the New Jersey Commission on Science and Technology committed $12.1 million to the center over a 5-year period. An additional $13.1 million has been set aside for the center by the New Jersey Commission for fiscal years 1991-1995. Direct funding from the State of New Jersey and university sources constitutes 15 percent of the center's budget for fiscal years 1991-1995. NSF will account for 60 percent of the budget. Projected industry revenue and cost sharing account for 25 percent of costs. Since the announcement by CDC that it will close its ETA subsidiary, the future of JvNC is uncertain. JvNC has proposed plans to NSF to purchase a Cray Research Y-MP, eventually upgrading to a C-90. NSF is reviewing the plan, and a decision on renewal is expected in October of 1989.

OTHER HPC FACILITIES

Before 1984 only three universities operated supercomputers: Purdue University, the University of Minnesota, and Colorado State University. The NSF supercomputing initiative established five new supercomputer centers that were nationally accessible. States and universities began funding their own supercomputer centers, both in response to growing needs on campus and to an increased feeling on the part of State leaders that supercomputer facilities could be important stimuli to local R&D and, therefore, to economic development. Now, many State and university centers offer access to high performance computers,4 and the NSF centers are only part of a much larger HPC environment that includes nearly 70 Federal installations (see table 2-1).

Supercomputer center operators perceive their roles in different ways. Some want to be a proactive force in the research community, leading the way by helping develop new applications, training users, and so on. Others are content to follow in the path that the NSF National Centers create. These differences in goals and missions lead to varied services and computer systems. Some centers are "cycle shops," offering computing time but minimal support staff. Other centers maintain a large support staff and offer consulting, training sessions, and even assistance with software development. Four representative centers are described below:

    Minnesota Supercomputer Center

The Minnesota Supercomputer Center, originally part of the University of Minnesota, is a for-profit computer center owned by the University of Minnesota. Currently, several thousand researchers use the center, over 700 of whom are from the University of Minnesota. The Minnesota Supercomputing Institute, an academic unit of the University, channels university usage by providing grants to students through a peer review process.

The Minnesota Supercomputer Center received its first machine, a Cray 1A, in September 1981. In mid-1985 it installed a Cyber 205; and in the latter part of that year, two Cray 2 computers were installed within 3 months of each other. Minnesota

4The number cannot be estimated exactly. First, it depends on the definition of supercomputer one uses. Second, the number keeps changing as States announce new plans for centers and as large research universities purchase their own HPCs.


Table 2-1. Federal Unclassified Supercomputer Installations

Laboratory (number of machines)

Department of Energy
  Los Alamos National Lab: 6
  Livermore National Lab, NMFECC: 4
  Livermore National Lab: 7
  Sandia National Lab, Livermore: 3
  Sandia National Lab, Albuquerque: 2
  Oak Ridge National Lab: 1
  Idaho Falls National Engineering: 1
  Argonne National Lab: 1
  Knolls Atomic Power Lab: 1
  Bettis Atomic Power Lab: 1
  Savannah/DOE: 1
  Richland/DOE: 1
  Schenectady Naval Reactors/DOE: 2
  Pittsburgh Naval Reactors/DOE: 2

Department of Defense
  Naval Research Lab: 1
  Naval Ship R&D Center: 1
  Fleet Numerical Oceanography: 1
  Naval Underwater System Command: 1
  Naval Weapons Center: 1
  Martin Marietta/NTB: 1
  Air Force Weapons Lab: 2
  Air Force Global Weather: 1
  Arnold Engineering and Development: 1
  Wright Patterson AFB: 1
  Aerospace Corp.: 1
  Army Ballistic Research Lab: 2
  Army/Tacom: 1
  Army/Huntsville: 1
  Army/Kwajalein: 1
  Army/WES (on order): 1
  Army/Wenn: 1
  Defense Nuclear Agency: 1

NASA
  Ames: 5
  Goddard: 2
  Lewis: 1
  Langley: 1
  Marshall: 1

Department of Commerce
  National Inst. of Standards and Technology: 1
  National Oceanic & Atmospheric Administration: 4

Environmental Protection Agency
  Raleigh, North Carolina: 1

Department of Health and Human Services
  National Institutes of Health: 1
  National Cancer Institute: 1

SOURCE: Office of Technology Assessment estimate.

bought its third Cray 2, the only one in use now, at the end of 1988, just after it installed its ETA-10. The ETA-10 has recently been decommissioned due to the closure of ETA. A Cray X-MP has been added, giving the center a total of two supercomputers. The Minnesota Supercomputer Center has acquired more supercomputers than anyone outside the Federal Government.

The Minnesota State Legislature provides funds to the University for the purchase of supercomputer time. Although the University buys a substantial portion of supercomputing time, the center has many industrial clients whose identities are proprietary; they include representatives of the auto, aerospace, petroleum, and electronics industries. They are charged a fee for the use of the facility.

    The Ohio Supercomputer Center

The Ohio Supercomputer Center (OSC) originated from a coalition of scientists in the State. The center, located on Ohio State University's campus, is connected to 20 other Ohio universities via the Academic Research Network (OARNET). As of January 1989, three private firms were using the Center's resources.

In August 1987, OSC installed a Cray X-MP/24, which was upgraded to a Cray X-MP/28 a year later. The center replaced the X-MP in August 1989 with a Cray Research Y-MP. In addition to Cray hardware, there are 40 Sun graphics workstations, a Pixar II, a Stellar graphics machine, a Silicon Graphics workstation, and an Abekas Still Store machine. The Center maintains a staff of about 35 people.

The Ohio General Assembly began funding the center in the summer of 1987, appropriating $7.5 million. In March of 1988, the Assembly allocated $22 million for the acquisition of a Cray Y-MP. Ohio State University has pledged $8.2 million to augment the center's budget. As of February 1989 the State had spent $37.7 million in funding.5 OSC's annual budget is around $6 million (not including the purchase/leasing of their Cray).

Center for High Performance Computing, Texas (CHPC)

The Center for High Performance Computing is located at The University of Texas at Austin. CHPC serves all 14 institutions in the University of Texas System: 8 academic institutions and 6 health-related organizations.

    nine Ware, "Ohioans: Bluing Computer," Ohio, February 1989, p. 12.


The University of Texas installed a Cray X-MP/24 in March 1986, and a Cray 14se in November of 1988. The X-MP is used primarily for research. For the time being, the Cray 14se is being used as a vehicle for the conversion of users to the Unix system. About 40 people staff the center.

Original funding for the center and the Cray X-MP came from bonds and endowments from both The University of Texas System and The University of Texas at Austin. The annual budget of CHPC is about $3 million. About 95 percent of the center's operating budget comes from State funding and endowments. Five percent of the costs are recovered from selling CPU time.

    Alabama Supercomputer Network

The George C. Wallace Supercomputer Center, located in Huntsville, Alabama, serves the needs of researchers throughout Alabama. Through the Alabama Supercomputer Network, 13 Alabama institutions, university and government sites, are connected to the center. Under contract to the State, Boeing Computer Services provides the support staff and technical skills to operate the center. Support staff are located at each of the nodes to help facilitate the use of the supercomputer from remote sites.

A Cray X-MP/24 arrived in 1987 and became operational in early 1988. In 1987 the State of Alabama agreed to finance the center, allocating $2.2 million for the center and $38 million to Boeing Services for the initial 5 years. The average yearly budget is $7 million. The center has a support staff of about 25.

Alabama universities are guaranteed 60 percent of the available time at no cost, while commercial researchers are charged a user fee. The stated impetus for the State to create a supercomputer center was the technical superiority a supercomputer would bring, which would draw high-tech industry to the State, enhance interaction between industry and the universities, and promote research and the associated educational programs within the university.

    Commercial Labs

A few corporations, such as Boeing Computer Services, have been selling high performance computer time for a while. Boeing operates a Cray X-MP/24. Other commercial sellers of high performance computing time include the Houston Area Research Center (HARC). HARC operates the only Japanese supercomputer in America, the NEC SX-2. The center offers remote services.

Computer Sciences Corp. (CSC), located in Falls Church, Virginia, has a 16-processor FLEX/32 from Flexible Computer Corp., a Convex 120 from Convex Computer Corp., and a DAP 210 from Active Memory Technology. Federal agencies comprise two-thirds of CSC's customers.6 Power Computing Co., located in Dallas, Texas, offers time on a Cray X-MP/24. Situated in Houston, Texas, Supercomputing Technology sells time on its Cray X-MP/28. Opticom Corp., of San Jose, California, offers time on a Cray X-MP/24, Cray 1-M, Convex C220, and C1 XP.

Federal Centers

In an informal poll of Federal agencies, OTA identified 70 unclassified installations that operate supercomputers, confirming the commonly expressed view that the Federal Government still represents a major part of the market for HPC in the United States (see figure 2-1). Many of these centers serve the research needs of government scientists and engineers and are, thus, part of the total research computing environment. Some are available to non-Federal scientists; others are closed.

CHANGING ENVIRONMENT

The scientific computing environment has changed in important ways during the few years that NSF's Advanced Scientific Computing Programs have existed. Some of these changes are as follows:

The ASC programs, themselves, have not evolved as originally planned. The original NSF planning document for the ASC program proposed to establish 10 supercomputer centers over a 3-year period; only 5 were funded. Center managers have also expressed the strong opinion that NSF has not met many of its original commitments for

6Norris Parker Smith, "More Than Just Buying Cycles," Supercomputer Review, April 1989.


Figure 2-1. Distribution of Federal Supercomputers

[Bar chart. Number of unclassified supercomputers by agency: DOE, 33; DoD, 19; NASA, 10; Commerce, 5; HHS, 2; EPA, 1.]

SOURCE: Office of Technology Assessment, 1989.

    funding in successive years of the contracts, forcingthe centers to change their operations' priorities andsearch for support in other directions.

Technology has changed. There has been a burst of innovation in the HPC industry. At the top of the line, Cray Research developed two lines of machines, the Cray 2 and the Cray X-MP (and its successor, the Y-MP), that are much more powerful than the Cray 1, which was considered the leading edge of supercomputing for several years into the mid-1980s. IBM has delivered several 3090s equipped with multiple vector processors and has also become a partner in a project to develop a new supercomputer in a joint venture with SSI, a firm started by Steve Chen, a noted supercomputer architect previously with Cray Research.

More recently, major changes have occurred in the industry. Control Data has closed down ETA, its supercomputer operation. Cray Research has been broken into two parts: Cray Computer Corp. and Cray Research. Each will develop and market a different line of supercomputers. Cray Research will, initially at least, concentrate on the Y-MP models, the upcoming C-90 machines, and their longer term successors. Cray Computer Corp., under the leadership of Seymour Cray, will concentrate on development of the Cray 3, a machine based on gallium arsenide electronics.

At the middle and lower end, the HPC industry has introduced several new so-called "minisupercomputers," many of them based on radically different system concepts, such as massive parallelism, and many designed for specific applications, such as high-speed graphics. New chips promise very high-speed desktop workstations in the near future.

Finally, three Japanese manufacturers, NEC, Fujitsu, and Hitachi, have been successfully building and marketing supercomputers that are reportedly competitive in performance with U.S. machines.7 While these machines have, as yet, not penetrated the U.S. computer market, they indicate the potential competitiveness of the Japanese computer industry in the international HPC markets, and raise questions for U.S. policy.

Many universities and State systems have established "supercomputer centers" to serve the needs of their researchers.8 Many of these centers have only recently been formed, and some have not yet installed their systems, so their operational experience is, at best, limited to date. Furthermore, some other centers operate systems that, while very powerful scientific machines, are not considered by all experts to be supercomputers. Nevertheless, these centers provide high performance scientific computing to the research community, and create new demands for Federal support for computer time.

Individual scientists and research teams are also getting Federal and private support from their sponsors to buy their own "minisupercomputers." In some cases, these systems are used to develop and check out software eventually destined to run on larger machines; in other cases, researchers seem to find these machines adequate for their needs. In either mode of use, these departmental or laboratory systems expand the range of possible sources researchers can turn to for high performance computing. Soon, desktop workstations will have performance equivalent to that of supercomputers of a decade ago at a significantly lower cost.

7Since, as shown above, comparing the power and performance of supercomputers is a complex and arcane field, OTA will refrain from comparing or ranking systems in any absolute sense.

8See National Association of State Universities and Land-Grant Colleges, Supercomputing for the 1990's: A Shared Responsibility (Washington, DC: 1989).


Finally, some important changes have occurred in national objectives or perceptions of issues. For example, the development of a very high capacity national science network (or "internet") has taken on a much greater significance. Originally conceived of in the narrow context of tying together supercomputer centers and providing regional access to them, the science network has now come to be thought of by its proponents as a basic infrastructure, potentially extending throughout (and, perhaps, even beyond) the entire scientific, technical, and educational community.

Science policy is also changing, as important and costly new projects have been started or are being seriously considered. Projects such as the supercollider, the space station, NASA's Earth Observing System (EOS) program, and human genome mapping may seem at first glance to compete for funding with science networks and supercomputers. However, they will create formidable new demands for computation, data communications, and data storage facilities and, hence, constitute additional arguments for investments in an information technology infrastructure.

Finally, some of the research areas in the so-called "Grand Challenges"9 have attained even greater social importance, such as fluid flow modeling, which will help the design of faster and more fuel efficient planes and ships; climate modeling, to help understand long term weather patterns; and the structural analysis of proteins, to help understand diseases and design vaccines and drugs to fight them.
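To see why such problems create formidable computational demands, consider a minimal sketch (ours, not taken from the report; the grid size, step count, and coefficient are arbitrary) of the kind of grid calculation at the core of fluid-flow and climate models. Refining a three-dimensional model's grid by a factor of two multiplies the number of points by eight and, because stability limits the time step, roughly doubles the number of steps as well, for roughly sixteen times the arithmetic. The toy two-dimensional C code below shows the basic update that production models repeat billions of times.

    /* Illustrative sketch only: one corner of a fluid-flow or
     * climate-style computation (explicit 2-D diffusion). */
    #include <stdio.h>

    #define NX 64
    #define NY 64

    static double t0[NX][NY], t1[NX][NY];

    int main(void) {
        const double alpha = 0.1;   /* diffusion coefficient (illustrative) */
        t0[NX / 2][NY / 2] = 100.0; /* an initial disturbance */

        /* Each time step relaxes every interior point toward the average
         * of its four neighbors; production codes apply far richer
         * physics to far larger three-dimensional grids. */
        for (int step = 0; step < 1000; step++) {
            for (int i = 1; i < NX - 1; i++)
                for (int j = 1; j < NY - 1; j++)
                    t1[i][j] = t0[i][j] + alpha *
                        (t0[i - 1][j] + t0[i + 1][j] +
                         t0[i][j - 1] + t0[i][j + 1] - 4.0 * t0[i][j]);

            for (int i = 1; i < NX - 1; i++)     /* copy back for the */
                for (int j = 1; j < NY - 1; j++) /* next time step    */
                    t0[i][j] = t1[i][j];
        }

        printf("center value after 1000 steps: %f\n", t0[NX / 2][NY / 2]);
        return 0;
    }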

REVIEW AND RENEWAL OF THE NSF CENTERS

Based on the recent review, NSF has concluded that the centers, by and large, have been successful and are operating smoothly. That is, their systems are being fully used, they have trained many new users, and they are producing good science. In light of that conclusion, NSF has tentatively agreed to renewal for the three Cray-based centers and the IBM-based Cornell center. The John von Neumann Center in Princeton has been based on ETA-10 computers. Since ETA was closed down, NSF put the review of the JvNC on hold pending review of a revised plan that has now been submitted. A decision is expected soon.

Due to the environmental changes noted above, if the centers are to continue in their present status as special NSF-sponsored facilities, the National Supercomputer Centers will need to sharply define their roles in terms of: 1) the users they intend to serve, 2) the types of applications they serve, and 3) the appropriate balance between service, education, and research.

The NSF centers are only a few of a growing number of facilities that provide access to HPC resources. Assuming that NSF's basic objective is to assure researchers access to the most appropriate computing for their work, it will be under increasing pressure to justify dedicating funds to one limited group of facilities. Five years ago, few U.S. academic supercomputer centers existed. When scientific demand was less, managerial attention was focused on the immediate problem of getting equipment installed and of developing an experienced user community. Under those circumstances, some ambiguity of purpose may have been acceptable and understandable. However, in light of the proliferation of alternative technologies and centers, as well as growing demand by researchers, unless the purposes of the National Centers are more clearly delineated, the facilities are at risk of being asked to serve too many roles and, as a result, serving none well.

Some examples of possible choices are as follows:

1. Provide Access to HPC

Provide access to the most powerful, leading edge supercomputers available.
Serve the HPC requirements of research projects of critical importance to the Federal Government, for example, the "Grand Challenge" topics.
Serve the needs of all NSF-funded researchers for HPC.
Serve the needs of the (academic, educational, and/or industrial) scientific community for HPC.

9"Grand Challenge" research topics are questions of major social importance that require for progress substantially greater computing resources than are currently available. The term was first coined by Nobel Laureate physicist Kenneth Wilson.


    2. Educate and Train

Provide facilities and programs to teach scientists and students how to use high performance computing in their research.

    3. Advance the State of HPC Use in Research

Develop applications and system software.
Serve as centers for research in computational science.
Work with vendors as test sites for advanced HPC systems.

As the use of HPC expands into more fields and among more researchers, what are the policies for providing access to the necessary computing resources? The Federal Government needs to develop a comprehensive analysis of the requirements of scientific researchers for high performance computing, of Federal policies of support for scientific computing, and of the variety of Federal and State/private computing facilities available for research.

We expect that OTA's final report will contribute to this analysis from a congressional perspective. However, the executive branch, including both lead agencies and OSTP, also needs to participate actively in this policy and planning process.

THE INTERNATIONAL ENVIRONMENT

Since some of the policy debate over HPCs has involved comparisons with foreign programs, this section concludes with a brief description of the status of HPC in some other nations.

Japan

The Ministry of International Trade and Industry (MITI), in October of 1981, announced the undertaking of two computing projects: one on artificial intelligence, the Fifth Generation Computer Project, and one on supercomputing, the National Superspeed Computer Project. The publicity surrounding MITI's announcement focused on fifth generation computers, but brought the more general subject of supercomputing to public attention. (The term "Fifth Generation" refers to computers specially designed for artificial intelligence applications, especially those that involve logical inference or "reasoning.")

Although in the eyes of many scientists the Fifth Generation project has fallen short of its original goals, eight years later it has produced some accomplishments in hardware architecture and artificial intelligence software. MITI's second project, dealing with supercomputers, has been more successful. Since 1981, when no supercomputers were manufactured by the Japanese, three companies have designed and produced supercomputers.

The Japanese manufacturers followed the Americans into the supercomputer market, yet in the short time since their entrance (late 1983 for Hitachi and Fujitsu) they have rapidly gained ground in HPC hardware. One company, NEC, has recently announced a supercomputer with processor speeds up to eight times faster than the present fastest American machine.10 Outside of the United States, Japan is the single biggest market for and supplier of supercomputers, although American supercomputer companies account for less than one-fifth of all supercomputers sold in Japan.11

In the present generation of supercomputers, U.S. machines hold some advantages. One of the American manufacturers' major advantages is the availability of scientific applications software. The Japanese lag behind the Americans in software development, although the Japanese manufacturers and government are devoting resources to software research, and there is no reason to think they will not be successful.

Another area in which American firms differ from the Japanese has been in their use of multiprocessor architecture (although this picture is now changing). For several years, American supercomputer companies have been designing machines with multiple processors to obtain speed. The only Japanese supercomputer that utilizes multiprocessors is the NEC system, which will not be available until the fall of 1990.
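The arithmetic behind the multiprocessor approach can be illustrated with Amdahl's law, a standard rule of thumb (our illustration, not drawn from the report; the fractions and processor counts below are hypothetical): if a fraction p of a program's work can be spread across n processors while the rest stays serial, the overall speedup is 1/((1 - p) + p/n).

    /* Illustrative only: Amdahl's law with hypothetical values. */
    #include <stdio.h>

    /* Speedup from n processors when a fraction p of the work can run
     * in parallel and the remainder stays serial. */
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    int main(void) {
        printf("p = 0.90, n = 8:  %.2fx\n", speedup(0.90, 8));  /* ~4.7x */
        printf("p = 0.99, n = 8:  %.2fx\n", speedup(0.99, 8));  /* ~7.5x */
        printf("p = 0.90, n = 64: %.2fx\n", speedup(0.90, 64)); /* ~8.8x */
        return 0;
    }

The last line makes the point that matters for the software discussion above: unless software exposes a large parallel fraction, adding processors quickly stops paying off.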

10The NEC machine is not scheduled for delivery until 1990, at which time faster Cray computers may well be on the market also. See also the comments above about computer speed.

    "Marjorie Sun, "A Global Supercomputer Race for High Stakes," Science, February 1989, vol. 243, pp. 1004-1006.


American firms have been active in the Japanese market, with mixed success.

Since 1979 Cray has sold 16 machines in Japan. Of the 16 machines, 6 went to automobile manufacturers, 2 to NTT, 2 to Recruit, 1 to MITI, 1 to Toshiba, 1 to Aichi Institute of Technology, and 1 to Mitsubishi Electric. None have gone to public universities or to government agencies.

IBM offers its 3090 with attached vector facilities. IBM does not make its customer list public, but reports that it has sold around 70 vector processor computers to Japanese clients. Owners, or soon-to-be owners, include Nissan, NTT, Mazda, Waseda University, Nippon Steel, and Mitsubishi Electric.

ETA sold two supercomputers in Japan. The first was to the Tokyo Institute of Technology (TIT). The sale was important because it was the first sale of a CDC/ETA supercomputer to the Japanese as well as the first purchase of an American supercomputer by a Japanese national university. This machine was delivered late (it arrived in May of 1988) and had many operating problems, partially due to its being the first installation of an eight-processor ETA 10-E. The second machine was purchased (not delivered) on February 9, 1989 by Meiji University. How CDC will deal with the ETA 10 at TIT in light of the closure of ETA is unknown at this time.

Hitachi, Fujitsu, and NEC, the three Japanese manufacturers of supercomputers, are among the largest computer and electronics companies in Japan, and they produce their own semiconductors. Their size allows them to absorb the high initial costs of designing a new supercomputer, as well as to provide large discounts to customers. Japan's technological lead is in its very fast single-vector processors. Little is known as yet about what is happening with parallel processing in Japan, although NEC's recent product announcement for the SX-X states that the machine will have multiprocessors.

Hitachi's supercomputer architecture is loosely based on its IBM-compatible mainframes. Hitachi entered the market in November of 1983. Unlike its domestic rivals, Hitachi has not entered the international market; all 29 of its ordered or installed supercomputers are located in Japan.


NEC's current supercomputer architecture is not based on its mainframe computer, and it is not IBM compatible. NEC entered the supercomputer market later than Hitachi and Fujitsu. Three NEC supercomputers have been sold or installed in foreign markets: one in the United States, an SX-2 machine at the Houston Area Research Consortium; one at the Laboratory of Aerospace Research in the Netherlands; and an SX-1 recently sold in Singapore. NEC's domestic users include five universities.

On April 10, 1989, in a joint venture with Honeywell Inc., NEC announced a new line of supercomputers, the SX-X. The most powerful machine is reported to be up to eight times faster than the Cray X-MP. The SX-X reportedly will run Unix-based software and will have multiprocessors. This machine is due to be shipped in the fall of 1990.

Fujitsu's supercomputers, like Hitachi's, are based on its IBM-compatible mainframes. Its first machine was delivered in late 1983. Fujitsu had sold 80 supercomputers in Japan by mid-1989, and an estimated 17 machines have been sold to foreign customers. An Amdahl VP-200 is used at the Western Geophysical Institute in London. In the United States, the Norwegian company GECO, located in Houston, has a VP-200 and two VP-100s. The most recent sale, a VP-100, was to the Australian National University.

    Europe

European countries that have (or have ordered) supercomputers include West Germany, France, England, Denmark, Spain, Norway, the Netherlands, Italy, Finland, Switzerland, and Belgium. Europe is catching up quickly with America and Japan in understanding the importance of high performance computing for science and industry. The computer industry is helping to stimulate European interest. For example, IBM has pledged $40 million towards a supercomputer initiative in Europe over the 2-year period 1987-89. It is creating a large base of followers in the European academic community by participating in such programs as the European Academic Supercomputing Initiative (EASI) and the Numerically Intensive Computing Enterprise (NICE). Cray Research also has a solid base in


academic Europe, supplying over 14 supercomputers to European universities.

The United Kingdom began implementing a high performance computing plan in 1985. The Joint Working Party on Advanced Research Computing's report in June of 1985, "Future Facilities for Advanced Research Computing," recommended a national facility for advanced research computing. This center would have the most powerful supercomputer available; upgrade the United Kingdom's networking system, JANET, to ensure communications to remote users; and house a national organization of advanced research computing to promote collaboration with foreign countries and within industry, ensuring the effective use of these resources.12 Following this report, a Cray X-MP/48 was installed at the Atlas Computer Center in Rutherford. A Cray 1s was installed at the University of London. Between 1986 and 1989, some $11.5 million was spent on upgrading and enhancing JANET.13

Alvey was the United Kingdom's key information technology R&D program. The program promoted projects in information technology undertaken jointly by industry and academics. The United Kingdom began funding the Alvey program in 1983. During the first 5 years, 350 million pounds were allocated to the Alvey program. The program was eliminated at the end of 1988. Some research was picked up by other agencies, and many of the projects that were sponsored by Alvey are now submitting proposals to Esprit (see below).

The European Community began funding the European Strategic Programme for Research in Information Technology (Esprit) in 1984, partly as a reaction to the poor performance of the European Economic Community in the information technology market and partly as a response to MITI's 1981 computer programs. The program, funded by the European Community (EC), intends to "provide the European IT industry with the key components of technology it needs to be competitive on the world markets within a decade."14 The EC has designed a program that forces collaboration between nations, develops recognizable standards in the information technology industry, and promotes pre-competitive R&D. The R&D focuses on five main areas: microelectronics, software development, office systems, computer integrated manufacturing, and advanced information processing.

Phase I of Esprit, the first 5 years, received $3.88 billion in funding.15 The funding was split 50-50 between the EC and its participants. This was considered the catch-up phase: emphasis was placed on basic research, with the understanding that marketable goods would follow. Many of the companies that participated in Phase I were small experimental companies.

Phase II, which begins in late 1989, is called commercialization. Marketable goods will be the major emphasis of Phase II. This implies that the larger firms will be the main industrial participants, since they have the capital needed to put a product on the market. The amount of funds for Phase II will be determined by the world environment in information technology and the results of Phase I, but has been estimated at around $4.14 billion.16

Almost all of the high performance computer technologies emerging from Europe have been based on massively parallel architectures. Some of Europe's parallel machines incorporate the transputer. Transputer technology (basically a computer on a chip) is based on high density VLSI (very large-scale integration) chips. The T800, Inmos's transputer, has the same power as Intel's 80386/80387 chips, the difference being in size and price: the transputer is about one-third the size and price of Intel's chips.17 The transputer, created by the Inmos company, had its initial R&D funded by the British government. Eventually Thorn EMI bought Inmos and the rights to the transputer. Thorn EMI recently sold Inmos to a French-Italian joint venture company, SGS-Thomson, just as it was beginning to be profitable.

    12"Future Facilities for Advanced Research Computing," the report of a Joint Working Party on Mvanced Research Computing. United Kingdom,July 1985.

13Discussion paper on "Supercomputers in Australia," Department of Industry, Technology and Commerce, April 1988, pp. 14-15.

    14"Esprit." Commission of