HEP Computing in Korea


DESCRIPTION

HEP Computing in Korea. Dongchul Son, Center for High Energy Physics, Kyungpook National University. ICFA DD Workshop, Cracow, October 9-12, 2006. Outline: short introduction to Korean HEP activities; networks and computing coordination; LCG Service Challenge 4 at CHEP; outlook; summary.

TRANSCRIPT

  • HEP Computing in Korea
    Dongchul Son, Center for High Energy Physics, Kyungpook National University
    ICFA DD Workshop, Cracow, October 9-12, 2006

    D. Son, ACFA Plenary , Hayama, 5-6 Sept 2006

    Outline
    - Short introduction to Korean HEP activities
    - Networks and computing coordination
    - LCG Service Challenge 4 at CHEP
    - Outlook
    - Summary


    Brief History
    - 1985: VAXen (Kyungpook, Korea), no networking
    - 1990: HP 750 workstation clusters (Kyungpook)
    - 1991: research network backbone at T1/E1 level
    - 1996: APAN-KR (2 Mbps), 45 Mbps to abroad; KOREN (155 Mbps), KREONET (45 Mbps); LANs with 155 Mbps ATM backbone (Kyungpook)
    - 2000: Linux clusters (Kyungpook-CHEP), ~2 x 45 Mbps to abroad; CHEP established (07/2000)
    Experiments over the decades: cosmic rays with small detectors; TRISTAN e+e- (AMY); emulsion experiments; Fermilab; LEP e+e- (L3, ALEPH); Tevatron (D0/CDF); ZEUS (ep); CERN (CHORUS); FNAL (DONUT); KEKB e+e- (Belle); K2K; AMS and astroparticle physics (dark matter); LHC (CMS, later ALICE); T2K, CLAS, PHENIX, STAR, SPring-8; PAL established; ILC and proton accelerator studies


    Institutions (HEP, HENP, Accelerators)
    Seoul NU, Korea, Yonsei, KIAS, Sogang, Hanyang, Sungkyunkwan, Kyunghee, Konkuk, Sejong, U of Seoul, Ewha W, etc.; KAIST, Chungnam NU, Chungbuk NU, Nat. Ed. U., KBSI, KAERI, KISTI, KASI, KARI, etc.; Chonbuk NU, Wonkwang, Chonnam NU, Dongshin; Cheju NU; Gyeongsang NU, Pusan NU, Inje, Kyungsung, etc.; CHEP-Kyungpook NU, Keimyung, Taegu, Youngnam, DCU, APCTP, PAL, POSTECH; Kangnung, Kangwon
    ~57 institutions; ~150 experimental HE & N physicists; ~200 theoretical HE physicists; ~100 accelerator experts (cf. population ~48 million)


    Organizations
    - CHEP (2000), Kyungpook NU (host): experiments, with 23 PhDs at Kyungpook (16 faculty, 7 associates; 16 exp., 4 th., 3 acc.) and >50 faculty and associates from collaborating universities: Seoul NU, Korea U, SKKU, Chonnam NU, Konkuk U, Gyeongsang NU, POSTECH, Ewha WU, Chonbuk NU, KAIST, KIAS, Yonsei U, Daegu U, Donga U, Dongshin U, Pusan NU
    - CQUeST (2005), Sogang (host): QFT and strings, with Seoul NU, SKKU, Ewha WU, U of Seoul, Kangwon NU, Kyunghee, Yonsei U, KIAS, Hanyang U, POSTECH, Chonbuk NU
    - PAL (1988), POSTECH (host); CTP (1990), Seoul NU (host); APCTP (1997), POSTECH (host); KIAS (1996), KAIST (host)
    - DMRC (2000); KODEL (1997) of Korea U; GSNU Emulsion Group; MEMSTEL (2006), Ewha WU (host), with 20 faculty members; and others (KAERI, etc.)
    Activities span experiments, accelerator R&D, phenomenology, astro-particle physics, and HE nuclear physics.


    Korean HEP Activities
    Large-scale experiments in which Koreans are involved now or will be in the future:
    - Belle / KEK and K2K / KEK (Japan): in progress
    - CDF / Fermilab (USA): in progress
    - AMS / International Space Station: data taking starts in 2008(?)
    - CMS (CERN, Europe): data taking starts in 2007
    - Other experiments such as PHENIX, ZEUS, D0, as well as ALICE, OPERA, KIMS, STAR, etc.
    - International Linear Collider experiment in the mid-2010s

    Belle and CDF (and PHENIX) are now producing large volumes of data, which need to be processed and analyzed together with simulations. CMS and ALICE will be challenges. These demands require computing facilities and data storage.


    HEP-Related Universities and Institutions
    Konkuk, Kyunghee, KIAS, Korea, Sogang, Seoul NU, Seoul CU, Sejong, SKKU, Yonsei, Ewha WU, Hanyang; Chungnam NU, Chongju, KAIST, KBSI, KAERI, KISTI, ETRI, Chungbuk NU, KAO, KARI; Wonkwang, Chonbuk NU, Dongshin, Chonnam NU; Cheju NU; Gyeongsang NU, Kyungsung, Pusan NU, Inje; Kyungpook NU-CHEP, Keimyung, Taegu, Andong NU, Youngnam, APCTP, POSTECH, PAL; Kangnung NU, Kangwon NU, Sangji
    ~57 institutions; ~150 experimental HE & N physicists; ~200 theoretical HE physicists; ~100 accelerator experts


    How Are They Networked?
    Most major universities are connected to the two research networks, KREONET and KOREN (MOST and MIC support); others use the education network (KREN). Backbone links run at 2.5-40 Gbps, with 10 Gbps between the backbones. Connected sites include Konkuk, KIAS, Korea, Seoul NU, SKKU, Yonsei, KISTI, Chonbuk NU, Chonnam NU, Dongshin, Cheju NU, Gyeongsang NU, Kyungsung, Pusan NU, Inje, Kyungpook NU (CHEP), POSTECH, PAL, Kangnung NU, and Kangwon NU.


    KREONET / KREONet2
    - KREONET: 11 regions, 12 POP centers; optical 2.5-10 Gbps backbone (SONET/SDH, POS, ATM); national IX connection
    - KREONet2: support for next-generation applications (IPv6, QoS, multicast, bandwidth allocation services); StarLight/Abilene connection
    - Direct 10 Gbps link to Kyungpook and GIST (GLORIAD), Dec. 2005

    International links
    - GLORIAD: 10 Gbps link to Seattle since Sept. 2005
    - KOREN links to the APIIs
    - SuperSIREN (7 research institutes): optical 10-40 Gbps backbone; collaborative environment support; high-speed wireless at 1.25 Gbps


    CHEP 10G Network (GLORIAD)


    [Network diagram: links to North America via TransPAC2 (10 Gbps); APII KR-CN (155 Mbps); APII KR-JP (2.5 Gbps, to 10 Gbps in Nov. 2006); TEIN2/APII to the EU (June 2006); plus assorted 45/155/622 Mbps and 1 Gbps regional links]


    Summary of Networking (CHEP-centric view)
    - US: GLORIAD 10 Gbps; GLORIAD (AMS) + LHCnet, ESnet
    - China (IHEP): CSTNET (HK) via GLORIAD 10 Gbps; APII (155 Mbps) via KOREN
    - Japan: KOREN (10 Gbps) - APII (10 Gbps); TransPAC (10 Gbps)
    - SE Asia: TEIN2 (S) (622 Mbps; 3 x 622 Mbps)
    - Oceania: TEIN2 (S) (622 Mbps)
    - EU (CERN): TEIN2 (N) (622 Mbps); GLORIAD (RU, AMS) 2.5 Gbps
    - Other universities and research institutes reach CHEP via KOREN, KREONET2, and KREN at 1-10 Gbps


    Goals of HEP Computing
    Kyungpook National University (KNU) has been almost the only computing resource for HEP since 1985. The Center for High Energy Physics (CHEP), the only national research Center of Excellence designated by MOST, is where most of these experimental activities are centered, so computing resources and R&D naturally build on CHEP activities (CMS, CDF, Belle, etc.). Some other universities now also have computing power for HEP: Yonsei, SNU, Korea, SKKU, Gyeongsang, Sejong, etc.
    Since 2002, CHEP has been working to build a supercomputing center for HEP, using Grid technology and keeping pace with LHC operation, equipped with:
    - CPU: > 1,000
    - Disk cache: 1,000 TB
    - Tape storage: 3,000 TB
    - Networking: > 20 Gbps
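As a rough sanity check on these targets (our own back-of-the-envelope arithmetic, not a figure from the slides), even a fully utilized 20 Gbps link would need several days just to fill the planned 1,000 TB disk cache once:

```python
def transfer_time_days(terabytes, gbps):
    """Days needed to move `terabytes` of data over a link running
    flat-out at `gbps` gigabits per second (decimal units, no overhead)."""
    bits = terabytes * 1e12 * 8        # TB -> bits
    seconds = bits / (gbps * 1e9)      # divide by link rate in bits/s
    return seconds / 86400             # seconds -> days

# Filling the planned 1,000 TB disk cache over the planned 20 Gbps network:
print(round(transfer_time_days(1000, 20), 1))  # ~4.6 days
```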


    Data Grid R&D and Collaborations
    - Domestic collaboration (Grid WG) since 2001, and HEP/Physics Network Working Group since 2002
    - Operated LCG Grid testbeds at KNU and SNU in 2002; many large file-transfer tests between Korea and the USA/Japan/CERN; IPv6 tests, etc.
    - iVDGL (since 2002) and Open Science Grid (2005): serving researchers in HEP, astronomy, bio-chemistry, etc., providing CPU resources (84 CPUs at one point, later moved to LCG2)
    - LCG2: up and running since 2005, ready to join WLCG
    - SRB (Storage Resource Broker) running experience: for the Belle federation (Taiwan, Japan, Australia, China, Poland) and for domestic usage (KISTI, KBSI)
    - Partner of UltraLight; participated in SC04/SC05; joined EGEE (KISTI and others)
    Acknowledgment: Harvey and his Caltech team, Paul and OSG, Karita/KEK, the CERN LCG team and Lin/Taiwan, and many Korean colleagues


    Data Grid R&D and Collaborations (continued)
    - CMS MC productions and analyses
    - Decentralized Analysis Farm (DeCAF) for CDF: running and serving 800 researchers around the world for the CDF experiment
    - Belle gridification work: CA and SRB, MC production
    - AMS: MC production, but not in Grid mode
    - Other work: CA (server installed, almost ready for operation)


    Communities and National Initiatives
    - HEP Data Grid (CHEP + HEP community)
    - National Grid Initiative (MIC-KISTI)
    - National e-Science Initiative (MOST-KISTI): bioinformatics, meteorology, aeronautics, virtual observatory, fusion science, seismology, physical science WG, etc.
    - TEIN/APII (MIC-KISDI); GLORIAD (MOST-KISTI)
    - Global Grid Forum (GGF-KR, 2002); APAN-KR Advanced Network Forum (ANF, 2003); Access Grid
    - Related applications: medical surgery, virtual museum, measurement, IPv6, HDTV, KoreaLight
    - e-Science Forum to be formed (Dec. 2006)


    Korean CMS Institutions
    Kangwon NU, Konkuk U, Korea U, Seoul National Education U, Yonsei U, Seoul NU, Sungkyunkwan U, Chungbuk NU, Wonkwang U, Chonnam NU, Dongshin U, Seonam U, Kyungpook NU, Gyeongsang NU, Cheju NU
    ~15 institutions, 30-50 experimental physicists


    LCG Service Challenge 4 at KNU
    - Tier-2 configuration
    - Worked on SC4 from 2006.06.01 to 2006.09.30; Service Challenge 4: successful completion of the service phase
    - Ready for CSA06 (Computing, Software, & Analysis Challenge 2006), 2006.09.15-11.15: receive previously-run HLT events with online tags (HLT decisions)
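The SC4 "transfer quality" plots that appear later in the deck summarize, per link and time bin, the fraction of transfer attempts that succeeded. A minimal sketch of that metric (our illustration; the function name and the sample numbers are hypothetical, not taken from the PhEDEx monitoring code):

```python
def transfer_quality(successes, failures):
    """Fraction of transfer attempts that succeeded in one time bin.
    Returns None when there were no attempts (an empty cell in the map)."""
    attempts = successes + failures
    if attempts == 0:
        return None
    return successes / attempts

# A hypothetical hour of SC4 transfers into the KNU Tier-2:
print(transfer_quality(38, 2))  # 0.95
```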


    CHEP System Overview (CPU)
    Number of devices available for SC4 (total installed in parentheses):

    Exp. Group     CPU        Node       kSI2K
    CMS            46 (55)    29 (35)    40.3 (48.7)
    CDF/DCAF       73 (85)    44 (51)    82.2 (93.5)
    Belle          24 (46)    24 (35)    31.8 (23.9)
    AMS            24 (35)    15 (22)    22.7 (29)
    Grid Testbed    0 (22)     0 (21)     0 (13.8)
    Network Test    4 (14)     2 (6)      3.7 (10.4)
    ETC             0 (14)     0 (8)      0 (12.1)
    Total         171 (269)  114 (178)  180 (231.7)
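As a quick cross-check of the table (our own arithmetic, not from the slides), the SC4-available CPU and node columns do sum to the reported totals:

```python
# (CPUs available for SC4, nodes available for SC4) per experiment group,
# taken from the table above.
sc4 = {
    "CMS": (46, 29),
    "CDF/DCAF": (73, 44),
    "Belle": (24, 24),
    "AMS": (24, 15),
    "Grid Testbed": (0, 0),
    "Network Test": (4, 2),
    "ETC": (0, 0),
}

cpus = sum(c for c, _ in sc4.values())
nodes = sum(n for _, n in sc4.values())
print(cpus, nodes)  # 171 114 -- matches the "Total" row
```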


    System Overview (Hard Disk)


    System Overview (MSS)
    Mass Storage System (MSS):
    - IBM 3494 tape library: 45.6 TB total (3494A: 21.8 TB; 3494B: 23.8 TB)
    - IBM FAStT200 (buffer disk): 740 GB


    Resources
    - Linux cluster system and tape library system at the Supercomputing Center at CHEP (4th floor)
    - Plus CHEP computers at collaborating institutions, e.g. Konkuk, Yonsei, SKKU, SNU, KOREN-NOC, KEK, Fermilab, CERN


    CHEP 10G Network
    [Diagram: CHEP connects through a Force10 switch and Cisco ONS 15454/15600 optical gear to GLORIAD/KREONET in Daejeon at 10 Gbps, onward to the US and Hong Kong. CHEP interfaces: 3 x 10G Ethernet ports (MM fiber), 8 x Gigabit fiber ports (MM), 11 x 10/100/1000 ports (UTP).]


    KNU & CHEP Network Diagram
    [Diagram: border router Cisco 7606 links to KREONET and KOREN; Cisco 6509 and 3550-12T serve the KNU campus at 1 Gbps (to be upgraded to 10 Gbps); a Cisco ONS 15454 provides a 10 Gbps lambda to Daejeon; a Force10 E600 serves CHEP, with the PC cluster on 100 Mbps UTP. Address ranges: 155.230.20/22 (IPv4), 2001:220:404::/64 (IPv6).]


    KNU System Diagram for LCG SC4
    [Diagram: PhEDEx, FTS, and DPM services connected through a network switch]


    CMS SC4 Transfer State


    CMS SC4 Transfer Details


    CMS SC4 Transfer Rate

  • SC4 Transfer Quality Plot-Quality Map


  • SC4 Transfer Quality Plot-Successes



  • SC4 Transfer Rate Plots-Rate



  • SC4 Transfer Rate Plots-Volume

  • SC4 Transfer Rate Plots-Cumulative


    Outlook
    - Now moving from the base configuration to a more intensive, Tier-1-like system
    - Networks (GLORIAD and APII + TEIN2: more than 20 Gbps in total) are coming in and will be ready soon
    - Continuing R&D on Data Grid technology and networking for further improvement
    - A national strategic plan for the ILC (including the LHC and all other HEP activities, roadmaps, and e-HEP) has been discussed; we need to implement it
    - MOST will support Tier-2s for CMS and ALICE from next year, and possibly a Tier-1 for CMS later


    Summary
    - Demands of Korean HEP led to the establishment of the HEP Data Grid Collaboration and the HEP Working Groups for Networks, supported by the government (MIC, MOST) and various groups of experts
    - HEP Data Grid R&D and network tests have been performed in collaboration with international and domestic partners
    - Sufficient bandwidth with new technology, such as the 10 Gbps GLORIAD link, is ready and will be upgraded to 40 Gbps
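To put these link speeds in perspective (an illustrative calculation of ours, ignoring protocol overhead), a saturated 10 Gbps link moves a terabyte in under a quarter of an hour, and 40 Gbps cuts that to a few minutes:

```python
def transfer_minutes(terabytes, gbps):
    """Minutes to move `terabytes` of data over a link saturated at
    `gbps` gigabits per second (decimal units, no protocol overhead)."""
    return terabytes * 1e12 * 8 / (gbps * 1e9) / 60

print(round(transfer_minutes(1, 10), 1))  # 1 TB at 10 Gbps: ~13.3 min
print(round(transfer_minutes(1, 40), 1))  # 1 TB at 40 Gbps: ~3.3 min
```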

    Korean HEP is proposing a regional center for CMS and other experiments; preparations are under way, and a Tier-2 center is already working properly.