TRANSCRIPT
Workload Characterization and Performance Assessment of Yellowstone using XDMoD and Exploratory Data Analysis (EDA)
1 August 2014
Ying Yang, SUNY University at Buffalo
Mentor: Tom Engel, NCAR
Co-Mentors: Shawn Strande, Dave Hart, NCAR
Big Picture
• Background
• XDMoD and Yellowstone Job Data
• Enhancement of XDMoD for Yellowstone
• Additional Analyses of Yellowstone Job Data
• Summary & Future Work
Background
• What is XDMoD?
Open XDMoD is an open-source tool designed to audit and facilitate the utilization of supercomputers by providing a wide range of metrics on resources, including resource utilization, resource performance, and impact on scholarship and research.
XDMoD is an acronym for "XSEDE Metrics on Demand", developed by the University at Buffalo for NSF's XSEDE under NSF grant OCI 1025159.
Background
• XDMoD Architecture Details
XDMoD and Yellowstone Job Data
• XDMoD runs on a dedicated server at NWSC; the software was installed and configured by the SSG group.
• Collaborated with CISL and SUNY at Buffalo developers to test a new shredder for ingesting LSF job-termination accounting records.
• Shredded and ingested all of the LSF accounting data from Yellowstone, Geyser, and Caldera (November 2012 to the present) into Open XDMoD: 7,111,011 job records were shredded, and 6,810,231 jobs were ingested.
XDMoD and Yellowstone Job Data
[Data-flow diagram: LSF accounting records → Yellowstone shredded data → Yellowstone ingested data → SuperMoD REST Service API]
XDMoD and Yellowstone Job Data
• XDMoD’s Summary tab

XDMoD and Yellowstone Job Data
• XDMoD’s Metric Explorer (CPU time grouped by user)
Enhancement of XDMoD for Yellowstone
• Two new metrics (1)
Job Size: Weighted By Core Hours (Core Count): the average NCAR job size weighted by core hours.
Defined as: sum_{i=1..n}(core_count_i × core_hours_i) / sum_{i=1..n}(core_hours_i).
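The weighted average above can be sketched in Python. This is only a minimal illustration over hypothetical (core_count, core_hours) pairs, not XDMoD's implementation:

```python
def weighted_job_size(jobs):
    """Average job size weighted by core hours consumed.

    jobs: iterable of (core_count, core_hours) pairs.
    Returns sum(core_count_i * core_hours_i) / sum(core_hours_i).
    """
    num = sum(cores * hours for cores, hours in jobs)
    den = sum(hours for _, hours in jobs)
    return num / den if den else 0.0

# A long-running 512-core job dominates a short 16-core job,
# so the weighted size lands near 512 rather than near the
# unweighted mean of (512 + 16) / 2 = 264.
jobs = [(512, 1000.0), (16, 10.0)]
print(weighted_job_size(jobs))
```

Weighting by core hours prevents many tiny, short jobs from dragging the average down when most of the machine's delivered cycles actually go to large jobs.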
Enhancement of XDMoD for Yellowstone
• XDMoD’s Average Job Size
Enhancement of XDMoD for Yellowstone
• Sophie’s Job Size Weighted By Core Hours (Core Count)
Enhancement of XDMoD for Yellowstone
• Two new metrics (2)
Yellowstone %Scheduled: the percentage of resources scheduled to be utilized by jobs running on Yellowstone.
Yellowstone Scheduled Utilization: the total CPU hours scheduled for Yellowstone jobs over a given time period divided by the total CPU hours the system could have provided during that period.
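A sketch of the Scheduled Utilization ratio, assuming the scheduled CPU hours have already been summed from the accounting records. The core count and period length below are hypothetical, chosen only to make the arithmetic concrete:

```python
def scheduled_utilization(scheduled_cpu_hours, total_cores, period_hours):
    """Percentage of available CPU hours that jobs were scheduled to use.

    scheduled_cpu_hours: sum of (cores * scheduled wall hours) over all jobs.
    total_cores * period_hours: CPU hours the system could have provided.
    """
    capacity = total_cores * period_hours
    return 100.0 * scheduled_cpu_hours / capacity

# Hypothetical one-week window on a 1,024-core partition:
print(scheduled_utilization(150_000.0, total_cores=1024, period_hours=168))
```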
Enhancement of XDMoD for Yellowstone
• Yellowstone %Scheduled (by job size)
Many 144-node jobs that use only 1 core per node are running.
Additional Analyses of Yellowstone Job Data
Exploratory data analysis with the ingested data using R

Question: What is the average job size and how has it varied over time?

Methods:
• Forecasting Using Exponential Smoothing
• Forecasting Using ARIMA Model
• Multiple Linear Regression
• K-Nearest Neighbor

Experiments and Results
Additional Analyses of Yellowstone Job Data
Methods: Exponential Smoothing
a) Simple Exponential Smoothing: an additive model with a constant level and no seasonality
b) Holt’s Exponential Smoothing: an additive model with an increasing or decreasing trend and no seasonality
c) Holt-Winters Exponential Smoothing: an additive model with an increasing or decreasing trend and seasonality
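The update rules behind methods (a) and (b) can be sketched in plain Python. The study itself used R's forecasting functions; this is only an illustration of the recurrences on made-up data:

```python
def simple_exp_smoothing(series, alpha):
    """Simple exponential smoothing: level only, no trend or seasonality.

    Returns the smoothed level after the last observation, which is
    also the one-step-ahead forecast (the forecast function is flat).
    """
    level = series[0]                      # initialize level at first point
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def holt(series, alpha, beta):
    """Holt's linear method: level plus trend, no seasonality."""
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend                   # one-step-ahead forecast

series = [10.0, 12.0, 13.0, 15.0, 16.0]
print(simple_exp_smoothing(series, alpha=0.5))
print(holt(series, alpha=0.5, beta=0.3))
```

On a trending series like this one, Holt's forecast extrapolates the trend while simple smoothing lags behind it, which is why the method choice matters for the job-size series.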
Additional Analyses of Yellowstone Job Data
Methods: ARIMA Model
Autoregressive Integrated Moving Average (ARIMA) models include an explicit statistical model for the irregular component of a time series, which allows for non-zero autocorrelations in that component.

Building the Model:
Step 1: Difference the time series (diff() function)
Step 2: Select a candidate ARIMA model (acf() and pacf() functions)
Step 3: Forecast using the ARIMA model
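Steps 1 and 2 can be illustrated with plain-Python analogues of R's diff() and acf(). This is an illustration on toy data, not the analysis code:

```python
def diff(series, lag=1):
    """Lag-k difference of a series, like R's diff()."""
    return [series[i] - series[i - lag] for i in range(lag, len(series))]

def acf(series, max_lag):
    """Sample autocorrelations r_1..r_max_lag, like R's acf()."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    out = []
    for k in range(1, max_lag + 1):
        cov = sum((series[i] - mean) * (series[i - k] - mean)
                  for i in range(k, n))
        out.append(cov / var)
    return out

series = [3.0, 5.0, 4.0, 6.0, 5.0, 7.0, 6.0, 8.0]
stationary = diff(series)          # Step 1: differencing removes the trend
print(stationary)
print(acf(stationary, 3))          # Step 2: inspect the autocorrelations
```

A strong negative lag-1 autocorrelation in the differenced series, as here, is the kind of pattern one reads off the ACF/PACF plots when choosing candidate ARMA orders.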
Additional Analyses of Yellowstone Job Data
Experiments:
• Naive method
• Mean method
• Drift method (week and month)
• Simple Exponential Smoothing (SES)
• Holt’s Exponential Smoothing (HES)
• Holt-Winters Exponential Smoothing (HWES)
• ARIMA Model
• Multiple Linear Regression
• K-Nearest Neighbor

Descriptions:
• Data: all of 2013 (364 days in total); days 1-308 as training data, days 309-364 as testing data.
• Prediction error: the difference between the predicted and true values, expressed as a percentage of the true value.
• The naive, mean, and drift methods serve as performance baselines.
• The ES methods predict each day using all preceding days (e.g., days 1-100 predict day 101, days 1-101 predict day 102, ...).
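The rolling one-step-ahead evaluation described above can be sketched in Python with the three baseline methods. The series below is illustrative, not the actual Yellowstone job-size data:

```python
def naive(train):            # forecast = last observed value
    return train[-1]

def mean_method(train):      # forecast = mean of all observations
    return sum(train) / len(train)

def drift(train):            # forecast = last value + average historical change
    return train[-1] + (train[-1] - train[0]) / (len(train) - 1)

def rolling_error(series, split, forecast):
    """Mean absolute percentage error of one-step-ahead forecasts.

    Days 1..split form the initial training window; each later day t
    is predicted from all days before it (days 1..t-1 predict day t).
    """
    errors = []
    for t in range(split, len(series)):
        pred = forecast(series[:t])
        errors.append(abs(pred - series[t]) / abs(series[t]))
    return 100.0 * sum(errors) / len(errors)

series = [100.0, 102.0, 101.0, 105.0, 107.0, 110.0, 108.0, 112.0]
for f in (naive, mean_method, drift):
    print(f.__name__, round(rolling_error(series, split=6, forecast=f), 2))
```

The same harness evaluates the smoothing and ARIMA forecasters by passing a different `forecast` callable, which is what makes the baseline comparison in the results table meaningful.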
Additional Analyses of Yellowstone Job Data
Experiment Results
Summary & Future Work
Summary:
• Ingested all Yellowstone accounting data (November 2012 to present) into XDMoD
• Developed two new metrics for Yellowstone and contributed them back to open source
• Exploratory data analysis using R

Future Work:
• Enhancement of XDMoD
• Further data analysis of Yellowstone data
• Integrate EDA into XDMoD
Acknowledgements
HSS and USS:
Tom Engel, HSS (Mentor)
Shawn Strande, HSS (Co-Mentor)
Dave Hart, USS (Co-Mentor)
Davide Del Vento, CSG
Pamela Gillman, DASG
Erich Thanhardt, MSSG
Irfan Elahi, SCSG

IMAGe:
Doug Nychka, IMAGe
Ying [email protected]