
High Performance Computing Systems for IU Researchers – An Introduction

IUB Wells Library 10-Sep-2012

Jenett Tillotson <jtillots@iu.edu>
George Turner <turnerg@iu.edu>

High Performance Systems <hps-admin@iu.edu>

Getting Started

• BigRed: http://kb.iu.edu/data/avjx.html

• Quarry: http://kb.iu.edu/data/avkx.html

Getting an account

• http://itaccounts.iu.edu/
• Manage my IU computing accounts
• Create more accounts
• Choose "BigRed" or "Quarry"
• Takes about 15 minutes
• You will receive a welcome email

Logging in

• ssh bigred.teragrid.iu.edu
• ssh quarry.teragrid.iu.edu
• MOTD (message of the day) is shown at login
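For example, a login session looks like the following (the username jdoe is hypothetical; use your own IU username):

    ssh jdoe@bigred.teragrid.iu.edu    # log in to BigRed; the MOTD prints after authentication
    ssh jdoe@quarry.teragrid.iu.edu    # log in to Quarry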

Nodes

• BigRed: b1-b1050, s1c1b1-s19c3b14
• Quarry-classic: b001-b140, q001-q140
• Quarry-pg: p1-p230, pg1-pg230

Head nodes

• BigRed: b513 - b516
• Quarry-classic: b001 - b004
• Quarry-pg: p230

Wild West nodes

• BigRed: b509-b512

• Quarry: q005-q008

Compute nodes

• BigRed: b1-b508, b561-b1050

• Quarry: q009-q140, pg1-pg229

File systems

• Home directory
• DataCapacitor (Lustre)

File systems : Home directory

• /N/u/{username}/BigRed
• /N/u/{username}/Quarry
• ${HOME}
• 10GB quota
• quota -v
• Slow, limited, backed up
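To confirm where your home directory lives and how much of the 10GB quota you are using (a minimal sketch; the exact quota report format depends on the file server):

    echo ${HOME}    # e.g. /N/u/jtillots/Quarry
    quota -v        # show usage and limits on the home file system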

File systems : DataCapacitor (lustre)

• http://kb.iu.edu/data/avvh.html
• IU users: /N/dc/scratch/{username}
• Fast, unlimited, not backed up
• Permanent project file space available
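A common pattern is to stage data and run jobs from scratch rather than from the slower home directory; a minimal sketch (the directory and file names are hypothetical):

    cd /N/dc/scratch/${USER}
    mkdir -p myrun && cd myrun    # per-job working directory on the DataCapacitor
    cp ${HOME}/input.dat .        # stage input from home; remember scratch is not backed up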

Softenv

• Modifies environment
• ${PATH} and ${MANPATH}
• softlist (list available software keys)
• soft add (add a key to the current session)
• .soft (file read at login for permanent settings)
• resoft (re-read .soft; see the sketch below)
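A minimal sketch of the Softenv workflow; the +gcc key is hypothetical, so check softlist for the keys that actually exist on each system:

    softlist                 # browse the available software keys
    soft add +gcc            # add a key to the current shell session only
    # for a permanent change, add the key to ~/.soft, then:
    resoft                   # re-read ~/.soft in the current shell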

Resource Manager
Keeps track of resources: nodes, jobs, queues

• BigRed – LoadLeveler
• llsubmit (submit a job)
• llq (list jobs)
• llcancel (cancel a job)
• llclass (list queues/classes)

• Quarry – TORQUE
• qsub (submit a job)
• qstat -a -u $USER (list your jobs)
• qdel (cancel a job)
• qstat -Q (list queues; examples below)
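For example, submitting and then tracking a job on each system (job.cmd, job.sh, and the job ids are hypothetical):

    # BigRed (LoadLeveler)
    llsubmit job.cmd            # submit a LoadLeveler script
    llq -u $USER                # list your jobs
    llcancel s10c2b5.12345.0    # cancel a job by id

    # Quarry (TORQUE)
    qsub job.sh                 # submit a PBS script; the job id is printed
    qstat -a -u $USER           # list your jobs
    qdel 12345.qm2              # cancel a job by id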

Queues : BigRed

• LONG: 32 nodes/job, 64 nodes/user, 14 days wall clock time

• NORMAL: 256 nodes/job, 512 nodes/user, 2 days wall clock time

• SERIAL: 1 proc/job, 512 proc/user, 2 days wall clock time

• DEBUG: 4 nodes/job, 4 nodes/user, 15 minutes wall clock time, 1 idle job

• Limit of 768 jobs in all the queues per user

• Limit of 16 idle jobs per user (except for the DEBUG queue)

Queues : Quarry

• long: 42 nodes/job, 14 days wall clock time, 50 jobs/user

• normal: 6 nodes/job, 7 days wall clock time, 500 jobs/user

• serial: 1 node/job, 12 hours wall clock time, 500 jobs/user

• debug: 4 nodes/job, 15 minutes wall clock time, 2 jobs/user

• himem: 28 nodes/job, 14 days wall clock time, 50 jobs/user

• batch: default queue

• Limit of 16 idle jobs per user

Job Scripts

• http://kb.iu.edu/data/axpz.html

BigRed

#!/bin/bash -l
# @ step_name = step1
# @ initialdir = /N/u/jtillots/BigRed/myoutput
# @ output = step1.out
# @ error = step1.err
# @ notification = always
# @ notify_user = jtillots@indiana.edu
# @ class = DEBUG
# @ wall_clock_limit = 15:00
# @ account_no = NONE
# @ queue
/bin/date
sleep 10
/bin/date

Quarry

#!/bin/bash -l
#PBS -N step1
#PBS -j oe
#PBS -k o
#PBS -m abe
#PBS -M jtillots@indiana.edu
#PBS -q debug
#PBS -l nodes=1,walltime=15:00
/bin/date
sleep 10
/bin/date

Job ids

• BigRed: s10c2b5.{jobid}.0

• Quarry: {jobid}.qm2

Scheduler : Moab
Decides which jobs get run on what nodes at what time.

• showq
• by queue: -w class={queuename}
• running jobs: -r
• idle jobs: -i
• blocked jobs: -b

• checknode (state of a node)
• checkjob (detailed state of a job)
• showstart (estimated start time of an idle job)
• Single-user mode
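Some example invocations (the queue name and job id are hypothetical):

    showq                    # all jobs known to the scheduler
    showq -r                 # running jobs only
    showq -i                 # idle (queued) jobs only
    showq -w class=debug     # jobs in a single queue
    checkjob 12345           # detailed state of one job
    showstart 12345          # estimated start time for an idle job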

Job Priority

• mdiag -p (per-job priority breakdown)
• Fair share
• XFactor
• QOS
• Backfill
• showbf (resources currently free for backfill; see below)
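For example, to see why a job has the priority it does and whether a small request could be backfilled right now (the 4-node, 15-minute request is hypothetical):

    mdiag -p                  # priority components: fair share, XFactor, QOS, ...
    showbf                    # idle resources available for backfill
    showbf -n 4 -d 00:15:00   # could 4 nodes be had for 15 minutes immediately?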
