
Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

Abdul Waheed and Jerry Yan

NAS Technical Report NAS-98-012, March 1998

{waheed,yan}@nas.nasa.gov
NAS Parallel Tools Group
NASA Ames Research Center, Mail Stop T27A-2
Moffett Field, CA 94035-1000

Abstract

This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.



1 Introduction

Distributed Shared Memory (DSM) systems are becoming popular in high performance computing because they offer ease of programming due to a global address space and scalability to a large number of nodes. Although DSM systems facilitate programming, they can potentially introduce performance bottlenecks that require additional effort on the part of a user to discover and eliminate [20]. Non Uniform Memory Access (NUMA) architectures can incur orders of magnitude greater latencies to access data that reside farther from the processor in the memory hierarchy [11]. These systems often use cache-based commodity processors with cache coherence implemented in hardware to hide latency. Memory traffic generated by the protocols that keep the caches coherent is another potential source of performance degradation. While the developers of compilation and parallelization tools for shared memory systems have addressed some of these problems, extensive user input is still required to fully benefit from these tools [2,3,10,16]. Understanding the sources of parallelism in a program, and the potential overhead due to subtleties of a DSM architecture, is essential for effectively using these systems.

Due to the growing disparity between processor and memory speeds, tool developers have been focusing on measurement-based tools to analyze memory performance. Several state-of-the-art microprocessors provide on-chip performance counters to facilitate these measurements [20]. However, most of the existing tools and techniques are limited to evaluating cache and memory performance for a single processor [19]. These tools typically do not directly address multiprocessor memory performance issues. There are examples of research prototype DSM systems that can support memory performance measurements across multiprocessor nodes [7]. Unfortunately, such tools are not yet widely available for commercial multiprocessors. We present a performance model that accounts for the inherent parallelism in a program, which can result in potential speedup as well as overhead when that program is executed on a DSM system. This model can be used to analyze the efficacy of parallelization and quantitatively measure the overhead of parallelizing a program. Quantitative evaluation of this overhead provides an indirect measure of effective utilization of the available memory subsystem performance.

In this paper, we present a performance model to characterize the execution of a compiler directives-based parallelized program. We subsequently apply this model to evaluate the performance of our parallelized version of the NAS benchmarks on the SGI Origin2000, which is a commercial DSM system with a ccNUMA architecture. Each node of the system consists of two MIPS R10000 processors with two levels of separate data and instruction caches for each processor, and 4 GB of main memory shared between the two processors


on a node. Multiple system nodes are connected in a hypercube topology through a high speed network. We used native tools to parallelize the sequential implementation of the NPBs [14]. These tools include: Power Fortran Accelerator (PFA), which can automatically insert parallelization directives in sequential code and transform the loops to enhance their performance; Parallel Analyzer View (PAV), which can annotate the results of the dependence analysis of PFA and present them graphically; and the Fortran77 compiler with the MP runtime library to compile and execute the parallelized code [13]. In addition to using these tools, we inserted some directives by hand to assist the compiler and improve the performance.

We explain the directives-based parallelization paradigm in Section 2. A performance model and metrics to evaluate different aspects of a directives-based parallelized program are presented in Section 3. Section 4 reports a detailed measurement-based evaluation of the parallelized NAS benchmarks using the performance model and metrics of Section 3. We briefly survey related research efforts in Section 5 and conclude in Section 6.

2 Compiler-Directed Parallelism

Compiler-directed parallelism has traditionally been used for vector supercomputers. It has recently started attracting the attention of mainstream vendors due to the increasing popularity of Symmetric Multiprocessing (SMP) systems. Parallelization directives can be inserted in legacy sequential code to tap the multiprocessing potential of an SMP architecture. These directives are in the form of special comments that are ignored by a compiler without the appropriate multiprocessing flag. Thus, there is no need to maintain separate sequential and parallelized versions of the same code. There is an ongoing effort to standardize these directives so that programs can be ported across different SMP platforms [15].

The potential parallelism of a DSM system can be exploited in one of three ways: message-passing; use of data-parallel languages; or compiler-directed multiprocessing. Message-passing provides the user with explicit control over communication and synchronization through commonly used message-passing libraries [12]. Data-parallel programming languages allow users to write SPMD programs without explicit message-passing, which is handled by the compiler and its runtime system. The main source of parallelism is the program data, which can be distributed among different processors through compiler directives. High Performance Fortran (HPF [6]) is a standard for these directives that has been used by several compiler developers. Both message-passing and data-parallelism force a user to develop a parallel algorithm, which is a challenging task. Due to the simplicity of programming shared memory systems, compiler developers


have been investigating different techniques to exploit parallelism directly by the compilers for such systems. This process can be accomplished automatically with a compiler or through hints provided by the user to the compiler [15].

Before inserting compiler directives in sequential code, one has to identify the parts of the program that can be parallelized without affecting correctness. The main source of parallelism is the loops whose iterations can be scheduled on multiple processors without any data access dependences or conflicts among different iterations. This requires dependence analysis for every loop nest of the source code. For a given loop nest, it is customary to parallelize the outer-most loop so that there is significant work in each set of iterations scheduled on multiple processors. The user may have to modify some loop nests to resolve dependences on the outer-most loop index in order to parallelize the loop. If there are data dependences between different iterations of the outer loop, parallelization is inhibited to preserve the correctness of the program. As illustrated in Figure 1, parallelization is an iterative process, which continues until most of the loops contributing to the overall execution time are parallelized. Finally, the parallelized code is compiled and linked with appropriate runtime libraries to execute on a target multiprocessor.

Compared to the process-level parallelism of message-passing programs, directive-based parallelism constitutes a finer-grained, loop-level parallelism. Figure 2 provides an example of this parallelism implemented through MIPS Fortran compiler directives for multiprocessing [13]. The C$DOACROSS directive instructs the compiler to divide the outer loop iterations equally among the available processors. This is the default loop scheduling, which is implemented by the runtime system until specifically instructed otherwise by additional compiler directives. The data distribution directive, C$DISTRIBUTE, works at the level of memory pages rather than at the level of array elements as in data-parallel languages. Thus, the data distribution is relatively coarse-grained.
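As an illustration of overriding that default, the fragment below is a sketch only: it is not taken from the benchmarks, and the MP_SCHEDTYPE and CHUNK clauses follow our reading of the MIPSpro Fortran77 documentation [13], against which the exact spelling should be checked. It requests dynamic self-scheduling of the outer loop in chunks of four iterations instead of the default equal static split:

      integer i, j
      double precision a(1024,1024)
c     Hand out the j loop dynamically to idle threads, four
c     iterations at a time, rather than splitting it statically.
c$doacross local(i,j), mp_schedtype=dynamic, chunk=4
      do j = 1, 1024
        do i = 1, 1024
          a(i,j) = 0.0d+00
        enddo
      enddo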

Figure 1. General methodology of parallelizing sequential code using compiler directives for shared memory multiprocessing. (The figure depicts an iterative cycle of directive insertions into the sequential code, code modifications as needed, and performance evaluation, producing parallel code for an SMP system.)

Directives-based parallelism is supported by the MP runtime library on the Origin2000, which implements a


fork-and-join paradigm of parallelism. A master thread initiates the program, creates multiple slave threads, schedules the iterations of parallelized loops on all the threads including itself, waits for the completion of a parallel loop by all the slave threads, and executes the sequential portions of the program. Slave threads wait for work (i.e., for parts of parallel loops) when the master thread executes a sequential portion of the code. Figure 3 represents this runtime system graphically. Clearly, the main disadvantage of this type of parallelism is the overhead to synchronize the different threads that execute different iterations of a loop.

Considering ease of programming, directives-based parallelism has clear advantages over message-passing and data-parallelism. However, the performance impact of using this programming style on a DSM system is a relatively unexplored area. We focus on performance evaluation of directives-based parallelized programs in the subsequent sections.

3 Performance Model and Metrics

Compared to message-passing and data-parallelism, compiler-directed parallelism is comparatively fine-grained. Parallelism is discovered from the loops in the sequential program whose iterations can be scheduled on multiple processors. It is simpler to quantify the amount of parallelism that has been discovered in a directives-based parallelized program. Based on these initial measurements, we can estimate the performance with multiple processors under ideal conditions of utilization. We use these estimates to quantify the overhead of directives-based parallelization techniques that is otherwise hidden from the user. This analysis helps the user to decide whether or not a locality optimization effort will be useful. We first

      integer i, j, k
      double precision tmp
      double precision a(256,256,256), b(256,256,256)
c$distribute a(*,*,BLOCK)
c$distribute b(*,*,BLOCK)
c$doacross local(k,j,i,tmp)
      do k = 1, 254
        do j = 1, 255
          do i = 1, 255
            tmp = 1.0d+00 / a(i,j,k)
            b(i,j,k) = a(i,j,k) * tmp
          enddo
        enddo
      enddo

Figure 2. An example of loop-level parallelism using MipsPro Fortran77 compiler directives for multiprocessing.


explain the performance model with respect to the DSM system architecture that we are focusing on. Subsequently, we define metrics to evaluate the parallelization and scalability of the parallelized code.

3.1 Performance Model

Consider a sequential program consisting of N blocks, such that only one block is executed at any time. Unless otherwise indicated, we shall use the term block interchangeably with subroutine. This is true for most programs developed in a structured manner. The sequential execution time of the program is denoted by Ts and is calculated as:

$$T_s = \sum_{i=1}^{N} t_i , \qquad (1)$$

where ti is the execution time spent in the i-th block. We have to measure the aggregate time spent in every block of the code that substantially contributes toward the overall sequential execution time. Therefore, we define the sequential cost for executing the i-th block as a fraction:

$$SC_i = \frac{t_i}{T_s} . \qquad (2)$$
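As a worked instance of equation (2), using the BT measurements reported later in Table 1 (total sequential time 2723.96 sec), the lhsz subroutine alone costs

$$SC_{lhsz} = \frac{453.21}{2723.96} \approx 0.166,$$

i.e., about 16.6% of the sequential execution time, making it one of the most important blocks to parallelize.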

Figure 3. Execution of a parallel loop using the fork-and-join paradigm with three threads. (The figure shows the master thread scheduling iterations in parallel, executing its scheduled iterations, waiting for completion, and executing the sequential portions, while the slave threads alternate between waiting for work, executing scheduled iterations, and synchronizing with the master.)

When a program is executed in parallel using the fork-and-join paradigm, synchronization overhead is


incurred by slave threads waiting for parallel work and by the master thread waiting for all the slave threads to finish executing a particular parallel loop. The execution time of a directives-based parallelized program is denoted by Tp and is given by:

$$T_p = \sum_{i=1}^{N} t_i + t_o = \sum_{i=1}^{N} \left( t_{p_i} + t_{s_i} \right) + t_o = \sum_{i=1}^{N} t_{p_i} + \sum_{i=1}^{N} t_{s_i} + t_o , \qquad (3)$$

where the (useful) execution time spent in the i-th block (ti) is the sum of the time spent in the parallelized loops of that block (tpi) and in the remaining sequential code of that block (tsi). The parallelization overhead for the entire program is given by to, because it is non-trivial to measure it for each individual parallelized block of the program using profiling. Considering the architecture of a ccNUMA-based DSM system, the parallelization overhead is an intricate function of the following factors:

1. aggregate synchronization time between threads during execution of a parallelized program;

2. number of parallel loops;

3. aggregate load imbalance between threads during execution of a parallelized program;

4. non-local memory accesses by each thread; and

5. resource contention between a thread and other users on the system.

While the first four factors may not change from one execution to another, resource contention due to other users of the system varies in an unpredictable manner. Since directives-based parallelized programs rely on access to shared data structures for synchronization as well as for computations requiring non-local data, they are particularly susceptible to contention from other users. Quantitative calculation of the parallelization overhead and other metrics is presented in the following subsection.

3.2 Performance Metrics

Consider that a subroutine j in the program has K parallelized loops. Then we define the metric parallel coverage of subroutine j as:

$$PC_j = \frac{\sum_{i=1}^{K} t_{p_i}}{T_s} . \qquad (4)$$

Note that the parallel coverage of a subroutine can be determined by profiling the execution of the sequential program. This technique is often used to determine the fraction of code that can be executed in parallel [4]. The total parallel coverage of a parallelized program is equal to the sum of the parallel coverages of all subroutines in the program. If there are L subroutines in a program, then the parallel coverage of the entire



program is calculated as:

$$PC = \sum_{j=1}^{L} PC_j . \qquad (5)$$

A value of PC close to 1.0 (or 100%, if expressed as a percentage) is the ideal value for a parallelized program, indicating that there is no sequential code and no parallelization overhead. Therefore, executing such a program on n processors should result in a speedup of n, provided that all the processors are fully utilized during the entire execution. A higher value of this metric is desirable because it represents a better parallelization of the sequential code.

Amdahl's law based on a fixed workload can be used as a measure of the scalability of the parallelized code under the fork-and-join execution model. According to Amdahl's law, if a is the sequential fraction of a program, the maximum possible speedup that can be obtained on an n-processor system is given by:

$$S_n = \frac{1}{a + \frac{1-a}{n}} = \frac{n}{1 + a(n-1)} , \qquad (6)$$

where a is the fraction of the serial portion of the code. Noting that the parallel coverage PC = 1 - a, we can express the ideal speedup according to Amdahl's law as:

$$S_n = \frac{n}{PC + n(1 - PC)} . \qquad (7)$$
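For example, with the parallel coverage measured for FT in Section 4.1 (PC of roughly 0.98), equation (7) bounds the attainable speedup on 16 processors at about

$$S_{16} = \frac{16}{0.98 + 16 \times 0.02} \approx 12.3,$$

well short of the ideal value of 16 even before any architecture-specific overhead is considered.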

Using this definition of theoretical speedup, we can now calculate the combined value of the parallelization overhead as:

$$t_o = T_p - \sum_{i=1}^{N} \left( t_{p_i} + t_{s_i} \right) = T_p - \frac{T_s}{n} \left( PC + n \left( 1 - PC \right) \right) , \qquad (8)$$

where Tp is the measured execution time on n processors.

The parallel coverage and speedup metrics defined by equations (5) and (7), respectively, are used for independent assessment of a directives-based parallelized program. In order to compare the performance of a directives-based parallelized program with the same program parallelized using a different technique, we use execution time as the metric. Additionally, equation (8) will be used for evaluating the parallelization overhead of directives-based parallelized programs.
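The metrics above can be computed directly from profile data. The following fragment is an illustrative sketch only (it is not part of the benchmark or tool sources; the hard-coded inputs are the BT figures for 64 processors taken from Tables 1 and 3 later in the paper) that evaluates the theoretical speedup of equation (7) and the aggregate overhead of equation (8):

      program ovhd
c     Illustrative sketch: theoretical speedup (Eq. 7) and aggregate
c     parallelization overhead (Eq. 8). The inputs are the BT
c     measurements for 64 processors (Tables 1 and 3).
      double precision ts, tp, pc, sn, tth, to
      integer n
      ts = 2723.0d0
      tp = 198.0d0
      pc = 0.9969d0
      n  = 64
c     Theoretical speedup on n processors (Eq. 7).
      sn  = dble(n) / (pc + dble(n)*(1.0d0 - pc))
c     Theoretical execution time and aggregate overhead (Eq. 8).
      tth = ts / sn
      to  = tp - tth
      write (*,*) 'theoretical speedup    :', sn
      write (*,*) 'theoretical time (sec) :', tth
      write (*,*) 'overhead (sec)         :', to
      write (*,*) 'overhead (% of Tp)     :', 100.0d0*to/tp
      end

For these inputs the sketch reproduces the last row of Table 3: a theoretical time of about 51 sec against a measured 198 sec, or roughly 74% overhead.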



4 Performance Evaluation

Performance is evaluated from four perspectives: efficacy of the parallelization process; scalability of the parallelized programs; parallelization overhead; and performance comparison of the directives-based parallelized programs against the hand-parallelized and optimized code. The metrics discussed in Section 3.2 are used for this evaluation.

4.1 Analysis of Parallelization

Parallel coverage is defined in Section 3.2 as a metric to represent the efficacy of the parallelization process. This metric was calculated for all NAS benchmarks parallelized using compiler directives for shared memory multiprocessing. For these calculations, the benchmarks are compiled with instrumentation to measure the time spent in each subroutine that contains parallel code blocks. We execute these programs on a single processor of the Origin2000.

Table 1 presents detailed measurements related to the parallel coverage obtained for BT. The sequential implementation of BT contains a number of modular subroutines that solve the Navier-Stokes equations using a Block Tridiagonal algorithm. An inspection of these subroutines indicates that most of this algorithm contains sufficient parallelism. Quantitatively, these measurements indicate that the code responsible for more than 99% of the entire execution time is parallelized. This level of parallelism was attained after iteratively analyzing the source code and discovering possibilities of parallelization through minor modifications in some loop nests.

The same measurement procedure was repeated to calculate the parallel coverages of the FT, CG, and MG benchmarks. A summary of these calculations is reported in Table 2. Unlike BT, we relied on native SGI tools (PFA and PAV) to parallelize these benchmarks. Furthermore, we had to manually perform inter-procedural analysis to parallelize a few important loops in FT.

Table 2. Parallel coverage of the FT, CG, and MG benchmarks.

  Benchmark   Execution time (sec)   Execution time for parallel blocks (sec)   Parallel Coverage (%)
  FT               203.70                          200.15                               98.26
  CG                50.65                           48.49                               95.75
  MG                96.93                           90.56                               93.43

The results shown in Table 2 suggest that 93%–99% of the code is parallelized. It should be noted that


when a program is 100% parallelized, a linear speedup could be obtained provided that all the processors are equally utilized throughout the execution. This theoretical speedup will be used as a criterion to evaluate the actual performance of the parallelized code in the following subsections.

4.2 Analysis of Scalability

Figure 4 presents the scalability characteristics of the four parallelized benchmarks. The ideal execution time values are calculated assuming a linear speedup from the sequential execution times. Theoretical execution time values are determined according to the theoretical speedup given by equation (7) in Section 3.2. The speedup is less than the ideal or theoretical values for BT and FT. However, CG and MG exhibit close to ideal speedup values. BT and FT are relatively larger programs compared to CG and MG. Additionally, the algorithms for BT and FT depend on a regular pattern of data accesses, which is not the case for CG and MG [5]. The lack of structured data accesses helps the loop-level parallelization paradigm by reducing parallelization overhead, unlike message-passing or data-parallelism. Therefore, BT and FT are susceptible to overhead due to data locality as well as synchronization. Since these overheads are not significant for CG and MG, due to their structure as well as their smaller number of parallelized loops, the speedup is close to ideal.

Table 1. Parallelization statistics obtained from measurements of BT on an Origin2000 node. Sequential cost and parallel coverage are expressed as a percentage of the total execution time, which is 2723.96 sec for this particular execution.

  Subroutines with     Sequential overall   Execution time for      Sequential   Parallel
  parallelized code    time (sec)           parallel blocks (sec)   Cost (%)     Coverage (%)
  add                        19.05                 19.05              0.69         0.69
  rhs_norm                    0.13                  0.13              0            0
  exact_rhs                   2.31                  0.83              0.08         0.03
  initialize                  6.17                  0.19              0.22         0
  lhsinit                     2.35                  2.34              0.08         0.08
  lhsx                      357.80                357.80             13.79        13.79
  lhsy                      375.06                375.00             13.76        13.76
  lhsz                      453.21                453.20             16.63        16.63
  compute_rhs               272.46                272.45             10.00        10.00
  x_backsubstitute          103.75                103.75              3.80         3.80
  x_solve_cell              304.49                304.48             11.17        11.17
  y_backsubstitute          106.87                106.40              3.92         3.90
  y_solve_cell              306.06                306.00             11.23        11.23
  z_backsubstitute          106.87                106.80              3.92         3.92
  z_solve_cell              307.25                307.10             11.28        11.27
  Total                    2723.80               2715.50             99.99        99.69


In fact, CG and MG show better than ideal speedup for some number of processors. This is not unusual for a cache-based DSM system. Ideal or theoretical speedup is determined with respect to the sequential execution time, which is constrained by the amount of data that can be kept in the caches. With computation and data distributed on multiple cache-based processors of the Origin2000, the effective cache size also increases, resulting in higher than expected speedup for some executions of CG and MG.

Figure 4. Scalability characteristics of the directives-based parallelized programs and their comparison with ideal and theoretical speedups: (a) BT, (b) FT, (c) CG, (d) MG. (Each panel plots execution time in seconds versus the number of processors for the ideal, theoretical, and measured cases.)

Based on the results of the scalability measurements, it can be observed that speedups close to the ideal and theoretical values are attainable by parallelizing programs using the directives-based approach. However, differences from the expected theoretical values of speedup should be expected for larger applications with regular data accesses. In those cases, careful data distribution becomes important to obtain high speedup values. In fact, many argue in favor of using fine-grained data distributions, similar to those used in message-passing programs, in conjunction with shared memory multiprocessing directives to leverage the


benefits of both paradigms.

4.3 Parallelization Overhead

The measurement-based results presented in Section 4.2 indicate that parallelization overhead is inevitable even when the performance is close to ideal. The overhead stems from the cache-based DSM architecture as well as from the excessive synchronization needed to support loop-level parallelization at runtime. In order to put these overheads in proper perspective, we first present the measured values of parallelization overhead for the directives-based parallelized implementations of the NAS benchmarks in Section 4.3.1. Then we analyze synchronization overhead using a synthetic loop nest in Section 4.3.2.

4.3.1 Measurement of Parallelization Overhead

Compared to FT, CG, and MG, considerably more time was spent on BT to analyze and tune its performance. The speedup characteristics of BT based solely on its parallelization did not show any appreciable reduction in execution time with an increasing number of processors, even with close to ideal parallel coverage as discussed in Section 4.1. This is due to the overhead of accessing data not found in the caches or local memory. Therefore, all parallelized loops were re-examined, and additional directives that enable data distribution at the granularity of pages of memory were inserted. This resulted in significant performance improvement compared to the initial unoptimized implementation. As shown in Figure 4(a), the parallelization overhead is small as a result of the additional data distribution directives. However, as we know from the speedup characteristics of CG and MG, close to ideal speedup is attainable by removing data locality overhead such that most of the data accesses are limited to the first level caches. We first try to assess the quantitative value of this overhead for BT using equation (8).

Table 3 lists the ideal, theoretical, and measured execution times for BT using multiple processors. The parallelization overhead is presented as a percentage of the measured execution time. Clearly, the actual speedup is lower than the expected theoretical maximum value for any number of processors. Note that the parallelization overhead continues to increase with the number of processors and accounts for about 75% of the total execution time with 64 processors. This behavior is an indication of non-optimal data placement that results in non-local data accesses as well as cache coherence traffic. As we mentioned in Section 3.1, it is difficult to quantify the parallelization overhead due to the number of factors that can potentially aggravate it. Although the measurements presented in Table 3 suggest that the bottleneck could be due to data locality overhead, it is practically impossible to isolate its quantitative contribution to the overall


overhead from the contributions of other factors, including synchronization and resource contention.
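As a cross-check on how the tabulated overhead is obtained, plugging the 16-processor BT figures into equation (8), with Ts = 2723 sec and PC of about 0.9969 taken from Table 1, gives

$$t_o \approx 374 - \frac{2723}{16}\left(0.9969 + 16 \times 0.0031\right) \approx 374 - 178 = 196 \ \mathrm{sec},$$

which is about 52% of the measured 374 sec, matching the corresponding entry in Table 3.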

Among the parallelization overheads, synchronization overhead can be measured using SGI's SpeedShop toolset, which can determine the time spent in the synchronization primitives of the MP library. This profiling information is obtained using the hardware performance counters of the MIPS R10000 processors. These measurement-based experiments were carried out for BT, FT, CG, and MG using relatively small numbers of processors. Running such experiments for larger numbers of processors perturbs the actual program to a point where profiling itself becomes a significant overhead. The results of these experiments are reported in Table 4. The synchronization overhead for each case is given as a percentage of the measured execution time. Synchronization overheads were as high as 19% in some cases. The last column lists the total parallelization overhead obtained by subtracting the theoretical execution time from the measured execution time according to equation (8). In two cases, this calculation is not possible due to the better than expected speedup of CG and MG, which is a consequence of the untuned sequential versions of these programs, as discussed in Section 4.2.

Although the measurements report up to 19% overhead due to synchronization, it is incorrect to assume that the synchronization overhead is a result of parallel loop scheduling alone. Synchronization and data locality overheads are strongly correlated with each other. The time that a master thread spends waiting for slaves to finish executing a parallel loop could be due to a combination of two reasons: (1) the time to synchronize multiple threads; and (2) load imbalance between the master and some of the slave threads due to their non-local data accesses. If resource contention from other users is also considered, the problem of isolating one particular type of overhead becomes even more complex.

Table 3. Calculation of the parallelization overhead of BT on 1 to 64 processors of an Origin2000.

  Number of    Ideal execution   Theoretical execution   Measured execution   Parallelization
  processors   time (sec)        time (sec)              time (sec)           overhead (%)
   1               2723                2723                   2723                 0
   4                680                 687                    931                26.20
   9                303                 310                    455                31.86
  16                170                 178                    374                52.41
  25                109                 117                    216                45.83
  36                 76                  84                    186                54.84
  49                 56                  64                    182                64.84
  64                 43                  51                    198                74.24


4.3.2 Analysis of Loop Synchronization Overhead

Before reaching any conclusions about parallelization overhead, a few simple experiments were carried out to measure the synchronization overhead of distributing loop iterations. The code fragment listed in Figure 5 is used to isolate this overhead from any other as much as possible. Note that all variables accessed in this loop nest are labeled "local". We compiled and linked this code without any compiler optimization flags. This guarantees that all data accesses in the parallelized loop are from the first level of caches, without any non-local accesses. Multiple SpeedShop profiling experiments with this code were executed on 4, 8, 9, and 16 processors.

Table 4. Parallelization overhead for directives-based parallelized NAS benchmarks.

  Benchmark   Number of    Theoretical execution   Measured execution   Measured synchronization   Total overhead
              processors   time (sec)              time (sec)           overhead (sec)             (sec)
  BT              4              804                    1053               208 (19.75%)             249 (23.65%)
                  9              363                     444                80 (17.98%)              81 (18.24%)
  FT              4               35.24                   39.66              2.62 (6.6%)              4.42 (11.14%)
                  8               18.79                   23.02              2.37 (10.3%)             4.23 (18.38%)
  CG              4               12.97                   14.58              2.80 (19.2%)             1.61 (11.04%)
                  8                7.46                    4.78              0.74 (15.5%)             —
  MG              4               22.14                   18.41              0.63 (3.4%)              —
                  8               13.50                   14.92              0.60 (4.0%)              1.42 (9.5%)

      integer i, j, k, l
      double precision u0, u1
      u0 = 1.0
      u1 = 1.0
c$doacross local(i,j,k,l,u0,u1)
      do l = 1, 128
        do k = 1, 128
          do j = 1, 128
            do i = 1, 128
              u0 = u1 + 1
            enddo
          enddo
        enddo
      enddo
      end

Figure 5. A synthetic program to analyze the synchronization overhead for directives-based parallelized programs.


Figure 6 presents the experimental results. Each bar represents the measured synchronization overhead for one execution of the program. The total execution time for four processors is about 1.2 seconds, and it scales linearly with an increasing number of processors. This is consistent with the expected behavior for such a simple program. The overhead measurements are consistent for smaller numbers of processors, showing a variation in the range of 6%–19%. The 16-processor case shows a larger overhead because it is presented as a fraction of the total execution time, which is very small in that case. Although we tried to ensure that data locality overhead does not affect the measurements, we cannot isolate the overhead due to resource contention from other users.

Based on the results reported in this subsection, two conclusions can be drawn:

1. Assuming a properly tuned sequential version of a program to calculate accurate values of the theoretical speedup, it is possible to calculate the aggregate value of parallelization overhead.

2. It is impractical to quantitatively isolate the impact of different sources of parallelization overhead.

Calculation of the aggregate parallelization overhead using the performance model of Section 3 provides useful information to the user. A high value of this overhead, despite near ideal parallel coverage, almost certainly indicates a memory performance bottleneck. Parallelization overhead on a cache-based DSM system will continue to decrease as most of the data is placed closest to the processor in the available memory hierarchy.

Figure 6. Synchronization overhead for the synthetic loop nest, plotted as a percentage of execution time versus the number of processors (4, 8, 9, and 16).


4.4 Comparative Performance Analysis

The NAS benchmarks were originally written as a suite of paper-and-pencil benchmarks to allow high-performance computing system vendors and researchers to develop their own implementations to evaluate the specific architectures of their interest [5]. NAS also provides a hand-parallelized message-passing implementation of the benchmarks based on the MPI message-passing library [14]. This implementation is carefully written and optimized for a majority of existing high performance computing platforms. Therefore, we compare the performance of our directives-based implementation against the MPI-based, hand-parallelized implementation. It should be noted that an MPI-based implementation differs from a directives-based shared-memory implementation of the same program in two important respects:

1. the program runs under the Single Program, Multiple Data (SPMD) paradigm and shares data through explicit message-passing among multiple processes; and

2. data is distributed such that different processors "own" different elements of an array according to the type of distribution.

In contrast, shared-memory parallelized programs are executed under a fork-and-join paradigm with a global address space. Additionally, data distribution directives result in the ownership of different pages of data (arrays) by different processors, in contrast to the ownership of specific elements of an array.
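To make the contrast concrete, the following is a minimal, hypothetical SPMD sketch (it is not taken from the NPB 2.0 sources; the array sizes, names, and block partition are illustrative) in which each MPI process computes only the slab of the k dimension that it owns, so ownership is expressed at the granularity of array elements chosen by the programmer rather than memory pages:

      program spmd
c     Hypothetical SPMD fragment: each process owns a contiguous
c     block of the k dimension. A production code would also size
c     its arrays to the local block and exchange boundary data
c     explicitly with message-passing calls.
      include 'mpif.h'
      integer ierr, rank, nprocs, klo, khi, i, j, k, nk
      parameter (nk = 64)
      double precision a(64,64,nk), b(64,64,nk)
      call MPI_INIT(ierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
c     Block partition of k: this process works on klo..khi only.
      klo = rank*nk/nprocs + 1
      khi = (rank + 1)*nk/nprocs
      do k = klo, khi
        do j = 1, 64
          do i = 1, 64
            a(i,j,k) = dble(i + j + k)
            b(i,j,k) = 1.0d+00 / a(i,j,k)
          enddo
        enddo
      enddo
      call MPI_FINALIZE(ierr)
      end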

Figure 7 presents the comparison between the directives-based parallelized benchmarks and the hand-parallelized, MPI-based versions of the same. In all of these cases, performance improves with the number of processors. For BT and FT, the MPI-based implementations perform slightly better than the shared-memory implementations due to data placement. Directives-based data distribution results in placing pages of arrays on multiple processors. The coarse granularity of this data distribution starts becoming a bottleneck for larger numbers of processors because all loop iterations that use a particular data element cannot be co-located at the same node. Therefore, as the number of processors increases, multiple processors have to access data from pages that they do not own locally, which adversely impacts the overall execution time. In contrast, a message-passing program is designed in a way that gives the programmer control over the locality of every data element. As the number of processors increases, the amount of data owned by a processor decreases proportionately. This is a particularly favorable situation for a cache-based DSM system because larger proportions of the local data can reside in the caches to enhance memory system performance. We tuned BT's data locality for almost all of the parallelized loops to ensure that each loop iteration is scheduled on the processor that owns the elements of the arrays accessed during those iterations. Consequently, the performance of BT is comparable to its hand-parallelized implementation. The performance of the two implementations of CG and MG


is also comparable (see Figure 7(c) and (d)). In the case of CG and MG, data locality does not become a bottleneck due to the comparatively smaller size of the code and the smaller number of memory accesses. Therefore, the performance remains comparable with the hand-parallelized implementations of CG and MG.

Figure 7. Performance comparison of shared-memory multiprocessing directives-based parallelization with the MPI-based, hand-parallelized and -optimized versions of the same benchmarks: (a) BT, (b) FT, (c) CG, (d) MG. (Each panel plots execution time in seconds versus the number of processors for the directive-parallelized (x) and hand-parallelized (o) versions.)

4.5 Summary of Performance Evaluation

As a first step in the evaluation process, the parallel coverage of each parallelized program was determined. Despite above 90% parallel coverage in all cases, the programs cannot achieve close to ideal or theoretical speedup due to parallelization overhead. Our extensive experiments indicate that a useful quantitative measure of parallelization overhead is obtained by the performance model presented in this paper, which calculates the aggregate overhead without trying to isolate different types of overhead. Based on our experience with the performance tuning described here, we conclude that parallelization overhead can be



significantly reduced by improving data locality. The superior speedup of the message-passing implementations of the same benchmarks, due to improved data locality, supports this conclusion.

5 Related Work

Recent performance evaluation studies have examined the effect of data locality on the performance of DSM systems. Anderson reports that overhead for programs that were parallelized with near 100% parallel coverage and executed on Stanford DASH (a ccNUMA DSM system) resulted in significantly inferior speedup characteristics [4]. Performance was improved by analyzing data distribution. In our case, we conclude that single processor cache performance is another key factor that can improve performance, in addition to appropriate data distribution. Hristea et al. present the results of several experiments to evaluate the performance of the memory subsystem of ccNUMA systems [8].

Several research efforts have focused on parallelizing sequential programs for shared-memory multiprocessors. These efforts are becoming increasingly important due to the revival of shared-memory multiprocessors with improved scalability via distributed memory and hardware cache-coherence. The SUIF compiler system incorporates various modules that can be used to analyze a sequential program, parallelize the loops, distribute program arrays, and perform inter-procedural analysis [3,4]. Polaris is another parallelizing compiler that can generate parallelized code for SMPs [16,18]. CAPTools is a semi-automatic parallelization tool that transforms a sequential program into a message-passing program through user-directed distribution of arrays [9]. Fortran-D [1] and various implementations of High Performance Fortran (HPF [6]) are examples of parallelizing compilers that work for sequential programs that can benefit from data parallelism. KAP [10] and PFA [13] are examples of commercial parallelization tools for SMPs. We have experimented with most of these tools to parallelize the sequential NAS benchmarks. Based on this experience and the results reported in this paper, we consider the tools for SMPs simple to learn and use, and their performance is promising.

6 Discussion and Conclusions

Directives-based parallelism is essentially a fine-grained parallelism that works at the level of individual loop iterations. This is fundamentally different from conventional coarse-grained parallelism at the level of processes or threads. When it is implemented carefully, it can obtain much better load balance compared to conventional message-passing or data-parallel techniques. On the other hand, the user is required to spend additional time to ensure proper data locality to obtain performance comparable to a hand-


parallelized, message-passing based implementation.

We presented a performance model to characterize the performance of directives-based parallelized programs for an Origin2000 system. Using measurements, we quantitatively evaluated the fraction of code that was parallelized. Further evaluation indicated reasonable speedup as well as significant parallelization overhead. Based on extensive tuning of one parallelized program and some isolated experiments presented in this paper, we conclude that non-local data accesses are the main source of parallelization overhead. Performance can be optimized by keeping data at a level in the memory hierarchy that is closer to the processor. Based on these results, we continue to further tune the parallelized NAS benchmarks.

Evaluation of parallelization overhead based on the performance model presented in this paper emphasizes the need for appropriate instrumentation of the multiprocessor memory subsystem. Such instrumentation is readily accessible to a user only for measurements limited to a single node. Without hardware- or software-based instrumentation of non-local memory accesses and cache-coherence traffic, direct measurement of data locality overhead is not possible. Some commercial tool developers realize this problem and are working on tools that furnish multiprocessor memory performance measurements.

References

[1] V. Adve, J-C. Wang, J. Mellor-Crummey, D. Reed, M. Anderson, and K. Kennedy, "An Integrated Compilation and Performance Analysis Environment for Data Parallel Programs," Proceedings of Supercomputing '95, San Diego, CA, December 1995.

[2] Saman P. Amarasinghe, "Parallelizing Compiler Techniques Based on Linear Inequalities," Ph.D. Dissertation, Dept. of Electrical Eng., Stanford University, Jan. 1997.

[3] S. P. Amarasinghe, J. M. Anderson, M. S. Lam, and C. W. Tseng, "The SUIF Compiler for Scalable Parallel Machines," Proceedings of the Fifth ACM SIGPLAN Symposium on Principles and Practice of Parallel Processing, July 1995.

[4] Jennifer-Ann M. Anderson, "Automatic Computation and Data Decomposition for Multiprocessors," Technical Report CSL-TR-97-719, Computer Systems Laboratory, Dept. of Electrical Eng. and Computer Sc., Stanford University, 1997.

[5] David Bailey, Tim Harris, William Saphir, Rob van der Wijngaart, Alex Woo, and Maurice Yarrow, "The NAS Parallel Benchmarks 2.0," Technical Report NAS-95-020, December 1995.

[6] High Performance Fortran Forum, "High Performance Fortran Language Specification, Version 1.0," Scientific Programming, 2(1 & 2), 1993.

[7] Mark Horowitz, Margaret Martonosi, Todd C. Mowry, and Michael D. Smith, "Informing Memory Operations: Providing Memory Performance Feedback in Modern Processors," Proceedings of the 23rd Annual International Symposium on Computer Architecture, May 1996.


[8] Cristina Hristea, Daniel Lenoski, and John Deen, "Measuring Memory Hierarchy Performance of Cache-Coherent Multiprocessors Using Micro Benchmarks," Proceedings of SC '97, San Jose, California, Nov. 1997.

[9] C. S. Ierotheou, S. P. Johnson, M. Cross, and P. F. Leggett, "Computer aided parallelisation tools (CAPTools)—conceptual overview and performance on the parallelisation of structured mesh codes," Parallel Computing, Vol. 22, 1996, pp. 163–195.

[10] Kuck & Associates, Inc., "Experiences With Visual KAP and KAP/Pro Toolset Under Windows NT," Technical Report, Nov. 1997.

[11] James Laudon and Daniel Lenoski, "The SGI Origin: A ccNUMA Highly Scalable Server," Proceedings of the 24th Annual International Symposium on Computer Architecture, Denver, Colorado, June 2–4, 1997, pp. 241–251.

[12] Message Passing Interface Forum, "MPI: A Message-Passing Interface Standard," May 5, 1994.

[13] MIPSpro Fortran77 Programmer's Guide, Silicon Graphics, Inc. Available on-line from: http://techpubs.sgi.com/library/dynaweb_bin/0640/bin/nph-dynaweb.cgi/dynaweb/SGI_Developer/MproF77_PG/@Generic__BookView.

[14] NAS Parallel Benchmarks. Available on-line from: http://science.nas.nasa.gov/Software/NPB.

[15] OpenMP: A Proposed Standard API for Shared Memory Programming, Oct. 1997. Available on-line from: http://www.openmp.org.

[16] David A. Padua, Rudolf Eigenmann, Jay Hoeflinger, Paul Petersen, Peng Tu, Stephen Weatherford, and Keith Faigin, "Polaris: A New-Generation Parallelizing Compiler for MPPs," Technical Report CSRD #1306, University of Illinois at Urbana-Champaign, June 15, 1993.

[17] Cherri M. Pancake, "The Emperor Has No Clothes: What HPC Users Need to Say and HPC Vendors Need to Hear," Supercomputing '95, invited talk, San Diego, Dec. 3–8, 1995.

[18] Insung Park, Michael J. Voss, and Rudolf Eigenmann, "Compiling for the New Generation of High-Performance SMPs," Technical Report, Nov. 1996.

[19] Harvey J. Wassermann, Olaf M. Lubeck, Yong Luo, and Federico Bassetti, "Performance Evaluation of the SGI Origin2000: A Memory-Centric Characterization of LANL ASCI Application," Proceedings of SC '97, San Jose, California, Nov. 1997.

[20] Marco Zagha, Brond Larson, Steve Turner, and Marty Itzkowitz, "Performance Analysis Using the MIPS R10000 Performance Counters," Proceedings of Supercomputing '96, Pittsburgh, Pennsylvania, Nov. 1996.
