
Performance Measures

H. Scott Matthews

February 10, 2004

Recap of Last Lecture
Finished two-part discussion on energy (electricity) infrastructure issues
Discussed changing needs and motivation for reliability assessment in the electric grid
Began to motivate the notion of performance

What is Performance?
American Heritage Dictionary: "the act or manner of fulfilling an obligation or duty"
Oxford: "The accomplishment, execution, carrying out, working out of anything ordered or undertaken; the doing of any action or work"
Act or manner: method or ability
Fulfilling: meeting, satisfying
Duty: depends on purpose/objectives - may be strategic, economic, …

Alternative Definitions
"Degree to which a facility serves its users and fulfills the purpose for which it was built or acquired, as measured by the accumulated quality and length of service it provides to users" (HHU)
Ability to give satisfactory service
Humplick: 5 levels, 4 groups, measures, indicators

Humplick Framework
5 major levels (and points of view):
Service quality and reliability (users)
Network size and condition (facility)
Operational efficiency and productivity (provider)
Sectoral performance (investment, pricing)
Institutional performance
(Not in Humplick) Performance measurement/assessment needs to consider both the supply side (e.g., condition, inventory) and the demand side (e.g., usage)

Why Care About It?
Performance measures are used as tools to do the following (Humplick 94):
Support management decisions
Diagnose, track, and monitor potential problems
Signal suppliers and users
Allocate resources (aka economics)
Track data in information systems
Needs/expectations change over time, so a framework is needed for consistency
Different parties have different views, and thus care about different indicators (e.g., Fig. 1)

Current Limitations in Assembling Performance Info
Data collected by multiple/different agencies
Data that is collected tends to differ in:
Collection method and context
Type of data
Reliability/precision
Spatial/temporal frequency
Consistency is variable (more justification for a framework)

Users of Performance Indicators
Facility/network users
Service providers (US DOT, PennDOT)
Facility & network providers (firms)
Policy sector and institutions (FHWA)

Framework Requirements
Are objectives being met?
Are user demands being met?
Are service providers performing efficiently?
Are policymakers taking the right actions?
Five perspectives next.

Infrastructure Provision
Characteristics of the system/network: size, users, etc.
We saw these in the built/electricity infrastructure presentations of the last 2 weeks
Also see data on HW 2
Others would be MW, distribution/transmission lines per capita

Service Quality
Haven't really looked at these yet
Roads: ride quality/safety metrics
For electricity: power quality
Ex: number/frequency of outages/disruptions, "1/100th of a second power spike or drop"
Don't forget different classes of users
Total hours of outage per year? (see the sketch below)
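A hypothetical sketch (not from the slides) of how outage hours and event counts per customer class might be tallied from an event log; the field names, classes, and numbers are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical one-year outage log: (customer_class, duration_hours) per event.
# Durations and classes are invented; even a "1/100th of a second" spike counts as an event.
outages = [
    ("residential", 2.5), ("residential", 0.01), ("industrial", 4.0),
    ("commercial", 1.2), ("industrial", 0.000003),
]

hours = defaultdict(float)   # total outage hours per class
events = defaultdict(int)    # number of outage events per class
for cls, duration in outages:
    hours[cls] += duration
    events[cls] += 1

for cls in sorted(hours):
    print(f"{cls}: {events[cls]} events, {hours[cls]:.2f} hours of outage per year")
```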

Example: Peak Demand
Winter and summer demand curves
Why different? Why relevant?
Define: capacity/reserve margins (see the sketch below)
Between 1978-1992: 25-30%
Now lower (dangerously?)
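As a hedged aside, a minimal sketch of the reserve-margin arithmetic under the common definition (capacity minus peak demand, divided by peak demand); the capacity and demand figures are invented.

```python
def reserve_margin(capacity_mw: float, peak_demand_mw: float) -> float:
    """Reserve margin as a fraction of peak demand (common definition; assumed here)."""
    return (capacity_mw - peak_demand_mw) / peak_demand_mw

# Invented figures: 125 GW of capacity against a 100 GW peak gives a 25% margin,
# the low end of the 25-30% range cited for 1978-1992.
print(f"{reserve_margin(125_000, 100_000):.0%}")  # -> 25%
```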

Common Characteristics of Infrastructure
Projects/components are parts of networks (e.g., a bridge needs a road)
Long time horizons (lifespans)
Presence of tradeoffs (build/maintain)
Indivisibility (can't build half)
Spatial/temporal variability
Essential, to the point of being ubiquitous
Expensive (often one-off solutions)

Common Characteristics (2)
Subject to design standards (could be DOT, IEEE, etc.)
Subject to deterioration
Subject to uncertainty
Exhibit multiple modes of failure
Hierarchical decision process
Others?

Approaches to Performance
Condition assessment
Condition indices
Reliability theory
Multi-dimensional measures

Condition Assessment
Measure type, severity, and extent of deterioration
Specific examples or indicators of deterioration used:
Subjective ratings
Visual evaluation
Destructive testing
Direct measurement
Does this sound familiar? (NBI)

Examples
Pavement: total length of cracks per lane mile, roughness, deflection
Bridge decks: chloride content
Pipeline: breaks per mile
Roof: square feet of wet insulation
Electric power: ?
Communications: ?
(Several of these are rates normalized by exposure; see the sketch below.)
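Several of these measures share the same form: a distress total normalized by exposure. A minimal, hypothetical sketch with invented values:

```python
def distress_rate(total_distress: float, exposure: float) -> float:
    """Distress normalized by exposure, e.g., feet of cracking per lane-mile or breaks per mile."""
    return total_distress / exposure

# Invented values for illustration only.
print(distress_rate(18_000, 12.0))  # feet of cracking per lane-mile
print(distress_rate(7, 35.0))       # pipeline breaks per mile
```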

Subjective Ratings
Predefined, arbitrary scale (see the Grant and Dunker articles)
Requires training to minimize errors and discrepancies across inspectors
Ex: Present Serviceability Rating (PSR)
AASHO Road Test: bad = 0, good = 5

Testing
Destructive: requires an actual invasive test (or removal) of infrastructure to be compared with reference samples
E.g., cores for density or chemical content; bending/breaking trusses
Other (NDE/NDT): uses technology and sensors to give similar results (e.g., ground-penetrating radar to detect cracks/defects)

Condition Indices
Developed to address the multitude of condition measures
Based on amount of distress or damage, results from non-destructive tests, and relationships between use conditions
Condenses a 'vector' of data into a 'scalar' (see the sketch below)
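One possible way to condense the vector into a scalar is a weighted average; this is only an illustrative sketch, and the measures, 0-100 scales, and weights are invented rather than taken from the lecture.

```python
# Hypothetical condition measures on a 0-100 scale and invented weights.
measures = {"cracking": 72.0, "roughness": 85.0, "deflection": 60.0}
weights = {"cracking": 0.5, "roughness": 0.3, "deflection": 0.2}

# A simple weighted average collapses the 'vector' of measures into a 'scalar' index.
index = sum(weights[m] * measures[m] for m in measures)
print(f"Condition index: {index:.1f} / 100")  # 73.5
```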

Index Requirements
Completeness: covers all aspects of deterioration
Measurable: to ensure consistency and repeatability
Relevance: provides rational quantification of condition
Example: Building Condition Index (BCI) = Total Deferred Maintenance / Replacement Plant Value (computed in the sketch below)
Excellent: BCI < 2%
Good: 2% < BCI < 5%
Adequate: 5% < BCI < 10%
...
Fail: BCI > 60%
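A minimal sketch of the BCI ratio and the rating bands listed above; the bands between 10% and 60% are not given on the slide, and the dollar figures below are invented.

```python
def bci(deferred_maintenance: float, replacement_plant_value: float) -> float:
    """Building Condition Index = total deferred maintenance / replacement plant value."""
    return deferred_maintenance / replacement_plant_value

def rating(value: float) -> str:
    # Bands from the slide; those between 10% and 60% are omitted there as well.
    if value < 0.02:
        return "Excellent"
    if value < 0.05:
        return "Good"
    if value < 0.10:
        return "Adequate"
    if value > 0.60:
        return "Fail"
    return "(intermediate band not listed on the slide)"

# Invented figures: $3M of deferred maintenance against an $80M replacement plant value.
value = bci(3_000_000, 80_000_000)
print(f"BCI = {value:.1%} -> {rating(value)}")  # BCI = 3.8% -> Good
```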

Condition Index Limitations
Tries to make performance into one value
Hard to choose the right aggregation
May be hard to integrate technology
Anything outside the index is not included

Reliability Theory
Based on probability of failure
Widely used in high-tech industries
Can minimize costs while maximizing reliability
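Standard reliability-theory arithmetic (not taken from the lecture): system reliability from independent component failure probabilities, for series and redundant (parallel) configurations; the probabilities are invented.

```python
def series_reliability(failure_probs):
    """All components must work: multiply the individual component reliabilities."""
    r = 1.0
    for p in failure_probs:
        r *= 1.0 - p
    return r

def parallel_reliability(failure_probs):
    """Redundant components: the system fails only if every component fails."""
    f = 1.0
    for p in failure_probs:
        f *= p
    return 1.0 - f

# Invented failure probabilities for three components.
print(f"Series:   {series_reliability([0.01, 0.02, 0.05]):.4f}")    # ~0.9217
print(f"Parallel: {parallel_reliability([0.01, 0.02, 0.05]):.5f}")  # 0.99999
```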