Ira Shatzmiller

May 2015

A Comparison of Cost-of-Service and Performance-Based Regulation: Reliability and Quality of Service

Introduction

Observers of the electricity industry have summarized the conundrum around the price

regulation of electricity: “Price and quality go hand in hand. It makes little sense to buy a cheap product

without knowledge of its quality. The same applies for electricity – cheap electricity does not really

mean anything if there are constant interruptions in supply. But paying huge sums of money for no

interruptions at all does not make sense either. The question, therefore, is: what is the optimal quality

level and at what price should this be offered to consumers?”i

Over the past twenty years, incentive- or performance-based regulation (“PBR”) has been

advanced as a solution to this problem. Proponents of PBR claim that, unlike traditional cost-of-service

regulation (“COSR”), PBR incents the supplier to find cost efficiencies, while lowering prices for the

consumer. While some have argued that PBR has the power to more accurately mimic a competitive

market, others have argued that PBR results in unacceptable levels of degradation in reliability and

customer service.

This paper will address these arguments and conclude whether PBR is, in fact, a superior

solution to COSR in terms of locating the optimal quality level and price for consumers.

Criticism of COSR

Today, COSR remains the most common regulatory method for investor-owned utilities in North

America. A regulatory commission approves “just and reasonable” rates so that the regulated entity

recovers its “prudently incurred” costs in providing electricity, which includes a return on capital ii. The

base rate revenue requirement is established through a rate case, where estimates are made of the

prudent cost of capital, labor and other inputs. Upon determination of the revenue requirement, it is

allocated for recovery based on customer numbers, delivery volumes, and other billing determinants. iii
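
To make the mechanics concrete, the following is a minimal sketch (in Python) of how a revenue requirement might be computed and allocated across billing determinants. All figures, and the simple fixed/volumetric split, are hypothetical illustrations rather than any actual rate design.

# Minimal sketch of a cost-of-service revenue requirement and its allocation.
# All figures and the simple fixed/volumetric split are hypothetical.

rate_base = 500_000_000        # approved capital ("rate base"), in dollars
allowed_return = 0.08          # approved rate of return on capital
operating_costs = 120_000_000  # prudently incurred labor, fuel, and other costs

revenue_requirement = operating_costs + rate_base * allowed_return  # $160M

customers = 400_000            # billing determinant: number of customers
delivery_mwh = 8_000_000       # billing determinant: delivery volumes
fixed_share, volumetric_share = 0.4, 0.6   # assumed allocation between charges

monthly_customer_charge = revenue_requirement * fixed_share / customers / 12
volumetric_rate = revenue_requirement * volumetric_share / delivery_mwh

print(f"Revenue requirement: ${revenue_requirement:,.0f}")         # $160,000,000
print(f"Monthly customer charge: ${monthly_customer_charge:.2f}")  # $13.33
print(f"Volumetric rate: ${volumetric_rate:.2f} per MWh")          # $12.00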

Criticism of COSR is varied. Among the more procedural arguments, it is alleged that allowed

costs can be difficult to determine if the regulated company sells some products in unregulated markets,

sometimes resulting in costs for the unregulated services needing to be assigned to the regulated

servicesiv. By the same token, complications may arise when common costs are incurred jointly by both

the regulated and unregulated segments of the firm.v

It is also claimed that in the current economically volatile period, there is too much of a lag

between when the regulated company actually purchases the inputs and the revenue requirement

hearing.vi

The requirement under COSR that costs be “prudently incurred” is also subject to criticism, as

there is no shortage of evidence of prudence risk, exemplified in the 1980s and early 1990s when costs

for building expensive nuclear generation were disallowedvii. Disputes over the prudence of certain

costs continue today, as exemplified by the Court of Appeal decision Power Workers’ Union (Canadian

Union of Public Employees, Local 1000) v. Ontario (Energy Board)viii, whereby the Ontario Energy Board

(the “OEB” or “Board”) reduced by $145 Million the revenue requirement submitted by Ontario Power

Generation (“OPG”) to cover its nuclear compensation costs for 2011 and 2012. The dispute, which

centres on whether the labor costs submitted by OPG were prudently incurred or not, is currently

before the Supreme Court of Canada.

The underlying problem of determining whether costs were prudently incurred or not stems

from what economists have termed “informational asymmetry” between the regulator and the

regulated, which has been a topic of theoretical research since the 1980s,ix including by the recent Nobel Prize laureate Jean Tirole.x Ascertaining the regulated firm’s “real” costs is extremely time-consuming and costly, because correcting the informational asymmetry between company managers

and regulators requires substantial data exchange, processing, and analysis.xi As summarized by one

observer, if regulators knew the efficient way to produce and market utility services, they could simply

mandate the provision of the optimal services and set prices to recover the minimum cost of providing

them. In fact, given the uncertainties of future market conditions and changing regulatory

requirements, even those running the company cannot entirely determine what the most cost-efficient

practices would be. The challenge is much greater for regulators who generally will not have practical

experience with running a utility.

These problems may be further amplified in the presence of “regulatory capture”, namely if the

regulated firm (or other interest group) is able to influence the regulator into deciding on issues in

accordance with its own interests.xii

The most consistent criticism of COSR, however, relates to the premise that under COSR, a

regulated firm has no incentive to cut costs or otherwise operate efficiently. As a practical matter, as

long as its costs meet the prudence threshold, the regulated firm will be reimbursed for these costs and

also receive a reasonable rate of return. Furthermore, it is claimed that COSR creates a bias in the

choice of inputs, particularly if the rate-of-return is higher than the firm’s cost of capital, a phenomenon

known as the Averch-Johnson effectxiii. As it has been described: “Overcapitalizing is associated with an

oversupply of quality since quality is typically a capital-using attribute (Spence, 1975). It can therefore

be expected that both price and quality levels will be too high. Empirical studies show that under rate-

of-return regulation, existing reliability levels in the electricity sector are generally higher than optimal from a

social point of view. This “gold-plating” effect suggests that consumers may be paying too high a price

for too high a level of quality.”xiv

In light of the foregoing, a growing sentiment among electricity industry observers is that energy

supply is simply too costly to regulate well using COSR, and fails to achieve the maximum possible

benefit to societyxv.

The Alternative to Traditional Rate-Making: PBR

PBR is an alternative to traditional COSR of energy utilities, and is now the standard form of

regulation of investor-owned firms outside of North America. PBR is also extensively used in other

regulated industries, most notably in telecommunications.xvi

Several different tools make up what are understood to be PBR mechanisms, but which all have

as a common denominator the decoupling of the price of electricity from its production costsxvii.

The basic approaches to PBR include rate caps, revenue caps, and benchmarking; important

categories of PBR tools include benefit sharing and plan termination provisions.

The mechanisms for determining allowed rate growth vary, but all have the attribute of being

external. The simplest approach is to hold rates constant for the plan duration, which is sometimes

called a rate freeze or moratorium. A simple variant of the rate freeze is a set of pre-scheduled rate

adjustments, which may be increases or decreases.xviii

The most common form of PBR in the world today is an indexed rate cap established in advance

of its operation, often represented by the mathematical formula ΔPCI = P – X +/- Z. The growth of the

price cap index (PCI) depends on the difference between an inflation factor (P) and an “X” factor (X), plus

or minus a “Z” factor (Z). The inflation factor (P) is the growth rate in an inflation measure that is

external to the utility, as its value does not depend on the company’s actions. The “X” factor is generally

also external and is sometimes called the “productivity factor”, as its value in North American rate proceedings is typically based on productivity trends. The Z factor adjusts the allowed change in rates for reasons other than

inflation and productivity trends, and is generally designed to recover the impact that changes in

government policy have on the company’s unit cost.xix
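
A minimal numerical sketch of the index at work may help; the inflation, X, and Z values below are hypothetical and not drawn from any actual plan.

# Sketch of a price cap index (PCI) update: dPCI = P - X +/- Z.
# All figures are hypothetical illustrations.

def pci_growth(inflation, x_factor, z_factor=0.0):
    """Allowed percentage growth in the price cap index."""
    return inflation - x_factor + z_factor

# 2.5% inflation, a 1.0% productivity (X) factor, and a 0.3% Z adjustment
# recovering the cost impact of a change in government policy.
allowed = pci_growth(inflation=2.5, x_factor=1.0, z_factor=0.3)   # 1.8%

last_rate = 12.00                          # last year's capped rate, cents/kWh
new_cap = last_rate * (1 + allowed / 100)
print(f"Allowed growth: {allowed:.1f}%, new rate cap: {new_cap:.2f} cents/kWh")  # 1.8%, 12.22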

Advocates of PBR claim that using such mechanisms can reduce the frequency and scope of

regulatory intervention. They also claim that as PBR mechanisms rely heavily on external data, the

informational asymmetry problem between regulator and regulated is alleviated. Furthermore, utilities

can be assured that superior performance will not entail immediate modifications to regulatory policy that would, in turn, deprive their shareholders of the benefits of such performance.xx

The greatest benefit that advocates of PBR point to, however, is

the motivation to be cost efficient: by decoupling price from incurred cost, incentives for cost savings

and ultimately lower consumer prices are strengthenedxxi.

Criticism of PBR Concerning Reliability and Service Quality

Although PBR is coming to be the norm, many have expressed concern with its effects on non-

price dimensions of performance, what is generally referred to as “quality of service”, for obvious

reasons. A simple price cap or other incentive plan rewards the firm for lowering its cost, but absent

mandatory quality standards, cost reductions are sometimes achieved by shortchanging quality, an

incentive not found under COSR.xxii

A 2010 study by Anna Ter-Martirosyan and John Kwokaxxiii (the “Study”) is one of the few

empirical investigations of this concern, and uses a sample of U.S. electricity distributors studied over

the period 1993-1999, several of which were subject to PBR at the time. The Study recognizes

that while there are numerous possible dimensions of quality of service in electricity, outage-related

indices relating to average duration (SAIDI) and average frequency (SAIFI) are the only widely accepted

and measured criteria. SAIDI, the “System Average Interruption Duration Index”, is defined as total

minutes of service interruptions in a year divided by the number of customers served. SAIFI, the

“System Average Interruption Frequency Index”, measures how often an average customer’s service is

interrupted in a given year. It is computed by dividing the total number of customers interrupted by the

average number of customers served.xxiv The Study acknowledges that differences in the way

interruptions are defined and measured were a complicating factor.
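
To make the two indices concrete, here is a small sketch following the definitions above; the interruption records and customer count are invented for illustration.

# Sketch of SAIDI and SAIFI calculations from a hypothetical interruption log.
# Each record: (customers_interrupted, duration_minutes).

interruptions = [
    (1200, 90),    # 1,200 customers out for 90 minutes
    (300, 45),
    (5000, 30),
]
customers_served = 10_000      # average number of customers served in the year

# SAIDI: total customer-minutes of interruption per customer served
saidi = sum(n * minutes for n, minutes in interruptions) / customers_served

# SAIFI: total customer interruptions per customer served
saifi = sum(n for n, _ in interruptions) / customers_served

print(f"SAIDI = {saidi:.2f} minutes per customer")        # 27.15
print(f"SAIFI = {saifi:.2f} interruptions per customer")  # 0.65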

The Study came to some surprising conclusions. The empirical measurements taken on the

sample revealed that the mean SAIFI for utilities without PBR was 1.28, while for those with PBR SAIFI

averaged 1.08, a higher SAIFI number indicating a higher frequency. Among the latter, there was a

substantial difference between those with and without quality standards – specifically, 0.96 for utilities

with quality standards versus 1.42 for those without. Similarly with respect to SAIDI, duration averaged

122 minutes for utilities without incentive regulation and 117 minutes for those with it. But among utilities with PBR,

quality standards were associated with remarkably large differences in duration, namely, 199 for utilities

without such standards versus 90 for those with standardsxxv.

The Study concluded that there did not appear to be any positive relationship between PBR and

SAIFI, based on the following explanation: the single most common cause of outages is equipment failure,

something only partially within the control of a utility. As a result, the association between PBR and

SAIFI may be difficult to discern empirically. In contrast, once such failure has occurred and been

detected, the duration of the resulting outage is a function of repair crew readiness, equipment

availability, etc., which are matters more within the control of the firm. That implies that SAIDI, more

than SAIFI, would be affected by PBR, in the absence of quality standards. xxvi The Study therefore

suggests that PBR results in degradation of one aspect of service quality – duration of outages – unless it

is paired with explicit quality provisionsxxvii. While the Study’s results indicate that PBR has a significant impact on SAIDI, it qualifies them by pointing to the possible endogeneity of PBR and to a more complicated, two-step chain of causation: the utility first decides to reduce its quality-related expenditures in order to increase profits under incentive regulation, and that reduction in expenditures then affects quality itselfxxviii.

Criticism of PBR Concerning Reliability and Service Quality: Ontario

Turning to Ontario, observers Francis Cronin and Stephen Motluk believe that reliability has

been significantly lowered since the implementation of PBR in Ontario in 2000, through the OEB’s 2000

Rate Handbookxxix. They cite Ter-Martirosyan and Kwoka’s Study in concluding that incorporating strict

reliability standards with financial penalties into PBR can offset the tendency of PBR plans without

standards and penalties to imprudently cut critical OM&A activities.xxx

Cronin and Motluk further cite the Study’s finding that over half of the utilities subject to PBR in

the sample, in both North America and Europe, had such penalties. In North America, following a series

of significant outages often caused by imprudent reductions in OM&A expenses, some regulators

imposed inspections, maintenance, and sometimes investment on the utility, going as far as to specify

the nature, timing and, in some cases, the capital investment and staffing the utility would need to

expend to meet the regulations.xxxi

The authors paint a stark comparison with the OEB’s response to what they claim is a degradation in reliability and service quality in Ontario. They argue that prior to the restructuring of the electricity sector through the introduction of the Ontario Electricity Competition Act of

1998, Ontario electricity distributors were acknowledged to be technically efficient and providing highly

reliable power. At the time of the restructuring, the OEB’s implementation task force predicted that

Ontario utilities would react to the increased cost-cutting incentives under PBR, and that robust standards would be

necessary to ensure the continued supply of reliable power. The OEB opted to require those local

distribution companies (“LDCs”) with historical data to continue supplying power within the levels of

reliability observed over the preceding three years, following which the OEB would review the standards

by 2003 and set financial penalties for non-compliance.xxxii The task force indicated that those LDCs

without reliability data should begin to collect it, and that benchmarks be set for this group using peer-

group averagesxxxiii.

For a variety of reasons, the task force recommended that only minimum customer-service

standards be applied to the LDCs during the first generation of PBR. The levels of the minimum

standards were determined through a survey of the LDCs.

The OEB was expected to take action quickly, possibly even early in the first generation, but no

later than the beginning of the second generation following the initial three-year PBR term, to set

reliability-performance targets based on a reasoned rationale. The OEB itself stated its intent to move

expeditiously: “Upon review of the first year’s results, the OEB will determine whether there is sufficient

data to set thresholds to determine service degradation for years 2 and 3.”xxxiv

In 2003, the OEB issued a Staff Discussion Paper examining the reliability performance of LDCs

relative to various proposed benchmarks such as sector average or peer group average performance

over the preceding three years. It found that anywhere from 25 – 50% of Ontario distributors failed

these benchmarks. Furthermore, LDCs that failed typically had a reliability performance that was 50 –

100% worse than the selected average. And yet, the authors maintain that these findings failed to elicit

any concern from the OEB, nor did the OEB offer any explanation for the huge discrepancy, nor did the

Staff Discussion Paper actually shed any light on whether LDCs were in compliance with the reliability

guidelines established by the OEB in 2000.

In January 2008, the OEB released its discussion paper on reliability, its only publicly-released

analysis of LDC performance since the 1999 task force report. The paper employs only data from 2004-

2006 to examine LDC performance; no pre-PBR data and no data from the first three years of PBR are

examined. As such, the authors claim it is impossible to say what the performance of the electricity

distributors in Ontario has been relative to the minimum standards established in 2000, as this question

is not addressed in any public OEB analysis. They find it puzzling that the OEB reports that it will not use

this data for its reliability-trend analysis, claiming that this data “may not have been reported

consistently or calculated properly”. This very data was collected by those same utilities for at least

fifteen years, reported to the Implementation Task Force in 1999 and to the OEB in its required filings

since 2000. If the OEB is willing to employ the 2002 and 2003 reliability data in its cost benchmarking

that determines each LDC’s future annual revenue, shouldn’t this data be sufficient for trend analysis as

well? xxxv

As of 2009, the OEB still hadn’t conducted a public review of LDC reliability performance from

2000 – 2003, nor had it conducted a review of any post-PBR implementation performances over the

2000 – 2007 period to determine whether LDCs were compliant with the standards imposed in 2000xxxvi.

Cronin and Motluk argue that in choosing to reject use of its own data prior to 2004, the OEB

not only misses a significant degradation in 2004-2006 compared with 2000-2003, it misses an earlier

degradation relative to the pre-PBR 1993-1997 period. Only by examining the performance relative to

the pre-PBR period could the OEB determine compliance, and the OEB sees no degradation in large part

because it has chosen to eliminate the periods of higher reliability performance in its comparison. The

OEB didn’t report what tests had been performed to determine that the data reported in the earlier

years hadn’t been reported consistently or calculated properly. It’s unclear what methodology was used

to remove statistics that appeared to be unreliable. The earlier data came from the same population as

the later data and therefore can be jointly used to assess the 2000-2007 trend, as well as to assess

performance relative to the pre-PBR period used in 2000 to set standards. xxxvii Cronin and Motluk

underline that the OEB had ample time to implement quality standards or an incentive/penalty regime.

As of 2009, the OEB was in its tenth year of collecting reliability data from individual LDCs, more than

sufficient time to gain experience. Indicators such as SAIDI and SAIFI are standards that are used for

monitoring and regulating service quality around the world, and these indicators have been used by

the Ontario distributors’ association for at least fifteen years.

Cronin and Motluk don’t offer any explanations for what they depict as inactivity or obfuscation

on the part of the OEB, leaving the reader’s imagination to fill the void. However, they do link the

purported inactivity to a violation of the Bonbright principle of “just and reasonable rates”, arguing that

service quality and reliability standards should be explicitly formulated as part of the sale of access by

distributors to customers.xxxviii

They also quote the Council of European Energy Regulators (CEER), which states the following in

its 3rd Benchmarking Report on the Quality of Electricity Supply (2005): “Price-cap regulation without any

quality standards or incentive/penalty regimes for quality may provide unintended and misleading

incentives to reduce quality levels. (...) The increased attention to quality incentive regulation is rooted

not only in the risk of deteriorating quality deriving from the pressure to reduce costs under price-cap,

but also the increasing demand for higher quality services on the part of consumers.... a growing

number of European regulators have adopted some form of quality incentive regulation over the last

few years.xxxix”

4th Generation PBR in Ontario

As of the 4th generation of PBR in Ontario, there are still no financial penalties or

incentives in place for service quality. In the OEB’s “Report of the Board – Renewed Regulatory

Framework for Electricity Distributors: A Performance-Based Approach” of October 18, 2012, the Board

has established an “Electricity Distributor Scorecard” setting out performance outcomes that it

expects distributors to achieve in four distinct areasxl:

• Customer Focus: services are provided in a manner that responds to identified customer preferences;

• Operational Effectiveness: continuous improvement in productivity and cost performance is achieved;

and utilities deliver on system reliability and quality objectives;

• Public Policy Responsiveness: utilities deliver on obligations mandated by government (e.g. in

legislation and in regulatory requirements imposed further to Ministerial directives to the Board); and

• Financial Performance: financial viability is maintained; and savings from operational effectiveness are

sustainable.

Distributors will be required to report their progress against the scorecard on an annual basis, and the

OEB has indicated that it will engage stakeholders in further consultation on the standards and

measures to be included in the distributor scorecard.xli However, it does not appear that further

consultation will result in penalties, as the OEB has stated that “The standards and measures must be

suitable for use by the Board in monitoring and assessing distributor performance against expected

performance outcomes, in monitoring and assessing distributor progress towards the goals and

objectives in the distributor’s network investment plan, in comparing distributor performance across the

sector and identifying trends, and in supporting rate-setting” (RRFE, p. 58). The OEB also foresees that the

“expanded use of benchmarking will be necessary to support the Board’s renewed regulatory

framework policies.”xlii

The Board will maintain its existing regulatory mechanisms, subject to certain refinements.

Specifically, the X-factor will be refined and the “publication of distributor results” mechanisms referred

to above (among possible others) will be integrated into the scorecard.xliii

How could an effective PBR plan assure reliability and service quality?

If the OEB were ever to apply penalties for degraded reliability or service standards, what would

such a regime look like? Several jurisdictions offer different models.

OFGEM

In a 2006 article entitled “Incentive Regulation in Theory and Practice: Electricity Distribution

and Transmission Networks”, xliv Paul Joskow provides a detailed account of the mechanisms that

OFGEM (the Office of Gas and Electricity Markets, the U.K. regulator) has put in place to maintain or

enhance service quality:

(a) two distribution service interruption incentive mechanisms targeted at the number of outages and

the number of minutes per outage;

(b) storm interruption payment obligations targeted at distribution company response times to outages

caused by severe weather events;

(c) quality of telephone responses during both ordinary weather conditions and storm conditions;

(d) a discretionary award based on surveys of customer satisfaction.

Joskow reports that, overall, up to 4% of total revenue can be lost on the downside and an unlimited percentage of total revenue can be earned on the upside under these incentives. Joskow provides a detailed

analysis of how OFGEM uses statistical and engineering benchmarking studies and forecasts of planned

maintenance outages to develop targets for the number of customer outages and the average number

of minutes per outage for each distribution company: “The individual distribution companies are

disaggregated into different types (e.g. voltages) of distribution circuits and performance benchmarks

and targets are developed for each based on comparative historical experience and engineering norms.

Aggregate performance targets for each distribution company are then defined by re-aggregating the

targets for each type of circuit (OFGEM (2004c) appendix to June 2004 proposals) to match up circuits

that make up each electric distribution company. Both planned (maintenance) and unplanned outages

are taken into account to develop the outage targets. The targets incorporate performance

improvements over time and reflect, in part, customer surveys of the value of improved service quality.

There is a fairly wide range in the targets among the 14 distribution companies in the UK, reflecting

differences in the configurations of the networks. OFGEM also has added cost allowances into the price

control (…) to reflect estimates of the costs of improving service quality in these dimensions.”xlv

He goes on to say:

“Once performance targets are set, a financial penalty/reward structure needs to be applied to it to

transform the physical targets into financial penalties and rewards. The natural approach would be to

apply estimates of the value of outages and outage minutes to customers (OFGEM surveys indicated

customers valued reducing the number of minutes per outage more than the number of outages) to

define prices for outages and outage duration.”

Joskow expresses disapproval that OFGEM did not take this customer survey approach in the

most recent distribution company price review, opting instead to develop prices for outages and outage

duration by taking the target revenue at risk and dividing it by a performance band around the target

(25% and 30% respectively). He indicates that this approach seems rather arbitrary and yields a fairly

wide variation in the effective price per outage and the price per minute of outage across distribution

companies.xlvi
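
A hedged sketch of that arithmetic, under the assumption that the band is expressed as a fraction of the physical target (the revenue and target figures are invented):

# Sketch of deriving an implied incentive price by dividing revenue at risk
# by a performance band around the target, as described above. Hypothetical
# figures; the +/- band interpretation is an assumption for illustration.

def implied_unit_price(revenue_at_risk, target, band_fraction):
    """Implied price per unit of deviation from the target."""
    band_width_units = target * band_fraction
    return revenue_at_risk / band_width_units

# e.g. 2,000,000 GBP at risk, a target of 100,000 customer interruptions,
# and a 25% band around that target
price = implied_unit_price(2_000_000, 100_000, 0.25)
print(f"Implied price: {price:.2f} GBP per customer interruption")  # 80.00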

Joskow also provides an account of OFGEM’s storm restoration compensation incentive

mechanism. Under this mechanism, the distribution companies are given incentives to restore service

within a specified time period and if they do not they must pay compensation to customers as defined in

the incentive mechanism. The mechanism includes adjustments for exceptional events. Under normal

weather conditions customers are eligible to be paid £50 for an interruption that lasts more

than 24 hours (£100 for non-domestic) and a further £25 for each subsequent 12-hour period. Again,

Joskow expresses concern that it is not clear where the values for these payments originate from. If a

customer consumes 20kWh per day (600kWh per month) the implied value of lost load is £2.5 per lost

kWh or roughly $5,000/MWh of lost energy.xlvii
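
A quick check of that arithmetic (the £50 payment and the 20 kWh/day consumption figure come from the passage above; the dollar conversion is Joskow's approximate figure):

# Back-of-envelope check of the implied value of lost load described above.
payment_gbp = 50             # compensation for an interruption lasting over 24 hours
daily_consumption_kwh = 20   # assumed household consumption per day

value_per_kwh = payment_gbp / daily_consumption_kwh    # 2.5 GBP per lost kWh
value_per_mwh = value_per_kwh * 1000                   # 2,500 GBP per lost MWh
print(f"{value_per_kwh} GBP/kWh, or {value_per_mwh:,.0f} GBP/MWh")
# At roughly 2 USD per GBP, this is on the order of 5,000 USD/MWh.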

Finally, Joskow provides an account of the compensation arrangements that apply when there

are severe weather conditions, where both the triggers and the compensation change.xlviii The trigger

periods for compensation are defined below and the amount of compensation starts at £25 when the

trigger is hit with a cap of £200 per customer.

Category of severe weather – Definition – Trigger period for compensation

• Category 1 (medium events): lightning events (≥8 times daily mean faults at higher voltage and less than 35% of exposed customers affected) – 24 hours; non-lightning events (≥8 and ≤13 times daily mean faults at higher voltage and less than 35% of exposed customers affected) – 24 hours.

• Category 2 (large events): non-lightning events (≥13 times daily mean faults at higher voltage and less than 35% of exposed customers affected) – 48 hours.

• Category 3 (very large events): any severe weather event where ≥35% of exposed customers are affected – 48 hours x (number of customers affected / 35% of exposed customers)².
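
As a sketch of how the Category 3 trigger scales with the size of the event (the customer counts below are hypothetical):

# Category 3 (very large event) trigger period from the table above:
# 48 hours x (customers affected / 35% of exposed customers)^2.
# The customer counts are hypothetical.

def category3_trigger_hours(customers_affected, exposed_customers):
    threshold = 0.35 * exposed_customers
    return 48 * (customers_affected / threshold) ** 2

# e.g. a storm affecting 60% of 1,000,000 exposed customers
print(f"{category3_trigger_hours(600_000, 1_000_000):.0f} hours")  # about 141 hours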

Finally, there are penalties and rewards for the quality of telephone service, based on the results

of customer surveys.xlix

Massachusetts, Michigan, New York State

Relating to quality service standards in a North American context, Ter-Martirosyan and Kwoka

describe two of what they believe to be the more thoroughgoing and interesting plans introduced, in

Massachusetts and Michigan. In 2001, Massachusetts developed a Service Quality metric based on eight

factors – frequency and duration of outages, five aspects of customer service, and one measure of

workplace safety. Joskow specifies that the benchmarks are developed based on historical experience

and penalties and rewards are triggered when actual performance falls outside of one standard

deviation of historical performance. This effectively leads to a “dead-band” around historical

performance. There are also caps and floors on the incentive arrangement.l Each factor was assigned a

weight and combined into a composite index that permitted an electric or gas utility to earn or lose up

to 2% of its revenues from distribution and transmission services (Massachusetts DTE 2001).

Mass. DTE’s SQ Plansli

Performance Measure                                          Weight    Penalty or Offset
Operations – Frequency of outages                            22.5%     $3.0M
Operations – Duration of outages                             22.5%     $3.0M
Customer Service – On cycle meter reads                      10%       $1.3M
Customer Service – Timely call answering (w/in 20 seconds)   10%       $1.7M
Customer Service – Service appointments met                  10%       $1.7M
Customer Service – Complaints to regulators                  5%        $0.7M
Customer Service – Billing Adjustments                       5%        $0.7M
Safety – Lost Work Time Accidents                            10%       $1.3M
Risk/Reward Potential                                        100%      $13.4M*

*Based on 2% of T&D revenues (using Mass Electric as an example)
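
The following is a minimal sketch of the kind of dead-band mechanism Joskow describes; the linear scaling of the penalty up to a cap at two standard deviations is an assumption for illustration, not the actual DTE formula, and all figures are hypothetical.

# Sketch of a dead-band penalty: nothing is owed within one standard deviation
# of the historical benchmark; outside the band, exposure scales linearly up to
# the measure's cap (an assumed design, not the actual Massachusetts formula).

def sq_penalty(observed, hist_mean, hist_std, max_exposure):
    """Positive return value = penalty owed; negative = reward earned."""
    deviation = observed - hist_mean       # higher observed = worse (e.g. SAIDI)
    if abs(deviation) <= hist_std:
        return 0.0                          # inside the dead-band
    excess = (abs(deviation) - hist_std) / hist_std
    amount = min(excess, 1.0) * max_exposure
    return amount if deviation > 0 else -amount

# Example: SAIDI benchmark of 120 minutes, standard deviation of 15, observed 150,
# with $3.0M of revenue exposure on the duration-of-outages measure.
print(f"${sq_penalty(150, 120, 15, 3_000_000):,.0f}")   # $3,000,000 (at the cap)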

Also in 2001, Michigan established 10 specific standards of quality, again involving both outages

and customer service (one additional standard was subsequently added). Utilities were required to

report their performance on each, and shortfalls from established standards could result in penalties

and ultimately credits to customers lii.

In “Reforming the Energy Vision”, the NYS Department of Public Service Staff Report and

Proposal, it is stated that “Generally, most New York electric utilities are subject to several performance

metrics with negative-only revenue adjustments for failing to meet certain criteria. These metrics are

related to outage duration, number of outages, customer service, safety, and various metrics targeted to

particular needs identified for individual utilities. Earnings exposure for electric company operations, by

rate plan, range from total negative incentives of 263 basis points to total positive incentives of 45 basis

points including positive incentives for energy efficiency.” liii

Conclusion

In light of the foregoing, my conclusions are two-fold. On one level, it would appear that while

Ontario has implemented PBR in the electricity sector, it lags behind in measuring reliability in a

disciplined manner and applying incentives/penalties to ensure service quality to ratepayers. If the

leading study on the question is to be accepted, PBR without mandatory service quality standards does

entail a degradation in SAIDI, which the OEB should take measures to correct.

On a more philosophical level, the comparison between COSR and PBR with which this paper

began may require a sober reassessment. According to Joskow, the theoretical framework underpinning

PBR for legal monopolies in general has developed considerably over the last fifteen years and is

reasonably mature. However, the practical application of these concepts to electrical distribution has

lagged behind the theory for several reasons. While PBR has been promoted as a straightforward

and superior alternative to COSR, it would more accurately be termed a complement to COSR. It

requires a precise accounting system for capital and operating costs, cost reporting protocols, data

collection, and reporting requirements for the non-cost categories of performance.liv Even simple cap

mechanisms require rate cases or price reviews. Ultimately, the information burden of implementing

PBR is not unlike that for COSR, and a determination of whether switching systems is worthwhile depends on whether the performance improvements justify the additional effort.lv

Other observers point to further problems with PBR, which have led several jurisdictions to abandon their PBR plans: unforeseen exogenous events that could not be administered within the

confines of the plan, adverse public reaction to utility earnings in excess of those commonly authorized

under COSR, and questions about the legality of the plans under state statutes.lvi There is also a

fundamental complaint leveled against PBR: it rewards utilities for things they should be doing in any event.lvii

Regardless, as PBR plans have evolved around the world, the focus has shifted from reducing

operating costs to investment and various dimensions of service quality. Joskow feels these

mechanisms should be more fully integrated, as quality of service schemes appear to have been simply

“bolted on” to cost reduction schemes without any incorporation of consumer valuations of quality, or

an exploration of quality along its different dimensions. If PBR is to provide a satisfactory answer to

the question with which this paper began – what is the optimal level of quality and at what price should it be offered to consumers –

more empirical research on PBR needs to be done.

References

Ajodhia, V., and Rudi Hakvoort, “Economic regulation of quality in electricity distribution networks”, Utilities Policy 13 (2005), 211-221.

Berg, Thomas F., “The Incentive Regulation Bandwagon: Picking up Speed”, Public Utilities Fortnightly,

May 1, 1992, p. 16.

Cronin, Francis J. and Stephen Motluk, “Ontario’s Failed Experiment (Part 1)”, Public Utilities Fortnightly,

July 2009, p. 39.

Cronin, Francis J. and Stephen Motluk, “Ontario’s Failed Experiment (Part 2)”, Public Utilities Fortnightly,

August 2009, p. 52.

Joskow, Paul (2006), “Incentive Regulation in Theory and Practice: Electricity Distribution and

Transmission Networks”, http://economics.mit.edu/files/1181.

Kaufman, Lawrence, “Incentive Regulation for North American Electric Utilities”, Energy Law and Policy

(Toronto: Carswell, 2011) at p. 275.

Lowry, Mark Newton and Lawrence Kaufman, “Performance-Based Regulation of Utilities”, Energy Law

Journal (2002) 23:2 - 399.

NYS Department of Public Service Staff Report and Proposal, Case 14-M-0101, “Reforming the Energy

Vision”, April 24, 2014.

Ontario Energy Board, “Report of the Board – Renewed Regulatory Framework for Electricity

Distributors: A Performance-Based Approach”, October 18, 2012.

Ter-Martirosyan, Anna and John Kwoka, “Incentive regulation, service quality, and standards in U.S.

electricity distribution”, Journal of Regulatory Economics (2010) 38:258-273.

Sappington, David E.M. and Dennis L. Weisman, “Designing Superior Incentive Regulation”, Public Utilities Fortnightly, February 15, 1994.

i Virendra Ajodhia and Rudi Hakvoort, “Economic regulation of quality in electricity distribution networks”, Utilities Policy 13 (2005) 211-221 at p. 211.
ii Mark Newton Lowry and Lawrence Kaufman, “Performance-Based Regulation of Utilities”, Energy Law Journal, 2002, 23, 2, p. 399.
iii Supra note ii at p. 402.
iv Ibid.
v Ibid.
vi Ibid.
vii Supra note ii at p. 406.
viii 2013 ONCA 359.
ix Paul Joskow, “Incentive Regulation in Theory and Practice: Electricity Distribution and Transmission Networks”, MIT, January 21, 2006 at p. 3; http://economics.mit.edu/files/1181.
x “The Prize in Economic Sciences 2014”, The Royal Swedish Academy of Sciences, Background, “Market Power and Regulation”, http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2014/popular-economicsciences2014.pdf.
xi Supra note ii at p. 403.
xii Supra note ix at p. 3.
xiii Supra note i at p. 212.
xiv Ibid.
xv Supra note ii at p. 406.
xvi Supra note ii at p. 399.
xvii Supra note ii at p. 404.
xviii Supra note ii at p. 408.
xix Lawrence Kaufman, “Incentive Regulation for North American Electric Utilities”, Energy Law and Policy at p. 280.
xx Supra note iii at p. 405.
xxi Anna Ter-Martirosyan and John Kwoka, “Incentive regulation, service quality, and standards in U.S. electricity distribution”, J. Regul. Econ. (2010) 38:258-273 at p. 259.
xxii Ibid.
xxiii Ibid.
xxiv Supra note xxi at p. 262.
xxv Supra note xxi at p. 263.
xxvi Supra note xxi at p. 268.
xxvii Ibid.
xxviii Ibid.
xxix Francis J. Cronin and Stephen Motluk, “Ontario’s Failed Experiment (Part 1)”, Public Utilities Fortnightly, July 2009 at p. 39.
xxx Supra note xxix at p. 40.
xxxi Ibid.
xxxii Ibid.
xxxiii Francis J. Cronin and Stephen Motluk, “Ontario’s Failed Experiment (Part 2)”, Public Utilities Fortnightly, August 2009 at p. 51.
xxxiv Ibid.
xxxv Supra note xxxiii at p. 55.
xxxvi Supra note xxix at p. 41.
xxxvii Supra note xxxiii at p. 54.
xxxviii Supra note xxxiii at p. 51.
xxxix Supra note xxxiii at p. 53.
xl Ontario Energy Board, Report of the Board, Renewed Regulatory Framework for Electricity Distributors: A Performance-Based Approach, October 18, 2012, p. 57.
xli Supra note xl at p. 58.
xlii Supra note xl at p. 59.
xliii Supra note xl at p. 61.
xliv Supra note ix at p. 31.
xlv Ibid.
xlvi Ibid.
xlvii Ibid.
xlviii Supra note ix at p. 31.
xlix Ibid.
l Ibid.
li Supra note ix at p. 36.
lii Supra note xxi at p. 262.
liii NYS Department of Public Service Staff Report and Proposal, Case 14-M-0101, “Reforming the Energy Vision”, April 24, 2014.
liv Supra note ix at p. 51.
lv Supra note ix at p. 52.
lvi David E.M. Sappington and Dennis L. Weisman, “Designing Superior Incentive Regulation”, Public Utilities Fortnightly, February 15, 1994, p. 12.
lvii Thomas F. Berg, “The Incentive Regulation Bandwagon: Picking up Speed”, Public Utilities Fortnightly, May 1, 1992, p. 129.