Comments by Francis J. Cronin
In the matter of the
Pacific Economics Group Report
Calibrating Rate Indexing Mechanisms for 3rd Generation Incentive Regulation in Ontario
(EB-2007-0673)
On behalf of
Power Workers’ Union
April 14, 2008
Comments on PEG’s 3rd Generation Report
| P a g e 1
1.0 Introduction
On August 2, 2007 the Ontario Energy Board (the Board) initiated a consultation on 3rd Generation
Incentive Regulation (3rd Generation IR) for Ontario’s Electricity Distributors (EB-2007-0673).
On February 28, 2008 the Board released for comment a report by Board staff’s consultant,
Pacific Economics Group (PEG), on a methodology for adjusting electric distributor rates,
“Calibrating Rate Indexing Mechanisms for 3rd Generation Incentive Regulation in Ontario” (the
February PEG IR report). The Board noted that release of the PEG report “is a step in the
evolution of the Board’s consideration and development of regulation of the sector.” According
to the Board, the “PEG Report makes specific recommendations for the productivity and stretch
factor components of the X-factor.” The Board has requested written comments on the February
PEG Incentive Regulation (IR) report.
I have been retained by the Power Workers’ Union (PWU) to provide comment on the February PEG
IR report.
The February PEG report relies for some critical input on an earlier April 2007 report
“Benchmarking the Costs of Ontario Power Distributors” (the April PEG benchmarking report)
issued by the Board in its consultation on benchmarking, Comparison of Electricity Distributors
Cost (EB-2006-0268). Subsequently, the April PEG report was updated by a March 20, 2008
release (the March PEG benchmarking report). These latter two reports are a continuation of
work on costs and cohorts begun in the 2006 Electricity Distribution Rate initiative with an
October 2005 study prepared by Christensen Associates, “Methods and Study Findings:
Comparators and Cohorts Study for 2006 EDR” (the Christensen report).
This report contains my comments on PEG’s reports and recommendations. I provided detailed
comments on these issues at the Board’s March 25 and 26, 2008 3rd Generation IR Stakeholder
Meeting as expert consultant to the PWU. Related comments are offered in my earlier comments
on benchmarking1 submitted by the PWU in the benchmarking consultation which I will revise
1 Comments by Francis J. Cronin, In the matter of the Ontario Energy Board’s Comparison of Distributor Costs Consultation Consultant’s Report (EB-2006-0268), June 26, 2007.
based on PEG’s revisions in its March 20, 2008 benchmarking report. I also provided detailed
comments on the benchmarking issues at the September 12, 2007 Comparison of Distributor
Costs Technical Consultation (Technical Conference) as expert consultant to the PWU.
Additional related comments on service quality (SQ) and service quality regulation (SQR) can be
found in the PWU’s filed submission in the Board’s consultation on SQR2 (EB-2008-0001).
The issues addressed in this report are PEG’s approaches in setting the IR parameters:
• Use of operations, maintenance and administration (OM&A) data to benchmark
cost levels and set inefficiency penalties based on a mis-specified short-run cost
function, which creates substantial biases in cost rankings from both the model
and the partial cost comparisons;
• Use of OM&A data and a proxy measure of capital to estimate total factor
productivity (TFP) trends, a proxy that PEG itself rejected in its benchmarking
report and that produces substantial errors in estimated capital use and TFP
trend by LDC; and
• Use of US LDC data to derive TFP trends applied to Ontario distributors without
adequate justification.
I demonstrate through empirical evidence that PEG’s current methodologies are flawed and
unacceptable as the basis for an IR. My recommendation for 3rd Generation IR is a short-term
framework to be instituted (e.g. baseline productivity factor or PF with menu) while a more
rigorous IR is developed with the decades of continuous LDC data held by the OEB.
Although I am critical of PEG's analysis in a number of respects, I readily acknowledge the
difficulty of their assignments. I understand that this difficulty was compounded by the fact that
PEG did not have access to important, substantial historical data previously collected by the
Board to serve as the foundation of both 1st Generation and future IR/benchmarkings. I have no
doubt that this critical information would be of significant assistance to them and to all the
stakeholders affected by the proposed IR. Notwithstanding the various criticisms I have
identified, I do agree with PEG's analysis in a number of important respects, i.e., the examination
of 1st Generation analysis/results for application to 3rd Generation and the use of monetary
2 Comments on Staff Discussion Paper Regulation of Electricity Distributor Service Quality (EB-2008-0001), March 2008.
values rather than physical counts of capital. Furthermore, I agree with PEG's characterization of
the remaining "problems" and that its forced shortcuts result in "problematic" applications to
capital calculations for IR and total cost benchmarking; that differing labour capitalization
among the LDCs has not been controlled for; and, that the higher costs of LDCs providing higher
reliability has not been recognized in the work they have been able to undertake to date.
Moreover, it is my view that, if the modifications I recommend below are adopted to overcome
these problems PEG has acknowledged, as well as a number of others, together with PEG's
suggested application of 1st Generation analysis to 3rd Generation and the IPI discussed in the
Staff Report, the Board will have the basis of an effective IR scheme for both the 3rd and
subsequent Generations.
Outline
In section 2, I present a summary, conclusions, and recommendations. The latter are divided
between those I recommend the Board act on in the short run and those which should be
implemented over a longer time horizon. In section 3, I discuss the integrated nature of utility
operations and how this must be reflected in IR and benchmarking, including capital and service
reliability. Section 4 presents an analysis of the biases inherent in structuring IR on OM&A
costs only. Section 5 examines the errors and associated biases with proxy measures of capital
costs. Comparisons of benchmark results using only O&M or proxy capital measures are
presented in section 6. Actual benchmarking effects on individual LDCs from these errors are
discussed. Section 7 discusses the role of SQR and the reliability trends among Ontario LDCs.
Finally, section 8 discusses two design features for the IR framework: the use of a PF-Return on
Equity (ROE) menu and the construction of an input price index (IPI).
2.0 Summary, Conclusions, and Recommendations
In this section I present a summary, conclusions, and recommendations. The latter are divided
between those I recommend the Board act on in the short run and those which should be
implemented over a longer time horizon.
1. Summary
The PEG report presents an IR methodology for adjusting electric distributor rates. Underlying
this framework are four pieces of analysis. The first three are used by PEG in its February IR
report to establish TFP trends for Ontario LDCs. The information on TFP trends is used by PEG
to establish their recommended baseline TFP target for the proposed 3rd Generation IR
framework. These separate pieces of analysis are:
1. Estimation of TFP trends for 2002-2006 using Ontario LDC OM&A data and a proxy
measure for capital;
2. Derivation of TFP trends for Ontario distributors using US data on LDC TFP trends;
and
3. Assessment of 1st Generation performance based regulation (PBR) analysis and results
on TFP performance by Ontario LDCs.
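The TFP trend estimation in item 1 follows the standard index-number approach: TFP growth equals the growth of an output index minus the cost-share-weighted growth of the input indexes. A minimal sketch follows; the index series and the two-input cost shares are purely hypothetical illustrations, not PEG’s data:

```python
import math

def growth(series):
    """Average annual log growth rate of an index series."""
    return (math.log(series[-1]) - math.log(series[0])) / (len(series) - 1)

def tfp_trend(output_idx, input_idxs, cost_shares):
    """TFP growth = output growth minus cost-share-weighted input growth."""
    input_growth = sum(s * growth(idx) for idx, s in zip(input_idxs, cost_shares))
    return growth(output_idx) - input_growth

# Hypothetical indexes for 2002-2006 (5 observations, 4 annual changes)
output = [100, 102, 104, 106, 108]        # output quantity index
omna = [100, 101, 102, 103, 104]          # OM&A quantity index
capital = [100, 100.5, 101, 101.5, 102]   # capital quantity index
shares = [0.45, 0.55]                     # hypothetical OM&A/capital cost shares

print(round(100 * tfp_trend(output, [omna, capital], shares), 2))  # percent per year
```

The point of the sketch is that any error in the capital quantity series or in the cost shares feeds directly into the estimated TFP trend.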
In the Conclusions section below, I discuss the critical shortcomings of (1) and (2). In terms of
(1), the capital proxy method used by PEG in estimating the 2002-2006 TFP trend was rejected
by PEG for use in cost benchmarking in its March 2008 report. I find that this proxy is a poor
estimator of capital usage by LDC; it almost always overestimates capital but this error can range
from 100 percent to several hundred percent across individual LDCs. In terms of (2), PEG offers
inadequate justification for the use of its US LDC sample which consists of large, urban LDCs
which are in some cases vertically integrated, operate in multiple states, and may distribute gas
as well as electricity.
In terms of (3), I support the use of 1st Generation information for establishing 3rd Generation
parameters. The TFP and IPI analyses in 1st Generation PBR were based on rigorous research;
subsequent research has greatly expanded our understanding of Ontario LDCs’ costs,
productivity, and efficiency. Consideration of this information in establishing a baseline TFP
target and IPI for 3rd Generation IR is a reasonable and relatively expeditious approach.
The fourth piece of analysis is used to set individual LDC inefficiency penalties. In doing so
PEG examined and compared individual LDCs with the other Ontario LDCs on the basis of
reported OM&A costs only. LDCs found to be less efficient are subject to higher annual TFP
targets during the 3rd Generation IR term. The benchmarking analysis used in comparing the
LDCs’ efficiencies is the analysis presented in PEG’s April benchmarking report. Given the
unavailability to PEG of essential historic capital cost information, this analysis uses only
OM&A costs rather than total cost to estimate a short-run cost function for Ontario distributors
from 2002 – 2006. This analysis therefore lacks consideration of LDC capital usage, which is
required for correct specification of short-run cost functions.
Another shortcoming of PEG’s analysis is the lack of consideration of the LDCs’ SQ (e.g.
system reliability) performance. As PEG acknowledges, reliability varies across LDCs and
higher reliability usually necessitates higher costs. However, the PEG April benchmarking
report does not examine SQ performance and does not take into account the substantial
differences in service reliability across the LDCs and the associated differences in costs.
Conclusions on PEG’s analysis are set out below.
2. Conclusions
a. Need to Reflect Integrated Nature of Electric Distribution Operations in IR.
The most important, overriding issue in the Board’s evolving benchmarking initiatives is the failure to “model or benchmark” the integrated operation of distribution utilities with comprehensive data reflecting:
• the joint nature of LDC output
• the substitution relationships among an LDC’s inputs
Joint output means that just and reasonable rates cannot be determined unless costs are assessed jointly with SQ; failure to reflect all LDC outputs seriously biases the assessments in favour of LDCs with lower reliability. We demonstrate below that LDCs with equal costs can operate with very different proportions of capital and OM&A. Were we to focus on just one input which favours certain LDCs, we would be misled as to the cost performance of all the LDCs.
b. OM&A benchmarking is inherently flawed.
It fails to recognize the integrated nature of utility operations and that LDCs can and do make management decisions regarding the appropriate distribution of their budgets between O&M and capital. Based on a sample of year-2000 data filed with the OEB, we find:
(1) the share of labour capitalized ranges from about 10 percent to 50 percent;
(2) the resulting capitalized labour represents as much as 39 percent of reported OM&A and as little as 7 percent;
(3) the amount of capitalized labour per customer is inversely related to the amount of OM&A per customer; and
(4) conclusions about cost rankings and comparisons change when the capitalized labour per customer is added to the amount of OM&A per customer.
This means that OM&A benchmarking is inherently flawed.
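Point (4) can be illustrated numerically. The per-customer figures below are purely hypothetical, but they show how an apparent OM&A cost ranking can reverse once capitalized labour is added back:

```python
# Hypothetical per-customer dollar figures for two illustrative LDCs.
# LDC A capitalizes a large share of its labour; LDC B capitalizes little.
ldcs = {
    "A": {"omna": 180.0, "capitalized_labour": 70.0},
    "B": {"omna": 210.0, "capitalized_labour": 10.0},
}

# Ranking on reported OM&A per customer alone (ascending = "cheapest" first)
rank_omna = sorted(ldcs, key=lambda k: ldcs[k]["omna"])

# Ranking once capitalized labour per customer is added back
rank_total = sorted(ldcs, key=lambda k: ldcs[k]["omna"] + ldcs[k]["capitalized_labour"])

print(rank_omna)   # A appears cheaper on OM&A alone
print(rank_total)  # B is cheaper on the combined measure
```

On OM&A alone, LDC A looks more efficient; adding its capitalized labour back reverses the ordering, which is exactly the bias the OM&A-only benchmark introduces.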
c. Capital Usage not Included in Benchmark Assessment.
The current IR and benchmarking analyses undertaken by PEG do not include properly calculated capital costs. PEG examined the use of a capital “proxy” approach termed the ‘Mini Inventory Model’: the normal approach to capital calculation, but using a single year of data to assume a back-cast of 30 or 40 years. Ultimately, this ‘Mini Inventory Model’ approach was rejected by PEG for use in its March benchmarking report. This means that PEG’s benchmarking analysis and short-run cost function fail to include capital as a control variable, completely undermining the cost comparisons and “consumer dividend” recommendations.
d. Biased Capital Measure used in Ontario TFP Analysis.
We find the ‘Mini Inventory Model’ capital proxy has widespread and significant errors across LDCs. Unfortunately, while PEG’s March benchmarking report rejects the ‘Mini Inventory Model’ and does not use a proper measure of LDC capital, the February PEG IR report relies upon this proxy capital approach to calculate Ontario LDC TFP from 2002 to 2006. The use of such error-ridden capital estimates, in the absence of historic capital cost data, completely undermines PEG’s Ontario TFP analysis. PEG’s current methodologies are flawed and unacceptable as the basis for an IR; the Board cannot rely on this analysis as input for its decision.
e. Biased Capital Cost Shares.
The February PEG IR report calculates a capital share of 63 percent based on analysis of Ontario data over the 2002 – 2006 period. However, this calculation is based on the same capital proxy rejected in the March PEG benchmarking report. This proxy does not take actual cost data for 25, 30 or 40 years to inflation-adjust the actual LDC capital deployment profile. Rather, it assumes a deployment profile for the net fixed assets (NFA) in 2002, and the same assumed deployment profile is used for every LDC. I have examined the extent of biases from this capital proxy. It creates substantial and widespread errors between LDCs’ real capital costs and the estimates generated by the proxy. For the vast majority of Ontario LDCs, this proxy substantially overestimates the amount of capital used, with the extent of the error varying greatly amongst the LDCs. These share data should not be employed in the IR, e.g., in weights for the IPI.
f. Inadequate Justification for Use of US LDC Data.
PEG uses a sample of generally very large, urban, US LDCs. Some of these appear to operate in multiple states and may have very different organizational structures from the Ontario LDCs. Many of PEG’s US sample also distribute gas, and some are vertically integrated. Some are under IR and some are not. These differences make it unlikely that the sample of US distributors used by PEG is a good peer group for Ontario. However, there are thousands of municipal LDCs in the US which could provide a better peer group. As structured, PEG’s US peer group cannot be used as the basis for 3rd Generation IR.
g. Multi-dimensional Output and Just Rates.
Electricity distributors produce and sell a multi-dimensional output to their customers. Clearly, customer service, reliability, and voltage quality, among other attributes, can vary substantially, producing different products depending on the mix of characteristics delivered to the customers. Many, if not most, energy regulators have a dual responsibility toward consumers: they must ensure that prices are just and reasonable, and they must ensure the appropriate level of service/reliability is delivered.
h. Relationship between Reliability and O&M Expenditures.
Using a medium-length time series of data filed with the OEB, on-going research for the PWU finds that among Ontario LDCs, O&M expenditures are statistically related to the LDC’s reliability performance.3 Furthermore, lower reliability is found to cause an LDC to raise its budget. These relationships should be keenly appreciated by regulators, especially when transitioning from cost of service to IR.
i. Reliability Costs and Observed “Inefficiency.”
Since reliability varies “widely” among LDCs, and LDCs with higher reliability will generally have higher costs, we must structure the LDC benchmarking to account for these differences. If such different cost causation situations are ignored in observations on the LDCs’ OM&A costs, we may mistakenly identify “higher cost” LDCs as less efficient than lower cost LDCs providing lower reliability. If this is so, the benchmarking approach proposed by PEG and Board staff will penalize the high-reliability LDCs and reward the low-reliability LDCs.4
3 This research will be discussed in my filing on April 28 on the PEG benchmarking report.
4 We are using the terms “high” and “low” in a relative context.
j. PEG’s Proposed IR and Perverse Incentives.
This perverse reward/penalty scheme could then incent high-reliability LDCs to reduce their OM&A expenses to improve their benchmarking scores; reliability would most likely decline as well. This is not the result we would expect from a well-structured benchmarking scheme.
k. PEG’s Proposed IR and Perverse Profit Incentives.
The shift to IR can put OM&A costs directly in conflict with the pursuit of profit during the plan’s term. Cost reductions experienced earlier in a plan’s term are worth more to a utility than cost reductions achieved in later years. Since capital may not be subject to significant changes within the earliest years of a plan’s term, the utility could be incented to cut OM&A expenses beyond what is prudent for the quality and reliability of the network.
l. De Facto IR and Deterioration in Ontario Reliability.
The Board’s Decision in the RP-1999-0034 case was to establish a minimum floor for reliability. As stated by the Board in that decision, “…the Board favours the minimum standards proposed in the draft Rate Handbook for first generation PBR. The Board notes that these standards represent the minimum acceptable performance level.” 5 It appears that, on average, Ontario LDCs have been experiencing a deterioration of reliability over the 2000 to 2006 period; furthermore, it appears that some are not compliant with the performance standards established in 2000. Unfortunately, the Board appears not to have examined reliability performance against the mandated compliance standards set in 2000.
m. Internalizing Social Cost of Interruptions.
Firms will only optimize those costs internal to their cost structure, generally capital and OM&A. The costs borne by customers due to the utility’s interruptions are not considered by a utility when deciding how much to spend on capital and OM&A. It would generally be the case that the failure to recognize such customer interruption costs would lead to an LDC spending too little on reliability.
n. Critical, multi-dimensional Role of IPI.
The IPI plays a critical, multi-dimensional role in an IR. The IPI sets an automatic adjustment for LDC cost changes. It obviates the need to hold frequent COS proceedings. The IPI mirrors a COS process by adjusting rates on a prudency basis, but uses the sector average as the prudency test. It mitigates the likelihood that mistakes in a RAM
5 RP-1999-0034 Decision at 5.1.20.
associated with a macroeconomic price index will over- or under-compensate LDCs. It establishes yardstick competition among Ontario LDCs, with better performers holding down costs. And finally, it provides proper incentive signals to LDCs and customers.
o. 1st Generation IPI.
The IPI developed in 1st Generation was rigorously examined and evaluated. The input weights used in 1st Generation PBR were extensively tested and based on the results of 48 Ontario distributors, including decades of actual capital data for each LDC. These weights would be preferable to the weights suggested by PEG, which appear to be based on the capital proxy rejected by PEG in their March benchmarking report. For the vast majority of Ontario LDCs, this proxy dramatically overestimates their use of capital. PEG’s resulting 63 percent capital share in total costs, versus the 45 percent or so found in 1st Generation, is consistent with the error associated with the PEG capital proxy. Use of such a higher estimated cost share for capital could lead to error in the calculated IPI and increased volatility in the IPI.
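The sensitivity of an IPI to the capital cost share can be sketched directly. Collapsing to two inputs (capital and OM&A) for illustration and using purely hypothetical input price growth rates, the weighted index growth shifts materially between a roughly 45 percent capital share and the 63 percent proxy-based share:

```python
def ipi_growth(shares, price_growths):
    """Input price index growth as a cost-share-weighted sum of input price growth."""
    assert abs(sum(shares) - 1.0) < 1e-9, "cost shares must sum to 1"
    return sum(s * g for s, g in zip(shares, price_growths))

# Illustrative annual input price growth, percent: [capital, OM&A]
prices = [4.0, 2.0]

first_gen = ipi_growth([0.45, 0.55], prices)  # ~45 percent capital share
peg_proxy = ipi_growth([0.63, 0.37], prices)  # 63 percent proxy-based share

print(first_gen, peg_proxy)  # the higher capital weight raises computed IPI growth
```

With capital prices growing faster than OM&A prices in this illustration, the inflated capital share mechanically raises the computed IPI growth, which is the kind of error and volatility risk noted above.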
p. PF-ROE Menu Incorporates LDC Diversity into IR.
The PF-ROE menu is a natural solution to the issue of diversity that exists among the LDCs. Firms in different circumstances can base their IR choices on these differences. And, a menu can be easily structured to reach explicit sharing goals between rate payers and shareholders.
3. General Recommendations
I will first discuss my recommendations from a “blue-sky” perspective. Subsequently, I consider
what the Board should move on first and what to implement over a longer time frame.
a. Systemic Risk with Improper IR.
Given the risks to customers, shareholders, and LDCs associated with inadequate benchmarking regimes, the Board should not implement any benchmarking of Ontario LDCs until this can be done correctly, i.e., with the full, properly specified costs of distribution together with each LDC’s reliability level as a foundation of the framework. Total cost benchmarking better reflects an LDC’s cost structure and input choices, is more equitable, permits an evaluation of societal resource usage, and limits inappropriate regulatory incentives. The Board should develop the appropriate capital cost information necessary to properly benchmark Ontario electric utilities. A very good starting point is the 1999 PBR Baseline Surveys, which covered decades of capital components and, as of that time, had calculated the total costs of distribution, including capital costs, for those LDCs in the 1999 Staff report as well as others. Even with the subsequent substantial
mergers and amalgamations since 1999, the Board could update the initial PBR submissions with the subsequent annual PBR filings and other distributors’ submissions.6
b. Factor Input Efficiency.
Due to the distortions caused by non-market prices for capital, it is essential that benchmarking cost measures reflect the full set of factor input choices and their associated costs. Conventional measures of capital costs must be calculated and included for efficient and equitable cost comparisons among Ontario LDCs. Adequate adaption periods with feasible efficiency improvement targets must be structured within a multi-period IR.
c. IR, Cost Incentives, and SQR.
It is clear that IR produces incentives for potentially imprudent cost cutting. We also know from empirical research that LDCs under IR but without standards and penalties do in fact cut O&M adversely to maximize profit. Service quality/reliability must be included in any IR framework as a benchmark of a utility’s production, integrating utility cost benchmarking with SQ and reliability regulation.
d. European SQR Efforts.
European regulators have led the way and laid out compelling arguments for the need to blunt the adverse impacts of IR on SQ. The Council of European Energy Regulators (CEER) has outlined the components of such SQR and encouraged its member jurisdictions to implement these items, such as data collection and customer surveys, system-wide standards, and single customer guarantees. Individual European countries have incorporated WTP and interruption costs into their IR frameworks. The Board should thoroughly review this experience for its applicability to Ontario and implementation into a robust set of service reliability mandates with incentives/penalties.
6 This effort collected PBR capital data from the 1970s to 1997 and PBR operating/financial/demand data from 1988 to 1997, including “environmental” factors potentially affecting an LDC’s performance. This effort was augmented by directed PBR filings among Ontario LDCs for at least the years 2001 and 2002. It is possible, as well as preferable, to update this data, as must surely have been the intent in collecting the data from the LDCs on an ongoing basis. These critical data, and what must amount to thousands of man-hours of effort expended collectively to compile, process, and analyze this wealth of information, should not be ignored. Updating the 1999 data would cost no more, and probably less, than efforts to start in 2007 and work backward. It is not clear that the latter approach is even feasible; it would most certainly produce less robust data and almost certainly take longer to complete.
4. Short-term Recommendations
a. Interim Term.
The Board should consider 3rd Generation as a transition term to a fully developed and tested IR. Such an IR is detailed below. The term of 3rd Generation should be no longer than 3 years.
b. Data preparation and Analysis.
Full data preparation and related analyses need to be a top priority of this interim plan. The Board should work with a small group of key stakeholders to assure that an effective research plan is implemented and that the data processing and analysis is expeditiously carried out.
c. Mid-Term Review.
The Board should schedule a mid-term review at about the 18th month of 3rd Generation IR for the 1st tranche LDCs to begin the process of operationalizing the longer term IR.
d. IPI.
The Staff report presents a detailed discussion of the price index options. The 1st Generation IPI framework should be implemented for 3rd Generation.
e. PF-ROE Menu.
The Board should employ a PF-ROE menu. Existing research on Ontario LDCs or comparable jurisdictions/circumstances (e.g., the PF-ROE menu in Norway) should be used to set the PF. For example, research during 1st Generation found a ten-year mean growth rate of slightly more than 0.8 percent for TFP. Research subsequent to 1st Generation found a mean ten-year growth rate of about 1.6 percent for TFP for frontier firms. On this basis, I recommend the Board set the ceiling of the proposed menu at 1.6 percent with an ROE of 12.5 percent. Increments of 0.2 in the PF would be associated with 100 basis point increments in the allowed ROE. This would set the baseline TFP at 0.8 percent with an estimated nearly 60/40 split for customers on the incremental PF-ROE choices. The baseline PF would be virtually identical to the ten-year TFP growth among Ontario LDCs in 1st Generation.
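The arithmetic of this menu can be laid out mechanically. Note that the 8.5 percent ROE implied at the 0.8 percent baseline PF is my inference from the stated 0.2-point/100-basis-point increments, not a figure taken from the report:

```python
def pf_roe_menu(pf_ceiling=1.6, roe_ceiling=12.5, pf_step=0.2, roe_step=1.0, pf_floor=0.8):
    """Enumerate (PF, ROE) pairs from the ceiling down to the baseline PF.
    Each 0.2-point reduction in PF is paired with a 100-basis-point ROE reduction."""
    menu = []
    pf, roe = pf_ceiling, roe_ceiling
    while pf >= pf_floor - 1e-9:       # tolerance guards against float drift
        menu.append((round(pf, 1), round(roe, 1)))
        pf -= pf_step
        roe -= roe_step
    return menu

for pf, roe in pf_roe_menu():
    print(f"PF {pf:.1f} %  <->  allowed ROE {roe:.1f} %")
```

An LDC confident of strong productivity growth would select a higher PF in exchange for a higher allowed ROE; one facing weaker prospects would take the baseline pairing.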
f. PF and Budget Sufficiency.
The above recommendation assumes that the LDC budgets going into IR are in a steady-state mode, providing sufficient funds for capital refurbishment, growth, and necessary additions induced by wholesale price increases or conservation, and that the operational side of OM&A is receiving a similarly sufficient budget. However, I have some doubts that this is so for all LDCs. Information on reliability, budgets, and ROE would seem to indicate that there may be an operational budget gap. No doubt many LDCs have seen increases in OM&A, but my expectation is that the LDCs have had to substantially increase the “A” portion to meet the substantial increase in regulatory burdens imposed on them over the past 10 years.
g. SQR and Reliability.
The Board must appraise the effective SQ standards relative to those established by the 2000 Rate Handbook. Reliability performance must be examined for compliance with the minimum SQ requirement. Causes of the deterioration in overall reliability performance must be assessed. The Board should reexamine the interrelationship between the de facto IR (e.g., rate freezes), the proposed IR, and LDC incentives. A balanced approach should be adopted.
5. Long-term Recommendations
a. Long-term Framework.
The OEB should outline a long-term (e.g., 15 year) framework with a staged approach for the elimination of any inefficiencies. Each LDC should be required to eliminate a reasonable minority of total inefficiency, if any, during each term. Plan terms should be long enough (e.g., 5 years) for LDCs to adapt and have a plausible chance of reaching the efficiency targets.
b. Benchmarking Basis.
Benchmarking must be based on correctly calculated measures that adequately account for the joint nature of LDC output and the interrelationships among LDC inputs. Benchmarking must: (1) be based on total costs, including (2) capital costs; (3) reflect the reliability/service quality aspects of operations; and (4) measure total inefficiency, including allocative inefficiency.
c. Socially Optimal SQ.
The Board should plan to fully integrate reliability/customer interruption costs into each LDC’s planning process. This should be done at the start of a second term following 3rd Generation IR. The requisite information on customer costs and WTP should be collected and analyzed in the first term. The goal should be to identify the socially optimal level of SQ with the requisite levels of investment and O&M.
3.0 A Model of Electricity Distribution
The April PEG benchmarking report on distributor cost comparisons describes the distribution
business and related inputs as follows:
Power flows to the customer through wire conductors. Other capital inputs used in local delivery include poles, conduits, station equipment, meters, vehicles, storage yards, office buildings, and information technology (“IT”) inputs such as computer hardware and software. Distributors commonly operate and maintain such facilities and are also frequently involved in the construction of distribution plant. These activities require labour, materials, and services. Local delivery also typically requires a certain amount of power in the form of line losses. Opportunities are available to outsource many OM&A and construction activities. Distributors vary greatly in the extent of their outsourcing. (p 28)
Despite this description of the importance of capital and power losses, PEG includes neither
in its proposed cost benchmarking nor its 3rd Generation framework. And, despite noting that
reliability varies across LDCs and that higher reliability generally necessitates higher costs, PEG
does not account for such reliability associated cost differences.
Just how are capital and line losses integrated by each utility in the distribution of power? What
role does reliability play? These questions are addressed below.
Multi-dimensional Output, Multi-dimensional Inputs
The distribution of electricity can be represented by equation (1) which expresses the
relationship between the quantity of distribution output produced, Q, and the inputs of labour, L,
materials, M, capital, K, and system losses of electricity, SL. Electric distributors produce and
sell a multi-dimensional output to their customers. Clearly, customer service, reliability (or
“continuity” for the Europeans), and voltage quality, among other attributes, can vary substantially,
producing different products depending on the mix of characteristics delivered to the customers.
Since reliability and associated costs vary across LDCs and the observed LDCs’ costs reflect
these differences in reliability, benchmarking an LDC’s efficiency must include its reliability as
an output. Definitions are discussed below.
(1) Q = f(L, M, K, SL)
The first two inputs (L,M) are commonly labelled OM&A. Some labour associated with
deploying capital assets is capitalised. Indeed, distribution utilities are capital intensive
operations. Cost shares among inputs are reported for the Ontario LDCs in 1st Generation PBR
as 45 percent capital, 42 percent OM&A, and 13 percent system losses. The Norwegian
regulator (NVE) includes system losses and has done so within a model based on equation (1).
Capital
Capital is represented by the conventional stock/amount of real capital.7 The share of capital
ranges from about 35 percent to about 65 percent. Clearly, utilities can substitute between
OM&A and capital and do so. As PEG noted in its April benchmarking cost comparison report
(p 50), and I agree: "Capital often serves as a substitute for OM&A inputs, and
companies vary in their propensity to capitalize OM&A expenses. OM&A expenses should thus
be lower the higher is the capital quantity…” That is, when two inputs are substitutes a firm can
produce the same level of output with varying amounts of the two inputs.
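The substitution point can be illustrated with a toy production function. This is my own sketch, not PEG's model: a Cobb-Douglas technology with hypothetical elasticities and input quantities, showing that two utilities with very different capital/labour mixes can produce identical output.

```python
# Illustrative sketch (not PEG's model): a Cobb-Douglas technology
# Q = K^0.5 * L^0.5 lets two utilities produce identical output with
# different capital/labour mixes. All figures are hypothetical.

def output(capital: float, labour: float) -> float:
    """Cobb-Douglas output with equal capital and labour elasticities."""
    return capital ** 0.5 * labour ** 0.5

# Utility A is capital-intensive; Utility B is labour-intensive.
q_a = output(capital=400.0, labour=100.0)   # 20 * 10 = 200 units
q_b = output(capital=100.0, labour=400.0)   # same output, mirrored mix
```

Benchmarking either utility on one input alone would brand the other inefficient, even though both produce the same output.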
7 Standard utility accounting of capital costs is based on book valuation (i.e., historical prices) and fails to reflect changing asset prices over time. The capital quantity index employed in the 1999 Board Staff Report was constructed using inflation-adjusted values for historical capital stock deployed before a benchmark year, as well as for subsequent additions and retirements, each adjusted for inflation. Real stock in 1980, the benchmark year, was estimated by deflating undepreciated capital by a capital asset price constructed by "triangularizing" the pre-benchmark asset prices back to 1960. The capital asset price index, CAP, is the electric utility distribution system construction price index published by Statistics Canada. The standard treatment of capital in productivity research expresses subsequent values of the capital quantity index as a perpetual inventory model adjusted for the annual depreciation rate, additions, and retirements. Both additions and retirements are inflation adjusted. The capital service price index is equal to the depreciation rate plus opportunity cost, adjusted by the construction price index. Capital costs are then the capital quantity index times the capital service price index. See 1999 Staff report.
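The perpetual inventory mechanics described in footnote 7 can be sketched in a few lines. The depreciation rate, deflators, and addition/retirement series below are hypothetical placeholders, not the 1999 Staff Report's actual data:

```python
# Minimal sketch of a perpetual-inventory capital quantity index:
# depreciate last year's real stock, add inflation-adjusted additions,
# subtract inflation-adjusted retirements. All figures are hypothetical.

def capital_stock(k_benchmark, additions, retirements, deflators, dep_rate):
    """Roll the real capital stock forward from a benchmark-year value."""
    k = k_benchmark
    series = []
    for add, ret, price in zip(additions, retirements, deflators):
        k = k * (1.0 - dep_rate) + add / price - ret / price
        series.append(k)
    return series

stocks = capital_stock(
    k_benchmark=1000.0,                  # real stock in the benchmark year
    additions=[120.0, 130.0, 140.0],     # nominal capital additions
    retirements=[20.0, 20.0, 20.0],      # nominal retirements
    deflators=[1.00, 1.05, 1.10],        # construction price index, base = 1
    dep_rate=0.03,                       # annual depreciation rate
)
```

Multiplying each year's stock by a capital service price (depreciation plus opportunity cost, times the construction price index) then yields annual capital costs, as the footnote describes.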
Unlike equation (1), with its comprehensive depiction of distribution inputs, some researchers and
regulators have based their evaluations of efficiency on some variant of equation (2) or (3), in
which the differing levels of reliability are not accounted for in Q.
(2) Q = g(OM&A, NK)
(3) (Q, NK) = h(OM&A)
where NK represents the number of transformers, the aggregate capacity of transformers, or the
circuit length of the network. Note that in equations (2) and (3) system losses are ignored. Not
only does this remove 10 to 20 percent of costs from the efficiency comparisons; more
importantly, it eliminates the "benefit of lowered losses" for those utilities that increased inputs
like OM&A to achieve higher power efficiency (i.e., lower losses); the higher OM&A usage is
instead interpreted as lower production efficiency.
Capital Proxies
Instead of the correct capital data, i.e., the net real stock of capital, short-cut measures of capital
are sometimes suggested. These are either monetary shortcuts, referred to by PEG as proxies,
or physical counts of infrastructure.
Physical Counts of Capital
Physical counts such as the number of transformers or circuit miles have sometimes been
employed in benchmarking electric utilities instead of the constant-dollar stock of total capital
(and the service that flows from it). Transformers represent only 10 to 20 percent of capital asset
valuations for a distribution utility; distribution lines might represent slightly more. As the
importance of computers, software and communications grows with market openings, billing
complexities, and real-time network operations, these shares will fall.
Some researchers have employed physical counts that view capital as exogenous to the utility;
such efficiency comparisons have been made largely on OM&A expenditures, taking the capital
measures and output as fixed. Thus, in this approach a utility's decisions regarding half of its
capital are taken as given, its remaining capital and line losses are ignored, as are the
interrelationships among these factors and OM&A.
The failure to evaluate performance with monetary values for capital means that utility
comparisons fail to reflect the cost of different capital choices and allocations, and the trade-offs
between capital and O&M. Transformers might be high-efficiency models or lower-cost,
higher-maintenance models. A circuit km might be overhead, underground, or reconductored
with wider lines. While the km may be similar, installation costs for these options can vary by
several hundred percent, and maintenance differs as well. Utilities employing higher labour
capitalisation rates (which lower near-term costs) are evaluated as more efficient since their
non-capital (i.e., OM&A) costs are lower.
System Losses
Distribution utilities act as middlemen, selling and transporting electricity from wholesale to
retail markets. Resistance to the flow of electric current throughout the distribution network
causes a portion of the electricity entering the network to be lost in the form of heat. Network
characteristics such as conductor size, type of transformers, end-user power factors, and non-
optimal loads and voltage can affect system losses, which range from 6 to 8 percent to over 20
percent as a share of total distribution costs. During energy price spikes, the cost associated with line losses
can increase dramatically. Between 1988 and 1993, the wholesale price of power increased 45
percent in Ontario. Indeed, the price of power rose faster than other inputs between 1988 and
1993 and between 1988 and 1997. Such energy “crises” have sparked intense system audits by
some utilities to identify the sources of losses and potential network remedies, including:
• system automation equipment to optimise load;
• capacitors to compensate for low power factors among some end users;
• reconductoring to increase conductor size and reduce resistance;
• higher cost, high-efficiency transformers which reduce losses associated with transformer activation (core) and/or load (winding); and
• system regulators to optimise voltage.
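The cost leverage of such remedies can be shown with rough arithmetic. All figures below are hypothetical, chosen only to illustrate the scale: a utility delivering 500 GWh that reconductors its way from 8 percent to 6 percent losses at a $50/MWh wholesale price.

```python
# Rough, hypothetical arithmetic on what line losses cost and what a
# loss-reduction remedy is worth. Losses are a share of energy entering
# the network, so delivered energy must be grossed up.

def annual_loss_cost(delivered_gwh: float, loss_rate: float,
                     price_per_mwh: float) -> float:
    """Dollar cost of energy lost in the network per year."""
    entering = delivered_gwh / (1.0 - loss_rate)     # GWh into the network
    lost_mwh = (entering - delivered_gwh) * 1000.0   # GWh lost -> MWh
    return lost_mwh * price_per_mwh

before = annual_loss_cost(500.0, 0.08, 50.0)   # ~$2.17M per year
after = annual_loss_cost(500.0, 0.06, 50.0)    # ~$1.60M per year
savings = before - after                       # ~$0.58M per year
```

At a 45 percent wholesale price increase, as Ontario experienced between 1988 and 1993, both the loss cost and the payoff from loss-reducing capital rise proportionally.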
Clearly, the use of such remedies to reduce system losses means that greater amounts of capital
are being employed. Some of these remedies also require higher levels of OM&A. For example,
the use of capacitors to correct the effect of reactive power from certain end-user applications
means that equipment is being more widely dispersed and that equipment failures increase as
more fuses, switches and controls are deployed closer to the customer. Installation of system
voltage optimisation and phase current balancing equipment increases the OM&A associated
with such regulating equipment.
Inter-related Inputs
Ontario distributors attempt to optimize their production with different input mixes, relying on
the substitution possibilities among their inputs and responding to their circumstances and input
prices. Thus, two LDCs might produce the same output, but one uses more capital and less
labour while the second uses less capital and more labour.
PEG’s Failure to Include Capital in its Benchmarking
The April PEG benchmarking report benchmarked LDCs on OM&A costs only, because capital
cost information was not available to it for the analysis. Stakeholders at the OEB's
September 2007 Technical Conference on benchmarking noted the substantial biases and errors
associated with the lack of capital costs. PEG agreed to try potential capital proxies for the
missing data. The March PEG benchmarking report (pp. 62-65) examines its capital proxy:
NFA in 2002 with an assumed 30-year deployment schedule used to inflation-adjust the lump sum
of historical accounting capital reported in 2002, with capital additions for 2003 - 2006 then added.
However, the March PEG benchmarking report ultimately rejects the use of this proxy.
I have examined the biases related to the use of partial cost (OM&A) benchmarking in IR: this
measure distorts LDC efficiency, sometimes ranking as most efficient those LDCs that are in fact
among the least efficient. I have also examined the substantial biases from a number of potential
capital proxies. Below, I report on these analyses: all suffer from substantial and widespread
errors between an LDC's actual costs and those used to benchmark it.
4.0 The Inherent Biases in OM&A Benchmarking
The February PEG IR report recommends numerous individual LDCs be assigned inefficiency
penalties above the baseline TFP target recommended for the whole sector. These penalties are
based on the analysis presented in its June benchmarking report.8 The benchmarking report
employs reported OM&A data filed by the LDCs with the OEB to gauge individual LDC
8 PEG indicates these penalties will be revised based on the March revision of the benchmark report.
(in)efficiency based on cost comparisons with other LDCs. No consideration is given to
differences among LDCs in labour capitalization, capital employed, or service reliability.
Below, we examine the severe biases associated with using OM&A data unadjusted for differing
labour capitalization amounts or for the amount of capital embedded in an LDC's cost structure.
PEG does not have the current data to calculate LDC capital. Instead, PEG uses a capital proxy
which I find consistently overestimates LDC capital by a substantial amount but which varies
significantly across LDCs. Therefore, PEG’s cost shares derived using the capital proxy are also
likely flawed and are discussed below and in section 5. The issue of reliability is examined in
section 7.
I find the errors associated with using only OM&A to set IR parameters are substantial and likely
to reward inefficient LDCs and penalize efficient LDCs. Such IRs also create and foster the
wrong incentives: LDCs will first rearrange their reported costs (e.g., capitalize more); second,
cut reliability and service-related OM&A; and, third, substitute further capital for labour even if
it does not make purely economic sense to do so.
Cost Shares for Ontario Distributors
Exhibit 4.1 is taken from the 1999 Staff report and is based on the 1999 PBR filings. It presents
the 1993 cost shares by LDC size class. In the 4-factor analysis about 45 percent of an average
utility’s total cost is related to capital. Note that this analysis placed a market-based rate of
return on real distribution assets so it is comparable to current LDC cost shares with assets
earning a market return. Remaining cost shares average 29 percent for labour, 13 percent
for materials, and 13 percent for line losses. Medium-sized utilities tend to have a
slightly higher share for capital and slightly lower shares for labour and material than large and
small utilities.9
9 These shares are consistent with findings in other jurisdictions; see Grasto, K., 1997, "Incentive-Based Regulation of Electricity Monopolies in Norway," NVE working paper.
Exhibit 4.1: 1993 Average Cost Shares for Ontario Distributors

                      4 Factor                              3 Factor
          Capital  Line Loss  Labour  Materials   Capital  Labour  Materials
Large        45       12        30       13          51       34       14
Medium       49       12        28       12          55       31       13
Small        40       16        30       14          48       35       17
All          45       13        29       13          52       34       15

Source: 1999 Staff report.
In fact, the analysis of cost shares among Ontario LDCs over the 1988-1997 period found a very
substantial range of cost shares among utility inputs (see Exhibit 4.2). For example, looking at
all 10 years of data for a subset of 19 LDCs finds that the share of capital can range from a low
of 33.1 percent to a high of 63.2 percent. Associated with these data points are line loss cost
shares of 5.1 percent and 10.0 percent. Combined capital and line loss shares range from a low
of 38.2 percent to a high of 73.2 percent.
The share of labour ranged from a low of 18.8 percent to a high of 44.4 percent. Associated with
these data points are material cost shares of 8.0 percent and 17.4 percent. Combined labour and
materials shares range from a low of 26.8 percent to a high of 61.8 percent. Had the totality of
filed cost data been examined, even greater differences among LDCs may well have been found
in terms of their cost shares.
The April PEG benchmarking report notes, and I would agree, that shares do vary:
At current input prices, capital inputs typically account for between 45 and 60 percent of the total cost of local power delivery and constitute the single most important input group. The exact cost share of capital depends on the age of a system and the manner in which plant is valued. The relative shares of labour and other OM&A inputs vary greatly. Prices for labour, capital and other inputs are important drivers of power distribution cost. (p 29)
Exhibit 4.2: Range of Annual Cost Shares for Ontario Distributors, 1988 - 1997

           Capital  Line Losses  Combined   Labour  Materials  Combined
Minimum      33.1       5.1        38.2      18.8      8.0       26.8
Maximum      63.2      10.0        73.2      44.4     17.4       61.8

Source: Data examined in 1999 Staff report.
In the February PEG report, based on analysis of Ontario data over the 2002 - 2006 period, PEG
calculates a capital share of 63 percent. However, this calculation is based on the same capital
proxy rejected in the March PEG benchmarking report. Rather than taking 25, 30 or 40 years of
actual cost data to inflation-adjust the actual capital deployment profile of an LDC, the proxy
assumes a deployment profile for the NFA in 2002; this approach assumes that the deployment
profile is the same for every LDC. I have examined the extent of the biases of a number of
potential capital proxies, including that used by PEG. All the capital proxies examined create
substantial and widespread errors between LDCs' actual capital/capital costs and the estimates
generated by the proxies. The capital proxy comparative analyses are presented in section 5.
Labour Capitalization and Burdens
There are also administrative and accounting reasons why different utilities might show different
shares of labour or OM&A versus capital. Burden allocations can be expected to vary markedly
across utilities based on utility policy and practice differences.10 Furthermore, even within a
utility, burdens on capitalized labour can be markedly higher than the burdens put on labour
assigned to OM&A functions. Finally, the share of labour that LDCs choose to capitalize can
vary substantially.
The February PEG report notes:
Companies are inconsistent in their capitalization of OM&A expenses. A good example is the treatment of software maintenance expenses. Companies that outsource customer care tasks will report more of their IT costs as OM&A expenses. (p 34)
10 Along with the direct expenses associated with these various tasks, LDCs must decide administratively how to allocate overhead costs such as supervisory, engineering, or management expenses. Overhead costs are recovered by indirect cost burdens on the direct costs of labour.
Benchmarking of detailed customer care cost items can be especially problematic due to the cost allocation inconsistencies we have discussed. (p 35)
Of the three "high-priority" data improvements identified by PEG, two have to do with the
capitalization or reporting of labour.
Improvements in the data can make it possible to expand the role of benchmarking in Ontario regulation. Here is a suggested list of high-priority upgrades:
Tighten data reporting rules and enforcement so as to encourage more consistent allocations of labour costs between distributor functions.
Make public the share of net OM&A expenses attributable to labour, ideally with itemization with respect to the major distributor functions. (p41)
Finally, the February PEG IR report noted:
The Board has established itself in recent years as a leader in the gathering of data that are useful in power distribution cost benchmarking. Despite the progress made, the data have flaws that limit their usefulness in benchmarking. Improvements in the data gathering and collection process can lead to better benchmarking and an expanded role for benchmarking in regulation. The following reforms are especially worthwhile:
better guidelines for, and public reporting of, the share of salaries and wages in net OM&A expenses;
greater consistency in the assignment of labour costs to the major categories of distributor activities; (p iv)
Having noted the "problematic" and "inconsistent" treatment of labour capitalization policies
among Ontario LDCs and the "high-priority," "worthwhile" improvements needed in the
accounting and reporting of labour expenses, PEG, lacking the required capital data, benchmarks
Ontario distributors in the April report on OM&A only. No attempt appears to have been made
to analyse the biases associated with differing burden and capitalization policies, as was done in
the development of 1st Generation PBR during the winter of 1998-1999.
We know that some portion of the reported differences in labour, and thus OM&A, among LDCs
is due to differences among LDCs in burden rates and in capital deployment policies and
projects. Each utility must cost out the inputs it uses. Some of these work tasks, for example,
have to do with construction or equipment installation and would be considered capital activities.
Some work tasks have to do with, say, maintenance or billing, and would be considered OM&A.
Along with the direct expenses associated with these various tasks, LDCs must decide
administratively how to allocate overhead costs such as supervisory, engineering, or
management expenses. These overhead costs are substantial and can be on the same order of
magnitude as an LDC's total payroll cost. Different utilities consider different costs as overhead
and, within the same utility, may apply different overhead rates to labour applied to capital,
maintenance or billing-collecting.
1st Generation Burden Rates
An illustrative example from the 1999 PBR filing, in rough approximation: labour burdens
applied to billing-collecting were about 33 percent, while labour burdens applied to OM&A and
capital were about 90 percent. Burden rates applied to different cost categories ranged widely.
Each utility must also decide how much labour to capitalize, i.e., include in the rate base and pay
off over time, as opposed to including it in OM&A, which is paid for as you go each year.
1st Generation Capital Deployment and Labour
When the PBR data was filed in 1999, as part of Board staff’s consulting team for 1st Generation
PBR we undertook an examination of functional cost assignments among a small sample of
Ontario LDCs. We found that Ontario LDCs were operating under a wide range of burden and
labour capitalization rates. In fact, we found that the percent of total labour capitalized among
this small sample of Ontario LDCs ranged from about 15 to about 30 percent. Had we examined
a broader sample of LDCs’ policies with respect to labour capitalization, we may well have
found even greater differences among utilities in terms of their rates of capitalization and burden
assignments. In part, these results reflect differences among LDCs on capital deployment
policies, and in part, differences in the type and amount of capital related projects undertaken.
Biases in Reported OM&A Cost Shares Employed in the PEG IR Report
What bias might there be with just using OM&A with differing capitalization shares? Let’s say
that we have two LDCs with the same average cost of $500 per customer per year which was
about the average cost among large and medium distributors based on the 1999 PBR filing (see
Exhibit 4.3). Let’s also assume that each has the average share of labour observed in the PBR
filing, 29 percent: each would then have $145 of labour costs.
However, if one LDC capitalizes 30 percent of labour and the other capitalizes 15 percent we
would observe $123 in labour expenses in the latter, but only $102 in labour expenses in the
former, a 17 percent difference in “perceived labour costs.” Using the average share of
materials, there would be $167 in OM&A in the high capitalization LDC compared with $188 in
OM&A in the low capitalization utility: a difference of 11.2 percent due simply to differences in
the accounting of labour applied to capital.
Exhibit 4.3: Comparing Two Illustrative Utilities with the Same Costs but Differing Labour Capitalization Policies

Capitalization                Total Costs    Total Labour   Percent Labour   Labour Assigned   Reported OM&A
Policy                        per Customer   Costs @ 29%    Capitalized      to OM&A           Expenses
High Capitalization Utility      $500           $145             30             $102              $167
Low Capitalization Utility       $500           $145             15             $123              $188

Source: Ontario Energy Board, 1999 PBR filing and author calculations.
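The Exhibit 4.3 arithmetic can be reproduced directly. The snippet below uses the same assumptions as the text ($500 total cost per customer, 29 percent labour share, 13 percent materials share) and is my own sketch of the calculation:

```python
# Reproducing the Exhibit 4.3 arithmetic: same $500 total cost and cost
# shares, different labour capitalization rates. Shares are from the
# 1999 PBR filing averages cited in the text.

TOTAL_COST = 500.0       # total cost per customer per year
LABOUR_SHARE = 0.29      # average labour cost share
MATERIALS_SHARE = 0.13   # average materials cost share

def reported_oma(capitalization_rate: float) -> tuple[float, float]:
    """Return (labour expensed to OM&A, reported OM&A) given the share
    of labour the utility capitalizes instead of expensing."""
    labour = TOTAL_COST * LABOUR_SHARE                 # $145 true labour
    expensed = labour * (1.0 - capitalization_rate)
    return expensed, expensed + TOTAL_COST * MATERIALS_SHARE

high_cap = reported_oma(0.30)   # roughly ($102, $167), as in the exhibit
low_cap = reported_oma(0.15)    # roughly ($123, $188)
```

Identical utilities thus report OM&A figures differing by about 11 percent purely because of the accounting of labour applied to capital.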
Since each of the two LDCs has the same $500 per customer costs, benchmarking on total costs
per customer would rank the two utilities equally. With PEG's proposed benchmarks based only
on OM&A, errors due to accounting allocation differences will not be accounted for. As in the
case of labour capitalization practices, had we examined the actual accounting policies with
respect to overhead burdens and capitalization among a broader sample of Ontario LDCs, we
may well have found even greater perceived differences in OM&A that are simply due to
accounting allocations. The consequences of such benchmarking inconsistencies in PEG's
proposed approach undermine any confidence that differences in reported costs reflect
differences in underlying efficiency among the LDCs rather than differences in administrative
policies.
Recommendation:
Benchmarking for regulatory incentives/penalties should be done on a utility's total costs. Use of partial cost measures, whether OM&A or capital, suffers from the fact that some inputs are substitutes and LDCs combine them in different ways. Without a correct measure of capital to examine, OM&A costs can and do present biased pictures of LDC performance, since they reflect differences in approaches to labour burdens and capitalization. Even adjusting reported OM&A for allocation differences will still not produce a plausible efficiency result, since many combinations of capital and labour can be employed by equally efficient utilities.
Actual Ontario LDC Labour Capitalization Shares
Above, we examined the error implications of varying labour capitalization proportions,
assuming the difference among LDCs might range from 15 to 30 percent.11 We found that with
such a range OM&A could have an error of 11 percent while reported labour could have an error
of 17 percent.
But what is the range of reported labour capitalization rates and how much does this vary across
LDCs? Using data reported by electric distributors in the OEB’s 2000 PBR filings, we can
examine this question. Exhibit 4.4 reports labour capitalization rates for 17 Ontario LDCs
representing a nonrandom cross section of LDCs selected to reflect the diversity of operating
circumstances, e.g., size, age, growth, and location.
11 This was the range observed from a small sample of Ontario LDCs in the 1999 research for the first generation PBR for electricity distributors.
Exhibit 4.4: Capitalized Labour Shares for Selected LDCs

[Chart: percent of labour capitalized (y-axis, 0 to 60) for each of the 17 utilities (x-axis).]

Source: 2000 Annual PBR filing, Ontario Energy Board. Author calculations.
As we can see, the share of labour capitalized among these 17 LDCs ranges from below 10
percent (actually 0) to 50 percent. Thus we can conclude that the range spanning capitalized
labour shares is three times larger than that assumed in Exhibit 4.3. Recall that the assumed
range in Exhibit 4.3 produced an error of 11 percent of reported OM&A and 17 percent of
reported labour: with the findings from Exhibit 4.4, these errors could reach 33 percent of
OM&A and 51 percent of labour.
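Rather than simply scaling the Exhibit 4.3 figures by three, the error bounds for the observed 0 to 50 percent capitalization range can be computed directly. This sketch reuses the $500 cost and 29/13 percent labour/materials shares assumed in Exhibit 4.3; the direct results (about 50 percent of labour and 35 percent of OM&A) are broadly in line with the scaled 51 and 33 percent figures in the text:

```python
# Direct computation of reporting-error bounds for the 0-50 percent
# labour capitalization range observed in Exhibit 4.4, using the same
# hypothetical $500 cost and 29/13 percent shares as Exhibit 4.3.

labour = 500.0 * 0.29      # $145 of true labour cost per customer
materials = 500.0 * 0.13   # $65 of materials per customer

def expensed(cap_rate: float) -> float:
    """Labour left in OM&A after capitalizing cap_rate of it."""
    return labour * (1.0 - cap_rate)

low_oma = expensed(0.50) + materials    # heavy capitalizer: $137.50
high_oma = expensed(0.00) + materials   # expenses everything: $210.00

labour_error = (expensed(0.0) - expensed(0.5)) / expensed(0.0)  # 50 percent
oma_error = (high_oma - low_oma) / high_oma                     # ~35 percent
```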
Exhibit 4.5 presents data on the percent of capital additions from labour together with capitalized
labour shares. We see that the two variables track each other well: utility 1 has the lowest
capitalization rate among these six LDCs and the lowest share of capital additions from labour;
on the other hand, utility 6 has the highest labour capitalization rate (50 percent) and the highest
share of capital additions from labour (over 80 percent). Now let us see how varying
capitalization rates actually affect cost benchmarking.
Exhibit 4.5: Capitalized Labour Shares and Labour Shares in Capital Additions

[Chart: for utilities 1 to 6, percent of labour capitalized (♦, left axis, 0 to 60) and percent of capital additions from labour (■, right axis, 0 to 90).]

Source: 2000 Annual PBR filing, Ontario Energy Board. Author calculations.
Exhibit 4.6 ranks six LDCs on OM&A costs as reported in the 2000 PBR filing to the OEB.
Based on the information reported in column 1, we would judge utility 5, with $130 per
customer, to be the low-cost utility, with utility 2 at $146 second; utilities 3 at $160 and 4 at
$179 rank fourth and fifth; utility 1 is the highest at $206. LDC 5 has a 12 percent margin over
the next lowest distributor and LDC 1's cost is 17 percent higher than its nearest comparator.
However, recall that the data in column 1 are reported OM&A net of labour capitalized by each
LDC. Column 3 presents the amount of capitalized labour actually reported by each utility.
Exhibit 4.6: OM&A Costs with and without Capitalized Labour per Customer

Utility   OM&A per Customer   OM&A + Capitalized Labour   Capitalized Labour per Customer
1              206 (6th)               220                         14 (2nd)
2              146 (2nd)               186                         39 (4th)
3              160 (4th)               219                         59 (6th)
4              179 (5th)               192                         12 (1st)
5              130 (1st)               182                         51 (5th)
6              154 (3rd)               186                         32 (3rd)

Source: 2000 Annual PBR filing, Ontario Energy Board. Author calculations.
First, note that the capitalized labour data reported in column 3 is generally inversely related to
the OM&A data reported in column 1: LDC 5 has the lowest OM&A cost but ranks second
highest in capitalized labour in column 3; utility 1 ranks highest in OM&A cost but second
lowest in capitalized labour. Second, the addition of capitalized labour costs (column 3) to
reported OM&A costs (column 1) dramatically alters our prior conclusions: LDC 1's cost is now
almost identical to LDC 3's, less than 0.05 percent different; LDC 5's cost is now only 2
percent lower than LDC 6's, not the 12 percent calculated above; and LDC 2's cost is not 23 percent
lower than LDC 4, but rather only 3 percent. Such differences are hardly enough to base any
conclusive efficiency comparison on. Third, note that the ratio of capitalized labour to OM&A
ranges from a low of 7 percent (LDC 1 and 4) to highs of 37 percent (LDC 3) and 39 percent
(LDC 5).
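The re-ranking can be checked directly with the Exhibit 4.6 figures. This is my own sketch; the computed totals differ from the exhibit's middle column by up to a dollar, presumably because the published columns were rounded separately:

```python
# Re-ranking the six Exhibit 4.6 utilities once capitalized labour is
# added back to reported OM&A (all figures in dollars per customer).

oma = {1: 206, 2: 146, 3: 160, 4: 179, 5: 130, 6: 154}
cap_labour = {1: 14, 2: 39, 3: 59, 4: 12, 5: 51, 6: 32}

total = {u: oma[u] + cap_labour[u] for u in oma}

def ranking(costs: dict) -> list:
    """Utilities ordered from lowest to highest cost."""
    return sorted(costs, key=costs.get)

oma_rank = ranking(oma)      # [5, 2, 6, 3, 4, 1] on reported OM&A alone
total_rank = ranking(total)  # [5, 2, 6, 4, 3, 1]: 3 and 4 swap, and the
                             # margins between utilities collapse
```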
Ranking Errors in Comparing Ontario LDCs on OM&A rather than Total Cost
So far we have examined the biases involved with OM&A benchmarking. But how different are
the rankings for individual LDCs? Exhibit 4.7 below compares rankings for a set of 24 of the 48
LDCs used in the 1st Generation staff report. For each of these LDCs we have their OM&A,
total costs, and their respective ranking across the 48 firms. As we can see, the rankings are
markedly different. Utility 1, which ranks 3rd on OM&A, ranks 43rd on total costs, and many
other utilities are similarly positioned: low ranks on OM&A but high ranks on total costs. Others
are just the opposite: utility 18 ranks 37th on OM&A and 3rd on total costs, with many other
utilities like it. What we see in Exhibit 4.7, with actual LDC costs and cost rankings, is the
perverse effect of rewarding low OM&A/high total costs and penalizing high OM&A/low total
costs when we benchmark on partial costs.
Exhibit 4.7: Comparing LDC Rankings on OM&A vs. Total Costs

Utility   OM&A Ranking   Total Cost Ranking   Difference in Rankings   Percent Difference in Ranking
1              3                43                    -40                       -0.83
2              7                30                    -23                       -0.48
3              8                24                    -16                       -0.33
4             10                35                    -25                       -0.52
5             11                33                    -22                       -0.46
6             12                39                    -27                       -0.56
7             15                45                    -30                       -0.63
8             18                11                      7                        0.15
9             20                 6                     14                        0.29
10            21                 7                     14                        0.29
11            22                10                     12                        0.25
12            24                41                    -17                       -0.35
13            25                42                    -17                       -0.35
14            28                46                    -18                       -0.38
15            31                47                    -16                       -0.33
16            31                47                    -16                       -0.33
17            33                 9                     24                        0.50
18            37                 3                     34                        0.71
19            38                18                     20                        0.42
20            42                23                     19                        0.40
21            45                14                     31                        0.65
22            46                21                     25                        0.52
23            47                25                     22                        0.46

Source: Ontario Energy Board, 1999 PBR filing and author calculations.
Conclusion:
OM&A benchmarking is inherently flawed for efficiency comparison. It fails to recognize the integrated nature of utility operations and that LDCs can and do make management decisions regarding the appropriate allocation of their budgets between O&M and capital. Based on a sample of 2000 data filed with the OEB we find: (1) that the share of labour capitalized ranges from less than 10 percent to 50 percent; (2) the resulting capitalized labour represents as much as 39 percent of reported OM&A and as little as 7 percent; (3) the amount of capitalized labour per customer reported to the Board is inversely related to the amount of OM&A per customer reported to the Board; and, (4) finally, conclusions about cost rankings and comparisons change when the capitalized labour per customer is added to the amount of OM&A per customer.
5.0 Biases in Benchmarking with Proxy Measures of Capital Stock
The current IR and benchmarking analyses undertaken by PEG do not include properly
calculated capital costs. During the Technical Conference, PEG discussed several potential
proxies as substitutes for a capital quantity index.12 These proxies would be based on the
total of five years of data currently available to PEG for the cost comparison exercise, i.e., 2002-
2006.
One such proxy cited by PEG at the Technical Conference is what was termed the 'Mini
Inventory Model' capital proxy approach: the normal approach to capital calculation, but using
one year of data to back-cast an assumed 30 or 40 years of history. This approach was examined
by PEG in the March revised Cost Comparison report and was ultimately rejected for use in the
March benchmarking report.
12 Presumably because PEG lacks data to provide a measure of the cost of capital, the quantity of capital, and the price of capital, PEG specifies its model as a short-run cost function and employs several expedient short cuts to cover critical gaps in its data. To be properly specified, a short-run cost function should have properly defined variable costs on the left-hand side of the model, and the quantity of the "fixed" input on the right-hand side, together with all other input prices. In sum, PEG's specified short-run cost function is seriously compromised. Moreover, even if the specification and data issues could be overcome, the specification employed by PEG assumes that capital is fixed, i.e., that it does not respond to changes in such important determinants as price. But research on Ontario LDCs indicates that capital does respond to altered circumstances, e.g., price or regulatory changes, and does so, at least partially, within a 3 - 5 year period. As such, other specifications need to be examined, such as a dynamic or long-run cost function.
In the analysis below, the ‘Mini Inventory Model’ is found to have widespread and significant
errors across LDCs. When we compare the cost per customer based on the ‘Mini Inventory
Model’ to the cost per customer for real capital stock (both indexed to the mean across the
LDCs), we find this proxy has a mean absolute error of 18.22 percent and extreme errors of -89
and 88 percent. Fifty-six percent of the LDCs had errors greater than 10 percent; nearly a fifth of
the LDCs had errors between 25 percent and 50 percent.
Unfortunately, while the March benchmarking report rejects the ‘Mini Inventory Model’ and
does not use the proper measure of LDC capital, the February PEG IR report relies on this
approach to calculate Ontario LDC TFP for 2002-2006. The use of such error-ridden
capital estimates completely undermines the Ontario TFP analysis.
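Why error-ridden capital estimates propagate into TFP can be illustrated with a simple share-weighted productivity index (an illustrative sketch only; the cost shares and growth rates below are hypothetical, and PEG's actual TFP calculation is more elaborate):

```python
# Minimal sketch of why mismeasured capital distorts TFP growth.
# TFP growth here is a simple Tornqvist-style index: log output
# growth minus cost-share-weighted log input growth. All figures
# are hypothetical.
import math

def tfp_growth(out_growth, input_growths, cost_shares):
    """Log output growth minus share-weighted log input growth."""
    input_index = sum(s * g for s, g in zip(cost_shares, input_growths))
    return out_growth - input_index

shares = [0.45, 0.35, 0.20]          # capital, labour, materials (assumed)
output_g = math.log(1.02)            # output grew 2%

true_capital_g = math.log(1.01)      # capital actually grew 1%
proxy_capital_g = math.log(1.05)     # error-ridden proxy says 5%

labour_g = math.log(1.00)
materials_g = math.log(1.01)

tfp_true = tfp_growth(output_g, [true_capital_g, labour_g, materials_g], shares)
tfp_proxy = tfp_growth(output_g, [proxy_capital_g, labour_g, materials_g], shares)

print(round(100 * tfp_true, 2), round(100 * tfp_proxy, 2))
```

With capital carrying a large cost share, even a modest error in measured capital growth can flip the sign of estimated TFP growth, which is why the quality of the capital quantity series is decisive.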
For the analysis of potential biases, I have added several potential proxies, including one (the
Budget, or Capital Additions, proxy) recommended by a stakeholder participating in the Cost
Comparison consultation process. These proxies are:
1 Gross book value (GBV), the value of an LDC’s plant;
2 Net book value (NBV), gross book value minus accumulated depreciation;
3 Capital additions, the cumulative sum of capital additions for 5 years;
4 “Mini Inventory” method, the real capital method but using only 5 years of data and assuming a fixed deployment profile for all LDCs which is then applied to NFA;
5 Budget data, or the lagged value of capital additions;
6 Capital expenditures (Capex), depreciation and capital costs; and
7 Customer additions, the cumulative sum of customer additions over 5 years.
The analysis below suggests that there is no quick and convenient capital proxy that can be
reasonably used to substitute for the use of real capital stock in TFP determination.
Analysis of Potential Proxies
During the development of 1st Generation PBR, the Board collected decades of data on GBV,
NFA, depreciation, capital additions and retirements from scores of electricity
distributors. This information was used to calculate the price of capital and each utility’s
quantity of capital (i.e., real capital stock) as inputs into the construction of the IPI and each
utility’s TFP. Data from 48 of these utilities were used to ascertain the extent of error for each
proxy: the calculated results for each proxy were compared with the previously calculated real
capital stock to determine whether any proxy is an accurate enough substitute.
Exhibit 5.1: Capital Proxies Compared with Real Capital Stock

Percentage         Gross    Net      Real K   Mini       (Budget)     Capex,   5 Year
Difference         Book     Book     Adds     Inventory  Nominal Cap  Nominal  Cust
(+/-)              Value    Value    (5 yrs)  ($2006)    Add t-1      1997     Adds
                                                         (1996)
<10                45.83%   41.67%   31.25%   43.75%     20.83%       39.58%    2.08%
≥10 to <25         31.25%   39.58%   27.08%   31.25%     18.75%       37.50%   14.58%
≥25 to <50         18.75%   14.58%   29.17%   18.75%     37.50%       18.75%   12.50%
≥50 to <75          2.08%    4.17%   10.42%    2.08%      8.33%        2.08%   16.67%
≥75 to <100         2.08%    -        2.08%    4.17%      8.33%        2.08%   43.75%
≥100                -        -        -        -          6.25%        -       10.42%
Max. Error         80.49%   70.88%   76.57%   88.31%    192.38%       82.56%  469.30%
Min. Error        -30.22%  -29.11%  -71.56%  -88.98%    -97.42%      -33.49%  -99.75%
Absolute Average
Error              14.95%   14.84%   26.44%   18.22%     39.26%       16.95%   82.89%

Source: Ontario Energy Board, 1999 PBR filing. Author calculations.
Exhibit 5.1 presents results of the capital proxy analysis. For each proxy, each LDC’s calculated
proxy cost per customer was indexed to the mean of that proxy across all 48 LDCs. The same
index value was calculated for real capital per customer and the difference between the two
index values was then calculated. Information on the error distribution including the mean
absolute, minimum, and maximum errors is presented for each proxy.
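The indexing-and-differencing procedure just described can be sketched as follows (the per-customer figures below are hypothetical, for five illustrative LDCs; the actual analysis used the 48-utility sample):

```python
# Sketch of the error calculation described above: index each LDC's
# proxy cost per customer to the sample mean, do the same for real
# capital per customer, and take the difference. Values hypothetical.

def index_to_mean(values):
    mean = sum(values) / len(values)
    return [v / mean for v in values]

def proxy_errors(proxy_per_cust, real_k_per_cust):
    """Percentage differences between the two indexed series."""
    p_idx = index_to_mean(proxy_per_cust)
    r_idx = index_to_mean(real_k_per_cust)
    return [100.0 * (p - r) / r for p, r in zip(p_idx, r_idx)]

# Hypothetical per-customer figures for five LDCs.
real_k = [900.0, 1100.0, 1000.0, 1300.0, 700.0]
gbv_proxy = [950.0, 1000.0, 1150.0, 1200.0, 800.0]

errors = proxy_errors(gbv_proxy, real_k)
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
share_over_10 = sum(1 for e in errors if abs(e) > 10) / len(errors)
print([round(e, 1) for e in errors], round(mean_abs_error, 1), share_over_10)
```

The summary statistics in Exhibit 5.1 (mean absolute error, extremes, and the share of LDCs in each error band) are simply this calculation applied to each proxy in turn.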
GBV was found to have a mean absolute error of 14.95 percent (i.e., the mean error irrespective
of sign) and extreme errors of -30.22 and 80.49 percent. Fifty-four percent of the LDCs had
errors greater than 10 percent; nearly a fifth of the LDCs had errors between 25 percent and 50
percent.
NBV also provides a poor approximation of real capital stock: it has a mean absolute error of
14.84 percent and extreme errors of -29.11 and 70.88 percent. Fifty-eight percent had errors
greater than 10 percent; nearly a sixth had errors between 25 percent and 50 percent.
Several scenarios were modeled using Capital Additions (what is currently available in the
Board’s Cost Comparison data). First, five years of data on additions were examined.
Cumulative capital additions were found to have a mean absolute error of 26.44 percent (i.e., the
mean error irrespective of sign) and extreme errors of -71.56 and 76.57 percent. Sixty-nine
percent of the LDCs had errors greater than 10 percent; nearly a third of the LDCs had errors
between 25 percent and 50 percent. Second, the “budget” approach was analyzed by using the
last year of reported capital additions. This budget proxy was found to have a mean
absolute error of 39.26 percent and extreme errors of -97.42 and 192.38 percent. Seventy-nine
percent of the LDCs had errors greater than 10 percent; nearly 2 in 5 of the LDCs had errors
between 25 percent and 50 percent.
The ‘Mini Inventory Model’ approach also resulted in significant error. When the cost per
customer based on the ‘Mini Inventory Model’ was compared to the cost per customer for real
capital stock (both indexed to the mean across the LDCs), this proxy was found to have a mean
absolute error of 18.22 percent and extreme errors of -89 and 88 percent. Fifty-six percent of the
LDCs had errors greater than 10 percent; nearly a fifth of the LDCs had errors between 25
percent and 50 percent. As noted earlier, while this method was employed in the February PEG
IR report, it was rejected for use in the March PEG report. An approach with such substantial
and widespread errors should not serve as the foundation for the Board’s TFP decision.
The efficacy of Capex was assessed by using depreciation and capital costs and comparing the
ranking of Capex to real capital stock. This proxy was found to have a mean absolute error of
16.95 percent and extreme errors of -33.49 and 82.56 percent. Sixty percent of the LDCs had
errors greater than 10 percent; nearly a fifth of the LDCs had errors between 25 percent and 50
percent.
Finally, the efficacy of using customer additions as a proxy for real capital stock was examined.
This proxy was found to have a mean absolute error of 82.89 percent and extreme errors of
-99.75 and 469.3 percent. Ninety-eight percent of the LDCs had errors greater than 10 percent;
over 40 percent of the LDCs had errors between 75 percent and 100 percent.
By comparing the results for each proxy variable to the real capital stock computed
during the 1st Generation PBR research, we estimated the extent of error for each proxy for our
sample of LDCs. None of the seven proxies examined proved to be an acceptable substitute for
the real capital input.
Physical Measures of Capital
Physical counts of capital, such as transformer (or substation) capacity or line miles of conductor,
have been used in some jurisdictions as a substitute for value-based capital measures (PEG
already uses such counts as a business condition variable on the output side of the cost function).
We did not model physical counts of capital at this time. However, previous work has shown that
including physical counts of capital biases calculated efficiency estimates.13
This issue has also been discussed by others. For example, see Dr. Denis Lawrence’s Report to
Energy Australia on London Economics Efficiency and Benchmarking Study on the New South
Wales (NSW) Distribution Business, March 1999. Dr. Lawrence states:
Of more fundamental concern, however, is the attempt to measure capital input simply by the route kilometres of lines and MVA of transformer capacity. The measure of capital inputs should take account not only of quality differences between capital inputs but also capture the amount of resources which have to be expended to construct the capital input. Particularly in the case of lines, simply adding kilometres of lines together is inappropriate. It fails to recognise the inherent differences between central business district, suburban and rural situations…Treating all kilometres of line as being identical is akin to measuring
13 See F.J. Cronin & S.A. Motluk, “Flawed Competition Policies: Designing ‘Markets’ with Biased Costs and Efficiency Benchmarks,” Review of Industrial Organization, Vol. 31, No. 1, August 2007.
aircraft inputs by the number of miles flown. If one of those kilometres is flown by a Boeing 747 while another is flown by a Cessna, the inappropriateness of the assumption is apparent.
6.0 Benchmarking LDC Performance with Partial Costs (O&M only) and/or Incorrect (Proxy) Capital Measures
The February PEG IR report uses LDC inefficiency penalties based on the analysis presented in
its June benchmarking report.14 The June PEG benchmarking report employs only reported
OM&A data to gauge individual LDC (in)efficiency based on cost comparisons with other
LDCs.
Further, the February PEG IR report uses a capital proxy, the ‘Mini Inventory Model’, to
calculate the Ontario LDCs’ TFP for 2002-2006. As noted in section 5, the March PEG
benchmarking report rejects this capital proxy employed in the February IR report. Section 5
examined these issues sequentially and separately, but not their combined effect in a modeled
benchmarking application.
In this section I look at a benchmarking analysis that compared benchmarking on total costs to
other short-cut approaches, such as using OM&A only, using capital proxies, or using physical
measures of capital. The extent of error on individual LDC rankings resulting from such flawed
methodologies is examined. This analysis finds that the errors associated with such short-cut IR
approaches are substantial and likely reward inefficient LDCs and penalize efficient LDCs.
Among the alternative specifications, we find substantial divergence from the base case
efficiency scores for many individual utilities, e.g., in excess of 10, 20, 30 or more percent.
Examining Variations in Utility Efficiency Rankings
A recent paper examines the impacts on utility efficiency rankings from variations in peer group
regulation in Europe and Australia as well as in the U.S.15 I examine technical, allocative,
and total efficiency variations among firms resulting from the different cost specifications
14 PEG indicates these will be revised based on the March revision of the benchmark report.
15 Cronin, F. and S. Motluk, “Flawed Competition Policies: Designing ‘Markets’ with Biased Costs and Efficiency Benchmarks.”
employed by regulators involving output, factor inputs, and costs.16 How are rankings impacted
when only subsets of total costs (e.g., OM&A, not capital or system losses) are used to gauge
efficiency?17 Does the use of partial measures of capital relying on physical specifications
impact efficiency rankings? Are rankings affected when comparisons are made independently
one input at a time? Is the efficiency frontier stable? Finally, we compare alternative yardstick
measures to a simple ranking on relative (total) cost per unit.
Regulatory Applications of Peer-based Benchmarking
As part of electricity sector restructuring over the past decade, a number of regulators have
employed production frontier techniques like data envelopment analysis (DEA) or other peer-
based techniques to establish externally fixed performance benchmarks for distribution utilities.
Such benchmarking in NSW, the United Kingdom and the Netherlands, among other
jurisdictions, has “uncovered” wide divergences in efficiency among individual firms. In some
cases, “laggard” firms have been assigned substantial targets for improved productivity, i.e., their
rates must decrease each year by the external benchmark established by the regulator.
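The mechanics of such an externally imposed target follow the standard price-cap formula, with rates escalating at inflation (I) minus the productivity/stretch target (X). A minimal sketch, with hypothetical I and X values rather than any regulator's actual parameters:

```python
# Price-cap indexing sketch: rates move with inflation (I) minus a
# productivity/stretch target (X). A "laggard" firm assigned a large
# X sees its rates fall each year. Numbers are hypothetical.

def cap_path(start_rate, inflation, x_factor, years):
    rates = [start_rate]
    for _ in range(years):
        rates.append(rates[-1] * (1 + inflation - x_factor))
    return rates

path = cap_path(100.0, inflation=0.02, x_factor=0.04, years=3)
print([round(r, 2) for r in path])  # declines 2% per year nominally
```

When the X-factor is set from a biased benchmarking exercise, this compounding makes even a modest mis-assignment material over a multi-year plan.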
16 Our interest is in examining the extent of potential biases from peer-group benchmarking with incomplete specifications and inadequate data. Some research on utility benchmarking has included environmental characteristics to control for differences in operating circumstances. Since we are comparing the results from alternative economic models and data to the results from our preferred Base Case, i.e., our interest is in the difference between the Base Case and each alternative, not the absolute ranking, the absence of environmental characteristics in each instance will net out. In addition, there is no standard practice on how environmental characteristics should be specified within DEA models; researchers have employed at least four different approaches, and these alternative approaches produce different rankings within DEA models.
17 In order to test whether age of assets or service area density has a significant effect, these variables were incorporated into the DEA specification. Area density (the number of customers per square mile of service territory) and relative age of infrastructure were added to the base case specification separately and together. Results show a highly stable comparison to the base case. In each of the alternative specifications, the average efficiency scores were quite similar to the base case, with TE going to .896 from .871, AE going to .698 from .704, and EE going to .627 from .614. In each of the alternative specifications, all utilities that defined the frontier remained on the frontier. In addition, the bottom 7 LDCs (37 percent) remained in the same exact ranking with nearly identical scores. One larger utility did show some improved performance with the introduction of density; however, although it moved up several places in the ranking, it was still far from the frontier and did not cause a substantive change in the overall results.
DEA Applications of Benchmarking
A number of studies have used DEA to estimate the relative efficiency of electricity distribution
systems (Weyman-Jones, 199118 and 1992,19 Førsund and Kittelsen, 1998,20 and Kumbhakar and
Hjalmarsson, 199821). Such yardstick approaches have also been employed by regulators in the
design of regulatory mechanisms. The NVE (Grasto, 1997),22 the Dutch regulator (DTe) (DTe,
2000)23 and the NSW Australia regulator (IPART, 1999)24 have all employed DEA to benchmark
electricity distribution utilities and establish parameters of alternative regulatory frameworks.
The California Public Utility Commission relied upon a DEA benchmarking to evaluate Pacific
Gas and Electric’s (PG&E) efficiency. The U.K. regulator OFFER employed less formal cost
comparisons and limited regression analysis to rank LDCs in its price reviews. These studies and
regulatory applications suffer from serious shortcomings.
First, these studies often employ model specifications that make interpretation of results difficult.
For example, applications often ignore capital and line losses and rely on measures of operating
cost representing less than half of a utility’s total costs, employ physical measures of capital
(e.g., number of transformers, line miles), and at times, even define output to include what most
researchers would consider inputs (e.g., line miles, transformers). Some DEA/Malmquist
analyses have produced implausibly large estimates of productivity changes by distribution
utilities, e.g., as much as +/- 20 to 30 percent per year (IPART/London Economics, 1999).25
18 Weyman-Jones, 1991, “Productive Efficiency in a Regulated Industry: The Area Electricity Boards of England and Wales,” Energy Economics, April: 116-122.
19 Weyman-Jones, 1992, “Problems of Yardstick Regulation in Electricity Distribution,” in Bishop, M. et al., editors, Privatisation and Regulation II, Oxford University Press.
20 Førsund, F.R., and S. Kittelsen, 1998, “Productivity Development of Norwegian Electricity Distribution Utilities,” Resource and Energy Economics 20: 207-224.
21 Kumbhakar, S.C., and L. Hjalmarsson, 1998, “Relative Performance of Public and Private Ownership under Yardstick Regulation: Swedish Electricity Retail Distribution 1970-1990,” European Economic Review 42 (1): 97-122.
22 Grasto, K., 1997, “Incentive-Based Regulation of Electricity Monopolies in Norway,” NVE working paper.
23 DTe, February 2000, “Choice of Model and Availability of Data for the Efficiency Analysis of Dutch Network and Supply Businesses in the Electricity Sector,” accompanying “Guidelines for Price Cap Regulation in the Dutch Electricity Sector,” prepared for DTe by Frontier Economics.
24 IPART, February 1999, Technical Annex – Efficiency and Benchmarking Study of the NSW Distribution Businesses, prepared for IPART by London Economics.
25 IPART, February 1999, Technical Annex – Efficiency and Benchmarking Study of the NSW Distribution Businesses, prepared for IPART by London Economics.
Second, in some cases, utilities are compared sequentially one input at a time. Yet, it is clear that
input choices are interrelated, just as are utility operations.
Third, the failure to calculate factor input prices restricts the research to examining technical
efficiency. The potentially critical issue of optimal input selection (i.e. allocative efficiency) is
unexplored. Yet, earlier research (Fare, et al., 1985)26 concluded that allocative (in)efficiency is
especially important for regulated utilities facing non-market price signals.
Finally, little research has examined the question of benchmarking stability: over time does the
set of “efficient” firms exhibit stability?
Analysis and Results
In the analysis I employ Ontario electric distributors’ 1997 data. The comparison (base case)
uses output measures of customer connections and kWh and four inputs representing capital,
labour, system losses and material, which comprehensively span utility costs.27 Our alternative
cases are grouped into two sets. The first set varies output (e.g., customers only, kWh only) with
inputs specified as in the base case; the second varies input specifications with output defined as
in the base case. These variations include inputs defined as (1) base case minus system losses,
(2) capital and system losses only, (3) OM&A only, and (4) OM&A with physical counts of
capital.
Summary of Results
Using alternative production specifications employed in recent regulatory applications, we find
mean total efficiency ranges from 58.2 percent to 74.6 percent. Frontier firms and their influence
on the global frontier are found to vary substantially between the base and alternative cases for
both technical and allocative efficiency. Correspondingly, among the alternative specifications,
we find substantial divergence from the base case efficiency scores for many individual utilities,
often exceeding 10, 20, 30 or more percent. Such differences may not be surprising since cost
26 Fare, R., S. Grosskopf, and J. Logan, 1985, “The Relative Performance of Publicly-Owned and Privately-Owned Electric Utilities,” Journal of Public Economics 26: 89-106.
27 Cronin, F. and S. Motluk, Flawed Competition Policies: Designing ‘Markets’ with Biased Costs and Efficiency Benchmarks.
comparisons by regulators are often based on total costs. Similar to Fare, et al. (1985),28 we find
the vast majority of this inefficiency is due to factor mix (i.e., allocative) and a small minority to
less efficient operations (i.e., technical). This may be due to shadow prices varying significantly
from market prices over long periods and institutional incentives favoring internal over external
funds. The use of a simple relative ranking on total cost per customer produces “scores” that are
closer to the base case, and absent the extreme deviations found for individual firms in
alternative DEA specifications employed by some regulators.
In addition, unlike earlier research that found benchmarking highly unstable with firms cycling
on and off the frontier (Weyman-Jones, 1992),29 we find that over a ten-year period “efficiency”
in the base case (i.e., benchmark on total costs) is defined by a stable set of firms. Attributing
cause to the stability is somewhat subjective, but the use of a comprehensive cost benchmark
likely contributes significantly to this stability. Eighteen of nineteen firms have one or more
peers that were their peers in 1988. Eleven of the eighteen firms have as their 1997 peers only
firms that were their peers in 1988. In 1997, seven firms have 1988 peers as well as some new
1997 peers. It is important to note that even for these latter seven, their new peers in 1997 were
also frontier firms in 1988, but for other sets of peer firms.30 Only one frontier firm has a peer in
1997 that was not a frontier firm in 1988. And, the only new frontier firm in 1997 has a capital
share that increased from about 13 percent higher than the average to about 50 percent higher.
Description of Results
Below I present the analysis of total (in)efficiency results for the base case and seven alternative
specifications. Firms on the efficiency frontier are assigned a score of 1.00; firms not on the
frontier are assigned a score of less than 1.00 based on the percentage reduction in total inputs
that could be made while holding output constant.
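For intuition about these scores, consider the simplest DEA special case: constant returns to scale with a single input and a single output, where input-oriented technical efficiency reduces to each firm's output/input ratio relative to the best ratio in the sample. This is a deliberately minimal sketch with hypothetical data; the actual analysis uses multiple inputs and outputs and requires a linear-programming solver:

```python
# Minimal DEA intuition: with one input, one output and constant
# returns to scale, input-oriented technical efficiency is each
# firm's output/input ratio divided by the best ratio observed.
# Frontier firms score 1.00; a score of 0.80 means the firm could
# serve the same output with 80% of its input. Data hypothetical.

def dea_crs_single(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical LDCs: total cost (input) and customers served (output).
cost = [50.0, 80.0, 60.0, 100.0]
customers = [25.0, 32.0, 30.0, 35.0]

scores = dea_crs_single(cost, customers)
print([round(s, 3) for s in scores])  # → [1.0, 0.8, 1.0, 0.7]
```

The sensitivity results below follow directly from this structure: change what counts as the input or the output, and both the frontier and every firm's distance to it can change.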
In the base case, we find a mean technical efficiency of 87.1 percent. Alternative output
28 Fare, R., S. Grosskopf, and J. Logan, 1985, “The Relative Performance of Publicly-Owned and Privately-Owned Electric Utilities,” Journal of Public Economics 26: 89-106.
29 Weyman-Jones, 1992, “Problems of Yardstick Regulation in Electricity Distribution,” in Bishop, M. et al., editors, Privatisation and Regulation II, Oxford University Press.
30 The new frontier firms for these 7 were frontier firms in previous periods but for other firms.
definitions result in individual scores that are found to differ by 10, 20, 30 or even 40
points from those of the base case. Among alternative output specifications, a customers-only
case and a kWh-only case were run. While the customers-only case has two firms with
differences of between 10 and 20 points from the base case, the kWh-only case shows differences
of more than 10 points for nine firms. Alternative input specifications result in mean efficiency
scores ranging from 72.37 to 87.8, with individual efficiency scores deviating substantially from
the base case. With only OM&A costs as the benchmark, we find four firms (21 percent) with
deviations of more than 20 percent. Finally, in the NK (i.e., physical counts of capital) and
OM&A case, more than half the sample scores deviate by more than 10 points.
Allocative efficiency in the base case averages 70.4. This result is consistent with Fare, et al.
(1985),31 who also find allocative inefficiency to be more than twice as large as technical
inefficiency. Alternative output definitions result in average efficiency scores of 65 to 70 percent.
Generally, firms are found to have similar scores on the alternative output definitions relative to
the base case. However, individual scores in some cases are found to differ by 10 and even 20
points from those of the base case. One firm has a score that is 20.5 less than its score in the
base case. Alternative input specifications result in efficiency scores ranging from 75.6 percent
to 91.7 percent. Note that in the OM&A case, we are assessing factor input selections for only
two inputs (i.e., labour and materials). In the NK with OM&A case, we find a high degree of
divergence from the base case: nine firms have deviations of more than 10 points, many greater
than 30.
Exhibit 6.1 presents results on total efficiency combining the technical and allocative scores. In
the base case we find an average total efficiency of 61.4. Among alternative specifications, total
efficiency ranges from an average of 58.2 to 74.6 with corresponding deviations for individual
firms. The alternative specifications have differences ranging up to 25, 40 and even 100 percent
from the base case for individual LDCs. Furthermore, the correlation between technical and
allocative efficiency is quite weak.
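The three scores are linked multiplicatively: total (economic) efficiency is the product of technical and allocative efficiency. A quick check against the base-case means reported here:

```python
# Total efficiency decomposes as EE = TE x AE. Checking the base-case
# means reported in the text (TE = 87.1%, AE = 70.4%, EE = 61.4%):
te = 0.871   # mean technical efficiency
ae = 0.704   # mean allocative efficiency
ee = te * ae
print(round(ee, 3))  # 0.613, close to the reported 0.614
```

The small gap (0.613 versus 0.614) is expected, since the mean of a product across firms is not exactly the product of the means.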
31 Fare, R., S. Grosskopf, and J. Logan, 1985, “The Relative Performance of Publicly-Owned and Privately-Owned Electric Utilities,” Journal of Public Economics 26: 89-106.
Therefore, regulators cannot assume that unduly large penalties for “reported” technical
inefficiency can be justified on the alternative basis that these same utilities have an
inefficient factor mix (i.e., allocative inefficiency).
Exhibit 6.1: Base Case and Alternative Regulatory Benchmarking Results for Total Efficiency

Firm   Relative Cost   Base    Base Case   CK, SL   O&M     NK, O&M
       Efficiency      Case    no SL
 1     0.343           0.360   0.360       0.390    0.284   0.283
 2     0.500           0.448   0.447       0.373    0.859   0.862
 3     0.629           0.641   0.641       0.616    0.701   0.697
 4     1.000           1.000   1.000       1.000    0.978   0.976
 5     0.669           0.625   0.624       0.558    0.856   0.857
 6     0.880           1.000   1.000       1.000    0.951   0.952
 7     0.495           0.446   0.445       0.369    0.877   0.873
 8     0.466           0.457   0.456       0.380    1.000   1.000
 9     0.453           0.469   0.468       0.446    0.532   0.528
10     0.735           0.865   0.864       0.868    0.822   0.822
11     0.651           0.711   0.710       0.672    0.824   0.831
12     0.719           0.635   0.634       0.658    0.553   0.550
13     0.591           0.546   0.545       0.527    0.574   -
14     0.523           0.396   0.395       0.342    0.656   0.653
15     0.639           0.598   0.598       0.584    0.609   0.604
16     0.758           0.662   0.661       0.598    0.862   -
17     0.672           0.643   0.643       0.641    0.620   0.618
18     0.592           0.567   0.566       0.531    0.672   0.670
19     0.758           0.588   0.586       0.505    0.908   0.906
Mean   0.588           0.614   0.613       0.582    0.744   0.746

Source: Cronin and Motluk, Flawed Competition Policies: Designing ‘Markets’ with Biased Costs and
Efficiency Benchmarks.
7.0 Appropriate Quality/Reliability Standards must be assessed and ultimately Service Quality Regulation must be integrated into the IR
The Board’s Decision in the RP-1999-0034 case was to establish a minimum floor for reliability.
As stated by the Board in that decision, “…the Board favours the minimum standards proposed
in the draft Rate Handbook for first generation PBR. The Board notes that these standards
represent the minimum acceptable performance level.” 32 It appears that on average, Ontario
LDCs have been experiencing a deterioration of reliability over the 2000 to 2006 period;
furthermore, it appears that some are not compliant with their performance standard established
in 2000. Unfortunately, the Board appears not to have examined LDCs’ reliability performance
against the mandated compliance standards set in 2000.
Using a medium-length time series of data filed by the distributors with the OEB, on-going research
for the PWU finds that among Ontario LDCs, O&M expenditures are statistically related to the
LDC’s reliability performance.33 Furthermore, lower reliability is found to cause an LDC to
raise its budget. These relationships should be keenly appreciated by regulators, especially when
transitioning from COS to IR.
Economic theory suggests that firms under IR will reduce costs (especially near term costs like
O&M) for enhanced profits. Prior empirical work on US electric distributors has confirmed such
a link: LDCs under IR without SQ standards made widespread and substantial reductions in
O&M. These reductions in O&M were statistically associated with lower levels of reliability.34
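The kind of cross-sectional test behind such findings can be sketched with a simple least-squares regression of a reliability measure on O&M spending (all data below are hypothetical and constructed purely to illustrate the mechanics; they are not OEB filings):

```python
# Sketch of the kind of test behind the O&M/reliability link: regress
# a reliability measure (e.g., SAIDI, where higher = worse) on O&M
# spending per customer across LDCs. Data are hypothetical.

def ols_slope_intercept(x, y):
    """Simple OLS fit y = a + b*x via the closed-form estimator."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    slope = cov / var
    return slope, my - slope * mx

# Hypothetical cross-section: O&M per customer ($) and SAIDI (hours).
om_per_cust = [120.0, 150.0, 180.0, 210.0, 240.0, 270.0]
saidi_hours = [4.1, 3.6, 3.4, 2.9, 2.6, 2.3]

slope, intercept = ols_slope_intercept(om_per_cust, saidi_hours)
print(round(slope, 4), round(intercept, 2))
# A negative slope: higher O&M is associated with fewer outage hours.
```

The cited research uses richer panel methods, but the sign of this relationship is the regulatory concern: cut O&M under IR and measured reliability tends to worsen.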
For over a decade, Ontario LDCs have been subject to de facto IR. Given the relatively
innocuous SQ standards imposed by the Board, it would not be surprising to find that the level of
reliability among Ontario LDCs has fallen. In fact, data filed by Ontario LDCs with the OEB
over the 2000 – 2006 period documents a sharp reduction in the average level of reliability.35
These issues are discussed below.36
32 RP-1999-0034 Decision at 5.1.20.
33 This research will be discussed in the PWU’s filing on April 28, 2008 on the March PEG benchmarking report.
34 Ter-Martirosyan, A., “The Effects of Incentive Regulation on Quality of Service in Electricity Markets,” Working Paper, 2002.
35 PWU Comments on Staff Discussion Paper Regulation of Electricity Distributor Service Quality (EB-2008-0001), March, 2008. http://www.oeb.gov.on.ca/OEB/Industry+Relations/OEB+Key+Initiatives/Electricity+Service+Quality+Regulation
36 For more extensive comments, see Comments by Francis J. Cronin In the matter of the Ontario Energy Board’s Comparison of Distributor Costs Consultation Consultant’s Report (EB-2006-0268), June 26, 2007.
Multi-dimensional LDC Output
As noted earlier, the different bundles of characteristics delivered by the distributors to their
customers would likely have different costs associated with them and thus different prices. In
evaluating the reasonableness of a distributor’s price, we need the context of the “whole
package(s)” being delivered to its customers. Determining if distribution prices are just and
reasonable requires that we evaluate the other non-price features of their product.
Many/most energy regulators have a dual responsibility toward consumers: they must ensure that
prices are just and reasonable and they must ensure the appropriate level of service/reliability is
delivered. Without the latter, there can be no assurance that the prices being paid are in fact just
and reasonable. However, as PEG notes in its April 2007 report (pp. 30-31), and I agree:
The reliability of distribution services provided by utilities varies widely. Better reliability generally comes at a higher cost. The cost impact of quality is thus a valid issue in distribution benchmarking. There are special challenges in the estimation of the cost impact of quality. Despite its importance, empirical research on this topic is not well advanced.
Therefore, since reliability varies so “widely” among LDCs, and those LDCs with higher
reliability will generally have higher costs, we must structure the LDC benchmarking to account
for these differences. If not, and such different cost-causation situations are simply observed
through the LDCs’ OM&A costs, we may mistakenly identify “higher cost” LDCs as less
efficient than lower-cost LDCs providing lower reliability.
If this is so, the benchmarking approach proposed by PEG and Board staff will penalize the high-
reliability LDCs and reward the low-reliability LDCs.37 This perverse reward/penalty scheme
could then incent high-reliability LDCs to reduce their OM&A expenses to improve their
benchmarking scores; reliability would most likely decline as well. This is not the result we
would expect from a well-structured benchmarking scheme.
37 We are using the terms “high” and “low” in a relative context.
Imprudent curtailments in OM&A have been shown to significantly lower LDC reliability (see
below for a discussion). Regulators in both North America and Europe have recently responded
to profit-driven OM&A cuts with new regulatory initiatives. In Europe, regulators such as the
CEER have documented and encouraged the adoption of SQR which combines system-wide
standards with incentive/penalty schemes and single-customer guarantees with monetary
payments for nonperformance. Some regulators have used willingness to pay (WTP) studies to
gauge the value customers place on reliability.
Recommendation:
In addition, LDCs have different levels of reliability and different levels of associated costs, i.e., higher reliability costs more. When we observe different OM&A costs among Ontario LDCs without the associated reliability information, we cannot assume that an LDC with higher OM&A is less efficient; it may simply be providing a higher-valued output for its customers. This difference among LDCs with respect to reliability needs to be accounted for, just as does the differing labour capitalization rate.
The Council of European Energy Regulators’ Electricity Working Group, Quality of Supply Task
Force
In Europe, regulators such as the CEER have encouraged the adoption of SQR which combines
distribution reliability standards with incentive/penalty schemes on revenues as well as single-
customer guarantees with monetary payments for nonperformance.38 CEER has been publishing
a benchmarking report on SQR among its constituent members since 2002; the 2005 report
covered regulators from 19 member countries. The 2005 report also examined the reasons
behind the need for SQR.
The CEER task force report notes that quality may have a “long recovery time after deterioration” and that “quality of service is usually regulated over more than one regulatory period.” (p. 31)
In recent years, a growing number of countries have adopted price-cap as the form of regulation for electricity distribution, and sometimes also transmission, services. Price-cap regulation without any quality standards or incentive/penalty regimes for quality may provide unintended and misleading incentives to reduce quality levels. Incentive regulation for quality can ensure that cost cuts required by price-cap regimes are not achieved at the expense [of] quality. The increased attention to quality incentive regulation is rooted not only in the risk of deteriorating quality deriving from the pressure to reduce costs under price-cap, but also in the increasing demand for higher quality services on the part of consumers. For these reasons, a growing number of European regulators have adopted some form of quality incentive regulation over the last few years. Moreover, quality is multidimensional and some aspects of quality have a long recovery time after deterioration. Hence, quality of service is usually regulated over more than one regulatory period to address numerous issues, including continuous monitoring of actual levels of performance.
38 Council of European Energy Regulators (CEER), Third Benchmarking Report on Quality of Electricity Supply – 2005, Ref: C05-QOS-01-03, December, 2005.
An Empirical Examination of IR, OM&A Expenditures and Utility Reliability: IR without Standards Leads to Reduced O&M and Lowered Reliability
One study has examined the effects of IR on both OM&A expenses and service quality. Ter-Martirosyan (2002) examined the effects of IR on electricity distributors’ OM&A and quality of service.39 The author uses data from 1993–1999 for 78 major US electric utilities in 23 states. Ter-Martirosyan finds that IR is associated with a reduction in OM&A expenditures. The author finds that such reduced OM&A activities are associated with an increase in the average duration of outages per customer, the System Average Interruption Duration Index (SAIDI). Importantly,
Ter-Martirosyan’s analysis concludes that the incorporation of strict reliability standards with
associated financial penalties into IR can offset the tendency of IR plans without standards and
penalties to imprudently cut critical OM&A activities.
It is clear that IR alters the motivations of utilities. The shift to IR can put OM&A costs directly
in conflict with the pursuit of profit. Cost reductions experienced earlier in a plan’s term are
worth more to a utility than cost reductions achieved in later years. Since capital may not be
subject to significant changes within the earliest years of a plan’s term, the utility could be
incented to cut OM&A expenses beyond what is prudent for the quality and reliability of the
network. Possibly because of these perverse service quality results, it is common for utilities under IR
39 Ter-Martirosyan, A., “The Effects of Incentive Regulation on Quality of Service in Electricity Markets,” Working Paper, 2002.
to have explicit and strict SQ standards, often with penalties for violations. Indeed, Ter-Martirosyan finds that 70 percent of the utilities in the sample operating under IR had such penalties.
The Board’s Experience with Service/Reliability Quality Regulation
The OEB’s experience with SQR of electric distributors has its origins in the OEB’s 2000
Electricity Distribution Rate Handbook (Rate Handbook). In terms of SQR, this document was
largely based on the Implementation Task Force Report’s40 recommendations.
1st Generation Minimum Standards
For a variety of reasons, including the limited time available for the task force’s work, the task force recommended that only minimum customer contact standards be applied to LDCs during
1st Generation. These minimum standards were determined through a survey of the LDCs. For reliability, the “standards” were actually weaker: LDCs with historical data were to keep their performance within the range of whatever it had been during the preceding three years, and LDCs without data on reliability performance were to begin collecting it.
However, despite the task force’s reluctant acceptance of this “lowest common denominator” for SQR, the expectation was that the Board would move quickly, possibly even within 1st Generation, to set reliability performance targets based on a more reasoned rationale than “just do whatever it was that you were doing.” The principles of just and reasonable rates require that SQ
and reliability standards be explicitly formulated as part of the sale of access to customers. And,
the Board itself stated its intent to move expeditiously: “upon review of the first year's results,
the Board will determine whether there is sufficient data to set thresholds to determine service
degradation for years 2 and 3.”41 Unfortunately, it is now 2008 and the same standards that
applied in 2000 still apply today. However, as applied by the Board, there has been a de facto
deterioration.42
40 Report of the Ontario Energy Board Performance Based Regulation Implementation Task Force, May 18, 1999.
41 OEB, Service Quality, 2000 Electric Distribution Rate Handbook, March 9, 2000, p 7-10.
42 PWU Comments on Staff Discussion Paper Regulation of Electricity Distributor Service Quality (EB-2008-0001), March, 2008. http://www.oeb.gov.on.ca/OEB/Industry+Relations/OEB+Key+Initiatives/Electricity+Service+Quality+Regulation
The Reliability Performance of Ontario LDCs43
What has been the performance of the electricity distributors in Ontario relative to the minimum
standards established in 2000? Based on the 1st Generation standards, each LDC must keep its
reliability performance within the range experienced in the three-year period preceding the start
of 1st Generation. As far as I can tell, the Board has conducted no analysis of whether any LDCs are compliant with the standards established in 2000, nor has any analysis been conducted of the overall reliability of the Ontario electricity sector for any period over the last 8 years.
The analysis I performed, described below, was included in the PWU’s submission on the OEB staff Discussion Paper on the Regulation of Electricity Distributor Service Quality.44
A sample of 24 LDCs, comprised mainly of medium and large LDCs but also including a few with fewer than 20,000 customers, has been compiled to examine reliability trends.45 Data spanning the
period from 2000 to 2006 have been assembled from the Board’s annual PBR filings as well as
from the Reporting and Record Keeping Requirements (RRR) data for each utility.46 The
reliability records of merged LDCs have been included. Time series statistical tests were
conducted to examine whether or not the pre-2002 reliability data differ from the 2002–2006 data. It was not possible to reject the null hypothesis of no difference, i.e., for statistical purposes, the data appear to come from the same universe. Therefore, the data were combined.
A customer-weighted annual mean for SAIFI, SAIDI, and CAIDI was calculated for this sample
of LDCs. The reliability from 2000 to 2006 is summarized in the table below.
43 See PWU Comments on Staff Discussion Paper Regulation of Electricity Distributor Service Quality (EB-2008-0001), March, 2008.
44 PWU Comments on Staff Discussion Paper Regulation of Electricity Distributor Service Quality (EB-2008-0001), March, 2008. http://www.oeb.gov.on.ca/OEB/Industry+Relations/OEB+Key+Initiatives/Electricity+Service+Quality+Regulation
45 These utilities include large, medium, small and rural LDCs. In all, over 3.8 million customers are represented in the sample in 2006.
46 Recall that the 2000 Rate Handbook required LDCs to file data which would allow the Board to determine whether or not they were in compliance with the reliability standard.
Exhibit 7.1: Customer-Weighted Annual Means
Index  2000  2001  2002  2003  2004  2005  2006
SAIDI  3.42  3.64  5.02  5.62  2.80  5.39  9.63
SAIFI  2.09  2.08  2.38  2.23  1.91  2.41  2.76
CAIDI  1.64  1.75  2.11  2.52  1.46  2.24  3.49
Source: Calculations from OEB data: pbr_sqi_data3_2000.csv; pbr_sqi_data3_2001.csv; spreadsheet_ontarioelectricitydistributorscosts_20070907.xls.
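The customer-weighted means underlying Exhibit 7.1 can be sketched as follows. This is a minimal illustration only: the per-LDC records shown are hypothetical (the actual filings behind the 24-LDC sample are not reproduced here), and CAIDI is derived as SAIDI divided by SAIFI, the convention consistent with the exhibit's values.

```python
# Hypothetical per-LDC records (customers, SAIDI, SAIFI) for one year.
# The actual RRR/PBR filing data for the 24-LDC sample are not reproduced here.
ldcs = [
    (500_000, 4.0, 2.2),   # large urban LDC (illustrative)
    (120_000, 6.5, 3.0),   # medium LDC (illustrative)
    (15_000,  8.0, 3.5),   # small rural LDC (illustrative)
]

def weighted_mean(records, col):
    """Customer-weighted mean of the index in column `col` (1 = SAIDI, 2 = SAIFI)."""
    customers = sum(r[0] for r in records)
    return sum(r[0] * r[col] for r in records) / customers

saidi = weighted_mean(ldcs, 1)
saifi = weighted_mean(ldcs, 2)
caidi = saidi / saifi  # CAIDI: average outage duration per interruption
```

Large LDCs dominate the weighted result, which is why a sector-wide degradation driven by a few big utilities shows up clearly in the customer-weighted series.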
The reliability indexes suggest there has been quality of service degradation in the Province over
the past 7 years. Taking the 2000–2002 average as a three-year historical benchmark for SQ reliability, the sample mean fails to comply with this standard, for all three indexes, in every year after 2002 except 2004.
The current performance requirement for the three service reliability indicators (i.e. SAIDI,
SAIFI and CAIDI) as stated in the Electricity Distribution Rate Handbook is as follows:
A distributor that has at least three years of data on this index should, at minimum, remain within the range of their historical performance.
As seen in Exhibit 7.1, the average for SAIFI in the last 2 years exceeds all prior year results.
The 2006 value is almost 20 percent higher than the whole period average and 33 percent higher
than the first two years. For SAIDI, reported in Exhibit 7.1, 2006 is the highest reported year; in
fact, two of the 3 highest results occur in the last two years. Indeed, the 2006 results are more
than double the average of the prior 5 years. Finally, note that the results for CAIDI indicate that
the last year is by far the worst year. Two of the three highest reported scores occur in 2005 and
2006. The score for 2006 is about 75 percent higher than the average score for the prior 5 years.
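The comparisons in the preceding paragraphs can be checked directly against the Exhibit 7.1 values. The sketch below transcribes the exhibit's annual means and verifies each claim arithmetically:

```python
# Customer-weighted annual means transcribed from Exhibit 7.1 (2000-2006).
saidi = [3.42, 3.64, 5.02, 5.62, 2.80, 5.39, 9.63]
saifi = [2.09, 2.08, 2.38, 2.23, 1.91, 2.41, 2.76]
caidi = [1.64, 1.75, 2.11, 2.52, 1.46, 2.24, 3.49]

# SAIFI in the last two years (2005, 2006) exceeds every prior year's result.
assert all(v > max(saifi[:5]) for v in saifi[5:])

# 2006 SAIDI is more than double the average of the prior five years (2001-2005).
saidi_prior5 = sum(saidi[1:6]) / 5
assert saidi[6] > 2 * saidi_prior5

# 2006 CAIDI relative to the prior five-year average: roughly 1.73x,
# i.e., on the order of 75 percent higher, as stated in the text.
caidi_prior5 = sum(caidi[1:6]) / 5
caidi_ratio = caidi[6] / caidi_prior5
```

Every assertion passes on the exhibit's figures, confirming the degradation pattern the text describes.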
These data present troubling findings; they indicate a degrading of the reliability performance
for the Ontario electricity distribution sector as a whole. This is consistent with the general
nature of regulation applied to the LDCs since the early-mid 1990s: over this span, the LDCs
have been under de facto IR for long stretches of time. We should expect to see behaviour
consistent with what theory indicates LDCs would do and consistent with the behaviour of
utilities under earlier IR frameworks with weak or absent SQR. In addition, it may well be that the focus of LDCs has been diverted by a constantly changing policy-regulatory environment. Finally, it may be that some LDCs curtailed needed O&M or investment
due to insufficient budgets.47 It is critically important for the Board to examine this degradation
in reliability and the reason(s) behind it.
Intent of 2000 Decision
The Board’s Decision RP-1999-0034 (1st Generation Electricity Distribution PBR) was to
establish a minimum floor for reliability. As noted above, the Board stated:
…the Board favours the minimum standards proposed in the draft Rate Handbook for first generation PBR. The Board notes that these standards represent the minimum acceptable performance level.48
It appears therefore that the intent was to maintain the reliability standards that existed at the
time of the 2000 Decision as a minimum acceptable standard. Unless reliability performance
increased significantly in 2000 and 2001 from the preceding 2 years, some LDCs may have
failed this minimum performance standard.
Recommendation:
Service quality/reliability regulation must be reassessed and integrated explicitly into IR. Although the Board’s efforts to establish meaningful SQ standards were prematurely curtailed, it is abundantly clear that a substantial amount of work has already been accomplished. Neither the need to implement meaningful standards nor their integration into an IR framework should any longer be debated. Rather, the manner in which this should be accomplished needs to be addressed. European regulators have made substantial progress in the area of SQ standard implementation and their accomplishments can serve as models for the Board.
47 Indeed, the issue of deteriorating reliability has been raised in at least one 2008 rate application.
48 RP-1999-0034 Decision at 5.1.20.
8.0 Implementation Issues: A PF-ROE Menu and the IPI
Two issues appear to offer superior opportunities for the Board’s IR framework. These are the
use of a menu-based PF approach and the use of an IPI.
The menu is a natural solution to the issue of diversity that exists among the LDCs. Firms in
different circumstances can base their IR choices on these differences. And, a menu can be
easily structured to reach explicit sharing goals between rate payers and shareholders. An IPI
was the basis of the rate adjustment mechanism (RAM) developed in 1st Generation PBR.
Extensive research and analysis was done and documented at that time. This should provide a
strong foundation for the RAM in the 3rd Generation.
Examples of a PF-ROE Menu
For the 2nd Generation IR for local exchange carriers (LECs), the US Federal Communications
Commission (FCC) proposed that each firm select its PF from a menu. The higher the PF
selected by a LEC, the higher would be its allowed ROE.49 Clearly the FCC was trying to
incent higher performance; in addition, the FCC must have been keenly interested in adding to its
understanding of potential productivity performance in the telecommunications industry.
The 2000 Draft Rate Handbook offered a menu of PF-ROE choices shown in Exhibit 8.1 below.
Diversity and Understanding
Indeed, the option of self-directed choice would seem to be very appealing. On the one hand, it
would allow the regulator to explore the feasible set of PF much more assiduously than could be
done through a multi-generation IR framework which could take a decade to fill out. On the
49 The policy implications of this work are discussed in Cronin and Motluk, “The Road Not Taken: PBR with Endogenous Market Designs,” Public Utilities Fortnightly, March 2004. An earlier version of this paper Restructuring Monopoly Regulation With Endogenous MarketDesigns was presented at the Michigan State University, Institute for Public Utilities, Annual Regulatory Conference, Charleston, S.C. December, 2003. Results from this research have also been used as the basis for an invited seminar on improving utility benchmarking at Camp NARUC, “Restructuring Monopoly Providers or Regulation through Revelation,” 46th Annual Regulatory Studies Program MSU, IPU, Regulatory Studies Program, August 2004.
other hand, it permits the regulator to recognize the existence of diversity among the LDCs and
to embed this reality into the PF options.
Exhibit 8.1: 1st Generation Proposed PF-ROE Menu
Source: 2000 Draft Rate Handbook
From the customer stakeholders’ perspective, the menu would also appear appealing. LDCs
would be incented to more aggressively review productivity improvements and undertake those
that they could manage. Within the plan’s term (and presumably after the term), rate payers would
experience greater reductions in rates than would likely have taken place. Shareholders would
experience concomitant increases in ROE and earnings.
The Menu and Stakeholder Sharing
The Board should employ a PF-ROE menu. The menu can be easily structured to reach explicit
sharing goals between rate payers and shareholders. For example, as indicated in the Exhibit 8.2
below, the illustrative menu provides a baseline ROE of 8.5 percent and PF of 0.8. If the LDC
selects an incremental PF of 0.2, its allowed ROE increases by 100 basis points. The ceiling
ROE is 12.5 percent with a PF of 1.6.
Exhibit 8.2: Illustrative PF-ROE Menu with Customer-Shareholder Benefit Splits
Selection  Productivity Factor  ROE   Customer-Shareholder Split
1          0.8                  8.5   NA
2          1.0                  9.5   57 – 43%
3          1.2                  10.5  Same
4          1.4                  11.5  Same
5          1.6                  12.5  Same
Source: Author calculations
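The menu in Exhibit 8.2 follows a simple rule described in the text: starting from the baseline (PF 0.8, ROE 8.5 percent), each 0.2 increment in the selected PF adds 100 basis points of allowed ROE, up to the ceiling (PF 1.6, ROE 12.5 percent). A minimal sketch of that construction:

```python
def pf_roe_menu(base_pf=0.8, base_roe=8.5, pf_step=0.2, roe_step_bp=100, pf_ceiling=1.6):
    """Build (PF, ROE) menu rows: each pf_step increment in PF earns roe_step_bp of ROE."""
    rows = []
    pf, roe = base_pf, base_roe
    while pf <= pf_ceiling + 1e-9:      # small tolerance guards against float drift
        rows.append((round(pf, 1), round(roe, 1)))
        pf += pf_step
        roe += roe_step_bp / 100.0      # 100 basis points = 1 percentage point
    return rows

menu = pf_roe_menu()
# → [(0.8, 8.5), (1.0, 9.5), (1.2, 10.5), (1.4, 11.5), (1.6, 12.5)]
```

Parameterizing the baseline, step sizes, and ceiling makes it straightforward to explore alternative menus (e.g., finer PF increments or a different ROE reward per increment) while preserving the one-for-one structure of Exhibit 8.2.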
How did I develop the menu? Research on Ontario LDCs following the 1st Generation process
has examined the performance of frontier (most efficient) versus interior (less efficient) LDCs.50
Looking at the period 1988 to 1997, a TFP framework similar to that used in 1st Generation PBR
was used: one output, four input, fixed weight calculation of TFP. It was found that the LDCs
that were judged to be most efficient at the start of the period had consistently higher growth in
TFP than did less efficient LDCs. This was true over both the 1988-1993 and the 1993-1997
period. Over the full ten-year period, the average annual growth in TFP for these frontier firms
is about 1.6 percent. On this basis, I recommend the Board set the ceiling of the proposed menu
at 1.6 percent with an ROE of 12.5 percent. Increments of 0.2 in the PF would be associated with 100 basis point increments in the allowed ROE. This would set the baseline PF at 0.8, virtually identical to the ten-year TFP growth among Ontario LDCs in 1st Generation.
This recommendation assumes that the LDC budgets going into IR are in a steady state mode and
providing sufficient funds for capital refurbishment, growth, and necessary additions induced by
wholesale price increases or conservation and that the operational side of OM&A is receiving a
similarly sufficient budget. However, I have some doubts that this is so for all LDCs.
50 See, Cronin, F. and S. Motluk, “Leaders and Laggards: Examining Regulatory Applications of the Malmquist Productivity Index to Establish Secular Growth in Productivity.” (forthcoming)
Information on reliability, budgets and ROE would seem to indicate that there may be an
operational budget gap. No doubt, many LDCs have seen increases in OM&A but my
expectation is that the LDCs have had to substantially increase the “A” portion of that to meet
the substantial increase in regulatory burdens imposed on them over the past 10 years.
How was the stakeholder sharing split for menu increments derived? Given assumptions about total costs per customer, the share of capital in costs, the share of equity in capital, and the time path of the LDC’s operational savings, one can calculate the savings to ratepayers and the increased earnings to shareholders. Assuming that in a 3-year IR plan the LDC would need two years to reach the full incremental ROE, a 57 percent share for customers and a 43 percent share for shareholders was calculated. Ratepayers would likely experience long-term benefits as well.
The Input Price Index
Board staff have proposed to implement an IPI for 3rd Generation.
Critical, multi-dimensional role
The IPI plays a critical, multi-dimensional role in an IR. The IPI sets an automatic adjustment
for LDC cost changes. It obviates the need to hold frequent COS proceedings. The IPI mirrors
the COS process by adjusting rates on a prudency basis, but uses the experience of the sector average as the prudency test. It mitigates the likelihood that mistakes in a RAM associated with a
macroeconomic price index (e.g. GDP-IPI) will over/undercompensate LDCs. It establishes
yardstick (e.g. benchmark) competition among Ontario LDCs, with better performers holding
down costs. And finally, it provides proper incentive signals to LDCs and customers.
1st Generation IPI
The IPI developed in 1st Generation was rigorously examined and evaluated.51 For example, the
input weights developed in 1st Generation PBR for Ontario distributors were extensively tested
and based on the results of 48 Ontario distributors representing a cross section of the distributors.
51 See, Cronin, F. et al, Productivity and Price Performance for Electric Distributors in Ontario, OEB Staff Report, July, 1999 and associated material.
These weights would be preferable to the weights suggested by PEG which appear to be based
on the capital proxy rejected by PEG in their latest benchmarking report (March 20, 2008). My
analysis on this proxy finds that for the vast majority of Ontario distributors, this proxy
dramatically overestimates their use of capital. PEG’s resulting 63 percent capital in total costs
versus the 45 percent or so found in 1st Generation is consistent with the error associated with the
PEG capital proxy. Use of such a higher estimated cost share for capital could lead to error in
the calculated IPI and increased volatility in the IPI.
The Staff Discussion Paper presents a detailed discussion on the price index options. The IPI
approach used in 1st Generation PBR framework should be implemented for 3rd Generation IR as
described in the Discussion Paper.
PWU_EVD_20080414