Evaluation and Visualization of Multi-Level Contingencies in Power Systems
by
Anton Lodder
A thesis submitted in conformity with the requirements for the degree of Masters of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto
© Copyright 2015 by Anton Lodder
Abstract
Evaluation and Visualization of Multi-Level Contingencies in Power Systems
Anton Lodder
Masters of Applied Science
Graduate Department of Electrical and Computer Engineering
University of Toronto
2015
Contingency analysis is critical to evaluating the operational capacity of power systems and char-
acterizing their vulnerability to component faults. As we look to increase the resilience of networks to
element failure, the instance of multiple contingencies is of growing concern for planners and operators
in identifying weak points in the system. Multi-element contingencies introduce new challenges for how
to reliably and consistently measure the severity of a fault, how to perform contingency analysis on an
expanding range of contingency scenarios in a timely manner, and how to interpret the increasingly
hierarchical data obtained by contingency analysis. This research explores techniques that can be used
to generate, summarize and display the results of multi-element contingency analyses in power systems,
including high-performance computational methods for evaluating contingencies and new visualization
techniques that leverage visual summarization and live interaction to extract valuable insights from the
resulting data.
Contents
1 Introduction 1
1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Objective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.5 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2 Computations for Contingency Analysis 6
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2 Techniques for Evaluating Contingency Scenarios . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.2 Performance Indexes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2.3 Voltage Stability Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Continuation Power Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.1 Formulation of Continuation Power Flow . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.2 Continuation Power Flow Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.3 Choice of Continuation Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Modifications to the Continuation Power Flow . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.1 Adaptive Step Size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.4.2 Lagrange Polynomial for Prediction . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.3 Quantification of Performance Gains in Continuation Power Flow . . . . . . . . . . 20
2.5 Performing Multi-level Contingency Analysis . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.1 Dealing with Islanding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.5.2 Implementing Participation Factors . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.5.3 Application of Parallel Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.6 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3 Visualizing Multi-Level Contingency Data 26
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 Tree Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.2 Representing Contingency Data as a Tree Diagram . . . . . . . . . . . . . . . . . . 28
3.2.3 Techniques for Overlaying Quantitative Data on Tree Diagrams . . . . . . . . . . . 30
3.2.4 Normalization of Data in Tree Diagrams . . . . . . . . . . . . . . . . . . . . . . . . 32
3.2.5 Strengths and Limitations of the Tree Diagram . . . . . . . . . . . . . . . . . . . . 32
3.2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3 Treemap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.3.2 Representing Contingency Data as a Treemap . . . . . . . . . . . . . . . . . . . . . 34
3.3.3 Normalization of Data in Treemaps . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.4 Different Approaches to Treemap Styling . . . . . . . . . . . . . . . . . . . . . . . 37
3.3.5 Tiling Algorithm for Treemap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3.6 Dealing With Quantization Error . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.3.7 Use of Colour Coding to Overlay Information on Treemaps . . . . . . . . . . . . 43
3.3.8 Strengths and Limitations of Treemaps . . . . . . . . . . . . . . . . . . . . . . . . 44
3.3.9 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4 Expanding Content of Visualizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.4.2 Interactive Discovery of Contingency Details . . . . . . . . . . . . . . . . . . . . . 49
3.4.3 Responsive Highlighting of Diagram Structures . . . . . . . . . . . . . . . . . . . . 49
3.4.4 Cross-referencing to One-line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.4.5 Drill-down . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.4.6 Thresholding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.5 Chapter Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
4 Conclusions and Future Work 64
4.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.1.1 List of Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
4.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.2.1 Future Work in Improving Contingency Analysis Techniques . . . . . . . . . . . . 67
4.2.2 Future Work in Improving Visualizations . . . . . . . . . . . . . . . . . . . . . . . 69
Bibliography 70
Appendices 74
A Source Code for Contingency Analysis 75
A.1 Running Contingency Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
A.2 Defining Faults to be Analyzed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
A.3 Fault Class Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
A.4 Dealing With Islanding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
B Source Code for Continuation Power Flow 102
B.1 Continuation Power Flow Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
B.2 Prediction step using First Order Approximation . . . . . . . . . . . . . . . . . . . . . . . 137
B.3 Prediction step using Lagrange Polynomial . . . . . . . . . . . . . . . . . . . . . . . . . . 140
B.3.1 Lagrange Polynomial Prediction with λ Continuation . . . . . . . . . . . . . . . . 140
B.3.2 Lagrange Polynomial prediction with Voltage Continuation . . . . . . . . . . . . . 142
B.3.3 Implementation of Lagrange Polynomial . . . . . . . . . . . . . . . . . . . . . . . . 143
B.4 Correction step of Continuation Power Flow . . . . . . . . . . . . . . . . . . . . . . . . . . 144
B.4.1 Correction Step With λ as Continuation Parameter . . . . . . . . . . . . . . . . . 144
B.4.2 Correction Step With Bus Voltage as Continuation Parameter . . . . . . . . . . . 146
C Source Code for Visualizations 152
C.1 Software Representations of Power Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 152
C.2 Cross-Referencing Elements and Contingencies . . . . . . . . . . . . . . . . . . . . . . . . 178
C.3 Building A Treemap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
C.3.1 Treemap Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
C.3.2 Treemap Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
C.4 Building a Tree Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
List of Figures
2.1 Example of a power-voltage curve. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Linear approximation versus Lagrange polynomial interpolation in predictor step of con-
tinuation power flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.3 Comparison of performance of the Lagrange polynomial interpolation scheme for contin-
uation power flow with gradual versus sudden change in step size. . . . . . . . . . . . . . . 19
2.4 Flow chart describing the algorithm for island detection. . . . . . . . . . . . . . . . . . . . 22
2.5 Graph summarizing performance gains resulting from modifications to continuation power
flow. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.1 Road-map describing the relationship between tree diagrams and treemaps. . . . . . . . . 27
3.2 Tree representation of a subset of fault cases for the IEEE 30-bus test system. . . . . . . . 29
3.3 Tree diagram illustrating the use of data overlays. . . . . . . . . . . . . . . . . . . . . . . 31
3.4 Treemap of n− 1 contingencies of a set of four elements. . . . . . . . . . . . . . . . . . . . 34
3.5 Treemap of n− 1 and n− 2 contingencies of a set of four elements. . . . . . . . . . . . . . 35
3.6 Flow chart describing the recursive tiling algorithm for a squarified treemap layout. . . . . 39
3.7 Treemap of n− 1 and n− 2 contingencies for the IEEE 30 bus test system. . . . . . . . . 45
3.8 Treemap of n− 1 and n− 2 contingencies for the IEEE 118 bus test system. . . . . . . . . 46
3.9 Treemap demonstrating use of alternating colours to increase visual differentiation. . . . . 47
3.10 Treemap diagram demonstrating break-out of contingency details via mouse click inter-
action for the fault of generator 4 and transformer 2. . . . . . . . . . . . . . . . . . . . . . 50
3.11 Treemap diagram demonstrating break-out of contingency details via mouse click inter-
action for the fault of bus 2 and transformer 2. . . . . . . . . . . . . . . . . . . . . . . . . 51
3.12 Treemap diagram demonstrating break-out of contingency details via mouse click inter-
action for the fault of transformer 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.13 Treemap diagram demonstrating break-out of contingency details via mouse click inter-
action for the fault of generator 4 and bus 6. . . . . . . . . . . . . . . . . . . . . . . . . . 53
3.14 Screen-shots of a tree diagram demonstrating the use of mouse hover interaction. . . . . . 54
3.15 Use of mouse interaction to highlight duplicate contingencies in a treemap. . . . . . . . . 56
3.16 Screen-shots of a treemap diagram demonstrating the use of mouse interaction to identify
elements involved in a contingency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.17 Screen-shots of a treemap diagram demonstrating the use of mouse interaction to identify
elements involved in a contingency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.18 Treemap diagram showing n− 1 through n− 3 contingencies involving branch 2 and four
other elements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
3.19 Treemap diagram showing n− 2 and n− 3 contingencies for five elements. . . . . . . . . . 62
List of Tables
2.1 Performance benchmark comparing execution times for linear approximation versus La-
grange polynomial approximation in the prediction step of continuation power flow. . . . 18
2.2 Performance benchmark comparing execution of continuation power flow with various
modifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3 Performance benchmark comparing execution times for continuation power flow with and
without the use of multi-processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1 Comparison of rounding error for elements with large area versus small area in laying out
a treemap diagram. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
List of Code Snippets
1 Pseudo-code describing how to update the step size after completing the prediction and
correction steps of an iteration of continuation power flow. . . . . . . . . . . . . . . . . . . 15
2 Pseudo-code describing how to calculate edge weights for the layout of tree diagrams,
taking into consideration the associated faults. . . . . . . . . . . . . . . . . . . . . . . . . 32
Chapter 1
Introduction
Chapter 1. Introduction 2
1.1 Background
Contingency analysis is integral to the design and operation of large-scale power systems, allowing controllers
and planners to assess how vulnerable a power network is to various scenarios of component fault [1],
and to make decisions about how to mitigate these scenarios. The ability to measure the effect of an
outage on a power system, in particular its ability to supply power and operate with voltage stability, is
a key component of quantifying the operational security and resilience of the network. For this reason,
modern systems are designed with the criterion of guaranteed voltage stability for single-element (n − 1)
contingencies, and proposed upgrades to transmission systems as well as introduction of new generators
and loads are evaluated with respect to whether they meet this minimum constraint on
contingency security.
In the past, contingency analysis has been performed using comparisons of line loading, voltage
deviations, and other measurements resulting from power flow computations as a measure of operational
fitness; this results in a quantity which can be used to compare contingencies, evaluating how the metric
changes in various fault scenarios. This has allowed single-element contingency analysis to be reduced
to a list of fault scenarios ordered by how severely they affect the evaluation measure [2], extending
contingency analysis beyond a binary analysis — stable vs. not stable — to a priority ordering of
contingencies for risk mitigation.
As market deregulation progresses and new generation technologies are brought on-line, more detailed
and dynamic analysis of contingency data can aid in ensuring the reliability of the power grid. These
analyses are important for determining optimal measures in system operations, and are also useful for
planning and feasibility evaluation of market solutions. In addition, there is growing interest in evaluating
contingencies involving multiple elements, moving beyond the current n−1 security constraint standard
for system stability; in certain cases, select multi-element contingencies have already been included in
reliability studies under the category of extreme disturbances [3, pp. 38-39]. This expansion of contingency
analysis could serve 1) to increase the level of detail with which contingency security is understood and
2) to guide planning and operations processes toward raising the guaranteed minimum level of secure
operation during contingency scenarios.
1.2 Motivation
With the increasing complexity of power systems, resulting from a greater variety and dispersion of
generation and loads, rising power demand, and aging grid infrastructure, comes a need for more
sophisticated and detailed knowledge of where networks are
vulnerable to fault conditions and how these vulnerabilities can be mitigated. Current contingency
analysis techniques are geared towards single-element contingencies; in order to improve the state of
the art with respect to the reliability of power systems, it is necessary to also consider multi-element
contingencies1. This expansion in the scope of contingency analysis raises new questions about how
to reliably measure the effect of a contingency, how to perform these measurements on a large set of
scenarios in a timely fashion, and how to analyze the resulting dataset to extract useful observations
about the power system. In particular, it is desired that for an exponential increase in the number of
contingencies that are included in analysis, there be a concomitant increase in both the breadth and
1 Failure to consider extreme multi-element contingencies was considered a contributing factor in the 2003 blackout in North America [3, p. 18].
depth of observations that can be extracted from the resulting data. Furthermore, there is a need to
improve upon computational efficiency for any models that are used to quantify contingencies. The
visualization techniques developed in this research are built on the use of continuation power flow (CPF) as a tool for evaluating
contingencies, allowing consistent and detailed understanding of how a power network is vulnerable to
fault scenarios.
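To give a sense of the combinatorial growth described above, note that the number of distinct n − k scenarios for a system of E outage-prone elements is the binomial coefficient C(E, k). The sketch below is illustrative only (the function name is hypothetical and not from the thesis appendices); the element count of 41 corresponds to the branch count of the IEEE 30-bus test system:

```python
from itertools import combinations
from math import comb

def contingency_sets(elements, max_level):
    """Yield every n-1 through n-max_level contingency scenario
    as a tuple of simultaneously faulted elements."""
    for level in range(1, max_level + 1):
        yield from combinations(elements, level)

# The IEEE 30-bus test system has 41 branches; counting outage
# combinations of branches alone shows the growth in scenario count:
branches = [f"branch{i}" for i in range(1, 42)]
for k in (1, 2, 3):
    print(f"n-{k} scenarios: {comb(len(branches), k)}")
# n-1 scenarios: 41
# n-2 scenarios: 820
# n-3 scenarios: 10660
```

Each additional contingency level multiplies the scenario count by roughly (E − k)/(k + 1), which motivates both fast evaluation techniques and visualizations that can summarize hierarchical result sets.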
1.3 Objective
The approach presented in this research is to develop a companion visualization that sits alongside
a geographic or one-line map2 of the system, eschewing the constraint of describing the system's shape
in favour of a simplified diagram that highlights the information contained in contingency analysis
data. The goal of this technique is to allow the viewer to see an
overview of the entire data set, decide for themselves what elements can be filtered out and then focus
in on particular details3.
1.4 Literature Review
There has been limited work in developing compelling methods for visualizing quantitative data for power
systems. Most attempts at visualizing operational data have centered around overlaying quantities on a
system diagram, whether it be a one-line diagram or a direct geographical representation of the system.
This section will present some approaches that have been used in the past to visualize various aspects
of contingency analysis.
Several two-dimensional approaches have been proposed, including:
• In [1], colour contoured maps were overlaid on geographic maps of the power system. Severity of
contingencies involving each line were indicated by colour on that line, with red corresponding to
highest severity and green to lowest severity. A Gaussian blur effect was applied to smooth out the
colour transitions across different elements, generating a heat-map effect that aids quick visual
identification of areas in the diagram that require focus by the viewer. This technique serves to
superimpose the risk factors for each contingency locationally on one geometric system diagram,
giving a summarization of all contingency risks associated with individual elements. However, this
approach is misleading in that it puts an artificially heavy visual emphasis on elements by their
geometric shape. A transmission line may stand out visually because it carries incrementally more
contingency risk, or simply because it is longer than other transmission lines and thus takes up
more space in the diagram. While there may be some debate about how much the length of a
transmission line impacts its risk of being faulted, it is not at all clear how this risk interacts with
the impact of the fault on system operation. For example, transmission lines have a much greater
visual weight in a system diagram than buses do simply by virtue of their shape, yet the faulting
of a bus may lead to several transmission lines being taken out of service and could potentially
pose a much greater risk to the system.
• Several works [6, 7, 8] have proposed the use of animated flows (arrows that move along a transmis-
sion line) and in-diagram coloured indicators, pie charts or vertical bars to overlay flow information
2 Also called a single-line diagram.
3 This three-step approach is known as the visual information seeking mantra [4, 5].
on a one-line or geographical diagram. These flow indicators and bars are used to highlight power
flows and measures such as short-circuit level across the diagram, giving the viewer a summary of
where operational limits are being approached or exceeded in a contingency scenario.
In addition to these, there have been efforts to produce three-dimensional diagrams with the goal of
increasing the richness of the visualization and utilizing the third dimension to alleviate visual clutter to
some degree. Sun and Overbye [2] proposed overlaying three-dimensional graphical elements on a planar
map of the system. These elements, each corresponding to components represented on the plane, would
be shaped and coloured in such a way as to convey quantitative information, such as using cylinders
of varying heights or drawing walls that follow transmission lines. Grijalva [7] also proposed the use of
cylinders coming out of a two-dimensional plane as a method of comparing quantities between different
points on a system map. Bei et al. [9] drew a three-dimensional geographical map, complete with
geographical terrain mapping, and overlaid a heat map to show problem areas in the system with regard
to contingencies.
Although three-dimensional visualizations may have some applications in visualizing data for power
systems, they suffer from a fundamental design flaw: after a visualization is built in a virtual
three-dimensional space, it has to be projected onto a two-dimensional screen. This makes precise comparisons
of physical objects challenging for the user, who has to mentally account for the effect of perspective
on the relative scaling of objects. In real life, humans solve this problem with measuring sticks; it is
unrealistic to expect a user to precisely compare three-dimensional features by eye.
Techniques that overlay data on a map — whether two- or three-dimensional — are attractive
because they are conceptually easy to grasp, providing a top-down view of the system. They often
include geographic features such as lakes, roads, terrain forms and borders to give the viewer contextual
information about the scale of the diagram and what they are looking at. These features are valuable
in making visualizations familiar to operators because they leverage the viewer's intuition for reading
maps and schematic diagrams. However, while a high-fidelity virtual representation of the system may
make intuitive sense to a viewer, most of the content contained in the visualization ends up being details
that form this contextualization, leaving less room for giving details about the values of interest in the
analysis. The values that are most important for a particular analysis — such as analysis results — are
marginalized by the very forms used to give them context. When the structural details of the system
map and the visual features of measures are combined onto one layer, the resulting visualization tends
to be crowded and visually tiring. These techniques also do not scale to larger systems, where individual
visual elements must be drawn very small in order to fit the entire system on one diagram. Even for
systems where these techniques can be used with some success, the values they are used to represent
often do not have real-world meaning that fits directly in the context of a virtual representation of the
system, since they don’t have visual analogues in the real system. Measures of contingency severity, for
example, don’t have a visual manifestation in a power system; attempts to overlay these values on a
virtual system are unable to bridge this abstraction.
Beyond the challenges of contextualizing abstract data, there are additional challenges that make map
overlays ill-suited for visualizing contingency data. These challenges stem from the fact that quantitative
data about numerous contingency scenarios must be aggregated onto a single system map, using icons
or heat-map techniques. This approach has several flaws:
1. In the case where icons are associated with a specific element, it is visually unclear whether a
specific element is highlighted because it experiences limit violations under certain contingencies
or because when it is faulted, other elements experience limit violations.
2. There is no way to work backwards from the aggregate visualization to understand what visual
features correspond to what contingencies.
3. There is no functionality for directly comparing individual contingencies in terms of their severity.
These challenges indicate that alternative visualization techniques are required to replace or augment the
use of system diagrams. Comparison of different contingencies is fundamental to contingency analysis;
even in the case of a one-dimensional list of contingencies, the ordering allows for immediate understand-
ing of how different faults compare to each other, allowing operators to prioritize what actions should
be taken to ameliorate these risks. A visualization technique that does not provide robust techniques
for comparing contingencies has only marginal value in providing insights about the system.
Mikkelsen et al. and Chopade et al. [4, 10] discuss a model for real-time visualization of power
system operations that involves building a dashboard from a combination of several different visual tech-
niques: geographic maps, one-line diagrams, lists of alarms, contextual detail breakouts and supportive
views detailing secondary analyses. These supportive views could be merely visual overlays on other
elements of the combined supervisory dashboard, or separate visualizations; Mikkelsen et al. [4] suggest
that in the case of contingency analysis this could consist of a list of contingencies, allowing the user to
identify the worst ones via sorting. The objective of this research is to develop a visualization technique
that effectively displays contingency results and which can be integrated into dashboards such as those
proposed by these previous works through visual integration and cross-linking.
1.5 Overview
The contents of this thesis will be structured as follows: Chapter 2 will consider measures for evaluating
contingencies, focusing on continuation power flow as a measure of system fitness that is especially
well-suited for multi-level contingencies. The formulation and algorithm of continuation power flow will
be presented, followed by a discussion of computational enhancements that can be utilized to improve
performance. Chapter 3 will detail the two visualization techniques developed in this research – tree
diagrams and treemaps – and will detail how they are drawn and interpreted. This chapter will also
discuss the use of interactive features and companion diagrams to enhance these diagrams. Finally,
Chapter 4 will discuss some conclusions of the research and describe areas of future work.
Chapter 2
Computations for Contingency
Analysis
Chapter 2. Computations for Contingency Analysis 7
2.1 Introduction
The discussion of quantitative measures for evaluating contingencies is central to contingency analysis
techniques. At issue are concerns about accuracy, reliability and performance; there exists a fundamen-
tal trade-off wherein quantitative measures which are accurate and can be applied consistently across
numerous contingencies of varying severity will tend to require more computational effort to attain. This
trade-off can be overcome to some degree through the use of advanced computing techniques and by
taking advantage of the increased availability of computer hardware for high-performance
computing. This chapter will discuss the selection of a measure for evaluating contingencies and will
explore the different techniques that can be applied to compute these measures quickly.
2.2 Techniques for Evaluating Contingency Scenarios
2.2.1 Introduction
One central aspect of contingency analysis is selecting a comparative measure with which to evaluate the
impact that each contingency has on system security. The selection of an appropriate technique is critical
for ensuring that contingency analysis is able to consistently quantify the risk of a variety of security
concerns in operation. In addition it is necessary that measures of system security be straightforward
to compare across a large number of scenarios; that they be amenable to summarization and normal-
ization, allowing the results of analysis to be effectively interpreted; and that they provide reasonable
computational performance.
2.2.2 Performance Indexes
The most common measures used in contingency analysis are performance indexes evaluating the loading
of elements in the network in comparison to pre-determined limits. These methods all involve a
summation of values for each bus or branch relative to their predetermined limits to produce a single
performance index [11][12, p. 416]. Some examples of such limit-based performance indexes include:
• line current-based limit
• line power-based limit
• voltage limit
• generator reactive power limit
The formulation of a performance index is a weighted sum of values for each element in the network,
relative to their respective limits. For example, the formulation of a voltage-based index is

$$J_V = \sum_{j=1}^{N} \frac{w_j \,(V_j - V_j^{\mathrm{nom}})}{\Delta V_j} \tag{2.1}$$
where
• $V_j$ is the voltage at the $j$th bus
• $V_j^{\mathrm{nom}}$ is the nominal voltage at the $j$th bus (usually 1 p.u.)
• $\Delta V_j$ is the voltage deviation tolerance at the $j$th bus
• $w_j$ is a weighting assigned to the $j$th bus
• there are $N$ buses in the network
The formulation given in (2.1) evaluates the level of deviation of the bus voltages in the system from the
nominal or ideal value for operation. In the case of capacity-based measures, the formulation is similar;
for example a current-based index would be formulated as shown in (2.2):
$$J_I = \sum_{k=1}^{M} w_k \, \frac{I_k}{I_k^{\max}} \tag{2.2}$$
where
• $I_k$ is the current loading on the $k$th branch
• $I_k^{\max}$ is the maximum current that can be sustained by the $k$th branch
• $w_k$ is the weighting assigned to the $k$th branch
• there are $M$ branches in the network
This formulation could also be applied to other elements and respective measures such as power
flows on lines relative to their individual ratings, or generator reactive power outputs relative to their
individual reactive power limits. Note that these performance indexes are parametrized by the choice of
weighting factor; unless there is a specific reason to either emphasize or suppress the loading condition
of a particular element, these weightings are generally chosen to be uniform [13].
The resulting value $J$ is a single scalar that can be used as a metric for how well the network
is conditioned; larger relative voltage deviations at buses — or larger relative line current loading —
correspond to a system that is more stressed and is more constrained in its ability to respond to increases
in power demand or remain stable under fault conditions.
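As a concrete illustration of (2.1) and (2.2), both indexes reduce to a few lines of code. The following sketch uses uniform weights by default; the bus voltages and branch loadings in the example are hypothetical values chosen for illustration, not data from this work.

```python
import numpy as np

def voltage_index(V, V_nom, dV_tol, w=None):
    # J_V per (2.1): weighted sum of bus voltage deviations from nominal,
    # each relative to its deviation tolerance.
    V = np.asarray(V, dtype=float)
    w = np.ones_like(V) if w is None else np.asarray(w, dtype=float)
    return float(np.sum(w * (V - V_nom) / dV_tol))

def current_index(I, I_max, w=None):
    # J_I per (2.2): weighted sum of branch current loadings, each
    # relative to the branch's maximum sustainable current.
    I = np.asarray(I, dtype=float)
    w = np.ones_like(I) if w is None else np.asarray(w, dtype=float)
    return float(np.sum(w * I / np.asarray(I_max, dtype=float)))

# Hypothetical snapshot: three buses (p.u. voltages) and two branches (amps).
J_V = voltage_index(V=[1.02, 0.97, 0.94], V_nom=1.0, dV_tol=0.05)
J_I = current_index(I=[400.0, 650.0], I_max=[600.0, 700.0])
```

Note that, as written in (2.1), deviations of opposite sign can cancel; some implementations instead sum absolute or squared deviations, which is a design choice in the index formulation.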
Performance index measures have the advantage of requiring relatively little computational effort,
since they can be derived from a single power flow solution per contingency. This makes them
advantageous for contingency analysis of large systems, which require analysis of a large number of contingency
scenarios. However, these indexes suffer from several weaknesses which make them less suitable for
multi-level contingency analysis:
• These measures fail if the system becomes unsolvable at the operational level under consideration,
a condition that is increasingly likely for contingencies involving multiple elements.
• It remains unclear which measure (voltage deviation, current loading, power loading, reactive power
loading) is the most important or if it is even appropriate to pick just one of these performance
measures.
• Using more than one of these measures (e.g. combining voltage and transmission line limit vi-
olations) would entail either combining them into one value or considering a two-dimensional
performance index, which would be much harder to compare across a large set of contingencies.
• It is not clear if these measures are useful in contingency scenarios where the system topology is
significantly changed (e.g. in the case of islanding), a condition that is also more likely for higher
level contingencies.
• The results of these performance indexes depend on the loading level and load profile used for the
base (pre-outage) case.
With these issues in mind, it is necessary to consider other methods of quantifying contingency severity,
particularly with the additional consideration of multi-element contingencies.
2.2.3 Voltage Stability Analysis
Another approach for quantifying the fitness of a power network is to use voltage stability analysis
(VSA). This approach seeks to evaluate the stability limit of the power system¹ in order to identify
the maximum power loading that can be sustained under a particular load profile. By measuring the
reduction in maximum loadability observed under a particular contingency in comparison to the base
(pre-outage) scenario, it is possible to quantify the impact that contingency has on the net ability of the
system to supply power to loads.
Voltage stability analysis is commonly performed using an iterative technique called the continuation
power flow (CPF) computation [14, 15]. CPF works by scaling the power injections at all buses on the
system by a scalar value, starting from a nominal level and increasing until the maximum loading point
is reached — beyond which the system is unsolvable [16]. This process iteratively increases the power
scaling factor to quantify the bus voltages on the system as a function of a load scaling parameter, and
is commonly used to identify the power-voltage (P-V) curves of the system; an example of a P-V curve
is shown in Figure 2.1. The value returned by a CPF computation is the scaling factor at the stability
limit, from which the maximum power injections at each bus can be calculated.
The benefit of using CPF for contingency ranking is that the success of the computation does not
depend on a predefined loading level, meaning it does not fail for contingencies where the system is
unsolvable at normal loading levels. On the other hand, CPF is a more involved computation and
requires implementation of high-performance computing techniques to be fast enough for realistic use.
In this report CPF will be used to rank the different contingencies; however, it should be noted that the
visualizations presented could be used with more traditional performance indexes as well.
2.2.4 Summary
This section describes several approaches to evaluating the effect contingencies have on the operation
of a power system and provides a qualitative justification for using continuation power flow to evaluate
multiple contingencies. The next section will unpack the formulation and implementation of continuation
power flow.
1. More precisely, the point of voltage collapse; or, the stability limit of the AC power flow system of equations corresponding to that system.
2.3 Continuation Power Flow
2.3.1 Formulation of Continuation Power Flow
Continuation power flow has long been established as a technique to measure the theoretical loading limit
of a power network [16, 17]. The continuation power flow involves a reformulation of the AC power-flow
equations to include a power scaling factor λ. The AC power balance equations can be written at each
bus as
$$P_k = f_P(V_k, \delta_k) = V_k \sum_{n=1}^{N} Y_{kn} V_n \cos(\delta_k - \delta_n - \theta_{kn}) \tag{2.3}$$

$$Q_k = f_Q(V_k, \delta_k) = V_k \sum_{n=1}^{N} Y_{kn} V_n \sin(\delta_k - \delta_n - \theta_{kn}) \tag{2.4}$$
where Y_kn and θ_kn are the magnitude and angle of entries in the admittance matrix Y, and k represents
the index of the bus in question. Using these equations at each node, a vector power flow equation can
be formulated that describes the entire network as a function of bus voltages and angles in the form
$$P = \begin{bmatrix} P_{Gi} \\ P_{Lj} \\ Q_{Lj} \end{bmatrix} = \begin{bmatrix} f_P(V_i, \delta_i) \\ f_P(V_j, \delta_j) \\ f_Q(V_j, \delta_j) \end{bmatrix} = F(V, \delta) = F(x) \tag{2.5}$$
which contains an equation for each PV bus² (P_Gi) where the angle δ_i is unknown and two equations
for each PQ bus³ (P_Lj and Q_Lj) where the voltage V_j and angle δ_j are unknown; x represents a vector
of functional arguments of the form x = [δ V]^T, where δ contains a value δ_k for each PV and PQ bus,
and V contains a value V_k for each PQ bus. The vectors F and P are of length 2 × n_PQ + n_PV, where
n_PQ (n_PV) is the number of PQ (PV) buses. Given a known set of power injections P and the voltage
set-points at all PV buses, the formula in (2.5) can be solved for the PQ bus voltages V and angles δ
using the Newton-Raphson method.
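The Newton-Raphson solve referred to here can be sketched generically: given callables for F and its Jacobian, iterate on the mismatch P − F(x) until it vanishes. The toy system of equations below is purely illustrative (it is not a power network), and the function names are assumptions rather than any toolbox's API.

```python
import numpy as np

def newton_raphson(F, J, P, x0, tol=1e-8, max_iter=20):
    # Solve P - F(x) = 0: at each step, solve J(x) dx = P - F(x), update x.
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        mismatch = P - F(x)
        if np.max(np.abs(mismatch)) < tol:
            return x
        x += np.linalg.solve(J(x), mismatch)
    raise RuntimeError("Newton-Raphson did not converge")

# Toy 2x2 system: x1^2 + x2 = 2 and x1 + x2^2 = 2, with a solution at (1, 1).
F = lambda x: np.array([x[0]**2 + x[1], x[0] + x[1]**2])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
x = newton_raphson(F, J, P=np.array([2.0, 2.0]), x0=[0.5, 0.8])
```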
The continuation power flow reformulates the AC power flow by introducing a scaling factor into the
power injection vector; to do this, substitute P = λK where λ is a scaling factor we want to control.
K is a vector representing the loading profile at each bus and could be defined in any number of ways;
for a desired initial load profile P o, define K = P o such that the initial load profile would correspond
to λ = 1. Substituting this formulation into (2.5) yields

$$\lambda K = F(x) \tag{2.6}$$

Using this parametrization, the argument vector x takes the form x = [δ V λ]^T.
Having added a variable to this formulation, it is necessary to add an extra equation to ensure that
there is one unique solution to the system of equations. This extra equation can be added by taking
one of the variables in x (denoted x_k, the kth element in x) and defining 0 = x_k − x_k^pr, where x_k^pr is
the predicted value of the variable. The variable x_k is called the continuation parameter; it is integral
to the CPF algorithm since the inclusion of this extra equation causes the system to be relaxed even
at the stability boundary, allowing the Newton-Raphson power flow to solve even at extreme loading
2. Bus with generation or voltage regulation equipment.
3. Bus with no generation or voltage regulation.
[Figure 2.1: P-V curve for bus 7; axes: Power (lambda load scaling factor) versus Voltage (p.u.); legend: Predicted Values, Lambda Continuation, Voltage Continuation, Max Lambda]

Figure 2.1: Example of a power-voltage (P-V) curve obtained using continuation power flow. This curve describes how the voltage at one bus on the system changes as the loads across the system are scaled up; CPF produces a P-V curve at every bus on the system. This curve illustrates the use of an adaptive step size, and shows the three phases corresponding to different strategies for choosing the continuation parameter.
conditions where the equations would otherwise be ill-conditioned.
2.3.2 Continuation Power Flow Algorithm
The continuation power flow algorithm uses a predictor-corrector method to traverse the power-voltage
curve⁴ of the system, which describes how bus voltages droop as the load profile of the system increases.
The predictor step involves using a first-order approximation of the system to move along the P-V curve
a predetermined distance (defined by the step size σ). The corrector step then uses the approximate
solution obtained from the predictor step as an initial point and solves for an exact solution to the power
flow equations (2.6) using the Newton-Raphson method. The prediction step used to obtain x_i^pr, the ith
point⁵, can be formulated as shown in (2.7) and (2.8):

$$x_i^{pr} = x_{i-1} + \sigma \cdot dx \tag{2.7}$$

4. See Figure 2.1 for an example of a P-V curve.
5. In the notation x_i, i refers to the iteration of the CPF algorithm; in contrast, x_k is used to denote the kth element in the vector x.
which, substituting for x, expands to:

$$\begin{bmatrix} \delta_i \\ V_i \\ \lambda_i \end{bmatrix} = \begin{bmatrix} \delta_{i-1} \\ V_{i-1} \\ \lambda_{i-1} \end{bmatrix} + \sigma \begin{bmatrix} d\delta \\ dV \\ d\lambda \end{bmatrix} \tag{2.8}$$

where

$$\begin{bmatrix} d\delta \\ dV \\ d\lambda \end{bmatrix} = \begin{bmatrix} J_F & K \\ e_k & \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} \tag{2.9}$$
Here J_F is the Jacobian of the power flow equations in (2.5) evaluated at x_{i-1}, and e_k is a vector of
zeros with the kth element equal to one, corresponding to the continuation parameter.
The process of the CPF algorithm is as follows:

1. Solve (2.6) for a nominal scaling value, such as λ = 0, to obtain an initial correct voltage reference x_0^corr.

2. Choose a parameter in x to use as the continuation parameter, as well as an appropriate step size σ.

3. Solve (2.7) to get a prediction x_i^pr for the next point.

4. Using the result of step 3 as an initial point, solve (2.6) to get an exact solution, x_i^corr, on the P-V curve.

5. Evaluate the new point to decide which parameter to use as the continuation parameter in the next step.

6. While the curve has not been explored, return to step 3.
This iterative process is repeated until the P-V curve has been described completely. The P-V curve
usually curves down to the nose and then continues to droop as λ is scaled down, giving two possible
voltage points for each value of λ on the range of operation of the system; this is shown in Figure
2.1. Since the value of interest for contingency analysis is the value of λ_max, the computation can be
terminated once the nose of the curve has been identified; this can be accomplished by terminating
the computation once the λ values identified by the corrector step stop increasing and begin to decrease.
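The loop above can be exercised end to end on an analytic P-V curve. The sketch below traces the upper branch of a lossless two-bus system, whose curve satisfies V⁴ − (EV)² + (XλP₀)² = 0 with a nose at λ_max = E²/(2XP₀); it uses a tangent predictor, a Newton corrector with the extra continuation-parameter equation, and a slope-based switch of the continuation parameter from λ to V near the nose. All parameter values are assumed for illustration; this is not the implementation used in this work.

```python
import numpy as np

E, X, P0 = 1.0, 0.5, 1.0          # assumed two-bus parameters (lossless, Q = 0)

def F(V, lam):
    # Implicit P-V curve: F(V, lam) = 0 along the curve.
    return V**4 - (E * V)**2 + (X * lam * P0)**2

def dF(V, lam):
    # Gradient [dF/dV, dF/dlam], used by both predictor and corrector.
    return np.array([4 * V**3 - 2 * E**2 * V, 2 * X**2 * lam * P0**2])

def cpf(step=0.1, slope_max=2.0, tol=1e-10):
    x = np.array([1.0, 0.0])      # step 1: start on the upper branch, V = E, lam = 0
    lam_max, prev_t = 0.0, np.array([0.0, 1.0])
    for _ in range(200):
        g = dF(*x)
        # steps 2/5: continue on lam (index 1) while |dV/dlam| is small, else on V
        k = 1 if abs(g[1] / g[0]) < slope_max else 0
        # step 3 (predictor): tangent t with grad(F) . t = 0 and |t[k]| = 1
        t = np.zeros(2)
        t[k] = 1.0
        t[1 - k] = -g[k] / g[1 - k]
        if t @ prev_t < 0:
            t = -t                # keep traversing the curve in one direction
        prev_t = t
        x_pr = x + step * t
        # step 4 (corrector): Newton on [F(V, lam); x[k] - x_pr[k]] = 0
        y = x_pr.copy()
        for _ in range(30):
            r = np.array([F(*y), y[k] - x_pr[k]])
            if np.max(np.abs(r)) < tol:
                break
            y -= np.linalg.solve(np.vstack([dF(*y), np.eye(2)[k]]), r)
        if y[1] < x[1]:           # step 6 exit: lam is decreasing, nose passed
            return lam_max
        x, lam_max = y, max(lam_max, y[1])
    return lam_max

lam = cpf()
```

With E = 1, X = 0.5 and P₀ = 1 the analytic limit is λ_max = 1, which the traversal recovers just before λ begins to decrease past the nose.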
2.3.3 Choice of Continuation Parameter
Choice of continuation parameter can have a profound effect on the performance of CPF. In general,
it is desirable to use whatever state variable has the largest rate of change with respect to the other
variables [16] near the point where the approximation is taken for the prediction step. This will ensure
that the values of all other state variables will be changing slower, improving the accuracy of prediction
and allowing the P-V curve to be explored more quickly. For the majority of the CPF computation,
the best choice for the continuation parameter is λ; it changes quickly during the early part of the P-V
curve, where the slope of the voltage curves with respect to λ is low. However, as the CPF algorithm
nears the nose of the P-V curve, the slope of the power-voltage curve increases, leading to potentially
higher prediction error and resulting in a greater number of iterations of Newton-Raphson to find the
exact solution x_i^corr at each step of CPF. In addition, λ starts to decrease
once the nose of the P-V curve has been reached; beyond the nose of the curve there will be no solution
for the desired value of λ, causing the algorithm to stall when λ is scaled past λ_max. For these reasons it
is better to choose a bus voltage as the continuation parameter near the nose of the P-V curve, allowing
λ_i to be identified in the corrector step.
The mechanism for choosing an appropriate variable to use as the continuation parameter consists of
breaking the CPF computation into three phases. In the first phase, while the slope of the P-V curve is
small, λ is used as the continuation parameter and is scaled up according to the chosen step size. After
each corrector step, the slope dV_i/dλ is calculated for each PQ bus as

$$\frac{dV_i}{d\lambda} = \frac{V_i - V_{i-1}}{\lambda_i - \lambda_{i-1}}$$

for the ith solution, such that dV_i/dλ is of length n_PQ⁶. Phase one ends when the maximum slope crosses
a threshold dV_max, such that

$$\max\left(\frac{dV_i}{d\lambda}\right) > dV_{\max}$$
The threshold dV_max defines how far to travel along the P-V curve before leaving phase one, and needs to
be chosen in such a way as to switch phases at an optimal point. There are two criteria to be considered
in choosing this parameter:
1. A larger threshold will allow more of the curve to be explored during the first phase of CPF, when
λ is the continuation parameter; this usually results in fewer iterations being required to traverse
the P-V curve, especially in applications where the precise shape of the curve is not of interest
away from the point of maximum loading.
2. A smaller threshold will reduce the risk of stepping past the P-V curve nose (along the λ axis) during
phase one, an event which could lead to problems during the corrector step: the corrector may not
converge, requiring the algorithm to abort; alternatively the corrector might converge on a point
further around the curve than the nose, causing the maximum value of λ to be underestimated.
Though it is possible to slightly reduce the total number of iterations of CPF by using a high slope
threshold dV_max, the costs of staying in phase one for too long have the potential to be very severe,
causing the computation to be aborted or causing the nose of the P-V curve to be overshot; for this
reason it is better to choose a conservative slope threshold. In this research, the threshold was set to
dV_max = 0.5, a value identified through trial and error.
In the second phase, a bus voltage is chosen as the continuation parameter; rather than increasing
λ by the step size, the bus voltage V_k (where k indicates the chosen PQ bus) is scaled down and the
associated value of λ is found in step 4 along with the other bus voltages. After each corrector step in
phase two, the bus voltage slopes are re-evaluated and the bus with the highest slope is chosen as the
continuation parameter in the next iteration. Phase two ends when the P-V curve has been traversed
past the nose and the slope of the P-V curve falls back below the slope threshold. In phase three the
algorithm returns to using λ as the continuation parameter.
6. In the notation for vector V_i, i refers to the iteration of CPF; in contrast, the notation V_k refers to the kth element in the vector V.
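The three-phase mechanism can be sketched as a small decision function applied after each corrector step. The threshold dV_max = 0.5 is the value used in this work, while the function signature and phase encoding are illustrative assumptions.

```python
import numpy as np

DV_MAX = 0.5  # slope threshold dV_max from the text

def next_phase(phase, V, V_prev, lam, lam_prev):
    # V, V_prev: vectors of PQ-bus voltages at the last two corrected points.
    # Returns (next phase, index of the continuation bus, or None for lambda).
    slopes = np.abs((V - V_prev) / (lam - lam_prev))
    steep = np.max(slopes) > DV_MAX
    if phase == 1 and steep:
        return 2, int(np.argmax(slopes))   # enter phase two on the steepest bus
    if phase == 2 and steep:
        return 2, int(np.argmax(slopes))   # re-pick the steepest bus each step
    if phase == 2:
        return 3, None                     # slope fell back below dV_max: phase three
    return phase, None                     # stay in phase one (or three) on lambda

# Flat region of the curve: both slopes are 0.1 < DV_MAX, so phase one continues.
phase, bus = next_phase(1, np.array([0.99, 0.98]), np.array([1.0, 0.99]), 0.5, 0.4)
```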
A detailed implementation of continuation power flow is described in the source code in Appendix B.
2.4 Modifications to the Continuation Power Flow
The previous section described the CPF algorithm as it was originally defined. The CPF algorithm
produces an evaluation of a power system that is suitable for use in contingency analysis; however, the
computational effort required to execute this algorithm in its original formulation makes it unwieldy
for larger systems. This limitation has significant impact for multi-level contingency analysis, in which
the number of potential fault scenarios grows exponentially with the number of simultaneous faults
involved. To perform contingency analysis with continuation power flow in a reasonable amount of time,
it is necessary to implement some modifications to the algorithm in order to improve performance and
flexibility of the computation for a large set of scenarios. This section will explore two modifications
made to continuation power flow to improve its computational performance.
2.4.1 Adaptive Step Size
The step size σ is a key parameter for controlling the trade-off between speed and precision in the
execution of continuation power flow. The step size dictates how far the next point will be along the
P-V curve, and since the predictor step uses a first-order approximation to obtain x_i^pr, increasing the
distance from the point of approximation (x_{i-1}) causes the accuracy of the prediction to be reduced,
increasing the number of iterations needed to solve the corrector step. The step size also affects the number of
iterations of continuation power flow required to traverse the P-V curve, since a larger step size would
allow the curve to be described with fewer points. These two dynamics introduce a design trade-off for
optimal computation speed:
1. The accuracy of the prediction step directly affects the number of iterations needed to find an
exact solution in the correction step of the CPF algorithm. Larger step sizes will lead to prediction
points that are further from the P-V curve; this means the starting point for Newton-Raphson
will be less optimal, requiring more iterations to solve. Smaller step sizes will induce the opposite
effect. In addition, a prediction error that is sufficiently large could cause the correction step to fail
to converge, which would cause the CPF computation to stall. From this perspective it is desirable
to make the step size smaller in order to limit the number of iterations and avoid non-convergence.
2. The overhead associated with each iteration of the predictor-corrector scheme — calculating a first-
order approximation and setting up the Newton-Raphson algorithm to converge in one iteration —
means that, holding constant the number of iterations to converge on xcorri , the total time it takes
to describe the P-V curve will increase with the number of points used. From this perspective it
is desirable to make the step size larger.
These two requirements can be balanced by choosing a moderate value for σ with the goal of limiting
the prediction error to within an acceptable range. However, in order to avoid non-convergence of the
corrector step in the vicinity of the voltage stability limit (when the P-V curve starts to droop quickly
and prediction becomes less accurate), much of the CPF computation will be run with a step size that
is smaller than is desirable, leading to more points than necessary in the first phase of the computation
and to increased computation times.
Within the context of contingency analysis, there is a broader challenge in selecting an appropriate
step size which stems from the variation in the different scenarios which must be analyzed. Each scenario
might benefit from a different step size for maximum performance of the algorithm; it may be impossible
to choose a step size that is big enough to allow contingency analysis to be completed in a reasonable time
while ensuring that the power flow calculations converge for every iteration of CPF, in each scenario.
This is especially true given that the loadability conditions of each scenario are expected to vary widely.
In addition to this, a step size that might work for a large network would likely be too small for a smaller
network, requiring that a step size be identified for each new system to be examined and for different
load profiles as well.
In order to deal with this problem, a variable step size is proposed that adjusts σ to control the
prediction error (x^corr − x^pr) [18]. The step size adjustment fits into the CPF algorithm in the following
manner:
1. Solve (2.8) to get a prediction x_i^pr for the next point.

2. Using the resulting x_i^pr as an initial point, solve equation (2.6) to get an exact solution on the P-V curve, x_i^corr.

3. Measure the average prediction error |x_i^corr − x_i^pr| and compare it to an acceptable error range;
if the error is too large, the step size should be reduced to avoid non-convergence and reduce
the number of iterations required for correction; if the error is too small, the step size should be
increased to make the computation progress faster.
The desired prediction error range should be chosen with the goal of achieving convergence of each
Newton-Raphson power flow within two to four iterations on average. It is not necessary to precisely
control the number of iterations since, in addition to the distance from the starting point, other factors
such as network topology and nearness to the voltage stability limit also have a significant effect on the
number of iterations required to converge. Code Snippet 1 describes the process for updating the step
1  if abs(log(error / DESIRED_ERROR)) > LOG_ERROR_THRESHOLD:
2      new_step_size = step_size - scaling_factor * log(error / DESIRED_ERROR)
3
4      if MIN_STEP_SIZE < new_step_size < MAX_STEP_SIZE:
5          step_size = new_step_size
6      else:
7          end_phase()

Code Snippet 1: Pseudo-code describing how to update the step size after completing the prediction and correction steps of an iteration of continuation power flow.
size during an iteration of the CPF algorithm. The use of a logarithmic comparison of error to a desired
error level (line 1) ensures that the adaptive algorithm is not too restrictive; it is enough to ensure that
the error is within an order of magnitude of the desired error. The rate of adjustment (line 2) is also
logarithmic; this serves to dampen the response of step size adaptation to large changes in error, ensuring
that the step size is less likely to oscillate between too-small and too-large values.
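The update rule in Code Snippet 1 can be made concrete as a small function. The numeric constants below (desired error, log threshold, step bounds, scaling factor) are assumed tuning values for illustration; the text specifies the structure of the rule but not these numbers.

```python
import math

DESIRED_ERROR = 1e-3                  # assumed target average prediction error
LOG_ERROR_THRESHOLD = math.log(10.0)  # react only outside one order of magnitude
MIN_STEP, MAX_STEP = 1e-4, 1.0        # assumed step-size bounds
SCALING = 0.05                        # assumed damping factor for the update

def update_step_size(step, error):
    # Returns the new step size, or None to signal that the phase should end
    # (the updated step fell outside the allowed range).
    log_ratio = math.log(error / DESIRED_ERROR)
    if abs(log_ratio) <= LOG_ERROR_THRESHOLD:
        return step                   # error close enough: leave the step alone
    new_step = step - SCALING * log_ratio
    if MIN_STEP < new_step < MAX_STEP:
        return new_step
    return None

step = update_step_size(0.1, 1e-5)    # error two orders too small -> grow the step
```

Because the adjustment is proportional to log(error / DESIRED_ERROR), an error two orders of magnitude too small grows the step only modestly (here from 0.1 to about 0.33), which is the damped behaviour described above.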
The adaptive step size modification adds several edge cases to the continuation power flow algorithm:
• It is necessary to implement a maximum and minimum step size (line 4 of Code Snippet 1). A
minimum step size ensures that if the CPF algorithm gets stuck for whatever reason, it can fail
out rather than attempting to continue with unreasonably small step sizes. A maximum step size
ensures that in trivial cases where the prediction error is zero or very small, the step size does not
become unreasonably large.
• With an adaptive step size there is the concern that, as the continuation parameter changes, the
tolerance for prediction error may decrease quickly, leading to non-convergence of the correction
step because the step size is too large. To overcome this problem, it is necessary to double-check the
step size when the correction step fails; if the step size is greater than the minimum step size, the
iteration should be attempted with a smaller step size rather than allowing the computation to
fail.
The implementation of adaptive step sizing is detailed in Appendix B.1.
Summary of Adaptive Step Size
The adaptive step size modification increases the complexity of CPF, but it also greatly improves the
flexibility and adaptability of the algorithm to varying input systems and reduces the computation time
necessary to reach the voltage stability limit of a system. For this reason it is integral to the use of CPF
for voltage stability analysis and contingency analysis of large sets of contingency scenarios.
2.4.2 Lagrange Polynomial for Prediction
The original formulation of the continuation power flow algorithm employed a first-order approximation
of the power flow equations — described in equation (2.8) — with respect to the continuation parameter
in order to make a prediction about what the next value on the curve would be. Li and Chiang [18]
proposed a higher-order non-linear fitting scheme which makes use of polynomial interpolation to obtain
a more accurate approximation of the P-V curve. Polynomial approximation exploits the fact that
the power-voltage curve is generally concave and well-behaved⁷. This
causes the results of the predictor to be closer to the exact solution, necessitating fewer iterations of the
Newton-Raphson power flow to converge during the correction step (for the same step size).
For a set of known points (x_0, y_0), (x_1, y_1), ..., (x_k, y_k) the Lagrange polynomial is defined as [19]

$$L(x) = \sum_{j=0}^{k} y_j \, l_j(x) \tag{2.10}$$

where

$$l_j(x) = \prod_{\substack{m=0 \\ m \neq j}}^{k} \frac{x - x_m}{x_j - x_m} \tag{2.11}$$

By choosing {x_0, x_1, ..., x_k} and {y_0, y_1, ..., y_k} to be previous correct samples of the continuation parameter, (2.10) can be used to predict the value of any other parameter at x = x_k + σ. This formulation
fits a polynomial of degree k to the given points. In practice it is necessary to generate the initial points
on the P-V curve using the first-order approximation technique, since polynomial interpolation depends
on known points. In addition, the interpolation should only use points that are close by (e.g. the last 6
points found on the P-V curve) to limit the order of the resulting polynomial and the computation time.
7. That is, the slope of the curve changes slowly.
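Equations (2.10) and (2.11) translate directly into code. The sketch below evaluates the interpolating polynomial through the previous corrected samples at the predicted abscissa x_k + σ; it is a direct, unoptimized transcription for illustration.

```python
import numpy as np

def lagrange_predict(xs, ys, x_next):
    # Evaluate L(x_next) for the polynomial through (xs[j], ys[j]) -- eq. (2.10).
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    total = 0.0
    for j in range(len(xs)):
        lj = 1.0                      # basis polynomial l_j(x_next) -- eq. (2.11)
        for m in range(len(xs)):
            if m != j:
                lj *= (x_next - xs[m]) / (xs[j] - xs[m])
        total += ys[j] * lj
    return total

# The interpolant reproduces any polynomial of degree < len(xs) exactly,
# e.g. y = x^2 sampled at x = 0, 1, 2 and extrapolated to x = 3 gives 9.
y3 = lagrange_predict([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)
```

As noted above, in practice only the last few corrected points (e.g. six) would be passed in, keeping the polynomial degree and the O(k²) evaluation cost small.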
[Figure 2.2: Linear Predictor vs. Lagrange Predictor; axes: Power (lambda load scaling factor) versus Voltage (p.u.); legend: previous predicted voltages, previous corrected voltages, linear prediction function, lagrange prediction function, linear prediction, lagrange prediction, corrected value]

Figure 2.2: Comparison of the linear tangential predictor scheme versus the Lagrange polynomial interpolation predictor for continuation power flow.
             30-bus system    118-bus system
linear       0.468 ms         1.051 ms
Lagrange     0.115 ms         0.157 ms
speedup      75.4%            85.1%

Table 2.1: Performance benchmark comparing execution times for linear approximation versus Lagrange polynomial approximation in the prediction step of continuation power flow, averaged from 4500 executions of each algorithm.
The advantage of the polynomial formulation of the predictor over the first-order approximation is
a much closer approximation of the P-V curve during the predictor step of the CPF algorithm, allowing
the step size to be increased without increasing the convergence time of the corrector step — resulting
in fewer points to describe the P-V curve. In addition, the evaluation of the Lagrange polynomial
is computationally faster than first-order approximation; whereas a first-order approximation has at
best sub-quadratic time complexity⁸, the Lagrange polynomial has linear time complexity⁹ in the
system size¹⁰. Table 2.1 shows a benchmark comparing the average computation time for linear versus
Lagrange approximation for two systems, demonstrating a marked performance improvement of Lagrange
approximation over linear.
Complications arising from Lagrange Interpolation
There are two challenges that arise in implementing the Lagrange polynomial for finding approximate
solutions to the P-V curve:
• The Lagrange polynomial scheme requires two or more known points in order to produce an
interpolant.
• With large changes in sample spacing over the input data, the Lagrange polynomial can quickly
diverge from the actual P-V curve, giving inaccurate predictions and even leading to non-convergence
in the correction step. This issue arises when adaptive step sizing is also being used.
The requirement to have known points for polynomial approximation can be resolved by priming
the continuation power flow, using a linear approximation in the first iteration of the algorithm. After
solving for an initial correct point of zero loading, the first iteration can use a linear approximation to
identify the next approximate solution point. After two known points have been obtained, subsequent
iterations of CPF can utilize the Lagrange polynomial for the approximation step.
The challenge of using polynomial approximation with a changing step size is more complex to
address, since it involves an interaction with the adaptive step size. Figure 2.3 shows a comparison
of Lagrange extrapolation when the step size changes abruptly (Figure 2.3a) as compared to when the
step size changes gradually over several iterations (Figure 2.3b). In the latter graph, the approximate
function generated by the Lagrange polynomial is close to the actual function well outside of the region
of interest (around λ = 100), and produces a good initial point for the correction step; however, in
8. Computing (2.9) requires solving a linear system of equations, which has an upper bound of O(n^2.376) [20], where n is the number of buses in the system.
9. The prediction step computes (2.10) once for each variable in x, of length 2 × n_PQ + n_PV; since this value is related to the number of buses on the system, the computation is O(n).
10. The evaluation of (2.10) and (2.11) is quadratic in k, the number of samples (i.e., it is O(k²)); however, k is generally small and fixed, meaning that it does not affect how the algorithm scales to larger systems.
[Figure 2.3, panels (a) and (b): axes: Power versus Voltage (p.u.); legend: actual PV curve, known points, lagrange curve, predicted value]

Figure 2.3: Comparison of performance of the Lagrange polynomial interpolation scheme for continuation power flow (a) with a sudden change in step size and (b) with a gradual change in step size. This demonstrates that with a rapid change in step size, the Lagrange polynomial can quickly diverge from the real function. The known points have been generated from a cubic polynomial and random perturbations.
computation time for:           four elements   30-bus system   118-bus system
baseline                        26.158 s        1:27:08 hrs     43:53:04 hrs
with polynomial approximation   20.340 s        1:10:39 hrs     30:47:11 hrs
with adaptive step-size         2.952 s         0:14:17 hrs     5:28:10 hrs
with both                       1.791 s         0:09:04 hrs     3:26:21 hrs

Table 2.2: Performance benchmark comparing execution times for continuation power flow with various modifications. The column four elements contains results for all possible contingencies (up to n − 4) involving four select branches in the IEEE 30-bus test system, totaling 15 scenarios; the column 30-bus system displays times to compute all possible n − 1 and n − 2 contingencies on the IEEE 30-bus test system, totaling 3240 scenarios; and the column 118-bus system displays times to compute all n − 1 and n − 2 contingencies on the IEEE 118-bus system, totaling 64261 scenarios.
the former graph the approximate function quickly diverges from the actual function; the voltage value
produced at λ = 100 is −2.729 p.u., which would likely result in non-convergence of the correction
step.
The challenge of utilizing Lagrange interpolation with changing step-size can be overcome by tuning
the behaviour of the adaptive step size algorithm. The rate at which the step size adapts to the error
should be limited, and the threshold for changing the step size (line 1 of Code Snippet 1) should be small
so that the changes in step size are gradual and so that large changes in step size are spread out over
several iterations of CPF. In addition, before executing the Lagrange prediction, the step sizes between
the input points should be evaluated; if there is a large change in step size between the input points
to prediction, the algorithm should fall back to linear approximation. The latter condition is relevant
especially at the boundary between phases, where the step size often changes significantly in both the
fixed step-size and adaptive step-size approaches.
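The spacing check and linear fall-back can be sketched as a guard around the predictor. Here the interpolating polynomial is obtained with a polynomial fit through all supplied points (mathematically equivalent to the Lagrange form), and the ratio threshold max_ratio is an assumed tuning value.

```python
import numpy as np

def safe_predict(xs, ys, sigma, max_ratio=2.0):
    # Predict the next value of a state variable ys at xs[-1] + sigma,
    # falling back to first-order (linear) prediction when the spacing of
    # the previous continuation-parameter samples changes too abruptly.
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    gaps = np.abs(np.diff(xs))
    if len(xs) >= 3 and np.max(gaps) / np.min(gaps) <= max_ratio:
        # spacing even enough: interpolating polynomial through all points
        coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)
        return float(np.polyval(coeffs, xs[-1] + sigma))
    # abrupt spacing change: linear extrapolation from the last two points
    return float(ys[-1] + (ys[-1] - ys[-2]) / (xs[-1] - xs[-2]) * sigma)
```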
Summary of Lagrange Approximation
The Lagrange approximation offers increased accuracy in the prediction step of continuation power flow,
allowing for larger step sizes and fewer iterations to solve for the voltage stability limit, while also being
faster to compute than the alternative linear approximation. The cost of using the Lagrange method is
increased complexity, requiring a fall-back mechanism in the event that conditions are not suitable for
using it. This trade-off greatly benefits the performance of CPF. The integration of Lagrange polyno-
mial prediction into CPF and the implementation of the Lagrange polynomial interpolation scheme are
detailed in Appendix B.3.
2.4.3 Quantification of Performance Gains in Continuation Power Flow
The goal of developing algorithmic enhancements to continuation power flow is to bring massive computations of multi-level contingency analysis to a performance level that enables quick turnaround times.
Table 2.2 contains a benchmark comparison of continuation power flow calculations with and without
the techniques described previously; the benchmarks show that adaptive step sizing contributes the larger
share of the improvement. Together, adaptive step sizing and Lagrange polynomial approximation yield
significant gains in computation time for continuation power flow. Figure 2.5 gives a summary of how
these modifications enhance the performance of continuation power flow.
2.5 Performing Multi-level Contingency Analysis
Beyond the selection of a tool for quantifying the effect of a contingency, multi-level contingency analysis
requires the development of techniques for consistent application of the tool to a wide range of contin-
gency scenarios. This research made use of the Matpower toolbox [21], which had an implementation of
continuation power flow without the modifications described in previous sections of this thesis. In addi-
tion, it did not include functionality to scale more than one load during CPF, or to define participation
factors for loads.
2.5.1 Dealing with Islanding
The issue of islanding is a challenging problem for contingency security, one that becomes even more
difficult when multi-element contingencies are considered. For single-element contingencies to result in
islanding, there must be a single bus or a sub-system connected to the rest of the network by only one
branch, such that the outage of that branch or either of its terminal buses is enough to disconnect that
bus or sub-system from the rest of the network. This places a restriction on the number of contingency scenarios
that can lead to islanding and on the size of islanded loads or sub-systems that result from a single
element outage. In contrast, when multiple elements are faulted, there are many more contingencies
that can result in islanding; in addition, larger sub-systems involving multiple branches, buses, loads
and generators can be stranded. This makes islanding an even bigger concern for contingency security
and necessitates accurate tools to identify and measure the severity of instances of islanding.
Regardless of whether or not multi-element contingencies are considered, evaluating power systems
with islanding presents a difficult challenge. Traditional performance indexes (e.g. voltage deviations
and line loading parameters) are ill-suited in scenarios where the topology of the system is significantly
changed, since the incremental effect of a fault on these indexes may vary in accordance with gross
changes in system topology; in addition, these measures may not reflect the significant effect of islanding
on the ability of the system to supply power to specific loads which may be isolated during an islanding
scenario.11 Conversely, continuation power flow provides a robust evaluation of a power system even when
islanding occurs, since it explicitly measures the reduction in loadability caused by the contingency —
such a value can be compared irrespective of system conditions under which it was obtained. In addition,
continuation power flow is also able to evaluate an isolated sub-system for its capacity to operate, giving a
quantification of the incremental benefit that can be achieved by implementing equipment and protocols
to maintain operation of stranded systems.
The flow diagram in Figure 2.4 describes the algorithm for traversing a network in order to identify
islanding. The approach used in this algorithm is to explore the network by traveling from bus to bus
around the structure defined by the branches and buses in a network; each step involves either traveling
along an untraveled branch to another bus — which may or may not have been previously visited —
or jumping to a previously unvisited bus, one that is not connected to the current bus by a branch. A
jump would be performed as the initialization step for the exploration, or any time a bus has no more
untraveled branches attached to it. Each time a jump is performed, the newly visited bus exists on a
11 Paradoxically, in cases where significant loads are isolated, the isolation of buses or sub-systems could actually result in improvement in line loading levels and a reduction in voltage deviation on the part of the system which remains active.
[Flow chart for Figure 2.4 appears here: jump to an unvisited bus and create a new network listing; mark buses as visited and merge network listings when a visited bus is reached; move along untravelled branches until none remain; finish when no unvisited buses exist.]
Figure 2.4: Flow chart describing the algorithm for island detection. This algorithm travels from bus to bus along the branches of the system, avoiding branches that have already been traveled, in order to identify buses that are connected to the network. By tracking sets of buses that have been visited via branch traversal, it is possible to identify regions of the network that are isolated by a branch that is disconnected.
potentially islanded sub-network; this new sub-network can be explored by traveling from bus to bus
along untraveled branches. When a bus is encountered that has previously been visited but belongs to a
different sub-network, it can be deduced that the two sub-networks are in fact connected, and their listings
can be merged. In this way the system can be explored and all islanded sub-networks identified by traveling
each branch at most once and by performing at most (but in general far fewer than) one jump per
bus. The algorithm is finished when every bus in the system has been visited, after which any branches
which have not yet been traveled must be checked to see if they connect separated sub-networks. A full
implementation of islanding detection and resolution is contained in Appendix A.4.
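The traversal in Figure 2.4 amounts to discovering connected components. A minimal sketch follows, in Python with hypothetical data shapes (bus ids and in-service branch pairs) rather than the MATLAB/Matpower structures used in this work:

```python
from collections import defaultdict, deque

def find_islands(buses, branches):
    """Group buses into islanded sub-networks.

    buses: iterable of bus ids; branches: iterable of (from_bus, to_bus)
    pairs for the branches still in service after the contingency.
    Each branch is traveled at most once, and a "jump" to an unvisited
    bus starts each new sub-network, mirroring the traversal of Fig. 2.4.
    """
    adj = defaultdict(list)
    for f, t in branches:
        adj[f].append(t)
        adj[t].append(f)
    visited, islands = set(), []
    for bus in buses:
        if bus in visited:
            continue
        # jump to an unvisited bus and create a new network listing
        island, queue = set(), deque([bus])
        visited.add(bus)
        while queue:
            b = queue.popleft()
            island.add(b)
            for nxt in adj[b]:  # move along untraveled branches
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append(nxt)
        islands.append(island)
    return islands
```

Removing a bridging branch from the branch list causes the buses on either side to appear in separate islands, each of which could then be passed to continuation power flow independently.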
The inclusion of islanding in contingency analysis does not itself address the significant challenges
associated with coordinating the operation of islanded sub-networks within a power system. Islanded
networks may require special considerations to maintain adequate voltage regulation and stable oper-
ation, and they may still be susceptible to further outages. However, contingency analysis provides a
method to evaluate what gains to system security may be achieved by developing protocols and installing
equipment that allow islanded systems to continue operating. The use of continuation power flow to
evaluate loadability of systems with islanding could provide insight to whether the implementation of
islanded operation capabilities is feasible or economically viable.
2.5.2 Implementing Participation Factors
The implementation of participation factors for loads in continuation power flow is encoded in the
formulation described in equation 2.6. The vector K contains an array of participation factors such
that P^o = λ^o K, where P^o defines the load profile — containing power injections at each bus to match
equation 2.5 — and λ^o is the scaling factor corresponding to the load profile.
This parametrization of continuation power flow in terms of a particular load profile is a key qualifier
to how the resulting maximum loading reflects the severity of a particular contingency scenario. The
results of contingency analysis at one load profile only give insight into the severity of faults at that
loading profile; this means that in the context of planning, it is necessary to choose an appropriate load
profile in order to provide a realistic picture of contingency risk. In the context of operations, this requires
that contingency analysis be re-computed to match changes in loading and generation scheduling over
time.
For this research, it was necessary to extend the existing capability of the Matpower [21] package for
evaluating continuation power flow. The Matpower implementation formulated CPF to include λ as a
scaling factor on the real and reactive power injections at only one bus. This formulation is inadequate
for contingency analysis as it leads to gross distortion of the load profile for well-conditioned systems,
where the power injection at the bus of choice would be increased far beyond any load profile that would
be used in operation — and far beyond a reasonable power injection at that site. The formulation in
equation (2.6) distributes the scaling of power injections among all buses, preserving the load profile of
the system.
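One common way to realize this distributed scaling is sketched below. This is an illustrative Python sketch, not the actual extension made to Matpower; the function name is hypothetical, and the participation vector is normalized so that λ measures total added load while the shape of the load profile is preserved:

```python
import numpy as np

def scale_loads(p_base, participation, lam):
    """Scale bus power injections for one CPF step.

    p_base: base-case injections at each bus; participation: vector K of
    participation factors; lam: continuation parameter. The added load
    lam is distributed among all buses in proportion to K, so no single
    bus is driven far beyond a realistic injection.
    """
    k = np.asarray(participation, dtype=float)
    k = k / k.sum()  # normalize participation factors to sum to 1
    return np.asarray(p_base, dtype=float) + lam * k
```

Choosing K proportional to the base-case loads reproduces uniform scaling of the whole profile, while other choices of K let the planner stress particular regions of the system.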
2.5.3 Application of Parallel Computing
The application of continuation power flow to multiple contingency analysis presents a significant chal-
lenge because of the sheer number of contingencies that must be considered. The number of contingency
cases grows exponentially with the level of contingencies considered and the number of elements in a
[Bar chart for Figure 2.5 appears here: speed multiplication factors for the four-element, 30-bus-system, and 118-bus-system benchmark sets under polynomial fitting, adaptive step size, the combination of the two, and the combination including parallel computing.]
Figure 2.5: Graph summarizing performance gains resulting from modifications to continuation power flow, including polynomial fitting, adaptive step size, the combination of the two algorithms, and the combination of the two algorithms in addition to the use of parallel processing on a four-core CPU. Performance gain is calculated as a multiplicative increase in the rate of computation of contingency analysis; each rate is calculated as a/b, where a is the time to compute before adding the modification and b is the time to compute afterwards. The contingency sets used for these benchmarks are the same as those described in Table 2.5.
system, becoming unmanageable for larger networks at higher levels of contingency analysis. One key
improvement that can be made to the speed of contingency analysis is to utilize multiple processes to
speed up the rate at which a set of contingencies can be analyzed. This technique can leverage hardware
capabilities of multi-core processors and the fact that continuation power flow is primarily a CPU-bound
operation to compute individual continuation power flow solutions simultaneously.
The implementation of parallel processing for contingency analysis is made simple by the multi-
threading capabilities of modern computing languages. The continuation power flow algorithm was
implemented using MATLAB, which provides a simple mechanism to distribute the iterations of a loop
to individual workers that each act as a separate process in the operating system. This mechanism
allows a performance speedup in proportion to the number of CPU cores available. Table 2.3 gives
an example of the speedup made possible by parallel computing techniques. The benchmark describes
the performance gains achieved by enabling parallel acceleration on a desktop-class, quad-core CPU for
three sets of contingency analyses ranging in size from 15 to 64,261 scenarios. With one worker assigned
to each CPU core, each core analyzes a separate contingency; Table 2.3 shows that for the larger
contingency sets, the total computation time is divided approximately by the number of workers
available. Figure 2.5 summarizes the performance
gains achieved by using parallel processing in addition to the modifications to CPF described earlier.
Parallel computing techniques provide a simple mechanism for introducing hardware scaling to further
reduce the computation time for contingency analysis. Contingency analysis can be discretized to a
single unit per contingency, so that each individual fault scenario can be run independently of the
others; this allows large sets of contingencies to be distributed over an arbitrarily large number of cores without
computation time for:   four elements   30-bus system   118-bus system
one worker              1.791 s         09:04 min       3:26:21 hrs
two workers             1.4467 s        04:32 min       1:47:50 hrs
three workers           1.3333 s        03:09 min       1:12:14 hrs
four workers            1.3415 s        02:45 min       0:55:28 hrs

Table 2.3: Performance benchmark comparing execution times for continuation power flow with and without the use of multi-processing. The one worker case is the equivalent of not using parallel computing, since with one worker all jobs are still completed sequentially. The column four elements contains results for all possible contingencies (up to n − 4) involving four select branches in the IEEE 30-bus test system, totaling 15 scenarios; the column 30-bus system displays times to compute all possible n − 1 and n − 2 contingencies on the IEEE 30-bus test system, totaling 3240 scenarios; and the column 118-bus system displays times to compute all n − 1 and n − 2 contingencies on the IEEE 118-bus system, totaling 64261 scenarios.
considering the number of faults involved or the ordering. This makes the application of computing
clusters to contingency analysis trivial. MATLAB’s Distributed Computing Toolbox makes this adapta-
tion seamless, running individual loop iterations in parallel on whatever processes are available — local
or on a computing cluster — and many other programming languages also provide tools to integrate
computing clusters.
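The pattern can be sketched as follows, using Python's multiprocessing in place of MATLAB's parfor, with a toy severity function standing in for a full CPF solve (the function names are illustrative):

```python
from multiprocessing import Pool

def evaluate_contingency(outage):
    """Stand-in for one CPF run: returns a toy severity for an outage set.

    In the real analysis this would solve continuation power flow for the
    network with the listed elements removed; here it just sums element
    ids so the example is self-contained.
    """
    return sum(outage)

def analyze(contingencies, workers=4):
    # each fault scenario is independent, so the loop parallelizes trivially
    with Pool(workers) as pool:
        return pool.map(evaluate_contingency, contingencies)

if __name__ == "__main__":
    cases = [(1,), (2,), (1, 2), (1, 3)]
    print(analyze(cases))
```

Because each scenario carries no dependency on the others, the same loop body can be handed to a cluster scheduler with no structural change.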
2.6 Chapter Summary
This chapter explored several aspects of how contingencies can be evaluated to produce a robust, re-
peatable and reliable measure of contingency severity that is applicable to multi-element contingencies,
in addition to single-element or n − 1 contingencies. Special attention was given to the development
of algorithms for improving the performance of continuation power flow. While these developments do
not eliminate all concern with respect to the computational burden of performing contingency analysis
that includes multi-element contingency scenarios, they bring it into the realm of feasibility for appro-
priately sized systems at a reasonable depth; that this performance level can be achieved using scalable
commodity hardware is promising for the implementation of multi-level contingency analysis with quick
turn-around.
Chapter 3
Visualizing Multi-Level Contingency Data
Figure 3.1: Connection between tree diagrams and treemaps. a) A simple tree diagram showing four elements. b) Treemaps of elements at a single level. c) A treemap combining two levels.
3.1 Introduction
A major component of this research is the exploration of techniques for creating meaningful visualizations
of contingency data. There have been few attempts at visualizing contingency analysis in the past, and
none that have considered multi-level contingency data. This research presents a ground-level exploration
toward effective visualization of such data.
The inclusion of multi-element contingencies complicates summarization and visual display of the
results of contingency analysis. Even the traditional approach of displaying contingencies in an ordered
list [4, 2] is ineffective since n − 2 contingencies cannot be directly compared to n − 1 contingencies;
though they are less likely to occur, they tend to cause bigger reductions in loading capacity of the
system. Beyond this, the sheer number of fault scenarios necessitates special considerations for how
to quickly and intuitively communicate the data contained in the analysis without omitting important
details.
Visualization techniques applied to contingency analyses have the potential to concisely summarize
the information contained in the results, allowing for quicker exploration of the data and gleaning of
new insights from it. Differing approaches to visualizing and interacting with contingency results will
produce different perspectives on that data and identify contrasting aspects of how the system behaves
under load. In addition to accommodating the increased complexity and scale of multi-level contingency
data, it is desirable to identify techniques that enable sophisticated, higher-order observations from
contingency data, in order to maximize the return on the computational investment required to obtain
contingency analysis data for multiple simultaneous elements.
In this chapter, two visualization methods are presented that can be used to summarize and explore
contingency analysis results for multiple levels of faults: tree diagrams and treemaps. The tree diagram
describes how contingency scenarios can be mapped to a tree structure to describe the relationships
between contingencies of different levels that share elements in common; the treemap diagram builds
on this structural framework by presenting an alternative visual approach that highlights the value of
tree nodes over the structure of the data set as a whole. The mapping of contingency data to each
diagram will be described, accompanied by discussions about how the visualizations are constructed,
how they can be modified to communicate more information about the data set, and how they can be
augmented with other visualization companions and interactive tools to enhance exploration. Figure 3.1
gives examples of the two diagrams and displays an overview showing how the diagrams are related.
3.2 Tree Diagram
3.2.1 Introduction
Tree diagrams have been used in the past to display many different forms of hierarchical organizations
and topologies as well as binary decision processes. Examples of such visualizations include:
• descriptions of file directories [22]
• phylogenetic trees for differentiation of sub-species [23]
• DNA differentiation in population genetics [24]
• decision tree and decision tree classifier [25, 26]
• probability tree [27]
• family tree
Tree diagrams excel at exhaustive description of systems; each entity and its associated relationships
have relatively similar visual size and weight when compared to other entities in the diagram, and the
layout of a tree diagram emphasizes the structural relation between elements over their quantitative
relationships. No scale is necessary to interpret the features of the graph: the viewer can explore the
structure without judging the size of any visual element, since the relative sizes of tree
elements are not central to its meaning. In addition, the omission of data from a tree diagram is discrete;
unlike diagrams that give intrinsic structural proportion to the value of each data point, elements in a
tree diagram have comparable visual presence regardless of any associated value1. The absence of an
element suggests not that it is less important but that it does not exist.
The following sections will detail the methodology for mapping contingency analysis results to a tree
structure and will discuss how they can be enhanced by overlaying quantitative information — followed
by an exploration of the strengths and limitations of the tree diagram in this application.
3.2.2 Representing Contingency Data as a Tree Diagram
The key elements of the tree data structure are nodes — representing individual entities — and edges,
which represent the relationship between different entities. Edges describe a parent–child relation be-
1 Conversely, in a graph containing continuous values (e.g. a pie graph), insignificant data points would barely appear on the graph and could be lumped together in an "other" category.
Figure 3.2: Tree representation of a subset of fault cases for the IEEE 30-bus test system.
tween the nodes they connect, and the aggregation of these relationships produces a representation of the
hierarchy of the data. To visualize contingency data in a tree diagram it is necessary to map elements of
the data to these conceptual structures. This mapping can be achieved by representing each contingency
scenario — defined as the combination of a list of elements involved and an associated value of reduced
loadability — as a node, and defining the relationship between different contingency scenarios to exist
where they share common grid elements — represented by an edge. Each relationship held by a node
is with a contingency scenario that has either more or fewer elements involved, causing the diagram to
be separated into groupings by n − k contingency level. These groupings are organized horizontally as
shown in Figure 3.2.
A key distinction of this mapping as compared to a typical application of tree structures is that
nodes can have multiple parents and numerous children, whereas in many traditional tree diagrams
nodes have only one parent and one or two children. There is a multiplicity of relationships between
different fault scenarios, since every element can participate in significant contingencies with a vast array
of combinations of other elements. The tree layout helps to simplify these relationships by containing
them within a per-level structure. In visual terms, this adaptation weakens the structural association
between nodes as compared to other applications of tree diagrams; the significance of each individual
edge is tempered by the fact that there are many relationships between nodes, since each one could have
numerous parent and child relations.
The diagram in Figure 3.2 shows a tree layout of contingency cases. Nodes represent fault scenarios
and have an associated value of reduced load capacity (not shown). Elements are colour coded by type,
allowing the viewer to quickly see what types of elements participate in a particular fault. Each level of
the tree corresponds to how many elements are involved in faults in that layer (i.e. level one contains
single faults, level two contains double faults, etc.). Edges indicate instances where the elements of a
particular fault are a subset of the elements of another fault (e.g. a fault involving branch 1 and branch
2 would be a parent to any triple faults involving branch 1, branch 2 and one other grid element).
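The subset relationship that defines these edges can be sketched as follows, in Python with hypothetical data shapes (each contingency represented as a frozenset of grid-element ids), rather than the data structures used in this work:

```python
def build_tree_edges(contingencies):
    """Compute parent -> child edges between contingency scenarios.

    contingencies: iterable of frozensets of grid-element ids. A scenario
    is a parent of another when its elements are a subset of the child's
    and the child involves exactly one more element; this reproduces the
    multi-parent, multi-child structure of Figure 3.2.
    """
    by_level = {}
    for c in contingencies:
        by_level.setdefault(len(c), []).append(c)
    edges = []
    for level, parents in sorted(by_level.items()):
        for child in by_level.get(level + 1, []):
            for parent in parents:
                if parent < child:  # proper subset: shares all elements
                    edges.append((parent, child))
    return edges
```

Grouping by set size first means each candidate pair differs by exactly one element, so a simple proper-subset test is enough to recover every edge of the tree.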
3.2.3 Techniques for Overlaying Quantitative Data on Tree Diagrams
Although the layout of a tree diagram is easy to understand and has a straightforward mapping to the
hierarchical structure of multi-level contingency analysis, some modifications are necessary to display
the data contained in contingency analysis. The structure of the tree diagram organizes the relationships
between different contingencies, grouping them automatically by level; however, these relationships are
already known before contingency analysis is performed, so the edges do not reveal any results of the
analysis. Furthermore, the structural layout of the tree diagram does not communicate to the viewer the
quantitative impact of each fault represented. These values are paired with the nodes of the diagram but
have no visual representation and must instead be somehow discovered or superimposed on the diagram.
The structure of the diagram alone provides no mechanism for understanding the relative differences in
severity between each fault.
In order for the tree diagram to be useful for interpreting contingency data, it is necessary that the
quantitative data obtained from continuation power flow be overlaid on the diagram, utilizing visual
cues to indicate the measured severity of each contingency scenario in relation to its neighbors. These
visual cues must be designed to draw the eye, allowing the viewer to quickly identify which contingencies
should be examined further; they should also make it easy for the viewer to see which contingencies
are related by common elements, illuminating patterns in the data and increasing comprehension of the
data set.
Since there are two structures in the tree diagram, there are two main visual techniques that can
be used to display quantities in a tree: edge emphasis and node emphasis. Figure 3.3 shows several
techniques that could be used to show quantitative differences between nodes in a tree diagram.
Edge Emphasis involves weighting the lines that connect nodes in accordance with the values of related
contingencies. In Figure 3.3b the severity of each fault is visualized by the weight of the lines
connected to it. The line weights are proportional to the sum of the severity of the nodes at each
end, so the thickest lines represent instances where a fault and its child fault are both severe. This
weighting draws the viewer's eye to nodes in the tree that have larger value, and highlights instances
where a particular fault as well as its child fault each cause significant reductions in loadability.
Node Emphasis involves weighting each node by the change in loadability caused by its associated
contingency. In Figure 3.3c fault severity is indicated by the diameter of each node. The diameters
are scaled relative to nodes on the same level, but have no direct relationship to the diameters of
nodes on different levels. This allows viewers to directly compare contingencies that share the same
number of elements using a common scale, and creates loose visual associations between nodes
on different levels of the diagram.
Edge and Node Emphasis Figure 3.3d shows the application of both node and edge emphasis to
show fault severity. The combination of these two techniques effectively draws the eye to faults with
more severe reductions in loadability; however, the diagram is visually crowded.
[Figure 3.3, panels a)–d), appears here.]
Figure 3.3: Screenshots of a tree diagram illustrating various methods of overlaying quantitative data. a) A tree with no data overlaid. b) Edge Emphasis: a tree with line weightings added, corresponding to the value of attached nodes. c) Node Emphasis: a tree with node diameter corresponding to node value. d) Edge and Node Emphasis: a tree with both node and line weight corresponding to node value. Note that in b) and d) the line weights are derived from a combination of the values of both nodes (parent and child).
3.2.4 Normalization of Data in Tree Diagrams
A key aspect to drawing overlays of contingency analysis results on a tree diagram is defining a method
of normalizing the data and scaling the corresponding visual elements to produce visually pleasing and
intelligible diagrams. For any contingency visualization overlay technique it is necessary to specify a
consistent method for scaling the results across different levels so that they can effectively communicate
differences between nodes while not obscuring the overarching structure of the visualization. If all values
are taken on the same scale, faults involving the most elements will dominate the diagram; this is visually
misleading because while multi-element faults tend to have a larger effect on the network, they are less
likely to occur than faults involving fewer elements. It is desirable that faults can be compared on all levels
easily and, more importantly, that elements participating in multiple severe scenarios across different
levels of contingency can be easily identified.
Tree diagrams can take advantage of an intrinsic grouping of elements by level to normalize each
group separately. Fault values are normalized on each level so that they can be shown on the same visual
scale (e.g. n − 3 faults will have comparable visual weight to n − 1 faults). This can be accomplished
by normalizing each contingency within its level, and scaling each group of contingencies with the same
n− k level so that they fit within a maximum and minimum graphical size.
weight = 0.2 + 3*other.getLevelContext() + 2*self.getLevelContext()
Code Snippet 2: Pseudo-code describing how to calculate edge weights, taking into consideration the associated faults. The weighting of each value is chosen to optimize the visual effect of this edge weighting on the readability of the diagram.
In the case of edge emphasis, this entails defining a minimum and maximum line weight and scal-
ing each line based on the values of associated contingencies in their individual contexts. Code snip-
pet 2 describes the calculation of line weight for each edge in a tree such as in Figure 3.3b, d. The
value is a weighted sum of the loadability reductions associated with contingencies other and self,
each normalized with respect to other contingencies involving the same number of elements by calling
getLevelContext() on the contingency. Since other is a sub-fault of self, having one more ele-
ment involved, the value returned by each of these function calls is normalized to a different sub-group
of contingencies.
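A runnable elaboration of Code Snippet 2 might look like the following Python sketch. The normalization shown — dividing each value by the maximum at its level — is an assumption for illustration; the actual getLevelContext() may normalize differently:

```python
def level_context(value, level_values):
    """Normalize a contingency's loadability reduction within its n-k level."""
    return value / max(level_values)

def edge_weight(self_value, self_level, other_value, other_level, levels):
    """Line weight for the edge between a fault (self) and its sub-fault
    (other), following Code Snippet 2: a base weight plus contributions
    from each endpoint normalized within its own contingency level.

    levels: dict mapping an n-k level to the list of values at that level.
    """
    return (0.2
            + 3 * level_context(other_value, levels[other_level])
            + 2 * level_context(self_value, levels[self_level]))
```

Because each endpoint is normalized against its own level, a severe n − 2 sub-fault and a severe n − 1 parent both contribute a large term even though their raw loadability reductions are on different scales.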
In the case of node emphasis via node diameter, the normalized values obtained from getLevelContext()
must be evaluated across an entire level in order to determine what diameter should be used for each
node. The constraints for determining the radius of each node in a layer include preserving an appropri-
ate upper and lower bound on the size of any one node and ensuring a consistent spacing between nodes,
while adequately filling the available space. The details of this algorithm are described in Appendix C.4.
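A minimal sketch of such a radius scaling follows (Python; the bounds and the linear mapping are illustrative assumptions — the actual constraint-based algorithm is the one described in Appendix C.4):

```python
def node_radii(values, r_min=4.0, r_max=20.0):
    """Map the loadability reductions of one tree level to node radii.

    Values are normalized within the level, then scaled linearly into
    [r_min, r_max] so that every node stays visible and no single node
    dominates the layer.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # all faults on this level are equally severe; use a middle size
        return [0.5 * (r_min + r_max)] * len(values)
    return [r_min + (v - lo) / (hi - lo) * (r_max - r_min) for v in values]
```

Running the mapping once per level keeps radii comparable within a level while decoupling the scales of different levels, matching the per-level normalization described above.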
3.2.5 Strengths and Limitations of the Tree Diagram
The primary strength of the tree diagram is the straightforward nature in which the structure of the
diagram relates to the contingency data it represents. Fault scenarios are clearly represented by nodes,
and edges show which faults have elements in common; there is very little ambiguity as to what the
structure of the diagram represents. This is a valuable quality, since it reduces the effort required of
observers to comprehend and make deductions about the data represented by the visualization, and
minimizes the training needed to properly understand the diagram. These strengths validate the use of
a tree data structure to organize the contingency results by level and by element.
The major limitation of the tree diagram is that it is unable to represent a large selection of faults.
The examples in Figures 3.2 and 3.3 show tree diagrams representing the combined faults of only five or
six different grid elements, yet the resulting diagrams are very crowded and structurally busy. Because
such a large proportion of the visual information contained in the diagram is devoted to explaining the
structure of the data, the tree diagram is difficult to scale to larger data sets without running out of
space. Even if there existed a method of reducing the data set by eliminating low-impact contingency
scenarios, it may be impossible to pare down the selection of faults for large systems without omitting
vital information.
The issue of limited scaling is compounded by the fact that the structure of the tree does not itself
contain the contingency data; to display the quantities of interest requires adding additional features,
such as the data overlays described — further crowding the diagram and making it more difficult to
read. Although the tree diagram is conceptually easy to comprehend, this benefit diminishes as a viewer
becomes proficient at reading the visualization; what does not diminish is how visually crowded the
diagram is, which makes it harder for the viewer to acquire information and leads to visual fatigue. The
tree format adds to the complexity of the data by directly drawing and focusing on the relationships
between different data points, information which is of secondary importance compared to the value
of each data point. Furthermore, it does not provide any mechanism to selectively omit structural
information or to minimize the presence of elements that are not significant to the user in terms of their
impact on contingency security. This weakness could make the tree diagram more frustrating for viewers
who use it frequently.
3.2.6 Summary
This section explored the visualization of contingency data using tree diagrams, giving attention to how
the tree structure can be used to visualize this data and what measures are necessary to make it effective
at doing so. The tree diagram visualizes the structural framework of a tree as applied to contingency
data. The strengths and weaknesses of this diagram were identified, including its conceptual simplicity
and its tendency to become visually crowded for expanded data sets. The structural simplicity of this
diagram validates the use of the tree data structure to organize the contingency set; however the problem
of scaling underlines a need for a better visualization of the tree structure that can effectively highlight
and summarize the severity of each data point, one that scales visually to large data sets.
3.3 Treemap
3.3.1 Introduction
The second visualization tool presented is the treemap. The treemap diagram is an alternative visualiza-
tion technique for tree data structures that uses a space-filling technique to fit the tree structure within
a specified region, and is one of the few tree visualizations that can naturally communicate quantitative
data about different nodes in the tree. This visualization technique was first developed as a method of
summarizing hard drive usage [22, 28]. Some examples of treemaps in use include:
• Hard drive allocation [22, 28, 29]
Figure 3.4: Treemap of n − 1 contingencies of a set of four branches in the IEEE 30-bus test system, arranged in a hierarchy. Each block represents the outage of a particular branch (indicated by the number), with its area representing the net reduction in loadability of the system caused by contingencies of that element. The layout strategy applies partial ordering to the contingencies by size, from top-left (biggest) to bottom-right (smallest).
• Breakdown of financial activity by region [30]
• Gene ontology [31, 28]
• Software metrics [32].
The treemap structure excels at breaking down complex data, quickly highlighting the largest data
points while illustrating their context within the broader data set. In contrast to the tree diagram, it
provides a transparent mechanism for masking those results that are less significant, since these elements
become extremely small. The diagrams are also visually intuitive, allowing the viewer to understand the
information displayed almost instantly if they understand how the structural elements of the diagram
are mapped to the contingency data.
The following sections will introduce the treemap diagram and how contingency analysis results can
be mapped to a treemap, followed by discussion of techniques for drawing treemaps and a comparison
of the advantages and pitfalls of using treemaps to display contingency data.
3.3.2 Representing Contingency Data as a Treemap
Despite the intuitive layout of the treemap diagram, the way in which contingency data sets can be
mapped to a treemap format is less straightforward than for a tree diagram. Figure 3.4 shows
Figure 3.5: Treemap of n − 1 and n − 2 contingencies for four elements in the IEEE 30-bus test system, arranged in a hierarchy. Each red block represents an n − 2 contingency while each blue block represents an n − 1 contingency. The blocks are further divided by thick lines into four groups, each containing three n − 2 contingencies and one n − 1 contingency.
a treemap summarizing single-element contingencies involving four branches in the IEEE 30-bus test
system, with each contingency numbered according to the branch that is faulted in that scenario. Each
block represents a particular element or set of elements, with its area representing the net reduction
in loadability of the system caused by the fault of those elements. The layout strategy used to build
the treemap in Figure 3.4 applies partial ordering to the contingencies by size; the faults are laid out
from largest to smallest starting in the upper left-hand corner and moving to the lower right. This
diagram demonstrates a very natural comparison between the four different contingency scenarios; it is
immediately apparent which one is the worst and how they are distributed.
To show multiple levels of contingency data in a treemap, a nesting technique is employed. Figure
3.5 visualizes contingencies for the same four elements that are shown in Figure 3.4, but utilizes nesting
to also show n − 2 contingencies; the n − 2 contingencies for each single element are nested inside
the corresponding boxes that were drawn in Figure 3.4. The original layout is preserved with four
major groups corresponding to the blocks in the single-level diagram (outlined with bold lines). Within
each group the area is divided into blocks representing the different contingencies involved; red blocks
represent double-element contingencies and blue blocks represent single-element contingencies.
One of the challenges of treemaps is their hierarchical nature; since n − 2 contingencies are nested
inside n− 1 contingencies, the area they occupy on the diagram is weighted by the severity of the n− 1
contingency block in which they are drawn. In order to avoid having severe contingencies hidden because
their parent contingencies are not severe, it is necessary to use cumulative values of contingency severity
to give each block its relative area. For example, the first level (blue) blocks in Figure 3.4 correspond to
single elements, but their relative area should be based on the sum of all contingencies in the analysis
that involve that element. Using this scheme, each block in a treemap summarizes faults involving the
element(s) it represents. This is the opposite of the normalization for tree diagrams, in which higher
level contingencies were scaled down in order to give them a size comparable to lower level contingencies.
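As a concrete illustration, this cumulative weighting can be sketched in a few lines of Python. The contingency set and severity values below are hypothetical, not taken from the analysis in this chapter:

```python
# Sketch: cumulative block areas for a nested treemap. Each contingency
# is a tuple of element ids mapped to a severity value (net reduction in
# loadability); all numbers here are hypothetical.
severity = {
    (3,): 10.0, (1,): 8.0, (2,): 5.0, (4,): 2.0,   # n-1 contingencies
    (3, 1): 9.0, (3, 2): 4.0, (3, 4): 3.0,         # n-2 contingencies
    (1, 2): 2.5, (1, 4): 2.0, (2, 4): 1.5,
}

def cumulative_area(element):
    """Area of a first-level block: the summed severity of every
    contingency in the analysis that involves the given element."""
    return sum(v for elements, v in severity.items() if element in elements)

areas = {e: cumulative_area(e) for e in (1, 2, 3, 4)}
```

Under this scheme, the first-level block for element 3 would be sized by the sum 10.0 + 9.0 + 4.0 + 3.0 = 26.0, even though the single-element contingency of element 3 contributes only 10.0 on its own.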
The use of treemaps is helpful in providing a quick-to-read summary of the contingency data. A brief
glance at the diagram is enough for an observer to make several deductions about the contingency data,
provided that they are familiar with the visualization technique.
By looking at Figure 3.4, the following observations can be made about the contingency analysis
data:
• Faults involving branch 3 combine to have the largest effect on system loadability.
• The order of severity for each single element (summarizing all the faults it is involved in) is 3, 1, 2, 4.
• Faults involving branch 3 or branch 1 make up roughly 70% of the reduction in loadability for any
faults in the contingency analysis.
By expanding the treemap diagram to show multiple levels of contingency data, even more observa-
tions can be made without obscuring the information communicated by the single level diagram. Looking
at Figure 3.5 further observations can be made that:
• The combined fault of elements 3 and 1 is the most severe double contingency under the sub-grouping
for both element 3 and element 1, indicating that this is the most severe n − 2 contingency.
• The contingencies (4, 1) and (1, 2) are both less severe than the contingency of (1) alone. This is
because both double-element contingencies cause the isolation of a load, leaving the rest of the system
better conditioned at the expense of not supplying power to that load at all.
• The previous observation, together with the comparable size of faults 3 and 1 in Figure 3.4, suggests
that the fault of element 1 alone is more severe than faults of other elements on their own.
These detailed observations about the relationships between different scenarios in the contingency
analysis data illustrate how treemaps can enable deeper exploration of contingency data and synthesis
of that data into more nuanced understanding of the different contingencies. To a viewer with basic
understanding of the structural arrangement of a treemap and how it maps contingency data, these
characterizations can be made quickly and with minimal mental effort or visual confusion. These types
of higher-order observations are a marked improvement compared to list-based summaries used for single
element contingencies, which provide no mechanism for pattern visualization.
3.3.3 Normalization of Data in Treemaps
In the case of the tree diagram, it was necessary for the viewer to understand how elements were
normalized in order for the viewer to know which elements could be compared with respect to their size
and what was implied by the scaling of elements. With the treemap, this normalization is naturally
imposed by the structure of the diagram. For a single level of data, the total space of the treemap
diagram is analogous to the sum of all elements in the data set, and the area of each individual block in
the diagram represents that data point’s size relative to the sum of all elements2.
The space-filling nature of the treemap makes the normalization of data in a treemap relatively
intuitive; however, with multi-level treemaps (e.g. Figure 3.5), the comparison of n − 2 contingencies
is less straight-forward. In a two-level treemap, an n − 2 contingency will appear twice, once for each
element involved. This was illustrated in the observations for the two-level treemap in Figure 3.5, where
it was noted that the fault of elements 3,1 appears in the group of faults for element 3 and in the group of
faults for element 1. Within each of these sub-blocks, the normalization concept of a treemap holds, yet
because each sub-block is sized corresponding to a different — but overlapping — set of contingencies,
the two blocks representing the contingency of elements 3,1 are not the same size. This difference in
normalization shows that it is not appropriate to compare two different elements that are nested inside
different sub-blocks within the treemap. Each element is normalized in a different context, so their
relative areas cannot be compared directly. Only within a grouping, such as the one containing faults
of element 3 and one other branch, can the area of different blocks be directly compared.
3.3.4 Different Approaches to Treemap Styling
There are several different geometric approaches to drawing treemap diagrams and allocating space
within them, including
• Square or Rectangular [22]
2This is why treemaps are often described as space-filling diagrams.
• Space-filling curves [33]
• Gosper curves [34]
• Voronoi treemaps [32]
• Circular partitions [35]
• Circular layouts
All of these algorithms are straightforward space-filling techniques that populate a constrained ge-
ometric space (a rectangle or circle, with the exception of the Gosper curve [34] and the Voronoi
treemap [32]) with polygons whose areas correspond to the individual data points in a data set, in such
a way that together they occupy the entire space. In the example of a square treemap
describing hard drive allocation, the area of square space would correspond to the capacity of the storage
device, while individual blocks represent the space occupied by files or the unused space on the disk; the
summed area of all the elements is equal to the capacity of the drive. The different geometric approaches
listed above use varying polygons and layout algorithms to fill the space, achieving the same space-filling
effect with a different look. In this research the rectangular approach was used because it has seen broad
application, because the layout algorithms for this approach are simple and intuitive, and because of the
increased rendering precision afforded by having elements aligned with the display pixel matrix.
3.3.5 Tiling Algorithm for Treemap
Square or rectangular treemap geometries are the most common implementations of treemap diagrams,
and there are several tiling algorithms that have been proposed to lay out the rectangles within a
treemap. These include BinaryTree, Ordered, SliceAndDice, Squarified and Strip [36, 37].
The layout strategy employed to draw a treemap can have a profound effect on the way it is perceived
and how viewers interpret the data it displays. It is desirable that layout algorithms preserve the
ordering of data points from small to large, since this allows layouts to be more predictable — aiding
fast comprehension by the viewer. It is also desirable to maintain the squareness of each element (i.e.
that the aspect ratio of the element be at or near 1:1), since this makes the diagram more attractive and
facilitates direct comparison of elements by area3. These two competing criteria lead to a design trade-
off: a more strict ordering of contingencies can be achieved by relaxing the requirement for squareness;
alternatively, by changing the order of blocks it is often possible to achieve a better average aspect ratio.
This research utilizes the squarified tiling algorithm, described in Figure 3.6, which enforces a
loose diagonal ordering of the elements from top-left to bottom-right. The squarified tiling algorithm
uses a recursive, edge-at-a-time approach to fill in a given rectangular space, allocating a single column
of elements along one edge with the goal of keeping the aspect ratios in that column as close to 1:1 as possible.
The approach for laying out a single column is to choose the shorter edge of the given rectangular
space and to iteratively add elements from the data set along that edge until the optimal number of
elements is found. Each rectangle is constrained so that its area relative to the given space is proportional
to its value as a fraction of the sum of all values in the data set. The column of elements spans the
chosen edge, and the width of the column is adjusted so that the area of each element in it is preserved.
One element is added to the column at a time, and the average aspect ratio of the elements in the
3Research has shown that elongation of shapes can introduce a bias in visual interpretation of their area [38, 39].
Figure 3.6: Flow chart describing the recursive tiling algorithm for squarified treemap layout. Starting from a given rectangle, elements (represented by rectangles having area in proportion to their value as a fraction of the sum of all values) are added one by one starting from the largest, such that the elements form a column spanning the shortest side of the space. As each element is added, the column gets wider and the aspect ratios get larger. This process stops after adding an element brings the average aspect ratio above 1:1, and then the better of the last two column layout iterations is chosen for the final column layout. After this, the column layout is repeated with the remaining elements in the remaining space, recursively, until all elements are laid out, at which point the given space is filled.
column is calculated once each element is added to the column and the dimensions of the column have
been adjusted to preserve the total area. During the first few iterations, each element in the column will
be tall and narrow, with a low average aspect ratio 4; as elements are added the column will become
wider, each individual element will become shorter, and the average aspect ratio will increase. Once the
average aspect ratio is greater than one, the loop is broken and the last two iterations of the column are
compared to see which one is closer to 1:1.
After a subset of elements have been allocated as a column, the remaining area of the given space
will be a rectangle. The same column-laying technique can be used recursively on that space with the
remaining elements until they are all laid out. It is important that on each recursive iteration the shorter
edge of the given space is chosen as the dimension to be spanned, as this will ensure that the left-over
space of each recursive iteration tends towards being square. The squarified tiling algorithm achieves
a loose sorting effect by sorting the data points before laying them out, and by starting each recursive
column in the same corner of the remaining space; in the examples shown, this is the top-left corner.
The details of this tiling algorithm are described in the source code in Appendix C.3.1.
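As a complement to the description above, the column-laying procedure can be sketched compactly in Python. This is a sketch of the general squarified technique, not the implementation from Appendix C.3.1; it assumes the values are already sorted in descending order and sum to the area of the target rectangle, and it returns axis-aligned (x, y, width, height) rectangles:

```python
# Sketch of the squarified tiling algorithm. `values` must be sorted in
# descending order and sum to w * h; the function returns a list of
# (x, y, width, height) rectangles filling the region.

def worst_ratio(column, side):
    """Worst aspect ratio (expressed as a number >= 1) among the
    elements of a column laid along an edge of the given length."""
    s = sum(column)
    return max(max(side * side * v / (s * s), s * s / (side * side * v))
               for v in column)

def squarify(values, x, y, w, h):
    rects = []
    values = list(values)
    while values:
        side = min(w, h)                 # always span the shorter edge
        column = [values.pop(0)]
        # Grow the column while the worst aspect ratio keeps improving;
        # otherwise keep the previous (better) column layout.
        while values and worst_ratio(column + [values[0]], side) <= \
                worst_ratio(column, side):
            column.append(values.pop(0))
        thickness = sum(column) / side   # column width preserving areas
        offset = 0.0
        for v in column:
            length = v / thickness
            if w >= h:                   # column spans the vertical edge
                rects.append((x, y + offset, thickness, length))
            else:                        # column spans the horizontal edge
                rects.append((x + offset, y, length, thickness))
            offset += length
        # Recurse (iteratively) on the remaining rectangular space.
        if w >= h:
            x, w = x + thickness, w - thickness
        else:
            y, h = y + thickness, h - thickness
    return rects
```

For example, squarify([6, 6, 4, 3, 2, 2, 1], 0, 0, 6, 4) fills a 6 × 4 region with seven rectangles whose areas equal the input values, the largest placed first in the top-left corner.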
3.3.6 Dealing With Quantization Error
One of the challenges associated with drawing precise graphical shapes on a computer screen is dealing
with precision in the definition of the dimensions of a rectangle. Although it is necessary to define a
treemap diagram so that the dimensions of each rectangle have floating point accuracy, they have to be
reduced to discrete integer sizes for rendering on a screen — a process that can lead to distortion of the
dimensions of various elements of the diagram.
Many graphics packages provide support for anti-aliasing filters, which allow high-frequency visual
content such as hard edges to be filtered so that they can be displayed even when their details are smaller
than the resolution of the screen. However, anti-aliasing filters tend to perform poorly on rectangular
diagrams like treemaps, where the exact positions of borders between different rectangles are vital to the
structure of the diagram. Since the borders in the treemap are aligned parallel with the pixels on a screen
and have dimensions in the range of 1-3 pixels, they tend to respond poorly to anti-aliasing; borders
that fall nicely on the pixel grid will appear dark, while borders that fall between pixels will appear
blurred. This effect can be mitigated by increasing the border sizes, but this increases the minimum
size of the smallest elements that can be drawn and is also an inefficient use of screen space; fewer data
points can be displayed and more of the diagram is occupied by visual elements that don't encode the
severity of a contingency. An alternative approach would be to use high-resolution displays; however,
this may greatly increase the equipment cost associated with displaying and using treemap visualizations
because high resolution displays and accompanying graphics hardware are more rare and expensive in
comparison to standard displays.
Another approach to handling precision errors is to round each shape to integer pixel dimensions, so
that the elements of the treemap can be drawn exactly by the pixels on the screen. This technique has
the advantage of producing better-looking images, but care is required in making sure that rounding
errors are not propagated, as this can lead to a mismatch between the area of the actual diagram and
the cumulative area of the rectangular elements contained within it. In addition, it is necessary to
understand how this rounding strategy might distort the treemap in comparison to the underlying data.
4Aspect ratio is calculated as width/height.
                                Case 1:                    Case 2:                 Case 3:
                                very large elements        large elements          small elements

block heights h (px)            227.45, 202.46, 170.38,    35.46, 31.48, 26.49,    9.48, 6.44, 4.38,
                                150.41, 149.30             23.39, 23.18            2.39, 2.31
block heights after rounding    227, 202, 170, 150, 149    35, 31, 26, 23, 23      9, 6, 4, 2, 2
column height (px)              900                        140                     25
column height after rounding    898                        138                     23
column width (px)               160                        27                      5
avg. aspect ratio               1.05:1                     1.03:1                  1.00:1
percent error by area           0.22%                      1.43%                   8.00%

Table 3.1: Comparison of rounding error for elements with large area versus small area in laying out a treemap diagram. The net error in a column as a ratio of the column area can be significant for columns filled with small rectangular elements (such as those in Case 3); the error adds up to 8% of the total column area. For columns that span an entire 1080p screen vertically (such as in Case 1), the error is insignificant.
One cause of rounding error is the rounding of each block height within a column layout. For
example, suppose there are five blocks in one column, all of the same width w and having heights
h = [35.46, 31.48, 26.49, 23.39, 23.18] pixels. The height of the column sums to 140 pixels; however,
when rounded with a pivot of 0.5 the heights become h = [35, 31, 26, 23, 23], summing to 138 pixels. This means that at
the end of the column there will be a two-pixel strip across the column that is not filled by any element
— a discrepancy which makes up 1.4% of the total area of the column. This error is small when the area
of the column is large, but as the columns get smaller this error becomes a bigger factor in the column
layout. Table 3.1 compares this scenario with one where the elements are small, for which the rounding
error is much more significant in proportion to the column area.
The effect of block dimension rounding error on the treemap diagram is that the accumulated round-
ing error of all blocks in the column will appear at the end of each column, leaving either unoccupied
space or overflowing the column. If the column overflows and is truncated, the last box may appear
significantly smaller than is dictated by its value and may even be smaller than the first block in the next
column — a disruption in the layout pattern which will quickly be noticed by the viewer, undermining
their confidence in the information conveyed by the diagram. In the case where there is leftover space in
the column, this space draws the eye and distracts the viewer from the rest of the diagram, contributing
to visual fatigue.
There is no way to eliminate rounding error in treemaps since the input values from contingency
analysis are continuous decimal values and must be mapped to a discrete integer range for display.
Fortunately, the effect of this error on the accuracy of the diagram is small enough that it is unnecessary
to use advanced graphical techniques such as anti-aliasing to compensate for it. For large elements that
are meant to be surfaced by the treemap visualization, the error in area is insignificant (for example,
see column 1 of Table 3.1). In the case of smaller elements, their size alone suggests to the viewer that
in order to make direct comparisons to similarly sized items, they need to be examined more closely
by pulling them out to a different context. The treemap naturally encourages the user to see them
as visually similar to their neighbors, as opposed to highlighting slight differences between them; even
errors that are significant compared to the box size will be considered insignificant in the context of
the entire diagram — provided that the layout strategy appears consistent. However, it is necessary to
eliminate the visual artifacts created by missing or extra space in the diagram, since these gaps draw
the eye of the viewer.
The most straightforward approach to hiding rounding error in a column is to distribute the rounding
error evenly among the elements in the column by padding or trimming them. In the example given
earlier, a column made up of blocks of heights h = [35.46, 31.48, 26.49, 23.39, 23.18] rounds to h =
[35, 31, 26, 23, 23], producing a rounding error of 2 pixels. This error can be rolled into the individual
elements by spreading it among the elements such that h = [36, 32, 26, 23, 23]. This brings the column
height to 140 pixels, with a net element sizing error of 1.51%. It is important that the padding be done
with the goal of preserving the ordering of elements; if the error were distributed as h = [36, 31, 26, 23, 24],
the size order of elements in the column would be changed, which compromises the tiling algorithm rules
and would be noticed by the viewer as an anti-pattern. Instead, the rounding error shows up as a slight
discrepancy in the area of certain blocks as compared to the contingency analysis. The human eye has
very little ability to identify this discrepancy, and so the integrity of the diagram is preserved.
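A minimal Python sketch of this padding scheme, using the same column of block heights as in the example above, might look like the following (the function name and the choice to pad starting from the largest block are illustrative, not the thesis implementation):

```python
# Sketch: round block heights to integer pixels, then absorb the residual
# error one pixel at a time starting from the largest block, so that the
# descending size order of the column is preserved. Assumes `heights` is
# sorted in descending order.
def round_column(heights):
    rounded = [int(h + 0.5) for h in heights]     # pivot-0.5 rounding
    target = int(sum(heights) + 0.5)              # true column height
    error = target - sum(rounded)                 # leftover pixels
    step = 1 if error > 0 else -1
    i = 0
    while error != 0:
        rounded[i] += step                        # pad (or trim) one pixel
        error -= step
        i = (i + 1) % len(rounded)
    return rounded

print(round_column([35.46, 31.48, 26.49, 23.39, 23.18]))
# -> [36, 32, 26, 23, 23]; the padded column now sums to 140 pixels
```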
Rounding error can also be propagated between the allocation of columns, since after each column
is laid out, the column width also has to be rounded. For example, suppose a column spans one side
of a 900 pixel square, and after identifying the combination of elements that should be assigned to the
column to produce the best aspect ratio, the column width is 30.45 pixels wide. After each block in the
column is rounded and the resulting error is absorbed within it, the width of the column still has to be
rounded to 30 pixels in order to be drawn; the resulting error is a strip spanning the column, with area:
0.45 px · 900 px = 405 square pixels
This rounding creates an error of 1.48% in the column area; however, it can create a substantial visual
artifact — the extra space will show up at the end of the treemap diagram, where it could be of
comparable size to the final columns. Furthermore, this rounding happens for each column, and so the
error can accumulate over successive column allocations if, for example, each column is rounded down.
In the case of accommodating error within a column, it was possible to distribute rounding error
evenly between the different blocks in the column. With column width rounding there is no way to go
back and pad columns, since consecutive columns may be perpendicular to each other; rounding error
must be accommodated during the layout of each column. It is possible to avoid propagation of rounding
errors by tracking the accumulated rounding error and factoring it in to the next column. For example, if
the first column is laid out with a width w1 = 30.45 pixels, the rounding error will be err1 = 0.45 pixels.
Suppose that the second column is allocated a width w2 = 28.37 pixels; a straightforward rounding
scheme would round this to 28 pixels, leading to an accumulated error err2 = 0.82 pixels in
column width. In order to avoid this accumulation, err1 can be added to w2 before rounding, giving
a rounded width ŵ2 = 29 pixels and an accumulated rounding error of:
err2 = err1 + (w2 − ŵ2) = 0.45 + (28.37 − 29) = −0.18 pixels
The rounding error of the second column is larger (0.63 pixels as opposed to 0.37) but the accumulated
error is much smaller. This scheme causes each column layout rounding step to consider which way the
previous step was rounded, allowing erri to be kept below 0.5 pixels for all recursive iterations of the
tiling algorithm. The implementation of error accommodation is detailed in Appendix C.3.2.
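This error-feedback scheme can be expressed compactly in Python. The sketch below is not the implementation from Appendix C.3.2, but it reproduces the worked numbers above:

```python
# Sketch of error-feedback rounding for column widths: the signed error
# from previous columns is folded into each width before rounding, so the
# accumulated error never exceeds half a pixel.
def round_widths(widths):
    rounded, err = [], 0.0
    for w in widths:
        r = int(w + err + 0.5)   # round with the carried error included
        err += w - r             # signed error carried to the next column
        rounded.append(r)
    return rounded, err

rounded, err = round_widths([30.45, 28.37])
# rounded == [30, 29]; err is approximately -0.18 pixels
```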
3.3.7 Use of Colour Coding to Overlay Information on Treemaps
Unlike the tree diagram, treemaps naturally highlight the central quantitative data in contingency anal-
ysis. This is advantageous because it allows the treemap to much more easily scale to larger data sets
without running out of space on the diagram or having the main structural elements confused by over-
crowding. However, there is still opportunity to overlay additional data on the treemap by modulating
the colour of elements in the treemap.
Figure 3.7 demonstrates the use of colour coding to communicate the distance between elements
involved in a fault. The diagram visualizes n − 1 and n − 2 contingencies, with the distance value for
each n− 2 contingency calculated as the minimum distance between the two elements on a geographical
map of the system. The distance value obtained for each contingency, normalized to the geographical
size of the system, is encoded in the colour brightness of each rectangle, causing faults to appear dark
if the elements involved are close together. If a fault involves two branches that meet at a bus, or that
share a right-of-way, these faults are darker and stand out to the eye. This same mechanism could also
be used to encode another per-contingency metric such as fault probability data, giving the viewer a
better understanding of the risk of each contingency happening, in addition to the data that is already
shown concerning the consequence of each contingency.
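As a simple illustration, such a brightness mapping could be sketched as follows. The base colour and the brightness floor here are arbitrary choices for the sake of the example, and the distance values are assumed to be pre-normalized to the range [0, 1] by the geographical size of the system:

```python
# Sketch: encode a normalized distance metric as colour brightness.
# Elements that are close together (d near 0) map to darker shades,
# making geographically clustered faults stand out.
def distance_to_rgb(d, base=(255, 60, 60)):
    """Scale a base RGB colour by brightness; d = 0 gives a dark shade,
    d = 1 gives the full base colour."""
    floor = 0.25                        # keep blocks visible even at d = 0
    scale = floor + (1 - floor) * d
    return tuple(int(c * scale) for c in base)

distance_to_rgb(0.0)   # dark red for elements that meet at a bus
distance_to_rgb(1.0)   # full-brightness red for distant elements
```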
Another way that colour coding has been used in the visualizations discussed here is as a method
to differentiate between n− 1 and n− 2 contingencies — for example in Figure 3.5 n− 2 contingencies
are drawn in red and n− 1 contingencies are drawn in blue. This technique allows quick reading of the
structure of the diagram and allows the viewer to identify how much worse double contingencies involving
an element are compared to the outage of just that element, as well as instances where a double element
fault leads to a better result than a single element fault.
Colour can also be used in treemaps simply to improve contrast between different groupings, em-
phasizing the nesting structure. Figure 3.9 demonstrates the use of colour purely to help differentiate
between groupings of contingencies by element. As the first level of contingencies is laid out, each is
assigned a colour, and all of its sub-faults share that hue. This technique increases the contrast between
major blocks on the treemap, without introducing extra information.
3.3.8 Strengths and Limitations of Treemaps
One of the key strengths of treemaps is that they are able to summarize the extent to which a certain
element contributes to loadability reduction in various fault cases in a contingency analysis data set.
This strength is built in to the mapping of contingency data to the treemap, wherein sub-faults are
summed and displayed in groups with their parent fault; this gives a clear picture of each element’s net
effect on security, along with a breakdown of how the effect is distributed among different contingencies.
A second strength of the treemap is that it allows for quick visual acquisition of the central obser-
vations that can be made from the data set. When an observer looks at the image, they internalize it
by identifying the most prominent features — single blocks that are much bigger than their neighbors,
and elements that appear darkest. These features are also the most valuable summary observations that
can be made about the data set, since they direct the observer to contingencies that are of primary con-
cern. This pattern can be seen on multiple scales within a multi-level treemap, providing a repeatable
and natural pattern framework for reading the diagram and requiring little effort on the part of the
viewer to make observations about the data. This reduces visual fatigue, a valuable quality for operators
who must continually explore updated visualizations to identify changing conditions of operation on the
power grid.
In addition, treemaps tend to scale well to larger data sets. Tree diagrams are visually dense, needing
larger drawing space to scale to bigger data sets; they devote much of the visual space in the diagram
to describing the structure of the data, meaning that each additional element requires a relatively large
amount of space to be displayed. In contrast, treemaps scale by reducing the size of less significant
elements; they are by definition space-filling visualizations. Each additional element requires little
visual overhead to fit into the diagram, allowing the treemap to fit a much larger data set in the same
space without becoming unreadable. Figure 3.7 shows a treemap of n − 1 and n − 2 contingencies for
the entire IEEE 30 bus system, comprised of 80 elements and 3129 contingency scenarios. Although
this diagram represents a much larger set of contingencies than Figure 3.4, it is not significantly more
crowded or difficult to read.
These strengths relate back to the concept of visual efficiency, which can be understood as the ratio
of visual information communicating data points compared to the total amount of visual information,
including that which simply gives context to the data points. The tree diagram has limited visual
efficiency, since for every contingency the contextual information of nodes and edges constitutes the
majority of each element; the actual data point is overlaid. There is a high overhead to each data point
in the visualization. The treemap has much better visual efficiency since each element in the visualization
is primarily constituted of its area, which directly represents the value of the data point. Only a small
fraction of the diagram is taken up by extra contextual information in the form of borders.
One negative effect of having a larger set of contingencies to draw is that it becomes more difficult
to compare elements of similar size that are big enough to show up, particularly those that are nested
at the second level. The eye has much more trouble discerning the relative area of these blocks, and
the rounding error in the layout becomes significant for smaller blocks. This reduces the efficacy of the
diagram in facilitating detailed comparisons between n − 2 contingencies in a comprehensive manner.
Despite this, there are still patterns that can be observed from the relative sizes of blocks.
Figure 3.8 shows a treemap based on contingency analysis of the IEEE 118 bus test
system, comprising 55817 n − 1 and n − 2 fault scenarios. This diagram demonstrates that even for extremely
large sets of contingencies over multiple levels, it is possible to produce a treemap that concisely
summarizes the data set and highlights the basic structural themes of the data.
Chapter 3. Visualizing Multi-Level Contingency Data 45
Figure 3.7: Treemap of n − 1 and n − 2 contingencies for the IEEE 30 bus test system, summarizing 3240 scenarios. Colour coding is used to indicate distance between elements in multiple-fault scenarios; darker blocks represent faults of elements that are closer to each other geographically. Cross-hatched areas represent groups of faults that are too small to draw. This diagram shows that the treemap scales well to larger data sets; however, in scaling the diagram up it was necessary to remove the numerical labeling of rectangles, which makes the diagram unreadable without interactive features to identify the individual contingencies.
Figure 3.8: Treemap of n − 1 and n − 2 contingencies for the IEEE 118 bus test system, containing 55817 scenarios. Colour coding is used to indicate distance between elements in multiple-fault scenarios. Darker blocks represent faults of elements that are closer to each other geographically. Cross-hatched areas represent groups of faults that are too small to draw.
Figure 3.9: Treemap demonstrating use of alternating colours to increase visual differentiation.
One characteristic of this
image is that, compared to Figure 3.7, there is a greater amount of space occupied by cross-hatched
regions, representing a multitude of elements that are too small to be drawn effectively. As the number
of contingencies in the data set increases, the absolute screen real-estate that any one element occupies
will tend to decrease, and more and more elements take up too little area to be drawn. The
result is that more of the diagram is taken up by space that contains little information, decreasing the
visual efficiency of the diagram.
The primary weakness of treemaps is that due to the fixed space and resolution constraints of common
display technology, there is a limit to how many levels of contingencies can be shown at once. For larger
systems such as in Figure 3.7 it would be impossible to show more than two levels of faults simultaneously
without overcrowding the diagram, because each successive layer of faults has to be nested inside the
previous layer. This limitation is offset by the fact that boxes in the treemap can be scaled by sub-faults
as well, summarizing faults in higher levels; however it tends to make the diagrams less intuitive for
observers, since the top-level breakdown of faults is an aggregation of several values as opposed to a
one-to-one relationship with the contingency data. Although there are techniques that could enable
exploration of further depth in contingency analysis, they may not fully overcome this weakness.
In addition to this, there is some redundancy in the treemap since multi-level faults appear more
than once on a diagram; for example, in Figure 3.5 the fault of branches 1 and 3 is shown twice, once
in the block containing faults involving branch 3 and then again in the block containing faults involving
branch 1. This complication is brought about by the fact that unlike many tree based data structures,
nodes in a tree of contingency scenarios can have multiple parents; there is no way to eliminate this
from a treemap diagram. As a result, multi-level treemaps of contingency data carry an extra measure of
space inefficiency on top of the limitations of display resolution. This redundancy can be a pitfall for
viewers, and the effect worsens for higher-order contingencies such as n − 3 and n − 4.
3.3.9 Summary
This section presented the concept of a treemap and how it can be used to visualize contingency data
about power systems. The mapping of contingency data to a treemap was detailed, along with practical
considerations of methodology for creating treemaps for use on a computer screen. Demonstrations were
given of the scaling capability of treemaps along with opportunities for enhancing treemaps with the use
of colour coding. The treemap diagram builds on the concept of a tree structure of contingency scenarios
by focusing on the values associated with each node, prioritizing this information over the tree structure
itself. In doing so, it is able to provide strong summarization features and scales well to larger trees,
naturally downplaying insignificant elements and always giving a top-level summary that highlights the
most significant data points in the contingency analysis. This makes the treemap diagram well suited
to multi-element contingency visualization.
3.4 Expanding Content of Visualizations
3.4.1 Introduction
Tree diagrams and treemaps are two tools that have the potential to provide a foundational structure for
visualizing and understanding the results of contingency analysis. Both tools offer some capacity to
highlight patterns in the data that occur across multiple levels of contingencies; however, they are
limited in their ability to scale to larger data sets and in their effectiveness at communicating
more precise details of the contingency analysis. These limitations, though they manifest differently
in each diagram, stem from a fundamental design conflict between giving concrete details
about individual contingencies at a microscopic level versus enabling macroscopic summarization of the
broader data set. In order to overcome this limitation, it is necessary to introduce supplementary visual-
ization techniques to increase the flexibility of contingency visualizations and give a more comprehensive
representation of the data.
In view of the large amount of complex, structured data involved in multilevel contingency analysis, it
is apparent that static visualization techniques may not be enough to get a full sense of what contingency
analysis reveals about the system. In the case of both the tree and treemap visualization there are aspects
of the results that are not clearly shown. The tree diagram struggles to effectively communicate data
about the effects of each fault case on system operation, and even what is visualized can only be shown
for small systems or small subsets of the full analysis. With the treemap, there is redundant information
and a limitation as to how many levels of contingency analysis can be shown. Beyond these challenges,
it is difficult to extract details about fault scenarios solely from the visual layout of the diagram. Both
diagrams struggle to provide full representation of the measurements obtained from contingency analysis,
making it difficult to substantiate observations gleaned from the visualization with concrete outcomes
of the contingency analysis.
With these factors in mind, it is necessary to consider other dimensions of visualization that may
be helpful in conveying detailed and more complex information about contingency analysis in a way
that is comprehensive and intuitive. There are several techniques, visual and otherwise, that may be
used to enhance the readability and clarity of contingency visualizations. This section will discuss the
inclusion of interactive features in contingency visualizations and how they can be used to bring out
details that are not explicitly displayed by the core diagram. These techniques will take advantage
of mouse or keyboard interaction to visually augment the diagrams, exposing additional details which
cannot effectively be visualized for all elements simultaneously. Companion diagrams will be introduced
which seek to add context and detail to the visualizations, incorporating micro-level details into the
macroscopic visualizations afforded by treemaps and tree diagrams. In addition, the use of a threshold
to limit the size of the data set will be discussed.
3.4.2 Interactive Discovery of Contingency Details
The major weakness of both tree diagrams and treemaps is that they only show an abstraction of the
contingency analysis results. The absolute measurements of severity for each contingency are not directly
represented in the diagram; instead, a stylized visualization or summary is portrayed. For these diagrams
to be effective they need to be more closely connected with the underlying data. This can be achieved
by implementing interaction.
A key enhancement that can be made to static visualization techniques is the implementation of
mouse interaction for discovery of deeper details. Figures 3.10-3.13 show mouse interactions with a
treemap diagram to reveal details about individual contingencies contained in the visualization.
The implementation of this scheme is simple and natural for the user: each rectangle in the treemap
represents a specific contingency scenario and has associated with it a set of elements that are faulted
out for that contingency; an associated value of reduced maximum loadability, in megawatts (MW);
and a list of other elements that are directly affected by that contingency (e.g. if a bus is faulted out,
branches that connect to that bus would be directly impacted). When the viewer clicks on a particular
contingency, these details are displayed below the treemap in text format. The three categories shown
give a summary of what the measured effect of each contingency is and which elements are involved.
This summary also details the grid elements that might be directly affected by a contingency, drawing a
connection from these elements to the consequent reduction in loadability. Other measurements such as
voltage deviations or current loadings could also be reported here5 in order to give a broader perspective
on the effects of the contingency.
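A minimal sketch of this click-to-details scheme is given below. The data fields and the formatting of the read-out are illustrative assumptions, not the implementation used in this work:

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    rect: tuple                 # (x, y, w, h) of the rectangle on screen
    faulted: list               # elements faulted out in this scenario
    mw_reduction: float         # reduction in maximum loadability (MW)
    affected: list = field(default_factory=list)  # directly impacted elements

def details_at(blocks, px, py):
    """Return the detail text for the block under a click, or None."""
    for b in blocks:
        x, y, w, h = b.rect
        if x <= px <= x + w and y <= py <= y + h:
            return ("Faulted elements: {}\n"
                    "Loadability reduction: {:.0f} MW\n"
                    "Directly affected: {}").format(
                        ", ".join(b.faulted),
                        b.mw_reduction,
                        ", ".join(b.affected) or "none")
    return None
```

A hit-test of this kind runs on each mouse click; the returned string is what would be rendered below the treemap.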
This interactive approach allows the viewer to take advantage of the visualization to make broader
observations about relationships between contingencies, and then break out fuller details of each contin-
gency as needed. The concise format of the visualization is preserved while giving the viewer access to
details that otherwise could not be displayed. The visualization acts as a vehicle to guide viewers from
general observations about the data set to insights about specific contingencies.
3.4.3 Responsive Highlighting of Diagram Structures
Responsive interaction can also enable highlighting of patterns in the visualizations that would otherwise
be difficult to identify and may require training to see, increasing the level of explorability afforded by
5These measurements could be derived during CPF, but only if the system is solvable at the defined loading level.
Figure 3.10: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. When a particular contingency is selected on the treemap, details corresponding to that contingency are displayed below the diagram. This diagram shows the fault of generator number 4 and transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 614 MW.
Figure 3.11: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of bus number 2 and transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 537 MW.
Figure 3.12: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of transformer number 2 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 487 MW.
Figure 3.13: Treemap diagram demonstrating the use of mouse input to identify details about a specific fault. This diagram shows the fault of generator number 4 and bus number 6 in the IEEE 30-bus test system, resulting in a reduction to the maximum loadability of 612 MW.
Figure 3.14: Screen-shots of a tree diagram demonstrating the use of mouse hover interaction. Hovering over a certain node highlights sub-faults associated with that node, helping to cut through visual clutter. a) Base diagram. b) Hovering over a node. c) Hovering over another node.
contingency visualizations. This section will describe two methods of responsive highlighting — one for
the tree diagram, and one for the treemap.
Highlighting of sub-trees in the tree diagram.
One thing that responsive techniques can do is allow the user to focus on sub-structures within the
diagram. By highlighting secondary relationships that are not part of the diagram structure or by
emphasizing relationships which would otherwise not stand out, the visualizations can increase the
breadth of information communicated. Figure 3.14 shows the use of responsive mouse interaction to
highlight sub-tree structures in a tree diagram. When the mouse is moved over a contingency (represented
by a node) that fault and all sub-faults, as well as the edges connecting them, are highlighted in bold.
This action displays a sub-tree of the selected node, enabling the user to more easily identify connections
between specific faults that have significant effect on the system loadability.
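The node collection behind this highlighting can be sketched as a simple traversal (an illustrative fragment; the node representation is assumed):

```python
def subtree_nodes(children, root):
    """Collect a contingency node and all of its sub-faults for highlighting.

    `children` maps each node to the contingencies nested directly beneath
    it in the tree diagram; hovering over `root` bolds every node returned
    here, along with the edges connecting them.
    """
    highlight = {root}
    stack = [root]
    while stack:
        for child in children.get(stack.pop(), ()):
            if child not in highlight:
                highlight.add(child)
                stack.append(child)
    return highlight
```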
The effect of sub-tree highlighting is shown in Figure 3.14. In Figure 3.14b, the sub-tree of branch
number 1 highlights several large nodes involving that branch which are strung together. In contrast,
Figure 3.14c shows the highlighting of faults involving branch number 18 and bus number 12, involving
fewer serious contingencies. This reinforces the observation that faulting branch 1 has a bigger impact
on secure operation of the system.
Highlighting of redundant visual elements in the treemap.
In the case of the treemap, it was noted that there is redundancy because double contingencies appear
twice — once for each single element contingency. This can be visualized using responsive interaction.
Figure 3.15 gives examples of two different n− 2 contingencies under mouse-over. Figure 3.15a shows a
mouse-over of the fault of transformer number 2 and bus number 2 in the top-left corner of the screen;
this contingency is in a group of rectangles that all represent contingencies involving transformer 2 and
one other element. When the mouse is placed on the contingency, a second block is highlighted in black,
half-way down the left side of the treemap. This rectangle represents the same contingency but in a group
representing contingencies of bus 2 and one other element. Figure 3.15b shows a similar highlighting for
the fault of transformer number 2 and bus number 18.
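Locating the duplicate rectangle on mouse-over amounts to indexing blocks by the set of elements they fault. A sketch, with an assumed block representation of (id, element-list) pairs:

```python
from collections import defaultdict

def duplicate_index(blocks):
    """Index treemap blocks by the set of elements they fault.

    An n-2 contingency is drawn once under each of its two elements, so its
    frozenset key maps to both rectangles; on mouse-over of one, the other
    entries under the same key are the duplicates to highlight.
    """
    index = defaultdict(list)
    for block_id, faulted_elements in blocks:
        index[frozenset(faulted_elements)].append(block_id)
    return index
```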
The visualization of these redundancies in treemaps is of limited value in understanding contingency
analysis results; however, it can help viewers to better understand how treemaps are drawn and how
to properly interpret the diagram. In addition it provides a quick method for understanding how two
elements in a contingency have contrasting effects on the system by relating the various other faults
in which they are involved. For example, in Figure 3.15b the secondary block (corresponding to bus
18) is in the group of contingencies involving bus 18, a group that is much smaller than the group of
contingencies involving transformer 2. This underscores the imbalance between the elements in terms
of their contribution to the combined fault. Highlighting the two redundant instances of a contingency
provides a quick — though limited — view of the contrast between them, giving some indication of
which element is a bigger security liability.
3.4.4 Cross-referencing to One-line
Perhaps the most powerful method that can be used to give context to a contingency visualization is
to cross-reference the visualization with a one-line diagram of the system, enabling exploration of the
Figure 3.15: Use of mouse interaction to highlight duplicate contingencies in a treemap. a) Contingencies of bus 2 and transformer 2, represented by two rectangles. b) Contingencies of bus 18 and transformer 2, represented by two rectangles.
visualization and system diagrams to identify important features. Mouse input can be used to select a
particular fault, and the elements involved in that fault can be highlighted; alternatively, elements on
the one-line can be selected and faults involving that element can be highlighted on the treemap, giving
the observer a view of all the contingencies in which that element participates.
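Both directions of the cross-reference reduce to simple membership queries over the contingency list; a sketch under assumed data shapes (contingencies as tuples of element names):

```python
def faults_involving(contingencies, element):
    """One-line -> treemap: blocks to highlight when `element` is clicked."""
    return [fault for fault in contingencies if element in fault]

def elements_of(contingencies, index):
    """Treemap -> one-line: system elements to highlight for a clicked fault."""
    return set(contingencies[index])
```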
Figures 3.16 and 3.17 give examples of how these interactions could be used to glean useful information
about the system. In Figure 3.16a, a treemap and corresponding one-line are shown, with the darkness
of each block encoding the distance between elements involved in the contingency that block represents.
In Figure 3.16b a particularly dark block, indicating a fault involving elements that are close together,
is selected with the mouse, causing the two elements involved in that fault to be singled out on the
one-line diagram — bus 6 and branch 41. This connection between the two diagrams allows the user to
get details about the elements in an interactive and visually efficient manner.
The connection between the two diagrams can be utilized in reverse to look at each element involved
in a fault that has been singled out. Clicking on an element within the one-line diagram will highlight
on the treemap the faults in which this element participates. For example, in Figure 3.17a branch 41
is selected and numerous rectangles are highlighted in dark gray on the treemap. These rectangles
correspond to specific faults of branch 41 and another element, and their size and position within each
grouping indicates that these faults are all significant. In addition, one whole group of n−2 contingencies
is highlighted, representing the proportional effect of all contingencies involving branch 41 and another
element. This analysis suggests that the selected branch participates in a few serious fault scenarios. The
grouping of contingencies for branch 41 is smaller than the groupings for many other elements
(it is 37th in size out of 80); likewise, within each grouping for other elements, the n − 2 contingencies
involving branch 41 are middle-of-the-pack.
Looking at a different element can bring out vastly different observations. In Figure 3.17b, bus 6
is brought into focus with a click. When bus 6 is clicked, the largest grouping of n − 2 contingencies
(top-left) is highlighted in dark gray, indicating that the combined effect of all faults involving this bus
make it the most significant element in the system in terms of reduced loadability. In other groupings
of n− 2 contingencies, the faults involving bus 6 are at or near the top-left of every grouping, indicating
that no matter which other element is involved, an n− 2 contingency including bus 6 is one of the most
serious in terms of reduced loadability of the system. Furthermore, it can be observed that all the n− 2
contingencies in the highlighted major block are of similar size, suggesting that bus 6 is the primary
cause of reduced loadability in these contingencies. Changing the other element has little impact on how
severe a contingency of bus 6 is because it is the primary cause of the reduction in loadability.
The above discussion shows how interaction, combined with a system diagram, can increase the
functionality of a visualization. The treemap excels at summarizing contingency data in a way that
makes it easily explorable with the eye, at the expense of displaying details about individual cases.
These diagrams guide the user by surfacing important information, and with interaction the user can
discover patterns which can aid in prioritizing responses to contingency scenarios. When combined
with auxiliary visualizations, these techniques allow deeper patterns to be identified while also exposing
the underlying contingency analysis results.
3.4.5 Drill-down
One of the main limitations of the treemap is that it does not scale easily to greater depths of contingency
analysis. Beyond two levels of faults it becomes impractical to draw nested treemaps because the screen
Figure 3.16: Screen-shots of a treemap diagram where elements involved in a fault are highlighted on a one-line diagram when that fault gets a mouse-over. These visualizations contain n − 1 and n − 2 contingencies of branches and buses on the IEEE 30-bus test system. a) Treemap and corresponding one-line. b) One fault singled out with the mouse (highlighted in gray) and the corresponding elements are displayed on the one-line (highlighted in red).
Figure 3.17: Screen-shots of a treemap diagram where an element on the one-line diagram is clicked, highlighting the blocks on the treemap that represent contingencies that element is involved in. These visualizations contain n − 1 and n − 2 contingencies of branches and buses on the IEEE 30-bus test system. a-b) Single elements singled out with the mouse, with corresponding faults highlighted in gray.
real-estate available to each nested treemap is small; features of the diagram become too small to
compare and rounding error becomes significant. This limitation does not preclude treemaps from
visualizing deeper contingencies; however, overcoming this obstacle requires a systematic approach to
extending the depth of the visualization. All of the examples given so far have been of treemaps with
n − 1 contingencies as the top level; however, it is possible to draw treemaps with n − 2 contingencies
as the top layer, with n − 3 contingencies nested inside them. Alternatively, the treemap can display
sub-faults related to a particular contingency (e.g., draw a treemap of n − 3 contingencies involving
branch A and bus B). This allows for secondary views to be opened up to explore more than two levels
of the contingency analysis.
One realization of depth in treemap visualizations of contingencies is to build drill-down capability
for the diagram, where clicking on a square would isolate the corresponding fault and create a treemap
of just that fault and its sub-faults. The effect could be implemented in one of two ways: bringing up
a windowed view of the secondary treemap on click, or using a zoom-in effect to blow up the area of
the fault in question. This technique is demonstrated in Figure 3.18, which shows several levels of faults
involving a single branch. Such a sub-tree functionality would provide a treemap visualization technique
analogous to the mechanism for highlighting subtrees in tree diagrams as described in Figure 3.14.
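The drill-down selection itself is a subset query: keep the clicked fault and every deeper contingency that contains all of its elements. An illustrative sketch, with contingencies assumed to be tuples of element names:

```python
def drill_down(contingencies, selected):
    """Return the selected fault and its sub-faults for a secondary treemap.

    A sub-fault is any deeper contingency whose faulted elements include
    every element of the selected fault.
    """
    base = set(selected)
    return [c for c in contingencies if base <= set(c)]
```

The returned subset would then be laid out as its own treemap, either in a secondary window or via a zoom-in effect.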
Another approach is to use scrolling to change the contingency level used for the top-level treemap
layout. In the examples given previously of treemaps such as in Figure 3.5, the first level of treemap
layout is by individual elements, with the areas corresponding to contingencies involving that element;
each area was then broken up as a nested treemap of sub-faults. The depth of the top level of the
treemap could be changed to map to n− 2 contingencies, with each block representing a combination of
two elements and within it a nested treemap of n− 3 contingencies involving the two elements and one
other. This approach would allow the viewer to isolate faults by level, giving a more focused view on a
sub-set of the entire contingency analysis. An example of this approach is shown in Figure 3.19; n − 2
contingencies are shown in orange and n− 3 contingencies are shown in green, with the top-most layout
pattern being a division by n− 2 contingencies.
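Re-rooting the layout at the n − 2 level amounts to regrouping the n − 3 scenarios under each element pair they contain; a sketch under the same assumed representation of contingencies as tuples of element names:

```python
from collections import defaultdict
from itertools import combinations

def reroot_by_pairs(triples):
    """Group n-3 contingencies under every n-2 pair they contain.

    The keys form the top-level treemap layout (one block per element pair);
    each value list is the nested treemap of n-3 sub-faults for that pair.
    """
    groups = defaultdict(list)
    for triple in triples:
        for pair in combinations(sorted(triple), 2):
            groups[frozenset(pair)].append(triple)
    return groups
```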
3.4.6 Thresholding
Visualization of higher order contingency data is hampered by the large number of cases that need to
be compared. Diagrams become cluttered and unreadable for larger systems (particularly in the case
of the tree diagram, but also more generally), and it becomes harder to discern details at a scale that
shows the whole data set. For even moderately sized systems it may be impossible to draw a diagram
that shows all relevant elements of contingency analysis simultaneously and with adequate detail.
To mitigate this problem, a threshold may be applied to the contingency results to filter out contin-
gency scenarios that have limited impact on the network. Removing low-impact contingencies has the
effect of reducing the amount of information that needs to be displayed and increasing the granularity
with which the more important contingencies may be visualized. There are two approaches to applying
thresholding to contingency results:
1. Filter by rank or fraction of the results (e.g. show only the worst 50 cases or the worst 15%).
2. Filter by severity as compared to the nominal case (e.g. show only cases with more than 15%
reduction in loadability).
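Both thresholding approaches can be sketched in a few lines, assuming results are (fault, loadability-reduction-in-MW) pairs; an illustrative fragment, not the implementation used in this work:

```python
def threshold_by_count(results, worst_k):
    """Approach 1: keep only the worst_k most severe cases."""
    return sorted(results, key=lambda r: r[1], reverse=True)[:worst_k]

def threshold_by_severity(results, nominal_mw, min_fraction):
    """Approach 2: keep cases whose loadability reduction exceeds a given
    fraction of the nominal maximum loadability."""
    return [r for r in results if r[1] / nominal_mw > min_fraction]
```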
Figure 3.18: Treemap diagram showing n − 1 (purple), n − 2 (orange) and n − 3 (green) contingencies involving branch 2 and four other elements in the IEEE 30-bus test system. This diagram demonstrates the concept of building a treemap focused on only one element.
Figure 3.19: Treemap diagram showing n − 2 (orange) and n − 3 (green) contingencies for five elements in the IEEE 30-bus test system. This diagram demonstrates changing the base level of the treemap diagram, such that the first-level layout summarizes n − 2 faults, with child n − 3 faults nested within.
Of these, the first approach is convenient in that it allows more explicit control of the size of the resulting
diagram, since the threshold would be chosen in order to achieve the desired number of faults to be shown.
The second approach is more meaningful in the sense that the threshold directly addresses the severity
of the fault scenario; faults are included based on how bad they are, allowing the inference that any
contingency present in the visualization is significant by an absolute definition of how bad a contingency
is. This is valuable since both treemaps and tree diagrams summarize and normalize contingencies with
respect to the entire data set but omit a description of how bad the worst contingencies are in absolute
terms6. The second definition of a threshold more directly addresses the goal of the diagram but requires
a definition of what level of load reduction is considered to be significant.
The problem of scaling multiple contingency analysis to larger systems is compounded for applications
where contingencies should be visualized in real time to accommodate changing loading conditions and
generator availability, to provide situational awareness; in these applications, the time to run continuation
power flow on all contingencies may become prohibitive. One technique for alleviating this problem is
to use off-line analysis to identify a subset of contingencies which are of interest for on-line tracking. In
this scenario a full set of contingencies from a power system would be analyzed and visualizations of the
results could be used to earmark select contingency scenarios for on-line analysis.
3.4.7 Summary
Interaction and expansion of visualizations is integral to providing balanced insights into the results of
contingency analysis with multiple elements, for which the outcomes are diverse and the resulting data
set has many facets. Interaction is also key to increasing the breadth of information that can be com-
municated about contingency analysis. In this section, several techniques were detailed which could be
used to make techniques for visualizing multiple contingencies more usable, flexible and comprehensive.
3.5 Chapter Summary
This chapter introduced two contingency visualization techniques, explaining how they communicate
the structural relationships between elements and the severity of each contingency. In addition, several
interactive mechanisms were introduced that enhance the usability and explorability of these visualiza-
tions. These diagrams provide visual summarization — allowing the viewer to identify patterns and
extract key insights from the data set — and enable closer inspection of the underlying data through
the use of interaction. Details about the technology and methods used to implement these visualizations
are contained in Appendix C.
6A set of contingencies where the worst fault caused a 5% reduction in loadability would look very similar to a set causing a 25% reduction in loadability, since both sets are normalized to the space allotted to the diagram.
Chapter 4
Conclusions and Future Work
4.1 Conclusion
The development of visualization tools is a prerequisite for making effective use of multiple contingency analysis,
given the scale and complexity of contingency results. Multiple contingencies present a unique challenge,
because of the vast amount of complex data that is produced, and require more advanced and deliberate
techniques for extracting valuable observations. This research identified the tree structural model as
a viable method for organizing contingency results and used that model to map contingency data to
the treemap diagram. The treemap visualization provides a good proof-of-concept for an interactive
visualization model that allows exploration of contingency data and identification of patterns and trends.
Treemap diagrams attempt to visualize multiple contingencies in power systems by leveraging the
hierarchical structure of the data — organizing contingencies to highlight common elements in order to
surface both the most severe contingencies and the elements that participate in those fault scenarios.
The diagrams provide
• powerful summarization capabilities
• deep exploration of the data set
• the ability to highlight patterns in contingency data
• opportunities for deep integration with companion visualizations
A key aspect of understanding and utilizing these visualizations is a rigorous understanding of what
questions should be asked of them and what questions they are best suited to answering. These details
inform how contingencies are summarized and grouped; for example, if a user is only interested in
examining n− 2 contingencies, they might achieve a better result by eliminating the visual grouping of
contingencies by single element and instead looking at a flat treemap of n − 2 faults, similar to what
is described in Figure 3.19. Conversely, it is also important that users understand the gains achieved
through incorporating summarization; identifying and fixing one n − 2 contingency security issue may
be of some value for improving security, but identifying and supporting an element that participates in
many severe contingencies can often provide much more value in improving the overall security of the
system with respect to multiple contingencies.
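The element-centric view argued for above can be sketched as a simple aggregation: instead of ranking individual faults, score each element by the total severity of the contingencies it participates in. The fault data below is invented for illustration (`L1`, `L2`, `G3` are hypothetical element names).

```python
from collections import defaultdict

# Each contingency: (set of outaged elements, severity score). Invented data.
contingencies = [
    (frozenset({"L1"}), 2.0),
    (frozenset({"L1", "L2"}), 9.0),
    (frozenset({"L1", "G3"}), 7.0),
    (frozenset({"L2", "G3"}), 1.0),
]

# Score each element by the total severity of the contingencies it appears in.
participation = defaultdict(float)
for elements, severity in contingencies:
    for el in elements:
        participation[el] += severity

ranked = sorted(participation.items(), key=lambda kv: -kv[1])
print(ranked[0][0])  # "L1": supporting it addresses the largest total exposure
```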
This research identifies treemap visualizations as a viable tool for visualization of contingencies, but
it also identifies several limiting flaws in the visualization technique. Multiple visual elements referring to the same fault can confuse and frustrate viewers, since the same contingency appears redundantly in the diagram. The mapping of treemap elements to contingency data is not transparent
enough that users can view the diagram without some instruction or training, which complicates the
deployment of these systems and could lead to misinterpretation by inexperienced users. The treemap
by nature scales to show the whole data-set, which provides strong summarization; however, this comes
at the expense of minimizing small details about individual contingencies, and this minimization be-
comes more profound as systems get larger. Although these limitations do not exclude the use of such
visualizations, they raise questions about how best to deploy and utilize treemaps in power system
operations.
Although the mapping of contingencies to a visual framework is a necessary and important step in
developing contingency visualization techniques, the importance of real-time visual interaction and in-
tegration with a one-line diagram cannot be overstated. These techniques are necessary to put treemap
visualizations in proper context and to facilitate the explanation and proper use of the diagrams. With-
out interaction, the treemap remains an abstraction without practical use for evaluating contingencies.
These visualization techniques are basic tools which can be leveraged to build sophisticated sense-making
applications for system operators [4], and can also be used as tools for gaining insights into planning
and evaluation of new projects to improve the resilience of the power grid to fault scenarios such as
component faults and extreme environmental events.
In addition to visualization techniques, this thesis explored the use of continuation power flow for
running contingency analysis with multiple elements, including measures to improve its performance to
achieve quicker measurement of contingency severity. Continuation power flow has been established as
a robust computational method for identifying the voltage stability margin of a power system, and this
research demonstrates its application for evaluating the significance or severity of contingency scenarios
— particularly multiple contingencies, where there is increased risk that the system may not be stable
under given loading conditions. This research extended an implementation of continuation power flow
from the MATPOWER toolbox, including: expanding the application of bus power scaling from one bus
to all buses in the system; implementing an adaptive step size to improve the flexibility of the algorithm
and allow it to realize performance gains on-the-fly; utilizing polynomial curve fitting to improve the
prediction step; and using CPU-based parallelization. The combination of these techniques provided
a substantial performance gain and allowed for contingency analysis of n − 1 and n − 2 faults to be
brought within a reasonable time frame for planning and real-time applications. The use of scalable
parallel computing techniques means that it would be possible, with dedicated computing resources, to
perform on-the-fly contingency analysis for moderately sized systems — a necessary ingredient to allow
real-world application of multi-element contingency analysis. Nevertheless, there remains a need for
further improvements in performance of CPF in order to enable this type of contingency analysis to be
performed on larger systems; recommendations for further improvements are given in Section 4.2.
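The predictor-corrector loop with an adaptive step size can be illustrated on a toy one-dimensional analogue of the PV curve: solutions of V² − V + λ = 0 trace a nose at λ = 0.25. This is a sketch only, not the MATPOWER-based implementation used in this work; a real CPF predicts along the tangent of a multi-dimensional solution curve.

```python
def mismatch(v, lam):
    # Toy "power flow" equation: solutions trace a PV-like curve with a
    # nose (maximum loadability) at lam = 0.25, v = 0.5.
    return v * v - v + lam

def newton(g, x0, tol=1e-10, max_iter=20):
    """Scalar Newton corrector with a finite-difference derivative."""
    x = x0
    for _ in range(max_iter):
        d = (g(x + 1e-7) - g(x)) / 1e-7
        if abs(d) < 1e-12:
            return x, False
        dx = -g(x) / d
        x += dx
        if abs(dx) < tol:
            return x, True
    return x, False

def trace_nose(step=0.1, min_step=1e-4):
    """Continuation in lam with an adaptive step: grow the step after a
    successful corrector, halve it on failure, and stop once the step
    underflows, which happens only near the nose."""
    v, lam = 1.0, 0.0                            # solved base case
    while step > min_step:
        lam_try = lam + step                     # predictor: advance loading
        v_new, ok = newton(lambda x: mismatch(x, lam_try), v)
        if ok:
            v, lam = v_new, lam_try
            step *= 1.2                          # easy region: larger steps
        else:
            step *= 0.5                          # near the nose: back off
    return lam

print(round(trace_nose(), 3))   # approaches the analytic limit of 0.25
```

The adaptive step is what buys the speed: large steps are taken on the flat part of the curve and the algorithm only slows down where the corrector actually struggles.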
The use of continuation power flow marks a shift in focus — with respect to how contingencies are
evaluated — from considering their effects on individual operating limits to an evaluation of voltage
stability. This shift in focus is appropriate because voltage stability becomes a significant concern under multiple contingencies, not least because severe faults can render traditional limit-based measures unreliable. However, a
truly comprehensive evaluation of a contingency should also take into account, where possible, the effect
of that contingency on operating levels across the grid, since ensuring compliance with operation limits
is one of the primary concerns of system operators. With this goal in mind, identifying effective ways
to incorporate operational limit evaluation into these contingency visualizations would be a valuable
improvement on the work presented in this thesis.
4.1.1 List of Contributions
• Implemented a new technique for comparison of contingencies across levels — e.g., comparing n−1
and n− 2 contingencies, as well as evaluating systems with islanding.
• Achieved reliable evaluation of severe contingencies for which a solution may not have been attained
using other metrics of severity.
• Improved performance to make multi-element contingency analysis feasible for larger systems.
• Developed a visualization technique that can display a summary of large, multi-level contingency
analysis sets.
• Developed interactive techniques for exploring these visualizations and surfacing patterns.
4.2 Future Work
There are several areas in which the research presented in this thesis could be expanded upon. These extensions fall into two categories: improving the performance of contingency analysis and expanding on visualization techniques for contingencies.
4.2.1 Future Work in Improving Contingency Analysis Techniques
This research explores the use of continuation power flow, coupled with high-performance algorithms,
for evaluating multiple contingencies. There are several areas of further research that could be explored
for this application.
High-Performance Computing
One area of future work would be the development of more advanced high-performance techniques for
computing continuation power flow. The modifications to CPF that were discussed in this research
provided some improvement in computation times for contingency analysis; however, substantial im-
provements are necessary to make it suitable for real-time contingency analysis and contingency analysis
of extremely large systems. Several options exist for improving the performance of continuation power
flow:
• Integration of faster power flow techniques such as fast decoupled power flow or the use of a
constant Jacobian to improve the performance.
• Development of alternative formulations of continuation power flow that are suited to increased parallelization and implementation of heterogeneous parallel computing techniques.
• Scaling of CPU-bound parallel computing techniques (MATLAB Parallel computing or similar
technologies) for continuation power flow to utilize larger distributed computing clusters.
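The constant-Jacobian idea in the first bullet trades per-iteration cost for iteration count, which can be seen even on a toy mismatch system. This is a sketch; the equations below are invented and merely play the role of power-balance equations, and the explicit inverse stands in for a reusable LU factorization.

```python
import numpy as np

def newton(F, J, x0, freeze_jacobian, tol=1e-10, max_iter=50):
    """Newton iteration; with freeze_jacobian=True the Jacobian is factored
    once and reused (a "dishonest" Newton), trading accuracy of each step
    for a much cheaper step."""
    x = x0.copy()
    inv_j = None
    for k in range(1, max_iter + 1):
        if inv_j is None or not freeze_jacobian:
            inv_j = np.linalg.inv(J(x))   # stand-in for an LU factorization
        x = x - inv_j @ F(x)
        if np.linalg.norm(F(x)) < tol:
            return x, k
    return x, max_iter

# Invented mildly nonlinear 2-d mismatch equations with root (1, 1).
F = lambda x: np.array([2*x[0] + 0.1*x[0]**2 + x[1] - 3.1,
                        x[0] + 2*x[1] + 0.1*x[1]**2 - 3.1])
J = lambda x: np.array([[2 + 0.2*x[0], 1.0],
                        [1.0, 2 + 0.2*x[1]]])

x_full, k_full = newton(F, J, np.zeros(2), freeze_jacobian=False)
x_frozen, k_frozen = newton(F, J, np.zeros(2), freeze_jacobian=True)
print(k_full, k_frozen)  # the frozen Jacobian needs more, but cheaper, steps
```

For nearly-linear systems the frozen Jacobian converges in a few extra iterations while skipping every refactorization, which is the same trade-off fast decoupled power flow exploits.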
It remains to be seen what types of high-performance computing techniques could provide the most
effective means of improving continuation power flow techniques. One important constraint to this
process is a desire to avoid any need for purpose-built computing clusters or custom hardware to carry
out contingency analysis; it would be better to take advantage of heterogeneous parallel processing on
commodity hardware, which would allow this technology to be implemented at lower cost and with less
expertise required to build and maintain. One particular area of interest may be the development of a
formulation of the continuation power flow to take advantage of massively parallel computing resources
such as a graphical processing unit (GPU). Modern GPUs are powerful commodity hardware that can
be used in a heterogeneous parallel processing scheme to greatly increase the computation speed of
algorithms at an affordable equipment cost — provided the algorithms can be efficiently decomposed
into parallelized computational units. It is an open question whether continuation power flow or equivalent
techniques for voltage stability analysis can be adapted to a massively parallel computing architecture in
a way that provides a meaningful improvement in performance, especially since using a GPU
resource might eliminate gains achieved by CPU-based multiprocessing; in such a scheme, availability of
GPU resources becomes an additional bottleneck to CPU-based multi-threading.
Augmentation of Contingency Measures
Another area where further analysis is needed is in the exploration of augmentations to the continuation
power flow — or alternative measures — that are more comprehensive in their evaluation of system
stability or that may provide comparable characterizations of stability with less computational effort.
Some potential areas of further work include:
• Integrating reactive power limits on generators [40, 41, 42].
• Identifying alternative techniques for measuring the voltage stability limit that are less computa-
tionally demanding [43, 44].
• Considering the effects of variation in generator and load participation factors.
• Finding alternative measures of contingency severity [45, 46, 47, 48].
• Including secondary metrics such as fault probabilities in the visualization.
• Identifying a method to extract information about contingency severity from the shape of the PV curve, beyond the loading level at the PV nose [49].
These areas of study could provide improvements in the speed of computation and may be able to
identify methods of measuring the severity of a contingency which are even better than continuation
power flow in terms of quantifying real risk to the physical components of the system as well as risks
to operational stability and reliability. An important aspect of the latter is identifying how continuation power flow itself could be improved to quantify contingency severity more accurately.
One of the key parameters in continuation power flow is the definition of participation factors for
the various loads on the system; these describe how each load scales up as the algorithm traverses the
power-voltage curve. The outcome of continuation power flow depends on how the participation factors
are defined, since they affect the loading profile of the system. In this research the participation factors
are held constant over the entire PV curve; one area of further research would be to explore mechanisms
for changing the load profile over the course of the PV curve, identifying whether such an approach could give more appropriate measures of contingency severity.
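The role of the constant participation factors can be made concrete. The sketch below mirrors the convention of the `getLoad` helper in Appendix A, where λ measures total scaled load in units of the system MVA base and buses with zero participation keep their fixed demand; the bus data is invented for illustration.

```python
import numpy as np

# Invented bus demands (MW); participation factors are held fixed over the
# whole PV curve, as assumed in this work.
base_load = np.array([40.0, 0.0, 60.0, 100.0])
participation = base_load / base_load.sum()      # 0.2, 0.0, 0.3, 0.5

def scaled_load(lam, base_mva=100.0):
    """Bus loading at continuation parameter lam: participating buses carry
    their fixed share of the total scaled load lam * base_mva, while
    zero-participation buses keep their base demand."""
    return np.where(participation == 0, base_load,
                    participation * lam * base_mva)

print(scaled_load(2.0))   # lam = sum(base_load)/base_mva recovers the base case
print(scaled_load(4.0))   # doubling lam doubles every participating load
```

Because every load grows in strict proportion to its base share, any change to the participation factors changes the loading profile and hence the loadability that the CPF reports, which is exactly why varying them along the curve is a candidate refinement.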
In addition to the constant definition of participation factors, the scaling of reactive power outputs
of generators and the reactive demands of loads was assumed to stay constant as λ was scaled up; that
is, reactive power levels were scaled 1 : 1 with real power levels as the CPF computation progressed. In
reality, this is not always the case, since limitations on generators and loads may impact how reactive
power scales as the real power of a load or generator is scaled up. Introducing a mechanism to change the
reactive profile of generators or loads may increase the accuracy of outcomes of the loadability analyses.
Finally, many operators keep a record of fault probability data by tracking fault rates for each element
on the system. This historical data could be used to estimate the likelihood of a given contingency.
Considering the probability of a contingency in combination with the severity of that contingency would
render an approximation of the relative risk to secure operations of that contingency. In the past, higher-
order contingencies (such as n − 2 and beyond) have been considered so improbable as to be beyond the scope of what is necessary for contingency security; this is in part an acknowledgment of the challenges posed by higher-order contingencies, but it is also in part a judgment that their likelihood is small enough, relative to their consequences, that they are not worth the effort to analyze. This represents an implicit judgment about the overall risk that these contingencies pose to operation. A multi-element contingency evaluation
that incorporates likelihood of occurrence would be able to more fairly judge this overall risk, helping
operators and planners make better decisions about which contingencies need further analysis and which
contingencies pose the greatest threat to secure operation.
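A risk-weighted ranking of this kind can be sketched as follows; all outage rates and severities are invented, and independent element outages are assumed when forming the n − 2 probabilities.

```python
# Risk-weighted ranking: severity alone vs. probability x severity.
# Invented per-element outage probabilities over some study horizon.
outage_prob = {"L1": 0.02, "L2": 0.01, "G3": 0.001}

contingencies = [
    # (outaged elements, severity as fractional loss of loadability)
    (("L1",), 0.05),
    (("L2",), 0.15),
    (("L1", "L2"), 0.60),
    (("L1", "G3"), 0.90),
]

def probability(elements):
    # Assumes independent outages; common-mode failures would need more care.
    p = 1.0
    for el in elements:
        p *= outage_prob[el]
    return p

by_severity = max(contingencies, key=lambda c: c[1])
by_risk = max(contingencies, key=lambda c: probability(c[0]) * c[1])
print(by_severity[0])  # ('L1', 'G3'): the worst consequences
print(by_risk[0])      # ('L2',): far likelier, so larger expected impact
```

The two rankings disagree precisely because the most damaging n − 2 fault is also the least likely, which is the implicit judgment described above made explicit.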
4.2.2 Future Work in Improving Visualizations
This research performs an initial exploration of visual techniques that could be applied to multiple
contingency analysis data. There are several areas where more work is needed to expand and improve
these visualizations, such as:
• Expanded integration with other visualizations and dashboard functions [4].
• Testing functionality of visualization techniques with alternative contingency measures.
• Identifying additional mechanisms for exploration of diagrams and expansion of the system.
• Developing a visual method to integrate the effects of operating limits into the visualization.
• Integration of fault probability data, weighting the consequences of a contingency by its likelihood
of happening and creating a more comprehensive definition of the real risk of each contingency.
• Validation of usability through user testing.
• Validation on larger systems.
The key factor that makes treemap visualizations of contingencies relatable to a real world system is
the use of interaction and tie-in with a one-line diagram. This real-time, explorable association with a
system map allows the user to contextualize the contingency visualization and plays a primary role in
enabling the visualization as a tool for understanding the power system. Because of this, identifying novel
methods of further linking these visualizations with other common tools for monitoring and evaluating
system performance will be key to identifying ways of improving them and increasing their applicability
in power system analysis. By creating data-based or visual linkages with other common dashboards
including system maps, alarms lists and other metrics that are already being used to evaluate different
aspects of a system's performance and reliability, it may be possible to extract new observations from the
different tools. Contingency analysis is an important first step in developing robust protocols for handling
faults in a way that minimizes disruption to the grid; this is a prime example of where integration of
contingency visualization with existing protocols could help streamline an operator's workflow and help them more effectively make control inputs to optimize the stability of the system. These techniques require
extensive input and user validation, since they have a direct impact on the workflow of operators. It is
important that these efforts be evaluated for both their functionality and their visual design to ensure
that they are ergonomically suitable and improve operational capability.
Bibliography
[1] Z. Huang, Y. Chen, F. L. Greitzer, and R. Eubank, “Contingency visualization for real-time decision
support in grid operation,” in Power and Energy Society General Meeting. IEEE, July 2011, pp.
1–7.
[2] Y. Sun and T. J. Overbye, “Visualizations for power system contingency analysis data,” IEEE
Transactions on Power Systems, vol. 19, no. 4, pp. 1859–1866, 2004.
[3] U.S.-Canada Power System Outage Task Force, “Final report on the August 14, 2003 blackout in the United States and Canada: Causes and recommendations,” www.nerc.com, April 2004.
[4] C. Mikkelsen, J. Johansson, and M. Rissanen, “Interactive information visualization for sense-
making in power grid supervisory control systems,” in International Conference on Information
Visualisation. IEEE, 2011, pp. 119–126.
[5] B. Shneiderman, “The eyes have it: a task by data type taxonomy for information visualizations,”
in Symposium on Visual Languages. IEEE, September 1996, pp. 336–343.
[6] Z. Jinli, W. Tengfei, J. Hongjie, W. Wei, C. Jing, W. Gang, and H. Wenmao, “Implementation
of power system security check and its visualization system,” in Electric Utility Deregulation and
Restructuring and Power Technologies. IEEE, July 2011, pp. 185–188.
[7] S. Grijalva, “Visualization and study mode architectures for real-time power system control,” in
Industrial Electronics and Control Applications. IEEE, 2005, pp. 1–8.
[8] T. Overbye, E. Rantanen, and S. Judd, “Electric power control center visualization using geographic
data views,” in Bulk Power System Dynamics and Control - VII. Revitalizing Operational Reliability,
2007 iREP Symposium, Aug 2007, pp. 1–8.
[9] B. Xu, C. Yuksel, A. Abur, and E. Akleman, “3d visualization of power system state estimation,” in
Electrotechnical Conference, 2006. MELECON 2006. IEEE Mediterranean, May 2006, pp. 943–947.
[10] P. Chopade, K. Flurchick, M. Bikdash, and I. Kateeb, “Modeling and visualization of smart power
grid: Real time contingency and security aspects,” in Southeastcon, 2012 Proceedings of IEEE,
March 2012, pp. 1–6.
[11] G. K. Stefopoulos, F. Yang, G. J. Cokkinides, and A. P. S. Meliopoulos, “Advanced contingency
selection methodology,” in Power Symposium, 2005, pp. 67–73.
[12] A. J. Wood and B. F. Wollenberg, Power Generation, Operation and Control, 2nd ed. Wiley, 1996.
[13] G. Ejebe and B. F. Wollenberg, “Automatic contingency selection,” Power Apparatus and Systems,
IEEE Transactions on, vol. PAS-98, no. 1, pp. 97–109, Jan 1979.
[14] S. Sundhararajan, A. Pahwa, S. Starret, and P. Krishnaswami, “Convergence measures for con-
tingency screening in continuation power flow,” in Transmission and Distribution Conference and
Exposition, vol. 1, 2003, pp. 168–174.
[15] Z. Jia and B. Jeyasurya, “Contingency ranking for on-line voltage stability assessment,” IEEE
Transactions on Power Systems, vol. 15, no. 3, pp. 1093–1097, 2000.
[16] V. Ajjarapu and C. Christy, “The continuation power flow: A tool for steady state voltage stability
analysis,” IEEE Transactions on Power Systems, vol. 7, no. 1, pp. 416–423, 1992.
[17] M. Crow, Computational methods for electric power systems. CRC press, 2003.
[18] S. H. Li and H.-D. Chiang, “Nonlinear predictors and hybrid corrector for fast continuation power
flow,” IET Generation, Transmission and Distribution, vol. 2, no. 3, pp. 341–354, 2008.
[19] L. D. Kudryavtsev and M. K. Samarin, “Lagrange interpolation formula.”
[20] D. Coppersmith and S. Winograd, “Matrix multiplication via arithmetic progressions,” Journal
of Symbolic Computation, vol. 9, no. 3, pp. 251 – 280, 1990, computational algebraic complexity
editorial.
[21] R. Zimmerman, C. Murillo-Sanchez, and R. Thomas, “Matpower: Steady-state operations, planning
and analysis tools for power systems research and education,” IEEE Transactions on Power Systems,
vol. 26, no. 1, pp. 12–19, February 2011.
[22] B. Shneiderman, “Tree visualization with tree-maps: 2-d space-filling approach,” ACM Trans.
Graph., vol. 11, no. 1, pp. 92–99, Jan. 1992.
[23] R. Rasteiro and J. Pereira-Leal, “Multiple domain insertions and losses in the evolution of the rab
prenylation complex,” BMC Evolutionary Biology, vol. 7, no. 1, p. 140, 2007.
[24] R. C. Griffiths and S. Tavaré, “Ancestral inference in population genetics,” Statistical Science, vol. 9, no. 3, pp. 307–319, 1994.
[25] L. Breiman, J. Friedman, R. Olshen, and C. Stone, Classification and Regression Trees. Monterey,
CA: Wadsworth and Brooks, 1984.
[26] M. A. Friedl and C. E. Brodley, “Decision tree classification of land cover from remotely sensed data,” Remote Sensing of Environment, vol. 61, no. 3, pp. 399–409, 1997.
[27] C. H. Brase and C. P. Brase, Understanding Basic Statistics. Cengage Learning, 2012.
[28] B. Shneiderman and C. Plaisant, “Treemaps for space-constrained visualization of hierarchies,”
September 2014. [Online]. Available: http://www.cs.umd.edu/hcil/treemap-history/
[29] Uderzo Software, “SpaceSniffer,” September 2014. [Online]. Available: http://www.uderzo.it/main_products/space_sniffer/index.html
[30] MIX Online, “Descry treemap sample,” September 2014. [Online]. Available: http://www.visitmix.com/labs/descry/cpptreemapsample/
[31] E. Baehrecke, N. Dang, K. Babaria, and B. Shneiderman, “Visualization and analysis of microarray
and gene ontology data with treemaps,” BMC Bioinformatics, vol. 5, no. 1, p. 84, 2004.
[32] M. Balzer, O. Deussen, and C. Lewerentz, “Voronoi treemaps for the visualization of software
metrics,” in Proceedings of the 2005 ACM Symposium on Software Visualization, ser. SoftVis ’05.
New York, NY, USA: ACM, 2005, pp. 165–172.
[33] M. Wattenberg, “A note on space-filling visualizations and space-filling curves,” in Information
Visualization, 2005. INFOVIS 2005. IEEE Symposium on, Oct 2005, pp. 181–186.
[34] D. Auber, C. Huet, A. Lambert, B. Renoust, A. Sallaberry, and A. Saulnier, “Gospermap: Using a
gosper curve for laying out hierarchical data,” IEEE Transactions on Visualization and Computer
Graphics, vol. 19, no. 11, pp. 1820–1832, 2013.
[35] K. Onak and A. Sidiropoulos, “Circular partitions with applications to visualization and embed-
dings,” in Proceedings of the 24th ACM Symposium on Computational Geometry, College Park,
MD, USA, June 9-11, 2008, 2008, pp. 28–37.
[36] R. Vliegen, J. van Wijk, and E.-J. van der Linden, “Visualizing business data with generalized
treemaps,” Visualization and Computer Graphics, IEEE Transactions on, vol. 12, no. 5, pp. 789–
796, Sept 2006.
[37] M. Bruls, K. Huizing, and J. van Wijk, “Squarified treemaps,” in Data Visualization 2000, ser.
Eurographics, W. de Leeuw and R. van Liere, Eds. Springer Vienna, 2000, pp. 33–42.
[38] R. E. Krider, P. Raghubir, and A. Krishna, “Pizzas: π or square? psychophysical biases in area
comparisons,” Marketing Science, vol. 20, no. 4, pp. 405–425, 2001.
[39] P. Raghubir and A. Krishna, “Vital dimensions in volume perception: can the eye fool the stomach?”
Journal of Marketing research, pp. 313–326, 1999.
[40] P. Zhu, G. Taylor, and M. Irving, “A novel q-limit guided continuation power flow method,” in
Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the
21st Century, 2008 IEEE, July 2008, pp. 1–7.
[41] N. Yorino, H.-Q. Li, and H. Sasaki, “A predictor/corrector scheme for obtaining q-limit points for
power flow studies,” Power Systems, IEEE Transactions on, vol. 20, no. 1, pp. 130–137, 2005.
[42] M. Beiraghi, A. Rabii, S. Mobaieen, and H. Ghorbani, “A modified approach for continuation power
flow,” Journal of Basic and Applied Scientific Research, vol. 2, no. 2, pp. 1546–1556, 2012.
[43] A. Ning, Z. Shuangxi, and Z. Lingzhi, “Power system voltage stability limits estimation based on
quasi-steady-state simulation,” in Power System Technology, 2006. PowerCon 2006. International
Conference on, Oct 2006, pp. 1–7.
[44] M. Haque, “A fast method for determining the voltage stability limit of a power system,” Electric
Power Systems Research, vol. 32, no. 1, pp. 35 – 43, 1995.
[45] J. Zhang, I. Dobson, and F. Alvarado, “Quantifying transmission reliability margin,” International
Journal of Electrical Power & Energy Systems, vol. 26, no. 9, pp. 697 – 702, 2004.
[46] D. Julian, R. Schulz, K. Vu, W. Quaintance, N. Bhatt, and D. Novosel, “Quantifying proximity to
voltage collapse using the voltage instability predictor (vip),” in Power Engineering Society Summer
Meeting, 2000. IEEE, vol. 2, 2000, pp. 931–936 vol. 2.
[47] F. Xiao and J. McCalley, “Power system risk assessment and control in a multiobjective framework,”
Power Systems, IEEE Transactions on, vol. 24, no. 1, pp. 78–85, Feb 2009.
[48] B. Gao, G. Morison, and P. Kundur, “Voltage stability evaluation using modal analysis,” Power
Systems, IEEE Transactions on, vol. 7, no. 4, pp. 1529–1542, 1992.
[49] S. Corsi and G. Taranto, “Voltage instability - the different shapes of the nose,” in Bulk Power
System Dynamics and Control - VII. Revitalizing Operational Reliability, 2007 iREP Symposium,
Aug 2007, pp. 1–16.
Appendices
Appendix A
Source Code for Contingency
Analysis
A.1 Running Contingency Analysis
The following code takes a list of faults and runs continuation power flow on them, using multi-processing,
to measure loadability of each fault.
function [CPFloads, messages, faults, base, baseLoad] = ...
cpf_compare(base, levels, branchFaults)
%
if nargin < 2, levels = 2; end
baseCPF = cpf(base);
baseLambda = baseCPF.max_lambda;
baseLoad = getLoad(base, baseLambda);
if nargin > 2,
faults = branchFaults;
else
faults = defineFaults(base,levels);
end
nFaults = length(faults);
%progress monitor that incorporates feedback from separate workers
ppm = ParforProgMon('Running CPF on Faults: ',...
nFaults, floor(nFaults/1000));
CPFloads = zeros(1, nFaults);
messages = [];
msg = [];
doLog = true;   % renamed from 'log', which shadows the MATLAB built-in
if doLog,
logFile = fopen('log_cpf.txt', 'w');
fclose(logFile);
end
%loop through list of faults (with parallel loop execution) and run CPF
parfor faultNum = 1:length(faults),
if doLog,
logFile = fopen('log_cpf.txt', 'a');
fprintf(logFile, 'Fault %d of %d\n', faultNum, nFaults);
fclose(logFile);
end
fprintf('Fault %d of %d\n', faultNum, nFaults);
ppm.update(faultNum);
try
%fault a case and run it
[CPFloads(faultNum), msg] = faultCPF(base, faults{faultNum});
catch err
rethrow(addCause(err, MException('CPF:faultCPF',...
sprintf('Error faulting case %d', faultNum))));
end
messages = [messages msg];
end
mprint.printMessages(messages);
pause(0.2); ppm.delete();
end
function [load, messages, results] = faultCPF(base,fault)
fprintf('\tFaulting Case\t');
if ~isempty(fault.branch),
s = sprintf('%d, ', fault.branch);
fprintf('Branch: %s\t\t', s(1:end-2));
end
if ~isempty(fault.bus),
s = sprintf('%d, ', fault.bus);
fprintf('Bus: %s\t\t', s(1:end-2));
end
if ~isempty(fault.gen),
s = sprintf('%d, ', fault.gen);
fprintf('Gen: %s\t\t', s(1:end-2));
end
if ~isempty(fault.trans),
s = sprintf('%d, ', fault.trans);
fprintf('Trans: %s\t\t', s(1:end-2));
end
fprintf('\n');
%apply the fault to the base case, receive a MATPOWER representation
%of the case with elements outaged
myCase = fault.applyto(base);
lambda = [];
messages = [];
sigmaForLambda = 1;
sigmaForVoltage = 0.0005;
fprintf('\tRunning CPF\n');
results = [];
loads = zeros(1, length(myCase));
nIslands = length(myCase);
for i = 1:nIslands,
if length(myCase) > 1, fprintf('\t\tSubcase %d\n', i); end
%case where loads are isolated and have to be shed
if size(myCase{i}.gen,1) == 0,
messages = addMessage(messages, newMessage('No Generators'));
loads(i) = 0;
continue;
elseif nnz(myCase{i}.bus(:,3)) == 0,
messages = addMessage(messages, newMessage('No loads'));
loads(i) = 0;
continue;
end
if size(myCase{i}.bus,1) == 1,
messages = addMessage(messages, newMessage('No branches'));
loads(i) = 0;
continue;
end
defaultResults.max_lambda = NaN;
defaultResults.V_corr = [];
defaultResults.V_pr = [];
defaultResults.lambda_corr = [];
defaultResults.lambda_pr = [];
defaultResults.success = false;
defaultResults.time = 0;
try
%run CPF on the fault
myCPFresults = cpf(myCase{i}, -1,...
sigmaForLambda, sigmaForVoltage, false, false, false);
% error handling:
if isnan(myCPFresults.max_lambda),
myCPFresults.max_lambda = 0;
messages = addMessage(messages,...
newMessage('CPF failed', false));
else
if i>1 && myCPFresults.success,
messages = addMessage(messages,...
newMessage('CPF succeeded on island'));
end
end
if ~myCPFresults.success
messages = addMessage(messages,...
newMessage('CPF failed', false));
end
if myCPFresults.success && isempty(myCPFresults.V_corr),
%stopped short due to no PQ buses
messages = addMessage(messages,...
newMessage('CPF aborted due to no PQ buses', true));
end
loads(i) = getLoad(myCase{i}, myCPFresults.max_lambda);
lambda = [lambda myCPFresults.max_lambda];
results = [results myCPFresults];
catch err
getReport(err)
lambda = [lambda 0];
results = [results defaultResults];
messages = addMessage(messages, ...
newMessage(errLine(err), false));
loads(i) = 0;
end
end
%
load = sum(loads);
%nested helper functions (the 'function' keywords were lost in extraction)
function message = newMessage(text, success)
if nargin < 2, success = true; end
message.text = text;
message.faultNum = fault.id;
message.faults = fault.print();
message.islandNum = sprintf('%d / %d', i, nIslands);
message.success = success;
end
function messages = addMessage(messages, message)
messages = [messages message];
end
end
function load = getLoad(myCase, lambda)
PD = myCase.bus(:,3);
participation = PD ./ sum(PD);
%calculate the power at each bus
power = PD.*(participation == 0) + participation * lambda * myCase.baseMVA;
load = sum(power);
end
function errString = errLine(err)
errString = sprintf('%s in ', err.identifier);
for i = length(err.stack):-1:2
errString = [errString, sprintf(' %s: %d >',...
err.stack(i).name, err.stack(i).line)];
end
errString = [errString sprintf(' %s: %d; ', err.stack(1).name,...
err.stack(1).line)];
for i = 1:length(err.cause),
errString = [errString, sprintf(' %s; ',...
causeLine(err.cause{i}))];
end
end
function causeString = causeLine(cause)
causeString = sprintf('%s: %s', cause.identifier, cause.message);
end
A.2 Defining Faults to be Analyzed
The following code defines a list of faults to be analyzed using contingency analysis. Given a set of
elements and a range of values for k corresponding to the n − k contingency levels to be analyzed,
this code produces a list of all contingencies involving the specified elements and levels, ready to be
consumed during contingency analysis. Once defined, all faults are listed sequentially regardless of their
n− k level.
function faults = defineFaults(base, levels)
if nargin == 0,
base = loadcase('case30_mod.mat');
end
if nargin < 2,
levels = 1;
end
nBranches = size(base.branch,1);
nBusses = size(base.bus,1);
nGens = size(base.gen,1);
try nTrans = size(base.trans,2); catch, nTrans = 0; end
%list all elements (by ID) that should be faulted
% singleBranchFaults = 1:4;
singleBranchFaults = 1:nBranches;
singleBranchFaults = [1,2,18, 32];   %overrides the full list for this example
singleBusFaults = 1:nBusses;
singleBusFaults = [8,12,15, 28];
%
%
if ~exist('singleBranchFaults', 'var'), singleBranchFaults = []; end
if ~exist('singleBusFaults', 'var'), singleBusFaults = []; end
if ~exist('singleGenFaults', 'var'), singleGenFaults = []; end
if ~exist('singleTransFaults', 'var'), singleTransFaults = []; end
%obtain faults from desired elements
[faults, columns] = combineFaults(levels,...
singleBranchFaults, singleBusFaults,...
singleGenFaults, singleTransFaults);
fprintf('faults combined\n');
singleFaults = [];
for i = 1:length(faults)
fault = faults{i};
if length(fault.branch) ...
+ length(fault.bus) ...
+ length(fault.gen) ...
+ length(fault.trans) == 1,
singleFaults = [singleFaults fault];
end
end
%
%
%
%
%
%
%
%
%
end
function [faults, columns] = combineFaults(level,varargin)
%create all different combinations of elements given and return
%corresponding faults
fprintf('Combining Faults\n')
elements = {'branch', 'bus', 'generator', 'transformer'};
%getting boundaries for x = [ varargin{1}, varargin{2}, varargin{3}]
i = 1;
% delete empty lists from varargin, along with their associated
% elements
while i <= length(varargin),
if isempty(varargin{i}),
varargin = varargin(1:end ~= i);
elements = elements(1:end ~= i);
else
i = i+1;
end
end
lengths = zeros(length(varargin),1);
for i = 1:length(varargin)
lengths(i) = length(varargin{i});
end
lengths = cumsum(lengths);
offset = [0; lengths(1:end-1)];
ranges = [1+offset, [lengths] ];
indices = ranges(1,1):ranges(end,2);
columns = [’label’, elements];
faults = {};
tic;
baseFaultNum = 0;
for l = 1:level,
%count the number of faults that occurred in previous levels.
if l > 1,
baseFaultNum = baseFaultNum + nchoosek(length(indices), l-1);
end
combos = nchoosek(indices, l);
fprintf(’\tLevel %d: building’,l);
fprintf(’ fault list - %d faults\n’, length(combos));
ppm = ParforProgMon(sprintf(’Building faults for level %d: ’,l),...
size(combos,1), floor(size(combos,1)/100));
%use parallel computing to speed up exploration of combinations
parfor index = 1:size(combos,1),
% for index =1:size(combos,1),
ppm.update(index)
combo = combos(index,:);
[gIndices, mGroups] = groupIndex(combo, offset,ranges);
if l<=1, label = sprintf(’single %s fault’, elements{mGroups});
else label = ’combined fault’;
end
%
%
%
%
%
%arguments will be cell array of lists in the form { [list of
%branches], [list of buses], [list of generators], [list of
%transformers]
arguments = cell(length(varargin),1);
for el = 1:length(gIndices),
arguments{mGroups(el)}(end+1) =...
varargin{mGroups(el)}(gIndices(el));
end
%
faultNum = baseFaultNum + index;
%
faults = [faults {Fault(label, arguments,faultNum)}];
%
end
end
pause(0.2); ppm.delete();
fprintf(’\tcompleted: %3.2f\n’, toc);
end
function groups = grouping(indices, ranges)
% returns grouping in varargin of each index specified in ’indices’
groups = zeros(length(indices), 1);
for index = 1:length(indices),
groups(index) = max(find(sum(indices(index) < ranges,2) <= 1));
end
end
function [gIndices, mGroups] = groupIndex(indices, offset, ranges)
% converts indices of x = [ varargin{1}, varargin{2}, varargin{3}]
% to indices of lists in varargin, accompanied by grouping number
mGroups = grouping(indices,ranges);
gIndices = zeros(length(indices),1);
for index = 1:length(indices)
gIndices(index) = indices(index) - offset(mGroups(index));
end
end
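The offset/ranges arithmetic used by `grouping` and `groupIndex` above can be summarized in a short Python sketch (a hypothetical helper, not part of the thesis code): cumulative group lengths act as boundaries for mapping an index into the concatenated element list back to its group and its position within that group.

```python
from bisect import bisect_right
from itertools import accumulate

def group_index(flat_index, group_lengths):
    """Map a 1-based index into the concatenation of several element
    lists back to (group number, 1-based index within that group).

    Mirrors the offset/ranges arithmetic of groupIndex, using
    cumulative lengths as group boundaries."""
    bounds = list(accumulate(group_lengths))      # e.g. [4, 7, 9]
    group = bisect_right(bounds, flat_index - 1)  # 0-based group number
    offset = bounds[group - 1] if group > 0 else 0
    return group + 1, flat_index - offset

# With groups of lengths 4, 3 and 2 (say branches, busses, generators),
# flat index 6 falls in group 2 at position 2.
```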
A.3 Fault Class Definition
The following code defines the Fault class, which encapsulates a contingency so that it can be passed
between operations, streamlining the analysis code.
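A hypothetical Python analogue of the class (names and structure illustrative only) shows the same bookkeeping: a labelled container of faulted element indices, where a standard-library call replaces the hand-built JSON strings of the `toJSON` method below.

```python
import json

class Fault:
    """Illustrative Python analogue of the MATLAB Fault class:
    a labelled container of faulted element indices."""
    def __init__(self, label, branch=(), bus=(), gen=(), trans=(), fault_id=0):
        self.id = fault_id
        self.label = label
        self.branch = list(branch)
        self.bus = list(bus)
        self.gen = list(gen)
        self.trans = list(trans)

    def to_json(self):
        # json.dumps stands in for the manual string concatenation
        # performed by toJSON in the MATLAB listing
        return json.dumps({'elements': {
            'Bus': self.bus, 'Branch': self.branch,
            'Gen': self.gen, 'Transformer': self.trans}})

f = Fault('combined fault', branch=[1, 18], bus=[8])
```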
classdef Fault
%FAULT A contingency described by the element indices it removes.
% Stores faulted branch, bus, generator and transformer indices,
% together with a numeric id and a text label.
properties
id = 0;
label = ’’;
branch = [];
bus = [];
gen = [];
trans = [];
end
methods
function fault_obj = Fault(label, args, faultNum)
%label: text description of the object
%args: branch{ list by index},
% bus {list by index },
% gen { list by index },
% transformer {list by index}
if nargin > 2,
fault_obj.id = faultNum;
end
if length(args) > 0,
fault_obj.branch = args{1};
end
if length(args)> 1,
fault_obj.bus = args{2};
end
if length(args)>2,
fault_obj.gen = args{3};
end
if length(args)>3, %transformers (not transmission lines)
fault_obj.trans = args{4};
end
fault_obj.label = label;
end
function stringRep = print(obj)
% produce a string representation of faults - useful for printing
% table of faults
branchS = [’[’, sprintf(’%d ’,obj.branch), ’]’];
busS = [’[’, sprintf(’%d ’, obj.bus), ’]’];
genS = [’[’, sprintf(’%d ’, obj.gen), ’]’];
try, transS = [ '[', sprintf('%d ', obj.trans), ']'];
catch, transS = ''; end
stringRep = ...
sprintf(’Elements: Br:%20s\tBu:%20s\tGe:%20s\tTr:%20s’,...
branchS, busS, genS,transS);
if nargout == 0,
fprintf(stringRep);
end
end
function JSONRep = toJSON(obj)
function str = array2json_str(array)
mcell = cell(1, length(array));
for i =1:length(array),
mcell{i} = sprintf(’%d’, array(i));
end
str = sprintf(’[%s]’, strjoin(mcell, ’, ’));
end
branchS = array2json_str(obj.branch);
busS = array2json_str(obj.bus);
genS = array2json_str(obj.gen);
try, transS = array2json_str(obj.trans);
catch, transS = '[]'; end
JSONRep = [...
’"elements": { ’,...
sprintf(’"Bus":%s, ’, busS),...
sprintf(’"Branch":%s, ’, branchS),...
sprintf(’"Gen":%s, ’, genS),...
sprintf(’"Transformer":%s’, transS),...
’}’...
];
end
function outStruct = tostruct(obj)
outStruct.label = obj.label;
outStruct.branch = obj.branch;
outStruct.bus = obj.bus;
outStruct.gen = obj.gen;
outStruct.trans = obj.trans;
end
function outList = tolist(obj)
outList = {};
outList{end+1} = obj.label;
outList{end+1} = obj.branch;
outList{end+1} = obj.bus;
outList{end+1} = obj.gen;
outList{end+1} = obj.trans;
end
function [branch,bus,gen,trans] = consolidate(obj,base)
nBranches = size(base.branch,1);
nGens = size(base.gen,1);
nBusses = size(base.bus,1);
bus = obj.bus;
branch = obj.branch;
gen = obj.gen;
trans = obj.trans;
try, nTrans = length(base.trans); catch, nTrans = 0; end
%take care of any transformer faults
if ˜isempty(trans),
%from all transformer faults, gather up a list of branches,
%busses, and generators involved
tBusses = [];
tBranches = [];
tGens = [];
for transInd = obj.trans,
tBusses = [ tBusses base.trans{transInd}{2}];
tBranchBusses = base.trans{transInd}{1};
%from connecting busses get branch indices
for ind = 1:size(tBranchBusses,1),
tBranches = [tBranches,...
find(tBranchBusses(ind,1) == base.branch(:,1)...
& tBranchBusses(ind,2) == base.branch(:,2))];
tBranches = [tBranches find(tBranchBusses(ind,2)...
== base.branch(:,1)...
& tBranchBusses(ind,1) == base.branch(:,2))];
end
tGens = [tGens base.trans{transInd}{3}];
end
bus = union(bus, tBusses);
branch = union(branch, tBranches);
gen = union(gen, tGens);
end
%take care of bus faults
if ˜isempty(bus),
markBranch = zeros(nBranches,1);
markGen = zeros(nGens,1);
busIndices = [];
for mBus = bus(:)',
%MATLAB's for loops iterate over columns, so bus(:)' forces a
% row vector to guarantee element-by-element iteration
markBranch = markBranch |...
(base.branch(:,1) == mBus |...
base.branch(:,2) == mBus);
markGen = markGen | mBus == base.gen(:,1);%
end
gen = union(gen, find(markGen));
branch = union( branch, find(markBranch));
end
end
function faultCases = applyto(obj, base, verbose, animate)
if nargin < 3, verbose = false; end
if nargin < 4, animate = false; end
faultCase = base;
nBranches = size(base.branch,1);
nGens = size(base.gen,1);
nBusses = size(base.bus,1);
try, nTrans = length(base.trans); catch, nTrans = 0; end
[mBranch, mBus, mGen, mTrans] = obj.consolidate(base);
%remove busses from faultCase
if ˜isempty(mBus)
faultCase.bus = base.bus( setdiff(1:nBusses, mBus),:);
%remove busses
if isfield(faultCase, ’bus_geo’),
faultCase.bus_geo =...
base.bus_geo( setdiff(1:nBusses,mBus),:);
%remove corresponding geographic entries
end
end
%take care of any generator faults
if ˜isempty(mGen),
faultCase.gen = base.gen( setdiff(1:nGens, mGen),:);
faultCase.gencost = base.gencost(setdiff( 1:nGens,mGen),:);
end
%take care of branches
if ˜isempty(mBranch) ,
faultCase.branch = ...
base.branch( setdiff( 1:nBranches, mBranch),:);
if isfield(faultCase, ’branch_geo’),
faultCase.branch_geo = ...
base.branch_geo( setdiff( 1:nBranches, mBranch));
end
end
%
networks = island.find(faultCase, verbose, animate);
if length(networks)>1,
faultCases = island.resolve(faultCase, networks);
else
faultCases = {faultCase};
end
if verbose, mplot.faulted(base, obj); end
end
end
end
A.4 Dealing With Islanding
The following code contains an abstract class with static methods for finding and resolving islanding in
contingencies. The algorithms are equipped to identify instances of islanding and resolve them either by
omitting the islanded elements from the network or, in the case where the island could be considered a
sub-system, returning two distinct cases which can both be solved.
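The traversal implemented by `island.find` below walks the bus/branch graph to collect connected groups of busses. A compact Python sketch of the same idea (using breadth-first search instead of the random-walk traversal in the MATLAB code; `find_islands` is a hypothetical name) is:

```python
from collections import deque

def find_islands(bus_ids, branches):
    """Identify isolated sub-networks by traversing branch connections.

    `branches` is a list of (from_bus, to_bus) pairs. Returns the
    islands as sorted lists of bus ids, largest first, matching the
    largest-first ordering produced at the end of island.find."""
    adjacency = {b: set() for b in bus_ids}
    for f, t in branches:
        adjacency[f].add(t)
        adjacency[t].add(f)
    unvisited, islands = set(bus_ids), []
    while unvisited:
        # start a new island from any unvisited bus
        queue = deque([unvisited.pop()])
        island = set(queue)
        while queue:
            for nxt in adjacency[queue.popleft()] - island:
                island.add(nxt)
                queue.append(nxt)
        unvisited -= island
        islands.append(sorted(island))
    return sorted(islands, key=len, reverse=True)

# busses 1-5 with only branches (1,2), (2,3), (4,5) leave
# {1,2,3} and {4,5} as separate islands
```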
classdef island
%ISLAND methods for detecting and dealing with islanding
% methods:
% resolve(mCase, networks) resolve mCase into separate cases
%
%
%
%
% forkSystem(mCase, busList) from mCase, fork a subsystem with
%
%
%
%
% find(mCase) traverse the network and identify isolated
%
properties
end
methods
end
methods(Static)
function [subCases] = resolve(mCase, networks)
% resolve handles bus islanding so that power flows can be run again
% This function handles networks where faults have resulted
% in one or more busses being isolated from the main network.
% One of two strategies is used:
%
% * Island has no generation; Shed the load
% * Island contains generator units; Return a subsystem.
fork = false(length(networks), 1);
% Identify networks with generation
for isle = 1:length(networks),
busses = networks{isle};
genIsland = intersect(busses, mCase.gen(:,1));
fork(isle) = any(genIsland);
end
% Fork each isolated subsystem that has generation
subCases = {};
for isle = find(fork(:)’),
subCases = [...
subCases,...
island.forkSystem(mCase, networks{isle})...
];
end
end
function [mCase] = forkSystem(mCase, busList)
% forkSystem fork a subsystem with select busses
%
% This function removes all busses and branches from mCase
% that are not listed in busList. It also removes generator
% listings and branches connecting to busses that are not
% in the network
%remove the busses
keepBusses = false(length(mCase.bus(:,1)),1);
%boolean vector of which busses to keep
for i = 1:length(busList),
keepBusses = keepBusses | mCase.bus(:,1) == busList(i);
end
mCase.bus = mCase.bus(keepBusses,:);
% fix ’areas’
if isfield(mCase, ’areas’),
[nAreas,˜] = size(mCase.areas);
for i = 1:nAreas,
%for each area ’relative price bus’ listing,
areaBus = mCase.areas(i,2);
count = 0;
while ~any(mCase.bus(:,1) == areaBus),
%while areaBus is not in network
areaBus = areaBus + (count * (-1)^(mod(count,2)));
count = count+1;
end
% this loop searches around areaBus by
% checking +1, -1, +2, -2... until it finds a
% bus that is in the network.
mCase.areas(i,2) = areaBus;
end
end
%remove any branches which connect to busses not in the network
discardBranches = false(length(mCase.branch(:,1)),1);
for i = 1:length(mCase.branch(:,1)),
discardBranches(i) = ˜(...
any(mCase.branch(i,1) == busList)...
&& any(mCase.branch(i,2) == busList)...
);
end
mCase.branch = mCase.branch(˜discardBranches, :);
%discard gen listings
discardGenList = false(length(mCase.gen(:,1)), 1);
for i = 1:length(mCase.gen(:,1)),
discardGenList(i) = ˜any(mCase.gen(i,1) == busList);
end
mCase.gen = mCase.gen(˜discardGenList, :);
%discard bus positions
if isfield(mCase, ’bus_geo’),
mCase.bus_geo = mCase.bus_geo(keepBusses,:);
end
%discard fault listings
if isfield(mCase, ’fault’),
[numFaults, ˜] = size(mCase.fault);
discardFaults = false(numFaults, 1);
for fau = 1:numFaults,
discardFaults(fau) = ˜(...
any(mCase.fault(fau,1) == busList) &&...
any(mCase.fault(fau,2) == busList)...
);
end
mCase.fault = mCase.fault(˜discardFaults,:);
if isempty(mCase.fault),
mCase = rmfield(mCase, ’fault’); end
end
end
function [networks] = find(mCase, figures, verbose)
%find traverse a network to identify isolated busses
%
% Given a matpower case, this code explores the network’s
% branch connections to identify isolated busses or groups
% of busses. It returns a cell array where each element is a
% collection of busses that are linked to a network.
%
% Usage:
% [networks] = find(mCase);
%
% [networks] = find(mCase, true); produces a plot of the
% network showing all processed elements in
% red; depends on the case having Xpos
% and Ypos values for each bus
%
% [networks] = find(mCase, true, true); steps through the
% network exploration, plotting each bus
% and branch as it is explored and
% printing the explored network busses
% to the command prompt.
%
% Written by Anton Lodder in May 2013.
%
%Input handling
if nargin >2,
timeDelay = 0.1* verbose;
%time delay is zero if verbose is off
elseif nargin > 1
verbose = false;
timeDelay = 0;
else %nargin == 1;
timeDelay = 0;
verbose = false;
figures = false;
end
if figures,
close all;
figure;
set(gca,’YDir’,’reverse’);
end
[nBusses,˜] = size(mCase.bus);
[nBranches,˜] = size(mCase.branch);
%% Set up some structures to track progress
visitBus = false(nBusses,1);
traversedBranch = false(nBranches,1);
%bool vectors to track which bus/branches have been visited
%these store by index, not by id
busNetworks = zeros(nBusses,1);
%indicate the network the bus is in at any given
% time(zero indicates not processed)
%boolean for each entry in mCase.bus, entry index
%corresponds to index in mCase.bus
networks = {[]};
% to track how many isolated sub-networks we
% have and which busses are in each
currentNetwork = 1;
%keep track of which network the current bus is in.
%% Traverse the network %%%
if verbose,
fprintf(’\n\nChecking Network for Islanding’);
fprintf(’:\nTraversing the Network...\n’);
fprintf(’===============================\n’);
end
if figures, %get the x/y limits of the figure:
Mins = min(mCase.bus_geo);
Maxs = max(mCase.bus_geo);
buffer = (Maxs - Mins) * 0.1;
Mins = Mins - buffer;
Maxs = Maxs + buffer;
xlim([Mins(1),Maxs(1)])
ylim([Mins(2),Maxs(2)])
end
currentBus = randi(nBusses);
% pick a random bus to start on (currentBus represents
% the index, not the ID number of a bus)
currentBusID = mCase.bus(currentBus,1);
%get the bus id associated with the current bus
if figures, %draw the first node
hold on;
plot(mCase.bus_geo(currentBus,1),...
mCase.bus_geo(currentBus,2), ’g.’, ’markersize’, 20);
text(mCase.bus_geo(currentBus,1)+5,...
mCase.bus_geo(currentBus,2)-5,...
sprintf(’%d’,currentBusID), ’Color’, ’g’);
hold off; pause(timeDelay * 6);
end
while(currentBus > 0),
if visitBus(currentBus),
%bus has been visited, find its network
% check if the branch’s network is different
% from currentNetwork
if currentNetwork ˜= busNetworks(currentBus),
%different network, so merge the two
currentNetwork = ...
mergeNetworks(busNetworks(currentBus),...
currentNetwork);
end
else% visit the bus
visitBus(currentBus) = true;
%mark current bus as visited
networks{currentNetwork} =...
[networks{currentNetwork} currentBus];
% Q: should this eventually be busID?
% A: yes, it should be converted at the end
busNetworks(currentBus) = currentNetwork;
%mark bus’s network number
if verbose,
fprintf(’\t%s\n’,printNetworks(networks)); end
end
if figures, hold on; %highlight explored node on map
scatter(mCase.bus_geo(currentBus,1),...
mCase.bus_geo(currentBus,2), ’r’, ’Linewidth’, 2);
text(mCase.bus_geo(currentBus,1)+5,...
mCase.bus_geo(currentBus,2)-5,...
sprintf(’%d’,currentBusID),...
’Color’, ’r’,’FontWeight’, ’bold’);
hold off; pause(timeDelay);
end
% Traverse to next bus
branches = mCase.branch(:,1) == currentBusID |...
mCase.branch(:,2) == currentBusID;
%get all branches connected to this bus - requires that
%we compare the bus IDs of each branch to the bus ID of
%the current bus
branchIndices = find(branches & ˜traversedBranch);
if ˜isempty(branchIndices),
% there is at least one untraversed
% branch leaving this node
branch = randi(length(branchIndices));
%pick a random branch
node = mCase.branch(branchIndices(branch),1:2);
nextBusID = node(node ˜=currentBusID);
%gets the id of the next bus to be travelled to
traversedBranch(branchIndices(branch)) = true;
%mark branch as traversed
if figures, %draw the branch
hold on;
branch_geo =...
mCase.branch_geo{branchIndices(branch)};
for i = 1:size(branch_geo,1)-1,
plot(branch_geo(i:i+1,1),...
branch_geo(i:i+1,2), ’r’, ’Linewidth’,2);
end
[x,y] = mplot.midpoint(branch_geo);
hold off; pause(timeDelay);
end %mark traversed path;
%move to next bus:
currentBus = find(mCase.bus(:,1) == nextBusID);
% get the index corresponding to bus
% with ID of nextBusID
currentBusID = mCase.bus(currentBus,1);
else % all branches have been traversed,
% so try to find an unvisited node
unVisited = find(˜visitBus); %get all unvisited busses
if isempty(unVisited), %all nodes have been visited
currentBus = -1; %causes while loop to exit
else
bus = randi(length(unVisited));
%pick a random index for a next bus
currentBus = unVisited(bus); %update bus
currentBusID = mCase.bus(currentBus,1);
%
networks = [networks {[]}];
currentNetwork = length(networks);
%set network number
if figures,
hold on;
plot(mCase.bus_geo(currentBus,1),...
mCase.bus_geo(currentBus,2),...
’g.’, ’markersize’, 20);
hold off; pause(timeDelay * 6);
end
end
end
end %end of while loop
%% Go over any untraversed branches
%make sure they don’t join two isolated networks
if verbose,
fprintf(’\n\nChecking remaining branches...\n’);
fprintf(’=================================\n’); end
remainingBranches = find(˜traversedBranch);
for branchNum = remainingBranches(:)’
if verbose,
fprintf(’\tChecking Branch %d.’, branchNum); end
node = mCase.branch(branchNum, 1:2);
%get nodes connected by branch
node(1) = find(mCase.bus(:,1) == node(1));
%convert to indexes
node(2) = find(mCase.bus(:,1) == node(2));
if busNetworks(node(1)) ˜= busNetworks(node(2)),
%bus networks are different
if verbose,
fprintf(’\n\t\tMerging networks %d and %d.\t’,...
busNetworks(node(1)),...
busNetworks(node(2)));
end;
mergeNetworks(busNetworks(node(1)),...
busNetworks(node(2)));
else
if verbose, fprintf(’\n’); end
end
%
if figures,
hold on;
branch_geo = mCase.branch_geo{branchNum};
for br = 1:size(branch_geo,1)-1,
p=plot(branch_geo(br:br+1,1),...
branch_geo(br:br+1,2), ’b’, ’Linewidth’,2);
set(p, ’Color’,[215,103,27]/256);
[x,y] = mplot.midpoint(branch_geo);
scatter(x,y,10,’bd’, ’fill’)
end
hold off; pause(timeDelay);
end
end
numIslands = length(networks);
oldNetworks = networks;
if verbose,
fprintf(’\n\nSummary:\n============\n’);
fprintf(’\tNumber of networks: %d\n’, numIslands);
end
%sort networks by largest-first
lengths = zeros(1, length(networks));
for i = 1:length(networks),
lengths(i) = length(networks{i}); end
[˜,indices] = sort(lengths, ’descend’);
%for each network, convert to sorted list of bus ids
networks = networks(indices);
for i = 1:length(networks),
networks{i} = mCase.bus(sort(networks{i}), 1);
end
function out = printNetworks(myNetworks)
%returns a string output of ’networks’
out = ’’;
for network = 1:length(myNetworks)
sublist = myNetworks{network};
sublist = sprintf(’%d ’, mCase.bus(sublist,1));
out = [...
out,...
[sprintf(’%d:[’,network),...
sublist(1:end-1), ’]’], ’ ’...
];
end
end
function currentNetwork = mergeNetworks(n1, n2)
%merge two networks into one.
newNetwork = min(n1, n2);
%keep the lower network number, discard the older one
oldNetwork = max(n1, n2);
%use fast_union_sorted, a better union operation
networks{newNetwork} = fast_union_sorted(...
sort(networks{newNetwork}),...
sort(networks{oldNetwork}));
%unite networks on lower-numbered network
networks = networks( 1:length(networks) ˜= oldNetwork);
%delete higher-numbered network
for i = 1:length(networks),
busNetworks(networks{i}) = i; end
%update bus network numberings
currentNetwork = newNetwork; %update network number
if verbose,
fprintf(’\t%s\n’,printNetworks(networks)); end
end
end %find(mCase, figure, verbose);
end
end
Appendix B
Source Code for Continuation Power Flow
The following sections contain the main source code for performing continuation power flow. This code
is based on the Matpower v4.0.1 implementation of CPF [21], but includes substantial modifications and
original code produced by the author.
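The predictor-corrector structure of the algorithm below can be illustrated with a minimal Python sketch (an analogue, not the MATLAB implementation): the toy equation V^2 - V + lambda/4 = 0 stands in for the power-flow equations, with its nose point at lambda = 1, V = 0.5. The function names and step-control constants here are illustrative only; the real code predicts with Lagrange polynomials or tangent steps and corrects with a full Newton power flow.

```python
def corrector(lam, v0, tol=1e-10):
    """Newton correction: solve f(V, lam) = V^2 - V + lam/4 = 0 for V
    at fixed lam, starting from the predicted voltage v0."""
    v = v0
    for _ in range(50):
        f = v * v - v + lam / 4.0
        df = 2.0 * v - 1.0
        if abs(df) < 1e-12:
            return None            # singular Jacobian analogue
        v -= f / df
        if abs(f) < tol:
            return v
    return None                    # correction failed to converge

def trace_pv_curve(step=0.1, min_step=1e-4):
    """Phase-1-style continuation: increase lambda, predict V by secant
    extrapolation from the last two corrected points, correct with
    Newton, and cut the step when the correction fails near the nose."""
    points = [(0.0, corrector(0.0, 1.0))]
    lam, v = points[-1]
    while step > min_step:
        lam_p = lam + step
        if len(points) >= 2:
            (l0, v0), (l1, v1) = points[-2], points[-1]
            v_pred = v1 + (v1 - v0) / (l1 - l0) * step
        else:
            v_pred = v
        v_new = corrector(lam_p, v_pred)
        if v_new is None or v_new < 0.5 - 1e-6:
            step *= 0.5            # step too large: cut and retry
            continue
        lam, v = lam_p, v_new
        points.append((lam, v))
    return points

pts = trace_pv_curve()
max_lambda = max(l for l, _ in pts)   # approaches the nose at lam = 1
```

The adaptive step cutting on failed corrections mirrors the stepSize reductions in the listing below; the real code additionally grows or shrinks the step based on the observed prediction error.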
B.1 Continuation Power Flow Algorithm
function [ max_lambda, predicted_list, corrected_list,...
combined_list, success, et...
] = cpf(casedata, participation, sigmaForLambda,...
sigmaForVoltage, verbose, plotting, do_phase3)
%CPF Run continuation power flow (CPF) solver.
% [INPUT PARAMETERS]
% loadvarloc: load variation location (in external bus numbering). Single
% bus supported so far.
% sigmaForLambda: stepsize for lambda
% sigmaForVoltage: stepsize for voltage
% [OUTPUT PARAMETERS]
% max_lambda: the lambda in p.u. w.r.t. baseMVA at (or near) the nose
% point of PV curve
% NOTE: the first column in return parameters ’predicted_list,
% corrected_list, combined_list’ is bus number; the last row is lambda.
% created by Rui Bo on 2007/11/12
% MATPOWER
% $Id: cpf.m,v 1.7 2010/04/26 19:45:26 ray Exp $
% by Rui Bo
% and Ray Zimmerman, PSERC Cornell
% Copyright (c) 1996-2010 by Power System Engineering Research Center
% (PSERC)
% Copyright (c) 2009-2010 by Rui Bo
%
% This file is part of MATPOWER.
% See http://www.pserc.cornell.edu/matpower/ for more info.
%
% MATPOWER is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published
% by the Free Software Foundation, either version 3 of the License,
% or (at your option) any later version.
%
% MATPOWER is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with MATPOWER. If not, see <http://www.gnu.org/licenses/>.
%
% Additional permission under GNU GPL version 3 section 7
%
% If you modify MATPOWER, or any covered work, to interface with
% other modules (such as MATLAB code and MEX-files) available in a
% MATLAB(R) or comparable environment containing parts covered
% under other licensing terms, the licensors of MATPOWER grant
% you additional permission to convey the resulting work.
if nargin == 0 % run test suite
[path, name, ext] = fileparts(which(sprintf(’%s.m’,mfilename)));
runFile = fullfile(path, ’test’, sprintf(’test_%s.m’, name));
fprintf(’Running unit-test for <%s%s> \n%s\n\n’,...
name,ext,’=============================’);
run(runFile)
return;
end
finished = false;
%% define named indices into bus, gen, branch matrices
BUS_I = 1;
PD = 3;
QD = 4;
VD = 8;
VA = 9;
GEN_BUS = 1;
VG = 6;
GEN_STATUS = 8;
GLOBAL_CONTBUS = 0; %for use by ’plotBusCurve’
lastwarn(’No Warning’);
%% assign default parameters
if nargin < 3
sigmaForLambda = 0.1; % stepsize for lambda
sigmaForVoltage = 0.025; % stepsize for voltage
end
if nargin < 5, verbose = 0; end
if nargin < 6, shouldIPlotEverything = false;
else shouldIPlotEverything = plotting; end
if nargin < 7, do_phase3 = true; end
boolStr = {’no’, ’yes’};
if verbose, fprintf(’CPF\n’); figure; end
if verbose,
fprintf(’\t[Info]:\tDo Phase 3?: %s\n’, boolStr{do_phase3+1}); end
mError = MException(’CPF:cpf’, ’cpf_error’);
%% options
max_iter = 1000; % depends on selection of stepsizes
%% ...we use PV curve slopes as the criteria for switching modes
slopeThresh_Phase1 = 0.5;
% PV curve slope threshold for voltage prediction-correction
%(with lambda increasing)
slopeThresh_Phase2 = 0.7;
% PV curve slope threshold for lambda prediction-correction
%% load the case & convert to internal bus numbering
[baseMVA, busE, genE, branchE] = loadcase(casedata);
[numBuses, ˜] = size(busE);
if nargin < 2 %no participation factors so keep the current load profile
participation = busE(:,PD)./sum(busE(:,PD));
else
participation = participation(:);%(:) forces column vector
if length(participation) ˜= numBuses,
%improper number of participations given
if length(participation) == 1 && participation > 0,
%assume bus number is specified instead
participation = (1:numBuses)’==participation;
else
if verbose,
fprintf(’\t[Info]\t’);
fprintf(’Participation Factors improperly specified.’);
fprintf(’\n\t\t\tKeeping Current Loading Profile.\n’);
end
participation = busE(:,PD)./sum(busE(:,PD));
end
end
end
%i2e is simply the bus ids stored in casedata.bus(:,1), so V_corr can be
%outputted as is with rows corresponding to entries in casedata.bus
[i2e, bus, gen, branch] = ext2int(busE, genE, branchE);
e2i = sparse(max(i2e), 1);
e2i(i2e) = (1:size(bus, 1))’;
participation_i = participation(e2i(i2e));
participation_i = participation_i ./ sum(participation_i); %normalize
%specify whether or not to use lagrange polynomial interpolation when
%possible; default is true
useLagrange=true;
useAdaptive=true;
strs = {’no’, ’yes’};
if verbose, fprintf(’\t[Info]:\tLagrange: %s\n’, strs{useLagrange+1}); end
if verbose,
fprintf(’\t[Info]:\tAdaptive Step Size: %s\n’, strs{useAdaptive+1});
end
%% get bus index lists of each type of bus
[ref, pv, pq] = bustypes(bus, gen);
%define a default result to return if we fail out before running any
%computations
defaultResults.max_lambda = sum( bus(:,PD) )/ baseMVA;
% set so that power calculation will return sum(bus(:,PD));
defaultResults.V_corr = [];
defaultResults.V_pr = [];
defaultResults.lambda_corr = [];
defaultResults.lambda_pr =[];
defaultResults.success=false;
defaultResults.time = 0;
%% GO THROUGH CONDITIONS FOR ABORTING
if nnz(participation_i) < 2,
%in this situation voltage won’t droop and we get a bad computation.
%instead return a lambda that reports the current power as the maximum.
%This understates the maximum loadability, but hopefully not by too
%much. Requires a reasonable starting power value.
defaultResults.success = true;
max_lambda = defaultResults;
return;
end
if isempty(ref), %if no reference bus is returned, throw an error
mError = addCause(mError,...
MException('MATPOWER:bustypes', 'No ref bus returned'));
throw(mError);
end
if isempty(pq) && isempty(pv),
mError = addCause(mError, MException('MATPOWER:bustypes',...
'No PV or PQ buses returned; network is trivial'));
throw(mError);
end
if isempty(pq),
%if there is no PQ bus, all voltages are set
% by generators and we cannot run CPF.
defaultResults.success = true;
max_lambda = defaultResults;
return;
else
continuationBus = pq(1);
end
if any(isnan(participation_i)), %could happen if no busses had loads
participation_i = zeros(length(participation_i), 1);
participation_i(pq) = 1/numel(participation_i(pq));
end
%% generator info
on = find(gen(:, GEN_STATUS) > 0); %% which generators are on?
gbus = gen(on, GEN_BUS); %% what buses are they at?
%% form Ybus matrix
[Ybus, ˜, ˜] = makeYbus(baseMVA, bus, branch);
if det(Ybus) == 0
mError = addCause(mError, MException(’MATPOWER:makeYBus’,...
’Ybus is singular’));
end
%% initialize parameters
flag_lambdaIncrease = true;
% flag indicating lambda is increasing or decreasing
%get all QP ratios
initQPratio = bus(:,QD)./bus(:,PD);
if any(isnan(initQPratio)),
if verbose > 1,
fprintf(’\t[Warning]:\tLoad real power’);
fprintf(’ at bus %d is 0. ’,find(isnan(initQPratio)));
fprintf(’Q/P ratio will be fixed at 0.\n’); end
initQPratio(isnan(initQPratio)) = 0;
end
%%------------------------------------------------
% do cpf prediction-correction iterations
%%------------------------------------------------
t0 = clock;
nPoints = 0;
% V_pr=[];
% lambda_pr = [];
% V_corr = [];
% lambda_corr = [];
V_pr = zeros(size(bus,1), 400);
V_corr = zeros(size(bus,1),400);
lambda_pr = zeros(1,400);
lambda_corr = zeros(1,400);
nSteps = zeros(1,400);
stepSizes = zeros(1,400);
%% do voltage correction (ie, power flow) to get initial voltage profile
if ˜finished,
% first try solving for a flat start: 0 power, all voltages 1 p.u.
lambda0 = 0;
lambda = lambda0;
Vm = ones(size(bus, 1), 1); %% flat start
Va = bus(ref(1), VA) * Vm;
V = Vm .* exp(1i* pi/180 * Va);
V(gbus) = gen(on, VG) ./ abs(V(gbus)).* V(gbus);
lambda_predicted = lambda;
V_predicted = V;
[V, lambda, success, iters] = cpf_correctVoltage(baseMVA,...
bus, gen, Ybus, V_predicted, lambda_predicted,...
initQPratio, participation_i);
if ˜success,
% If flat start fails, try with voltages, angles and powers from
% input case
Vm = bus(:,VD); %get bus voltages from case
Va = bus(:,VA); %get bus angles from case
V = Vm .* exp(1i* pi/180 * Va);
V(gbus) = gen(on, VG) ./ abs(V(gbus)).* V(gbus);
lambda = sum(bus(:,PD)) / baseMVA;
% get lambda value that will give bus(:,PD) in
% cpf_correctVoltage(..) below
lambda_predicted = lambda;
[V, lambda, success, iters] = ...
cpf_correctVoltage(baseMVA, bus, gen, Ybus, V_predicted,...
lambda_predicted, initQPratio, participation_i);
end
if success == false,
mError = addCause(mError, MException(’CPF:correctVoltageError’,...
’Could not solve for initial point’));
throw(mError);
end
if any(isnan(V))
mError = addCause(mError,MException(’CPF:correctVoltageError’,...
’Generating initial voltage profile’));
mError = addCause(mError,MException(’CPF:correctVoltageError’,...
[’NaN bus voltage at ’, mat2str(i2e(isnan(V)))]));
throw(mError);
end
stepSize = 1;
logStepResults();
nPoints = nPoints + 1;
end
%% --- Start Phase 1: voltage prediction-correction (lambda increasing)
if verbose > 0,
fprintf(’Start Phase 1: voltage prediction-correction’);
fprintf(’(lambda increasing).\n’);
end
lagrange_order = 6;
%parametrize step size for Phase 1
minStepSize = 0.01;
maxStepSize = 200;
stepSize = sigmaForLambda;
% stepSize = 10;
if useAdaptive,
stepSize = min(max(stepSize, minStepSize),maxStepSize);
end
function y= mean_log(x)
y = log(x./mean(x));
end
function out = check_stepSizes(thresh)
if nargin < 1, thresh = 2; end
out = any( mean_log(abs(diff(...
[stepSizes(max(1,nPoints-lagrange_order+1):nPoints), stepSize])))...
> thresh);
end
i = 0; j=0; k=0; %initialize counters for each phase to zero
phase1 = true; phase2 = false; phase3 = false;
while i < max_iter && ˜finished
i = i + 1; % update iteration counter
% save good data
V_saved = V;
lambda_saved = lambda;
% do voltage prediction to find predicted point (predicting voltage)
if ~useLagrange ||...
(nPoints<2 ||...
check_stepSizes() ||...
slope * (-1)^(~flag_lambdaIncrease) < 1e-10),
%fallback to first-order approximation
[V_predicted, lambda_predicted, ˜] = ...
cpf_predict(Ybus, ref, pv, pq, V, lambda, stepSize, 1,...
initQPratio, participation_i, flag_lambdaIncrease);
else %if we have enough points, use lagrange polynomial
[V_predicted, lambda_predicted] = ...
cpf_predict_voltage(V_corr(:,1:nPoints),...
lambda_corr(1:nPoints), lambda,...
stepSize, ref, pv, pq,...
flag_lambdaIncrease, lagrange_order);
end
if verbose && shouldIPlotEverything,
hold on;
plot(lambda_predicted,...
abs(V_predicted(21)), ’go’);
hold off;
xlims = xlim;
ylims = ylim;
xlim([0,6.4]);
ylim([0.965,1.005]);
if nPoints > 1,
hold on
A = [lambda_corr(nPoints) lambda_predicted];
B = [abs(V_corr(21, nPoints)), abs(V_predicted(21))];
m = (B(2) - B(1)) / (A(2) - A(1));
n = B(1) - m*A(1);
y1 = m*xlims(1) + n;
y2 = m*xlims(2) + n;
plot([xlims(1), xlims(2)],[y1,y2], ’r’);
hold off
end
end
%% check prediction to make sure step is not unreasonably big
error_predicted = max(abs(V_predicted- V));
if useAdaptive && error_predicted > maxStepSize && ˜success
% -> this is inappropriate since ’success’ would be coming
% from previous correction step
newStepSize = 0.8*stepSize;
%cut down the step size to reduce the prediction error
if newStepSize > minStepSize,
if verbose,
fprintf('\t\tPrediction step too large');
fprintf(' (voltage change of %.4f).',error_predicted);
fprintf(' Step Size reduced from ');
fprintf('%.5f to %.5f\n', stepSize, newStepSize);
end
stepSize = newStepSize;
i = i-1;
continue;
end
end
%% do voltage correction to find corrected point
[V, lambda, success, iters] = ...
cpf_correctVoltage(baseMVA, bus, gen, Ybus, V_predicted,...
lambda_predicted, initQPratio, participation_i);
%
% if i >= 2,
% demonstratePrediction();
% fprintf('wait');
% end
% if voltage correction fails, reduce step size and try again
if useAdaptive && success == false && stepSize > minStepSize,
newStepSize = stepSize * 0.3;
if newStepSize > minStepSize,
if verbose,
fprintf('\t\tCorrection step didn''t converge;');
fprintf(' changed stepsize from ');
fprintf('%.5f to: %.5f\n', stepSize, newStepSize);
end
stepSize = newStepSize;
i = i-1;
V = V_saved;
lambda = lambda_saved;
continue;
end
end
%% calculate slope (dP/dLambda) at current point
[slope, ~] = max(abs(V-V_saved) ./ (lambda-lambda_saved));
%calculate maximum slope at current point.
%% adjust step size to improve error outcome next time
error = abs(V-V_predicted)./abs(V);
error_order = log(mean(error)/ 0.0005); %desired error (ballpark)
% error_order = log(mean(error)/ 0.0001);
% error_order = log(mean(error)/ 0.001);
% error_order = log(mean(error)/ 0.01);
if useAdaptive && abs(error_order) > 0.5 && mean(error)>0,
% adjust step size to improve error outcome
newStepSize = stepSize - 0.07*error_order;
%adjust step size according to how far we are from desired err
newStepSize = max(min(newStepSize,maxStepSize),minStepSize);
%clamp step size
if verbose,
fprintf('\t\tmean prediction error: %.15f. ', mean(error));
fprintf('changed stepsize from ');
fprintf('%.2f to %.2f\n', stepSize, newStepSize);
end
stepSize = newStepSize;
end
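The adaptive block above is a proportional controller on a log scale: `error_order = log(mean(error)/target)` measures how far the mean prediction error is from the desired ballpark, and the step size is nudged by `-0.07*error_order` before clamping. That control law can be sketched in Python (the helper name is ours; `target`, `gain`, and the deadband mirror the constants in the code above, while the clamp limits are illustrative):

```python
import math

def adapt_step(step, mean_error, target=5e-4, gain=0.07,
               min_step=1e-7, max_step=2.0, deadband=0.5):
    """Proportional step-size control on a log scale: shrink the step when
    the prediction error overshoots the target, grow it when the error
    undershoots, and clamp the result to [min_step, max_step]."""
    if mean_error <= 0:
        return step
    error_order = math.log(mean_error / target)
    if abs(error_order) <= deadband:
        return step  # close enough to the target error: leave the step alone
    return min(max(step - gain * error_order, min_step), max_step)
```

Working in `log(error/target)` means a tenfold error overshoot and a tenfold undershoot produce equal and opposite adjustments.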
%dampen step size in cases where slope is reduced too quickly.
if useAdaptive && success && nPoints < 2 && slope > slopeThresh_Phase1,
newStepSize = stepSize * 0.01;
if newStepSize > 0.000001,
if verbose,
fprintf('\t\tArrived at Phase 2 too quickly;');
fprintf(' changed stepsize from %.5f to: %.5f\n',...
stepSize, newStepSize); end
stepSize = newStepSize;
end
end
%if error is very small, we have a trivial system.
if useAdaptive && mean(error) < 1e-15,
newStepSize = stepSize* 1.2;
newStepSize = max( min(newStepSize,maxStepSize),minStepSize);
%clamp step size
if verbose,
fprintf('\t\tmean prediction error: %.15f.', mean(error));
fprintf(' changed stepsize from %.2f to %.2f\n',...
stepSize, newStepSize);
end
stepSize = newStepSize;
end
if success % if converged we can save the point and do verbose
logStepResults();
if verbose && shouldIPlotEverything,
% plotBusCurve(continuationBus, nPoints+1); end
plotBusCurve(21, nPoints+1); end
nPoints = nPoints + 1;
end
% use PV curve slopes as the criteria for switching modes:
if abs(slope) >= slopeThresh_Phase1 || success == false
% Approaching nose area of PV curve, or correction step fails
% restore good data point if convergence failed
if success == false
V = V_saved;
lambda = lambda_saved;
i = i-1;
end
if verbose > 0
if ~success,
if ~isempty(strfind(lastwarn, 'singular')),
fprintf('\t[Info]:\t');
fprintf('Matrix is singular. Aborting Correction.\n');
lastwarn('No error');
break;
else
fprintf('\t[Info]:\tLambda correction fails.\n');
end
else
fprintf('\t[Info]:\tApproaching nose area of PV curve.\n');
end
end
break;
end
end
phase1 = false;
if verbose > 0
fprintf('\t[Info]:\t%d data points contained in phase 1.\n', i);
end
if i < 1,
mError = addCause(mError, MException('CPF:Phase1',...
'no points in phase 1'));
end
%% --- Switch to Phase 2: lambda prediction-correction (voltage decreasing)
if verbose > 0
fprintf('Switch to Phase 2: ');
fprintf('lambda prediction-correction (voltage decreasing).\n');
end
p2_avoidLagrange = false;
maxStepSize = 0.04;
minStepSize = 0.0000001;
continuationBus = pickBus(useLagrange);
if useAdaptive,
try %try the previous voltage step on the continuation bus to start with.
if stepSize < maxStepSize,
%check that step size hasn't been reduced already
stepSize = stepSize; %keep the already-reduced step size
else
stepSize = abs(V(continuationBus) - V_saved(continuationBus));
end
catch
%if this fails (e.g. V_saved was never defined) revert to preset value
stepSize = sigmaForVoltage;
end
stepSize = min(max(stepSize, minStepSize),maxStepSize);
else
stepSize = sigmaForVoltage;
end
j = 0;
phase2 = true;
while j < max_iter && ~finished
%% update iteration counter
j = j + 1;
% save good data
V_saved = V;
lambda_saved = lambda;
%% do lambda prediction to find predicted point (predicting lambda)
if ~useLagrange ||...
(~useAdaptive && j <= lagrange_order) ||...
p2_avoidLagrange ||...
(nPoints<4 || check_stepSizes() ||...
slope * (-1)^~flag_lambdaIncrease < 1e-10),
[V_predicted, lambda_predicted, J] = ...
cpf_predict(Ybus, ref, pv, pq, V, lambda, stepSize,...
[2, continuationBus], initQPratio,...
participation_i,flag_lambdaIncrease);
else
[V_predicted, lambda_predicted] = ...
cpf_predict_lambda(V_corr(:,1:nPoints),...
lambda_corr(1:nPoints), lambda_saved, stepSize,...
continuationBus, ref, pv, pq, lagrange_order);
end
if verbose && shouldIPlotEverything,
hold on;
plot(lambda_predicted,...
abs(V_predicted(continuationBus)), 'go');
hold off;
end
if (useLagrange && ~p2_avoidLagrange) && any(isnan(V_predicted))
p2_avoidLagrange = true;
if verbose, fprintf('\t\tAbandoning lagrange\n'); end
j = j-1; continue;
end
if useAdaptive &&...
(abs(lambda_predicted - lambda) > maxStepSize ||...
lambda_predicted < 0),
newStepSize = stepSize * 0.8;
if abs(lambda_predicted - lambda) > 10*maxStepSize,
p2_avoidLagrange = true;
end
if newStepSize > minStepSize,
if verbose,
fprintf('\t\tPrediction step too large (lambda change of');
fprintf(' %.4f). Step Size reduced from %.7f to %.7f\n',...
abs(lambda_predicted - lambda),...
stepSize, newStepSize); end
stepSize = newStepSize;
j = j-1;
continue;
end
end
%% do lambda correction to find corrected point
Vm_assigned = abs(V_predicted);
[V, lambda, success, iters] = cpf_correctLambda(baseMVA, bus, gen,...
Ybus, Vm_assigned, V_predicted, lambda_predicted, initQPratio,...
participation_i, ref, pv, pq, continuationBus);
if verbose && shouldIPlotEverything
hold on; plot(lambda, abs(V(continuationBus)), 'b.'); hold off;
end
if ~success,
fprintf('fail');
end
if useAdaptive && abs(lambda - lambda_saved) > maxStepSize,
if abs(lambda - lambda_saved) > maxStepSize * 10,
p2_avoidLagrange = true;
end
% ...otherwise, reduce the step size, discard and try again
newStepSize = stepSize * 0.2;
if newStepSize > minStepSize,
if verbose,
fprintf('\t\tcontinuationBus = %d', continuationBus);
fprintf('\t\tLambda step too big;');
fprintf(' lambda step = %.3f.', lambda-lambda_saved);
fprintf(' Step size reduced from %.7f to %.7f\n',...
stepSize, newStepSize); end
stepSize = newStepSize;
V = V_saved;
lambda = lambda_saved;
j = j-1;
continue;
end
end
%Here we check the change in Voltage if correction did not converge; if
%the step is larger than the minimum then we can reduce the step size,
%discard the sample and try again.
mean_step = mean( abs(V_predicted-V_saved));
prediction_error = mean(abs(V-V_predicted)./abs(V));
error_order = log(prediction_error/0.000005);
% error_order = log(prediction_error/0.000001);
% error_order = log(prediction_error/0.00001);
if useAdaptive && ( (mean_step > 0.00001 || stepSize > 0) && ~success)
% if we jumped too far and correction step didn’t converge
newStepSize = stepSize * 0.4;
if newStepSize >= minStepSize, % if we are not below min step-size
% threshold go back and try again
% with new stepSize
if verbose,
fprintf('\t\tDid not converge; voltage step: ');
fprintf('%f pu. Step Size reduced from', mean_step);
fprintf(' %.7f to %.7f\n', stepSize, newStepSize); end
stepSize = newStepSize;
V = V_saved;
lambda= lambda_saved;
j = j-1;
continue;
end
end
if useAdaptive && abs(error_order) > 0.75 && prediction_error>0,
% if abs(error_order) > 1.5 && prediction_error>0,
%if we haven't just dropped the stepSize, consider changing it to
%get a better error outcome. this allows us to increase our steps
%to go faster or reduce our steps to avoid non-convergence, which
%is time consuming.
newStepSize = stepSize*...
(1 + 0.15*(error_order < 1) - 0.15*(error_order>1));
newStepSize = max( min(newStepSize,maxStepSize),minStepSize);
%clamp step size
if verbose && newStepSize ~= stepSize,
fprintf('\t\tAdjusting step size from');
fprintf(' %.7f to %.7f; mean prediction error: %.15f.\n',...
stepSize, newStepSize, prediction_error); end
stepSize = newStepSize;
end
continuationBus = pickBus(useLagrange);
if success %if correction step converged, log values and do verbosity
logStepResults();
if verbose && shouldIPlotEverything,
plotBusCurve(continuationBus, nPoints+1); end
nPoints = nPoints + 1;
end
%% use PV curve slopes as the criteria for switching modes:
if ~success || (slope < 0 && slope > -slopeThresh_Phase2),
if ~success,
% restore good data
V = V_saved;
lambda = lambda_saved;
j = j-1;
end
%% ---change to voltage prediction-correction (lambda decreasing)
if verbose > 0
if ~success,
if ~isempty(strfind(lastwarn, 'singular'))
fprintf('\t[Info]:\t');
fprintf('Matrix is singular. Aborting Correction.\n');
lastwarn('No error'); break;
else
fprintf('\t[Info]:\tLambda correction fails.\n');
end
else
fprintf('\t[Info]:\tLeaving nose area of PV curve.\n');
end
end
break;
end
end
phase2 = false;
if ~success && lambda_corr(nPoints) > lambda_corr(nPoints-1),
%failed out before reaching the nose
if verbose,
fprintf('\t\t[Info]:\tFailed out of Phase 2 before ');
fprintf('reaching PV nose. Aborting...\n'); end
mError = MException('CPF:convergeError',...
'Phase 2 voltage correction failed before reaching nose.');
throw(mError);
end
if verbose > 0
fprintf('\t[Info]:\t%d data points contained in phase 2.\n', j);
end
function cntBus = pickBus(avoidPV)
% Describes the process for picking a continuation Bus for the next
% iteration
if nargin < 1, avoidPV = true; end
%how to pick the continuation bus during Phase 2:
% 1. calculate slope (dP/dLambda) at current point
mSlopes = abs(V-V_saved)./(lambda-lambda_saved);
% 2. check if we have passed the peak of the PV curve
if flag_lambdaIncrease && any(mSlopes < 0),
flag_lambdaIncrease = false; end
% 3. choose the buses that could be continuation buses.
if avoidPV, mBuses = pq;
else mBuses = [pv; pq];
end
%try to eliminate buses with 0 real slope
newMBuses = setdiff( mBuses, find( (abs(V)-abs(V_saved)) == 0));
if ˜isempty(newMBuses), mBuses = newMBuses; end
[~, ind] = max(mSlopes(mBuses) .* (-1)^~flag_lambdaIncrease);
cntBus = mBuses(ind);
slope = mSlopes(cntBus);
end
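`pickBus` chooses the next continuation bus as the one whose voltage magnitude changed fastest over the last step, flipping the comparison sign once λ stops increasing. The selection rule can be sketched in Python (names are ours; `dV` holds per-bus voltage-magnitude changes and `dlambda` is the signed change in λ):

```python
def pick_continuation_bus(dV, dlambda, candidate_buses, lambda_increasing=True):
    """Pick the bus whose voltage magnitude changes fastest with lambda
    (cf. pickBus): a steep dV/dlambda means that bus best parametrizes the
    PV curve near the nose. Buses with zero voltage change are skipped
    when at least one bus moved."""
    slopes = {b: dV[b] / dlambda for b in candidate_buses}
    moving = [b for b in candidate_buses if dV[b] != 0] or list(candidate_buses)
    # dV is a magnitude, so the slope's sign comes from dlambda; the sign
    # flip below mirrors the (-1)^~flag_lambdaIncrease trick in the source
    sign = 1.0 if lambda_increasing else -1.0
    return max(moving, key=lambda b: sign * slopes[b])
```

Restricting `candidate_buses` to the PQ buses corresponds to calling `pickBus(true)` in the MATLAB code.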
%% --- Switch to Phase 3: voltage prediction-correction (lambda decreasing)
if verbose > 0
fprintf('Switch to Phase 3: ');
fprintf('voltage prediction-correction (lambda decreasing).\n');
end
% set lambda to be decreasing
flag_lambdaIncrease = false;
%set step size for Phase 3
if useAdaptive,
minStepSize = 0.2*stepSize;
maxStepSize = 2;
try
stepSize = lambda_saved - lambda;
catch
stepSize = stepSize;
end
stepSize = min(max(stepSize, minStepSize),maxStepSize);
else
stepSize = sigmaForLambda;
end
if ˜do_phase3, finished = true; end
k = 0;
phase3 = true;
while k < max_iter && ~finished
%% update iteration counter
k = k + 1;
%% store V and lambda
V_saved = V;
lambda_saved = lambda;
if ~useLagrange ||...
(~useAdaptive && k < lagrange_order) ||...
(nPoints<4 || check_stepSizes() ||...
slope * (-1)^~flag_lambdaIncrease < 1e-10),
% do voltage prediction to find next point (predicting voltage)
[V_predicted, lambda_predicted, ~] = cpf_predict(Ybus, ref, pv,...
pq, V, lambda, stepSize, 1, initQPratio, participation_i,...
flag_lambdaIncrease);
else %if we have enough points, use lagrange polynomial
[V_predicted, lambda_predicted] = cpf_predict_voltage(...
V_corr(:,1:nPoints), lambda_corr(1:nPoints), lambda,...
stepSize, ref, pv, pq, flag_lambdaIncrease, lagrange_order);
end
%% do voltage correction to find corrected point
[V, lambda, success, iters] = cpf_correctVoltage(baseMVA, bus, gen,...
Ybus, V_predicted, lambda_predicted, initQPratio, participation_i);
mean_step = mean( abs(V-V_saved));
if useAdaptive && ( mean_step > 0.0001 && ~success)
% if we jumped too far and correction step didn’t converge
newStepSize = stepSize * 0.4;
if newStepSize > minStepSize, %if we are not below min step-size
% threshold go back and try again with new stepSize
if verbose,
fprintf('\t\tDid not converge; voltage step:');
fprintf(' %f pu. Step Size reduced', mean_step);
fprintf(' from %.5f to %.5f\n', stepSize, newStepSize); end
stepSize = newStepSize;
V = V_saved;
lambda= lambda_saved;
k = k-1;
continue;
end
end
prediction_error = mean( abs( V-V_predicted));
error_order = log(prediction_error/0.001);
% error_order = log(mean(error)/ 0.005);
if useAdaptive && abs(error_order) > 0.8 && prediction_error>0,
newStepSize = stepSize - 0.03*error_order; %adjust step size
newStepSize = max( min(newStepSize,maxStepSize),minStepSize);
%clamp step size
if verbose && newStepSize ~= stepSize,
fprintf('\t\tAdjusting step size');
fprintf(' from %.6f to %.6f; ', stepSize, newStepSize);
fprintf('mean prediction error: %.15f.\n', prediction_error); end
stepSize = newStepSize;
elseif useAdaptive && mean_step > 0 && prediction_error == 0,
%we took a step but prediction was dead on
newStepSize = stepSize * 2;
newStepSize = max( min(newStepSize,maxStepSize),minStepSize);
%clamp step size
if verbose && newStepSize ~= stepSize,
fprintf('\t\tAdjusting step size');
fprintf(' from %.6f to %.6f; mean', stepSize, newStepSize);
fprintf(' prediction error: %.15f.\n', prediction_error); end
stepSize = newStepSize;
end
if lambda < 0 % lambda is less than 0, then stop CPF simulation
if verbose > 0,
fprintf('\t[Info]:\t');
fprintf('lambda is less than 0.\n\t\t\tCPF finished.\n'); end
k = k-1;
break;
end
if success,
logStepResults()
if verbose && shouldIPlotEverything,
plotBusCurve(continuationBus, nPoints+1); end
nPoints = nPoints + 1;
end
if ~success % voltage correction step fails.
V = V_saved;
lambda = lambda_saved;
k = k-1;
if verbose > 0
if ~isempty(strfind(lastwarn, 'singular'))
fprintf('\t[Info]:\t');
fprintf('Matrix is singular. Aborting Correction.\n');
lastwarn('No error');
break;
else
fprintf('\t[Info]:\tVoltage correction step fails.\n');
end
end
break;
end
end
phase3=false;
if verbose > 0,
fprintf('\t[Info]:\t%d data points contained in phase 3.\n', k); end
%% Get the last point (Lambda == 0)
if success && do_phase3, %assuming we didn’t fail out, solve for lambda = 0
[V_predicted, lambda_predicted, ~] = cpf_predict(Ybus, ref, pv, pq,...
V, lambda_saved, lambda_saved, 1, initQPratio, participation_i,...
flag_lambdaIncrease);
[V, lambda, success, iters] = cpf_correctVoltage(baseMVA, bus, gen,...
Ybus, V_predicted, lambda_predicted, initQPratio, participation_i);
if success,
logStepResults()
if shouldIPlotEverything,
plotBusCurve(continuationBus, nPoints+1); end
nPoints = nPoints + 1;
end
end
if verbose > 0,
fprintf('\n\t[Info]:\t%d data points total.\n', nPoints); end
V_corr = V_corr(:,1:nPoints);
lambda_corr = lambda_corr(1:nPoints);
V_pr = V_pr(:,1:nPoints);
lambda_pr = lambda_pr(1:nPoints);
nSteps = nSteps(1:nPoints);
stepSizes = stepSizes(1:nPoints);
max_lambda = max(lambda_corr);
if lambda <= max_lambda,
success = true;
end
if shouldIPlotEverything,
plotBusCurve(continuationBus);
% myaa();
figure;
hold on;
plot(lambda_corr, abs(V_corr(pq,:)));
maxL = plot([max_lambda, max_lambda],...
ylim,'LineStyle', '--','Color',[0.8,0.8,0.8]);
ticks = get(gca, 'XTick');
ticks = ticks(abs(ticks - max_lambda) > 0.5);
ticks = sort(unique([ticks round(max_lambda*1000)/1000]));
set(gca, 'XTick', ticks);
uistack(maxL, 'bottom');
% uistack(mText, 'bottom');
hold off;
title('Power-Voltage curves for all PQ buses.');
ylabel('Voltage (p.u.)')
xlabel('Power (lambda load scaling factor)');
% ylims = ylim;
figure;
%%
plot([lambda_corr(1),...
lambda_corr(1) + cumsum(abs(diff(lambda_corr)))],...
real(V_corr(continuationBus,:)), 'o')
hold on;
xpos = lambda_corr(1) + cumsum(abs(diff(lambda_corr)));
slopes = real(diff(V_corr(continuationBus,:))) ./ diff(lambda_corr);
plot(xpos, slopes, 'r')
hold off;
end
et = etime(clock, t0);
if verbose > 0,
fprintf('\t[Info]:\tMax Lambda Value: %.4f\n', max_lambda);
fprintf('\t[Info]:\tAverage step size: %.6f\n', mean(stepSizes));
fprintf('\t[Info]:\tAverage # iterations: %.2f\n', mean(nSteps));
fprintf('\t[Info]:\tCompletion time: %.3f seconds.\n', et);
end
if nargout > 1, % LEGACY create predicted, corrected, combined lists
predicted_list = [ [bus(:,1); 0] [V_pr; lambda_pr]];
corrected_list = [ [bus(:,1); 0] [V_corr; lambda_corr]];
combined_list = [bus(:,1); 0];
combined_list(:,(1:size(V_corr,2))*2) = [V_corr; lambda_corr];
combined_list(:,(1:size(V_pr,2)-1)*2+1) ...
= [V_pr(:,2:end); lambda_pr(2:end)];
end
if nargout == 1,
results.max_lambda = max_lambda;
results.V_pr = V_pr;
results.lambda_pr = lambda_pr;
results.V_corr = V_corr;
results.lambda_corr = lambda_corr;
results.success = success;
results.time = et;
max_lambda = results; %return a single struct
end
function logStepResults()
% quick helper for logging the results of an iteration, which happens the
% same way in each phase.
V_pr(:,nPoints+1) = V_predicted;
lambda_pr(nPoints+1) = lambda_predicted;
V_corr(:,nPoints+1) = V;
lambda_corr(:,nPoints+1) = lambda;
nSteps(:,nPoints+1) = iters;
stepSizes(:,nPoints+1) = stepSize;
end
function plotBusCurve(bus, mIndex)
% This function creates a pretty plot of prediction/correction up
% to the current point, for the bus specified, including colour
% coding of phases in CPF
if nargin<2,
mIndex = nPoints;
end
if bus == GLOBAL_CONTBUS,
%if its the same bus as last time, check resizing of window.
xlims = xlim;
ylims = ylim;
xlims(1) = min(xlims(1),...
lambda_corr(mIndex) - (xlims(2) - lambda_corr(mIndex)) * 0.1);
xlims(2) = max(xlims(2),...
lambda_corr(mIndex) + (lambda_corr(mIndex) - xlims(1)) * 0.1);
ylims(1) = min(ylims(1),...
abs(V_corr(bus,mIndex))-(ylims(2)-abs(V_corr(bus,mIndex)))*0.1);
ylims(2) = max(ylims(2),...
abs(V_corr(bus,mIndex))+(abs(V_corr(bus,mIndex))-ylims(1))*0.1);
end
markerSize = 8;
pred = plot(lambda_pr(1:1+i+j+k), abs(V_pr(bus,1:1+i+j+k)),...
'o', 'Color', [0.9137,0.4275,0.5451], 'LineWidth', 2);
hold on;
%plot phase 3
p3=plot(lambda_corr(1+i+j:1+i+j+k), abs(V_corr(bus, 1+i+j:1+i+j+k)),...
'.-b', 'markers',markerSize, 'Color',[0.2510 0.5412 0.8235]);
%plot phase 2
p2=plot(lambda_corr(1+i:1+i+j), abs(V_corr(bus, 1+i:1+i+j)),...
'.-g', 'markers', markerSize, 'Color', [0.3490 0.7765 0.4078]);
%plot phase 1
p1=plot(lambda_corr(1:1+i), abs(V_corr(bus,1:i+1)),...
'.-b', 'markers', markerSize, 'Color', [0.2510 0.5412 0.8235]);
%plot initial point
st=plot(lambda_corr(1), abs(V_corr(bus,1)),...
'.-k', 'markers', markerSize);
if 1+i+j+k < nPoints,
plot( lambda_pr(nPoints), abs(V_pr(bus,nPoints)),...
'o', 'Color', [0.9137,0.4275,0.5451],'LineWidth', 2)
en=plot(lambda_corr(end-1:end), abs(V_corr(bus, end-1:end)),...
'.-k', 'markers', markerSize);
end
% title(sprintf('Power-Voltage curve for bus %d.', continuationBus));
ylabel('Voltage (p.u.)')
xlabel('Power (lambda load scaling factor)');
ml = max(lambda_corr);
ticks = get(gca, 'XTick');
ticks = ticks(abs(ticks - ml) > 0.25);
ticks = sort(unique([ticks round(ml*1000)/1000]));
set(gca, 'XTick', ticks);
hold on;
mLine=plot( [ml, ml], ylim,...
'LineStyle', '--', 'Color', [0.8,0.8,0.8]);
uistack(mLine, 'bottom');
hold off;
legend([pred, p1,p2, mLine],...
{'Predicted Values', 'Lambda Continuation',...
'Voltage Continuation', 'Max Lambda'})
if bus == GLOBAL_CONTBUS,
xlim(xlims);
ylim(ylims);
end
GLOBAL_CONTBUS = bus;
a = 1;
% set background to transparent
% set(gca, 'xcolor', [0 0 0],...
% 'ycolor', [0 0 0],...
% 'color', 'none');
%set background transparent, for making high quality figures
% set(gcf, 'color', 'none', 'inverthardcopy', 'off');
end
function demonstratePrediction(startPt, endPt)
% Use this function to demonstrate the difference between lagrange
% versus linear approximation during the prediction step.
%
% This function can be used by calling 'demonstratePrediction()' after
% i reaches a value high enough to do meaningful lagrange predictions
% (4 or greater). Call it after finding the correction point as
% follows:
%
%
% [V, lambda] = cpf_correct(...);
%
% if i >= 4,
% demonstratePrediction();
% %equivalent to:
% % demonstratePrediction(1,i);
% end
if nargin < 1,
startPt = 1;
end
if nargin < 2,
endPt = nPoints;
end
pts_to_take = startPt:endPt;
%plot previous points
prev_pr = scatter(lambda_pr(pts_to_take),...
abs(V_pr(continuationBus,pts_to_take)),...
'r', 'MarkerFaceColor', 'r');
hold on;
prev_corr = plot( lambda_corr(pts_to_take),...
abs(V_corr(continuationBus,pts_to_take)),...
'b.-', 'LineWidth', 1); hold off;
%define the range over which to compute predictions
sigmaRange = linspace(...
max( -1.2*stepSize, -lambda_saved), 1.2*stepSize, 4000);
times = zeros(1, length(sigmaRange));
%get linear predictions
V_linear = zeros(1, length(sigmaRange));
l_linear = zeros(1,length(sigmaRange));
for sample = 1:length(sigmaRange),
if phase1 || phase3,
tic;
[mVs, mL,~] = cpf_predict(Ybus, ref, pv, pq, V_saved,...
lambda_saved, sigmaRange(sample), 1, initQPratio,...
participation_i, flag_lambdaIncrease);
times(sample) = toc;
elseif phase2,
tic;
[mVs, mL, ~] = cpf_predict(Ybus, ref, pv, pq, V_saved,...
lambda_saved, sigmaRange(sample), [2, continuationBus],...
initQPratio, participation_i,flag_lambdaIncrease);
times(sample) = toc;
end
V_linear(sample) = abs(mVs(continuationBus));
l_linear(sample) = mL;
end
fprintf('average of %d iterations of linear predictor: %f seconds\n',...
length(sigmaRange), mean(times));
%get lagrange predictions
V_lagrange = zeros(1, length(sigmaRange));
l_lagrange = zeros(1, length(sigmaRange));
for sample = 1:length(sigmaRange)
if phase1 || phase3,
tic;
[mVs, mL] = cpf_predict_voltage(V_corr(:,1:endPt),...
lambda_corr(1:endPt), lambda_saved, sigmaRange(sample),...
ref, pv, pq, flag_lambdaIncrease, lagrange_order);
times(sample) = toc;
elseif phase2,
tic;
[mVs, mL] = cpf_predict_lambda(V_corr(:,1:endPt),...
lambda_corr(1:endPt), lambda_saved, sigmaRange(sample),...
continuationBus, ref, pv, pq, lagrange_order);
times(sample) = toc;
end
V_lagrange(sample) = abs(mVs(continuationBus));
l_lagrange(sample) = mL;
end
fprintf('average of %d iterations of', length(sigmaRange));
fprintf(' lagrange predictor: %f seconds\n', mean(times));
%plot prediction curves
hold on;
lin_prs = plot(l_linear, V_linear, 'g-.','LineWidth', 2);
lag_prs = plot(l_lagrange, V_lagrange, 'm--','LineWidth', 2);
hold off;
%get linear prediction point at stepSize
if phase1 || phase3,
[mVs, mL,~] = cpf_predict(Ybus, ref, pv, pq, V_saved,...
lambda_saved, stepSize, 1, initQPratio, participation_i,...
flag_lambdaIncrease);
elseif phase2,
[mVs, mL, ~] = cpf_predict(Ybus, ref, pv, pq, V_saved,...
lambda_saved, stepSize, [2, continuationBus], initQPratio,...
participation_i,flag_lambdaIncrease);
end
hold on; lin_pr = scatter(mL, abs(mVs(continuationBus)),...
'c^', 'MarkerFaceColor','c'); hold off;
%get lagrange prediction point at stepSize
if phase1 || phase3,
[mVs, mL] = cpf_predict_voltage(V_corr(:,1:endPt),...
lambda_corr(1:endPt), lambda_saved, stepSize, ref, pv, pq,...
flag_lambdaIncrease, lagrange_order);
elseif phase2,
[mVs, mL] = cpf_predict_lambda(V_corr(:,1:endPt),...
lambda_corr(1:endPt), lambda_saved, stepSize,...
continuationBus, ref, pv, pq, lagrange_order);
end
hold on; lag_pr = scatter(mL, abs(mVs(continuationBus)),...
'cv','MarkerFaceColor','c'); hold off;
%plot solution to correction
hold on;
plot( ...
[lambda_corr(endPt), lambda],...
[abs(V_corr(continuationBus,endPt)),...
abs(V(continuationBus))], 'b')
corr = scatter(lambda, abs(V(continuationBus)),...
'bs','MarkerFaceColor','b'); hold off;
%detail the plot
legend([prev_pr, prev_corr, lin_prs, lag_prs, lin_pr, lag_pr, corr],...
{
'previous predicted voltages',...
'previous corrected voltages',...
'linear prediction function',...
'lagrange prediction function',...
'linear prediction',...
'lagrange prediction',...
'corrected value',...
});
title('Linear Predictor vs. Lagrange Predictor');
ylabel('Voltage (p.u.)');
xlabel('Power (lambda load scaling factor)');
fprintf('done\n')
end
function [med, idx] = mymedian(x)
% mymedian Calculate the median value of an array and find its index.
%
% This function can be used to calculate the median value of an array.
% Unlike the built in median function, it returns the index where the
% median value occurs. In cases where the array does not contain its
% median, such as [1,2,3,4] or [1,3,2,4], the index of the first occurring
% adjacent point will be returned; in both of the above examples the median
% will be 2.5 and the index will be 2.
assert(isvector(x));
med = median(x);
[~, idx] = min( abs( x - med));
end
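`mymedian`'s behaviour can be mirrored in Python for testing; note that Python indices are 0-based, so the documented index 2 for `[1,2,3,4]` corresponds to index 1 here (the helper name is ours):

```python
def my_median(x):
    """Median of a sequence plus the index of the element closest to it
    (cf. mymedian): for [1, 2, 3, 4] the median 2.5 is not in the list,
    so the index of the first nearest value (the 2) is returned."""
    s = sorted(x)
    n = len(s)
    med = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    idx = min(range(n), key=lambda i: abs(x[i] - med))  # first nearest element
    return med, idx
```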
end
%% Changelog - Anton Lodder - 2013.3.27
% I implemented participation factor loading to allow all buses to
% participate in load increase as a function of lambda.
%
% * Participation factors should be given as a vector, one value for each
% bus.
% * if only one value is given, it is assumed that the value is a bus
% number rather than a participation factor, and all other buses get a
% participation factor of zero
% * any buses with zero participation factor will remain at their initial
% load level
% * from the previous two bullets: backwards compatibility is maintained
% while allowing increased functionality
% * if participation is not a valid bus number (eg float, negative number),
%
% * if no participation factors are given, maintain given bus loading
% profile.
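The conventions in the changelog bullets can be sketched as follows; the helper name and the `None` sentinel for "keep the base loading profile" are ours for illustration, not part of MATPOWER:

```python
def interpret_participation(participation, n_bus):
    """Sketch of the participation-factor convention described above:
    a full vector is normalized and used as-is; a single value is treated
    as a bus number that takes all of the load increase; anything invalid
    (e.g. a float or a negative number) falls back to scaling every bus
    by its existing load profile, signalled here by returning None."""
    if isinstance(participation, (list, tuple)) and len(participation) == n_bus:
        total = sum(participation)
        return [p / total for p in participation]  # normalize to sum to 1
    if isinstance(participation, int) and 1 <= participation <= n_bus:
        factors = [0.0] * n_bus
        factors[participation - 1] = 1.0  # all load growth on that one bus
        return factors
    return None  # invalid input: keep the base loading profile
```

This mirrors how the scalar form preserves backwards compatibility while the vector form enables distributed load growth.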
B.2 Prediction Step Using First-Order Approximation
function [V_predicted, lambda_predicted, J, success] = ...
cpf_predict(Ybus, ref, pv, pq, V, lambda, sigma, type_predict,...
initQPratio, participation, flag_lambdaIncrease)
%CPF_PREDICT Do prediction in cpf.
% [INPUT PARAMETERS]
% type_predict: 1-predict voltage; 2-predict lambda
% loadvarloc: (in internal bus numbering)
% [OUTPUT PARAMETERS]
% J: jacobian matrix for the given voltage profile (before prediction)
% created by Rui Bo on 2007/11/12
% MATPOWER
% $Id: cpf_predict.m,v 1.4 2010/04/26 19:45:26 ray Exp $
% by Rui Bo
% Copyright (c) 2009-2010 by Rui Bo
%
% This file is part of MATPOWER.
% See http://www.pserc.cornell.edu/matpower/ for more info.
%
% MATPOWER is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published
% by the Free Software Foundation, either version 3 of the License,
% or (at your option) any later version.
%
% MATPOWER is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with MATPOWER. If not, see <http://www.gnu.org/licenses/>.
%
% Additional permission under GNU GPL version 3 section 7
%
% If you modify MATPOWER, or any covered work, to interface with
% other modules (such as MATLAB code and MEX-files) available in a
% MATLAB(R) or comparable environment containing parts covered
% under other licensing terms, the licensors of MATPOWER grant
% you additional permission to convey the resulting work.
%% set up indexing
npv = length(pv);
npq = length(pq);
Vangles_pv = 1:npv;
Vangles_pq = npv+1:npv+npq;
Vmag_pq = npv+npq+1:npv+2*npq;
lambdaIndex = npv+2*npq + 1;
%% form current variable set from given voltage
x_current = [ angle(V([pv;pq]));
abs(V(pq));
lambda];
%% evaluate Jacobian, dF/d delta and dF/dV
J = getJ(Ybus, V, pv, pq);
%% form K based on participation factors. dF/d lambda
K = [-participation(pv); -participation(pq);...
-participation(pq) .* initQPratio(pq)];
%% form e
e = zeros(1, npv+2*npq+1);
switch type_predict(1),
case 1, %predict voltage
e(npv+2*npq+1) = -1 + 2*(flag_lambdaIncrease == true);
case 2, %predict lambda
%each bus has an angle, plus all PQ buses have a voltage
continuationBus = type_predict(2);
% [Anton] used type_predict to pass in
% bus value I want for voltage continuation
e(npv + find(pq == continuationBus)) = -1;
e(pv==continuationBus) = -1;
otherwise %error
fprintf('Error: unknown ''type_predict''.\n');
end
% e is expected to contain one entry per bus angle (#pv + #pq buses),
% one per PQ-bus voltage magnitude, plus one for lambda
%% form b
b = [zeros(npv+2*npq,1); 1];
%% reformulated jacobian
augJ = [J K ;
e ];
% now includes lambda
%% calculate predicted variable set
x_predicted = x_current + sigma*(augJ\b);
%% convert variable set to voltage form
V_predicted(ref, 1) = V(ref); %reference bus voltage passes through
V_predicted(pv, 1) = abs(V(pv)).*exp(sqrt(-1)*x_predicted(Vangles_pv));
V_predicted(pq, 1) = ...
x_predicted(Vmag_pq).* exp(sqrt(-1) * x_predicted(Vangles_pq) );
lambda_predicted = x_predicted(lambdaIndex);
success = true;
if ~isempty(strfind(lastwarn, 'singular')),
lastwarn('No error');
success = false;
end
end
function J = getJ(Ybus, V, pv, pq)
% J = getJ(Ybus, V, pv, pq)
%
% getJ get the Jacobian of the power flow equations
%
% This function evaluates the Jacobian of the power flow equations at
% the given voltage profile.
[dSbus_dVm, dSbus_dVa] = dSbus_dV(Ybus, V);
j11 = real(dSbus_dVa([pv; pq], [pv; pq]));
j12 = real(dSbus_dVm([pv; pq], pq));
j21 = imag(dSbus_dVa(pq, [pv; pq]));
j22 = imag(dSbus_dVm(pq, pq));
J = -[ j11 j12;
j21 j22; ];
%% form augmented Jacobian
%NOTE: the use of '-J' instead of 'J' is due to the fact that the
%definition of dP (and dQ) in the textbook is the negative of the
%definition in MATPOWER. In the textbook, dP = Pinj - Pbus; in MATPOWER,
%dP = Pbus - Pinj. Therefore, the Jacobians generated by the two
%definitions differ only in sign.
end
B.3 Prediction step using Lagrange Polynomial
B.3.1 Lagrange Polynomial Prediction with λ Continuation
The following code predicts the bus voltages at the next step for a set step in λ, the continuation
parameter in phases 1 and 3 of CPF.
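`lagrangepoly` evaluates the polynomial interpolating the last few corrected points at the new λ. A minimal Python equivalent of that evaluation, for a single scalar quantity (the function name is ours):

```python
def lagrange_extrapolate(x_new, xs, ys):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at x_new: each basis polynomial equals 1 at its own
    node and 0 at the others, so the weighted sum passes exactly through
    the known corrected points and extrapolates beyond them."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x_new - xj) / (xi - xj)
        total += yi * basis
    return total
```

The MATLAB code applies this per state variable (every bus angle and PQ-bus magnitude), using the corrected λ values as the nodes; capping the number of points at `maxDegree` avoids the oscillation high-degree polynomials exhibit.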
function [V_predicted, lambda_predicted] = ...
cpf_predict_voltage(V_corr, lambda_corr, lambda, sigma,...
ref,pv, pq, flag_lambdaIncrease,maxDegree )
if nargin < 9,
maxDegree = 6;
end
% degree of Lagrange polynomial is set by the length of known data; too
% high a degree causes instability
%% set up indexing
npv = length(pv);
npq = length(pq);
Vangles_pv = 1:npv;
Vangles_pq = npv+1:npv+npq;
Vmag_pq = npv+npq+1:npv+2*npq;
%
%% update lambda
if flag_lambdaIncrease,
lambda_predicted = lambda + sigma;
else
lambda_predicted = lambda - sigma;
end
%
V_corr = V_corr(:, max(1,end-maxDegree+1):end);
lambda_corr = lambda_corr(max(1,end-maxDegree+1):end);
%shorten V_corr and lambda_corr to maxDegree points
x_current = [ angle( V_corr([pv;pq], :) ); abs( V_corr(pq, : ))];
x_predicted = lagrangepoly(lambda_predicted, lambda_corr, x_current);
%% convert variable set to voltage form
V_predicted(ref, 1) = V_corr(ref,end);
%reference bus voltage is passed through
V_predicted(pv, 1) =...
abs(V_corr(pv, end)).* exp(sqrt(-1) * x_predicted(Vangles_pv) );
V_predicted(pq, 1) =...
x_predicted(Vmag_pq).* exp(sqrt(-1) * x_predicted(Vangles_pq) );
%
end
B.3.2 Lagrange Polynomial Prediction with Voltage Continuation
The following code predicts the bus voltages and λ value at the next step for a given step size in Vk, the
chosen bus voltage continuation parameter for phase 2.
function [V_predicted, lambda_predicted] = ...
cpf_predict_lambda(V_corr, lambda_corr, lambda, sigma,...
continuationBus, ref,pv, pq, maxDegree)
if nargin < 9, maxDegree = 6; end
% if lambda > 1.79, keyboard; end
%% set up indexing
npv = length(pv);
npq = length(pq);
Vangles_pv = 1:npv;
Vangles_pq = npv+1:npv+npq;
Vmag_pq = npv+npq+1:npv+2*npq;
lambdaIndex = npv+2*npq + 1;
%% update lambda
%
% degree of Lagrange polynomial is set by length of known data, too
% high a degree causes instability
V_corr = V_corr(:, max(1,end-maxDegree+1):end);
%
V_conn = abs(V_corr(continuationBus,:));
lambda_corr = lambda_corr(max(1,end-maxDegree+1):end);
%shorten V_corr and lambda_corr to maxDegree points
V_conn_predicted = V_conn(end) - sigma;
x_current = [angle( V_corr([pv;pq], :) );...
abs( V_corr(pq, : ));...
lambda_corr];
x_predicted = lagrangepoly(V_conn_predicted, V_conn, x_current);
%% convert variable set to voltage form
V_predicted(ref, 1) = V_corr(ref,end);
%reference bus voltage is passed through
V_predicted(pv, 1) = ...
abs(V_corr(pv, end)).* exp(sqrt(-1) * x_predicted(Vangles_pv) );
V_predicted(pq, 1) =...
x_predicted(Vmag_pq).* exp(sqrt(-1) * x_predicted(Vangles_pq) );
lambda_predicted = x_predicted(lambdaIndex);
end
B.3.3 Implementation of Lagrange Polynomial
The following code implements the Lagrange polynomial in vector form, providing interpolants for a
vector of sampled functions.
function Lx = lagrangepoly(x, xs, ys)
% lagrangepoly
% lagrange polynomial.
%
% Lx = lagrangepoly(x,xs,ys);
%
% This function uses the Lagrange polynomial formulation to perform
% polynomial interpolation or extrapolation. It returns
% y = L(x), where L is the Lagrange polynomial derived from
% xs and ys. The order of the polynomial is
% determined by the number of samples in the input.
%
% inputs:
%   x:  x-axis value at which the inter/extrapolated y value is sought
%   xs: x values corresponding to the input functional values, in columns
%   ys: y data from which to extrapolate the output y = L(x), where L is
%       the Lagrange polynomial. Each row of ys is treated as a
%       separate function, with one sample per column; thus xs and ys
%       should have the same number of columns, and Lx will have the
%       same number of rows as ys.
nSamples = length(xs);
xxm = x-xs;
js = 1:nSamples;
Lj = zeros(nSamples,1);
for j = 1:nSamples,
Lj(j) = prod(xxm(:,js~=j),2) ./ prod( xs(j)- xs(:, js~=j));
end
Lx = ys * Lj;
end
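The MATLAB routine above evaluates one Lagrange basis weight per sample and takes a weighted sum of the stored corrector solutions. A plain-Python sketch of the same formulation (a hypothetical standalone version for illustration, not the thesis code) behaves as follows:

```python
def lagrangepoly(x, xs, ys):
    """Evaluate the Lagrange interpolating polynomial at scalar x.

    xs -- sample locations (length n)
    ys -- sampled function values, one function per row (each row length n)
    Returns [L_f(x) for each row f of ys].
    """
    n = len(xs)
    basis = []
    for j in range(n):
        # basis polynomial ell_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        num = den = 1.0
        for m in range(n):
            if m != j:
                num *= (x - xs[m])
                den *= (xs[j] - xs[m])
        basis.append(num / den)
    return [sum(b * y for b, y in zip(basis, row)) for row in ys]

# A degree-2 polynomial is reproduced exactly from three samples:
# extrapolating y = x^2 past the last sample point.
print(lagrangepoly(3.0, [0.0, 1.0, 2.0], [[0.0, 1.0, 4.0]]))  # [9.0]
```

As in the MATLAB version, the polynomial degree grows with the number of stored samples, which is why the predictor caps the history at maxDegree points.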
B.4 Correction step of Continuation Power Flow
B.4.1 Correction Step With λ as Continuation Parameter
The following code solves for an exact voltage solution at the specified loading level when λ is the
continuation parameter.
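Before solving, the corrector rescales the bus loads to the predicted λ. In the notation of the code, with participation factors $k_i$ and initial Q/P ratios $r_i$ (initQPratio):

```latex
P_{d,i}(\lambda) =
\begin{cases}
P_{d,i}^{\,0}, & k_i = 0 \\[4pt]
k_i \,\lambda\, S_{\text{base}}, & k_i \neq 0
\end{cases}
\qquad
Q_{d,i}(\lambda) = r_i \, P_{d,i}(\lambda)
```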
function [V, lambda, success, iterNum] = cpf_correctVoltage(...
baseMVA, bus, gen, Ybus, V_predicted, lambda_predicted,...
initQPratio, participation)
%CPF_CORRECTVOLTAGE Do correction for predicted voltage in cpf.
% [INPUT PARAMETERS]
% participation: (in internal bus numbering) percentage of loading for
% each bus
% created by Rui Bo on 2007/11/12
% MATPOWER
% $Id: cpf_correctVoltage.m,v 1.4 2010/04/26 19:45:26 ray Exp $
% by Rui Bo
% Copyright (c) 2009-2010 by Rui Bo
%
% This file is part of MATPOWER.
% See http://www.pserc.cornell.edu/matpower/ for more info.
%
% MATPOWER is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published
% by the Free Software Foundation, either version 3 of the License,
% or (at your option) any later version.
%
% MATPOWER is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with MATPOWER. If not, see <http://www.gnu.org/licenses/>.
%
% Additional permission under GNU GPL version 3 section 7
%
% If you modify MATPOWER, or any covered work, to interface with
% other modules (such as MATLAB code and MEX-files) available in a
% MATLAB(R) or comparable environment containing parts covered
% under other licensing terms, the licensors of MATPOWER grant
% you additional permission to convey the resulting work.
%% define named indices into bus, gen, branch matrices
% [PQ, PV, REF, NONE, BUS_I, BUS_TYPE, PD, QD, GS, BS, BUS_AREA, VM, ...
% VA, BASE_KV, ZONE, VMAX, VMIN, LAM_P, LAM_Q, MU_VMAX, MU_VMIN] = idx_bus;
[~, ~, ~, ~, ~, ~, PD, QD, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~] ...
    = idx_bus;
%% get bus index lists of each type of bus
[ref, pv, pq] = bustypes(bus, gen);
%% set load as lambda indicates
lambda = lambda_predicted;
bus(:,PD) = bus(:,PD) .*(participation == 0)...
+ participation.*lambda.*baseMVA;
bus(:,QD) = bus(:,PD) .* initQPratio;
%% compute complex bus power injections (generation - load)
SbusInj = makeSbus(baseMVA, bus, gen);
%% prepare initial guess
V0 = V_predicted; % use predicted voltage to set the initial guess
%% run power flow to get solution of the current point
mpopt = mpoption('VERBOSE', 0);
[V, success, iterNum] = newtonpf(Ybus, SbusInj, V0, ref, pv, pq, mpopt);
% run Newton-Raphson power flow solver
if ~isempty(strfind(lastwarn, 'singular')),
    lastwarn('No error');
    success = false;
end
B.4.2 Correction Step With Bus Voltage as Continuation Parameter
The following code solves for an exact voltage solution when a bus voltage is used as the continuation
parameter.
function [V, lambda, converged, iterNum] = cpf_correctLambda(...
baseMVA, bus, gen, Ybus, Vm_assigned, V_predicted,...
lambda_predicted, initQPratio, participation,...
ref, pv, pq, continuationBus)
%CPF_CORRECTLAMBDA Correct lambda in correction step near load point.
% function: correct lambda(ie, real power of load) in cpf correction step
% near the nose point. Use NR’s method to solve the nonlinear equations
% [INPUT PARAMETERS]
% loadvarloc: (in internal bus numbering)
% created by Rui Bo on 2007/11/12
% MATPOWER
% $Id: cpf_correctLambda.m,v 1.4 2010/04/26 19:45:26 ray Exp $
% by Rui Bo
% and Ray Zimmerman, PSERC Cornell
% Copyright (c) 1996-2010 by Power System Engineering Research Center
% (PSERC)
% Copyright (c) 2009-2010 by Rui Bo
%
% This file is part of MATPOWER.
% See http://www.pserc.cornell.edu/matpower/ for more info.
%
% MATPOWER is free software: you can redistribute it and/or modify
% it under the terms of the GNU General Public License as published
% by the Free Software Foundation, either version 3 of the License,
% or (at your option) any later version.
%
% MATPOWER is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
% GNU General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with MATPOWER. If not, see <http://www.gnu.org/licenses/>.
%
% Additional permission under GNU GPL version 3 section 7
%
% If you modify MATPOWER, or any covered work, to interface with
% other modules (such as MATLAB code and MEX-files) available in a
% MATLAB(R) or comparable environment containing parts covered
% under other licensing terms, the licensors of MATPOWER grant
% you additional permission to convey the resulting work.
%% define named indices into bus, gen, branch matrices
% [PQ, PV, REF, NONE, BUS_I, BUS_TYPE, PD, QD, GS, BS, BUS_AREA, VM, ...
%  VA, BASE_KV, ZONE, VMAX, VMIN, LAM_P, LAM_Q, MU_VMAX, MU_VMIN] = idx_bus;
[~, ~, ~, ~, ~, ~, PD, QD, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~, ~]...
    = idx_bus;
%% options
tolerance = 1e-5; % mpopt(2);
max_iters = 100; % mpopt(3);
verbose = 0; % mpopt(31);
%% initialize
V = V_predicted;
lambda = lambda_predicted;
Va = angle(V);
Vm = abs(V);
%% set up indexing for updating V
npv = length(pv);
npq = length(pq);
Vangles_pv = 1:npv;
Vangles_pq = npv+1:npv+npq;
Vmag_pq = npv+npq+1:npv+2*npq;
lambdaIndex = npv+2*npq + 1;
%% update bus and calculate power flow value at current V
[bus, F] = updatePF(bus);
%% do Newton iterations
i = 0;
converged = false;
while (~converged && i < max_iters)
i = i + 1;
%% evaluate Jacobian
J = getJ(Ybus, V, pv, pq);
%% form augmented Jacobian with V as continuation parameter
delF_dLambda = [-participation(pv); -participation(pq);
-participation(pq) .* initQPratio(pq)];
%dF/dangle = 0, dF/dVm = -1 at the continuation bus, dF/dlambda = 0
delVm = zeros(1, npv+npq*2 + 1);
delVm(npv+npq+find(pq == continuationBus)) = -1;
augJ = [ J delF_dLambda;
delVm ];
%% compute update step
dx = -(augJ \ F);
if ~isempty(strfind(lastwarn, 'singular')),
    lastwarn('No error');
    converged = false;
    break;
end
%% update voltage.
Va( [pv;pq] ) = Va( [pv;pq] ) + dx( [Vangles_pv Vangles_pq]);
Vm(pq) = Vm(pq) + dx(Vmag_pq);
lambda = lambda + dx(lambdaIndex);
% NOTE: voltage magnitude of pv buses, voltage magnitude and
% angle of reference bus are not updated, so they keep as
% constants (ie, the value as in the initial guess)
V = Vm .* exp(1i * Va);
% NOTE: angle in radians in pf solver, but degrees in case data
Vm = abs(V); %% update Vm and Va again in case
Va = angle(V); %% we wrapped around with a negative Vm
[bus, F] = updatePF(bus);
%% check for convergence
normF = norm(F, inf);
if verbose > 1,
fprintf(’\niteration [%3d]\t\tnorm of mismatch: %10.3e’,...
i, normF);
end
converged = normF < tolerance;
end
iterNum = i;
%
if ~isempty(strfind(lastwarn, 'singular')),
    lastwarn('No error');
    converged = false;
end
function [bus, F] = updatePF(bus)
%% set load as lambda indicates
bus(:,PD) = bus(:,PD) .*(participation == 0) ...
+ participation.*lambda.*baseMVA;
bus(:,QD) = bus(:,PD) .* initQPratio;
%% compute complex bus power injections (generation - load)
SbusInj = makeSbus(baseMVA, bus, gen);
%% evaluate F(x0)
F = Feval(V,Vm_assigned, SbusInj, Ybus, pv, pq, continuationBus);
end
end
function F = Feval(V, Vm_assigned, SbusInj, Ybus, pv, pq, continuationBus)
% F = Feval(V, Vm_assigned, SbusInj, Ybus, pv, pq, continuationBus)
%
% Feval  evaluate power flow mismatch equations
%
% Evaluates the power flow mismatch  S_inj - V .* conj(Ybus * V)
%% calculate mismatch
mis = SbusInj - ( V .* conj(Ybus * V) );
%% order mismatches according to formulation
F = [ real(mis(pv));
real(mis(pq));
imag(mis(pq));
abs(V(continuationBus)) - Vm_assigned(continuationBus); ];
end
function J = getJ(Ybus, V, pv, pq)
% J = getJ(Ybus, V, pv, pq)
%
% getJ  get Jacobian of the power flow equations
%
% This function computes the Jacobian of the power flow equations at
% the current voltage solution V.
[dSbus_dVm, dSbus_dVa] = dSbus_dV(Ybus, V);
j11 = real(dSbus_dVa([pv; pq], [pv; pq]));
j12 = real(dSbus_dVm([pv; pq], pq));
j21 = imag(dSbus_dVa(pq, [pv; pq]));
j22 = imag(dSbus_dVm(pq, pq));
J = -[ j11 j12;
j21 j22; ];
%% form augmented Jacobian
%NOTE: '-J' is used instead of 'J' because the definition of dP (and dQ)
%in the textbook is the negative of the definition in MATPOWER. In the
%textbook, dP = Pinj - Pbus; in MATPOWER, dP = Pbus - Pinj. The Jacobians
%generated by the two definitions therefore differ only in sign.
end
Appendix C
Source Code for Visualizations
The visualizations developed in this thesis were built using Python and PySide, LGPL-licensed
Python bindings for the Qt framework. They are built on cross-platform, open-source technology
and can be readily installed on a wide variety of platforms.
C.1 Software Representations of Power Systems
The following code defines object-oriented software representations of power system elements and
contingencies, including techniques for building a responsive one-line diagram from the system
structure that integrates contingencies.
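One pattern worth noting in the listing below is pairwise type dispatch: each element class answers distanceFrom for the pairings it knows how to compute and defers every other pairing back to the other element. A minimal Qt-free sketch of that pattern (hypothetical Point/Group classes for illustration, not the thesis types):

```python
import math

class Point:
    """Simple element with a coordinate."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def distance_from(self, other):
        # Point only knows point-to-point; defer every other pairing
        # back to the other element's implementation.
        if isinstance(other, Point):
            return math.hypot(self.x - other.x, self.y - other.y)
        return other.distance_from(self)

class Group:
    """Composite element: its distance to anything is the minimum
    over its members, mirroring how Transformer delegates to its
    connected buses and branches."""
    def __init__(self, members):
        self.members = members

    def distance_from(self, other):
        return min(m.distance_from(other) for m in self.members)

g = Group([Point(0, 0), Point(10, 0)])
print(Point(3, 4).distance_from(g))  # 5.0
```

This keeps each class small: adding a new element type only requires it to handle the pairings it understands and delegate the rest.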
"""
written by Anton Lodder 2012-2014
all rights reserved.
This software is the property of the author and may not be copied,
sold or redistributed without expressed consent of the author.
"""
import sys, os, inspect
try:
import pytreemap
except:
#walk up to ’pytreemap’ and add to path.
realpath = os.path.realpath(
os.path.dirname(inspect.getfile(inspect.currentframe())))
(realpath, filename) = os.path.split(realpath)
while filename != ’pytreemap’:
(realpath, filename) = os.path.split(realpath)
sys.path.append(realpath)
import pytreemap
from numpy import *
from collections import defaultdict
import weakref
from pytreemap import DistanceCalculations as DC
from PySide.QtGui import *
from PySide.QtCore import *
import sys
import math
import warnings
## test
""" The simple test for PowerNetwork is to build a
one-line diagram from a case. Further testing
should include testing distance calculations
and other aspects of the code """
def main():
# from PowerNetwork import Bus, Branch
# from VisBuilder import CPFfile
from pytreemap.visualize.DetailsWidget import DetailsWidget
#
#
# mCPFfile = CPFfile(’cpfResults_case118_1level’)
# #open a default cpf file
# mElements = mCPFfile.Branches + mCPFfile.Buses
#
from pytreemap.system.PowerNetwork import Bus, Branch
from pytreemap.visualize.VisBuilder import JSON_systemFile
file = 'case30_geometry.json'
# file = 'case118_geometry.json'
file = os.path.join(pytreemap.system.__path__[0], file)
mSystem = JSON_systemFile(sys=file);
mElements = mSystem.Transformers \
+ mSystem.Branches \
+ mSystem.Buses \
+ mSystem.Generators
# elList = mSystem.getElementList()
# print(’wait’);
app = QApplication(sys.argv)
mDetails = DetailsWidget()
mOneline = OneLineWidget(mElements,
shape = [0,0,900,700], details = mDetails)
mVis = Vis(oneline =mOneline, details = mDetails)
sys.exit(app.exec_())
## code
def boundingRect(elList):
bounds = array([list(el.boundingRect().getCoords()) for el in elList])
boundingRect = [min(bounds[:,0]),
min(bounds[:,1]), max(bounds[:,2]), max(bounds[:,3])]
return boundingRect
class Vis(QMainWindow):
def __init__(self, oneline = None, details = None):
super().__init__()
self.oneline = oneline
self.widget = QWidget()
self.setCentralWidget(self.widget)
oneline.setParent(self.widget)
layout = QGridLayout()
self.widget.setLayout(layout)
# layout.addLayout(self.widget)
layout.setSpacing(0)
# layout.addWidget(details,1,0,3,1)
layout.addWidget(oneline,4,0,1,1)
self.setGeometry(20,20,900,900)
self.setWindowTitle('Visualize')
self.show()
# layout.setContentsMargins(0, 0, 0, 0)
# layout.setSpacing(0)
# log(’visualization created’)
class OneLineWidget(QGraphicsView):
""" A widget in which a one-line diagram is drawn. Contains a graphics scene. """
def __init__(self, elList, shape=None,details = None):
QGraphicsView.__init__(self)
shape = shape if shape != None else [10,10,900,900]
if shape:
x,y,w,h = shape
self.move(x,y)
self.resize(w,h)
#css :: simplify the styling of scrollbars
self.verticalScrollBar().setStyleSheet(
"""QScrollBar:vertical{border:gray;background:white;width:12px;
margin:12px 0 12px 0;}QScrollBar::handle:vertical{background
:gray;min-height:12px;}QScrollBar::add-line:vertical{backgro
und:none;height:26px;subcontrol-position:bottom;subcontrol-o
rigin:margin;}QScrollBar::sub-line:vertical{background:none;
height:26px;subcontrol-position:top left;subcontrol-origin:m
argin;position:absolute;}QScrollBar::add-page:vertical,QScro
llBar::sub-page:vertical{background:none;}""")
self.horizontalScrollBar().setStyleSheet(
"""QScrollBar:horizontal{border:none;background:white;height:12
px;margin:0 12px 0 12px;}QScrollBar::handle:horizontal{backg
round:gray;min-width:26px;}QScrollBar::add-line:horizontal{b
ackground:none;width:26px;subcontrol-position:right;subcontr
ol-origin:margin;}QScrollBar::sub-line:horizontal{background
:none;width:15px;subcontrol-position:top left;subcontrol-ori
gin:margin;position:absolute;}QScrollBar::add-page:horizonta
l,QScrollBar::sub-page:horizontal{background:none;}""")
diagramBound = boundingRect(elList)
self.details = details
#build a graphicsscene
self.scene = QGraphicsScene(self)
if diagramBound:
xa,ya,w_,h_ = diagramBound
xa, ya, = round(xa-40), round(ya-40)
w_, h_ = round(w_ + 40 ),round(h_+40)
self.setSceneRect(xa,ya,w_,h_)
scale = min([w/w_, h/h_]) * 0.99
else:
self.setSceneRect(*shape)
scale = 1
self.setScene(self.scene)
self.setCacheMode(QGraphicsView.CacheBackground)
self.setRenderHint(QPainter.Antialiasing)
self.setTransformationAnchor(QGraphicsView.AnchorUnderMouse)
self.setResizeAnchor(QGraphicsView.AnchorViewCenter)
self.elements = []
self.addElement(elList)
self.scale(scale,scale)
self.setWindowTitle('Oneline')
# self.show()
def wheelEvent(self, event):
self.scaleView(math.pow(2.0, event.delta()/240.0))
def scaleView(self, scaleFactor):
factor = self.matrix().scale(scaleFactor, scaleFactor) \
.mapRect(QRectF(0, 0, 1, 1)).width()
if factor < 0.07 or factor > 100:
return
self.scale(scaleFactor, scaleFactor)
def addElement(self, element):
try:
if type(element) is dict: element = element.values()
for el in element :
self.addElement(el)
except TypeError:
self.elements += [element]
self.scene.addItem(element)
self.show()
class InputError(Exception):
    def __str__(self):
        return "Require id & pos OR from_dict"
class Element(QGraphicsItem,object ):
""" Object representing single grid elements, with child types
such as branch, bus, generator and transformer. Has methods
for comparing elements for equality, storing position
information, as well as implements the necessary functions
to be drawn in a one-line diagram."""
color = '#F0F0F0'
weight = 10
geo = defaultdict(None)
hColor = {False: QColor("#b2b8c8"), True: QColor("#e45353")}
# hColor = {False: QColor("#b6cdc1"), True: QColor("#e45353")}
def __init__(self,id=None, pos=None,
connected=None, from_dict=None):
super(Element, self).__init__()
if id is None and pos is None and from_dict is None:
raise InputError()
if from_dict:
self.fromDict(from_dict)
else:
self.id=id
self.pos = pos
if connected: self.connected = list(connected)
#self.connected will always be a list.
else:
self.connected = []
self.faults = []
self.setAcceptHoverEvents(True)
self.newPos = QPointF()
self.setCacheMode(self.DeviceCoordinateCache)
self.setFlag(QGraphicsItem.ItemSendsGeometryChanges)
self.setZValue(-1)
self.highlight = False
def fromDict(self, mDict):
    self.pos = mDict['pos']
    self.id = mDict['id']
def toDict(self):  # use this to serialize to a dict
    mDict = { 'type': self.__class__.__name__,
              'id': self.id,
              'pos': self.pos}
    return mDict
def html_name(self):
return "<p>{}</p>{}".format(self.__class__.__name__, self.id)
def html_connected_li(self):
return [ "<li>{}</li>".format(el.html_name()) \
for el in self.connected]
def html_connected(self):
elList = "<ul>{}</ul>".format( "".join(self.html_connected_li()))
return "<div class=’info’><p>Connections:</p>{}</div>" \
.format(elList)
def html_percentage(self):
if len(self.faults) > 0:
fault = [el for el in self.faults if len(el) < 2][0]
pctStr = "{:.3f}".format(100*fault.getGlobalContext())
else:
pctStr = "N/A"
return "<div class=’info’><p>Pct. Loadability:</p>{}%</div>" \
.format(pctStr)
def html(self):
return "<div class=’el’><h>{}</h>{}{}</div>" \
.format(str(self),
self.html_connected(),
self.html_percentage())
def setDetails(self):
if self.scene().parent().details:
self.scene().parent().details.setContent(self.html())
def __repr__(self):
string = "{:6s} {:04d}".format(self.__class__.__name__ ,self.id)
return string
def shortRepr(self):
# return "{:.2s}{:d}".format(self.__class__.__name__, self.id)
return str(self.id)
def __eq__(self, other):
return \
True if self.__class__.__name__ == other.__class__.__name__ \
and self.id == other.id else False
def __cmp__(self, other):
if self.__class__.__name__ < other.__class__.__name__: return -1
elif self.__class__.__name__ > other.__class__.__name__: return 1
else: return self.id - other.id
def __hash__(self): return hash(str(self))
def getGeo(self): return Element.geo[self.__class__][self.id]
def getPos(self): return self.pos
def secondary(self): return self.getGeo()
def addFault(self,fault):
try: self.faults.append(fault)
except AttributeError: self.faults = [fault]
def boundingRect(self):
    pos = self.getPos()
    if pos:
        return QRectF(pos[0], pos[1], 0, 0)
    else:
        return QRect(0, 0, 0, 0)
def fitIn(self, newBox, oldBox):
# the default fitIn behaviour is to scale
# whatever comes from self.getPos()
point = self.getPos()
point = self.scalePoint(point, newBox, oldBox)
self.pos = point
def scalePoint(self, point, newBox, oldBox):
#scale point in ’oldBox’ to fit in box (x0,y0,xn,yn)
x0,y0,xn,yn = newBox
width,height = xn-x0,yn-y0
widthL = oldBox[2]-oldBox[0]
heightL = oldBox[3]-oldBox[1]
xL,yL = oldBox[0],oldBox[1]
def scale(x,y):
x = x0 + width * (x-xL)/widthL
y = y0 + height * (y-yL)/heightL
return [x,y]
return scale(*point)
#
def shape(self):
try:
return self.mShape
except:
self.mShape = self.defineShape()
return self.mShape
def defineShape(self):
path = QPainterPath()
pos = self.getPos()
radius = self.__class__.weight
try:
path.moveTo(QPointF(*pos))
except:
print(’wait’)
path.addEllipse(QRectF(pos[0]-radius,
pos[1]-radius, 2*radius, 2*radius))
return path
def paint(self, painter, option, widget):
mColor = self.__class__.hColor[self.highlight]
# or self.isUnderMouse()]
painter.setPen(mColor)
painter.setBrush(mColor)
painter.drawPath(self.shape())
def hoverEnterEvent(self, event):
self.update(self.boundingRect())
def hoverLeaveEvent(self,event):
self.update(self.boundingRect())
def mousePressEvent(self, event):
print(str(self))
self.toggleHighlight()
for fault in self.faults: fault.toggleHighlight()
self.setDetails()
def toggleHighlight(self):
# print(’<toggle highlight: {:s}>’.format(str(self)))
self.highlight = not self.highlight
self.update(self.boundingRect())
def setHighlight(self, set = False):
if self.highlight != set:
self.toggleHighlight()
def setGraph(self,graph):
self.graph = weakref.ref(graph)
def distanceFrom(self, other):
    # Euclidean distance between the two element positions
    dx, dy = array(self.getPos()) - array(other.getPos())
    return sqrt(dx**2 + dy**2)
class Branch(Element):
color = '#DB0058'
radius = 1
def __init__(self,id=None, pos=None, buses=None, from_dict=None):
if id is None and pos is None and from_dict is None:
raise InputError()
if from_dict:
pos = from_dict[’pos’]
id = from_dict[’id’]
buses = from_dict[’buses’]
super().__init__(id, pos, buses)
#assign self to buses
if buses:
for bus in buses: bus.connected.append(self)
def boundingRect(self):
if self.pos:
x,y = array(self.pos).transpose()
return QRectF( min(x)-Branch.radius,
min(y)-Branch.radius,
max(x)-min(x)+2*Branch.radius,
max(y)-min(y)+2*Branch.radius)
else:
return QRect(0,0,0,0)
def fitIn(self, newBox, oldBox):
points = self.pos
self.pos = [self.scalePoint(point, newBox, oldBox) \
for point in points]
def toDict(self):
mDict = { ’type’: self.__class__.__name__,
’id’: self.id,
’pos’: self.pos,
’buses’: [el.id for el in self.connected]}
return mDict
def defineShape(self):
path = QPainterPath()
path.setFillRule(Qt.WindingFill)
radius = Branch.radius
rotation = array( [[0,-1],[1,0]])
points = zip(self.pos[0:-1], self.pos[1:])
for p0, pn in points:
(x0,y0),(xn,yn) = p0,pn
dx,dy = array(pn) - array(p0)
dV = array([dx,dy])
mag_dV = linalg.norm(dV)
if mag_dV == 0:
break;
v = dot(rotation, dV) * radius / mag_dV
startAngle = arctan2(*v) * 180/pi + 90
path.moveTo(QPointF(*p0-v))
#starting arc
path.addEllipse(x0-radius, y0-radius,2*radius,2*radius)
#rectangular part
path.lineTo(QPointF(*p0-v))
path.lineTo(QPointF(*pn-v))
path.lineTo(QPointF(*pn+v))
path.lineTo(QPointF(*p0+v))
path.moveTo(QPointF(*pn+v))
path.addEllipse(QRectF(xn-radius, yn-radius,
2*radius, 2*radius))
return path.simplified()
def distanceFrom(self,other):
if self.pos:
try:
if type(other) is Transformer:
# if transformer, return the smallest distance from
# all elements in the transformer.
return other.distanceFrom(self)
elif type(other) is Branch: #if the other is a line,
return DC.lineToLine(self.pos, other.pos)[0]
else:
return DC.pointToLine(self.pos, other.getPos())[0]
except (TypeError, AttributeError):
return None
else:
return None
class Bus(Element):
color = '#408Ad2'
w,h = 16,5
def defineShape(self):
x,y = self.getPos()
path = QPainterPath()
path.moveTo(x,y)
path.addRect(QRectF(x-Bus.w/2, y-Bus.h/2, Bus.w, Bus.h))
return path
def toDict(self):
mDict = { ’type’: self.__class__.__name__,
’id’: self.id,
’pos’: self.pos
}
return mDict
def boundingRect(self):
if self.pos:
x,y = self.getPos()
# return QRectF(*[x-Bus.w/2, y-Bus.h/2, Bus.w,Bus.h])
return QRectF(* [x-Bus.w*1.1, y-Bus.h*1.1,
Bus.w*2.2, Bus.h*2.2])
else:
return QRect(0,0,0,0)
def distanceFrom(self, other):
try:
if type(other) is not type(self):
return other.distanceFrom(self)
else:
return DC.pointToPoint(self.getPos(), other.getPos())
except (TypeError, AttributeError):
return None
def toggleHighlight(self):
super().toggleHighlight()
for branch in self.connected:
branch.toggleHighlight()
def paint(self, painter, option, widget):
#overwrite paint to draw numbers
super().paint(painter, option, widget)
painter.setPen(Qt.black)
painter.setFont(QFont(’serif’, 5))
x,y = self.getPos();
painter.drawText(QPointF(x - Bus.w, y), str(self.id))
class Gen(Element):
color = '#FF9700'
# hColor = {False: QColor("#b2b8c8"), True: QColor("#e45353")}
hColor = {False: QColor("#509CF2"), True: QColor("#e45353")}
def __init__(self, id=None, bus=None, from_dict=None):
if id is None and bus is None and from_dict is None:
raise InputError()
if from_dict:
self.__init__(from_dict[’id’], from_dict[’bus’])
else:
super().__init__(id, [None,None],[bus])
if bus:
bus.connected.append(self)
@property
def bus(self):
return self.connected[0]
def toDict(self):
mDict = { ’type’: self.__class__.__name__,
’id’: self.id,
’pos’: self.getPos(),
’bus’: self.connected[0].id}
return mDict
def getPos(self):
return self.connected[0].getPos()
def boundingRect(self):
if self.connected[0] and self.connected[0].pos:
return self.shape().boundingRect()
else:
return QRect(0,0,0,0)
def distanceFrom(self, other):
try:
return other.distanceFrom(self.connected[0])
except AttributeError:
return None
def defineShape(self):
x,y = self.getPos()
path = QPainterPath()
mFont = QFont(’Helvetica’, 8, QFont.Light)
path.addText(QPointF(x+2,y-Bus.h-2), mFont, "G")
mFont = QFont(’arial’, 5, QFont.Light)
path.addText(QPointF(x+10, y-Bus.h), mFont, str(self.id))
return path
#
# def paint(self, painter, option, widget):
#
# painter.setBrush(Qt.blue)
# painter.setPen(Qt.blue)
# painter.drawRect(self.boundingRect())
#
#
# super().paint(painter, option, widget)
# x,y = self.bus.getPos()
#
#
# mFont = QFont(’courier’, 15, QFont.Bold)
# # print(QFontInfo(mFont).family())
# painter.setBrush(Qt.red)
# painter.setPen(Qt.red)
# # painter.drawText(x+2,y-5, ’G’)
class Transformer(Element):
color = '#80E800'
def __init__(self, id=None, elements=None, from_dict=None):
if id is None and elements is None and from_dict is None:
raise InputError()
if from_dict:
id = from_dict[’id’]
elements = from_dict[’connected’]
super().__init__(id,[], elements)
# def __init__(self, id, elements):
# # self.elements = elements
# super().__init__(id, [None,None])
# @property
# def connected(self):
# return self.elements
#
# @connected.setter
# def connected(self):
# pass
def toDict(self):
mDict = {
’type’: self.__class__.__name__,
’id’: self.id,
’pos’: self.getPos(),
’connected’: {’Bus’:[el.id for el in self.connected \
if type(el) is Bus],
’Branch’: [el.id for el in self.connected \
if type(el) is Branch]
}
}
return mDict
def getPos(self):
pos = []
for el in self.connected:
if type(el) is Branch:
pos += el.getPos()
else:
pos.append(el.getPos())
# # pos = [el.getPos() for el in self.elements]
# return pos
return mean(pos,0)
def toggleHighlight(self):
for el in self.connected:
el.toggleHighlight()
def boundingRect(self):
rects = [list(el.boundingRect().getRect()) \
for el in self.connected if type(el) is Bus]
rects = [el for el in rects if el != None]
if rects:
rects = array(rects)
points = [rects[:,0].transpose(),
rects[:,1].transpose(),
rects[:,0]+rects[:,2], rects[:,1] + rects[:,3]]
x0,y0 = min(points[0]), min(points[1])
xn,yn = max(points[2]), max(points[3])
return QRectF( x0,y0, xn-x0, yn-y0)
else:
return QRect(0,0,0,0)
def fitIn(self, *args):
pass
def distanceFrom(self,other):
if type(other) == type(self):
distances = []
for EL in self.connected:
for el in other.connected:
distances.append(EL.distanceFrom(el))
return min(distances) if distances else None
else:
distances = \
[el.distanceFrom(other) for el in self.connected]
return min(distances) if distances else None
def paint(self, painter, option, widget):
pass
# painter.setPen(Qt.red)
# painter.setBrush(Qt.red)
#
# x,y,w,h = self.boundingRect().getRect()
# painter.drawText(x+15,y+15,’Trans’)
class Fault(object):
""" Object represents a system fault from a power system
perspective. Includes information about elements involved
as well as any child faults [these must be specified using
.addConnection()]. Does not implement any UI related
methods for including in a Treemap or Treemap/Oneline combo"""
levelContext = defaultdict(list)
globalContext = defaultdict(list)
cumulativeContext = defaultdict(list)
def __init__(self,listing, reduction = None):
#listing is a dictionary containing: label, elements
self._subValue = None
self.suppress = False
self.label = listing['label'] if 'label' in listing else 'none'
if 'label' in listing: del listing['label']
self.value = listing['reduction'] \
    if 'reduction' in listing else reduction
self.elements = listing['elements']
self.connections = [] #for tracking sub-faults
self.elements.sort(key=hash)
for element in self.elements:
element.addFault(self)
def isParentOf(self, other):
#return true if contains same elements as ’other’ plus extras
nSelf, nOther = len(self.elements), len(other.elements)
#only a direct child if there is one more element
if nOther != nSelf + 1: return False
s,o= 0,0
matches = 0
nomatch = 0
while matches < nSelf:
if s >= nSelf or o >= nOther:
# if we passed the end of either array
# but not all in self.elements are matched
return False
#increment matches count and move forward in both lists
if self.elements[s] == other.elements[o]:
matches += 1
s+=1
o+=1
#increment nomatch count and move forward only in child list
else:
nomatch+=1
o+=1
#if we count more than 1 nomatch it can’t be a direct child
if nomatch > 2:
return False
#if matches == nSelf, all in self.elements
# are matched and loop stops. is parent.
return True
def __repr__(self):
return 'Fault ({})'.format(repr(self.elements))
def __str__(self):
return repr(self)
def __len__(self):
return len(self.elements)
#override ’in’ to tell us if the fault contains an element
def __contains__(self, other):
if issubclass(type(other), Element):
return other in self.elements
else:
return False
@staticmethod
def setGlobalContext(floor, ceiling):
Fault.globalContext = {'floor': floor, 'ceiling':ceiling}
@staticmethod
def setLevelContext(level, floor, ceiling):
Fault.levelContext[level] = {'floor':floor, 'ceiling':ceiling}
@staticmethod
def setCumulativeContext(level,floor,ceiling):
Fault.cumulativeContext[level] = {'floor': floor,
'ceiling': ceiling}
def getGlobalContext(self):
try:
lo = self.globalContext['floor']
hi = self.globalContext['ceiling']
except (KeyError, TypeError): return 0
return (self.value - lo) / (hi-lo) if (hi-lo) > 0 else 0
def getLevelContext(self):
try:
lo = self.levelContext[len(self.elements)]['floor']
hi = self.levelContext[len(self.elements)]['ceiling']
except (KeyError, TypeError):
return 0
return (self.value - lo) / (hi-lo) if (hi-lo) > 0 else 0
def getCumulativeContext(self):
try:
lo = self.cumulativeContext[len(self.elements)]['floor']
hi = self.cumulativeContext[len(self.elements)]['ceiling']
except (KeyError, TypeError): return 0
return (self.subTreeValue() - lo) / \
(hi-lo) if (hi-lo) > 0 else 0
# def value(self):
# return self.reduction
@property
def secondary(self):
try:
return self._secondary
except AttributeError:
if len(self.elements) == 1:
self._secondary = None
else:
import itertools
distances = [ elA.distanceFrom(elB) for elA, elB in \
itertools.combinations(self.elements,2)]
distances = [ el for el in distances if el is not None]
#filter out None
if distances:
self._secondary = mean(distances)
else:
self._secondary = None
return self._secondary
@secondary.setter
def secondary(self,x):
self._secondary = x
@property
def subValue(self):
return self._subValue or self.value \
+ sum([subFault.subValue \
for subFault in self.connections])
@property
def level(self): #return the n-k contingency level k
return len(self.elements)
def getElements(self):
return self.elements
def addConnection(self,connection):
self.connections += [connection]
def strip(self, stripElement):
self.elements, strip = \
[el for el in self.elements if el != stripElement],\
[el for el in self.elements if el == stripElement]
return strip
def subFault(self, element):
from copy import copy
newFault = copy(self)
strip = newFault.strip(element)
#the copied fault tracks its stripped elements as siblings
newFault.siblings = getattr(newFault, 'siblings', []) + strip
return newFault
def html_elements(self):
elList = "<ul>{}</ul>".format( "".join([ "<li>{}</li>" \
.format(el.html_name()) for el in self.elements]))
return "<div class='info'><p>Elements:</p>{}</div>"\
.format(elList)
def html_reduction(self):
return "<div class='info'><p>Reduction:</p>{:.3f}MW</div>"\
.format(self.value)
def html_affected(self):
affecting = [el for el in self.elements if \
(type(el) is Bus or type(el) is Transformer)]
if affecting:
from itertools import chain
affected = list(chain(*[el.html_connected_li() \
for el in affecting]))
elList = "<ul>{}</ul>".format( "".join(affected))
return \
"<div class='info'><p>Affected Elements</p>{}</div>" \
.format(elList)
else:
return ""
def html(self):
return "<div class='el'><h>Fault</h>{}{}{}</div>" \
.format(self.html_elements(),
self.html_reduction(), self.html_affected())
class Line:
"""
Construct for getting basic properties of a multi-segment
branch/line geometry. May be obsolete.
"""
def __init__(self, myNodes):
self.nodesX = myNodes[0]
self.nodesY = myNodes[1]
def __str__(self):
return 'Line object'
def __repr__(self):
return '<PowerNetwork.Line ['+ ', '\
.join([ '({0:.1f},{1:.1f})'.format(x,y) \
for x,y in zip(self.nodesX, self.nodesY)]) +'] object>'
def draw(self,axes, color="#0000FF"):
for index in range(0, len(self.nodesX)-1):
axes.plot( self.nodesX[index:index+2],
self.nodesY[index:index+2], c=color)
def getLength(self):
total = 0
for index in range(0,len(self.nodesX)-1):
total += sqrt((self.nodesX[index+1]-self.nodesX[index])**2
+ (self.nodesY[index+1]-self.nodesY[index])**2)
return total
def getMidpoint(self):
if not hasattr(self, 'midPoint'):
branch_geo = array([self.nodesX,self.nodesY]).transpose()
#get distance between each consecutive pair of points
distances = array([0] + list(sqrt(sum(square(branch_geo[1:,:] \
- branch_geo[0:-1,:]), 1))))
ltHalf = where(cumsum(distances) < sum(distances)/2)[0][-1]
# index the tuple returned by where, then take the
# last index for which the condition is still true
percentAlong = \
(sum(distances)/2 - cumsum(distances)[ltHalf])/ \
distances[ltHalf+1]
xM,yM = branch_geo[ltHalf,:] \
+ percentAlong * ( branch_geo[ltHalf+1] \
- branch_geo[ltHalf])
self.midPoint = xM,yM
return self.midPoint
def getPosition(self):
return self.getMidpoint()
def flatten(l, ltypes=(list, tuple)):
# method to flatten an arbitrarily deep nested
# list into a flat list
ltype = type(l)
l = list(l)
i = 0
while i < len(l):
while isinstance(l[i], ltypes):
if not l[i]:
l.pop(i)
i -= 1
break
else:
l[i:i + 1] = l[i]
i += 1
return ltype(l)
if __name__ == "__main__":
main()
C.2 Cross-Referencing Elements and Contingencies
The following code describes how to take a list of contingencies at varying n − k levels and identify
their relationships to power system elements in a oneline diagram, as well as how to build out the tree
structure linking contingencies at different levels, to facilitate the construction of contingency visualizations.
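As a minimal, self-contained sketch of this cross-referencing step (using plain Python sets of element ids in place of the Element and Fault objects defined in this appendix), faults can be grouped by contingency level and each fault linked to the higher-level faults that contain it:

```python
from collections import defaultdict

def build_fault_tree(faults):
    # group faults (here, sets of element ids) by contingency level k
    tree = defaultdict(list)
    for fault in faults:
        tree[len(fault)].append(fault)
    return tree

def direct_children(fault, tree):
    # a direct child of an n-k fault is an n-(k+1) fault containing
    # all of its elements plus exactly one more
    return [f for f in tree[len(fault) + 1] if fault < f]

faults = [{1}, {2}, {1, 2}, {1, 3}, {1, 2, 3}]
tree = build_fault_tree(faults)
print(direct_children({1}, tree))  # -> [{1, 2}, {1, 3}]
```

The subset test `fault < f` plays the role of the element-by-element matching performed by `Fault.isParentOf` below.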
"""
written by Anton Lodder 2012-2014
all rights reserved.
This software is the property of the author and may not be copied,
sold or redistributed without the express consent of the author.
"""
if __name__ == '__main__':
import sys, os, inspect
try:
import pytreemap
except ImportError:
#walk up to 'pytreemap' and add to path.
realpath = os.path.realpath(os.path.dirname(inspect \
.getfile(inspect.currentframe())))
(realpath, filename) = os.path.split(realpath)
while filename != 'pytreemap':
(realpath, filename) = os.path.split(realpath)
sys.path.append(realpath)
import pytreemap
from numpy import *
from collections import defaultdict
from pytreemap.system.PowerNetwork \
import Bus, Branch, Gen, Transformer, Element
def main():
from pytreemap.visualize.TreemapGraphics import TreemapFault
file = 'cpfResults_4branches'
file = os.path.join(pytreemap.__path__[0], 'sample_results', file)
# mCase = (os.path.join(pytreemap.__path__[0],
# 'sample_results', 'cpfResults_small.json'),
# os.path.join(pytreemap.__path__[0],
# 'system','case30_geometry.json'))
mCase = (os.path.join(pytreemap.__path__[0], 'sample_results',
'cpfResults_case30_2level.json'),
os.path.join(pytreemap.__path__[0],
'system','case30_geometry.json'))
print('Creating JSON File')
mCPFresults_JSON = JSON_systemFile(*mCase)
# print('\nCreating MATLAB File')
# mCPFresults_MATLAB = MATLAB_systemFile(file)
elements = mCPFresults_JSON.getElements()
print('\n\t|| Getting Faults from JSON File')
(faults_JSON, faultTree_JSON) = getFaults(TreemapFault, mCPFresults_JSON)
# print('\nGetting Faults from MATLAB File')
# (faults_MATLAB, faultTree_MATLAB) \
# = getFaults(TreemapFault, mCPFresults_MATLAB)
print('runs')
class doneLog(object):
def __init__(self, message, reportFunc=None):
self.message = message
self.reportFunc = reportFunc
def __call__(self, f):
from functools import wraps
@wraps(f)
def wrapped(*args, **kwargs):
out = f(*args, **kwargs)
print("\t|| {}{}".format(self.message, " - {}" \
.format(self.reportFunc(out)) \
if self.reportFunc else ""))
return out
return wrapped
def log(string):
print("\t" + '|| ' + string)
def getFaults(FaultType, cpfFile, filter=0):
#get faults
@doneLog("faultTree created", lambda x: "{} levels".format(len(x)))
def buildFaultTree(faults):
# From a flat list of faults, build a dictionary that
# divides the faults by n-k contingency level.
faultTree = defaultdict(list)
#sort faults by the number of elements in each
for fault in faults:
faultTree[len(fault.getElements())] += [fault]
if 0 in faultTree:
del faultTree[0]
return faultTree
@doneLog("Connections Built")
def buildConnections(faults, faultTree):
""" Go over each fault give it a list of sub-faults to track. """
#get fault position masks by element
maskLength = len(faults)
faultByElement = defaultdict(int)
for index, fault in enumerate(faults):
for element in fault.elements:
faultByElement[element] += 1<<index
def int2bool(i,n):
return fromiter( ((False,True)[i>>j & 1] \
for j in range(0,n) ), bool)
def trueIndices(i,n):
return (j for j in range(0,n) if i>>j & 1)
log('fault indexes listed per-element')
#identify connections between each level and the next
keys = sorted(faultTree.keys())
for level in keys[0:-1]:
for fault in faultTree[level]:
masks = (faultByElement[element] \
for element in fault.elements)
mask = next(masks)
for el in masks:
mask = mask & el
subFaults = \
(faults[i] for i in trueIndices(mask, maskLength) \
if len(faults[i].elements) == 1+level)
for subFault in subFaults:
fault.addConnection(subFault)
@doneLog("Suppressed Duplicate Faults")
def find_duplicates(cpfFile):
elements = cpfFile.getElements();
mEls = list(elements[Bus].values()) \
+ list(elements[Transformer].values())
for el in mEls:
# for each bus or transformer, get its
# faults and identify ones that overlap
fs = sorted(el.faults, key = lambda x: len(x.elements))
if type(el) is Transformer:
mlist = [ subbus.connected for subbus in el.connected \
if type(subbus) is Bus]
els_connected = el.connected \
+ [item for sublist in mlist for item in sublist]
else:
els_connected = el.connected
nm2 = [mFault for mFault in fs if len(mFault.elements) == 2]
nm2 = [mFault for mFault in nm2 \
if any([(mEl in mFault) for mEl in els_connected])]
def suppress_this(fault):
fault.suppress = True
for el in fault.connections:
suppress_this(el)
for mFault in nm2:
suppress_this(mFault)
@doneLog('Context Limits Set')
def setContextLimits(FaultType, faults):
""" Set limits for context getter, allowing fault to be evaluated
in context of its level. Used by Tree. """
values = [fault.value for fault in faults]
FaultType.setGlobalContext(min(values), max(values))
for level, levelFaults in faultTree.items():
values = [fault.value for fault in levelFaults]
FaultType.setLevelContext(level, min(values), max(values))
values = [fault.subValue for fault in levelFaults]
FaultType.setCumulativeContext(level, min(values), max(values))
@doneLog('Normalized Secondary Values')
def normalizeSecondaries(faults):
""" Normalize secondary values so faults can be
displayed on the same scale. """
#normalize secondaries
def normalize(x):
#identify indices of None
nonIndexes = [i for i, el in enumerate(x) if el is None]
if len(nonIndexes) < len(x):
#replace None with 0 so we can do math on it
x = [el if el is not None else 0 for el in x]
#perform mask
x = array(x)
x = x - min(x)
x = x / max(x)
#put Nones back where indicated by nonIndexes
x = [el if i not in nonIndexes \
else None for i,el in enumerate(x)]
return x
#initialize and get secondary values for all faults.
secondaryValues = [fault.secondary for fault in faults]
secondaryValues = normalize(secondaryValues)
for value, fault in zip(secondaryValues, faults):
fault.secondary = value;
#run the various methods
faults = cpfFile.getFaults(FaultType, filter=filter)
faultTree = buildFaultTree(faults)
buildConnections(faults, faultTree)
find_duplicates(cpfFile)
setContextLimits(FaultType, faults)
normalizeSecondaries(faults)
return faults, faultTree
class ResultsFile(object):
@property
def Branches(self):
return list( self.getElements()[Branch].values() )
@property
def Buses(self):
return list( self.getElements()[Bus].values() )
@property
def Transformers(self):
return list( self.getElements()[Transformer].values() )
@property
def Generators(self):
return list( self.getElements()[Gen].values() )
@property
def baseLoad(self):
try:
return self._baseLoad
except AttributeError:
self._baseLoad = self.getBaseLoad()
return self._baseLoad
def getElementList(self, elements=None):
if not elements:
#allow getElementList to be used by getElements()
elements = self.getElements()
elList = []
for elTypeList in sorted(elements.values(),
key= lambda x: x.__class__.__name__):
elList += list(elTypeList.values())
return elList
def getFaults(self, FaultType, filter=0):
try:
return self.faults
except AttributeError:
self.faults = self.buildFaults(FaultType, filter)
return self.faults
def getElements(self):
try:
return self.elements
except AttributeError:
# print('defaulted to building elements due to error')
self.elements = self.buildElements()
return self.elements
class JSON_systemFile(ResultsFile):
def __init__(self, res = None, sys=None):
self.resultsFile = res
if sys is None:
try:
#assume that system file is given in the results file.
import json
with open(res, 'r') as f:
results = json.load(f)
self.systemFile = results['geometry_file']
import os
assert os.path.isfile(self.systemFile)
except (KeyError, OSError, AssertionError):
log("No System File Given")
self.systemFile = None
else:
self.systemFile = sys
def getBaseLoad(self):
""" Get the base load from the JSON results file."""
import json
with open(self.resultsFile, 'r') as f:
results = json.load(f)
return results['baseLoad']
@doneLog('Faults Created', len)
def buildFaults(self, FaultType, filter):
import json
with open(self.resultsFile, 'r') as f:
results = json.load(f)
self._baseLoad = results['baseLoad']
baseLoad = self._baseLoad
elements = self.getElements()
mapKeys = { Transformer:'Transformer',
Bus:'Bus',Branch:'Branch',Gen:'Gen'}
def faultListings(faultDicts):
# break out faults in JSON to
#list of Elements and load reduction
for faultDict in faultDicts:
faultEls = []
for Type in mapKeys.keys():
faultEls += [elements[Type][id] for id in \
faultDict['elements'][mapKeys[Type]]]
yield { 'reduction': baseLoad-faultDict['load'],
'elements':faultEls}
# faults = [FaultType(listing) for listing in \
# faultListings(results['faults']) \
# if listing['reduction']/baseLoad > filter/100]
faults = [FaultType(listing) for listing in \
faultListings(results['faults'])]
return faults
@doneLog('Grid Elements Created',
lambda elements: len(ResultsFile.getElementList(None,elements)))
def buildElements(self):
print('Building elements:')
if self.systemFile is None:
# what to do if no system file is given -
# identify the elements from the results
# file and create objects with no positional data
import json
with open(self.resultsFile, 'r') as f:
results = json.load(f)
from collections import defaultdict
elDict = defaultdict(dict)
for fault in results['faults']:
for element, ids in fault['elements'].items():
for id in ids:
elDict[element][id] = True
mapKeys = { 'Transformer':Transformer,
'Bus':Bus,
'Branch':Branch,
'Gen':Gen}
elements = defaultdict(dict)
for elementType, ids in elDict.items():
elements[mapKeys[elementType]] = \
{id: mapKeys[elementType](id) for id in ids}
return elements
else:
# expect the elements to be in a json list of dicts, each
# dict should represent an element and should contain
# the info needed to produce that list.
import json
with open(self.systemFile, 'r') as f:
elDicts = json.load(f)
elements = dict()
#build Buses
busDicts = [mDict for mDict in elDicts if mDict['type'] == 'Bus']
elements[Bus] = {el['id']: Bus(from_dict=el) for el in busDicts}
#build Branches
branchDicts = [mDict for mDict in elDicts \
if mDict['type'] == 'Branch']
for mDict in branchDicts: #convert bus listings into actual Buses.
mDict['buses'] = [elements[Bus][i] for i in mDict['buses']]
elements[Branch] = {el['id']: Branch(from_dict=el) \
for el in branchDicts}
#build Generators
genDicts = \
[mDict for mDict in elDicts if mDict['type'] == 'Gen']
for mDict in genDicts:
mDict['bus'] = elements[Bus][mDict['bus']]
elements[Gen] = \
{el['id']: Gen(from_dict=el) for el in genDicts}
#build Transformers
transDicts = [mDict for mDict in elDicts \
if mDict['type'] == 'Transformer']
for mDict in transDicts:
mDict['connected'] \
= [elements[Bus][i] for i in mDict['connected']['Bus']] \
+ [elements[Branch][i] for i in mDict['connected']['Branch']]
elements[Transformer] \
= {el['id']: Transformer(from_dict=el) for el in transDicts}
return elements
class MATLAB_systemFile(ResultsFile):
def __init__(self, fileName=None):
if fileName is None:
fileName = os.path.join(pytreemap.__path__[0],
'sample_results', 'cpfResults_4branches.mat')
import scipy.io
self.results = scipy.io.loadmat(fileName, struct_as_record=False)
def getBaseLoad(self):
""" How to get the base load from cpfResults.mat"""
return self.results['baseLoad'][0][0]
def CPFloads(self):
""" How to get the results of CPF from cpfResults.mat"""
return self.results['CPFloads'][0]
def getReductions(self):
""" Get the reduction in loading from the
base load and CPF results."""
return self.baseLoad - self.CPFloads()
@doneLog('Faults Created', len)
def buildFaults(self, FaultType, filter = 0):
""" build a list of faults from cpfFile """
baseLoad = self.baseLoad
faults = [ FaultType(listing, baseLoad-load) for listing, load in \
zip(self.faultListings(),self.CPFloads()) \
if (baseLoad-load)/baseLoad > filter/100]
return faults
def faultListings(self):
""" Create a set of dumb-listings for faults in cpfResults.mat """
elements = self.getElements()
# convert fault listings into simple lists
# instead of scipy matlab structures
def collapse(listing):
""" Convert a matlab fault listings to Python-readable listings,
from inscrutable scipy matlab structures. """
branch, bus, gen, trans \
= [list(el[0]) if len(el) == 1 else list(el) for el in \
[listing.branch, listing.bus, listing.gen, listing.trans]]
faultEls = []
for Type, typelist in zip([Branch, Bus, Gen, Transformer],
[branch, bus, gen, trans]):
faultEls += [elements[Type][id] for id in typelist]
relisting = defaultdict(list)
relisting['label'] = str(listing.label[0])
relisting['elements'] = faultEls
return relisting
faultListings = (collapse(listing[0][0]) for listing in \
self.results['branchFaults'][0])
return faultListings
def baseSystem(self):
return self.results['base'][0,0]
@doneLog('Grid Elements Created')
def buildElements(self):
base = self.baseSystem()
#get a list of buses attached to each branch
branchBusEnds \
= [ [int(el) for el in listing[0:2]] for listing in base.branch]
nBranches = len(base.branch)
nBusses = len(base.bus)
nGens = len(base.gen)
# since numpy isn’t that great at loading cell arrays, we need to use
# try/catch to ensure we don’t try to read an empty trans array
nTrans = len(base.trans[0]) if len(base.trans) > 0 else 0
elements = defaultdict(list)
def getBranchId(busEnds):
#find the branch index of a branch from the busses it connects to.
busEnds = set(busEnds)
mIndex = -1
for index, branch in enumerate(branchBusEnds):
mBranch = set(branch)
intersection = set.intersection(mBranch, busEnds)
if len( intersection) > 1: return index
return mIndex
def getGenId(bus):
#find the id of a generator given the bus it is in
try:
return \
1 + [ int(el) for el in base.gen.transpose()[0]].index(bus)
except ValueError:
return -1
def defaultIZE(dictionary,default_factory=list):
newDict = defaultdict(default_factory)
for k,v in dictionary.items():
newDict[k]=v
return newDict
def getTransEls(trans):
transEls = []
#get branches involved
transEls += [elements[Branch][getBranchId(listing)] \
for listing in (trans[0][0] \
if len(trans[0]) > 0 else [])]
#get busses involved
# import pdb; pdb.set_trace()
mBusses = defaultIZE(elements[Bus])
transEls += [mBusses[id] for id in (trans[0][1][0] \
if len(trans[0][1]) > 0 else [])]
#get gens involved
mGens = defaultIZE(elements[Gen])
transEls += [ mGens[getGenId(bus)] \
for bus in ( trans[0][2][0] \
if len(trans[0][2]) > 0 else [])]
transEls = [el for el in transEls if el != []]
return transEls
def negateY(element):
element = transpose(array(element))
element = transpose([list(element[0]), list(element[1]*-1)])
element = [list(point) for point in element]
return element
# build elements
busIds, busPos \
= [int(el) for el in base.bus.transpose()[0]], base.bus_geo
genBusses = [int(el) for el in base.gen.transpose()[0]]
branchPos = [element for element in base.branch_geo[0]]
# branchPos = [negateY(element) for element in base.branch_geo[0]]
elements[Bus] = {id: Bus(id, list(pos)) for id, pos in \
zip(busIds, busPos)}
branch_buses = [ [elements[Bus][key] for key in el] \
for el in branchBusEnds]
elements[Branch] \
={int(id): Branch(id, list ([ list(point) \
for point in el]), buses) \
for id, el, buses in \
zip(range(1,nBranches+1), branchPos, branch_buses)}
# create branches with busses in them, let the branches assign
# themselves to their busses
elements[Gen] = {int(id): Gen(id, elements[Bus][bus]) \
for id, bus in zip(range(1,nGens+1), genBusses)}
if len(base.trans) > 0:
elements[Transformer] \
= {int(id): Transformer(id, getTransEls(trans)) \
for id, trans in zip( range(1,nTrans+1), base.trans[0])}
self.elements = elements #save self.elements for later
return self.elements
def boundingRect(self, elList=None):
if not elList:
elList = self.getElementList()
return boundingRect(elList)
if __name__ == "__main__":
main()
C.3 Building A Treemap
C.3.1 Treemap Layout
The following code describes how to arrange a list of values into an ordered, squarified treemap tiling
layout.
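The core of the squarified criterion used by the `layColumn` function below can be sketched independently: boxes are added to a column only while doing so does not worsen the worst aspect ratio in that column. The helper names here are hypothetical, and for simplicity the column length is held fixed rather than recomputed as the remaining area shrinks:

```python
def worst_aspect(areas, side):
    # worst (largest) aspect ratio if `areas` share one column of length `side`
    width = sum(areas) / side
    return max(max(width * width / a, a / (width * width)) for a in areas)

def squarify_rows(areas, side):
    # greedily group areas into columns, keeping a column open only
    # while adding the next area does not worsen the worst aspect ratio
    rows, row = [], []
    for a in sorted(areas, reverse=True):
        if not row or worst_aspect(row + [a], side) <= worst_aspect(row, side):
            row.append(a)
        else:
            rows.append(row)
            row = [a]
    if row:
        rows.append(row)
    return rows

print(worst_aspect([4.0], 2.0))  # -> 1.0 (a perfect square)
```

The full implementation below adds the pieces this sketch omits: quantizing box edges to integer pixels and absorbing the accumulated rounding error.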
"""
written by Anton Lodder 2012-2014
all rights reserved.
This software is the property of the author and may not be copied,
sold or redistributed without the express consent of the author.
"""
import numpy as np
from collections import defaultdict
import colorsys
from PySide.QtGui import QWidget, QPainter, QApplication, QColor
from PySide.QtCore import QRect
def main():
import sys
width, height = 900, 900
x0, y0, xn, yn = pos = [10,10,800,800]
#sample values for testing
values = [
61.604163314943435, 294.6813017301497, 93.649276939196398,
112.70697780452326, 88.953116126775967, 97.039574700870162,
137.10143908004227, 89.092705912171823, 97.275641027366532,
58.075842761050581, 333.70787878589113, 79.082361864220388,
65.173822806601834, 61.620466443905116, 79.0610858923568,
119.15194594331513, 157.76307523648643, 83.495491052375769,
146.8211675408229, 62.277396872459576, 59.428178065922452,
67.902811582920208, -9.3029028577009285, -8.7962074508714068,
14.208160768913103, 101.8804773503449, 73.045427399512732,
64.955629666655113, 83.746037258431556, 197.25455362773596,
67.215787200178852, 64.870088543368524, 61.454496111136791,
68.981997349090875, 65.545621430207575, 87.484672267841347,
98.503136492634326, 192.05389943959915, 60.262705723778026,
60.522842879217364, 64.049680842453768
]
values = np.array(values)
rectangles, leftovers = layout(values, pos)
#build a visualization of the rectangles
app=QApplication(sys.argv)
canvas = TestCanvas([100,100,900,900])
for rect in rectangles:
print(rect)
canvas.addRect(rect)
sys.exit(app.exec_())
class TestCanvas(QWidget):
def __init__(self, pos):
super().__init__()
self.rectangles = []
self.setGeometry(*pos)
self.show()
def addRect(self, rect):
self.rectangles.append(rect)
def paintEvent(self,e):
painter = QPainter(self)
for rect in self.rectangles:
try:
xa,ya,xb,yb = rect
rect = xa,ya,xb-xa,yb-ya
painter.setBrush(randomColor())
painter.drawRect(QRect(*rect))
except (TypeError, ValueError):
print('rect was foul')
def static_var(varname, value):
def decorate(func):
setattr(func, varname, value)
return func
return decorate
def randomColor():
h,s,v = np.random.rand(3)
# print h,s,v
r,g,b = colorsys.hsv_to_rgb(h, s*0.6+0.3, v*0.4+0.5)
return '#%02X%02X%02X' % (int(r*255), int(g*255), int(b*255))
@static_var('roundingError', 0)
def layColumn(values, pos, quantize=True, minBoxArea = 2):
xa, ya, xb, yb = pos
dX, dY = xb-xa, yb-ya
if dY < 1.0 or dX < 1.0:
return [], values, pos
# colLength is the shorter dimension,
# boxes are lined up along this dimension
colLength = dY if dY <= dX else dX
#start with an empty list
a =[]
aspect = []
def fitValues(values, Y):
# take a set of values and a column (row) length, and
# produce the column (row) width and box heights (widths);
# Y is guaranteed nonzero by the dX/dY check above
x = sum(values)/Y
ys = np.array(values)/x
aspect = x/ys
return x,ys, aspect
#keep the last two box elements
save = []
while (len(save) < 1 or save[-1][’aspect’] < 1) and len(values) > 0:
#take an element out of values and add it to the column
a += [values.pop()]
#get colWidth, length of each box and aspect ratio of each box
colWidth, box, aspectRatios = fitValues(a,colLength)
#save average aspect ratio and length of each box.
save.append({’aspect’: np.average(aspectRatios),
’box’:box, ’colWidth’: colWidth } )
# #only keep the last two aspect ratio/box combinations
if len(save) > 2:
save.pop(0)
#check to see which of the closest 2 columns has best aspect ratio
minAspect, index \
= min( (asp, ind) for ind, asp in enumerate(abs(1-np \
.array([el[’aspect’] for el in save]))))
if index < ( len(save) - 1):
values.append(a.pop())
boxLengths, colWidth = save[index][’box’], save[index][’colWidth’]
#if values is empty, we need to enforce
# that colWidth pushes exactly to border
if len(values)<1:
colWidth = dX if dY <dX else dY
if quantize:
""" we need to quantize the treemap to integer
values in order to draw
borders between blocks.
When we quantize, we need to manage our rounding
errors to ensure that they do not propagate -
otherwise we end up with overflowing
boxes or a large gap at the end.
"""
modValue = colWidth + layColumn.roundingError
#incorporate accumulated rounding error into column width
if modValue <2.5:
"""
If modValue < 0.5, we get divide-by-zero
error and the output will not make any sense.
This is the hard limit on when we have
to substitute values with one summary block.
Practically we limit to modValue > 2.5 because
below a column width of 3, there will be no box
between the boundaries.
"""
while a:
values.append(a.pop())
return [], values, pos
layColumn.roundingError += colWidth - np.round(modValue)
# update accumulated error to reflect
# deviation from optimal colWidth
colWidth = np.round(modValue) #round modified column width
if colWidth > (dY if dY > dX else dX):
colWidth = dY if dY>dX else dX
boxLengths = np.array(a)/colWidth
#recalculate box lengths based on the new column width
# boxLengths = boxLengths - layColumn.roundingError \
# * colLength / len(boxLengths)
boxLengths = np.round(boxLengths) #round to integer value
roundError = colLength - sum(boxLengths)
index = len(boxLengths)-1 if roundError < 0 else 0
# start with biggest value if adding space,
# smallest if taking away space
increment = -1 if roundError < 0 else 1
while roundError != 0:
boxLengths[index] = boxLengths[index]+increment
roundError = roundError - increment
index = (index + increment )% len(boxLengths)
#move up list if adding space
# (increment = 1) else move down
# index = len(boxLengths)-1 if index < 0 else index
# boxLengths[-1] = boxLengths[-1] \
# + (colLength - sum(boxLengths))
# absorb rounding error into the smallest
# box in the row to preserve column length
#lay out box dimensions
if dY <= dX:
boxPositions \
= [[xa, ya+y, min(xa+colWidth, xb), min(ya+y+dy,yb)] \
for y, dy in zip(np.cumsum([0] \
+list(boxLengths[0:-1])), boxLengths)]
nextBox = [xa + colWidth, ya, xb, yb]
else:
boxPositions \
= [[xa+x, ya, min(xa+x + dx, xb), min(ya + colWidth,yb)] \
for x, dx in zip(np.cumsum([0] \
+list(boxLengths[0:-1])), boxLengths)]
nextBox = [xa, ya+colWidth, xb, yb]
""" Here we set a minimum box size below which
we don’t draw blocks. As blocks
in the Treemap get small (into the single-pixel
dimensions e.g. 4x4), the effects of rounding
error become large relative to the box area and we
prefer to replace the smallest boxes with a generic.
"""
areas = [(xn-x0)*(yn-y0) for x0,y0,xn,yn in boxPositions]
if max(areas) < minBoxArea:
while a:
values.append(a.pop())
return [], values, pos
#returning boxPos = [] indicates
# that no values were laid out
# and nextBox should be used as
#a summary for small insignificant values
#
boxPositions.reverse()
return boxPositions, values, nextBox
# def positionRectangles(pos, ys):
# x0, y0, xn, yn = pos
# return [ [x0, y0+y, x, y0+y+dy] \
# for y, dy in zip( np.cumsum([0] + list( ys[0:-1] )), ys)]
#draw outline
@static_var('blocks_summarized',0)
def layout(values, pos, quantize=True):
#assume pos is in the form [x0,y0, xn,yn]
nValues = len(values)
#scale values to fill the given area
values = np.array(values) * (pos[2]-pos[0])*(pos[3]-pos[1]) \
/sum([val for val in values if val > 0])
# sort values from smallest to largest
# (so you can .pop() the biggest value), get indexes
values, origIndexes \
= zip( *sorted(zip(list(values), range(len(values)))))
values, origIndexes = list(values), list(origIndexes)
desiredArea = sum(values);
rectangles = []
nextBox = pos
boxPos = pos
col = 0;
origIndexes_rect = []
#track original indexes of values
#that have been laid out in the treemap
# columnNo = 0;
while len(values) > 0 and boxPos:
# columnNo = columnNo +1
# if columnNo == 14:
# print(’wait’);
boxPos, values, nextBox = layColumn(values,
nextBox,
quantize=quantize,
minBoxArea = 4*4)
#fit the next x values into a row/column
if nextBox:
rectangles = boxPos + rectangles
origIndexes_rect \
= origIndexes[len(origIndexes)-len(boxPos):] + origIndexes_rect
origIndexes = origIndexes[0:len(origIndexes)-len(boxPos)]
col += 1
if len(values) > 1 and not boxPos: #we need to do a fill box
rectangles = [[]]*len(origIndexes) + rectangles
#pad rectangles with empty-indicates these have been replaced
origIndexes_rect = origIndexes + origIndexes_rect
#add indexes of leftover rectangles
rectangles.append(nextBox)
origIndexes_rect.append(nValues)
elif len(values) == 1:
# we don’t need a fill box but
# layColumns quit early and boxPos is empty
rectangles.insert(0,nextBox)
origIndexes_rect = origIndexes + origIndexes_rect
origIndexes = []
origIndexes_rect, rectangles \
= zip(*sorted( zip(origIndexes_rect, rectangles)))
# layout.blocks_summarized += len(origIndexes)
# print('<Treemap.layout> blocks summarized: {}' \
# .format(layout.blocks_summarized))
return list(rectangles), list(origIndexes)
@static_var('mods', [0])
def randomColor(level=1):
def rgb(h,s,v): return '#%02X%02X%02X' % tuple( [ int(round(el*255)) \
for el in colorsys.hsv_to_rgb(h,s,v)])
if level == 1:
# print randomColor.mods
randomColor.mods = [(randomColor.mods[0] + 0.3)%1, 1]
elif level > 1:
randomColor.mods = randomColor.mods[0:level-1] \
+ [(np.random.rand()-0.5)*4.0/10 * 1/level]
# randomColor.h += random.rand() * 7.0/10 * 1/self.level**2
# print randomColor.mods
return QColor(rgb(sum(randomColor.mods)%1,0.3,0.7))
if __name__ == "__main__":
main()
C.3.2 Treemap Visualization
The following code describes how to build a responsive treemap, adapting the system-based object-oriented
representation of a Fault to a visual framework for drawing contingency visualizations as treemaps.
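The listing depends on the layout routine shown in the previous section, which packs values into rows and columns via layColumn. As a rough illustration of the underlying idea only (slice_layout is a hypothetical, much-simplified stand-in, not the actual implementation, which alternates directions and summarizes boxes below a minimum area), a single-direction slice layout can be sketched as:

```python
# Hypothetical, simplified stand-in for pytreemap.Treemap.layout:
# a single-direction "slice" layout. The real routine alternates
# rows and columns (layColumn) and summarizes boxes below a minimum area.
def slice_layout(values, rect):
    """Divide rect = [x0, y0, x1, y1] into vertical slices whose
    widths are proportional to values; return [xa, ya, xb, yb] boxes."""
    x0, y0, x1, y1 = rect
    total = float(sum(values))
    boxes, x = [], x0
    for v in values:
        w = (x1 - x0) * v / total  # width proportional to value
        boxes.append([x, y0, x + w, y1])
        x += w
    return boxes
```

Each box spans the full height of the rectangle, so box area is proportional to its value, which is the invariant the real treemap layout also maintains while producing better aspect ratios.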
"""
written by Anton Lodder 2012-2014
all rights reserved.
This software is the property of the author and may not be copied,
sold or redistributed without expressed consent of the author.
"""
if __name__ == '__main__':
import sys, os, inspect
try:
import pytreemap
except ImportError:
#walk up to 'pytreemap' and add it to the path.
realpath = os.path.realpath(os.path.dirname(inspect \
.getfile(inspect.currentframe())))
(realpath, filename) = os.path.split(realpath)
while filename != 'pytreemap':
(realpath, filename) = os.path.split(realpath)
sys.path.append(realpath)
import pytreemap
from PySide.QtCore import *
from PySide.QtGui import *
import sys
# from PowerNetwork import *
import colorsys
from numpy import *
from pytreemap.Treemap import layout
from pytreemap.visualize.DetailsWidget import DetailsWidget
from pytreemap.visualize.VisBuilder import MATLAB_systemFile, \
JSON_systemFile, getFaults
from pytreemap.system.PowerNetwork import Fault
import re
def main():
from pytreemap.visualize.TreemapGraphics \
import TreemapFault, TreemapGraphicsVis
def getFullFileNames(geoFile, resultFile):
return (os.path.join(pytreemap.system.__path__[0], geoFile),
os.path.join(pytreemap.__path__[0], \
'sample_results', resultFile))
# mCase = ('cpfResults_case118_2level.json', 'case118_geometry.json')
mCase = ('cpfResults_case30_2level.json', 'case30_geometry.json')
# mCase = ('cpfResults_tree.json','case30_geometry.json')
mFile = (os.path.join(pytreemap.__path__[0], 'sample_results', mCase[0]),
os.path.join(pytreemap.__path__[0], 'system', mCase[1]))
mCPFresults = JSON_systemFile(*mFile) #use the full paths built above
(faults, faultTree) = getFaults(TreemapFault, mCPFresults)
#single out one fault
# mFault = 1
# faultTree[1] = faultTree[1][mFault:mFault+1]
# # file = 'cpfResults_4branches'
# # file = 'cpfResults_case30_2level'
# file = 'cpfResults'
# # file = 'cpfResults_case118_2level'
# #
# (faults, faultTree) = getFaults(TreemapFault, CPFfile(file))
# #
# # values = [14, 1, 17, 14, 17, 18, 8, 8, 6, 10, 2, 1, 4, 9, 10, 0, 16,
# 13, 8, 12, 6, 17, 5, 1, 19, 4, 11, 16, 11, 5, 17, 16, 4, 7,
# 17, 14, 11, 16, 13, 19]
# #
# # values = [flt.subValue for flt in faultTree[1][1].connections]
# #
app = QApplication(sys.argv)
# ex = DetailsWidget()
# ex = DetailsTreemap(faultTree)
# ex = TreemapGraphicsVis(pos = [100,100,1500,1030],faultTree = faultTree)
ex = TreemapGraphicsVis(faultTree = faultTree)
# ex = TreemapGraphicsVis(pos = [100,100,100,100],faultTree = faultTree)
# ex = TreemapGraphicsVis(pos = [100,100,100,100],values = values)
# ex2 = TreemapGraphicsVis( pos=[1100,50,400,400],
# values = values, name="Treemap of Random Values")
sys.exit(app.exec_())
def static_var(varname, value):
def decorate(func):
setattr(func, varname, value)
return func
return decorate
def randomColor(level=1, secondary = None):
def rgb(h,s,v): return '#%02X%02X%02X' % tuple( [ int(round(el*255)) \
for el in colorsys.hsv_to_rgb(h,s,v)])
h = 0.32*(1+level)%1
# secondary = None #comment out to add distance encoding
if secondary is not None:
s = (secondary**(1/2)) * 0.4 + 0.2
v = (secondary**(1/2)) * 0.5 + 0.5
else:
s = 0.4
v = 0.8
return QColor(rgb(h,s,v))
# @static_var(’mods’, [0])
# def randomColor(level=1, secondary = None):
# def rgb(h,s,v): return '#%02X%02X%02X' % tuple( [ int(round(el*255)) \
# for el in colorsys.hsv_to_rgb(h,s,v)])
#
# if level == 1:
# # print randomColor.mods
# randomColor.mods = [(randomColor.mods[0] + 0.3)%1, 1]
# elif level > 1:
# randomColor.mods = randomColor.mods[0:level-1] \
# + [(random.rand()-0.5)*4.0/10 * 1/level]
# # randomColor.h += random.rand() * 7.0/10 * 1/self.level**2
# # print randomColor.mods
#
# h = sum(randomColor.mods) %1
# h = randomColor.mods[0]%1
#
#
# # secondary=None
# if secondary is not None:
#
# s = (secondary**(1/2)) * 0.4 + 0.2
# v = (secondary**(1/2)) * 0.6 + 0.4
# else:
# s = 0.4
# v = 0.7
# return QColor(rgb(h,s,v))
class DetailsTreemap(QMainWindow):
def __init__(self, faultTree, pos=None):
super().__init__()
if pos is None:
pos = [20,20,800,900]
details = DetailsWidget([0,0,200,200])
self.treemap = TreemapGraphicsVis( pos = [0,0,600,600],
faultTree = faultTree,
details = details)
self.treemap.setParent(self)
layout = QVBoxLayout()
layout.addWidget(self.treemap)
layout.addWidget(details)
layout.setStretchFactor(self.treemap, 3)
layout.setStretchFactor(details,1)
layout.setContentsMargins(0, 0, 0, 0)
layout.setSpacing(0)
self.widget = QWidget()
self.setCentralWidget(self.widget)
self.widget.setLayout(layout)
self.setGeometry(*pos)
self.setWindowTitle('Visualize')
self.show()
# log(’visualization created’)
class TreemapGraphicsVis(QGraphicsView):
border = 10;
def __init__(self, pos=[10,10,500,500], faultTree=None, values=None,
name="TreemapGraphics", details= None):
super().__init__()
print("Treemap pos vector:",pos)
(x,y,w,h) = self.pos = pos
# (x,y,w,h) = pos
self.setMinimumSize(w,h)
self.outlines = []
self.widgets = []
self.details = details
self.scene = QGraphicsScene(self)
self.setSceneRect(5,5,w-10,h-10)
border = TreemapGraphicsVis.border;
if faultTree:
self.build_fromFaultTree(faultTree,
[border,border,w-2*border,h-2*border])
elif values:
self.build(values)
self.setScene(self.scene)
self.setSceneRect(QRectF(border, border, w-2*border, h-2*border))
self.setCacheMode(QGraphicsView.CacheBackground)
self.setTransformationAnchor(QGraphicsView.AnchorUnderMouse)
self.setResizeAnchor(QGraphicsView.AnchorViewCenter)
self.setWindowTitle(name)
# self.scale(1,1)
self.show()
# self.scale(.99
def sizeHint(self):
return QSize(*self.pos[2:4])
def addWidget(self,widget):
self.widgets.append(widget)
self.scene.addItem(widget)
def addOutline(self, xa, ya, xb, yb, level):
self.outlines.append( ((xa,ya,xb,yb),level) )
def drawForeground(self, painter,rect):
painter.setPen(Qt.black)
painter.setBrush(Qt.NoBrush)
for (xa,ya,xb,yb), level in self.outlines:
painter.setPen(QColor(*([15*level]*3)))
painter.drawLine(xa,ya, xb,ya)
painter.drawLine(xb,ya,xb,yb)
painter.drawLine(xb,yb,xa,yb)
painter.drawLine(xa,yb,xa,ya)
def build(self,values):
""" Build a treemap from a list of numbers.
This function builds a single-level
treemap directly from 'values', a list
of numbers, with no tree structure.
"""
pos = self.sceneRect().getRect()
x0,y0,w,h = pos
self.addOutline( x0,y0,x0+w,y0+h, 1)
rectangles, _ = layout(values, [x0+1,y0+1,x0+w-1,y0+h-1])
for el in rectangles:
if el:
xa,ya,xb,yb = el
Rectangle(self,[xa,ya,xb-xa,yb-ya])
def build_fromFaultTree(self,
faultTree,
square,
startLimit=1,
depthLimit =2,
filter=0):
""" build a treemap from a list of faults. """
border = TreemapGraphicsVis.border
# square = [border,
# border,
# self.width()-border*2,
# self.height()-border*2]
def recursive_build(faultList,
square,
mLevel,
parent = None,
filter=0):
#guard against empty lists before indexing faultList[0]
if len(faultList) == 0:
return None
mLevel = len(faultList[0].elements)
#
x0,y0,xn,yn = square
self.addOutline(x0,y0,xn,yn, mLevel)
square = [x0+1,y0+1,xn-1,yn-1]
#filter out faults with subTree of negative reduction:
faultList = \
[fault for fault in faultList if fault.subValue > filter]
faultList = \
[fault for fault in faultList if not fault.suppress]
# lay out faults --- parent is only included if it causes a
# reduction in loadability
rectangles, leftovers = layout(([parent.value] \
if (parent is not None and parent.subValue > 0) else []) \
+[fault.subValue for fault in faultList], square)
if parent is not None and parent.subValue > 0:
#only try to pop rect if it has positive loadability
# (otherwise it was not added to layout scheme)
parentRect = rectangles.pop(0)
#rectangle representing elements that were too small
if len(faultList) < len(rectangles):
xa,ya,xb,yb = rectangles.pop()
leftoverRect = Rectangle(self,
[xa,ya,xb-xa,yb-ya],
fill=Qt.Dense3Pattern);
leftoverRect.color = randomColor(mLevel)
# self.addOutline(xa,ya,xb,yb,mLevel+1)
#rectangle representing the parent fault
if parent is not None and parent.subValue > 0 and parentRect:
fault = parent
xa,ya,xb,yb = parentRect
fault.addRectangle(Rectangle(self,[xa,ya,xb-xa,yb-ya]))
# self.addOutline(xa,ya,xb,yb, mLevel+1)
if mLevel >= depthLimit:
#lay out faults and add a rectangle widget to each fault
#
for fault,rectangle in zip(faultList,rectangles):
if not rectangle: continue
xa,ya,xb,yb = rectangle
fault.addRectangle(Rectangle(self,
[xa,ya,xb-xa,yb-ya]))
# self.addOutline(xa,ya,xb,yb,mLevel+1)
# mWindow.addWidget(fault)
else:
"""
There should be some limit here as
to how small a rectangle
gets recursively built.
"""
for fault, rectangle in zip(faultList, rectangles):
if not rectangle: continue
xa,ya,xb,yb = rectangle
if (xb-xa)*(yb-ya) > 10*10 and fault.connections:
randomColor(mLevel-startLimit+1)
#prime random colour generator
recursive_build(fault.connections, rectangle,
mLevel+1, parent = fault)
else:
fault.addRectangle(Rectangle(self,\
[xa+1,ya+1,xb-xa-2,yb-ya-2]))
recursive_build(faultTree[startLimit], square, startLimit)
class TreemapFault(Fault):
def __init__(self, listing, reduction=None):
super().__init__(listing, reduction)
self.rectangles = []
def toggleHighlight(self):
for rect in self.rectangles: rect.toggleHighlight()
def addRectangle(self,newRectangle, level=None):
newRectangle.color = randomColor(len(self.elements), self.secondary)
newRectangle.fault = self
self.rectangles.append(newRectangle)
return newRectangle
class Rectangle(QGraphicsItem,object):
def __init__(self,mGraphicsView, pos, fill=Qt.SolidPattern):
super().__init__()
self.pos = QRectF(*pos)
self.color = QColor(200,100,100)
self.highlight = False
self.fill = fill
self.fault = None
if mGraphicsView: mGraphicsView.addWidget(self)
self.setCacheMode(self.DeviceCoordinateCache)
self.setFlag(QGraphicsItem.ItemSendsGeometryChanges)
self.setAcceptHoverEvents(True)
def mousePressEvent(self, event):
if self.fault:
print("{}. reduced loadability: {:.0f}, area: {:.0f}."\
.format(self.fault, self.fault.value,
self.pos.width()*self.pos.height()))
self.setDetails()
else: print(str(self))
def hoverEnterEvent(self, event):
# QToolTip.showText(event.screenPos(), self.toolTip())
# import cProfile
# print('<hover enter>')
self.toggleHighlight()
if self.fault:
for el in self.fault.elements:
el.toggleHighlight()
print(el)
#highlight redundant fault
# for el in self.fault.rectangles:
# el.toggleHighlight()
# for el in self.fault.elements:
# el.toggleHighlight()
def hoverLeaveEvent(self, event):
# print('<hover leave>')
self.toggleHighlight()
if self.fault:
for el in self.fault.elements: el.toggleHighlight()
# for el in self.fault.rectangles:
# el.toggleHighlight()
# for el in self.fault.elements:
# el.toggleHighlight()
def toggleHighlight(self, list=None):
self.highlight = not self.highlight
self.update(self.boundingRect())
def boundingRect(self):
return QRectF(self.pos)
def toolTip(self):
try:
return re.search(r"Fault \(\[(.*?)\]\)",
str(self.fault)).group(1)
except AttributeError:
return ""
@property
def level(self):
if self.fault:
try:
return self._level
except AttributeError:
self._level = self.fault.level
return self._level
else:
return 0
def paint(self, painter, option, widget):
painter.setPen(QColor(*([15*self.level+1]*3)))
brush = QBrush(self.fill)
if self.fill != Qt.SolidPattern:
# painter.setBrush(QColor.fromHsv(self.color.hue(),
# self.color.saturation() * 0.6, 230))
painter.setBrush(QColor.fromHsv(self.color.hue(),
self.color.saturation() * 0.6, 100))
painter.drawRect(self.pos)
if self.highlight:
brush.setColor(QColor.fromHsv(self.color.hue(),
self.color.saturation() * 0.6, 80))
else: brush.setColor(self.color)
if self.fault and self.fault.suppress:
brush.setColor(Qt.black)
painter.setBrush(brush)
painter.drawRect(self.pos)
# # add annotations
# mText = QGraphicsSimpleTextItem(", "\
# .join([el.shortRepr() \
# for el in self.fault.elements]), parent=self)
# mText.setPos(*self.pos.getRect()[0:2])
# mText.setFont(QFont(’sans-serif’, 15))
def setDetails(self):
if self.scene().parent().details:
self.scene().parent().details.setContent(self.fault.html())
if __name__ == "__main__":
main()
C.4 Building a Tree Diagram
The following code describes how to build a responsive tree diagram, adapting the system-based object-oriented
representation of a Fault to a visual framework for visualizing contingencies as tree diagrams.
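Node placement in this listing is driven by the nested hspacing() helper inside layoutMap, which picks a side gap and an inter-node gap for each level of the tree. Restated standalone (level_positions is a hypothetical wrapper added here for illustration; the nominal node radius of 10 px matches the listing):

```python
# Standalone restatement of the hspacing() rule used by layoutMap below:
# narrow the side gaps as the level gets crowded, but never let two
# nodes sit closer than one diameter plus a small margin.
def hspacing(numEls, width, nominalRadius=10):
    if numEls < 2:
        # a single node is centred: half-width side gap, no inter-node gap
        return round(width / 2.0), 0
    sideGap = max(round(width * (20 - 0.2 * numEls) / 100), 10)
    gap = max(nominalRadius * 2 + 5,
              round((width - 2 * sideGap) / (numEls - 1)))
    return sideGap, gap

def level_positions(numEls, width):
    """x-coordinates of the node centres for one level of the tree."""
    sideGap, gap = hspacing(numEls, width)
    return [sideGap + i * gap for i in range(numEls)]
```

For wide levels the minimum-gap clause dominates, which is why the listing later rescales radii and gaps when the computed circles would overflow the scene width.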
"""
written by Anton Lodder 2012-2014
all rights reserved.
This software is the property of the author and may not be copied,
sold or redistributed without expressed consent of the author.
"""
if __name__ == '__main__':
import sys, os, inspect
try:
import pytreemap
except ImportError:
#walk up to 'pytreemap' and add it to the path.
realpath = os.path.realpath(os.path.dirname(inspect \
.getfile(inspect.currentframe())))
(realpath, filename) = os.path.split(realpath)
while filename != 'pytreemap':
(realpath, filename) = os.path.split(realpath)
sys.path.append(realpath)
import pytreemap
from numpy import *
import colorsys
from collections import defaultdict
from pytreemap.visualize.VisBuilder import getFaults, JSON_systemFile
from pytreemap.system.PowerNetwork \
import Fault, Branch, Bus, Gen, Transformer
from pytreemap.Treemap import layout
from PySide.QtGui import *
from PySide.QtCore import *
# from FaultTreemap import *
# from sets import Set
import sys
def main():
# mCase =('cpfResults_tree.json','case30_geometry.json')
mCase =('cpfResults_8.json', 'case30_geometry.json')
mCase =( os.path.join(pytreemap.__path__[0],
'sample_results',mCase[0]),
os.path.join(pytreemap.system.__path__[0],mCase[1]))
app = QApplication(sys.argv)
mVis = ContingencyTree(*mCase)
sys.exit(app.exec_())
class TreeGraphicsVis(QGraphicsView):
def __init__(self, pos=None, faultTree=None):
super().__init__()
self.faultTree = faultTree
(x,y,w,h) = self.pos = [100,100,1800,700] if pos is None else pos
self.move(x,y)
self.resize(w,h)
self.setWindowTitle('Tree Visualization')
self.widgets = []
self.scene=QGraphicsScene(self)
self.setSceneRect(10,10,w-20,h-20)
self.setScene(self.scene)
self.legend = Legend( [ (mClass.__name__, mClass.color) \
for mClass in [Branch, Bus, Gen, Transformer]])
self.scene.addItem(self.legend)
self.setCacheMode(QGraphicsView.CacheBackground)
self.show()
self.layoutMap()
def layoutMap(self):
width, height = self.width(), self.height()
#define spacing and layout for fault-tree
# from a connections dictionary
def hspacing(numEls, width):
nominalRadius = 10;
if numEls < 2:
sideGap,gap = round(width/2.0), 0
else:
sideGap = max(round(width * (20-0.2*numEls) / 100), 10)
gap = max(nominalRadius*2+5,
round((width-2*sideGap)/(numEls-1)) )
return sideGap, gap
y = round(0.15*height)
ygap = (height - y*2) / (len(self.faultTree.keys()) - 1)
# build for equal spacing with fixed radii
# for levelNo,level in self.faultTree.items():
# mLabel = QGraphicsTextItem("n-{}".format(levelNo))
# mLabel.setFont(QFont("Helvetica", 25))
# mLabel.setPos(150,y-20)
# self.addWidget(mLabel)
#
# # self.levelLabels.append( (levelNo, 80, y))
# sideGap, gap = hspacing(len(level), width)
# x = sideGap
# # for fault in sorted(level,\
# key= lambda mFault: mFault.value) :
# for fault in level:
# fault.radius = 20
# fault.setPos((x,y))
# fault.setGraphicsView(self)
# fault.setParent(self)
# # fault.setLevel(levelNo)
# # self.draw(fault)
# x+= gap
#
# y+= ygap
# #build for equal spacing with radii scaled by levelContext
for levelNo, level in self.faultTree.items():
mLabel = QGraphicsTextItem("n-{}".format(levelNo))
mLabel.setFont(QFont("Helvetica", 25))
mLabel.setPos(150,y-20)
self.addWidget(mLabel)
# self.levelLabels.append( (levelNo, 80, y))
if len(level) <2: #case where only one fault is present
fault = level[0]
fault.radius = 15
fault.setPos((width/2,y))
fault.setGraphicsView(self)
# fault.setLevel(levelNo)
# self.draw(fault)
elif len(level) == 2:
# case where exactly two faults are present and we
# want to add spacing outside
level[0].radius, level[1].radius = (30,20) \
if level[0].getLevelContext() > level[1] \
.getLevelContext() else (20,30)
level[0].setPos( (width * 0.30+ level[0].getRadius(), y))
level[1].setPos( (width - width*0.30 \
- level[1].getRadius(), y))
# level[0].setLevel(levelNo)
# level[1].setLevel(levelNo)
else:
x,_ = sideGap, _ = hspacing(len(level), width)
# this is a little bogus since I only want 'sideGap'
space = width-2*sideGap
sizes = [fault.getLevelContext()*0.6+0.4 for fault in level]
#get all levels
#first try to set sizes for XX% coverage
scale = (space *0.80)/sum(sizes)
gap = (space*0.20)/(len(sizes))
#change this divisor to change spacing
radii = [size*scale / 2.0 for size in sizes]
mMax = max(radii)
#if some are too big, scale them all
# down and increase the gap size
if mMax > 30:
radii = [radius * 30 /mMax for radius in radii]
gap = (space - sum(radii)*2) / (len(radii))
#change this divisor to change spacing
x+= gap/2
for radius, fault in zip(radii, level):
fault.radius = radius
fault.setPos( (x+radius, y))
fault.setParent(self)
fault.setGraphicsView(self)
# fault.setLevel(levelNo)
# self.draw(fault)
x+= 2* radius + gap
y+= ygap
for level in self.faultTree.values():
for fault in level:
fault.setBoundingRect()
def initUI(self, width, height, title):
self.show()
def addWidget(self,widget):
self.widgets.append(widget)
self.scene.addItem(widget)
@staticmethod
def qtColor(colorString):
r,g,b = [colorString[1:3],colorString[3:5], colorString[5:7]]
r,g,b = [int(num,16) for num in [r,g,b]]
return QColor(r,g,b)
class ContingencyTree(TreeGraphicsVis):
def __init__(self, results_file,
system_file, pos=[20,20,1600,700]):
(faults, faultTree) \
= getFaults(TreeFault,
JSON_systemFile(results_file, system_file))
super().__init__(pos, faultTree)
class Legend(QGraphicsItem):
def __init__(self, items):
super().__init__()
self.pos = [20,20,120,25]
self.items = items
def boundingRect(self):
return QRectF(*self.pos)
def paint(self, painter, option, widget):
x,y,width,height = self.pos
for text, color in self.items:
painter.setBrush(QColor(color))
painter.setPen(Qt.NoPen)
painter.drawRect(x,y,width,height)
painter.setFont(QFont('serif', 10))
painter.setPen(Qt.black)
metrics = painter.fontMetrics()
fw,fh = metrics.width(text),metrics.ascent()
painter.drawText(x+ (width - fw)/2.0,
y +(height+ fh)/2.0,text)
y = y+height+2
class TreeFault(Fault,QGraphicsItem):
radius = 10
def __init__(self, listing, reduction=None):
Fault.__init__(self,listing, reduction=reduction)
QGraphicsItem.__init__(self)
self.pos = None,None
self.highlight = False
self.radius = 10;
self.setCacheMode(self.DeviceCoordinateCache)
self.setFlag(QGraphicsItem.ItemSendsGeometryChanges)
self.setAcceptHoverEvents(True)
def getRadius(self):
try:
return self.radius
except AttributeError:
return 10 + 0.9*len(self.elements)-1
def setPos(self,pos):
self.pos = pos;
# self.setBoundingRect()
def setGraphicsView(self,gv):
gv.addWidget(self)
def setParent(self, parent):
self.parent = parent
def topConnectorPos(self):
x,y = self.pos
# return x,y-self.radius()
return x,y
def bottomConnectorPos(self):
x,y = self.pos
# return x,y+self.radius()
return x,y
def mousePressEvent(self, event):
print("{}. reduced loadability: {:.0f}."\
.format(str(self), self.value))
# self.setDetails()
def hoverEnterEvent(self, event):
print('<hover enter on {}>'.format(self))
self.toggleHighlight(True)
def hoverLeaveEvent(self, event):
print('<hover leave on {}>'.format(str(self)))
self.toggleHighlight(False)
def toggleHighlight(self, onoff = None):
if type(onoff) is bool:
self.highlight = onoff
else:
self.highlight = not self.highlight
self.update(self.boundingRect())
for el in self.connections:
el.toggleHighlight(onoff)
def shape(self):
try:
return self.mShape
except AttributeError:
self.mShape = self.defineShape()
return self.mShape
def defineShape(self):
path = QPainterPath()
pos = self.pos
radius = self.radius
path.moveTo(QPointF(*pos))
path.addEllipse( QRectF(pos[0]-radius,
pos[1]-radius, 2*radius, 2*radius))
return path
def boundingRect(self):
try:
return self._boundingRect
except AttributeError:
#use self.pos; a bare 'pos' here would raise a NameError
self._boundingRect = QRectF(self.pos[0]-self.radius,
self.pos[1]-self.radius,
2*self.radius,
2*self.radius)
return self._boundingRect
def setBoundingRect(self):
tops = array([el.topConnectorPos() \
for el in [self]+self.connections])
minX, minY = min(tops[:,0]), min(tops[:,1])
maxX, maxY = max(tops[:,0]), max(tops[:,1])
x,y,w,h = (minX - self.radius, minY-self.radius,
maxX - minX + self.radius*2, maxY - minY + self.radius*2)
self._boundingRect = QRectF( x - w*0.01, y-h*0.01, w*1.02, h*1.02)
def paint(self, painter, option, widget):
(x,y), r = self.pos, self.getRadius()
x0,y0, x_,y_ = x-r,y-r,2*r,2*r
startAngle, arcAngle = 0, 360 * 1/len(self.elements)
#scale the font to the radius
mFont = QFont('serif', round((r*0.9 \
if len(self.elements)>1 else 1.8*r) *.6))
painter.setPen(Qt.black)
painter.setFont(mFont)
painter.setRenderHint(QPainter.Antialiasing, True)
def putText(qp,x,y,text):
qp.setPen(QColor(0,0,0))
metrics = qp.fontMetrics()
fw,fh = metrics.width(text),metrics.height()
qp.drawText(QPointF(x-fw/2,y+fh/4),text)
pen = QPen(QColor(10,10,10), 1, Qt.SolidLine)
for other in self.connections:
# weight = 0.2 + 3*other.getLevelContext() \
# + 2*self.getLevelContext()
weight=2
if self.highlight:
weight = weight+2
painter.setPen(QPen(Qt.black, weight))
else:
painter.setPen(QPen(Qt.gray, weight))
xT,yT = self.bottomConnectorPos()
xB,yB = other.topConnectorPos()
# painter.setPen(QPen(Qt.black, weight))
painter.drawLine(QPointF(xT,yT),QPointF(xB,yB))
weight = 2 if self.highlight else 1
for index,element in enumerate(self.elements):
painter.setBrush(QColor(element.__class__.color))
painter.setPen(QPen(QColor(80,80,80), weight))
if len(self.elements) > 1:
painter.drawPie(QRectF(x-r,y-r,2*r,2*r),
round(startAngle*16),
round(arcAngle*16))
else:
painter.drawEllipse(QRectF(x-r,y-r,2*r,2*r))
lAngle = startAngle + (arcAngle/2.0) #in degrees
rd = 8.0/15 * r if len(self.elements) > 1 else 0
yd = y - rd * sin(pi/180 * lAngle)
xd = x + rd * cos(pi/180 * lAngle)
putText(painter, xd,yd,str(element.id))
startAngle += arcAngle
# print('\n')
painter.setRenderHint(QPainter.Antialiasing, False)
if __name__ == "__main__":
main()