
REACHING FOR 100% RELIABLE ELECTRICITY SERVICES: MULTI-SYSTEM INTERACTIONS AND FUNDAMENTAL SOLUTIONS

G. Deconinck, R. Belmans, J. Driesen, B. Nauwelaers, E. Van Lil
K.U.Leuven – ESAT, Kasteelpark Arenberg 10, 3001 Leuven, Belgium

Introduction

The supply of electrical energy is extremely critical for society and yet very vulnerable. Almost all elements of society (industry, offices, public services, medical care, farming and food processing, mobility, etc.) stop operating immediately when the electricity supply fails, unless special back-up installations are available, at a huge economic cost [StVa94], [DeGu04]. The combination of a meshed, redundant transmission grid with large generation units and a radial distribution system has led to a very reliable supply, certainly in Europe over the last decades. The generation, transmission, distribution and supply of electrical energy were in the hands of vertically integrated companies that often had a national (or at least local) monopoly; some of them were state owned, others private but controlled by the government.

The liberalised electricity market, requiring unbundling of the vertically integrated system, together with the increased vulnerability of society to power failures, the public resistance against infrastructure expansion and the increased penetration of ‘uncontrollable’ generation based on renewable energy (e.g. wind turbines or photovoltaics), has recently put pressure on the security of supply. Distributed generation, together with advanced possibilities of decentralised intelligent control, multi-route massive data communication and fast-acting power electronics, opens alternative routes to this reliable supply. On the one hand, statistics show that power outages are unavoidable [Fair04]; on the other hand, ever more sophisticated information and communication technologies (ICT) are deployed for monitoring and control, allowing operators to react quickly and adequately to disturbances. It is therefore recognised that liberalising the electricity markets while retaining or even increasing the current level of security of supply will be impossible without fundamental breakthroughs in the triangle power engineering – control – telecommunication/data management.

Indeed, several fundamental problems remain unsolved, blocking the conceptual turnaround of the electric energy system:
• Difficulties to assess the power system's vulnerabilities.
• Distributed generation and local power balance.
• A suited communication and control architecture.
• Interdependent critical infrastructures.

The following subsections discuss the state of the art of these elements in detail.

Difficulties to assess the power system's vulnerabilities

Due to unbundling, system data are no longer all centrally available. Market participants take decisions that are not known to the transmission grid operator and that may endanger the reliability of the overall grid, for instance mothballing or even shutting down a power plant needed to support the voltage level in a zone of the grid. The existing planning systems do not account for such events and are far too deterministic. This has to change fundamentally in the years to come, making the transmission grid (and the distribution grid) far less dependent on the behaviour of market participants.

Load flow calculations have become an everyday tool in the operation of the transmission grid and are based on well-known and available techniques. However, due to far less predictable international power flows, caused by international trade on the one hand and the massive introduction of wind power in some countries on the other, classical load flow calculations that deal with the control zone of a single TSO (Transmission System Operator) are no longer sufficient. International coordination is needed, requiring new tools, especially to account for the stochastic and dynamic nature of some of the inputs [ArWa01].
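To make this stochastic aspect concrete, the following minimal sketch (with purely hypothetical network data) runs a Monte Carlo series of DC load flow calculations over an uncertain wind infeed and estimates how often a line limit is exceeded. It only illustrates the kind of tool meant here; it is not part of the project itself.

```python
import numpy as np

# Minimal probabilistic DC load flow sketch; all network data are hypothetical.
# Bus 0 is the slack bus; bus 1 hosts a wind farm, bus 2 a load.
rng = np.random.default_rng(0)

# Line data: (from_bus, to_bus, susceptance [p.u.], thermal limit [MW])
lines = [(0, 1, 10.0, 80.0), (0, 2, 10.0, 80.0), (1, 2, 5.0, 50.0)]
limits = np.array([lim for *_, lim in lines])
n_bus, s_base = 3, 100.0  # MVA base

# Nodal susceptance matrix, reduced by removing the slack bus.
B = np.zeros((n_bus, n_bus))
for f, t, b, _ in lines:
    B[f, f] += b; B[t, t] += b
    B[f, t] -= b; B[t, f] -= b
B_red = B[1:, 1:]

def line_flows(p_mw):
    """DC load flow: bus injections in MW -> line flows in MW (slack balances)."""
    theta = np.zeros(n_bus)
    theta[1:] = np.linalg.solve(B_red, p_mw[1:] / s_base)
    return np.array([b * (theta[f] - theta[t]) * s_base for f, t, b, _ in lines])

# Monte Carlo over an uncertain wind infeed (bus 1) and load (bus 2).
n_samples, overloads = 5000, 0
for _ in range(n_samples):
    wind = max(0.0, rng.normal(60.0, 25.0))   # MW produced, hypothetical distribution
    load = -rng.normal(70.0, 10.0)            # MW consumed (negative injection)
    if np.any(np.abs(line_flows(np.array([0.0, wind, load]))) > limits):
        overloads += 1

print(f"estimated probability of at least one line overload: {overloads / n_samples:.3f}")
```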

Load flow calculations also do not account for the control of new types of equipment that are introduced step by step in the grid to control the flows, such as phase-shifting transformers. In the near future, a new group of fast-reacting, power-electronics-based devices will appear: so-called FACTS (Flexible Alternating Current Transmission System) devices and HVDC links, again challenging the capabilities and results of classical load flow calculations.

The transient behaviour will become totally different as fast-reacting power electronic systems are introduced. These systems require dynamic modelling and control. Furthermore, these devices interact with each other and with the existing grid infrastructure, demanding a coordinated, global control, and this over totally different time frames [Bose03], [PoHW98].

The modelling and computation of these future grids will therefore be difficult and computationally demanding. Furthermore, due to the introduction of power electronic interfaces between the grid and generator sets, for instance in wind turbine systems, far less equivalent inertia is present in the grid to smooth out frequency variations. Storage devices with different reaction times, for instance superconducting coils, also have a firm impact on transients. This fundamentally influences the transient behaviour [KlSK05].
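The effect of reduced equivalent inertia can be illustrated with the aggregated swing equation. The sketch below uses hypothetical figures and a primary reserve that simply ramps up linearly, with no governor dynamics; it shows that the same generation loss produces a deeper frequency dip when the inertia constant H is smaller.

```python
# Sketch: frequency dip after a sudden generation loss, for two values of the
# equivalent inertia constant H. Aggregated swing equation in per unit, with a
# load-damping term and a primary reserve that ramps up linearly over 10 s.
# All figures are hypothetical and chosen for illustration only.
f0 = 50.0                     # nominal frequency [Hz]
dP_loss = 0.05                # lost generation [p.u. of system base]
D = 1.0                       # load damping [p.u. power per p.u. frequency]
T_ramp, R_max = 10.0, 0.05    # primary reserve fully deployed after 10 s
dt, t_end = 0.01, 30.0

for H in (6.0, 2.0):                        # 'classical' vs. low-inertia grid [s]
    delta_f, nadir, t = 0.0, 0.0, 0.0       # frequency deviation [Hz]
    while t < t_end:
        reserve = R_max * min(t / T_ramp, 1.0)
        p_imbalance = -dP_loss + reserve - D * (delta_f / f0)   # p.u.
        delta_f += (f0 / (2.0 * H)) * p_imbalance * dt
        nadir = min(nadir, delta_f)
        t += dt
    print(f"H = {H:3.1f} s  ->  frequency nadir ~ {f0 + nadir:5.2f} Hz")
```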

Figure 1: Today's load flow calculations identify no dynamic problems

The treatment of the techno-economic system, such as bidding processes that account for limited technical flows and the acquisition of ancillary services (frequency control, voltage control, black start, …) by the TSO from the generators, raises difficult questions that are not covered by classical grid models.

Finally, the behaviour of microgrids [Lass02] is currently analysed with classical tools that are only to a certain extent tuned to this lower power level. This is far from optimal, as the stochastic nature of generation, storage and load in this lower-power grid is much more pronounced. Resynchronisation imposes a hard constraint on the energy remaining in the storage devices present in the microgrid at that time, since one should avoid that voltage and frequency equalisation can only be achieved by removing loads. Once voltage and frequency equalisation is stably achieved, the phase angles of the microgrid and the main grid must be aligned, which is achieved most easily by regulating the power supplied by the storage devices. This, too, will require new modelling tools.
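As an illustration of the resynchronisation constraint, a minimal synchro-check sketch is given below; the voltage, frequency and phase-angle thresholds are hypothetical and would in practice come from the applicable grid code.

```python
def ready_to_reconnect(v_micro, v_main, f_micro, f_main, theta_micro, theta_main,
                       dv_max=0.05, df_max=0.1, dtheta_max=10.0):
    """Synchro-check sketch: True when the voltage magnitude (p.u.), frequency (Hz)
    and phase angle (degrees) of the microgrid are close enough to the main grid to
    close the coupling breaker. Thresholds are illustrative, not grid-code values."""
    dv = abs(v_micro - v_main)
    df = abs(f_micro - f_main)
    dtheta = abs((theta_micro - theta_main + 180.0) % 360.0 - 180.0)  # wrap to +/-180 deg
    return dv <= dv_max and df <= df_max and dtheta <= dtheta_max

# The storage converter nudges the microgrid phase angle towards the main grid:
print(ready_to_reconnect(1.01, 1.00, 50.05, 50.00, 40.0, 5.0))   # False: angle too far off
print(ready_to_reconnect(1.01, 1.00, 50.05, 50.00, 8.0, 5.0))    # True: breaker may close
```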

Distributed generation and local power balance

Decentralised or distributed generation, together with the advanced possibilities of control and power electronics, opens routes to new approaches to electric energy supply [JACK00], [vSPv01], [GPBF03], [LGWZ03], [MVVD04]. Its impact on reliability is however the target of a lot of research, as it is far from evident whether it will contribute to an increased reliability or worsen the situation [Brow02]. When different generators are connected to radial distribution lines, the conventional protection schemes suffer from a loss of selectivity, distorted over-current detection and increased short-circuit currents. Furthermore, the power output of these generators is not predictable in time. This requires that protection parameters are continuously adapted, depending on the state of the local grid (e.g. instantaneous generation and consumption levels). More specifically, the following problems have to be re-addressed in a fundamentally different way.

• Selectivity: System protection is selective if the protection device closest to the fault is triggered first to remove or isolate the fault; if this takes too long, the protection at a higher level takes over. In this way, only the faulty components and their direct neighbourhood are switched off [Ande99]. Without distributed generators, power always flows in one direction in the radial distribution grid, during normal operation as well as when faults occur; hence, classical selectivity schemes apply time grading. When generators are dispersed in a radial grid, this system is inadequate. A possible scenario is the disconnection of a healthy feeder by its own protective device, because it contributes to the short-circuit current that flows towards a fault in a neighbouring feeder. On the other hand, if a fault occurs in the connection between the supplying grid and a local network, disconnection of feeders (including their generators) should take place [MBAG03]. The tripping current of protective devices has to be situated between the maximum load current and the minimal fault current. The actual tripping value at each time instant depends on the available sources and the instantaneous load. Hence, the parameters of protection devices should be updated continuously in order to provide optimal selectivity [EnH04]; a minimal sketch of such an adaptive setting follows after this list. This requires a fundamental study of the associated problems and possible solutions. Clearly, hard-wired solutions without built-in distributed intelligence and reliable telecom links are totally inadequate.

• Single-phase connection: Small generator units inject single-phase power into the distribution grid, e.g. small photovoltaic systems or Stirling-engine-based generator sets [BeDK99]. This affects the balance of the three-phase voltage, resulting in increased currents in the neutral conductor and possible stray currents to earth [DuDe02]. This current should be limited to prevent overloading and to ensure electrical safety. Single-phase power injection is limited to a prescribed value, e.g. 10 kW per single-phase connection in Belgium [BFE03]. The protection system must take this into account.

• Overcurrent and earth-fault protection: Due to the presence of generators in the distribution network, the fault or over-current as measured at the beginning of a feeder or supplied by other local generators is reduced [Bark02]. If this reduction is significant, currents detected at various points are too low to trigger fast disconnection [KaLe02]. This can result in prolonged over-currents or earth faults. Hence a distributed measurement-based method is required.

• Disconnection of generators: When low-cost generators are being deployed, it is not unrealistic that they become faulty; they have to be protected against internal and external short-circuits, over- and undervoltages, unbalanced currents, abnormal frequencies, harmonic distortions and excessive torques [Mozi01]. In those situations, a fast disconnection is required in order to protect the rest of the grid [GCBG01].

• Criticality of the applications: Some parts of the grid require a higher reliability of the energy provision than others – and this is time dependent. There is a need to have different approaches to such areas. For instance, during periods of grid instability, an operation theatre in a hospital can be decoupled from the grid and function on its own. However, this theatre is not critical when it is not in use. Hence, one needs to reach for 100% reliable electricity services, custom made for the application.
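As announced in the selectivity item above, the following sketch illustrates a continuously updated overcurrent pickup that has to stay between the maximum load current and the minimum fault current; all currents and margins are hypothetical and only show the principle.

```python
# Sketch of an adaptive overcurrent pickup for a feeder with dispersed generation.
# The pickup must stay above the maximum load current and below the minimum fault
# current; both limits shift as local generators connect and disconnect.
# All values and safety margins are hypothetical.

def adaptive_pickup(i_load_max, i_fault_min, margin_load=1.25, margin_fault=0.8):
    """Return a pickup current (A), or None if no selective setting exists."""
    low = margin_load * i_load_max       # stay clear of the normal load current
    high = margin_fault * i_fault_min    # still trip reliably on the weakest fault
    if low >= high:
        return None                      # no window left: the scheme must be revised
    return 0.5 * (low + high)            # place the pickup in the middle of the window

# State 1: no local generation online -> strong fault infeed from the grid.
print(adaptive_pickup(i_load_max=300.0, i_fault_min=2500.0))   # ~1187.5 A

# State 2: local generators online -> the grid contribution to the fault is reduced
# and the feeder loading is lower; the window (and the pickup) shrinks.
print(adaptive_pickup(i_load_max=220.0, i_fault_min=900.0))    # ~497.5 A
```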

Figure 2: Impact of distributed generation on voltage stability in a distribution grid after connection of different types of DG units (node voltages in p.u. for a base case and for synchronous and induction DG units of 3 MW and 6 MW)

Clearly, many basic or fundamental problems need to be solved in order to provide adequate protection and to ensure safety and reliability in a distribution network with local electrical power generation. This applies to the electrical characteristics of the network as well as to the required dynamic adaptation of the protection mechanisms (based on an underlying measurement system). Existing guidelines and standards also provide insufficient guidance to deal adequately with these questions. However, many opportunities are provided by the use of power electronics and ICT infrastructure.

Communication and control architecture

In a vertically integrated power system, the need for communication was not high. The power plants' outputs were centrally dispatched using the merit order of the individual units. The same dispatch also took care of the control of the transmission grid. Frequency control was organised internationally by sending control signals that initiate automatic responses of the power plants. Voltage control was performed locally by the grid operator, often by sending control signals to individual power plants to adapt their reactive power output (excitation control). Due to unbundling, keeping the dynamic active and reactive power balance is far less evident, and control signals have to be exchanged between the transmission grid operator and the different generators that operate in its control area.

Between the transmission and the distribution grid, almost no interaction was required. The new electricity system, with distributed generation, local energy storage and the introduction of microgrids that can operate disconnected from the transmission grid but have to be able to reconnect when required, will need permanent interaction between both grids. Distribution has to become an actively controlled system, with decentralised control units.

Metering was very simple: the supply nodes (consumers) were metered, with the well-known Ferraris meter for energy at most residential clients and a somewhat more sophisticated metering system (active power over time, power factor, energy) for larger, industrial customers. The amount of metered data did not require a lot of data transfer. With regard to tariff choices, signals were sent over the grid wires, using CAB signals (ripple control) at different frequencies. It is obvious that in the liberalised market, where real-time pricing is the only real solution for a true market in the non-storable product electricity, closely linked to active demand-side management and storage control, more advanced metering is a must. Some attempts are being made in Europe, the best known being the digital meters being introduced in households in Italy.

There are many problems involved in designing a well-suited ICT infrastructure that serves all these requirements well [HaCD99].
• Communication architecture: Which communication architecture is appropriate to support point-to-point communication as well as broadcasting and multicasting of information? Indeed, for different purposes, a component needs to exchange information with different other components (e.g. protection purposes, stability control, economic optimisation of set points). Which models for information exchange need to be supported (push/pull, event-triggered/time-triggered, …)?

• Interoperability: System operators are required to switch economically from dedicated, single-vendor communication systems towards interoperable, multi-vendor protocols, implying that all equipment must be able to communicate with off-the-shelf equipment from other vendors, or from peer system operators. Several communication protocols are being proposed as more generic solutions (e.g. IEC 61850, UCA 2.0, TASE.2 (ICCP), DNP3, IEC 60870-5-10x, OPC). These many standardisation initiatives also show that industry itself feels the urgent need for a consensus solution, which is not there yet.

• Dynamic aspects and different timescales: Due to the switching of generators and loads, the components that need to communicate will vary in time, and hence the logical communication topology has to follow accordingly. This requires resource discovery and overlay networks, e.g. as deployed by peer-to-peer networks [VaDB04a]. Furthermore, some phenomena require a control action within a few cycles, while others can tolerate seconds or even minutes of reaction time. Currently, some local control algorithms require no communication, while centralised approaches require several round-trip transmission periods; hence, they make use of dedicated (expensive) communication lines if the communication latency of the centralised approach is not otherwise sufficiently short (e.g. for SPS – Special Protection Systems – and WAMS/WACS – Wide Area Measurement/Control Systems).

• Dealing with redundancy and uncertainty in information: Some parameters are measured at different acquisition points, while others cannot be measured directly. The information infrastructure needs to deploy ‘sensor fusion’ to derive correct values of the required parameters; a minimal sketch follows after this list.

• Dependability aspects: Which aspects influence the dependability of the communication, and how does this affect the control functions that rely on it [Amin02]? What are relevant fault and failure models? Which quantitative levels of availability, error detection latency, etc. are required? How can messages be delivered in time on top of an unreliable communication infrastructure? It is not sufficient to simply plug in a communication network; rather, there needs to be a communication architecture that is flexible, predictable, scalable and reliable, and that provides information security (integrity, confidentiality, authentication, availability) and different quality-of-service levels. This requires appropriate middleware to manage communication [Bose03], [VaDB04b] and mechanisms to integrate fault tolerance [DeDB02].

From the standardisation side, there are several working groups devoted to the application of information technology in electric power systems. A recent survey of ongoing activities has been published by the Cigré Joint Working Group D2/B3/C2-01 “Security for Information Systems and Intranets in Electric Power Systems” [Eric04]. Other authors also confirm the inadequacy of the current communication infrastructure in electric power systems [XMVC02], or argue that the distribution system of the future should be complemented by a communication infrastructure that allows a decentralised approach, e.g. by autonomous agents [KuKi03].
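As announced in the ‘redundancy and uncertainty’ item above, a minimal form of sensor fusion is the variance-weighted combination of redundant measurements of the same quantity; the readings and variances below are hypothetical.

```python
# Minimal 'sensor fusion' sketch: several acquisition points measure the same
# quantity (e.g. a busbar voltage) with different accuracies; the fused estimate
# is the variance-weighted mean. Measurement values and variances are hypothetical.

def fuse(measurements):
    """measurements: list of (value, variance). Returns (estimate, variance)."""
    weights = [1.0 / var for _, var in measurements]
    estimate = sum(w * v for (v, _), w in zip(measurements, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return estimate, variance

# Three redundant voltage readings in p.u.: two accurate sensors, one noisy one.
readings = [(1.021, 1e-4), (1.018, 1e-4), (1.060, 4e-3)]
est, var = fuse(readings)
print(f"fused voltage: {est:.4f} p.u.  (std dev ~ {var ** 0.5:.4f})")
```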

State of the art on interdependent critical infrastructures

There is a consensus in the literature on critical infrastructures that interdependency analyses and models constitute a necessary step [RiPK01]. The International CIIP Handbook 2006 [AbDu06], [DuMa06] is a comprehensive collection of information on the various initiatives undertaken by different countries on the theme of Critical Information Infrastructure Protection (CIIP), mainly at governmental level. (Information infrastructure refers to the telecommunication and data management infrastructure used to control other infrastructures, such as the electric power system.) The CIIP Handbook underlines the need for developing methodologies to analyse interdependencies and to guide the protection of critical information infrastructures.

In the US, the North American Electric Reliability Council (NERC) manages the ES-ISAC (Electricity Sector Information Sharing and Analysis Centre) for the exchange of information on critical risks in the electric power sector. In particular, two indexes have been developed to indicate the threat levels of possible physical and cyber attacks (at the time of writing, both are at an elevated 3/5 level, indicating significant risks). These instruments are very helpful for creating alertness about the situation, but also a general awareness of the risks.

Also in the US, the Electric Power Research Institute (EPRI) started the “Infrastructure Security Initiative”, addressing power system security at both the electrical and the cyber level. The Department of Energy (DoE) published the “21 Steps to Improve Cyber Security of SCADA Networks” [DoE], whilst the Sandia National Laboratories developed a research program on SCADA electronic security [Sand05]. Especially relevant in this context is a research project by the University of Washington on “Interdependency of IT, Protection, Power and Communication Networks” [ScLi04].

In Europe, the need for setting up research programs on critical infrastructures has been recognised for several years [CESI01]. However, only a limited number of European projects have addressed the emerging problems of digitalised critical infrastructures: ACIP [ACIP05] (Analysis & Assessment for Critical Infrastructure Protection), SAFEGUARD [SAFE05] (Intelligent Agents Organisation to Enhance Dependability and Survivability of Large Complex Critical Infrastructure), CRUTIAL [DDDD06] (Critical Utility InfrastructurAL Resilience), GRID (A Coordination Action on ICT Vulnerabilities of Power Systems and the Relevant Defence Methodologies), Ci2rco (Critical Information Infrastructure Research Co-ordination), etc.

Figure 3: Communication, control and electricity infrastructure are entangled (intelligent meters, substations and dispersed generation linked by the communication, electricity and control infrastructure)

The 100% reliable electricity services research project

All of these fundamental problems led K.U.Leuven – ESAT to start a large research project aiming at delivering 100% reliable electricity services. The rationale of this project is to analyse and fundamentally contribute to integrated electric power systems that embrace telecommunication and distributed control in order to enhance the reliability of the electric power supply, and to identify the boundaries of the approaches. The goal is not to obtain a 100.00% reliability level for the entire electric power system, but rather to provide fundamental solutions to reach for full electricity service from a user's perspective and budget, by delivering tailored electric energy supply.

The project consists of five interacting work packages (WP).
• In WP1, the modelling of dynamic and complex power systems is tackled by developing multidisciplinary dynamic, static and stochastic modelling principles of power systems and interdependent infrastructures, alongside methods to assess power system vulnerabilities, risks and impacts.

• WP2 designs and evaluates the concepts of a decentralised intelligence network (ICT architecture) for reliable power supply, to improve reaction speed and hence controllability, without compromising protection, safety, security and reliability.

• WP3 combines the modelling effort with the ICT architecture in order to evaluate its potential impact on the reliability of the electric power system (EPS) and the electricity services. It studies the effects of implementing active demand-side management and storage facilities in the transmission and distribution grid.

• In WP4, the vulnerabilities (such as fault propagation) resulting from the combined use of control, communication and power infrastructure are assessed, and the multi-layer interactions between interdependent power-control-telecommunication networks are investigated.

• WP5 finally brings the different elements together and investigates how close to 100% the reliability of supply can be brought under preset technological and economic boundary conditions. It optimises risks with regard to partial availability (islanding, active load control, …) and restoration after failure.

The different work packages are detailed below.

Work package 1: modelling

To be able to study advanced power distribution systems, suitable modelling methods and simulation tools have to be established. Traditionally, single-phase balanced load flow computations or dynamic simulations are applied to high-level models of transmission systems. However, with the change in nature of distribution networks on the one hand, and the further introduction of fast-reacting power electronic systems and system-wide control on the other, more complex modelling is required.

A first problem with this sort of model is that large networks with relatively slow time constants or eigenmodes are combined with fast-switching power electronics and dynamic control mechanisms [DBVW04], yielding an overall numerical stiffness. A treatment as a “coupled multiphysics problem” seems a necessity. In general, adequate decomposition into subproblems and non-linear iteration techniques have to be introduced (for instance for bifurcation problems and chaotic behaviour).

The problem of the different time scales involved can be dealt with by traditional problem decomposition or by dynamic phasor modelling approaches [StAy00]. However, it may be necessary to take a fundamentally different approach and leave the conservative, separate time- and frequency-domain descriptions. The possible application of alternative basis functions in the solution approximation, combining the properties of time- and frequency-domain simulations, e.g. wavelets, is to be investigated [ZhMG99]. As such, the inherently oscillatory or switching events are described using approximation functions with a limited length and frequency content.
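For reference, the k-th dynamic phasor used in such approaches is commonly defined as the Fourier coefficient of the waveform over a sliding window of one fundamental period T (with ω_s = 2π/T), as in the dynamic phasor literature, e.g. [StAy00]:

```latex
% k-th dynamic phasor of a waveform x(\tau) over a sliding window of one
% fundamental period T (\omega_s = 2\pi/T), and the corresponding reconstruction:
X_k(t) = \frac{1}{T} \int_{t-T}^{t} x(\tau)\, e^{-j k \omega_s \tau}\, d\tau,
\qquad
x(\tau) \approx \sum_{k} X_k(t)\, e^{j k \omega_s \tau}.
```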

In general, the size of the model and the simulation time will grow tremendously. Because of the uncontrolled nature of the system, full deterministic modelling is impossible: generators follow market strategies, distributed generators are subject to changes in weather (sun, wind) or customer behaviour (heat demand to a combined heat and power (CHP) unit), and loads act more and more dynamically due to their internal power electronics [PDHB05]. A statistical description of generation and demand becomes necessary. Although Monte Carlo and other stochastic solution methods have been applied in the past, this problem has to be reconsidered due to the shift in simulation focus towards distribution systems.

Closely related is the need for optimisation tools for advanced power systems. Any problem in this application domain soon becomes a nonlinear mix of discrete (connection point, switching state) and continuous parameters (power rating). Special algorithms such as genetic algorithms need to be considered.
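A deliberately small sketch of such a genetic algorithm over a mixed candidate (a discrete connection node and a continuous power rating) is given below; the fitness function is a stand-in for a real grid evaluation and all numbers are hypothetical.

```python
import random

# Sketch: genetic algorithm over a mixed discrete/continuous candidate, e.g. the
# connection node (discrete) and power rating in MW (continuous) of a DG unit.
# The fitness function is a placeholder, not a real grid model.
random.seed(1)
NODES = list(range(10))          # candidate connection points
P_MIN, P_MAX = 0.5, 5.0          # admissible ratings [MW]

def fitness(node, rating):
    """Hypothetical objective: prefer node 6 and a rating around 3 MW."""
    return -((node - 6) ** 2) - (rating - 3.0) ** 2

def mutate(ind):
    node, rating = ind
    if random.random() < 0.3:
        node = random.choice(NODES)                                   # discrete mutation
    rating = min(P_MAX, max(P_MIN, rating + random.gauss(0.0, 0.3)))  # continuous mutation
    return (node, rating)

def crossover(a, b):
    return (random.choice([a[0], b[0]]), 0.5 * (a[1] + b[1]))

pop = [(random.choice(NODES), random.uniform(P_MIN, P_MAX)) for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    parents = pop[:10]                                   # simple truncation selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]

best = max(pop, key=lambda ind: fitness(*ind))
print(f"best candidate: node {best[0]}, rating {best[1]:.2f} MW")
```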

Work package 2: ICT architecture

Based on the requirements of the control algorithms and the generic security aspects, this task will identify how data acquisition equipment can forward information optimally to the evaluation units and the actuators. It will not only quantitatively identify parameters of the communication network in terms of bandwidth, latency, jitter, etc., but also determine the optimal topology of the network and the physical layers used; it will fundamentally analyse the requirements when different communication media such as fibre optics, UMTS, GSM or even satellite are involved. Starting from the multitude of communication models and protocols deployed at SCADA and inter-system level, it will identify the traffic and hence determine the optimal network suited to this traffic.

This work package can be further subdivided into two principal parts. The first part will analyse the decentralised topology in order to allow the exchange of all information in a hierarchical way, for normal operation. Furthermore, some information is more critical than other information; hence it must be possible to differentiate accordingly. Current communication architectures in the power systems field are also statically defined and often purely centralised in topology. Since the network of the future will no longer be fully centralised, the information collected on the network will have to be disseminated not only to a central control point, but to a distributed system of control nodes. This problem has similarities to the reconfiguration problem that occurs regularly in a computer network due to the failure of some components (from links to servers), which can also lead to islands where only partial connectivity is guaranteed.

One of the most efficient ways (with the minimum of system overhead) to perform this dissemination consists in creating multicast streams, joined by the interested control units [ShTu02]. The techniques used in mobile applications, for which we developed new implementations and where handoffs occur naturally due to the changing propagation environment (there it was called “Leaf Initiated Join”), are applicable to fast reconfiguration of networks [TDVV00]. The current trend, however, is more towards overlay networks, making use of classical unicast networks and network infrastructure (since the control network will be dedicated, this is not really a limiting factor; indeed, here we do not rely on internet providers, which for the moment severely limit the use of multicast protocols), but where a limited set of Multicast Service Nodes can control and manage the flow of information [ShTu02].

In order to ensure a dependable communication infrastructure that reaches the required levels of robustness (reliability) and information security (availability, integrity, confidentiality, authentication, …) several redundancy management techniques need to be added, and traded off against additional costs and overheads. This includes hardware (e.g. multiple communication paths), information (e.g. error-correcting codes), software (e.g. range checking) and time redundancy (e.g. automatic retransmission). This will be based on a representative fault and failure model that identifies the type and frequency of transient and permanent faults, their error manifestation and the failures that result. Based on combinatorial (e.g. reliability block diagrams) and probabilistic (e.g. Markov models) reliability modelling tools, the resulting dependability attributes will be assessed against the additional costs.
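To illustrate the kind of trade-off meant here, the sketch below combines a two-state Markov model for a single link with a simple reliability-block-diagram calculation comparing a single and a duplicated communication path; all rates and availabilities are hypothetical.

```python
# Sketch: assessing the availability gained by redundant communication paths.
# Component availabilities and failure/repair rates are hypothetical.

def series(*a):             # all elements are needed
    p = 1.0
    for x in a:
        p *= x
    return p

def parallel(*a):           # at least one element suffices
    q = 1.0
    for x in a:
        q *= (1.0 - x)
    return 1.0 - q

# Two-state Markov model of one link: steady-state availability = mu / (lambda + mu).
lam, mu = 1e-4, 1e-1        # failure and repair rates [1/h]
a_link = mu / (lam + mu)

a_node = 0.9995
single_path = series(a_node, a_link, a_node)                    # sensor - link - control node
dual_path   = series(a_node, parallel(a_link, a_link), a_node)  # duplicated link only

print(f"link availability        : {a_link:.6f}")
print(f"single communication path: {single_path:.6f}")
print(f"dual (redundant) links   : {dual_path:.6f}")
```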

This dependability evaluation will consider two different viewpoints: an infrastructure viewpoint that identifies the reliability of links and nodes in the communication network, and an application viewpoint, that considers end-to-end properties for the communication between two elements that need to interact for control reasons.

Based hereon, a suitable communication architecture will be studied that supports point-to-point communication as well as broadcasting and multicasting of information. It will identify the physical as well as logical levels of the topologies, and develop communication protocols that are on the one hand interoperable with existing protocols and on the other hand compatible with the open (IP-based) communication provided by the power equipment, in order to offer both centralised and decentralised communication. Besides the raw data captured by the sensor nodes, it should also provide elements that can aggregate data for specific control purposes, in order to reduce bandwidth. Pushing information from the sensors to the evaluation units (time- or event-triggered) and pulling information from the sensors by the evaluation units both have to be supported. In order to differentiate according to the criticality of the data, the protocol should support several priorities for the messages.

The second part of the work package focuses on the analysis, monitoring and control of events under abnormal situations (such as the occurrence of a major failure of one of the generation or transmission components). Also, for a distributed control environment, the communication topology needs to change dynamically as elements in the distribution grid become active or inactive. This second part will especially characterise these dynamic aspects and derive the functional specifications of the ICT infrastructure to deal with these changes. To cope with the instantaneously high traffic demand under such circumstances, the network will only be able to react sufficiently fast to the emergency if the different data are sorted and given appropriate priorities, so that a guaranteed delay can be offered to the critical measurement and command data. These algorithms will therefore have to be improved and extended for differentiated services, an area that is still at the forefront of telecommunication network research. Indeed, aggregating the flows with only the bandwidth demand in mind will exceed the capacity of the network. At least a combination of the delay bound and the reserved rate for each flow should be used to preserve, and possibly improve, the delay bound of the aggregated data traffic [Cobb02]. Finally, a representative fault and failure model will be defined in order to characterise the dependability requirements.
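As a minimal illustration of such differentiation, the sketch below dispatches messages by strict priority class so that protection commands are served before routine metering data; the classes and messages are hypothetical, and a real scheme would combine priorities with delay bounds and reserved rates as discussed above.

```python
import heapq

# Sketch: strict-priority dispatch of messages on a congested control network, so
# that critical protection commands are served before routine metering data.
# Priority classes and message contents are hypothetical.
PRIORITY = {"protection": 0, "control": 1, "metering": 2}   # lower value = more urgent

queue, seq = [], 0
def send(kind, payload):
    global seq
    heapq.heappush(queue, (PRIORITY[kind], seq, kind, payload))  # seq keeps FIFO order per class
    seq += 1

send("metering", "feeder 12 energy reading")
send("protection", "trip breaker CB-7")
send("control", "new set point for DG unit 3")
send("metering", "feeder 13 energy reading")

while queue:
    _, _, kind, payload = heapq.heappop(queue)
    print(f"[{kind:10s}] {payload}")
# Dispatch order: protection, control, metering, metering.
```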

In order to deal with the dynamically changing aspects (in terms of component activity and topology), the logical communication topology of the ICT infrastructure needs to be adapted in an event- or time-triggered way. This will build on our initial work on peer-to-peer networks that allow for resource discovery and build semantic overlay networks on top of it [VaDB04b]. However, a number of fundamental problems still have to be solved in order to keep the diameter of the communication network small enough to keep the latency low, and the degree of the network high enough to ensure robust connectivity without single or hierarchical information sources. The semantic overlay networks (logically) interconnect similar devices, based on an XML description of the resources, which will facilitate the control algorithms. This allows similar elements to be logically grouped into virtual entities (e.g. all generation units, storage units, manageable loads), which will ease decentralised control and a service-oriented architecture.

Work package 3: effects on reliability

On the electricity grid one can distinguish different types of Distributed Energy Resources (DER). The best known are distributed generators, which actively inject power, complemented by energy storage systems to balance the energy consumption. Controllable loads are also DER and can be considered as “negative distributed generation units”. While distributed generation technology is by now well introduced, loads remain quite passive and non-intelligent. However, by enhancing energy-consuming devices with an energy management system, a demand-side management (DSM) strategy can be developed which, complemented by an optimal use of energy storage technology, offers the potential to use the grid infrastructure maximally, thereby avoiding or at least postponing expensive investments and enhancing reliability.

DSM requires a dependable communication infrastructure. A semi-automatic ‘activation’ of loads through an appropriate control algorithm equips the customers with a tool to respond to external signals such as electricity prices. However, next to providing price information, data needs to be exchanged between the parties involved (active loads, distributed generation units) in order to globally optimise the consumption/generation scheduling within the distribution system at different levels. Grid-connected as well as autonomous (“microgrid”) operation should become possible.

Assuming each load/DER unit knows its ‘process parameters’, including the commands of its users, it can decide what the priority of its next electricity purchase is. Examples: a dishwasher should run at night and use a strategy to buy power in time for a period long enough for its task; a refrigerator sees its temperature rising and wants to buy electricity at a price that increases with that temperature. Such information can be translated into a maximum price at which the active party wants to buy. The dual is the minimum price at which the DER unit can generate electricity. A third quotation can be the price(s) offered by electricity retailers. The information, in the form of prices, can be brought together on a ‘market’ and cleared, leading to new schedules [AkSK04]. As such, the market in fact becomes the highest level of the control system.
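The following sketch illustrates such a clearing for a handful of hypothetical bids and offers; it matches buy bids of flexible loads with sell offers of DER units and settles at a simple mid-point price, which is only one of many possible clearing rules.

```python
# Sketch of a single-round local market clearing: flexible loads place buy bids
# (maximum price they accept), DER units place sell offers (minimum price they
# require), and quantities are matched at a uniform price.
# All names, quantities and prices are hypothetical.

buy_bids = [("dishwasher", 2.0, 0.12), ("refrigerator", 0.5, 0.30), ("heat pump", 3.0, 0.18)]
sell_offers = [("PV inverter", 1.5, 0.05), ("micro-CHP", 2.5, 0.15), ("battery", 2.0, 0.22)]
# format: (name, quantity in kW, price in EUR/kWh)

bids = sorted(buy_bids, key=lambda b: -b[2])      # highest willingness to pay first
offers = sorted(sell_offers, key=lambda o: o[2])  # cheapest generation first

i = j = 0
bid_q, offer_q = bids[0][1], offers[0][1]
traded, price = 0.0, None
while i < len(bids) and j < len(offers) and bids[i][2] >= offers[j][2]:
    q = min(bid_q, offer_q)
    traded += q
    price = 0.5 * (bids[i][2] + offers[j][2])     # price set by the marginal matched pair
    bid_q -= q
    offer_q -= q
    if bid_q == 0:
        i += 1
        if i < len(bids):
            bid_q = bids[i][1]
    if offer_q == 0:
        j += 1
        if j < len(offers):
            offer_q = offers[j][1]

print(f"cleared volume: {traded:.1f} kW at a uniform price of ~{price:.2f} EUR/kWh")
```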

A DSM system is very useful in case of contingencies. Load-shedding strategies, used in case of a severe power imbalance or instability, become more selective and allow a gradual decrease of system loading. In case of a restart after a system blackout, the ramping up of the load can be controlled. As such, activated loads and storage units can become part of the ancillary services.

This work package studies the impact of DSM strategies on the operation of electricity distribution grids under normal conditions as well as in emergency conditions. First of all, the effects on the power balances are studied for a typical system, followed by a determination of the communication and control bandwidth needs. Different control strategies, including centralised and distributed, are compared. Finally, the contribution of DSM to reliability enhancement is evaluated.

Work package 4: multi-layer interactions

Based on several distributed control scenarios, the objective of this work package is to characterise and analyse multi-layer interactions and the interdependencies between the information infrastructure and the controlled power infrastructure, and to assess their impact on the resilience of these infrastructures with respect to the occurrence of critical outages (risk analysis).

Figure 4: Interdependencies among infrastructures

The risk analysis will be based on hazard identification and risk modelling techniques. These methods aim at the identification of failure scenarios, the analysis of their impact, and their ranking according to severity and criticality criteria. Various methodologies and techniques can be used to support these analyses, including FMECA (Failure Mode, Effects and Criticality Analysis), fault trees, cause-consequence diagrams, state-based and transition-system-based models, etc.

Although there has been extensive work on the modelling of individual infrastructures (power only [StVa94], telecom only [LuKl00]), and various methods and tools have been developed to predict the consequences of potential disruptions within an individual infrastructure, there is currently no methodology available to model the complex interdependencies existing among several infrastructures [KyWi00], [DoLM04].

The modelling framework to be developed is aimed at taking into account multiple dimensions of interdependencies [RiPK01]: a) type of interdependency (physical, cyber, geographic, logical), b) coupling and response behaviour (loose or tight, inflexible or adaptive), c) various classes of faults that can occur, and d) different time scales. This raises a number of challenging issues that need to be addressed and require appropriate methodologies to overcome them.

A major difficulty lies in the complexity of the modelled infrastructures in terms of size, multiplicity of interactions and types of interdependencies involved. To address this problem, a number of abstractions and appropriate approaches for composing models are necessary. Therefore, the aim is to produce, from conceptual analyses, generic models that can be refined, instantiated and composed according to hierarchical modelling approaches. Resorting to a hierarchical approach brings benefits in several respects, among which: i) facilitating the construction of models; ii) speeding up their solution; iii) favouring scalability; iv) mastering complexity by handling smaller models, hiding at one hierarchical level some modelling details of the lower one. Important issues are how to abstract all relevant information of one level to the higher one, and how to compose the derived abstract models. Also, composition rules should be defined to build the models describing each level of the hierarchy from the integration of small generic building blocks describing the model components and their interactions. A set of generic models will be defined for classes of functions, components, mechanisms, behaviours and interdependencies, based on the analysis of different control scenarios and the involved architectures of the power, control and communication systems.

Particular focus will be put on the types of failures that are characteristic of interdependent infrastructures [DCLN04]:
• Cascading failures, which occur when a disruption in one infrastructure causes the failure of a component in a second infrastructure;
• Escalating failures, which occur when an existing failure in one infrastructure exacerbates an independent disruption in another infrastructure, increasing its severity or the time to recovery and restoration from this failure;
• Common cause failures, which occur when two or more infrastructures are affected simultaneously because of some common cause.

Multi-agent-based approaches [McDa05] will also provide a well-suited tool to investigate interactions between the control, power and telecommunication infrastructure. This includes the type and size of data exchanged, the hardware, software and communication technology used, control algorithms, timing information (deadlines, latency, jitter), etc. Such agents will emulate elements of the different infrastructures and execute several centralised and distributed control applications as they are typically deployed in existing situations and as they will be available in a future, more decentralised setup. Given the dynamic nature of the elements and their data, agents are well suited to illustrate how faults propagate through an infrastructure and to the other layers. This will allow analysing the effects of failures in each component on the application, as well as quantifying the dependability characteristics (availability, performance, integrity) of the system. It will also allow evaluating the reliability and self-healing capacity of power and data architectures [Amin01].
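A deliberately simplified sketch of such cross-layer fault propagation is given below: telecom nodes depend on a feeding substation, substations depend on a controlling telecom node, and the loss of one element is propagated until a fixed point is reached. The topology and dependencies are hypothetical.

```python
# Deliberately simplified sketch of failure propagation between two interdependent
# layers: telecom nodes need power from a substation, substations need a working
# telecom node for control. Topology and dependencies are hypothetical.

power_feeds = {"T1": "S1", "T2": "S1", "T3": "S2"}   # telecom node -> feeding substation
control_via = {"S1": "T1", "S2": "T3"}               # substation   -> controlling telecom node

def cascade(initial_failures):
    failed = set(initial_failures)
    while True:
        new = set()
        for t, s in power_feeds.items():             # telecom fails if its substation failed
            if s in failed and t not in failed:
                new.add(t)
        for s, t in control_via.items():             # substation fails if its controller failed
            if t in failed and s not in failed:
                new.add(s)
        if not new:
            return failed
        failed |= new

print(cascade({"S1"}))   # loss of substation S1 also takes down telecom nodes T1 and T2
print(cascade({"T3"}))   # loss of telecom node T3 takes substation S2 down with it
```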

Work package 5: optimisation

This work package integrates the results of the previous tasks in order to obtain the fundamental basis for a generally applicable analysis and design methodology allowing the optimisation of a new electrical energy system concept, given the desired levels of reliability and the associated economic aspects. The idea is to come as close as possible to the utopian 100% reliability, dependability and availability of electricity service for all users in space (grid extent) and time (usage horizon), while considering additional goals or constraints such as minimal losses, maximal use of renewable sources, minimal investment and/or operation costs, etc.

In fact, electricity customers desire a transparent, 100% reliable electricity service rather than a supply as targeted in the current situation, wherein every load can be connected “at any time to any socket,” yielding a very low usage factor of the overall system and of the distribution grid in particular. The emergence of, for instance, DSM and advanced distributed control creates degrees of freedom and introduces flexibility in the system, allowing an increase of its efficient usage, thereby deferring investment costs while still enhancing reliability. To derive the optimum, a generalised cost function – containing financial parameters (investment, operation/maintenance cost and societal costs [DeGu04]), penalties for non-supplied kWh, power quality parameters and the like – is to be minimised. The optimisation strategy will look for an optimal selection and delineation of the use of the technology studied and modelled in the previous work packages, for a given consumption prediction scenario and time horizon. The optimisation algorithm will be related to elaborate model-predictive control ideas [GuHW01]. It has to be applicable to all main sectors (industrial, residential, services, transport) with their typical behaviour patterns.

To find the optimum, an extended contingency analysis will have to be performed as well. This has to go far beyond the traditional N-1 or N-x redundancy scenario analyses [KPAA04], as the interdependent control and communication systems need to be taken into account. The priority of contingencies to be analysed follows from the developed reliability models. To counteract such system-threatening situations, a whole range of actions or methods is available: procuring or activating additional ancillary services, load or generator stimulation through market signals, gradual load shedding, grid separation/resynchronisation of microgrids, and many more.
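As a baseline for the extended analysis described above, the sketch below performs a plain N-1 screening at the generation-adequacy level and shows how controllable load (DSM) enters as one of the counteracting measures; the units, ratings and demand figures are hypothetical.

```python
# Sketch of an N-1 screening at the generation-adequacy level: for every single
# unit outage, check whether the remaining units, together with activated
# demand-side actions, still cover the peak demand. Units and figures are hypothetical.

units = {"nuclear A": 1000.0, "CCGT B": 400.0, "CCGT C": 400.0,
         "wind (firm share)": 150.0, "CHP cluster": 200.0}      # MW available
peak_demand = 1900.0                                            # MW
sheddable_dsm = 120.0                                           # MW of controllable load

def n_minus_1(units, demand, dsm):
    violations = []
    for outaged in units:
        remaining = sum(p for u, p in units.items() if u != outaged)
        deficit = demand - remaining
        if deficit > dsm:                 # even full DSM activation cannot cover the gap
            violations.append((outaged, deficit))
    return violations

for unit, deficit in n_minus_1(units, peak_demand, sheddable_dsm):
    print(f"N-1 violation: outage of '{unit}' leaves a deficit of {deficit:.0f} MW "
          f"beyond the {sheddable_dsm:.0f} MW of controllable load")
```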

As such, this task does not intend to derive the highest level of reliability that can be achieved with every thinkable technology deployment, but rather results in a methodology to optimally roll out future technologies in a given socio-economic context, which by itself is set by the outside world.

Conclusion

This paper outlines the challenges related to obtaining 100% reliable electricity services, and proposes a methodology to systematically attack the related problems.

Acknowledgment. This project is partially supported by the K.U.Leuven Research Council via GOA/2007/09.

References

[AbDu06] I. Abele-Wigert, M. Dunn, International CIIP Handbook 2006 (Vol. I) - An Inventory of 20 National and 6 International Critical Information Infrastructure Protection Policies, ETH Zurich, Center for security studies, 493 pages.

[ACIP05] ACIP-project, “Analysis & Assessment for Critical Infrastructure Protection,” http://www.iabg.de/acip/, last visited Sep. 2006.

[AkSK04] H. Akkermans, J. Schreinemakers, K. Kok, “Microeconomic Distributed Control: Theory and Application of Multi-Agent Electronic Markets,” CRIS 2004, 2nd Int. Conf. on Critical Infrastructures, Grenoble, France, Oct. 2004.

[Amin01] M. Amin, “Towards self-healing energy infrastructure systems,” IEEE Computer Applications in Power, Vol. 14, No. 1, Jan. 2001, pp. 20-28.

[Amin02] M. Amin, “Security challenges for the electricity infrastructure,” IEEE Computer, Vol. 35, No. 4, Apr. 2002, pp. 8-10.

[Ande99] P. M. Anderson, Power System Protection, New York: McGraw-Hill, 1999.

[ArWa01] J. Arrillaga, N.A. Watson, “Computer Modelling of Electrical Power Systems (2nd Ed.),” New York: Wiley & Sons, 2001.

[Bark02] P. Barker, “Overvoltage considerations in applying distributed resources on power systems,” IEEE, Power Engineering Society, Summer Meeting, 1, 21-25 Jul.2002, pp. 109-114.

[BeDK99] A. Beddoes, Y. Dickson, L. Kerford, “Small-scale single phase embedded generators connected at LV,” in Proc. CIRED 15th Int. Conf. on Electricity Distribution, Nice, Jun. 1999.

[BFE03] BFE, “Technische aansluitings-voorschriften voor gedecentraliseerde productie-installaties die in parallel werken met het distributienet,” BFE standard, C10/11, Aug. 2003.

[Bose03] A. Bose, “Power System Stability: New Opportunities for Control,” Chapter in “Stability and Control of Dynamical Systems and Applications,” D. Liu, P.J. Antsaklis (Eds), Birkhäuser (Boston), 2003.

[Brow02] R.E. Brown, “Electric power distribution reliability,” Dekker, New York, 2002.

[CESI01] CESI, K.U.Leuven, et al., “Facing Vulnerabilities of Interdependent Infrastructures - a European Conference on partnership in research and development,” CESI, Milan, Italy, 21 Nov. 2001.

[Cobb02] J.A. Cobb, “Preserving quality of service guarantees in spite of flow aggregation,” IEEE/ACM Trans. on Networking, Vol. 10, No. 1, Feb 2002, pp. 43-53.

[DBVW04] K. De Brabandere, B. Bolsens, et al., “A voltage and frequency droop control method for parallel inverters,” 2004 IEEE 35th Ann. power electronics specialists conference, Aachen, Germany, Jun. 2004, pp. 2501-2507.

[DCLN04] I. Dobson, B.A. Carreras, V. Lynch, D.E. Newman, “Complex Systems analysis of series of blackouts: cascading failure, criticality, and self-organization,” Bulk Power System and Control, Cortina d’Ampezzo, Italy, Aug. 2004.

[DDDD06] G. Dondossola, G. Deconinck, F. Di Giandomenico, S. Donatelli, M. Kaâniche, P. Verissimo, “Critical Utility InfrastructurAL Resilience,” Proc. Int. Workshop on Complex Network and Infrastructure Protection (CNIP-2006), Rome, Italy, 28-29 Mar. 2006, 4 pages.

[DeDB02] G. Deconinck, V. De Florio, O. Botti, “Software-Implemented Fault-Tolerance and Separate Recovery Strategies Enhance Maintainability,” IEEE Trans. Reliability, Vol. 51, No. 2, Jun. 2002, pp. 158-165.

[DeGu04] D. Devogelaer, D. Gusbin, “Een kink in de kabel: de kosten van een storing in de stroomvoorziening,” Working paper 18-04, Federaal planbureau, Belgium, Oct. 2004.

[DoE] Department of Energy (USA), “21 steps to improve cyber security of SCADA networks,” http://www.ea.doe.gov/pdfs/21stepsbooklet.pdf, last visited Sep. 2006.

[DoLM04] G. Dondossola, O. Lamquet, M. Masera, “Emerging Standards and Methodological Issues for the Security Analysis of Power System Information Infrastructures,” Proc. 2nd Int. Conf. on Critical Infrastructures (CRIS-2004), Grenoble, France, Oct. 25-27, 2004, 6 pages on CDROM.

[DuDe02] R. C. Dugan, T. E. McDermott, “Operating conflicts for distributed generation interconnected with utility distribution systems,” IEEE Industry Applications Magazine, Mar. 2002, pp. 19-25.

[DuMa06] M. Dunn, V. Mauer (Eds.), International CIIP Handbook 2006 (Vol. II) - Analyzing Issues, Challenges, and Prospects, ETH Zurich, Center for security studies, 235 pages.

[DVDB05] G. Deconinck, J. Van de Vyver, A. Dusa, R. Belmans, “A Distributed Algorithm for Improved Dependability in Electricity Networks by Integrating Point-to-Point Communication and Control,” Revue de l'Électricité et de l'Électronique (SEE), Sep. 2005, pp. 31-36.

[EnH04] J.H.R. Enslin, P.J.M. Heskes, “Harmonic interaction between a large number of distributed power inverters and the distribution network,” IEEE Trans. Power Electron., Vol. 19, No. 6, Nov. 2004, pp. 1586-1593.

[Fair04] P. Fairley, “The Unruly Power Grid,” IEEE Spectrum, Aug. 2004, pp. 16-21.

[GCBG01] M. Guillot, C. Collombet, P. Bertrand, B. Gotzig, “Protection of embedded generation connected to a distribution network and loss of mains detection,” Proc. CIRED 16th Int. Conf. on Electricity Distribution, Amsterdam, Jun. 2001.

[GPBF03] J. Greakbanks, D. Popovic, M. Begovic, T.C. Green, “On optimisation for security and reliability of power systems with distributed generation,” Power Tech 2003, Bologna, Italy.

[GuHW01] Y. Guo, D. Hill, Y. Wang, “Global Transient Stability and Voltage Regulation for Power Systems,” IEEE Transactions on Power Systems, Vol. 16, No. 4, Nov. 2001, pp. 678-688.

[HaCD99] N. Hadjsaid, J. Canard, F. Dumas, “Dispersed generation increases the complexity of controlling, protecting and maintaining the distribution system,” IEEE Computer Applications in Power, Apr. 1999, pp. 23-28.

[JACK00] N. Jenkins, R. Allan, P. Crossley, D. Kirschen, G. Strbac, Embedded Generation, IEE, London (UK), 2000.

[KaLe02] M. A. Kashem, G. Ledwich, “Impact of distributed generation on protection of single wire earth return lines,” Electric Power Systems Research, Vol. 62, 2002, pp. 67-80.

[KlSK05] B. Klöckl, P. Stricker, G. Koeppel, “On the properties of stochastic power sources in combination with local energy storage,” CIGRÉ symposium on power systems with dispersed generation, Athens, Apr. 2005.

[KPAA04] P. Kundur, J. Paserba, et al., “Definition and Classification of Power System Stability,” IEEE Trans. on Power Systems, Vol.19, No.2, May 2004, pp. 1387-1401.

[KuKi03] J.D. Kueck, B.J. Kirby, “The Distribution System of the Future,” The Electricity Journal (Elsevier Science), Jun. 2003, pp. 78-87.

[KyWi00] N. Kyriakopoulos, M. Wilikens, “Dependability of complex open systems: A unifying concept for understanding Internet-related issues.” Proc. 3rd Information Survivability Workshop (ISW2000) (IEEE Comp. Soc.), Boston/Cambridge, MA, USA, Oct. 2000, 4 pages.

[Lass02] R. H. Lasseter, “Microgrids,” IEEE, Power Engineering Society, Winter Meeting, 1, Jan. 2002, pp. 305-308.

[LGWZ03] J. Liang, T.C. Green, G. Weiss, Q.-C. Zhong, “Hybrid Control of Multiple Inverters in an Island-Mode Distribution System,” IEEE Power Electronics Specialist Conference (PESC), 2003.

[LuKl00] H.A.M. Luiijf, M.H.A. Klaver, “BITBREUK, de kwetsbaarheid van de ICT-infra-structuur en de gevolgen voor de informatie-maatschappij,” workshop Vulnerabilities of ICT-networks, Amsterdam, The Netherlands, Mar. 2000.

[MBAG03] M. Megdiche, Y. Besanger, J. Aupied, R. Garnier, N. Hadjsaid, “Reliability assessment of distribution systems with distributed generation including fault location and restoration process,” in Proc. CIRED 17th Int. Conf. on Electricity Distribution, Barcelona, May 2003.

[McDa05] S.D.J. McArthur, E.M. Davidson, "Concepts and approaches in multi-agent systems for power applications," Proc. 13th Int. Conf. on Intelligent Systems Application to Power Systems, Nov. 2005, 5 pp.

[MJCL03] B. W. Min, K. H. Jung, M. S. Choi, S. J. Lee, S. H. Hyun, S. H. Kang, “Agent-based adaptive protection coordination in power distribution systems,” Proc. CIRED 17th Int. Conf. on Electricity Distribution, Barcelona, May 2003.

[Mozi01] C. J. Mozina, “Interconnection protection of IPP generators at commercial/industrial facilities,” IEEE Trans. Industry Applications, Vol. 37, No 3, May-Jun. 2001, pp. 681-688.

[MVVD04] K.J.P. Macken, K. Vanthournout, J. Van den Keybus, G. Deconinck, R. Belmans, “Distributed Control of Renewable Generation Units With Integrated Active Filter,” IEEE Trans. on Power Electronics, Vol. 19, No. 5, Sep. 2004, pp. 1353-1360.

[PDHB05] G. Pepermans, J. Driesen, D. Haeseldonckx, R. Belmans, W. D'Haeseleer, “Distributed generation: definition, benefits and issues,” Energy Policy, Vol. 33, No. 6, Apr. 2005, pp. 787-798.

[PoHW98] D.H. Popovic, D.J. Hill, Q. Wu, “Coordinated static and dynamic voltage control in large power systems,” Bulk power system dynamic and control IV, Santorini, Greece, Aug. 1998.

[RiPK01] S.M. Rinaldi, J.P. Peerenboom, T.K. Kelly, “Identifying, understanding, and analyzing critical infrastructures interdependencies,” IEEE Control Systems Magazine, Dec. 2001, pp. 11-25.

[SAFE05] project IST-2001-32685, http://www.ist-safeguard.org, last visited Sep. 2006.

[Sand05] http://www.sandia.gov/scada/ , last visited Sep. 2006.

[ScLi04] K. Schneider, C.-C. Liu, “A proposed method of partially-decentralised power system protection” International Conference on Securing Critical Infrastructures, CRIS 2004, Grenoble, France, Oct. 2004.

[ShTu02] S.Y. Shi, J.S. Turner, “Multicast routing and bandwidth dimensioning in overlay networks,” IEEE Journal on Selected Areas in Communications, Vol. 20, No. 8, Oct. 2002 pp. 1444-1455.

[StAy00] A. Stankovic, T. Aydin, “Analysis of asymmetrical faults in power systems using dynamic phasors,” IEEE Trans. on Power Systems, Vol. 15, No. 3, Aug. 2000, pp. 1062-1068.

[StVa94] I. Steekmans, A. Van Wijk, “Stroomloos. Kwetsbaarheid van de samenleving; gevolgen van verstoringen van de elektriciteitsvoorziening,” Rathenau Instituut, Den Haag 1994.

[TDVV00] M. Teughels, I. De Coster, E. Van Lil and A. Van de Capelle, “Leaf initiated join hand-over evaluation,” Wireless networks, Vol. 6, No. 5, Nov. 2000, pp. 347-354.

[VaDB04a] K. Vanthournout, G. Deconinck, R. Belmans, “A Small World Overlay Network for Resource Discovery,” Proc. 10th ACM Int. Conf. on Parallel Processing (Euro-Par-2004), Lecture Notes in Computer Science Vol. 3149 (Springer-Verlag), Pisa, Italy, Aug. 31-Sep. 3, 2004, pp. 1068-1075.

[VaDB04b] K. Vanthournout, G. Deconinck, R. Belmans, “A Middleware Control Layer for Distributed Generation Systems,” Proc. of IEEE Power Systems Conference and Exhibition (PSCE-2004), New York City, NY, Oct. 2004, 5 pages.

[vSPv01] A. M. van Voorden, J. G. Slootweg, G. C. Paap, L. van der Sluis, “Potential for renewable energy generation in an urban distribution network,” Proc. CIRED 16th Int. Conf. on Electricity Distribution, Amsterdam, Jun. 2001.

[XMVC02] Z. Xie, G. Manimaran, V. Vittal, A.G. Phadke, V. Centeno, “An information architecture for future power systems and its reliability analysis,” IEEE Trans. on Power Systems, Vol. 17, No. 3, Aug. 2002, pp. 857-863.

[ZhMG99] T. Zheng, E. Makram, A. Girgis, “Power System Transient and Harmonic Studies using Wavelet Transform,” IEEE Trans. on Power Delivery, Vol.14, Nr.4, Oct.1999, pp. 1461-1468.
