
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS-I: FUNDAMENTAL THEORY AND APPLICATIONS, VOL. 40, NO. 12, DECEMBER 1993

Transactions Briefs

Bipolar Pattern Association Using a Two-Layer Feedforward Neural Network

Jianbian Hao, Shaohua Tan, Member, IEEE, and Joos Vandewalle, Fellow, IEEE

Abstract: The objective of this paper is to present a design technique that constructs a two-layer feedforward network for the realization of an arbitrary set of bipolar associations (p_i, q_i), i = 1, ..., k. The underlying idea is to use a layer of hard-logic neurons to identify each p_j in a winner-take-all fashion; the second layer of so-called sign neurons then picks up the corresponding pattern q_j. An important feature of the net is that it can be used as an error-correcting associative memory if the thresholds of the hard-logic neurons in the first layer are properly adjusted.

I. INTRODUCTION

The design problem treated herein is the following: Let an arbitrary but well-defined¹ set of associations be given by (p_i → q_i), i = 1, ..., k, where p_i and q_i are bipolar vectors² of dimensions n and m, respectively. We want to find a layered feedforward neural network with linear threshold neurons that implements this set of associations. In other words, the net should be built such that when p_i is presented as its input, its output will be q_i.

Sometimes the input vectors p_i (i = 1, ..., k) are subject to distortion and thus may differ in several bits from the original p_i. We want the net to be designed so that when a distorted input (differing from p_i in at most a prescribed number of bits) is applied to the net, q_i will still be generated as the corresponding output. It can be expected that this extra requirement leads to some complication of the design.

Without explicit reference to the layered feedforward network structure, this abstract design problem has its origin in many areas, including associative memories, cognitive inference systems, pattern recognition, logic circuits, error-correction coding, and control, and has been considered quite extensively in the past decades [2], [3], [5]-[7].

The well-known Hopfield nets represent one class of techniques that utilize convergent dynamics for the association. The problem with this class of techniques is that effective weight design rules require an understanding of the relationship between the weights and the fixed points, and the intricate nonlinearity of the neural network often obscures such a relationship (see the discussion in [4]). Up to now, there is no single design rule that can meet (deterministically or statistically) both the fixed-point and the basin requirements.

Manuscript received March 19, 1992; revised November 4, 1992. This work was partially carried out at the ESAT Laboratory, Katholieke Universiteit Leuven, in the framework of a Concerted Action Project of the Flemish Community, entitled "Applicable Neural Networks." The scientific responsibility is assumed by its authors. This paper was recommended by Associate Editor G. E. Ford.

J. Hao and J. Vandewalle are with the ESAT Laboratory, Department of Electrical Engineering, Katholieke Universiteit Leuven, Heverlee, Belgium.

S. Tan is with the Department of Electrical Engineering, National University of Singapore, Singapore.

IEEE Log Number 9211965.

¹The well-definedness is in the sense that p_i ≠ p_j for i ≠ j.

²Bipolar vectors are vectors whose entries are either -1 or 1. Unless otherwise stated, all vectors are assumed to be column vectors.

Fig. 1. Example of a two-layer feedforward net.

The other class of techniques uses the feedforward structure of perceptron neural nets. Depending on how the weights and the thresholds are set, there are iterative designs that employ a learning mechanism [1] and direct designs that set the weights directly using algebraic or geometric properties of the feedforward nets [6]. Typical problems with this class of techniques are the lack of analytical insight to guide the learning process, the complexity of exploring the geometric structure of the given data in association with the feedforward nets, and the excessive computational load.

The objective of this paper is to present a new design technique for the problem. This technique belongs to the second class and has the important features of being systematic, simple, general, and efficient. The basic network structure generated by the new technique is a generic two-layer feedforward net, with a different type of neuron in each of the two layers. Both the number of neurons in each layer and the corresponding weights are determined directly from the given set of associations, without computation. When there is an error-correction requirement, the thresholds of the neurons in the first layer are adjusted accordingly. Such an adjustment is straightforward and results in different error-correction ranges for the input vectors p_i (i = 1, ..., k).

The paper is structured as follows. In Section II, the discrete neurons and the two-layer feedforward net to be used in our development are discussed. This discussion also serves to establish the nomenclature and conventions for the subsequent exposition. Section III presents the detailed construction of the network for a given set of vector associations. We also compare the new construction with existing ones. Section IV shows how to modify the thresholds to meet the additional requirement of error correction in the input vectors. An illustrative example is provided in Section V, followed by additional remarks and conclusions in Section VI.

II. NEURONS, MULTILAYER FEEDFORWARD NETWORKS, AND VECTOR ASSOCIATIONS

The feedforward neural net considered herein has two layers, as shown in Fig. 1. Each layer consists of a set of computing elements called neurons. As this is a feedforward structure, connections are defined only between neurons in adjacent layers. Since the outputs of the neurons in the first layer are not directly available at the network output, all the neurons in the first layer are called hidden neurons, and the first layer itself is called the hidden layer of the net.


The neurons considered here are multi-input, single-output nonlinear functional elements with the following transfer function:

y = f( sum_{i=1}^{n} w_i x_i - u )

where y is the output of the neuron; x_i is the ith input of the neuron, and w_i is its corresponding weight; u is a constant threshold value; and f(.) is some nonlinear function. We are interested in two types of nonlinear functions, and thus two types of neurons. The nonlinear function for the first type is given by

f_1(x) = 1 if x > 0, and f_1(x) = 0 if x <= 0.

Such an f_1(.) is called a hard-logic function, and the corresponding neurons are naturally called hard-logic neurons.

The following nonlinear function f_2(.) defines the second type of neuron:

f_2(x) = 1 if x >= 0, and f_2(x) = -1 if x < 0.

This is the so-called sign function, and the corresponding neurons are termed sign neurons. A conventional neural net is normally composed of a single type of neuron. We shall see that it is, in fact, advantageous to use two different types of neurons in the same network. This is one of the important features of our approach to the design of the net.
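Since the displayed formulas for f_1(.) and f_2(.) are illegible in the source scan, the following minimal Python sketch records the two nonlinearities as reconstructed above. The boundary convention f_2(0) = 1 is our assumption; it is immaterial here, because in this design the sign neurons only ever receive bipolar-valued inputs.

```python
import numpy as np

def f1(x):
    # Hard-logic function: 1 when the thresholded net input is positive,
    # 0 otherwise. Applied componentwise to the Layer-1 activations.
    return np.where(np.asarray(x) > 0, 1, 0)

def f2(x):
    # Sign function: +1 for nonnegative input, -1 otherwise, so the
    # Layer-2 outputs are always bipolar. (f2(0) = 1 is an assumed
    # convention; it is never exercised by the construction below.)
    return np.where(np.asarray(x) >= 0, 1, -1)
```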

Returning to the two-layer network structure: all the neurons in the first layer are hard-logic neurons, and those in the second layer are sign neurons. It will become clear later that this special arrangement enables a simple implementation of the given associations.

For easy reference to the inputs and outputs of the nets, and to the different neurons, weights, and threshold values in the feedforward network, the following convention is adopted. p_i and q_j denote the ith input vector and the jth output vector of the two-layer feedforward net, respectively. For a given two-layer feedforward net, we count its layers from input to output, starting with the layer immediately following the inputs. To count the neurons in a particular layer, we start from the neuron at the top and go downward. w_{ij} in layer k refers to the weight on the link from the ith neuron in the (k-1)th layer to the jth neuron in the kth layer. Similarly, u_i in layer k refers to the threshold of the ith neuron in layer k. We also speak of a cross weight w_{ij} if i ≠ j, and a direct weight if i = j. With this convention, it is easy to refer to any particular neuron and its associated weights and thresholds. Fig. 1 gives an example of our numbering convention.

III. PATTERN ASSOCIATION BY TWO-LAYER FEEDFORWARD NETS

We now discuss the detailed design procedure for constructing the feedforward net that realizes the given associations. To begin, let a set of k well-defined associations be given by

p_i → q_i (i = 1, ..., k)

where p_i and q_i are bipolar vectors of dimensions n and m, respectively. As the first step of the construction, a two-layer feedforward net configuration is set up, consisting of the layer of input nodes and the first and second layers of neurons. The number of input nodes is n, the dimension of the given input patterns p_i. The neurons in the first layer, k in number (k is the number of associations), are hard-logic neurons. Finally, m sign neurons are allocated to the second layer. From this arrangement, it is clear that the geometric size of our feedforward net is dictated by the number and the dimensions of the given patterns: it grows linearly with the size of the association problem.

Having fixed the configuration of the net, we can now assign the weight of each connection and the threshold of each neuron. The assignment is a straightforward substitution process, and no computation is necessary. More precisely, the weight connecting the ith input node to the jth neuron in the first layer is the ith entry of the pattern p_j (i = 1, 2, ..., n; j = 1, 2, ..., k); and the weight from the ith neuron in the first layer to the jth neuron in the second layer is the jth entry of the pattern q_i (i = 1, 2, ..., k; j = 1, 2, ..., m). In other words, the jth neuron in the first layer corresponds to the jth pair of matched patterns (p_j and q_j): it is connected to the input nodes by the corresponding elements of p_j and to the neurons in the second layer by the corresponding elements of q_j. Observe that this weight assignment guarantees that all the weights take only the values 1 or -1, which is a great advantage from the viewpoint of hardware implementation.

The thresholds of all the neurons in the second layer (sign neurons) are simply set to zero, whereas those of the neurons in the first layer (hard-logic neurons) are normally set in accordance with the error-correction requirement. When there is no such requirement, we set each of them to a value θ with n - 1 < θ < n, where n is the dimension of the input patterns. In the rest of this section, we choose θ = n - 0.5 for our analysis.
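As a concrete sketch of this substitution process, the following Python function (the name and array layout are our own, not the paper's) builds the weight matrices and thresholds directly from the given patterns, using the default choice θ = n - 0.5 for Layer 1:

```python
import numpy as np

def build_net(P, Q):
    # P: k x n array whose rows are the input patterns p_1, ..., p_k.
    # Q: k x m array whose rows are the output patterns q_1, ..., q_k.
    P, Q = np.asarray(P), np.asarray(Q)
    k, n = P.shape
    W1 = P.copy()                  # row j of W1 = p_j^T: weights into hard-logic neuron j
    W2 = Q.T.copy()                # column i of W2 = q_i: weights out of neuron i
    theta1 = np.full(k, n - 0.5)   # Layer-1 thresholds (no error-correction requirement)
    theta2 = np.zeros(Q.shape[1])  # sign-neuron thresholds are all zero
    return W1, theta1, W2, theta2
```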

Let us prove that the two-layer feedforward net designed in this fashion indeed realizes the given set of associations. To this end, we arrange the weights from the input nodes to Layer 1 into a k x n matrix W_1, such that the entries in the ith (i = 1, ..., k) row of W_1 are the weights from all the input nodes to the ith neuron of Layer 1. Similarly, the weights from Layer 1 to Layer 2 are arranged into an m x k matrix W_2, such that the entries in the ith (i = 1, 2, ..., k) column of W_2 are the weights from the ith neuron of Layer 1 to all the neurons of Layer 2. Because of the way the weights are set, we obviously have

W_1 = [p_1 p_2 ... p_k]^T    (1)

W_2 = [q_1 q_2 ... q_k]    (2)

where the p_i's and q_i's are the given patterns (column vectors). Similarly, if the threshold vectors of Layers 1 and 2 are denoted by Θ_1 and Θ_2, respectively, we have

Θ_1 = (n - 0.5) [1 1 ... 1]^T (k x 1),  Θ_2 = [0 0 ... 0]^T (m x 1).    (3)

Denoting the n-dimensional input vector to the net by x, the k-dimensional output vector of Layer 1 by g, and the m-dimensional output vector of the net by y, we can easily see that the function implemented by the feedforward net is

g = f_1(W_1 x - Θ_1),  y = f_2(W_2 g - Θ_2)    (4)


where f_1(.) and f_2(.) denote, respectively, the componentwise application of the hard-logic and sign functions defined previously.

Suppose the vector p, ( i = 1,. . . , IC) is applied as the input vector to the net, that is, I = p, . Then the output vector g of Layer 1 will be

-1 -1

Since pT . p, < n - 0.5 for j # i ('j = 1, 2 , . . . , IC) and p: ' p , = n > n - 0.5, we have g = [ 0 . . . 0 1 O...0lT. Or, in other words, only the ith element of g is 1, and all the rest are 0.

Consequently, we have

y = f_2(W_2 g - Θ_2) = f_2(q_i) = q_i

which is what we had to prove.
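To make the argument concrete, here is a self-contained check of (4) on a small net built by the rule above. The four-dimensional patterns are hypothetical, chosen only to form a well-defined association set:

```python
import numpy as np

def recall(x, W1, theta1, W2, theta2):
    # Forward pass (4): g = f1(W1 x - theta1), y = f2(W2 g - theta2).
    g = np.where(W1 @ x - theta1 > 0, 1, 0)      # exactly one entry fires
    return np.where(W2 @ g - theta2 >= 0, 1, -1)

P = np.array([[ 1,  1,  1,  1],    # hypothetical p_1, p_2, p_3 (n = 4)
              [ 1, -1,  1, -1],
              [-1, -1,  1,  1]])
Q = np.array([[ 1, -1],            # hypothetical q_1, q_2, q_3 (m = 2)
              [-1,  1],
              [ 1,  1]])
W1, theta1 = P, np.full(3, 4 - 0.5)
W2, theta2 = Q.T, np.zeros(2)
for p, q in zip(P, Q):
    assert np.array_equal(recall(p, W1, theta1, W2, theta2), q)
```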

IV. ASSOCIATION NETWORK AS HETEROASSOCIATIVE MEMORY

We now show that the feedforward net designed in the preceding section can easily incorporate the error-correction feature. In this sense, the network functions as a heteroassociative memory.

Let us briefly discuss the heteroassociative memory before proceeding to the detailed design procedure. The heteroassociation design problem is stated in its general form as follows: For a given set of bipolar associations p_i → q_i and a set of nonnegative integers r_i (i = 1, ..., k), we need to build a neural net such that when p_i is presented as the input of the net, q_i will be the output; moreover, all input patterns with Hamming distance equal to or smaller than r_i from p_i should also give rise to the output q_i (i = 1, ..., k). r_i is normally called the ith error-correction range. Because of the constraint imposed by the error-correction ranges, the input patterns p_1, ..., p_k cannot be chosen completely at will. In fact, they must satisfy the following condition:

p_i^T p_j < n - 2r_i - 2r_j    (5)

for any i ≠ j (i, j = 1, 2, ..., k), to avoid overlap among the adjacent error-correction ranges.

To visualize condition (5), each p_i along with its error-correction range can be pictured as a square of side length r_i centered at p_i in the two-dimensional plane; all the input patterns then correspond to k such squares, and (5) implies that these k squares do not overlap. The arbitrariness of the input patterns is therefore confined to within condition (5). Note that (5), together with the well-definedness of the set of associations, is the basis for judging whether a given heteroassociation problem is meaningful. Any set of associations satisfying these two conditions is termed proper.
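A direct transcription of this properness test into Python (the function name is ours) checks both the well-definedness and condition (5):

```python
import numpy as np

def is_proper(P, r):
    # P: k x n array of bipolar input patterns; r: the k error-correction ranges.
    P = np.asarray(P)
    k, n = P.shape
    for i in range(k):
        for j in range(i + 1, k):
            if np.array_equal(P[i], P[j]):               # violates well-definedness
                return False
            if P[i] @ P[j] >= n - 2 * r[i] - 2 * r[j]:   # violates condition (5)
                return False
    return True
```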

It is easy to verify that, to incorporate the error-correction requirement into the feedforward net, the only modification needed is to the thresholds of the neurons in the first layer, which can be chosen as n - 2r_i - 0.5 for the ith neuron, or, in vector notation:

Θ_1 = [n - 2r_1 - 0.5, n - 2r_2 - 0.5, ..., n - 2r_k - 0.5]^T    (6)

which is readily verified.
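In code, the threshold rule (6) is a one-liner; the comment spells out the inequality chain behind the verification, which the text leaves implicit:

```python
import numpy as np

def error_correcting_thresholds(n, r):
    # Theta_1 entries per (6): theta_i = n - 2 r_i - 0.5.
    # If the input x differs from p_i in d <= r_i bits, then
    # p_i^T x = n - 2d >= n - 2 r_i > theta_i, so neuron i fires;
    # and by (5), p_j^T x <= p_j^T p_i + 2 r_i < n - 2 r_j, which
    # (being an integer) stays below theta_j = n - 2 r_j - 0.5,
    # so every other neuron j != i stays silent.
    return n - 2 * np.asarray(r, dtype=float) - 0.5
```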

TABLE I
WEIGHTS AND THRESHOLDS OF THE DESIGNED NET

Layer 1: weights w_{ij} = ith entry of p_j (i = 1, ..., 6; j = 1, 2, 3), each equal to 1 or -1; thresholds θ_1 = 1.5, θ_2 = 3.5, θ_3 = 3.5. Layer 2: weights given by the entries of q_1, q_2, q_3; all thresholds 0. [The individual table entries are illegible in the source scan.]

V. DESIGN EXAMPLES

Let us illustrate our design approach using a simple example.

Example: In this example, a feedforward net is designed for the associations (p_1 → q_1), (p_2 → q_2), and (p_3 → q_3). The patterns, given as bipolar column vectors in (7), have dimensions n = 6 (the p_i's) and m = 3 (the q_i's) [the individual entries of (7) are illegible in the source scan], and the error-correction ranges are specified by r_1 = 2, r_2 = 1, and r_3 = 1.

The input pattern condition (5) is clearly satisfied by the three given patterns:

p_1^T p_2 = -2 < n - 2r_1 - 2r_2 = 0
p_1^T p_3 = -2 < n - 2r_1 - 2r_3 = 0
p_2^T p_3 = -2 < n - 2r_2 - 2r_3 = 2

where n = 6 is the dimension of the p_i's. Next, we construct a two-layer feedforward net with six input nodes, three neurons in Layer 1, and three neurons in Layer 2, and assign the weights and thresholds shown in Table I. In particular, by (6) the Layer-1 thresholds are θ_1 = 6 - 4 - 0.5 = 1.5, θ_2 = 6 - 2 - 0.5 = 3.5, and θ_3 = 6 - 2 - 0.5 = 3.5.
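Since the entries of (7) and of Table I are not legible in the scan, the following sketch substitutes hypothetical patterns with the same dimensions and exactly the inner products p_i^T p_j = -2 stated above (the q_i's are also hypothetical). It verifies recall for every stored pair and for inputs corrupted within the specified error-correction ranges:

```python
import numpy as np

# Hypothetical stand-ins for (7): n = 6, pairwise products all -2.
P = np.array([[ 1,  1,  1,  1,  1,  1],
              [ 1,  1, -1, -1, -1, -1],
              [-1, -1,  1,  1, -1, -1]])
Q = np.array([[ 1,  1, -1],        # hypothetical output patterns (m = 3)
              [-1,  1,  1],
              [ 1, -1, -1]])
r = np.array([2, 1, 1])
n = P.shape[1]

W1, W2 = P, Q.T
theta1 = n - 2 * r - 0.5           # [1.5, 3.5, 3.5], as in Table I
for i, (p, q) in enumerate(zip(P, Q)):
    for flips in range(r[i] + 1):  # corrupt up to r_i bits of p_i
        x = p.copy()
        x[:flips] *= -1
        g = np.where(W1 @ x - theta1 > 0, 1, 0)
        y = np.where(W2 @ g >= 0, 1, -1)   # Layer-2 thresholds are zero
        assert np.array_equal(y, q)
```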


An important observation from the example is that the design is carried out systematically with virtually no computation involved (only minor computations are needed to determine the thresholds in the first layer). Another observation is that the resulting network is not necessarily larger than those generated by conventional techniques. In fact, the computational efficiency of the design technique can be fully appreciated when it is used to solve very large problems.

VI. CONCLUSIONS

We have presented a general design method that produces a two-layer feedforward neural net for an arbitrarily specified (but well-defined) bipolar association problem. We have also shown that the network can have the feature of error correction if the thresholds of the neurons in the first layer are set properly.

The obvious advantage of using a feedforward net for pattern association is that we have to be concerned only with the algebraic, rather than the dynamic, nature of the neural net; the latter is much more intricate and far less understood. It has been shown that with the feedforward net we have complete control over the size of the correction range. This is in contrast to a feedback net, for which the correction range must be realized as the basin of attraction of a stable fixed point, which is nearly impossible to control by changing the weights and/or the thresholds alone.

In terms of very-large-scale-integration implementation, the feedforward net discussed earlier is also simpler than feedback nets, since subtle issues such as stability need not be of concern in the circuit design. The drawbacks seem to be that the net structure is problem dependent, and the two-layer structure does not allow a modular implementation of the net. Thus, when the underlying matching problem is changed or has a higher dimension, the circuit will have to be redrawn to reflect the modification, and the existing circuit can no longer be used. Another disadvantage seems to be that the thresholds of the first layer depend on the dimension of the input patterns. This is somewhat undesirable, as higher input dimensions lead to higher thresholds. However, this difficulty can be avoided by rearranging the algorithmic order of the network computations. One may also realize that threshold values of n - 0.5, as described earlier, can pose difficulty for large n when one tries to build an actual implementation circuit. One way to deal with this problem is to introduce a preprocessing mechanism that near-orthogonalizes the given set of data, which allows a more robust threshold to be used. The authors are currently looking at the possibility of an actual circuit implementation of the technique.

REFERENCES

[1] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning internal representations by error propagation," in Parallel Distributed Processing, vol. 1, D. E. Rumelhart and J. L. McClelland, Eds. Cambridge, MA: MIT Press, 1986, ch. 8.
[2] J. J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. Natl. Acad. Sci. USA, vol. 79, pp. 2554-2558, 1982.
[3] T. Kohonen, Self-Organization and Associative Memory, 2nd ed. New York: Springer-Verlag, 1988.
[4] A. Dembo, "On the capacity of associative memories with linear threshold functions," IEEE Trans. Inform. Theory, vol. 35, pp. 709-720, 1989.
[5] J. Hao, S. Tan, and J. Vandewalle, "A geometric approach to the structural synthesis of multilayer perceptron neural networks," in Proc. 1990 Int. Conf. Neural Networks, vol. 2, 1990, pp. 881-885.
[6] E. B. Baum, "On the capabilities of multilayer perceptrons," J. Complexity, vol. 4, pp. 193-215, 1988.
[7] J. E. Hopcroft and R. L. Mattson, "Synthesis of minimal threshold logic networks," IEEE Trans. Electron. Comput., vol. EC-14, pp. 552-560, 1965.
