
Zeitschrift für Operations Research, Band 16, 1972, Seite 1–10. Physica-Verlag, Würzburg.

Aggregating Diophantine Equations

By F. GLOVER, Boulder 1), and R. E. D. WOOLSEY, Golden 2)

Received 29 March 1971

Summary: MATHEWS [1897] has given a theorem for aggregating two diophantine equations with positive integer coefficients into a single equation that has the same solution set as its parents over the nonnegative integers. Building on this result, ELMAGHRABY and WIG [1970] show how to shrink the inequality constraints of a bounded-variable integer program to a single constraint equation. However, such applications are limited, as we show, by a greater than exponential growth in coefficient size as successive constraints are aggregated into one. To mitigate this situation, we give new theorems and implementation procedures that provide exponential order reductions in the coefficient growth attending the aggregation process.

Zusammenfassung: MATHEWS [1897] developed a theorem for combining two Diophantine equations with positive integer coefficients into a single equation having the same solution set as the two original equations. Building on this result, ELMAGHRABY and WIG [1970] showed how the inequality constraints of an integer program with bounded variables can be successively reduced to a single equation. The practical applicability is limited, however: as the constraints are successively aggregated into one, the coefficients grow faster than exponentially. To mitigate this drawback, new theorems and implementation procedures are developed here. They ensure that the growth of the coefficients in the course of the aggregation process is smaller by a factor of exponential order.

1. Introduction

A theorem due to MATHEWS [1897] shows how to aggregate two equations whose coefficients are all positive integers into a single equation that has the same solution set when the variables are restricted to nonnegative integers. Building on this result, ELMAGHRABY and WIG [1970] give rules for transforming the inequality constraints of a bounded-variable integer program into a single constraint equation. However, as we show, the applications of [MATHEWS] and [ELMAGHRABY/WIG] are severely limited by a greater than exponential growth in the size of the coefficients (reflected in the growth of the right-hand side) as successive constraints are aggregated into one. We provide new results that yield significantly improved methods for aggregating such equations. In particular, we indicate a modified approach for applying MATHEWS' theorem that provides an exponential order reduction in the coefficient size, and also indicate a generalization of MATHEWS' theorem that further reduces this growth by more than half. Finally, we give a theorem that reduces the growth of coefficient size (beyond that of the generalization of MATHEWS' theorem) by a factor which is again of exponential order. Numerical examples are provided to illustrate our results.

1) Prof. FRED GLOVER, University of Colorado, Boulder, Colorado.
2) Prof. ROBERT E. D. WOOLSEY, Institute for Operations Research, Colorado School of Mines, Golden, Colorado 80401.

2. MATHEWS' Theorem and Some Applications

The theorem of MATHEWS [1897] may be stated as follows:

Theorem 1.

Consider a system of two equations with strictly positive integer coefficients

$s_1 \equiv \sum_j a_{1j} x_j = b_1$  (1)

$s_2 \equiv \sum_j a_{2j} x_j = b_2.$  (2)

a) If there exist nonnegative integer values for the $x_j$ satisfying (1) and (2), then $b_2 a_{1j}/a_{2j} \ge b_1$ for at least one $j$.

b) If $t$ is any positive integer such that

$t > b_2 \max_j \{a_{1j}/a_{2j}\} \; (\ge b_1),$

then the solution set of (1) and (2) in nonnegative integer variables is the same as that of the single equation

$s_1 + t s_2 = b_1 + t b_2.$  (3)

MATHEWS observes that this theorem can be applied to two equations with nonnegative (but not all positive) integer coefficients by first replacing them with the pair

$s_1 + s_2 = b_1 + b_2$  (4)

$s_1 + 2 s_2 = b_1 + 2 b_2.$  (5)

Equations (4) and (5), which have all positive integer coefficients, are clearly equivalent to (1) and (2), and hence can assume the role of (1) and (2) in MATHEWS' theorem.

ELMAGHRABY and WIG [1970] have further noted that MATHEWS' results can be applied to a system of linear inequalities with nonnegative integer coefficients

$\sum_j a_{ij} x_j \le b_i, \quad i = 1, \ldots, m$  (6)

by adding slack variables to transform the inequalities into the equations

$\sum_j a_{ij} x_j + y_i = b_i, \quad i = 1, \ldots, m$  (7)

and then recursively using the construction (4) and (5) to produce equations that can assume the role of (1) and (2) in MATHEWS' theorem. Moreover, if some of the $a_{ij}$ are negative, but (6) includes constraints of the form $x_j \le U_j$, then these latter constraints may be added in appropriate multiples to the other constraints (after introducing slack variables) to make all of the $a_{ij}$ nonnegative in (7).


In the context of integer programming, such a reduction of a set of constraints to a single constraint seems very appealing at first glance, since knapsack (single-constraint) problems are often easier to solve than problems with multiple constraints, provided the problem coefficients lie in a "reasonable" range. However, the proposal to recursively aggregate pairs of equations into a single equation using (4) and (5) quickly runs into grave difficulty. To illustrate, consider for simplicity the case in which $b_i = b$ for all $i$. Then it is readily verified that aggregating the first two equations of (7) (using (4) and (5)) yields a right-hand side exceeding $2^3 b^2$ in (3) of MATHEWS' theorem. Aggregating this resulting equation (3) again with the third equation of (7) yields a new equation (3) in MATHEWS' theorem with a right-hand side exceeding $2^6 b^4$. In general the result of recursively aggregating $m$ such equations in this manner will yield an equation with a right-hand side exceeding $2^u b^v$, where $u = 3 \cdot 2^{m-2}$ and $v = 2^{m-1}$. This number is clearly astronomical for even as few as 7 constraints.
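To make the growth concrete, the following minimal Python sketch implements Theorem 1 together with the construction (4)-(5); the function names and the toy system of five copies of $x_1 + x_2 = 5$ are our own illustration and not part of the original development.

```python
from fractions import Fraction

def mathews_t(eq1, eq2):
    """Smallest integer t with t > b2 * max_j(a1j / a2j), as required by
    Theorem 1.  Columns that are zero in the second equation are skipped;
    in the constructions used below they are then zero in the first as well."""
    (a1, b1), (a2, b2) = eq1, eq2
    return int(b2 * max(Fraction(x, y) for x, y in zip(a1, a2) if y)) + 1

def aggregate(eq1, eq2):
    """Replace (1) and (2) by the single equation s1 + t*s2 = b1 + t*b2 of (3)."""
    (a1, b1), (a2, b2) = eq1, eq2
    t = mathews_t(eq1, eq2)
    return [x + t * y for x, y in zip(a1, a2)], b1 + t * b2

def positive_pair(eq1, eq2):
    """Construction (4)-(5): an equivalent pair with all-positive coefficients."""
    (a1, b1), (a2, b2) = eq1, eq2
    return (([x + y for x, y in zip(a1, a2)], b1 + b2),
            ([x + 2 * y for x, y in zip(a1, a2)], b1 + 2 * b2))

# Recursive aggregation of five copies of x1 + x2 = 5 via (4)-(5):
agg = ([1, 1], 5)
for eq in [([1, 1], 5)] * 4:
    agg = aggregate(*positive_pair(agg, eq))
print(agg[1])   # already of the order of 10**18 for only five equations
```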

3. An Improved Implementation

Before proceeding to more theoretical considerations, we shall first build upon the observations of ELMAGHRABY and WIG to provide a markedly improved way to implement MATHEWS' theorem. That is, instead of recursively aggregating via (4) and (5), we precondition the equations of (7) to make the use of (4) and (5) unnecessary.

Specifically, the preconditioned system results by replacing equation i of (7) with the sum of the first i equations. Then, to begin the aggregation process, the first equation of the new system is replaced by the sum of the first two equations, and the resulting first and second equations take the role of (1) and (2). After aggregation, equation (3) becomes the top equation of the remaining system, and once again the second equation (of the current system) is added to the first, and aggregation proceeds. In general, if s = c denotes the equation obtained by aggregating the first k − 1 equations of the preconditioned system, and if y = d denotes the k-th equation of this system, then equations (1) and (2) for the next step are respectively s + y = c + d and y = d.
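A sketch of this preconditioned implementation, under the same caveats as before (the helper names and the toy system are ours, not the paper's):

```python
from fractions import Fraction

def precondition(system):
    """Replace equation i of (7) by the sum of the first i equations."""
    out, acc_a, acc_b = [], None, 0
    for a, b in system:
        acc_a = list(a) if acc_a is None else [x + y for x, y in zip(acc_a, a)]
        acc_b += b
        out.append((list(acc_a), acc_b))
    return out

def aggregate_preconditioned(system):
    """Section 3 scheme: add the next preconditioned equation to the current
    aggregate, then combine the resulting pair by Theorem 1.  Columns that are
    zero in both equations of a pair are ignored."""
    pre = precondition(system)
    agg = pre[0]
    for a2, b2 in pre[1:]:
        a1 = [x + y for x, y in zip(agg[0], a2)]   # new "first" equation s + y = c + d
        b1 = agg[1] + b2
        t = int(b2 * max(Fraction(x, y) for x, y in zip(a1, a2) if y)) + 1
        agg = ([x + t * y for x, y in zip(a1, a2)], b1 + t * b2)
    return agg

print(aggregate_preconditioned([([1, 1], 5)] * 5)[1])   # about 1.7 * 10**6, versus
                                                         # about 10**18 above
```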

This approach to aggregation can be shown to result in a final right-hand side equal to a posynomial $c_m b^m + c_{m-1} b^{m-1} + \cdots + c_1 b$, where $c_m = m^2 (m-1)!$ and $c_{m-1} \ge m^2 (m-2)! \, 4^{m-2}$.

This is of course vastly better than recursively aggregating via (4) and (5), but still, unfortunately, vastly worse than one would hope possible. The first step toward improving this situation is indicated in the next section.

4. Generalization of MATHEWS' Theorem

The assumptions of MATHEWS' theorem can be weakened and the conclusion strengthened to provide the following statement:


Theorem 2.

Assume (1) has all nonnegative integer coefficients with $b_1 > 0$, and (2) has all positive integer coefficients. Then (a) of Theorem 1 is true and, moreover, (b) is true for all positive integer $t$ satisfying

$t > b_1 - \min_j \{a_{1j}\}$ and

$t \ge b_2 \max_j \{(a_{1j}/a_{2j}) - (b_1/(z\, a_{2j}))\}$, where

$z = \max_{\delta = 0,1} \{(\delta b_1 + b_2)/\min_j (\delta a_{1j} + a_{2j})\}.$

Proof:

a) Suppose on the contrary that $b_2 a_{1j} < b_1 a_{2j}$ for all $j$. Multiplying both sides by $x_j$ $(\ge 0)$ and summing yields $b_2 s_1 < b_1 s_2$, contradicting the fact that both sides equal $b_1 b_2$.

b) Clearly (1) and (2) imply (3). To show the reverse implication, suppose $s_2 = b_2 + q$, which from (3) gives $s_1 = b_1 - t q$. We argue $q = 0$ as follows. If $q > 0$, then $t > b_1 - \min_j \{a_{1j}\}$ implies $s_1 < \min_j \{a_{1j}\}$, which is impossible, by the nonnegativity of the $a_{1j}$ and the fact that any solution to (3) must have $x_j > 0$ for some $j$. If $q < 0$, then $s_2 < b_2$ and $s_1 \ge b_1 + t$. But by definition $t a_{2j} \ge b_2 a_{1j} - b_2 b_1/z$ for all $j$, which, upon multiplying both sides by $x_j$ and summing, yields $t s_2 \ge b_2 s_1 - b_2 b_1 (\sum_j x_j)/z$. Also, from the definition of $z$ it follows that $z \ge \sum_j x_j$ for all $x_j$ satisfying (3), and hence $t s_2 \ge b_2 s_1 - b_2 b_1$. Using the inequality $s_1 \ge b_1 + t$ we obtain $t s_2 \ge t b_2$, contrary to the fact that $s_2 < b_2$.

We note by the foregoing proof that any value of $z$ will suffice as long as $z \ge \sum_j x_j$ holds for all $x_j$ satisfying (3). Thus, the theorem can be further improved by specifying that $z = (b_1 + b_2 t)/\min_j (a_{1j} + a_{2j} t)$. Although this appears at first to be unacceptable, since it makes a lower bound on $t$ dependent on $t$ itself, brief examination shows that it is easy to determine the smallest $t$ satisfying the conditions of the theorem for $z$ so defined.
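For illustration, an admissible $t$ under this self-referential choice of $z$ can be obtained by re-evaluating the bound until it stabilizes. The sketch below is our own and settles for an admissible (rather than a provably smallest) value:

```python
from fractions import Fraction
from math import ceil

def theorem2_t(eq1, eq2):
    """An admissible t for Theorem 2 when z = (b1 + b2*t)/min_j(a1j + a2j*t).
    Columns that are zero in both equations are disregarded."""
    (a1, b1), (a2, b2) = eq1, eq2
    t = max(1, b1 - min(a1) + 1)                       # t > b1 - min_j a1j
    while True:
        z = Fraction(b1 + b2 * t, min(x + y * t for x, y in zip(a1, a2) if x or y))
        # t >= b2 * max_j ( a1j/a2j - b1/(z*a2j) )
        need = max(b2 * (z * x - b1) / (z * y) for x, y in zip(a1, a2) if y)
        if ceil(need) <= t:                            # bound is stable: t is admissible
            return t
        t = ceil(need)

# For d1: 2 4 1 1 0 0 = 9 and d2: 9 6 5 1 1 0 = 22 this gives t = 14, the value
# obtained in Section 7.3.
print(theorem2_t(([2, 4, 1, 1, 0, 0], 9), ([9, 6, 5, 1, 1, 0], 22)))
```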

One advantage of Theorem 2 in the present context arises from the fact that the theorem can be applied directly to the preconditioned system of Section 3 without having to add the second equation to the first at each step. The effect of this advantage is to produce a final right-hand side somewhat less than half of that resulting from Theorem 1. A second advantage of Theorem 2 is the smaller lower bound imposed on t. The effect of this latter advantage may be substantial (as demonstrated in the numerical examples of Section 7), but depends on the problem data and is difficult to calculate precisely.

5. Theorem for the "Complete Nonnegativity" Case

We now provide a theorem that directly accommodates the possibility of 0 coefficients in both equations (1) and (2).


Theorem 3.

Assume (1) and (2) have nonnegative integer coefficients with $b_1$ and $b_2$ positive. Let $t_1$ and $t_2$ be two positive integers satisfying the following conditions:

(i) $t_1$ and $t_2$ are relatively prime;

(ii) $t_1$ does not divide $b_2$ and $t_2$ does not divide $b_1$;

(iii) $t_1 > b_2 - a_2$ and $t_2 > b_1 - a_1$, where $a_i$ denotes the smallest of the positive $a_{ij}$.

Then, restricting the $x_j$ to nonnegative integers, the solution set of (1) and (2) is the same as the solution set of

$t_1 s_1 + t_2 s_2 = t_1 b_1 + t_2 b_2.$  (8)

Proof:

Clearly (1) and (2) imply (8). To show the reverse implication, we rewrite (8) in the form

$s_1 = b_1 - (s_2 - b_2)\, t_2/t_1.$

From (i) it follows that $s_2 - b_2 = t_1 q$ and $s_1 = b_1 - t_2 q$ for some integer $q$. We demonstrate $q = 0$ by contradiction. If $q > 0$ then $s_1 \le b_1 - t_2$ and hence $t_2 \le b_1 - s_1$. Moreover, by (ii) 1) and $s_1 = b_1 - t_2 q$ it follows that $s_1 \ne 0$. Thus $s_1 \ge a_1$, which in turn implies $t_2 \le b_1 - a_1$, contrary to (iii). Thus $q > 0$ is impossible. The impossibility of $q < 0$ follows similarly.

The chief advantage of Theorem 3 over the preceding theorems is that no preliminary manipulation of the problem equations is necessary (when all coefficients are nonnegative) in order to make this theorem immediately applicable. Satisfying condition (i) can sometimes be a nuisance, but ordinarily not a major one. The result of aggregating the $m$ equations of (7) (each with right-hand side $b$) by Theorem 3 gives a final right-hand side that typically lies between $2^{m-1} b^m$ and $2^{m-1} (b + 1)^m$, which is a factor of improvement roughly on the order of $m (m!/2^m)$ better than the result of applying Theorem 2. Still further improvements can be obtained from Theorem 3 by means of extensions indicated in the next section.
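A small sketch of this selection rule (our own helper names; it greedily takes the first admissible multipliers rather than searching for especially favorable ones):

```python
from itertools import count
from math import gcd

def theorem3_multipliers(eq1, eq2):
    """t1, t2 satisfying (i)-(iii) of Theorem 3: relatively prime, t1 > b2 - a2
    with t1 not dividing b2, and t2 > b1 - a1 with t2 not dividing b1, where a_i
    is the smallest positive coefficient of equation i."""
    (a1, b1), (a2, b2) = eq1, eq2
    a1_min = min(c for c in a1 if c > 0)
    a2_min = min(c for c in a2 if c > 0)
    t1 = next(t for t in count(max(b2 - a2_min + 1, 1)) if b2 % t != 0)
    t2 = next(t for t in count(max(b1 - a1_min + 1, 1))
              if b1 % t != 0 and gcd(t, t1) == 1)
    return t1, t2

def theorem3_aggregate(eq1, eq2):
    """The single equation t1*s1 + t2*s2 = t1*b1 + t2*b2 of (8)."""
    (a1, b1), (a2, b2) = eq1, eq2
    t1, t2 = theorem3_multipliers(eq1, eq2)
    return [t1 * x + t2 * y for x, y in zip(a1, a2)], t1 * b1 + t2 * b2

# For e1: 2 4 1 1 0 0 = 9 and e2: 7 2 4 0 1 0 = 13 this picks t1 = 14 and t2 = 11,
# the multipliers used in the first step of Section 7.4.
print(theorem3_multipliers(([2, 4, 1, 1, 0, 0], 9), ([7, 2, 4, 0, 1, 0], 13)))
```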

6. Extensions

Investigation of the proof of Theorem 3 shows that the restrictions placed on $t_1$ and $t_2$ by condition (iii) can be relaxed to allow these multiples to be selected from a wider range. Turning first to a rather general case, let $Y_i$ denote the set consisting of all distinct positive integers less than $k_i$ that can be assumed by $s_i$ (for $i = 1, 2$). Then it is readily verified that (iii) can be replaced by the condition

(iii') $t_1 > b_2 - k_2$ and $t_1$ does not divide any of the numbers $b_2 - y$, $y \in Y_2$; $t_2 > b_1 - k_1$ and $t_2$ does not divide any of the numbers $b_1 - y$, $y \in Y_1$.

1) Condition (ii) can of course be dispensed with here if $a_1$ is redefined to be the smallest $a_{1j}$.


Condition (iii') itself implies the still less restrictive condition

(iii'') $b_2 - t_1 < \min\{k_2,\ \min\{y \in Y_2 : t_1 \text{ divides } b_2 - y\}\}$ and

$b_1 - t_2 < \min\{k_1,\ \min\{y \in Y_1 : t_2 \text{ divides } b_1 - y\}\}.$

The generality of this amended version of Theorem 3, and the trouble involved in identifying the sets $Y_i$, would appear to make the extension of limited value. However, we will demonstrate that the elements of $Y_1$ and $Y_2$ can be efficiently generated by a particularly simple set of rules. Specifically, let $\alpha$ denote the vector $(\alpha_1, \alpha_2, \ldots, \alpha_r)$ whose components are the distinct positive integers contained among the $a_{ij}$ coefficients (for a given value of $i$), arranged in ascending order. Then $Y_i$ will consist of integers that can be expressed as the vector product $\alpha w$, where $w = (w_h)$ is a vector of nonnegative integers, determined as follows:

To begin, let $w = 0$, $h = 1$, and $Y_i = \emptyset$.

1. Let $p$ be the least positive integer such that $\alpha w + p \alpha_h \notin Y_i$. (When $h = 1$, $p$ is always 1.) If $\alpha w + p \alpha_h \le k_i$, go to Step 2; otherwise go to Step 3.

2. Let $w_h = w_h + p$, let $Y_i = Y_i \cup \{\alpha w\}$ (for the new $w$ vector), and let $h = 1$. Then return to Step 1.

3. If $h = r$, the generation of $Y_i$ is complete. If $h < r$, let $w_h = 0$, let $h = h + 1$, and return to Step 1.

The arrangement of the $\alpha_h$ in ascending order is the key to the legitimacy of the foregoing procedure, which is designed to automatically bypass alternative possibilities for generating the same integer from different $w$ vectors.

The justification of the procedure follows by noting that, if $\alpha w + p \alpha_h \in Y_i$, then by the sequence in which the $w$ vectors are generated 1) all integers of the form $\alpha w + p \alpha_h + \alpha w^*$ $(< k_i)$ have already been generated for those vectors $w^* \ge 0$ for which $w^*_i = 0$ for $i > h$. (The duplications provided by such "incremental" vectors $w^*$ are avoided by the procedure even without the check for inclusion in $Y_i$ at Step 1.)

1) The $w$ vectors need not be explicitly manufactured if one desires computational streamlining.

It is also possible to generate the elements of $Y_i$ by the algorithm of GLOVER [1969], which produces these elements in ascending order (thereby allowing an adaptive determination of the "cutoff" number $k_i$), but this method involves somewhat more computational machinery and is undoubtedly not as efficient in the present setting as the procedure indicated.
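The attainable values below a cutoff can also be produced by a plain dynamic-programming sweep over the coefficients. The sketch below (our own) computes the same set $Y_i$, though by straightforward enumeration rather than by the incremental scheme just described:

```python
def attainable_values(coeffs, cutoff):
    """Y_i: all positive integers below `cutoff` that are expressible as a
    nonnegative integer combination of the given positive coefficients."""
    reachable = {0}
    for value in range(1, cutoff):
        if any(value - c in reachable for c in coeffs if 0 < c <= value):
            reachable.add(value)
    return sorted(reachable - {0})

# With coefficients 11 and 14 (the slack columns of the aggregated equation in
# Section 7.4) the value 129 = 11*14 - 11 - 14 is the largest unattainable integer.
print(129 in attainable_values([11, 14], 200))   # False
```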

A final and somewhat less involved extension of Theorem 3 applies to the case in which the coefficients of (1) and (2) may be of arbitrary sign, but in which the presence of currently unaggregated constraints implies $U_i \ge s_i \ge L_i$, where at least two of these bounds are finite and $U_i > b_i > L_i$ $(i = 1, 2)$. Then, as long as $t_1$ and $t_2$ are nonzero and have a greatest common divisor equal to 1, it follows as before that $s_1 = b_1 + t_2 q$ and $s_2 = b_2 - t_1 q$, and any set of restrictions on $t_1$ and $t_2$ that force $s_1$ or $s_2$ to be outside of its bounded interval for $q \ne 0$ will make (8) equivalent to (1) and (2). Thus, in particular, $q > 0$ is avoided by any of the restrictions

$t_1 > b_2 - L_2$ or $t_1 < b_2 - U_2$, or

$t_2 > U_1 - b_1$ or $t_2 < L_1 - b_1$,

and $q < 0$ is avoided by any of the restrictions

$t_1 > U_2 - b_2$ or $t_1 < L_2 - b_2$, or

$t_2 > b_1 - L_1$ or $t_2 < b_1 - U_1$.

For the case in which (1) and (2) have nonnegative coefficients, this extension can yield better multiples than the direct statement of Theorem 3 provided the bounded intervals for $s_1$ and $s_2$ are relatively "tight" (e.g., $U_i - L_i < 2 b_i$). However, assuming that the bounds $U_i$ and $L_i$ are determined by reference to unaggregated equations (such as constraints imposing bounds on the problem variables), the same effect is created simply by adding appropriate (positive or negative) multiples of these unaggregated equations to (1) and (2) so that the latter satisfy the nonnegativity conditions of Theorem 3. Thereupon, after aggregating, one may again add appropriate multiples of the unaggregated equations to the aggregated equation to "eliminate" their presence. Although such pre- and post-aggregative manipulations are less convenient to apply than the indicated extension, they may give rise to better aggregated constraints by making it possible to generate the sets $Y_i$ in the manner previously discussed.

7. Numerical Examples

The ideas of the preceding sections may be illustrated by applying them to the following set of equations:

$2x_1 + 4x_2 + x_3 + x_4 + 0x_5 + 0x_6 = 9$
$7x_1 + 2x_2 + 4x_3 + 0x_4 + x_5 + 0x_6 = 13$
$3x_1 + 5x_2 + 6x_3 + 0x_4 + 0x_5 + x_6 = 17$

$x_j \ge 0$ and integer, $j = 1, \ldots, 6$.

For convenience we shall hereafter represent all equations by listing only their coefficients, the final entry in each list being the right-hand side. Thus the original set is represented by

e1: 2 4 1 1 0 0 9
e2: 7 2 4 0 1 0 13
e3: 3 5 6 0 0 1 17.

7.1 MATHEWS' Theorem Implemented Via (4) and (5)

First, to apply MATHEWS' theorem using (4) and (5), we replace e1 and e2 by the equations


e1': 9 6 5 1 1 0 22
e2': 11 10 6 2 1 0 31

which respectively take the role of (1) and (2). (Variables whose coefficients are 0 in both equations are disregarded.) The condition $t > b_2 \max_j \{a_{1j}/a_{2j}\}$ thus becomes $t > 31 \cdot 1$. Letting $t = 32$, we obtain the aggregated equation e1' + 32 e2', or

ē: 361 326 197 65 33 0 1014.

Equations ē and e3 are now put in the form of (4) and (5) to produce the equations

ē': 364 331 203 65 33 1 1031
ē'': 367 336 209 65 33 2 1048.

With these latter equations in the role of (1) and (2), we have $t > 1048 \cdot 1$, or $t = 1049$, which yields the final aggregated equation

385,347 352,795 219,444 68,850 34,650 2,099 1,100,383.

7.2 MATHEWS' Theorem Implemented with the Preconditioned System

Equations e1, e2 and e3 in the preconditioned form of Section 3 become

d1: 2 4 1 1 0 0 9
d2: 9 6 5 1 1 0 22
d3: 12 11 11 1 1 1 39.

To apply MATHEWS' theorem we first add d2 to d1 to obtain

d1': 11 10 6 2 1 0 31.

Equations d1' and d2 respectively take the roles of (1) and (2). Thus we obtain $t > 22 \cdot 2$, or $t = 45$, to give the aggregated equation d1' + 45 d2:

d2': 416 280 231 47 46 0 1021.

The next step is to treat d2' as the new "first" equation (in place of d1' and d2). Adding the current "second" equation, d3, to d2' gives

d3': 428 291 242 48 47 1 1069.

Equation d3' now takes the role of (1) and d3 takes the role of (2). Thus, $t > 39 \cdot 48$, or $t = 1773$. The final aggregated equation, d3' + 1773 d3, is then

21,704 19,794 19,745 1,821 1,820 1,774 70,216.

7.3 Aggregating by Theorem 2

We again use the preconditioned equations d1, d2 and d3, but without adding the second to the first at each step. An acceptable value for z in Theorem 2 at each step of aggregating such a preconditioned system is almost invariably the right-hand side of the equation that currently takes the role of (2). (A quick check of the resulting equation (3) immediately shows whether this assumption is valid. More simply, validity is assured in the present context whenever the computed value of t exceeds the right-hand side of equation (1).) Thus, with d1 and d2 initially in the roles of (1) and (2), the first value of t is computed relative to $z = 22$. From the inequality $t > b_2 \max_j \{(z a_{1j} - b_1)/(z a_{2j})\}$ we obtain $t > 22 \cdot (22 \cdot 4 - 9)/(22 \cdot 6)$, or $t = 14$. This exceeds the right-hand side of d1 (= 9), verifying the legitimacy of the assumed z value, and thus the aggregated equation is d1 + 14 d2, or

d4: 128 87 71 15 14 0 317.

For the next step, d3 takes the role of (2) and we let $z = 39$. The restriction on t becomes $t > 39 \cdot (39 \cdot 128 - 317)/(39 \cdot 12)$, or $t = 390$ (which exceeds 317). The final aggregated equation is therefore d4 + 390 d3, or

4,808 4,377 4,361 405 404 390 15,527.

7.4 Aggregating by Theorem 3

Theorem 3 applies directly to the original equations e1, e2 and e3. Taking e1 and e2 as (1) and (2), $t_1$ and $t_2$ are required to satisfy $t_1 > 13 - 1$ and $t_2 > 9 - 1$, where $t_1$ is not permitted to divide 13 and $t_2$ is not permitted to divide 9. Thus we may select $t_1 = 14$ and (to assure that $t_1$ and $t_2$ are relatively prime) $t_2 = 11$.

This yields the first aggregated equation

ē: 105 78 58 14 11 0 269.

With e3 and ē now in the roles of (1) and (2), we have $t_1 > 269 - 11$ and $t_2 > 17 - 1$, where $t_1$ may not divide 269 and $t_2$ may not divide 17. The values $t_1 = 259$ and $t_2 = 18$ are relatively prime and hence are acceptable. The final aggregated equation is then

2,667 2,699 2,598 252 198 259 9,245.

The preceding equation can be improved using the approach of Section 6. Opportunity for a smaller value of $t_1$ or $t_2$ occurs only for an equation in which all coefficients exceed 1, and hence, in the present context, for an equation which is already an aggregate of others. Equation ē is thus the only candidate. The set $Y_2$ for this equation can be generated entirely from the two coefficients 11 and 14. This follows from the fact that these coefficients are aggregated from coefficients which generate all possible left-hand sides in equations e1 and e2. (In general, to generate the set $Y_i$ for any aggregated equation derived from (7), it suffices to restrict attention to the coefficients of the slack variables.) It is unnecessary to go to the trouble of selecting a value of $k_2$ and of generating the set $Y_2$ in this case, since an acceptable value of $t_1$ is readily verified to be given by $t_1 = 269 - (14 \cdot 11 - (14 + 11))$, or $t_1 = 140$. Thus, to make $t_2$ and $t_1$ relatively prime, we select $t_2 = 19$ to give the aggregated equation

2,415 2,182 1,961 266 209 140 7,491.
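A machine check of such an aggregation is easy to set up: enumerate a box guaranteed to contain every nonnegative integer solution and compare solution sets. The sketch below (our own construction) does this for the first aggregation of this subsection, ē = 14 e1 + 11 e2; the variable $x_6$ is omitted since it appears in neither equation.

```python
from itertools import product

e1 = ([2, 4, 1, 1, 0], 9)
e2 = ([7, 2, 4, 0, 1], 13)
agg = ([14 * a + 11 * c for a, c in zip(e1[0], e2[0])], 14 * 9 + 11 * 13)

# All coefficients of the aggregate are positive, so every nonnegative solution of
# it (and hence of e1 and e2) satisfies x_j <= rhs // coefficient.
box = [range(agg[1] // c + 1) for c in agg[0]]

def solution_set(eqs):
    return {x for x in product(*box)
            if all(sum(c * v for c, v in zip(a, x)) == b for a, b in eqs)}

print(solution_set([agg]) == solution_set([e1, e2]))   # True
```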


8. Concluding Remarks

An intriguing possibility for using the results of the preceding sections is to apply them "in reverse"; that is, to use them to disaggregate an aggregated equation into new component equations. (Such an approach might be used as part of a cutting strategy, for example.) Disaggregation procedures can be attempted by a search over possibilities, and the development of effective ways to do this would be especially worthwhile. However, new theorems that yield more direct methods of disaggregation would be particularly useful. To give a simple example to demonstrate the point, it would be handy to be able to easily determine that the equation

$9x_1 + 8x_2 + 12x_3 = 46$

can be disaggregated into the equations

$x_1 = 2, \quad x_2 = 2, \quad x_3 = 1.$

The theorems of this paper make such a determination possible, but do not give an immediate clue about what multiples should be examined to effect the disaggregation.
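For this small instance a direct search makes the point; the sketch below (our own) enumerates all nonnegative integer solutions of the single equation and finds exactly one, which is precisely the disaggregated system above.

```python
from itertools import product

coeffs, rhs = [9, 8, 12], 46
solutions = [x for x in product(*(range(rhs // c + 1) for c in coeffs))
             if sum(c * v for c, v in zip(coeffs, x)) == rhs]
print(solutions)   # [(2, 2, 1)], i.e. x1 = 2, x2 = 2, x3 = 1
```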

References

BRADLEY, G. H.: Transformation of Integer Programs to Knapsack Problems, Discrete Mathematics, 1, No. 1, 29–45, 1971.

ELMAGHRABY, S. E., and M. K. WIG: On the Treatment of Stock Cutting Problems as Diophantine Programs, Operations Research Report No. 61, North Carolina State University at Raleigh, May 11, 1970.

GLOVER, F.: Integer Programming Over a Finite Additive Group, SIAM J. Control, 7, No. 2, 213–231, 1969.

MATHEWS, G. B.: On the Partition of Numbers, Proceedings of the London Mathematical Society, 28, 486–490, 1897.