DAFFODIL INTERNATIONAL UNIVERSITY JOURNAL OF SCIENCE AND TECHNOLOGY, VOLUME 4, ISSUE 1, JANUARY 2009
A COMPARATIVE STUDY OF THE METHODS OF
SOLVING NON-LINEAR PROGRAMMING PROBLEM
Bimal Chandra Das
Department of Textile Engineering
Daffodil International University, Dhaka, Bangladesh
E-mail: [email protected]
Abstract: The work presented in this paper is a comparative study of methods for solving non-linear programming (NLP) problems. The Kuhn-Tucker condition method is an efficient method of solving NLP problems. Using the Kuhn-Tucker conditions, the quadratic programming (QP) problem is reduced to the form of a linear programming (LP) problem, so a simplex-type algorithm can practically be used to solve the quadratic programming problem (Wolfe's algorithm). We have arranged the material of this paper in the following way. First we discuss non-linear programming problems. In the second step we discuss the Kuhn-Tucker condition method of solving NLP problems. Finally we compare the solution obtained by the Kuhn-Tucker condition method with those of other methods. For the problems considered we use MATLAB to graph the constraints and obtain the feasible region. We also plot the objective function to determine optimum points and compare the solutions thus obtained with exact solutions.
Keywords: Non-linear programming, objective function, convex region, pivotal element, optimal solution.
1 Introduction

Optimization plays the central role in decision making. Optimization is a synonym for maximization or minimization: it means choosing the best. Today, when taking any decision, we use modern scientific methods based on computer implementations. Modern optimization theory is based on computing, and with it we can select the best alternative value of the objective function. Optimization problems have two major divisions: one is the linear programming problem and the other is the non-linear programming problem [1]. But modern game theory, the dynamic programming problem and the integer programming problem are also parts of optimization theory, with a wide range of applications in modern science, economics and management. Linear and non-linear programming problems optimize an objective function subject to a class of linear or non-linear equality or inequality conditions called constraints, usually together with non-negativity restrictions on the variables; this formulation was introduced by [2]. In the present work we try to compare some methods of solving non-linear programming problems. We know that various algorithms can be used for solving a non-linear programming problem, but only a few of the methods are effective for particular problems. Usually the algorithms have no relation to one another, and there is no universal algorithm, like the simplex method in linear programming, for solving a non-linear programming problem. In this work we apply the methods to solving problems rather than give their theoretical descriptions.
2 Non-linear programming

Like linear programming, non-linear programming is a mathematical technique for determining the optimal solution to many business problems. The problem of maximizing or minimizing a given function

Maximize (or minimize) Z = f(x)
subject to gi(x) (≤, =, ≥) bi   (1)

is called the general non-linear programming problem if the objective function f(x), or any one of the constraint functions gi(x), is non-linear, or both are non-linear, for x non-negative. In a word, the non-linear programming problem is that of choosing non-negative values of certain variables so as to maximize or minimize a given non-linear function subject to a given set of linear or non-linear inequality constraints, or to maximize or minimize a linear function subject to a given set of non-linear inequalities. The problem (1) can be rewritten as:
Maximize Z = f(x1, x2, x3, …, xn)

subject to

g1(x1, x2, x3, …, xn) (≤, ≥) b1,
g2(x1, x2, x3, …, xn) (≤, ≥) b2,
..................................................
gm(x1, x2, x3, …, xn) (≤, ≥) bm,
x1, x2, …, xn ≥ 0.   (2)
Any vector x satisfying the constraints and non-negativity restrictions will be called a feasible solution of the problem. Geometrically, each of the n non-negativity restrictions xj ≥ 0, j = 1, 2, 3, …, n defines a half-space of non-negative values, and the intersection of all such half-spaces is the non-negative orthant, a subset of Euclidean n-space. In E² the non-negative orthant is the non-negative first quadrant. Each of the m inequality constraints, introduced by [3],

gi(x) (≤, ≥) bi,  i = 1, 2, …, m   (3)

also defines a set of points in Euclidean n-space, and the intersection of these m sets with the non-negative orthant is called the opportunity set, i.e.

X = { x ∈ En : g(x) (≤, ≥) b, x ≥ 0 }.   (4)

So, geometrically, a non-linear programming
problem is that of finding a point or a set of points in the opportunity set at which the
highest contour of the objective function is
attained. We know that the optimum solution of a linear programming problem does occur at an extreme point of the convex opportunity set. However, in the case of a non-linear programming problem the solution can exist on the boundary or in the interior of the opportunity set. (Weierstrass theorem: If the non-empty set X is compact (i.e. closed and bounded) and the objective function F(x) is continuous on X, then F(x) attains a global maximum on X, either in the interior or on the boundary of X [5].)
3 Kuhn-Tucker Theory
The impetus for generalizations of this theory is contained in the material presented, where we observed that, under certain conditions, a point at which f(x) takes on a relative maximum over points satisfying gi(x) = bi, i = 1, 2, …, m, is a saddle point of the Lagrangian function F(x, λ). The material presented was originally developed by H. W. Kuhn and A. W. Tucker [5]. The theorem has been of fundamental importance in developing numerical procedures for solving quadratic programming problems.
A function F(x, λ), x being an n-component vector and λ an m-component vector, is said to have a saddle point at [x°, λ°] if

F(x, λ°) ≤ F(x°, λ°) ≤ F(x°, λ)   (5)

holds for all [x, λ] in an ε-neighborhood of [x°, λ°]. If (5) holds for all x and λ, then F(x, λ) is said to have a saddle point in the large, or a global saddle point, at [x°, λ°]. We shall find it convenient to specialize the definition of a saddle point to cases where certain components of x and λ are restricted to be non-negative, others are to be non-positive, and a third category is to be unrestricted in sign.
Suppose the first s components of x are required to be non-negative, components s+1 through t non-positive, and the remainder unrestricted, and similarly for λ with break-points u and v. Then, with all derivatives evaluated at [x°, λ°], the following conditions are obtained:

∂F/∂xj ≤ 0, j = 1, …, s;  ∂F/∂xj ≥ 0, j = s+1, …, t;  ∂F/∂xj = 0, j = t+1, …, n;   (6)

xj° ≥ 0, j = 1, …, s;  xj° ≤ 0, j = s+1, …, t;  xj° unrestricted, j = t+1, …, n;   (7)

xj° (∂F/∂xj) = 0, j = 1, …, n;   (8)

∂F/∂λi ≥ 0, i = 1, …, u;  ∂F/∂λi ≤ 0, i = u+1, …, v;  ∂F/∂λi = 0, i = v+1, …, m;   (9)

λi° ≥ 0, i = 1, …, u;  λi° ≤ 0, i = u+1, …, v;  λi° unrestricted, i = v+1, …, m;   (10)

λi° (∂F/∂λi) = 0, i = 1, …, m.   (11)
Conditions (6) through (11) represent a set of necessary conditions which [x°, λ°] must satisfy if F(x, λ) has a saddle point at [x°, λ°] for [x, λ] ∈ W, provided that F ∈ C¹.
Now the sufficient conditions can be stated as follows. Let [x°, λ°] be a point satisfying (6) through (11). Then if there exists an ε-neighborhood about [x°, λ°] such that for points [x, λ] ∈ W in this neighborhood

F(x, λ°) ≤ F(x°, λ°) + ∇x F(x°, λ°)(x - x°),   (13)

F(x°, λ) ≥ F(x°, λ°) + ∇λ F(x°, λ°)(λ - λ°),   (14)

then F(x, λ) has a saddle point at [x°, λ°] for [x, λ] ∈ W. If (13) and (14) hold for all x ∈ W1, λ ∈ W2, it follows that F has a global saddle point at [x°, λ°] for [x, λ] ∈ W. From the above we can deduce the Kuhn-Tucker conditions as follows:
Let us consider the non-linear programming problem:

Maximize Z = f(x)
subject to g(x) ≤ b,
x ≥ 0.

To obtain the Kuhn-Tucker necessary conditions, let us convert the inequality constraints of the above problem to equality constraints by adding a vector of m slack variables:

Max Z = f(x)
subject to g(x) + s = b, or s = b - g(x), where s = (s1, s2, …, sm)^T.
Now the Lagrangian function for the problem takes the form

F(x, λ) = f(x) + λ{b - g(x)}.

Assuming f, g ∈ C¹, the first-order necessary conditions can be obtained by taking first-order derivatives of the Lagrangian function with respect to x and λ:

∂F/∂x ≡ ∂f/∂x - λ(∂g/∂x) ≤ 0, and x(∂F/∂x) ≡ x(∂f/∂x - λ ∂g/∂x) = 0;

∂F/∂λ ≡ b - g(x) ≥ 0, and λ(∂F/∂λ) ≡ λ(b - g(x)) = 0.
Rewriting the Lagrangian in component form, we get

F(x, λ) ≡ f(x) + Σi λi (bi - gi(x)),  i = 1, …, m.

Now, if f(x) takes a constrained local optimum at x = x*, it is necessary that a vector λ* exist such that

1. ∇x F(x*, λ*) ≡ ∇x f(x*) - Σi λi* ∇x gi(x*) ≤ 0;

2. (x*)^T ∇x F(x*, λ*) ≡ Σ(j=1..n) xj* [∇x f(x*) - Σi λi* ∇x gi(x*)]j = 0, with x* ≥ 0;

3. ∇λ F(x*, λ*) ≡ b - g(x*) ≥ 0, that is, bi - gi(x*) ≥ 0 if gi(x*) ≤ bi, and bi - gi(x*) = 0 if gi(x*) = bi;

4. (λ*)^T ∇λ F(x*, λ*) ≡ Σi λi* [bi - gi(x*)] = 0, with λ* ≥ 0.
4 Comparison of solutions by Kuhn-Tucker conditions and other methods

In this part we want to show that various NLP problems can be solved by different methods. Our aim is to show the effectiveness of the methods considered.

Example: Let us consider the problem

Maximize Z = 2x1 + 3x2 - x1²
subject to the constraints:
x1 + 2x2 ≤ 4, x1, x2 ≥ 0.

First we want to solve the above problem by the graphical solution method. The given problem can be rewritten as:
Maximize Z = -(x1 - 1)² + 3(x2 + 1/3)
subject to the constraints
x1 + 2x2 ≤ 4, x1, x2 ≥ 0.

We observe that our objective function is a parabola with vertex at (1, -1/3) and the constraints are linear. To solve the problem graphically, we first construct the graph of the constraint in the first quadrant (since x1 ≥ 0 and x2 ≥ 0), treating the inequality as the equation x1 + 2x2 = 4.
This approach was introduced by [6]. Each point of the plane has coordinates of the type (x1, x2), and conversely every ordered pair (x1, x2) of real numbers determines a point in the plane. Thus our search for the number pair (x1, x2) is restricted to the points of the first quadrant only.
Fig. 1 Optimum solution by graphical method
We get the convex region OAB as the opportunity set, and our search is for a pair (x1, x2) which gives a maximum value of 2x1 + 3x2 - x1² and lies in the convex region. The desired point will be that point of the region at which a side of the convex region is tangent to the parabola. For this we proceed as follows.
Differentiating the equation of the parabola, we get

2dx1 + 3dx2 - 2x1 dx1 = 0 ⇒ dx2/dx1 = -(2 - 2x1)/3.   (15)

Now differentiating the equation of the constraint, we get

dx1 + 2dx2 = 0 ⇒ dx2/dx1 = -1/2.   (16)

Solving equations (15) and (16), we get x1 = 1/4 and x2 = 15/8. This shows that the parabola Z = -(x1 - 1)² + 3(x2 + 1/3) has as a tangent the line x1 + 2x2 = 4 at the point (1/4, 15/8). Hence we get the maximum value of the objective function at this point. Therefore,

Zmax = 2x1 + 3x2 - x1² = 97/16 at x1 = 1/4, x2 = 15/8.
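The tangency computation in (15) and (16) can be verified with exact rational arithmetic. The following sketch simply re-does the algebra of equating the two slopes, assuming nothing beyond the formulas above:

```python
from fractions import Fraction as F

# Slope of the objective's level curve: dx2/dx1 = -(2 - 2*x1)/3   (15)
# Slope of the constraint line x1 + 2*x2 = 4: dx2/dx1 = -1/2      (16)
# Equating: -(2 - 2*x1)/3 = -1/2  =>  2 - 2*x1 = 3/2  =>  x1 = 1/4.
x1 = (2 - F(3, 2)) / 2
x2 = (4 - x1) / 2          # from the constraint x1 + 2*x2 = 4
z = 2*x1 + 3*x2 - x1**2    # objective value at the tangency point
print(x1, x2, z)           # 1/4 15/8 97/16
```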
Let us solve the above problem by using the Kuhn-Tucker conditions [7]. The Lagrangian function of the given problem is

F(x1, x2, λ) ≡ 2x1 + 3x2 - x1² + λ(4 - x1 - 2x2).

By the Kuhn-Tucker conditions, we obtain

(a) ∂F/∂x1 ≡ 2 - 2x1 - λ ≤ 0,
(b) ∂F/∂x2 ≡ 3 - 2λ ≤ 0,
(c) ∂F/∂λ ≡ 4 - x1 - 2x2 ≥ 0,
(d) x1(∂F/∂x1) + x2(∂F/∂x2) ≡ x1(2 - 2x1 - λ) + x2(3 - 2λ) = 0,
(e) λ(∂F/∂λ) ≡ λ(4 - x1 - 2x2) = 0, with λ ≥ 0.

Now the following cases arise:
Case (i): Let λ = 0. In this case, from (a) and (b) we get

∂F/∂x1 ≡ 2 - 2x1 ≤ 0 and ∂F/∂x2 ≡ 3 - 2·0 ≤ 0 ⇒ 3 ≤ 0,

which is impossible, so this solution is to be discarded [8].

Case (ii): Let λ ≠ 0. In this case, from (e) we get

λ(4 - x1 - 2x2) = 0 ⇒ 4 - x1 - 2x2 = 0 ⇒ x1 + 2x2 = 4.   (17)

Also, from ∂F/∂x1 ≡ 2 - 2x1 - λ ≤ 0 and ∂F/∂x2 ≡ 3 - 2λ ≤ 0, we have

2x1 + λ - 2 ≥ 0 and 2λ - 3 ≥ 0 ⇒ λ ≥ 3/2.

If we take λ = 3/2, then 2x1 ≥ 1/2. If we consider 2x1 = 1/2, then x1 = 1/4. Now, putting the value of x1 in (17), we get x2 = 15/8.
∴ (x1, x2, λ) = (1/4, 15/8, 3/2). For this solution:

(a) ∂F/∂x1 ≡ 2 - 2·(1/4) - 3/2 = 0 ≤ 0 (satisfied);
(b) ∂F/∂x2 ≡ 3 - 2·(3/2) = 0 ≤ 0 (satisfied);
(c) ∂F/∂λ ≡ 4 - 1/4 - 2·(15/8) = 0 ≥ 0 (satisfied);
(d) x1(∂F/∂x1) + x2(∂F/∂x2) ≡ (1/4)·0 + (15/8)·0 = 0 (satisfied);
(e) λ(∂F/∂λ) ≡ (3/2)·0 = 0 (satisfied).

Thus all the Kuhn-Tucker necessary conditions are satisfied at the point (1/4, 15/8).
Hence the optimum (maximum) solution to the given NLP problem is

Zmax = 2x1 + 3x2 - x1² = 2·(1/4) + 3·(15/8) - (1/4)² = 97/16 at x1 = 1/4, x2 = 15/8.
Let us solve the problem by Beale's method.

Maximize f(x) = 2x1 + 3x2 - x1²
subject to the constraints:
x1 + 2x2 ≤ 4, x1, x2 ≥ 0.

Introducing a slack variable s, the constraint becomes

x1 + 2x2 + s = 4, x1, x2, s ≥ 0.

Since there is only one constraint, let s be a basic variable. Thus we have, by [9],

xB = (s), xNB = (x1, x2), with s = 4.

Expressing the basic xB and the objective function in terms of the non-basic xNB, we have

s = 4 - x1 - 2x2 and f = 2x1 + 3x2 - x1².
We evaluate the partial derivatives of f with respect to the non-basic variables at xNB = 0:

(∂f/∂x1) at xNB = 0: 2 - 2x1 = 2 - 2·0 = 2;  (∂f/∂x2) at xNB = 0: 3.

Since both partial derivatives are positive, the current solution can be improved. As ∂f/∂x2 gives the most positive value, x2 will enter the basis. Now, to determine the leaving basic variable, we compute the ratios:
min{ αh0/αhk, γk0/γkk } = min{ 4/2, 3/0 } = 4/2 = 2

(the second ratio is infinite, since f contains no x2² term). Since the minimum occurs for the constraint ratio, s will leave the basis [10].
Thus, expressing the new basic variable x2, as well as the objective function f, in terms of the new non-basic variables (x1 and s), we have:

x2 = 2 - (1/2)x1 - (1/2)s,
f = 2x1 + 3(2 - (1/2)x1 - (1/2)s) - x1² = 6 + (1/2)x1 - (3/2)s - x1².
We again evaluate the partial derivatives of f with respect to the non-basic variables:

(∂f/∂x1) at xNB = 0: 1/2 - 2x1 = 1/2;  (∂f/∂s) at xNB = 0: -3/2.
Since the partial derivatives are not all negative, the current solution is not optimal; clearly, x1 will enter the basis. For the next criterion, we compute the ratios:
min{ α20/α21, γ10/γ11 } = min{ 2/(1/2), (1/2)/2 } = min{ 4, 1/4 } = 1/4.

Since the minimum of these ratios corresponds to γ10/γ11, no basic variable is removed. Thus we introduce a free variable u1 (unrestricted) as an additional non-basic variable, defined by

u1 = (1/2)(∂f/∂x1) = (1/2)(1/2 - 2x1) = 1/4 - x1.
Note that now the basis has two basic variables, x2 and x1 (just entered). That is, we have

xNB = (s, u1) and xB = (x1, x2).

Expressing the basic xB in terms of the non-basic xNB, we have

x1 = 1/4 - u1, and x2 = 2 - (1/2)x1 - (1/2)s = 15/8 + (1/2)u1 - (1/2)s.
The objective function, expressed in terms of xNB, is

f = 2(1/4 - u1) + 3(15/8 + (1/2)u1 - (1/2)s) - (1/4 - u1)² = 97/16 - (3/2)s - u1².
Now,

(∂f/∂s) at xNB = 0: -3/2;  (∂f/∂u1) at xNB = 0: -2u1 = 0.

Since ∂f/∂xj ≤ 0 for all xj in xNB and ∂f/∂u1 = 0,
the current solution is optimal. Hence the optimal basic feasible solution to the given problem is:

x1 = 1/4, x2 = 15/8, Z* = 97/16.
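Beale's final reduced objective f = 97/16 - (3/2)s - u1² can be sanity-checked by substituting x1 = 1/4 - u1 and x2 = 15/8 + (1/2)u1 - (1/2)s back into the original objective; the identity should hold for every choice of u1 and s. A sketch with exact rationals:

```python
from fractions import Fraction as F

def f(x1, x2):          # original objective
    return 2*x1 + 3*x2 - x1**2

def reduced(u1, s):     # Beale's reduced objective in the final non-basic variables
    return F(97, 16) - F(3, 2)*s - u1**2

# The identity should hold for arbitrary u1 (free) and s (slack)
for u1 in (F(0), F(1, 3), F(-1, 2)):
    for s in (F(0), F(1, 5), F(2)):
        x1 = F(1, 4) - u1
        x2 = F(15, 8) + u1/2 - s/2
        assert f(x1, x2) == reduced(u1, s)

print(reduced(F(0), F(0)))  # 97/16: the optimum at u1 = s = 0
```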
Let us solve the given problem by using Wolfe's algorithm [3]:

Max Z = 2x1 + 3x2 - x1²
subject to the constraints
x1 + 2x2 ≤ 4, x1, x2 ≥ 0.

First we convert the inequality constraint to an equality by introducing a slack variable s ≥ 0:

x1 + 2x2 + s = 4.   (18)

Therefore, the Lagrangian function of the problem is

L(x, λ) ≡ 2x1 + 3x2 - x1² + λ(4 - x1 - 2x2).
Here

D = | -1  0 |,  A = (1  2),  C = (2, 3)^T,  b = (4).
    |  0  0 |
We know, by Wolfe's algorithm:

-2DX + A^T λ - V = C

⇒ (2x1, 0)^T + (λ, 2λ)^T - (v1, v2)^T = (2, 3)^T

⇒ 2x1 + λ - v1 = 2,   (19)
   2λ - v2 = 3.   (20)
Introducing two artificial variables a1 and a2 in equations (19) and (20), we get

2x1 + λ - v1 + a1 = 2,   (21)
2λ - v2 + a2 = 3.   (22)
The quadratic programming problem is equivalent to

Min Z = a1 + a2, i.e. Max (-Z) = -a1 - a2,

subject to the constraints

x1 + 2x2 + s = 4,
2x1 + λ - v1 + a1 = 2,
2λ - v2 + a2 = 3,
x1, x2, λ, v1, v2, s, a1, a2 ≥ 0,

with x1v1 = 0, x2v2 = 0, λs = 0.
The solution of the problem can be shown in the following modified simplex tableaux; this procedure was introduced by [10]. Here we have three constraint equations in eight unknowns, so in a basic solution of the system three variables take non-zero values and the others are zero. We will find non-zero values for x1, x2 and λ or s. In our case the initial basic solution is a1 = 2, a2 = 3, s = 4. Our next goal is to improve the value of Z. In Table 1 we put the constraint system as:
Table 1 Constraints system

x1   x2   λ    v1   v2   s    a1   a2   c
 1    2   0     0    0   1    0    0    4
 2    0   1    -1    0   0    1    0    2
 0    0   2     0   -1   0    0    1    3
Now taking x1 as the leading variable, we get min{4/1, 2/2} = 2/2. So the second element of the first column is the pivotal element, and the corresponding column is the pivotal column. Thus x1 enters the basis. Reducing the pivotal element to unity by dividing all elements of the pivotal row by 2, we get Table 2.
Table 2 Reducing the first pivotal element to unity

x1   x2   λ     v1    v2   s    a1    a2   c
 1    2   0      0     0   1    0     0    4
 1    0   1/2   -1/2   0   0    1/2   0    1
 0    0   2      0    -1   0    0     1    3
Reducing to zero all elements of the pivotal column except the pivotal one, we get Table 3.

Table 3 Reducing to zero all elements of the first pivotal column except the pivotal one

x1   x2   λ      v1    v2   s    a1    a2   c
 0    2   -1/2   1/2    0   1   -1/2   0    3
 1    0   1/2   -1/2    0   0    1/2   0    1
 0    0   2      0     -1   0    0     1    3
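The pivot operations that take Table 1 to Table 3 (and the later tableaux) are ordinary Gauss-Jordan pivots and can be reproduced with a short routine. The sketch below (our own helper, not part of Wolfe's original description) encodes Table 1 row-wise with columns x1, x2, λ, v1, v2, s, a1, a2, c, and pivots on the second row of the x1 column; exact fractions avoid round-off:

```python
from fractions import Fraction as F

def pivot(T, r, c):
    """Scale row r so T[r][c] == 1, then eliminate column c from all other rows."""
    p = T[r][c]
    T[r] = [v / p for v in T[r]]
    for i in range(len(T)):
        if i != r:
            T[i] = [a - T[i][c] * b for a, b in zip(T[i], T[r])]

# Table 1, row-wise (columns: x1, x2, λ, v1, v2, s, a1, a2 | c)
T = [[F(v) for v in row] for row in (
    (1, 2, 0,  0,  0, 1, 0, 0, 4),   # x1 + 2x2 + s = 4
    (2, 0, 1, -1,  0, 0, 1, 0, 2),   # 2x1 + λ - v1 + a1 = 2
    (0, 0, 2,  0, -1, 0, 0, 1, 3),   # 2λ - v2 + a2 = 3
)]

pivot(T, 1, 0)  # x1 enters the basis: pivot on row 2 of the x1 column
print([int(row[-1]) for row in T])  # c-column of Table 3: [3, 1, 3]
```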
Now we take x2 as our next leading variable; the first element of the second column is our next pivotal element, and we reduce it to unity by dividing all elements of the pivotal row by 2. Next, taking λ as our next leading variable, we get min{ 1/(1/2), 3/2 } = 3/2. So 2 is our next pivotal element. Reducing the pivotal element to unity, we get Table 4.
Table 4 Reducing the third pivotal element to unity

x1   x2   λ      v1    v2    s     a1    a2    c
 0    1   -1/4   1/4    0    1/2  -1/4   0     3/2
 1    0   1/2   -1/2    0    0     1/2   0     1
 0    0   1      0    -1/2   0     0     1/2   3/2
Making zero all elements of the pivotal column except the pivotal one, we get Table 5.

Table 5 Optimal solution

x1   x2   λ    v1     v2    s     a1    a2    c
 0    1   0    1/4   -1/8   1/2  -1/4   1/8   15/8
 1    0   0   -1/2    1/4   0     1/2  -1/4   1/4
 0    0   1    0     -1/2   0     0     1/2   3/2

From Table 5 we obtain the optimal solution as x1 = 1/4, x2 = 15/8, λ = 3/2.
Thus the optimal solution of the given QP problem is

Max Z = 2x1 + 3x2 - x1² = 2·(1/4) + 3·(15/8) - (1/4)² = 97/16 at (x1*, x2*) = (1/4, 15/8),

which is the same as we obtained by the Kuhn-Tucker condition method. For all the kinds of non-linear programming problems considered, we can show that the optimal solution by the Kuhn-Tucker conditions is the same as that of any other method considered. Therefore the solutions obtained by the graphical solution method, the Kuhn-Tucker conditions and Wolfe's method are the same.
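Since all three methods report the same optimum, one further independent check is a brute-force search. The grid step of 1/8 below is chosen (an assumption of this sketch) so that the analytic optimum (1/4, 15/8) is itself a grid point:

```python
# Enumerate all feasible grid points of the problem
#   max Z = 2*x1 + 3*x2 - x1**2  s.t.  x1 + 2*x2 <= 4, x1, x2 >= 0
# on a 1/8-spaced grid covering 0 <= x1 <= 4, 0 <= x2 <= 2.
candidates = [
    (2*x1 + 3*x2 - x1**2, x1, x2)
    for i in range(33) for j in range(17)
    for x1, x2 in [(i / 8, j / 8)]
    if x1 + 2*x2 <= 4  # feasibility
]
z, x1, x2 = max(candidates)
print(z, x1, x2)  # 6.0625 0.25 1.875, i.e. Z = 97/16 at (1/4, 15/8)
```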
5 Conclusion

To obtain an optimal solution to a non-linear programming problem, we observe that the Kuhn-Tucker conditions are more useful than the other methods of solving NLP problems, because in NLP particular problems are solved by particular methods: there is no universal method of solving NLP problems. But we have shown that by the Kuhn-Tucker conditions all the kinds of NLP problems considered can be solved. An NLP problem involving two variables can easily be solved by the graphical solution method, but a problem involving more variables cannot, and the graphical solution method does not always give an optimal solution for every NLP problem. Moreover, only quadratic programming problems can be solved by Wolfe's method. Therefore, from the above discussion, we can say that the Kuhn-Tucker conditions are the best method for solving NLP problems.
References

[1] Greig, D. M.: "Optimization". Longman Group Limited, New York (1980).
[2] Gupta, P. K. and Man Mohan: "Linear Programming and Theory of Games". Sultan Chand & Sons, New Delhi, India.
[3] Hohenbalken, B. von: "Simplicial Decomposition in Non-linear Programming Procedure".
[4] Hadley, G.: "Non-linear and Dynamic Programming".
[5] Bazaraa, M. S. and Shetty, C. M.: "Non-linear Programming: Theory and Algorithms".
[6] Mittal and Sethi: "Linear Programming". Pragati Prakashan, Meerut, India, 1997.
[7] Abadie, J.: "On the Kuhn-Tucker Theory", in Non-Linear Programming, J. Abadie (Ed.), 1967.
[8] Hildreth, C.: "A Quadratic Programming Procedure". Naval Research Logistics Quarterly.
[9] Himmelblau, D. M.: "Applied Non-linear Programming".
[10] Walsh, G. R.: "Methods of Optimization". John Wiley and Sons Ltd., 1975, Rev. 1985.
Bimal Chandra Das completed his M.Sc. in Pure Mathematics and B.Sc. (Hons.) in Mathematics at Chittagong University. Now he is working as a Lecturer in the Department of Textile Engineering at Daffodil International University. His area of research is Operations Research.