Different Non-Linear Programming Methods

Essay added: 28-10-2015

Linear programming has been successfully applied to managerial decision making in business, the service sector, government, and manufacturing. However, it is not without inherent limitations, the most significant being its linearity assumption. A linear objective function requires constant returns to scale, constant marginal returns, fixed input and output prices, and so on. When increasing or decreasing returns to scale appear in the objective function or constraints, we must rely on non-linear programming methods. Non-linearity also arises when any of the cost or profit coefficients in a linear programming model is a random variable.


In mathematics, nonlinear programming (NLP) is the process of solving a system of equalities and inequalities, collectively termed constraints, over a set of unknown real variables, along with an objective function to be maximized or minimized, where some of the constraints or the objective function are nonlinear.

Mathematical formulation of the problem

The general non-linear programming problem can be stated mathematically in the following form

Optimize (max or min) Z = f(x₁, x₂, …, xₙ)

subject to the constraints

gᵢ(x₁, x₂, …, xₙ) {≤, =, ≥} bᵢ, i = 1, 2, …, m

and xⱼ ≥ 0 for all j = 1, 2, …, n,

where f and the gᵢ are real-valued functions of the n decision variables and at least one of them is non-linear.


A typical nonconvex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tank ship. Owing to economic batch size, the cost functions may have discontinuities in addition to smooth changes.

Practical situation of non-linearity

The situations in which non-linearities are built into the programming model include:

Gasoline blending: a model for blending gasoline from refinery raw stocks usually contains a non-linear constraint for each blend relating to its octane rating, since this quality characteristic varies non-linearly with the amount of tetraethyl lead added to the mix.

Sales revenue: in marketing, we usually observe that the lower a product's price, the greater the sales quantity. Sales revenue therefore does not vary proportionately with price, and this phenomenon makes the objective function non-linear.

Safety-stock inventory levels: safety stocks are usually maintained to accommodate weekly fluctuations in sales. One approach used to solve such a multi-period model is to let the safety-stock level for an item be a function of both its forecasted sales quantity and the fraction of capacity utilization implied by this forecast.


The following features distinguish non-linear programming problems from linear ones:

• The optimal solution may occur at an extreme point, on the boundary of the feasible region, or in the interior of the feasible region.

• Many problems have both a global optimum and several local optima, and it is very hard to distinguish the global optimum from the local ones.
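The first point can be illustrated with a minimal sketch in Python on a hypothetical problem (the objective, bounds, and constraint are all invented for this illustration): a concave objective maximized over a polyhedral feasible region can attain its optimum strictly inside the region, where no vertex search in the style of linear programming would look.

```python
# Hypothetical illustration: maximize Z = 10 - (x - 2)^2 - (y - 2)^2
# over the box 0 <= x, y <= 4 with the linear constraint x + y <= 6.
# A coarse grid search is enough to show that the maximizer (2, 2)
# lies strictly inside the feasible region, not at an extreme point.

best = None
steps = 81  # grid resolution along each axis
for i in range(steps):
    for j in range(steps):
        x, y = 4 * i / (steps - 1), 4 * j / (steps - 1)
        if x + y <= 6:  # linear feasibility check
            z = 10 - (x - 2) ** 2 - (y - 2) ** 2
            if best is None or z > best[0]:
                best = (z, x, y)

print(best)  # (10.0, 2.0, 2.0) -- an interior optimum
```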

Methods for solving non-linear programming problem

Several methods have been developed for solving non-linear programming problems. Among them are the direct substitution method, the Lagrange multiplier method, the Kuhn-Tucker conditions, and the graphical method.

Direct substitution method

Since the constraint set is continuous and differentiable, any variable in the constraint set can be expressed in terms of the remaining variables and then substituted into the objective function. The new objective function so obtained is not subject to any constraint, and its optimum value can therefore be found by unconstrained optimization methods.

This method is sometimes inconvenient, particularly when the objective function involves more than two variables subject to constraints.
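A minimal sketch of the idea in Python, on a hypothetical problem (the numbers here are invented, not those of the worked example that follows): minimize Z = x² + y² subject to x + y = 4. Eliminating y via the constraint leaves a one-variable unconstrained problem.

```python
# Direct substitution: the equality constraint y = 4 - x is substituted
# into Z = x^2 + y^2, giving an unconstrained function of x alone.

def z_substituted(x):
    """Objective after eliminating y: Z(x) = x^2 + (4 - x)^2."""
    y = 4 - x
    return x ** 2 + y ** 2

# Setting dZ/dx = 2x - 2(4 - x) = 4x - 8 = 0 gives x = 2;
# d^2Z/dx^2 = 4 > 0 confirms this stationary point is a minimum.
x_star = 2.0
y_star = 4 - x_star
print(x_star, y_star, z_substituted(x_star))  # 2.0 2.0 8.0
```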

Example of Direct substitution method

Question: find the optimum solution of the following constrained multivariable problem.

Minimize Z=+

Subject to the constraint

Solution: since the given problem has three variables and one equality constraint, any one of the variables can be eliminated from Z with the help of the equality constraint. Let us choose a variable to eliminate from Z. Then, from the equality constraint, we have

Substituting the value of in the objective function, we get

Z or

The necessary condition for a minimum of Z is that the gradient of Z vanishes.


That is …..(i)

….. (ii)

On solving equations (i) and (ii) we get and

To find whether the solution so obtained is a minimum or not, apply the sufficiency condition by forming the Hessian matrix. The Hessian matrix for the given objective function is

Since the matrix is symmetric and its principal diagonal elements are positive, it is positive definite and the objective function is convex. Hence the optimum solution to the given problem is

Lagrange Multipliers Method

In mathematical optimization, the method of Lagrange multipliers (named after Joseph-Louis Lagrange) provides a strategy for finding the maxima and minima of a function subject to constraints.

For example, consider the optimization problem

maximize f(x, y)

subject to g(x, y) = c.

We introduce a new variable (λ), called a Lagrange multiplier, and study the Lagrange function defined by

Λ(x, y, λ) = f(x, y) + λ · (g(x, y) − c)

(the λ term may be either added or subtracted). If (x, y) is a maximum for the original constrained problem, then there exists a λ such that (x, y, λ) is a stationary point for the Lagrange function (stationary points are those points where the partial derivatives of Λ are zero). However, not all stationary points yield a solution of the original problem. Thus, the method of Lagrange multipliers yields a necessary condition for optimality in constrained problems.

Interpretation of the Lagrange multiplier

The value of the Lagrange multiplier, which was introduced as an additional variable, can be used to provide valuable information about the sensitivity of the optimal value of the objective function to changes in resource levels (the right-hand-side values of the constraints).

The general non-linear programming problem with two decision variables and one equality constraint can be stated as

Minimize Z = f(x₁, x₂)

subject to the constraint

g(x₁, x₂) = b.

The necessary conditions to be satisfied for a solution of the problem are

∂f/∂xⱼ + λ ∂g/∂xⱼ = 0, j = 1, 2 …..(iii)

If we want to observe the effect of a change in b on the optimal value of the objective function, we differentiate the constraint with respect to b, getting

(∂g/∂x₁)(∂x₁/∂b) + (∂g/∂x₂)(∂x₂/∂b) = 1.

The change in the optimal objective value is

∂Z/∂b = (∂f/∂x₁)(∂x₁/∂b) + (∂f/∂x₂)(∂x₂/∂b).

Rewrite condition (iii) as

∂f/∂xⱼ = −λ ∂g/∂xⱼ, j = 1, 2.

Substituting this value from equation (iii), we get

∂Z/∂b = −λ [(∂g/∂x₁)(∂x₁/∂b) + (∂g/∂x₂)(∂x₂/∂b)] = −λ.

Hence ∂Z/∂b = −λ. This relationship indicates that if we increase (or decrease) b, the value of λ tells us approximately how much the optimal value of the objective function will decrease (or increase). Thus, depending on whether λ is positive, negative, or zero, it provides a different estimate of the change in the objective function.
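This sensitivity relation can be checked numerically on a hypothetical problem (chosen for this sketch, not taken from the text): minimize Z = x² + y² subject to x + y = b. The optimum is x = y = b/2 with Z*(b) = b²/2, and the stationarity condition 2x + λ = 0 gives λ = −b at the optimum.

```python
# Sensitivity check on a hypothetical problem: minimize x^2 + y^2
# subject to x + y = b. Closed form: x = y = b/2, Z*(b) = b^2 / 2,
# and the stationarity condition 2x + lambda = 0 gives lambda = -b.

def z_star(b):
    """Optimal objective value as a function of the resource level b."""
    return b ** 2 / 2

b = 4.0
lam = -b  # Lagrange multiplier at the optimum for this problem

h = 1e-6  # central finite difference approximates dZ*/db
dZdb = (z_star(b + h) - z_star(b - h)) / (2 * h)

print(round(dZdb, 6), -lam)  # 4.0 4.0 -- i.e. dZ*/db = -lambda
```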

Applications of the Lagrange multiplier

Economics

Constrained optimization plays a central role in economics. For example, the choice problem for a consumer is represented as one of maximizing a utility function subject to a budget constraint. The Lagrange multiplier has an economic interpretation as the shadow price associated with the constraint, in this example the marginal utility of income.

Control theory

In optimal control theory, the Lagrange multipliers are interpreted as costate variables, and the optimization problem is reformulated as the minimization of the Hamiltonian, in accordance with Pontryagin's minimum principle.

Example of Lagrange multiplier

Suppose you want to find the maximum values of

f(x, y) = x²y

with the condition that the x and y coordinates lie on the circle around the origin with radius √3, that is,

g(x, y) = x² + y² = 3.

As there is just a single condition, we will use only one multiplier, say λ.

The constraint g(x, y) − 3 is identically zero on the circle of radius √3, so any multiple of it may be added to f(x, y) leaving f(x, y) unchanged in the region of interest. Let

Λ(x, y, λ) = f(x, y) + λ(g(x, y) − 3) = x²y + λ(x² + y² − 3).

The critical values of Λ occur when its gradient is zero. Setting the partial derivatives to zero gives

2xy + 2λx = 0 …..(i)

x² + 2λy = 0 …..(ii)

x² + y² − 3 = 0 …..(iii)

Equation (iii) is just the original constraint. Equation (i) implies x = 0 or λ = −y. In the first case, if x = 0, then we must have y = ±√3 by (iii) and then λ = 0 by (ii). In the second case, if λ = −y, substituting into equation (ii) we have that

x² − 2y² = 0.

Then x² = 2y². Substituting into equation (iii) and solving for y gives y = ±1.

Thus there are six critical points:

(√2, 1), (−√2, 1), (√2, −1), (−√2, −1), (0, √3), (0, −√3).

Evaluating the objective at these points, we find

f(±√2, 1) = 2, f(±√2, −1) = −2, f(0, ±√3) = 0.

Therefore, the objective function attains a global maximum (with respect to the constraints) at (±√2, 1) and a global minimum at (±√2, −1). The point (0, √3) is a local minimum and (0, −√3) is a local maximum, as may be determined by consideration of the Hessian matrix of Λ.
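The stationary points derived above can be verified numerically. The sketch below assumes the objective f(x, y) = x²y and constraint x² + y² = 3, the classic example whose algebra (x² = 2y², six critical points on the circle of radius √3) this passage follows.

```python
import math

# Check the six critical points of f(x, y) = x^2 * y on the circle
# x^2 + y^2 = 3 (assumed from the algebra in the passage above).

def f(x, y):
    return x ** 2 * y

def on_circle(x, y, tol=1e-9):
    return abs(x ** 2 + y ** 2 - 3) < tol

r2, r3 = math.sqrt(2), math.sqrt(3)
critical = [(r2, 1), (-r2, 1), (r2, -1), (-r2, -1), (0, r3), (0, -r3)]

assert all(on_circle(x, y) for x, y in critical)  # all satisfy the constraint
for x, y in critical:
    # objective values: 2 at (±√2, 1), -2 at (±√2, -1), 0 at (0, ±√3)
    print((round(x, 4), round(y, 4)), round(f(x, y), 4))
```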

Kuhn-Tucker conditions

In mathematics, the Karush-Kuhn-Tucker conditions (also known as the Kuhn-Tucker or KKT conditions) are necessary for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.

We consider the following nonlinear optimization problem:

minimize f(x)

subject to gᵢ(x) ≤ 0 (i = 1, …, m) and hⱼ(x) = 0 (j = 1, …, l),

where f(x) is the function to be minimized, gᵢ(x) are the inequality constraint functions, hⱼ(x) are the equality constraint functions, and m and l are the numbers of inequality and equality constraints, respectively.

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published them. Later scholars discovered that the necessary conditions for this problem had already been stated by William Karush in his master's thesis.

Necessary conditions

Suppose that the objective function, i.e., the function to be minimized, is f(x) and the constraint functions are gᵢ(x) and hⱼ(x). Further, suppose they are continuously differentiable at a point x*. If x* is a local minimum that satisfies some regularity conditions, then there exist constants μᵢ (i = 1, …, m) and λⱼ (j = 1, …, l) such that

Stationarity: ∇f(x*) + Σᵢ μᵢ ∇gᵢ(x*) + Σⱼ λⱼ ∇hⱼ(x*) = 0

Primal feasibility: gᵢ(x*) ≤ 0 for all i, and hⱼ(x*) = 0 for all j

Dual feasibility: μᵢ ≥ 0 for all i

Complementary slackness: μᵢ gᵢ(x*) = 0 for all i

Sufficient conditions

In some cases, the necessary conditions are also sufficient for optimality. This is the case when the objective function f and the inequality constraint functions gᵢ are continuously differentiable convex functions and the equality constraint functions hⱼ are affine functions.
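As a small sketch of checking the KKT conditions in practice, consider a hypothetical convex problem (invented for this illustration, so by the sufficiency result above a point satisfying the conditions is a global minimum): minimize f(x) = (x₁ − 2)² + (x₂ − 1)² subject to x₁ + x₂ − 2 ≤ 0, with candidate point x* = (1.5, 0.5) and multiplier μ = 1.

```python
# KKT verification for a hypothetical convex problem:
#   minimize (x1 - 2)^2 + (x2 - 1)^2   subject to  x1 + x2 - 2 <= 0.
# f is convex and the constraint is affine, so satisfied KKT conditions
# are sufficient for global optimality.

def grad_f(x1, x2):
    return (2 * (x1 - 2), 2 * (x2 - 1))

def g(x1, x2):
    return x1 + x2 - 2  # inequality constraint, required to be <= 0

grad_g = (1.0, 1.0)
x_star, mu = (1.5, 0.5), 1.0  # candidate point and multiplier

gf = grad_f(*x_star)
stationarity = all(abs(gf[k] + mu * grad_g[k]) < 1e-9 for k in range(2))
primal_feasible = g(*x_star) <= 1e-9
dual_feasible = mu >= 0
comp_slack = abs(mu * g(*x_star)) < 1e-9

print(stationarity, primal_feasible, dual_feasible, comp_slack)  # True True True True
```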

The Kuhn-Tucker conditions will be used to derive optimality conditions.

The form of the conditions depends on the sense of the problem: one set applies if the given problem is a minimization with constraints of one form, and another if the problem is a maximization with constraints of the opposite form; the sign restrictions on the multipliers are reversed accordingly.

Example of Kuhn-Tucker conditions:

Question: Determine so as to

Maximize Z = 12

Subject to the constraint


Solution: here

The Lagrangian function can be formulated as

The Kuhn-Tucker necessary conditions can be stated as


Or 12+2


(ii)=0, i=1,2




There may arise four cases:

Case 1: if , then from condition (i), we have


Solving these equations, we get . However, this solution violates condition (iii) and therefore may be discarded.

Case 2: , then from condition (ii) we have

= 0 or

Substituting these values in condition (i), we get and . However, this solution violates condition (iv) and therefore may be discarded.

Case 3: , then from conditions (i) and (ii) we have

Solving these equations, we get . However, this solution violates condition (iv) and therefore may be discarded.

Case 4: then from conditions (i) and (ii) we have

Solving these equations, we get . This solution does not violate any of the Kuhn-Tucker conditions and therefore must be accepted.

Hence the optimum solution of the given problem is

and Max Z =

Graphical method

The optimal solution of a linear programming problem is obtained at one of the extreme points of the feasible solution space. In the case of non-linear programming, however, the optimal solution may not occur at an extreme point of the feasible region.

Example of graphical method

Solve graphically the following non-linear programming problem

Maximize Z=

Subject to the constraint

Solution: in this non-linear programming problem, the objective function is non-linear whereas the constraints are linear. Plot the given constraints on the graph by the usual method, as shown in the figure.

The feasible region is shown by the shaded region in the figure; the optimal point must therefore lie somewhere in the convex region ABC. The desired point is the one at which a side of the convex region is tangent to the circle,


The gradient of the tangent to this circle can be obtained by differentiating the equation Z = with respect to , i.e.

Or …..(i)

The gradient of the line and


And …..(iii)


If the line is the tangent to the circle, then substituting from equation (ii) into equation (i), we get

and hence, for , the equation = 12 gives . This means the line = 12 is tangent to the circle at this point. But this point does not satisfy the given constraints.

Similarly, if the line is the tangent to the circle, then substituting from equation (iii) into equation (i), we get

And hence, for , the equation gives . This means the line is tangent to the circle at this point; the point lies in the feasible region and satisfies both constraints. Hence the optimal solution is
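The tangency argument can be reproduced numerically on a hypothetical problem of the same shape (the data of the worked example is not reproduced here): minimize Z = x₁² + x₂² subject to 2x₁ + 3x₂ ≥ 12. The contours of Z are circles about the origin, so the optimum is where the smallest feasible circle just touches the line 2x₁ + 3x₂ = 12, i.e. the foot of the perpendicular from the origin.

```python
# Tangency of the circle x1^2 + x2^2 = Z with the line a*x1 + b*x2 = c,
# for the hypothetical data a = 2, b = 3, c = 12. The tangency point is
# the orthogonal projection of the origin onto the line.

a, b, c = 2.0, 3.0, 12.0
t = c / (a * a + b * b)  # projection coefficient c / ||(a, b)||^2
x1, x2 = a * t, b * t    # tangency point (24/13, 36/13)
z = x1 ** 2 + x2 ** 2    # optimal objective value, 144/13

# At tangency the gradient of Z, (2*x1, 2*x2), is parallel to (a, b):
parallel = abs(2 * x1 * b - 2 * x2 * a) < 1e-9

print(round(x1, 4), round(x2, 4), round(z, 4), parallel)  # 1.8462 2.7692 11.0769 True
```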


There are several methods for solving non-linear programming problems, but the Kuhn-Tucker conditions are the most broadly applicable. With the graphical method and the direct substitution method, it is not easy to handle problems in three or more dimensions; with the help of the Kuhn-Tucker conditions, such problems can be solved easily.
