Solving a System of Nonlinear Equations Using the Newton-Raphson Method


In numerical analysis, Newton's method, also known as the Newton-Raphson method and named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem (detailed below). Unfortunately, when iteration begins far from the desired root, Newton's method can fail to converge with little warning; thus, implementations often include a routine that attempts to detect and overcome possible convergence failures.

Given a function f(x) and its derivative f '(x), we begin with a first guess x0. Provided the function is reasonably well-behaved, a better approximation x1 is

x1 = x0 - f(x0)/f '(x0)

The process is repeated until a sufficiently accurate value is reached:

xn+1 = xn - f(xn)/f '(xn)

An important and somewhat surprising application is Newton-Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.
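As a concrete illustration of this idea, the short Python sketch below (not part of the original text; the starting guess and step count are illustrative choices) approximates 1/a by applying Newton's method to f(x) = 1/x - a, which yields an update that uses only multiplication and subtraction.

# Newton-Raphson division: approximate 1/a without dividing.
# Applying Newton's method to f(x) = 1/x - a gives x_{n+1} = x_n * (2 - a * x_n).
def reciprocal(a, x0, steps=6):
    x = x0
    for _ in range(steps):
        x = x * (2 - a * x)   # only multiplication and subtraction
    return x

print(reciprocal(7.0, 0.1))   # converges towards 1/7 = 0.142857...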

The algorithm is first in the class of Householder's methods, succeeded by Halley's method.

Description

[Figure: the function f is shown in blue and the tangent line in red; xn+1 is a better approximation than xn to the root x of the function f.]

The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root can be easily derived. Suppose we have some current approximation xn. Then we can derive the formula for a better approximation, xn+1, by referring to the figure above. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.

That is,

f '(xn) = (f(xn) - 0) / (xn - xn+1)

Here, f ' denotes the derivative of the function f. Then by simple algebra we can derive

xn+1 = xn - f(xn)/f '(xn)

We start the process with some arbitrary initial value x0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.) The method will usually converge, provided this initial guess is close enough to the unknown zero and f '(x0) ≠ 0. Furthermore, for a zero of multiplicity 1, the convergence is at least quadratic (see rate of convergence) in a neighbourhood of the zero, which intuitively means that the number of correct digits roughly doubles in every step. More details can be found in the analysis section below.
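A minimal Python sketch of this scalar iteration is given below; the test function f(x) = x² - 2, its derivative, the starting guess and the tolerance are illustrative assumptions, not taken from the original text.

# Scalar Newton iteration: x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop once a sufficiently accurate value is reached
            break
        x = x - fx / fprime(x)
    return x

# Example: approximate sqrt(2) as the positive root of f(x) = x**2 - 2.
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))   # 1.41421356...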

Application to minimization and maximization problems

Newton's method can also be used to find a minimum or maximum of a function. The derivative is zero at a minimum or maximum, so minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

xn+1 = xn - f '(xn)/f ''(xn)
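The following Python sketch applies this idea; the test function x⁴ - 3x² + 2 and the starting point are my own illustrative choices, not part of the original text.

# Newton's method for optimization: apply the Newton update to f',
# i.e. x_{n+1} = x_n - f'(x_n)/f''(x_n), to locate a stationary point of f.
def newton_optimize(fprime, fsecond, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = fprime(x) / fsecond(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Minimize f(x) = x**4 - 3*x**2 + 2: f'(x) = 4x^3 - 6x, f''(x) = 12x^2 - 6.
print(newton_optimize(lambda x: 4*x**3 - 6*x, lambda x: 12*x**2 - 6, 1.0))
# converges to sqrt(1.5) ≈ 1.2247, a local minimum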

SOLUTION OF THE EQUATIONS IS AS FOLLOWS:

Given:

x²+y=11….(1), y²+x=7….(2)

Rewriting the given equations as follows:

x²+y-11=0 and x+y²-7=0

let f(x,y) = x²+y-11 and g(x,y)= x+y²-7

so fx=2x, fy=1, gx=1 and gy=2y

Let the initial approximation to a root of the given system be (2.9, 1.9), i.e.

x0 = 2.9 and y0 = 1.9. (Clearly x = 3 and y = 2 satisfy the given non-linear system.)

From (1), y = 11 - x².

Substituting this value of y in (2):

(11-x²)² + x = 7

121 - 22x² + x⁴ + x = 7

x⁴ - 22x² + x = -114

By trial and error, put x = 3:

81-198+3 = -114

84 - 198 = -114

-114 = -114

So x=3

Put the value x = 3 in (1):

9 + y = 11

y = 2
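As a quick cross-check (not in the original text), the substitution above reduces the system to the quartic x⁴ - 22x² + x + 114 = 0, and a few lines of Python confirm that x = 3 is indeed one of its real roots:

import numpy as np

# Coefficients of x^4 + 0*x^3 - 22*x^2 + x + 114
roots = np.roots([1, 0, -22, 1, 114])
print(roots)            # x = 3 appears among the real roots
print(11 - 3.0**2)      # the corresponding y = 11 - x^2 = 2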

ITERATION 1:

Since x0 = 2.9 and y0 = 1.9:

f(x0,y0) = (2.9)² + 1.9 -11

= 8.41 +1.9 -11

= 10.31 - 11

f(x0,y0) = -0.69

g(x0,y0) = 2.9 + (1.9)²-7

= 2.9 + 3.61 -7

= 6.51 - 7

g(x0,y0) = -0.49

fx(x0,y0) = 2*x0 = 2*2.9 = 5.8

fy(x0,y0) = 1

gx(x0,y0) = 1

gy(x0,y0) = 2*y0 = 2*1.9 = 3.8

D = fx(x0,y0)* gy(x0,y0) - fy(x0,y0)* gx(x0,y0)

= 5.8*3.8 - 1*1

= 22.04 - 1

= 21.04

Now find the corrections h and k by solving the linearized system fx·h + fy·k = -f and gx·h + gy·k = -g (using Cramer's rule):

h = (1/D) | -f(x0,y0)  fy(x0,y0) |
          | -g(x0,y0)  gy(x0,y0) |

  = (1/21.04) | 0.69  1   |
              | 0.49  3.8 |

  = (0.69*3.8 - 0.49*1)/21.04

  = (2.622 - 0.49)/21.04

  = 2.132/21.04

  = 0.101

k = (1/D) | fx(x0,y0)  -f(x0,y0) |
          | gx(x0,y0)  -g(x0,y0) |

  = (1/21.04) | 5.8  0.69 |
              | 1    0.49 |

  = (5.8*0.49 - 0.69*1)/21.04

  = (2.842 - 0.69)/21.04

  = 2.152/21.04

  = 0.102

So the first approximation to the root is given by

x1 = x0 + h

= 2.9 + 0.101

= 3.001

y1 = y0 + k

= 1.9 + 0.102

= 2.002

ITERATION 2:

Since x1 = 3.001 and y1 = 2.002:

f(x1,y1) = (3.001)² + 2.002 -11

= 9.006 +2.002 -11

= 11.008-11

= 0.008

g(x1,y1) = 3.001 + (2.002)² - 7

= 3.001 + 4.008 -7

= 7.009 - 7

= 0.009

fx(x1,y1) = 2*x1 = 2*3.001

= 6.002

fy(x1,y1) = 1

gx(x1,y1) = 1

gy(x1,y1) = 2*y1 = 2*2.002

= 4.004

D = fx(x1,y1)* gy(x1,y1) - fy(x1,y1)* gx(x1,y1)

= 6.002*4.004 - 1*1

= 24.032 - 1

= 23.032

h = (1/D) | -f(x1,y1)  fy(x1,y1) |
          | -g(x1,y1)  gy(x1,y1) |

  = (1/23.032) | -0.008  1     |
               | -0.009  4.004 |

  = (-0.008*4.004 - 1*(-0.009))/23.032

  = (-0.032 + 0.009)/23.032

  = -0.023/23.032

  ≈ -0.001

k = (1/D) | fx(x1,y1)  -f(x1,y1) |
          | gx(x1,y1)  -g(x1,y1) |

  = (1/23.032) | 6.002  -0.008 |
               | 1      -0.009 |

  = (6.002*(-0.009) - (-0.008)*1)/23.032

  = (-0.054 + 0.008)/23.032

  = -0.046/23.032

  ≈ -0.002

So the second approximation to the root is given by

x2 = x1 + h = 3.001 - 0.001 = 3.000

y2 = y1 + k = 2.002 - 0.002 = 2.000

From the first and second iterations it is clear that the successive approximations to the root agree up to the first two decimal places. So the root of the given system is

x = 3.00

and y = 2.00 (up to two decimal places).

Hence the system is solved.
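For comparison, here is a short Python sketch that reproduces the hand computation above using the same determinant formulas for h and k; the function name and loop structure are merely one possible arrangement, not taken from the original text.

# Newton's method for the system f(x,y) = x^2 + y - 11 = 0, g(x,y) = x + y^2 - 7 = 0.
def newton_2x2(x, y, iterations=2):
    for i in range(1, iterations + 1):
        f = x**2 + y - 11
        g = x + y**2 - 7
        fx, fy = 2*x, 1.0          # partial derivatives of f
        gx, gy = 1.0, 2*y          # partial derivatives of g
        D = fx*gy - fy*gx          # Jacobian determinant
        h = (-f*gy + g*fy) / D     # det[[-f, fy], [-g, gy]] / D
        k = (f*gx - fx*g) / D      # det[[fx, -f], [gx, -g]] / D
        x, y = x + h, y + k
        print("iteration", i, ":", round(x, 6), round(y, 6))
    return x, y

newton_2x2(2.9, 1.9)   # approaches (3, 2), matching the hand calculation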

Analysis

Suppose that the function f has a zero at α, i.e., f(α) = 0.

If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence {xn} will converge to α.

If the function is continuously differentiable, its derivative is not 0 at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:

Δxn+1 = [f ''(α) / (2 f '(α))] Δxn² + O(Δxn³)

where Δxn = xn - α.

If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f '(α) = 0 and f ''(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate log10 2 (Süli & Mayers, Exercise 1.6). Alternatively, if f '(α) = 0 and f '(x) ≠ 0 for x ≠ α in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ C^r(U), then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence of iterates converges linearly.
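A tiny illustration of this slowdown (my own example, not from the original text): for f(x) = x², the root x = 0 has multiplicity 2 and f '(0) = 0, and the Newton update merely halves the error at every step.

# For f(x) = x**2 the Newton step is x - x**2/(2*x) = x/2, so convergence to
# the double root at 0 is only linear (the error shrinks by a factor of 2).
x = 1.0
for n in range(5):
    x = x - x**2 / (2*x)
    print(n + 1, x)    # 0.5, 0.25, 0.125, 0.0625, 0.03125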

However, even linear convergence is not guaranteed in pathological situations.

In practice these results are local and the neighborhood of convergence is not known a priori, but there are also some results on global convergence. For instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f ' ≠ 0 and f·f '' > 0 in U+, then, for each x0 in U+, the sequence xk is monotonically decreasing to α.

Proof of quadratic convergence for Newton's iterative method

According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is:

f(α) = f(xn) + f '(xn)(α - xn) + R1    (1)

where the Lagrange form of the Taylor series expansion remainder is

R1 = (1/2) f ''(ξn)(α - xn)²

where ξn is in between xn and α.

Since α is the root, f(α) = 0, and (1) becomes:

0 = f(xn) + f '(xn)(α - xn) + (1/2) f ''(ξn)(α - xn)²    (2)

Dividing equation (2) by f '(xn) and rearranging gives

f(xn)/f '(xn) + (α - xn) = -[f ''(ξn) / (2 f '(xn))] (α - xn)²    (3)

Remembering that xn+1 is defined by

xn+1 = xn - f(xn)/f '(xn)    (4)

one finds that

α - xn+1 = -[f ''(ξn) / (2 f '(xn))] (α - xn)²

That is, writing εn = xn - α for the error at step n,

εn+1 = [f ''(ξn) / (2 f '(xn))] εn²    (5)

Taking the absolute value of both sides gives

|εn+1| = [|f ''(ξn)| / (2 |f '(xn)|)] εn²    (6)

Equation (6) shows that the rate of convergence is quadratic if the following conditions are satisfied:

1. f '(x) ≠ 0 for all x in the interval I = [α - r, α + r], for some r ≥ |α - x0|;

2. f ''(x) is continuous for all x in I;

3. x0 is sufficiently close to the root α.

The term sufficiently close in this context means the following:

(a) the Taylor approximation is accurate enough that higher-order terms can be ignored;

(b) (1/2) |f ''(xn) / f '(xn)| < C |f ''(α) / f '(α)| for some C < ∞;

(c) C |f ''(α) / f '(α)| |εn| < 1 for all n ≥ 0.

Finally, (6) can be expressed in the following way:

|εn+1| ≤ M εn²

where M is the supremum of the variable coefficient of εn² on the interval I defined in condition 1, that is:

M = sup over x in I of (1/2) |f ''(x)| / |f '(x)|

The initial point x0 has to be chosen such that conditions 1 through 3 are satisfied, where the third condition requires that M |ε0| < 1.
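The following small Python check (an illustration of the bound above, with f(x) = x² - 2 and a starting point chosen by me) shows the error roughly squaring at every step, as predicted:

import math

# Quadratic convergence of Newton's method for f(x) = x**2 - 2 (root sqrt(2)):
# the error |x_n - alpha| roughly squares at each step.
alpha = math.sqrt(2)
x = 1.5
for n in range(5):
    print(n, abs(x - alpha))          # errors: ~8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12, ...
    x = x - (x**2 - 2) / (2*x)        # Newton update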

Nonlinear systems of equations

One may use Newton's method also to solve systems of k (non-linear) equations, which amounts to finding the zeroes of continuously differentiable functions F : Rk → Rk. In the formulation given above, one then has to left-multiply by the inverse of the k-by-k Jacobian matrix JF(xn) instead of dividing by f '(xn). Rather than actually computing the inverse of this matrix, one can save time by solving the system of linear equations

JF(xn) (xn+1 - xn) = -F(xn)

for the unknown xn+1 − xn. Again, this method only works if the initial value x0 is close enough to the true zero. Typically, a well-behaved region is located first with some other method and Newton's method is then used to "polish" a root which is already known approximately.
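A brief Python sketch of this k-dimensional step is given below; it solves the linear system with numpy.linalg.solve rather than forming the inverse Jacobian, and reuses the two-equation system from the worked example purely as an illustration.

import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y - 11, x + y**2 - 7])

def J(v):                              # Jacobian matrix J_F
    x, y = v
    return np.array([[2*x, 1.0],
                     [1.0, 2*y]])

v = np.array([2.9, 1.9])
for _ in range(5):
    delta = np.linalg.solve(J(v), -F(v))   # solve J_F(x_n) * delta = -F(x_n)
    v = v + delta
print(v)                                    # close to [3. 2.]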

Nonlinear equations in a Banach space

Another generalization is Newton's method to find a root of a function F defined in a Banach space. In this case the formulation is

Xn+1 = Xn - [F '(Xn)]⁻¹ F(Xn)

where F '(Xn) is the Fréchet derivative applied at the point Xn. One needs the Fréchet derivative to be boundedly invertible at each Xn in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton-Kantorovich theorem.

CONCLUSION: Advantages and Disadvantages

The method is relatively expensive: each step requires evaluating both the function and its derivative.

If the tangent is parallel or nearly parallel to the x-axis, the method fails to converge, since the Newton step becomes arbitrarily large.

Newton's method is usually expected to converge only near the solution.

The advantage of the method is that its order of convergence is quadratic.

Its convergence rate is one of the fastest among standard root-finding methods when it does converge.


