ROOT FINDING TECHNIQUES: Secant Method
Consider a function f(x) which has the following graph:
Suppose that we want to locate the root r which lies near the points x0 and x1. The main idea of the secant method is to approximate the curve with a straight line for x between x0 and r. The straight line is taken to be the secant which connects the two points ( x0, f(x0) ) and ( x1, f(x1) ). Graphically, the secant line and the new root estimate x2 are shown below in red. We can see that x2 is closer to the root r than either x1 or x0. If we repeat the process, we can easily see that x3 would be even closer to the root, as shown below in green. Analytically, how can we obtain x2 if we know x0 and x1? Let's write the equation of the red straight line. Its slope m is given by

m = ( f(x1) - f(x0) ) / ( x1 - x0 )

and its equation is

y - f(x1) = m ( x - x1 )

The point x = x2 corresponds to the point of the straight line where y = 0. Consequently we obtain

0 - f(x1) = m ( x2 - x1 )

If we solve this equation for x2 we obtain the iteration formula

x2 = x1 - f(x1) ( x1 - x0 ) / ( f(x1) - f(x0) )

As an example of the method, consider the following table, which illustrates the steps needed to find the positive root of x^2 - 2 = 0. We start with the initial guesses x0 = 2 and x1 = 1.8.
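The iteration formula above can be sketched in a few lines of Python. The function f and the starting values come from the example; the loop bound and print formatting are our own choices:

```python
import math

def f(x):
    # the example equation: x^2 - 2 = 0
    return x * x - 2.0

# secant iteration: x_new = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_prev, x_curr = 2.0, 1.8          # the initial guesses x0 and x1
print(f"n = 0   x = {x_prev:.6f}   f(x) = {f(x_prev):.6f}")
print(f"n = 1   x = {x_curr:.6f}   f(x) = {f(x_curr):.6f}")
for n in range(2, 6):
    x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
    print(f"n = {n}   x = {x_next:.6f}   f(x) = {f(x_next):.6f}")
    x_prev, x_curr = x_curr, x_next
```

With these starting values the very first iterate is x2 = 1.473684, and the sequence settles near the root within a handful of steps.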
Note that the actual root of the equation is the square root of two, which is 1.414213562. If we round this value to 4 decimal places we have 1.4142. What is the error between this exact value and our approximation on the first, second, and third line of the table? Check to see how we dealt with this question in our study of Newton's method, and repeat the calculations for this example. Here we want to illustrate another point of iteration procedures: how do we know when to stop, since we normally don't know the exact answer?
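The error calculation asked for above can be checked directly. This short sketch recomputes x2 from the starting values and compares x0, x1, and x2 against the exact root (the variable names are ours):

```python
import math

exact = math.sqrt(2.0)              # the exact root, 1.414213562...

def f(x):
    return x * x - 2.0

# x0, x1 from the example, and the first secant iterate x2
x0, x1 = 2.0, 1.8
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))

for label, x in (("x0", x0), ("x1", x1), ("x2", x2)):
    print(f"{label} = {x:.6f}   error = {abs(x - exact):.6f}")
```

The printed errors shrink at each step, confirming that each new iterate is closer to the root than the previous ones.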
Look at the table below, where we list the values of the "smallness of f(x)" and the "closeness of x". You can see that if you set some tolerances ahead of time, you can have an if statement which decides whether the iteration should stop. WARNING: You must also put a bound on the maximum number of iterations, to prevent divergent cases from running forever.
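The two stopping tests and the iteration bound can be combined as follows. This is a minimal sketch: the tolerance values, parameter names, and the fallback return are our own choices, not part of the original notes:

```python
def secant(f, x0, x1, f_tol=1e-8, x_tol=1e-8, max_iter=50):
    """Secant iteration with two tolerance tests plus a bound
    on the number of iterations, as described above."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # secant is flat: cannot divide
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x2)) < f_tol:             # "smallness of f(x)" test
            return x2
        if abs(x2 - x1) < x_tol:           # "closeness of x" test
            return x2
        x0, x1 = x1, x2
    return x1                              # give up after max_iter steps

root = secant(lambda x: x * x - 2.0, 2.0, 1.8)
print(root)
```

Without the max_iter bound, a divergent choice of starting values would loop forever; with it, the routine simply returns its last estimate.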
PROS & CONS of the method:
1) When the method works, it works fast. It is a bit slower than Newton's method but much faster than the bisection method.
2) If we start with the wrong initial values, the method diverges (produces increasingly nonsensical results).
3) We need to evaluate the function itself only once at each step.
Jacob Y. Kazakia © 2001 All rights reserved