The Newton-Raphson Method
1 Introduction
The Newton-Raphson method, or Newton Method, is a powerful technique
for solving equations numerically. Like so much of the differential calculus,
it is based on the simple idea of linear approximation. The Newton Method,
properly used, usually homes in on a root with devastating efficiency.
The essential part of these notes is Section 2.1, where the basic formula
is derived, Section 2.2, where the procedure is interpreted geometrically,
and—of course—Section 6, where the problems are. Peripheral but perhaps
interesting is Section 3, where the birth of the Newton Method is described.
2 Using Linear Approximations to Solve Equations
Let f(x) be a well-behaved function, and let r be a root of the equation
f(x) = 0. We start with an estimate x0 of r. From x0, we produce an
improved—we hope—estimate x1. From x1, we produce a new estimate
x2. From x2, we produce a new estimate x3. We go on until we are ‘close
enough’ to r—or until it becomes clear that we are getting nowhere.
The above general style of proceeding is called iterative. Of the many iterative root-finding procedures, the Newton-Raphson method, with its combination of simplicity and power, is the most widely used. Section 2.4 describes another iterative root-finding procedure, the Secant Method.
Comment. The initial estimate is sometimes called x1, but most mathematicians prefer to start counting at 0.
Sometimes the initial estimate is called a “guess.” The Newton Method
is usually very very good if x0 is close to r, and can be horrid if it is not.
The “guess” x0 should be chosen with care.
2.1 The Newton-Raphson Iteration
Let x0 be a good estimate of r and let r = x0 + h. Since the true root is r,
and h = r − x0, the number h measures how far the estimate x0 is from the
truth.
Since h is ‘small,’ we can use the linear (tangent line) approximation to
conclude that
0 = f(r) = f(x0 + h) ≈ f(x0) + h f'(x0),
and therefore, unless f'(x0) is close to 0,
h ≈ −f(x0)/f'(x0).
It follows that
r = x0 + h ≈ x0 − f(x0)/f'(x0).
Our new improved (?) estimate x1 of r is therefore given by
x1 = x0 − f(x0)/f'(x0).
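As a quick numerical sanity check (my own illustration, not part of the notes): take f(x) = x^2 − 2, whose positive root is √2, and start from x0 = 1.5. The sketch below computes the correction h and the improved estimate x1 = x0 + h.

```python
import math

# f(x) = x^2 - 2 has the root r = sqrt(2); start from the estimate x0 = 1.5.
def f(x):
    return x * x - 2

def fprime(x):
    return 2 * x

x0 = 1.5
h = -f(x0) / fprime(x0)   # h ~ -f(x0)/f'(x0), from the linear approximation
x1 = x0 + h               # the improved estimate of the root

print(h)    # -0.08333...
print(x1)   # 1.41666..., already close to sqrt(2) = 1.41421...
```

One step takes the error from about 0.086 down to about 0.0025, which illustrates why a good x0 pays off so quickly.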
The next estimate x2 is obtained from x1 in exactly the same way as x1 was
obtained from x0:
x2 = x1 − f(x1)/f'(x1).
Continue in this way. If xn is the current estimate, then the next estimate
xn+1 is given by
xn+1 = xn − f(xn)/f'(xn).
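The iteration translates directly into a short loop. The following sketch (function and parameter names are mine) stops when successive estimates agree to within a tolerance, and gives up after a maximum number of steps, since, as noted above, the method can get nowhere if x0 is chosen badly or f'(xn) comes close to 0.

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n), starting from x0."""
    x = x0
    for _ in range(max_iter):
        dfx = fprime(x)
        if dfx == 0:                   # f'(xn) = 0: the Newton step is undefined
            raise ZeroDivisionError("f'(x) vanished at x = %g" % x)
        x_next = x - f(x) / dfx        # the Newton-Raphson step
        if abs(x_next - x) < tol:      # 'close enough': successive estimates agree
            return x_next
        x = x_next
    raise RuntimeError("no convergence after %d iterations" % max_iter)

# Example: solve x^2 - 2 = 0 starting from x0 = 1.5; converges to sqrt(2).
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(root)   # 1.41421356...
```

Stopping when |xn+1 − xn| is small is a practical stand-in for being 'close enough' to r; it is not a guarantee of accuracy, but near a simple root it works well because the error shrinks roughly quadratically from step to step.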