Isaac Newton

Sir Isaac Newton was an English physicist, mathematician, astronomer, and natural philosopher who is widely regarded as one of the most influential men in history. His Philosophiæ Naturalis Principia Mathematica, published in 1687, is considered among the most influential books in the history of science, laying the groundwork for most of classical mechanics. In this work, Newton described universal gravitation and the three laws of motion, which dominated the scientific view of the physical universe for the next three centuries. In mechanics, Newton enunciated the principles of conservation of both momentum and angular momentum. In mathematics, Newton shares the credit with Gottfried Leibniz for the development of differential and integral calculus. He also demonstrated the generalised binomial theorem, developed the so-called "Newton's method" for approximating the zeroes of a function, and contributed to the study of power series.

What Is Newton's Method?

In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. This is discussed in detail below. Unfortunately, when iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and perhaps overcomes possible convergence failures.

The Idea Behind the Approximation

Given a function f(x) and its derivative f '(x), we begin with a first guess x0. A better approximation x1 is

x1 = x0 - f(x0) / f '(x0)

An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.

newton5.JPG
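
As a concrete sketch of this idea (the function name, starting guess, and iteration count below are illustrative choices, not from the original text), one can iterate the standard update x ← x*(2 - a*x), which is what Newton's method gives when applied to f(x) = 1/x - a; each step needs only multiplications and a subtraction.

  # Newton-Raphson division: approximate 1/a using only multiplication and subtraction.
  # The update x_{k+1} = x_k * (2 - a * x_k) comes from Newton's method on f(x) = 1/x - a.
  def reciprocal(a, x0, iterations=10):
      x = x0
      for _ in range(iterations):
          x = x * (2.0 - a * x)   # no division is performed here
      return x

  # Example: 1/7, starting from the rough guess 0.1.
  print(reciprocal(7.0, 0.1))     # converges toward 0.142857...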

Description of the Method

The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root, then the function is approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root is easily derived. Suppose we have some current approximation xn; then the formula for a better approximation, xn+1, follows from the geometry of the tangent line at xn. We know from the definition of the derivative at a given point that it is the slope of the tangent at that point.

newton2.png

Here, f ' denotes the derivative of the function f. Then by simple algebra we can derive

xn+1 = xn - f(xn) / f '(xn)

We start the process off with some arbitrary initial value x0. (The closer to the zero, the better. But, in the absence of any intuition about where the zero might lie, a "guess and check" method might narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.)
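
Putting the pieces together, here is a minimal Python sketch of the iteration just derived; the function names, tolerance, and iteration cap are illustrative, and the guards reflect the difficulties discussed later (zero derivative, failure to converge).

  # A minimal Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n).
  import math

  def newton(f, fprime, x0, tol=1e-12, max_iter=50):
      x = x0
      for _ in range(max_iter):
          fx = f(x)
          dfx = fprime(x)
          if dfx == 0.0:               # horizontal tangent: the method breaks down
              raise ZeroDivisionError("derivative is zero at x = %g" % x)
          x_new = x - fx / dfx
          if abs(x_new - x) <= tol * (1.0 + abs(x_new)):
              return x_new
          x = x_new
      raise RuntimeError("no convergence after %d iterations" % max_iter)

  # Example: the root of cos(x) - x near x0 = 1 (approximately 0.739085).
  print(newton(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0))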

Finding Maxima and Minima

Newton's method can also be used to find a minimum or maximum of a function. The derivative is zero at a minimum or maximum, so minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

xn+1 = xn - f '(xn) / f ''(xn)
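
A minimal sketch of this use (the objective g(x) = x - ln x, whose minimum is at x = 1, is an illustrative choice): apply the update above with g' and g'' written out by hand.

  # Finding the minimum of g(x) = x - ln(x) by applying Newton's method to g'(x):
  #   x_{n+1} = x_n - g'(x_n) / g''(x_n),  with g'(x) = 1 - 1/x and g''(x) = 1/x**2.
  x = 0.5                          # initial guess
  for _ in range(20):
      gp = 1.0 - 1.0 / x           # g'(x)
      gpp = 1.0 / x**2             # g''(x)
      x = x - gp / gpp
  print(x)                         # approaches 1.0, the minimiser of x - ln(x)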

Historical Background

Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones). However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials. He does not compute the successive approximations xn, but computes a sequence of polynomials and only at the end arrives at an approximation for the root x. Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus.

Newton's method was used by the 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though the connection with calculus was missing.

Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gave the generalization to systems of two equations and noted that Newton's method can be used for solving optimization problems by setting the gradient to zero.

Practical Considerations

Newton's method is an extremely powerful technique: in general the convergence is quadratic, meaning the error is essentially squared at each step (that is, the number of accurate digits roughly doubles in each step). However, there are some difficulties with the method.

  • 1. Newton's method requires that the derivative be calculated directly. In most practical problems, the function in question may be given by a long and complicated formula, and hence an analytical expression for the derivative may not be easily obtainable. In these situations it may be appropriate to approximate the derivative by using the slope of a line through two points on the function; the secant method then results (a sketch is given after this list). It has slightly slower convergence than Newton's method but does not require the existence of derivatives.
  • 2. If the initial value is too far from the true zero, Newton's method may fail to converge. For this reason, Newton's method is often referred to as a local technique. Most practical implementations of Newton's method put an upper limit on the number of iterations and perhaps on the size of the iterates. If the derivative of the function is not continuous, the method may fail to converge.
  • 3. It is clear from the formula for Newton's method that it will fail in cases where the derivative is zero. Similarly, when the derivative is close to zero, the tangent line is nearly horizontal and hence may "shoot" wildly past the desired root.
  • 4. If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent.
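
As a point of comparison for item 1, the following is a minimal secant-method sketch (function names and tolerances are illustrative): the derivative is replaced by the slope of the line through the two most recent iterates.

  # Secant method: replace f'(x_n) with the slope through the last two iterates.
  import math

  def secant(f, x0, x1, tol=1e-12, max_iter=100):
      f0, f1 = f(x0), f(x1)
      for _ in range(max_iter):
          if f1 == f0:                       # flat secant line: cannot continue
              raise ZeroDivisionError("secant slope is zero")
          x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
          if abs(x2 - x1) <= tol * (1.0 + abs(x2)):
              return x2
          x0, f0, x1, f1 = x1, f1, x2, f(x2)
      raise RuntimeError("no convergence")

  # Example: the same root of cos(x) - x as before, without using any derivative.
  print(secant(lambda x: math.cos(x) - x, 0.0, 1.0))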

Worked Example

Consider the problem of finding the square root of a number. There are many methods of computing square roots, and Newton's method is one.

For example, if one wishes to find the square root of 612, this is equivalent to finding the solution to

x^2 = 612

The function to use in Newton's method is then,

f(x) = x^2 - 612

with derivative,

f '(x) = 2x

With an initial guess of 10, the sequence given by Newton's method is

newton6.png
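
The iterates are easy to reproduce; the short sketch below prints them, and the output should approach sqrt(612) ≈ 24.7386, with the number of correct digits roughly doubling per step once the iterate is close.

  # Newton's method for f(x) = x**2 - 612, i.e. the square root of 612:
  #   x_{n+1} = x_n - (x_n**2 - 612) / (2 * x_n)
  x = 10.0                                  # the initial guess used in the text
  for n in range(7):
      x = x - (x * x - 612.0) / (2.0 * x)
      print(n + 1, x)                       # correct digits roughly double each step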

Solving the Cubic Equation P(x) = 8x^3 + 4x^2 - 4x - 1 = 0 with Newton's Method

The unique positive real root is exactly x = cos(2π/7). Since P''(x) = 48x + 8, the graph y = P(x) is convex at least for x > 0, so Newton's method started from a point to the right of this root, such as a_1 = 1 below, converges to it monotonically.

In general, the tangent line to y = P(x) at (a_n, P(a_n)) is

y = 4(6a_n^2 + 2a_n - 1)x - 16a_n^3 - 4a_n^2 - 1

The x-coordinate a_{n+1} of its intersection with the x-axis is a_{n+1} = (16a_n^3 + 4a_n^2 + 1) / (4(6a_n^2 + 2a_n - 1)). The sequence {a_n} satisfies this recurrence, and starting from, say, a_1 = 1 it can be computed by repeated substitution. The general term is not given by a neat closed-form expression, but the numerical computation is easy with a pocket calculator. This is Newton's method.
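
The recurrence can be checked numerically; the sketch below iterates it from a_1 = 1 and compares the result with cos(2π/7) ≈ 0.6235 (the iteration count is an arbitrary choice).

  # Newton's method for P(x) = 8x^3 + 4x^2 - 4x - 1, written as the recurrence
  #   a_{n+1} = (16a_n^3 + 4a_n^2 + 1) / (4(6a_n^2 + 2a_n - 1)),  a_1 = 1.
  import math

  a = 1.0
  for _ in range(10):
      a = (16.0 * a**3 + 4.0 * a**2 + 1.0) / (4.0 * (6.0 * a**2 + 2.0 * a - 1.0))
  print(a, math.cos(2.0 * math.pi / 7.0))   # both should be about 0.6234898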

The Higher-Dimensional Case

Newton's method generalises to higher-dimensional functions by regarding the tangent line as a first-order approximation and its x-intercept as the zero of that first-order approximation. Let the target function be f: R^m → R^m with x ∈ R^m, and consider the problem of solving f(x) = 0. Suppose some x0 ∈ R^m is known. The equation for the zero of the first-order approximation of f(x) at x0 is

f(x0) + ∂f(x0)(x - x0) = 0

Assuming that the Jacobian matrix ∂f(x0) is invertible, this gives

x = x0 - ∂f(x0)^(-1) f(x0)

When the computation is carried out on a computer, it is difficult to evaluate ∂f(x0)^(-1) f(x0) directly, so instead one solves the linear system

∂f(x0) r = f(x0)

for r by a method such as Gaussian elimination, and then obtains x from x = x0 - r.

The x obtained this way is expected to be closer to a solution of f(x) = 0 than x0 was. So we take this x as x1 and repeat the same computation. As the iteration continues, xn approaches a solution of f(x) = 0.
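
A minimal sketch of this procedure, using NumPy's linear solver in place of hand-written Gaussian elimination (the example system and the names f and jacobian are illustrative): at each step solve ∂f(x) r = f(x) and update x ← x - r.

  # Multidimensional Newton's method: solve J(x) r = f(x) instead of inverting J(x).
  import numpy as np

  def f(x):
      # Example system: x0^2 + x1^2 - 4 = 0 and x0 * x1 - 1 = 0.
      return np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])

  def jacobian(x):
      return np.array([[2.0 * x[0], 2.0 * x[1]],
                       [x[1],       x[0]      ]])

  x = np.array([2.0, 0.5])                     # initial guess x0
  for _ in range(20):
      r = np.linalg.solve(jacobian(x), f(x))   # Gaussian elimination under the hood
      x = x - r
  print(x, f(x))                               # f(x) should now be close to (0, 0)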

Extremum Problems When X Is a Vector

To find an extremum of a function f(X) from R^n to R^1, it suffices to solve ∇f(X) = 0. Here ∇f(X) is a function from R^n to R^n whose i-th component is ∂f/∂x_i.

Newton's method for solving this equation is almost the same as in the one-dimensional case; the recurrence is X_{n+1} = X_{n} - D(X_{n})^(-1) ∇f(X_{n}), where D(X) is the n×n matrix (the Hessian) whose (i,j) entry is ∂^2f/∂x_i∂x_j.

In this problem the objective function is quadratic, so D(X) is a constant matrix and, for any starting point X_{0}, the next iterate X_{1} is already the extremum point. Moreover, since D(X) is positive definite, this extremum is in fact the minimum.
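
This one-step behaviour is easy to verify for an illustrative quadratic f(X) = (1/2) X^T A X - b^T X, whose gradient is AX - b and whose Hessian D(X) = A is a constant positive definite matrix, so a single Newton step from any X_0 lands on the minimiser A^(-1) b.

  # One Newton step minimises a quadratic f(X) = 0.5 * X^T A X - b^T X exactly,
  # because grad f = A X - b and the Hessian D(X) = A is constant (positive definite here).
  import numpy as np

  A = np.array([[3.0, 1.0],
                [1.0, 2.0]])            # symmetric positive definite (illustrative choice)
  b = np.array([1.0, 1.0])

  X0 = np.array([10.0, -5.0])           # arbitrary starting point
  grad = A @ X0 - b
  X1 = X0 - np.linalg.solve(A, grad)    # X1 = X0 - D(X0)^(-1) grad f(X0)

  print(X1, np.linalg.solve(A, b))      # both equal the minimiser A^(-1) b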

Julia Sets

When the Newton–Raphson method described above (hereafter simply Newton's method) is used to approximate solutions of an equation f(z) = 0 for a complex variable z on the complex plane, there exist initial values z0 for which the sequence {zn} does not converge. The set of such points z0 is called a Julia set.
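
As an illustration of this behaviour, the sketch below applies Newton's method to f(z) = z^3 - 1 from a grid of complex starting points and counts how many of them fail to converge within a fixed number of steps; the polynomial, the grid, and the cut-offs are illustrative choices, not from the original text.

  # Classify complex starting points z0 by whether Newton's method for f(z) = z^3 - 1
  # converges within a fixed number of iterations.
  def newton_converges(z, max_iter=50, tol=1e-8):
      for _ in range(max_iter):
          if abs(z) < 1e-12:           # derivative 3z^2 vanishes near z = 0
              return False
          z = z - (z**3 - 1.0) / (3.0 * z**2)
          if abs(z**3 - 1.0) < tol:
              return True
      return False

  # Sample a small grid of starting points in [-2, 2] x [-2, 2] and count failures.
  bad = 0
  n = 101
  for i in range(n):
      for j in range(n):
          z0 = complex(-2.0 + 4.0 * i / (n - 1), -2.0 + 4.0 * j / (n - 1))
          if not newton_converges(z0):
              bad += 1
  print(bad, "of", n * n, "starting points did not converge")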

  • The characteristic feature of a fractal figure is self-similarity: a part of the figure is similar to the whole. The Julia set was introduced in 1918 by the French mathematician Julia. Because computers were still undeveloped at the time, it reportedly attracted little attention.
  • The French mathematician Gaston Maurice Julia published them in 1918 in the paper "Mémoire sur l'itération des fonctions rationnelles". Benoît Mandelbrot of Mandelbrot-set fame was born in 1924, so Julia sets were known before Mandelbrot was even born. The difference between the Julia set and the Mandelbrot set is that, in the repeated computation Z = Z^2 + C, the Mandelbrot set always takes the initial value Z0 = 0 + 0i and draws the figure in the C-plane, whereas for a Julia set C is given arbitrarily and the figure is drawn in the Z0-plane.
  • Gaston Maurice Julia (1893–1978) was a French mathematician best known for Julia sets. In his 20s, Julia served in the French Army in World War I, losing his nose in the conflict. Coming home from the war, Julia attracted the attention of mathematicians for his paper on iterated rational functions. Even in those days before computer graphics, he was able to visualize which Julia sets are connected and which are not. During World War II, he taught at the École Polytechnique. One of his students there, Benoît Mandelbrot, would go on to popularize his work on fractals.



