Isaac Newton †
Sir Isaac Newton was an English physicist, mathematician, astronomer, and natural philosopher, widely regarded by scholars and the general public as one of the most influential figures in history. His Philosophiæ Naturalis Principia Mathematica, published in 1687, is itself considered among the most influential books in the history of science, laying the groundwork for most of classical mechanics. In this work, Newton described universal gravitation and the three laws of motion, which dominated the scientific view of the physical universe for the next three centuries. In mechanics, Newton enunciated the principles of conservation of both momentum and angular momentum. In mathematics, Newton shares credit with Gottfried Leibniz for the development of differential and integral calculus. He also demonstrated the generalised binomial theorem, developed the so-called "Newton's method" for approximating the zeroes of a function, and contributed to the study of power series.

What is Newton's Method †
In numerical analysis, Newton's method (also known as the Newton–Raphson method), named after Isaac Newton and Joseph Raphson, is perhaps the best-known method for finding successively better approximations to the zeroes (or roots) of a real-valued function. Newton's method can often converge remarkably quickly, especially if the iteration begins "sufficiently near" the desired root. Just how near "sufficiently near" needs to be, and just how quickly "remarkably quickly" can be, depends on the problem. Unfortunately, when the iteration begins far from the desired root, Newton's method can easily lead an unwary user astray with little warning. Thus, good implementations of the method embed it in a routine that also detects and, where possible, overcomes convergence failures.

The Idea of the Approximation †
Given a function f(x) and its derivative f′(x), we begin with a first guess x0.
A better approximation x1 is

x1 = x0 − f(x0) / f′(x0).

An important and somewhat surprising application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number using only multiplication and subtraction.

Description of Newton's Method †
The idea of the method is as follows: one starts with an initial guess which is reasonably close to the true root; the function is then approximated by its tangent line (which can be computed using the tools of calculus), and one computes the x-intercept of this tangent line (which is easily done with elementary algebra). This x-intercept will typically be a better approximation to the function's root than the original guess, and the method can be iterated.

Suppose f : [a, b] → R is a differentiable function defined on the interval [a, b] with values in the real numbers R. The formula for converging on the root is easily derived. Suppose we have some current approximation xn, and let xn+1 be the point where the tangent at (xn, f(xn)) crosses the x-axis. We know from the definition of the derivative that its value at a given point is the slope of the tangent at that point, so

f′(xn) = (f(xn) − 0) / (xn − xn+1).

Here f′ denotes the derivative of the function f. By simple algebra we can then derive

xn+1 = xn − f(xn) / f′(xn).

We start the process off with some arbitrary initial value x0. (The closer to the zero, the better. In the absence of any intuition about where the zero might lie, a "guess and check" approach can narrow the possibilities to a reasonably small interval by appealing to the intermediate value theorem.)

Finding Maxima and Minima †
Newton's method can also be used to find a minimum or maximum of a function. The derivative is zero at a minimum or maximum, so minima and maxima can be found by applying Newton's method to the derivative.
The iteration becomes:

xn+1 = xn − f′(xn) / f″(xn).

Historical Background †
Newton's method was described by Isaac Newton in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones). However, his description differs substantially from the modern description given above: Newton applies the method only to polynomials. He does not compute the successive approximations xn, but computes a sequence of polynomials, and only at the end arrives at an approximation for the root x. Finally, Newton views the method as purely algebraic and fails to notice the connection with calculus. Newton's method was also used by the 17th-century Japanese mathematician Seki Kōwa to solve single-variable equations, though again the connection with calculus was missing. In 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using fluxional calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.

Practical Considerations †
Newton's method is an extremely powerful technique. In general the convergence is quadratic: the error is essentially squared at each step (that is, the number of accurate digits roughly doubles with each step). However, there are some difficulties with the method.
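To make the preceding discussion concrete, here is a minimal Python sketch of the basic iteration xn+1 = xn − f(xn)/f′(xn), with a simple step-size check standing in for the failure detection mentioned earlier; the function names, tolerance, and iteration cap are illustrative choices, and the Newton–Raphson reciprocal mentioned above is included as a second example:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly move to the x-intercept of the
    tangent line, x <- x - f(x)/f'(x), until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    # A careful implementation reports failure instead of looping forever.
    raise RuntimeError("Newton's method failed to converge")

# sqrt(2) as the positive root of f(x) = x^2 - 2. Starting from 1.5 the
# successive errors are about 2.5e-3, 2.1e-6, 1.6e-12: quadratic
# convergence roughly doubles the number of correct digits per step.
root2 = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)

def reciprocal(a, x0, n=10):
    """Newton-Raphson division: for f(x) = 1/x - a the update
    simplifies to x <- x*(2 - a*x), which approximates 1/a using
    only multiplication and subtraction."""
    x = x0
    for _ in range(n):
        x = x * (2 - a * x)
    return x
```

The reciprocal iteration converges when the initial guess satisfies 0 < x0 < 2/a; for example, reciprocal(7, 0.1) rapidly approaches 1/7.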
Worked Example †
Consider the problem of finding the square root of a number. There are many methods of computing square roots, and Newton's method is one of them. For example, to find the square root of 612, we seek the solution of

x^2 = 612.

The function to use in Newton's method is then

f(x) = x^2 − 612,

with derivative

f′(x) = 2x.

With an initial guess of x0 = 10, the sequence given by Newton's method is

x1 = 35.6, x2 ≈ 26.39551, x3 ≈ 24.79064, x4 ≈ 24.73869, x5 ≈ 24.73863, ...

which converges to √612 = 24.7386337...

Solving the Cubic P(x) = 8x^3 + 4x^2 − 4x − 1 = 0 with Newton's Method †
The exact value of the unique positive real root is x = cos(2π/7). Since P″(x) = 48x + 8, the curve y = P(x) is convex at least for x > 0. In general, the tangent line at (an, P(an)) is

y = 4(6an^2 + 2an − 1)x − 16an^3 − 4an^2 − 1,

and the x-coordinate an+1 of its intersection with the x-axis is

an+1 = (16an^3 + 4an^2 + 1) / (4(6an^2 + 2an − 1)).

The sequence {an} satisfies this recurrence; giving it an initial value such as a1 = 1, it can be computed repeatedly. The general term cannot be expressed in a neat closed form, but the numerical computation is easy to carry out with a pocket calculator. This is Newton's method in action.

The Higher-Dimensional Case †
Newton's method generalizes to higher-dimensional functions by regarding the tangent line as a first-order approximation and its x-intercept as the zero of that approximation. Let the target function be f : Rm → Rm, x ∈ Rm, and consider the problem of solving f(x) = 0. Suppose x0 ∈ Rm is known. Assuming the Jacobian matrix ∂f(x0) is invertible, the zero of the first-order approximation of f(x) at x0 is

x = x0 − ∂f(x0)^(−1) f(x0).

When computing on a machine, it is difficult to evaluate ∂f(x0)^(−1) f(x0) directly, so instead one solves the linear system

∂f(x0) r = f(x0)

by a method such as Gaussian elimination to obtain r, and then sets x = x0 − r. The x found this way is expected to be closer to a solution of f(x) = 0 than x0 is, so we take it as x1 and repeat the same computation. By iterating, xn approaches a solution of f(x) = 0.

Extremum Problems when X is a Vector †
To find an extremum of a function f(X) : R^n → R^1, it suffices to solve ∇f(X) = 0. Here ∇f(X) is a function R^n → R^n whose i-th component is ∂f/∂x_i. Newton's method for this equation is almost the same as in the one-dimensional case, with the recurrence

X_{n+1} = X_n − D(X_n)^(−1) ∇f(X_n),

where D(X) is the n×n matrix whose (i, j) entry is ∂^2 f / ∂x_i ∂x_j. When the objective function is quadratic, D(X) is a constant matrix, so from any starting point X_0 a single step yields the point X_1 at which the extremum is attained; if in addition D(X) is positive definite, this extremum is a minimum.

Julia Sets †
When Newton's method (the Newton–Raphson method of the preceding sections) is used to approximate a solution of f(z) = 0 for a complex variable z on the complex plane, there exist initial points z0 for which the sequence {zn} does not converge. The set of such points z0 is called the Julia set.
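The higher-dimensional scheme described earlier (solve ∂f(x0) r = f(x0) for the correction r, then set x = x0 − r) can be sketched in Python for a system of two equations. The example system and all names are our own choices; for a 2×2 Jacobian we solve the linear step by Cramer's rule, a stand-in for the Gaussian elimination mentioned in the text:

```python
def newton_2d(f, jac, x0, n=20):
    """Newton's method in R^2: at each step solve J r = f(x, y) for the
    correction r (via Cramer's rule for this 2x2 case) and update
    (x, y) <- (x, y) - r."""
    x, y = x0
    for _ in range(n):
        f1, f2 = f(x, y)
        (a, b), (c, d) = jac(x, y)
        det = a * d - b * c
        r1 = (f1 * d - b * f2) / det   # first component of r
        r2 = (a * f2 - f1 * c) / det   # second component of r
        x, y = x - r1, y - r2
    return x, y

# Example system (our own choice): x^2 + y^2 = 4 and x*y = 1.
f = lambda x, y: (x * x + y * y - 4, x * y - 1)
jac = lambda x, y: ((2 * x, 2 * y), (y, x))   # Jacobian of f
sol = newton_2d(f, jac, (2.0, 0.5))
```

Starting from (2.0, 0.5), the iterates converge to the solution with x = sqrt(2 + sqrt(3)) and y = sqrt(2 − sqrt(3)), since x^2 and y^2 are the roots of t^2 − 4t + 1 = 0.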
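The Julia set for Newton's method can be explored numerically. Below is a minimal sketch of the basin computation behind such pictures; the polynomial z^3 − 1, the tolerance, and the iteration cap are our own illustrative choices. Each starting point is classified by which cube root of unity it converges to, with None marking points that fail to converge:

```python
def newton_basin(z0, max_iter=50, tol=1e-9):
    """Iterate Newton's method for f(z) = z**3 - 1 on the complex plane
    and report which cube root of unity (if any) z0 converges to."""
    roots = [1, complex(-0.5, 3 ** 0.5 / 2), complex(-0.5, -(3 ** 0.5) / 2)]
    z = z0
    for _ in range(max_iter):
        if abs(z) < tol:           # derivative 3z^2 vanishes: step undefined
            return None
        z = z - (z ** 3 - 1) / (3 * z ** 2)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k           # index of the root reached
    return None                    # no convergence within max_iter steps
```

Coloring each point of a grid in the complex plane by the value of newton_basin(z0) produces the familiar Newton-fractal picture; the points where it returns None approximate the non-convergent set described above.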
See also:
- Quasi-Newton methods for unconstrained optimization
- Example exercises
- The wonder of fractals: coloring the complex plane with Newton's method