Chaotic Convergence of Newton's Method

In 1680 Newton proposed an algorithm for finding the roots of polynomials. His method has since evolved, but the core concept remains intact. The convergence of Newton's method has been widely claimed to be unstable or even chaotic. Here we briefly review this evolution and consider the question of stable convergence. Newton's method may be applied to any complex analytic function, polynomials being the classic example. Its derivation is based on a Taylor series expansion in the Laplace frequency $s=\sigma+\jmath\omega$, and its convergence depends on the region of convergence (RoC). Under certain conditions, nonlinear (NL) limit-cycles appear, reducing the rate of convergence to a root. Since Newton's method is inherently complex analytic (that is, locally linear and convergent), it is important to identify the source of this NL divergence, which we show is a violation of the Nyquist sampling theorem, also known as aliasing. Here the conditions and a method for uniform convergence are explored, and the nonlinear limit-cycle is explained in terms of aliasing. We numerically demonstrate that reducing the step-size always yields more stable convergence; the downside is that the convergence is then sub-optimal (slower than necessary). It follows that a dynamic step-size is ideal: slowly increase the step-size until convergence fails, then decrease it in small steps until the iteration converges again. Searching for this optimal step-size is a practical solution.
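To make the step-size discussion concrete, the damped Newton update is $s_{n+1} = s_n - h\,f(s_n)/f'(s_n)$ with $0 < h \le 1$, where $h = 1$ recovers the classical full step. The sketch below illustrates both a fixed damped iteration and one plausible reading of the dynamic step-size search described above (grow $h$ while the residual shrinks; back off when it grows). The function names, the test polynomial $f(s)=s^3-1$, the starting point, and the grow/shrink factors are illustrative assumptions, not the paper's implementation.

```python
def damped_newton(f, df, s0, h=1.0, tol=1e-12, max_iter=200):
    """Damped Newton iteration s <- s - h*f(s)/f'(s) for complex analytic f.
    h = 1 is the classical full step; 0 < h < 1 trades speed for stability."""
    s = complex(s0)
    for n in range(max_iter):
        fs = f(s)
        if abs(fs) < tol:
            return s, n, True
        s -= h * fs / df(s)
    return s, max_iter, False

def adaptive_newton(f, df, s0, h0=0.25, grow=1.25, shrink=0.5,
                    tol=1e-12, max_iter=500):
    """Dynamic step-size sketch (an assumed rule, not the paper's): grow h
    toward the full step while each iterate reduces the residual |f(s)|;
    when a step makes the residual grow (the limit-cycle symptom), reject
    it and shrink h."""
    s, h = complex(s0), h0
    fs = f(s)
    for n in range(max_iter):
        if abs(fs) < tol:
            return s, n, True
        s_try = s - h * fs / df(s)
        f_try = f(s_try)
        if abs(f_try) < abs(fs):          # accept: residual decreased
            s, fs = s_try, f_try
            h = min(1.0, grow * h)        # cautiously increase toward h = 1
        else:                             # reject: back off the step-size
            h = shrink * h
    return s, max_iter, False

if __name__ == "__main__":
    # Test polynomial: f(s) = s^3 - 1, roots at the cube roots of unity.
    f, df = lambda s: s**3 - 1, lambda s: 3 * s**2
    s0 = -0.8 + 0.4j   # an illustrative start near a basin boundary
    for h in (1.0, 0.5, 0.25):
        root, n, ok = damped_newton(f, df, s0, h=h)
        print(f"fixed h={h:4.2f}: converged={ok} in {n:3d} steps -> {root:.6f}")
    root, n, ok = adaptive_newton(f, df, s0)
    print(f"adaptive    : converged={ok} in {n:3d} steps -> {root:.6f}")
```

The acceptance test on $|f(s)|$ is one simple proxy for detecting the limit-cycle behavior; smaller fixed $h$ typically needs more iterations, which is the stability-versus-speed trade-off the abstract describes.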
Source: IEEE Transactions on Signal Processing