Solving a system of nonlinear equations by Newton's method. The tangent method

The problem of finding solutions to a system of n nonlinear algebraic or transcendental equations with n unknowns of the form

f1(x1, x2, …, xn) = 0,

f2(x1, x2, …, xn) = 0,

……………………

fn(x1, x2, …, xn) = 0,    (6.1)

is widely encountered in computing practice. Such systems of equations can arise, for example, in the numerical modeling of nonlinear physical systems at the stage of searching for their stationary states. In a number of cases, systems of the form (6.1) are obtained indirectly, in the course of solving some other computational problem. For example, when trying to minimize a function of several variables, one can look for the points in multidimensional space where the gradient of the function is zero; in this case it is necessary to solve system (6.1) whose left-hand sides are the projections of the gradient onto the coordinate axes.

In vector notation, system (6.1) can be written in the more compact form F(x) = 0, where x = (x1, x2, …, xn)^T is the vector of unknowns, F = (f1, f2, …, fn)^T is the column vector of functions, and the symbol (·)^T denotes the transposition operation.

Finding solutions to a system of nonlinear equations is a much more complex task than solving a single nonlinear equation. Nevertheless, a number of iterative methods for solving nonlinear equations can be extended to systems of nonlinear equations.

Simple iteration method

The simple iteration method for systems of nonlinear equations is essentially a generalization of the method of the same name for one equation. It is based on the fact that the system of equations (6.1) is reduced to the form

x1 = g1(x1, x2, …, xn),
x2 = g2(x1, x2, …, xn),

……………………

xn = gn(x1, x2, …, xn),    (6.2)

and iterations are carried out according to the formulas

x1(k+1) = g1(x1(k), x2(k), …, xn(k)),
x2(k+1) = g2(x1(k), x2(k), …, xn(k)),

……………………………

xn(k+1) = gn(x1(k), x2(k), …, xn(k)).    (6.3)

Here the superscript indicates the approximation number. The iterative process (6.3) starts from some initial approximation (x1(0), x2(0), …, xn(0)) and continues until the absolute values of the increments of all the arguments over one k-th iteration become less than the given value ε: |xi(k+1) − xi(k)| < ε for i = 1, 2, …, n.

Although the simple iteration method leads directly to a solution and is easy to program, it has two significant disadvantages. One of them is slow convergence. The other is that if the initial approximation is chosen far from the true solution (X1, X2, …, Xn), convergence of the method is not guaranteed. It is clear that the problem of choosing an initial approximation, which is not simple even for one equation, becomes very difficult for nonlinear systems.
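As an illustration, here is a minimal sketch of the iteration scheme (6.3) for a system of two equations, written in Pascal in the spirit of the program discussed later in this text. The right-hand sides g1 and g2 are hypothetical examples chosen only to show the structure of the loop; they are not taken from this text.

program SimpleIterationSystem;
{ Simple (fixed-point) iteration (6.3) for a system of two equations x = g(x). }
const
  eps = 1e-6;        { required accuracy }
  MaxIter = 100;     { safety limit on the number of iterations }
var
  x1, x2, y1, y2: real;
  k: integer;

{ Hypothetical right-hand sides g1, g2 (assumed example) }
function g1(u, v: real): real; begin g1 := 0.5*cos(v) end;
function g2(u, v: real): real; begin g2 := 0.5*sin(u) end;

begin
  x1 := 0.0; x2 := 0.0;                  { initial approximation }
  k := 0;
  repeat
    y1 := g1(x1, x2);                    { new approximation }
    y2 := g2(x1, x2);
    k := k + 1;
    if (abs(y1 - x1) < eps) and (abs(y2 - x2) < eps) then
    begin
      writeln('Converged after ', k, ' iterations: x1 = ', y1:0:6, ', x2 = ', y2:0:6);
      halt;
    end;
    x1 := y1; x2 := y2;
  until k >= MaxIter;
  writeln('No convergence after ', MaxIter, ' iterations');
end.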

Solve a system of nonlinear equations:

F1(x1, …, xn) = 0,

…………………

Fn(x1, …, xn) = 0.    (4.1)

There are no direct methods for solving nonlinear systems of general form. Only in some cases can system (4.1) be solved directly. For example, in the case of two equations, it is sometimes possible to express one unknown in terms of the other and thus reduce the problem to solving one nonlinear equation with respect to one unknown.

Iterative methods are usually used to solve systems of nonlinear equations.

Newton's method

In the case of one equation F(x) = 0, the algorithm of Newton's method was easily obtained by writing the equation of the tangent to the curve y = F(x). Newton's method for systems of equations is based on expanding the functions F1(x1, …, xn), …, Fn(x1, …, xn) in Taylor series and discarding the terms that contain second- and higher-order derivatives. Let the approximate values of the unknowns of system (4.1) be a1, a2, …, an. The task is to find the increments (corrections) Δx1, Δx2, …, Δxn to these values, thanks to which the solution of the system can be written as

x1 = a1 + Δx1,   x2 = a2 + Δx2,   …,   xn = an + Δxn.

Let us expand the left-hand sides of equations (4.1) in Taylor series, keeping only the terms that are linear in the increments:

F1(x1, …, xn) ≈ F1(a1, …, an) + (∂F1/∂x1)·Δx1 + … + (∂F1/∂xn)·Δxn,

F2(x1, …, xn) ≈ F2(a1, …, an) + (∂F2/∂x1)·Δx1 + … + (∂F2/∂xn)·Δxn,

...................................

Fn(x1, …, xn) ≈ Fn(a1, …, an) + (∂Fn/∂x1)·Δx1 + … + (∂Fn/∂xn)·Δxn.

Substituting these expansions into system (4.1), we obtain the following system of linear algebraic equations for the increments:

(∂F1/∂x1)·Δx1 + (∂F1/∂x2)·Δx2 + … + (∂F1/∂xn)·Δxn = −F1,

(∂F2/∂x1)·Δx1 + (∂F2/∂x2)·Δx2 + … + (∂F2/∂xn)·Δxn = −F2,

..............................

(∂Fn/∂x1)·Δx1 + (∂Fn/∂x2)·Δx2 + … + (∂Fn/∂xn)·Δxn = −Fn.    (4.3)

The values F1, …, Fn and the derivatives ∂Fi/∂xj are calculated at x1 = a1, x2 = a2, …, xn = an.

The determinant of system (4.3) is the Jacobian:

      | ∂F1/∂x1  …  ∂F1/∂xn |
      | ∂F2/∂x1  …  ∂F2/∂xn |
J =   |    …     …     …    |
      | ∂Fn/∂x1  …  ∂Fn/∂xn |

For a unique solution to the system to exist, the Jacobian must be nonzero at each iteration.

Thus, the iterative process of solving a system of equations by Newton's method consists in determining, at each iteration, the increments Δx1, Δx2, …, Δxn to the values of the unknowns by solving the system of linear algebraic equations (4.3). The computation stops when all the increments become small in absolute value: max|Δxi| < ε.

In Newton's method, a successful choice of the initial approximation is also important to ensure good convergence. Convergence deteriorates as the number of equations in the system increases.

As an example, consider using Newton's method to solve a system of two equations F1(x, y) = 0, F2(x, y) = 0. At each iteration the increments Δx, Δy are found from the linear system

(∂F1/∂x)·Δx + (∂F1/∂y)·Δy = −F1,

(∂F2/∂x)·Δx + (∂F2/∂y)·Δy = −F2,

where the quantities on the right-hand side are calculated at the current approximation x = a, y = b. If the conditions |x − a| < ε and |y − b| < ε are satisfied within the given number of iterations M, the values of x and y are output; otherwise x, y and M are output (the required accuracy was not reached in M iterations).
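Below is a minimal Pascal sketch of one possible implementation of this scheme for two equations. The functions F1, F2 and their partial derivatives are a hypothetical example (the original equations are not reproduced in this text), and the 2×2 linear system (4.3) for the increments is solved by Cramer's rule.

program NewtonSystem2;
{ Newton's method for a system of two nonlinear equations:
  at each step the increments dx, dy are found from the linear system (4.3). }
const
  eps = 1e-6;        { required accuracy }
  M   = 50;          { maximum number of iterations }
var
  x, y, f1v, f2v, a11, a12, a21, a22, det, dx, dy: real;
  k: integer;

{ Hypothetical system (assumed example): F1 = x^2 + y^2 - 1, F2 = x - y }
function F1(x, y: real): real; begin F1 := x*x + y*y - 1 end;
function F2(x, y: real): real; begin F2 := x - y end;
{ Entries of the Jacobian }
function dF1dx(x, y: real): real; begin dF1dx := 2*x end;
function dF1dy(x, y: real): real; begin dF1dy := 2*y end;
function dF2dx(x, y: real): real; begin dF2dx := 1 end;
function dF2dy(x, y: real): real; begin dF2dy := -1 end;

begin
  x := 1.0;  y := 0.5;                   { initial approximation }
  k := 0;
  repeat
    f1v := F1(x, y);    f2v := F2(x, y);
    a11 := dF1dx(x, y); a12 := dF1dy(x, y);
    a21 := dF2dx(x, y); a22 := dF2dy(x, y);
    det := a11*a22 - a12*a21;            { the Jacobian must be nonzero }
    dx := (-f1v*a22 + f2v*a12) / det;    { Cramer's rule for system (4.3) }
    dy := (-f2v*a11 + f1v*a21) / det;
    x := x + dx;  y := y + dy;
    k := k + 1;
  until ((abs(dx) < eps) and (abs(dy) < eps)) or (k >= M);
  writeln('k = ', k, '   x = ', x:0:6, '   y = ', y:0:6);
end.

For larger systems the same structure is kept, but the linear system (4.3) is solved by Gaussian elimination instead of Cramer's rule.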

For example:

Let us set the task of finding the real roots of this equation.

And there certainly are! From the articles on graphs of functions and on equations of higher mathematics you know very well that the graph of a polynomial function of odd degree crosses the x-axis at least once, therefore our equation has at least one real root. One. Or two. Or three.

First, it is natural to check for rational roots. According to the corresponding theorem, only the numbers 1, −1, 3, −3 can claim this "title", and by direct substitution it is easy to make sure that none of them fits. Thus, irrational values remain. The irrational root(s) of a polynomial of degree 3 can be found exactly (expressed through radicals) using the so-called Cardano formulas, but this method is quite cumbersome. For polynomials of degree 5 and higher there is no general analytical method at all, and, in addition, in practice there are many other equations for which exact values of the real roots cannot be obtained (although they exist).

However, in applied (for example, engineering) problems, it is more than acceptable to use approximate values ​​calculated with a certain accuracy.

Let us set the accuracy for our example. What does it mean? It means that we need to find SUCH an approximate value of the root (roots) for which we are guaranteed to be off by no more than 0.001 (one thousandth).

It is absolutely clear that the solution cannot be started "at random", and therefore in the first step the roots are separated. To separate a root means to find a sufficiently small (usually unit-length) segment to which this root belongs and on which there are no other roots. The simplest and most accessible approach is the graphical method of root separation. Let us build a point-by-point graph of the function:

From the drawing it follows that the equation apparently has a single real root, belonging to the segment shown. At the ends of this segment the function takes values of different signs, and from the continuity of the function on the segment an elementary way of refining the root immediately suggests itself: divide the segment in half and select the half at whose ends the function takes values of different signs. We divide the resulting interval in half and again select the "different-sign" half. And so on. Such successive actions are called iterations. They should be carried out until the length of the segment becomes less than twice the calculation accuracy, and the midpoint of the last "different-sign" segment should be taken as the approximate value of the root.

The scheme just described has received a natural name: the half-division method. Its disadvantage is speed. It is slow. Very slow. Too many iterations are needed before the required accuracy is achieved. With the development of computer technology this is, of course, not a problem, but mathematics exists precisely to look for the most rational solutions.
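A minimal Pascal sketch of this half-division scheme; the function f below is a hypothetical example, and the loop stops when the bracketing segment becomes shorter than 2ε, after which its midpoint is taken as the root.

program Bisection;
{ Half-division (bisection) method on a segment [a, b] with f(a)*f(b) < 0. }
const
  eps = 0.001;                       { required accuracy }
var
  a, b, c: real;

function f(x: real): real;           { hypothetical example }
begin
  f := x*x*x + 2*x - 2;
end;

begin
  a := 0; b := 1;                    { f(0) < 0, f(1) > 0: the root is bracketed }
  while b - a >= 2*eps do
  begin
    c := (a + b) / 2;                { midpoint }
    if f(a)*f(c) <= 0 then b := c    { keep the half where the sign changes }
    else a := c;
  end;
  writeln('root = ', (a + b)/2 :0:3);
end.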

And one of the more effective ways of finding an approximate value of the root is precisely the tangent method. The brief geometric essence of the method is as follows: first, using a special criterion (more on that a little later), one of the ends of the segment is selected. This end is called the initial approximation of the root. Now we draw a tangent to the graph of the function at this abscissa (the blue point and the purple tangent in the figure):

This tangent crossed the x-axis at the yellow point, and note that at the first step we almost "hit" the root! This will be the first approximation of the root. Next, we drop a perpendicular from the yellow point to the graph of the function and "land" on the orange point. Through the orange point we again draw a tangent, which intersects the axis even closer to the root. And so on. It is not difficult to see that with the tangent method we approach the goal by leaps and bounds, and literally a few iterations are enough to achieve the required accuracy.

Since the tangent is defined through the derivative of the function, this lesson ended up in the "Derivatives" section as one of its applications. Without going into a detailed theoretical justification of the method, I will consider the technical side of the matter. In practice, the problem described above is encountered in approximately the following formulation:

Example 1

Using the graphical method, find the interval on which the real root of the equation is located. Using Newton's method, obtain an approximate value of the root with an accuracy of 0.001

Here is a "sparing version" of the task, in which the presence of a single real root is stated right away.

Solution: in the first step the root should be separated graphically. This can be done by plotting the function itself (see the illustrations above), but this approach has a number of disadvantages. Firstly, the graph may turn out not to be simple (we do not know this in advance), and plotting software is not always at hand. Secondly (as a consequence of the first point), the result will quite likely be not even a schematic sketch but a rough drawing, which, of course, is not good.

So why create unnecessary difficulties? Let us represent the equation in an equivalent form, CAREFULLY construct the two graphs and mark the root in the drawing (the "x" coordinate of the point of intersection of the graphs):

The obvious advantage of this approach is that graphs of these functions can be drawn by hand much more accurately and much faster. By the way, note that the straight line crosses the cubic parabola at a single point, which means that the proposed equation indeed has only one real root. Trust, but verify ;-)

So, our “client” belongs to the segment and “by eye” is approximately equal to 0.65-0.7.

In the second step we need to choose the initial approximation of the root. Usually this is one of the ends of the segment. The initial approximation must satisfy the following condition:

Let us find the first and second derivatives of the function:

and check the left end of the segment:

Thus, the zero “did not fit.”

Checking the right end of the segment:

- Everything is fine! We choose the right end as the initial approximation.

In the third step the road to the root awaits us. Each subsequent approximation of the root is calculated from the previous one by the following recurrence formula:

x(n+1) = x(n) − f(x(n)) / f'(x(n)).

The process ends when the condition |x(n+1) − x(n)| < ε is met, where ε is the predetermined calculation accuracy. As a result, the "n-th" approximation is taken as the approximate value of the root.
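Before doing the calculations by hand, here is a minimal Pascal sketch of this recurrence with the stopping rule just stated. The cubic from the example is not reproduced in this text, so the function f and its derivative f1 below are a hypothetical stand-in.

program NewtonTangent;
{ Tangent (Newton) method: x[n+1] = x[n] - f(x[n])/f'(x[n]),
  stopping when the last correction is smaller than eps. }
const
  eps = 0.001;                       { required accuracy }
var
  x, delta: real;
  n: integer;

function f(x: real): real;  begin f  := x*x*x + 2*x - 2 end;   { assumed example }
function f1(x: real): real; begin f1 := 3*x*x + 2 end;          { its derivative }

begin
  x := 1.0;                          { initial approximation x0 }
  n := 0;
  repeat
    delta := f(x) / f1(x);           { the correction f(x[n]) / f'(x[n]) }
    writeln('n = ', n, '   x(n) = ', x:0:6, '   f(x(n)) = ', f(x):0:6);
    x := x - delta;                  { next approximation }
    n := n + 1;
  until abs(delta) < eps;
  writeln('root = ', x:0:3);
end.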

Next up are the routine calculations:

(rounding is usually carried out to 5-6 decimal places)

Since the obtained value is greater than , we proceed to the 1st approximation of the root:

We calculate:

, so there is a need to move to the 2nd approximation:

Let's move on to the next round:

, thus, the iterations are completed, and the 2nd approximation should be taken as the approximate value of the root, which, in accordance with the given accuracy, should be rounded to one thousandth:

In practice, it is convenient to enter the results of calculations into a table; in order to shorten the entry somewhat, a fraction is often denoted by:

If possible, it is better to carry out the calculations themselves in Excel - it is much more convenient and faster:

Answer: accurate to 0.001

Let me remind you that this phrase means that our estimate of the true value of the root is off by no more than 0.001. Those in doubt can pick up a calculator and once again substitute the approximate value 0.674 into the left-hand side of the equation.

Now let us "scan" the right column of the table from top to bottom and notice that the values are steadily decreasing in absolute value. This effect is called convergence of the method, and it is what allows us to calculate the root with arbitrarily high accuracy. But convergence does not always take place: it is ensured by a number of conditions, about which I have kept silent. In particular, the segment on which the root is isolated must be small enough, otherwise the values will change erratically and we will not be able to complete the algorithm.

What to do in such cases? Check that the specified conditions are met (see link above), and, if necessary, reduce the segment. So, relatively speaking, if in the analyzed example the interval was not suitable for us, then we should consider, for example, the segment. In practice, I have encountered such cases, and this technique really helps! The same must be done if both ends of the “wide” segment do not satisfy the condition (i.e., none of them are suitable as an initial approximation).

But usually everything works like a clock, although not without pitfalls:

Example 2

Determine graphically the number of real roots of the equation, separate these roots and, using Newton’s method, find approximate values ​​of the roots with accuracy

The condition of the problem has become noticeably stricter: firstly, it contains a strong hint that the equation has more than one root; secondly, the requirement for accuracy has increased; and thirdly, the graph of the function is much more difficult to deal with.

Therefore we start the solution with a saving trick: we represent the equation in an equivalent form and draw the graphs:


From the drawing it follows that our equation has two real roots:

The algorithm, as you understand, needs to be "cranked" twice. But this is far from the worst case; sometimes 3-4 roots have to be examined.

1) Using the criterion, let us find out which end of the segment should be chosen as the initial approximation of the first root. We find the derivatives of the function:

Testing the left end of the segment:

- it fits!

Thus, the left end is taken as the initial approximation.

We refine the root by Newton's method using the recurrence formula, iterating until the absolute value of the fraction f(x(n)) / f'(x(n)) becomes less than the required accuracy:

And here the word "modulus" acquires non-illusory importance, since the values are negative:


For the same reason, you should show increased attention when moving to each next approximation:

Despite the fairly high requirement for accuracy, the process again ended at the 2nd approximation: , therefore:

Accurate to 0.0001

2) Let's find the approximate value of the root.

We put the left end of the segment to the test:

, therefore, it is not suitable as an initial approximation.

Newton's method (also known as the tangent method) is an iterative numerical method for finding the root (zero) of a given function. The method was first proposed by the English physicist, mathematician and astronomer Isaac Newton (1643-1727), under whose name it became famous.

The method was described by Isaac Newton in the manuscript De analysi per aequationes numero terminorum infinitas (Latin: On Analysis by Equations with an Infinite Number of Terms), addressed to Barrow in 1669, and in the work De methodis fluxionum et serierum infinitarum (Latin: The Method of Fluxions and Infinite Series), also known as Geometria analytica (Latin: Analytical Geometry), written in 1671 and published in Newton's collected works. However, the description of the method differed significantly from its current presentation: Newton applied his method exclusively to polynomials. He did not calculate successive approximations x(n), but a sequence of polynomials, and as a result obtained an approximate solution x.

The method was first published in the treatise Algebra by John Wallis in 1685, at whose request it was briefly described by Newton himself. In 1690, Joseph Raphson published a simplified description in his work Analysis aequationum universalis (lat. General analysis of equations). Raphson viewed Newton's method as purely algebraic and limited its use to polynomials, but he described the method in terms of successive approximations x n instead of the more difficult to understand sequence of polynomials used by Newton.

Finally, in 1740, Newton's method was described by Thomas Simpson as a first-order iterative method for solving nonlinear equations using derivatives as outlined here. In the same publication, Simpson generalized the method to the case of a system of two equations and noted that Newton's method can also be applied to solve optimization problems by finding the zero of the derivative or gradient.

In accordance with this method, the task of finding the root of a function is reduced to the task of finding the point of intersection with the x-axis of the tangent plotted to the graph of the function.

Fig. 1. Graph of the function

A tangent drawn at any point of the graph of a function is determined by the derivative of the function at the point under consideration, which in turn equals the tangent of the slope angle α. The point of intersection of the tangent with the x-axis is found from the following relationship in a right triangle: the tangent of an angle in a right triangle is the ratio of the opposite side to the adjacent side. Thus, at each step a tangent to the graph of the function is constructed at the point of the current approximation, and the point of intersection of this tangent with the Ox axis becomes the next approximation. In accordance with the method under consideration, the approximate value of the root at the i-th iteration is calculated by the formula:

The slope of the straight line is adjusted at each step in the best possible way, however, you should pay attention to the fact that the algorithm does not take into account the curvature of the graph and, therefore, during the calculation process it remains unknown in which direction the graph may deviate.

The condition for ending the iterative process is the fulfillment of the condition

|x(i+1) − x(i)| < ε,

where ε is the permissible error in determining the root.

The method has quadratic convergence: the quadratic rate of convergence means that the number of correct digits in the approximation roughly doubles with each iteration.

Mathematical justification

Let a real function f(x) be given, defined and continuous in the region under consideration. It is required to find a real root of this function.

The derivation is based on the method of simple iterations, according to which the equation f(x) = 0 is reduced to an equivalent equation x = φ(x) for an arbitrary function λ(x). Let us introduce the contraction mapping defined by the relation

φ(x) = x + λ(x)·f(x).

For the best convergence of the method, the condition φ'(x) = 0 must be satisfied at the point of the next approximation. This requirement means that the root of the function f must correspond to an extremum of the function φ.

The derivative of the contraction map is

φ'(x) = 1 + λ'(x)·f(x) + λ(x)·f'(x).

Expressing λ(x) from this expression, subject to the previously accepted requirement that φ'(x) = 0 when f(x) = 0, we obtain

λ(x) = −1 / f'(x).

Taking this into account, the contraction function is

φ(x) = x − f(x) / f'(x).

Thus, the algorithm for finding a numerical solution of the equation f(x) = 0 reduces to the iterative calculation procedure

x(i+1) = x(i) − f(x(i)) / f'(x(i)).

Algorithm for finding the root of a nonlinear equation by Newton's method

1. Set the starting point x(0) of the approximate value of the root of the function, as well as the calculation accuracy ε (a small positive number), and set the initial iteration number (i = 0).

2. Calculate the approximate value of the root of the function according to the formula x(i+1) = x(i) − f(x(i)) / f'(x(i)).

3. Check the approximate value of the root against the specified accuracy:

If the difference between two successive approximations becomes less than the specified accuracy, then the iterative process ends.

If the difference between two successive approximations does not reach the required accuracy, then it is necessary to continue the iterative process and go to step 2 of the algorithm under consideration.
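A compact Pascal sketch of steps 1-3 above, with an added iteration limit as a guard against looping; the function f and the starting point are hypothetical examples.

program NewtonAlgorithm;
{ Steps 1-3 of the algorithm above, with a safety limit on the iterations. }
const
  eps = 0.001;                       { step 1: calculation accuracy }
  MaxIter = 50;                      { guard against looping }
var
  x, xNew: real;
  i: integer;

function f(x: real): real;  begin f  := x*x*x + 2*x - 2 end;   { assumed example }
function f1(x: real): real; begin f1 := 3*x*x + 2 end;

begin
  x := 1.0;                          { step 1: starting point x(0) }
  for i := 1 to MaxIter do
  begin
    xNew := x - f(x)/f1(x);          { step 2: next approximation }
    if abs(xNew - x) < eps then      { step 3: accuracy check }
    begin
      writeln('root = ', xNew:0:6, '  after ', i, ' iterations');
      halt;
    end;
    x := xNew;                       { accuracy not reached: repeat step 2 }
  end;
  writeln('no convergence within ', MaxIter, ' iterations');
end.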

Example of solving an equation by Newton's method for an equation with one variable

As an example, consider the solution of a nonlinear equation by Newton's method for an equation with one variable. The root must be found with the specified accuracy, starting from the given first approximation.

A variant of the solution of the nonlinear equation in the MathCAD software package is presented in Figure 3.

The calculation results, namely the dynamics of changes in the approximate value of the root, as well as the calculation errors depending on the iteration step, are presented in graphical form (see Fig. 2).

Fig.2. Calculation results using Newton's method for an equation with one variable

To ensure the specified accuracy when searching for an approximate value of the root of the equation in the range, it is necessary to perform 4 iterations. At the last iteration step, the approximate value of the root of the nonlinear equation will be determined by the value: .

Fig. 3. Program listing in MathCAD

Modifications of Newton's method for an equation with one variable

There are several modifications of Newton's method that are aimed at simplifying the computational process.

Simplified Newton's method

In accordance with Newton's method, the derivative of the function f(x) has to be calculated at each iteration step, which increases the computational cost. To reduce the cost of calculating the derivative at each step, the derivative f'(x(n)) at the point x(n) in the formula can be replaced by the derivative f'(x(0)) at the point x(0). In accordance with this calculation scheme, the approximate value of the root is determined by the formula

x(n+1) = x(n) − f(x(n)) / f'(x(0))   (the modified, simplified Newton method).
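A sketch of this simplified scheme, again with a hypothetical function f: the derivative is computed once at x(0) and reused at every step.

program SimplifiedNewton;
{ Simplified Newton's method: x[n+1] = x[n] - f(x[n]) / f'(x0). }
const
  eps = 0.001;
var
  x, d0, delta: real;

function f(x: real): real;  begin f  := x*x*x + 2*x - 2 end;   { assumed example }
function f1(x: real): real; begin f1 := 3*x*x + 2 end;          { its derivative }

begin
  x := 1.0;                          { initial approximation x0 }
  d0 := f1(x);                       { the derivative is frozen at x0 }
  repeat
    delta := f(x) / d0;
    x := x - delta;
  until abs(delta) < eps;
  writeln('root = ', x:0:3);
end.

Each step is cheaper, but more steps are usually needed than with the full Newton method, since the convergence degrades from quadratic to linear.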

Newton's difference method

As a result, the approximate value of the root of the function f(x) will be determined by the expression of Newton’s difference method:

Newton's two-step method

In accordance with Newton's method, it is necessary to calculate the derivative of the function f(x) at each iteration step, which is not always convenient and sometimes practically impossible. This method allows the derivative of a function to be replaced by a difference ratio (approximate value):

As a result, the approximate value of the root of the function f(x) will be determined by the following expression:

Where

Fig. 5. Newton's two-step method

The secant method is a two-step method: the new approximation x(n+1) is determined by the two previous approximations x(n) and x(n−1). The method requires two initial approximations x(0) and x(1) to be specified. The convergence of the method is superlinear (of order about 1.6).
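A minimal sketch of this two-step (secant) scheme, once more with a hypothetical f; the derivative is replaced by the difference ratio built from the two previous approximations.

program SecantMethod;
{ Secant method:
  x[n+1] = x[n] - f(x[n]) * (x[n] - x[n-1]) / (f(x[n]) - f(x[n-1])). }
const
  eps = 0.0001;
var
  xPrev, xCur, xNext: real;

function f(x: real): real; begin f := x*x*x + 2*x - 2 end;     { assumed example }

begin
  xPrev := 0.0;  xCur := 1.0;        { two initial approximations }
  repeat
    xNext := xCur - f(xCur) * (xCur - xPrev) / (f(xCur) - f(xPrev));
    xPrev := xCur;
    xCur  := xNext;
  until abs(xCur - xPrev) < eps;
  writeln('root = ', xCur:0:4);
end.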


Federal Agency for Education

Sochi State University of Tourism and Resort Business

Faculty of Information Technologies and Mathematics

Department of General Mathematics

Coursework in the discipline

"Numerical Methods"

"Newton's method and its modifications for solving systems of nonlinear equations"

Completed:

3rd year student

groups 06-INF

Lavrenko M.V.

Checked:

associate professor, candidate

pedagogical sciences


In connection with the development of new computer technology, modern engineering practice is increasingly encountering mathematical problems, the exact solution of which is very difficult or impossible to obtain. In these cases, one usually resorts to one or another approximate calculation. That is why approximate and numerical methods of mathematical analysis have been widely developed in recent years and have acquired exceptional importance.

This course work examines the famous Newton method and its modification for solving systems of nonlinear equations. Solving systems of nonlinear equations is one of the difficult problems in computational mathematics. The difficulty is to determine whether the system has a solution, and, if so, how many. The convergence of the basic and simplified Newton methods and the method obtained from Newton's method using an iterative process for the approximate inversion of Jacobi matrices is studied.

It also briefly describes the false position method, the secant method and the Steffensen method, which often turns out to be a better choice for solving nonlinear equations than the secant method or the false position method.


The famous Newton method is one of the most effective methods for solving a wide variety of nonlinear problems. The calculation formula of the method can be obtained using various approaches. Let's look at two of them.

1) Tangent method.

Let us derive the calculation formula of the method for solving the nonlinear equation f(x) = 0 from simple geometric considerations. Let x0 be a given initial approximation to the root. At the point with coordinates (x0, f(x0)) we draw a tangent to the graph of the function y = f(x) and take the abscissa x1 of the point of intersection of this tangent with the x-axis as the new approximation. Similarly, we take as the next approximation x2 the abscissa of the point of intersection with the x-axis of the tangent drawn to the graph at the point with coordinates (x1, f(x1)). Continuing this process, we obtain a sequence x0, x1, x2, … of approximations to the root.

The equation of the tangent drawn to the graph of the function y = f(x) at the point (xn, f(xn)) has the form:

y = f(xn) + f'(xn)·(x − xn).    (1.1)

Setting y = 0 in equality (1.1), we note that, provided f'(xn) ≠ 0, the abscissa xn+1 of the point of intersection of the tangent with the x-axis satisfies the equality:

f(xn) + f'(xn)·(xn+1 − xn) = 0.    (1.2)

Expressing xn+1 from it, we obtain the calculation formula of Newton's method:

xn+1 = xn − f(xn) / f'(xn),   n = 0, 1, 2, ….    (1.3)

Owing to this geometric interpretation, the method is often called the tangent method.

Let it be necessary to solve a system of equations

F1(x1, …, xn) = 0,
…………………
Fn(x1, …, xn) = 0,    (1)

where F1, …, Fn are given nonlinear (in particular, possibly linear) real-valued functions of n real variables x1, …, xn. Denoting x = (x1, …, xn)^T and F = (F1, …, Fn)^T, this system (1) can be written as the single equation

F(x) = 0    (2)

with respect to the vector function F of the vector argument x. Thus, the original problem can be considered as a problem about the zeros of the nonlinear mapping F.

In this formulation it is a direct generalization of the main problem of the previous chapter, the problem of constructing methods for finding zeros of one-dimensional nonlinear mappings. In fact, it is the same problem, only in spaces of higher dimension. Therefore one can either construct methods for solving it anew, based on the approaches developed above, or carry out a formal transfer of the calculation formulas derived for the scalar case. In either case, care should be taken about the validity of the operations on vector variables and vector functions, as well as about the convergence of the iterative processes obtained in this way. Often the convergence theorems for these processes are trivial generalizations of the corresponding results obtained for methods of solving scalar equations. However, not all results and not all methods can be transferred from the case n = 1 to the case n ≥ 2. For example, dichotomy methods will no longer work here, since the set of vectors is not ordered. At the same time, the transition from n = 1 to n ≥ 2 introduces its own specifics into the problem of finding the zeros of a nonlinear mapping, and taking this into account leads to new methods and to various modifications of existing ones. In particular, the large variety of methods for solving nonlinear systems is associated with the variety of ways in which the linear algebraic problems arising in the step-by-step linearization of the given nonlinear vector function F(x) can be solved.

2) Linearization method.

STATE EDUCATIONAL INSTITUTION

"Transnistrian State University named after. T.G. Shevchenko"

Rybnitsa branch

Department of Physics, Mathematics and Informatics

Coursework

in the discipline: “Workshop on solving problems on a computer”

"Newton's method for solving nonlinear equations"

Completed:

3rd year student;

330th group

specialties: “Informatics”

with additional majoring in English

Nistor A.G..

Checked:

teacher Panchenko T. A.


The introduction of computers into all spheres of human activity requires specialists of various profiles to master the skills of using computer technology. The level of training of university students is rising: from the first year they are introduced to the use of computers and to the simplest numerical methods, not to mention the fact that in the preparation of coursework and diploma projects the use of computer technology has become the norm in the vast majority of universities.

Computer technology is now used not only in engineering calculations and economic sciences, but also in such traditionally non-mathematical specialties as medicine, linguistics, and psychology. In this regard, it can be stated that the use of computers has become widespread. A large category of specialists has emerged - computer users who need knowledge on the use of computers in their industry - skills in working with existing software, as well as creating their own software adapted to solve a specific problem. And here descriptions of high-level programming languages ​​and numerical methods come to the aid of the user.

Numerical methods are developed and researched, as a rule, by highly qualified mathematicians. For most users, the main task is to understand the basic ideas and methods, features and applications. However, users want to work with a computer not only as a highly intelligent calculator, but also as an assistant in everyday work, a repository of information with quick and orderly access, as well as a source and processor of graphical information. I intend to demonstrate all these functions of a modern computer in this course work.

Goals and objectives.

The purpose of this course work is to study and implement in a software product the solution of nonlinear equations using Newton's method. This work consists of three sections, conclusion and appendix. The first section is theoretical and contains general information about Newton's method. The second is the practical part. Here we describe Newton's method, analyzed with specific examples. The third is devoted to testing the program and analyzing the results. Finally, a conclusion about the work done is presented.

The purpose of this course work is the software implementation of Newton's method for solving nonlinear equations.

To do this, you must complete the following tasks:

1. Study the necessary literature.

2. Review existing methods for solving nonlinear equations.

3. Study Newton's method for solving nonlinear equations.

4. Consider the solution of nonlinear equations by Newton’s method using specific examples.

5. Develop a program for solving nonlinear equations by Newton’s method.

6. Analyze the results.

Consider the problem of finding the roots of a nonlinear equation

f(x) = 0.    (1)

The roots of equation (1) are those values ​​of x that, when substituted, turn it into an identity. Only for the simplest equations it is possible to find a solution in the form of formulas, i.e. analytical form. More often it is necessary to solve equations using approximate methods, the most widespread among which, due to the advent of computers, are numerical methods.

The algorithm for finding roots using approximate methods can be divided into two stages. At the first stage, the location of the roots is studied and their separation is carried out. The region is found in which the root of the equation or the initial approximation to the root x 0 exists. The simplest way to solve this problem is to examine the graph of the function f(x) . In the general case, to solve it it is necessary to use all the means of mathematical analysis.

The existence of at least one root of equation (1) on the found segment follows from Bolzano’s condition:

f(a)*f(b)<0 (2)

It is assumed here that the function f(x) is continuous on this segment. However, condition (2) does not answer the question about the number of roots of the equation on the given segment. If the requirement of continuity of the function is supplemented with the requirement of its monotonicity, which follows from the constancy of sign of the first derivative, then we can assert the existence of a single root on the given segment.
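As a small illustration of how the Bolzano condition (2) can be used mechanically for root separation, the Pascal sketch below scans a search interval with a fixed step and reports every subinterval on which f changes sign. The function f is a hypothetical example, and the step must be small enough not to jump over a pair of close roots.

program SeparateRoots;
{ Root separation: scan [A, B] with step h and report the subintervals
  on which f(a)*f(b) < 0 (the Bolzano condition). }
const
  A = -3.0;  B = 3.0;  h = 0.25;
var
  a, b: real;

function f(x: real): real; begin f := x*x*x + 2*x - 2 end;     { assumed example }

begin
  a := A;
  while a < B do
  begin
    b := a + h;
    if f(a) * f(b) < 0 then
      writeln('sign change on [', a:0:2, ', ', b:0:2, ']');
    a := b;
  end;
end.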

When localizing roots it is also important to know the basic properties of the particular type of equation. For example, let us recall some properties of algebraic equations

a0·x^n + a1·x^(n−1) + … + an = 0,    (3)

where a0, a1, …, an are real coefficients.

a) An equation of degree n has n roots, among which there can be both real and complex. Complex roots form complex conjugate pairs and, therefore, the equation has an even number of such roots. If n is odd, there is at least one real root.

b) The number of positive real roots is less than or equal to the number of sign changes in the sequence of coefficients. Replacing x with −x in equation (3) allows the number of negative roots to be estimated in the same way.

At the second stage of solving equation (1), using the obtained initial approximation, an iterative process is constructed that makes it possible to refine the value of the root with a certain predetermined accuracy. The iterative process consists of sequential refinement of the initial approximation. Each such step is called an iteration. As a result of the iteration process, a sequence of approximate values ​​of the roots of the equation is found. If this sequence approaches the true value of the root x as n grows, then the iterative process converges. An iterative process is said to converge to at least order m if the following condition is met:

|x(n+1) − x| ≤ C·|x(n) − x|^m,    (4)

where C > 0 is some constant. If m = 1, we speak of first-order convergence; m = 2 means quadratic and m = 3 cubic convergence.

Iterative cycles end when, for a given permissible error ε, the criterion of absolute or relative deviation is met:

|x(n+1) − x(n)| < ε    (5)   or   |x(n+1) − x(n)| / |x(n+1)| < ε,    (6)

or when the residual is small:

|f(x(n))| < ε.    (7)

This work is devoted to the study of an algorithm for solving nonlinear equations using Newton's method.

1.1 Review of existing methods for solving nonlinear equations

There are many different methods for solving nonlinear equations, some of them are presented below:

1) Iteration method. When solving a nonlinear equation by the iteration method, we use the equation written in the form x = f(x). The initial value of the argument x0 and the accuracy ε are specified. The first approximation of the solution x1 is found from the expression x1 = f(x0), the second from x2 = f(x1), and so on. In the general case, the (i+1)-th approximation is found by the formula x(i+1) = f(x(i)). The procedure is repeated until the required accuracy ε is reached. The condition for convergence of the iteration method is |f'(x)| < 1.

2) Newton's method. When solving a nonlinear equation by Newton's method, the initial value of the argument x0 and the accuracy ε are specified. At the point (x0, F(x0)) we draw a tangent to the graph of F(x) and determine its point of intersection x1 with the x-axis. At the point (x1, F(x1)) we again construct a tangent and find the next approximation x2 of the desired solution, and so on. The procedure is repeated until the required accuracy is reached. To determine the point of intersection of the (i+1)-th tangent with the abscissa axis we use the formula x(i+1) = x(i) − F(x(i)) / F'(x(i)). The condition for convergence of the tangent method is F(x0)·F''(x0) > 0.

3) Dichotomy method. The solution technique comes down to gradually halving the initial uncertainty interval according to the formula c(k) = (a(k) + b(k)) / 2.

To select the required one of the two resulting segments, it is necessary to find the values of the function at the ends of the segments and keep the one on which the function changes sign, that is, the one for which the condition f(a(k))·f(b(k)) < 0 is satisfied.

The halving of the segment is carried out until the length of the current uncertainty interval becomes less than the specified accuracy, that is, until b(k) − a(k) < ε. Then the point corresponding to the middle of the uncertainty interval is taken as the approximate solution of the equation.

4) Chord method. The idea of the method is that a chord subtending the ends of the arc of the graph of the function y = f(x) is constructed on the segment, and the point c where the chord intersects the x-axis is considered an approximate value of the root:

c = a − f(a)·(a − b) / (f(a) − f(b)),

or, equivalently,

c = b − f(b)·(a − b) / (f(a) − f(b)).

The next approximation is sought on the interval (a, c) or (c, b), depending on the signs of the function values at the points a, b, c:

x* ∈ (a, c), if f(c)·f(a) < 0;

x* ∈ (c, b), if f(c)·f(b) < 0.


If f''(x) does not change sign on the segment, then, denoting c = x1 and taking a or b as the initial approximation, we obtain the iterative formulas of the chord method with a fixed right or fixed left point:

x0 = a,   x(i+1) = x(i) − f(x(i))·(b − x(i)) / (f(b) − f(x(i))),   when f(b)·f''(x) > 0 (fixed right point);

x0 = b,   x(i+1) = x(i) − f(x(i))·(x(i) − a) / (f(x(i)) − f(a)),   when f(a)·f''(x) > 0 (fixed left point).

The convergence of the chord method is linear.
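A minimal Pascal sketch of the chord method with a fixed right point, following the first of the two formulas above. The function f is a hypothetical example for which f(b) and f''(x) have the same sign on the segment, so the right end b stays fixed.

program ChordMethod;
{ Chord method with the fixed right point b:
  x[i+1] = x[i] - f(x[i]) * (b - x[i]) / (f(b) - f(x[i])). }
const
  eps = 0.001;
var
  b, x, xNew, delta: real;

function f(x: real): real; begin f := x*x*x + 2*x - 2 end;     { assumed example }

begin
  x := 0.0;  b := 1.0;               { x0 = a = 0; b = 1 stays fixed }
  repeat
    xNew := x - f(x) * (b - x) / (f(b) - f(x));
    delta := abs(xNew - x);
    x := xNew;
  until delta < eps;
  writeln('root = ', x:0:3);
end.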

1.2 Newton's method algorithm

Let us build an efficient algorithm for calculating the roots of the equation. Let the initial approximation x0 be given. We calculate the value of the function and of its derivative at this point and consider the graphical illustration of the method: the tangent drawn at the point (x0, f(x0)) crosses the x-axis at the point

x1 = x0 − f(x0) / f'(x0).    (8)

Continuing this process, we obtain the famous Newton formula:

x(i+1) = x(i) − f(x(i)) / f'(x(i)).    (9)

Here is the simplest recursive subroutine function:

function X_Newt(x, eps: real): real;
var y: real;
begin
  y := x - f(x)/f1(x);                 { one Newton step }
  if abs(f(x)) > eps
    then X_Newt := X_Newt(y, eps)      { accuracy not reached: recurse from y }
    else X_Newt := x;                  { |f(x)| <= eps: accept x (completion of the missing branch) }
end;
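For completeness, here is how such a function could be packaged into a small self-contained test program; the function f and its derivative f1 are hypothetical examples, and the completion of the missing else branch is an assumption about the author's intent.

program NewtonRecursive;
{ Test program for the recursive X_Newt function above. }
const
  eps = 0.001;

function f(x: real): real;  begin f  := x*x*x + 2*x - 2 end;   { assumed example }
function f1(x: real): real; begin f1 := 3*x*x + 2 end;          { its derivative }

function X_Newt(x, eps: real): real;
var y: real;
begin
  y := x - f(x)/f1(x);
  if abs(f(x)) > eps
    then X_Newt := X_Newt(y, eps)
    else X_Newt := x;
end;

begin
  writeln('root = ', X_Newt(1.0, eps):0:3);    { x0 = 1.0 }
end.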

Newton's (tangent) method is characterized by a quadratic rate of convergence, i.e. at each iteration the number of correct digits roughly doubles. However, this method does not always lead to the desired result. Let us consider this issue in more detail.

Let us transform equation (1) to an equivalent equation of the form x = g(x).    (10)

In the case of the tangent method g(x) = x − f(x)/f'(x). If an initial approximation to the root x = x0 is known, then the next approximation is found from the equation x1 = g(x0), then x2 = g(x1), and so on. Continuing this process, we obtain the recurrence formula of the simple iteration method

x(k+1) = g(x(k)).    (11)

The iterative process continues until conditions (5-7) are met.

Does the described computational process always lead to the desired solution? Under what conditions will it converge? To answer these questions, let us again turn to the geometric illustration of the method.

The root of the equation is represented by the point of intersection of the graphs y = x and y = g(x). As can be seen from Fig. 3(a), if the condition |g'(x)| < 1 is satisfied, the process converges; otherwise it diverges (Fig. 3(b)).


So, in order for the iterative process to converge and lead to the desired result, the following condition must be met:

|g'(x)| < 1.    (12)

The transition from the equation f(x)=0 to the equation x=g(x) can be carried out in various ways. In this case, it is important that the selected function g(x) satisfies condition (12). For example, if the function f(x) is multiplied by an arbitrary constant q and the variable x is added to both sides of equation (1), then g(x)=q*f(x)+x. Let us choose a constant q such that the convergence rate of the algorithm is the highest. If 1

Newton's method has a high convergence rate, but it does not always converge. The convergence condition (12), where g(x) = x − f(x)/f'(x), reduces to the requirement |f(x)·f''(x)| < [f'(x)]².

In practical calculations, it is important to choose the initial value as close as possible to the desired value, and to install a “looping guard” in the program.

The disadvantage of the method is that at each step it is necessary to calculate not only the function but also its derivative, which is not always convenient. One modification of Newton's method is to calculate the derivative only at the first iteration:

x(k+1) = x(k) − f(x(k)) / f'(x(0)).    (13)

Another modification is to replace the derivative with a finite difference

f'(x(k)) ≈ (f(x(k)) − f(x(k−1))) / (x(k) − x(k−1)).    (14)

Then

x(k+1) = x(k) − f(x(k))·(x(k) − x(k−1)) / (f(x(k)) − f(x(k−1))).    (15)

The geometric meaning of this change in Newton's algorithm is that from the tangent we come to the secant. The secant method is inferior to Newton's method in terms of convergence speed, but does not require the calculation of the derivative. Note that the initial approximations in the secant method can be located either on different sides of the root or on the same side.

Let us write down the algorithm of Newton's method in general form.

1. Set the initial approximation x (0) so that the condition is satisfied

f(x (0))*f’’(x (0))>0. (16)

Set a small positive number ε as the accuracy of the calculations. Set k = 0.

2. Calculate x(k+1) using formula (9):

x(k+1) = x(k) − f(x(k)) / f'(x(k)).

3. If |x(k+1) − x(k)| < ε, stop the calculation and set x* = x(k+1). Otherwise, increase k by 1 (k = k + 1) and go to step 2.

Let's manually solve several nonlinear equations using Newton's method, and then compare the results with those obtained when implementing the software product.

Example 1

sin x² + cos x² − 10x = 0.

F'(x) = 2x·cos x² − 2x·sin x² − 10.

F''(x) = 2cos x² − 4x²·sin x² − 2sin x² − 4x²·cos x² = cos x²·(2 − 4x²) − sin x²·(2 + 4x²).


Now, based on the graph, let’s take the first approximate root and check condition (16): f(x (0)) * f’’(x (0)) > 0.

Let x(0) = 0.565; then f(0.565)·f''(0.565) = −4.387·(−0.342) = 1.5 > 0,

The condition is met, so we take x (0) = 0.565.

k    x(k)      f(x(k))    f'(x(k))    |x(k+1) − x(k)|
0    0.565     −4.387     −9.982      0.473
1    0.092      0.088     −9.818      0.009
2    0.101      0.000     −9.800      0.000
3    0.101

It follows that the root of the equation is x = 0.101.

Example 2

Solve the equation using Newton's method.

cos x − e^(−x²/2) + x − 1 = 0

Calculations should be carried out with an accuracy of ε = 0.001.

Let's calculate the first derivative of the function.

F'(x) = 1 − sin x + x·e^(−x²/2).

Now let's calculate the second derivative of the function.

F''(x) = e^(−x²/2)·(1 − x²) − cos x.

Let's build an approximate graph of this function.

Now, based on the graph, let’s take the first approximate root and check condition (16): f(x (0)) * f’’(x (0)) > 0.

Let x(0) = 2; then f(2)·f''(2) = 0.449·0.010 ≈ 0.004 > 0,

The condition is met, so we take x (0) = 2.

Now let's create a table of values ​​to solve this equation.

k    x(k)      f(x(k))    f'(x(k))    |x(k+1) − x(k)|
0    2         0.449      0.361       1.241
1    −0.265    0.881      0.881       0.301
2    −0.021    0.732      0.732       0.029
3    0.000     0.716      0.716       0.000
4    1.089

It follows that the root of the equation is x = 1.089.

Example 3

Solve the equation x² − e^(−x) = 0 using Newton's method.

Calculations should be carried out with an accuracy of ε = 0.001.

Let's calculate the first derivative of the function.

F'(x) = 2x + e^(−x).

Now let's calculate the second derivative of the function.

F''(x) = 2 − e^(−x).

Let's build an approximate graph of this function.


Now, based on the graph, let’s take the first approximate root and check condition (16): f(x (0)) * f’’(x (0)) > 0.

Let x(0) = 1; then f(1)·f''(1) = 0.632·1.632 = 1.031 > 0,

Now let's create a table of values ​​to solve this equation.

k    x(k)      f(x(k))    f'(x(k))    |x(k+1) − x(k)|
0    1.000     0.632      2.368       0.267
1    0.733     0.057      1.946       0.029
2    0.704     0.001      1.903       0.001
3    0.703

It follows that the root of the equation is x = 0.703.

Example 4

Solve the equation using Newton's method.

cos x − e^(−x/2) + x − 1 = 0.

Let's calculate the first derivative of the function.


F'(x) = −sin x + e^(−x/2)/2 + 1.

Now let's calculate the second derivative of the function.

F''(x) = −cos x − e^(−x/2)/4.

Let's build an approximate graph of this function.

Now, based on the graph, let’s take the first approximate root and check condition (16): f(x (0)) * f’’(x (0)) > 0.

Let x(0) = 1; then f(1)·f''(1) = −0.066·(−0.692) = 0.046 > 0,

The condition is met, so we take x (0) = 1.

Now let's create a table of values ​​to solve this equation.

k    x(k)      f(x(k))    f'(x(k))    |x(k+1) − x(k)|
0    1.000     −0.066     0.462       0.143
1    1.161     −0.007     0.372       0.018
2    1.162     0.000      0.363       0.001
3    1.162

It follows that the root of the equation is x = 1.162.

Example 5

Solve the equation using Newton's method.

−2 + e^x − e^(−x) = 0.

Let's calculate the first derivative of the function.

F'(x) = e^x + e^(−x).

Now let's calculate the second derivative of the function.

F''(x) = e^x − e^(−x).

Let's build an approximate graph of this function.

Now, based on the graph, let’s take the first approximate root and check condition (16): f(x (0)) * f’’(x (0)) > 0.

Let x(0) = 1; then f(1)·f''(1) = 0.350·2.350 = 0.823 > 0,

The condition is met, so we take x (0) = 1.

Now let's create a table of values ​​to solve this equation.

k    x(k)      f(x(k))    f'(x(k))    |x(k+1) − x(k)|
0    1.000     0.350      3.086       0.114
1    0.886     0.013      2.838       0.005
2    0.881     0.001      2.828       0.000
3    0.881

It follows that the root of the equation is x = 0.881.

3.1 Description of the program

This program is designed to work in text and graphic mode. It consists of the Graph module, Crt, three functions and three procedures.

1. The Crt module is designed to provide control over screen text modes, extended keyboard codes, colors, windows and sound;

2. The Graph module is designed to provide control over graphic objects;

3. procedure GrafInit - initializes the graphics mode;

4. function VF – calculates the value of the function;

5. function f1 – calculates the value of the first derivative of the function;

6. function X_Newt – implements an algorithm for solving an equation using Newton’s method.

7. procedure FGraf – implements the construction of a graph of a given function f(x);

Ots=35 - a constant that determines the number of points for indentation from the monitor boundaries;

fmin, fmax – maximum and minimum values ​​of the function;

SetColor(4) – a procedure that sets the current color of a graphic object using a palette, in this case it is red;

SetBkColor(9) is a procedure that sets the current background color using a palette, in this case a light blue color.

8. Procedure MaxMinF – will calculate the maximum and minimum values ​​of the function f(x).

Line – a procedure that draws a line from a point with coordinates (x1, y1) to a point with coordinates (x2, y2);

MoveTo – a procedure that moves the pointer (CP) to a point with coordinates (x, y);

TextColor(5) – a procedure that sets the current color of characters, in this case it is pink;

OutTextXY(x, y, 'string') – a procedure that outputs a string starting at position (x, y);

CloseGraph is a procedure that closes the graphics system.

3.2 Testing the program

To test the program, we will take the examples that were solved in the practical part of the work in order to compare the results and check the correct operation of the program.

1) sin x² + cos x² − 10x = 0.

Enter a = -1

Enter b=1

= [-1, 1]

(function graph output)


We get: x=0.0000002

2) cos x − e^(−x²/2) + x − 1 = 0.

This program calculates the roots of a nonlinear equation using Newton's method with eps accuracy and draws an approximate graph of the function on the segment.

Enter a = -3

Enter b=3

= [-3, 3]

(function graph output)

The root of the equation found by Newton's method:

Let's check by substituting the resulting answer into the equation.

We get: x=-0.0000000

3) x² − e^(−x) = 0.

This program calculates the roots of a nonlinear equation using Newton's method with eps accuracy and draws an approximate graph of the function on the segment.

Enter a = -1

Enter b=1

= [-1, 1]

Enter the calculation precision eps=0. 01

(function graph output)

The root of the equation found by Newton's method:

Let's check by substituting the resulting answer into the equation.

We get: x=0.0000000

4) cos x − e^(−x/2) + x − 1 = 0.

This program calculates the roots of a nonlinear equation using Newton's method with eps accuracy and draws an approximate graph of the function on the segment.

Enter a = -1.5

Enter b=1.5

= [−1.5, 1.5]

Enter the calculation precision eps=0. 001

(function graph output)

The root of the equation found by Newton's method:


Let's check by substituting the resulting answer into the equation.

We get: x=0.0008180

5) −2 + e^x − e^(−x) = 0.

This program calculates the roots of a nonlinear equation using Newton's method with eps accuracy and draws an approximate graph of the function on the segment.

Enter a = -0.9

Enter b=0.9

= [−0.9, 0.9]

Enter the calculation precision eps=0. 001

(function graph output)

The root of the equation found by Newton's method:

Let's check by substituting the resulting answer into the equation.

The goal of the work was to create a program that calculates the root of a nonlinear equation using Newton's method. Based on this, we can conclude that the goal was achieved, since the following tasks were solved for its implementation:

1. The necessary literature has been studied.

2. Existing methods for solving nonlinear equations are reviewed.

3. Newton's method for solving nonlinear equations was studied.

4. The solution of nonlinear equations by Newton’s method is considered using an example.

5. The program was tested and debugged.



