Simple iteration method. Derivation of the calculation formula

By analogy with (2.1), system (5.1) can be represented in the following equivalent form:

x = g(x), (5.2)

where g(x) is an iterative vector function of the vector argument. Systems of nonlinear equations often arise directly in the form (5.2) (for example, in numerical schemes for differential equations); in this case, no additional effort is required to transform equations (5.1) into system (5.2). If we continue the analogy with the simple iteration method for one equation, the iteration process based on equation (5.2) can be organized as follows:

  • 1) some initial vector x^(0) ∈ S(x^(0), Δ) is chosen (it is assumed that x* ∈ S(x^(0), Δ));
  • 2) subsequent approximations are calculated using the formula

x^(k+1) = g(x^(k)), k = 0, 1, 2, ...; (5.3)

  • 3) if ||x^(k+1) − x^(k)|| ≤ ε, then the iteration process is completed and x* ≈ x^(k+1).
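As an illustration (not part of the original text), here is a minimal Python sketch of steps 1)–3); the iteration function g used below is a made-up two-component example, and any system written in the form (5.2) can be substituted:

```python
import numpy as np

def simple_iteration(g, x0, eps=1e-5, max_iter=100):
    """Fixed-point iteration x^(k+1) = g(x^(k)) with a norm-based stop."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        x_new = g(x)
        if np.linalg.norm(x_new - x) <= eps:   # stopping criterion from step 3)
            return x_new, k + 1
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Made-up iteration function g(x); the system solved is x = g(x).
g = lambda x: np.array([np.cos(x[1]) + 1.0, 0.5 * np.sin(x[0])])
x_star, iters = simple_iteration(g, x0=[1.0, 0.0])
```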

As before, we need to find out under what conditions the approximations x^(k) converge to the exact solution x*.

Let's discuss this issue by performing a simple analysis. First we introduce the error of the kth approximation as e^(k) = x^(k) − x*. Then we can write

x^(k) = x* + e^(k),   x^(k+1) = x* + e^(k+1).
Let us substitute these expressions into (5.3) and expand g(x* + e^(k)) in powers of e^(k) in a neighborhood of x* as a function of the vector argument (assuming that all partial derivatives of the function g(x) are continuous). Taking into account also that x* = g(x*), we get

e_n^(k+1) = Σ_{m=1}^{N} (∂g_n(x*)/∂x_m) e_m^(k) + O(||e^(k)||²),   n = 1, ..., N,

or in matrix form

e^(k+1) = B e^(k) + O(||e^(k)||²), (5.4)

where B = (b_nm) = (∂g_n(x*)/∂x_m) is the iteration matrix.

If the error norm ||e^(k)|| is small enough, then the second term on the right-hand side of expression (5.4) can be neglected, and (5.4) then coincides with expression (2.16). Consequently, the condition for the convergence of the iterative process (5.3) near the exact solution is described by Theorem 3.1.

Convergence of the simple iteration method. A necessary and sufficient condition for the convergence of the iterative process (5.3) is

ρ(B) < 1,

and a sufficient condition is

||B|| < 1. (5.5)
These conditions are of theoretical rather than practical significance, since we do not know x*. By analogy with (1.11), we obtain a condition that can be useful. Let x* ∈ S(x^(0), Δ), and suppose the Jacobian matrix of the function g(x),

C(x) = (c_nm) = (∂g_n(x)/∂x_m),

exists for all x ∈ S(x^(0), Δ) (note that C(x*) = B). If the elements of the matrix C(x) satisfy the inequality

|c_nm(x)| ≤ q/N,   q < 1, (5.6)

for all x ∈ S(x^(0), Δ), then the sufficient condition (5.5) is also satisfied for any matrix norm.
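In practice, condition (5.6) can be checked numerically: estimate the Jacobian C(x) by finite differences at sample points of the ball and verify that its norm stays below 1. A minimal sketch, assuming g is available as a Python function (this check is an illustration, not part of the original text):

```python
import numpy as np

def jacobian_fd(g, x, h=1e-6):
    """Central finite-difference approximation of C(x) = (dg_n/dx_m)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    C = np.empty((n, n))
    for m in range(n):
        e = np.zeros(n)
        e[m] = h
        C[:, m] = (g(x + e) - g(x - e)) / (2.0 * h)
    return C

def contraction_norm(g, x):
    """Return ||C(x)||_inf; values below 1 indicate (5.5) holds at x."""
    return np.linalg.norm(jacobian_fd(g, x), ord=np.inf)
```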

Example 5.1 (simple iteration method). Consider the following system of equations:

One possibility to represent this system in the equivalent form (5.2) is to express x_1 from the first equation and x_2 from the second equation:

Then the iteration scheme has the form

The exact solution is x* ∈ S((2, 2), 1). Let's choose the initial vector x^(0) = (2, 2) and ε_p = 10^(−5). The calculation results are presented in Table 5.1.

Table 5.1

k    x_1^(k)    x_2^(k)
1    1.50000    1.73205
2    1.69258    1.34646
3    1.71914    1.40036
4    1.71642    1.39483
5    1.71669    1.39536
6    1.71667    1.39532

These results show that the convergence is quite slow. In order to obtain a quantitative characteristic of convergence, we carry out a simple analysis, treating the last iterate as the exact solution. The Jacobian matrix C(x) for our iterative function has the form

then matrix B is approximately estimated as

It is easy to check that neither condition (5.5) nor condition (5.6) is satisfied, but convergence nevertheless takes place, since ρ(B) ≈ 0.8 < 1.

It is often possible to speed up the convergence of the simple iteration method by slightly changing the calculation process. The idea of this modification is very simple: to calculate the nth component of the vector x^(k+1), one can use not only the components x_m^(k) (m = n, ..., N) of the current approximation, but also the already calculated components x_1^(k+1), ..., x_{n−1}^(k+1) of the next approximation vector. Thus, the modified simple iteration method can be represented as the following iteration scheme:

x_n^(k+1) = g_n(x_1^(k+1), ..., x_{n−1}^(k+1), x_n^(k), ..., x_N^(k)),   n = 1, ..., N. (5.8)
If the approximations generated by the iterative process (5.3) converge, then the iterative process (5.8) tends to converge faster due to more complete use of information.
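A sketch of scheme (5.8), reusing the made-up g from the sketch above; the only essential change relative to (5.3) is that the components are updated in place:

```python
import numpy as np

def modified_simple_iteration(g_components, x0, eps=1e-5, max_iter=100):
    """Scheme (5.8): each component update immediately reuses the
    already-computed components of the new approximation."""
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for n, g_n in enumerate(g_components):
            x[n] = g_n(x)   # x[0..n-1] already hold step-(k+1) values
        if np.linalg.norm(x - x_old) <= eps:
            return x, k + 1
    raise RuntimeError("no convergence within max_iter iterations")

# The same made-up system as above, written componentwise:
g1 = lambda x: np.cos(x[1]) + 1.0
g2 = lambda x: 0.5 * np.sin(x[0])
x_star, iters = modified_simple_iteration([g1, g2], x0=[1.0, 0.0])
```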

Example 5.2 (modified simple iteration method). The modified simple iteration scheme for system (5.7) is represented as

As before, we choose the initial vector x^(0) = (2, 2) and ε_p = 10^(−5). The calculation results are presented in Table 5.2.

Table 5.2

k    x_1^(k)    x_2^(k)
1    1.50000    1.11803
2    1.72076    1.40036
3    1.71671    1.39538
4    1.71667    1.39533

This small change in the order of calculations halved the number of iterations, and therefore the number of operations.

To summarize: we replace the original equation with an equivalent one and build iterations according to the rule x^(k+1) = g(x^(k)). Thus, the simple iteration method is a one-step iterative process: in order to start it, one initial approximation must be known. The conditions for the convergence of the method and for the choice of the initial approximation were discussed above.

Ticket#29

Seidel method

The Seidel method (sometimes called the Gauss-Seidel method) is a modification of the simple iteration method: when calculating the next approximation x^(k+1) (see formulas (1.13), (1.14)), its already obtained components x_1^(k+1), ..., x_{i−1}^(k+1) are immediately used to calculate x_i^(k+1).

In coordinate notation, the Seidel method has the form:

x_1^(k+1) = c_11 x_1^(k) + c_12 x_2^(k) + ... + c_1,n-1 x_{n-1}^(k) + c_1n x_n^(k) + d_1
x_2^(k+1) = c_21 x_1^(k+1) + c_22 x_2^(k) + ... + c_2,n-1 x_{n-1}^(k) + c_2n x_n^(k) + d_2
...
x_n^(k+1) = c_n1 x_1^(k+1) + c_n2 x_2^(k+1) + ... + c_n,n-1 x_{n-1}^(k+1) + c_nn x_n^(k) + d_n

where x^(0) is some initial approximation to the solution.

Thus, the i-th component of the (k+1)-th approximation is calculated by the formula

x_i^(k+1) = Σ_{j=1}^{i−1} c_ij x_j^(k+1) + Σ_{j=i}^{n} c_ij x_j^(k) + d_i,   i = 1, ..., n. (1.20)

The condition for ending the Seidel iterative process when the accuracy ε is achieved has, in simplified form, the form

||x^(k+1) − x^(k)|| ≤ ε.
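A minimal sketch of formula (1.20) together with this stopping condition. The matrix C and vector d are assumed to be already built from Ax = b; one common choice is c_ij = −a_ij/a_ii for j ≠ i, c_ii = 0, d_i = b_i/a_ii:

```python
import numpy as np

def seidel(C, d, x0, eps=1e-6, max_iter=1000):
    """Seidel iterations for x = Cx + d, following formula (1.20)."""
    C = np.asarray(C, dtype=float)
    d = np.asarray(d, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(d.size):
            # x[:i] holds step-(k+1) components, x_old[i:] step-k ones
            x[i] = C[i, :i] @ x[:i] + C[i, i:] @ x_old[i:] + d[i]
        if np.linalg.norm(x - x_old) <= eps:
            return x, k + 1
    raise RuntimeError("no convergence within max_iter iterations")
```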

Ticket#30

Sweep method

To solve systems Ax = b with a tridiagonal matrix, the sweep method (Thomas algorithm) is most often used; it is an adaptation of the Gauss method to this case.

Let's write the system of equations

d_1 x_1 + e_1 x_2 = b_1
c_2 x_1 + d_2 x_2 + e_2 x_3 = b_2
c_3 x_2 + d_3 x_3 + e_3 x_4 = b_3
... ... ...
c_{n-1} x_{n-2} + d_{n-1} x_{n-1} + e_{n-1} x_n = b_{n-1}
c_n x_{n-1} + d_n x_n = b_n

in matrix form: Ax = b, where

A = ( d_1  e_1                           )
    ( c_2  d_2  e_2                      )
    (      c_3  d_3  e_3                 )
    (           ...  ...  ...            )
    (      c_{n-1}  d_{n-1}  e_{n-1}     )
    (               c_n      d_n         )

(all entries not shown are zero).

Let us write down the formulas of the sweep method in the order of their application.

1. Forward sweep (calculation of the auxiliary quantities):

α_2 = −e_1/d_1,   β_2 = b_1/d_1;
α_{i+1} = −e_i/(d_i + c_i α_i),   β_{i+1} = (b_i − c_i β_i)/(d_i + c_i α_i),   i = 2, ..., n−1. (1.9)

2. Backward sweep (finding the solution):

x_n = (b_n − c_n β_n)/(d_n + c_n α_n);
x_i = α_{i+1} x_{i+1} + β_{i+1},   i = n−1, ..., 1.
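A minimal sketch of formulas (1.9) and the backward sweep; the 4×4 system at the end is a made-up check case with solution (1, 1, 1, 1):

```python
def sweep(c, d, e, b):
    """Sweep method for a tridiagonal system, formulas (1.9).
    c — subdiagonal (c[0] unused), d — main diagonal,
    e — superdiagonal (e[n-1] unused), b — right-hand side."""
    n = len(d)
    alpha = [0.0] * (n + 1)
    beta = [0.0] * (n + 1)
    # 1. Forward sweep.
    alpha[1] = -e[0] / d[0]
    beta[1] = b[0] / d[0]
    for i in range(1, n - 1):
        denom = d[i] + c[i] * alpha[i]
        alpha[i + 1] = -e[i] / denom
        beta[i + 1] = (b[i] - c[i] * beta[i]) / denom
    # 2. Backward sweep.
    x = [0.0] * n
    x[n - 1] = (b[n - 1] - c[n - 1] * beta[n - 1]) / (d[n - 1] + c[n - 1] * alpha[n - 1])
    for i in range(n - 2, -1, -1):
        x[i] = alpha[i + 1] * x[i + 1] + beta[i + 1]
    return x

x = sweep(c=[0.0, 1.0, 1.0, 1.0], d=[4.0, 4.0, 4.0, 4.0],
          e=[1.0, 1.0, 1.0, 0.0], b=[5.0, 6.0, 6.0, 5.0])   # -> [1, 1, 1, 1]
```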

Ticket#31

Simple iteration method

The essence of the simple iteration method is to move from the equation

f(x) = 0 (*)

to the equivalent equation

x = φ(x). (**)

This transition can be accomplished in different ways, depending on the form of f(x). For example, one can put

φ(x) = x + b·f(x), (***)

where b = const (b ≠ 0); the roots of the original equation then do not change.

If an initial approximation x_0 to the root is known, then the new approximation is

x_1 = φ(x_0),

i.e., the general scheme of the iterative process is

x_{k+1} = φ(x_k). (****)

The simplest criterion for ending the process is

|x_{k+1} − x_k| < ε.

Convergence criterion for the simple iteration method: if |φ′(x)| < 1 near the root, the iterations converge. If this condition holds for every x, the iterations converge for any initial approximation.

Let's explore the choice of the constant b from the point of view of ensuring maximum convergence speed. In accordance with the convergence criterion, the highest speed of convergence is obtained when |φ′(x)| = 0. Based on (***), this gives b = −1/f′(x), and the iteration formula (****) turns into x_i = x_{i−1} − f(x_{i−1})/f′(x_{i−1}), i.e., into the formula of Newton's method. Thus, Newton's method is a special case of the simple iteration method, providing the highest speed of convergence of all possible choices of the function φ(x).
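A sketch of iteration (****) with φ(x) = x + b·f(x); the test equation x² − 2 = 0 is a made-up example. A constant b close to −1/f′(x*) makes |φ′(x*)| small, and the iteration count drops sharply:

```python
def simple_iteration_scalar(f, b, x0, eps=1e-10, max_iter=200):
    """Iterate x_{k+1} = x_k + b*f(x_k), scheme (***) + (****)."""
    x = x0
    for k in range(max_iter):
        x_new = x + b * f(x)
        if abs(x_new - x) < eps:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

f = lambda x: x * x - 2.0          # root: sqrt(2)
root_slow, n_slow = simple_iteration_scalar(f, b=-0.25, x0=1.0)
root_fast, n_fast = simple_iteration_scalar(f, b=-1.0 / 3.0, x0=1.0)
# b = -1/3 approximates -1/f'(x*) = -1/(2*sqrt(2)); n_fast << n_slow.
```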


Ticket#32

Newton's method

The main idea of the method is as follows: an initial approximation is specified near the hypothetical root, after which the tangent to the function under study is constructed at the approximation point, and the intersection of this tangent with the abscissa axis is found. This intersection point is taken as the next approximation, and so on until the required accuracy is achieved.

Let f be a real-valued function defined on an interval and differentiable on it. Then the formula for calculating the iterative approximations can be derived as follows:

tan α = f′(x_n) = f(x_n)/(x_n − x_{n+1}),

where α is the angle of inclination of the tangent at the point (x_n, f(x_n)). Therefore, the required expression for x_{n+1} has the form

x_{n+1} = x_n − f(x_n)/f′(x_n).
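A minimal sketch of the tangent method; the test equation x³ − 2 = 0 is a made-up example (its root is the cube root of 2):

```python
def newton(f, fprime, x0, eps=1e-12, max_iter=50):
    """Newton's method: the next approximation is the point where the
    tangent at (x_k, f(x_k)) crosses the abscissa axis."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0)
```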
Ticket#33

Golden ratio method
The golden ratio method allows intervals to be eliminated while calculating only one new function value at each iteration. From the two considered values of the function, the interval to be used further is determined. This interval will contain one of the previous points and the next point placed symmetrically to it. The point divides the interval into two parts so that the ratio of the whole to the larger part is equal to the ratio of the larger part to the smaller, i.e. equal to the so-called "golden ratio".

Dividing the interval into unequal parts allows us to find an even more effective method. Let us calculate the function at the ends of the segment [a, b] and put a = x_1, b = x_2. Let us also calculate the function at two interior points x_3, x_4. Let's compare all four values of the function and choose the smallest among them. Let, for example, the smallest turn out to be f(x_3). Obviously, the minimum must lie in one of the segments adjacent to x_3. Therefore the segment [x_4, b] can be discarded, leaving the segment [x_1, x_4].

The first step has been taken. On the new segment we again need to select two interior points, calculate the function values at them and at the ends, and take the next step. But at the previous step we already found the function at the ends of the new segment and at one of its interior points, x_4. Therefore, it is enough to select one more interior point x_5, determine the value of the function at it, and make the necessary comparisons. This halves the amount of computation required per step (one new evaluation instead of two). What is the best way to place the points? Each time, the remaining segment is divided into three parts and then one of the outer segments is discarded.
Let us denote the initial uncertainty interval by D.

Since in the general case either of the segments [x_1, x_3] or [x_4, x_2] can be discarded, we select the points x_3 and x_4 so that the lengths of these segments are the same:

x_3 − x_1 = x_2 − x_4.

After discarding, we get a new uncertainty interval of length D′.
Let us denote the ratio D/D′ by the letter φ:

φ = D/D′ = D′/D″,

that is, we require the same reduction factor at every step, where D″ is the next uncertainty interval. But D″ is equal in length to the segment discarded at the previous stage, that is, D″ = D − D′.

Therefore we get:

D′/D″ = D/D′,   D″ = D − D′.

This leads to the equation (D′)² = D(D − D′) or, equivalently,

φ² = φ + 1.

The positive root of this equation gives

φ = (1 + √5)/2 ≈ 1.618.
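A sketch of the resulting search procedure: one new function evaluation per step, and the uncertainty interval shrinks by the factor 1/φ each iteration. The test function (x − 1)² is a made-up example:

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0      # the positive root found above

def golden_section_min(f, a, b, eps=1e-8):
    """Golden-section search for the minimum of f on [a, b]."""
    inv = 1.0 / PHI
    x3 = b - inv * (b - a)               # interior points, placed
    x4 = a + inv * (b - a)               # symmetrically in [a, b]
    f3, f4 = f(x3), f(x4)
    while b - a > eps:
        if f3 < f4:                      # minimum is in [a, x4]; reuse x3
            b, x4, f4 = x4, x3, f3
            x3 = b - inv * (b - a)
            f3 = f(x3)
        else:                            # minimum is in [x3, b]; reuse x4
            a, x3, f3 = x3, x4, f4
            x4 = a + inv * (b - a)
            f4 = f(x4)
    return 0.5 * (a + b)

x_min = golden_section_min(lambda x: (x - 1.0) ** 2, 0.0, 3.0)   # -> 1.0
```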

Ticket#34

Interpolation of functions means constructing from a given function another (usually simpler) function whose values coincide with the values of the given function at a certain number of points. Interpolation has both practical and theoretical significance.

Numerical solution of equations and their systems consists in approximate determination of the roots of an equation or system of equations and is used in cases where the exact solution method is unknown or labor-intensive.

Formulation of the problem

Let's consider methods for numerically solving equations and systems of equations:

f(x_1, x_2, …, x_n) = 0;

f_1(x_1, x_2, …, x_n) = 0,
…
f_n(x_1, x_2, …, x_n) = 0.

Numerical methods for solving equations

Let's show how the original system of equations can be solved without resorting to optimization methods. If the system is linear (a SLAE), it is advisable to use methods such as the Gauss method or the Richardson method. However, we will proceed from the assumption that the form of the function is unknown to us, and we will use one of the iterative methods of numerical solution. Among the wide variety of these, we choose one of the most famous, Newton's method. This method, in turn, is based on the principle of contraction mappings, so the essence of the latter will be outlined first.

Contraction mapping

Let's define the terminology:

A function φ is said to be a contraction mapping on [a, b] if it maps [a, b] into itself and there exists a constant α < 1 such that |φ(x_1) − φ(x_2)| ≤ α|x_1 − x_2| for all x_1, x_2 ∈ [a, b].

Then the following main theorem is valid:

Banach's theorem (principle of contraction mappings).
If φ is a contraction mapping on [a, b], then:

  • the equation x = φ(x) has exactly one root x* on [a, b];
  • the sequence x_{k+1} = φ(x_k) converges to x* for any initial x_0 ∈ [a, b];
  • |x_{k+1} − x*| ≤ α|x_k − x*|.

It follows from the last point of the theorem that the convergence rate of any method based on contraction mappings is no less than linear.

Let us explain the meaning of the parameter α for the case of one variable. According to Lagrange's mean value theorem, if φ(x) ∈ C¹[a, b], then for any x_1, x_2 ∈ (a, b), x_1 < x_2, there exists ξ ∈ (x_1, x_2) such that

φ(x_2) − φ(x_1) = φ′(ξ)(x_2 − x_1).

It follows that α ≈ |φ′(ξ)|. Thus, for the method to converge, it is sufficient that |φ′(x)| ≤ α < 1 for all x ∈ [a, b].

General algorithm of successive approximations

When applied to the general case of operator equations, this method is called the method of successive approximations, or the simple iteration method. The equation can be transformed into a contraction mapping having the same root in different ways, which gives rise to a number of particular methods with both linear and higher convergence rates.

Application to a SLAE

Consider the system:

a_11 x_1 + … + a_1n x_n = b_1
…
a_n1 x_1 + … + a_nn x_n = b_n

For it, the iterative calculation will look like this:

x^(i+1) = (A + E) x^(i) − b,

where E is the identity matrix, i.e. the iteration matrix is

( a_11+1  a_12    …  a_1n   )
( a_21    a_22+1  …  a_2n   )
(   ⋮       ⋮     ⋱    ⋮    )
( a_n1    a_n2    …  a_nn+1 )

The method will converge with linear speed if

‖A + E‖ < 1,

where the double vertical bars indicate some norm of the matrix.

Newton's method (tangent method). One-dimensional case

Fig.: Solution of the equation f(x) = 0 by Newton's method; initial approximation x_1 = a.

Optimizing the transformation of the original equation f(x) = 0 into a contraction mapping x = φ(x) allows us to obtain a method with a quadratic rate of convergence. Let φ(x) = x + α(x)·f(x). Then:

φ′(x*) = 1 + α′(x*)f(x*) + α(x*)f′(x*) = 0.

Let's use the fact that f(x*) = 0, and we get the final formula for α(x):

α(x) = −1/f′(x).

Taking this into account, the compression function takes the form

φ(x) = x − f(x)/f′(x).

Then the algorithm for finding a numerical solution to the equation f(x) = 0 reduces to the iterative calculation procedure:

x_{i+1} = x_i − f(x_i)/f′(x_i).

1. Let a segment [a, b] be known that contains one root of the equation f(x) = 0, and let f be continuously differentiable on this segment (f(x) ∈ C¹[a, b]). If these conditions are met, the simple iteration method can be applied.

2. Using the function f(x), a function φ(x) is constructed that satisfies three conditions: it must be continuously differentiable (φ(x) ∈ C¹[a, b]); the equation x = φ(x) must be equivalent to the equation f(x) = 0; and φ(x) must map the segment [a, b] into itself.

We say that the function φ(x) maps the segment [a, b] into itself if for any x ∈ [a, b] the value y = φ(x) also belongs to [a, b].

The third condition imposed on the function φ(x) is:

|φ′(x)| ≤ q < 1 for all x ∈ [a, b].

Method formula: x_{n+1} = φ(x_n).

3. If these three conditions are met, then for any initial approximation x_0 ∈ [a, b] the sequence of iterations x_{n+1} = φ(x_n) converges to the root of the equation x = φ(x) on the segment [a, b].

As a rule, one of the ends of the segment is selected as x_0.

|x_{n+1} − x_n| ≤ ((1 − q)/q)·ε,

where ε is the specified accuracy.

When this stopping condition is met, the number x_{n+1} is the approximate value of the root of the equation f(x) = 0 on [a, b], found by the simple iteration method with accuracy ε.

Construct an algorithm to refine the root of the equation x³ + 5x − 1 = 0 on the segment [0, 1] using the method of simple iteration with accuracy ε.

1. The function f(x) = x³ + 5x − 1 is continuously differentiable on the segment [0, 1], which contains one root of the equation.

2. The greatest difficulty in the simple iteration method is the construction of a function φ(x) that satisfies all the conditions.

Consider φ_1(x) = (1 − 5x)^(1/3).

The equation x = φ_1(x) is equivalent to the equation f(x) = 0, but the function φ_1(x) is not continuously differentiable on the segment [0, 1].

Fig. 2.4. Graph of the function φ_2(x)

On the other hand, x³ + 5x − 1 = 0 gives 5x = 1 − x³, therefore x = (1 − x³)/5. Hence φ_2(x) = (1 − x³)/5 is a continuously differentiable function. Note that the equation x = φ_2(x) is equivalent to the equation f(x) = 0. From the graph (Fig. 2.4) it is clear that the function φ_2(x) maps the segment [0, 1] into itself.

The condition that the function φ(x) maps the segment into itself can be reformulated as follows: let [a, b] be the domain of definition of the function φ(x), and let [m, M] be the range of φ(x) on [a, b]. If the segment [m, M] belongs to the segment [a, b], then the function φ(x) maps the segment [a, b] into itself.

Here φ_2([0, 1]) = [0, 0.2] ⊂ [0, 1], and |φ_2′(x)| = 3x²/5 ≤ 3/5 = q < 1.

All conditions on the function φ(x) are satisfied.

Iterative process formula: x_{n+1} = φ_2(x_n).

3. Initial approximation: x_0 = 0.

4. Condition for stopping the iterative process:

|x_{n+1} − x_n| ≤ ((1 − q)/q)·ε.

If this condition is met, x_{n+1} is the approximate value of the root on [0, 1], found by simple iteration with accuracy ε. Fig. 2.5 illustrates the application of the simple iteration method.

Fig. 2.5. Geometric meaning of the simple iteration method
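A sketch of the algorithm just constructed, with φ_2(x) = (1 − x³)/5, x_0 = 0, q = 3/5 and the stopping condition above:

```python
def iterate_phi2(eps=1e-6, max_iter=100):
    """Simple iteration for x**3 + 5*x - 1 = 0 using phi2(x) = (1 - x**3)/5."""
    phi2 = lambda x: (1.0 - x**3) / 5.0
    q = 3.0 / 5.0                 # max |phi2'(x)| = 3*x**2/5 on [0, 1]
    x = 0.0                       # initial approximation x0 = 0
    for _ in range(max_iter):
        x_new = phi2(x)
        # the factor (1 - q)/q guarantees |x_new - x*| <= eps
        if abs(x_new - x) <= (1.0 - q) / q * eps:
            return x_new
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

root = iterate_phi2()             # approx 0.198437
```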

Convergence theorem and error estimate

Let the segment [a, b] contain one root of the equation x = φ(x), let the function φ(x) be continuously differentiable on [a, b] and map the segment into itself, and let the condition

|φ′(x)| ≤ q < 1,   x ∈ [a, b],

be satisfied. Then for any initial approximation x_0 ∈ [a, b] the sequence x_{n+1} = φ(x_n) converges to the root x* of the equation x = φ(x) on the segment, and the error estimate holds:

|x_n − x*| ≤ qⁿ·|x_0 − x*| ≤ qⁿ(b − a).

Stability of the simple iteration method. When the conditions of the convergence theorem are met, the algorithm of the simple iteration method is stable.

Complexity of the simple iteration method. The amount of computer memory required to implement the simple iteration method is insignificant: at each step you need to store only x_n, x_{n+1}, q, and ε.

Let us estimate the number of arithmetic operations required to implement the simple iteration method. Let us write an estimate for the number n_0 = n_0(ε) such that for all n ≥ n_0 the inequality |x_n − x*| ≤ ε holds. From the error estimate qⁿ(b − a) ≤ ε we get

n_0 ≥ ln((b − a)/ε) / ln(1/q).

From this estimate it follows that the closer q is to one, the slower the method converges.

Comment. There is no general rule for constructing φ(x) from f(x) so that all the conditions of the convergence theorem are satisfied. The following approach is often used: the function φ(x) = x + k·f(x) is chosen, where k is a constant.

When programming the simple iteration method, stopping the iterative process often requires the simultaneous fulfillment of two conditions:

|x_{n+1} − x_n| ≤ ε   and   |f(x_{n+1})| ≤ ε.

All other iterative methods that we will consider are special cases of the simple iteration method. For example, with φ(x) = x − f(x)/f′(x), Newton's method is obtained as a special case of the simple iteration method.


