Theorem on Jordan normal form. Examples of reducing matrices to Jordan form


The roots of the characteristic equation: λ1,2,3 = 1.

The eigenvectors of A for λ = 1, i.e. the kernel of A1:

, which gives the basis of N(A1): .

We find the image M(A1) of the operator A1 from the relations:

; the basis of M(A1): f3(1, 2, –1); since f3 = 2f1 – f2, we have f3 ∈ ℒ(f1, f2).

Then: the basis of will be the vector ; the vector completing the basis of to the basis of can be any of the vectors, for example the vector ; and there is nothing to add to the basis of , because .

The preimage: A1 y = (1, 2, –1) ⇒ y1 – y2 – y3 = 1, for example y = (1, 0, 0).

Note that the system A1 y = (1, 0, 0) has no solutions, i.e. the vector (1, 2, –1) has no preimage of the second layer.

Therefore, the Jordan basis of the operator A: .

And, finally, we have the Jordan form of the operator matrix A: .
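The displayed matrices of this example were lost in this copy of the text. Purely as an illustration of the same kind of computation, here is a sketch in SymPy with an assumed 3×3 matrix that, like the one above, has a triple eigenvalue with only two independent eigenvectors:

```python
from sympy import Matrix

# Assumed stand-in matrix (the original was lost): triple eigenvalue 1,
# but only two linearly independent eigenvectors.
A = Matrix([[-1, 1, 0],
            [-4, 3, 0],
            [-2, 1, 1]])

# jordan_form returns P and J with A = P * J * P**-1
P, J = A.jordan_form()
```

Here J consists of one 2×2 and one 1×1 Jordan cell for the eigenvalue 1, and the columns of P form a Jordan basis.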

2°. Find the Jordan normal form of the matrix of the linear operator A = and a basis in which the matrix of the operator has Jordan form.

Δ. For the matrix A of the linear operator, we compose and solve the characteristic equation det(A − λE) = 0.

= .

Then = 0, and therefore λ1,2 = –1; λ3,4 = 1.

a) Consider the operator A–1 = A − λE = A + E for λ = −1, i.e. we seek the kernel of the operator A–1. To do this, we solve the system of four linear homogeneous equations with the matrix A–1. From the third and fourth equations of the system it is clear that . Then it is easily established that . The vector f1(1, 1, 0, 0) is the only eigenvector of the operator A corresponding to the eigenvalue λ = −1, and it forms the basis of the kernel of the operator A–1. Next, we look for the basis of the image of the operator A–1:

.

Noting that the vectors f2, f3, f4 satisfy the relation f3 + f4 – f2 = (0, 0, 0, 1), we find the basis of the image of the operator A–1:

φ1(1, 1, 0, 0), φ2(0, –1, 1, 1), φ3(0, 0, 0, 1).

Noting that the vectors f1 and φ1 coincide, we conclude that this vector forms the basis of the intersection of the image and the kernel of the operator A–1.

The multiplicity of the root λ = −1 is two, while there is only one eigenvector corresponding to this eigenvalue. Therefore we set g1 equal to the vector , and we look for another vector of the Jordan basis as a preimage of the first layer for . Solving the inhomogeneous system of linear equations, we find the second vector g2(1, 3/4, 0, 0) of the Jordan basis corresponding to the eigenvalue λ = −1 of multiplicity two. In this case, which is typical, the vector does not have a preimage of the second layer, because the system with the extended matrix

has no solutions. This is not accidental: the eigenvalue λ = −1 of multiplicity 2 must correspond to exactly two vectors of the Jordan basis of the operator A:

g 1 (1, 1, 0, 0); g 2 (1, 3/4, 0, 0).

At the same time, we note that:

b) Now consider the eigenvalue λ = 1 and, accordingly, the operator A1 = A − E:

.

Let us find the kernel of this operator, i.e. the eigenvectors of the operator A for λ = 1.

.

The vector f1(1, 1, 1, 1) forms the basis of the kernel of the operator A1 and is the only eigenvector of the operator A corresponding to the eigenvalue λ = 1.

We look for the basis of the image M(A1) of the operator A1.

.

Noting that f1 = f2 + f3 + f4, we conclude: the basis of the intersection of the kernel and the image of the operator A1 is the vector f1.

Since there is only one eigenvector while the eigenvalue has multiplicity 2, we need to find one more vector of the Jordan basis. We therefore set g3 equal to the vector y1(1, 1, 1, 1) and look for another vector of the Jordan basis as a preimage of the first layer for y1(1, 1, 1, 1). To do this, we solve the inhomogeneous system of linear equations A1 g4 = φ1 and find the vector g4(0, 1/2, 0, 1/2) of the Jordan basis corresponding to the eigenvalue λ = 1 of multiplicity two. In this case, the vector y1(1, 1, 1, 1) does not have a preimage of the second layer, because the system A1 y = g4 with the extended matrix has no solutions. Again, this is not accidental: the eigenvalue λ = 1 of multiplicity 2 must correspond to exactly two vectors of the Jordan basis, and they have already been found:

g 3 (1, 1, 1, 1); g 4 (0, 1/2, 0, 1/2).

At the same time, we note that Ag3 = g3, Ag4 = g3 + g4. For the operator A, a Jordan basis has thus been found: . The matrix of A in this basis is AG = . ▲

3°. ; det(A − λE) = 0 ⇒ λ1,2 = 1; λ3,4 = 2.

Δ a) Consider the operator A1 = A − E = . We look for the eigenvectors of the operator A for λ = 1, i.e. the kernel of the operator A1.

. The vectors (f1, f2) form a basis of N(A1).

Since the vectors f1, f2, f3, f4 are linearly independent, , and the vectors completing the basis to the basis are the vectors .

It is known that the matrix of a linear operator can be reduced to diagonal form in a basis of eigenvectors. However, over the set of real numbers a linear operator may have no eigenvalues, and therefore no eigenvectors. Over the set of complex numbers any linear operator has eigenvectors, but there may not be enough of them to form a basis. There is another canonical form of the matrix of a linear operator, to which any matrix over the set of complex numbers can be reduced.

Theorem 10.1. Any matrix with complex elements can be reduced over the set of complex numbers C to Jordan normal form.

Let us give the necessary definitions:

Definition 10.1. A square matrix of order n whose elements are polynomials of arbitrary degree in the variable λ with coefficients from the set of complex numbers C is called a λ-matrix (or polynomial matrix).

An example of a polynomial matrix is the characteristic matrix A − λE of an arbitrary square matrix A. On its main diagonal there are polynomials of the first degree; outside it there are polynomials of degree zero or zeros. We denote such a matrix by A(λ).

Example 10.1. Let the matrix A = be given; then A − λE = = = A(λ).

Definition 10.2. The following transformations are called elementary transformations of λ-matrices:

    multiplication of any row (column) of the matrix A(λ) by any nonzero number;

    addition to any i-th row (i-th column) of the matrix A(λ) of any other j-th row (j-th column) multiplied by an arbitrary polynomial ( ).

Properties of the λ-matrix

1) Using these transformations, any two rows or any two columns of the matrix A(λ) can be interchanged.

2) Using these transformations, the diagonal elements of a diagonal matrix A(λ) can be interchanged.

Example 10.2. 1)

.

2)


.

Definition 10.3. Matrices A(λ) and B(λ) are called equivalent if one can pass from A(λ) to B(λ) by a finite number of elementary transformations.

The goal is to simplify the matrix A(λ) as much as possible.

Definition 10.4. A canonical λ-matrix is a λ-matrix having the following properties:

    the matrix A(λ) is diagonal;

    every polynomial e_i(λ), i = 1, 2, …, n, is completely divisible by e_{i-1}(λ);

    the leading coefficient of each polynomial e_i(λ), i = 1, 2, …, n, is equal to 1, or the polynomial is equal to zero.

A(λ) =
.

Comment. If zeros occur among the polynomials e_i(λ), they occupy the last places on the main diagonal (by property 2); if there are polynomials of degree zero, they are equal to 1 and occupy the first places on the main diagonal.

The zero and identity matrices are canonical λ-matrices.

Theorem 10.2. Every λ-matrix is equivalent to some canonical λ-matrix (that is, it can be reduced to canonical form by elementary transformations).

Example 10.3. Reduce the matrix A(λ) =
to canonical form.

Solution. The course of the transformations is similar to the Gauss method; throughout the reduction to canonical form, the upper left element of the matrix is kept nonzero and of the smallest degree.

A(λ) =
 (swap the first and second columns) 
 (to the second column we add the first column multiplied by ( – 2)) 
 (to the second line we add the first line multiplied by ( – 2)) 
 (swap the second and third columns) 
 (to the third column we add the second column multiplied by ( – 2) 3) 
 (to the third line we add the second line multiplied by ( – 2)) 
.
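The reduction of Theorem 10.2 can also be carried out mechanically. A sketch using SymPy's smith_normal_form over the principal ideal domain Q[λ] (the matrix of Example 10.3 was lost in this copy, so an assumed characteristic matrix is used instead):

```python
from sympy import Matrix, Poly, QQ, div, eye, symbols
from sympy.matrices.normalforms import smith_normal_form

lam = symbols('lambda')

# Assumed example (the matrix of Example 10.3 was lost in this copy):
# the characteristic matrix of a 3x3 matrix with a triple eigenvalue 2.
A = Matrix([[0, 1, 0],
            [-4, 4, 0],
            [-2, 1, 2]])

# Smith normal form over Q[lambda]: a diagonal lambda-matrix whose
# diagonal polynomials divide each other in turn.
S = smith_normal_form(A - lam * eye(3), domain=QQ[lam])
```

The diagonal of S contains the invariant factors; up to constant factors they are 1, (λ − 2) and (λ − 2)² here.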

1. Let some polynomial with coefficients from the field be given

Consider a square matrix of the n-th order

. (36)

It is easy to check that the polynomial is a characteristic polynomial of the matrix:

.

On the other hand, the minor of the element in the characteristic determinant is equal to . That's why , .

Thus, the matrix has a unique non-unity invariant polynomial equal to .

We will call this matrix the companion matrix of the polynomial.
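As a sketch of construction (36) (the displayed polynomial and matrix were lost in this copy, so both are assumed here), one can build the companion matrix of a monic polynomial and check that its characteristic polynomial recovers the polynomial itself:

```python
from sympy import Matrix, symbols

x = symbols('x')

def companion(coeffs):
    """Companion matrix (36) of the monic polynomial
    p(x) = x^n + c[n-1]*x^(n-1) + ... + c[1]*x + c[0],
    with coeffs given as [c0, c1, ..., c(n-1)]."""
    n = len(coeffs)
    M = Matrix.zeros(n, n)
    for i in range(1, n):
        M[i, i - 1] = 1            # ones on the subdiagonal
    for i in range(n):
        M[i, n - 1] = -coeffs[i]   # last column: minus the coefficients
    return M

# Assumed example: p(x) = x^3 - 2x^2 - 5x + 6 = (x - 1)(x + 2)(x - 3)
C = companion([6, -5, -2])
```

The characteristic polynomial of C is exactly p(x), and p(x) is also the single non-unity invariant polynomial of C, as stated above.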

Let a matrix with invariant polynomials be given

Here all the polynomials have degree greater than zero, and each of these polynomials, starting from the second, is a divisor of the preceding one. We denote the companion matrices for these polynomials by .

Then the quasi-diagonal matrix of the th order

(38)

has the polynomials (37) as its invariant polynomials (see Theorem 4 on page 145). Since the matrices and have the same invariant polynomials, they are similar, i.e. there always exists a nonsingular matrix such that

The matrix is called the first natural normal form of the matrix. This normal form is characterized by: 1) the quasi-diagonal form (38), 2) the special structure of the diagonal cells (36), and 3) an additional condition: in the series of characteristic polynomials of the diagonal cells, each polynomial, starting from the second, is a divisor of the preceding one.

2. Let us now denote by

(39)

elementary matrix divisors in a number field. We denote the corresponding accompanying matrices by

.

Since is the only elementary divisor of the matrix, then, according to Theorem 5, the quasi-diagonal matrix

(40)

has polynomials (39) as its elementary divisors.

Matrices and have the same elementary divisors in the field. Therefore, these matrices are similar, i.e. there always exists a non-singular matrix such that

The matrix is called the second natural normal form of the matrix. This normal form is characterized by: 1) the quasi-diagonal form (40), 2) the special structure of the diagonal cells (36), and 3) an additional condition: the characteristic polynomial of each diagonal cell is a power of a polynomial irreducible over the field.

Comment. The elementary divisors of a matrix, unlike its invariant polynomials, are essentially tied to the given number field. If instead of the original number field we take another number field (which also contains the elements of the matrix), the elementary divisors may change. Together with the elementary divisors, the second natural normal form of the matrix will also change.

For example, let a matrix with real elements be given. The characteristic polynomial of this matrix then has real coefficients; at the same time, it can have complex roots. If is the field of real numbers, then among the elementary divisors there may also be powers of irreducible quadratic trinomials with real coefficients. If is the field of complex numbers, then each elementary divisor has the form .

3. Let us now assume that the number field contains not only the elements of the matrix, but also all the characteristic numbers of this matrix. Then the elementary divisors of the matrix have the form

. (41)

Let's consider one of these elementary divisors

and associate with it the following matrix of order :

. (42)

It is easy to check that this matrix has only one elementary divisor. We will call matrix (42) a Jordan cell corresponding to the elementary divisor .

The Jordan cells corresponding to the elementary divisors (41) are denoted by

Then the quasi-diagonal matrix

has the powers (41) as its elementary divisors.

The matrix can also be written like this:

Since the matrices and have the same elementary divisors, they are similar to each other, i.e., there is a non-singular matrix such that

The matrix is called the Jordan normal form, or simply the Jordan form, of the matrix . The Jordan form is characterized by its quasi-diagonal structure and by the special form (42) of the diagonal cells.
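A minimal numerical sketch of the Jordan cell (42): for J = J3(2), the matrix J − 2E is nilpotent of index exactly 3, which is another way of saying that (λ − 2)³ is its single elementary divisor:

```python
import numpy as np

def jordan_cell(lam, k):
    """The k-by-k Jordan cell (42): lam on the main diagonal,
    ones on the superdiagonal."""
    return lam * np.eye(k) + np.eye(k, k=1)

J = jordan_cell(2.0, 3)
N = J - 2.0 * np.eye(3)   # nilpotent part of the cell
```

Here N² ≠ 0 but N³ = 0, so the minimal polynomial of J is (λ − 2)³, i.e. the cell has the single elementary divisor (λ − 2)³.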

Note also that if , then each of the matrices

,

has only one elementary divisor: . Therefore, for a non-singular matrix having elementary divisors (41), along with (III) and (IV), the following representations hold:


Proof: Reducibility of a matrix to diagonal form is equivalent to reducibility to a Jordan form in which all Jordan cells are of order 1, so all elementary divisors of the matrix A must be polynomials of the first degree. Since all invariant factors of the matrix A − λE are divisors of the polynomial e_n(λ), the latter condition is equivalent to all elementary divisors of e_n(λ) having degree 1, which is what needed to be proved.

1.6 Minimal polynomial

Consider a square matrix A of order n with elements from the field P. If

f(λ) = b_0 λ^k + b_1 λ^{k-1} + … + b_{k-1} λ + b_k

is an arbitrary polynomial from the ring P[λ], then the matrix

f(A) = b_0 A^k + b_1 A^{k-1} + … + b_{k-1} A + b_k E

will be called the value of the polynomial f(λ) at λ = A; note that the free term of the polynomial f(λ) is multiplied by the zeroth power of the matrix A, i.e. by the identity matrix E.

Let us define a matrix root.

If a polynomial f(λ) is annihilated by the matrix A, i.e. f(A) = 0, then the matrix A will be called a matrix root or, where this causes no confusion, simply a root of the polynomial f(λ).

The matrix A is then also a root of polynomials whose leading coefficients are equal to one: take any nonzero polynomial annihilated by the matrix A and divide this polynomial by its leading coefficient.

Definition: The polynomial of least degree with leading coefficient 1 that is annihilated by the matrix A is called the minimal polynomial of the matrix A and is denoted m_A.

Theorem: Each matrix A has only one minimal polynomial.

Proof: Suppose there were two minimal polynomials, say m_1(λ) and m_2(λ). Then their difference would be a nonzero polynomial of lower degree whose root is again the matrix A. Dividing this difference by its leading coefficient, we would obtain a polynomial with leading coefficient 1 whose root is the matrix A and whose degree is lower than that of the minimal polynomials m_1(λ) and m_2(λ), contradicting the definition of the minimal polynomial.

Theorem: Any polynomial f(λ) whose root is the matrix A is divisible without remainder by the minimal polynomial m(λ) of this matrix.

Proof: Suppose f(λ) is not divisible by m(λ). Denote by q(λ) the quotient and by r(λ) the remainder of the division of f(λ) by m(λ); then

f(λ) = m(λ) q(λ) + r(λ).

Substituting λ = A here and using the fact that

m(A) = f(A) = 0,

we obtain

r(A) = 0.

But the degree of the remainder r(λ) is less than the degree of the divisor m(λ), and by assumption r(λ) is nonzero. Thus r(λ) is a nonzero polynomial whose root is the matrix A and whose degree is less than the degree of the minimal polynomial m(λ), which is a contradiction. The statement is proved.

It is known that similar matrices have the same characteristic polynomial. The minimal polynomial also has this property: similar matrices have the same minimal polynomial. But equality of minimal polynomials is not a sufficient condition for similarity of matrices.

To prove the next theorem, we give the definition of the adjoint matrix.

Let A_ij be the algebraic complements (cofactors) of the elements of the matrix A. We define the adjoint matrix of A, denoted A^v, as the transpose of the matrix of cofactors of A. Thus

A^v = .

Theorem: The last elementary divisor e_n(λ) of the characteristic matrix A − λE is the minimal polynomial m_A.

Proof: Let's write the equality

(−1)^n |A − λE| = d_{n-1}(λ) e_n(λ).

It follows that d_{n-1}(λ) and e_n(λ) are not zero. Let B(λ) denote the adjoint matrix of the matrix A − λE:

B(λ) = (A − λE)^v. (1)

The following equality holds:

(A − λE) B(λ) = |A − λE| E. (2)

On the other hand, since the elements of the matrix B(λ) are exactly the minors of order (n − 1) of the matrix A − λE, taken with plus or minus signs, and d_{n-1}(λ) is the greatest common divisor of all these minors, we have

B(λ) = d_{n-1}(λ) C(λ), (3)

where the greatest common divisor of the elements of the matrix C(λ) is equal to 1.

Equalities (3), (2) and (1) imply the equality

(A − λE) d_{n-1}(λ) C(λ) = (−1)^n d_{n-1}(λ) e_n(λ) E.

We cancel this equality by the nonzero factor d_{n-1}(λ). Note that if c(λ) is a nonzero polynomial and

D(λ) = (d_ij(λ))

is a nonzero λ-matrix with d_st(λ) ≠ 0, then the matrix c(λ) D(λ) has the nonzero element c(λ) d_st(λ) in position (s, t). Thus,

(A − λE) C(λ) = (−1)^n e_n(λ) E,

e_n(λ) E = (λE − A) [(−1)^{n+1} C(λ)]. (4)

From this equality it is clear that the remainder of the left division of the λ-matrix e_n(λ)E by the binomial λE − A is equal to zero. From the lemma proved in Section 3, it follows that this remainder is equal to the matrix

e_n(A) E = e_n(A).

Indeed, the matrix e_n(λ)E can be written as a matrix polynomial in λ whose coefficients are scalar matrices, i.e. they commute with the matrix A.

Hence the polynomial e_n(λ) is indeed annihilated by the matrix A. This means that the polynomial e_n(λ) is divisible without remainder by the minimal polynomial m(λ) of the matrix A:

e_n(λ) = m(λ) q(λ). (5)

From (5) it is clear that the leading coefficient of the polynomial q(λ) is equal to one.

Since m(A) = 0, then again, by the same lemma from Section 3, the remainder of the left division of the λ-matrix m(λ)E by the binomial λE − A is equal to zero, i.e.

m(λ) E = (λE − A) Q(λ). (6)

Equalities (5), (4) and (6) reduce to the equality

(λE − A) [(−1)^{n+1} C(λ)] = (λE − A) [Q(λ) q(λ)].

Both sides of this equality can be cancelled by the common factor (λE − A), because the leading coefficient E of this matrix λ-polynomial is a nonsingular matrix. Thus,

C(λ) = (−1)^{n+1} Q(λ) q(λ).

The greatest common divisor of the elements of the matrix C(λ) is equal to 1. Therefore the polynomial q(λ) must have degree zero, and since its leading coefficient is 1, q(λ) = 1.

Thus, in view of (5),

e_n(λ) = m(λ),

Q.E.D.
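The theorem just proved gives a direct recipe for computing the minimal polynomial: divide the characteristic polynomial by d_{n-1}(λ), the greatest common divisor of the minors of order (n − 1). A sketch in SymPy (the test matrix is assumed; the all-ones matrix is convenient because its characteristic and minimal polynomials differ):

```python
from sympy import Matrix, Poly, cancel, eye, gcd, symbols

lam = symbols('lambda')

def minimal_polynomial(A):
    """m_A(lambda) = e_n(lambda): the characteristic polynomial divided by
    d_{n-1}(lambda), the gcd of the (n-1)-order minors of A - lambda*E,
    normalized to leading coefficient 1."""
    n = A.shape[0]
    char_m = A - lam * eye(n)
    d = 0
    for i in range(n):
        for j in range(n):
            d = gcd(d, char_m.minor(i, j))   # gcd of all (n-1)-minors
    m = cancel(A.charpoly(lam).as_expr() / d)
    return Poly(m, lam).monic().as_expr()

# Assumed test matrix: the 3x3 all-ones matrix, with characteristic
# polynomial lambda^3 - 3*lambda^2 but a minimal polynomial of degree 2.
A = Matrix.ones(3, 3)
m = minimal_polynomial(A)
```

For the all-ones matrix this gives m_A(λ) = λ² − 3λ, and indeed A² − 3A = 0.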

Chapter 2. Problem solving

Example 1. Reduce the λ-matrix to canonical form

Solution: We reduce the given matrix A(λ) to canonical form by performing elementary transformations.

1) Add the second row to the first; then multiply the first row by (−λ) and by (−λ² − 1) and add it to the second and third rows, respectively. Add the first column, multiplied by (−λ² − λ), to the second column. In the resulting matrix, swap the second and third columns. Multiply the second row by (−λ) and add it to the third. Next, add the second column multiplied by (−λ² − λ + 1). Multiply the second and third rows by (−1).

A(l) = ~ ~ ~ ~ ~ ~ ~ = A(l).

The resulting matrix is canonical: it is diagonal, and each subsequent polynomial on the main diagonal is divisible by the previous one.

Answer:

Example 2. Prove the equivalence of the λ-matrices

Solution: We reduce the matrix A(λ) to canonical form.

1) In the matrix A(λ), swap the first and third columns:

2) Subtract the second row from the first:

3) Multiply the first row by (λ+1) and subtract the third row from it:

4) Multiply the first column by () and by () and subtract the second and third columns, respectively:

5) Swap the second and third rows:

6) Multiply the third row by () and subtract the second row from it:

7) Multiply the third row by (−1):

A(λ) ~ = B(λ).

Answer: A(λ) ~ B(λ).

Note that the matrix B(λ) is canonical.

Example 3. Prove that the given matrix A(λ) is unimodular. Reduce it to diagonal form.

The determinant of a unimodular matrix is nonzero and does not depend on λ. Let us compute det A(λ):

Multiply the first column by (−λ²) and add it to the second; we obtain:

A(λ) ~~ ~ ~ ~ ~ ~

Answer: the matrix A(λ) is unimodular.

Example 4. Using invariant factors, find the Jordan matrix

a) matrix A:

b) matrix B:

c) matrix C:

Solution: For matrix A we compile a table of elementary divisors. In the first column of the table we write the elementary divisors of the last invariant factor: .

Using the table of elementary divisors, we compose a Jordan matrix. For each elementary divisor we write the corresponding Jordan cell: J 1 (1), J 1 (2), J 1 (3), J 1 (4). By placing these cells on the main diagonal of the matrix, we obtain the desired Jordan matrix:

For matrix B, we compose a table of elementary divisors. In the first column of the table we write the only elementary divisor of the last invariant factor, in the second column - the penultimate invariant factor:

Invariant factors

For matrix C we compile a table of elementary divisors. In the first column of the table we write the only elementary divisor of the last invariant factor, the second column - of the penultimate factor, in the third column -

Using the table of elementary divisors, we compose a Jordan matrix. For each elementary divisor we write the corresponding Jordan cell J 2 (1), J 1 (1), J 1 (1). By placing these cells on the main diagonal of the matrix, we obtain the desired Jordan matrix:

Answer:
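The assembly step used throughout this example, one Jordan cell per elementary divisor placed along the main diagonal, can be sketched as follows (the divisor lists are the ones named in the text for cases (a) and (c)):

```python
import numpy as np

def jordan_cell(lam, k):
    """k-by-k Jordan cell J_k(lam)."""
    return lam * np.eye(k) + np.eye(k, k=1)

def jordan_matrix(divisors):
    """Place one Jordan cell J_k(a) per elementary divisor (lambda - a)^k,
    given as (a, k) pairs, along the main diagonal."""
    n = sum(k for _, k in divisors)
    J = np.zeros((n, n))
    pos = 0
    for a, k in divisors:
        J[pos:pos + k, pos:pos + k] = jordan_cell(a, k)
        pos += k
    return J

# Case (a): cells J_1(1), J_1(2), J_1(3), J_1(4) give a diagonal matrix
J_A = jordan_matrix([(1, 1), (2, 1), (3, 1), (4, 1)])
# Case (c): cells J_2(1), J_1(1), J_1(1)
J_C = jordan_matrix([(1, 2), (1, 1), (1, 1)])
```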

Example 5. Reduce the following matrices to normal Jordan form:

Solution: 1. For the matrix A, we find the normal Jordan matrix by reducing its characteristic matrix to canonical form. We compose the characteristic matrix


2. Let us reduce the matrix A − λE to canonical form.

1) Swap the first and second columns.

2) Multiply the first row by (λ − 4) and by (−1) and add it to the second and third rows, respectively.

3) Add the third column to the second.

4) Add the first column, multiplied by (λ), to the second.

5) Multiply the second and third rows by (−1), then swap the second and third columns and the second and third rows.

The invariant factors of the matrix are

e1(λ) = 1, e2(λ) = λ − 2,

e3(λ) = = = .

3. Using the obtained invariant factors e2(λ) and e3(λ), we compose a table of elementary divisors; elementary divisors equal to one are not included in the table.

For each elementary divisor we write the corresponding Jordan cell J 1 (2), J 2 (2). By placing these cells on the main diagonal of the matrix, we obtain the desired Jordan matrix:

J A = .

Let us reduce the matrix B to normal Jordan form using minors.

1. Compose the characteristic matrix

2. Let us find the invariant factors. The first-order minors have greatest common divisor

Let's find all second-order minors:

The greatest common divisor of these polynomials

The third order minor coincides with the determinant of the matrix

det(B − λE) = = .

Let's take the greatest common divisor with the leading coefficient equal to 1.

Let's find the invariant factors:

e1(λ) = d1(λ) = 1, e2(λ) = =

3. Using the obtained invariant factors e2(λ) and e3(λ), we compose a table of elementary divisors.

4. For each elementary divisor we write down the corresponding Jordan cell J 1 (-1), J 2 (-1). By placing these cells on the main diagonal of the matrix, we obtain the desired Jordan matrix:

J B = .

Answer:

J A =

J B = .

Example 6. Show that the characteristic polynomial of the matrix

is an annihilating polynomial for it.

Solution. We find the determinant, i.e. the characteristic polynomial of the matrix A.

Substituting the matrix A for the variable λ, we obtain

3A² − A³ = = = 0,

which is what needed to be shown.
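The matrix of Example 6 was lost in this copy. Judging by the characteristic polynomial 3λ² − λ³ used in the solution (and by the minors of Example 7), the 3×3 all-ones matrix fits, and it is assumed in the following numerical check:

```python
import numpy as np

# Assumed matrix with characteristic polynomial 3l^2 - l^3
# (the original matrix of Example 6 was lost in this copy).
A = np.ones((3, 3))

# Cayley-Hamilton: substituting A for lambda must give the zero matrix
residual = 3 * np.linalg.matrix_power(A, 2) - np.linalg.matrix_power(A, 3)
```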

Example 7. Find the minimal polynomial of the matrix

Solution. First method. 1. Compose the characteristic matrix

2. We reduce this λ-matrix to normal diagonal form. Swap the first and third rows. Choose as the leading element the 1 in the upper left corner of the matrix. Using the leading element, we make the remaining elements of the first row and the first column equal to zero:

We take (−λ) as the leading element and make all other elements of the second row and second column equal to zero. Then we multiply the second and third rows by (−1) so that the leading coefficients of the diagonal elements are equal to one. We obtain the normal diagonal form:

The minimal polynomial of the matrix:

m_A(λ) = e3(λ) = .

Second method. 1. Compose the characteristic matrix;

2. find the characteristic polynomial

Δ(λ) = 3λ² − λ³.

3. find the second-order minors of the characteristic matrix (A − λE). We restrict ourselves to the minors located in the first two rows:

M 12 12 = , M 13 12 = −λ, M 23 12 = λ.

The expressions for the remaining minors coincide with those already found. The greatest common divisor of the polynomials , (−λ), λ is equal to λ, i.e.

4. According to the formula

we get:

To check, let us calculate

m_A(A) = A² − 3A =

Note that the minimal polynomial m_A(λ) is annihilating, i.e.

Answer: .

Example 8. The population of a country. Let us divide the country's population into four age groups:

(0, 20], (20, 40], (40, 60], (60, ∞) years. (1)

Let X(t) = (x1(t), x2(t), x3(t), x4(t))

be the numbers of people in these groups at time t. We are interested in the population sizes of these subgroups (i.e. the age structure of the country's population) in 20, 40, 60, … years (i.e. X(20), X(40), X(60), …). We will calculate them from the coordinates of the vector X(0) and from birth and death rates, which we take as close to real life as possible.

Let's create an equation for the future.

In 20 years, almost all people from group 1 will move to group 2; some will die from diseases, accidents, etc. Let 0.95 of group 1 move to group 2 over the 20 years. This is the coefficient of the contribution of group 1 to group 2:

x 2 (t + 20) = 0.95 x 1 (t). (2)

In addition, a small part of the young people from this group will have time to marry and have children before the age of 20, which gives a contribution of group 1 to group 1 (after 20 years). Let this contribution be 0.01 of the size of group 1. Groups 2 and 3 will also contribute to group 1 (in the form of children). Let the contribution from group 2 be 0.5 of its size (everyone is married and each family has one child), and the contribution from group 3 be 0.02 of its size. Then

X 1 (t + 20) = 0.01 x 1 (t) + 0.5 x 2 (t) + 0.02 x 3 (t). (3)

Let's set the survival rate in the second group to 0.8, i.e.

X 3 (t + 20) = 0.8 x 2 (t). (4)

And in groups 3 and 4, respectively, 0.7 and 0.4:

X4(t + 20) = 0.7 x3(t) + 0.4 x4(t). (5)

Let us rewrite the relations obtained, (2)-(5), in matrix form:

X(t + 20) = AX(t). (6)

where the matrix A of influence coefficients is:

It is compiled according to the principle:

input number = column number,

output number = line number.

So the coefficient of influence of the 1st group on the 2nd should be written in the 1st column, 2nd row.

According to formula (6), if the operator A acts on the population composition X(t) at time t, then we obtain the population composition X(t + 20) after 20 years. Therefore, the operator A is called the shift operator (in this problem, a shift by 20 years).

From formula (6) it follows that

X(t + 40) = AX(t + 20) = AAX(t) = A^2 X(t)

X(t + 60) = AX(t + 40) = AA^2 X(t) = A^3 X(t)

X(t + 20n) = A^n X(t) (8)

So, we want to calculate the population after 20, 40, 60, … years (assuming that neither the birth rate nor the death rate changes), i.e. to calculate AX(0), A^2 X(0), A^3 X(0), … The product

A n X(0) = AAAA…AX(0)

can be calculated in different orders. One can proceed like this:

A(A…(AX(0))). (9)

Or like this: first compute A^n, and then A^n X(0). (10)

In this problem, if we need to calculate the future population only for a few moments of time (for example, 200 years ahead), then to reduce the number of operations we use formula (9). But if we want to tune the numerical elements of the matrix A (for example, find the birth rate at which the country's population stabilizes at the same level), then method (10) is more convenient. So, let the population today be:

x1(0) = 30, x2(0) = 40, x3(0) = 30, x4(0) = 25 (million people).

Now let us carry out the calculations for n = 2, 3, …, 10 according to (9) in any mathematical computer program (for example, Mathematica, MathCAD, Maple V). Using such a program, we obtain the results, which we enter into the table.


We see that in 200 years a country with a population similar to modern Russia's has shrunk to the population of the Leningrad region. Note how the population is aging (the proportion of elderly people grows); this is an inevitable companion of population decline. In reality everything is much worse: a decrease in population within the same territory makes it more difficult for young people to meet and marry, reduces the country's wealth and, as a result, worsens medical care, and so on. In other words, a decrease in population would also lower the numbers in the matrix A.
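The tables themselves were lost in this copy, but the calculation described above is easy to reproduce. A sketch in Python with NumPy in place of the Maple/MathCAD computation, with the matrix A assembled from relations (2)-(5) (the 0.4 survival rate in group 4 follows the coefficients stated in the text):

```python
import numpy as np

# Influence-coefficient matrix A from relations (2)-(5):
# contribution number = column, result number = row.
A = np.array([
    [0.01, 0.50, 0.02, 0.00],   # births into group 1
    [0.95, 0.00, 0.00, 0.00],   # survival from group 1 into group 2
    [0.00, 0.80, 0.00, 0.00],   # survival from group 2 into group 3
    [0.00, 0.00, 0.70, 0.40],   # survival into and within group 4
])
X = np.array([30.0, 40.0, 30.0, 25.0])   # X(0) in millions of people

# Scheme (9): ten 20-year steps, i.e. 200 years ahead
for n in range(10):
    X = A @ X
```

The total falls by roughly the factor 0.71 per 20-year step, matching the dominant eigenvalue of A.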

For comparison, let’s set the birth rate in group 2 differently, at the level of 4 children per family.

Then the same calculations give us:

population

In 140 years, today's Russia would have caught up with China's billion-strong population, and half of it would consist of young people.

Naturally, if we were only interested in such a simple forecast, we could limit ourselves to a direct calculation by (9), and the theory of the Jordan form would not be needed. But we are interested in the ability to manage the process, allowing neither the death of the country nor a catastrophic growth of the population. Therefore, three questions interest us:

· Is it possible to stabilize the population by choosing the birth rate (increasing it is easier than reducing mortality);

· what should the birth rate be in order for the country's population to stabilize;

· how the population structure (the ratio between young people and old people) will settle at a stable population size (this ratio determines how many pensioners each worker must support and, therefore, together with labor productivity, it determines the standard of living).

A numerical experiment, that is, the calculation of such tables for various values of the birth rate according to (9), might allow us to select the birth rate. But we would obtain the result with an error unknown to us, because the calculations cannot be carried out indefinitely and because the behavior of the numbers of the separate groups is difficult to understand. Indeed: the values x3(t) and x4(t) in the last table oscillate. If the fertility parameter is changed a little, the oscillations change somewhat.

According to (8), our country's population in 20n years is equal to

X(20n) = A^n X(0), (12)

where the matrix A is given in (7). We know that

A^n = S J^n S^{-1}, (13)

where S is the transition matrix to a new basis, consisting of constant numbers, and J is the Jordan normal form of the matrix A.

To calculate J, we need the eigenvalues of the matrix A; we use a computer for the calculations. For our matrix A, Maple V gives four eigenvalues:

λ1 = 0.7095891332

λ2 = −0.667497875

λ4 = −0.0320912582

Since the number of distinct eigenvalues is 4, all Jordan cells in the matrix J have order 1, i.e. the matrix J is purely diagonal and its n-th power has the form:

Thus, we obtain for (12):

X(20n) = λ1^n V1 + λ2^n V2 + λ3^n V3 + λ4^n V4, (14)

where the letters V with subscripts denote certain numerical (constant) column vectors.

The structure of formula (14) shows the behavior of X with increasing n. All terms decrease because the eigenvalues are less than 1 in absolute value, i.e. X tends to the zero vector. The last three terms decrease faster than the first, so for sufficiently large n the first term is the main term of the sum. The second term decreases faster than the first, but owing to the negativity of the second eigenvalue it is either added to the first (for even n) or subtracted from it (for odd n), i.e. it creates damped oscillations in the behavior of X. These oscillations do not correspond to reality, because their cycle is determined by the arbitrarily chosen interval (20 years); with a division of the population into a larger number of age groups, the negative eigenvalues would produce oscillations with a shorter period.

With a high birth rate, the formula for X(20n) still has the form (14), but with different, larger eigenvalues. The first eigenvalue then turns out to be greater than one, and we observe exponential growth of the population.

From the above we conclude: to stabilize the country's population, we must choose the birth rate so that the first eigenvalue equals 1 while all the others are less than 1 in absolute value. Then the last three terms of formula (14) tend to 0, and V1 turns out to be the desired stable state of the population.

Next we select the birth rate. Return to the matrix A given in (7) and replace the birth rate of children in group 2 (first row, second column) by the letter g. As is known, an eigenvalue of A must be a root of its characteristic equation; since we need λ = 1, we compute the determinant det(A - E).

We get

det(A - E) = 0.584880 - 0.57006·g

and from the equality det(A - E) = 0 we find g ≈ 1.026. We substitute this birth-rate value into the matrix A (first row, second column) and again compute the country's population over a 200-year interval using (9).
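Because g enters a single entry of A, det(A(g) - E) is an affine function of g, exactly as in the relation det = 0.584880 - 0.57006·g above, so two evaluations pin down the root exactly. A sketch in pure Python; the 2×2 matrix in the test is illustrative, not the matrix (7):

```python
def det(M):
    """Determinant by recursive Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def solve_fertility(A_of_g):
    """Root of f(g) = det(A(g) - E), assuming f is affine in g
    (true when g occupies a single matrix entry)."""
    def f(g):
        M = A_of_g(g)
        return det([[M[i][j] - (1.0 if i == j else 0.0)
                     for j in range(len(M))] for i in range(len(M))])
    f0, f1 = f(0.0), f(1.0)
    return -f0 / (f1 - f0)   # root of the line through (0, f0) and (1, f1)
```

Applied to the affine relation quoted in the text, the root is 0.584880 / 0.57006 ≈ 1.026, matching the value of g found above.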


By adjusting the birth rate we have ensured the stability of the country's total population over 200 years: it hovers around 130 million. The fluctuations in the sizes of the individual groups, however, remain quite significant. The reason is that the matrix A now has two eigenvalues close to one in absolute value, one of them negative. That is, the result has roughly the form

X(20n) = V1 + (-1)^n V2 + λ3^n V3 + λ4^n V4, (15)

The last two terms decay as n grows, because the absolute values of the third and fourth eigenvalues are less than 1, while the second term makes X oscillate back and forth between V1 - V2 and V1 + V2.
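The oscillation in (15) can be seen in the smallest possible example, a hypothetical 2×2 matrix with eigenvalues 1 and -1:

```python
# Minimal illustration of (15): the matrix [[0,1],[1,0]] has eigenvalues
# 1 and -1, so A^n x alternates between two states, oscillating around
# their average V1 = (x + Ax)/2, with V2 = (x - Ax)/2.
A = [[0.0, 1.0], [1.0, 0.0]]

def step(A, x):
    """One application of A to the state vector x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

x = [3.0, 1.0]                     # = V1 + V2 with V1 = (2,2), V2 = (1,-1)
states = []
for n in range(4):
    states.append(x)
    x = step(A, x)
print(states)  # → [[3.0, 1.0], [1.0, 3.0], [3.0, 1.0], [1.0, 3.0]]
```

The state never settles: it jumps between V1 + V2 and V1 - V2 exactly as the (-1)^n V2 term predicts.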

Since g is only approximate, the matrix A does not have an eigenvalue exactly equal to 1, so the group sizes also drift slowly against the background of these large oscillations. One could, of course, tune the fertility further to obtain an eigenvalue even more precisely equal to 1, and then check how close the second eigenvalue is to -1. But refining the eigenvalues makes little sense in this problem, because the initial values and the matrix A itself are given with a large error (and exact measurement of fertility and mortality cannot, in principle, support exact calculations, since these rates do not stay fixed). A refinement of this model should instead take other dependencies in society into account. From a purely theoretical standpoint, however, we have settled the question of the existence of the limit in (14): if one of the eigenvalues equals 1 and the rest are smaller in absolute value, then the limit exists.

Conclusion

Matrices were first mentioned in ancient China, where they were called "magic squares", and their main application was solving linear equations. Magic squares were also known somewhat later to Arab mathematicians, around which time the principle of matrix addition appeared. After the theory of determinants emerged in the late 17th century, Gabriel Cramer (1704-1752) developed his theory in the 18th century and published Cramer's rule in 1751; the "Gauss method" appeared in roughly the same period. Matrix theory proper began in the mid-19th century with the work of William Hamilton and Arthur Cayley. Fundamental results in matrix theory belong to Karl Weierstrass (1815-1897), Jordan, and Frobenius (1849-1917). The term "matrix" was coined by James Sylvester in 1850.

Matrices are found everywhere. A multiplication table, for example, is a product of matrices. In physics and other applied sciences, matrices are a means of recording and transforming data; in programming they appear in the writing of programs, where they are also called arrays. They are widely used in technology: any picture on a screen is a two-dimensional matrix whose elements are the colors of the dots. In psychology the term is understood much as in mathematics, but with certain "psychological objects", such as tests, in place of mathematical ones. Matrices are also widely used in economics, biology, chemistry, and even marketing. There is even an abstract model, the theory of marriages in primitive society, in which matrices describe the permitted marriage options for members, and even descendants, of a particular tribe.

In mathematics, matrices are widely used to write systems of linear algebraic equations (SLAEs) and systems of differential equations compactly. The matrix apparatus reduces the solution of SLAEs to operations on matrices.

The Jordan normal form of a matrix is used to compute the population of a country, region, or the world after a given period of time. Such a model shows how the population changes depending on specific conditions, the birth and death rates, and makes it possible to avoid both the extinction of the country and a catastrophic growth of the population.

Matrix theory is not part of the required school curriculum in mathematics. In schools with advanced mathematics classes, the basic concepts of matrix theory are covered only superficially; matrices are discussed in more detail in higher mathematics courses.

This work can be recommended to students wishing to extend their knowledge of matrix theory, and to high-school students and mathematics teachers as an introduction to its general concepts and a way to broaden their mathematical horizons.

The tasks set in this work have been solved, and its goal has been achieved.

