Equivalent matrix transformations. Elementary transformations of a system

The rank of a matrix is preserved under the following operations:

1. Interchanging rows.

2. Multiplying a row by a number other than zero.

3. Transposition.

4. Deleting a zero row.

5. Adding to a row another row multiplied by an arbitrary number.

The first transformation leaves some minors unchanged and changes the sign of others. The second transformation also leaves some minors unchanged, while the others are multiplied by a number different from zero. The third transformation preserves all minors. Therefore, under these transformations the rank of the matrix is preserved (by the second definition). Deleting a zero row cannot change the rank either, because such a row cannot enter a non-zero minor. Let us now consider the fifth transformation.
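As a quick numerical check of transformations 1-4, here is a minimal Python sketch (the matrix and the use of numpy's matrix_rank are illustrative choices, not part of the text):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [0., 0., 0.]])
r = np.linalg.matrix_rank(A)

assert np.linalg.matrix_rank(A[[1, 0, 2]]) == r   # 1: interchange rows
B = A.copy()
B[0] *= 7.0                                       # 2: multiply a row by 7 != 0
assert np.linalg.matrix_rank(B) == r
assert np.linalg.matrix_rank(A.T) == r            # 3: transposition
assert np.linalg.matrix_rank(A[:2]) == r          # 4: delete the zero row
```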

We shall assume that the basis minor Δ_p is located in the first p rows. Suppose that to a row a_i an arbitrary row b, multiplied by some number λ, is added, where b is one of these first p rows; that is, to the row a_i a linear combination of the rows containing the basis minor is added. In this case the basis minor Δ_p remains unchanged (and different from 0): if a_i lies among the first p rows, the added term contributes a determinant with two equal rows, which vanishes, and otherwise Δ_p is not affected at all. All minors that do not contain the row a_i also remain unchanged. Thus in this case the rank (by the second definition) is preserved. Now consider a minor M_s that contains the row a_i and not all of whose rows are among the first p rows (perhaps none of them are).

Adding to the row a_i the row b multiplied by the number λ, we obtain from M_s a new minor M_s′, and

M_s′ = M_s + λ·M̃_s,

where M̃_s is the determinant obtained from M_s by replacing the elements of the row a_i with the corresponding elements of the row b.

If s > p, then M_s = M̃_s = 0, because all minors of order greater than p of the original matrix are equal to 0 (and M̃_s is, up to sign, such a minor, or else has two equal rows). But then M_s′ = 0, and the rank of the transformed matrix does not increase. It could not decrease either, since the basis minor underwent no change. Thus the rank of the matrix remains unchanged.
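The identity M_s′ = M_s + λ·M̃_s is just the linearity of the determinant in the row being changed; written out for a 2 × 2 minor (a check of ours, with det(u, v) denoting the determinant of the matrix with rows u and v):

det(a + λb, c) = det(a, c) + λ·det(b, c).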


Our immediate goal is to prove that any matrix can be reduced to certain standard forms. The language of equivalent matrices is useful on this path.

Let A and B be matrices of the same size. We shall say that the matrix A is row-equivalent (column-equivalent, or equivalent) to the matrix B if B can be obtained from A by a finite number of elementary row (column, or row and column, respectively) transformations. It is clear that row-equivalent and column-equivalent matrices are equivalent.

First we shall show that any matrix can be reduced by elementary row transformations to a special form, called reduced.

Let A be a matrix. A non-zero row of this matrix is said to have the reduced form if it contains an element a_ij equal to 1 such that all the elements of the j-th column other than a_ij are equal to zero. We shall call this marked element the leading element of the row and enclose it in a circle. In other words, the i-th row of a matrix has the reduced form if the matrix contains a column of the form

(0, …, 0, 1, 0, …, 0)^T,

with the 1 in the i-th position.

For example, in the matrix

( 0 1 0 1 )
( 2 0 3 0 )
( 5 0 1 0 )

the first row has the reduced form, since a_12 = 1 and the remaining elements of the second column are zero. Note that in this example the element a_14 = 1 also has a claim to being the leading element of the row. In what follows, if a row of the reduced form contains several elements with the leading property, we shall single out exactly one of them, chosen arbitrarily.

A matrix is said to have the reduced form if each of its non-zero rows has the reduced form. For example, the matrix

( 1 0 2 )
( 0 1 3 )
( 0 0 0 )

has the reduced form.

Proposition 1.3. For any matrix there is a row-equivalent matrix of the reduced form.

Indeed, suppose the matrix A has the form (1.1) and its i-th row is non-zero, say a_ij ≠ 0. Carry out in it the elementary row transformations

multiply the i-th row by 1/a_ij; then, for every k ≠ i, subtract from the k-th row the i-th row multiplied by a_kj.   (1.20)

We obtain a matrix in which the i-th row has the reduced form: its j-th element equals 1, and all other elements of the j-th column equal zero.

Secondly, if some row a_k (k ≠ i) of the matrix already had the reduced form, then after carrying out the elementary transformations (1.20) the corresponding row of the new matrix will again be reduced. Indeed, since the row a_k is reduced, there is a column, say the l-th, such that

a_kl = 1 and a_ml = 0 for all m ≠ k;

but then a_il = 0, and consequently the transformations (1.20) do not change the l-th column, i.e. that column keeps its single 1 in the k-th row. Therefore the k-th row still has the reduced form.

Now it is clear that, transforming each non-zero row of the matrix in turn in the above manner, after a finite number of steps we obtain a matrix A′ of the reduced form. Since only elementary row transformations were used, A′ is row-equivalent to the matrix A. ∎
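The proof is constructive; here is a minimal Python sketch of the procedure, using exact rational arithmetic (the function name and the matrix representation are our own choices):

```python
from fractions import Fraction

def reduce_matrix(rows):
    """Bring a matrix to reduced form using only elementary row
    transformations, following the proof of Proposition 1.3."""
    A = [[Fraction(x) for x in row] for row in rows]
    if not A:
        return A
    m, n = len(A), len(A[0])
    for i in range(m):
        # Find a non-zero element a_ij in the i-th row (skip zero rows).
        j = next((c for c in range(n) if A[i][c] != 0), None)
        if j is None:
            continue
        # Transformations (1.20): scale the i-th row so that a_ij = 1 ...
        pivot = A[i][j]
        A[i] = [x / pivot for x in A[i]]
        # ... then make every other element of the j-th column zero.
        for k in range(m):
            if k != i and A[k][j] != 0:
                factor = A[k][j]
                A[k] = [a - factor * b for a, b in zip(A[k], A[i])]
    return A
```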

Example 7. Construct a matrix of the reduced form that is row-equivalent to a given matrix.
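As an illustration, continuing the sketch above with a sample matrix of our own choosing:

```python
A = [[0, 2, 4], [1, 1, 1], [2, 4, 6]]
for row in reduce_matrix(A):          # reduce_matrix from the sketch above
    print([str(x) for x in row])
# ['0', '1', '2']
# ['1', '0', '-1']
# ['0', '0', '0']
```

Each non-zero row of the result has a leading 1 whose column is otherwise zero, so the result has the reduced form.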

The first three sections of this chapter are devoted to the theory of equivalence of polynomial matrices. On this basis, in the next three sections, an analytic theory of elementary divisors is constructed, i.e., a theory of the reduction of a constant (non-polynomial) square matrix to a normal form. The last two sections of the chapter give two methods for constructing the transformation matrix.

§ 1. Elementary transformations of a polynomial matrix

Definition 1. A polynomial matrix, or λ-matrix, is a rectangular matrix A(λ) whose elements are polynomials in λ:

A(λ) = ‖a_ik(λ)‖ = ‖a_ik^(0) λ^m + a_ik^(1) λ^(m−1) + … + a_ik^(m)‖;

here m is the greatest of the degrees of the polynomials a_ik(λ).

Setting A_j = ‖a_ik^(j)‖ (j = 0, 1, …, m), we can represent the polynomial matrix as a matrix polynomial with respect to λ, that is, as a polynomial with matrix coefficients:

A(λ) = A_0 λ^m + A_1 λ^(m−1) + … + A_m.
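For instance (an illustrative sketch in sympy; the specific matrices are our own):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Matrix coefficients of a polynomial matrix of degree m = 2.
A0 = sp.Matrix([[1, 0], [0, 0]])
A1 = sp.Matrix([[0, 2], [1, 0]])
A2 = sp.Matrix([[3, 0], [0, 4]])

# The same object written as a matrix polynomial in lambda.
A = A0 * lam**2 + A1 * lam + A2
print(A)   # Matrix([[lambda**2 + 3, 2*lambda], [lambda, 4]])
```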

Let us introduce the following elementary operations on a polynomial matrix:

1. Multiplication of some row, for example the i-th, by a number c ≠ 0.

2. Addition to some row, for example the i-th, of another row, for example the j-th, multiplied beforehand by an arbitrary polynomial b(λ).

3. Interchange of any two rows, for example the i-th and the j-th.

We invite the reader to check that operations 1, 2, 3 are equivalent to multiplying the polynomial matrix A(λ) on the left, respectively, by the following square matrices of order m:

S′, the unit matrix in which the i-th diagonal element is replaced by the number c; S″, the unit matrix with the polynomial b(λ) placed in the (i, j)-th position; S‴, the unit matrix with the i-th and j-th rows interchanged,   (1)

i.e., as a result of applying operations 1, 2, 3, the matrix A(λ) is transformed, respectively, into the matrices S′A(λ), S″A(λ), S‴A(λ). Therefore operations of the types 1, 2, 3 are called left elementary operations.
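A quick sympy check that a type-2 row operation is the same as left multiplication by the corresponding matrix (the 2 × 2 matrix and the polynomial b(λ) are illustrative):

```python
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[lam**2 + 1, lam], [2*lam, lam - 3]])

# Left elementary operation of type 2: add to row 0 the row 1
# multiplied by the polynomial b(lambda) = lambda + 2.
b = lam + 2
S2 = sp.eye(2)
S2[0, 1] = b          # unit matrix with b(lambda) in position (i, j)

direct = A.copy()
direct[0, :] = direct[0, :] + b * direct[1, :]

assert sp.simplify(S2 * A - direct) == sp.zeros(2, 2)
```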

The right elementary operations on a polynomial matrix are defined in a completely similar way (these operations are performed not on the rows but on the columns of the polynomial matrix); we denote the corresponding square matrices of order n by T′, T″, T‴.

As a result of applying a right elementary operation, the matrix A(λ) is multiplied on the right by the corresponding matrix T.

We shall call matrices of the type S′, S″, S‴ (or, what is the same, of the type T′, T″, T‴) elementary matrices.

The determinant of any elementary matrix does not depend on λ and is different from zero. Therefore, to each left (right) elementary operation there corresponds an inverse operation, which is also a left (respectively, right) elementary operation.
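Explicitly (a routine check, not spelled out in the text):

(S′(c))⁻¹ = S′(1/c),  (S″(b(λ)))⁻¹ = S″(−b(λ)),  (S‴)⁻¹ = S‴,

i.e., the inverse operations are: dividing the same row by c, subtracting the j-th row multiplied by b(λ), and interchanging the same two rows once more.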

Definition 2. Two polynomial matrices A(λ) and B(λ) are called 1) left equivalent, 2) right equivalent, 3) equivalent if one of them can be obtained from the other by applying, respectively, 1) left elementary operations, 2) right elementary operations, 3) left and right elementary operations.

Let the matrix B(λ) be obtained from A(λ) using left elementary operations corresponding to the elementary matrices S_1, S_2, …, S_p. Then

B(λ) = S_p S_(p−1) ⋯ S_1 A(λ).   (2)

Denoting by P(λ) the product S_p S_(p−1) ⋯ S_1, we write equality (2) in the form

B(λ) = P(λ) A(λ),   (3)

where P(λ), like each of the matrices S_1, …, S_p, has a constant nonzero determinant.

In the next section we shall prove that every square λ-matrix with a constant nonzero determinant can be represented as a product of elementary matrices. Therefore equality (3) is equivalent to equality (2) and thus means the left equivalence of the matrices A(λ) and B(λ).

In the case of right equivalence of the polynomial matrices A(λ) and B(λ), instead of equality (3) we have the equality

B(λ) = A(λ) Q(λ),   (3′)

and in the case of (two-sided) equivalence, the equality

B(λ) = P(λ) A(λ) Q(λ).   (3″)

Here again P(λ) and Q(λ) are matrices with nonzero determinants independent of λ.

Thus, Definition 2 can be replaced by an equivalent definition.

Definition 2′. Two rectangular λ-matrices A(λ) and B(λ) are called 1) left equivalent, 2) right equivalent, 3) equivalent if, respectively,

1) B(λ) = P(λ)A(λ), 2) B(λ) = A(λ)Q(λ), 3) B(λ) = P(λ)A(λ)Q(λ),

where P(λ) and Q(λ) are polynomial square matrices with constant nonzero determinants.
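A quick sympy check of the determinant claim (the elementary matrices below are illustrative choices of our own):

```python
import sympy as sp

lam = sp.symbols('lambda')

S1 = sp.Matrix([[5, 0], [0, 1]])            # type 1: c = 5
S2 = sp.Matrix([[1, lam**2 + 1], [0, 1]])   # type 2: b(lambda) = lambda**2 + 1
S3 = sp.Matrix([[0, 1], [1, 0]])            # type 3: interchange the two rows

P = S3 * S2 * S1
print(sp.det(P))   # -5: constant, nonzero, independent of lambda
```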

We illustrate all the concepts introduced above with the following important example.

Consider a system of m linear homogeneous differential equations (of arbitrary orders) in n unknown functions x_1(t), …, x_n(t) with constant coefficients:

Σ_{k=1}^{n} a_ik(d/dt) x_k = 0  (i = 1, 2, …, m),   (4)

where the a_ik(λ) are polynomials with constant coefficients in the differentiation operator λ = d/dt. The left elementary operations on the matrix ‖a_ik(λ)‖ amount to operations on the equations themselves: multiplying the i-th equation by a number c ≠ 0, adding to the i-th equation the j-th equation subjected to the differential operator b(d/dt), and interchanging two equations. The first right elementary operation means the introduction of a new unknown function x̃_i = (1/c) x_i in place of x_i; the second elementary operation means the introduction of a new unknown function x̃_j = x_j − b(d/dt) x_i (instead of x_j); the third operation means interchanging in the equations the terms containing x_i and x_j (i.e., renumbering these two unknowns).
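For instance (an illustrative system of our own choosing): the system

x_1″ + x_2′ = 0,  x_1′ − x_2 = 0

corresponds to the polynomial matrix

A(λ) = ( λ²  λ ; λ  −1 ),

and adding the second equation, differentiated once (the operator b(d/dt) = d/dt), to the first is a left elementary operation of type 2.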

1. Let two vector spaces R and S, of dimensions n and m respectively, over a number field K be given, and let A be a linear operator mapping R into S. In this section we shall find out how the matrix corresponding to the given linear operator changes when the bases in R and S are changed.

Let us choose arbitrary bases e_1, e_2, …, e_n in R and g_1, g_2, …, g_m in S. In these bases the operator A corresponds to a certain matrix A = ‖a_ik‖. The vector equality

y = Ax

corresponds to the matrix equality

y = Ax,   (27)

where x and y are the coordinate columns of the vectors x and y in the bases e_1, …, e_n and g_1, …, g_m.

Let us now choose other bases e*_1, …, e*_n and g*_1, …, g*_m in R and S. In the new bases, instead of x, y, A, we shall have x*, y*, A*. At the same time

y* = A* x*.   (28)

Let us denote by Q and P the nonsingular square matrices of orders n and m, respectively, that carry out the transformation of coordinates in the spaces R and S in the transition from the old bases to the new ones (see § 4):

x = Q x*,  y = P y*.   (29)

Then from (27) and (29) we obtain:

y* = P⁻¹ A Q x*.   (30)

Comparing (28) and (30), we find:

A* = P⁻¹ A Q.   (31)

Definition 8. Two rectangular matrices A and B of the same size are said to be equivalent if there exist two nonsingular square matrices P and Q such that

B = P A Q.   (32)

From (31) it follows that two matrices corresponding to the same linear operator A for different choices of bases in R and S are always equivalent to each other. It is easy to see that, conversely, if the matrix A corresponds to the operator A for some bases in R and S, and the matrix B is equivalent to the matrix A, then B corresponds to the same linear operator for certain other bases in R and S.

Thus, to each linear operator mapping R into S there corresponds a class of mutually equivalent matrices with elements from the field K.

2. The following theorem establishes a criterion for the equivalence of two matrices:

Theorem 2. In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that these matrices have the same rank.

Proof. The condition is necessary. When a rectangular matrix is multiplied by any nonsingular square matrix (on the left or on the right), the rank of the original rectangular matrix cannot change (see Chapter I, p. 27). Therefore it follows from (32) that

rank B = rank A.

The condition is sufficient. Let A = ‖a_ik‖ be a rectangular matrix of size m × n. It defines a linear operator A mapping the space R with basis e_1, …, e_n into the space S with basis g_1, …, g_m. Let us denote by r the number of linearly independent vectors among the vectors Ae_1, …, Ae_n. Without loss of generality we may assume that the vectors Ae_1, …, Ae_r are linearly independent and the rest are expressed linearly through them:

Ae_k = Σ_{j=1}^{r} c_kj Ae_j  (k = r+1, …, n).   (33)

Let us define a new basis in R as follows:

e*_i = e_i  (i = 1, …, r);  e*_k = e_k − Σ_{j=1}^{r} c_kj e_j  (k = r+1, …, n).   (34)

Then, by virtue of (33),

Ae*_i = Ae_i  (i = 1, …, r);  Ae*_k = 0  (k = r+1, …, n).   (35)

The vectors

g*_i = Ae*_i  (i = 1, 2, …, r)   (36)

are linearly independent. Let us supplement them by some vectors g*_{r+1}, …, g*_m to a basis of S.

Then the matrix A* corresponding to the same operator A in the new bases e*_1, …, e*_n; g*_1, …, g*_m will, according to (35) and (36), have the block form

A* = ( I_r  0 ; 0  0 ),   (37)

where I_r is the identity matrix of order r.

In the matrix A*, r ones go along the main diagonal starting from the upper left corner; all other elements of the matrix are equal to zero. Since the matrices A and A* correspond to the same operator A, they are equivalent to each other. According to what has been proved, equivalent matrices have the same rank. Therefore the rank of the original matrix A equals r.

We have shown that an arbitrary rectangular matrix of rank r is equivalent to the "canonical" matrix (37). But the matrix (37) is completely determined by specifying the dimensions m, n and the number r. Therefore all rectangular matrices of given size and given rank r are equivalent to one and the same matrix (37) and, consequently, are equivalent to each other. The theorem is proved.
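A numerical illustration of the necessity part (the matrix and the random nonsingular factors are our own choices; numpy's matrix_rank computes the rank):

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])      # rank 1: the second row is twice the first

# Random square matrices are nonsingular with probability 1.
P = rng.standard_normal((2, 2))
Q = rng.standard_normal((3, 3))

B = P @ A @ Q                     # B = PAQ, as in (32)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))   # 1 1
```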

3. Let a linear operator A be given, mapping the n-dimensional space R into the m-dimensional space S. The set of all vectors of the form Ax, where x ∈ R, forms a vector space. We shall denote this space by AR; it forms part of the space S or, as one says, is a subspace of S.

Along with the subspace AR of S, we consider the set of all vectors x ∈ R satisfying the equation

Ax = 0.   (38)

These vectors also form a subspace, this time of R; we shall denote this subspace by N.

Definition 9. If a linear operator A maps R into S, then the dimension of the space AR is called the rank of the operator A, and the dimension of the space N consisting of all vectors x satisfying condition (38) is called the defect of the operator.

Among all the equivalent rectangular matrices defining the given operator A in various bases, there is the canonical matrix (37). Let us denote by e*_1, …, e*_n and g*_1, …, g*_m the corresponding bases in R and S. Then

Ae*_i = g*_i  (i = 1, …, r),  Ae*_k = 0  (k = r+1, …, n).

From the definition of AR and N it follows that the vectors g*_1, …, g*_r form a basis of AR, and the vectors e*_{r+1}, …, e*_n form a basis of N. Hence r is the rank of the operator A, and

d = n − r

is its defect.

If A is an arbitrary matrix corresponding to the operator A, then it is equivalent to the canonical matrix (37) and therefore has the same rank r. Thus the rank of the operator A coincides with the rank of the rectangular matrix

A = ‖a_ik‖  (i = 1, …, m; k = 1, …, n),

defining the operator A in arbitrary bases e_1, …, e_n and g_1, …, g_m.

The columns of the matrix A contain the coordinates of the vectors Ae_1, …, Ae_n. Since AR is spanned by these vectors, the rank of the operator, i.e., the dimension of AR, equals the maximum number of linearly independent vectors among Ae_1, …, Ae_n. Thus the rank of a matrix coincides with the number of its linearly independent columns. Since transposition turns the rows of the matrix into columns and does not change the rank, the number of linearly independent rows of a matrix is also equal to its rank.
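A one-line numerical check of this last statement (the matrix is illustrative):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [2., 4., 1.]])
# Column rank equals row rank: transposition does not change the rank.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```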

4. Let two linear operators A and B and their product C = BA be given.

Let the operator A map R into S, and let the operator B map S into T. Then the operator C = BA maps R into T:

Cx = B(Ax)  (x ∈ R).

Let us introduce the matrices A, B, C corresponding to the operators A, B, C for a certain choice of bases in R, S, and T. Then the operator equality C = BA corresponds to the matrix equality C = BA for the coordinate columns, i.e., in the chosen bases the matrix of the product is the product of the matrices.
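A small numerical illustration of this correspondence (random matrices and dimensions of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))   # operator A maps R (dim 2) into S (dim 3)
B = rng.standard_normal((4, 3))   # operator B maps S (dim 3) into T (dim 4)
x = rng.standard_normal(2)        # a vector of R, in coordinates

C = B @ A                         # matrix of the product C = BA
# Applying C to x gives the same result as applying A, then B.
assert np.allclose(C @ x, B @ (A @ x))
```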


