Matrix equivalence. Equivalent matrices

Our immediate goal is to prove that any matrix can be reduced to certain standard forms. The language of equivalent matrices is useful along this path.

Definition. We will say that one matrix is l-equivalent (p-equivalent, or equivalent) to another, and denote this relation accordingly, if the second matrix can be obtained from the first by a finite number of elementary row (column, or row and column, respectively) transformations. It is clear that l-equivalent and p-equivalent matrices are, in particular, equivalent.

First we will show that any matrix can be reduced to a special form, called the reduced form.

Definition. A non-zero row of a matrix is said to have reduced form if it contains an element equal to 1 whose column contains no other non-zero elements. We will call this marked unit element the leading element of the row and enclose it in a circle. In other words, a row of a matrix has reduced form if the matrix contains a column of the form

For example, in the following matrix

the indicated row has reduced form. Note that in this example another element also qualifies as the leading element of the row. In what follows, if a row of reduced form contains several elements with the leading property, we will mark only one of them, chosen arbitrarily.

A matrix is said to have reduced form if each of its non-zero rows has reduced form. For example, the matrix

has reduced form.

Proposition 1.3. For any matrix there exists an l-equivalent matrix of reduced form.

Indeed, if the matrix has the form (1.1) and the row under consideration contains a non-zero element, then after carrying out in it the elementary transformations

we get the matrix

in which this row has reduced form.

Second, if some row of the matrix was already reduced, then after carrying out the elementary transformations (1.20) this row remains reduced. Indeed, since the row was reduced, there is a column (the one through its leading element) such that

but then, after carrying out the transformations (1.20), this column does not change. Therefore, the row retains reduced form.

Now it is clear that by transforming each non-zero row of the matrix in turn in the above manner, we obtain after a finite number of steps a matrix of reduced form. Since only elementary row transformations were used to obtain it, this matrix is l-equivalent to the original one.
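The reduction procedure just described can be sketched in code. This is a minimal illustration only (the function name and the sample matrix are ours, not the source's), using floating-point arithmetic with a tolerance:

```python
import numpy as np

def reduce_rows(A, tol=1e-12):
    """Bring A to reduced form using only elementary row operations:
    each non-zero row gets a leading 1 whose column is zero elsewhere."""
    A = A.astype(float).copy()
    m, n = A.shape
    for i in range(m):
        # find a non-zero element of row i to serve as the leading element
        cols = np.nonzero(np.abs(A[i]) > tol)[0]
        if cols.size == 0:
            continue            # zero row: nothing to do
        j = cols[0]
        A[i] /= A[i, j]         # scale the row so the leading element is 1
        for k in range(m):      # clear the rest of column j
            if k != i:
                A[k] -= A[k, j] * A[i]
    return A

A = np.array([[2., 4., 2.],
              [1., 3., 2.]])
R = reduce_rows(A)              # [[1, 0, -1], [0, 1, 1]]
```

Processing the rows one at a time reproduces the argument of Proposition 1.3: clearing the leading column of the current row cannot spoil the rows already reduced, because the current row has zeros in their leading columns.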

Example 7. Construct a matrix of reduced form that is l-equivalent to the matrix

The first three sections of this chapter are devoted to the theory of equivalence of polynomial matrices. On this basis, the next three sections construct the analytic theory of elementary divisors, that is, the theory of reducing a constant (non-polynomial) square matrix to normal form. The last two sections of the chapter give two methods for constructing the transformation matrix.

§ 1. Elementary transformations of a polynomial matrix

Definition 1. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in λ:

here, as the degree of the matrix, one takes the greatest of the degrees of its polynomial elements.

We can represent a polynomial matrix as a matrix polynomial in λ, that is, as a polynomial with matrix coefficients:

Let us introduce into consideration the following elementary operations on a polynomial matrix:

1. Multiplication of some row, for example the i-th, by a non-zero number.

2. Addition to some row, for example the i-th, of another row, for example the j-th, multiplied beforehand by an arbitrary polynomial.

3. Interchange of any two rows, for example the i-th and the j-th.

We invite the reader to check that operations 1, 2, 3 are equivalent to multiplying the polynomial matrix on the left, respectively, by the following square matrices of the appropriate order:

(1)

i.e., as a result of applying operations 1, 2, 3, the matrix is multiplied on the left by the corresponding one of these matrices. Therefore, operations of types 1, 2, 3 are called left elementary operations.
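Formula (1) itself did not survive; in standard notation (ours, hypothetical), with $E$ the identity matrix and $e_{ij}$ the matrix having a single 1 in position $(i,j)$, the three matrices presumably look as follows:

```latex
S_{1} = E + (c-1)\,e_{ii} \quad (c \neq 0),
\qquad
S_{2} = E + b(\lambda)\,e_{ij},
\qquad
S_{3} = E - e_{ii} - e_{jj} + e_{ij} + e_{ji}.
```

Here $S_1$ multiplies the $i$-th row by the number $c$, $S_2$ adds to the $i$-th row the $j$-th row times the polynomial $b(\lambda)$, and $S_3$ interchanges the $i$-th and $j$-th rows; their determinants ($c$, $1$, and $-1$ respectively) are constant and non-zero.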

Right elementary operations on a polynomial matrix are defined in a completely similar way (these operations are performed not on the rows but on the columns of the polynomial matrix), together with the corresponding matrices (of the order equal to the number of columns):

As a result of applying a right elementary operation, the matrix is multiplied on the right by the corresponding matrix.

We will call matrices of either of these two kinds elementary matrices.

The determinant of any elementary matrix does not depend on λ and is different from zero. Therefore, for each left (right) elementary operation there is an inverse operation, which is also a left (respectively, right) elementary operation.

Definition 2. Two polynomial matrices are called 1) left equivalent, 2) right equivalent, 3) equivalent if one of them is obtained from the other by applying, respectively, 1) left elementary operations, 2) right elementary operations, 3) left and right elementary operations.

Let the matrix B be obtained from the matrix A by applying left elementary operations corresponding to the matrices S_1, S_2, ..., S_p. Then

B = S_p ... S_2 S_1 A. (2)

Denoting by P the product S_p ... S_2 S_1, we write equality (2) in the form

B = P A, (3)

where P, like each of the matrices S_1, ..., S_p, has a non-zero constant determinant.

In the next section we will prove that every square λ-matrix with a constant non-zero determinant can be represented as a product of elementary matrices. Therefore, equality (3) is equivalent to equality (2) and thus expresses the left equivalence of the two matrices.

In the case of right equivalence of polynomial matrices, instead of equality (3) we will have the equality

B = A Q, (3')

and in the case of (two-sided) equivalence, the equality

B = P A Q. (3'')

Here again P and Q are matrices with determinants that are non-zero and independent of λ.

Thus, Definition 2 can be replaced by an equivalent definition.

Definition 2'. Two rectangular λ-matrices A and B are called 1) left equivalent, 2) right equivalent, 3) equivalent if, respectively,

1) B = P A,  2) B = A Q,  3) B = P A Q,

where P and Q are polynomial square matrices with constant non-zero determinants.
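Such factors can be checked symbolically. The following sketch (our own example; the source prescribes no code) uses the sympy library to build two elementary λ-matrices, form their product P, and confirm that its determinant is a non-zero constant:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Elementary matrix of type 1: multiply row 3 by the constant 5.
S1 = sp.eye(3)
S1[2, 2] = 5

# Elementary matrix of type 2: add lambda^2 times row 2 to row 1.
S2 = sp.eye(3)
S2[0, 1] = lam**2

A = sp.Matrix([[lam, 1, 0],
               [0, lam, 1],
               [0, 0, lam]])

P = S1 * S2          # product of elementary matrices
B = P * A            # B is left equivalent to A, as in equality (3)

det_P = sp.expand(P.det())   # constant, non-zero
```

Since det P = 5 does not depend on λ, multiplication by P is invertible over the polynomial matrices, in line with Definition 2'.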

We illustrate all the concepts introduced above with the following important example.

Consider a system of linear homogeneous differential equations of a given order in several unknown functions of one argument, with constant coefficients:

(4)

Applied to the system (4), a right elementary operation of the first type means replacing an unknown function by a non-zero multiple of it; the second elementary operation means the introduction of a new unknown function (in place of one of the old ones); the third operation means interchanging, in the equations, the terms containing two of the unknown functions (that is, interchanging these two unknowns).

Transition to a new basis.

Let (1) and (2) be two bases of the same m-dimensional linear space X.

Since (1) is a basis, the vectors of the second basis can be expanded in terms of it:

From the coefficients of these expansions we form a matrix:

(4) is the coordinate transformation matrix for the transition from basis (1) to basis (2).

Let a vector be given; its expansions in the two bases yield (5) and (6).

Relationship (7) means that

The matrix P is non-singular, since otherwise there would be a linear dependence between its columns, and hence between the basis vectors.

The converse is also true: any non-singular matrix is the coordinate transformation matrix defined by formulas (8). Since P is a non-singular matrix, its inverse exists. Multiplying both sides of (8) by this inverse, we get: (9).

Let three bases be chosen in the linear space X: (10), (11), (12).

From this we obtain (13).

Thus, under successive coordinate transformations, the matrix of the resulting transformation is equal to the product of the matrices of the component transformations.
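Numerically the composition rule looks as follows. The matrices and the vector here are randomly generated stand-ins (our own, using numpy), with the convention that a transition matrix converts coordinates in the newer basis into coordinates in the older one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of P: coordinates of basis (11) in basis (10);
# columns of Q: coordinates of basis (12) in basis (11).
# Adding the identity keeps the random matrices comfortably non-singular.
P = rng.random((3, 3)) + np.eye(3)
Q = rng.random((3, 3)) + np.eye(3)

x3 = rng.random(3)        # coordinates of a vector in basis (12)
x2 = Q @ x3               # its coordinates in basis (11)
x1 = P @ x2               # its coordinates in basis (10)

# Composite transformation (10) -> (12) in one step:
T = P @ Q
assert np.allclose(x1, T @ x3)
```

The assertion checks exactly the statement above: the transition matrix from basis (10) to basis (12) is the product of the two intermediate transition matrices.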

Let A be a linear operator acting from X into Y, and let a pair of bases be chosen in X, (I) and (II), and a pair in Y, (III) and (IV).

To the operator A in the pair of bases I and III there corresponds the equality (14). To the same operator in the pair of bases II and IV there corresponds the equality (15). Thus, for the given operator A we have two matrices. We want to establish the relationship between them.

Let P be the coordinate transformation matrix for the transition from I to II.

Let Q be the coordinate transformation matrix for the transition from III to IV.

Then (16), (17). Substituting the expressions from (16) and (17) into (14), we obtain:

Comparing this equality with (15), we obtain:

Relation (19) relates the matrices of the same operator in different bases. In the case where the spaces X and Y coincide, the role of basis III is played by I, and that of IV by II; relation (19) then becomes a similarity relation.
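On the standard conventions (ours; formulas (16) through (19) did not survive) that P converts new X-coordinates into old ones and Q does the same in Y, relation (19) reads B = Q⁻¹ A P. A quick numerical check with numpy:

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.random((2, 3))                 # matrix of the operator in bases I, III
P = rng.random((3, 3)) + 3*np.eye(3)   # coordinate change in X (I -> II), non-singular
Q = rng.random((2, 2)) + 3*np.eye(2)   # coordinate change in Y (III -> IV), non-singular

B = np.linalg.inv(Q) @ A @ P           # matrix of the same operator in bases II, IV

# Check on a vector: xp holds coordinates in basis II, so P @ xp gives
# coordinates in basis I; applying A and converting back with Q^{-1}
# must agree with applying B directly.
xp = rng.random(3)
y = A @ (P @ xp)
assert np.allclose(np.linalg.inv(Q) @ y, B @ xp)
```

When X = Y and Q = P, this reduces to B = P⁻¹ A P, the similarity relation mentioned above.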

Bibliography:

3. Kostrikin A. I. Introduction to Algebra. Part II. Fundamentals of Algebra: a textbook for universities. Moscow: Fizmatlit, 2000. 368 pp.

Lecture No. 16 (II semester)

Subject: A necessary and sufficient condition for the equivalence of matrices.

Two matrices A and B of the same size are called equivalent if there exist two non-singular matrices R and S such that B = RAS. (1)

Example: two matrices corresponding to the same operator under different choices of bases in the linear spaces X and Y are equivalent.

It is clear that the relation defined on the set of all matrices of the same size by the above definition is an equivalence relation.



Theorem 8: In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that they be of the same rank.

Proof:

1. Let A and B be two matrices for which the product C = AB makes sense. The rank of the product (the matrix C) is not higher than the rank of either factor.

We see that the k-th column of the matrix C is a linear combination of the column vectors of the matrix A, and this holds for every column of C. Thus the linear span of the columns of C is a subspace of the linear span of the columns of A.

Since the dimension of a subspace is less than or equal to the dimension of the space, the rank of the matrix C is less than or equal to the rank of the matrix A.

In equalities (2), let us fix the index i and let k take all values from 1 to s. We then obtain a system of equalities similar to system (3):

From equalities (4) it is clear that the i-th row of the matrix C is a linear combination of the rows of the matrix B, for every i. Hence the linear hull spanned by the rows of C is contained in the linear hull spanned by the rows of B, so the dimension of the former is less than or equal to the dimension of the latter, which means that the rank of the matrix C is less than or equal to the rank of the matrix B.

2. The rank of the product of a matrix A, on the left or on the right, by a non-singular square matrix Q is equal to the rank of the matrix A. That is, the rank of the matrix C = QA (or C = AQ) is equal to the rank of A.

Proof: By what was proved in case (1), the rank of C is not higher than the rank of A. Since the matrix Q is non-singular, its inverse exists, and A is obtained from C by multiplying by this inverse; by the same statement, the rank of A is not higher than the rank of C. Hence the two ranks are equal.

3. Let us prove that if two matrices are equivalent, then they have the same rank. By definition, A and B are equivalent if there exist R and S such that B = RAS. Since multiplying A on the left by R and on the right by S yields matrices of the same rank, as proved in point (2), the rank of A is equal to the rank of B.
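This direction of the theorem is easy to illustrate numerically; the particular matrices below are our own examples (numpy's matrix_rank computes the rank):

```python
import numpy as np

rng = np.random.default_rng(2)

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])           # rank 2: row 2 equals 2 * row 1

# Non-singular factors (diagonally dominant, hence invertible).
R = rng.random((3, 3)) + 3*np.eye(3)
S = rng.random((3, 3)) + 3*np.eye(3)

B = R @ A @ S                          # equivalent to A in the sense of (1)

assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(B) == 2   # equivalence preserves rank
```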

4. Let the matrices A and B be of the same rank. Let us prove that they are equivalent.

Let X and Y be two linear spaces in which bases are chosen (one a basis of X, the other a basis of Y). As is known, any matrix of the appropriate size defines a certain linear operator acting from X into Y.

Since r is the rank of the matrix A, exactly r of the vectors under consideration are linearly independent. Without loss of generality we may assume that the first r vectors are linearly independent. Then all the others can be expressed linearly through them, and we can write:

Let us define a new basis in space X as follows: . (7)

The new basis in the space Y is constructed as follows:

The vectors in question are, by assumption, linearly independent. Let us supplement them with some vectors to a basis of Y: (8). So (7) and (8) are two new bases of X and Y. Let us find the matrix of the operator A in these bases:

So, in the new pair of bases the matrix of the operator A is the matrix J. Initially, A was an arbitrary rectangular matrix of the given size and of rank r. Since matrices of the same operator in different bases are equivalent, this shows that any rectangular matrix of the given size and rank r is equivalent to J. Since we are dealing with an equivalence relation, any two matrices A and B of the given size and rank r, being equivalent to the matrix J, are equivalent to each other.

Bibliography:

1. Voevodin V. V. Linear Algebra. St. Petersburg: Lan, 2008. 416 pp.

2. Beklemishev D. V. A Course of Analytic Geometry and Linear Algebra. Moscow: Fizmatlit, 2006. 304 pp.

3. Kostrikin A. I. Introduction to Algebra. Part II. Fundamentals of Algebra: a textbook for universities. Moscow: Fizmatlit, 2000. 368 pp.

Lecture No. 17 (II semester)

Subject: Eigenvalues and eigenvectors. Eigenspaces. Examples.

Proof: The rank of a matrix is preserved under the following operations:

1. Changing the order of lines.

2. Multiplying a row of the matrix by a number other than zero.

3. Transposition.

4. Deleting a row of zeros.

5. Adding to a row another row multiplied by an arbitrary number.

The first transformation leaves some minors unchanged and changes the sign of others to the opposite. The second transformation also leaves some minors unchanged, while the others are multiplied by a number other than zero. The third transformation preserves all minors. Therefore, when these transformations are applied, the rank of the matrix (by the second definition) is preserved. Deleting a zero row cannot change the rank of the matrix, because such a row cannot enter a non-zero minor. Let us consider the fifth transformation.

We will assume that the basis minor Δp is located in the first p rows. Let the row b, multiplied by some number λ, be added to a row a that is one of these first p rows, where b is itself a linear combination of the rows passing through the basis minor. In this case the basis minor Δp remains unchanged (and different from 0), since adding to one row of a determinant a linear combination of its other rows does not change it. The other minors located in the first p rows likewise remain unchanged. Thus, in this case the rank (by the second definition) is preserved. Now consider a minor Ms whose rows are not all among the first p rows (perhaps none of them are).

By adding to the row a an arbitrary row b multiplied by the number λ, we obtain a new minor Ms′, and Ms′ = Ms + λ·Ns, where Ns is the determinant obtained from Ms by replacing the row a with the row b.

If s > p, then Ms = Ns = 0, because all minors of order greater than p of the original matrix are equal to 0 (and Ns is, up to sign, such a minor). But then Ms′ = 0, and the rank of the matrix does not increase under the transformation. It could not decrease either, since the basis minor did not undergo any changes. So, the rank of the matrix remains unchanged.
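A brute-force numerical check of all five rank-preserving operations (the matrix is our own example; numpy's matrix_rank computes the rank):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 3., 4.],
              [0., 0., 0.]])        # row 3 is zero; rows 1, 2 independent: rank 2
r = np.linalg.matrix_rank(A)

B1 = A[[1, 0, 2]]                   # 1. change the order of rows
B2 = 7.0 * A                        # 2. multiply by a non-zero number
B3 = A.T                            # 3. transpose
B4 = A[:2]                          # 4. delete the zero row
B5 = A.copy()
B5[0] += 5.0 * A[1]                 # 5. add 5 times row 2 to row 1

for B in (B1, B2, B3, B4, B5):
    assert np.linalg.matrix_rank(B) == r
```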


1. Let two vector spaces of the respective dimensions over a number field be given, together with a linear operator mapping the first into the second. In this section we will find out how the matrix corresponding to a given linear operator changes when the bases in the two spaces change.

Let us choose arbitrary bases in the two spaces. In these bases, the operator corresponds to a certain matrix. The vector equality

corresponds to the matrix equality

where the columns consist of the coordinates of the corresponding vectors in the chosen bases.

Let us now choose other bases in the two spaces. In the new bases, the operator, the vectors, and their coordinate columns are represented by new objects. At the same time

Let us denote by the corresponding letters the non-singular square matrices of the appropriate orders that carry out the coordinate transformations in the two spaces on passing from the old bases to the new ones (see § 4):

Then from (27) and (29) we obtain:

Assuming , from (28) and (30) we find:

Definition 8. Two rectangular matrices of the same size are said to be equivalent if there exist two non-singular square matrices such that

From (31) it follows that two matrices corresponding to the same linear operator under different choices of bases are always equivalent to each other. It is easy to see that, conversely, if a matrix corresponds to an operator for some choice of bases, then any matrix equivalent to it corresponds to the same linear operator for some other choice of bases.

Thus, to each linear operator there corresponds a whole class of mutually equivalent matrices with elements from the given field.

2. The following theorem establishes a criterion for the equivalence of two matrices:

Theorem 2. In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that these matrices have the same rank.

Proof. The condition is necessary. Multiplication of a rectangular matrix by any non-singular square matrix (on the left or on the right) cannot change the rank of the original rectangular matrix (see Chapter I, p. 27). Therefore, from (32) it follows that equivalent matrices have equal ranks.

The condition is sufficient. Let a rectangular matrix of the given size be given. It defines a linear operator mapping a space with a chosen basis into another space with a chosen basis. Let r denote the number of linearly independent vectors among the images of the basis vectors. Without loss of generality we may assume that the first r of these vectors are linearly independent, while the rest are expressed linearly through them:

. (33)

Let's define a new basis as follows:

(34)

Then by virtue of (33)

. (35)

The vectors so obtained are linearly independent. Let us supplement them with some vectors to a basis of the target space.

Then the matrix corresponding to the same operator in the new bases will, according to (35) and (36), have the form

. (37)

In this matrix, ones go along the main diagonal from the top down; all other elements of the matrix are equal to zero. Since the two matrices correspond to the same operator, they are equivalent to each other. By what has been proved, equivalent matrices have the same rank. Therefore, the rank of the original matrix is equal to the number of these ones.
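Judging by this description, the lost formula (37) is presumably the block matrix (with $r$ the rank and $I_r$ the $r \times r$ identity):

```latex
\begin{pmatrix}
  I_r & 0 \\
  0   & 0
\end{pmatrix}
```

All rectangular matrices of a given size and rank $r$ reduce to this single canonical form, which is exactly what the sufficiency argument below uses.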

We have shown that an arbitrary rectangular matrix of a given rank is equivalent to the "canonical" matrix. But the canonical matrix is completely determined by specifying the dimensions and the rank. Therefore, all rectangular matrices of given sizes and given rank are equivalent to the same matrix and, consequently, equivalent to each other. The theorem is proved.

3. Let a linear operator be given, mapping one finite-dimensional space into another. The set of images of all vectors under this operator forms a vector space. This space forms part of the target space or, as they say, is a subspace of it.

Along with the subspace in, we consider the set of all vectors satisfying the equation

These vectors also form a subspace; we will denote this subspace accordingly.

Definition 9. If a linear operator maps one space into another, then the dimension of the image space is called the rank of the operator, and the dimension of the space consisting of all vectors satisfying condition (38) is called the defect of the operator.

Among all the equivalent rectangular matrices defining the given operator in various bases, there is the canonical matrix [see (37)]. Let us denote the corresponding bases of the two spaces accordingly. Then

, .

From the definitions it follows that one part of these basis vectors has images forming a basis of the image space, while the remaining ones form a basis of the subspace (38). From this it follows that the number of ones in the canonical matrix is the rank of the operator, and

If an arbitrary matrix corresponds to the operator, then it is equivalent to the canonical matrix and therefore has the same rank. Thus, the rank of the operator coincides with the rank of the rectangular matrix

,

defining the operator in some bases.

The columns of the matrix contain the coordinates of the images of the basis vectors. Hence the rank of the operator, that is, the dimension of the image space, is equal to the maximum number of linearly independent vectors among these images. Thus, the rank of the matrix coincides with the number of linearly independent columns of the matrix. Since under transposition the rows of a matrix become its columns, while the rank does not change, the number of linearly independent rows of a matrix is also equal to its rank.
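The equality of row rank and column rank is immediate to check numerically (the matrix is our own example, via numpy):

```python
import numpy as np

A = np.array([[1., 0., 2., 3.],
              [0., 1., 1., 1.],
              [1., 1., 3., 4.]])   # row 3 = row 1 + row 2, so two independent rows

# The column rank of A equals the row rank (the column rank of A.T):
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 2
```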

4. Let two linear operators be given, together with their product.

Let one operator map the first space into the second, and let the other operator map the second space into the third. Then the product operator maps the first space into the third:

Let us introduce the matrices corresponding to these operators for a certain choice of bases in the three spaces. Then to the operator equality there corresponds the matrix equality: the matrix of the product of the operators is the product of their matrices.
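A small numerical confirmation of this last statement (shapes and entries are our own choice):

```python
import numpy as np

rng = np.random.default_rng(3)

B = rng.random((4, 3))     # matrix of the operator mapping the 1st space to the 2nd
C = rng.random((2, 4))     # matrix of the operator mapping the 2nd space to the 3rd

x = rng.random(3)          # coordinates of a vector in the 1st space
y = B @ x                  # apply the first operator
z = C @ y                  # then the second

# The composite operator has matrix C @ B:
assert np.allclose(z, (C @ B) @ x)
```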


