Matrix equivalence. Solving arbitrary systems of linear equations

1. Let two vector spaces $R$ and $S$, of dimensions $n$ and $m$ respectively over a number field $K$, be given, and a linear operator $A$ mapping $R$ into $S$. In this section we will find out how the matrix corresponding to a given linear operator changes when the bases in $R$ and $S$ change.

Let us choose arbitrary bases $x_1, \ldots, x_n$ in $R$ and $y_1, \ldots, y_m$ in $S$. In these bases the operator $A$ will correspond to a matrix $A = \|a_{ik}\|$. The vector equality

$y = Ax$

corresponds to the matrix equality

$\eta = A\xi, \qquad (27)$

where $\xi$ and $\eta$ are the coordinate columns for the vectors $x$ and $y$ in the bases $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$.

Let us now choose in $R$ and $S$ other bases $x_1^*, \ldots, x_n^*$ and $y_1^*, \ldots, y_m^*$. In the new bases, instead of $\xi$, $\eta$, $A$, we will have $\xi^*$, $\eta^*$, $A^*$, where

$\eta^* = A^*\xi^*. \qquad (28)$

Let us denote by $P$ and $Q$ the nonsingular square matrices of orders $n$ and $m$, respectively, that carry out the transformation of coordinates in the spaces $R$ and $S$ in the transition from the old bases to the new ones (see § 4):

$\xi = P\xi^*, \qquad (29)$

$\eta = Q\eta^*. \qquad (30)$
Then from (27) and (29) we obtain:

$\eta = AP\xi^*.$

Setting $A^* = Q^{-1}AP$, from (28) and (30) we find $Q\eta^* = AP\xi^*$, i.e. $\eta^* = Q^{-1}AP\xi^*$, and therefore:

$A^* = Q^{-1}AP. \qquad (31)$
Definition 8. Two rectangular matrices $A$ and $B$ of the same size are said to be equivalent if there exist two nonsingular square matrices $P$ and $Q$ such that

$B = QAP. \qquad (32)$
From (31) it follows that two matrices corresponding to the same linear operator for different choices of bases in $R$ and $S$ are always equivalent to each other. It is easy to see that, conversely, if a matrix $A$ corresponds to an operator $A$ for some bases in $R$ and $S$, and the matrix $B$ is equivalent to the matrix $A$, then $B$ corresponds to the same linear operator for some other bases in $R$ and $S$.

Thus, each linear operator mapping $R$ into $S$ corresponds to a class of mutually equivalent matrices with elements from the field $K$.
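As a small numerical sketch of formula (31) (Python with numpy; the matrices are invented for the illustration, and the unit-triangular construction merely guarantees that $P$ and $Q$ are nonsingular):

```python
import numpy as np

# Sketch: the same operator in two pairs of bases, A* = Q^{-1} A P, cf. (31).
rng = np.random.default_rng(0)
m, n = 3, 4
A = rng.integers(-3, 4, size=(m, n)).astype(float)   # matrix in the old bases

# Unit upper-triangular matrices are certainly nonsingular (determinant 1).
P = np.eye(n) + np.triu(rng.integers(-2, 3, size=(n, n)).astype(float), 1)
Q = np.eye(m) + np.triu(rng.integers(-2, 3, size=(m, m)).astype(float), 1)

A_star = np.linalg.inv(Q) @ A @ P                     # formula (31)

xi_star = rng.integers(-3, 4, size=n).astype(float)   # new coordinates of x
eta = A @ (P @ xi_star)                # eta = A xi, with xi = P xi*   (27), (29)
eta_star = A_star @ xi_star            # eta* = A* xi*                 (28)
assert np.allclose(eta, Q @ eta_star)  # consistent with eta = Q eta*  (30)
```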

2. The following theorem establishes a criterion for the equivalence of two matrices:

Theorem 2. In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that these matrices have the same rank.

Proof. The condition is necessary. When a rectangular matrix is multiplied by any nonsingular square matrix (on the left or on the right), the rank of the original rectangular matrix cannot change (see Chapter I, p. 27). Therefore, from (32) it follows that

$\operatorname{rank} B = \operatorname{rank} A.$

The condition is sufficient. Let $A = \|a_{ik}\|$ be a rectangular matrix of size $m \times n$. It defines a linear operator $A$ mapping a space $R$ with basis $x_1, \ldots, x_n$ into a space $S$ with basis $y_1, \ldots, y_m$. Let us denote by $r$ the number of linearly independent vectors among the vectors $Ax_1, \ldots, Ax_n$. Without loss of generality, we can assume that the vectors $Ax_1, \ldots, Ax_r$ are linearly independent, and the rest are expressed linearly through them:

$Ax_j = \sum_{k=1}^{r} c_{jk} Ax_k \quad (j = r+1, \ldots, n). \qquad (33)$

Let us define a new basis $x_1^*, \ldots, x_n^*$ in $R$ as follows:

$x_k^* = x_k \ (k = 1, \ldots, r); \qquad x_j^* = x_j - \sum_{k=1}^{r} c_{jk} x_k \ (j = r+1, \ldots, n). \qquad (34)$

Then by virtue of (33)

$Ax_k^* = Ax_k \ (k = 1, \ldots, r); \qquad Ax_j^* = 0 \ (j = r+1, \ldots, n). \qquad (35)$

The vectors $y_k^* = Ax_k^*$ $(k = 1, \ldots, r)$ are linearly independent. Let us supplement them with some vectors $y_{r+1}^*, \ldots, y_m^*$ to a basis

$y_1^*, \ldots, y_m^* \qquad (36)$

in $S$.

Then the matrix $B$ corresponding to the same operator $A$ in the new bases $x_1^*, \ldots, x_n^*$; $y_1^*, \ldots, y_m^*$ will, according to (35) and (36), have the form

$B = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}. \qquad (37)$

In the matrix $B$, $r$ ones go along the main diagonal from top to bottom; all other elements of the matrix are equal to zero. Since the matrices $A$ and $B$ correspond to the same operator, they are equivalent to each other. According to what has been proved, equivalent matrices have the same rank. Therefore, the rank of the original matrix $A$ is equal to $r$.

We have shown that an arbitrary rectangular matrix $A$ of rank $r$ is equivalent to the "canonical" matrix $B$. But the matrix $B$ is completely determined by specifying the dimensions $m$, $n$ and the number $r$. Therefore, all rectangular matrices of given size $m \times n$ and given rank $r$ are equivalent to the same matrix $B$ and, therefore, equivalent to each other. The theorem is proved.
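As a small worked illustration of the construction (33)-(37), with a matrix chosen purely for the example:

```latex
% Take m = 3, n = 2 and the illustrative matrix
\[
A=\begin{pmatrix}1&2\\[2pt]2&4\\[2pt]3&6\end{pmatrix},
\qquad Ax_2=2\,Ax_1,
\]
% so r = 1 and, in the notation of (33), c_{21} = 2. By (34) the new basis of R is
\[
x_1^{*}=x_1,\qquad x_2^{*}=x_2-2x_1,
\qquad\text{so that } Ax_2^{*}=0 \text{ by (35)}.
\]
% Taking y_1^{*}=Ax_1=y_1+2y_2+3y_3 and supplementing it to a basis of S as in (36),
% the matrix of the same operator in the new bases is the canonical form (37):
\[
B=\begin{pmatrix}1&0\\[2pt]0&0\\[2pt]0&0\end{pmatrix},
\qquad \operatorname{rank}A=r=1.
\]
```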

3. Let a linear operator $A$ be given, mapping an $n$-dimensional space $R$ into an $m$-dimensional space $S$. The set of vectors of the form $Ax$, where $x \in R$, forms a vector space. We will denote this space by $AR$; it forms part of the space $S$ or, as they say, is a subspace of the space $S$.

Along with the subspace $AR$ in $S$, we consider the set of all vectors $x \in R$ satisfying the equation

$Ax = 0. \qquad (38)$

These vectors also form a subspace, this time in $R$; we will denote this subspace by $N_A$.

Definition 9. If a linear operator $A$ maps $R$ into $S$, then the dimension $r$ of the space $AR$ is called the rank of the operator, and the dimension $d$ of the space $N_A$ consisting of all vectors satisfying condition (38) is called the defect of the operator.

Among all the equivalent rectangular matrices defining the given operator in various bases, there is the canonical matrix $B$ [see (37)]. Let us denote by $x_1^*, \ldots, x_n^*$ and $y_1^*, \ldots, y_m^*$ the corresponding bases in $R$ and $S$. Then

$Ax_k^* = y_k^* \ (k = 1, \ldots, r), \qquad Ax_j^* = 0 \ (j = r+1, \ldots, n).$

From the definition of $AR$ and $N_A$ it follows that the vectors $y_1^*, \ldots, y_r^*$ form a basis in $AR$, and the vectors $x_{r+1}^*, \ldots, x_n^*$ constitute a basis in $N_A$. It follows from this that $r$ is the rank of the operator and

$d = n - r$

is its defect.

If $A = \|a_{ik}\|$ is an arbitrary matrix corresponding to the operator $A$, then it is equivalent to $B$ and therefore has the same rank $r$. Thus, the rank of the operator coincides with the rank of the rectangular matrix

$A = \|a_{ik}\| \qquad (i = 1, \ldots, m;\ k = 1, \ldots, n),$

defining the operator in some bases $x_1, \ldots, x_n$ and $y_1, \ldots, y_m$.

The columns of the matrix $A$ contain the coordinates of the vectors $Ax_1, \ldots, Ax_n$. Since $r = \dim AR$, the rank of the operator, i.e., the dimension of $AR$, is equal to the maximum number of linearly independent vectors among $Ax_1, \ldots, Ax_n$. Thus, the rank of a matrix coincides with the number of linearly independent columns of the matrix. Since under transposition the rows of a matrix become its columns while the rank does not change, the number of linearly independent rows of a matrix is also equal to the rank of the matrix.
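A quick numpy sketch of these statements (the matrix is invented for the example; `matrix_rank` plays the role of counting independent columns):

```python
import numpy as np

# Sketch: rank and defect of the operator defined by an illustrative matrix.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 1.],
              [1., 3., 1., 2.]])       # m = 3, n = 4; row 3 = row 1 + row 2

r = np.linalg.matrix_rank(A)           # rank = number of independent columns
d = A.shape[1] - r                     # defect d = n - r
print(r, d)                            # 2 2

assert np.linalg.matrix_rank(A.T) == r # row rank equals column rank
```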

4. Let two linear operators $A$, $B$ and their product $BA$ be given.

Let the operator $A$ map $R$ into $S$, and the operator $B$ map $S$ into $T$. Then the operator $BA$ maps $R$ into $T$:

$y = Ax, \quad z = By, \quad z = BAx.$

Let us introduce the matrices $A$, $B$, $C$ corresponding to the operators $A$, $B$, $BA$ for a certain choice of bases in $R$, $S$ and $T$. Then the operator equality $z = BAx$ will correspond to the matrix equality $\zeta = BA\xi$, i.e., $C = BA$: the matrix of the product of operators is the product of the matrices of the factors.
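In coordinates this is easy to check numerically (a sketch with invented matrices):

```python
import numpy as np

# Sketch: the matrix of the product of operators is the product of the matrices.
A = np.array([[1., 0., 2.],
              [0., 1., 1.]])       # A maps R (dim 3) into S (dim 2)
B = np.array([[1., 1.],
              [2., 0.],
              [0., 3.]])           # B maps S (dim 2) into T (dim 3)
C = B @ A                          # candidate matrix of the operator BA

x = np.array([1., -1., 2.])        # coordinate column of an arbitrary x in R
assert np.allclose(B @ (A @ x), C @ x)   # z = B(Ax) = (BA)x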

The rank of a matrix is preserved when performing the following operations:

1. Changing the order of the rows.

2. Multiplying a row by a number other than zero.

3. Transposition.

4. Eliminating a zero row.

5. Adding to a row another row multiplied by an arbitrary number.

The first transformation will leave some minors unchanged and will change the sign of others to the opposite. The second transformation will also leave some minors unchanged, while the others will be multiplied by a number other than zero. The third transformation will preserve all minors. Therefore, when applying each of these transformations, the rank of the matrix is preserved (by the second definition). Eliminating a zero row cannot change the rank of the matrix, because such a row cannot enter a nonzero minor. Let us consider the fifth transformation.

We will assume that the basis minor $\Delta_p$ is located in the first $p$ rows. Let an arbitrary row $b$, multiplied by some number $\lambda$, be added to a row $a$ that is one of these first $p$ rows; i.e., to the row $a$ is added a linear combination of rows containing the basis minor. In this case, the basis minor $\Delta_p$ remains unchanged (and different from 0). The other minors located in the first $p$ rows also remain unchanged, so in this case the rank (by the second definition) is preserved. Now consider a minor $M_s$ that does not have all of its rows among the first $p$ rows (perhaps it has none of them).

By adding to the row $a_i$ an arbitrary row $b$ multiplied by the number $\lambda$, we obtain a new minor $M_s'$, and $M_s' = M_s + \lambda \tilde{M}_s$, where $\tilde{M}_s$ is obtained from $M_s$ by replacing the elements of the row $a_i$ with the corresponding elements of the row $b$.

If $s > p$, then $M_s = \tilde{M}_s = 0$, because all minors of order greater than $p$ of the original matrix are equal to 0. But then $M_s' = 0$, and the rank of the matrix does not increase under the transformation. It could not decrease either, since the basis minor did not undergo any changes. So, the rank of the matrix remains unchanged.
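The five operations are easy to exercise numerically (a sketch; the matrix is invented so that it contains a proportional row and a zero row):

```python
import numpy as np

# Sketch: each of the five operations preserves the rank.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],        # = 2 * row 1
              [0., 1., 1.],
              [0., 0., 0.]])       # a zero row
r = np.linalg.matrix_rank(A)       # r = 2

B1 = A[[2, 1, 0, 3], :]                # 1. change the order of the rows
B2 = A.copy(); B2[0] *= 5.0            # 2. multiply a row by a nonzero number
B3 = A.T                               # 3. transposition
B4 = A[:3, :]                          # 4. eliminate the zero row
B5 = A.copy(); B5[1] -= 2.0 * B5[0]    # 5. add a multiple of one row to another

for M in (B1, B2, B3, B4, B5):
    assert np.linalg.matrix_rank(M) == r
```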


The simplest form of a linear operator matrix.

Matrices $A$ and $B$ are called equivalent if there exist nonsingular matrices $Q$ and $T$ such that $A = QBT$.

Theorem 6.1. If the matrices are equivalent, then their ranks are equal.

Proof. Since the rank of a product does not exceed the ranks of the factors, $\operatorname{rg} A = \operatorname{rg}(QBT) \le \operatorname{rg} B$. Since $B = Q^{-1} A T^{-1}$, also $\operatorname{rg} B \le \operatorname{rg} A$. Combining the two inequalities, we obtain the required statement.

Theorem 6.2. By elementary transformations of its rows and columns, the matrix $A$ can be reduced to the block form

$\begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix},$

where $I_k$ is the identity matrix of order $k$, and $0$ is a zero matrix of the corresponding size.

Proof. Let us present an algorithm for reducing the matrix $A$ to the specified form. Column numbers will be indicated in square brackets, and row numbers in parentheses.

1. Set $r = 1$.

2. If $a_{rr} = 0$ (or the element $a_{rr}$ does not exist because $r$ exceeds the number of rows or columns), go to step 4; otherwise go to step 3.

3. Perform the transformations with the rows $(i) := (i) - (r) \cdot (a_{ir}/a_{rr})$, where $i = r+1, \ldots, m$, with the columns $[j] := [j] - [r] \cdot (a_{rj}/a_{rr})$, where $j = r+1, \ldots, n$, and $(r) := (r) \cdot (1/a_{rr})$. Increase $r$ by 1 and return to step 2.

4. If $a_{ij} = 0$ for all $i, j \ge r$ (or there are no such elements), the process is finished. Otherwise find $i, j \ge r$ such that $a_{ij} \ne 0$, interchange the rows $(r)$ and $(i)$ and the columns $[r]$ and $[j]$, and return to step 2.

Obviously, the algorithm constructs a sequence of equivalent matrices, the last of which has the required form.
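A sketch of this algorithm in Python (indexing is 0-based here; the pivot search and the elementary operations follow steps 1-4 above, but the variable names, the tolerance, and the clearing of the full pivot row and column are my own choices):

```python
import numpy as np

def to_block_form(A, eps=1e-12):
    """Reduce A to diag(I_k, 0) by elementary row and column transformations."""
    A = A.astype(float).copy()
    m, n = A.shape
    r = 0                                    # step 1 (0-based here)
    while r < min(m, n):
        if abs(A[r, r]) < eps:               # step 2 -> step 4: look for a pivot
            pivots = [(i, j) for i in range(r, m) for j in range(r, n)
                      if abs(A[i, j]) > eps]
            if not pivots:                   # remaining submatrix is zero: done
                break
            i, j = pivots[0]
            A[[r, i], :] = A[[i, r], :]      # swap rows (r) and (i)
            A[:, [r, j]] = A[:, [j, r]]      # swap columns [r] and [j]
        A[r, :] /= A[r, r]                   # step 3: scale the pivot row
        for i in range(m):                   # clear the pivot column ...
            if i != r:
                A[i, :] -= A[i, r] * A[r, :]
        for j in range(n):                   # ... and the pivot row
            if j != r:
                A[:, j] -= A[r, j] * A[:, r]
        r += 1                               # increase r, return to step 2
    return np.where(abs(A) < eps, 0.0, A), r

A = np.array([[0., 2., 2.],
              [1., 1., 2.],
              [1., 3., 4.]])
J, k = to_block_form(A)
print(k)      # 2, which is the rank of A
print(J)      # the block form diag(1, 1, 0)
```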

Theorem 6.3. Matrices $A$ and $B$ of the same size are equivalent if and only if their ranks are equal.

Proof. If the matrices are equivalent, then their ranks are equal (Theorem 6.1). Conversely, let the ranks of the matrices be equal. Then there exist nonsingular matrices $Q_1, T_1, Q_2, T_2$ such that

$Q_1 A T_1 = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix} = Q_2 B T_2,$

where $r = \operatorname{rg} A = \operatorname{rg} B$ (Theorem 6.2). Hence $A = Q_1^{-1} Q_2 B T_2 T_1^{-1}$, and the matrices $A$ and $B$ are equivalent.

The results of this paragraph allow us to find the simplest form of the matrix of a linear operator and bases of the spaces in which the matrix of the linear operator has this simplest form.

Equivalent matrices

As mentioned above, a minor of order $s$ of a matrix is the determinant of the matrix formed from the elements of the original matrix located at the intersection of any $s$ selected rows and $s$ selected columns.

Definition. In a matrix of size $m \times n$, a minor of order $r$ is called a basis minor if it is not equal to zero and all minors of order $r+1$ and higher are equal to zero or do not exist at all (the latter when $r$ equals the smaller of $m$ and $n$).

The columns and rows of the matrix in which the basis minor stands are also called basis columns and basis rows.

A matrix can have several different basis minors; they all have the same order.

Definition. The order of the basis minor of a matrix is called the rank of the matrix and is denoted by Rg A.

A very important property of elementary matrix transformations is that they do not change the rank of the matrix.

Definition. Matrices obtained as a result of an elementary transformation are called equivalent.

It should be noted that equal matrices and equivalent matrices are completely different concepts.

Theorem. The largest number of linearly independent columns in a matrix is equal to the largest number of linearly independent rows.

Since elementary transformations do not change the rank of a matrix, the process of finding the rank of a matrix can be significantly simplified.

Example 1. Determine the rank of the matrix.

Example 2. Determine the rank of the matrix.

If it is not possible, using elementary transformations, to find a matrix equivalent to the original one but of a smaller size, then finding the rank of the matrix should begin by calculating the minors of the highest possible order. In the above examples these are minors of order 3. If at least one of them is not equal to zero, then the rank of the matrix is equal to the order of this minor.
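As a concrete stand-in (the matrix below is assumed purely for illustration), here is a numpy sketch that first checks the order-3 minors and then exhibits a nonzero order-2 minor:

```python
import numpy as np
from itertools import combinations

A = np.array([[1., 2., 1., 0.],
              [2., 4., 0., 2.],
              [3., 6., 1., 2.]])      # row 3 = row 1 + row 2, so Rg A < 3

# All order-3 minors: choose 3 of the 4 columns (all 3 rows are used).
minors3 = [np.linalg.det(A[:, list(cols)]) for cols in combinations(range(4), 3)]
print(np.allclose(minors3, 0))        # True: every order-3 minor vanishes

# A nonzero order-2 minor exists (rows 1, 2 and columns 1, 3), so Rg A = 2.
print(np.linalg.det(A[np.ix_([0, 1], [0, 2])]))   # -2.0, nonzero
print(np.linalg.matrix_rank(A))                   # 2
```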

The theorem on the basis minor.

Theorem. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basis minor is located.

Thus, the rank of an arbitrary matrix $A$ is equal to the maximum number of linearly independent rows (columns) in the matrix.

If $A$ is a square matrix and $\det A = 0$, then at least one of the columns is a linear combination of the remaining columns. The same is true for the rows. This statement follows from the property of linear dependence when the determinant is equal to zero.

Solving arbitrary systems of linear equations

As stated above, the matrix method and Cramer's method are applicable only to those systems of linear equations in which the number of unknowns is equal to the number of equations. Next, we consider arbitrary systems of linear equations.

Definition. A system of $m$ equations with $n$ unknowns in general form is written as follows:

$\begin{cases} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1, \\ \cdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n = b_m, \end{cases} \qquad (1)$

where $a_{ij}$ are the coefficients and $b_i$ are constants. A solution of the system is a set of $n$ numbers which, when substituted into the system, turn each of its equations into an identity.

Definition. If a system has at least one solution, it is called consistent. If a system does not have a single solution, it is called inconsistent.

Definition. A system is called determinate if it has exactly one solution, and indeterminate if it has more than one.

Definition. For a system of linear equations, the matrix

$A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}$

is called the matrix of the system, and the matrix

$A^* = \begin{pmatrix} a_{11} & \cdots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \cdots & a_{mn} & b_m \end{pmatrix}$

is called the extended matrix of the system.

Definition. If $b_1 = b_2 = \cdots = b_m = 0$, then the system is called homogeneous. A homogeneous system is always consistent, because it always has the zero solution.

Elementary system transformations

Elementary transformations include:

1) adding to both sides of one equation the corresponding parts of another, multiplied by the same number, not equal to zero;

2) rearranging the equations;

3) removing from the system equations that are identities for all $x$.

The Kronecker-Capelli theorem (the consistency condition for a system).

(Leopold Kronecker (1823-1891), German mathematician)

Theorem: A system is consistent (has at least one solution) if and only if the rank of the matrix of the system is equal to the rank of the extended matrix: $\operatorname{Rg} A = \operatorname{Rg} A^*$.

Obviously, system (1) can be written in the form

$x_1 \begin{pmatrix} a_{11} \\ \vdots \\ a_{m1} \end{pmatrix} + x_2 \begin{pmatrix} a_{12} \\ \vdots \\ a_{m2} \end{pmatrix} + \cdots + x_n \begin{pmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_m \end{pmatrix}.$
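A sketch of the criterion in numpy (the system is invented for the example; `matrix_rank` compares $\operatorname{Rg} A$ with $\operatorname{Rg} A^*$):

```python
import numpy as np

A = np.array([[1., 1.],
              [2., 2.],
              [1., 0.]])
b_good = np.array([1., 2., 3.])    # consistent right-hand side
b_bad  = np.array([1., 3., 0.])    # inconsistent right-hand side

def consistent(A, b):
    # The system is consistent iff Rg A = Rg A*, where A* is the extended matrix.
    return (np.linalg.matrix_rank(A)
            == np.linalg.matrix_rank(np.column_stack([A, b])))

print(consistent(A, b_good), consistent(A, b_bad))   # True False
```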

Transition to a new basis.

Let (1) $e_1, \ldots, e_m$ and (2) $e_1', \ldots, e_m'$ be two bases of the same $m$-dimensional linear space $X$.

Since (1) is a basis, the vectors of the second basis can be expanded with respect to it:

$e_j' = \sum_{i=1}^{m} p_{ij} e_i, \quad j = 1, \ldots, m. \qquad (3)$

From the coefficients $p_{ij}$ we form the matrix

$P = \|p_{ij}\|_{i,j=1}^{m}, \qquad (4)$

the coordinate transformation matrix for the transition from basis (1) to basis (2).

Let $x$ be a vector, and let

$x = \sum_{i=1}^{m} \xi_i e_i \quad (5)$ and $x = \sum_{j=1}^{m} \xi_j' e_j' \quad (6)$

be its expansions in the two bases. Substituting (3) into (6) and comparing the coefficients with (5), we obtain

$\xi_i = \sum_{j=1}^{m} p_{ij} \xi_j', \quad i = 1, \ldots, m. \qquad (7)$

Relation (7) means that

$\xi = P\xi'. \qquad (8)$

The matrix $P$ is nonsingular, since otherwise there would be a linear dependence between its columns, and then between the basis vectors $e_1', \ldots, e_m'$.

The converse is also true: any nonsingular matrix is a coordinate transformation matrix, defined by formula (8). Since $P$ is a nonsingular matrix, its inverse $P^{-1}$ exists. Multiplying both sides of (8) by $P^{-1}$ on the left, we get:

$\xi' = P^{-1}\xi. \qquad (9)$

Let three bases be chosen in the linear space $X$: (10), (11), (12). Let $P_1$ be the transition matrix from (10) to (11), and $P_2$ the transition matrix from (11) to (12). Then $\xi = P_1 \xi'$ and $\xi' = P_2 \xi''$, whence $\xi = P_1 P_2 \xi''$, i.e.

$P_3 = P_1 P_2. \qquad (13)$

Thus, under successive transformations of coordinates, the matrix of the resulting transformation is equal to the product of the matrices of the component transformations.
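A minimal numerical check of (13), with invented transition matrices:

```python
import numpy as np

# Sketch: successive coordinate transformations compose by matrix multiplication.
P1 = np.array([[1., 1.],
               [0., 1.]])       # transition from basis (10) to basis (11)
P2 = np.array([[2., 0.],
               [1., 1.]])       # transition from basis (11) to basis (12)
P3 = P1 @ P2                    # transition from (10) straight to (12), cf. (13)

xi2 = np.array([3., -1.])       # coordinates of a vector in basis (12)
xi  = P1 @ (P2 @ xi2)           # via the intermediate basis (11)
assert np.allclose(xi, P3 @ xi2)
```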

Let $A: X \to Y$ be a linear operator, and let a pair of bases be chosen in $X$: (I) and (II), and in $Y$: (III) and (IV).

To the operator $A$ in the pair of bases I-III corresponds the equality $\eta = A_1 \xi$ (14). To the same operator in the pair of bases II-IV corresponds the equality $\eta' = A_2 \xi'$ (15). Thus, for the given operator $A$ we have two matrices, $A_1$ and $A_2$. We want to establish the relationship between them.

Let $P$ be the coordinate transformation matrix for the transition from basis I to basis II (in $X$).

Let $Q$ be the coordinate transformation matrix for the transition from basis III to basis IV (in $Y$).

Then

$\xi = P\xi', \qquad (16)$

$\eta = Q\eta'. \qquad (17)$

Substituting the expressions for $\xi$ and $\eta$ from (16) and (17) into (14), we obtain $Q\eta' = A_1 P \xi'$, i.e.

$\eta' = Q^{-1} A_1 P \xi'. \qquad (18)$

Comparing this equality with (15), we obtain:

$A_2 = Q^{-1} A_1 P. \qquad (19)$

Relation (19) connects the matrices of the same operator in different bases. In the case where the spaces $X$ and $Y$ coincide, the role of basis III is played by I, and that of IV by II; then relation (19) takes the form:

$A_2 = P^{-1} A_1 P.$
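For the case $X = Y$, a sketch of relation (19) with $Q = P$ (the matrices are invented for the example):

```python
import numpy as np

# Sketch: the matrices of one operator in two bases of the same space are
# similar, A2 = P^{-1} A1 P, and represent the same mapping.
A1 = np.array([[2., 1.],
               [0., 3.]])       # matrix of the operator in basis I
P  = np.array([[1., 2.],
               [1., 3.]])       # transition matrix from I to II (nonsingular)
A2 = np.linalg.inv(P) @ A1 @ P  # relation (19) with Q = P

xi_new = np.array([1., 1.])     # coordinates of a vector in basis II
lhs = A1 @ (P @ xi_new)         # apply the operator using basis I
rhs = P @ (A2 @ xi_new)         # apply it in basis II, then convert back
assert np.allclose(lhs, rhs)
```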


Lecture No. 16 (II semester)

Subject: A necessary and sufficient condition for matrix equivalence.

Two matrices, $A$ and $B$, of the same size are called equivalent if there exist two nonsingular matrices $R$ and $S$ such that

$B = RAS. \qquad (1)$

Example: Two matrices corresponding to the same operator for different choices of bases in the linear spaces $X$ and $Y$ are equivalent.

It is clear that the relation defined on the set of all matrices of the same size using the above definition is an equivalence relation.



Theorem 8: In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that they be of the same rank.

Proof:

1. Let $A$ and $B$ be two matrices for which the product $C = AB$ makes sense. The rank of the product (the matrix $C$) is not higher than the rank of each of the factors.

Writing the product elementwise,

$c_{ik} = \sum_j a_{ij} b_{jk}, \qquad (2)$

and fixing the index $k$, we obtain the system of equalities

$C_{[k]} = \sum_j b_{jk} A_{[j]}, \qquad (3)$

where $C_{[k]}$ and $A_{[j]}$ denote columns. We see that the $k$-th column of the matrix $C$ is a linear combination of the column vectors of the matrix $A$, and this holds for all columns of $C$, i.e., for all $k$. Thus the linear span of the columns of $C$ is contained in the linear span of the columns of $A$, i.e., it is a subspace of that linear space.

Since the dimension of a subspace is less than or equal to the dimension of the space, the rank of the matrix $C$ is less than or equal to the rank of the matrix $A$.

In the equalities (2), we now fix the index $i$ and let $k$ take all values from 1 to $s$, the number of columns of $C$. Then we obtain a system of equalities similar to the system (3):

$C_{(i)} = \sum_j a_{ij} B_{(j)}. \qquad (4)$

From the equalities (4) it is clear that the $i$-th row of the matrix $C$ is a linear combination of the rows of the matrix $B$ for every $i$; then the linear span of the rows of $C$ is contained in the linear span of the rows of $B$, so the dimension of this span is less than or equal to the dimension of the linear span of the row vectors of $B$, which means that the rank of the matrix $C$ is less than or equal to the rank of the matrix $B$.
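A numerical spot-check of the two inequalities just proved (random matrices, invented for the example):

```python
import numpy as np

# Sketch: rg C <= min(rg A, rg B) for C = AB.
rng = np.random.default_rng(1)
A = rng.integers(-2, 3, size=(4, 3)).astype(float)
B = rng.integers(-2, 3, size=(3, 5)).astype(float)
C = A @ B

rg = np.linalg.matrix_rank
assert rg(C) <= min(rg(A), rg(B))
```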

2. The rank of the product of the matrix $A$ on the left or on the right by a nonsingular square matrix $Q$ is equal to the rank of the matrix $A$: $\operatorname{rg}(QA) = \operatorname{rg}(AQ) = \operatorname{rg} A$. That is, the rank of the matrix $C = QA$ is equal to the rank of the matrix $A$.

Proof: According to what was proved in case (1), $\operatorname{rg} C = \operatorname{rg}(QA) \le \operatorname{rg} A$. Since the matrix $Q$ is nonsingular, $Q^{-1}$ exists, and $A = Q^{-1} C$; in accordance with the same statement, $\operatorname{rg} A \le \operatorname{rg} C$. Hence $\operatorname{rg} C = \operatorname{rg} A$.

3. Let us prove that if matrices are equivalent, then they have the same rank. By definition, $A$ and $B$ are equivalent if there exist $R$ and $S$ such that $B = RAS$. Since multiplying $A$ on the left by $R$ and on the right by $S$ produces a matrix of the same rank, as proved in point (2), the rank of $A$ is equal to the rank of $B$.

4. Let the matrices $A$ and $B$ be of the same rank $r$. Let us prove that they are equivalent. Consider the matrix $A$, of size $m \times n$ and rank $r$.

Let $X$ and $Y$ be two linear spaces in which bases $e_1, \ldots, e_n$ (a basis of $X$) and $f_1, \ldots, f_m$ (a basis of $Y$) are chosen. As is known, any matrix of size $m \times n$ defines a certain linear operator $A$ acting from $X$ to $Y$.

Since $r$ is the rank of the matrix $A$, among the vectors $Ae_1, \ldots, Ae_n$ exactly $r$ are linearly independent. Without loss of generality, we can assume that the first $r$ vectors $Ae_1, \ldots, Ae_r$ are linearly independent. Then all the others can be expressed linearly through them, and we can write:

$Ae_j = \sum_{k=1}^{r} c_{jk} Ae_k, \quad j = r+1, \ldots, n. \qquad (6)$

Let us define a new basis in the space $X$ as follows:

$e_k' = e_k \ (k = 1, \ldots, r), \qquad e_j' = e_j - \sum_{k=1}^{r} c_{jk} e_k \ (j = r+1, \ldots, n). \qquad (7)$

The new basis in the space $Y$ is constructed as follows. The vectors $f_k' = Ae_k$ $(k = 1, \ldots, r)$ are, by assumption, linearly independent. Let us supplement them with some vectors $f_{r+1}', \ldots, f_m'$ to a basis of $Y$:

$f_1', \ldots, f_r', f_{r+1}', \ldots, f_m'. \qquad (8)$

So (7) and (8) are two new bases of $X$ and $Y$. Let us find the matrix of the operator $A$ in these bases. By (6) and (7), $Ae_k' = f_k'$ for $k = 1, \ldots, r$ and $Ae_j' = 0$ for $j = r+1, \ldots, n$, so this matrix is

$J = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}.$

So, in the new pair of bases, the matrix of the operator $A$ is the matrix $J$. The matrix $A$ was initially an arbitrary rectangular matrix of size $m \times n$ and rank $r$. Since the matrices of the same operator in different bases are equivalent, this shows that any rectangular matrix of size $m \times n$ and rank $r$ is equivalent to $J$. Since we are dealing with an equivalence relation, it follows that any two matrices $A$ and $B$ of size $m \times n$ and rank $r$, being equivalent to the matrix $J$, are equivalent to each other.

