Properties of eigenvalues. Eigenvalues and eigenvectors. Worked examples

Definition 9.3. A vector x is called an eigenvector of the matrix A if there exists a number λ such that Ax = λx; that is, the result of applying to x the linear transformation given by the matrix A is the multiplication of this vector by the number λ. The number λ itself is called an eigenvalue of the matrix A.

Substituting x′j = λxj into formulas (9.3), we obtain a system of equations for determining the coordinates of the eigenvector:

(a11 − λ)x1 + a12x2 + a13x3 = 0,
a21x1 + (a22 − λ)x2 + a23x3 = 0,   (9.5)
a31x1 + a32x2 + (a33 − λ)x3 = 0.

This homogeneous linear system has a nontrivial solution only if its determinant equals zero. Writing this condition in the form

| a11 − λ   a12       a13     |
| a21       a22 − λ   a23     | = 0,
| a31       a32       a33 − λ |

we obtain an equation for determining the eigenvalues λ, called the characteristic equation. Briefly it can be written as

| A − λE | = 0, (9.6)

since its left-hand side is the determinant of the matrix A − λE. The polynomial in λ, |A − λE|, is called the characteristic polynomial of the matrix A.

Properties of the characteristic polynomial:

1) The characteristic polynomial of a linear transformation does not depend on the choice of basis. Proof. By (9.4), A′ = C⁻¹AC, hence |A′ − λE| = |C⁻¹AC − λC⁻¹EC| = |C⁻¹(A − λE)C| = |C⁻¹| · |A − λE| · |C| = |A − λE|. Thus the characteristic polynomial does not change when passing to a new basis, which means that |A − λE| is the same in every basis.

2) If the matrix A of a linear transformation is symmetric (i.e., aij = aji), then all roots of the characteristic equation (9.6) are real numbers.

Properties of eigenvalues and eigenvectors:

1) If we choose a basis of eigenvectors x1, x2, x3 corresponding to the eigenvalues λ1, λ2, λ3 of the matrix A, then in this basis the linear transformation A has the diagonal matrix

( λ1  0   0  )
( 0   λ2  0  )   (9.7)
( 0   0   λ3 )

The proof of this property follows from the definition of eigenvectors.

2) If the eigenvalues of the transformation A are distinct, then the corresponding eigenvectors are linearly independent.

3) If the characteristic polynomial of the matrix A has three distinct roots, then in some basis the matrix A has diagonal form.

Example. Let us find the eigenvalues and eigenvectors of the matrix

( 1  1  3 )
( 1  5  1 )
( 3  1  1 )

We form the characteristic equation: (1 − λ)(5 − λ)(1 − λ) + 6 − 9(5 − λ) − (1 − λ) − (1 − λ) = 0, i.e. λ³ − 7λ² + 36 = 0, whence λ1 = −2, λ2 = 3, λ3 = 6.
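As a quick numerical cross-check, the cubic λ³ − 7λ² + 36 can be handed to NumPy along with the symmetric matrix consistent with the expansion above (a sketch; the matrix [[1, 1, 3], [1, 5, 1], [3, 1, 1]] is reconstructed from the stated characteristic equation):

```python
import numpy as np

# Roots of the characteristic polynomial λ³ − 7λ² + 0·λ + 36 = 0
roots = np.sort(np.roots([1, -7, 0, 36]))
print(roots)  # approximately [-2. 3. 6.]

# Matrix reconstructed from the expansion of the characteristic equation
A = np.array([[1., 1., 3.],
              [1., 5., 1.],
              [3., 1., 1.]])
eigvals = np.sort(np.linalg.eigvals(A))
print(eigvals)  # matches the roots above
```

Both computations agree, confirming that the three eigenvalues are −2, 3, 6.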

Let us find the coordinates of the eigenvectors corresponding to each found value of λ. From (9.5) it follows that if x(1) = {x1, x2, x3} is the eigenvector corresponding to λ1 = −2, then

3x1 + x2 + 3x3 = 0,
x1 + 7x2 + x3 = 0,
3x1 + x2 + 3x3 = 0

— a consistent but underdetermined system. Its solution can be written in the form x(1) = {a, 0, −a}, where a is any number. In particular, if we require that |x(1)| = 1, then x(1) = {1/√2, 0, −1/√2}.

Substituting λ2 = 3 into system (9.5), we obtain a system for determining the coordinates of the second eigenvector x(2) = {y1, y2, y3}:

−2y1 + y2 + 3y3 = 0,
y1 + 2y2 + y3 = 0,
3y1 + y2 − 2y3 = 0,

whence x(2) = {b, −b, b} or, under the condition |x(2)| = 1, x(2) = {1/√3, −1/√3, 1/√3}.

For λ3 = 6 we find the eigenvector x(3) = {z1, z2, z3}:

−5z1 + z2 + 3z3 = 0,
z1 − z2 + z3 = 0,
3z1 + z2 − 5z3 = 0,

whence x(3) = {c, 2c, c} or, in normalized form, x(3) = {1/√6, 2/√6, 1/√6}.

It can be noticed that x(1)·x(2) = ab − ab = 0, x(1)·x(3) = ac − ac = 0, x(2)·x(3) = bc − 2bc + bc = 0. Thus, the eigenvectors of this matrix are pairwise orthogonal.
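This orthogonality can also be confirmed numerically. Under the same assumption about the matrix of the example, np.linalg.eigh (specialized for symmetric matrices) returns the eigenvalues in ascending order and the eigenvectors as orthonormal columns:

```python
import numpy as np

# Symmetric matrix assumed from the worked example above
A = np.array([[1., 1., 3.],
              [1., 5., 1.],
              [3., 1., 1.]])

lam, V = np.linalg.eigh(A)    # eigh: for symmetric (Hermitian) matrices
print(lam)                    # ascending: approximately [-2. 3. 6.]

# Columns of V match {a, 0, -a}, {b, -b, b}, {c, 2c, c} after
# normalization (possibly up to sign); pairwise dot products vanish:
print(np.round(V.T @ V, 12))  # identity matrix
```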

Lecture 10.

Quadratic forms and their connection with symmetric matrices. Properties of eigenvectors and eigenvalues of a symmetric matrix. Reducing a quadratic form to canonical form.

Definition 10.1. A quadratic form in the real variables x1, x2, …, xn is a second-degree polynomial in these variables that contains no free term and no first-degree terms.

Examples of quadratic forms:

a11x1² + 2a12x1x2 + a22x2²   (n = 2),

a11x1² + a22x2² + a33x3² + 2a12x1x2 + 2a13x1x3 + 2a23x2x3   (n = 3). (10.1)
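In matrix language, a quadratic form is the product xᵀAx with a symmetric coefficient matrix A. A minimal sketch (the particular matrix and vectors here are illustrative, not taken from the lecture):

```python
import numpy as np

# Illustrative symmetric coefficient matrix for the form
# q(x) = 2x1² + 3x2² + x3² + 4x1x2 − 2x2x3
A = np.array([[2.,  2.,  0.],
              [2.,  3., -1.],
              [0., -1.,  1.]])

def q(x):
    """Evaluate the quadratic form x^T A x."""
    x = np.asarray(x, dtype=float)
    return x @ A @ x

print(q([1, 0, 0]))  # 2.0: only the x1² term survives
print(q([1, 1, 1]))  # 8.0: every coefficient contributes, cross terms twice
```

Note that the off-diagonal entry aij appears twice in the product, which is why the cross term 2aijxixj carries the factor 2.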

Let us recall the definition of a symmetric matrix given in the last lecture:

Definition 10.2. A square matrix is called symmetric if aij = aji, that is, if the matrix elements that are symmetric about the main diagonal are equal.

Properties of eigenvalues and eigenvectors of a symmetric matrix:

1) All eigenvalues of a symmetric matrix are real.

Proof (for n = 2).

Let the matrix A have the form

( a  b )
( b  c )

We form the characteristic equation:

(a − λ)(c − λ) − b² = λ² − (a + c)λ + ac − b² = 0. (10.2)

Let us find the discriminant:

D = (a + c)² − 4(ac − b²) = (a − c)² + 4b² ≥ 0.

Therefore, the equation has only real roots.
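The discriminant identity can be spot-checked numerically on random symmetric 2×2 matrices (a sketch; the random draws are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.normal(size=3)
    A = np.array([[a, b],
                  [b, c]])
    # (a + c)² − 4(ac − b²) equals (a − c)² + 4b², which is never negative
    disc = (a - c) ** 2 + 4 * b * b
    assert disc >= 0
    lam = np.linalg.eigvals(A)
    assert np.allclose(lam.imag, 0)  # both eigenvalues are real
print("all 1000 symmetric samples had real eigenvalues")
```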

2) The eigenvectors of a symmetric matrix corresponding to different eigenvalues are orthogonal.

Proof (for n= 2).

The coordinates of the eigenvectors x(1) and x(2) must satisfy the equations (A − λ1E)x(1) = 0 and (A − λ2E)x(2) = 0. Then, using the symmetry of A, λ1(x(1)·x(2)) = (Ax(1))·x(2) = x(1)·(Ax(2)) = λ2(x(1)·x(2)); since λ1 ≠ λ2, it follows that x(1)·x(2) = 0.

Diagonal matrices have the simplest structure. This raises the question of whether a basis can be found in which the matrix of a linear operator has diagonal form. Such a basis exists.
Let a linear space Rn be given together with a linear operator A acting in it; then the operator A maps Rn into itself, that is, A: Rn → Rn.

Definition. A nonzero vector x is called an eigenvector of the operator A if the operator A maps x to a collinear vector, that is, Ax = λx. The number λ is called the eigenvalue of the operator A corresponding to the eigenvector x.
Let us note some properties of eigenvalues and eigenvectors.
1. Any nonzero linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is an eigenvector with the same eigenvalue.
2. Eigenvectors of the operator A with pairwise distinct eigenvalues λ1, λ2, …, λm are linearly independent.
3. If the eigenvalues coincide, λ1 = λ2 = … = λm = λ, then the eigenvalue λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n eigenvectors e1, e2, …, en corresponding to distinct eigenvalues λ1, λ2, …, λn, then they are linearly independent and can therefore be taken as a basis of the space Rn. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors by applying the operator A to the basis vectors: Aek = λkek (k = 1, …, n).
Thus, the matrix of the linear operator A in the basis of its eigenvectors is diagonal, with the eigenvalues of the operator A along the diagonal.
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of a linear operator A in a basis e1, …, en has diagonal form if and only if all the basis vectors are eigenvectors of the operator A.
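In NumPy terms the theorem reads: P⁻¹AP is diagonal exactly when the columns of P are eigenvectors of A. A sketch with an illustrative matrix (any diagonalizable matrix would do):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])       # illustrative matrix with eigenvalues 5 and 2

lam, P = np.linalg.eig(A)      # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P   # matrix of the same operator in the eigenbasis
print(np.round(D, 10))         # diagonal, with lam on the diagonal
```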

Rule for finding eigenvalues and eigenvectors

Let a vector x = x1e1 + x2e2 + … + xnen be given, where x1, x2, …, xn are the coordinates of the vector x relative to the basis e1, e2, …, en, and let x be an eigenvector of the linear operator A corresponding to the eigenvalue λ, that is, Ax = λx. This relationship can be written in matrix form:

(A − λE)x = 0. (*)


Equation (*) can be considered as an equation for finding x with x ≠ 0; that is, we are interested in nontrivial solutions, since an eigenvector cannot be the zero vector. It is known that nontrivial solutions of a homogeneous system of linear equations exist if and only if det(A − λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A − λE) = 0.
If equation (*) is written out in coordinate form, we obtain a system of linear homogeneous equations:

(a11 − λ)x1 + a12x2 + … + a1nxn = 0,
a21x1 + (a22 − λ)x2 + … + a2nxn = 0,
. . .
an1x1 + an2x2 + … + (ann − λ)xn = 0,   (1)

where (aij) is the matrix of the linear operator.

System (1) has a nonzero solution if and only if its determinant D equals zero:

D = det(A − λE) = 0.

We have obtained an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left-hand side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no eigenvectors and cannot be reduced to diagonal form.
Let λ1, λ2, …, λn be the real roots of the characteristic equation, among which there may be multiple roots. Substituting these values in turn into system (1), we find the eigenvectors.
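The whole rule (characteristic polynomial → its roots → nontrivial solutions of (A − λE)x = 0) can be traced with NumPy; np.poly returns the coefficients of det(λE − A), and a null-space vector of A − λE can be read off from the SVD (the matrix below is illustrative):

```python
import numpy as np

A = np.array([[1., 1., 3.],
              [1., 5., 1.],
              [3., 1., 1.]])           # illustrative matrix

coeffs = np.poly(A)                    # coefficients of det(λE − A)
print(np.round(coeffs, 10))            # approximately [1. -7. 0. 36.]

lam = np.sort(np.roots(coeffs))        # roots of the characteristic equation
print(lam)

for l in lam:
    # a nontrivial solution of (A − λE)x = 0: the right singular vector
    # belonging to the (numerically) zero singular value
    _, _, vh = np.linalg.svd(A - l * np.eye(3))
    x = vh[-1]
    assert np.allclose(A @ x, l * x)   # x is indeed an eigenvector
```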

Example 12. The linear operator A acts in R3 according to the law given by its matrix, where x1, x2, x3 are the coordinates of the vector in the basis e1, e2, e3. Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator:
.
We create a system for determining the coordinates of eigenvectors:

We compose the characteristic equation det(A − λE) = 0 and solve it:

λ1,2 = −1, λ3 = 3.
Substituting λ = −1 into the system, we obtain a homogeneous system whose matrix has rank two; hence there are two dependent unknowns and one free unknown.
Let x1 be the free unknown. Solving this system in any way, we find its general solution; the fundamental system of solutions consists of one solution, since n − r = 3 − 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = −1 is obtained from this general solution, where x1 is any number other than zero. We choose one vector from this set, for example, by putting x1 = 1.
Reasoning similarly, we find the eigenvector corresponding to the eigenvalue λ = 3.
In the space R3 a basis consists of three linearly independent vectors, but we have obtained only two linearly independent eigenvectors, from which a basis of R3 cannot be composed. Consequently, the matrix A of this linear operator cannot be reduced to diagonal form.
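The same obstruction is easy to reproduce with a small hypothetical example: a Jordan block has a repeated eigenvalue but only a one-dimensional eigenspace, so no eigenbasis exists (this is not the matrix of Example 12, just an illustration):

```python
import numpy as np

# Hypothetical 2×2 matrix with eigenvalue 1 of multiplicity 2
# but only one independent eigenvector (a Jordan block):
J = np.array([[1., 1.],
              [0., 1.]])

lam, V = np.linalg.eig(J)
print(lam)                     # [1. 1.]
# The two returned eigenvector columns are numerically parallel,
# so V is singular: there is no basis of eigenvectors.
print(abs(np.linalg.det(V)))   # ~0
```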

Example 13. Given the matrix

(  2   0   3 )
( 10  −3  −6 )
( −1   0  −2 )
1. Prove that the vector (1, 8, −1) is an eigenvector of the matrix A, and find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which the matrix A has diagonal form.
Solution.
1. If Ax = λx for some number λ, then x is an eigenvector. We compute:

A(1, 8, −1) = (2 − 3, 10 − 24 + 6, −1 + 2) = (−1, −8, 1) = −1 · (1, 8, −1).

The vector (1, 8, −1) is an eigenvector with eigenvalue λ = −1.
The matrix has diagonal form in a basis consisting of eigenvectors. One of them is already known; let us find the rest.
We look for eigenvectors from the system:

(2 − λ)x1 + 3x3 = 0,
10x1 − (3 + λ)x2 − 6x3 = 0,
−x1 − (2 + λ)x3 = 0.

Characteristic equation: det(A − λE) = 0;
(3 + λ)[−(2 − λ)(2 + λ) + 3] = 0; (3 + λ)(λ² − 1) = 0;
λ1 = −3, λ2 = 1, λ3 = −1.
Let us find the eigenvector corresponding to the eigenvalue λ = −3:

5x1 + 3x3 = 0,
10x1 − 6x3 = 0,
−x1 + x3 = 0.

In the unknowns x1, x3 the rank of the matrix of this system is two, equal to the number of these unknowns, so the system has only the zero solution x1 = x3 = 0; x2 can be anything other than zero, for example x2 = 1. Thus, the vector (0, 1, 0) is an eigenvector corresponding to λ = −3. Check: A(0, 1, 0) = (0, −3, 0) = −3 · (0, 1, 0).
If λ = 1, then we obtain the system

x1 + 3x3 = 0,
10x1 − 4x2 − 6x3 = 0,
−x1 − 3x3 = 0.

The rank of the matrix is two, so we cross out the last equation.
Let x3 be the free unknown. Then x1 = −3x3 and 4x2 = 10x1 − 6x3 = −30x3 − 6x3, so x2 = −9x3.
Putting x3 = 1, we have (−3, −9, 1), an eigenvector corresponding to the eigenvalue λ = 1. Check: A(−3, −9, 1) = (−6 + 3, −30 + 27 − 6, 3 − 2) = (−3, −9, 1).
Since the eigenvalues are real and distinct, the corresponding eigenvectors are linearly independent, so they can be taken as a basis of R3. Thus, in the basis (0, 1, 0), (−3, −9, 1), (1, 8, −1) the matrix A has the form:

( −3  0   0 )
(  0  1   0 )
(  0  0  −1 )
Not every matrix of a linear operator A: Rn → Rn can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then a root of the characteristic equation of multiplicity m corresponds to exactly m linearly independent eigenvectors.

Definition. A symmetric matrix is a square matrix in which the elements symmetric about the main diagonal are equal, that is, in which aij = aji.
Notes. 1. All eigenvalues of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise different eigenvalues are orthogonal.
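Both notes are easy to confirm numerically for an arbitrary symmetric matrix; np.linalg.eigh exploits the symmetry and returns real eigenvalues together with orthonormal eigenvectors (the random matrix below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(4, 4))
S = (M + M.T) / 2              # symmetrize: S is a symmetric matrix

lam, Q = np.linalg.eigh(S)     # real eigenvalues, orthonormal eigenvectors
print(lam.dtype)               # float64: note 1, the eigenvalues are real
print(np.round(Q.T @ Q, 12))   # identity: note 2, eigenvectors orthogonal
```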
As one of the many applications of the studied apparatus, we consider the problem of determining the type of a second-order curve.

Lecture 9.

Linear coordinate transformations. Eigenvectors and eigenvalues ​​of a matrix, their properties. Characteristic polynomial of a matrix, its properties.

We will say that a transformation A is given on a set of vectors R if to each vector x ∈ R some rule assigns a vector Ax ∈ R.

Definition 9.1. A transformation A is called linear if for any vectors x and y and for any real number λ the following equalities hold:

A(x + y) = Ax + Ay,  A(λx) = λAx. (9.1)

Definition 9.2. A linear transformation is called the identity transformation if it maps every vector x to itself.

The identity transformation is denoted by E: Ex = x.

Consider a three-dimensional space with a basis e1, e2, e3 in which a linear transformation A is given. Applying it to the basis vectors, we obtain the vectors Ae1, Ae2, Ae3, which belong to this three-dimensional space. Consequently, each of them can be uniquely expanded in the basis vectors:

Ae1 = a11e1 + a21e2 + a31e3,
Ae2 = a12e1 + a22e2 + a32e3,   (9.2)
Ae3 = a13e1 + a23e2 + a33e3.

The matrix

( a11  a12  a13 )
( a21  a22  a23 )
( a31  a32  a33 )

is called the matrix of the linear transformation A in the basis e1, e2, e3. The columns of this matrix are made up of the coefficients in the basis transformation formulas (9.2).

Comment. Obviously, the matrix of the identity transformation is the identity matrix E.

For an arbitrary vector x = x1e1 + x2e2 + x3e3, the result of applying the linear transformation A to it is the vector Ax, which can be expanded in the same basis: Ax = x′1e1 + x′2e2 + x′3e3, where the coordinates x′i can be found from the formulas:

x′1 = a11x1 + a12x2 + a13x3,
x′2 = a21x1 + a22x2 + a23x3,   (9.3)
x′3 = a31x1 + a32x2 + a33x3.

The coefficients in the formulas of this linear transformation are the elements of the rows of the matrix A.

Transformation of the matrix of a linear transformation when passing to a new basis.

Consider a linear transformation A and two bases in three-dimensional space: e1, e2, e3 and e′1, e′2, e′3. Let the matrix C define the formulas for the transition from the basis {ek} to the basis {e′k}. If in the first of these bases the linear transformation is given by the matrix A, and in the second by the matrix A′, then we can find the connection between these matrices, namely:

A′ = C⁻¹AC. (9.4)

Indeed, let a vector have coordinate column X in the basis {ek} and X′ in the basis {e′k}; then X = CX′. On the other hand, the results of applying the same linear transformation A in the basis {ek}, i.e. AX, and in the basis {e′k}, i.e. A′X′, are connected by the matrix C: AX = C(A′X′). Substituting X = CX′ gives CA′X′ = ACX′ for every X′, from which it follows that CA′ = AC. Multiplying both sides of this equality from the left by C⁻¹, we get C⁻¹CA′ = C⁻¹AC, which proves the validity of formula (9.4).
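Formula (9.4) is easy to verify numerically: C⁻¹AC represents the same transformation in a new basis, so in particular it has the same characteristic polynomial and eigenvalues (the matrices below are illustrative):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])       # illustrative transformation matrix
C = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])       # illustrative invertible transition matrix

A_new = np.linalg.inv(C) @ A @ C   # formula (9.4): A' = C⁻¹AC

# Similar matrices share eigenvalues (and the characteristic polynomial):
print(np.sort(np.linalg.eigvals(A)))      # approximately [1. 2. 4.]
print(np.sort(np.linalg.eigvals(A_new)))  # the same values
```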




