Determinant of the Gram matrix. Euclidean and unitary spaces

1. Consider arbitrary vectors. Let us first assume that these vectors are linearly independent. In this case the Gram determinant formed from any of these vectors is different from zero. Then, setting according to (22)

(23)

and multiplying these inequalities term by term together with the inequality

, (24)

.

Thus, the Gram determinant of linearly independent vectors is positive, and that of linearly dependent vectors is equal to zero; the Gram determinant is never negative.
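
This dichotomy is easy to observe numerically. The sketch below is only an illustration, assuming numpy and the standard dot product of R^3; the sample vectors are arbitrary illustration data.

```python
import numpy as np

def gram_det(vectors):
    """Determinant of the Gram matrix G[i][j] = (v_i, v_j) for the standard dot product."""
    V = np.array(vectors, dtype=float)   # rows are the vectors
    return np.linalg.det(V @ V.T)        # determinant of the matrix of pairwise scalar products

independent = [[1.0, 0.0, 2.0], [0.0, 1.0, 1.0]]   # linearly independent pair in R^3
dependent = [[1.0, 0.0, 2.0], [2.0, 0.0, 4.0]]     # second vector is 2 times the first

print(gram_det(independent))   # positive
print(gram_det(dependent))     # approximately 0
```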

For brevity, let us denote . Then from (23) and (24) we obtain

where is the area of the parallelogram constructed on and . Next,

,

where is the volume of a parallelepiped built on vectors. Continuing further, we find:

,

and finally

. (25)

It is natural to call this quantity the volume of the -dimensional parallelepiped built on the vectors as on its edges.

Let us denote by , the coordinates of the vector in some orthonormal basis in , and let

Then based on (14)

and therefore [see formula (25)]

. (26)

This equality has the following geometric meaning:

The squared volume of a parallelepiped is equal to the sum of the squared volumes of its projections onto all coordinate subspaces of the corresponding dimension. In particular, when the number of vectors equals the dimension of the space, it follows from (26) that:

. (26")

Using formulas (20), (21), (22), (26), (26"), a number of basic metric problems of n-dimensional unitary and Euclidean analytic geometry can be solved.
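
Equalities (25) and (26) can also be illustrated numerically. The sketch below assumes numpy and the standard scalar product; the two vectors of R^4 are arbitrary illustration data, and the sum runs over all coordinate planes.

```python
import numpy as np
from itertools import combinations

# Vectors given by their coordinates in an orthonormal basis of R^4 (arbitrary example data).
X = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 3.0]])          # two 4-dimensional vectors, one per row

G = X @ X.T                                    # Gram matrix
vol_squared = np.linalg.det(G)                 # squared 2-dimensional volume, formula (25)

# Sum of squared 2x2 minors of the coordinate matrix, i.e. the squared volumes of the
# projections onto all coordinate planes -- the right-hand side of (26).
minors_sum = sum(np.linalg.det(X[:, cols])**2 for cols in combinations(range(4), 2))

print(vol_squared, minors_sum)                 # the two numbers agree up to rounding
```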

2. Let's return to expansion (15). It follows directly from this:

which, in combination with (22), gives the inequality (for arbitrary vectors )

in this case, the equal sign holds if and only if the vector is orthogonal to the vectors.

From here it is easy to obtain the so-called Hadamard inequality

where the equality sign holds if and only if the vectors are pairwise orthogonal. Inequality (28) expresses the following geometrically obvious fact:

The volume of a parallelepiped does not exceed the product of the lengths of its edges and is equal to this product only when the parallelepiped is rectangular.
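
A minimal numerical check of Hadamard's inequality (28), assuming numpy and the standard dot product (the random vectors serve only as illustration data):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(4, 4))                     # four vectors in R^4, one per row

gram_det = np.linalg.det(V @ V.T)               # Gram determinant of the system
edge_product = np.prod(np.sum(V * V, axis=1))   # product of the squared edge lengths (x_i, x_i)

print(gram_det <= edge_product + 1e-12)         # Hadamard's inequality (28): prints True
```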

Hadamard's inequality can be given its usual form by substituting into (28) and introducing the determinant composed of the coordinates of the vectors in some orthonormal basis:

.

Then from (26") and (28) it follows

. (28")

3. Let us now establish a generalized Hadamard inequality, covering both inequality (27) and inequality (28):

and the equality sign holds if and only if each vector of the first group is orthogonal to every vector of the second group, or one of the two Gram determinants is equal to zero.

Inequality (28") has the following geometric meaning:

The volume of a parallelepiped does not exceed the product of the volumes of two complementary faces and is equal to this product if and only if these faces are mutually orthogonal or at least one of them has zero volume.
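
The generalized inequality (29) can be checked in the same spirit; again this is only a sketch with numpy and arbitrary random data for the two groups of vectors.

```python
import numpy as np

def gram_det(M):
    return np.linalg.det(M @ M.T)   # Gram determinant of the rows of M

rng = np.random.default_rng(1)
X = rng.normal(size=(2, 5))         # first group of vectors in R^5
Y = rng.normal(size=(3, 5))         # second group of vectors in R^5

lhs = gram_det(np.vstack([X, Y]))   # Gram determinant of all five vectors together
rhs = gram_det(X) * gram_det(Y)     # product of the Gram determinants of the two groups

print(lhs <= rhs + 1e-12)           # generalized Hadamard inequality (29): prints True
```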

We will establish the validity of inequality (29) inductively with respect to the number of vectors . The inequality is true when this number is 1 [see formula (27)].

Let us introduce two subspaces with bases and respectively. Obviously, . Let us consider the orthogonal decompositions

.

Expressing the square of the volume of the parallelepiped as the product of the square of the volume of the base and the square of the height [see formula (22)], we find

In this case, from the vector decomposition it follows:

, (31)

and here the equality sign holds only when .

Using now relations (30), (30"), (31) and the induction assumption, we obtain:

We have obtained inequality (29). Turning to the question of when the equality sign occurs in this inequality, we assume that and . Then by (30") also and . Since the equality sign then holds everywhere in relations (32), it follows, by the induction hypothesis, that each of the vectors is orthogonal to each of the vectors , and obviously the vector has this property as well.

Thus, the generalized Hadamard inequality is completely established.

4. The generalized Hadamard inequality (29) can also be given an analytical form.

Let an arbitrary positive definite Hermitian form be given. Regarding its variables as the coordinates of a vector in an n-dimensional space with a given basis, we take this form as the basic metric form of that space (see page 224). The space then becomes unitary, and the generalized Hadamard inequality can be applied to the basis vectors; in the Euclidean case the same applies to the real matrix of coefficients of a positive definite quadratic form. Finally, the angle φ between vectors x and y can be introduced by defining it from the relation

cos φ = (x, y) / (√(x, x)·√(y, y)).

From Bunyakovsky's inequality it follows that the quantity on the right does not exceed 1 in absolute value, so the angle defined in this way has a real value.

Scalar product of vectors specified by coordinates.

Let the vectors a = x1e1 + x2e2 + … + xnen and b = y1e1 + y2e2 + … + ynen be given in the basis e. Then (a, b) = (x1e1 + x2e2 + … + xnen, y1e1 + y2e2 + … + ynen) = x^T·G·y, where x^T is the row of coordinates of a and y is the column of coordinates of b. Thus, (a, b) = x^T·G·y. (42)
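
A short numerical sketch of formula (42); it assumes numpy, and the Gram matrix and coordinate columns below are arbitrary illustration data.

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # Gram matrix of some basis (symmetric, positive definite)
x = np.array([1.0, -2.0])         # coordinates of a in that basis
y = np.array([4.0,  5.0])         # coordinates of b in that basis

scalar_product = x @ G @ y        # formula (42): (a, b) = x^T G y
print(scalar_product)
```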

Properties of the Gram matrix.

1°. The Gram matrix is symmetric about the main diagonal.

This follows from the fact that (ek, es) = (es, ek).

2°. The diagonal elements of the Gram matrix are strictly positive.

This follows from the fact that ek ≠ 0 and therefore (ek, ek) > 0.

3°. For a Gram matrix G and any non-zero n-dimensional column x the condition x^T·G·x > 0 is satisfied.

This follows from the 4th axiom of the definition of a scalar product.

A symmetric matrix A satisfying the condition x^T·A·x > 0 for any non-zero column x is called positive definite. Therefore, the Gram matrix is positive definite.

4°. Let e = (e1, e2, …, en) and e′ = (e′1, e′2, …, e′n) be two bases in E^n, and let G and G′ be the Gram matrices of the given scalar product in the bases e and e′ respectively. Let T be the transition matrix from the basis e to the basis e′. Then (a, b) = x^T·G·y, where x = T·x′, y = T·y′ and x^T = (T·x′)^T = (x′)^T·T^T. Hence (a, b) = ((x′)^T·T^T)·G·(T·y′) = (x′)^T·(T^T·G·T)·y′. But (a, b) = (x′)^T·G′·y′. From here

G′ = T^T·G·T. (43)

Formula (43) gives the connection between the Gram matrices of the scalar product in different bases.

5°. The determinants of the Gram matrices in all bases have the same sign.

From formula (43) it follows that |G′| = |T^T|·|G|·|T| = |G|·|T|². Since |T|² > 0, the determinants |G′| and |G| have the same sign.
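
A quick check of formula (43) and of property 5°, assuming numpy; the matrices G and T below are arbitrary illustration data (T only has to be invertible).

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])                        # Gram matrix in the basis e
T = np.array([[1.0, 2.0],
              [0.0, 1.0]])                        # transition matrix from e to e' (invertible)

G_prime = T.T @ G @ T                             # formula (43)

print(np.linalg.det(G), np.linalg.det(G_prime))   # both determinants have the same (positive) sign
```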

Examples.

1. In the set M2 of square matrices with real entries, the scalar product is given by the formula . Find the Gram matrix of this product in the basis e1 = , e2 = , e3 = , e4 = .

Solution. Let us find all pairwise products of the basis elements: (e1, e1) = 1, (e1, e2) = (e2, e1) = 0, (e1, e3) = (e3, e1) = 0, (e1, e4) = (e4, e1) = 0, (e2, e2) = 1, (e2, e3) = (e3, e2) = 0, (e2, e4) = (e4, e2) = 0, (e3, e3) = 1, (e3, e4) = (e4, e3) = 0, (e4, e4) = 1. Therefore, G = E, the identity matrix of order 4.

2. In the space R[x] of polynomials of degree not higher than 3, the scalar product is given by the formula (p, q) = ∫[a, b] p(x)·q(x) dx, where a and b are fixed real numbers, a < b. Compose the Gram matrix in the basis (1, x, x^2, x^3).

Solution. Let us find all pairwise products of the basis elements: (1, 1) = b − a,

(1, x) = (x, 1) = (b^2 − a^2)/2, (1, x^2) = (x^2, 1) = (b^3 − a^3)/3, (1, x^3) = (x^3, 1) = (b^4 − a^4)/4, (x, x) = (b^3 − a^3)/3, (x, x^2) = (x^2, x) = (b^4 − a^4)/4, (x, x^3) = (x^3, x) = (b^5 − a^5)/5, (x^2, x^2) = (b^5 − a^5)/5, (x^2, x^3) = (x^3, x^2) = (b^6 − a^6)/6, (x^3, x^3) = (b^7 − a^7)/7. The Gram matrix will look like:

G = ( (b^(i+j+1) − a^(i+j+1)) / (i + j + 1) ), i, j = 0, 1, 2, 3.
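
The same Gram matrix can be reproduced symbolically. The sketch below assumes the sympy library and the integral scalar product stated in the problem.

```python
import sympy as sp

a, b, x = sp.symbols('a b x')
basis = [1, x, x**2, x**3]

# Gram matrix entries: scalar product (p, q) = integral of p*q over [a, b]
G = sp.Matrix(4, 4, lambda i, j: sp.integrate(basis[i] * basis[j], (x, a, b)))
sp.pprint(sp.simplify(G))
```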

3. In the basis (e1, e2, e3) of the space E^3 the scalar product is given by the Gram matrix G = . Find the scalar product of the vectors a = (1, −5, 4) and b = (−3, 2, 7).

Solution. Using formula (42), we obtain (a, b) = (1, −5, 4)·G·(−3, 2, 7)^T = 7.

Introduction of a metric in Euclidean space

Let E^n be an n-dimensional Euclidean space. The scalar product of a vector with itself is called the scalar square of this vector, i.e. (a, a) = a^2. According to the 4th axiom of the scalar product, a^2 ≥ 0.

Definition 47. The length of a vector is the arithmetic value of the square root of the scalar square of this vector, i.e. |a| = √(a, a). (44)

Vector length properties:

1. Every vector a has a length, and only one; |a| ≥ 0.

2. |αa| = |α|·|a| for any a ∈ E^n and any α ∈ R.

3. For any vectors a and b from E^n the inequality |(a, b)| ≤ |a|·|b| holds.

Proof. (a − αb)^2 = a^2 − 2α(a, b) + α^2·b^2 ≥ 0 for any α ∈ R. Since this quadratic trinomial in α is non-negative for every value of α, its discriminant is non-positive, i.e. (a, b)^2 − a^2·b^2 ≤ 0, or (a, b)^2 ≤ a^2·b^2. Hence |(a, b)| ≤ |a|·|b| (45). The equality sign holds in this formula if and only if the vectors are proportional.
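
Inequality (45) can be checked numerically for a scalar product given by a Gram matrix, as in formula (42). The sketch assumes numpy; G and the two coordinate columns are arbitrary illustration data.

```python
import numpy as np

G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])                 # Gram matrix of some basis of E^3

def dot(u, v):
    return u @ G @ v                            # scalar product in coordinates, formula (42)

a = np.array([1.0, -5.0, 4.0])
b = np.array([-3.0, 2.0, 7.0])

lhs = abs(dot(a, b))
rhs = np.sqrt(dot(a, a)) * np.sqrt(dot(b, b))   # |a| * |b|
print(lhs <= rhs + 1e-12)                       # inequality (45): prints True
```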

Definition 48. A vector of unit length is called a unit vector, or ort.

4°. For every non-zero vector there is a unit vector proportional to it.

If a ≠ 0, then |a| ≠ 0. Therefore the vector a⁰ = a/|a| exists. Obviously, |a⁰| = 1.

Definition 49. The angle between non-zero vectors a and b is the real number φ for which cos φ = (a, b)/(|a|·|b|). (46)

The angle between the vectors a and b can also be denoted .

Properties of angles.

1°. For any two non-zero vectors the angle between them is defined.

From inequality (45) it follows that |(a, b)|/(|a|·|b|) ≤ 1. Therefore, φ exists.

2°. If a ≠ 0 and b ≠ 0, then .

Definition 48. Two non-zero vectors are called orthogonal if their scalar product is equal to zero.

Orthogonality of vectors is denoted a ⊥ b.

3°. If a ⊥ b, α ≠ 0, β ≠ 0, then (αa) ⊥ (βb).

4°. If a ⊥ b and a ⊥ c, then a ⊥ (b + c).

Definition 50. The set of all vectors of the space E^n orthogonal to the vector a, together with the zero vector, is called the orthogonal complement of the vector a.

5°. The orthogonal complement of a non-zero vector a is an (n − 1)-dimensional Euclidean subspace of E^n.

Proof.

From properties 3° and 4° it follows that the set L under consideration is a linear subspace of E^n. Since the scalar product is defined in E^n, it is also defined in the orthogonal complement; therefore L is a Euclidean subspace. Moreover, c ∈ L ⇔ (a, c) = 0 (*). Let us fix a basis in E^n. Let a = (a1, a2, …, an), c = (x1, x2, …, xn). Then c ∈ L ⇔ a^T·G·x = 0 (**). Equation (**) is one linear homogeneous equation with n unknowns. Its fundamental system of solutions consists of (n − 1) solutions. Therefore the solution space of equation (**) is (n − 1)-dimensional.
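
The dimension count in this proof can be reproduced numerically: the single equation (**) is solved via the SVD. The sketch assumes numpy; G and the coordinates of a are arbitrary illustration data.

```python
import numpy as np

G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])             # Gram matrix of the chosen basis of E^3
a = np.array([1.0, -5.0, 4.0])              # coordinates of the vector a

# Equation (**): a^T G x = 0 is a single homogeneous linear equation in x.
row = (a @ G).reshape(1, -1)
_, s, vt = np.linalg.svd(row)
null_basis = vt[1:]                         # the remaining right singular vectors span the solutions

print(np.allclose(row @ null_basis.T, 0))   # True: these vectors are orthogonal to a
print(null_basis.shape[0])                  # 2 = n - 1, as property 5 states
```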

Let E^k be a subspace of the space E^n. Denote by E^⊥ the set consisting of the zero vector and all vectors orthogonal to every non-zero vector from E^k. In other words, c ∈ E^⊥ ⇔ (c, a) = 0 for all a ∈ E^k. The space E^⊥ is called the orthogonal complement of the space E^k.

Def: The Gram determinant of a system of vectors (e1, e2, …, ek) is the determinant

Г(e1, e2, …, ek) = det((ei, ej)), i, j = 1, 2, …, k,

composed of the pairwise scalar products of these vectors.

Theorem. In order for a system of vectors (e1, e2, …, ek) of the Euclidean space E^n to be linearly dependent, it is necessary and sufficient that Г(e1, e2, …, ek) be equal to zero.

◀ Necessity. Let e1, e2, …, ek be linearly dependent. Then ek = α1e1 + α2e2 + … + αk−1ek−1, and in Г(e1, e2, …, ek) the elements of the last row have the form α1(e1, ei) + α2(e2, ei) + … + αk−1(ek−1, ei), i.e. the last row is a linear combination of the remaining rows ⇒ Г(e1, e2, …, ek) = 0.

Sufficiency. Let Г(e1, e2, …, ek) = 0 ⇒ its rows are linearly dependent ⇒ there exist β1, β2, …, βk such that β1(e1, ei) + … + βk(ek, ei) = 0 for every i ⇒ the vector β1e1 + … + βkek is orthogonal to each ei and hence to itself, so β1e1 + … + βkek = 0, and not all βi = 0 ⇒ e1, e2, …, ek are linearly dependent.

Corollary. If e1, e2, …, ek are linearly independent, then Г(e1, e2, …, ek) ≠ 0. Moreover, Г(e1, e2, …, ek) > 0.

◀ Consider ℒ(e1, e2, …, ek). In this subspace the numbers (ei, ej) are the elements of the matrix of a symmetric bilinear form whose corresponding quadratic form defines the scalar product, i.e. is positive definite. Therefore, by Sylvester's criterion, Δ1 > 0, Δ2 > 0, …, Δk > 0. But Δk = Г(e1, e2, …, ek).
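
A small numerical illustration of the corollary and of Sylvester's criterion, assuming numpy; the rows of E are random and therefore, generically, linearly independent.

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=(4, 6))                      # four (generically independent) vectors in R^6, one per row
G = E @ E.T                                      # their Gram matrix

# Leading corner minors Delta_1, ..., Delta_4 -- all positive by Sylvester's criterion.
minors = [np.linalg.det(G[:k, :k]) for k in range(1, 5)]
print(minors)                                    # every value is > 0; the last one is the Gram determinant
```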

§2. Mutual bases.

Covariant and contravariant coordinates of vectors

Let E^n be a Euclidean space, let (e1, e2, …, en) be a basis in E^n and let (e^1, e^2, …, e^n) be another basis in E^n. The bases (ei) and (e^i) are called reciprocal (mutual) if (ei, e^j) = δij, where δij is the Kronecker symbol (equal to 1 for i = j and 0 for i ≠ j).

Theorem. Every basis (ei) of E^n has a unique reciprocal basis.

◀ Let e^j = ξ1e1 + ξ2e2 + … + ξnen. Multiply this equality scalarly by ei:

(ei, e^j) = ξ1(ei, e1) + ξ2(ei, e2) + … + ξn(ei, en) = δij, i, j = 1, 2, …, n.

For each j we obtain an inhomogeneous system of n linear equations with n unknowns; the determinant of this system is Г(e1, e2, …, en) ≠ 0, so the system has a unique solution.

Therefore the vectors e^j are determined uniquely. Let us make sure that they form a basis (that is, that they are linearly independent).

Let α1e^1 + α2e^2 + … + αne^n = 0. Multiply scalarly by ei:

α1(ei, e^1) + α2(ei, e^2) + … + αn(ei, e^n) = 0 ⇒ αi = 0, i = 1, 2, …, n.

Comment: if the basis (ei) is orthonormal, then its reciprocal basis coincides with the given basis.
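
The reciprocal basis can be computed explicitly: its coordinate rows are obtained from the inverse Gram matrix. The sketch below assumes numpy and the standard dot product of R^3; the basis rows are arbitrary illustration data.

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # rows are the basis vectors e_1, e_2, e_3 of R^3

G = E @ E.T                               # Gram matrix of the basis
E_recip = np.linalg.inv(G) @ E            # rows are the reciprocal basis vectors e^1, e^2, e^3

print(np.round(E_recip @ E.T, 10))        # (e^i, e_j) = identity matrix (Kronecker delta)
```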

Let (ei) and (e^j) be mutual bases in E^n.

Then for every x ∈ E^n

x = x1·e^1 + x2·e^2 + … + xn·e^n = x^1·e1 + x^2·e2 + … + x^n·en. (1)

(x1, x2, …, xn) are called the covariant coordinates of the vector x.

(x^1, x^2, …, x^n) are called the contravariant coordinates of the vector x.

Convention: let an expression be composed of factors carrying a finite number of indices (upper and lower), with all lower indices denoted by different symbols (and likewise the upper ones). If such an expression contains two identical indices, one of which is upper and the other lower, it is understood that summation over this index from 1 to n is performed. With this convention we get ej = gji·e^i; e^j = g^ji·ei, where gji = (ej, ei) and g^ji = (e^j, e^i).
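
Passing between covariant and contravariant coordinates amounts to lowering or raising an index with the metric coefficients gij. A small numpy sketch (the basis and the coordinates are arbitrary illustration data):

```python
import numpy as np

E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])          # basis vectors e_1, e_2, e_3 (rows), standard dot product
g = E @ E.T                               # metric coefficients g_ij = (e_i, e_j)

x_contra = np.array([2.0, -1.0, 3.0])     # contravariant coordinates x^i of some vector
x_cov = g @ x_contra                      # lowering the index: x_i = g_ij x^j
x_back = np.linalg.inv(g) @ x_cov         # raising the index with g^ij recovers x^i

print(x_cov, x_back)
```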

One says that in a real linear space X an operation of scalar multiplication of vectors is defined if to every pair of vectors x and y from X a real number is assigned, called the scalar product of the vectors x and y and denoted by the symbol (x, y), and if for any x, y, z ∈ X and any real number α the following scalar product axioms hold:

  • 1. (x, y) = (y, x).
  • 2. (x + y, z) = (x, z) + (y, z).
  • 3. (αx, y) = α(x, y).
  • 4. (x, x) > 0 for x ≠ 0 and (x, x) = 0 for x = 0.

Example 8.1. Let X be the space of geometric vectors studied in vector algebra. The dot product, defined as the product of the lengths of two vectors and the cosine of the angle between them, satisfies the scalar product axioms.

Example 8.2. In the arithmetic space of columns of height n, the dot product of vectors

can be determined by the formula

It is not difficult to check the validity of the scalar product axioms. For example, let us verify axiom 4. Note that

a sum of squares is positive if at least one of the numbers x_i is non-zero (i.e. x ≠ 0), and is equal to zero if all x_i are equal to zero (i.e. x = 0).

Example 8.3. In the linear space of polynomials with real coefficients of degree not higher than n − 1, a scalar product can be introduced by the formula

Verification of the scalar product axioms is based on the properties of the definite integral and is not difficult.

Example 8.4. In the linear space C[a, b] of functions of a real variable continuous on the interval [a, b], a scalar product can be introduced in the same way as in the linear space of polynomials, using a definite integral:

The verification of the scalar product axioms is carried out in the same way as in the previous example.

From axioms 2 and 3 it follows that any finite linear combination of vectors can be multiplied scalarly by another linear combination of vectors according to the rule for multiplying a polynomial by a polynomial, i.e. according to the formula

A real linear space in which scalar multiplication of vectors is defined is called a Euclidean space. A finite-dimensional linear space can be turned into a Euclidean space in many ways. If in an n-dimensional Euclidean space X a basis e1, e2, …, en is fixed, then any vectors x and y have decompositions in it

and formula (8.1) for the vectors x and y gives

or in matrix form (8.3), where one sets

Thus, the scalar product in the Euclidean space X is completely determined by the matrix Г. Not every square matrix can appear in formula (8.3). But if some scalar product in a given basis is determined by a matrix Г, then it is easy to see that the same matrix, taken in a different basis, also determines a scalar product. Keeping the matrix Г and changing the bases, we obtain an infinite set of scalar products in a given n-dimensional linear space.

The matrix Г involved in formula (8.3) is called the Gram matrix of the basis e = (e1, e2, …, en). The Gram matrix (the matrix of scalar products) can be defined not only for bases, but also for arbitrary ordered finite systems of vectors.

Let us note some properties of the Gram matrix of the basis in n-dimensional Euclidean space.

1. The Gram matrix Г is symmetric and for any n-dimensional column x ≠ 0 satisfies the condition x^T Г x > 0; in particular, the diagonal elements (ei, ei) = ei^T Г ei of the Gram matrix are positive.

The symmetry of the Gram matrix follows from axiom 1 of the scalar product, according to which (ei, ej) = (ej, ei) for any two basis vectors, and the condition x^T Г x > 0, x ≠ 0, is equivalent to axiom 4 of the scalar product.

A symmetric matrix A satisfying the condition x^T A x > 0, x ≠ 0, is called positive definite. In this terminology the proven property reads: the Gram matrix is positive definite.

2. The Gram matrices Г and Г′ of two bases e and e′ of a Euclidean space are related by the relation

where T is the transition matrix from the basis e to the basis e′.

Indeed, when passing from the basis e to the basis e′ the coordinates x and y of two vectors x and y are transformed into the coordinates x′ and y′ according to the formulas (see Section 4.6)

Therefore, the matrix T^T Г T is the Gram matrix for the basis e′.

3. The determinant of the Gram matrix of any basis is positive.

Indeed, from formula (8.4) it follows that when the basis is changed, the determinant of the Gram matrix retains its sign (or remains equal to zero), since the determinant of the transition matrix is non-zero:

It remains to take into account that as the Gram matrix Г we can take the identity matrix (see the remark below), which has a determinant equal to one.

4. All corner minors Δ1, Δ2, …, Δn

of the Gram matrix of the basis e1, e2, …, en are positive.

Indeed, for any k we can consider the subspace Lk spanned by e1, …, ek as an independent Euclidean space.

Then the determinant of the Gram matrix for the basis e1, e2, …, ek coincides with Δk. By the previous property, this determinant is positive.

Comment. In Sect. 9.C it is established that property 4 is a necessary and sufficient condition for the positive definiteness of a symmetric matrix; thus property 4 follows from property 1. Any positive definite matrix is the Gram matrix of some basis in a given Euclidean space. Indeed, the scalar product can be defined by formula (8.3), in which any positive definite matrix can be taken as Г. Then axiom 1 of the scalar product follows from the symmetry of the matrix Г, axioms 2 and 3 from the distributivity property of the matrix product, and axiom 4 from the positive definiteness of Г. Consequently, any matrix with property 4 can be considered as a Gram matrix. In particular, one can choose the identity matrix as the Gram matrix, i.e. in a given basis e1, …, en define the dot product by the formula


As already noted, the concept of a Gram matrix can be introduced for an arbitrary ordered finite system of vectors. In the general case the Gram matrix remains symmetric, but the other properties (positive definiteness, positivity of the determinant) may be lost. The following statement holds.

Theorem 8.1. The Gram matrix of a system of vectors is non-singular if and only if this system is linearly independent. The Gram matrix of a linearly independent system of vectors is positive definite and, in particular, has a positive determinant. The determinant of the Gram matrix of a linearly dependent system of vectors is equal to zero.

> Any linearly independent system of vectors can be considered as a basis of some Euclidean space, namely of its linear span. By the properties of the Gram matrix of a basis, the Gram matrix of the system of vectors under consideration is positive definite. Therefore all of its corner minors, and in particular its determinant, are positive. This also means that the Gram matrix of a linearly independent system of vectors is non-degenerate.

Now let a linear combination of the vectors a1, a2, …, ak be equal to the zero vector. Multiplying this vector equality scalarly by the vectors a1, a2, …, ak, we obtain a homogeneous system of linear equations

relative to the coefficients α1, …, αk of the linear combination under consideration. The matrix of this system is the Gram matrix Г of the system of vectors a1, a2, …, ak. If the matrix Г is non-singular, then the homogeneous system has only the zero solution. This means that the system of vectors a1, a2, …, ak is linearly independent.

If the system of vectors a1, …, ak is linearly dependent, then the linear system under consideration has non-zero solutions. Therefore its determinant, i.e. the determinant of the Gram matrix Г of the system of vectors under consideration, is equal to zero.
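
A minimal numpy illustration of Theorem 8.1; the three vectors are arbitrary, with the third deliberately taken as a linear combination of the first two.

```python
import numpy as np

a1 = np.array([1.0, 2.0, 0.0])
a2 = np.array([0.0, 1.0, 1.0])
a3 = 2 * a1 - 3 * a2                       # deliberately dependent third vector
A = np.vstack([a1, a2, a3])

G = A @ A.T                                # Gram matrix of the system
print(np.linalg.det(G))                    # ~0: the system is linearly dependent

# A non-zero solution of G * alpha = 0 gives the coefficients of a dependency.
_, _, vt = np.linalg.svd(G)
alpha = vt[-1]
print(np.allclose(alpha @ A, 0))           # True: alpha_1 a1 + alpha_2 a2 + alpha_3 a3 = 0
```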


