Dot product of vectors

The dot product of vectors (hereinafter referred to as SP). Dear friends! The mathematics exam includes a group of problems on vectors. We have already considered some of them; you can find them in the "Vectors" category. In general, the theory of vectors is not complicated; the main thing is to study it consistently. Calculations and operations with vectors in the school mathematics course are simple, and the formulas are not complicated. In this article we will analyze problems on the SP of vectors (they are included in the Unified State Examination). Now, an "immersion" in the theory:

To find the coordinates of a vector, you need to subtract the corresponding coordinates of its origin from the corresponding coordinates of its end: for points $A(x_1; y_1)$ and $B(x_2; y_2)$,

$$\vec{AB} = (x_2 - x_1;\; y_2 - y_1).$$

And one more thing: the length (modulus) of a vector is determined as follows:

$$|\vec{a}| = \sqrt{x^2 + y^2}, \quad \text{where } \vec{a} = (x; y).$$

These formulas must be remembered!!!
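As a quick illustration of these two formulas, here is a minimal Python sketch (the points A and B are made up for illustration):

```python
import math

# Hypothetical points A and B (any coordinates work the same way)
A = (1.0, 2.0)
B = (4.0, 6.0)

# Coordinates of the vector AB: end minus origin
ab = (B[0] - A[0], B[1] - A[1])

# Length (modulus) of the vector: sqrt(x^2 + y^2)
length = math.sqrt(ab[0] ** 2 + ab[1] ** 2)

print(ab)      # (3.0, 4.0)
print(length)  # 5.0
```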

Let's denote the angle between the vectors by α.

It is clear that it can vary from 0° to 180° (or in radians, from 0 to π).

We can draw some conclusions about the sign of the scalar product. The lengths of vectors are positive values, obviously. So the sign of the scalar product depends on the value of the cosine of the angle between the vectors.

Possible cases:

1. If the angle between the vectors is acute (from 0° to 90°), then the cosine of the angle has a positive value.

2. If the angle between the vectors is obtuse (from 90° to 180°), then the cosine of the angle has a negative value.

*At zero degrees, that is, when the vectors have the same direction, the cosine is equal to one, and accordingly the result will be positive.

At 180°, that is, when the vectors have opposite directions, the cosine is equal to minus one, and accordingly the result will be negative.

Now the IMPORTANT POINT!

At 90°, that is, when the vectors are perpendicular to each other, the cosine is equal to zero, and therefore the SP is equal to zero. This fact (consequence, conclusion) is used in solving many problems concerning the relative position of vectors, including problems included in the open bank of mathematics assignments.

Let us formulate the statement: the scalar product of two nonzero vectors is equal to zero if and only if these vectors are perpendicular.

So, the formulas for the SP of vectors:

$$\vec{a} \cdot \vec{b} = x_1 x_2 + y_1 y_2$$

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\alpha$$

where $\vec{a} = (x_1; y_1)$, $\vec{b} = (x_2; y_2)$, and $\alpha$ is the angle between the vectors.

If the coordinates of the vectors, or the coordinates of the points of their beginnings and ends, are known, then we can always find the angle between the vectors:

$$\cos\alpha = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}|\,|\vec{b}|} = \frac{x_1 x_2 + y_1 y_2}{\sqrt{x_1^2 + y_1^2}\,\sqrt{x_2^2 + y_2^2}}.$$
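Here is a minimal Python sketch of both SP formulas and the angle computation; the coordinates are made up for illustration (the original problems give theirs in figures):

```python
import math

def dot(a, b):
    """Scalar product from coordinates: x1*x2 + y1*y2."""
    return a[0] * b[0] + a[1] * b[1]

def angle_deg(a, b):
    """Angle between vectors via cos(alpha) = (a . b) / (|a| |b|)."""
    cos_alpha = dot(a, b) / (math.hypot(*a) * math.hypot(*b))
    return math.degrees(math.acos(cos_alpha))

a = (2.0, 3.0)   # made-up coordinates
b = (4.0, -1.0)

print(dot(a, b))        # 5.0
print(angle_deg(a, b))  # ~70.3 degrees
```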

Let's consider the problems:

27724. Find the scalar product of the vectors $\vec{a}$ and $\vec{b}$.

We can find the scalar product of vectors using one of the two formulas given above.

The angle between the vectors is unknown, but we can easily find the coordinates of the vectors and then use the first formula. Since the origins of both vectors coincide with the origin of coordinates, the coordinates of these vectors are equal to the coordinates of their endpoints; that is:

How to find the coordinates of a vector was described above.

We calculate:

Answer: 40


Let's find the coordinates of the vectors and use the formula:

To find the coordinates of a vector, it is necessary to subtract the corresponding coordinates of its beginning from the coordinates of its end, which means:

We calculate the scalar product:

Answer: 40

Find the angle between vectors a and b. Give your answer in degrees.

Let the coordinates of the vectors have the form:

To find the angle between the vectors, we use the formula for the scalar product of vectors:

$$\vec{a} \cdot \vec{b} = |\vec{a}|\,|\vec{b}|\cos\alpha.$$

The cosine of the angle between the vectors is therefore

$$\cos\alpha = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}|\,|\vec{b}|}.$$

Hence:

$$\alpha = \arccos\frac{\vec{a} \cdot \vec{b}}{|\vec{a}|\,|\vec{b}|}.$$

The coordinates of these vectors are equal:

Let's substitute them into the formula:

The angle between the vectors is 45 degrees.

Answer: 45
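The coordinates in this problem were given in a figure that has not survived; as a hypothetical stand-in, the vectors $\vec{a} = (1; 0)$ and $\vec{b} = (1; 1)$ also give a 45-degree angle:

```python
import math

a = (1.0, 0.0)   # assumed coordinates that give a 45-degree angle
b = (1.0, 1.0)

cos_alpha = (a[0] * b[0] + a[1] * b[1]) / (math.hypot(*a) * math.hypot(*b))
print(math.degrees(math.acos(cos_alpha)))  # 45.0
```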

Federal Agency for Education

State Educational Institution of Higher Professional Education, St. Petersburg State Mining Institute named after G.V. Plekhanov

(technical university)

A.P. Gospodarikov, G.A. Colton, S.A. Khachatryan

Fourier series. Fourier integral.

Operational calculus

Educational and methodological manual

SAINT PETERSBURG

UDC 512 + 517.2 (075.80)

The educational and methodological manual provides an opportunity to gain practical skills in the analysis of functions by means of Fourier series expansion or representation by the Fourier integral, and is intended for the independent work of full-time and part-time students.

The manual also examines the main topics of operational calculus and a wide class of technical problems that are solved using the fundamentals of operational calculus.

Scientific editor: Prof. A.P. Gospodarikov

Reviewers: Department of Higher Mathematics No. 1, St. Petersburg State Electrotechnical University; Doctor of Physical and Mathematical Sciences V.M. Chistyakov (St. Petersburg State Polytechnic University).

Gospodarikov A.P.

G723. Fourier series. Fourier integral. Operational calculus: Educational and methodological manual / A.P. Gospodarikov, G.A. Colton, S.A. Khachatryan; St. Petersburg State Mining Institute (Technical University). St. Petersburg, 2005. 102 p.

ISBN 5-94211-104-9

UDC 512 + 517.2 (075.80)

BBK 22.161.5

Introduction

From Fourier theory it is known that, under certain influences on physical, technical, and other systems, the result repeats the shape of the initial input signal, differing only by a scale factor. It is clear that the system reacts to such signals (they are called its eigensignals) in the simplest way. If an arbitrary input signal is a linear combination of eigensignals and the system is linear, then the response of the system to this arbitrary signal is the sum of its responses to the eigensignals. Thus, complete information about a system can be obtained from its "building blocks": the responses of the system to its eigensignals. This is done, for example, in electrical engineering when the frequency response (transfer function) of a system is introduced. For the simplest linear time-invariant systems (for example, those described by ordinary differential equations with constant coefficients), the eigenfunctions are in some cases harmonics of the form $e^{i\omega t}$. In this way it is possible to obtain the result of an arbitrary influence on the system if the latter is represented as a linear combination of harmonics (in the general case, as a Fourier series or a Fourier integral). This is one of the reasons why, in theory and in applications, there arises the need for the concept of a trigonometric series (Fourier series) and of the Fourier integral.

Chapter 1. Fourier series

§ 1. Vector spaces

Here is some brief information from vector algebra necessary for a better understanding of the basic principles of the theory of Fourier series.

Let us consider the set of geometric vectors (a vector space), for which the concepts of equality of vectors, the linear operations (addition and subtraction of vectors, multiplication of a vector by a number), and the operation of scalar multiplication of vectors are introduced in the usual way.

Let us introduce in this space an orthogonal basis consisting of three pairwise orthogonal vectors $\vec{e}_1$, $\vec{e}_2$, $\vec{e}_3$. Any free vector $\vec{x}$ is a linear combination of the basis vectors:

$$\vec{x} = \lambda_1 \vec{e}_1 + \lambda_2 \vec{e}_2 + \lambda_3 \vec{e}_3. \tag{1.1}$$

The coefficients $\lambda_i$ ($i = 1, 2, 3$), called the coordinates of the vector relative to the basis $\vec{e}_1$, $\vec{e}_2$, $\vec{e}_3$, can be determined as follows. Take the scalar product of the vector $\vec{x}$ with one of the basis vectors:

$$(\vec{x}, \vec{e}_i) = \lambda_1 (\vec{e}_1, \vec{e}_i) + \lambda_2 (\vec{e}_2, \vec{e}_i) + \lambda_3 (\vec{e}_3, \vec{e}_i).$$

Due to the orthogonality of the basis, the scalar products $(\vec{e}_j, \vec{e}_i) = 0$ for $j \ne i$; therefore, on the right side of the last equality only one term is nonzero, the one corresponding to $j = i$. That is why

$$\lambda_i = \frac{(\vec{x}, \vec{e}_i)}{\|\vec{e}_i\|^2}, \quad i = 1, 2, 3, \tag{1.2}$$

where $\|\vec{e}_i\|^2 = (\vec{e}_i, \vec{e}_i)$.

If the vectors $\vec{x}$ and $\vec{y}$ are given by their coordinates, $\vec{x} = \sum_{i=1}^{3} \lambda_i \vec{e}_i$ and $\vec{y} = \sum_{j=1}^{3} \mu_j \vec{e}_j$, then their scalar product is

$$(\vec{x}, \vec{y}) = \sum_{i=1}^{3} \sum_{j=1}^{3} \lambda_i \mu_j (\vec{e}_i, \vec{e}_j).$$

Since $(\vec{e}_i, \vec{e}_j) = 0$ for $i \ne j$, in the double sum only the terms with equal indices are nonzero; therefore

$$(\vec{x}, \vec{y}) = \sum_{i=1}^{3} \lambda_i \mu_i \|\vec{e}_i\|^2. \tag{1.3}$$

In particular, for $\vec{y} = \vec{x}$, from (1.3) it follows that

$$\|\vec{x}\|^2 = \sum_{i=1}^{3} \lambda_i^2 \|\vec{e}_i\|^2. \tag{1.4}$$
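A small numerical sketch of formula (1.2), recovering the coordinates of a vector by projecting it onto an orthogonal basis (the basis and the vector are made-up examples):

```python
import numpy as np

# A made-up orthogonal (but not orthonormal) basis of R^3
e1 = np.array([1.0, 1.0, 0.0])
e2 = np.array([1.0, -1.0, 0.0])
e3 = np.array([0.0, 0.0, 2.0])

v = np.array([1.0, 3.0, 1.0])

# Formula (1.2): lambda_i = (v, e_i) / ||e_i||^2
coords = [np.dot(v, e) / np.dot(e, e) for e in (e1, e2, e3)]
print(coords)  # [2.0, -1.0, 0.5], i.e. v = 2*e1 - e2 + 0.5*e3
```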

§ 2. Inner product and norm of functions

Let us consider the set of functions that are piecewise continuous on the interval $[a, b]$, i.e., functions having on $[a, b]$ a finite number of discontinuity points of the first kind and continuous at all other points of this interval.

The scalar product of functions $f$ and $g$ on $[a, b]$ is the number

$$(f, g) = \int_a^b f(x)\, g(x)\, dx.$$
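A numerical sketch of this definition using SciPy quadrature; the choice of functions and interval is illustrative:

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a, b):
    """Scalar product (f, g): the integral of f(x) * g(x) over [a, b]."""
    value, _ = quad(lambda x: f(x) * g(x), a, b)
    return value

# sin and cos are orthogonal on [-pi, pi]: their scalar product is 0
print(inner(np.sin, np.cos, -np.pi, np.pi))  # ~0.0
# The scalar product of sin with itself on [-pi, pi] equals pi
print(inner(np.sin, np.sin, -np.pi, np.pi))  # ~3.14159
```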

Properties of the scalar product of functions completely coincide with the properties of the scalar product of vectors:

1. $(f, g) = (g, f)$.

2. $(\lambda f, g) = \lambda (f, g)$ for any number $\lambda$.

3. $(f_1 + f_2, g) = (f_1, g) + (f_2, g)$.

4. $(f, f) \ge 0$; $(f, f) = 0$ if $f(x) \equiv 0$.

Thus, the scalar product depends linearly on each of its arguments. This property is called the bilinearity of the scalar product.

Functions $f$ and $g$ are called orthogonal on $[a, b]$ if $(f, g) = 0$.

The norm of a function $f$ on the interval $[a, b]$ is the non-negative number $\|f\|$ whose square is equal to the scalar product of the function with itself:

$$\|f\|^2 = (f, f) = \int_a^b f^2(x)\, dx.$$

The properties of the norm of a function largely coincide with the properties of the modulus of a vector:

1. $\|f\| \ge 0$.

2. If a function $f$ is continuous on $[a, b]$ and $\|f\| = 0$, then $f(x) \equiv 0$ on $[a, b]$. Indeed, since $f^2(x) \ge 0$, for $a \le t \le b$ we have

$$0 \le \Phi(t) = \int_a^t f^2(x)\, dx \le \int_a^b f^2(x)\, dx = \|f\|^2 = 0,$$

so $\Phi(t) \equiv 0$. Differentiating the last relation with respect to $t$ and applying Barrow's theorem, we get $\Phi'(t) = f^2(t) = 0$ and, therefore, $f(x) \equiv 0$ on $[a, b]$.

3. Theorem of cosines:

$$\|f + g\|^2 = (f + g, f + g) = \|f\|^2 + 2(f, g) + \|g\|^2.$$

Consequence. If $(f, g) = 0$, then $\|f + g\|^2 = \|f\|^2 + \|g\|^2$ (the Pythagorean theorem).

4. Generalized Pythagorean theorem. If the functions $f_k$ ($k = 1, 2, \ldots, n$) are pairwise orthogonal on the interval $[a, b]$, then

$$\Big\| \sum_{k=1}^{n} f_k \Big\|^2 = \sum_{k=1}^{n} \|f_k\|^2.$$

Indeed, using the property of bilinearity of the scalar product, we obtain

$$\Big\| \sum_{k=1}^{n} f_k \Big\|^2 = \Big( \sum_{k=1}^{n} f_k,\ \sum_{m=1}^{n} f_m \Big) = \sum_{k=1}^{n} \sum_{m=1}^{n} (f_k, f_m).$$

Due to the orthogonality of the functions, the scalar products $(f_k, f_m) = 0$ for $k \ne m$; that is why

$$\Big\| \sum_{k=1}^{n} f_k \Big\|^2 = \sum_{k=1}^{n} (f_k, f_k) = \sum_{k=1}^{n} \|f_k\|^2.$$

5. The Cauchy–Bunyakovsky inequality: $|(f, g)| \le \|f\| \cdot \|g\|$, or, what is the same,

$$\left( \int_a^b f(x)\, g(x)\, dx \right)^2 \le \int_a^b f^2(x)\, dx \cdot \int_a^b g^2(x)\, dx.$$

Indeed, for any real $\lambda$,

$$\lambda^2 \|f\|^2 + 2\lambda (f, g) + \|g\|^2 = \|\lambda f + g\|^2 \ge 0.$$

Thus, the quadratic trinomial on the left side of the last inequality preserves its sign on the entire real axis; therefore, its discriminant is non-positive:

$$(f, g)^2 - \|f\|^2 \|g\|^2 \le 0.$$
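A quick numerical spot-check of the Cauchy–Bunyakovsky inequality for a made-up pair of functions on $[0, 1]$:

```python
import math
from scipy.integrate import quad

def inner(f, g, a=0.0, b=1.0):
    return quad(lambda x: f(x) * g(x), a, b)[0]

f = lambda x: x            # made-up functions on [0, 1]
g = lambda x: math.exp(x)

lhs = inner(f, g) ** 2             # (f, g)^2
rhs = inner(f, f) * inner(g, g)    # ||f||^2 * ||g||^2
print(lhs, rhs, lhs <= rhs)        # ~1.0  ~1.065  True
```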

Exercise 1. Prove properties 1-3 of the scalar product of functions.

Exercise 2. Show the validity of the following statements:

a) the function $f(x) \equiv 1$ is orthogonal to the functions $\sin kx$ and $\cos mx$ on the interval $[-\pi, \pi]$ for any integers $k$ and $m \ne 0$;

b) for any integers $k$ and $m$ the functions $\sin kx$ and $\cos mx$ are orthogonal on the interval $[-\pi, \pi]$;

c) the functions $\sin kx$ and $\sin mx$, and also $\cos kx$ and $\cos mx$, at $k \ne m$ are orthogonal on the intervals $[-\pi, \pi]$ and $[0, 2\pi]$;

d) the functions $\sin kx$ and $\cos mx$ are, generally speaking, not orthogonal on the interval $[0, \pi]$.
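A numerical spot-check of statement b) as reconstructed above (the interval $[-\pi, \pi]$ is assumed):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g, a, b):
    return quad(lambda x: f(x) * g(x), a, b)[0]

# (sin kx, cos mx) = 0 on [-pi, pi] for integer k, m
for k in range(1, 4):
    for m in range(1, 4):
        val = inner(lambda x: np.sin(k * x),
                    lambda x: np.cos(m * x), -np.pi, np.pi)
        assert abs(val) < 1e-10, (k, m, val)
print("all checked pairs are orthogonal")
```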

Exercise 3. Using property 5 of the norm, prove the triangle inequality

$$\|f + g\| \le \|f\| + \|g\|.$$

Let us now note some properties of the scalar product and the norm. Applying the Cauchy–Bunyakovsky inequality and taking into account the definition of the norm, we can write

$$|(x, y)| \le \|x\| \cdot \|y\|. \tag{128}$$

Let us now prove the so-called triangle rule

$$\|x + y\| \le \|x\| + \|y\|. \tag{129}$$

We have

$$\|x + y\|^2 = (x + y, x + y) = \|x\|^2 + (x, y) + (y, x) + \|y\|^2,$$

or, taking into account (128), we obtain

$$\|x + y\|^2 \le \|x\|^2 + 2\|x\| \cdot \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2,$$

whence (129) follows.

In conclusion of this topic, let us consider what effect the choice of coordinate system has on the metric of the space, that is, on the expression for the square of the length of a vector. Suppose that instead of the original Cartesian system we take a new coordinate system, choosing as the basic vectors some linearly independent vectors $\vec{g}_1, \vec{g}_2, \ldots, \vec{g}_n$.

For any vector $\vec{x}$ we will have

$$\vec{x} = \xi_1 \vec{g}_1 + \xi_2 \vec{g}_2 + \cdots + \xi_n \vec{g}_n,$$

where $\xi_i$ are its components in the new coordinate system.

The squared length of this vector is expressed by the scalar product of the vector with itself, i.e. $(\vec{x}, \vec{x})$.

Expanding this product according to the above formulas, we obtain the following expression for the square of the vector length:

$$(\vec{x}, \vec{x}) = \sum_{i=1}^{n} \sum_{k=1}^{n} a_{ik}\, \xi_i \bar{\xi}_k, \tag{130}$$

where the coefficients are determined by the formulas

$$a_{ik} = (\vec{g}_i, \vec{g}_k).$$

When the indices are interchanged, these coefficients obviously become conjugate, i.e.

$$a_{ki} = \bar{a}_{ik}. \tag{131}$$

A sum of the form (130) with coefficients satisfying condition (131) is usually called a Hermitian form. It is immediately clear that any expression of the form (130) under condition (131) takes only real values for all possible complex $\xi_i$, since for $i \ne k$ the two corresponding terms of the sum (130) are conjugate, and in the terms with $i = k$, due to condition (131), the coefficients are real. In addition, by the very construction of the Hermitian form in this case, we can state that the sum (130) is non-negative and vanishes only when all the $\xi_i$ are zero. Formula (130) determines the metric of the space in the new coordinate system.

The metric (130) will coincide with the metric (110) of the corresponding Cartesian system if $a_{ik} = 0$ for $i \ne k$ and $a_{ii} = 1$, i.e., in other words, if the vectors we take as the basic vectors are mutually orthogonal unit vectors (of length one).

In what follows, we will call any system of mutually orthogonal unit vectors an orthonormal system.

Note also that if formula (113) defines a unitary transformation of the components of a vector, then the corresponding transformation for the transition from the previous unit vectors to the new ones is given by the table contragredient to $U$. In this case, due to (123), this table coincides with the table $\bar{U}$, and for real orthogonal transformations it simply coincides with $U$.

Appendix

1. The scalar product of functions.

Let a system of functions that are square integrable on $[a, b]$ be given on the segment $[a, b]$:

$$u_0(x),\ u_1(x),\ u_2(x),\ \ldots,\ u_n(x),\ \ldots \tag{1}$$

Similarly to how the operation of the scalar product of vectors is introduced between elements of a vector space, assigning to a pair of vectors of the given space a number (a scalar), one can define between the elements of this system of functions $u_i(x)$, $u_j(x)$ the operation of the scalar product of functions, denoted below as $(u_i(x), u_j(x))$.

By definition, the operation of the scalar product between elements $x$, $y$, and $z$ of some space (including between elements of a system of functions) must have the following properties: $(x, y) = (y, x)$; $(\lambda x, y) = \lambda (x, y)$ for any number $\lambda$; $(x + y, z) = (x, z) + (y, z)$; $(x, x) \ge 0$, with $(x, x) = 0$ only for the zero element.

The scalar product between elements of the function space $u_i(x)$, $u_j(x)$, $i, j = 0, 1, 2, \ldots$, square integrable on $[a, b]$, is introduced by means of the integration operation:

$$(u_i(x), u_j(x)) = \int_a^b u_i(x)\, u_j(x)\, dx.$$

Definition 1. System (1) is an orthogonal system of functions on the segment $[a, b]$ if any two functions $u_i(x)$, $u_j(x)$, $i, j = 0, 1, 2, \ldots$, of the given system are orthogonal to each other on $[a, b]$.

Definition 2. Two functions $u_i(x)$, $u_j(x)$, $i, j = 0, 1, 2, \ldots$, of system (1) are called orthogonal on the segment $[a, b]$ if the following condition is satisfied for their scalar product:

$$(u_i(x), u_j(x)) = \begin{cases} 0, & i \ne j, \\ l_i^2 > 0, & i = j. \end{cases} \tag{4}$$

The number $l_i = \sqrt{(u_i(x), u_i(x))}$ is called the norm of the function $u_i(x)$.

If all functions $u_i(x)$ have unit norm, i.e.

$$l_i = 1, \quad i = 0, 1, 2, \ldots \tag{5}$$

and the system of functions (1) is orthogonal on $[a, b]$, then such a system is called an orthonormal, or normalized orthogonal, system on the segment $[a, b]$.

If the normality conditions are not satisfied from the outset, one can, if necessary, pass from system (1) to system (6), which is guaranteed to be normalized:

$$\frac{u_i(x)}{l_i}, \quad i = 0, 1, 2, \ldots \tag{6}$$
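A minimal sketch of the passage from system (1) to the normalized system (6): each function is divided by its norm (the example function is illustrative):

```python
import numpy as np
from scipy.integrate import quad

def norm(f, a, b):
    """l = sqrt((f, f)) -- the norm of f on [a, b]."""
    return np.sqrt(quad(lambda x: f(x) ** 2, a, b)[0])

def normalize(f, a, b):
    """Pass from u_i to u_i / l_i as in (6); the result has unit norm."""
    l = norm(f, a, b)
    return lambda x: f(x) / l

u = normalize(np.sin, -np.pi, np.pi)  # sin(x) / sqrt(pi)
print(norm(u, -np.pi, np.pi))         # ~1.0
```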

Note that the orthogonality of the elements of a system implies their linear independence; i.e., the following statement is true: any orthogonal system of non-zero vectors (elements) is linearly independent.

2. The concept of basis functions.

From the linear algebra course you know that in a vector space one can introduce a vector basis: a set of vectors such that any vector of the given vector space can be represented in a unique way as a linear combination of the basis vectors. At the same time, none of the basis vectors can be represented as a finite linear combination of the remaining basis vectors (linear independence of the basis vectors).

So, for example, any vector $\vec{d}$ of three-dimensional space can be uniquely represented as a linear combination of the basis vectors $\vec{i}$, $\vec{j}$, $\vec{k}$:

$$\vec{d} = a\,\vec{i} + b\,\vec{j} + c\,\vec{k},$$

where $a$, $b$, and $c$ are some numbers. Due to the linear independence (orthogonality) of the basis vectors, none of them individually can be represented as a linear combination of the remaining basis vectors.

Similarly to the above, in the space of polynomial functions, i.e., in the space of polynomials of degree at most $n$:

$$P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n, \tag{7}$$

a basis can be introduced from the elementary power functions

$$x^0,\ x,\ x^2,\ x^3,\ \ldots,\ x^n. \tag{8}$$

Moreover, it is obvious that the basis functions (8) are linearly independent, i.e., none of the basis functions (8) can be represented as a linear combination of the remaining ones. It is also obvious that any polynomial of degree at most $n$ can be uniquely represented in the form (7), i.e., as a linear combination of the basis functions (8).

$$\varphi_i(x) = g_i (x - a)^i + (x - a)^{i+1}, \quad i = 1, 2, \ldots, n. \tag{9}$$

A partial explanation for this is given by the well-known Weierstrass theorem of mathematical analysis, according to which any function $f(x)$ continuous on the interval $[a, b]$ can be approximated arbitrarily well on this segment by some polynomial $P_n(x)$ of degree $n$: by increasing the degree $n$ of the polynomial $P_n(x)$, one can always fit it as closely as desired to the continuous function $f(x)$.

Since any polynomial can be represented as a linear combination of basis polynomial functions of type (8) or (9), it follows from Weierstrass's theorem that a continuous function (for example, a twice differentiable function that is a solution of a second-order differential equation) can be approximated by a linear combination of the basis functions (9), which are twice differentiable and pairwise linearly independent.
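A small numerical illustration of the Weierstrass idea using NumPy's least-squares polynomial fit; the target function, interval, and degrees are arbitrary choices:

```python
import numpy as np

f = np.cos                        # a continuous function on [0, 2]
x = np.linspace(0.0, 2.0, 200)

for n in (2, 4, 8):
    coeffs = np.polyfit(x, f(x), n)              # degree-n polynomial fit
    err = np.max(np.abs(np.polyval(coeffs, x) - f(x)))
    print(n, err)                 # the error shrinks as the degree grows
```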


Questions on the topic "Methods for the approximate solution of boundary value problems for ordinary differential equations" (Lectures 25-26)

1. Basic definitions: statement of the linear boundary value problem for a second-order ODE; types and classification of boundary value problems.

2. Methods for reducing boundary value problems to initial value problems: problem statement; the shooting method; the reduction method; the differential sweep method.

3. The finite difference method: problem statement; universality of the finite difference method for solving boundary value problems; choice of derivative approximations to reduce the boundary value problem to a system of linear algebraic equations (SLAE) with a tridiagonal matrix. (A minimal numerical sketch follows this list.)

4. The interpolation method, or collocation method: search for an approximate solution in the form of a linear combination of basis functions, and the requirements on the basis functions to satisfy the boundary conditions; determination of the coefficients of the linear combination from the condition that the exact and approximate solutions coincide at the collocation nodes; the choice of basis functions.

5. The Galerkin method: basic concepts of the theory of the Galerkin method; search for an approximate solution in the form of a linear combination of basis functions, and the requirements on the basis functions; selection of the coefficients of the linear combination that determines the approximate solution from the condition of minimizing the residual that arises when the exact solution of the differential problem is replaced by the desired approximate one.
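As promised in item 3, here is a minimal finite difference sketch for the model problem $u'' = f(x)$, $u(a) = u(b) = 0$; the model problem and grid size are assumptions for illustration, not taken from the lectures:

```python
import numpy as np

def solve_bvp_fd(f, a, b, n):
    """Solve u'' = f(x) on [a, b], u(a) = u(b) = 0, by central differences.

    The approximation (u[i-1] - 2u[i] + u[i+1]) / h^2 = f(x[i]) at the
    interior nodes gives a linear system with a tridiagonal matrix.
    """
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    main = -2.0 * np.ones(n - 1)          # main diagonal
    off = np.ones(n - 2)                  # sub- and super-diagonals
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h ** 2
    u = np.zeros(n + 1)                   # boundary values stay zero
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u

# Model check: u'' = -sin(x), u(0) = u(pi) = 0 has exact solution sin(x)
x, u = solve_bvp_fd(lambda t: -np.sin(t), 0.0, np.pi, 50)
print(np.max(np.abs(u - np.sin(x))))      # small discretization error
```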


