Spectral decomposition of a stationary random function.

Let us consider the connection between the character of the correlation function and the structure of the corresponding random process.

We will use the concept of a "spectrum", which is widely employed not only in the theory of random functions but also in physics and engineering. If an oscillatory process is represented as a sum of harmonic oscillations of various frequencies (the so-called "harmonics"), then the spectrum of the oscillatory process is the function describing how the amplitudes are distributed over the various frequencies. The spectrum shows which kinds of oscillations predominate in a given process and what its internal structure is. We will introduce a spectral description of a stationary random process in a similar way.

First, consider a stationary random function X(t) observed on a finite interval (0, T), and let its correlation function be given:

$$K_x(t,\,t+\tau) = k_x(\tau).$$

We know that k_x(τ) is an even function, so its graph is a curve symmetric about the ordinate axis.



As t₁ and t₂ vary from 0 to T, the argument τ = t₂ − t₁ varies from −T to T.

It is known that an even function on the interval (−T, T) can be expanded in a Fourier series containing only the even (cosine) harmonics:

$$k_x(\tau) = \sum_{k=0}^{\infty} D_k \cos\omega_k\tau, \qquad \omega_k = k\,\omega_1, \quad \omega_1 = \frac{\pi}{T},$$

and the coefficients D_k are determined by the formulas

$$D_0 = \frac{1}{2T}\int_{-T}^{T} k_x(\tau)\,d\tau, \qquad D_k = \frac{1}{T}\int_{-T}^{T} k_x(\tau)\cos\omega_k\tau\,d\tau \quad (k \neq 0).$$

Considering that the functions k_x(τ) and cos ω_kτ are even, the expressions for the coefficients can be transformed as follows:

$$D_0 = \frac{1}{T}\int_{0}^{T} k_x(\tau)\,d\tau, \qquad D_k = \frac{2}{T}\int_{0}^{T} k_x(\tau)\cos\omega_k\tau\,d\tau \quad (k \neq 0). \tag{1}$$

It can be shown that, in this notation, the random function itself can be represented as the canonical expansion

$$X(t) = m_x + \sum_{k=0}^{\infty}\left(U_k\cos\omega_k t + V_k\sin\omega_k t\right), \tag{2}$$

where U_k, V_k are uncorrelated random variables with mathematical expectations equal to zero and with variances that coincide for each pair of random variables with the same index k: D(U_k) = D(V_k) = D_k, the variances D_k being determined by formulas (1).

Expansion (2) is called the spectral decomposition of a stationary random function.

The spectral decomposition represents a stationary random function as decomposed into harmonic oscillations of various frequencies ω₁, ω₂, …, ω_k, …, the amplitudes of these oscillations being random variables.
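To make the expansion concrete, here is a minimal numerical sketch of (2) in Python. The particular variance spectrum D_k and the normal distribution of the amplitudes are illustrative assumptions; the construction only requires uncorrelated zero-mean U_k, V_k with D(U_k) = D(V_k) = D_k.

```python
import numpy as np

# Sketch of the canonical (spectral) expansion (2):
# X(t) = m_x + sum_k (U_k cos w_k t + V_k sin w_k t).
rng = np.random.default_rng(0)

T = 10.0                       # observation interval (0, T)
w1 = np.pi / T                 # fundamental frequency
K = 50                         # number of harmonics kept
wk = w1 * np.arange(K)         # frequencies w_k = k * w_1
Dk = np.exp(-0.5 * wk)         # assumed variance spectrum D_k
m_x = 0.0

t = np.linspace(0.0, T, 500)

def realization():
    """Draw one realization of X(t) from the spectral decomposition."""
    U = rng.normal(0.0, np.sqrt(Dk))   # amplitudes of cosine harmonics
    V = rng.normal(0.0, np.sqrt(Dk))   # amplitudes of sine harmonics
    return m_x + (U[:, None] * np.cos(np.outer(wk, t))
                  + V[:, None] * np.sin(np.outer(wk, t))).sum(axis=0)

# Across many realizations the pointwise variance should be sum(Dk).
xs = np.array([realization() for _ in range(2000)])
print("sum D_k =", Dk.sum(), " empirical var =", xs.var(axis=0).mean())
```

Averaged over many realizations, the empirical variance reproduces the sum of the D_k, in line with formula (3) below.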



The variance of the random function given by spectral decomposition (2) is determined by the formula

$$D_x = k_x(0) = \sum_{k=0}^{\infty} D_k\cos(\omega_k\cdot 0) = \sum_{k=0}^{\infty} D_k, \tag{3}$$

i.e. the variance of a stationary random function is equal to the sum of the variances of all the harmonics of its spectral decomposition.

Formula (3) shows that the variance of the function is distributed in a definite way over the different frequencies: some frequencies carry larger variances, others smaller ones. The distribution of variance over frequency can be illustrated graphically in the form of a so-called variance spectrum. To construct it, the frequencies ω₀ = 0, ω₁, ω₂, …, ω_k, … are plotted along the abscissa axis, and the corresponding variances along the ordinate axis.


Obviously, the sum of all ordinates of the spectrum constructed in this way is equal to the variance of the random function.

Clearly, the larger the period of time we consider when constructing the spectral decomposition, the more complete our information about the random function will be. It is therefore natural to pass to the limit as T → ∞ in the spectral decomposition and see what the spectrum of the random function turns into. As T → ∞, ω₁ = π/T → 0, so the distances between the frequencies ω_k decrease indefinitely. The discrete spectrum will then approach a continuous spectrum, in which each arbitrarily small frequency interval corresponds to an elementary variance.

Let us depict the continuous spectrum graphically. To do this, we plot along the ordinate axis not the variance D_k itself but the average variance density, i.e. the variance per unit length of the corresponding frequency interval. Denote the distance between adjacent frequencies by ∆ω and, on each segment ∆ω as a base, construct a rectangle of area D_k. We obtain a step diagram that resembles a histogram of a statistical distribution.


As the interval T increases, ∆ω decreases and the step diagram approaches a smooth curve. This curve depicts the density of the distribution of variance over the frequencies of the continuous spectrum, and the function S_x(ω) itself is called the spectral variance density, or simply the spectral density, of the stationary random function.

Obviously, the area enclosed by the curve S_x(ω) must still be equal to the variance D_x of the random function:

$$D_x = \int_0^{\infty} S_x(\omega)\,d\omega. \tag{4}$$

Formula (4) is the decomposition of the variance D_x into a sum of elementary terms S_x(ω) dω, each of which represents the variance falling on the elementary frequency interval dω adjacent to the point ω.

Thus a new, additional characteristic of a stationary random process has been introduced: the spectral density, which describes the frequency composition of the stationary process. It is not, however, an independent characteristic; it is completely determined by the correlation function of the process. The corresponding formula, which follows from the expansion of the correlation function k_x(τ) into a Fourier series on a finite interval by passing to the limit, looks like this:

$$S_x(\omega) = \frac{2}{\pi}\int_0^{\infty} k_x(\tau)\cos\omega\tau\,d\tau. \tag{5}$$

The correlation function itself can in turn be expressed through the spectral density:

$$k_x(\tau) = \int_0^{\infty} S_x(\omega)\cos\omega\tau\,d\omega. \tag{6}$$

Pairs of formulas such as (5) and (6), which express two functions through one another, are called Fourier transforms.

Note that setting τ = 0 in the general formula (6) yields the variance decomposition (4) obtained earlier.
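The pair (5)-(6) can be checked numerically. The sketch below assumes the exponential correlation function k_x(τ) = D℮^{−α|τ|}, chosen only because its spectral density is known in closed form, and assumes scipy is available.

```python
import numpy as np
from scipy.integrate import quad

# Check of the transform pair (5)-(6) for the assumed correlation
# function k_x(tau) = D*exp(-alpha*|tau|), whose spectral density is
# known to be S_x(w) = (2/pi)*D*alpha/(alpha**2 + w**2).
D, alpha = 2.0, 1.5
w, tau = 0.7, 0.4

# formula (5): S_x(w) = (2/pi) * int_0^inf k_x(tau) cos(w*tau) dtau
S_w, _ = quad(lambda s: D * np.exp(-alpha * s), 0, np.inf,
              weight='cos', wvar=w)
S_w *= 2.0 / np.pi
print(S_w, 2/np.pi * D*alpha/(alpha**2 + w**2))   # should agree

# formula (6): recover k_x(tau) from the closed-form S_x on a grid
wg = np.linspace(0.0, 200.0, 200_001)
S = 2/np.pi * D*alpha/(alpha**2 + wg**2)
k_rec = np.trapz(S * np.cos(wg * tau), wg)
print(k_rec, D * np.exp(-alpha * tau))            # should agree approximately
```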

In practice, instead of the spectral density S_x(ω), the normalized spectral density is often used:

$$s_x(\omega) = \frac{S_x(\omega)}{D_x},$$

where D_x is the variance of the random function.

It is easy to verify that the normalized correlation function ρ_x(τ) and the normalized spectral density s_x(ω) are related by the same Fourier transforms:

$$\rho_x(\tau) = \int_0^{\infty} s_x(\omega)\cos\omega\tau\,d\omega, \qquad s_x(\omega) = \frac{2}{\pi}\int_0^{\infty}\rho_x(\tau)\cos\omega\tau\,d\tau.$$

Setting τ = 0 in the first of these equalities and taking into account that ρ_x(0) = 1, we obtain

$$\int_0^{\infty} s_x(\omega)\,d\omega = 1,$$

i.e. the total area bounded by the graph of the normalized spectral density is equal to one.

§ 7. Ergodic property of stationary random functions.

Consider a stationary random function X(t) and suppose that its characteristics must be estimated: the mathematical expectation m_x and the correlation function k_x(τ). These characteristics, or rather their estimates, can, as already mentioned, be obtained from experiment if a certain number of realizations of the random function X(t) is available. Because of the limited number of observations, the estimated mean function will not be strictly constant; it will have to be averaged and replaced by a constant. Similarly, averaging the estimates over pairs of arguments with the same difference τ = t₂ − t₁, we obtain an estimate of the correlation function.

This method of processing is obviously rather complex and cumbersome and, moreover, consists of two stages: an approximate determination of the characteristics of the random function, and then an approximate averaging of these characteristics. The question naturally arises: for a stationary random function, can this process be replaced by a simpler one that relies from the outset on the assumption that the mathematical expectation does not depend on time and the correlation function does not depend on the choice of origin?

In addition, another question arises: when processing observations of a stationary random function, is it essential to have several realizations? Since the random process is stationary and proceeds uniformly in time, it is natural to suppose that a single realization of sufficient duration can serve as adequate material for obtaining the characteristics of the random function.

It turns out that such a possibility exists, but not for all random processes. For example, consider two stationary random functions, each represented by a set of its realizations.

Fig.1
Fig.2

The random function X₁(t) (Fig. 1) has the following feature: each of its realizations possesses the same characteristic traits, namely the average value around which the oscillations occur and the average range of these oscillations. Let us choose one of these realizations arbitrarily and mentally continue, for some period of time T, the experiment that produced it. Obviously, for sufficiently large T, this single realization can give us a reasonably good idea of the properties of the random function as a whole. In particular, by averaging the values of this realization along the abscissa axis, that is, over time, we should obtain an approximate value of the mathematical expectation of the random function; by averaging the squared deviations from this average, we should obtain an approximate value of the variance, and so on.

Such a function is said to possess the ergodic property. The ergodic property consists in the fact that each individual realization of the random function is, as it were, an "authorized representative" of the entire set of possible realizations.

If we consider the function X₂(t) (Fig. 2), it is obvious that the average value of each realization is different and differs significantly from the others. Therefore, a single average value constructed over all realizations will differ significantly from that of each individual realization.

If a random function X(t) possesses the ergodic property, then its time average (over a sufficiently large observation interval) is approximately equal to its average over the set of realizations. The same holds for X²(t), X(t)X(t + τ), and so on. In particular, for sufficiently large T the mathematical expectation m_x can be computed from the formula

$$m_x \approx \frac{1}{T}\int_0^{T} x(t)\,dt. \tag{1}$$

In this formula, for simplicity, the sign ~ marking an estimate of a characteristic is omitted; strictly speaking, we are dealing not with the characteristics themselves but with their estimates.

Similarly, the correlation function k_x(τ) can be found for any τ. Since

$$k_x(\tau) = M\!\left[\mathring{X}(t)\,\mathring{X}(t+\tau)\right],$$

computing this average over time for a given τ, we obtain

$$k_x(\tau) \approx \frac{1}{T-\tau}\int_0^{T-\tau} \mathring{x}(t)\,\mathring{x}(t+\tau)\,dt, \tag{2}$$

where x̊(t) = x(t) − m_x is the centered realization. Having computed the integral (2) for a number of values of τ, one can approximately reproduce the course of the correlation function point by point.

In practice the above integrals are usually replaced by finite sums. This is done as follows. Divide the recording interval of the random function into n equal parts of length ∆t and denote the midpoints of the resulting subintervals by t₁, t₂, …, t_n.



Let us represent the integral (1) as a sum of integrals over the elementary intervals ∆t and, on each of them, take the function x(t) out from under the integral sign, replacing it by its value x(t_i) at the midpoint of the interval. We obtain approximately

$$m_x \approx \frac{1}{n}\sum_{i=1}^{n} x(t_i).$$

Similarly, the correlation function can be computed for the values τ = 0, ∆t, 2∆t, … . Let us give τ, for example, the value

$$\tau = m\,\Delta t \quad (m = 0,\,1,\,2,\,\dots).$$

Computing the integral (2) by dividing the integration interval T − τ = (n − m)∆t into n − m equal parts of length ∆t and taking the function out of the integral sign on each of them at its midpoint value, we obtain

$$k_x(m\,\Delta t) \approx \frac{1}{n-m}\sum_{i=1}^{n-m} \mathring{x}(t_i)\,\mathring{x}(t_{i+m}).$$

The correlation function is computed from this formula for m = 0, 1, 2, …, successively, up to values of m at which it becomes practically zero or begins to perform small irregular oscillations around zero. The general course of the function k_x(τ) is thus reproduced at individual points.


For the characteristics to be determined with satisfactory accuracy, the number of points n must be rather large (of the order of 100, and in some cases more). The choice of the length of the elementary interval ∆t is dictated by the character of variation of the random function: if it varies relatively smoothly, ∆t can be chosen larger than when it performs sharp and frequent oscillations. As a rough guide, one may recommend choosing the elementary interval so that a full period of the highest-frequency harmonic present in the random function contains about 5-10 sample points.
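The finite-sum recipe above is easy to put into code. In the sketch below the single realization is simulated from an assumed first-order autoregressive model (used only to have data with a known correlation function); the estimators themselves are exactly the sums described in the text.

```python
import numpy as np

# Finite-sum estimates of m_x and k_x(tau) from a single realization.
# The AR(1) model below has k_x(tau) = exp(-alpha*|tau|) (assumption
# made only to generate data; the estimators do not depend on it).
rng = np.random.default_rng(1)

n, dt, alpha = 2000, 0.05, 1.0
rho = np.exp(-alpha * dt)                 # one-step correlation
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

m_hat = x.mean()                          # estimate of m_x by the finite sum
xc = x - m_hat                            # centered realization

def k_hat(m):
    """Estimate k_x(m*dt): mean product of centered values m steps apart."""
    return np.mean(xc[:n - m] * xc[m:])

for m in range(5):
    print(f"tau={m*dt:.2f}  k_hat={k_hat(m):+.3f}  true={np.exp(-alpha*m*dt):+.3f}")
```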

Solution of typical problems

1. a) Given the random function X(t) = (t³ + 1)U, where U is a random variable whose values belong to the interval (0; 10). Find the realizations of X(t) in two trials in which U took the values u₁ = 2, u₂ = 3.

Solution. Since a realization of the random function X(t) is the non-random function of the argument t that results from a trial, for the given values of U the corresponding realizations of the random function are

x₁(t) = 2(t³ + 1),  x₂(t) = 3(t³ + 1).

b) Given the random function X(t) = U sin t, where U is a random variable.

Find the sections of X(t) corresponding to the fixed argument values t₁ = π/6 and t₂ = π/2.

Solution. Since a section of the random function X(t) is the random variable corresponding to a fixed value of the argument, for the given values of the argument the corresponding sections are

X₁ = U·sin(π/6) = U/2,  X₂ = U·sin(π/2) = U.

2. Find the mathematical expectation of the random function X(t) = U℮^t, where U is a random variable with M(U) = 5.

Solution. Recall that the mathematical expectation of a random function X(t) is the non-random function m_x(t) = M[X(t)] which, for each value of the argument t, is equal to the mathematical expectation of the corresponding section of the random function. Hence

m_x(t) = M[X(t)] = M[U℮^t] = ℮^t·M(U) = 5℮^t.
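A quick Monte Carlo check of this result is possible; the uniform distribution on (0, 10) assumed below is arbitrary, since only M(U) = 5 enters the answer.

```python
import numpy as np

# Check of problem 2: for X(t) = U*exp(t) with M(U) = 5, the
# expectation should be m_x(t) = 5*exp(t).
rng = np.random.default_rng(2)

U = rng.uniform(0.0, 10.0, size=200_000)   # assumed distribution, M(U) = 5
for t in (0.0, 0.5, 1.0):
    print(t, np.mean(U * np.exp(t)), 5 * np.exp(t))
```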

3. Find the mathematical expectation of the random function a) X(t) = Ut² + 2t + 1; b) X(t) = U sin 4t + V cos 4t, where U and V are random variables with M(U) = M(V) = 1.

Solution. Using the properties of the mathematical expectation of a random function, we have

a) m_x(t) = M(Ut² + 2t + 1) = M(U)t² + 2t + 1 = t² + 2t + 1;

b) m_x(t) = M(U sin 4t + V cos 4t) = M(U)·sin 4t + M(V)·cos 4t = sin 4t + cos 4t.

4. The correlation function K_x of a random function X(t) is known. Find the correlation function of the random function Y(t) = X(t) + t², using the definitions of the mathematical expectation and the correlation function.

Solution. Find the mathematical expectation of the random function Y(t):

m_y(t) = M[Y(t)] = M[X(t) + t²] = m_x(t) + t².

Find the centered function:

Y̊(t) = Y(t) − m_y(t) = [X(t) + t²] − [m_x(t) + t²] = X(t) − m_x(t) = X̊(t).

Hence

K_y = M[Y̊(t₁)Y̊(t₂)] = M[X̊(t₁)X̊(t₂)] = K_x.

5. The correlation function K_x of a random function X(t) is known. Find the correlation functions of the random functions a) Y(t) = X(t)·(t + 1); b) Z(t) = C·X(t), where C is a constant.

Solution. a) Find the mathematical expectation of Y(t):

m_y(t) = M[Y(t)] = M[X(t)·(t + 1)] = (t + 1)·M[X(t)].

Find the centered function:

Y̊(t) = Y(t) − m_y(t) = X(t)·(t + 1) − (t + 1)·M[X(t)] = (t + 1)·(X(t) − M[X(t)]) = (t + 1)·X̊(t).

Now find the correlation function:

K_y = M[Y̊(t₁)Y̊(t₂)] = (t₁ + 1)(t₂ + 1)·M[X̊(t₁)X̊(t₂)] = (t₁ + 1)(t₂ + 1)K_x.

b) Similarly to case a) it can be shown that

K_z = C²K_x.

6. The variance D_x(t) of a random function X(t) is known. Find the variance of the random function Y(t) = X(t) + 2.

Solution. Adding a non-random term to a random function does not change its correlation function:

K_y(t₁, t₂) = K_x(t₁, t₂).

We know that K_x(t, t) = D_x(t); therefore

D_y(t) = K_y(t, t) = K_x(t, t) = D_x(t).

7. The variance D_x(t) of a random function X(t) is known. Find the variance of the random function Y(t) = (t + 3)·X(t).

Solution. Find the mathematical expectation of Y(t):

m_y(t) = M[Y(t)] = M[X(t)·(t + 3)] = (t + 3)·M[X(t)].

Find the centered function:

Y̊(t) = Y(t) − m_y(t) = X(t)·(t + 3) − (t + 3)·M[X(t)] = (t + 3)·X̊(t).

Find the correlation function:

K_y = M[Y̊(t₁)Y̊(t₂)] = (t₁ + 3)(t₂ + 3)K_x.

Now find the variance:

D_y(t) = K_y(t, t) = (t + 3)(t + 3)K_x(t, t) = (t + 3)²D_x(t).


8. Given the random function X(t) = U cos 2t, where U is a random variable with M(U) = 5 and D(U) = 6. Find the mathematical expectation, the correlation function and the variance of the random function X(t).

Solution. Find the required mathematical expectation by taking the non-random factor cos 2t out of the expectation sign:

M[X(t)] = M[U cos 2t] = cos 2t·M(U) = 5 cos 2t.

Find the centered function:

X̊(t) = X(t) − m_x(t) = U cos 2t − 5 cos 2t = (U − 5) cos 2t.

Find the desired correlation function:

K_x(t₁, t₂) = M[X̊(t₁)X̊(t₂)] = M{[(U − 5) cos 2t₁][(U − 5) cos 2t₂]} = cos 2t₁ cos 2t₂·M[(U − 5)²].

Further, taking into account that for the random variable U the variance is by definition D(U) = M[(U − M(U))²] = M[(U − 5)²], we get M[(U − 5)²] = 6. Therefore, for the correlation function we finally have

K_x(t₁, t₂) = 6 cos 2t₁ cos 2t₂.

Now find the required variance by setting t₁ = t₂ = t:

D_x(t) = K_x(t, t) = 6 cos² 2t.
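The result can again be verified by simulation; the normal distribution assumed for U is arbitrary, since only M(U) and D(U) enter the answer.

```python
import numpy as np

# Check of problem 8: X(t) = U*cos(2t) with M(U) = 5, D(U) = 6
# should give K_x(t1, t2) = 6*cos(2*t1)*cos(2*t2).
rng = np.random.default_rng(3)

U = rng.normal(5.0, np.sqrt(6.0), size=500_000)   # assumed distribution
t1, t2 = 0.3, 1.1
X1, X2 = U * np.cos(2*t1), U * np.cos(2*t2)
K_emp = np.mean((X1 - X1.mean()) * (X2 - X2.mean()))
print(K_emp, 6 * np.cos(2*t1) * np.cos(2*t2))
```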

9. The correlation function K_x(t₁, t₂) = t₁t₂ is given. Find the normalized correlation function.

Solution. By definition, the normalized correlation function is

ρ_x(t₁, t₂) = K_x(t₁, t₂)/(σ_x(t₁)·σ_x(t₂)) = t₁t₂/(|t₁|·|t₂|) = t₁t₂/|t₁t₂|.

The sign of this expression depends on whether the arguments t₁ and t₂ have identical or different signs. The denominator is always positive, so we finally have ρ_x(t₁, t₂) = 1 if t₁ and t₂ have the same sign, and ρ_x(t₁, t₂) = −1 otherwise.

10. The mathematical expectation m_x(t) = t² + 4 of a random function X(t) is given. Find the mathematical expectation of the random function Y(t) = tX′(t) + t².

Solution. The mathematical expectation of the derivative of a random function is equal to the derivative of its mathematical expectation. Therefore

m_y(t) = M(Y(t)) = M(tX′(t) + t²) = t·M(X′(t)) + t² = t·(m_x(t))′ + t² = t·(t² + 4)′ + t² = 2t² + t² = 3t².

11. The correlation function K_x = ℮^{−(t₂−t₁)²} of a random function X(t) is given. Find the correlation function of its derivative.

Solution. To find the correlation function of the derivative, the correlation function of the original random function must be differentiated twice, first with respect to one argument, then with respect to the other:

$$\frac{\partial K_x}{\partial t_1} = 2(t_2 - t_1)\,e^{-(t_2 - t_1)^2},$$

$$K_{\dot{x}} = \frac{\partial^2 K_x}{\partial t_1\,\partial t_2} = 2\,e^{-(t_2 - t_1)^2} - 4(t_2 - t_1)^2 e^{-(t_2 - t_1)^2} = 2\left[1 - 2(t_2 - t_1)^2\right]e^{-(t_2 - t_1)^2}.$$
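The double differentiation is easy to verify symbolically, assuming the kernel written above; sympy is assumed available.

```python
import sympy as sp

# Symbolic check of the mixed second derivative for the assumed
# kernel K_x = exp(-(t2 - t1)**2).
t1, t2 = sp.symbols('t1 t2', real=True)
K_x = sp.exp(-(t2 - t1)**2)

K_dx = sp.simplify(sp.diff(K_x, t1, t2))   # d^2 K_x / (dt1 dt2)
print(K_dx)   # expected: (2 - 4*(t1 - t2)**2)*exp(-(t1 - t2)**2)
```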


12. Given the random function X(t) = U℮^{3t} cos 2t, where U is a random variable with M(U) = 4 and D(U) = 1. Find the mathematical expectation and the correlation function of its derivative.

Solution. m_x(t) = M(U℮^{3t} cos 2t) = M(U)·℮^{3t} cos 2t = 4℮^{3t} cos 2t.

m_ẋ(t) = (m_x(t))′ = 4(3℮^{3t} cos 2t − 2℮^{3t} sin 2t) = 4℮^{3t}(3 cos 2t − 2 sin 2t).

Find the correlation function of the original random function. The centered random function is

X̊(t) = X(t) − m_x(t) = U℮^{3t} cos 2t − 4℮^{3t} cos 2t = (U − 4)℮^{3t} cos 2t,

so

K_x(t₁, t₂) = M[X̊(t₁)X̊(t₂)] = ℮^{3(t₁+t₂)} cos 2t₁ cos 2t₂·M[(U − 4)²] = ℮^{3(t₁+t₂)} cos 2t₁ cos 2t₂·D(U) = ℮^{3(t₁+t₂)} cos 2t₁ cos 2t₂.

Find the partial derivative of the correlation function with respect to the first argument:

∂K_x/∂t₁ = ℮^{3(t₁+t₂)} cos 2t₂·(3 cos 2t₁ − 2 sin 2t₁).

Find the second mixed derivative of the correlation function:

K_ẋ = ∂²K_x/∂t₁∂t₂ = ℮^{3(t₁+t₂)}(3 cos 2t₁ − 2 sin 2t₁)(3 cos 2t₂ − 2 sin 2t₂).


13. Given a random function X(t) with mathematical expectation m_x(t) = 3t² + 1. Find the mathematical expectation of the random function Y(t) = ∫₀ᵗ X(s) ds.

Solution. The required mathematical expectation is

m_y(t) = ∫₀ᵗ m_x(s) ds = ∫₀ᵗ (3s² + 1) ds = t³ + t.

14. Find the mathematical expectation of the integral Y(t) = ∫₀ᵗ X(s) ds, knowing the mathematical expectation of the random function X(t): a) m_x(t) = t − cos 2t; b) m_x(t) = 4 cos² t.

Solution. a) m_y(t) = ∫₀ᵗ (s − cos 2s) ds = t²/2 − (sin 2t)/2.

b) Since 4 cos² t = 2 + 2 cos 2t,

m_y(t) = ∫₀ᵗ (2 + 2 cos 2s) ds = 2t + sin 2t.


15. Given the random function X(t) = U℮^{2t} cos 3t, where U is a random variable with M(U) = 5. Find the mathematical expectation of the integral Y(t) = ∫₀ᵗ X(s) ds.

Solution. First find the mathematical expectation of the random function itself:

m_x(t) = M(U℮^{2t} cos 3t) = M(U)·℮^{2t} cos 3t = 5℮^{2t} cos 3t.

Then m_y(t) = 5∫₀ᵗ ℮^{2s} cos 3s ds. Integrating by parts twice leads back to the original integral (a "circular" integral):

$$\int_0^t e^{2s}\cos 3s\,ds = \frac{1}{3}e^{2t}\sin 3t + \frac{2}{9}\left(e^{2t}\cos 3t - 1\right) - \frac{4}{9}\int_0^t e^{2s}\cos 3s\,ds,$$

whence

$$\int_0^t e^{2s}\cos 3s\,ds = \frac{1}{13}\left[e^{2t}\left(3\sin 3t + 2\cos 3t\right) - 2\right].$$

Finally, m_y(t) = (5/13)[℮^{2t}(3 sin 3t + 2 cos 3t) − 2].

16. Find the mathematical expectation of the integral Y(t) = ∫₀ᵗ X(s) ds, knowing the random function X(t) = U℮^{3t} sin t, where U is a random variable with M(U) = 2.

Solution. Find the mathematical expectation of the random function itself:

m_x(t) = M(U℮^{3t} sin t) = M(U)·℮^{3t} sin t = 2℮^{3t} sin t.

Then m_y(t) = 2∫₀ᵗ ℮^{3s} sin s ds. Integrating by parts twice, as in the previous problem, gives

$$\int_0^t e^{3s}\sin s\,ds = \frac{1}{10}\left[e^{3t}\left(3\sin t - \cos t\right) + 1\right].$$

Finally, m_y(t) = (1/5)[℮^{3t}(3 sin t − cos t) + 1].


17. Given a random function X(t) with correlation function K_x(t₁, t₂) = t₁t₂. Find the correlation function of the integral Y(t) = ∫₀ᵗ X(s) ds.

Solution. The correlation function of the integral is equal to the double integral of the given correlation function:

$$K_y(t_1, t_2) = \int_0^{t_1}\!\!\int_0^{t_2} s_1 s_2\,ds_2\,ds_1 = \frac{t_1^2}{2}\cdot\frac{t_2^2}{2} = \frac{t_1^2 t_2^2}{4}.$$

Then the variance is D_y(t) = K_y(t, t) = t⁴/4.
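As a sanity check, note that K_x(t₁, t₂) = t₁t₂ is realized, for instance, by the assumed model X(t) = Ut with M(U) = 0 and D(U) = 1, for which the integral can be computed exactly.

```python
import numpy as np

# Check of problem 17 via the assumed model X(t) = U*t with
# M(U) = 0, D(U) = 1, which has K_x(t1, t2) = t1*t2. Then
# Y(t) = int_0^t X(s) ds = U*t**2/2 and D_y(t) = t**4/4.
rng = np.random.default_rng(4)

U = rng.normal(0.0, 1.0, size=400_000)
t = 1.5
Y = U * t**2 / 2          # exact integral of U*s from 0 to t
print(Y.var(), t**4 / 4)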

18. The correlation function K_x(t₁, t₂) of a random function X(t) is given. Find the variance of the integral Y(t) = ∫₀ᵗ X(s) ds.

Solution. Find the correlation function of the integral:

$$K_y(t_1, t_2) = \int_0^{t_1}\!\!\int_0^{t_2} K_x(s_1, s_2)\,ds_2\,ds_1.$$

Then the variance is

$$D_y(t) = K_y(t, t) = \int_0^{t}\!\!\int_0^{t} K_x(s_1, s_2)\,ds_2\,ds_1.$$

19. Find the variance of the integral Y(t) = ∫₀ᵗ X(s) ds for each of the two given correlation functions K_x(t₁, t₂) of the random function X(t).

Solution. In both cases, as in the previous problem, K_y(t₁, t₂) = ∫₀^{t₁}∫₀^{t₂} K_x(s₁, s₂) ds₂ ds₁, and the variance is obtained by setting t₁ = t₂ = t: D_y(t) = K_y(t, t).


Example 1. The normalized correlation function ρ_x(τ) of a random function X(t) decreases by a linear law from one to zero for 0 ≤ τ ≤ τ₀ and equals zero for τ > τ₀ (Fig. 17.3.3). Determine the normalized spectral density of the random function X(t).

Solution. The normalized correlation function is expressed by the formulas

ρ_x(τ) = 1 − τ/τ₀ for 0 ≤ τ ≤ τ₀;  ρ_x(τ) = 0 for τ > τ₀.

From the Fourier pair relating the normalized correlation function and the normalized spectral density we have

$$s_x(\omega) = \frac{2}{\pi}\int_0^{\tau_0}\left(1 - \frac{\tau}{\tau_0}\right)\cos\omega\tau\,d\tau = \frac{2\left(1 - \cos\omega\tau_0\right)}{\pi\,\tau_0\,\omega^2}.$$

Fig. 17.3.3


Fig. 17.3.4

The graph of the normalized spectral density is shown in Fig. 17.3.4. The first, absolute, maximum of the spectral density is attained at ω = 0; resolving the indeterminacy at that point gives s_x(0) = τ₀/π. With increasing ω the spectral density passes through a number of relative maxima whose height decreases, and s_x(ω) → 0 as ω → ∞. The character of variation of s_x(ω) (fast or slow decrease) depends on the parameter τ₀. The total area bounded by the curve s_x(ω) is constant and equal to one, so a change in τ₀ is equivalent to a change of scale of the curve along both axes with its area preserved. As τ₀ increases, the scale along the ordinate axis grows and that along the abscissa axis shrinks: the predominance of zero frequency in the spectrum of the random function becomes more pronounced. In the limit τ₀ → ∞ the random function degenerates into an ordinary random variable; in this case ρ_x(τ) ≡ 1, and the spectrum becomes discrete with the single frequency ω₀ = 0.
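As a numerical cross-check of this example, the derived density can be integrated on a truncated frequency grid; the value τ₀ = 2 below is an arbitrary assumption.

```python
import numpy as np

# For the triangular normalized correlation function, the density
# s_x(w) = 2*(1 - cos(w*tau0)) / (pi * tau0 * w**2) should integrate
# to one over (0, inf); the integral is truncated at a large limit.
tau0 = 2.0
w = np.linspace(1e-6, 400.0, 400_000)
s = 2.0 * (1.0 - np.cos(w * tau0)) / (np.pi * tau0 * w**2)
print(np.trapz(s, w))        # should be close to 1
print(tau0 / np.pi)          # the value s_x(0) quoted above
```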

Fig. 17.3.5

Example 2. The normalized spectral density s_x(ω) of a random function X(t) is constant on a certain frequency interval (ω₁, ω₂) and equal to zero outside this interval (Fig. 17.3.5). Determine the normalized correlation function of the random function X(t).

Solution. The value of s_x(ω) for ω₁ < ω < ω₂ is determined from the condition that the area bounded by the curve s_x(ω) equal one: s_x(ω) = 1/(ω₂ − ω₁). From the Fourier pair we have

$$\rho_x(\tau) = \int_{\omega_1}^{\omega_2}\frac{\cos\omega\tau}{\omega_2 - \omega_1}\,d\omega = \frac{\sin\omega_2\tau - \sin\omega_1\tau}{(\omega_2 - \omega_1)\,\tau}.$$

The general form of the function ρ_x(τ) is shown in Fig. 17.3.6. It has the character of oscillations decreasing in amplitude, with a number of nodes at which the function vanishes. The specific form of the graph obviously depends on the values of ω₁ and ω₂.

Fig. 17.3.6

Of interest is the limiting form of the function ρ_x(τ) as ω₂ → ω₁. Obviously, when ω₂ = ω₁ the spectrum of the random function becomes discrete, with a single line corresponding to the frequency ω₁; the correlation function then turns into a simple cosine:

ρ_x(τ) = cos ω₁τ.

Let us see what form the random function X(t) itself has in this case. With a discrete spectrum consisting of a single line, the spectral expansion of the stationary random function X(t) has the form

X(t) = m_x + U cos ω₁t + V sin ω₁t,

where U and V are uncorrelated random variables with mathematical expectations equal to zero and equal variances D(U) = D(V) = D_x.

Let us show that a random function of this form can be represented as a single harmonic oscillation of frequency ω₁ with a random amplitude and a random phase. Setting

Z = √(U² + V²),  Φ = arctan(V/U),

so that U = Z cos Φ and V = Z sin Φ, we reduce the expression to the form

X(t) = m_x + Z cos(ω₁t − Φ).

In this expression Z is the random amplitude and Φ the random phase of the harmonic oscillation.
Until now we have considered only the case when the distribution of variance over frequency is continuous, i.e. when an infinitesimally small frequency interval carries an infinitesimal variance. In practice one sometimes encounters cases when the random function contains a purely periodic component of some frequency ω_k with a random amplitude. Then the spectral expansion of the random function contains, alongside the continuous spectrum of frequencies, a separate frequency ω_k with a finite variance D_k. In the general case there may be several such periodic components, and the spectral expansion of the correlation function then consists of two parts: a discrete spectrum and a continuous spectrum.

Cases of stationary random functions with such a "mixed" spectrum are fairly rare in practice. In these cases it always makes sense to split the random function into two terms, one with a continuous and one with a discrete spectrum, and to study these terms separately.

One often has to deal with the special case when a finite variance in the spectral expansion of a random function falls at zero frequency (ω = 0). This means that the random function contains, as a term, an ordinary random variable with variance D₀. In such cases it also makes sense to isolate this random term and treat it separately.

  • Formula (6) is a particular form of the Fourier integral, which generalizes the Fourier series expansion to a non-periodic function considered on an infinite interval and represents the function as a sum of elementary harmonic oscillations with a continuous spectrum; a similar expression can be written for the more general case.
  • Here we are dealing with a special case of Fourier transforms, the so-called cosine Fourier transforms.

A necessary and sufficient condition for the ergodicity of ξ(t) with respect to the variance is given by formula (2.5), and a sufficient condition by (2.6).

Typically, a stationary random process is non-ergodic when it proceeds non-uniformly in time. For example, the non-ergodicity of ξ(t) may be caused by its containing, as a term, a random variable X with characteristics m_x and D_x. Then, for ξ₁(t) = ξ(t) + X, we have m_{ξ₁} = m_ξ + m_x, K_{ξ₁}(τ) = K_ξ(τ) + D_x,

and

$$\lim_{\tau\to\infty} K_{\xi_1}(\tau) = \lim_{\tau\to\infty}\left[K_\xi(\tau) + D_x\right] = \lim_{\tau\to\infty} K_\xi(\tau) + D_x = D_x \neq 0.$$
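This mechanism is easy to see in simulation. In the sketch below, ξ(t) is an assumed autoregressive model (any ergodic stationary process would do), and the added random variable X makes the time averages scatter from realization to realization instead of converging to m_ξ.

```python
import numpy as np

# Non-ergodicity demo: xi1(t) = xi(t) + X, where xi is an ergodic
# AR(1) process (assumed model) and X is a random variable with
# standard deviation 3 drawn once per realization.
rng = np.random.default_rng(5)

n, rho = 20_000, 0.95

def time_average():
    xi = np.empty(n)
    xi[0] = rng.normal()
    for i in range(1, n):
        xi[i] = rho * xi[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
    X = rng.normal(0.0, 3.0)      # the added random term
    return (xi + X).mean()        # time average of one realization of xi1

print([round(time_average(), 2) for _ in range(5)])
# The time averages scatter with std ~ 3 instead of converging to 0.
```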

2.2. Spectral decomposition of a stationary random process and the Fourier transform. Spectral Density

The main idea of the spectral representation of random processes is that they can be depicted as sums of certain harmonics. Such a representation makes it possible to carry out various transformations of random processes, both linear and nonlinear, with relative ease. One can, for example, study how the variance of a random process is distributed over the frequencies of its constituent harmonics. The use of such information constitutes the essence of the spectral theory of stationary random processes.

Spectral theory makes it possible to use the Fourier image of a random process in calculations. In a number of cases this significantly simplifies the computations, and it is widely applied, especially in theoretical studies.

A stationary random process ξ(t) can be specified by its canonical (spectral) decomposition

$$\xi(t) = m_\xi + \sum_{k=0}^{\infty}\left(x_k\cos\omega_k t + y_k\sin\omega_k t\right), \tag{2.8}$$

where M[x_k] = M[y_k] = 0, D[x_k] = D[y_k] = D_k, and M[x_k y_k] = M[x_i x_j] = M[y_i y_j] = M[x_i y_j] = 0 for i ≠ j. Its covariance function is then

$$K_\xi(t_1, t_2) = \sum_{k=0}^{\infty} D_k\cos\omega_k(t_2 - t_1) = \sum_{k=0}^{\infty} D_k\left(\cos\omega_k t_1\cos\omega_k t_2 + \sin\omega_k t_1\sin\omega_k t_2\right) = \sum_{k=0}^{\infty} D_k\cos\omega_k\tau = K_\xi(\tau). \tag{2.9}$$

Expression (2.8) can be represented in the form

$$\xi(t) = m_\xi + \sum_{k=0}^{\infty} z_k\cos(\omega_k t - \psi_k), \tag{2.10}$$

where ψ_k is the phase of the harmonic oscillation of the elementary random process, a random variable distributed uniformly over the interval (0, 2π), and z_k is the amplitude of the harmonic oscillation of the elementary random process, also a random variable, with certain characteristics m_z and D_z.

Indeed, let ξ_k(t) = x_k cos ω_k t + y_k sin ω_k t; then m_{ξ_k} = 0 and

$$K_{\xi_k}(t_1, t_2) = M\left[(x_k\cos\omega_k t_1 + y_k\sin\omega_k t_1)(x_k\cos\omega_k t_2 + y_k\sin\omega_k t_2)\right] =$$
$$= M[x_k^2]\cos\omega_k t_1\cos\omega_k t_2 + M[y_k^2]\sin\omega_k t_1\sin\omega_k t_2 = D_k\cos\omega_k(t_2 - t_1) = D_k\cos\omega_k\tau,$$

the cross terms vanishing because M[x_k y_k] = 0.

Now put

$$\xi_k(t) = z_k\cos(\omega_k t - \psi_k), \qquad \psi_k \sim R(0,\,2\pi),$$

where ω_k is a non-random quantity and z_k is a random variable, independent of ψ_k, with known m_z and D_z. Expanding the cosine,

$$\xi_k(t) = z_k\cos\psi_k\cos\omega_k t + z_k\sin\psi_k\sin\omega_k t.$$

For the uniformly distributed phase,

$$M[\cos\psi_k] = \frac{1}{2\pi}\int_0^{2\pi}\cos x\,dx = 0, \qquad M[\sin\psi_k] = \frac{1}{2\pi}\int_0^{2\pi}\sin x\,dx = 0,$$

$$D[\cos\psi_k] = M[\cos^2\psi_k] = \frac{1}{2\pi}\int_0^{2\pi}\cos^2 x\,dx = \frac{1}{2}, \qquad D[\sin\psi_k] = M[\sin^2\psi_k] = \frac{1}{2\pi}\int_0^{2\pi}\sin^2 x\,dx = \frac{1}{2},$$

$$M[\sin\psi_k\cos\psi_k] = 0.$$

Hence m_{ξ_k} = M[z_k cos ψ_k]·cos ω_k t + M[z_k sin ψ_k]·sin ω_k t = 0, and

$$K_{\xi_k}(t_1, t_2) = M[z_k^2]\left(M[\cos^2\psi_k]\cos\omega_k t_1\cos\omega_k t_2 + M[\sin^2\psi_k]\sin\omega_k t_1\sin\omega_k t_2\right) = \frac{M[z_k^2]}{2}\cos\omega_k(t_2 - t_1) = \frac{D_{z_k} + m_{z_k}^2}{2}\cos\omega_k(t_2 - t_1),$$

the cross terms again vanishing because M[sin ψ_k cos ψ_k] = 0.

Thus, under the assumptions made about the random variables entering formulas (2.8) and (2.10), the representations (2.8) and (2.10) are equivalent. The random variables z_k and ψ_k are dependent on x_k and y_k, since the relations

$$z_k\cos\psi_k = x_k, \qquad z_k\sin\psi_k = y_k, \qquad \frac{D_{z_k} + m_{z_k}^2}{2} = D[x_k] = D[y_k] = D_k$$

obviously hold.
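The equivalence can be illustrated by simulation. In the sketch below, the distribution chosen for z_k is an arbitrary assumption; only m_z and D_z enter the covariance.

```python
import numpy as np

# Monte Carlo check: z*cos(w*t - psi), with psi uniform on (0, 2*pi)
# and independent of z, has covariance ((D_z + m_z**2)/2)*cos(w*(t2-t1)).
rng = np.random.default_rng(6)

N, w = 1_000_000, 2.0
z = rng.lognormal(0.0, 0.4, N)        # assumed amplitude distribution
psi = rng.uniform(0.0, 2*np.pi, N)
m_z, D_z = z.mean(), z.var()

t1, t2 = 0.3, 1.0
xi1 = z * np.cos(w*t1 - psi)
xi2 = z * np.cos(w*t2 - psi)
print(np.mean(xi1 * xi2))                          # empirical covariance (means are 0)
print((D_z + m_z**2) / 2 * np.cos(w*(t2 - t1)))    # theoretical value
```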

Since the covariance function of a stationary random process is an even function, on the interval (−T, T) it can be expanded in a Fourier series in cosines,

$$K_\xi(\tau) = \sum_{k=0}^{\infty} D_k\cos\omega_k\tau, \qquad \omega_k = k\,\omega_1, \quad \omega_1 = \frac{\pi}{T},$$

$$D_0 = \frac{1}{2T}\int_{-T}^{T} K_\xi(\tau)\,d\tau, \qquad D_k = \frac{1}{T}\int_{-T}^{T} K_\xi(\tau)\cos\omega_k\tau\,d\tau.$$

Setting τ = 0, we get

$$K_\xi(0) = D_\xi = \sum_{k=0}^{\infty} D_k\cos(\omega_k\cdot 0) = \sum_{k=0}^{\infty} D_k.$$

Since the ω_k can be interpreted as the harmonics of the spectral expansion (2.8) of the stationary random process, the total variance of a stationary random process represented by its canonical (spectral) decomposition is equal to the sum of the variances of all the harmonics of its spectral decomposition. Fig. 2.1 shows a set of variances D_k corresponding to the various harmonics ω_k. The longer the expansion interval taken in formula (2.9), the more accurate the expansion becomes. If we take T′ = 2T, the variance spectrum of the spectral decomposition of the process ξ(t) on the interval (0, T′) contains more components (see Fig. 2.1, frequencies ω′_k).

Fig. 2.1. "Spectrum of variances" of a stationary random process

Let us rewrite (2.9) in a slightly different form:

$$K_\xi(\tau) = \sum_{k=0}^{\infty} D_k\cos\omega_k\tau = \sum_{k=0}^{\infty}\frac{D_k}{\Delta\omega}\left(\cos k\,\Delta\omega\,\tau\right)\Delta\omega,$$

where ∆ω = ω₁ = π/T is the interval between adjacent frequencies. If we denote D_k/∆ω = S_ξ(ω_k), then

$$K_\xi(\tau) = \sum_{k=0}^{\infty} S_\xi(\omega_k)\left(\cos\omega_k\tau\right)\Delta\omega \;\longrightarrow\; \int_0^{\infty} S_\xi(\omega)\cos\omega\tau\,d\omega \quad (\Delta\omega\to 0). \tag{2.13}$$

The quantity S_ξ(ω_k)∆ω = D_k represents the part of the total variance of the stationary random process ξ(t) that falls on the k-th harmonic. As T → ∞ (that is, as ∆ω → 0) the function S_ξ(ω_k) approaches without bound a curve S_ξ(ω), which is called the spectral density of the stationary random process ξ(t) (Fig. 2.2). From (2.13) it follows that the functions K_ξ(τ) and S_ξ(ω) are related by the Fourier cosine transform. Thus,

$$S_\xi(\omega) = \frac{2}{\pi}\int_0^{\infty} K_\xi(\tau)\cos\omega\tau\,d\tau. \tag{2.14}$$

Fig. 2.2. Graphs of the functions S_ξ(ω_k) and S_ξ(ω)
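The passage from the discrete spectrum to the density can be illustrated numerically: the Fourier coefficients D_k of an assumed covariance function K_ξ(τ) = ℮^{−|τ|} on (−T, T), divided by ∆ω, approach the closed-form density as T grows.

```python
import numpy as np

# D_k / dw -> S(w): cosine Fourier coefficients of the assumed
# covariance K(tau) = exp(-|tau|) on (-T, T), divided by dw = pi/T,
# compared with the limiting density S(w) = (2/pi)/(1 + w**2)
# given by formula (2.14).
T = 40.0
dw = np.pi / T
tau = np.linspace(0.0, T, 200_001)
K = np.exp(-tau)

for k in (1, 5, 20):
    wk = k * dw
    Dk = 2.0 / T * np.trapz(K * np.cos(wk * tau), tau)   # D_k for k != 0
    print(f"w_k={wk:.3f}  D_k/dw={Dk / dw:.4f}  S(w_k)={2/np.pi/(1 + wk**2):.4f}")
```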

The spectral density, by analogy with a probability density, has the following properties:

1. S_ξ(ω) ≥ 0;

2. $$\int_0^{\infty} S_\xi(\omega)\,d\omega = \int_0^{\infty} S_\xi(\omega)\cos(0\cdot\omega)\,d\omega = K_\xi(0) = D_\xi.$$

If one introduces the function S*_ξ(ω) defined as follows:

$$S^*_\xi(\omega) = \frac{S_\xi(\omega)}{2}, \quad \omega \ge 0; \qquad S^*_\xi(\omega) = \frac{S_\xi(-\omega)}{2}, \quad \omega < 0,$$

called the spectral density of the stationary random process in complex form, then this function, besides the two properties above, has a third one, the property of evenness (Fig. 2.3):

3. S*_ξ(ω) = S*_ξ(−ω).

Fig. 2.3. Graphs of the spectral density functions S_ξ(ω) and S*_ξ(ω)

Let us rewrite (2.8) in the following form:

$$\xi(t) = m_\xi + \sum_{k=0}^{\infty}\frac{x_k}{\Delta\omega}\left(\cos k\,\Delta\omega\,t\right)\Delta\omega + \sum_{k=0}^{\infty}\frac{y_k}{\Delta\omega}\left(\sin k\,\Delta\omega\,t\right)\Delta\omega.$$

If we set x_k/∆ω = X(ω), y_k/∆ω = Y(ω), then as T → ∞ (∆ω → 0) we obtain the integral canonical representation of the stationary random process:

$$\xi(t) = m_\xi + \int_0^{\infty} X(\omega)\cos\omega t\,d\omega + \int_0^{\infty} Y(\omega)\sin\omega t\,d\omega,$$

where the random functions X(ω) and Y(ω) represent so-called "white noise" (see subsection 2.4). Their statistical characteristics are the following:

$$M[X(\omega)] = M[Y(\omega)] = 0,$$

$$K_X(\omega_1, \omega_2) = K_Y(\omega_1, \omega_2) = S_\xi(\omega_1)\,\delta(\omega_2 - \omega_1),$$

where δ(x) is the Dirac delta function.

Using the formulas

$$\cos x = \frac{e^{ix} + e^{-ix}}{2}, \qquad \sin x = \frac{e^{ix} - e^{-ix}}{2i},$$

the elementary process can be written as

$$\xi_k(t) = x_k\cos\omega_k t + y_k\sin\omega_k t = \frac{x_k - iy_k}{2}\,e^{i\omega_k t} + \frac{x_k + iy_k}{2}\,e^{-i\omega_k t}.$$

If we denote

$$z_k = \frac{x_k - iy_k}{2}, \qquad \bar{z}_k = \frac{x_k + iy_k}{2},$$

where the bar means complex conjugation, then ξ_k(t) = z_k e^{iω_k t} + z̄_k e^{−iω_k t}. Consequently, the spectral expansion of a stationary random process in complex form has the form

$$\xi(t) = m_\xi + \sum_{k=0}^{\infty}\left(z_k e^{i\omega_k t} + \bar{z}_k e^{-i\omega_k t}\right) = m_\xi + \sum_{k=-\infty}^{\infty} z_k e^{i\omega_k t}.$$

Similar operations can be carried out with the covariance function represented in the form (2.9), giving

$$K_\xi(\tau) = \sum_{k=-\infty}^{\infty} D_k\,e^{i\omega_k\tau}.$$

Taking into account the function S*_ξ(ω) introduced above, formula (2.13) can be rewritten in the form

$$K_\xi(\tau) = \int_{-\infty}^{\infty} S^*_\xi(\omega)\,e^{i\omega\tau}\,d\omega, \tag{2.18}$$

and the function S*_ξ(ω) itself as

$$S^*_\xi(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} K_\xi(\tau)\,e^{-i\omega\tau}\,d\tau. \tag{2.19}$$

Formulas (2.18) and (2.19) are the Fourier transform pair connecting the spectral density S*_ξ(ω) and the covariance function K_ξ(τ) in complex form.
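The complex-form pair can be checked numerically for the same assumed covariance K_ξ(τ) = ℮^{−|τ|}, whose complex spectral density is S*_ξ(ω) = 1/(π(1 + ω²)).

```python
import numpy as np

# Numerical check of the pair (2.18)-(2.19) for the assumed
# covariance K(tau) = exp(-|tau|), with S*(w) = 1/(pi*(1 + w**2)).
tau = np.linspace(-60.0, 60.0, 600_001)
K = np.exp(-np.abs(tau))

def S_star(w):
    # formula (2.19), truncated to a finite tau range
    return np.trapz(K * np.exp(-1j * w * tau), tau).real / (2*np.pi)

w = 1.3
print(S_star(w), 1/(np.pi*(1 + w**2)))      # (2.19) matches the closed form

w_grid = np.linspace(-300.0, 300.0, 600_001)
S = 1/(np.pi*(1 + w_grid**2))
t0 = 0.7
K_rec = np.trapz(S * np.exp(1j * w_grid * t0), w_grid).real
print(K_rec, np.exp(-t0))                   # (2.18) recovers K(tau)
```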

Since the spectral density S_ξ(ω) represents the density of the distribution of the variance of a random process over the frequencies of its harmonics, in some applications of the theory of random processes K_ξ(0) = D_ξ is interpreted as the energy of the stationary random process and S_ξ(ω) as the density of this energy per unit frequency. This interpretation arose after the theory of stationary random processes was applied in electrical engineering.

Example 5. Find the spectral density S_ξk(ω) of the elementary random process ξ_k(t) = x_k cos ω_k t + y_k sin ω_k t.

It was shown earlier that

$$m_{\xi_k} = 0, \qquad K_{\xi_k}(t_1, t_2) = D_k\cos\omega_k\tau, \qquad M[x_k] = M[y_k] = 0, \qquad D[x_k] = D[y_k] = D_k, \qquad \tau = t_2 - t_1.$$

According to formula (2.14),

$$S_{\xi_k}(\omega) = \frac{2}{\pi}\int_0^{\infty} K_{\xi_k}(\tau)\cos\omega\tau\,d\tau = \frac{2}{\pi}\int_0^{\infty} D_k\cos\omega_k\tau\cos\omega\tau\,d\tau =$$
$$= \frac{D_k}{\pi}\int_0^{\infty}\left[\cos(\omega - \omega_k)\tau + \cos(\omega + \omega_k)\tau\right]d\tau = D_k\left[\delta(\omega - \omega_k) + \delta(\omega + \omega_k)\right],$$

where

$$\delta(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{i\omega\tau}\,d\tau$$

is the integral (Fourier) representation of the Dirac δ-function; the evenness of the integrands allows the half-range integrals to be extended to the whole axis. The expression for S_ξk(ω) could be left in this form, but for positive ω (since ω_k > 0), by the properties of the δ-function, δ(ω + ω_k) ≡ 0. Thus

$$S_{\xi_k}(\omega) = D_k\,\delta(\omega - \omega_k), \qquad S^*_{\xi_k}(\omega) = \frac{1}{2}\,S_{\xi_k}(\omega) = \frac{D_k}{2}\left[\delta(\omega - \omega_k) + \delta(\omega + \omega_k)\right].$$

Let us now find the same spectral density in complex form. The functions S_ξk(ω) and S*_ξk(ω) are real non-negative functions; S*_ξk(ω) is an even function defined on (−∞, ∞), while S_ξk(ω) is defined on (0, ∞), and on that interval S*_ξk(ω) = (1/2)S_ξk(ω) (see Fig. 2.3). According to formula (2.19),

$$S^*_{\xi_k}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} K_{\xi_k}(\tau)\,e^{-i\omega\tau}\,d\tau = \frac{1}{2\pi}\int_{-\infty}^{\infty} D_k\cos\omega_k\tau\;e^{-i\omega\tau}\,d\tau =$$
$$= \frac{D_k}{4\pi}\int_{-\infty}^{\infty}\left[e^{i(\omega_k - \omega)\tau} + e^{-i(\omega_k + \omega)\tau}\right]d\tau = \frac{D_k}{2}\left[\delta(\omega - \omega_k) + \delta(\omega + \omega_k)\right],$$

which agrees with the result obtained above.


