Function of two random arguments. Functions of random variables

If to each pair of possible values of the random variables X and Y there corresponds one possible value of a random variable Z, then Z is called a function of two random arguments X and Y:

Z = φ(X, Y).

The examples below show how to find the distribution of the function Z = X + Y from the known distributions of the terms. This problem often arises in practice. For example, if X is the reading error of a measuring device (normally distributed) and Y is the error of rounding the readings to the nearest scale division (uniformly distributed), then the task is to find the distribution law of the sum of errors Z = X + Y.

1. Let X and Y be discrete independent random variables. To construct the distribution law of the function Z = X + Y, we must find all possible values of Z and their probabilities.

Example 1. Discrete independent random variables are specified by the distributions:

X    1     2          Y    3     4
p    0.4   0.6        p    0.2   0.8

Construct the distribution of the random variable Z = X + Y.

Solution. The possible values of Z are the sums of each possible value of X with each possible value of Y:

z1 = 1 + 3 = 4;  z2 = 1 + 4 = 5;  z3 = 2 + 3 = 5;  z4 = 2 + 4 = 6.

Let us find the probabilities of these possible values. For Z = 4 it suffices that X takes the value x1 = 1 and Y takes the value y1 = 3. The probabilities of these possible values, as follows from the given distribution laws, are 0.4 and 0.2 respectively.

The arguments X and Y are independent, so the events X = 1 and Y = 3 are independent, and therefore the probability of their joint occurrence (i.e. the probability of the event Z = 1 + 3 = 4) is, by the multiplication theorem, 0.4 · 0.2 = 0.08.

Similarly we find:

P(Z = 1 + 4 = 5) = 0.4 · 0.8 = 0.32;

P(Z = 2 + 3 = 5) = 0.6 · 0.2 = 0.12;

P(Z = 2 + 4 = 6) = 0.6 · 0.8 = 0.48.

Let us write the required distribution, first adding the probabilities of the incompatible events Z = z2 and Z = z3 (0.32 + 0.12 = 0.44):

Z    4      5      6
p    0.08   0.44   0.48

Check: 0.08 + 0.44 + 0.48 = 1.
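As an illustration of this procedure, a minimal Python sketch (the dictionaries below simply hold the value-probability pairs of Example 1) might look like this:

```python
# Sketch of Example 1: building the distribution of Z = X + Y for independent
# discrete X and Y by enumerating all pairs of their values.
from collections import defaultdict

x_dist = {1: 0.4, 2: 0.6}   # values and probabilities of X
y_dist = {3: 0.2, 4: 0.8}   # values and probabilities of Y

z_dist = defaultdict(float)
for x, px in x_dist.items():
    for y, py in y_dist.items():
        # independence: P(X=x, Y=y) = P(X=x) * P(Y=y);
        # coinciding sums are accumulated, as with z2 = z3 = 5 above
        z_dist[x + y] += px * py

print(dict(sorted(z_dist.items())))   # {4: 0.08, 5: 0.44, 6: 0.48} up to rounding
print(sum(z_dist.values()))           # check: 1.0
```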

2. Let X and Y be continuous random variables. It has been proved that if X and Y are independent, then the distribution density g(z) of the sum Z = X + Y (provided that the density of at least one of the arguments is given on the interval (-∞, ∞) by a single formula) can be found from the equality

g(z) = \int\limits_{-\infty}^{\infty} f_1(x)\, f_2(z - x)\, dx, \qquad (*)

or from the equivalent equality

g(z) = \int\limits_{-\infty}^{\infty} f_1(z - y)\, f_2(y)\, dy, \qquad (**)

where f1, f2 are the distribution densities of the arguments.

If the possible values of the arguments are non-negative, then g(z) is found from the formula

g(z) = \int\limits_{0}^{z} f_1(x)\, f_2(z - x)\, dx, \qquad (***)

or from the equivalent formula

g(z) = \int\limits_{0}^{z} f_1(z - y)\, f_2(y)\, dy. \qquad (****)
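Formula (***) can also be checked numerically. In the sketch below the exponential densities f1(x) = f2(x) = e^(-x) on [0, ∞) are an illustrative choice (the text does not fix them here); the integral over [0, z] is approximated on a grid:

```python
# Numerical evaluation of g(z) = ∫_0^z f1(x) f2(z - x) dx for non-negative
# independent arguments; with f1 = f2 = exp(-x) the exact answer is z * exp(-z).
import numpy as np

f1 = lambda x: np.exp(-x)          # assumed density of X on [0, ∞)
f2 = lambda y: np.exp(-y)          # assumed density of Y on [0, ∞)

def g(z, n=2000):
    x = np.linspace(0.0, z, n)     # integration grid over [0, z]
    return np.trapz(f1(x) * f2(z - x), x)

print(g(1.0), 1.0 * np.exp(-1.0))  # numerical value vs. exact z * exp(-z)
```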

The distribution density of the sum of independent random variables is called a composition.

A probability distribution law is called stable if the composition of such laws is a law of the same type (differing, generally speaking, only in its parameters). The normal law has the stability property: the composition of normal laws also has a normal distribution (the mathematical expectation and the variance of this composition are equal, respectively, to the sums of the mathematical expectations and variances of the terms). For example, if X and Y are independent random variables distributed normally with mathematical expectations and variances a1 = 3, a2 = 4, D1 = 1, D2 = 0.5, then the composition of these variables (i.e. the probability density of the sum Z = X + Y) is also normally distributed, and the mathematical expectation and variance of the composition are a = 3 + 4 = 7 and D = 1 + 0.5 = 1.5 respectively.

Example 2. Independent random variables X and Y are given by their distribution densities f(x) and f(y). Find the composition of these laws, i.e. the distribution density of the random variable Z = X + Y.

Solution. The possible values of the arguments are non-negative, so we use formula (***).

Note that here z ≥ 0, because Z = X + Y and, by hypothesis, the possible values of X and Y are non-negative.

Chi-square distribution

Let Xi (i = 1, 2, ..., n) be independent normal random variables, each with mathematical expectation zero and standard deviation one. Then the sum of the squares of these variables,

\chi^2 = \sum_{i=1}^{n} X_i^2,

is distributed according to the chi-square law with k = n degrees of freedom; if these variables are connected by one linear relation, for example \sum X_i = n\bar{X}, then the number of degrees of freedom is k = n - 1.

The density of this distribution is

f(x) = 0 for x ≤ 0;  f(x) = \dfrac{1}{2^{k/2}\,\Gamma(k/2)}\, e^{-x/2}\, x^{k/2 - 1} for x > 0,

where \Gamma(x) = \int\limits_{0}^{\infty} t^{x-1} e^{-t}\, dt is the gamma function; in particular, Γ(n + 1) = n!.

This shows that the chi-square distribution is determined by a single parameter, the number of degrees of freedom k.

As the number of degrees of freedom increases, the distribution slowly approaches normal.
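A quick Monte Carlo sketch of this construction: the sum of squares of k independent standard normal variables should have mean k and variance 2k, the known moments of the chi-square law (the sample size below is arbitrary):

```python
# Sum of squares of k standard normal variables: empirically chi-square with
# k degrees of freedom (mean ≈ k, variance ≈ 2k).
import numpy as np

rng = np.random.default_rng(0)
k, n = 5, 200_000
chi2 = (rng.standard_normal((n, k)) ** 2).sum(axis=1)

print(chi2.mean(), chi2.var())     # ≈ 5 and ≈ 10
```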

Student distribution

Let Z be a normal random variable with M(Z) = 0, σ(Z) = 1, and let V be a variable independent of Z that is distributed according to the chi-square law with k degrees of freedom. Then the variable

T = \dfrac{Z}{\sqrt{V/k}}

has the distribution called the t-distribution, or Student distribution (Student was the pseudonym of the English statistician W. Gosset), with k degrees of freedom.

Thus, the ratio of a normalized normal variable to the square root of an independent random variable distributed according to the chi-square law with k degrees of freedom, divided by k, is distributed according to Student's law with k degrees of freedom.

As the number of degrees of freedom increases, the Student distribution quickly approaches the normal one. Additional information about this distribution is given below (see Chapter XVI, § 16).
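The defining ratio can likewise be illustrated by sampling: for k degrees of freedom the simulated values of T = Z / sqrt(V / k) should have mean near 0 and variance near k/(k - 2):

```python
# Student's t built from its definition: Z standard normal, V chi-square with
# k degrees of freedom, independent of Z.
import numpy as np

rng = np.random.default_rng(1)
k, n = 10, 200_000
z = rng.standard_normal(n)
v = rng.chisquare(k, n)
t = z / np.sqrt(v / k)

print(t.mean(), t.var())           # ≈ 0 and ≈ k / (k - 2) = 1.25
```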

§ 15. The Fisher-Snedecor F-distribution

If U and V are independent random variables distributed according to the chi-square law with k1 and k2 degrees of freedom, then the variable

F = \dfrac{U/k_1}{V/k_2}

has the distribution called the Fisher-Snedecor F-distribution with k1 and k2 degrees of freedom (it is sometimes denoted by V²).

The density of this distribution is

f(x) = 0 for x ≤ 0;  f(x) = C_0\, \dfrac{x^{(k_1 - 2)/2}}{(k_2 + k_1 x)^{(k_1 + k_2)/2}} for x > 0, where C_0 = \dfrac{\Gamma\bigl((k_1 + k_2)/2\bigr)\, k_1^{k_1/2}\, k_2^{k_2/2}}{\Gamma(k_1/2)\, \Gamma(k_2/2)}.

We see that the F-distribution is determined by two parameters, the numbers of degrees of freedom k1 and k2. Additional information about this distribution is given below (see Chapter XIX, § 8).
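A similar sampling sketch for the ratio F = (U/k1)/(V/k2); for k2 > 2 its mean should be close to k2/(k2 - 2):

```python
# The Fisher-Snedecor ratio built from two independent chi-square variables.
import numpy as np

rng = np.random.default_rng(2)
k1, k2, n = 4, 20, 200_000
u = rng.chisquare(k1, n)
v = rng.chisquare(k2, n)
f = (u / k1) / (v / k2)

print(f.mean())                    # ≈ k2 / (k2 - 2) = 20/18 ≈ 1.11
```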

Tasks

1. Find the mathematical expectation and variance of a random variable X, knowing its distribution density:

a) f(x) = … , and f(x) = 0 for the other values of x;

b) f(x) = 1/(2l) for a - l ≤ x ≤ a + l, and f(x) = 0 for the other values of x.

Ans. a) M(X) = 0, D(X) = l/2; b) M(X) = a, D(X) = l²/3.

2. A random variable X is normally distributed. Its mathematical expectation and standard deviation are 6 and 2, respectively. Find the probability that, as a result of a trial, X takes a value in the interval (4, 8).

Ans. 0.6826.

3. A random variable is normally distributed. Its standard deviation is 0.4. Find the probability that the deviation of the random variable from its mathematical expectation is less than 0.3 in absolute value.

Ans. 0.5468.

4. Random measurement errors obey the normal law with standard deviation σ = 1 mm and mathematical expectation a = 0. Find the probability that, of two independent observations, the error of at least one does not exceed 1.28 mm in absolute value.

Ans. 0.96.

5. Rollers produced by an automatic machine are considered standard if the deviation of the roller diameter from the design size does not exceed 2 mm. The random deviations of the roller diameters obey the normal law with standard deviation σ = 1.6 mm and mathematical expectation a = 0. What percentage of standard rollers does the machine produce?

Ans. Approximately 79%.
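For reference, the answers to tasks 2-5 can be checked with the standard normal distribution function, P(a < X < b) = Φ((b - m)/σ) - Φ((a - m)/σ); the sketch below assumes SciPy is available:

```python
# Numerical check of tasks 2-5 via the normal CDF.
from scipy.stats import norm

print(norm.cdf(8, 6, 2) - norm.cdf(4, 6, 2))            # task 2: ≈ 0.6827
print(norm.cdf(0.3, 0, 0.4) - norm.cdf(-0.3, 0, 0.4))   # task 3: ≈ 0.5467
p = norm.cdf(1.28, 0, 1) - norm.cdf(-1.28, 0, 1)        # P(|X| < 1.28) ≈ 0.80
print(1 - (1 - p) ** 2)                                 # task 4: ≈ 0.96
print(norm.cdf(2, 0, 1.6) - norm.cdf(-2, 0, 1.6))       # task 5: ≈ 0.79
```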

6. A discrete random variable X is given by the distribution law:

X    …     …     …
p    0.2   0.1   0.7

If each pair of possible values of the variables X and Y corresponds to one possible value of the random variable Z, then Z is called a function of two random arguments X and Y: Z = φ(X, Y).

1. Let X and Y be discrete independent variables.

To construct the distribution law of the function Z = X + Y, one must find all possible values of Z and their probabilities. Since X and Y are independent, each sum z = xi + yj receives the probability P(X = xi)·P(Y = yj); if several sums coincide, their probabilities are added.

2. Let X and Y be continuous variables. It has been proved that if X and Y are independent, then the distribution density g(z) of the sum Z = X + Y (provided that the density of at least one of the arguments is given on the interval (-∞, ∞) by a single formula) can be found from formula (*) above,

where f1, f2 are the distribution densities of the arguments.

If the possible values of the arguments are non-negative, then g(z) is found from formula (***).

The distribution density of the sum of independent random variables is called a composition, and a probability distribution law is called stable if the composition of such laws is a law of the same type. Here M(Z) = M(X) + M(Y) and D(Z) = D(X) + D(Y).


If each pair of possible values of random variables X and Y corresponds to one possible value of the random variable Z, then Z is called a function of two random arguments X and Y. In practice, the most common task is to find the distribution law of the function Z = X + Y from the known distributions of the terms. For example, if X is the reading error of some measuring device (usually distributed normally) and Y is the error of rounding its readings (uniformly distributed), then the task is to find the distribution law of the sum of errors Z = X + Y.

7.1. Let X and Y be discrete random variables specified by their distribution laws. Then the possible values of the random variable Z = X + Y are all possible sums of a value of X and a value of Y; the probability of each such value is the product of the corresponding probabilities of the values of X and Y entering the sum, or the sum of such products if one value of the sum corresponds to several different combinations of values of X and Y.

Example 1. Let the distribution series of the discrete random variables X and Y be given. Then the function Z = X + Y takes the values 1, 3, 4, 6, 7, 8, 9. We find the probabilities of these values using the multiplication and addition theorems for probabilities and obtain the distribution series of the random variable Z. The sum of the probabilities in its bottom row equals 1, so the resulting table indeed specifies the distribution series of the random variable Z.

7.2. Now let X and Y be continuous random variables. If X and Y are independent, then, knowing their distribution densities f_1(x) and f_2(y), the distribution density of the random variable Z = X + Y can be found from one of the formulas

g(z) = \int\limits_{-\infty}^{\infty} f_1(x)\, f_2(z - x)\, dx;

g(z) = \int\limits_{-\infty}^{\infty} f_1(z - y)\, f_2(y)\, dy.

In particular, if X and Y take only positive values on the interval (0, ∞), these formulas become

g(z) = \int\limits_{0}^{z} f_1(x)\, f_2(z - x)\, dx;  g(z) = \int\limits_{0}^{z} f_1(z - y)\, f_2(y)\, dy.

Example 2. Let independent random variables X and Y be given by their distribution densities. Find the distribution law of the random variable Z = X + Y.

Applying the convolution formula to the given densities, we obtain the density of Z; it is then easy to check that the basic property of a distribution density holds, namely that the integral of this density over the whole range of Z equals 1.
§ 8. Systems of random variables

8.1. Laws of distribution of a system of random variables.

All random variables considered so far were determined by a single number (one argument); these are one-dimensional random variables. Besides them, one can consider variables that depend on two, three or more arguments, the so-called multidimensional random variables, which can be regarded as systems of one-dimensional random variables. A two-dimensional random variable is denoted by (X, Y), and each of the variables X and Y is called a component.

A two-dimensional random variable is called discrete if its components are discrete random variables.

A two-dimensional random variable is called continuous if its components are continuous random variables.

The distribution law of a discrete two-dimensional random variable is a table listing all possible pairs of values (x_i, y_j) together with their probabilities p_{ij} = P(X = x_i, Y = y_j).

Since the events {X = x_i, Y = y_j} form a full group of incompatible events, the sum of all probabilities in the table equals one.

Knowing the distribution law of a two-dimensional random variable, one can find the distribution law of each component:

P(X = x_i) = \sum_j p_{ij} (the sum of the probabilities in the corresponding column of the table);

P(Y = y_j) = \sum_i p_{ij} (the sum of the probabilities in the corresponding row of the table).
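A small sketch of this rule: given the joint table p_ij, the component laws are the column and row sums. The particular table below is an illustrative example, not one taken from the text:

```python
# Marginal (component) laws of a discrete two-dimensional random variable.
import numpy as np

x_vals = [1, 2]                    # possible values of X (columns, assumed)
y_vals = [0, 1, 2]                 # possible values of Y (rows, assumed)
p = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.05, 0.25]])       # p[i, j] = P(Y = y_i, X = x_j), sums to 1

p_x = p.sum(axis=0)                # P(X = x_j): sum over each column
p_y = p.sum(axis=1)                # P(Y = y_i): sum over each row
print(dict(zip(x_vals, p_x)), dict(zip(y_vals, p_y)))
```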

Example 1. Let the distribution law of a two-dimensional random variable (X, Y) be given by such a table. To draw up the distribution laws of the components X and Y, it suffices to sum the probabilities in the corresponding columns and rows of the table; the random variable X then has the distribution formed by the column sums, and Y the distribution formed by the row sums.

Definition. The distribution function of a two-dimensional random variable (X, Y) is the function F(x, y) = P(X < x, Y < y); it makes sense for both discrete and continuous random variables. Geometrically, this equality can be interpreted as the probability that the random point (X, Y) falls into the infinite quadrant with vertex at the point (x, y) lying to the left of and below this vertex.

BASIC PROPERTIES OF THE DISTRIBUTION FUNCTION:

Property 1. 0 ≤ F(x, y) ≤ 1.

Property 2. The distribution function is a non-decreasing function of each of its arguments.

Property 3. For all x and y the following relations hold: F(-∞, y) = F(x, -∞) = F(-∞, -∞) = 0 and F(+∞, +∞) = 1.

Property 4. The distribution functions of the components can be found from the equalities F_1(x) = F(x, +∞) and F_2(y) = F(+∞, y).

Definition. The joint probability distribution density of a two-dimensional continuous random variable is the second mixed derivative of the distribution function, i.e.

f(x, y) = \dfrac{\partial^2 F(x, y)}{\partial x\, \partial y}.

Example 2. Let the distribution function of a system of random variables (X, Y) be given. Its distribution density is found by the differentiation indicated above, f(x, y) = ∂²F/∂x∂y.

Conversely, let the distribution density of the system of random variables (X, Y) be known. Then the distribution function can be found from the equality

F(x, y) = \int\limits_{-\infty}^{x} \int\limits_{-\infty}^{y} f(u, v)\, dv\, du,

which follows directly from the definition of the distribution density.

The probability that the point (X, Y) falls into a region D is determined by the equality

P\{(X, Y) \in D\} = \iint\limits_{D} f(x, y)\, dx\, dy.
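A numerical illustration of this equality, with the density and the region chosen purely for the example (uniform density on the unit square, D = {x + y < 1}):

```python
# Grid approximation of P{(X, Y) in D} = ∬_D f(x, y) dx dy.
import numpy as np

# assumed density: uniform on the unit square
f = lambda x, y: np.where((0 <= x) & (x <= 1) & (0 <= y) & (y <= 1), 1.0, 0.0)

n = 1000
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
in_D = x + y < 1                       # assumed region D
prob = (f(x, y) * in_D).mean()         # mean over the unit-square grid ≈ integral
print(prob)                            # ≈ 0.5
```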

PROPERTIES OF THE TWO-DIMENSIONAL DISTRIBUTION DENSITY.

Property 1. The two-dimensional distribution density is non-negative: f(x, y) ≥ 0.

Property 2. The double improper integral of the distribution density with infinite limits of integration equals one:

\int\limits_{-\infty}^{\infty} \int\limits_{-\infty}^{\infty} f(x, y)\, dx\, dy = 1.

If the joint probability distribution density of a system of two random variables is known, then the distribution density of each component can be found. Indeed, F_1(x) = F(x, +∞); differentiating, we obtain

f_1(x) = \int\limits_{-\infty}^{\infty} f(x, y)\, dy.

In a similar way we get

f_2(y) = \int\limits_{-\infty}^{\infty} f(x, y)\, dx.
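These formulas are easy to check numerically; the joint density below (a standard two-dimensional normal with independent components) is an illustrative choice:

```python
# Marginal density f1(x) = ∫ f(x, y) dy computed by numerical integration.
import numpy as np

f = lambda x, y: np.exp(-(x**2 + y**2) / 2) / (2 * np.pi)   # assumed joint density

y = np.linspace(-10, 10, 4001)
f1 = lambda x: np.trapz(f(x, y), y)       # marginal density of X at the point x

print(f1(0.0), 1 / np.sqrt(2 * np.pi))    # both ≈ 0.3989
```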

Example 3. Let a two-dimensional distribution density f(x, y) be given. To find the distribution density of the component X, we integrate f(x, y) with respect to y; the resulting density f_1(x) is defined on the corresponding interval and equals zero outside it. Similarly, owing to the symmetry of the function f(x, y) with respect to x and y, we obtain the density of the component Y of the same form.

Conditional laws of distribution.

A concept analogous to the conditional probability of random events can be introduced to characterize the dependence between random variables.

Let us consider separately the cases of discrete and continuous two-dimensional random variables.

a) For a discrete two-dimensional random variable given by its table, the conditional probabilities are calculated by the formulas

P(X = x_i \mid Y = y_j) = \dfrac{p_{ij}}{P(Y = y_j)}, \qquad P(Y = y_j \mid X = x_i) = \dfrac{p_{ij}}{P(X = x_i)}.

Remark. The sums of the corresponding conditional probabilities are equal to one, i.e. \sum_i P(x_i \mid y_j) = 1 and \sum_j P(y_j \mid x_i) = 1.

Example 4. Let a discrete two-dimensional random variable be given by its table. To find the conditional distribution law of the component X, given that the random variable Y took a particular value, we divide the probabilities of the corresponding row (or column) of the table by their sum. Obviously, the sum of the conditional probabilities obtained in this way equals one.

b) For a continuous two-dimensional random variable, the conditional distribution density of the component X at a given value Y = y is the ratio

\varphi(x \mid y) = \dfrac{f(x, y)}{f_2(y)};

similarly, the conditional distribution density of Y at a given value X = x is

\psi(y \mid x) = \dfrac{f(x, y)}{f_1(x)}.

Example 5. Let the joint distribution density of a continuous two-dimensional random variable (X, Y) be given. To find the conditional distribution densities of the components, we first compute the component densities f_1(x) and f_2(y) (the calculations use the Poisson integral \int\limits_{-\infty}^{\infty} e^{-t^2}\, dt = \sqrt{\pi}). Then the conditional distribution densities have the form φ(x | y) = f(x, y)/f_2(y) and ψ(y | x) = f(x, y)/f_1(x).

Conditional mathematical expectation.

Definition. The conditional mathematical expectation of a discrete random variable Y at X = x is the sum of the products of the possible values of Y and their conditional probabilities:

M(Y \mid X = x) = \sum_j y_j\, P(y_j \mid x);

similarly, M(X \mid Y = y) = \sum_i x_i\, P(x_i \mid y).

Example 6. Let a two-dimensional discrete random variable be given by its table. The conditional mathematical expectations M(Y | X = x) and M(X | Y = y) are then computed from the corresponding conditional distribution laws by the formulas above.

For continuous variables:

M(Y \mid X = x) = \int\limits_{-\infty}^{\infty} y\, \psi(y \mid x)\, dy, \qquad M(X \mid Y = y) = \int\limits_{-\infty}^{\infty} x\, \varphi(x \mid y)\, dx.
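A sketch of the discrete formula, using an illustrative joint table of the same layout as above (p[i, j] = P(Y = y_i, X = x_j)):

```python
# Conditional law and conditional expectation M(Y | X = x1) from a joint table.
import numpy as np

y_vals = np.array([0, 1, 2])
p = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.05, 0.25]])       # assumed joint table, p[i, j] = P(Y = y_i, X = x_j)

j = 0                              # condition on the first value of X
p_x = p[:, j].sum()                # P(X = x_1)
cond = p[:, j] / p_x               # conditional law of Y given X = x_1
print(cond.sum())                  # the conditional probabilities sum to 1
print((y_vals * cond).sum())       # M(Y | X = x_1)
```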

Dependent and independent random variables.

Definition. Two random variables are called independent if the distribution law of one of them does not depend on which possible values the other random variable took. It follows from this definition that the conditional distribution laws of independent random variables are equal to their unconditional distribution laws.

THEOREM. For the random variables X and Y to be independent, it is necessary and sufficient that the equality

F(x, y) = F_1(x)\, F_2(y)

hold.

We will not prove the theorem, but as a consequence we obtain:

Corollary. For the random variables X and Y to be independent, it is necessary and sufficient that the density of the joint distribution of the system be equal to the product of the distribution densities of the components, i.e.

f(x, y) = f_1(x)\, f_2(y).

Numerical characteristics of a system of two random variables. Correlation moment. Correlation coefficient.

Definition. The correlation moment μ_xy of a system of random variables X and Y is the mathematical expectation of the product of the deviations of these variables:

\mu_{xy} = M\bigl[(X - M(X))\,(Y - M(Y))\bigr].

Note 1. It is easy to see that the correlation moment can be written in the form

\mu_{xy} = M(XY) - M(X)\, M(Y).

Note 2. The correlation moment of two independent random variables is equal to zero.

This follows from the condition of independence of the random variables.

Note 3. For the correlation moment of the random variables X and Y the inequality |\mu_{xy}| \le \sigma_x \sigma_y holds.

Definition. The correlation coefficient r_xy of the random variables X and Y is the ratio of the correlation moment to the product of the standard deviations of these variables, i.e.

r_{xy} = \dfrac{\mu_{xy}}{\sigma_x\, \sigma_y}. \qquad (2)

If the random variables are independent, then their correlation moment is equal to zero and, accordingly, their correlation coefficient is equal to zero.

Taking Note 3 into account, we obtain the main property of the correlation coefficient:

|r_{xy}| \le 1. \qquad (3)
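For a discrete system these quantities reduce to finite sums. The sketch below computes μ_xy and r_xy for an illustrative joint table (not the table of Example 7 that follows):

```python
# Correlation moment mu_xy = M(XY) - M(X)M(Y) and correlation coefficient
# r_xy = mu_xy / (sigma_x * sigma_y) of a discrete two-dimensional variable.
import numpy as np

x_vals = np.array([1, 2])
y_vals = np.array([0, 1, 2])
p = np.array([[0.10, 0.20],
              [0.15, 0.25],
              [0.05, 0.25]])       # assumed joint table, p[i, j] = P(Y = y_i, X = x_j)

p_x, p_y = p.sum(axis=0), p.sum(axis=1)
m_x, m_y = (x_vals * p_x).sum(), (y_vals * p_y).sum()
m_xy = (np.outer(y_vals, x_vals) * p).sum()          # M(XY)
mu = m_xy - m_x * m_y                                # correlation moment
s_x = np.sqrt(((x_vals - m_x) ** 2 * p_x).sum())     # standard deviation of X
s_y = np.sqrt(((y_vals - m_y) ** 2 * p_y).sum())     # standard deviation of Y
print(mu, mu / (s_x * s_y))                          # mu_xy and r_xy
```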

Example 7. Let us consider a system of discrete random variables whose distribution is given by a table. To find the mathematical expectations and variances of the components and their correlation coefficient, we first obtain the one-dimensional distribution laws of the components and their numerical characteristics; then the mathematical expectation of the product M(XY), the correlation moment μ_xy = M(XY) - M(X)M(Y) and, finally, the correlation coefficient r_xy = μ_xy/(σ_x σ_y). In this example the correlation coefficient turns out to be small, which means that the random variables X and Y have a very weak dependence.

Let us consider a similar problem for the case of continuous random variables.

Example 8. Let the system of random variables (X, Y) obey a distribution law whose density contains an unknown parameter, is defined on a region D (a triangle), and equals zero outside D. Find the value of the parameter, the numerical characteristics of the random variables X and Y, and their correlation coefficient.

First, the parameter is found from the basic condition on a distribution density: the double integral of the density over the region D must equal one; this determines the parameter and hence the explicit form of the density. Next we find the numerical characteristics of the components. Since the density and the region D are symmetric with respect to x and y, the numerical characteristics of X and Y coincide. Then the mathematical expectation of the product M(XY) and the correlation moment are computed, and finally the correlation coefficient.

Correlation and dependence of random variables.

Definition. Two random variables X and Y are called correlated if their correlation moment (or, equivalently, their correlation coefficient) is different from zero.

Correlated variables are dependent. The converse does not always hold: dependent random variables can be either correlated or uncorrelated. If random variables are independent, then they are necessarily uncorrelated.

Let us show by an example that two dependent variables may be uncorrelated.

Example. Let a two-dimensional random variable (X, Y) be given by a distribution density that is defined inside a given ellipse and equals zero outside it. Prove that X and Y are uncorrelated.

The distribution densities of the components, as is easy to see, are given by the corresponding formulas inside the ellipse and are equal to zero outside it. Since the joint density is not equal to the product of the component densities, X and Y are dependent random variables.

Since the density is symmetric about the Oy axis, M(X) = 0; similarly, M(Y) = 0 because of the symmetry about the Ox axis (the densities are even functions). Moreover, M(XY) = 0, since the inner integral in the expression for M(XY) vanishes (the integral of an odd function over limits symmetric about the origin is zero). Then

\mu_{xy} = M(XY) - M(X)\, M(Y) = 0,

i.e. these dependent random variables are not correlated.
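The same phenomenon can be seen on a simpler substitute example (not the ellipse density above): for X uniform on (-1, 1) and Y = X², the variables are functionally dependent, yet symmetry makes the correlation moment vanish:

```python
# Dependent but uncorrelated: Y = X^2 with X uniform on (-1, 1).
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 500_000)
y = x ** 2                                    # functionally dependent on X

print(np.mean(x * y) - x.mean() * y.mean())   # mu_xy ≈ 0 (up to sampling error)
```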

Note 1. For normally distributed components of a two-dimensional random variable, the concepts of uncorrelatedness and independence are equivalent.

Note 2. If the components X and Y are connected by a linear dependence Y = aX + b, then |r_{xy}| = 1.



    Law of probability distribution of a function of one random variable

When solving problems related to assessing the accuracy of various automatic systems, the production accuracy of individual elements of systems, and so on, one often has to consider functions of one or several random variables. Such functions are also random variables. Therefore, when solving problems, it is necessary to know the distribution laws of the random variables appearing in the problem. In such cases the distribution law of the system of random arguments and the functional dependence are usually known.

    Thus, a problem arises that can be formulated as follows.

Given a system of random variables (X_1,X_2,\ldots,X_n) whose distribution law is known, some random variable Y is considered as a function of these random variables:

Y=\varphi(X_1,X_2,\ldots,X_n). \qquad (6.1)

It is required to determine the distribution law of the random variable Y, knowing the form of the function (6.1) and the joint distribution law of its arguments.

Let us first consider the problem of the distribution law of a function of one random argument

Y=\varphi(X).

Let X be a discrete random variable with the distribution series

\begin{array}{|c|c|c|c|c|}\hline X & x_1 & x_2 & \cdots & x_n\\\hline P & p_1 & p_2 & \cdots & p_n\\\hline\end{array}

Then Y=\varphi(X) is also a discrete random variable with possible values y_1=\varphi(x_1), y_2=\varphi(x_2), \ldots, y_n=\varphi(x_n). If all the values y_1,y_2,\ldots,y_n are different, then for each k=1,2,\ldots,n the events \{X=x_k\} and \{Y=y_k=\varphi(x_k)\} are identical. Hence,

P\{Y=y_k\}=P\{X=x_k\}=p_k,


    and the required distribution series has the form

\begin{array}{|c|c|c|c|c|}\hline Y & y_1=\varphi(x_1) & y_2=\varphi(x_2) & \cdots & y_n=\varphi(x_n)\\\hline P & p_1 & p_2 & \cdots & p_n\\\hline\end{array}

If among the numbers y_1=\varphi(x_1), y_2=\varphi(x_2), \ldots, y_n=\varphi(x_n) there are identical ones, then each group of identical values y_k=\varphi(x_k) should be assigned a single column in the table, and the corresponding probabilities should be added.

    For continuous random variables, the problem is posed as follows: knowing the distribution density f(x) of the random variable X, find the distribution density g(y) of the random variable Y=\varphi(X). When solving the problem, we consider two cases.

Let us first assume that the function y=\varphi(x) is monotonically increasing, continuous and differentiable on the interval (a;b) on which all possible values of X lie. Then the inverse function x=\psi(y) exists and is also monotonically increasing, continuous and differentiable. In this case we get

g(y)=f\bigl(\psi(y)\bigr)\cdot |\psi'(y)|. \qquad (6.2)

Example 1. A random variable X is distributed with density

f(x)=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}.

Find the law of distribution of the random variable Y related to X by the dependence Y=X^3.

Solution. Since the function y=x^3 is monotonic on the interval (-\infty;+\infty), we can apply formula (6.2). The inverse of the function \varphi(x)=x^3 is \psi(y)=\sqrt[3]{y}, and its derivative is \psi'(y)=\frac{1}{3\sqrt[3]{y^2}}. Hence,

g(y)=\frac{1}{3\sqrt{2\pi}}\,e^{-\sqrt[3]{y^2}/2}\,\frac{1}{\sqrt[3]{y^2}}.

Let us now consider the case of a non-monotonic function. Let the function y=\varphi(x) be such that the inverse function x=\psi(y) is many-valued, i.e. one value of y corresponds to several values of the argument x, which we denote x_1=\psi_1(y),x_2=\psi_2(y),\ldots,x_n=\psi_n(y), where n is the number of intervals on which the function y=\varphi(x) is monotonic. Then

g(y)=\sum\limits_{k=1}^{n}f\bigl(\psi_k(y)\bigr)\cdot |\psi'_k(y)|. \qquad (6.3)

Example 2. Under the conditions of Example 1, find the distribution of the random variable Y=X^2.

Solution. The inverse function x=\psi(y) is many-valued: one value of the argument y corresponds to two values of x,

x_1=\psi_1(y)=-\sqrt{y}, \qquad x_2=\psi_2(y)=\sqrt{y}.

Applying formula (6.3), we obtain:

\begin{gathered}g(y)=f(\psi_1(y))\,|\psi'_1(y)|+f(\psi_2(y))\,|\psi'_2(y)|=\\ =\frac{1}{\sqrt{2\pi}}\,e^{-\left(-\sqrt{y}\right)^2/2}\left|-\frac{1}{2\sqrt{y}}\right|+\frac{1}{\sqrt{2\pi}}\,e^{-\left(\sqrt{y}\right)^2/2}\left|\frac{1}{2\sqrt{y}}\right|=\frac{1}{\sqrt{2\pi y}}\,e^{-y/2}.\end{gathered}
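The result of Example 2 can be verified by sampling: for a standard normal X, the proportion of values of Y = X² falling in an interval should match the integral of the derived density g(y) = e^{-y/2}/\sqrt{2\pi y} over that interval:

```python
# Sampling check of Example 2: Y = X^2 for standard normal X.
import numpy as np

rng = np.random.default_rng(4)
y = rng.standard_normal(1_000_000) ** 2

g = lambda t: np.exp(-t / 2) / np.sqrt(2 * np.pi * t)   # derived density of Y
a, b = 0.5, 1.0
t = np.linspace(a, b, 2001)
print(np.mean((a < y) & (y < b)), np.trapz(g(t), t))    # both ≈ 0.16
```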

    Distribution law of a function of two random variables

    Let the random variable Y be a function of two random variables forming the system (X_1;X_2), i.e. Y=\varphi(X_1;X_2). The task is to find the distribution of the random variable Y using the known distribution of the system (X_1;X_2).

Let f(x_1;x_2) be the distribution density of the system of random variables (X_1;X_2). Let us introduce a new variable Y_1 equal to X_1 and consider the system of equations

\begin{cases} y_1 = x_1,\\ y = \varphi(x_1;x_2).\end{cases}

We will assume that this system is uniquely solvable with respect to x_1, x_2,

x_1 = y_1, \qquad x_2 = \psi(y;x_1),

and satisfies the differentiability conditions.

The distribution density of the random variable Y is then

g_1(y)=\int\limits_{-\infty}^{+\infty}f\bigl(x_1;\psi(y;x_1)\bigr)\left|\frac{\partial\psi(y;x_1)}{\partial y}\right|dx_1.

    Note that the reasoning does not change if the introduced new value Y_1 is set equal to X_2.

    Mathematical expectation of a function of random variables

In practice there are often cases when there is no particular need to determine the distribution law of a function of random variables completely, and it is enough to indicate only its numerical characteristics. Thus the problem arises of determining the numerical characteristics of functions of random variables without first finding the distribution laws of these functions.

Let the random variable Y be a function of the random argument X, whose distribution law is given:

Y=\varphi(X).

    It is required, without finding the law of distribution of the quantity Y, to determine its mathematical expectation

    M(Y)=M[\varphi(X)].

    Let X be a discrete random variable having a distribution series

\begin{array}{|c|c|c|c|c|}\hline x_i & x_1 & x_2 & \cdots & x_n\\\hline p_i & p_1 & p_2 & \cdots & p_n\\\hline\end{array}

Let us make a table of the values of the variable Y and the probabilities of these values:

\begin{array}{|c|c|c|c|c|}\hline y_i=\varphi(x_i) & y_1=\varphi(x_1) & y_2=\varphi(x_2) & \cdots & y_n=\varphi(x_n)\\\hline p_i & p_1 & p_2 & \cdots & p_n\\\hline\end{array}

This table is not, in general, a distribution series of the random variable Y, since some of the values in the top row may coincide and they are not necessarily listed in ascending order. Nevertheless, the mathematical expectation of the random variable Y can be determined by the formula

M[\varphi(X)]=\sum\limits_{i=1}^{n}\varphi(x_i)\,p_i, \qquad (6.4)


since the value determined by formula (6.4) does not change if some terms under the summation sign are combined in advance and the order of the terms is changed.

Formula (6.4) does not explicitly contain the distribution law of the function \varphi(X) itself; it contains only the distribution law of the argument X. Thus, to determine the mathematical expectation of the function Y=\varphi(X), it is not at all necessary to know the distribution law of the function \varphi(X); it suffices to know the distribution law of the argument X.

    For a continuous random variable, the mathematical expectation is calculated using the formula

M[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\varphi(x)f(x)\,dx,


    where f(x) is the probability distribution density of the random variable X.
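A numerical sketch of this formula: for a standard normal X and φ(x) = x² (an illustrative choice), the integral must equal M(X²) = D(X) + M(X)² = 1:

```python
# M[phi(X)] = ∫ phi(x) f(x) dx evaluated on a grid.
import numpy as np

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # density of X
phi = lambda x: x**2                                    # the function of X

x = np.linspace(-10, 10, 4001)
print(np.trapz(phi(x) * f(x), x))                       # ≈ 1.0
```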

    Let us consider cases when, to find the mathematical expectation of a function of random arguments, knowledge of even the laws of distribution of arguments is not required, but it is enough to know only some of their numerical characteristics. Let us formulate these cases in the form of theorems.

Theorem 6.1. The mathematical expectation of the sum of two random variables, whether dependent or independent, is equal to the sum of the mathematical expectations of these variables:

    M(X+Y)=M(X)+M(Y).

    Theorem 6.2. The mathematical expectation of the product of two random variables is equal to the product of their mathematical expectations plus the correlation moment:

M(XY)=M(X)\,M(Y)+\mu_{xy}.

    Corollary 6.1. The mathematical expectation of the product of two uncorrelated random variables is equal to the product of their mathematical expectations.

    Corollary 6.2. The mathematical expectation of the product of two independent random variables is equal to the product of their mathematical expectations.

    Variance of a function of random variables

By the definition of variance we have D[Y]=M[(Y-M(Y))^2]. Hence,

D[\varphi(X)]=M\bigl[(\varphi(X)-M(\varphi(X)))^2\bigr].

    Let's give calculation formulas only for the case of continuous random arguments. For a function of one random argument Y=\varphi(X), the variance is expressed by the formula

D[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\bigl(\varphi(x)-M(\varphi(x))\bigr)^2 f(x)\,dx, \qquad (6.5)

where M(\varphi(x))=M[\varphi(X)] is the mathematical expectation of the function \varphi(X), and f(x) is the distribution density of the variable X.

Formula (6.5) can be replaced with the following:

D[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\varphi^2(x)f(x)\,dx-M^2[\varphi(X)].

Let us consider the variance theorems, which play an important role in probability theory and its applications.

Theorem 6.3. The variance of the sum of random variables is equal to the sum of the variances of these variables plus twice the sum of the correlation moments of each summand with all the summands that follow it:

D\!\left[\sum\limits_{i=1}^{n}X_i\right]=\sum\limits_{i=1}^{n}D[X_i]+2\sum\limits_{i<j}\mu_{x_ix_j}.

Corollary 6.3. The variance of the sum of uncorrelated random variables is equal to the sum of the variances of the terms:

D\!\left[\sum\limits_{i=1}^{n}X_i\right]=\sum\limits_{i=1}^{n}D[X_i].

The correlation moment of two functions Y_1=\varphi_1(X) and Y_2=\varphi_2(X) of a random argument is defined as \mu_{y_1y_2}= M(Y_1Y_2)-M(Y_1)M(Y_2), so that

\mu_{y_1y_2}=M\bigl(\varphi_1(X)\varphi_2(X)\bigr)-M\bigl(\varphi_1(X)\bigr)M\bigl(\varphi_2(X)\bigr),


    that is, the correlation moment of two functions of random variables is equal to the mathematical expectation of the product of these functions minus the product of the mathematical expectations.

    Let's look at the main properties of the correlation moment and correlation coefficient.

    Property 1. Adding constants to random variables does not change the correlation moment and the correlation coefficient.

    Property 2. For any random variables X and Y, the absolute value of the correlation moment does not exceed the geometric mean of the variances of these values:

|\mu_{xy}|\leqslant\sqrt{D[X]\cdot D[Y]}=\sigma_x\cdot \sigma_y.


