Functions of Random Arguments and Their Distributions

Definition of a function of random variables. A function of a discrete random argument and its numerical characteristics. A function of a continuous random argument and its numerical characteristics. Functions of two random arguments. Determining the probability distribution function and density for a function of two random arguments.

Law of probability distribution of a function of one random variable

When solving problems related to assessing the accuracy of various automatic systems, the production accuracy of individual system elements, and so on, one often has to consider functions of one or more random variables. Such functions are themselves random variables. Therefore, when solving such problems it is necessary to know the distribution laws of the random variables appearing in the problem. In this setting, the distribution law of the system of random arguments and the functional dependence are usually known.

Thus, a problem arises that can be formulated as follows.

Given a system of random variables (X_1,X_2,\ldots,X_n) whose distribution law is known. Some random variable Y is considered as a function of these random variables:

Y=\varphi(X_1,X_2,\ldots,X_n). \qquad (6.1)

It is required to determine the distribution law of the random variable Y, knowing the form of the function (6.1) and the joint distribution law of its arguments.

Let us consider the problem of the distribution law of a function of one random argument

Y=\varphi(X).

Let X be a discrete random variable with the distribution series

\begin{array}{|c|c|c|c|c|}\hline X&x_1&x_2&\cdots&x_n\\\hline P&p_1&p_2&\cdots&p_n\\\hline\end{array}

Then Y=\varphi(X) is also a discrete random variable with the possible values y_k=\varphi(x_k). If all the values y_1,y_2,\ldots,y_n are distinct, then for each k=1,2,\ldots,n the events \{X=x_k\} and \{Y=y_k=\varphi(x_k)\} are identical. Hence,

P\{Y=y_k\}=P\{X=x_k\}=p_k,


and the required distribution series has the form

\begin{array}{|c|c|c|c|c|}\hline Y&y_1=\varphi(x_1)&y_2=\varphi(x_2)&\cdots&y_n=\varphi(x_n)\\\hline P&p_1&p_2&\cdots&p_n\\\hline\end{array}

If among the numbers y_1=\varphi(x_1),\,y_2=\varphi(x_2),\ldots,y_n=\varphi(x_n) some are equal, then each group of equal values y_k=\varphi(x_k) should be assigned a single column of the table, and the corresponding probabilities should be added.
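This merging rule can be sketched in Python (a minimal illustration; the function name and the example values are our own choices, not part of the text):

```python
from collections import defaultdict

def push_forward(xs, ps, phi):
    """Build the distribution series of Y = phi(X) from that of a
    discrete X: apply phi to every value and merge equal results,
    adding the corresponding probabilities."""
    acc = defaultdict(float)
    for x, p in zip(xs, ps):
        acc[phi(x)] += p
    ys = sorted(acc)
    return ys, [acc[y] for y in ys]

# X takes -1, 0, 1 with probabilities 0.3, 0.4, 0.3; Y = X^2
ys, qs = push_forward([-1, 0, 1], [0.3, 0.4, 0.3], lambda x: x * x)
print(ys, qs)  # values -1 and 1 both map to 1, so their probabilities add
```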

For continuous random variables, the problem is posed as follows: knowing the distribution density f(x) of the random variable X, find the distribution density g(y) of the random variable Y=\varphi(X). When solving the problem, we consider two cases.

Let us first assume that the function y=\varphi(x) is monotonically increasing, continuous and differentiable on the interval (a;b) containing all possible values of X. Then the inverse function x=\psi(y) exists and is likewise monotonically increasing, continuous and differentiable. In this case we get

g(y)=f\bigl(\psi(y)\bigr)\cdot \bigl|\psi'(y)\bigr|. \qquad (6.2)

Example 1. A random variable X is distributed with the density

f(x)=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}.

Find the distribution law of the random variable Y connected with X by the dependence Y=X^3.

Solution. Since the function y=x^3 is monotonic on the interval (-\infty;+\infty), we can apply formula (6.2). The function inverse to \varphi(x)=x^3 is \psi(y)=\sqrt[3]{y}, and its derivative is \psi'(y)=\frac{1}{3\sqrt[3]{y^2}}. Hence,

g(y)=\frac{1}{3\sqrt{2\pi}}\,e^{-\sqrt[3]{y^2}/2}\,\frac{1}{\sqrt[3]{y^2}}.
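This density can be sanity-checked numerically; the sketch below (our own Monte Carlo check, with the interval [0.5, 1.5] chosen purely for illustration) compares the empirical probability that Y=X^3 lands in that interval with the integral of g over it:

```python
import math
import random

random.seed(0)

# density of Y = X^3 for standard normal X, from formula (6.2)
def g(y):
    r = abs(y) ** (2 / 3)
    return math.exp(-r / 2) / (3 * math.sqrt(2 * math.pi) * r)

# empirical probability P(0.5 <= X^3 <= 1.5)
n = 200_000
hits = sum(0.5 <= random.gauss(0, 1) ** 3 <= 1.5 for _ in range(n)) / n

# midpoint-rule integral of g over [0.5, 1.5]
m = 1000
integral = sum(g(0.5 + (k + 0.5) / m) / m for k in range(m))
print(hits, integral)
```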

Let us consider the case of a nonmonotonic function. Let the function y=\varphi(x) be such that the inverse function x=\psi(y) is multivalued, i.e. one value of y corresponds to several values of the argument x, which we denote x_1=\psi_1(y),\,x_2=\psi_2(y),\ldots,x_n=\psi_n(y), where n is the number of sections on which the function y=\varphi(x) is monotonic. Then

g(y)=\sum\limits_{k=1}^{n}f\bigl(\psi_k(y)\bigr)\cdot\bigl|\psi'_k(y)\bigr|. \qquad (6.3)

Example 2. Under the conditions of example 1, find the distribution of the random variable Y=X^2.

Solution. The inverse function x=\psi(y) is ambiguous: one value of the argument y corresponds to two values of x,

x_1=\psi_1(y)=-\sqrt{y},\qquad x_2=\psi_2(y)=\sqrt{y}\qquad (y>0).
Applying formula (6.3), we obtain:

\begin{gathered}g(y)=f(\psi_1(y))\,|\psi'_1(y)|+f(\psi_2(y))\,|\psi'_2(y)|=\\ =\frac{1}{\sqrt{2\pi}}\,e^{-(-\sqrt{y})^2/2}\left|-\frac{1}{2\sqrt{y}}\right|+\frac{1}{\sqrt{2\pi}}\,e^{-(\sqrt{y})^2/2}\left|\frac{1}{2\sqrt{y}}\right|=\frac{1}{\sqrt{2\pi y}}\,e^{-y/2},\qquad y>0.\end{gathered}
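This is the chi-square density with one degree of freedom. As a quick check (our own illustration, not part of the text), P(Y\le 1)=P(-1\le X\le 1)\approx 0.683 can be compared both with a simulated frequency and with a numerical integral of g:

```python
import math
import random

random.seed(1)

# g(y) = exp(-y/2) / sqrt(2*pi*y), y > 0
def g(y):
    return math.exp(-y / 2) / math.sqrt(2 * math.pi * y)

# simulated P(X^2 <= 1) for standard normal X
n = 200_000
frac = sum(random.gauss(0, 1) ** 2 <= 1 for _ in range(n)) / n

# midpoint-rule integral of g over (0, 1]; the 1/sqrt(y) singularity
# at 0 is integrable, so the rule still converges
m = 2000
integral = sum(g((k + 0.5) / m) / m for k in range(m))
print(frac, integral)
```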

Distribution law of a function of two random variables

Let the random variable Y be a function of two random variables forming the system (X_1;X_2), i.e. Y=\varphi(X_1;X_2). The task is to find the distribution of the random variable Y from the known distribution of the system (X_1;X_2).

Let f(x_1;x_2) be the distribution density of the system of random variables (X_1;X_2). Let us introduce a new quantity Y_1 equal to X_1 and consider the system of equations

\begin{cases}y=\varphi(x_1;x_2),\\ y_1=x_1.\end{cases}

We will assume that this system is uniquely solvable with respect to x_1, x_2,

x_1=y_1,\qquad x_2=\psi(y;y_1),

and satisfies the differentiability conditions.

The distribution density of the random variable Y is then

g(y)=\int\limits_{-\infty}^{+\infty}f\bigl(x_1;\psi(y;x_1)\bigr)\left|\frac{\partial\psi(y;x_1)}{\partial y}\right|dx_1.
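As an illustration of this formula (the concrete case is our own choice): for Y=X_1X_2 with independent U(0,1) factors, \psi(y;x_1)=y/x_1 and |\partial\psi/\partial y|=1/x_1, which yields g(y)=-\ln y on (0,1); a quick Monte Carlo check:

```python
import math
import random

random.seed(2)

# P(Y <= 0.5) from the density g(y) = -ln y on (0, 1):
# the integral of -ln y over (0, 0.5] equals 0.5 - 0.5*ln(0.5)
exact = 0.5 - 0.5 * math.log(0.5)

# simulated product of two independent U(0,1) variables
n = 200_000
frac = sum(random.random() * random.random() <= 0.5 for _ in range(n)) / n
print(frac, exact)
```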

Note that the reasoning does not change if the introduced new value Y_1 is set equal to X_2.

Mathematical expectation of a function of random variables

In practice there are often cases when there is no particular need to determine the distribution law of a function of random variables completely; it is enough to indicate its numerical characteristics. Thus, the problem arises of determining the numerical characteristics of functions of random variables without first finding the distribution laws of these functions.

Let the random variable Y be a function of the random argument X with a given distribution law:

Y=\varphi(X).

It is required, without finding the distribution law of the quantity Y, to determine its mathematical expectation

M(Y)=M[\varphi(X)].

Let X be a discrete random variable having a distribution series

\begin{array}{|c|c|c|c|c|}\hline x_i&x_1&x_2&\cdots&x_n\\\hline p_i&p_1&p_2&\cdots&p_n\\\hline\end{array}

Let us make a table of the values of the quantity Y and the probabilities of these values:

\begin{array}{|c|c|c|c|c|}\hline y_i=\varphi(x_i)&y_1=\varphi(x_1)&y_2=\varphi(x_2)&\cdots&y_n=\varphi(x_n)\\\hline p_i&p_1&p_2&\cdots&p_n\\\hline\end{array}

This table is not a distribution series of the random variable Y, since in the general case some of the values may coincide and the values in the top row are not necessarily in ascending order. Nevertheless, the mathematical expectation of the random variable Y can be determined by the formula

M[\varphi(X)]=\sum\limits_{i=1}^{n}\varphi(x_i)\,p_i, \qquad (6.4)


since the value determined by formula (6.4) cannot change if some terms under the summation sign are combined in advance and the order of the terms is changed.

Formula (6.4) does not explicitly contain the distribution law of the function \varphi(X) itself, but only the distribution law of the argument X. Thus, to determine the mathematical expectation of the function Y=\varphi(X) it is not at all necessary to know the distribution law of \varphi(X); it is enough to know the distribution law of the argument X.

For a continuous random variable, the mathematical expectation is calculated using the formula

M[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\varphi(x)\,f(x)\,dx,


where f(x) is the probability distribution density of the random variable X.
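As a small numerical sketch of this formula (the choices \varphi(x)=x^2 and standard normal X are ours): the integral then equals D[X]=1, which a midpoint rule reproduces:

```python
import math

# f(x): standard normal density
def f(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# midpoint rule for M[phi(X)] = integral of phi(x) f(x) dx with phi(x) = x^2;
# the tails beyond |x| = 10 are negligible
a, b, steps = -10.0, 10.0, 20_000
h = (b - a) / steps
m = sum((a + (k + 0.5) * h) ** 2 * f(a + (k + 0.5) * h) * h
        for k in range(steps))
print(m)  # close to 1.0
```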

Let us consider cases in which, to find the mathematical expectation of a function of random arguments, even the distribution laws of the arguments are not required; it is enough to know only some of their numerical characteristics. Let us state these cases in the form of theorems.

Theorem 6.1. The mathematical expectation of the sum of two random variables, whether dependent or independent, is equal to the sum of the mathematical expectations of these variables:

M(X+Y)=M(X)+M(Y).

Theorem 6.2. The mathematical expectation of the product of two random variables is equal to the product of their mathematical expectations plus the correlation moment:

M(XY)=M(X)M(Y)+\mu_{xy}.

Corollary 6.1. The mathematical expectation of the product of two uncorrelated random variables is equal to the product of their mathematical expectations.

Corollary 6.2. The mathematical expectation of the product of two independent random variables is equal to the product of their mathematical expectations.
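Theorem 6.2 can be illustrated by simulation (the distributions below are arbitrary choices for the sketch): for the dependent pair Y=X+Z the sample analogue of M(XY)=M(X)M(Y)+\mu_{xy} holds, and \mu_{xy} comes out close to D[X]=4:

```python
import random

random.seed(3)

n = 200_000
xs = [random.gauss(1, 2) for _ in range(n)]      # X ~ N(1, 2^2)
ys = [x + random.gauss(0, 1) for x in xs]        # Y = X + Z, dependent on X

mean = lambda v: sum(v) / len(v)
mx, my = mean(xs), mean(ys)
mxy = mean([x * y for x, y in zip(xs, ys)])
# sample correlation moment (covariance)
mu = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])

print(mxy, mx * my + mu)   # the two sides of Theorem 6.2 on the sample
print(mu)                  # close to D[X] = 4
```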

Variance of a function of random variables

By the definition of variance we have D[Y]=M[(Y-M(Y))^2]. Hence,

D[\varphi(X)]=M\bigl[(\varphi(X)-M[\varphi(X)])^2\bigr], where M[\varphi(X)] is the mathematical expectation of the function \varphi(X).

Let us give the calculation formulas only for the case of continuous random arguments. For a function of one random argument Y=\varphi(X), the variance is expressed by the formula

D[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\bigl(\varphi(x)-M[\varphi(X)]\bigr)^2 f(x)\,dx, \qquad (6.5)

where M[\varphi(X)] is the mathematical expectation of the function \varphi(X), and f(x) is the distribution density of the quantity X.

Formula (6.5) can be replaced with the following:

D[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\varphi^2(x)\,f(x)\,dx-M^2[\varphi(X)].
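Both expressions give the same variance, as the following sketch checks numerically (the case X uniform on (0,1) with \varphi(x)=x^2 is our own illustrative choice; the exact value is 1/5-1/9=4/45):

```python
# D[phi(X)] computed both ways for X ~ U(0,1) and phi(x) = x^2,
# using a midpoint rule on (0, 1)
h = 1e-4
steps = int(1 / h)
pts = [(k + 0.5) * h for k in range(steps)]

m = sum(x * x * h for x in pts)              # M[phi(X)] = 1/3
d1 = sum((x * x - m) ** 2 * h for x in pts)  # centered form, formula (6.5)
d2 = sum(x ** 4 * h for x in pts) - m * m    # second-moment form
print(d1, d2)  # both close to 4/45
```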

Let us consider theorems on variance that play an important role in probability theory and its applications.

Theorem 6.3. The variance of the sum of random variables is equal to the sum of the variances of these variables plus twice the sum of the correlation moments of each of the summands with all subsequent ones:

D\!\left[\sum\limits_{i=1}^{n}X_i\right]=\sum\limits_{i=1}^{n}D[X_i]+2\sum\limits_{i<j}\mu_{x_ix_j}.

Corollary 6.3. The variance of the sum of uncorrelated random variables is equal to the sum of the variances of the terms:

D\!\left[\sum\limits_{i=1}^{n}X_i\right]=\sum\limits_{i=1}^{n}D[X_i].

For two functions Y_1=\varphi_1(X) and Y_2=\varphi_2(X) of a random argument, the correlation moment \mu_{y_1y_2}=M(Y_1Y_2)-M(Y_1)M(Y_2) becomes

\mu_{y_1y_2}=M\bigl(\varphi_1(X)\varphi_2(X)\bigr)-M\bigl(\varphi_1(X)\bigr)\,M\bigl(\varphi_2(X)\bigr),


that is, the correlation moment of two functions of random variables is equal to the mathematical expectation of the product of these functions minus the product of the mathematical expectations.

Let's look at the main properties of the correlation moment and correlation coefficient.

Property 1. Adding constants to random variables does not change the correlation moment and the correlation coefficient.

Property 2. For any random variables X and Y, the absolute value of the correlation moment does not exceed the geometric mean of the variances of these values:

|\mu_{xy}|\leqslant\sqrt{D[X]\cdot D[Y]}=\sigma_x\cdot \sigma_y,

    6.1. The law of distribution of a function of one random argument.

    Let's start by considering the simplest problem about the distribution law of a function of one random argument. Since continuous random variables are of greatest importance for practice, we will solve the problem specifically for them.

    There is a continuous random variable X with distribution density f(x). Another random variable Y is connected with it by the functional dependence Y=\varphi(X).

    It is required to find the distribution density g(y) of the quantity Y. Let us consider the section (a, b) of the abscissa axis on which all possible values of the quantity X lie, i.e. a<x<b.

    The method for solving the problem depends on the behavior of the function \varphi(x) on the section (a, b): whether or not it is monotonic.

    In this section we consider the case when the function \varphi(x) is monotonic on the segment (a, b). We analyze separately two cases: monotonic increase and monotonic decrease of the function.

    1. The function \varphi(x) increases monotonically in the region (a, b) (Fig. 6.1.1). When the quantity X takes various values from this region, the random point (X, Y) moves only along the curve y=\varphi(x); the ordinate of this random point is completely determined by its abscissa.

    Let us denote by g(y) the distribution density of the quantity Y. In order to determine g(y), we first find the distribution function of the quantity Y: G(y)=P(Y<y).

    Let us draw a straight line AB parallel to the abscissa axis at a distance y from it (Fig. 6.1.1). For the condition Y<y to be satisfied, the random point (X, Y) must fall on the section of the curve that lies below the straight line AB; for this it is necessary and sufficient that the random variable X fall on the section of the abscissa axis from a to x, where x is the abscissa of the point of intersection of the curve and the straight line AB. Hence,

    G(y)=P(Y<y)=P(X<x)=\int\limits_a^x f(x)\,dx. \qquad (6.1.1)

    Since \varphi(x) is monotonic on the segment (a, b), there exists an inverse single-valued function x=\psi(y). Then

    G(y)=\int\limits_a^{\psi(y)}f(x)\,dx. \qquad (6.1.2)

    Differentiating the integral (6.1.2) with respect to the variable y entering the upper limit, we obtain:

    g(y)=f\bigl(\psi(y)\bigr)\,\psi'(y). \qquad (6.1.3)

    2. The function \varphi(x) decreases monotonically on the section (a, b) (Fig. 6.1.2). In this case

    G(y)=P(Y<y)=P(X>x)=\int\limits_{\psi(y)}^{b}f(x)\,dx, \qquad (6.1.4)

    whence

    g(y)=-f\bigl(\psi(y)\bigr)\,\psi'(y). \qquad (6.1.5)

    Comparing formulas (6.1.3) and (6.1.5), we notice that they can be combined into one:

    g(y)=f\bigl(\psi(y)\bigr)\,\bigl|\psi'(y)\bigr|. \qquad (6.1.6)

    Indeed, when \varphi(x) increases, its derivative (and hence \psi'(y)) is positive. For a decreasing function the derivative is negative, but it is preceded by a minus sign in formula (6.1.5). Consequently, formula (6.1.6), in which the derivative is taken modulo, is valid in both cases.

    3. Consider the case when the function \varphi(x) is not monotonic on the section of possible values of the argument (Fig. 6.1.3).

    Let us find the distribution function G(y) of the quantity Y. To do this, we again draw a straight line AB parallel to the abscissa axis at a distance y from it and select those sections of the curve on which the condition Y<y is satisfied. Let these sections correspond to the sections (a_1,b_1),(a_2,b_2),\ldots of the abscissa axis.

    The event Y<y is equivalent to the random variable X falling into one of these sections, no matter which one. Therefore

    P(Y<y)=\sum\limits_{k}P(a_k<X<b_k). \qquad (6.1.7)

    Thus, for the distribution function of the quantity Y we have the formula

    G(y)=\sum\limits_{k}\,\int\limits_{a_k}^{b_k}f(x)\,dx. \qquad (6.1.8)

    The boundaries a_k, b_k of the intervals depend on y and, for a given specific form of the function \varphi(x), can be expressed as explicit functions of y. Differentiating G(y) with respect to the quantity y entering the limits of the integrals, we obtain the distribution density of the quantity Y:

    g(y)=\sum\limits_{k}\bigl[f\bigl(b_k(y)\bigr)\,b_k'(y)-f\bigl(a_k(y)\bigr)\,a_k'(y)\bigr]. \qquad (6.1.9)

    Example. The quantity X obeys the law of uniform density on its section of possible values.

    Find the distribution law of the quantity Y.

    Solution. We construct the graph of the function (Fig. 6.1.4). The function is clearly non-monotonic on the interval in question. Applying formula (6.1.8), we have:

    Let us express the limits of integration in terms of y. Then

    (6.1.10)

    To find the density g(y), we differentiate this expression with respect to the variable y entering the limits of the integrals and obtain:

    Bearing this in mind, we get:

    (6.1.11)

    When stating the distribution law (6.1.11) for Y, it should be noted that it is valid only in the range from 0 to 1, i.e. within the limits in which Y varies when the argument X is confined to its interval. Outside these limits the density g(y) is zero.

    The graph of the function g(y) is given in Fig. 6.1.5. At y=1 the curve g(y) has a branch going to infinity.

    6.2. The distribution law of a function of two random variables.

    Let us present a general method for solving the problem for the simplest case of a function of two arguments.

    There is a system of two continuous random variables (X, Y) with distribution density f(x, y). The random variable Z is connected with X and Y by the functional dependence

    Z=\varphi(X,Y).

    It is required to find the law of distribution of the value Z.

    To solve the problem we shall use a geometric interpretation. The function z=\varphi(x,y) is now depicted not by a curve but by a surface (Fig. 6.2.1).

    Let us find the distribution function of the quantity Z:

    G(z)=P(Z<z)=P\bigl(\varphi(X,Y)<z\bigr). \qquad (6.2.1)

    Let us draw a plane Q parallel to the plane xOy at a distance z from it. This plane intersects the surface z=\varphi(x,y) along some curve K. Let us project the curve K onto the plane xOy. This projection, whose equation is \varphi(x,y)=z, divides the plane xOy into two regions: for one of them the height of the surface above the plane xOy is less than z, for the other greater than z. Denote by D the region where this height is less than z. For inequality (6.2.1) to hold, the random point (X, Y) must obviously fall into the region D; hence,

    G(z)=P\bigl((X,Y)\subset D\bigr)=\iint\limits_{D}f(x,y)\,dx\,dy. \qquad (6.2.2)

    In expression (6.2.2) the quantity z enters implicitly, through the limits of integration.

    Differentiating G(z) with respect to z, we obtain the distribution density of the quantity Z:

    g(z)=\frac{dG(z)}{dz}. \qquad (6.2.3)

    Knowing the specific form of the function \varphi(x,y), we can express the limits of integration in terms of z and write the expression for g(z) explicitly.

    6.3. The law of distribution of the sum of two random variables. Composition of distribution laws.

    Let us use the general method outlined above to solve one problem: finding the distribution law of the sum of two random variables. There is a system of two random variables (X, Y) with distribution density f(x, y). Consider the sum of the random variables X and Y, Z=X+Y, and let us find the distribution law of the quantity Z. To do this, we construct on the plane xOy the line whose equation is x+y=z (Fig. 6.3.1). This is a straight line cutting off on the axes segments equal to z. The straight line x+y=z divides the plane xOy into two parts: to the right of and above it, x+y>z; to the left of and below it, x+y<z.

    The region D in this case is the lower left part of the plane xOy, shaded in Fig. 6.3.1. According to formula (6.2.2) we have:

    G(z)=\iint\limits_{x+y<z}f(x,y)\,dx\,dy=\int\limits_{-\infty}^{+\infty}\left[\,\int\limits_{-\infty}^{z-x}f(x,y)\,dy\right]dx.

    Differentiating this expression with respect to the variable z entering the upper limit of the inner integral, we obtain:

    g(z)=\int\limits_{-\infty}^{+\infty}f(x,z-x)\,dx. \qquad (6.3.1)

    This is the general formula for the distribution density of the sum of two random variables.

    For reasons of symmetry of the problem with respect to X and Y, we can write another version of the same formula:

    g(z)=\int\limits_{-\infty}^{+\infty}f(z-y,y)\,dy, \qquad (6.3.2)

    which is equivalent to the first and can be used instead of it.
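As an illustration (the concrete densities are our own choice): convolving two uniform densities on (0, 1) by formula (6.3.1) numerically gives the triangular (Simpson) density on (0, 2):

```python
# numeric convolution g(z) = integral of f(x) f(z - x) dx
# for two uniform densities on (0, 1), by the midpoint rule
def f(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def g(z, m=1000):
    h = 1.0 / m
    return sum(f(x) * f(z - x) * h
               for x in ((k + 0.5) * h for k in range(m)))

# triangular density: rises from 0 at z=0 to 1 at z=1, falls back to 0 at z=2
print(g(0.5), g(1.0), g(1.5))
```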

    Example: composition of normal laws. Consider two independent random variables X and Y subject to the normal laws

    f_1(x)=\frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-\frac{(x-m_x)^2}{2\sigma_x^2}},\qquad f_2(y)=\frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-\frac{(y-m_y)^2}{2\sigma_y^2}}.

    It is required to produce the composition of these laws, that is, to find the distribution law of the quantity Z=X+Y.

    Let us apply the general formula for the composition of distribution laws:

    g(z)=\int\limits_{-\infty}^{+\infty}f_1(x)\,f_2(z-x)\,dx. \qquad (6.3.3)

    If we expand the brackets in the exponent of the integrand and collect similar terms, we get:

    Substituting these expressions into the formula we have already encountered,

    \int\limits_{-\infty}^{+\infty}e^{-(Ax^2+2Bx+C)}\,dx=\sqrt{\frac{\pi}{A}}\;e^{\frac{B^2-AC}{A}}, \qquad (6.3.4)

    after transformations we obtain:

    g(z)=\frac{1}{\sqrt{2\pi(\sigma_x^2+\sigma_y^2)}}\;e^{-\frac{(z-m_x-m_y)^2}{2(\sigma_x^2+\sigma_y^2)}}, \qquad (6.3.5)

    and this is nothing more than a normal law with the center of dispersion

    m_z=m_x+m_y \qquad (6.3.6)

    and the standard deviation

    \sigma_z=\sqrt{\sigma_x^2+\sigma_y^2}. \qquad (6.3.7)

    The same conclusion can be reached much more easily by the following qualitative reasoning.

    Without expanding the brackets and without performing any transformations in the integrand (6.3.3), we immediately come to the conclusion that the exponent is a quadratic trinomial with respect to x of the form Ax^2+2Bx+C, in which the quantity z does not enter the coefficient A at all, enters the coefficient B to the first power, and enters the coefficient C squared. Keeping this in mind and applying formula (6.3.4), we conclude that g(z) is an exponential function whose exponent is a quadratic trinomial with respect to z; a density of this type corresponds to the normal law. Thus we come to a purely qualitative conclusion: the distribution law of the quantity Z must be normal. To find the parameters of this law, m_z and \sigma_z, we use the theorem of addition of mathematical expectations and the theorem of addition of variances: by the theorem of addition of mathematical expectations m_z=m_x+m_y; by the theorem of addition of variances \sigma_z^2=\sigma_x^2+\sigma_y^2, from which formula (6.3.7) follows.

    Passing from standard deviations to the probable deviations proportional to them, we obtain the same addition rule for the squares of the probable deviations.

    Thus, we came to the following rule: when combining normal laws, a normal law is obtained again, and the mathematical expectations and variances (or squares of probable deviations) are summed up.
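This rule is easy to confirm by simulation (the parameter values below are arbitrary choices for the sketch): the sum of independent N(1, 2^2) and N(-3, 1.5^2) variables should have mean -2 and standard deviation \sqrt{4+2.25}=2.5:

```python
import random
import statistics

random.seed(5)

# sum of independent N(1, 2^2) and N(-3, 1.5^2)
n = 200_000
zs = [random.gauss(1, 2) + random.gauss(-3, 1.5) for _ in range(n)]
mz = statistics.mean(zs)      # expect about -2
sz = statistics.pstdev(zs)    # expect about 2.5
print(mz, sz)
```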

    The rule for the composition of normal laws can be generalized to the case of an arbitrary number of independent random variables.

    If there are n independent random variables X_1, X_2, \ldots, X_n subject to normal laws with centers of dispersion m_{x_1},\ldots,m_{x_n} and standard deviations \sigma_{x_1},\ldots,\sigma_{x_n}, then the quantity Z=X_1+X_2+\cdots+X_n is also subject to the normal law with the parameters

    m_z=\sum\limits_{i=1}^{n}m_{x_i}, \qquad (6.3.8) \qquad \sigma_z=\sqrt{\sum\limits_{i=1}^{n}\sigma_{x_i}^2}. \qquad (6.3.9)

    Instead of formula (6.3.9) one can use the equivalent formula \sigma_z^2=\sum\limits_{i=1}^{n}\sigma_{x_i}^2.

    If the system of random variables (X, Y) is distributed according to the normal law but the quantities X, Y are dependent, then it is not difficult to prove, just as before, on the basis of the general formula (6.3.1), that the distribution law of the quantity Z=X+Y is also a normal law. The centers of dispersion are still added algebraically, but for the standard deviations the rule becomes more complex: \sigma_z=\sqrt{\sigma_x^2+\sigma_y^2+2r\sigma_x\sigma_y}, where r is the correlation coefficient of the quantities X and Y.

    When several dependent random variables whose joint distribution is normal are added, the distribution law of the sum also turns out to be normal, with the parameters

    m_z=\sum\limits_{i=1}^{n}m_{x_i}, \qquad (6.3.10) \qquad \sigma_z^2=\sum\limits_{i=1}^{n}\sigma_{x_i}^2+2\sum\limits_{i<j}r_{ij}\,\sigma_{x_i}\sigma_{x_j}, \qquad (6.3.11)

    or similarly in probable deviations,

    where r_{ij} is the correlation coefficient of the quantities X_i, X_j, and the summation extends over all distinct pairwise combinations of the quantities.

    We have become convinced of a very important property of the normal law: under composition of normal laws one again obtains a normal law. This is the so-called "stability property". A distribution law is called stable if the composition of two laws of this type again yields a law of the same type. We showed above that the normal law is stable. Very few distribution laws possess the stability property. The law of uniform density is unstable: combining two laws of uniform density on the sections from 0 to 1, we obtained Simpson's law.

    The stability of the normal law is one of the essential conditions for its widespread use in practice. However, besides the normal law, certain other distribution laws also possess the stability property. A remarkable feature of the normal law is that when a sufficiently large number of practically arbitrary distribution laws are composed, the total law turns out to be as close to normal as desired, regardless of what the distribution laws of the summands were. This can be illustrated, for example, by composing three laws of uniform density on the sections from 0 to 1. The resulting distribution law g(z) is shown in Fig. 6.3.1. As the drawing shows, the graph of the function g(z) closely resembles the graph of the normal law.

    6.4. Product distribution.

    Let Y=X_1X_2, where X_1 and X_2 are scalar random variables with joint distribution density f(x_1,x_2). Let us find the distribution of Y.

    G(y)=P(X_1X_2<y). \qquad (6.4.1)

    In the figure the event \{X_1X_2<y\} is shown by shading. It is now obvious that

    G(y)=\iint\limits_{x_1x_2<y}f(x_1,x_2)\,dx_1\,dx_2, \qquad (6.4.2)

    g(y)=\int\limits_{-\infty}^{+\infty}f\!\left(x_1,\frac{y}{x_1}\right)\frac{dx_1}{|x_1|}. \qquad (6.4.3)

    6.5. Distribution of the square of a random variable.

    Let Y=X^2, where X is a continuous random variable with density f(x). Let us find g(y). If y\le 0, then G(y)=0 and g(y)=0. In the case y>0 we get:

    G(y)=P(X^2<y)=P(-\sqrt{y}<X<\sqrt{y}), \qquad (6.5.1)

    g(y)=\frac{1}{2\sqrt{y}}\bigl[f(\sqrt{y})+f(-\sqrt{y})\bigr]. \qquad (6.5.2)

    In the special case when the density f(x) is even, we have:

    g(y)=\frac{1}{\sqrt{y}}\,f(\sqrt{y}). \qquad (6.5.3)

    If, in particular, X is distributed according to the standard normal law, then

    g(y)=\frac{1}{\sqrt{2\pi y}}\,e^{-y/2}. \qquad (6.5.4)

    6.6. Distribution of a quotient.

    Let Y=X_1/X_2, where (X_1, X_2) is a pair of continuous random variables with joint density f(x_1,x_2). Let us find g(y).

    G(y)=P\!\left(\frac{X_1}{X_2}<y\right). \qquad (6.6.1)

    In Fig. 6.6.1 it is seen that the event \{X_1/X_2<y\} is represented by the shaded regions. Therefore

    G(y)=\iint\limits_{x_1/x_2<y}f(x_1,x_2)\,dx_1\,dx_2, \qquad (6.6.2)

    g(y)=\int\limits_{-\infty}^{+\infty}|x_2|\,f(yx_2,x_2)\,dx_2. \qquad (6.6.3)

    If X_1 and X_2 are independent and each is distributed according to the standard normal law, it is easy to obtain:

    g(y)=\frac{1}{\pi(1+y^2)}. \qquad (6.6.4)

    Distribution (6.6.4) is named after Cauchy. It turns out that this distribution has neither mathematical expectation nor variance.
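A quick simulation illustrates this (our own check, not part of the text): for the standard Cauchy density 1/(\pi(1+y^2)) we have P(|Y|\le 1)=1/2, which the ratio of two independent standard normals reproduces:

```python
import random

random.seed(6)

# ratio of two independent standard normals follows the Cauchy law
# g(y) = 1 / (pi * (1 + y^2)); for it, P(|Y| <= 1) = 1/2
n = 200_000
frac = sum(abs(random.gauss(0, 1) / random.gauss(0, 1)) <= 1
           for _ in range(n)) / n
print(frac)
```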

    6.7. Numerical characteristics of functions of random variables.

    Consider the following problem: the random variable Y is a function of several random variables X_1, X_2, \ldots, X_n:

    Y=\varphi(X_1,X_2,\ldots,X_n). \qquad (6.7.1)

    Let the distribution law of the system of arguments be known; we need to find the numerical characteristics of the quantity Y, first of all the mathematical expectation and the variance.

    Let us imagine that we have managed to find the distribution law g(y) of the quantity Y. Then the task of determining the numerical characteristics becomes simple; they are found by the formulas:

    m_y=\int\limits_{-\infty}^{+\infty}y\,g(y)\,dy, \qquad (6.7.2) \qquad D_y=\int\limits_{-\infty}^{+\infty}(y-m_y)^2\,g(y)\,dy. \qquad (6.7.3)

    However, the task of finding the distribution law g(y) of the quantity Y often turns out to be quite difficult. For our purposes, finding the distribution law of the quantity Y is not necessary: to find only the numerical characteristics of the quantity Y there is no need to know its distribution law; it is enough to know the distribution law of the arguments.

    Thus, the problem arises of determining the numerical characteristics of functions of random variables without determining the laws of distribution of these functions.

    Let us consider the problem of determining the numerical characteristics of a function for a given argument distribution law. Let's start with the simplest case - a function of one argument.

    There is a random variable X with a given distribution law; another random variable Y is connected with X by the functional dependence Y=\varphi(X).

    It is required, without finding the distribution law of the quantity Y, to determine its mathematical expectation:

    M(Y)=M[\varphi(X)]. \qquad (6.7.4)

    Let us first consider the case when X is a discrete random variable with the distribution series:

    \begin{array}{|c|c|c|c|c|}\hline x_i&x_1&x_2&\cdots&x_n\\\hline p_i&p_1&p_2&\cdots&p_n\\\hline\end{array}

    Let us write down in the form of a table the possible values of the quantity Y and the probabilities of these values:

    \begin{array}{|c|c|c|c|c|}\hline \varphi(x_i)&\varphi(x_1)&\varphi(x_2)&\cdots&\varphi(x_n)\\\hline p_i&p_1&p_2&\cdots&p_n\\\hline\end{array}

    Table 6.7.2 is not a distribution series of the quantity Y, since in the general case some of the values

    \varphi(x_1),\ \varphi(x_2),\ \ldots,\ \varphi(x_n) \qquad (6.7.5)

    may coincide with each other. In order to pass from table 6.7.2 to a true distribution series of the quantity Y, it would be necessary to arrange the values (6.7.5) in ascending order, combine the columns corresponding to equal values of Y, and add the corresponding probabilities. The mathematical expectation of the quantity Y can be determined by the formula

    M[\varphi(X)]=\sum\limits_{i=1}^{n}\varphi(x_i)\,p_i. \qquad (6.7.6)

    Obviously, the quantity m_y=M[\varphi(X)] determined by formula (6.7.6) cannot change if, under the summation sign, some terms are combined in advance and the order of the terms is changed.

    Formula (6.7.6) for the mathematical expectation of a function does not explicitly contain the distribution law of the function itself, but only the distribution law of the argument. Thus, to determine the mathematical expectation of a function it is not at all required to know the distribution law of this function; it is enough to know the distribution law of the argument.

    Replacing the sum in formula (6.7.6) by an integral, and the probability p_i by the probability element f(x)\,dx, we obtain a similar formula for a continuous random variable:

    M[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\varphi(x)\,f(x)\,dx, \qquad (6.7.7)

    where f(x) is the distribution density of the quantity X.

    The mathematical expectation of a function \varphi(X, Y) of two random arguments X and Y can be determined similarly. For discrete quantities

    M[\varphi(X,Y)]=\sum\limits_{i}\sum\limits_{j}\varphi(x_i,y_j)\,p_{ij}, \qquad (6.7.8)

    where p_{ij} is the probability that the system (X, Y) takes the values (x_i, y_j). For continuous quantities

    M[\varphi(X,Y)]=\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty}\varphi(x,y)\,f(x,y)\,dx\,dy, \qquad (6.7.9)

    where f(x, y) is the distribution density of the system (X, Y).

    The mathematical expectation of a function from an arbitrary number of random arguments is determined similarly. We present the corresponding formula only for continuous quantities:

    M[\varphi(X_1,\ldots,X_n)]=\int\limits_{-\infty}^{+\infty}\!\!\cdots\!\!\int\limits_{-\infty}^{+\infty}\varphi(x_1,\ldots,x_n)\,f(x_1,\ldots,x_n)\,dx_1\ldots dx_n, \qquad (6.7.10)

    where f(x_1,\ldots,x_n) is the distribution density of the system.

    Formulas like (6.7.10) are very often encountered in the practical application of probability theory, when it comes to averaging any quantities that depend on a number of random arguments.

    Thus, the mathematical expectation of a function of any number of random arguments can be found in addition to the distribution law of the function. Similarly, other numerical characteristics of a function can be found - moments of various orders. Since each moment represents the mathematical expectation of a certain function of the random variable under study, the calculation of any moment can be carried out using techniques completely similar to those outlined above. Here we present calculation formulas only for the variance, and only for the case of continuous random arguments.

    The variance of a function of one random argument is expressed by the formula

    D[\varphi(X)]=\int\limits_{-\infty}^{+\infty}\bigl(\varphi(x)-m\bigr)^2 f(x)\,dx, \qquad (6.7.11)

    where m=M[\varphi(X)] is the mathematical expectation of the function \varphi(X) and f(x) is the distribution density of the quantity X.

    The variance of a function of two random arguments is expressed similarly:

    D[\varphi(X,Y)]=\int\limits_{-\infty}^{+\infty}\int\limits_{-\infty}^{+\infty}\bigl(\varphi(x,y)-m\bigr)^2 f(x,y)\,dx\,dy, \qquad (6.7.12)

    where m is the mathematical expectation of the function \varphi(X, Y) and f(x, y) is the distribution density of the system (X, Y). In the case of an arbitrary number of random arguments an analogous formula holds, in similar notation.

    In the previous chapter we became acquainted with methods for determining the numerical characteristics of functions of random variables and showed that to find them it is not necessary to know the distribution laws of these functions; it is enough to know the distribution laws of the arguments. As was shown, in many cases of engineering practice, when finding the numerical characteristics of functions of random variables one can do even without the distribution laws of the arguments: it is enough to know only the numerical characteristics of these arguments.

    However, in engineering applications the problem of determining the distribution laws of functions of random variables often arises. This is usually required when determining the probability of these functions falling into various regions of their possible values.

    At this point we will solve the following problem. There is a continuous random variable X with density f(x); the random variable Y is expressed through X by the functional dependence

    Y=\varphi(X).

    It is required to find the distribution law of the random variable Y.

    Let us first consider the case when the function \varphi(x) is strictly monotone, continuous and differentiable on the interval (a, b) of all possible values of the random variable X. The distribution function G(y) of the random variable Y is determined by the formula

    G(y)=P(Y<y)=P\bigl(\varphi(X)<y\bigr).

    If the function \varphi(x) increases monotonically over the entire range of possible values of the random variable X (Fig. 9.1.1), then the event (Y<y) is equivalent to the event (X<\psi(y)), where x=\psi(y) is the function inverse to \varphi(x); hence

    G(y)=P\bigl(X<\psi(y)\bigr)=\int\limits_{a}^{\psi(y)}f(x)\,dx. \qquad (9.1.3)

    Differentiating this expression with respect to the quantity y entering the upper limit of the integral, we obtain the distribution density of the random variable Y:

    g(y)=f\bigl(\psi(y)\bigr)\,\psi'(y). \qquad (9.1.4)

    If the function \varphi(x) decreases monotonically on the section (a, b) of possible values of the random variable X (Fig. 9.1.2), then the event (Y<y) is equivalent to the event (X>\psi(y)). Hence,

    G(y)=P\bigl(X>\psi(y)\bigr)=\int\limits_{\psi(y)}^{b}f(x)\,dx. \qquad (9.1.5)

    Fig. 9.1.1

    Differentiating G(y) with respect to the variable y entering the lower limit, we obtain the distribution density of the random variable Y:

    g(y)=-f\bigl(\psi(y)\bigr)\,\psi'(y). \qquad (9.1.6)

    Since a density cannot be negative, formulas (9.1.4) and (9.1.6) can be combined into one:

    g(y)=f\bigl(\psi(y)\bigr)\,\bigl|\psi'(y)\bigr|. \qquad (9.1.7)

    ¹ In formulas (9.1.3) and (9.1.5) the range of possible values of the random variable X can be (−∞, ∞), i.e. a = −∞, b = ∞.

    Problem 1. The distribution law of a linear function of one random argument. A special case of a monotone function is the linear function y = ax + b, where a and b are non-random quantities. Let Y be a linear function of a continuous random variable X with density f(x):

    Y=aX+b.


    Let us find, using formula (9.1.7), the distribution density g(y) of the random variable Y. In this case the inverse function is ψ(y) = (y − b)/a; its derivative is ψ'(y) = 1/a; the modulus of the derivative is 1/|a|. Formula (9.1.7) gives

    g(y)=\frac{1}{|a|}\,f\!\left(\frac{y-b}{a}\right).\qquad(9.1.8)
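    As a quick numerical check of formula (9.1.8), the following sketch (all parameters are assumed for illustration, not taken from the text) compares a histogram of simulated values of Y = aX + b with the density the formula predicts:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed illustrative setup: X ~ Exponential(lam), Y = a*X + b
    lam, a, b = 1.0, -3.0, 2.0
    x = rng.exponential(1.0 / lam, size=200_000)
    y = a * x + b

    def f(t):
        # density of the argument X
        return np.where(t > 0, lam * np.exp(-lam * t), 0.0)

    def g(t):
        # formula (9.1.8): g(y) = f((y - b)/a) / |a|
        return f((t - b) / a) / abs(a)

    # Empirical density of Y from a histogram vs. the formula
    hist, edges = np.histogram(y, bins=100, range=(-10.0, 2.0))
    width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    est = hist / (len(y) * width)
    max_err = float(np.max(np.abs(est - g(centers))))
    print(round(max_err, 2))
    ```

    With 200 000 samples the histogram agrees with (9.1.8) to within sampling noise.
    
    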

    Example 1. The random variable X is distributed exponentially:

    f(x)=\lambda e^{-\lambda x}\quad(x>0).\qquad(*)

    The random variable Y is expressed linearly through X:

    Y=2-3X.


    Solution. In this case the inverse function is ψ(y) = (2 − y)/3, and the modulus of its derivative is 1/3. Since y = 2 − 3x, the condition x > 0 in formula (*) turns into the condition y < 2, and formula (9.1.8) gives

    g(y)=\frac{\lambda}{3}\,e^{-\lambda(2-y)/3}\quad(y<2).

    The graph of the density g(y) is shown in Fig. 9.1.3.

    Example 2. Find the distribution density of the linear function Y = aX + b of a normally distributed argument X with characteristics m_x and σ_x.

    Solution. According to formula (9.1.7) we have

    and this is a normal law with characteristics m_y = a·m_x + b, D_y = a²σ_x², σ_y = |a|σ_x. Thus, as a result of a linear transformation of a normally distributed random variable X we obtain a random variable Y also distributed according to the normal law.
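    The conclusion of Example 2 can be checked by simulation; the numbers below are assumed purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed parameters: X ~ N(m_x, sigma_x), Y = a*X + b
    m_x, sigma_x, a, b = 5.0, 2.0, -4.0, 3.0
    x = rng.normal(m_x, sigma_x, size=500_000)
    y = a * x + b

    # Characteristics predicted by the text
    m_y_theory = a * m_x + b            # a*m_x + b
    sigma_y_theory = abs(a) * sigma_x   # |a|*sigma_x
    print(round(y.mean(), 1), round(y.std(), 1))
    ```

    The sample mean and standard deviation of Y match a·m_x + b and |a|·σ_x to within Monte Carlo error.
    
    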

    Example 3. A continuous random variable X is distributed according to the Cauchy law in its simplest (canonical) form:

    f(x)=\frac{1}{\pi(1+x^2)}.

    The random variable Y is related to it by the dependence:

    Y=1-X^3.

    Find the distribution density of the random variable Y.

    Solution. Since the function y = 1 − x³ is monotonic (monotonically decreasing) over the entire interval (−∞, ∞), we apply formula (9.1.7). We write the solution in the form of two columns: on the left are the function designations adopted in the general solution of the problem; on the right, the specific functions corresponding to this example.


    Example 4. The random variable X is distributed according to the same Cauchy law f(x) = 1/[π(1 + x²)]; the random variable Y is its reciprocal:

    Y=1/X.

    Find its density g(y).

    Solution. The graph of the function y = 1/x is shown in Fig. 9.1.4. This function has a discontinuity of the second kind (a jump from −∞ to +∞) at x = 0; but the inverse function x = 1/y is single-valued, so we can use the same formula (9.1.7) that was derived for a monotonic function. Let us again arrange the solution in two columns (on the left, the general case; on the right, the particular case):


    i.e. the reciprocal Y = 1/X, like X itself, has a Cauchy distribution.
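    A small simulation (a sketch, not part of the original example) confirms this Cauchy invariance: the empirical CDF of 1/X matches the Cauchy CDF F(t) = 1/2 + arctan(t)/π.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Sample the canonical Cauchy law via its inverse CDF, then take reciprocals
    u = rng.uniform(size=400_000)
    x = np.tan(np.pi * (u - 0.5))   # canonical Cauchy sample
    y = 1.0 / x

    cauchy_cdf = lambda t: 0.5 + np.arctan(t) / np.pi
    pts = np.array([-3.0, -1.0, -0.3, 0.0, 0.3, 1.0, 3.0])
    emp = np.array([(y <= t).mean() for t in pts])
    max_err = float(np.max(np.abs(emp - cauchy_cdf(pts))))
    print(round(max_err, 3))
    ```

    The largest CDF discrepancy stays at the level of sampling noise, as the example predicts.
    
    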

    Example 5. The speed X of a molecule is distributed according to the Rayleigh law with parameter σ:

    f(x)=\frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)}\quad(x>0).

    The amount of energy Y released upon the collision of molecules is determined by the formula

    Find the distribution density of the random variable Y.

    Solution. For x > 0 the function φ(x) is monotonic. The solution of the example is again arranged in the form of two columns (on the left, the general case; on the right, the particular case):


    Example 6. The radius X of a circle is distributed according to the Rayleigh law with parameter σ:

    f(x)=\frac{x}{\sigma^2}e^{-x^2/(2\sigma^2)}\quad(x>0).

    Find the distribution law of the random variable Y, the area of the circle.

    Solution. The random variable Y = πX² is a monotonic function of X for X > 0; the inverse function is ψ(y) = (y/π)^{1/2}, and |ψ'(y)| = 1/(2√(πy)), whence

    g(y)=f(\psi(y))\,|\psi'(y)|=\frac{1}{2\pi\sigma^2}\,e^{-y/(2\pi\sigma^2)}\quad(y>0);

    therefore the random variable Y has an exponential distribution law with parameter 1/(2πσ²).
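    The result of Example 6 can be verified numerically; the value of σ below is assumed for illustration. An exponential law with parameter 1/(2πσ²) has mean 2πσ² and survival function e^{−t/mean}:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Assumed parameter: radius X ~ Rayleigh(sigma); area Y = pi * X**2
    sigma = 1.5
    x = rng.rayleigh(sigma, size=300_000)
    y = np.pi * x**2

    # Exponential with parameter 1/(2*pi*sigma^2) has mean 2*pi*sigma^2
    mean_theory = 2 * np.pi * sigma**2
    ratio = y.mean() / mean_theory
    # survival function of Y should be exp(-t) at t multiples of the mean
    tail_err = max(abs((y > t * mean_theory).mean() - np.exp(-t))
                   for t in (0.5, 1.0, 2.0))
    print(round(ratio, 2), round(tail_err, 3))
    ```

    Both the mean and the tail probabilities agree with the exponential law derived above.
    
    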

    Example 7. Through a point A lying on the axis Oy, a straight line ab is drawn at an angle Θ to the axis Oy (see Fig. 9.1.5). The angle Θ is uniformly distributed in the interval (−π/2, +π/2). Find the distribution law of the random variable Y, the abscissa of the point of intersection of the line ab with the axis Ox.


    Example 8. The voltage X is distributed according to the normal law with parameters m_x and σ_x; the stabilized voltage Y is determined by the formula

    Solution. The random variable Y is mixed:

    where Φ(x) is the Laplace function. The distribution function of the random variable Y has the form:


    Fig. 9.1.6 shows the graph of G(y). In general, if the distribution function of the random variable X is F(x), then


    Example 9. The voltage stabilizer works in such a way that it limits the voltage from above:

    Find the distribution function of the random variable Y, if the distribution function F(x) of the random variable X is given.

    Solution. By analogy with the solution to the previous example, we obtain

    Example 10. The voltage stabilizer works in such a way that it limits the voltage from below:


    Find the distribution function of the random variable Y, if F(x), the distribution function of the random variable X, is given.

    Solution. In accordance with the solution of Example 8, we obtain

    Let us now consider the case when the function y = φ(x) is not monotonic on the interval (a, b) of possible values of the random variable X (Fig. 9.1.7). In this case the inverse function x = ψ(y) is multivalued.



    The number of values of the inverse function ψ(y) depends on which value of y we take; denote these values ψ₁(y), ψ₂(y), …, ψᵢ(y), …. The event {Y < y} is equivalent to the random variable X falling into one of the non-overlapping segments marked with a thick line in Fig. 9.1.7, where the corresponding part of the curve y = φ(x) lies below the straight line Y = y; in our case these segments are: from a to ψ₁(y); from ψ₂(y) to ψ₃(y); from ψ₄(y) to ψ₅(y); and so on. The last segment may end at the point b, or at one of the points ψᵢ(y) (this is unimportant). The events that the point X falls into these segments are incompatible; by the rule of addition of probabilities

    G(y)=P\{a<X<\psi_1(y)\}+P\{\psi_2(y)<X<\psi_3(y)\}+\ldots,

    where each probability is expressed as an integral of f(x) over the corresponding segment.

    Taking into account the rule for differentiating an integral with respect to a variable appearing in its limits (namely: such a derivative equals the value of the integrand at the upper limit multiplied by the derivative of the upper limit, minus the value of the integrand at the lower limit multiplied by the derivative of the lower limit), we obtain in our case:

    g(y)=f(\psi_1(y))\,\psi_1'(y)-f(\psi_2(y))\,\psi_2'(y)+f(\psi_3(y))\,\psi_3'(y)-\ldots.\qquad(9.1.11)


    At the points where φ(x), crossing the line Y = y, decreases (the beginnings of the corresponding segments of the x-axis), the derivative ψᵢ'(y) is negative and enters the sum (9.1.11) with a minus sign; at the points where φ(x) increases (the ends of the segments), ψᵢ'(y) enters with a plus sign. The derivatives of the constants a and b are equal to zero, so it does not matter whether the points a and b appear as the beginning or the end of a segment. Thus all terms in formula (9.1.11) are positive, and it takes a very simple form:

    g(y)=\sum_{i=1}^{k}f(\psi_i(y))\,|\psi_i'(y)|,\qquad(9.1.12)

    where k is the number of values of the inverse function corresponding to the given y, and ψ₁(y), ψ₂(y), …, ψₖ(y) are those values of the inverse function.

    Problem 2. The distribution law of the modulus of a random variable. The problem is stated as follows: a continuous random variable X with density f(x) on (−∞, +∞) is given; the random variable Y is related to it by the relation

    Y=|X|.

    Find the distribution density of the random variable Y.

    Solution. The function y = |x| is not monotonic; its graph is shown in Fig. 9.1.8. For a given y the inverse function has two values: ψ₁(y) = −y, ψ₂(y) = y. Using formula (9.1.12) we obtain:

    g(y)=f(-y)+f(y)\quad(y>0)\qquad(9.1.13)

    (the random variable Y cannot be negative). In particular, if the density f(x) is symmetric about the origin, i.e. f(−x) = f(x), formula (9.1.13) gives:

    g(y)=2f(y)\quad(y>0).\qquad(9.1.14)
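    For a concrete instance of (9.1.14), the sketch below takes X standard normal (an assumed choice, not from the text) and checks g(y) = 2f(y) against a histogram of |X|:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Assumed symmetric density: X ~ N(0, 1), Y = |X|, so g(y) = 2*f(y) for y > 0
    x = rng.normal(0.0, 1.0, size=300_000)
    y = np.abs(x)

    f = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
    hist, edges = np.histogram(y, bins=60, range=(0.0, 3.0))
    width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    est = hist / (len(y) * width)        # empirical density of Y
    max_err = float(np.max(np.abs(est - 2 * f(centers))))
    print(round(max_err, 2))
    ```

    The histogram of |X| follows the half-normal density 2f(y) within sampling noise.
    
    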

    Problem 3. The distribution law of the square of a random variable. Let a continuous random variable X have density f(x); find the distribution density of its square Y = X².

    Solution. The function y = x² is not monotonic (Fig. 9.1.9); ψ₁(y) = −√y, ψ₂(y) = √y. Formula (9.1.12) gives

    g(y)=\bigl[f(-\sqrt{y}\,)+f(\sqrt{y}\,)\bigr]\frac{1}{2\sqrt{y}}\quad(y>0).

    In the special case when the random variable X has a normal distribution with parameters m_x = 0, σ_x = 1, i.e. f(x) = e^{−x²/2}/√(2π), the random variable Y = X² has the distribution

    g(y)=\frac{1}{\sqrt{2\pi y}}\,e^{-y/2}\quad(y>0).

    The curve of this distribution is shown in Fig. 9.1.10.
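    This density is the chi-square law with one degree of freedom, and it can be checked against simulation; the sketch below is illustrative (it avoids the integrable singularity at y = 0 by comparing away from the origin):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    x = rng.normal(size=400_000)
    y = x**2                     # square of a standard normal variable

    # Density derived in the text: g(y) = exp(-y/2) / sqrt(2*pi*y)
    g = lambda t: np.exp(-t / 2) / np.sqrt(2 * np.pi * t)

    hist, edges = np.histogram(y, bins=40, range=(0.5, 6.0))
    width = edges[1] - edges[0]
    centers = 0.5 * (edges[:-1] + edges[1:])
    est = hist / (len(y) * width)
    max_err = float(np.max(np.abs(est - g(centers))))
    print(round(max_err, 2))
    ```

    The empirical density of X² matches g(y) on the compared interval.
    
    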



    So far we have considered only the case when the argument of the function Y = φ(X) is a continuous random variable. Let us now consider the case, simpler in essence but more cumbersome in notation, when the argument X is a discrete random variable with distribution series:

    \begin{array}{|c|c|c|c|c|}\hline X&x_1&x_2&\cdots&x_n\\\hline P&p_1&p_2&\cdots&p_n\\\hline\end{array}

    Some “semblance” of the distribution series of the random variable Y is given by the table:

    \begin{array}{|c|c|c|c|c|}\hline Y&\varphi(x_1)&\varphi(x_2)&\cdots&\varphi(x_n)\\\hline P&p_1&p_2&\cdots&p_n\\\hline\end{array}

    To turn it into a distribution series, one must, firstly, arrange the values in the top row in ascending order and, secondly, combine those of them that turn out to be equal (owing to the multivaluedness of the inverse function), adding the corresponding probabilities. The series obtained in this way will be the distribution series of the random variable Y.

    Example 11. A discrete random variable X has a distribution series:

    Construct the distribution series of its square Y = X².

    Solution. The “unordered” distribution series has the form:

    Arranging the values of the random variable Y in ascending order, combining equal values and adding their probabilities, we obtain the distribution series of the random variable Y.
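    The recipe above — merge coinciding values, add their probabilities, sort ascending — can be sketched in a few lines. The series for X below is an assumed example, since the book's table is not reproduced here:

    ```python
    from collections import defaultdict

    def series_of_function(xs, ps, phi):
        """Distribution series of Y = phi(X) from the series of a discrete X:
        merge coinciding values (adding probabilities) and sort ascending."""
        acc = defaultdict(float)
        for xk, pk in zip(xs, ps):
            acc[phi(xk)] += pk
        return sorted(acc.items())

    # Assumed illustrative series for X
    xs = [-2, -1, 0, 1, 2]
    ps = [0.1, 0.2, 0.3, 0.25, 0.15]
    series = series_of_function(xs, ps, lambda t: t**2)
    for value, prob in series:
        print(value, round(prob, 2))
    # prints: 0 0.3 / 1 0.45 / 4 0.25 (values 1 and 4 each occur twice)
    ```
    
    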

    Example 12. The number X of faults on a section of a high-voltage line during a year has a Poisson distribution with parameter a. The total material damage from these faults is proportional to the square of their number:

    Y=cX^2,

    where c > 0 is a non-random value. Find the law of distribution of this damage.

    Solution. The distribution series of X has the form:

    Since the values of Y increase with the values of X and there are no coincidences among them (the inverse function on the set of values 0, 1, 2, … is single-valued), the distribution series of Y has the form:


    PART 6

    FUNCTIONS OF RANDOM VARIABLES

    Lecture 11

      1. DISTRIBUTION LAW AND NUMERICAL CHARACTERISTICS OF FUNCTIONS OF RANDOM VARIABLES

    PURPOSE OF THE LECTURE: to introduce the concept of a function of a random variable and to classify the problems arising in this case; derive the law of distribution of a function of one random argument and the law of distribution of the sum of two random variables; explain the concept of composition of distribution laws.

    The concept of a function of a random variable

    Among the practical applications of probability theory, a special place is occupied by problems that require finding the distribution laws and/or numerical characteristics of functions of random variables. In the simplest case the problem is posed as follows: a random input X arrives at a technical device; the device subjects it to some functional transformation φ and produces at the output the random variable Y = φ(X) (see Fig. 6.1). We know the distribution law of the random variable X, and it is required to find the distribution law and/or the numerical characteristics of the random variable Y.

    Three main problems can be identified:

    1. Knowing the distribution law of the random variable X (or of the random vector (X₁, X₂, …, Xₙ)), find the distribution law of the output random variable Y.

    2. Knowing the distribution law of the random variable X, find only the numerical characteristics of the output random variable.

    3. In some cases (for special types of the transformation φ), in order to find the numerical characteristics of the output it is not necessary to know the distribution law of the input random variable X; it is enough to know only its numerical characteristics.

    Consider a random variable Y that depends functionally on the random variable X, i.e. Y = φ(X). Let the random variable X be discrete with known distribution series:

    \begin{array}{|c|c|c|c|c|}\hline X&x_1&x_2&\cdots&x_n\\\hline P&p_1&p_2&\cdots&p_n\\\hline\end{array}

    where p₁ + p₂ + … + pₙ = 1.

    When the value xᵢ of the random variable X arrives at the input, the output takes the value yᵢ = φ(xᵢ) with probability pᵢ; and so on for all possible values of the random variable X. Thus we obtain Table 6.1.

    Table 6.1


    In the general case the resulting Table 6.1 may not be a distribution series of the random variable Y, since the values in its top row may fail to be arranged in ascending order, and some of them may even coincide.

    To convert Table 6.1 into the distribution series of the random variable Y, it is necessary to arrange the possible values φ(xᵢ) in ascending order, and the probabilities of coinciding values must be added.

    To find the numerical characteristics of the random variable Y there is no need to transform Table 6.1 into a distribution series, since they can be calculated directly from it. Indeed, forming the sum of the products of the possible values of the random variable Y by their probabilities, we obtain

    M[\varphi(X)]=\sum_{i=1}^{n}\varphi(x_i)\,p_i.\qquad(6.1)

    Thus, knowing only the distribution law of the argument X, one can find the mathematical expectation of a function of the random variable.

    Similarly we find the variance of the random variable Y:

    D[\varphi(X)]=\sum_{i=1}^{n}\bigl(\varphi(x_i)-M[\varphi(X)]\bigr)^2 p_i.

    Similarly we determine the initial and central moments of any order of the random variable Y:

    \alpha_k[Y]=\sum_{i=1}^{n}\bigl(\varphi(x_i)\bigr)^k p_i;\qquad \mu_k[Y]=\sum_{i=1}^{n}\bigl(\varphi(x_i)-M[\varphi(X)]\bigr)^k p_i.

    For a continuous random variable X with distribution density f(x) we obtain

    M[\varphi(X)]=\int_{-\infty}^{\infty}\varphi(x)f(x)\,dx;

    D[\varphi(X)]=\int_{-\infty}^{\infty}\bigl(\varphi(x)-M[\varphi(X)]\bigr)^2 f(x)\,dx.

    We see that to find the numerical characteristics of the function Y = φ(X) there is no need to know its distribution law; knowledge of the distribution law of the argument X is enough.

    Theorems on numerical characteristics of functions of random variables

    In some problems the numerical characteristics of a system of random variables (Y₁, Y₂, …) can be defined as functions of the numerical characteristics of a system of random variables (X₁, X₂, …). In this case knowledge of the distribution law of the arguments, for example the joint distribution density, is not even required; it is enough to have only the numerical characteristics of this system of random variables. To solve such problems, the following theorems on the numerical characteristics of functions of random variables are formulated:

    1. M[c] = c;    3. D[c] = 0;

    2. M[cX] = cM[X];    4. D[cX] = c²D[X],

    where c is a non-random quantity.

    5. M[X + Y] = M[X] + M[Y], for any number of terms, both independent and dependent, correlated and uncorrelated.

    6. The mathematical expectation of a linear combination of random variables is equal to the same linear function of the mathematical expectations of the random variables under consideration:

    M\Bigl[\sum_i a_iX_i+b\Bigr]=\sum_i a_i\,M[X_i]+b.

    7. The variance of a sum of random variables is equal to the sum of all the elements of the correlation matrix K_{ij} of these random variables:

    D\Bigl[\sum_i X_i\Bigr]=\sum_i\sum_j K_{ij}.

    Since the correlation matrix is symmetric with respect to the main diagonal, on which the variances lie, we rewrite the last formula in the form

    D\Bigl[\sum_i X_i\Bigr]=\sum_i D[X_i]+2\sum_{i<j}K_{ij}.

    If the random variables X₁, X₂, …, Xₙ are uncorrelated, then the theorem on the addition of variances holds:

    D\Bigl[\sum_i X_i\Bigr]=\sum_i D[X_i].

    8. The variance of a linear function of random variables is determined by the formula

    D\Bigl[\sum_i a_iX_i+b\Bigr]=\sum_i a_i^2\,D[X_i]+2\sum_{i<j}a_ia_j\,K_{ij}.

    9. The mathematical expectation of the product of two random variables is equal to the product of their mathematical expectations plus the covariance:

    M[XY]=M[X]\,M[Y]+K_{xy}.

    The mathematical expectation of the product of two uncorrelated random variables is equal to the product of their mathematical expectations:

    M[XY]=M[X]\,M[Y].

    10. The variance of the product of independent random variables X and Y is expressed by the formula

    D[XY]=D[X]\,D[Y]+\bigl(M[X]\bigr)^2 D[Y]+\bigl(M[Y]\bigr)^2 D[X].

    If the random variables X and Y are independent and centered, we obtain

    D[XY]=D[X]\,D[Y].
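    Theorem 10 lends itself to a quick Monte Carlo check; all parameters below are assumed for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Assumed parameters for independent X, Y:
    # theorem 10 states D[XY] = Dx*Dy + Mx^2*Dy + My^2*Dx
    mx, my, sx, sy = 2.0, -1.0, 0.5, 1.5
    x = rng.normal(mx, sx, size=1_000_000)
    y = rng.normal(my, sy, size=1_000_000)

    d_theory = sx**2 * sy**2 + mx**2 * sy**2 + my**2 * sx**2
    d_empir = np.var(x * y)
    print(round(d_empir / d_theory, 2))
    ```

    The sample variance of the product agrees with the formula to within Monte Carlo error.
    
    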

    The distribution law of a function of a random argument

    Let a continuous random variable X with distribution density f(x) be related to a random variable Y by the functional dependence Y = φ(X). It is required to find the distribution law of the random variable Y.

    Let us consider the case when φ(x) is strictly monotone, continuous and differentiable on the interval (a, b) of all possible values of the random variable X.

    The distribution function G(y) of the random variable Y is by definition G(y) = P{Y < y}. If the function φ(x) monotonically increases over the entire range of possible values of the random variable X, then the event {Y < y} is equivalent to the event {X < ψ(y)}, where ψ(y) is the function inverse to φ(x). When the random variable X takes values on the interval (a, b), the random point (X, Y) moves along the curve y = φ(x) (the ordinate is completely determined by the abscissa) (see Fig. 6.2). From the strict monotonicity of φ(x) the monotonicity of ψ(y) follows, and therefore the distribution function of the random variable Y can be written as follows:

    G(y)=\int_{a}^{\psi(y)}f(x)\,dx.

    Differentiating this expression with respect to y, which enters the upper limit of the integral, we obtain the distribution density of the random variable Y in the form

    g(y)=f(\psi(y))\,\psi'(y).\qquad(6.2)

    If the function φ(x) monotonically decreases on the interval (a, b) of possible values of the random variable X, then, carrying out similar calculations, we obtain

    g(y)=-f(\psi(y))\,\psi'(y).\qquad(6.3)

    The range of possible values of the random variable X in expressions (6.2) and (6.3) can extend from −∞ to +∞.

    Since the distribution density cannot be negative, formulas (6.2) and (6.3) can be combined into one:

    g(y)=f(\psi(y))\,|\psi'(y)|.\qquad(6.4)

    Example. Let the function of the random variable X be linear, i.e. Y = aX + b, where a ≠ 0. The continuous random variable X has distribution density f(x), and then, using expression (6.4), we find the distribution law of Y, taking into account that the inverse function is ψ(y) = (y − b)/a and the modulus of its derivative is 1/|a|:

    g(y)=\frac{1}{|a|}\,f\!\left(\frac{y-b}{a}\right).\qquad(6.5)

    If the random variable X has a normal distribution

    f(x)=\frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-(x-m_x)^2/(2\sigma_x^2)},

    then according to (6.5) we obtain

    g(y)=\frac{1}{|a|\,\sigma_x\sqrt{2\pi}}\,e^{-(y-b-am_x)^2/(2a^2\sigma_x^2)}.

    This is again a normal distribution law with mathematical expectation m_y = a·m_x + b, variance D_y = a²σ_x², and standard deviation σ_y = |a|σ_x.

    As a result of a linear transformation of a normally distributed random variable X, we obtain a random variable Y that is also distributed according to the normal law.

    The law of distribution of the sum of two random variables. Composition of distribution laws

    Let there be a system of two continuous random variables (X, Y) with known joint distribution density f(x, y), and let the random variable Z = X + Y be their sum. It is required to find the distribution law of the random variable Z.

    The distribution function G(z) is the probability of falling into the region D of the plane xOy where the inequality x + y < z holds (see Fig. 6.3), i.e.

    G(z)=\iint_{x+y<z}f(x,y)\,dx\,dy=\int_{-\infty}^{\infty}dx\int_{-\infty}^{z-x}f(x,y)\,dy.

    Differentiating this expression with respect to z, we obtain the probability distribution density of the random variable Z:

    g(z)=\int_{-\infty}^{\infty}f(x,z-x)\,dx.

    Taking into account the symmetry of the terms, we can write the similar relation

    g(z)=\int_{-\infty}^{\infty}f(z-y,y)\,dy.

    If the random variables X and Y are independent, i.e. the equality f(x, y) = f₁(x)f₂(y) holds, then the last two formulas take the form:

    g(z)=\int_{-\infty}^{\infty}f_1(x)\,f_2(z-x)\,dx;\qquad(6.6)

    g(z)=\int_{-\infty}^{\infty}f_1(z-y)\,f_2(y)\,dy.\qquad(6.7)
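    Formula (6.6) can be illustrated with the classical case of two independent uniform variables on (0, 1): their composition is the triangular density g(z) = z on [0, 1] and g(z) = 2 − z on [1, 2]. The sketch below checks this by simulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Z = X + Y for independent X, Y ~ U(0, 1); convolution of two unit boxes
    z = rng.uniform(size=500_000) + rng.uniform(size=500_000)

    # Triangular density predicted by formula (6.6)
    g = lambda t: np.where(t <= 1.0, t, 2.0 - t)
    hist, edges = np.histogram(z, bins=40, range=(0.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    max_err = float(np.max(np.abs(hist - g(centers))))
    print(round(max_err, 2))
    ```

    The histogram of the simulated sum reproduces the triangular law within sampling noise.
    
    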

    In the case when independent random variables X and Y are added, one speaks of the composition of the distribution laws. To indicate the composition of distribution laws, the symbolic notation g = f₁ * f₂ is sometimes used.

    A distribution law is called stable under composition if the composition of distribution laws of this type again yields the same law, but with different parameter values. For example, if two independent normal random variables are added, the resulting random variable has a normal distribution law, i.e. the normal law is stable under composition. Besides the normal law, the Erlang, binomial and Poisson distribution laws are stable under composition.
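    The stability of the Poisson law under composition can be verified numerically; the parameters below are assumed for illustration. The sum of independent Poisson variables with parameters a₁ and a₂ should again be Poisson, with parameter a₁ + a₂:

    ```python
    import numpy as np
    from math import exp, factorial

    rng = np.random.default_rng(8)

    # Assumed parameters of the two independent Poisson terms
    a1, a2 = 1.3, 2.1
    z = rng.poisson(a1, size=400_000) + rng.poisson(a2, size=400_000)

    # Compare empirical frequencies with the Poisson(a1 + a2) probabilities
    pmf = lambda k, a: exp(-a) * a**k / factorial(k)
    max_err = max(abs((z == k).mean() - pmf(k, a1 + a2)) for k in range(8))
    print(round(max_err, 3))
    ```

    The empirical frequencies match the Poisson(a₁ + a₂) probabilities, as the composition-stability property predicts.
    
    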
