Laws of uniform and normal distribution of systems of random variables. Regression analysis

In probability theory and its applications, the two-dimensional normal distribution plays an important role. The density of a two-dimensional normal random variable (X, Y) has the form

f(x, y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-r^2}} \exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-a_x)^2}{\sigma_x^2} - \frac{2r(x-a_x)(y-a_y)}{\sigma_x\sigma_y} + \frac{(y-a_y)^2}{\sigma_y^2}\right]\right\}.   (52)

Here a_x, a_y are the mathematical expectations of X and Y; \sigma_x, \sigma_y are the standard deviations of X and Y; r is the correlation coefficient of X and Y.

Let us assume that the random variables X and Y are uncorrelated, that is, r = 0. Then we have:

f(x, y) = \frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-(x-a_x)^2/2\sigma_x^2} \cdot \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-(y-a_y)^2/2\sigma_y^2} = f_1(x)\,f_2(y).   (53)

We found that the distribution density of the system of two random variables (X, Y) equals the product of the distribution densities of the components X and Y, which means that X and Y are independent random variables.

Thus, the following theorem has been proven: for normally distributed random variables, non-correlation implies independence. Since the independence of any random variables implies that they are uncorrelated, we conclude that for the normal distribution the terms "uncorrelated" and "independent" are equivalent.

Let us present formulas for the probability that a normally distributed two-dimensional random variable falls into various regions of the plane.

Let a random vector (X, Y) with independent components be distributed according to the normal law (53). Then the probability that the random point (X, Y) falls into a rectangle R = {a < x < b, c < y < d}, whose sides are parallel to the coordinate axes, equals

P((X, Y) \in R) = \left[\Phi\!\left(\frac{b-a_x}{\sigma_x}\right) - \Phi\!\left(\frac{a-a_x}{\sigma_x}\right)\right]\left[\Phi\!\left(\frac{d-a_y}{\sigma_y}\right) - \Phi\!\left(\frac{c-a_y}{\sigma_y}\right)\right],   (54)

where

\Phi(t) = \frac{1}{\sqrt{2\pi}}\int_0^t e^{-u^2/2}\,du

is the Laplace function. This function is tabulated.
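As a quick numerical illustration of formula (54), the Laplace function can be evaluated with `math.erf` instead of tables. The rectangle and distribution parameters below are hypothetical, chosen only for this sketch:

```python
import math

def laplace(t):
    # Laplace function: Phi(t) = (1/sqrt(2*pi)) * integral from 0 to t of exp(-u^2/2) du
    return 0.5 * math.erf(t / math.sqrt(2.0))

def prob_rectangle(a, b, c, d, ax, ay, sx, sy):
    # Formula (54): P(a < X < b, c < Y < d) for independent normal X and Y
    return ((laplace((b - ax) / sx) - laplace((a - ax) / sx)) *
            (laplace((d - ay) / sy) - laplace((c - ay) / sy)))

# Hypothetical example: standard normal X, Y and the square -1 < x, y < 1
p = prob_rectangle(-1.0, 1.0, -1.0, 1.0, 0.0, 0.0, 1.0, 1.0)
```

Each bracket in (54) then equals 2Φ(1) ≈ 0.6827, so p ≈ 0.466.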

Let the distribution density of the normal law of the system of random variables (X, Y) be given in the form (52). It is clear that this density remains constant on the ellipses

\frac{(x-a_x)^2}{\sigma_x^2} - \frac{2r(x-a_x)(y-a_y)}{\sigma_x\sigma_y} + \frac{(y-a_y)^2}{\sigma_y^2} = C^2(1-r^2),   (55)

where C is a constant; on this basis such ellipses are called ellipses of equal probability. It can be shown that the probability of the point (X, Y) falling inside an ellipse of equal probability equals

P = 1 - e^{-C^2/2}.   (56)

Example 10. The random variables X and Y are independent and normally distributed with a_x = a_y = 0, \sigma_x = \sigma_y = 1. Find the probability that a random point (X, Y) falls into the ring 4 \le x^2 + y^2 \le 9.

Solution: Since the random variables X and Y are independent, they are uncorrelated and, therefore, r = 0. Substituting into (55), we get

x^2 + y^2 = C^2,

that is, the ellipse of equal probability has degenerated into a circle of equal probability. Then, by (56),

P(4 \le X^2 + Y^2 \le 9) = e^{-2^2/2} - e^{-3^2/2} = e^{-2} - e^{-4.5} \approx 0.1242.

Answer: 0.1242.
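The answer can be checked both against the closed form (56) and by simulation; a minimal sketch, assuming (as in the example) standard normal components and ring radii 2 and 3:

```python
import math
import random

# Closed form: P(C1 <= sqrt(X^2 + Y^2) <= C2) = exp(-C1^2/2) - exp(-C2^2/2)
p_exact = math.exp(-2.0) - math.exp(-4.5)

# Monte Carlo check with standard normal X, Y
random.seed(1)
trials = 200_000
hits = sum(1 for _ in range(trials)
           if 4.0 <= random.gauss(0.0, 1.0) ** 2 + random.gauss(0.0, 1.0) ** 2 <= 9.0)
p_mc = hits / trials
```

Both values come out near 0.1242.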

3.2. General case of n-dimensional normal distribution

The normal distribution density of a system of n random variables has the form:

f(x_1, \dots, x_n) = \frac{\sqrt{|C|}}{(2\pi)^{n/2}} \exp\left\{-\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} c_{ij}(x_i - a_i)(x_j - a_j)\right\},   (57)

where |C| is the determinant of the matrix C = (c_{ij}) inverse to the covariance matrix; a_i is the mathematical expectation of the random variable X_i, the i-th component of the n-dimensional normal random vector.

From this general expression all forms of the normal law follow, for any number of dimensions and for any type of dependence between the random variables. In particular, for n = 2 the covariance matrix has the form

K = \begin{pmatrix} \sigma_x^2 & r\sigma_x\sigma_y \\ r\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix},   (58)

its determinant is |K| = \sigma_x^2\sigma_y^2(1-r^2); the matrix C, inverse to the covariance matrix, has the form

C = K^{-1} = \frac{1}{1-r^2}\begin{pmatrix} 1/\sigma_x^2 & -r/(\sigma_x\sigma_y) \\ -r/(\sigma_x\sigma_y) & 1/\sigma_y^2 \end{pmatrix}.   (59)

Substituting |C| = 1/|K| and the elements of the matrix C into the general formula (57), we obtain the formula (52) for the normal distribution on the plane.

If the random variables X_1, \dots, X_n are independent, then the distribution density of the system equals

f(x_1, \dots, x_n) = \prod_{i=1}^{n} \frac{1}{\sigma_i\sqrt{2\pi}}\,e^{-(x_i - a_i)^2/2\sigma_i^2}.   (60)

For n = 2, this formula takes the form (53).
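Formula (57) can be checked numerically: for a diagonal covariance matrix (independent components) the n-dimensional density must factor into the product (60) of one-dimensional normal densities. A sketch with assumed parameters:

```python
import numpy as np

def normal_density_ndim(x, a, K):
    # Formula (57): f(x) = sqrt(det C) / (2*pi)^(n/2) * exp(-0.5 * (x-a)^T C (x-a)),
    # where C is the inverse of the covariance matrix K
    x, a = np.asarray(x, float), np.asarray(a, float)
    C = np.linalg.inv(K)
    d = x - a
    return float(np.sqrt(np.linalg.det(C)) / (2 * np.pi) ** (len(x) / 2)
                 * np.exp(-0.5 * d @ C @ d))

# Assumed parameters: three independent components
a = np.array([0.0, 1.0, -1.0])
sig = np.array([1.0, 2.0, 0.5])
K = np.diag(sig ** 2)
x = np.array([0.3, 0.5, -0.7])

f_joint = normal_density_ndim(x, a, K)
f_prod = float(np.prod(np.exp(-(x - a) ** 2 / (2 * sig ** 2)) / (sig * np.sqrt(2 * np.pi))))
```

The two values agree to machine precision, as formula (60) predicts.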


3.3. Functions of normally distributed random variables. Chi-square, Student, and Fisher-Snedecor distributions

Let us consider the general case: a linear function of normally distributed arguments. Let an n-dimensional normally distributed random vector (X_1, \dots, X_n) be given, and let the random variable Y be a linear function of these quantities:

Y = \sum_{i=1}^{n} a_i X_i + b.   (61)

It can be shown that the random variable Y is also normally distributed, with the parameters

m_Y = \sum_{i=1}^{n} a_i m_i + b,   (62)

D_Y = \sum_{i=1}^{n} a_i^2 D_i + 2\sum_{i<j} a_i a_j r_{ij}\sigma_i\sigma_j,   (63)

where m_i is the mathematical expectation of the random variable X_i, D_i = \sigma_i^2 is its variance, and r_{ij} is the correlation coefficient between X_i and X_j.

Example 11. Write down the distribution density of a random variable Y = a_1 X_1 + a_2 X_2, where the normally distributed random variables X_1 and X_2 have given parameters m_1, \sigma_1, m_2, \sigma_2 and correlation coefficient r_{12}.

Solution. By the conditions of the problem, n = 2. Using formula (62), we obtain m_Y; using formula (63), we obtain D_Y and \sigma_Y = \sqrt{D_Y}.

Then the required distribution density of the random variable Y has the form:

f(y) = \frac{1}{\sigma_Y\sqrt{2\pi}}\,e^{-(y - m_Y)^2/2\sigma_Y^2}.
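Formulas (62) and (63) are easy to verify by simulation. The numbers below are assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: Y = a1*X1 + a2*X2 with correlated normal X1, X2
a1, a2 = 2.0, 3.0
m1, m2 = 1.0, -1.0
s1, s2 = 1.0, 2.0
r12 = 0.5

# Formulas (62) and (63)
mY = a1 * m1 + a2 * m2                                              # = -1
DY = a1**2 * s1**2 + a2**2 * s2**2 + 2 * a1 * a2 * r12 * s1 * s2    # = 52

# Monte Carlo check
K = np.array([[s1**2, r12 * s1 * s2],
              [r12 * s1 * s2, s2**2]])
X = rng.multivariate_normal([m1, m2], K, size=400_000)
Y = a1 * X[:, 0] + a2 * X[:, 1]
```

The sample mean and variance of `Y` land close to the theoretical values −1 and 52.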

Let X_1, \dots, X_n be independent random variables that obey the normal distribution with zero mathematical expectation and unit variance, that is, the standard normal distribution. The distribution of the random variable equal to the sum of their squares,

\chi^2 = \sum_{i=1}^{n} X_i^2,   (64)

is called the "chi-square distribution with n degrees of freedom".

The chi-square distribution density with n = 2 degrees of freedom equals

f(x) = \frac{1}{2}\,e^{-x/2}, \quad x > 0.   (65)

The chi-square distribution density with n degrees of freedom has the form:

f(x) = \frac{1}{2^{n/2}\,\Gamma(n/2)}\,x^{n/2-1} e^{-x/2}, \quad x > 0,   (66)

where \Gamma is Euler's gamma function. As the number of degrees of freedom increases, the distribution approaches the normal law (for n > 30 the distribution is practically indistinguishable from normal). The mathematical expectation of the chi-square distribution with n degrees of freedom is n, and the variance is 2n.
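The stated moments (mean n, variance 2n) follow directly from definition (64) and are easy to confirm by simulation; a sketch with n = 5:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5            # degrees of freedom
m = 500_000      # number of simulated chi-square variates

# Definition (64): sum of squares of n independent standard normal variables
chi2 = (rng.standard_normal((m, n)) ** 2).sum(axis=1)

mean_est = chi2.mean()   # theoretical value: n = 5
var_est = chi2.var()     # theoretical value: 2n = 10
```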

The Student distribution with n degrees of freedom, St(n), is defined as the distribution of the random variable

T = \frac{Z}{\sqrt{\chi_n^2 / n}},   (67)

where Z is a standard normal variable independent of \chi_n^2.

The Student distribution density with n degrees of freedom has the form:

f(t) = \frac{\Gamma\!\left(\frac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\!\left(\frac{n}{2}\right)}\left(1 + \frac{t^2}{n}\right)^{-\frac{n+1}{2}}.   (68)

The mathematical expectation (for n > 1) equals 0, and the variance (for n > 2) equals n/(n-2). As n grows, the Student distribution approaches the normal one (already at n > 30 it almost coincides with the normal distribution).
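Definition (67) and the variance n/(n−2) can likewise be checked by simulation (n = 10 assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10
m = 500_000

# Definition (67): T = Z / sqrt(chi2_n / n), with Z standard normal, independent of chi2_n
Z = rng.standard_normal(m)
chi2 = (rng.standard_normal((m, n)) ** 2).sum(axis=1)
T = Z / np.sqrt(chi2 / n)

mean_est = T.mean()   # theoretical value: 0
var_est = T.var()     # theoretical value: n / (n - 2) = 1.25
```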

The Fisher-Snedecor distribution (or F-distribution) with n_1 and n_2 degrees of freedom is the distribution of the random variable

F = \frac{\chi_{n_1}^2 / n_1}{\chi_{n_2}^2 / n_2},   (69)

where \chi_{n_1}^2 and \chi_{n_2}^2 are independent random variables having chi-square distributions with n_1 and n_2 degrees of freedom, respectively.

4. Pismenny D.T. Lecture Notes on Probability Theory and Mathematical Statistics. Moscow: Iris-press, 2004.

Contents

1. Basic information about systems of random variables and methods of specifying them
1.1. The concept of a system of random variables
1.2. The probability distribution function of a two-dimensional random variable and its properties
1.3. The probability distribution law of a discrete two-dimensional random variable
1.4. The probability distribution density of a continuous two-dimensional random variable and its properties
1.5. Systems of n random variables
2. Dependence and independence of random variables
2.1. Independent random variables
2.2. Conditional distribution laws
2.3. Numerical characteristics of dependence
3. Normal distribution of a system of random variables
3.1. Bivariate normal distribution
3.2. General case of the n-dimensional normal distribution
3.3. Functions of normally distributed random variables: chi-square, Student, and Fisher-Snedecor distributions
References

Compiled by Vera Alexandrovna Bobkova

Systems of Random Variables

Guidelines for students' independent work

Editor G.V. Kulikova

Signed for printing 03/02/2010. Format 60×84. Writing paper. Conventional printed sheets 1.63. Publisher's sheets 1.81. Print run 50 copies.

State Educational Institution of Higher Professional Education "Ivanovo State University of Chemical Technology"

Printed on the printing equipment of the Department of Economics and Finance of the State Educational Institution of Higher Professional Education "IGHTU"

153000, Ivanovo, F. Engels Ave., 7

Let us consider a system of two continuous random variables. The distribution law of this system is called the normal distribution law if the probability density function of the system has the form

f(x, y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-r^2}} \exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-m_x)^2}{\sigma_x^2} - \frac{2r(x-m_x)(y-m_y)}{\sigma_x\sigma_y} + \frac{(y-m_y)^2}{\sigma_y^2}\right]\right\}.   (1.18.35)

It can be shown that here m_x, m_y are the mathematical expectations of the random variables, \sigma_x, \sigma_y are their standard deviations, and r is their correlation coefficient. Calculations using formulas (1.18.31) and (1.18.35) give the marginal densities

f_1(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-(x-m_x)^2/2\sigma_x^2}, \qquad f_2(y) = \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-(y-m_y)^2/2\sigma_y^2}.   (1.18.36)

It is easy to see that if random variables distributed according to the normal law are uncorrelated (r = 0), then they are also independent:

f(x, y) = f_1(x)\,f_2(y).

Thus, for the normal distribution law, non-correlation and independence are equivalent concepts.

If r \ne 0, then the random variables are dependent. The conditional distribution laws are calculated using formulas (1.18.20):

f(x/y) = \frac{f(x, y)}{f_2(y)}, \qquad f(y/x) = \frac{f(x, y)}{f_1(x)}.   (1.18.37)

Both laws (1.18.37) represent normal distributions. Indeed, let us transform, for example, the second of relations (1.18.37) to the form

f(y/x) = \frac{1}{\sigma_{y/x}\sqrt{2\pi}} \exp\left\{-\frac{(y - m_{y/x})^2}{2\sigma_{y/x}^2}\right\}.

This is indeed a normal distribution law, whose conditional mathematical expectation equals

m_{y/x} = m_y + r\,\frac{\sigma_y}{\sigma_x}(x - m_x),   (1.18.38)

and whose conditional standard deviation is expressed by the formula

\sigma_{y/x} = \sigma_y\sqrt{1 - r^2}.   (1.18.39)

Note that in the conditional distribution law of Y for a fixed value X = x, only the conditional mathematical expectation depends on this value, but not the conditional variance.

On the coordinate plane, dependence (1.18.38) is a straight line

y = m_y + r\,\frac{\sigma_y}{\sigma_x}(x - m_x),   (1.18.40)

which is called the regression line of Y on X.

In a completely analogous manner it is established that the conditional distribution of the quantity X for a fixed value Y = y,

f(x/y) = \frac{f(x, y)}{f_2(y)},   (1.18.41)

is a normal distribution with conditional mathematical expectation

m_{x/y} = m_x + r\,\frac{\sigma_x}{\sigma_y}(y - m_y),   (1.18.42)

and conditional standard deviation

\sigma_{x/y} = \sigma_x\sqrt{1 - r^2}.   (1.18.43)

In this case, the regression line of X on Y looks like

x = m_x + r\,\frac{\sigma_x}{\sigma_y}(y - m_y).   (1.18.44)

Regression lines (1.18.40) and (1.18.44) coincide only when there is a linear functional dependence between X and Y, that is, when |r| = 1. If X and Y are independent, the regression lines are parallel to the coordinate axes.
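The regression line (1.18.40) can be checked against an empirical conditional mean; the bivariate normal parameters below are assumed for the sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed bivariate normal parameters
mx, my, sx, sy, r = 2.0, -1.0, 1.5, 0.5, 0.8
K = np.array([[sx**2, r * sx * sy],
              [r * sx * sy, sy**2]])
sample = rng.multivariate_normal([mx, my], K, size=300_000)
X, Y = sample[:, 0], sample[:, 1]

# Empirical conditional mean of Y in a narrow band around a fixed x0
x0 = 3.0
band = np.abs(X - x0) < 0.05
m_cond = Y[band].mean()

# Regression line of Y on X, formula (1.18.40)
m_line = my + r * (sy / sx) * (x0 - mx)
```

The empirical conditional mean lands on the regression line to within sampling noise.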

In the case when two random variables X and Y must be used together to study random phenomena, we say that there is a system (X, Y) of two random variables. The possible values of the system (X, Y) are random points (x, y) in the region of possible values of the system.

Discrete and continuous systems are distinguished depending on the type of random variables included in them.

The distribution law of a discrete system is specified in the form of a table or distribution function.


Lecture 6. Distribution laws for a system of two random variables

The distribution table of the system (X, Y) contains the sets of values x_i, y_j and probabilities P(x_i, y_j), where P(x_i, y_j) = P(X = x_i, Y = y_j); n and m are the numbers of possible values of the random variables X and Y, respectively.

The distribution function of the system (X, Y) is given in the form:

F(x, y) = P(X < x, Y < y).

The distribution law of a continuous system (X, Y) can be represented by the distribution function F(x, y) or by the distribution density φ(x, y).

The partial (marginal) distributions of the system (X, Y) are the distribution laws of each of the random variables X and Y.

If X and Y are discrete random variables, then the probabilities P(x_i) and P(y_j), needed to find their distribution laws, are found from the distribution table using the formulas:

P(x_i) = \sum_{j=1}^{m} P(x_i, y_j), \qquad P(y_j) = \sum_{i=1}^{n} P(x_i, y_j).

For continuous systems (X, Y) the partial distribution densities have the form:

f_1(x) = \int_{-\infty}^{\infty} \varphi(x, y)\,dy, \qquad f_2(y) = \int_{-\infty}^{\infty} \varphi(x, y)\,dx.


Conditional distributions are determined by the conditional probabilities P(x_i/y_j), P(y_j/x_i) for discrete systems (X, Y) and by the conditional distribution densities φ(x/y), φ(y/x) for continuous systems (X, Y):

P(x_i/y_j) = \frac{P(x_i, y_j)}{P(y_j)}, \qquad \varphi(x/y) = \frac{\varphi(x, y)}{f_2(y)},

and similarly for P(y_j/x_i) and φ(y/x).

Conditions for the independence of the random variables X and Y:

P(x_i, y_j) = P(x_i)\,P(y_j) – for discrete systems; (8)

\varphi(x, y) = f_1(x)\,f_2(y) – for continuous systems. (9)

When these relations hold, it also follows that the conditional distributions coincide with the unconditional ones:

P(x_i/y_j) = P(x_i), (10) \qquad P(y_j/x_i) = P(y_j). (11)

The probability that the possible values of a continuous system (X, Y) fall into a region (D) is determined by the formula:

P((X, Y) \in D) = \iint_{D} \varphi(x, y)\,dx\,dy.   (12)


Example 3.1

The distribution law of the system (X, Y) is given by the table:

Required:

a) find the partial distributions of X and Y;

b) conditional distribution law of Y at X= -1;

c) determine whether the quantities X and Y are dependent?


Solution:

a) Find the partial distributions of X and Y

b) Conditional distribution law of Y at X = -1. When X = -1, the random variable Y has the following distribution law:

c) Determine whether the quantities X and Y are dependent?

Since the probability distributions P(y_j) and P(y_j / X = -1) in the unconditional and conditional laws differ, the random variables X and Y are dependent.
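The whole procedure of Example 3.1 (marginals, a conditional law, the independence check) can be written out in a few lines. The joint table below is hypothetical, since the original table is not reproduced here:

```python
# Hypothetical joint distribution table: P[(x, y)] = P(X = x, Y = y)
xs = [-1, 0, 1]
ys = [1, 2]
P = {(-1, 1): 0.10, (-1, 2): 0.20,
     (0, 1): 0.15, (0, 2): 0.05,
     (1, 1): 0.25, (1, 2): 0.25}

# a) Partial (marginal) distributions of X and Y
Px = {x: sum(P[(x, y)] for y in ys) for x in xs}
Py = {y: sum(P[(x, y)] for x in xs) for y in ys}

# b) Conditional distribution law of Y at X = -1
Py_given = {y: P[(-1, y)] / Px[-1] for y in ys}

# c) Dependence check: X and Y are independent iff P(x, y) = P(x) * P(y) in every cell
independent = all(abs(P[(x, y)] - Px[x] * Py[y]) < 1e-12 for x in xs for y in ys)
```

For this table P(Y = 1 / X = -1) = 0.10 / 0.30 = 1/3, which differs from P(Y = 1) = 0.5, so X and Y are dependent.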





Example 3.2

Given a system (X, Y) uniformly distributed in the square |x| + |y| ≤ 1 (see Fig. 22).

Determine: a) particular laws of distribution of X and Y; b) are these random variables dependent?




Solution:

The distribution law of (X, Y) has the form:

\varphi(x, y) = \begin{cases} 1/2, & |x| + |y| \le 1, \\ 0, & \text{otherwise}, \end{cases}

since the area of the square is 2. The density f_1(x) for |x| ≤ 1 is determined by the formula:

f_1(x) = \int_{-(1-|x|)}^{1-|x|} \frac{1}{2}\,dy = 1 - |x|.

Then (see Fig. 23):

f_1(x) = \begin{cases} 1 - |x|, & |x| \le 1, \\ 0, & |x| > 1. \end{cases}

Similarly, for f_2(y) we obtain f_2(y) = 1 - |y| for |y| \le 1.

Since the independence condition is not satisfied:

\varphi(x, y) \ne f_1(x)\,f_2(y),

the random variables X and Y are dependent.

The numerical characteristics of the system (X, Y) include:

  • the numerical characteristics of the random variables X and Y: m_x, m_y, D_x, D_y, σ_x, σ_y;
  • the numerical characteristics of the conditional distributions: m_{x/y}, m_{y/x}, D_{x/y}, D_{y/x}, σ_{x/y}, σ_{y/x};
  • the numerical characteristics of the connection between the random variables: K_xy and r_xy.

Lecture 7. Numerical characteristics of a system of two random variables

The numerical characteristics of the first group are determined using the formulas given earlier.

The numerical characteristics of the second group, for a continuous system (X, Y), are determined by the formulas:

m_{y/x} = \int_{-\infty}^{\infty} y\,\varphi(y/x)\,dy, \qquad D_{y/x} = \int_{-\infty}^{\infty} (y - m_{y/x})^2\,\varphi(y/x)\,dy,

and similarly for m_{x/y}, D_{x/y}. For discrete systems (X, Y), the analogous formulas use sums over the conditional probabilities.

The quantities K_xy and r_xy are characteristics of the linear correlation dependence between X and Y; they are defined by the relations:

K_{xy} = M[(X - m_x)(Y - m_y)], \qquad r_{xy} = \frac{K_{xy}}{\sigma_x\sigma_y},

where K_xy is the correlation moment (moment of connection) between X and Y, and r_xy is the correlation coefficient between X and Y, with -1 \le r_{xy} \le 1. (16)

The correlation coefficient characterizes the degree of linear correlation between X and Y.

A correlation dependence is understood as a dependence in which a change in one random variable, say X, changes the mathematical expectation of the other, Y (that is, m_{y/x} depends on x).

When |r_xy| = 1, there is a linear functional relationship between X and Y; when r_xy = 0, the random variables X and Y are uncorrelated.

If X and Y are independent, then they are uncorrelated. If r_xy = 0, the random variables X and Y may nevertheless be dependent.
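A classic illustration of the last statement is Y = X² for standard normal X: the correlation coefficient is zero, yet Y is a function of X. A simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal(400_000)
Y = X ** 2                 # fully determined by X, hence dependent

Kxy = ((X - X.mean()) * (Y - Y.mean())).mean()   # correlation moment
rxy = Kxy / (X.std() * Y.std())                  # correlation coefficient
```

Here K_xy = M[X·X²] − M[X]·M[X²] = M[X³] = 0 by symmetry, so r_xy comes out near zero even though the dependence is functional.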



Example 3.3

Under the conditions of Example 3.1, determine m_x, m_y, D_x, D_y, K_xy, r_xy.

Solution:




Example 3.4

Under the conditions of Example 3.2, determine the numerical characteristics of the system (X, Y).

Solution:


φ(y/x) is the density of the uniform distribution on the interval

(-(1-|x|), 1-|x|).

Similarly, one can write expressions for m_{x/y} and D_{x/y}.



In the general case, when the random variables in the system (X, Y) are dependent, the normal distribution density has the form:

f(x, y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-r^2}} \exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-m_x)^2}{\sigma_x^2} - \frac{2r(x-m_x)(y-m_y)}{\sigma_x\sigma_y} + \frac{(y-m_y)^2}{\sigma_y^2}\right]\right\}.   (17)

The partial distributions are determined by the formulas:

f_1(x) = \frac{1}{\sigma_x\sqrt{2\pi}}\,e^{-(x-m_x)^2/2\sigma_x^2},   (18)

f_2(y) = \frac{1}{\sigma_y\sqrt{2\pi}}\,e^{-(y-m_y)^2/2\sigma_y^2}.   (19)

Lecture 8. Normal distribution law for a system of two random variables

The conditional densities φ(x/y) and φ(y/x) have the form of normal distributions:

\varphi(x/y) = \frac{1}{\sigma_{x/y}\sqrt{2\pi}}\,e^{-(x-m_{x/y})^2/2\sigma_{x/y}^2},   (20) \qquad \varphi(y/x) = \frac{1}{\sigma_{y/x}\sqrt{2\pi}}\,e^{-(y-m_{y/x})^2/2\sigma_{y/x}^2},   (21)

where

m_{x/y} = m_x + r\,\frac{\sigma_x}{\sigma_y}(y - m_y),   (22) \qquad m_{y/x} = m_y + r\,\frac{\sigma_y}{\sigma_x}(x - m_x),   (23)

\sigma_{x/y} = \sigma_x\sqrt{1-r^2},   (24) \qquad \sigma_{y/x} = \sigma_y\sqrt{1-r^2}.   (25)

If the random variables X and Y are independent, then the density takes the form:

f(x, y) = f_1(x)\,f_2(y).   (26)

The probability that a normally distributed system (X, Y) (in the case of independent random variables X and Y) hits a rectangle with sides parallel to the coordinate axes is determined using the Laplace function by the formula:

P(\alpha < X < \beta,\ \gamma < Y < \delta) = \left[\Phi\!\left(\frac{\beta - m_x}{\sigma_x}\right) - \Phi\!\left(\frac{\alpha - m_x}{\sigma_x}\right)\right]\left[\Phi\!\left(\frac{\delta - m_y}{\sigma_y}\right) - \Phi\!\left(\frac{\gamma - m_y}{\sigma_y}\right)\right].   (27)

Example 3.5

Determine the probability of a projectile hitting a target that has the shape of a rectangle with center coordinates x_c = 10 m, y_c = 5 m. The sides of the rectangle are parallel to the coordinate axes and equal 2l = 20 m along the Ox axis and 2k = 40 m along the Oy axis. Coordinates of the aiming point: m_x = 5 m, m_y = 5 m. The dispersion characteristics of the projectiles along the Ox and Oy axes are, respectively, σ_x = 20 m and σ_y = 10 m.

Solution: Let's denote the area of ​​the rectangle by D.

Then:
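With `math.erf` in place of the Laplace-function tables, the computation of Example 3.5 by formula (27) reads:

```python
import math

def laplace(t):
    # Laplace function: Phi(t) = 0.5 * erf(t / sqrt(2))
    return 0.5 * math.erf(t / math.sqrt(2.0))

# Data of Example 3.5: rectangle center (10, 5), half-sides 10 (Ox) and 20 (Oy)
mx, my = 5.0, 5.0        # aiming point
sx, sy = 20.0, 10.0      # dispersion characteristics
x1, x2 = 0.0, 20.0       # 10 - 10 .. 10 + 10
y1, y2 = -15.0, 25.0     # 5 - 20 .. 5 + 20

p = ((laplace((x2 - mx) / sx) - laplace((x1 - mx) / sx)) *
     (laplace((y2 - my) / sy) - laplace((y1 - my) / sy)))
```

This gives p = [Φ(0.75) + Φ(0.25)]·[Φ(2) + Φ(2)] ≈ 0.372 · 0.954 ≈ 0.355.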




Topic 4. Functions of random variables



Lecture 9. The law of distribution of a function of one random argument

The procedure for finding the distribution law of a function Y = y(X), where X is a discrete random variable, is presented in Example 4.1.

If the possible values of the random variables X and Y are linked by a functional dependence y = y(x), where y(x) is continuous and differentiable, and the distribution density f(x) of the random variable X is known, then for the case when y(x) monotonically increases or decreases on the range of its possible values, the distribution density of the random variable Y is expressed by formula (1):

g(y) = f(x(y))\,\lvert x'(y)\rvert.   (1)

In formula (1), x(y) is the inverse function.

In the case when the function y(x) has n sections of monotonic decrease and increase, this formula is written in the form (2):

g(y) = \sum_{k=1}^{n} f(x_k(y))\,\lvert x_k'(y)\rvert.   (2)

Example 4.1

The random variable X has a distribution law:

Find the distribution law of a random variable

Solution: Find the possible values of the function at x = 0, 1, 2, 3.

They are respectively equal to 1, 2, 1, 0. Therefore, the possible values of Y are 0, 1, 2.

We find the probabilities of these possible values:

Y distribution law:
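The bookkeeping of Example 4.1 (merging probabilities of equal function values) can be sketched as follows; both the distribution of X and the function y(x) below are hypothetical, chosen only to show the merging step:

```python
from collections import defaultdict

# Hypothetical distribution law of a discrete X
px = {0: 0.1, 1: 0.3, 2: 0.4, 3: 0.2}

def pushforward(px, g):
    # Distribution law of Y = g(X): probabilities of equal g(x) values add up
    py = defaultdict(float)
    for x, p in px.items():
        py[g(x)] += p
    return dict(py)

# Hypothetical function with coinciding values: y(x) = (x - 1)^2 maps both 0 and 2 to 1
py = pushforward(px, lambda x: (x - 1) ** 2)
```

For this choice, Y takes the values 0, 1, 4 with probabilities 0.3, 0.1 + 0.4 = 0.5, and 0.2.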




Example 4.2

Find the distribution density of a random variable and plot it if the random variable X is distributed uniformly over the interval

Solution: The graph of the function is shown in Fig. 24.

The random variable X has the following distribution density:

We find the inverse function x(y) and its derivative:

We finally obtain the expression for the density g(y); its graph is shown in Fig. 25.



Lecture 10. Numerical characteristics of functions of random variables

Basic formulas:



M\left[\sum_i X_i\right] = \sum_i M[X_i]; \qquad D\left[\sum_i X_i\right] = \sum_i D[X_i],

where the X_i are independent random variables.

For n random variables, the numerical characteristics are specified by the set m_1, \dots, m_n, D_1, \dots, D_n and the correlation matrix:

K = (K_{ij}), \quad i, j = 1, \dots, n.

Notation in the form of a triangular matrix is valid because K_{ij} = K_{ji}.

The correlation matrix can also be presented in normalized form, that is, as the matrix of correlation coefficients r_{ij} = K_{ij}/(\sigma_i\sigma_j).

Example 4.3

Determine the numerical characteristics of a random variable U that is a linear function of the random arguments X, Y and Z.

Solution:

The random variable U is a linear function of the random arguments X, Y and Z. Therefore, using formulas (11) and (17) of this section, we obtain its mathematical expectation and variance.

Numerical characteristics of a system of random variables

The distribution law fully characterizes a system of random variables, but it is not always convenient to use in practice because of its complexity. It is often sufficient to know the numerical characteristics of the random variables forming the system: the mathematical expectations M[X], M[Y], the variances D[X], D[Y], and the standard deviations. They are calculated using the usual formulas (sums over the marginal probabilities for discrete systems, integrals over the marginal densities for continuous ones).

The variances of the components can also be calculated using the shortcut formulas

D[X] = M[X^2] - (M[X])^2, \qquad D[Y] = M[Y^2] - (M[Y])^2.

An important role in the theory of two-dimensional random variables is played by the correlation moment (covariance), which characterizes the linear relationship between the components of the system:

K_{xy} = M[(X - m_x)(Y - m_y)].

The correlation moment is calculated using the following formulas.

For discrete systems of random variables:

K_{xy} = \sum_i \sum_j (x_i - m_x)(y_j - m_y)\,P(x_i, y_j).

For continuous systems of random variables:

K_{xy} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - m_x)(y - m_y)\,f(x, y)\,dx\,dy.

Along with the correlation moment, a dimensionless characteristic of the correlation connection is used, the correlation coefficient

r_{xy} = \frac{K_{xy}}{\sigma_x\sigma_y}.

For any system of random variables, -1 \le r_{xy} \le 1.

The random variables X and Y are called uncorrelated if r_{xy} = 0.

Independent variables are always uncorrelated.

The conditional distribution law of a random variable in the system is its distribution law calculated under the condition that the other random variable has taken a certain value. For systems of continuous random variables, the conditional laws are expressed by the conditional distribution densities of the components:

f(x/y) = \frac{f(x, y)}{f_2(y)}, \qquad f(y/x) = \frac{f(x, y)}{f_1(x)}.   (6.9)

Moreover, f(x, y) = f_1(x)\,f(y/x) = f_2(y)\,f(x/y).

Laws of uniform and normal distribution of systems of random variables

Uniform law. If all values of the random variables in the system lie inside a region D, and the probability density of the system has the form

f(x, y) = \begin{cases} 1/S_D, & (x, y) \in D, \\ 0, & (x, y) \notin D, \end{cases}

where S_D is the area of the region D, then (X, Y) obeys the uniform distribution law.

Normal law. If the distribution density of the system (X, Y) has the form

f(x, y) = \frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-r^2}} \exp\left\{-\frac{1}{2(1-r^2)}\left[\frac{(x-m_x)^2}{\sigma_x^2} - \frac{2r(x-m_x)(y-m_y)}{\sigma_x\sigma_y} + \frac{(y-m_y)^2}{\sigma_y^2}\right]\right\},

where m_x, m_y are the mathematical expectations, \sigma_x, \sigma_y are the standard deviations, and r is the correlation coefficient, then the system obeys the normal distribution law.

For uncorrelated random variables, the normal distribution density factors into the product of the one-dimensional normal densities:

f(x, y) = f_1(x)\,f_2(y).

Example 6.2. It is planned to operate three enterprises in the coming year. The system (X, Y), where X is the enterprise number and Y is the amount of investment (in thousands of conventional monetary units), is defined by a table.

The distribution law of the component X means that, regardless of the volume of investments, the first enterprise will receive investments with probability 0.3, the second with probability 0.2, and the third with probability 0.5. The component Y has a distribution law meaning that, regardless of the enterprise number, the volume of investments can equal 3 thousand conventional monetary units with probability 0.5 or 4 thousand conventional monetary units with probability 0.5.

To determine the numerical characteristics of the components, we use the found distribution laws of X and Y and the formulas for the numerical characteristics of discrete systems:

m_y – the average investment volume;

σ_y – the deviation from the average investment volume;

K_xy – the connection between the enterprise number and the investment volume.

Example 6.3. During a certain period of time, production used two types of raw materials. Random variables X and Y are respectively the volumes of raw materials, expressed in conventional units. The probability distribution density of the system has the form


