Correlation function and variance of a random process

Measurement errors caused by induced interference and by the intrinsic noise of electronic devices are described using the mathematical theory of random processes. Let us recall the basic concepts of this theory, which we will use in the further presentation and which are used by GOST 8.009 [GOST] when normalizing the random component of the measurement error.

x̄ = (1/n) Σᵢ xᵢ ,

D = (1/(n − 1)) Σᵢ (xᵢ − x̄)² ,

σ = √D .

In the limit, as the number of observations grows, these parameter estimates tend to their true values. In the above formulas, the same notation is used for the parameter estimates and for the parameters themselves, since in what follows we will use only estimates, unless specifically stated otherwise.

An individual realization of a random process is a deterministic (non-random) function, so its spectral characteristic can be found using the Fourier transform:

In accordance with this definition, noise intensity is measured in V²/Hz, A²/Hz, etc. Note that in the theory of random processes the concept of power differs from the generally accepted one: it is assumed that the noise power is dissipated in a resistance of 1 Ohm, but this dimension is not indicated; therefore, instead of the dimension of power, V² or A² is used. Likewise, energy is measured not in joules but in V²·s or A²·s.

The autocorrelation function and the power spectral density are related to each other by the Fourier transform (Wiener-Khinchin theorem [Baskakov]):

S(f) = ∫_{−∞}^{∞} R(τ) e^(−j2πfτ) dτ ;

R(τ) = ∫_{−∞}^{∞} S(f) e^(j2πfτ) df .
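As a purely numerical sketch of this transform pair (all values here are illustrative assumptions, not data from the text), one can tabulate an exponential autocorrelation function, take its discrete Fourier transform to obtain the spectrum, and transform back:

```python
import numpy as np

# Numerical sketch of the Wiener-Khinchin pair (assumed parameters):
# tabulate R(tau) = exp(-a*|tau|), transform to get the power spectral
# density, transform back to recover R.
a = 2.0                                      # assumed decay rate, 1/s
n, dt = 4096, 0.01                           # lag grid
tau = (np.arange(n) - n // 2) * dt           # symmetric lag axis
R = np.exp(-a * np.abs(tau))                 # autocorrelation function

S = np.fft.fft(np.fft.ifftshift(R)) * dt     # forward transform: PSD
R_back = np.fft.fftshift(np.fft.ifft(S / dt)).real   # inverse transform

psd_is_real = np.max(np.abs(S.imag)) < 1e-8  # a valid PSD is real...
psd_is_nonneg = S.real.min() > -1e-8         # ...and non-negative
recovered = np.allclose(R, R_back, atol=1e-10)
```

The forward transform of an even, decaying autocorrelation comes out real and non-negative, as a power spectral density must be.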

If the energy spectrum lies in the frequency range from f₁ > 0 to f₂, for example due to the use of a filter, then outside this range its values can be assumed equal to zero, which allows us to change the limits of integration in (4.16):

D = 2 ∫_{f₁}^{f₂} S(f) df .

When using formulas (4.16) and (4.19), we must remember that they use the two-sided energy spectrum (symmetric about the origin of the ordinate axis). In the case of a one-sided spectrum, specified only for positive frequencies, the coefficient "2" should be absent:

D = ∫_{f₁}^{f₂} S(f) df .

In foreign reference books, on graphs of the noise power spectral density of transistors, operational amplifiers, etc., the square root of the noise power spectral density, with the dimension nV/√Hz, pA/√Hz, etc., is usually plotted along the ordinate axis. In this case the noise voltage (rms value) can be found as

U = √( ∫_{f₁}^{f₂} e_n²(f) df ) .

For white noise, the previous expression is simplified:

U = e_n √(f₂ − f₁) .
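A minimal arithmetic sketch of this white-noise formula, with an assumed spectral density and band (both values are illustrative, not taken from a datasheet):

```python
import math

# Sketch: rms voltage of white noise with an assumed one-sided spectral
# density e_n = 10 nV/sqrt(Hz) in an assumed band f1..f2.
e_n = 10e-9            # V/sqrt(Hz), assumed value
f1, f2 = 0.0, 1000.0   # Hz, assumed band
U = e_n * math.sqrt(f2 - f1)   # rms noise voltage, V (about 316 nV)
```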

Consider the summation of two random errors X and Y with zero mathematical expectation (i.e., centered random variables). By definition, the variance of the sum of two random variables is equal to the mathematical expectation of the square of their sum:

D(X + Y) = M[(X + Y)²] = M[X²] + M[Y²] + 2 M[X·Y] = σx² + σy² + 2 K_xy ,

where D and M are the variance and mathematical-expectation operators; σx and σy are the standard deviations of the random variables X and Y. The quantity

K_xy = M[X·Y]

is called the covariance ("joint variation") of the random variables X and Y.

The covariance of discrete random variables can be estimated from their sampled values xᵢ and yᵢ using the arithmetic-mean formula:

K_xy ≈ (1/n) Σᵢ xᵢ yᵢ .
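The arithmetic-mean estimate can be sketched on synthetic data (the pair below is constructed so the true covariance is 0.6; all numbers are assumptions for illustration):

```python
import numpy as np

# Sketch of the arithmetic-mean covariance estimate for sampled values.
rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)
y = 0.6 * x + rng.normal(0.0, 1.0, n)   # correlated with x, true cov = 0.6

# centre the samples, then average the products
K_xy = np.mean((x - x.mean()) * (y - y.mean()))
```

The same number falls out of NumPy's built-in biased covariance estimate, which uses exactly this averaging.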

The correlation coefficient is the ratio of the covariance to the product of the standard deviations σx and σy of the random variables X and Y:

r = K_xy / (σx σy) .

With this notation, the standard deviation of the sum (difference) of the two errors is σΣ = √(σx² + σy² ± 2 r σx σy) .

Here the "−" sign is used when the random variables are subtracted, for example when the difference of the voltages of two measuring channels is formed. In this case the presence of correlation between the channels partially reduces the error of the difference.

In the case when the random variables are statistically independent (r = 0), the previous expression is simplified:

σΣ = √(σx² + σy²) .

This summation is called geometric, since it is performed in the same way as finding the hypotenuse of a right triangle.
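The summation rule above can be sketched as a small helper (the 3-4-5 values are an illustrative assumption):

```python
import math

# Sketch of summing two centered random errors with standard deviations
# s1, s2 and correlation coefficient r (the "+" case; use "-" for a
# difference of channels).
def sigma_sum(s1: float, s2: float, r: float = 0.0) -> float:
    return math.sqrt(s1 * s1 + s2 * s2 + 2.0 * r * s1 * s2)

s_geo = sigma_sum(3.0, 4.0)          # independent errors: hypotenuse, 5.0
s_full = sigma_sum(3.0, 4.0, r=1.0)  # fully correlated: plain sum, 7.0
```

With r = 0 the result is the hypotenuse of a 3-4-5 right triangle; with r = 1 the standard deviations simply add.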

If the correlation coefficient is close to unity, the relationship between the random variables is close to linear, and the correlation coefficient can be estimated from the line fitted to the data. The tangent of the slope angle of this line is called the regression coefficient, and the equation of the regression line can be obtained from the data.

The statistical relationship between the errors of measuring instruments is in general nonlinear, but this nonlinearity is usually neglected.

When investigating the dependence or independence of two or more sections of a random process, knowledge of the mathematical expectation and variance of the process alone is not sufficient.

To determine the relationship between different random processes, the concept of the correlation function is used; it is an analogue of the concept of the covariance of random variables (see T.8).

The correlation (covariance, autocovariance, autocorrelation) function of a random process X(t) is the non-random function of two arguments K_X(t₁, t₂) that is equal to the correlation moment of the corresponding sections X(t₁) and X(t₂):

K_X(t₁, t₂) = M[ (X(t₁) − m_X(t₁)) (X(t₂) − m_X(t₂)) ] ,

or, using the notation X̊(t) = X(t) − m_X(t) for the centered random function, we have

K_X(t₁, t₂) = M[ X̊(t₁) X̊(t₂) ] .

Here are the main properties of the correlation function K_X(t₁, t₂) of a random process X(t).

1. For equal values of the arguments, the correlation function is equal to the variance of the random process: K_X(t, t) = D_X(t).

Indeed, K_X(t, t) = M[X̊(t) X̊(t)] = M[X̊²(t)] = D_X(t).

This property means that once the mathematical expectation and the correlation function (the main characteristics of a random process) have been calculated, there is no need to calculate the variance separately.

2. The correlation function does not change when its arguments are interchanged, i.e., it is a symmetric function of its arguments: K_X(t₁, t₂) = K_X(t₂, t₁).

This property is directly derived from the definition of the correlation function.

3. If a non-random function is added to a random process, the correlation function does not change, i.e., if Y(t) = X(t) + φ(t), then K_Y(t₁, t₂) = K_X(t₁, t₂). In other words, the correlation function is invariant with respect to the addition of any non-random function.

Indeed, from the chain of equalities m_Y(t) = M[X(t) + φ(t)] = m_X(t) + φ(t) it follows that Y̊(t) = Y(t) − m_Y(t) = X(t) − m_X(t) = X̊(t). From here we obtain the required property 3.

4. The modulus of the correlation function does not exceed the product of the standard deviations, i.e., |K_X(t₁, t₂)| ≤ σ_X(t₁) σ_X(t₂).

The proof of property 4 is carried out similarly to that in paragraph 12.2 (Theorem 12.2), taking into account the first property of the correlation function of the random process X(t).

5. When a random process X(t) is multiplied by a non-random factor φ(t), its correlation function is multiplied by the product φ(t₁) φ(t₂); i.e., if Y(t) = φ(t) X(t), then K_Y(t₁, t₂) = φ(t₁) φ(t₂) K_X(t₁, t₂).
5.1. Normalized correlation function

Along with the correlation function of a random process, the normalized correlation function (or autocorrelation function) ρ_X(t₁, t₂) is also considered, defined by the equality

ρ_X(t₁, t₂) = K_X(t₁, t₂) / (σ_X(t₁) σ_X(t₂)) .

Consequence. Based on property 1, the equality ρ_X(t, t) = 1 holds.

In its meaning, ρ_X(t₁, t₂) is similar to the correlation coefficient of random variables, but it is not a constant value: it depends on the arguments t₁ and t₂.

Let us list the properties of the normalized correlation function:

1. ρ_X(t, t) = 1 ;

2. ρ_X(t₁, t₂) = ρ_X(t₂, t₁) ;

3. |ρ_X(t₁, t₂)| ≤ 1 .

Example 4. Let the random process be defined by the formula X(t) = V·t, where V is a random variable distributed according to the normal law with parameters m_V and σ_V. Find the correlation function and the normalized correlation function of the random process X(t).

Solution. By definition we have

K_X(t₁, t₂) = M[X̊(t₁) X̊(t₂)] = M[(V − m_V) t₁ · (V − m_V) t₂] = σ_V² t₁ t₂ ,

i.e., K_X(t₁, t₂) = σ_V² t₁ t₂. From here, taking into account the definition of the normalized correlation function and the results of solving the previous examples, we obtain

ρ_X(t₁, t₂) = σ_V² t₁ t₂ / (σ_V t₁ · σ_V t₂) = 1 for t₁, t₂ > 0, i.e., ρ_X(t₁, t₂) ≡ 1.
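A Monte-Carlo check of this example, assuming (as the result ρ ≡ 1 suggests) that the process has the form X(t) = V·t with V normal; the values of m_V, σ_V, t₁, t₂ are arbitrary assumptions:

```python
import numpy as np

# Monte-Carlo check of Example 4, assuming X(t) = V*t with V normal.
rng = np.random.default_rng(1)
m_V, sigma_V = 2.0, 0.5                # assumed parameters of V
V = rng.normal(m_V, sigma_V, 100_000)  # ensemble of realizations of V
t1, t2 = 1.0, 3.0                      # two sections of the process

x1 = V * t1 - m_V * t1                 # centered section at t1
x2 = V * t2 - m_V * t2                 # centered section at t2
K = np.mean(x1 * x2)                   # estimate of K_x(t1, t2)
rho = K / (x1.std() * x2.std())        # normalized correlation function
```

The estimate of K approaches σ_V² t₁ t₂ and ρ approaches 1, since the two sections are exactly proportional.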

5.2. Cross-correlation function of a random process

To determine the degree of dependence between the sections of two random processes, the correlation function of the link, or cross-correlation function, is used.

The cross-correlation function of two random processes X(t) and Y(t) is the non-random function R_XY(t₁, t₂) of two independent arguments t₁ and t₂ which, for each pair of values t₁ and t₂, is equal to the correlation moment of the two sections X(t₁) and Y(t₂):

R_XY(t₁, t₂) = M[ X̊(t₁) Y̊(t₂) ] .
Two random processes X(t) and Y(t) are called uncorrelated if their cross-correlation function is identically equal to zero, i.e., if R_XY(t₁, t₂) = 0 for any t₁ and t₂. If R_XY(t₁, t₂) ≠ 0 for some t₁ and t₂, then the random processes X(t) and Y(t) are called correlated (or related).

Let us consider the properties of the cross-correlation function, which follow directly from its definition and the properties of the correlation moment (see 12.2):

1. When the indices and arguments are simultaneously interchanged, the cross-correlation function does not change: R_XY(t₁, t₂) = R_YX(t₂, t₁).

2. The modulus of the cross-correlation function of two random processes does not exceed the product of their standard deviations: |R_XY(t₁, t₂)| ≤ σ_X(t₁) σ_Y(t₂).

3. The cross-correlation function does not change if non-random functions φ(t) and ψ(t) are added to the random processes X(t) and Y(t) respectively; i.e., if X₁(t) = X(t) + φ(t) and Y₁(t) = Y(t) + ψ(t), then R_X₁Y₁(t₁, t₂) = R_XY(t₁, t₂).

4. Non-random factors can be taken outside the sign of the correlation; i.e., if X₁(t) = φ(t) X(t) and Y₁(t) = ψ(t) Y(t), then R_X₁Y₁(t₁, t₂) = φ(t₁) ψ(t₂) R_XY(t₁, t₂).

5. If Y(t) = X(t), then R_XY(t₁, t₂) = K_X(t₁, t₂).

6. If the random processes X(t) and Y(t) are uncorrelated, then the correlation function of their sum is equal to the sum of their correlation functions: K_(X+Y)(t₁, t₂) = K_X(t₁, t₂) + K_Y(t₁, t₂).
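Property 6 can be checked numerically on two processes built from independent noise sources (the moving-average coefficients and the lag are arbitrary assumptions):

```python
import numpy as np

# Sketch: for uncorrelated X and Y, the correlation function of Z = X + Y
# is the sum of their correlation functions (property 6).
rng = np.random.default_rng(2)
n, lag = 500_000, 3

def corr_at(z, k):
    """Time-average correlation of z at lag k (centered)."""
    zc = z - z.mean()
    return np.mean(zc[:-k] * zc[k:])

# Two moving-average processes driven by independent noise sources,
# hence uncorrelated with each other (coefficients are assumed):
e1 = rng.normal(size=n + lag)
e2 = rng.normal(size=n + lag)
x = e1[lag:] + 0.5 * e1[:-lag]   # true correlation at lag 3 is 0.5
y = e2[lag:] + 0.8 * e2[:-lag]   # true correlation at lag 3 is 0.8
z = x + y

Rx, Ry, Rz = corr_at(x, lag), corr_at(y, lag), corr_at(z, lag)
```

The estimate Rz matches Rx + Ry to within sampling error, because the cross terms average to zero.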

To assess the degree of dependence of the sections of two random processes, the normalized cross-correlation function ρ_XY(t₁, t₂) is also used, defined by the equality

ρ_XY(t₁, t₂) = R_XY(t₁, t₂) / (σ_X(t₁) σ_Y(t₂)) .

The function ρ_XY(t₁, t₂) has the same properties as the function R_XY(t₁, t₂), but property 2 is replaced by the double inequality −1 ≤ ρ_XY(t₁, t₂) ≤ 1, i.e., the modulus of the normalized cross-correlation function does not exceed unity.

Example 5. Find the cross-correlation function of two random processes X(t) and Y(t) expressed through a common random variable V with the given characteristics.

Solution. By the definition of the cross-correlation function, R_XY(t₁, t₂) = M[X̊(t₁) Y̊(t₂)].

The mathematical expectation and the variance are important characteristics of a random process, but they do not give a sufficient idea of the character of its individual realizations. This can be seen from Fig. 9.3, which shows realizations of two random processes that are completely different in structure although they have the same values of mathematical expectation and variance. The dashed lines in Fig. 9.3 show the mathematical expectations of the processes.

The process shown in Fig. 9.3, a proceeds relatively smoothly from one section to another, while the process in Fig. 9.3, b varies strongly from section to section. Therefore the statistical connection between sections is greater in the first case than in the second, but this cannot be established from either the mathematical expectation or the variance.

To characterize to some extent the internal structure of a random process, i.e., to take into account the relationship between the values of the random process at different instants of time, or, in other words, to take into account the degree of variability of the random process, the concept of the correlation (autocorrelation) function of the random process is introduced.

The correlation function of a random process X(t) is a non-random function R_x(t₁, t₂) of two arguments which, for each pair of arbitrarily chosen values of the arguments (instants of time t₁ and t₂), is equal to the mathematical expectation of the product of the two random variables of the corresponding sections of the random process:

R_x(t₁, t₂) = M[ X̊(t₁) X̊(t₂) ] = ∫∫ x̊₁ x̊₂ f(x₁, x₂; t₁, t₂) dx₁ dx₂ ,

where f(x₁, x₂; t₁, t₂) is the two-dimensional probability density; X̊(t) = X(t) − m_x(t) is the centered random process; m_x(t) is the mathematical expectation (mean value) of the random process.

Various random processes, depending on how their statistical characteristics change over time, are divided into stationary and non-stationary. A distinction is made between stationarity in the narrow sense and stationarity in the broad sense.

A random process is called stationary in the narrow sense if its n-dimensional distribution functions and probability densities do not depend, for any n, on a shift of all the points t₁, t₂, …, t_n along the time axis by the same amount ε, i.e.,

f(x₁, …, x_n; t₁, …, t_n) = f(x₁, …, x_n; t₁ + ε, …, t_n + ε) .

This means that the processes X(t) and X(t + ε) have the same statistical properties for any ε; i.e., the statistical characteristics of a stationary random process are constant in time.

A stationary random process is a kind of analogue of a steady-state process in deterministic systems. Any transient process is not stationary.

A random process is called stationary in the broad sense if its mathematical expectation is constant,

m_x(t) = m_x = const ,

and its correlation function depends on only one variable, the difference of the arguments τ = t₂ − t₁; in this case the correlation function is denoted R_x(τ).

Processes that are stationary in the narrow sense are necessarily stationary in the broad sense; however, the converse statement is, generally speaking, false.

The concept of a random process, stationary in the broad sense, is introduced when only the mathematical expectation and the correlation function are used as statistical characteristics of a random process. The part of the theory of random processes that describes the properties of a random process through its mathematical expectation and correlation function is called correlation theory.

For a random process with a normal distribution law, the mathematical expectation and correlation function completely determine its n-dimensional probability density.

Therefore, for normal random processes the concepts of stationarity in the broad and narrow sense coincide.

The theory of stationary processes has been developed most fully and allows relatively simple calculations in many practical cases. Therefore it is sometimes advisable to assume stationarity even in those cases where the random process is non-stationary but, during the considered period of system operation, the statistical characteristics of the signals do not have time to change significantly. In what follows, unless otherwise stated, random processes that are stationary in the broad sense will be considered.

When studying random processes that are stationary in the broad sense, we can limit ourselves to processes with mathematical expectation (mean value) equal to zero, i.e., m_x = 0, since a random process with a non-zero mathematical expectation can be represented as the sum of a process with zero mathematical expectation and a constant non-random (regular) quantity equal to the mathematical expectation of this process (see § 9.6 below).

When m_x = 0, the expression for the correlation function takes the form R_x(t₁, t₂) = M[X(t₁) X(t₂)].

In the theory of random processes, two concepts of average values are used. The first is the set (ensemble) average, or mathematical expectation, which is determined from observation of the set of realizations of the random process at the same instant of time. The set average is usually denoted by a wavy line above the expression describing the random function.

In general, the set average is a function of time.

The other concept of average value is the time average, which is determined from observation of a separate realization of the random process over a sufficiently long time T. The time average is denoted by a straight line above the corresponding expression of the random function and is determined by the formula

x̄ = lim_{T→∞} (1/T) ∫₀^T x(t) dt ,

if this limit exists.

The time average is generally different for the individual realizations of the set that define the random process. Generally speaking, for the same random process the set average and the time average are different. However, there is a class of stationary random processes, called ergodic, for which the set average is equal to the time average.

The correlation function of an ergodic stationary random process decreases indefinitely in absolute value as |τ| → ∞.

However, it must be borne in mind that not every stationary random process is ergodic; for example, a random process each realization of which is constant in time (Fig. 9.4) is stationary but not ergodic. In this case the average values determined from one realization and from processing a set of realizations do not coincide. The same random process may, in general, be ergodic with respect to some statistical characteristics and non-ergodic with respect to others. In what follows we will assume that the ergodicity conditions are satisfied with respect to all statistical characteristics.

The ergodicity property has very great practical significance. If it is difficult to carry out simultaneous observation of a set of objects at an arbitrarily chosen instant of time in order to determine their statistical properties (for example, if there is only one prototype), such observation can be replaced by long-term observation of a single object. In other words, a separate realization of an ergodic random process over an infinite interval of time completely determines the whole random process with its infinite set of realizations. In fact, this underlies the method, described below, of experimentally determining the correlation function of a stationary random process from one realization.

As can be seen from (9.25), the correlation function is an average over the set. For ergodic random processes the correlation function can be defined as the time average of the product, i.e.,

R_x(τ) = lim_{T→∞} (1/T) ∫₀^T [x(t) − x̄][x(t + τ) − x̄] dt ,

where x(t) is any realization of the random process and x̄ is the time-average value determined by (9.28).

If the mean value of the random process is zero, then R_x(τ) = lim_{T→∞} (1/T) ∫₀^T x(t) x(t + τ) dt .

Based on the ergodicity property, the variance [see (9.19)] can be defined as the time average of the square of the centered random process, i.e., D_x = lim_{T→∞} (1/T) ∫₀^T [x(t) − x̄]² dt .

Comparing expressions (9.30) and (9.32) at τ = 0, one can establish a very important connection between the variance and the correlation function: the variance of a stationary random process is equal to the initial value of the correlation function,

D_x = R_x(0) .    (9.33)

From (9.33) it is clear that the variance of a stationary random process is constant, and therefore the standard deviation is constant: σ_x = √D_x = √R_x(0) = const.
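These time-average definitions can be sketched on one long realization of an ergodic process (a first-order autoregression with assumed parameters stands in for the random process):

```python
import numpy as np

# Sketch: for an ergodic process one long realization suffices; the
# time-average correlation at lag 0 equals the variance (cf. (9.33)).
rng = np.random.default_rng(3)
n, a = 200_000, 0.9                    # assumed length and AR coefficient
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):                  # x_t = a*x_{t-1} + e_t
    x[t] = a * x[t - 1] + e[t]

xc = x - x.mean()                      # centered realization
R0 = np.mean(xc * xc)                  # R(0), time average
R1 = np.mean(xc[:-1] * xc[1:])         # R(1), one-step lag
var = xc.var()                         # variance of the same realization
```

R(0) coincides with the variance of the realization, and R(1)/R(0) recovers the correlation coefficient a of the model.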

The statistical properties of the connection between two random processes can be characterized by the cross-correlation function which, for each pair of arbitrarily chosen values of the arguments, is equal to the mathematical expectation of the product of the corresponding centered sections of the two processes.

For ergodic random processes, instead of (9.35) we can write

R_xy(τ) = lim_{T→∞} (1/T) ∫₀^T x(t) y(t + τ) dt ,

where x(t) and y(t) are any realizations of the stationary random processes X(t) and Y(t) respectively.

The cross-correlation function characterizes the mutual statistical relationship of the two random processes at different instants of time, separated from each other by an interval τ. The value R_xy(0) characterizes this relationship at the same instant of time.

From (9.36) it follows that R_xy(τ) = R_yx(−τ).

If the random processes are statistically unrelated and have zero mean values, then their cross-correlation function is equal to zero for all τ. However, the converse conclusion, that if the cross-correlation function is zero the processes are independent, can be drawn only in particular cases (in particular, for processes with a normal distribution law); in general the converse does not hold.

Note that correlation functions can also be calculated for non-random (regular) time functions. In this case the correlation function of a regular function x(t) is understood simply as the result of formally applying to the regular function the operation expressed by the integral:

R_x(τ) = lim_{T→∞} (1/T) ∫₀^T x(t) x(t + τ) dt .

Let us present some basic properties of correlation functions.

1. The initial value of the correlation function [see (9.33)] is equal to the variance of the random process: R_x(0) = D_x.

2. The value of the correlation function at any τ cannot exceed its initial value, i.e., |R_x(τ)| ≤ R_x(0).

To prove this, consider the obvious inequality [x(t) ± x(t + τ)]² ≥ 0, from which it follows that

x²(t) + x²(t + τ) ≥ ∓ 2 x(t) x(t + τ) .

Taking the time averages of both sides of the last inequality, we obtain 2 R_x(0) ≥ 2 |R_x(τ)|.

Thus, we obtain the inequality R_x(0) ≥ |R_x(τ)|.

3. The correlation function is an even function of τ, i.e., R_x(τ) = R_x(−τ).

This follows from the very definition of the correlation function. Indeed, substituting t′ = t + τ gives

R_x(τ) = lim_{T→∞} (1/T) ∫₀^T x(t) x(t + τ) dt = lim_{T→∞} (1/T) ∫₀^T x(t′ − τ) x(t′) dt′ = R_x(−τ) ;

therefore, on the graph the correlation function is always symmetric about the ordinate axis.

4. The correlation function of the sum of random processes z(t) = x(t) + y(t) is determined by the expression

R_z(τ) = R_x(τ) + R_y(τ) + R_xy(τ) + R_yx(τ) ,

where R_xy(τ) and R_yx(τ) are the cross-correlation functions.

Indeed, averaging the product [x(t) + y(t)][x(t + τ) + y(t + τ)] yields exactly the four terms above.

5. The correlation function of a constant value x(t) = A₀ is equal to the square of this constant value, R_x(τ) = A₀² (Fig. 9.5, a), which follows from the very definition of the correlation function.

6. The correlation function of a periodic function, for example x(t) = A sin(ω₁ t + φ), is a cosine wave (Fig. 9.5, b):

R_x(τ) = (A² / 2) cos ω₁ τ ,

having the same frequency ω₁ as x(t) and independent of the phase shift φ.

To prove this, note that when finding the correlation functions of periodic functions one can use the equality

lim_{T→∞} (1/T) ∫₀^T x(t) x(t + τ) dt = (1/T₁) ∫₀^{T₁} x(t) x(t + τ) dt ,

where T₁ = 2π/ω₁ is the period of the function x(t).

The last equality is obtained by replacing the integral with limits from −T to T, as T → ∞, by the sum of individual integrals over whole periods and using the periodicity of the integrands.

Then, taking into account the above, we get R_x(τ) = (1/T₁) ∫₀^{T₁} A² sin(ω₁ t + φ) sin(ω₁ (t + τ) + φ) dt = (A²/2) cos ω₁ τ.
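Property 6 can be checked by direct time averaging over many whole periods (the amplitude, frequency and phases below are arbitrary assumptions):

```python
import numpy as np

# Sketch: the time-average correlation function of x(t) = A*sin(w1*t + phi)
# is (A**2/2)*cos(w1*tau), independent of the phase phi.
A, w1, dt = 2.0, 3.0, 0.001
T1 = 2.0 * np.pi / w1                     # period of the sinusoid
t = np.arange(0.0, 1000.0 * T1, dt)       # about 1000 whole periods

def R(tau, phi):
    x = A * np.sin(w1 * t + phi)
    x_shifted = A * np.sin(w1 * (t + tau) + phi)
    return np.mean(x * x_shifted)

tau = 0.4
R_a = R(tau, 0.0)
R_b = R(tau, 1.234)                       # a different phase shift
R_theory = (A * A / 2.0) * np.cos(w1 * tau)
```

The two estimates agree with (A²/2) cos ω₁τ regardless of the phase used.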

7. The correlation function of a time function expanded into a Fourier series,

x(t) = A₀ + Σ_k A_k sin(k ω₁ t + φ_k) ,

Fig. 9.5 (see scan)

based on the above, has the following form:

R_x(τ) = A₀² + Σ_k (A_k² / 2) cos k ω₁ τ .
8. A typical correlation function of a stationary random process has the form shown in Fig. 9.6. It can be approximated by the analytical expression R_x(τ) = D_x e^(−α|τ|).   (9.45)

As τ grows, the connection between the values x(t) and x(t + τ) weakens and the correlation function becomes smaller. Fig. 9.5, b, c show, for example, two correlation functions and the two corresponding realizations of a random process. It is easy to see that the correlation function corresponding to the random process with the finer structure decreases faster. In other words, the higher the frequencies present in a random process, the faster its correlation function decreases.

Sometimes one encounters correlation functions that can be approximated by the analytical expression

R_x(τ) = D_x e^(−α|τ|) cos βτ ,   (9.46)

where D_x is the variance; α is the attenuation parameter; β is the resonant frequency.

Correlation functions of this type occur, for example, in such random processes as atmospheric turbulence, radar signal fading, angular scintillation of a target, etc. Expressions (9.45) and (9.46) are often used to approximate correlation functions obtained by processing experimental data.

9. The correlation function of a stationary random process on which a periodic component is superimposed will also contain a periodic component of the same frequency.

This circumstance can be used as one of the ways to detect “hidden periodicity” in random processes, which may not be detected at first glance at individual records of the implementation of a random process.

An approximate form of the correlation function of a process containing a periodic component in addition to the random component is shown in Fig. 9.7, where the correlation function corresponding to the random component is also indicated. To identify the hidden periodic component (a problem that arises, for example, when detecting a small useful signal against a background of large noise), it is best to determine the correlation function for large values of τ, at which the random signal is already relatively weakly correlated and the random component has little effect on the form of the correlation function.
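This detection idea can be sketched numerically: a weak sinusoid buried in much stronger white noise is invisible in the raw record, but survives in the correlation function at large lags (all signal parameters below are assumptions for illustration):

```python
import numpy as np

# Sketch of detecting "hidden periodicity": sinusoid A = 0.5 buried in
# white noise with sigma = 2; at large lags the noise part of the
# correlation function has decorrelated and only A**2/2*cos(w*tau) remains.
rng = np.random.default_rng(4)
n, dt = 400_000, 0.01
t = np.arange(n) * dt
A, w = 0.5, 2.0 * np.pi                   # hidden sinusoid, period 1 s
x = A * np.sin(w * t) + rng.normal(0.0, 2.0, n)

xc = x - x.mean()

def R(k):                                 # correlation at a lag of k samples
    return np.mean(xc[:n - k] * xc[k:])

R_zero = R(0)                             # about sigma**2 + A**2/2
R_large = R(1000)                         # lag = 10 periods: noise is gone,
                                          # the periodic part remains
```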

Interference in communication systems is described by methods of the theory of random processes.

A function is called random if, as a result of an experiment, it takes one form or another, and it is not known in advance which one. A random process is a random function of time. The specific form that a random process takes as a result of an experiment is called a realization of the random process.

Fig. 1.19 shows a set of several (three) realizations of a random process: x₁(t), x₂(t), x₃(t). Such a set is called an ensemble of realizations. For a fixed instant of time t₁, in the first experiment we obtain the specific value x₁(t₁), in the second x₂(t₁), in the third x₃(t₁).

The random process is dual in nature. On the one hand, in each specific experiment it is represented by its implementation - a non-random function of time. On the other hand, a random process is described by a set of random variables.

Indeed, let us consider a random process at a fixed instant of time. Then in each experiment it takes one value, and it is not known in advance which one. A random process considered at a fixed instant of time is thus a random variable. If two instants of time t₁ and t₂ are fixed, then in each experiment we obtain two values, X(t₁) and X(t₂). Joint consideration of these values leads to a system of two random variables. When analyzing a random process at N instants of time we arrive at a set, or system, of N random variables X(t₁), …, X(t_N).

Mathematical expectation, variance and correlation function of a random process. Since a random process considered at a fixed instant of time is a random variable, we can speak of the mathematical expectation and variance of a random process:

m_X(t) = M[X(t)] ,  D_X(t) = M[(X(t) − m_X(t))²] .

Just as for a random variable, the variance characterizes the spread of the values of the random process about the mean value. The larger D_X(t), the greater the probability of very large positive and negative values of the process. A more convenient characteristic is the standard deviation σ_X(t) = √D_X(t), which has the same dimension as the random process itself.

If a random process describes, for example, the change of the distance to an object, then the mathematical expectation is the mean range in metres; the variance is measured in square metres, and the standard deviation is measured in metres and characterizes the spread of the possible values of range about the mean.

The mean and variance are very important characteristics that allow us to judge the behavior of a random process at a fixed instant of time. However, if it is necessary to estimate the "rate" of change of the process, observations at one instant are not enough. For this purpose two random variables, considered jointly, are used. Just as for random variables, a characteristic of the connection or dependence between X(t₁) and X(t₂) is introduced. For a random process this characteristic depends on two instants of time t₁ and t₂ and is called the correlation function: R_X(t₁, t₂) = M[X̊(t₁) X̊(t₂)].

Stationary random processes. Many processes in control systems proceed uniformly in time. Their basic characteristics do not change. Such processes are called stationary. The exact definition can be given as follows: a random process is called stationary if none of its probabilistic characteristics depends on a shift of the time origin. For a stationary random process the mathematical expectation, variance and standard deviation are constant: m_X(t) = m_X, D_X(t) = D_X, σ_X(t) = σ_X.

The correlation function of a stationary process does not depend on the time origin t, i.e., it depends only on the time difference τ = t₂ − t₁:

R_X(t₁, t₂) = R_X(τ) .

The correlation function of a stationary random process has the following properties:

1) R_X(0) = D_X ;  2) R_X(−τ) = R_X(τ) ;  3) |R_X(τ)| ≤ R_X(0) .

Often the correlation functions of processes in communication systems have the form shown in Fig. 1.20.

Fig. 1.20. Correlation functions of processes

The time interval over which the correlation function, i.e., the magnitude of the connection between the values of the random process, decreases by a factor of M is called the correlation interval, or correlation time, of the random process. Usually M = e or M = 10 is taken. We can say that values of a random process separated in time by more than the correlation interval are weakly related to each other.

Thus, knowledge of the correlation function allows one to judge the rate of change of a random process.

Another important characteristic is the energy spectrum of a random process. It is defined as the Fourier transform of the correlation function:

S(ω) = ∫_{−∞}^{∞} R(τ) e^(−jωτ) dτ .

Obviously, the inverse transformation is also valid:

R(τ) = (1/2π) ∫_{−∞}^{∞} S(ω) e^(jωτ) dω .

The energy spectrum shows the power distribution of a random process, such as interference, on the frequency axis.

When analyzing an automatic control system (ACS), it is very important to determine the characteristics of the random process at the output of a linear system from the known characteristics of the process at the input. Let the linear system be given by its impulse response w(t). Then the output signal at time t is determined by the Duhamel integral:

y(t) = ∫₀^t w(λ) x(t − λ) dλ ,

where x(t) is the process at the system input. To find the correlation function, we write the product y(t) y(t + τ) and, after multiplying out, find its mathematical expectation.
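A discrete sketch of this convolution picture, with an assumed first-order impulse response and white-noise input (for white noise the output variance reduces to σ² Σ w[k]², since the input samples are uncorrelated):

```python
import numpy as np

# Discrete sketch of the Duhamel integral: the system output is the
# convolution of the impulse response with the input.  First-order lag
# and all numbers are assumed for illustration.
rng = np.random.default_rng(5)
dt, tau_f = 0.01, 0.1                       # sample step, filter time constant
k = np.arange(200)
w = (dt / tau_f) * np.exp(-k * dt / tau_f)  # impulse response samples

sigma_x = 1.0
x = rng.normal(0.0, sigma_x, 100_000)       # white-noise input realization
y = np.convolve(x, w)[: len(x)]             # Duhamel sum (convolution)

var_theory = sigma_x**2 * np.sum(w**2)      # output variance for white input
var_est = y[500:].var()                     # skip the start-up transient
```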

The subject of correlation analysis is the study of probabilistic dependences between random variables.

Quantities are independent if the distribution law of each of them does not depend on the value taken by the other. Examples of such quantities are the endurance limit of the part material and the theoretical stress-concentration factor in the dangerous section of the part.

Quantities are related by a probabilistic (stochastic) dependence if a known value of one quantity corresponds not to a specific value of the other but to its distribution law. Probabilistic dependences occur when quantities depend not only on factors common to them but also on various random factors.

Complete information about the probabilistic connection of two random variables is given by the joint distribution density f(x, y) or by the conditional distribution densities f(x/y) and f(y/x), i.e., the distribution densities of the random variables X and Y for given specific values of y and x respectively.

The joint density and the conditional densities of the distributions are related by the following relations:

f(x, y) = f(x) f(y/x) = f(y) f(x/y) .

The main characteristics of probabilistic dependencies are the correlation moment and the correlation coefficient.

The correlation moment of two random variables X and Y is the mathematical expectation of the product of the centered random variables:

K_xy = M[(X − m_x)(Y − m_y)] ;

for discrete quantities

K_xy = Σᵢ Σⱼ (xᵢ − m_x)(yⱼ − m_y) p_ij ;

for continuous quantities

K_xy = ∫∫ (x − m_x)(y − m_y) f(x, y) dx dy ,

where m_x and m_y are the mathematical expectations of the values of X and Y; p_ij is the probability of the pair of values xᵢ, yⱼ.

The correlation moment characterizes simultaneously the connection between the random variables and their scattering. In its dimension it corresponds to the variance of a single random variable. To isolate the characteristic of the connection between the random variables, one passes to the correlation coefficient, which characterizes the closeness of the connection and can vary within −1 ≤ ρ ≤ 1:

ρ = K_xy / (S_x S_y) ,

where S_x and S_y are the standard deviations of the random variables.

The values ρ = 1 and ρ = −1 indicate a functional dependence; the value ρ = 0 shows that the random variables are uncorrelated.

Correlation is considered both between quantities and between events, as well as multiple correlation, which characterizes the relationship between many quantities and events.

In a more detailed analysis of a probabilistic connection, the conditional mathematical expectations m_y/x and m_x/y are determined, i.e., the mathematical expectations of the random variables Y and X for given specific values of x and y respectively.

The dependence of the conditional mathematical expectation m_y/x on x is called the regression of Y on X. The dependence of m_x/y on y corresponds to the regression of X on Y.

For normally distributed quantities Y and X the regression equations are:

for the regression of Y on X

m_y/x = m_y + ρ (S_y / S_x)(x − m_x) ;

for the regression of X on Y

m_x/y = m_x + ρ (S_x / S_y)(y − m_y) .

The most important area of application of correlation analysis to reliability problems is the processing and generalization of the results of operational observations. The results of observing the random variables Y and X are represented by the paired values y_i, x_i of the i-th observation, where i = 1, 2, …, n; n is the number of observations.

The estimate r of the correlation coefficient ρ is determined by the formula

r = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / ((n − 1) s_x s_y) ,

where x̄ and ȳ are estimates of the mathematical expectations m_x and m_y respectively, i.e., the averages of the n observed values:

x̄ = (1/n) Σᵢ xᵢ ,  ȳ = (1/n) Σᵢ yᵢ ;

s_x and s_y are estimates of the standard deviations S_x and S_y respectively:

s_x = √( Σᵢ (xᵢ − x̄)² / (n − 1) ) ,  s_y = √( Σᵢ (yᵢ − ȳ)² / (n − 1) ) .


Denoting the estimates of the conditional mathematical expectations m_y/x and m_x/y by the corresponding sample means, the empirical regression equations of Y on X and of X on Y are written in the same form as above, with ρ, m and S replaced by their estimates.

As a rule, only one of the regressions has practical value.

With a correlation coefficient r=1 the regression equations are identical.
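These estimation formulas can be sketched on synthetic paired observations (the data below assume a true slope of 1.5; all numbers are illustrative):

```python
import numpy as np

# Sketch of estimating r and the two empirical regression slopes from
# paired observations (x_i, y_i).
rng = np.random.default_rng(6)
n = 10_000
x = rng.normal(5.0, 2.0, n)
y = 1.5 * x + rng.normal(0.0, 1.0, n)

xb, yb = x.mean(), y.mean()                    # estimates of m_x, m_y
sx = np.sqrt(np.sum((x - xb) ** 2) / (n - 1))  # estimate of S_x
sy = np.sqrt(np.sum((y - yb) ** 2) / (n - 1))  # estimate of S_y
r = np.sum((x - xb) * (y - yb)) / ((n - 1) * sx * sy)

slope_yx = r * sy / sx   # regression of Y on X: y = yb + slope_yx*(x - xb)
slope_xy = r * sx / sy   # regression of X on Y: x = xb + slope_xy*(y - yb)
```

Note the identity slope_yx · slope_xy = r², so the two regression lines coincide exactly when |r| = 1.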

Question No. 63. Estimation of statistical parameters using confidence intervals

If the value of the parameter being estimated is given by a single number, the estimate is called a point estimate. But in most problems one needs not only to find the most reliable numerical value, but also to assess its degree of reliability.

One needs to know what error arises when the true parameter a is replaced by its point estimate ã, and with what degree of confidence one can expect that these errors will not exceed known, predetermined limits.

For this purpose, mathematical statistics uses so-called confidence intervals and confidence probabilities.

If an unbiased estimate ã of the parameter a has been obtained from experience, and the task is to evaluate the possible error, then it is necessary to assign some sufficiently high probability β (for example, β = 0.9; 0.95; 0.99, etc.) such that an event with probability β can be considered practically certain.

In this case, one can find a value ε for which P(|ã − a| < ε) = β.

Fig. 3.1.1. Confidence interval diagram.

In this case, the range of practically possible errors arising when a is replaced by ã will not exceed ±ε. Errors of larger absolute value will appear only with the low probability α = 1 − β. With probability β, the unknown parameter a falls within the interval I_β = (ã − ε; ã + ε). The probability β can be interpreted as the probability that the random interval I_β covers the point a (Fig. 3.1.1).

The probability β is usually called the confidence probability, and the interval I_β the confidence interval. Fig. 3.1.1 shows a symmetric confidence interval; in general this requirement is not mandatory.
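The coverage interpretation can be sketched by simulation for the mean of a normal variable with known σ (the quantile z = 1.96 for β = 0.95 and all other numbers are assumed for illustration):

```python
import numpy as np

# Coverage sketch for a symmetric confidence interval on the mean of a
# normal variable with known sigma: eps = z*sigma/sqrt(n), z = 1.96 for
# beta = 0.95.  All parameters are assumed.
rng = np.random.default_rng(7)
a, sigma, n_obs, z = 10.0, 2.0, 25, 1.96
eps = z * sigma / np.sqrt(n_obs)               # half-width of the interval

trials, covered = 20_000, 0
for _ in range(trials):
    est = rng.normal(a, sigma, n_obs).mean()   # point estimate of a
    if abs(est - a) < eps:                     # I_beta = (est-eps, est+eps)
        covered += 1                           # ...covers the point a
coverage = covered / trials                    # should be close to beta
```

The fraction of random intervals I_β that cover the true a comes out close to the chosen confidence probability β.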

The confidence interval for the parameter a can be regarded as the interval of values of a that are consistent with the experimental data and do not contradict them.
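A minimal numerical sketch of this construction (assuming an approximately normal estimate of the mean; the sample values are hypothetical):

```python
import math
import statistics

def confidence_interval(sample, z=1.96):
    """Symmetric confidence interval for the mean:
    P(|mean_est - a| < eps) = beta, with z = 1.96 for beta ~ 0.95."""
    m = statistics.mean(sample)
    eps = z * statistics.stdev(sample) / math.sqrt(len(sample))
    return m - eps, m + eps

# hypothetical repeated measurements of the same quantity
lo, hi = confidence_interval([9.8, 10.1, 10.0, 9.9, 10.2, 10.0])
```

Raising β (taking a larger z) widens the interval: higher confidence is paid for by a coarser localization of a.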

By choosing a confidence probability β close to one, we want to be confident that an event with such a probability will occur whenever a certain set of conditions is met.

This is equivalent to neglecting the opposite event, whose probability equals α = 1 − β. Note that setting the boundary for negligible probabilities is not a mathematical problem: it lies outside probability theory and is determined in each field by the degree of responsibility and the nature of the problems being solved.

At the same time, setting too large a safety margin leads to an unjustified increase in construction costs.


Question No. 65 Stationary random process.

A stationary random function is a random function all of whose probabilistic characteristics do not depend on the argument. Stationary random functions describe stationary processes of machine operation; non-stationary functions describe non-stationary processes, in particular transient ones: start-up, shutdown, mode change. The argument is time.

Stationarity conditions for random functions:

1. constancy of the mathematical expectation;

2. constancy of the variance;

3. the correlation function depends only on the difference between the arguments, not on their values.

Examples of stationary random processes include: oscillations of an aircraft in steady-state horizontal flight; random noise in the radio, etc.

Each stationary process can be considered as continuing indefinitely in time; in a study, any moment can be chosen as the time origin. Studying a stationary random process over any interval of time should yield the same characteristics.

The correlation function of stationary random processes is an even function.

Spectral analysis, i.e. representation in the form of harmonic spectra or Fourier series, is effective for stationary random processes. In addition, the spectral density of a random function is introduced, which characterizes the distribution of the variance over the spectral frequencies.

Variance:

D_x = K_x(0) = ∫ S_x(ω) dω

Correlation function:

K_x(τ) = ∫ S_x(ω)·e^(jωτ) dω

Spectral density:

S_x(ω) = (1/2π)·∫ K_x(τ)·e^(−jωτ) dτ

(the integrals are taken over the entire axis; the correlation function and the spectral density form a Fourier-transform pair by the Wiener–Khinchin theorem).

Stationary processes can be ergodic or non-ergodic. A process is ergodic if the time average of the stationary random function over a sufficiently long interval is approximately equal to the average over individual realizations. For ergodic processes, the characteristics are determined as time averages.
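A sketch of these ideas on a synthetic realization (the first-order autoregressive model below is chosen purely for illustration and is not from the text): for an ergodic stationary process, the mean, variance, and correlation function are estimated as time averages over one long realization.

```python
import random

random.seed(1)

# One long realization of a stationary process: x[n] = 0.9*x[n-1] + w[n],
# where w[n] is white Gaussian noise (illustrative model)
N = 20000
x = [0.0]
for _ in range(N - 1):
    x.append(0.9 * x[-1] + random.gauss(0.0, 1.0))

m = sum(x) / N  # time-average estimate of the mathematical expectation

def K(tau):
    """Time-average estimate of the correlation function K_x(tau)."""
    return sum((x[n] - m) * (x[n + tau] - m) for n in range(N - tau)) / (N - tau)

D = K(0)  # the variance is the correlation function at tau = 0
```

For a stationary process K_x(τ) is even and attains its maximum, the variance D, at τ = 0; the estimates above reproduce this on the simulated data.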

Question No. 66 Reliability indicators of technical objects: single, complex, calculated, experimental, operational, extrapolated.

Reliability indicator is a quantitative characteristic of one or more properties that make up the reliability of an object.

A single reliability indicator is a reliability indicator that characterizes one of the properties that make up the reliability of an object.

A complex reliability indicator is a reliability indicator that characterizes several properties that make up the reliability of an object.

A calculated reliability indicator is a reliability indicator whose values are determined by calculation.

An experimental reliability indicator is a reliability indicator whose point or interval estimate is determined from test data.

An operational reliability indicator is a reliability indicator whose point or interval estimate is determined from operational data.

An extrapolated reliability indicator is a reliability indicator whose point or interval estimate is determined from the results of calculations, tests and (or) operational data by extrapolation to a different operating duration and different operating conditions.



Question No. 68 Indicators of durability of technical objects and cars.

A gamma-percentage resource is the total operating time during which the object does not reach the limit state with probability γ, expressed as a percentage.

Average resource is the mathematical expectation of the resource.

A gamma-percentage service life is the calendar duration of operation during which the object does not reach the limit state with probability γ, expressed as a percentage.

Average service life is the mathematical expectation of service life.

Note. When using durability indicators, the starting point and the type of action after the onset of the limit state should be indicated (for example, the gamma-percentage service life from the second major overhaul to write-off). Durability indicators counted from the commissioning of the object to its final withdrawal from operation are called the gamma-percentage full resource (full service life) and the average full resource (full service life).


Question No. 71 Tasks and methods for predicting car reliability

There are three stages of forecasting: retrospection, diagnosis, and prognosis. At the first stage, the dynamics of changes in machine parameters in the past are established; at the second, the technical condition of the elements in the present is determined; at the third, changes in the state parameters of the elements in the future are predicted.

The main tasks of predicting the reliability of cars can be formulated as follows:

a) Predicting patterns of changes in vehicle reliability in connection with prospects for production development, the introduction of new materials, and increasing the strength of parts.

b) Assessing the reliability of designed vehicles before they are manufactured. This task arises at the design stage.

c) Predicting the reliability of a specific vehicle (or its component or assembly) based on the results of changes in its parameters.

d) Predicting the reliability of a certain set of cars based on the results of a study of a limited number of prototypes. These types of problems have to be faced at the production stage.

e) Predicting the reliability of vehicles under unusual operating conditions (for example, ambient temperature and humidity higher than permissible, difficult road conditions, and so on).

Methods for predicting car reliability are selected taking into account forecasting tasks, the quantity and quality of initial information, and the nature of the real process of changing the reliability indicator (predicted parameter).

Modern forecasting methods can be divided into three main groups: a) methods of expert assessment; b) modeling methods, including physical, physico-mathematical, and information models; c) statistical methods.

Forecasting methods based on expert assessments consist in generalizing, statistically processing, and analyzing the opinions of specialists regarding the prospects for development in a given area.

Modeling methods are based on the basic principles of similarity theory. Based on the similarity of the indicators of modification A, the reliability level of which was studied earlier, and some properties of modification B of the same car or its component, the reliability indicators of B are predicted for a certain period of time.

Statistical forecasting methods are based on extrapolation and interpolation of the predicted reliability parameters obtained in preliminary studies. The method rests on the patterns of change of vehicle reliability parameters over time.
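A sketch of the statistical approach in its simplest form, linear trend extrapolation (the inspection data below are hypothetical):

```python
def extrapolate_linear(t_obs, y_obs, t_future):
    """Fit a linear trend to observed parameter values and project it forward."""
    n = len(t_obs)
    mt = sum(t_obs) / n
    my = sum(y_obs) / n
    b = sum((t - mt) * (y - my) for t, y in zip(t_obs, y_obs)) / \
        sum((t - mt) ** 2 for t in t_obs)
    a = my - b * mt
    return a + b * t_future

# wear of a part (mm) measured at inspections (thousand km), extrapolated to 60
wear_60 = extrapolate_linear([10, 20, 30, 40], [0.11, 0.19, 0.31, 0.39], 60)
```

Interpolation uses the same fitted trend for an intermediate point; real applications would also check that the linear model actually fits the degradation process.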

Question No. 74 Mathematical forecasting methods. Construction of mathematical reliability models.

When predicting transmission reliability, the following models can be used: 1) the "weakest link" model; 2) dependent resources of the part elements; 3) independent resources of the part elements. The resource of the i-th element is determined from the relation:

x_i = R_i / r_i,

where R_i is the quantitative value of the criterion of the i-th element at which its failure occurs;

r_i is the average increment of the quantitative criterion of the i-th element per unit of resource.

The values of R_i and r_i can be random with certain distribution laws, or constant.

For the variant in which the R_i are constant while the r_i are variable and functionally connected with the same random variable, consider the situation in which a linear functional connection holds between the values of r_i; this leads to the "weakest link" model. In this case, the reliability of the system corresponds to the reliability of its "weakest" link.

The dependent-resources model is realized under loading according to the scheme in which there is a spread of operating conditions for mass-produced machines, or uncertainty in the operating conditions of unique machines. The independent-resources model occurs under loading according to a scheme with specific operating conditions.

An expression for calculating the reliability of a system with independent element resources:
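A minimal sketch of that expression under the usual series-system assumption (the element values are hypothetical): with independent element resources, the probability of failure-free operation of the system is the product of the element probabilities.

```python
def system_pfo(element_pfos):
    """Series system, independent elements:
    system probability of failure-free operation = product over elements."""
    p = 1.0
    for pi in element_pfos:
        p *= pi
    return p

P_sys = system_pfo([0.99, 0.98, 0.95])  # three elements, hypothetical values
```

The system is always less reliable than its least reliable element, which is why the product shrinks as elements are added in series.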

Question No. 79 Schematic loading of the system, parts and elements (using the example of a transmission).

By transmission we mean the drive of the vehicle as a whole or a separate, rather complex part of it which for one reason or another needs to be considered in isolation. The load on the transmission is determined by force and speed components. The force component is characterized by the torque, and the speed component by the angular velocity of rotation, which determines the number of loading cycles of transmission parts or the sliding speed of the contact surfaces.

Depending on the type of part, the schematization of torque used to obtain the part load may differ. For example, the load on gears and bearings is determined by the current value of the torque, while for shafts in torsion it is determined by the magnitude of its amplitude.

Based on operating conditions, the transmission load can be presented in the form of the following diagrams.

1. Each mode corresponds to a one-dimensional distribution curve.

2. For each mode there are n one-dimensional distribution curves (n is the number of machine operating conditions). The probability of operation under each of the conditions is specified.

3. For each mode there is one bivariate distribution of the current and average torque values.

Scheme 1 can be used for mass-produced machines under exactly the same operating conditions or for a unique machine under specific operating conditions.

Scheme 2 is not qualitatively different from Scheme 1, however, in some cases, for the calculation it is advisable that each operating condition correspond to a load curve.

Scheme 3 can characterize the load on the transmission of a unique machine, the specific operating conditions of which are unknown, but the range of conditions is known.

Question No. 82 Systematic approach to predicting the life of parts

The car should be considered as a complex system formed, from the reliability point of view, of its sequentially connected units, parts, and elements.

Element resource:

T_i = R_i / r_i,

where R_i is the quantitative value of the limit-state criterion of the i-th element at which its failure occurs;

r_i is the average increment of the quantitative assessment of the limit-state criterion of the i-th element per unit of resource.

R_i and r_i can be random or constant, and the following options are possible:

1. R_i random, r_i random;

2. R_i random, r_i constant;

3. R_i constant, r_i random;

4. R_i constant, r_i constant.

For the first three options, the R_i are considered independent random variables.

1. a) The r_i are independent.

The reliability of the system is taken as the product of the probabilities of failure-free operation of the elements.

b) The r_i are random and probabilistically related:

f(r_i | r_j) = f(r_i, r_j) / f(r_j);

f(r_j | r_i) = f(r_i, r_j) / f(r_i).

If r_i and r_j depend on each other, then the resources also depend on each other, and the dependent-element-resources model is used for the calculation. Since the relationship is probabilistic, the method of conditional distribution functions is used.

c) The r_i are random and functionally related.

In this case the random quantities depend on each other, and so do the resources; owing to the functional dependence, the connection is stronger than in the other cases.

2. Model of independent element resources.

The probability of failure-free operation of the system is equal to the product of the probabilities of failure-free operation of all elements.

3. The same cases as in 1 are possible, except that in cases b) and c) the resource dependence is increased owing to the constancy of R_i. In case c), where the r_i are functionally connected, a situation is possible in which the "weakest link" model applies.

R_1, R_2 – constants;

r_1, r_2 – random;

r_1 = 1.5·r_2;

R_1 = T_1·r_1;

R_2 = T_2·r_2.

If, for any specific values of r_1 and r_2, the same resource ratio T_1 > T_2 holds, then element 2 is the "weakest" link, i.e. it determines the reliability of the system.

Application of the weakest link model:

If the system contains an element whose criterion R is significantly smaller than that of all other elements, while all elements are loaded approximately equally;

If the criterion R is approximately the same for all elements, while the loading of one element is significantly higher than that of all the others.
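A sketch of the resource relation T_i = R_i / r_i and the weakest-link selection (the criterion values and increments below are hypothetical):

```python
def element_resource(R, r):
    """Element resource: limit-state criterion R over its average increment r."""
    return R / r

# hypothetical elements: name -> (R_i, r_i)
elements = {"gear": (1.0e6, 250.0), "shaft": (2.0e6, 300.0), "bearing": (8.0e5, 400.0)}
resources = {name: element_resource(R, r) for name, (R, r) in elements.items()}

# the "weakest" link has the smallest resource and determines system reliability
weakest = min(resources, key=resources.get)
```

Note that the weakest link is not necessarily the element with the smallest criterion R: a moderate R combined with a fast criterion increment r can exhaust the resource first.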

Question No. 83 Determination of the service life of parts (shafts, gears, or bearings of transmission units) based on experimental load conditions.

Determination of the life of rolling bearings.

To determine the durability of the rolling bearings of transmission units and the chassis, several types of calculation must be performed: for static strength, for contact fatigue, and for wear.

Failure Model:

where f(R) is the resource distribution density;

, – the density and the distribution function of the resource for the i-th type of destructive process;

n – number of calculation types.

The most widely used is the calculation of rolling bearings for contact fatigue:

R = a_p · C_d^(m_ρ) · N_0,50 · [β·…]^(−1),

where C_d is the dynamic load capacity;

N_0,50 is the number of cycles of the fatigue curve corresponding to a 50% probability of non-destruction of the bearing under the load C_d;

m_ρ is the exponent of the fatigue curve (3 for ball bearings, 3.33 for roller bearings);

– the frequency of bearing loading when moving in the k-th gear;

– the distribution density of the reduced load when driving in the k-th gear under the i-th operating conditions.

Main features of the calculation.

1. Since the bearing fatigue curve uses C_d (corresponding to a 90% probability of non-destruction at 10^6 cycles) instead of the endurance limit, it is necessary to pass to the fatigue curve corresponding to 50% non-destruction. Considering that the distribution of the bearing load capacity C_d obeys the Weibull law, N_0,50 = 4.7·10^6 cycles.

2. Integration in the formula is carried out from zero, and the parameters of the fatigue curve (m_ρ, N_0,50 and C_d) are not adjusted. Therefore, under the condition … = const, rearranging the operations of summation and integration does not affect the value of R; consequently, calculations for the generalized load mode and for the individual load modes are identical. If the values differ significantly, the average resource R_ik is calculated separately for each gear:

R_ik = a_p · C_d^(m_ρ) · N_0,50 · [β·…]^(−1),

from which the overall resource can be written:

R = […]^(−1).

P = (K_Fr · K_v · F_r + K_Fa · F_a) · K_b · K_T · K_m,

where F_r, F_a are the radial and axial loads;

K_v is the rotation coefficient;

K_b is the safety coefficient;

K_T is the temperature coefficient;

K_m is the material coefficient;

K_Fr, K_Fa are the radial and axial load coefficients.

4. The relationship between the torque M on the shaft and the reduced load on the bearing:

P = K_P · M = (K_Fr · K_v · K_R + K_Fa · K_A) · K_b · K_T · K_m · M,

where K_P is the conversion coefficient;

K_R, K_A are the coefficients for converting the torque into the total radial and axial loads on the bearing.

The loading frequency of the bearing corresponds to the frequency of its rotation.

n_k = 1000·v·U_Σα / (2π·r_ω),

where U_Σα is the total gear ratio of the transmission from the shaft to the driving wheels of the vehicle when the k-th gear is engaged, v is the vehicle speed, and r_ω is the rolling radius of the driving wheels.

5. The distribution density of the bearing resource and its parameters are calculated by the method of statistical modeling.
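A sketch of the reduced-load formula P = (K_Fr·K_v·F_r + K_Fa·F_a)·K_b·K_T·K_m with hypothetical loads and coefficient values (all numbers below are assumptions for illustration):

```python
def reduced_bearing_load(F_r, F_a, K_Fr, K_Fa, K_v=1.0, K_b=1.0, K_T=1.0, K_m=1.0):
    """Reduced (equivalent) load on a rolling bearing:
    P = (K_Fr*K_v*F_r + K_Fa*F_a) * K_b * K_T * K_m."""
    return (K_Fr * K_v * F_r + K_Fa * F_a) * K_b * K_T * K_m

# hypothetical radial/axial loads in N and load coefficients
P = reduced_bearing_load(F_r=5000.0, F_a=2000.0, K_Fr=1.0, K_Fa=0.6)
```

The reduced load P is the single scalar that enters the fatigue-curve calculation in place of the separate radial and axial components.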

Question No. 12 Specific material consumption of cars.

When determining the material consumption of a vehicle, the curb weight of the chassis is used. The expediency of using the chassis weight when assessing material consumption is explained by the widespread production of specialized vehicles with bodies of various types or other superstructures of different weights installed on the chassis of the same base vehicle. That is why company brochures and catalogs for foreign trucks, as a rule, give the curb weight of the chassis, not of the vehicle. At the same time, many foreign companies do not include the weight of tools and additional equipment in the curb chassis weight, and the degree of fuel filling is specified differently in different standards.

For an objective assessment of the material consumption of cars of different models, they must be brought to a single configuration. In this case, the chassis load capacity is determined as the difference between the gross design weight of the vehicle and the curb weight of the chassis.

The main indicator of the material consumption of a vehicle is the specific weight of the chassis:

m_ud = (m_sn.shas − m_z.sn) / [(m_k.a − m_sn.shas) · P],

where m_sn.shas is the curb weight of the chassis;

m_z.sn is the weight of fuel, fluids and tools;

m_k.a is the gross design weight of the vehicle;

P is the assigned resource before major overhaul.

For a towing vehicle, the gross weight of the road train is taken into account:

m_ud = (m_sn.shas − m_z.sn) / [(m_k.a − m_sn.shas) · K · P],

where K is the coefficient correcting the indicators for truck tractors intended for operation as part of a road train:

K = m_a / m_k.a,

where m_a is the gross weight of the road train.
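A sketch of the specific material consumption calculation (all masses and the resource below are hypothetical values chosen for illustration):

```python
def specific_material_consumption(m_chassis, m_fill, m_total, resource, K=1.0):
    """Specific weight of the chassis:
    m_ud = (m_sn.shas - m_z.sn) / ((m_k.a - m_sn.shas) * K * P)."""
    return (m_chassis - m_fill) / ((m_total - m_chassis) * K * resource)

m_ud = specific_material_consumption(
    m_chassis=6500.0,    # curb weight of the chassis, kg
    m_fill=300.0,        # weight of fuel, fluids and tools, kg
    m_total=20000.0,     # gross design weight of the vehicle, kg
    resource=300000.0,   # assigned resource before major overhaul, km
)
```

For a truck tractor, passing K = m_a / m_k.a (gross road-train weight over vehicle gross weight) applies the correction from the second formula.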

