
Implementation of some methods for modifying histograms in Matlab

As has been noted more than once, one of the most important characteristics of an image is the histogram of the brightness distribution of its elements. We have already briefly reviewed the theoretical foundations of histogram modification, so in this work we pay more attention to the practical aspects of implementing several histogram transformation methods in the Matlab system. Note that histogram modification is one of the methods for improving the visual quality of images.

Step 1: Reading the original image.

We read the original image from the file into the Matlab workspace and display it on the monitor screen.

L=imread("lena.bmp");

figure, imshow(L);

Since the original image under study is grayscale, we will work with only one component of the multidimensional array.

Fig. 1. Original image.

Since the work considers histogram transformation methods, we will also construct a histogram of the original image.

Fig. 2. Histogram of the original image.
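For readers without Matlab, the histogram step can be sketched in Python/NumPy (a rough equivalent of imhist; the image below is a synthetic stand-in, since lena.bmp itself is not available here):

```python
import numpy as np

# Synthetic 8-bit grayscale "image" standing in for lena.bmp
# (an assumption: the real file is not available here).
img = np.arange(64, dtype=np.uint8).reshape(8, 8) * 4

# 256-bin intensity histogram, analogous to Matlab's imhist(L)
H, _ = np.histogram(img, bins=256, range=(0, 256))

print(H.sum())  # total count equals the number of pixels, 64
```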

Step 2: Uniform histogram transformation.

Uniform transformation of the histogram is carried out according to the formula

g = (g_max − g_min)·P(f) + g_min,   (1)

where g_min and g_max are the minimum and maximum values of the elements of the intensity array of the original image, and P(f) is the probability distribution function of the original image, approximated by the distribution histogram. In other words, P(f) is the cumulative histogram of the image.

In Matlab, this can be implemented as follows. Calculate the cumulative histogram of the original image

CH=cumsum(H)./(N*M);

Here H is the vector of histogram values of the original image, and N and M are the dimensions of this image, determined using the size function. The transformation is then applied element by element:

L1(i,j)=CH(ceil(255*L(i,j)+eps));

figure, imshow(L1);

The eps value is used together with the ceil function to avoid zero indices into the cumulative histogram (Matlab arrays are 1-based). The result of applying the uniform histogram transformation method is presented in Fig. 3.
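The same transformation can be sketched in Python/NumPy (a sketch, not the author's code; zero-based indexing removes the need for the ceil/eps trick, and a tiny synthetic image is used for illustration):

```python
import numpy as np

def uniform_histogram_transform(img):
    """Histogram equalization via the cumulative histogram, as in formula (1)
    with g_min = 0 and g_max = 1."""
    H, _ = np.histogram(img, bins=256, range=(0, 256))
    CH = np.cumsum(H) / img.size          # cumulative histogram, CH[255] == 1
    return CH[img.astype(int)]            # g = (g_max - g_min) * P(f) + g_min

img = np.array([[0, 0, 128], [128, 255, 255]], dtype=np.uint8)
out = uniform_histogram_transform(img)
print(out)
```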

Fig. 3. The original image processed by the uniform histogram transformation method.

The histogram of the image transformed according to formula (1) is shown in Fig. 4. It indeed occupies almost the entire dynamic range and is close to uniform.

Fig. 4. Histogram of the image shown in Fig. 3.

The uniform redistribution of the intensity levels of the image elements is also evidenced by its cumulative histogram (Fig. 5).

Fig. 5. Cumulative histogram of the image shown in Fig. 3.

Step 3: Exponential histogram transformation.

The exponential transformation of the histogram is carried out according to the formula

g = g_min − (1/α)·ln(1 − P(f)),   (2)

where α is a constant characterizing the steepness of the exponential transformation.

In Matlab, transformations according to formula (2) can be implemented as follows.

L2(i,j)=-(1/alfa1)*log10(1-CH(ceil(255*L(i,j)+eps)));

figure, imshow(L2);
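A Python/NumPy sketch of the same exponential transformation (the natural log is used here; the log10 in the Matlab snippet only rescales the constant α, whose value below is an arbitrary assumption):

```python
import numpy as np

def exponential_histogram_transform(img, alpha=2.0):
    """g = -(1/alpha) * ln(1 - P(f)), formula (2); alpha sets the steepness.
    1 - P(f) is clipped away from zero so the top intensity avoids log(0)."""
    H, _ = np.histogram(img, bins=256, range=(0, 256))
    CH = np.cumsum(H) / img.size
    p = np.clip(1.0 - CH[img.astype(int)], 1e-12, None)
    return -np.log(p) / alpha

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = exponential_histogram_transform(img)
print(out)
```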

Fig. 6. The original image after processing by the exponential histogram transformation method.

The histogram of the image processed by the exponential transformation method is shown in Fig. 7.

Fig. 7. Histogram of the image processed by the exponential transformation method.

The exponential nature of the transformations is most clearly manifested in the cumulative histogram of the processed image, which is presented in Fig. 8.

Fig. 8. Cumulative histogram of the image processed by the exponential transformation method.

Step 4: Transform the histogram using Rayleigh's law.

The histogram transformation according to Rayleigh's law is carried out according to the expression

g = g_min + sqrt(2α²·ln(1/(1 − P(f)))),   (3)

where α is a constant characterizing the intensity distribution histogram of the resulting image.

Let us present the implementation of these transformations in the Matlab environment.

L3(i,j)=sqrt(2*alfa2^2*log10(1/(1-CH(ceil(255*L(i,j)+eps)))));

figure, imshow(L3);
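A corresponding Python/NumPy sketch (α is an arbitrary assumed value, playing the role of alfa2 above):

```python
import numpy as np

def rayleigh_histogram_transform(img, alpha=0.4):
    """g = sqrt(2 * alpha**2 * ln(1 / (1 - P(f)))), formula (3)."""
    H, _ = np.histogram(img, bins=256, range=(0, 256))
    CH = np.cumsum(H) / img.size
    p = np.clip(1.0 - CH[img.astype(int)], 1e-12, None)  # avoid division by 0
    return np.sqrt(2.0 * alpha**2 * np.log(1.0 / p))

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = rayleigh_histogram_transform(img)
print(out)
```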

Fig. 9. The original image processed by the histogram transformation method according to Rayleigh's law.

The histogram of the image processed by the Rayleigh law transformation method is shown in Fig. 10.

Fig. 10. Histogram of the image processed by the Rayleigh law transformation method.

The cumulative histogram of the image processed by the Rayleigh law transformation method is shown in Fig. 11.

Fig. 11. Cumulative histogram of the image processed by the Rayleigh law transformation method.

Step 5: Transform the histogram using the power law.

The transformation of the image histogram according to the power law is implemented according to the expression

g = [P(f)]^p,   (4)

where p is the exponent of the transformation; p = 2/3 is used below.

In Matlab, this method can be implemented as follows.

L4(i,j)=(CH(ceil(255*L(i,j)+eps)))^(2/3);

figure, imshow(L4);
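A Python/NumPy sketch of the power-law transformation (same exponent 2/3 as in the Matlab snippet above):

```python
import numpy as np

def power_law_histogram_transform(img, p=2.0 / 3.0):
    """g = P(f)**p, formula (4); p = 2/3 as in the Matlab snippet above."""
    H, _ = np.histogram(img, bins=256, range=(0, 256))
    CH = np.cumsum(H) / img.size
    return CH[img.astype(int)] ** p

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = power_law_histogram_transform(img)
print(out)
```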

Fig. 12. The original image processed by the power-law histogram transformation method.

The histogram of the distribution of intensities of elements of the processed image is shown in Fig. 13.

Fig. 13. Histogram of the image processed by the power-law histogram transformation method.

The cumulative histogram of the processed image, which most clearly demonstrates the nature of the transmission of gray levels, is presented in Fig. 14.

Fig. 14. Cumulative histogram of the image processed by the power-law transformation method.

Step 6: Hyperbolic histogram transformation.

The hyperbolic transformation of the histogram is implemented according to the formula

g = A^(P(f)),   (5)

where A is the constant with respect to which the hyperbolic transformation is carried out. In fact, the parameter A is equal to the minimum intensity value of the image elements.

In Matlab environment this method can be implemented as follows

L5(i,j)=.01^(CH(ceil(255*L(i,j)+eps))); % in this case A = 0.01

figure, imshow(L5);
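A Python/NumPy sketch of the hyperbolic transformation (same A = 0.01 as in the Matlab snippet above):

```python
import numpy as np

def hyperbolic_histogram_transform(img, A=0.01):
    """g = A**P(f), formula (5); A = 0.01 as in the Matlab snippet above."""
    H, _ = np.histogram(img, bins=256, range=(0, 256))
    CH = np.cumsum(H) / img.size
    return A ** CH[img.astype(int)]

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
out = hyperbolic_histogram_transform(img)
print(out)
```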

Fig. 15. The original image processed by the hyperbolic transformation method.

The histogram of the distribution of intensities of the elements of the image processed in this way is shown in Fig. 16.

Fig. 16. Histogram of the image processed by the hyperbolic transformation method.

The cumulative histogram, the shape of which corresponds to the nature of the transformations being carried out, is presented in Fig. 17.

Fig. 17. Cumulative histogram of the image processed by the hyperbolic transformation method.

In this work, some methods for modifying histograms were considered. The result of applying each method is that the histogram of the distribution of brightness of the elements of the processed image takes a certain shape. This kind of transformation can be used to eliminate distortions in the transmission of quantization levels to which images were subjected at the stage of formation, transmission or data processing.

Note also that the considered methods can be implemented not only globally but also in a sliding-window mode. This complicates the calculations, since the histogram must be analyzed in each local area; on the other hand, such local transformations, in contrast to the global implementation, make it possible to enhance the detail of local areas.

MINISTRY OF EDUCATION AND SCIENCE OF RUSSIA

Federal State Budgetary Educational Institution

of Higher Professional Education

"Chuvash State University named after I.N. Ulyanov"

Faculty of Design and Computer Technologies

Department of Computer Technologies

in the discipline "Reliability, Ergonomics and Quality of Automated Control Systems"

on the topic "Basic mathematical models used in reliability theory"

Completed:

student gr. ZDIKT-25-08

Lyusenkov I.V.

Checked:

Grigoriev V.G.

Cheboksary

Contents

    Basic mathematical models used in reliability theory

    Weibull distribution

    Exponential distribution

    Rayleigh distribution

    Normal distribution (Gaussian distribution)

    Determination of the distribution law

    Selection of the number of reliability indicators

    Accuracy and reliability of statistical estimation of reliability indicators

    Features of reliability programs

    Literature

Basic mathematical models used in reliability theory

In the mathematical relationships above, the concepts of probability density and distribution law were often used.

A distribution law is a relationship, established in a certain way, between the possible values of a random variable and their corresponding probabilities.

The distribution (probability) density is a widely used way of describing the distribution law.

Weibull distribution

The Weibull distribution is a two-parameter distribution. According to this distribution, the probability density of the moment of failure is

f(t) = λ·δ·t^(δ−1)·exp(−λ·t^δ),   (1)

where δ is the shape parameter (determined by fitting as a result of processing experimental data, δ > 0) and λ is the scale parameter.

The graph of the probability density function largely depends on the value of the shape coefficient.

The failure rate is determined by the expression

λ(t) = λ·δ·t^(δ−1),   (2)

and the probability of failure-free operation by

P(t) = exp(−λ·t^δ).   (3)

Note that with the parameter δ = 1 the Weibull distribution becomes exponential, and with δ = 2 it becomes the Rayleigh distribution.

For δ < 1 the failure rate decreases monotonically (the burn-in period), and for δ > 1 it increases monotonically (the wear-out period). Consequently, by selecting the parameter δ it is possible to obtain, in each of the three sections of the lifetime curve, a theoretical curve λ(t) that agrees fairly closely with the experimental one, and the required reliability indicators can then be calculated from the known law.
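These special cases can be checked numerically; the sketch below (with arbitrary illustrative parameter values) confirms that δ = 1 gives a constant failure rate and δ = 2 a linearly growing one:

```python
import math

def weibull_failure_rate(t, lam, delta):
    """lambda(t) = lam * delta * t**(delta - 1), formula (2)."""
    return lam * delta * t ** (delta - 1)

def weibull_reliability(t, lam, delta):
    """P(t) = exp(-lam * t**delta), formula (3)."""
    return math.exp(-lam * t ** delta)

# delta = 1: the failure rate is constant (exponential special case)
print(weibull_failure_rate(0.5, 0.2, 1.0), weibull_failure_rate(5.0, 0.2, 1.0))
# delta = 2: the failure rate grows linearly with t (Rayleigh special case)
print(weibull_failure_rate(1.0, 0.2, 2.0), weibull_failure_rate(2.0, 0.2, 2.0))
```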

Exponential distribution

As noted, the exponential distribution of the probability of failure-free operation is a special case of the Weibull distribution when the shape parameter δ = 1. This distribution is one-parameter, that is, to write the calculated expression, one parameter λ = const is sufficient. For this law, the opposite statement is also true: if the failure rate is constant, then the probability of failure-free operation as a function of time obeys the exponential law:

P(t) = exp(−λt).   (4)

The mean time to failure under the exponential distribution of the failure-free interval is expressed by the formula

T1 = 1/λ.   (5)

Thus, knowing the mean time to failure T1 (or the constant failure rate λ), in the case of the exponential distribution one can find the probability of failure-free operation for the time interval from the moment the object is switched on to any given moment t.
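A sketch with hypothetical numbers (T1 = 1000 h is an assumption for illustration only):

```python
import math

def reliability_exponential(t, mean_time_to_failure):
    """P(t) = exp(-t / T1), formulas (4)-(5), with lambda = 1 / T1."""
    lam = 1.0 / mean_time_to_failure
    return math.exp(-lam * t)

# Hypothetical numbers: T1 = 1000 h; probability of no failure within 100 h
p = reliability_exponential(100.0, 1000.0)
print(p)
```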

Rayleigh distribution

The probability density in Rayleigh's law has the form

f(t) = (t/δ*²)·exp(−t²/(2δ*²)),   (6)

where δ* is the Rayleigh distribution parameter.

The failure rate is:

λ(t) = t/δ*².   (7)

A characteristic feature of the Rayleigh distribution is that the graph of λ(t) is a straight line starting from the origin.

The probability of failure-free operation of the object in this case is determined by the expression

P(t) = exp(−t²/(2δ*²)).   (8)

Normal distribution (Gaussian distribution)

The normal distribution law is characterized by a probability density of the form

f(t) = (1/(σ_x·√(2π)))·exp(−(t − m_x)²/(2σ_x²)),   (9)

where m_x and σ_x are, respectively, the mathematical expectation and the standard deviation of the random variable X.

When analyzing the reliability of RESI, the role of the random variable is often played not only by time but also by current, electrical voltage and other arguments. The normal law is a two-parameter law, to write which one needs to know m_x and σ_x.

The probability of failure-free operation is determined by the formula

P(t) = 0.5 − Φ₀((t − m_x)/σ_x),   (10)

where Φ₀ is the normalized Laplace function, and the failure rate by the formula

λ(t) = f(t)/P(t).   (11)

This manual shows only the most common distribution laws of a random variable. A number of other laws are also used in reliability calculations: the gamma distribution, the χ² distribution, and the Maxwell and Erlang distributions, among others.

Probability density function

Distribution function

, x ≥ 0;

Point estimate distribution law parameter

.

Erlang distribution law (gamma distribution)

Probability density function

Distribution function

, x ≥ 0;

Point estimate of distribution law parameters:

and k is taken as the nearest integer to k′ (k = 1, 2, 3, …).

Weibull distribution law

Probability density function

distribution function

, x ≥ 0;

Point estimate of distribution law parameters

;

In systems with priority requirements, a distinction is made between relative priority (without interruption of service), when a request with a higher priority is accepted for service after the previously started service of a lower-priority request is completed, and absolute priority, when the channel is immediately freed to serve the incoming higher-priority request.

The priority scale can be built on criteria external to the service system or on indicators related to the operation of the service system itself. The following types of priorities are of practical significance:

priority given to the requests with the least service time. The effectiveness of this priority can be shown with the following example. Two requests arrive in sequence, with service durations of 6.0 and 1.0 hours respectively. If an idle channel accepts them in order of arrival, the time in the system is 6.0 hours for the first request and 6.0 + 1.0 = 7.0 hours for the second, or 13.0 hours in total. If priority is given to the second request and it is served first, its time in the system is 1.0 hours and that of the other is 1.0 + 6.0 = 7.0 hours, or 8.0 hours in total. The gain from the assigned priority is a 5.0-hour (13 − 8) reduction in the time requests spend in the system;
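The arithmetic of this example can be reproduced with a short sketch (single channel, service in the given order):

```python
def total_time_in_system(service_times):
    """Total time requests spend in the system when a single, initially free
    channel serves them in the given order (each request waits for all the
    earlier ones and then for its own service)."""
    total, elapsed = 0.0, 0.0
    for s in service_times:
        elapsed += s
        total += elapsed
    return total

fifo = total_time_in_system([6.0, 1.0])  # order of arrival: 6.0 + 7.0 = 13.0
sjf = total_time_in_system([1.0, 6.0])   # shortest service first: 1.0 + 7.0 = 8.0
print(fifo, sjf, fifo - sjf)
```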

priority given to requests with the minimum ratio of service time to the capacity (performance) of the source of the request, for example, to the carrying capacity of a vehicle.

The service mechanism is characterized by the parameters of individual service channels, the throughput of the system as a whole, and other data on service requirements. The system capacity is determined by the number of channels (devices) and the performance of each of them.

45. Determination of confidence intervals of random variables



Interval estimation of a distribution parameter of a random variable is based on the fact that, with probability γ,

|P − P_m| ≤ δ,

where P is the exact (true) value of the parameter;

P_m is the estimate of the parameter based on the sample;

δ is the accuracy (error) of the estimate of the parameter P.

The most commonly accepted values of γ are from 0.8 to 0.99.

The confidence interval of a parameter is the interval into which the parameter value falls with probability γ. On this basis, for example, the required sample size of a random variable is found that provides an estimate of the mathematical expectation with accuracy δ and probability γ. The form of this relationship is determined by the distribution law of the random variable.

The probability of a random variable falling into a given interval [X1, X2] is determined by the increment of the cumulative distribution function over this interval, F(X2) − F(X1). On this basis, when the distribution function is known, one can find the expected guaranteed minimum value X_gn (x ≥ X_gn) or maximum value X_gv (x ≤ X_gv) of the random variable with a given probability γ (Figure 2.15). The first is the value that the random variable exceeds with probability γ; the second is the value below which the random variable lies with probability γ. The guaranteed minimum value X_gn with probability γ is ensured when F(x) = 1 − γ, and the maximum X_gv when F(x) = γ. Thus, X_gn and X_gv are found from the expressions:

X_gn = F^(−1)(1 − γ);

X_gv = F^(−1)(γ).

Example. The random variable has an exponential distribution with the function F(x) = 1 − e^(−λx).

It is required to find the values X_gn and X_gv for which the random variable X with probability γ = 0.95 is, respectively, greater than X_gn and less than X_gv.



Based on the fact that F^(−1)(α) = −(1/λ)·ln(1 − α) (see the earlier derivation), with λ = 0.01 and α = 1 − γ = 0.05 we obtain

X_gn = −(1/λ)·ln(1 − α) = −(1/0.01)·ln(0.95) = −100·(−0.0513) = 5.13.

For X_gv, α = γ = 0.95, and similarly

X_gv = −(1/λ)·ln(1 − α) = −(1/0.01)·ln(0.05) = −100·(−2.996) = 299.6.
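The example's arithmetic can be checked with a short sketch (same λ = 0.01 and γ = 0.95):

```python
import math

def exp_inverse_cdf(alpha, lam):
    """F^{-1}(alpha) = -(1 / lam) * ln(1 - alpha) for the exponential law."""
    return -math.log(1.0 - alpha) / lam

lam, g = 0.01, 0.95
x_gn = exp_inverse_cdf(1.0 - g, lam)  # guaranteed minimum: F(x) = 1 - gamma
x_gv = exp_inverse_cdf(g, lam)        # guaranteed maximum: F(x) = gamma
print(round(x_gn, 2), round(x_gv, 1))
```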

For the normal distribution law, the values X_gn and X_gv can be calculated using the formulas

X_gn = x_m + σ·U_(1−γ) = x_m − σ·U_γ;

X_gv = x_m + σ·U_γ,

where x_m is the mathematical expectation of the random variable; σ is its standard deviation; U_γ is the one-sided quantile of the normal distribution law at probability γ.

Figure 2.15 – Graphic interpretation of the definition of X gn and X gv

46.Description of flows of service requirements

The incoming flow is a sequence of requests (applications) arriving at the service system; it is characterized by the frequency of arrival of requests per unit time (the intensity) and the distribution law of the flow intensity. The incoming flow can also be described by the time intervals between arrivals of requests and the distribution law of these intervals.

Requests in a flow can arrive one at a time (ordinary flows) or in groups (non-ordinary flows).

The property of an ordinary flow is that only one request can arrive at any given instant; in other words, the probability of more than one request arriving in a short time interval is infinitesimally small.

In the case of group receipt of requirements, the intensity of receipt of groups of demands and the law of its distribution, as well as the size of the groups and the law of their distribution are specified.

The intensity of arrival of requests can vary over time (non-stationary flows) or depend only on the time unit adopted for determining the intensity (stationary flows). A flow is called stationary if the probability of n requests appearing over a period (t0, t0 + Δt) does not depend on t0 but only on Δt.

In a non-stationary flow, the intensity changes over time in a non-periodic or periodic pattern (for example, seasonal processes), and may also have periods corresponding to partial or complete interruption of the flow.

Depending on whether there is a connection between the number of requests entering the system before and after a certain point in time, the flow can have an aftereffect or no aftereffect.

The simplest flow is an ordinary, stationary flow of requests with no aftereffect.

47.Pearson and Romanovsky agreement criteria

In subsequent chapters we will meet several different types of random variables. In this section we list the most frequently occurring of them, their PDFs, CDFs, and moments. We start with the binomial distribution, the distribution of a discrete random variable, and then introduce the distributions of several continuous random variables.

Binomial distribution. Let X be a discrete random variable that takes two possible values, say X = 1 or X = 0, with probabilities p and 1 − p respectively. The corresponding PDF of X is shown in Fig. 2.1.6.

Fig. 2.1.6. Probability density function of X

Now suppose that

Y = X₁ + X₂ + … + X_n,

where the X_i, i = 1, 2, …, n, are statistically independent and identically distributed random variables, each with the PDF shown in Fig. 2.1.6. What is the distribution function of Y?

To answer this question, note that the range of Y is the set of integers from 0 to n. The probability that Y = 0 is simply the probability that all the X_i = 0. Since the X_i are statistically independent,

P(Y = 0) = (1 − p)ⁿ.

The probability that Y = 1 is the probability that one term equals 1 and the rest are zero. Since this event can occur in n different ways,

P(Y = 1) = n·p·(1 − p)ⁿ⁻¹.

More generally, counting the

C(n, k) = n!/(k!·(n − k)!)   (2.1.84)

different combinations that lead to the result Y = k, we get

P(Y = k) = C(n, k)·pᵏ·(1 − p)ⁿ⁻ᵏ,

where C(n, k) is the binomial coefficient. Therefore, the CDF can be expressed as

F(y) = Σ_(k=0)^(⌊y⌋) C(n, k)·pᵏ·(1 − p)ⁿ⁻ᵏ,   (2.1.87)

where ⌊y⌋ denotes the largest integer m such that m ≤ y.

The CDF (2.1.87) characterizes a binomially distributed random variable.

The first two moments of Y are

E(Y) = n·p,  σ² = n·p·(1 − p),

and the characteristic function is

ψ(jv) = (1 − p + p·e^(jv))ⁿ.   (2.1.89)

Uniform distribution. The PDF and CDF of a uniformly distributed random variable are shown in Fig. 2.1.7.

Fig. 2.1.7. Graphs of the PDF and CDF of a uniformly distributed random variable

The first two moments of a random variable uniformly distributed on (a, b) are

E(X) = (a + b)/2,

E(X²) = (a² + ab + b²)/3,   (2.1.90)

σ² = (b − a)²/12,

and the characteristic function is

ψ(jv) = (e^(jvb) − e^(jva)) / (jv·(b − a)).   (2.1.91)

Gaussian distribution. The PDF of a Gaussian, or normally distributed, random variable is

p(x) = (1/(√(2π)·σ))·exp(−(x − m)²/(2σ²)),   (2.1.92)

where m is the mathematical expectation and σ² is the variance of the random variable. The CDF is

F(x) = 1/2 + (1/2)·erf((x − m)/(√2·σ)),

where erf(x) is the error function, defined by the expression

erf(x) = (2/√π)·∫₀^x e^(−t²) dt.   (2.1.94)

The PDF and CDF are illustrated in Fig. 2.1.8.

Fig. 2.1.8. Graphs of the PDF (a) and CDF (b) of a Gaussian random variable

The CDF can also be expressed in terms of the complementary error function:

F(x) = 1 − (1/2)·erfc((x − m)/(√2·σ)),

erfc(x) = (2/√π)·∫_x^∞ e^(−t²) dt = 1 − erf(x).   (2.1.95)

Note that erf(−x) = −erf(x), erfc(−x) = 2 − erfc(x), erf(∞) = 1, and erfc(∞) = 0. For x > m the complementary error function is proportional to the area under the tail of the Gaussian PDF. For large values of x it can be approximated by the series

erfc(x) ≈ (e^(−x²)/(x·√π))·[1 + Σ_(k=1)^n (−1)ᵏ·(1·3·5⋯(2k − 1))/(2x²)ᵏ],   (2.1.96)

and the approximation error is less than the last retained term.

The function usually used for the area under the tail of the Gaussian PDF is denoted by Q(x) and defined as

Q(x) = (1/√(2π))·∫_x^∞ e^(−t²/2) dt,  x ≥ 0.   (2.1.97)

Comparing (2.1.95) and (2.1.97), we find

Q(x) = (1/2)·erfc(x/√2).   (2.1.98)
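Relation (2.1.98) is easy to check numerically with the standard-library error function:

```python
import math

def Q(x):
    """Gaussian tail probability, Q(x) = 0.5 * erfc(x / sqrt(2)), eq. (2.1.98)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

print(Q(0.0))  # 0.5: half the Gaussian area lies above the mean
```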

The characteristic function of a Gaussian random variable with mean m and variance σ² is

ψ(jv) = exp(jvm − v²σ²/2).   (2.1.99)

The central moments of a Gaussian random variable are

E[(X − m)ᵏ] = 1·3·5⋯(k − 1)·σᵏ for k even, and 0 for k odd,   (2.1.100)

and the ordinary moments can be expressed through the central moments as

E(Xᵏ) = Σ_(i=0)^k C(k, i)·mⁱ·μ_(k−i).   (2.1.101)

The sum of n statistically independent Gaussian random variables is also a Gaussian random variable. To demonstrate this, suppose

Y = Σ_(i=1)^n X_i,

where the X_i, i = 1, 2, …, n, are independent Gaussian random variables with means m_i and variances σ_i². Using the result (2.1.79), we find that the characteristic function of Y is

ψ_Y(jv) = Π_(i=1)^n ψ_(X_i)(jv) = exp(jv·Σ m_i − v²·Σ σ_i²/2).

Therefore Y is a Gaussian random variable with mean Σ m_i and variance Σ σ_i².

Chi-square distribution. A random variable with a chi-square distribution is generated from a Gaussian random variable, in the sense that its formation can be considered a transformation of the latter. To be specific, let Y = X², where X is a Gaussian random variable. Then Y has a chi-square distribution. We distinguish two types of chi-square distribution: the first, called the central chi-square distribution, is obtained when X has zero mean; the second, called the non-central chi-square distribution, is obtained when X has a non-zero mean.

First consider the central chi-square distribution. Let X be a Gaussian random variable with zero mean and variance σ². Since Y = X², the PDF of Y follows from the transformation result (2.1.47):

p(y) = (1/(σ·√(2πy)))·e^(−y/(2σ²)),  y ≥ 0.   (2.1.105)

The CDF cannot be expressed in closed form. The characteristic function, however, can:

ψ(jv) = 1/(1 − j2vσ²)^(1/2).   (2.1.107)

Now suppose that the random variable Y is defined as

Y = Σ_(i=1)^n X_i²,

where the X_i, i = 1, 2, …, n, are statistically independent and identically distributed Gaussian random variables with zero mean and variance σ². Due to the statistical independence, the characteristic function is

ψ(jv) = 1/(1 − j2vσ²)^(n/2).   (2.1.109)

The inverse transformation of this characteristic function gives the PDF

p(y) = y^(n/2−1)·e^(−y/(2σ²)) / (σⁿ·2^(n/2)·Γ(n/2)),  y ≥ 0,   (2.1.110)

where Γ(p) is the gamma function, defined as

Γ(p) = ∫₀^∞ t^(p−1)·e^(−t) dt,

Γ(p) = (p − 1)! for p a positive integer;  Γ(1/2) = √π.   (2.1.111)

This PDF is a generalization of (2.1.105) and is called the chi-square (or gamma) PDF with n degrees of freedom. It is illustrated in Fig. 2.1.9.

The first two moments of Y are

E(Y) = n·σ²,

σ_y² = 2n·σ⁴,   (2.1.112)

and the CDF is

F(y) = ∫₀^y u^(n/2−1)·e^(−u/(2σ²)) / (σⁿ·2^(n/2)·Γ(n/2)) du,  y ≥ 0.   (2.1.113)

Fig. 2.1.9. PDF plots for a random variable with a chi-square distribution, for several values of the number of degrees of freedom

This integral reduces to an incomplete gamma function, which was tabulated by Pearson (1965).

If n is even, the integral (2.1.113) can be expressed in closed form. In particular, let n = 2k, where k is an integer. Then, using repeated integration by parts, we obtain

F(y) = 1 − e^(−y/(2σ²))·Σ_(j=0)^(k−1) (1/j!)·(y/(2σ²))ʲ,  y ≥ 0.   (2.1.114)

Now consider the non-central chi-square distribution, which results from squaring a Gaussian random variable with nonzero mean. If X is a Gaussian random variable with mean m and variance σ², the random variable Y = X² has the PDF

p(y) = (1/(σ·√(2πy)))·e^(−(y + m²)/(2σ²))·cosh(m·√y/σ²),  y ≥ 0.   (2.1.115)

This result is obtained by applying the transformation (2.1.47) to the Gaussian PDF (2.1.92). The characteristic function for this PDF is

ψ(jv) = (1/(1 − j2vσ²)^(1/2))·exp(jvm²/(1 − j2vσ²)).   (2.1.116)

To generalize the results, assume that Y is the sum of the squares of Gaussian random variables defined by (2.1.108). The X_i, i = 1, 2, …, n, are assumed to be statistically independent with means m_i, i = 1, 2, …, n, and equal variances σ². Then the characteristic function, obtained from (2.1.116) using relation (2.1.79), is

ψ(jv) = (1/(1 − j2vσ²)^(n/2))·exp(jv·Σ_(i=1)^n m_i² / (1 − j2vσ²)).   (2.1.117)

The inverse Fourier transform of this characteristic function gives the PDF

p(y) = (1/(2σ²))·(y/s²)^((n−2)/4)·e^(−(s² + y)/(2σ²))·I_(n/2−1)(√y·s/σ²),  y ≥ 0,   (2.1.118)

where the notation

s² = Σ_(i=1)^n m_i²   (2.1.119)

is introduced, and I_α(x) is the modified Bessel function of the first kind of order α, which can be represented by the infinite series

I_α(x) = Σ_(k=0)^∞ (x/2)^(α+2k) / (k!·Γ(α + k + 1)),  x ≥ 0.   (2.1.120)

The PDF defined by (2.1.118) is called the non-central chi-square distribution with n degrees of freedom; the parameter s² is called the noncentrality parameter of the distribution. The CDF of the non-central chi-square distribution with n degrees of freedom is

F(y) = ∫₀^y p(u) du.   (2.1.121)

This integral is not expressible in closed form. However, if n is even, the CDF can be expressed in terms of the generalized Marcum Q-function, which is defined as

Q_M(a, b) = ∫_b^∞ x·(x/a)^(M−1)·exp(−(x² + a²)/2)·I_(M−1)(ax) dx,   (2.1.122)

Q_M(a, b) = Q₁(a, b) + exp(−(a² + b²)/2)·Σ_(k=1)^(M−1) (b/a)ᵏ·I_k(ab).   (2.1.123)

If we replace the variable of integration in (2.1.121) by x, with x² = u/σ², and set a² = s²/σ², it is easy to find that

F(y) = 1 − Q_(n/2)(s/σ, √y/σ).   (2.1.124)

In conclusion, we note that the first two moments of a random variable with the non-central chi-square distribution are

E(Y) = n·σ² + s²,

σ_y² = 2n·σ⁴ + 4σ²·s².

Rayleigh distribution. The Rayleigh distribution is often used as a model for the statistics of signals transmitted over radio channels, for example in cellular radio communications. It is closely related to the central chi-square distribution. To illustrate this, let Y = X₁² + X₂², where X₁ and X₂ are statistically independent Gaussian random variables with zero means and equal variance σ². From the above it follows that Y has a chi-square distribution with two degrees of freedom. Therefore, the PDF of Y is

p(y) = (1/(2σ²))·e^(−y/(2σ²)),  y ≥ 0.   (2.1.126)

Now suppose we define a new random variable

R = √Y = √(X₁² + X₂²).   (2.1.127)

Performing a simple change of variable in (2.1.126), we obtain the PDF of R:

p(r) = (r/σ²)·e^(−r²/(2σ²)),  r ≥ 0.   (2.1.128)

This is the PDF of a Rayleigh-distributed random variable. The corresponding CDF is

F(r) = 1 − e^(−r²/(2σ²)),  r ≥ 0.   (2.1.129)

The moments of R are

E(Rᵏ) = (2σ²)^(k/2)·Γ(1 + k/2),   (2.1.130)

and the variance is

σ_r² = (2 − π/2)·σ².   (2.1.131)
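Formulas (2.1.130)-(2.1.131) can be checked by numerical integration of the Rayleigh PDF (σ is chosen arbitrarily for illustration):

```python
import math
import numpy as np

sigma = 1.5
r = np.linspace(0.0, 30.0, 200001)   # grid reaching far into the tail
pdf = (r / sigma**2) * np.exp(-r**2 / (2 * sigma**2))

def trapezoid(y, x):
    """Plain trapezoidal rule, to keep the sketch NumPy-version agnostic."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

mean = trapezoid(r * pdf, r)                 # should be sigma * sqrt(pi / 2)
var = trapezoid(r**2 * pdf, r) - mean**2     # should be (2 - pi / 2) * sigma**2

print(round(mean, 3), round(var, 3))
```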

The characteristic function of a Rayleigh-distributed random variable is

ψ(jv) = ∫₀^∞ (r/σ²)·e^(−r²/(2σ²))·e^(jvr) dr.   (2.1.132)

This integral can be expressed in terms of the confluent (degenerate) hypergeometric function ₁F₁(α; β; x), which is defined as

₁F₁(α; β; x) = Σ_(k=0)^∞ (Γ(α + k)·Γ(β)·xᵏ) / (Γ(α)·Γ(β + k)·k!),  β ≠ 0, −1, −2, …   (2.1.134)

Bowley (1990) showed that it can be expressed as

. (2.1.135)

As a generalization of the expressions obtained above, consider the random variable

R = √(Σ_(i=1)^n X_i²),   (2.1.136)

where the X_i, i = 1, 2, …, n, are statistically independent, identically distributed Gaussian random variables with zero mean. Clearly, R² has a chi-square distribution with n degrees of freedom, whose PDF is given by formula (2.1.110). A simple change of variable in (2.1.110) leads to the PDF of R in the form

p(r) = rⁿ⁻¹·e^(−r²/(2σ²)) / (2^((n−2)/2)·σⁿ·Γ(n/2)),  r ≥ 0.   (2.1.137)

As a consequence of the fundamental relationship between the central chi-square distribution and the Rayleigh distribution, the corresponding CDF is quite simple: for any n it can be represented in the form of an incomplete gamma function. In the special case when n is even, i.e. n = 2k, the CDF of R can be represented in the closed form

F(r) = 1 − e^(−r²/(2σ²))·Σ_(j=0)^(k−1) (1/j!)·(r²/(2σ²))ʲ,  r ≥ 0.   (2.1.138)

In conclusion, we present the formula for the k-th moment,

E(Rᵏ) = (2σ²)^(k/2)·Γ((n + k)/2) / Γ(n/2),   (2.1.139)

valid for any n.

Rice distribution. While the Rayleigh distribution is related to the central chi-square distribution, the Rice distribution is related to the non-central chi-square distribution. To illustrate this relationship, let Y = X₁² + X₂², where X₁ and X₂ are statistically independent Gaussian random variables with means m_i, i = 1, 2, and the same variance σ². From the previous discussion we know that Y has a non-central chi-square distribution with noncentrality parameter s² = m₁² + m₂². The PDF of Y is obtained from (2.1.118) with n = 2:

p(y) = (1/(2σ²))·e^(−(s² + y)/(2σ²))·I₀(√y·s/σ²),  y ≥ 0.   (2.1.140)

Now let us introduce the new variable R = √Y. The PDF of R is obtained from (2.1.140) by a change of variable:

p(r) = (r/σ²)·e^(−(r² + s²)/(2σ²))·I₀(rs/σ²),  r ≥ 0.   (2.1.141)

The function (2.1.141) is called the Rice distribution.

As will be shown in Chap. 5, this PDF characterizes the statistics of the envelope of a harmonic signal exposed to narrow-band Gaussian noise. It is also used for the statistics of signals transmitted through some radio channels. The CDF of R is easy to find from (2.1.124) for the case n = 2:

F(r) = 1 − Q₁(s/σ, r/σ),  r ≥ 0,   (2.1.142)

where Q₁ is the Marcum Q-function defined by (2.1.123).

To generalize the above result, let R be defined by (2.1.136), where the X_i, i = 1, 2, …, n, are statistically independent random variables with means m_i and identical variances σ². The random variable R² has a non-central chi-square distribution with n degrees of freedom and noncentrality parameter s² defined by (2.1.119). Its PDF is given by (2.1.118); therefore the PDF of R is

p(r) = (r^(n/2)/(σ²·s^((n−2)/2)))·e^(−(r² + s²)/(2σ²))·I_(n/2−1)(rs/σ²),  r ≥ 0,   (2.1.143)

and the corresponding CDF is obtained by integrating (2.1.143), which in general reduces to (2.1.121) with the substitution y = r². In the special case when n/2 is an integer, we have

F(r) = 1 − Q_(n/2)(s/σ, r/σ),  r ≥ 0,   (2.1.145)

which follows from (2.1.124). In conclusion, we note that the k-th moment of R is

E(Rᵏ) = (2σ²)^(k/2)·e^(−s²/(2σ²))·(Γ((n + k)/2)/Γ(n/2))·₁F₁((n + k)/2; n/2; s²/(2σ²)),  k ≥ 0,   (2.1.146)

where ₁F₁ is the confluent hypergeometric function.

Nakagami m-distribution. Both the Rayleigh and Rice distributions are often used to describe the statistics of signal fluctuations at the output of a fading multipath channel. This channel model is discussed in Chap. 14. Another distribution often used to characterize the statistics of signals transmitted over multipath fading channels is the Nakagami m-distribution. The PDF of this distribution was given by Nakagami (1960):

p(r) = (2/Γ(m))·(m/Ω)ᵐ·r^(2m−1)·e^(−mr²/Ω),  r ≥ 0,   (2.1.147)

where Ω is defined as

Ω = E(R²),   (2.1.148)

and the parameter m, defined as the ratio of moments

m = Ω²/E[(R² − Ω)²],  m ≥ 1/2,   (2.1.149)

is called the fading parameter.

A normalized version of (2.1.147) can be obtained by introducing another random variable X = R/√Ω (see Problem 2.15). The k-th moment of R is

E(Rᵏ) = (Γ(m + k/2)/Γ(m))·(Ω/m)^(k/2).

For m = 1 one can see that (2.1.147) reduces to the Rayleigh distribution. For values of m satisfying 1/2 ≤ m < 1, we obtain a PDF with longer tails than the Rayleigh distribution; for m > 1 the tails of the Nakagami PDF decay faster than those of the Rayleigh distribution. Figure 2.1.10 illustrates the PDF for different values of m.

Multivariate Gaussian distribution. Of the many multivariate distributions that can be defined, the multivariate Gaussian distribution is the most important and the most commonly used in practice. Let us introduce this distribution and consider its basic properties.

Assume that X_i, i = 1, 2, …, n, are Gaussian random variables with means m_i, variances σ_i², and covariances μ_ij, i, j = 1, 2, …, n; clearly μ_ii = σ_i². Let M be the n×n covariance matrix with elements μ_ij, let X denote the column vector of the random variables, and let m_x denote the column vector of mean values m_i. The joint PDF of the Gaussian random variables X_i, i = 1, 2, …, n, is then

p(x) = (1/((2π)^(n/2)·(det M)^(1/2)))·exp(−(x − m_x)ᵀ·M⁻¹·(x − m_x)/2).

From this form we see that if the Gaussian random variables are uncorrelated, so that M is diagonal, they are also statistically independent. A linear transformation Y = AX, with A built from the eigenvectors of M, makes the covariance matrix of Y diagonal, i.e. decorrelates the variables.

Federal Agency for Education

State Educational Institution of Higher Professional Education "Ural State Technical University-UPI named after the first President of Russia B.N. Yeltsin"

Department of Theoretical Foundations of Radio Engineering

RAYLEIGH DISTRIBUTION

in the discipline "Probabilistic Models"

Group: R-37072

Student: Reshetnikova N.E.

Teacher: Trukhin M.P.

Ekaterinburg, 2009

History of appearance

Probability density function

Cumulative distribution function

Central and absolute moments

Characteristic function

Cumulants (semi-invariants)

Application area

References

History of appearance

On November 12, 1842, John William Strutt, Lord Rayleigh, the English physicist and future Nobel laureate, was born in Langford Grove (Essex). He was educated at home, graduated from Trinity College, Cambridge, and worked there until 1871. In 1873 he set up a laboratory on the family estate of Terling Place. In 1879 he became professor of experimental physics at the University of Cambridge, and in 1884 secretary of the Royal Society of London. In 1887-1905 he was a professor at the Royal Institution; from 1905, president of the Royal Society of London; and from 1908, chancellor of the University of Cambridge.

A broadly erudite natural scientist, he distinguished himself in many branches of science: the theory of oscillations, optics, acoustics, the theory of thermal radiation, molecular physics, hydrodynamics, electricity, and other areas of physics. Investigating acoustic vibrations (vibrations of strings, rods, plates, etc.), he formulated a number of fundamental theorems of the linear theory of oscillations (1873) that allow qualitative conclusions about the natural frequencies of oscillatory systems, and developed a quantitative perturbation method for finding the natural frequencies of an oscillatory system. Rayleigh was the first to point out the specific character of nonlinear systems capable of sustaining undamped oscillations without a periodic external influence, and the special nature of these oscillations, which were later called self-oscillations.

He explained the difference between group and phase velocities and obtained a formula for group velocity (Rayleigh formula).

The Rayleigh distribution appeared in 1880 as a result of considering the problem of adding a set of oscillations with random phases, in which he obtained a distribution function for the resulting amplitude. The method developed by Rayleigh for a long time determined the further development of the theory of random processes.

Probability density function

The probability density function has the form

f(x) = (x / σ²) · exp(−x² / (2σ²)), x ≥ 0,

where σ > 0 is the parameter of the distribution.

Thus, the parameter σ determines both the peak height and the spread of the distribution: the density reaches its maximum, equal to e^(−1/2)/σ, at the point x = σ. As σ decreases, the peak grows and the graph "narrows"; as σ increases, the scatter increases and the peak height decreases.
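This behavior is easy to verify numerically. A short Python sketch (the helper name is illustrative) evaluates the density, checks the peak location and height, and confirms by crude trapezoidal integration that the density integrates to one:

```python
import math

def rayleigh_pdf(x, sigma):
    """Rayleigh density f(x) = (x / sigma^2) * exp(-x^2 / (2 sigma^2)), x >= 0."""
    if x < 0:
        return 0.0
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2))

sigma = 2.0

# The density peaks at x = sigma with height exp(-1/2) / sigma:
peak = rayleigh_pdf(sigma, sigma)
print(abs(peak - math.exp(-0.5) / sigma) < 1e-12)  # True

# Normalization check by the trapezoidal rule on [0, 10*sigma]:
n, a, b = 100000, 0.0, 10 * sigma
h = (b - a) / n
area = sum(rayleigh_pdf(a + i * h, sigma) for i in range(1, n)) * h
area += 0.5 * h * (rayleigh_pdf(a, sigma) + rayleigh_pdf(b, sigma))
print(abs(area - 1.0) < 1e-6)  # True: the density integrates to 1
```

Repeating the peak computation for several values of σ shows the "narrowing" described above: the maximum e^(−1/2)/σ grows as σ shrinks.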

Cumulative distribution function

The cumulative distribution function, by definition equal to the integral of the probability density, is

F(x) = 1 − exp(−x² / (2σ²)), x ≥ 0.

When the parameter σ changes, the graph of the distribution function changes accordingly: as σ decreases, the curve rises more steeply, and as σ increases, it becomes flatter.
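The closed form of the distribution function can be cross-checked against direct numerical integration of the density; the sketch below (illustrative helper names) also verifies the median, which follows from solving F(x) = 1/2:

```python
import math

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2)) if x >= 0 else 0.0

def rayleigh_cdf(x, sigma):
    """F(x) = 1 - exp(-x^2 / (2 sigma^2)) for x >= 0."""
    return 1.0 - math.exp(-x**2 / (2 * sigma**2)) if x >= 0 else 0.0

sigma = 1.5
x = 2.0

# The closed form agrees with trapezoidal integration of the density on [0, x]:
n = 200000
h = x / n
integral = sum(rayleigh_pdf(i * h, sigma) for i in range(1, n)) * h
integral += 0.5 * h * (rayleigh_pdf(0.0, sigma) + rayleigh_pdf(x, sigma))
print(abs(integral - rayleigh_cdf(x, sigma)) < 1e-8)  # True

# The median solves F(x) = 1/2, giving x = sigma * sqrt(2 ln 2):
median = sigma * math.sqrt(2 * math.log(2))
print(abs(rayleigh_cdf(median, sigma) - 0.5) < 1e-12)  # True
```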

Central and absolute moments

Distribution laws completely describe a random variable X from the probabilistic point of view (they contain complete information about the random variable). In practice there is often no need for such a full description; it is enough to indicate the values of individual parameters (numerical characteristics) that determine particular properties of the probability distribution of a random variable.

Among the numerical characteristics, the mathematical expectation plays the most significant role; it is regarded as the result of applying the averaging operation to the random variable X and is denoted M[X].

The raw (initial) moment of order s of a random variable X is the mathematical expectation of the s-th power of this quantity:

αₛ = M[Xˢ].

For a continuous random variable:

αₛ = ∫₀^∞ xˢ f(x) dx.

The mathematical expectation of a value distributed according to Rayleigh's law is equal to

M[X] = σ √(π/2),

so it grows in direct proportion to the parameter σ.

The centered random variable corresponding to X is its deviation from the mathematical expectation:

X̊ = X − M[X].

The central moment of order s of a random variable X is the mathematical expectation of the s-th power of the centered variable:

μₛ = M[(X − M[X])ˢ].

For a continuous random variable

μₛ = ∫₀^∞ (x − M[X])ˢ f(x) dx.

The second central moment is the dispersion, a characteristic of the scatter of the random variable about its mathematical expectation. For a random variable distributed according to Rayleigh's law, the dispersion (second central moment) is equal to

D[X] = (2 − π/2) σ².
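Both formulas can be confirmed by computing the raw moments numerically and forming the variance as μ₂ = α₂ − α₁². A sketch under these assumptions (helper names are illustrative):

```python
import math

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2)) if x >= 0 else 0.0

def raw_moment(s, sigma, n=200000, upper=12.0):
    """Raw moment E[X^s] by the trapezoidal rule on [0, upper*sigma]."""
    b = upper * sigma
    h = b / n
    total = 0.5 * b**s * rayleigh_pdf(b, sigma)
    total += sum((i * h) ** s * rayleigh_pdf(i * h, sigma) for i in range(1, n))
    return total * h

sigma = 2.0
mean = raw_moment(1, sigma)
var = raw_moment(2, sigma) - mean**2

print(abs(mean - sigma * math.sqrt(math.pi / 2)) < 1e-6)  # M[X] = sigma*sqrt(pi/2)
print(abs(var - (2 - math.pi / 2) * sigma**2) < 1e-6)     # D[X] = (2 - pi/2) sigma^2
```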

Characteristic function

The characteristic function of a random variable X is the function

g(t) = M[e^(itX)];

it is the mathematical expectation of the complex random variable e^(itX), which is a function of the random variable X. In solving many problems it is more convenient to use the characteristic function than the distribution law.

Knowing the distribution law, one can find the characteristic function by the formula

g(t) = ∫ e^(itx) f(x) dx.

As we can see, this formula is nothing more than the inverse Fourier transform of the density function. Obviously, with the help of the direct Fourier transform one can recover the distribution law from the characteristic function.

The characteristic function of a random variable distributed according to Rayleigh's law is

g(t) = 1 − σt e^(−σ²t²/2) √(π/2) (erfi(σt/√2) − i),

where erfi is the probability integral (error function) of an imaginary argument.
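Rather than evaluating the error function of a complex argument, the characteristic function can be computed directly from its defining integral. The sketch below (illustrative helper names) checks two basic properties, g(0) = 1 and |g(t)| ≤ 1, and recovers the mean from the derivative relation g′(0) = i·M[X] by a finite difference:

```python
import math
import cmath

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2)) if x >= 0 else 0.0

def char_fn(t, sigma, n=200000, upper=12.0):
    """g(t) = E[exp(i t X)] by direct trapezoidal integration of the density."""
    b = upper * sigma
    h = b / n
    total = 0.5 * (rayleigh_pdf(0.0, sigma) + cmath.exp(1j * t * b) * rayleigh_pdf(b, sigma))
    total += sum(cmath.exp(1j * t * i * h) * rayleigh_pdf(i * h, sigma) for i in range(1, n))
    return total * h

sigma = 1.0
print(abs(char_fn(0.0, sigma) - 1.0) < 1e-6)  # g(0) = 1
print(abs(char_fn(2.0, sigma)) <= 1.0)        # |g(t)| <= 1

# Central difference approximation of g'(0) = i * M[X]:
d = (char_fn(1e-4, sigma) - char_fn(-1e-4, sigma)) / 2e-4
print(abs(d.imag - sigma * math.sqrt(math.pi / 2)) < 1e-6)  # Im g'(0) = mean
```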

Cumulants (semi-invariants)

The function

ψ(t) = ln g(t)

is called the cumulant function of the random variable X. The cumulant function is a complete probabilistic characteristic of the random variable, just like the characteristic function g(t). The point of introducing the cumulant function is that it often turns out to be the simplest among the complete probabilistic characteristics.

The coefficient κₛ in the expansion

ψ(t) = Σₛ κₛ (it)ˢ / s!

is called the cumulant of order s of the random variable X.
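The first two cumulants coincide with the mean and the variance: κ₁ = α₁ and κ₂ = α₂ − α₁². A short Python check under that standard identity (helper names are illustrative):

```python
import math

def rayleigh_pdf(x, sigma):
    return (x / sigma**2) * math.exp(-x**2 / (2 * sigma**2)) if x >= 0 else 0.0

def raw_moment(s, sigma, n=200000, upper=12.0):
    """Raw moment E[X^s] by the trapezoidal rule on [0, upper*sigma]."""
    b = upper * sigma
    h = b / n
    total = 0.5 * b**s * rayleigh_pdf(b, sigma)
    total += sum((i * h) ** s * rayleigh_pdf(i * h, sigma) for i in range(1, n))
    return total * h

sigma = 1.0
m1, m2 = raw_moment(1, sigma), raw_moment(2, sigma)
k1, k2 = m1, m2 - m1**2  # first two cumulants expressed through raw moments

print(abs(k1 - sigma * math.sqrt(math.pi / 2)) < 1e-6)  # kappa_1 = mean
print(abs(k2 - (2 - math.pi / 2) * sigma**2) < 1e-6)    # kappa_2 = variance
```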

Application area

The Rayleigh distribution is used to describe a large number of problems, for example:

    The problem of adding a set of oscillations with random phases;

    The distribution of black-body radiation energy;

    The description of reliability laws;

    The description of some radio signals;

    The amplitude values of noise oscillations (interference) in a radio receiver;

    The random envelope of a narrow-band random process (noise).
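The last application is easy to demonstrate by simulation: the envelope R = √(G₁² + G₂²) of two independent zero-mean Gaussian quadrature components with variance σ² follows a Rayleigh(σ) law. A minimal Monte Carlo sketch comparing the empirical mean and variance with the formulas above:

```python
import math
import random

random.seed(42)
sigma = 1.0
n = 200000

# Envelope of two independent N(0, sigma^2) quadrature components:
samples = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
           for _ in range(n)]

emp_mean = sum(samples) / n
emp_var = sum((s - emp_mean) ** 2 for s in samples) / n

print(abs(emp_mean - sigma * math.sqrt(math.pi / 2)) < 0.01)  # matches M[X]
print(abs(emp_var - (2 - math.pi / 2) * sigma**2) < 0.01)     # matches D[X]
```

This is precisely the setting of Rayleigh's original 1880 problem of adding oscillations with random phases.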

References

    R. N. Wadzinski, Handbook of Probability Distributions. St. Petersburg: Nauka, 2001.

    G. A. Samusevich, Probability Theory and Mathematical Statistics: study guide. Ekaterinburg: USTU-UPI, 2007.


