Deterministic and stochastic models: basic concepts

Modeling is one of the most important tools of modern life whenever we want to foresee the future, and this is not surprising: the accuracy of the method can be very high. In this article we look at what a deterministic model is.

General information

Deterministic models of systems have the feature that, if they are simple enough, they can be studied analytically. Otherwise, when a significant number of equations and variables is involved, computers can be used, although computer assistance, as a rule, comes down solely to solving the equations and finding answers. This usually forces one to change the systems of equations and use a different discretization, which increases the risk of computational error. All types of deterministic models share the property that knowledge of the parameters on a certain studied interval allows the dynamics of the known indicators to be fully determined beyond its boundary.

Peculiarities

Factor modeling

References to factor modeling appear throughout the article, but we have not yet discussed what it is. Factor modeling means that the main quantities for which a quantitative comparison is needed are identified, and the model is then transformed into a form suitable for that comparison.

If a strictly deterministic model has more than two factors, it is called multifactorial. Its analysis can be carried out using various techniques; in the typical approach, the problem at hand is considered from the point of view of pre-established, a priori models, and the choice among them is made according to their content.

To build a high-quality model, theoretical and experimental studies of the essence of the technological process and its cause-and-effect relationships must be used. This is precisely the main advantage of the models under consideration: deterministic models allow accurate forecasting in many areas of our lives, and it is thanks to their quality and versatility that they have become so widespread.

Cybernetic deterministic models

They are of interest to us because of the analysis of transient processes that occur with any, even the most insignificant, change in the aggressive properties of the external environment. For simplicity and speed of calculation, the current state of affairs is replaced by a simplified model; the important thing is that it satisfies all the basic requirements.

The performance of an automatic control system and the effectiveness of the decisions it makes depend on the unity of all necessary parameters. Here a trade-off must be resolved: the more information is collected, the higher the probability of error and the longer the processing time; but if data collection is limited, a less reliable result must be expected. It is therefore necessary to find a golden mean that yields information of sufficient accuracy without being unnecessarily complicated by superfluous elements.

Multiplicative deterministic model

It is built by decomposing a factor into a set of factors. As an example, consider the process of forming the volume of manufactured products (PP). This requires labour resources (LR), materials (M) and energy (E), so the factor PP can be decomposed into the set (LR; M; E). This option reflects the multiplicative form of the factor system and the possibility of its division. The following transformation methods can be used: expansion, formal decomposition and lengthening. The first option has found wide application in analysis; it can be used, for example, to calculate an employee's productivity.

When lengthening, one value is replaced by other factors, but the result must remain the same number; an example of lengthening was discussed above. Formal decomposition remains: it lengthens the denominator of the original factor model by replacing one or more parameters. Consider this example: we calculate the profitability of production by dividing the amount of profit by the amount of costs. Instead of dividing by a single value, we divide by the summed expenses for materials, personnel, taxes, and so on.
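As a rough illustration (a sketch with hypothetical numbers, not taken from the source), the formal decomposition above can be written as a tiny calculation in which the cost denominator is lengthened into a sum of items:

```python
# A minimal sketch illustrating formal decomposition of a deterministic factor
# model: profitability = profit / costs, with the costs factor lengthened into
# a sum of cost items. All numbers are hypothetical.

def profitability(profit: float, cost_items: dict) -> float:
    """Profit divided by total costs; cost_items is the 'lengthened' denominator."""
    total_costs = sum(cost_items.values())
    return profit / total_costs

costs = {"materials": 40.0, "personnel": 30.0, "taxes": 10.0, "other": 5.0}
print(profitability(profit=25.0, cost_items=costs))  # 25 / 85 ≈ 0.294
```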

Probabilities

Oh, if only everything went exactly as planned! But this rarely happens, so in practice deterministic and stochastic models are often used together. The peculiarity of the latter is that they also take various probabilities into account. Take, for example, the following: there are two states whose relations are very bad, and a third party decides whether to invest in businesses in one of the countries; after all, if a war breaks out, profits will suffer greatly. Or consider building a plant in an area with high seismic activity: natural factors are at work here that cannot be taken into account exactly, only approximately.

Conclusion

We have looked at what deterministic analysis models are. Alas, in order to fully understand them and be able to apply them in practice, further study is needed. The theoretical foundations are already in place, and simple examples were given within this article. Next, it is better to follow the path of gradually complicating the working material. You can simplify the task a little and start studying software that can carry out the appropriate simulations. But whatever the choice, understanding the basics and being able to answer questions about what, how and why is still necessary: you should first learn to select the correct input data and choose the necessary actions, and then the programs will be able to complete their tasks successfully.

The models of systems that we have talked about so far have been deterministic (certain), i.e. specifying the input influence uniquely determined the output of the system. However, in practice this rarely happens: descriptions of real systems are usually subject to uncertainty. For example, for a static model, uncertainty can be taken into account by writing the relation y = f(x) + ε, (2.1) where ε is the error reduced to the system output.

The reasons for uncertainty are varied:

– errors and interference in measurements of system inputs and outputs (natural errors);

– inaccuracy of the system model itself, which forces an error to be artificially introduced into the model;

– incomplete information about system parameters, etc.

Among the various ways of clarifying and formalizing uncertainty, the stochastic (probabilistic) approach, in which uncertain quantities are considered random, has become the most widespread. The developed conceptual and computational apparatus of probability theory and mathematical statistics makes it possible to give specific recommendations on choosing the structure of a system and estimating its parameters. The classification of stochastic models of systems and of methods for studying them is presented in Table 1.4. Conclusions and recommendations are based on the averaging effect: random deviations of the measurement results of a certain quantity from its expected value cancel each other out when summed, and the arithmetic mean of a large number of measurements turns out to be close to the expected value. Mathematical formulations of this effect are given by the law of large numbers and the central limit theorem. The law of large numbers states that if x_1, ..., x_N are random variables with mathematical expectation (mean value) a and variance σ², then

(x_1 + ... + x_N) / N ≈ a    (2.32)

for sufficiently large N. This indicates the fundamental possibility of obtaining an arbitrarily accurate estimate of a from measurements. The central limit theorem, refining (2.32), states that

(x_1 + ... + x_N) / N ≈ a + ζσ/√N,    (2.33)

where ζ is a standard normally distributed random variable.

Since the distribution of the quantity ζ is well known and tabulated (for example, it is known that P(|ζ| < 2) ≈ 0.95), relation (2.33) allows the estimation error to be calculated. Suppose, for example, we want to find how many measurements are needed so that, with probability 0.95, the error in estimating their mathematical expectation is less than 0.01, given that the variance of each measurement is 0.25. From (2.33) we obtain that the inequality 2·0.5/√N < 0.01 must hold, whence N > 10000.
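This back-of-the-envelope estimate is easy to check by simulation; the following sketch (an illustration, not part of the source) draws repeated samples of N = 10000 measurements with variance 0.25 around a zero mean:

```python
# A quick numerical check of the estimate above: with variance 0.25 per
# measurement, averages over N = 10000 values should miss the true mean by
# less than 0.01 in roughly 95% of trials.
import numpy as np

rng = np.random.default_rng(0)
N, trials, sigma = 10_000, 1_000, 0.5
means = np.array([rng.normal(loc=0.0, scale=sigma, size=N).mean()
                  for _ in range(trials)])
within = np.mean(np.abs(means) < 0.01)
print(f"fraction of trials with |error| < 0.01: {within:.3f}")   # ~0.95
```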

Of course, formulations (2.32) and (2.33) can be given a stricter form, and this is easily done using the concepts of probabilistic convergence. Difficulties arise when trying to verify the conditions of these strict statements. For example, the law of large numbers and the central limit theorem require independence of the individual measurements (realizations) of the random variable and finiteness of its variance. If these conditions are violated, the conclusions may also fail. For instance, if all measurements coincide, x_1 = x_2 = ... = x_N, then, although all other conditions are met, there can be no question of averaging. Another example: the law of large numbers does not hold if the random variables are distributed according to the Cauchy law (with distribution density p(x) = 1/(π(1 + x²))), which has no finite mathematical expectation or variance. Yet such a law does occur in life! For example, the Cauchy law describes the integral illumination of points on a straight shoreline produced by a uniformly rotating searchlight located at sea (on a ship) and switched on at random moments of time.
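The failure of averaging for the Cauchy law is easy to see numerically; the following sketch (not from the source, using standard Cauchy and standard normal samples) compares sample means as the sample size grows:

```python
# Sample means of Cauchy data do not settle down as N grows,
# while means of finite-variance (normal) data do.
import numpy as np

rng = np.random.default_rng(0)
for n in (10**2, 10**4, 10**6):
    cauchy_mean = rng.standard_cauchy(n).mean()
    normal_mean = rng.standard_normal(n).mean()
    print(f"N={n:>8}: mean(Cauchy)={cauchy_mean:+9.3f}  mean(Normal)={normal_mean:+8.4f}")
```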

But even greater difficulties arise in checking the validity of using the very word "random". What is a random variable, a random event, and so on? It is often said that an event A is random if, as a result of an experiment, it may occur (with probability p) or not occur (with probability 1 - p). Everything, however, is not so simple. The concept of probability can be connected to experimental results only through the frequency of occurrence of the event in a certain number (series) of experiments: ν_A = N_A / N, where N_A is the number of experiments in which the event occurred and N is the total number of experiments. If, for sufficiently large N, the frequencies ν_A approach some constant number p_A, then the event A can be called random, and the number p_A its probability. In this case, the frequencies observed in different series of experiments should be close to one another (this property is called statistical stability or homogeneity). The above also applies to the concept of a random variable, since a quantity ξ is random if the events {a < ξ < b} are random for any numbers a, b. The frequencies of occurrence of such events in long series of experiments should cluster around certain constant values.

So, for the stochastic approach to be applicable, the following requirements must be met:

1) the mass character of the experiments, i.e. a sufficiently large number of them;

2) repeatability of experimental conditions, justifying comparison of the results of different experiments;

3) statistical stability.

The stochastic approach obviously cannot be applied to single experiments: expressions like "the probability that it will rain tomorrow" or "Zenit will win the cup with probability 0.8" are meaningless. But even if experiments are numerous and repeatable, statistical stability may be absent, and checking for it is not an easy task. Known estimates of the permissible deviation of frequency from probability are based on the central limit theorem or Chebyshev's inequality and require additional hypotheses about the independence or weak dependence of the measurements. Experimental verification of the independence condition is even more difficult, since it requires additional experiments.

The methodology and practical recipes for applying probability theory are presented in more detail in the instructive book by V. N. Tutubalin, an idea of which is given by the quotations below:

“It is extremely important to eradicate the misconception that sometimes occurs among engineers and natural scientists who are not sufficiently familiar with the theory of probability, that the result of any experiment can be considered as a random variable. In especially severe cases, this is accompanied by belief in the normal law of distribution, and if the random variables themselves are not normal, then they believe that their logarithms are normal.”

“According to modern concepts, the scope of application of probability-theoretic methods is limited to phenomena that are characterized by statistical stability. However, testing statistical stability is difficult and always incomplete, and it often gives a negative conclusion. As a result, in entire fields of knowledge, for example, in geology, an approach has become the norm in which statistical stability is not checked at all, which inevitably leads to serious errors. In addition, the propaganda of cybernetics undertaken by our leading scientists has given (in some cases!) a somewhat unexpected result: it is now believed that only a machine (and not a person) is capable of obtaining objective scientific results.

In such circumstances, it is the duty of every teacher to again and again propagate that old truth that Peter I tried (unsuccessfully) to instill in Russian merchants: that one must trade honestly, without deception, since in the end it is more profitable for oneself.”

How to build a model of a system if there is uncertainty in the problem, but the stochastic approach is not applicable? Below we briefly outline one of the alternative approaches, based on fuzzy set theory.


We recall that a relation R (a connection between x and y) is a subset of the Cartesian product X × Y, i.e. some set of pairs R = {(x, y)}, where x ∈ X, y ∈ Y. For example, a functional connection (dependence) y = f(x) can be represented as a relation between the sets X and Y that includes the pairs (x, y) for which y = f(x).

In the simplest case it may be that X = Y, and R is the identity relation if it consists of the pairs (x, x).

Examples 12-15 in Table 1.1 were invented in 1988 by M. Koroteev, a pupil of school No. 292.

The mathematician here will, of course, note that the minimum in (1.4) may, strictly speaking, not be attained, and that in the formulation of (1.4) min should be replaced by inf ("infimum", the greatest lower bound of the set). However, this does not change the situation: the formalization in this case does not reflect the essence of the problem, i.e. it is carried out incorrectly. In what follows, in order not to "scare" the engineer, we will use the notations min, max, bearing in mind that, if necessary, they should be replaced by the more general inf, sup.

Here the term "structure" is used in a somewhat narrower sense than in Subsection 1.1 and means the composition of subsystems in the system and the types of connections between them.

A graph is a pair (G, R), where G = {g_1, ..., g_n} is a finite set of vertices and R ⊆ G × G is a binary relation on G. If (g_i, g_j) ∈ R holds if and only if (g_j, g_i) ∈ R, the graph is called undirected; otherwise it is directed. The pairs (g_i, g_j) are called arcs (edges), and the elements of the set G are the vertices of the graph.

That is, algebraic or transcendental.

Strictly speaking, a countable set is an idealization that cannot be realized in practice because of the finite size of technical systems and the limits of human perception. It makes sense to introduce such idealized models (for example, the set of natural numbers N = {1, 2, ...}) for sets that are finite but whose number of elements is not bounded (or not known) in advance.

Formally, the concept of an operation is a special case of the concept of a relation between elements of sets. For example, the operation of adding two numbers specifies a 3-place (ternary) relation R: a triple of numbers (x, y, z) belongs to the relation R (we write (x, y, z) ∈ R) if z = x + y.

A complex number, the argument of the polynomials A(·), B(·).

This assumption is often fulfilled in practice.

If the variance is unknown, it should be replaced in (2.33) by its estimate computed from the same measurements. In this case the quantity ζ will no longer be normally distributed, but will follow Student's law, which for large N is practically indistinguishable from the normal distribution.

It is easy to see that (2.34) is a special case of (2.32) if we take x_j = 1 when the event A occurred in the j-th experiment and x_j = 0 otherwise; in this case the mathematical expectation of x_j equals p_A.

And today you can add “... and computer science” (author’s note).

Any real process is characterized by random fluctuations caused by the physical variability of various factors over time. In addition, random external influences on the system may be present. Therefore, for equal mean values of the input parameters, the output parameters will differ at different moments of time. Hence, if random impacts on the system under study are significant, it is necessary to develop a probabilistic (stochastic) model of the object, taking into account the statistical distribution laws of the system's parameters and choosing the appropriate mathematical apparatus.

When building deterministic models, random factors are neglected and only the specific conditions of the problem being solved, the properties and the internal connections of the object are taken into account (almost all branches of classical physics are built on this principle).

The idea of deterministic methods is to use the model's own dynamics during the evolution of the system.

In our course these methods are represented by the molecular dynamics method, whose advantages are the accuracy and definiteness of the numerical algorithm. Its disadvantage is that it is labour-intensive because of the calculation of the interaction forces between particles: for a system of N particles, on the order of N² force evaluations are needed at each step.

In the deterministic approach, the equations of motion are specified and integrated over time. We will consider systems of many particles. The positions of the particles contribute the potential energy to the total energy of the system, and their velocities determine the kinetic-energy contribution. The system moves along a trajectory of constant energy in phase space (further explanation will follow). For deterministic methods the microcanonical ensemble, whose energy is an integral of motion, is natural. In addition, one can study systems for which an integral of motion is the temperature and (or) the pressure. In this case the system is not closed and can be represented as being in contact with a thermal reservoir (canonical ensemble). To model it, we can use an approach in which a number of degrees of freedom of the system are constrained (for example, by fixing the total kinetic energy, and hence the temperature).
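As a minimal illustration of the deterministic approach (a sketch with an assumed harmonic force law, not the course's molecular dynamics code), here is a velocity-Verlet integration of the equations of motion for one particle; the total energy stays nearly constant along the trajectory, as expected in the microcanonical setting:

```python
# Velocity-Verlet integration for a 1D harmonic oscillator; the force law and
# all parameters are illustrative assumptions.
def force(x, k=1.0):
    return -k * x          # harmonic force, stands in for interparticle forces

x, v, dt, m = 1.0, 0.0, 0.01, 1.0
for step in range(1000):
    a = force(x) / m
    x += v * dt + 0.5 * a * dt**2          # update position
    a_new = force(x) / m
    v += 0.5 * (a + a_new) * dt            # update velocity
energy = 0.5 * m * v**2 + 0.5 * 1.0 * x**2
print(f"x={x:.4f}, v={v:.4f}, E={energy:.6f}")   # E stays close to 0.5 (conserved)
```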

As we have already noted, when processes in a system occur unpredictably, such events and the quantities associated with them are called random, and the algorithms for modeling processes in the system are called probabilistic (stochastic). The Greek stochastikos literally means "able to guess".

Stochastic methods use a slightly different approach than deterministic ones: only the configurational part of the problem needs to be calculated, since the equations for the momenta of the system can always be integrated. The question that then arises is how to carry out the transitions from one configuration to another, which in the deterministic approach are determined by the momenta. In stochastic methods such transitions are carried out by probabilistic evolution in a Markov process; the Markov process is the probabilistic analogue of the model's own dynamics.

This approach has the advantage that it allows one to model systems that do not have any inherent dynamics.

Unlike deterministic methods, stochastic methods are simpler and faster to implement on a computer, but to obtain values close to the true ones good statistics are required, which means modeling a large ensemble of particles.

An example of a purely stochastic method is the Monte Carlo method. Stochastic methods use the important concept of a Markov process (Markov chain). A Markov process is the probabilistic analogue of a process in classical mechanics. A Markov chain is characterized by the absence of memory: the statistical characteristics of the near future are determined only by the present, without regard for the past.
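As a sketch of how such a Markov chain is realized in practice, here is a minimal Metropolis Monte Carlo example for one particle in an assumed harmonic potential (the potential, temperature and step size are illustrative choices, not taken from the source):

```python
# Metropolis Monte Carlo sampling of a single particle in U(x) = x^2/2 at
# temperature T; each accepted trial move is one step of the Markov chain.
import math, random

random.seed(1)
T, step, x = 1.0, 0.5, 0.0
samples = []
for _ in range(100_000):
    x_new = x + random.uniform(-step, step)          # trial configuration
    dU = 0.5 * x_new**2 - 0.5 * x**2                 # energy change
    if dU <= 0 or random.random() < math.exp(-dU / T):
        x = x_new                                    # accept move (Metropolis rule)
    samples.append(x)
mean_U = sum(0.5 * s * s for s in samples) / len(samples)
print(f"<U> = {mean_U:.3f}  (equipartition predicts T/2 = {T/2})")
```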


Random walk model

Example (formal)

Let us assume that particles are placed at arbitrary nodes of a two-dimensional lattice. At each time step a particle "jumps" to one of the free neighbouring positions, i.e. it can choose the direction of its jump to any of the four nearest sites. After a jump the particle "does not remember" where it jumped from. This case corresponds to a random walk and is a Markov chain. The result of each step is a new state of the particle system; the transition from one state to another depends only on the previous state, i.e. the probability of the system being in state i depends only on state i-1.

What physical processes in a solid does the formal random-walk model just described resemble?

Diffusion, of course: the very processes whose mechanisms we considered in the heat and mass transfer course (third year). As an example, recall ordinary classical self-diffusion in a crystal, when, without changing their visible properties, atoms periodically exchange places of temporary residence and wander through the lattice by the so-called "vacancy" mechanism. This is also one of the most important diffusion mechanisms in alloys. The phenomenon of atomic migration in solids plays a decisive role in many traditional and non-traditional technologies: metallurgy, metalworking, the creation of semiconductors and superconductors, protective coatings and thin films.

It was discovered by Roberts-Austen in 1896 by observing the diffusion of gold into lead. Diffusion is the process of redistribution of atomic concentrations in space through chaotic (thermal) migration. From the point of view of thermodynamics there can be two driving causes: entropy (always) and energy (sometimes). The entropic cause is the increase in disorder when atoms of different kinds are mixed. The energy cause favours the formation of an alloy when it is more advantageous to have atoms of different types side by side, and favours diffusion decomposition when the energy gain is achieved by grouping atoms of the same type together.

The most common diffusion mechanisms are:

    vacancy

    interstitial

    displacement mechanism

To realize the vacancy mechanism, at least one vacancy is required. A vacancy migrates when one of the neighbouring atoms moves into the unoccupied site; an atom can make a diffusion jump only if there is a vacancy next to it. With a jump length of the order of the lattice spacing and a jump period of the order of the period of thermal vibrations of an atom at a lattice site, at a temperature T = 1330 K (6 K below the melting point) a vacancy makes so many jumps per second that its path in one second, measured along the broken line, is about 3 m (about 10 km/h), while its straight-line displacement is hundreds of times shorter.

Nature has arranged it so that within 1 s a vacancy changes its place of residence many times, covering about 3 m along the broken line yet moving only about 10 microns in a straight line. Atoms behave more calmly than vacancies, but they too change their place of residence about a million times per second and drift at a speed of roughly 1 m/hour.

Thus, one vacancy per several thousand atoms is enough to stir the atoms at the micro level at temperatures close to the melting point.

Let us now construct a random-walk model for the phenomenon of diffusion in a crystal. The wandering of an individual atom is chaotic and unpredictable; however, for an ensemble of wandering atoms statistical regularities should appear. We will consider uncorrelated jumps.

This means that if r_i and r_j are the displacements of an atom in the i-th and j-th jumps, then after averaging over the ensemble of wandering atoms

⟨r_i · r_j⟩ = ⟨r_i⟩ · ⟨r_j⟩

(the average of the product equals the product of the averages). If the walk is completely random, all directions are equivalent, ⟨r_i⟩ = 0, and hence ⟨r_i · r_j⟩ = 0 for i ≠ j.

Let each particle of the ensemble make N elementary jumps. Then its total displacement is

R = r_1 + r_2 + ... + r_N,

and the mean square of the displacement is

⟨R²⟩ = Σ_i ⟨r_i²⟩ + 2 Σ_{i<j} ⟨r_i · r_j⟩.

Since there is no correlation, the second term is 0.

Let each jump have the same length h and a random direction, and let the average number of jumps per unit time be ν. Then

⟨R²⟩ = N h².

It is obvious that N = ν t. Let us call the quantity D = ν h²/2 the diffusion coefficient of the wandering atoms. Then, for the one-dimensional case,

⟨R²⟩ = 2 D t;

for the three-dimensional case,

⟨R²⟩ = 6 D t.

We have obtained the parabolic law of diffusion: the mean square of the displacement is proportional to the wandering time.

This is exactly the problem we have to solve in the next laboratory work - modeling one-dimensional random walks.

Numerical model.

We define an ensemble of M particles, each of which takes N steps, independently of each other, to the right or to the left with the same probability. Step length = h.

For each particle we calculate the square of its displacement after N steps, x_k². Then we average over the ensemble:

⟨x²⟩ = (1/M) Σ_{k=1..M} x_k².

The quantity ⟨x²⟩ = N h² = 2 D t, if t = N τ (τ is the average time of one step), i.e. the mean square of the displacement is proportional to the random-walk time: the parabolic law of diffusion.
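A minimal sketch of this numerical model (M, N and h are assumed values) might look as follows:

```python
# An ensemble of M independent 1D walkers, each making N steps of length h to
# the left or right with equal probability; we check <x^2> ≈ N * h^2.
import numpy as np

rng = np.random.default_rng(42)
M, N, h = 10_000, 1_000, 1.0
steps = rng.choice([-h, h], size=(M, N))   # each row is one particle's steps
x = steps.sum(axis=1)                      # displacement of each particle
mean_sq = np.mean(x**2)                    # ensemble average of x^2
print(f"<x^2> = {mean_sq:.1f}, expected N*h^2 = {N * h**2:.1f}")
```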

Mathematical models in economics and programming

1. Deterministic and probabilistic mathematical models in economics. Advantages and Disadvantages

Methods for studying economic processes are based on the use of mathematical (deterministic and probabilistic) models representing the process, system or type of activity under study. Such models give a quantitative description of the problem and serve as the basis for making management decisions when searching for the optimal option. How justified are these decisions, are they the best possible, have all the factors determining the optimal solution been taken into account and weighed, and what criterion shows that a given solution is really the best? This is the range of questions that are of great importance for production managers, and the answers can be found using operations research methods [Chesnokov S. V. Deterministic Analysis of Socio-Economic Data. Moscow: Nauka, 1982, p. 45].

One of the principles of forming a control system is the method of cybernetic (mathematical) models. Mathematical modeling occupies an intermediate position between experiment and theory: there is no need to build a real physical model of the system; it will be replaced by a mathematical model. The peculiarity of the formation of a control system lies in the probabilistic, statistical approach to control processes. In cybernetics, it is accepted that any control process is subject to random, disturbing influences. Thus, the production process is influenced by a large number of factors, which cannot be taken into account in a deterministic manner. Therefore, the production process is considered to be influenced by random signals. Because of this, enterprise planning can only be probabilistic.

For these reasons, when speaking about mathematical modeling of economic processes, they often mean probabilistic models.

Let us describe each type of mathematical model.

Deterministic mathematical models are characterized by the fact that they describe the relationship of some factors with an effective indicator as a functional dependence, i.e. in deterministic models, the effective indicator of the model is presented in the form of a product, a quotient, an algebraic sum of factors, or in the form of any other function. This type of mathematical models is the most common, since, being quite simple to use (compared to probabilistic models), it allows one to understand the logic of the action of the main factors in the development of the economic process, quantify their influence, understand which factors and in what proportions it is possible and advisable to change to increase production efficiency.

Probabilistic mathematical models are fundamentally different from deterministic ones in that in probabilistic models the relationship between factors and the resulting attribute is probabilistic (stochastic): with a functional dependence (deterministic models), the same state of factors corresponds to a single state of the resulting attribute, whereas in probabilistic models one and the same state of factors corresponds to a whole set of states of the resulting attribute [Tolstova Yu. N. Logic of Mathematical Analysis of Economic Processes. Moscow: Nauka, 2001, pp. 32-33].

The advantage of deterministic models is their ease of use. The main drawback is the low adequacy of reality, since, as noted above, most economic processes are probabilistic in nature.

The advantage of probabilistic models is that, as a rule, they are more consistent with reality (more adequate) than deterministic ones. However, the disadvantage of probabilistic models is the complexity and laboriousness of their application, so in many situations it is enough to limit ourselves to deterministic models.

2. Statement of the linear programming problem using the example of the food ration problem

The first formulation of a linear programming problem, as a proposal for drawing up an optimal transportation plan minimizing the total mileage, was given in the work of the Soviet economist A. N. Tolstoy in 1930.

Systematic studies of linear programming problems and the development of general methods for solving them were further developed in the works of Russian mathematicians L. V. Kantorovich, V. S. Nemchinov and other mathematicians and economists. Also, many works by foreign and, above all, American scientists are devoted to linear programming methods.

The linear programming problem is to maximize (minimize) a linear function

f = c_1 x_1 + c_2 x_2 + ... + c_n x_n

under the restrictions

a_{i1} x_1 + a_{i2} x_2 + ... + a_{in} x_n ≤ b_i,  i = 1, ..., m,   (*)

and all x_j ≥ 0.

Comment. The inequalities can also have the opposite sign; by multiplying the corresponding inequalities by (-1) one can always obtain a system of the form (*).

If the number of variables of the constraint system and the objective function in the mathematical model of the problem is 2, then it can be solved graphically.

So, we need to maximize the function f subject to the system of constraints.

Let us turn to one of the inequalities of the system of restrictions.

From a geometric point of view, all points satisfying this inequality either lie on the corresponding line or belong to one of the half-planes into which this line divides the plane. To find out which one, it is enough to check which half-plane contains some test point (x_0; y_0).

Remark 2. If the line does not pass through the origin, it is easiest to take the point (0; 0).

The non-negativity conditions also define half-planes with the corresponding boundary lines. Let us assume that the system of inequalities is consistent; then the half-planes, intersecting, form a common part, which is a convex set and represents the set of points whose coordinates are solutions of the system, the set of admissible solutions. The set of these points (solutions) is called the solution polygon. It can be a point, a ray, a polygon, or an unbounded polygonal region. Thus, the linear programming problem is to find a point of the solution polygon at which the objective function takes its maximum (minimum) value. This point exists when the solution polygon is not empty and the objective function on it is bounded from above (from below). Under these conditions the objective function attains its maximum value at one of the vertices of the solution polygon. To determine this vertex, we construct the level line c_1 x_1 + c_2 x_2 = h (where h is some constant); most often the line c_1 x_1 + c_2 x_2 = 0 is taken. It remains to find out the direction of motion of this line; this direction is determined by the gradient (antigradient) of the objective function.

The gradient vector (c_1; c_2) is perpendicular to the level line at every point, so the value of f increases as the line is moved in the direction of the gradient (and decreases in the direction of the antigradient). To see this, we draw lines parallel to the initial one, shifting them in the direction of the gradient (antigradient).

We will continue these constructions until the line passes through the last vertex of the solution polygon. This point determines the optimal value.

So, finding a solution to a linear programming problem using the geometric method includes the following steps:

Construct the lines whose equations are obtained by replacing the inequality signs in the constraints with equality signs.

Find the half-planes defined by each of the constraints of the problem.

Find a solution polygon.

Construct the gradient vector of the objective function.

Construct a level line of the objective function.

Construct parallel level lines, moving in the direction of the gradient or antigradient, and thereby find the point at which the function takes its maximum or minimum value, or establish that the function is unbounded from above (from below) on the admissible set.

Determine the coordinates of the maximum (minimum) point of the function and calculate the value of the objective function at this point.
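The geometric recipe above can be mimicked numerically for two variables by enumerating the intersection points of the boundary lines and keeping the feasible ones; the coefficients below are made up purely for illustration:

```python
# The optimum of f = c1*x1 + c2*x2 over a solution polygon lies at a vertex,
# so we enumerate intersections of constraint boundaries and check feasibility.
from itertools import combinations
import numpy as np

c = np.array([3.0, 2.0])                    # objective f = 3*x1 + 2*x2 (assumed)
A = np.array([[1.0, 1.0], [2.0, 1.0],       # constraints A @ x <= b, including
              [-1.0, 0.0], [0.0, -1.0]])    # -x1 <= 0, -x2 <= 0 (non-negativity)
b = np.array([4.0, 6.0, 0.0, 0.0])

best = None
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                            # parallel boundary lines
    v = np.linalg.solve(M, b[[i, j]])       # candidate vertex
    if np.all(A @ v <= b + 1e-9):           # keep only feasible vertices
        val = c @ v
        if best is None or val > best[0]:
            best = (val, v)
print(best)   # maximum value and the vertex where it is attained
```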

Problem about rational nutrition (problem about food ration)

Statement of the problem

The farm fattens livestock for commercial purposes. For simplicity, let us assume that there are only four types of products: P1, P2, P3, P4; the unit cost of each product is C1, C2, C3, C4, respectively. From these products a ration must be composed that contains: proteins, at least b1 units; carbohydrates, at least b2 units; fats, at least b3 units. For the products P1, P2, P3, P4 the content of proteins, carbohydrates and fats (in units per unit of product) is known and given in a table, where a_ij (i = 1, 2, 3, 4; j = 1, 2, 3) are specific numbers; the first index indicates the product number, the second the nutrient number (proteins, carbohydrates, fats).
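Once the contents a_ij, the costs and the requirements are specified, the ration problem can be handed to any LP solver; the sketch below uses hypothetical numbers (none are from the source) and scipy.optimize.linprog:

```python
# Minimize total cost C·x subject to nutrient lower bounds. linprog expects
# "<=" constraints, so the ">=" requirements are multiplied by -1.
import numpy as np
from scipy.optimize import linprog

C = np.array([4.0, 6.0, 3.0, 5.0])          # unit costs C1..C4 (assumed)
a = np.array([[2.0, 1.0, 3.0],              # a_ij: nutrients per unit of P1..P4
              [3.0, 2.0, 1.0],
              [1.0, 4.0, 2.0],
              [2.0, 3.0, 2.0]])
b = np.array([20.0, 25.0, 18.0])            # required proteins, carbs, fats

res = linprog(c=C, A_ub=-a.T, b_ub=-b, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)                       # optimal amounts of P1..P4 and total cost
```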

Probabilistic-deterministic mathematical models for forecasting energy load curves are a combination of statistical and deterministic models. It is these models that make it possible to achieve the best forecasting accuracy and adaptability to the changing power-consumption process.

They are based on the concept of a standard load model, i.e. an additive decomposition of the actual load P(t, d) into a standard component P_st(t, d) (the base component, a deterministic trend) and a residual component ε(t, d):

P(t, d) = P_st(t, d) + ε(t, d),

where t is the time within the day and d is the number of the day, for example within a year.

Within the standard component, individual terms are also separated out additively during modeling: a term for the change in the average seasonal load; a term for the weekly cycle of power consumption; a trend term modeling additional effects associated with the seasonal change in sunrise and sunset times; a term accounting for the dependence of power consumption on meteorological factors, in particular temperature; and so on.

Let us consider in more detail approaches to modeling individual components based on the deterministic and statistical models mentioned above.

Modeling of the average seasonal load is often done using a simple moving average:

P_s(t) = (1/N) Σ_d P(t, d),

where the sum runs over the N ordinary (working) days contained in the past n weeks; "special" or "irregular" days, holidays, etc. are excluded from those weeks. The estimate is updated daily by averaging the data over the past n weeks.

Modeling of the weekly cycle is also carried out with a moving average of the same form, updated weekly by averaging the data over the past n weeks, or with an exponentially weighted moving average:

S_d = α P_d + (1 - α) S_{d-1},

where α is an empirically determined smoothing parameter (0 < α < 1).
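As a small sketch (with an assumed smoothing constant and made-up data), the exponentially weighted moving average update looks like this:

```python
# Exponentially weighted moving average: S_d = alpha*P_d + (1-alpha)*S_{d-1}.
def ewma(series, alpha=0.3):
    """Return the exponentially smoothed series."""
    smoothed = [series[0]]                  # initialize with the first observation
    for p in series[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return smoothed

weekly_loads = [100, 104, 98, 107, 103, 110, 95]   # hypothetical weekly values
print(ewma(weekly_loads))
```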

In one work, seven such components are used, one for each day of the week, and each is determined separately using an exponential smoothing model.

The authors of another work use double exponential smoothing of the Holt-Winters type for this modeling. Yet another work uses a harmonic representation in the form of a finite Fourier series with a period of 52 weeks, with parameters estimated from empirical data (the value "52" corresponds to the number of weeks in a year). However, the problem of adaptive operational estimation of these parameters is not completely solved in that work.

Modeling of this component is in some cases carried out using finite Fourier series: with a weekly period, with a daily period, or with separate modeling of working days and weekends, with periods of five and two days respectively.

To model the trend component, either polynomials of the 2nd to 4th order or various nonlinear empirical functions are used, for example the sum of a fourth-degree polynomial, describing the relatively slow, smoothed change of the load during the daytime over the seasons, and functions modeling the effects associated with the seasonal change in sunrise and sunset times.

To take the dependence of power consumption on meteorological factors into account, an additional temperature component is introduced in some cases. One work theoretically substantiates its inclusion in the model, but the possibilities of modeling the temperature effect are considered only to a limited extent. Thus, to represent the temperature component under Egyptian conditions, a polynomial model in the air temperature T_t at the t-th hour is used.

A regression method is used to "normalize" the peaks and troughs of the process with respect to temperature, after which the normalized data are represented by a one-dimensional autoregressive integrated moving average (ARIMA) model.

Temperature is also taken into account using a recursive Kalman filter whose inputs include an external factor, the temperature forecast; alternatively, cubic polynomial interpolation of the hourly loads is used over the short-term range, with the influence of temperature included in the model.

To take into account average daily temperature forecasts and the various weather conditions under which the analysed process may unfold, and at the same time to increase the robustness of the model, a special modification of the moving-average model has been proposed: for the various weather conditions, associated with certain probabilities, a series of m load curves is formed, and the forecast is defined as the conditional mathematical expectation over them. The probabilities are updated by the Bayes method as new actual load values and factors become available during the day.

Modeling of the residual component is carried out both with one-dimensional models and with multidimensional ones that take meteorological and other external factors into account. Thus, an autoregressive model AR(k) of order k is often used as a one-dimensional (single-factor) model:

ε(t) = a_1 ε(t-1) + a_2 ε(t-2) + ... + a_k ε(t-k) + ξ(t),

where ξ(t) is residual white noise. To predict hourly (half-hourly) readings, AR(1), AR(2) and even AR(24) models are used. Even when the generalized ARIMA model is adopted, its application in practice comes down to AR(1) or AR(2) models for both five-minute and hourly load measurements.
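As an illustration (synthetic data, assumed order), an AR(1) model for the residual component can be fitted by least squares and used for a one-step forecast:

```python
# Fit eps(t) = a1 * eps(t-1) + xi(t) by least squares and forecast one step ahead.
import numpy as np

rng = np.random.default_rng(7)
true_a1 = 0.6
eps = [0.0]
for _ in range(500):                         # synthetic residual series
    eps.append(true_a1 * eps[-1] + rng.normal(scale=0.2))
eps = np.array(eps)

x, y = eps[:-1], eps[1:]
a1_hat = (x @ y) / (x @ x)                   # least-squares AR(1) coefficient
forecast = a1_hat * eps[-1]                  # one-step-ahead forecast
print(f"a1_hat = {a1_hat:.3f}, one-step forecast = {forecast:.3f}")
```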

Another single-factor model for the residual component is single or double exponential smoothing. This model effectively picks up short-term trends in the residual load. Simplicity, economy, recursiveness and computational efficiency give the exponential smoothing method wide application. Using simple exponential smoothing with two different constants, two exponential averages are determined, and the look-ahead forecast of the residual component is then obtained by combining these two averages.
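The combining formula itself is not reproduced in the source, so the sketch below uses one common variant, Brown's double exponential smoothing, purely as an illustrative stand-in with an assumed smoothing constant:

```python
# Brown's double exponential smoothing: two exponential averages are combined
# into level and trend estimates, giving a look-ahead forecast.
def brown_forecast(series, alpha=0.4, horizon=1):
    s1 = s2 = series[0]
    for x in series[1:]:
        s1 = alpha * x + (1 - alpha) * s1    # first exponential average
        s2 = alpha * s1 + (1 - alpha) * s2   # second exponential average
    level = 2 * s1 - s2
    trend = alpha / (1 - alpha) * (s1 - s2)
    return level + trend * horizon

residuals = [0.5, 0.8, 0.6, 1.0, 1.2, 1.1, 1.4]   # hypothetical residual loads
print(brown_forecast(residuals, horizon=1))
```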


