Introduction of the concept of temperature by the zeroth law of thermodynamics. Thermal equilibrium and the zeroth law of thermodynamics

Physical chemistry

Sections of physical chemistry

Physical chemistry is the science that studies chemical phenomena and establishes their general laws on the basis of physical approaches and physical experimental methods.

Sections of physical chemistry:

· Structure of matter

· Chemical thermodynamics and thermochemistry

· Chemical and phase equilibrium

· Solutions and electrochemistry

· Chemical kinetics. Catalysis. Photochemistry

· Radiation chemistry

Basic concepts and quantities

Temperature T is a measure of how strongly a body is heated, determined by the distribution of molecules and other particles over the speeds of their thermal motion and by the degree of population of the upper energy levels of the molecules. In thermodynamics it is customary to use the absolute temperature, measured from absolute zero, which is always positive. The SI unit of absolute temperature is the kelvin (K), numerically equal to one degree on the Celsius scale.

Work and heat. The quantity of work w and the quantity of heat Q are, in the general case, not functions of state, since their values are determined by the type of process by which the system changed its state. The exceptions are the work of expansion and the thermal effect of a chemical reaction.

The term “thermodynamics” itself comes from the Greek words therme (heat) and dynamis (force, power), since this science is based on studying the balance of heat and work in systems during various processes.

Heat capacity C is the ratio of the amount of heat absorbed by a body on heating to the temperature change caused by this absorption. One distinguishes true and average, molar and specific, isobaric and isochoric heat capacities.

The true heat capacity is the ratio of an infinitesimal amount of heat to the infinitesimal temperature change it produces:

C_true = dQ/dT

The average heat capacity is the ratio of a macroscopic amount of heat to the temperature change in a macroscopic process:

C_avg = ΔQ/ΔT.

In physical terms, the average heat capacity is the amount of heat required to heat a body by one degree (1 °C or 1 K).

The heat capacity per unit mass of a substance is the specific heat capacity (SI unit J/(kg·K)). The heat capacity of one mole of a substance is the molar heat capacity (SI unit J/(mol·K)). The heat capacity measured at constant volume is the isochoric heat capacity C_V; the heat capacity measured at constant pressure is the isobaric heat capacity C_p. Between C_p and C_V there is the relation (for one mole of an ideal gas)

C_p = C_V + R

where R is the universal gas constant.
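As a small numerical sketch of this relation in Python (the value C_V = (3/2)R for a monatomic ideal gas is a standard textbook result, not something stated above):

```python
# Sketch: the relation C_p = C_V + R for one mole of an ideal gas.
R = 8.314  # universal gas constant, J/(mol*K)

def isobaric_heat_capacity(c_v: float) -> float:
    """Return the molar isobaric heat capacity C_p = C_V + R (ideal gas)."""
    return c_v + R

c_v = 1.5 * R                      # monatomic ideal gas: C_V = (3/2) R
c_p = isobaric_heat_capacity(c_v)  # comes out to (5/2) R
print(round(c_p, 3))               # 20.785 J/(mol*K)
```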

Thermodynamic systems

A thermodynamic system is a specific object of thermodynamic study, mentally separated from its surroundings. It is a set of macroscopic bodies that can interact with one another and with the external environment, exchanging energy and matter with them. A thermodynamic system consists of so large a number of structural particles that its state can be characterized by macroscopic parameters: density, pressure, concentrations of substances, temperature, etc.

Thermodynamic systems (or simply systems, for short) can be classified by various criteria:

- by state: equilibrium and non-equilibrium;

- by interaction with the environment (or with other systems): open (can exchange both energy and matter with the environment), closed (can exchange only energy) and isolated (can exchange neither matter nor energy);

- by the number of phases: single-phase (homogeneous) and multiphase (heterogeneous);

- by the number of components (the chemical substances making up the system): single-component and multicomponent.

The internal energy U of the system under consideration is the sum of all kinds of energy of motion and interaction of the particles (molecules, atoms, ions, radicals, etc.) making up the system: the kinetic energy of the chaotic motion of the molecules relative to the system's centre of mass and the potential energy of their interaction with one another. The components of internal energy are translational U_trans (the energy of translational motion of particles, such as molecules of gases and liquids), rotational U_rot (the energy of rotational motion of particles, for example the rotation of gas and liquid molecules, or of atoms about chemical σ-bonds), vibrational U_vib (the energy of intramolecular vibrational motion of atoms and the vibrational energy of particles at the sites of a crystal lattice), electronic U_el (the energy of electron motion in atoms and molecules), nuclear U_nuc, and others. The concept of internal energy does not include the kinetic and potential energy of the system as a whole. The SI unit of internal energy is the joule; it is commonly given per mole (J/mol) or per unit mass (J/kg).

The absolute value of the internal energy cannot be calculated from the equations of thermodynamics; only its change in a particular process can be measured. For thermodynamic purposes, however, this proves sufficient.

State parameters

The state of a system is the totality of the physical and chemical properties that characterize it. It is described by state parameters: temperature T, pressure p, volume V, concentration C, etc. To each state of the system, besides particular values of the parameters, there also correspond particular values of quantities that depend on the parameters and are called thermodynamic functions. If the change in a thermodynamic function does not depend on the path of the process but is determined only by the initial and final states, the function is called a state function. For example, internal energy is a state function, since its change in any process can be calculated as the difference between the final and initial values:

ΔU = U2 - U1.

Among the state functions are the characteristic functions, whose set can characterize the state of the system with sufficient completeness (internal energy, enthalpy, entropy, Gibbs energy, etc.).

A thermodynamic process is any change in the system accompanied by a change of parameters. The driving forces of processes are factors: non-uniformity of the values of certain parameters (for example, a temperature factor arising from different temperatures in different parts of the system). A process occurring at constant pressure is called isobaric; at constant volume, isochoric; at constant temperature, isothermal; without exchange of heat with the surroundings, adiabatic.

Heat is the form of energy transfer associated with the random (“thermal”) motion of the particles making up a body (molecules, atoms, etc.). The quantitative measure of the energy transferred in heat exchange is the quantity of heat Q. The SI unit of the quantity of heat is the joule (J). Alongside the joule, a non-SI unit, the calorie (cal), is often used: 1 cal = 4.184 J. The term “heat” is often used as a synonym for “quantity of heat”.

Work is a form of energy transfer from one system to another, associated with action against external forces and accomplished through the ordered, directed motion of the system or its individual parts. The quantitative measure of the energy transferred as work is the quantity of work w. The SI unit of work is the joule (J). The term “work” is often used as a synonym for “quantity of work”.

Thermochemistry.

Thermochemistry is the branch of chemical thermodynamics concerned with determining the thermal effects of chemical reactions and establishing their dependence on various conditions. Thermochemistry also deals with measuring the heat capacities of substances and the heats of phase transitions (including the formation and dilution of solutions).

Calorimetric measurements

The main experimental method of thermochemistry is calorimetry. The amount of heat released or absorbed in a chemical reaction is measured with an instrument called a calorimeter.

Calorimetric measurements make it possible to calculate extremely important quantities: the thermal effects of chemical reactions, heats of dissolution, and the energies of chemical bonds. Bond energies determine the reactivity of chemical compounds and, in some cases, the pharmacological activity of medicinal substances. Not all chemical reactions and physicochemical processes can be studied calorimetrically, however, but only those satisfying two conditions: (1) the process must proceed to completion (be irreversible), and (2) it must proceed quickly enough that the released heat does not have time to dissipate into the environment.

Enthalpy

Most chemical processes, whether in nature, in the laboratory, or in industry, occur not at constant volume but at constant pressure. In such cases, of the various kinds of work, often only one is performed: the work of expansion, equal to the product of the pressure and the change in volume of the system:

w = pΔV.

In this case the equation of the first law of thermodynamics can be written as

ΔU = Q_p - pΔV

Q_p = ΔU + pΔV

(the subscript p shows that the amount of heat is measured at constant pressure). Replacing the changes in the quantities by the corresponding differences, we obtain:

Q_p = U2 - U1 + p(V2 - V1)

Q_p = (U2 + pV2) - (U1 + pV1)

Q_p = (U + pV)2 - (U + pV)1 = H2 - H1

Since p and V are state parameters and U is a state function, the sum U + pV = H is also a state function. This function is called the enthalpy. Thus the heat absorbed or released by a system in a process at constant pressure is equal to the change in enthalpy:

Q_p = ΔH

There is a relationship between the change in enthalpy and the change in the internal energy of the system, expressed by the equations

ΔH = ΔU + ΔnRT or ΔU = ΔH - ΔnRT,

which can be obtained using the Mendeleev-Clapeyron equation

pV = nRT, whence pΔV = ΔnRT.

The quantities ΔH of various processes are relatively easily measured in calorimetric installations operating at constant pressure, so the enthalpy change is widely used in thermodynamic and thermochemical studies. The SI unit of enthalpy is the joule (J); molar enthalpy is given in J/mol.
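The relation ΔH = ΔU + ΔnRT can be sketched in Python; the ammonia-synthesis example and its ΔH value (about -92.2 kJ/mol at 298 K) are assumed reference numbers used only for illustration, not values taken from the text:

```python
# Sketch: converting the constant-pressure heat ΔH to ΔU via ΔU = ΔH - Δn*R*T.
R = 8.314e-3  # gas constant in kJ/(mol*K)

def delta_u_from_delta_h(delta_h: float, delta_n: int, t: float) -> float:
    """ΔU = ΔH - Δn*R*T; Δn = moles of gaseous products minus gaseous reactants."""
    return delta_h - delta_n * R * t

# N2(g) + 3 H2(g) = 2 NH3(g): Δn = 2 - 4 = -2; assumed ΔH ≈ -92.2 kJ/mol at 298.15 K
print(round(delta_u_from_delta_h(-92.2, -2, 298.15), 2))  # -87.24
```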

Hess's law

In 1840 G. I. Hess formulated the fundamental law of thermochemistry, which he called the “law of the constancy of heat sums”:

Whenever a chemical compound is formed, the same amount of heat is always released, regardless of whether the compound is formed directly or indirectly, in several stages.

In modern interpretations, the law sounds like this:

1. If the specified final products can be obtained from given starting materials in different ways, then the total heat of the process on any one path is equal to the total heat of the process on any other path.

2. The thermal effect of a chemical reaction does not depend on the path of the process, but depends only on the type and properties of the starting substances and products .

3. The thermal effect of a series of sequential reactions is equal to the thermal effect of any other series of reactions with the same starting materials and final products .

For example, an aqueous solution of ammonium chloride (NH4Cl·aq) can be obtained from gaseous ammonia and hydrogen chloride and liquid water (aq) in the following two ways:

I. 1) NH3(g) + aq = NH3·aq + ΔH1 (ΔH1 = -34.936 kJ/mol);

2) HCl(g) + aq = HCl·aq + ΔH2 (ΔH2 = -72.457 kJ/mol);

3) NH3·aq + HCl·aq = NH4Cl·aq + ΔH3 (ΔH3 = -51.338 kJ/mol);

ΔH = ΔH1 + ΔH2 + ΔH3 = -34.936 - 72.457 - 51.338 = -158.749 kJ/mol

II. 1) NH3(g) + HCl(g) = NH4Cl(s) + ΔH4 (ΔH4 = -175.100 kJ/mol);

2) NH4Cl(s) + aq = NH4Cl·aq + ΔH5 (ΔH5 = +16.393 kJ/mol);

ΔH = ΔH4 + ΔH5 = -175.100 + 16.393 = -158.707 kJ/mol

As can be seen, the thermal effect of the process carried out along path I is equal to the thermal effect of the process carried out along path II (the difference of about 0.04 kJ/mol, roughly 0.03% of the absolute value, is well within the experimental error).
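This path-sum bookkeeping is easy to check in a few lines of Python, using the ΔH values quoted above (tiny mismatches with the printed totals come from rounding in the quoted components):

```python
# Hess's law check for the two routes to NH4Cl(aq); ΔH in kJ/mol, from the text.
path_I = [-34.936, -72.457, -51.338]   # dissolve NH3, dissolve HCl, neutralize
path_II = [-175.100, 16.393]           # gas-phase reaction, dissolve NH4Cl(s)

dh_I, dh_II = sum(path_I), sum(path_II)
print(round(dh_I, 3), round(dh_II, 3))   # both close to -158.7
print(round(abs(dh_I - dh_II), 3))       # difference well under 0.1 kJ/mol
```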

One more example: the combustion of graphite to CO2 can be carried out in two ways:

I. C(s) + O2(g) = CO2(g) + ΔH1 (ΔH1 = -393.505 kJ/mol);

II. C(s) + 1/2 O2(g) = CO(g) + ΔH2 (ΔH2 = -110.541 kJ/mol);

CO(g) + 1/2 O2(g) = CO2(g) + ΔH3 (ΔH3 = -282.964 kJ/mol);

and in this case

ΔH = ΔH2 + ΔH3 = -110.541 + (-282.964) = -393.505 kJ/mol.

Hess's law makes it possible to calculate the thermal effects of many reactions from a relatively small amount of reference data on the heats of combustion and formation of chemical substances, and in addition to calculate the thermal effects of reactions that cannot be measured calorimetrically at all, for example C(s) + 1/2 O2(g) = CO(g). This is achieved by applying the consequences of Hess's law.

1st consequence (Lavoisier-Laplace law): The thermal effect of the decomposition of a complex substance into simpler ones is numerically equal, but opposite in sign, to the thermal effect of the formation of that complex substance from those same simpler ones.

For example, the heat of decomposition of calcium carbonate (calcite) into calcium oxide and carbon dioxide,

CaCO3(s) = CO2(g) + CaO(s) + ΔH1,

is equal to +178.23 kJ/mol. Hence the formation of one mole of CaCO3 from CaO and CO2 releases the same amount of energy:

CaO(s) + CO2(g) = CaCO3(s) + ΔH2 (ΔH2 = -178.23 kJ/mol).

2nd consequence: If two reactions lead from different initial states to the same final state, the difference between their thermal effects is equal to the thermal effect of the transition from one initial state to the other.

For example, if the thermal effects of the combustion reactions of graphite and diamond are known:

C(gr) + O2 = CO2, ΔH = -393.51 kJ/mol

C(diam) + O2 = CO2, ΔH = -395.39 kJ/mol

one can calculate the thermal effect of the transition from one allotropic modification to the other:

C(gr) → C(diam) + ΔH_allotrope

ΔH_allotrope = -393.51 - (-395.39) = +1.88 kJ/mol

3rd consequence: If two reactions lead from the same initial state to different final states, the difference between their thermal effects is equal to the thermal effect of the transition from one final state to the other.

For example, using this consequence one can calculate the thermal effect of the combustion of carbon to CO:

C(gr) + O2 → CO2, ΔH = -393.505 kJ/mol

CO + 1/2 O2 → CO2, ΔH = -282.964 kJ/mol

C(gr) + 1/2 O2 → CO + ΔH_r

ΔH_r = -393.505 - (-282.964) = -110.541 kJ/mol.

4th consequence: The thermal effect of any chemical reaction is equal to the difference between the sums of the heats of formation of the reaction products and of the starting substances (taking into account the stoichiometric coefficients in the reaction equation):

ΔH_r = Σ(ν_i ΔH_f,i)prod - Σ(ν_i ΔH_f,i)react

For example, the thermal effect of the esterification reaction

CH3COOH(l) + C2H5OH(l) = CH3COOC2H5(l) + H2O(l) + ΔH_r

is

ΔH_r = (ΔH_f,CH3COOC2H5 + ΔH_f,H2O) - (ΔH_f,CH3COOH + ΔH_f,C2H5OH) =

= (-479.03 - 285.83) - (-484.09 - 276.98) = -3.79 kJ.
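The 4th consequence translates directly into code; the ΔH_f values (kJ/mol) below are the ones quoted in the esterification example:

```python
# ΔH_r = Σ ν ΔH_f(products) - Σ ν ΔH_f(reactants), values in kJ/mol (from the text).
dHf = {
    "CH3COOH(l)": -484.09,
    "C2H5OH(l)": -276.98,
    "CH3COOC2H5(l)": -479.03,
    "H2O(l)": -285.83,
}

def reaction_enthalpy(products: dict, reactants: dict) -> float:
    """products/reactants map a species name to its stoichiometric coefficient."""
    return (sum(n * dHf[s] for s, n in products.items())
            - sum(n * dHf[s] for s, n in reactants.items()))

dh_r = reaction_enthalpy({"CH3COOC2H5(l)": 1, "H2O(l)": 1},
                         {"CH3COOH(l)": 1, "C2H5OH(l)": 1})
print(round(dh_r, 2))  # -3.79 kJ
```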

5th consequence: The thermal effect of any chemical reaction is equal to the difference between the sums of the heats of combustion of the starting substances and of the reaction products (taking into account the stoichiometric coefficients in the reaction equation):

ΔH_r = Σ(ν_i ΔH_c,i)react - Σ(ν_i ΔH_c,i)prod

For example, the thermal effect of the esterification reaction from the previous example is

ΔH_r = (ΔH_c,CH3COOH + ΔH_c,C2H5OH) - (ΔH_c,CH3COOC2H5 + ΔH_c,H2O) =

= (-874.58 - 1370.68) - (-2246.39 - 0) = -1.13 kJ.

(The discrepancy between the two results is explained by the differing accuracy of the thermochemical data given in reference books.)

Heat of solution

The heat of solution ΔH_sol (from the word solution) is the thermal effect of dissolving a substance at constant pressure.

Integral and differential heats of solution are distinguished. The heat of dissolution of 1 mole of a substance to form a so-called infinitely dilute solution is the integral heat of solution. The integral heat of solution depends on the ratio of the amounts of solute and solvent and therefore on the concentration of the resulting solution. The thermal effect of dissolving 1 mole of a substance in a very large amount of an already existing solution of the same substance of some concentration (leading to an infinitesimal increase in concentration) is the differential heat of solution:

ΔH_dif = (∂(ΔH)/∂n)T,p

where n is the amount of the dissolved substance.


In its physical meaning, the differential heat of solution shows how the thermal effect of dissolving a substance changes as its concentration in the solution increases. The SI unit of the heat of solution is J/mol.

The integral heat of dissolution of crystalline substances (for example, inorganic salts, bases, etc.) is made up of two quantities: the enthalpy of converting the crystal lattice of the substance into an ionic gas (destruction of the crystal lattice), ΔH_latt, and the enthalpy of solvation (for aqueous solutions, hydration) of the molecules and of the ions formed from them on dissociation, ΔH_solv (ΔH_hydr):

ΔH_sol = ΔH_latt + ΔH_solv; ΔH_sol = ΔH_latt + ΔH_hydr

The quantities ΔH_latt and ΔH_solv are opposite in sign: solvation and hydration are always accompanied by the release of heat, while the destruction of the crystal lattice is accompanied by its absorption. Thus the dissolution of substances without a very strong crystal lattice (for example, the alkali-metal hydroxides NaOH, KOH, etc.) is accompanied by strong heating of the resulting solution, and the dissolution of well-hydrated liquid substances that have no crystal lattice at all (for example, sulfuric acid) by even greater heating, up to boiling. Conversely, the dissolution of substances with a strong crystal lattice, such as the halides of the alkali and alkaline-earth metals (KCl, NaCl, CaCl2), proceeds with absorption of heat and leads to cooling. (This effect is used in laboratory practice to prepare cooling mixtures.)

The sign of the overall thermal effect of dissolution therefore depends on which of its terms is greater in absolute value.

If the enthalpy of destruction of the crystal lattice of a salt is known, then by measuring its heat of solution one can calculate its enthalpy of solvation. Conversely, by measuring the heat of dissolution of a crystal hydrate (i.e., a hydrated salt), one can calculate with reasonable accuracy the enthalpy of destruction (the strength) of the crystal lattice.

The heat of solution of potassium chloride, equal to +17.577 kJ/mol at a concentration of 0.278 mol/l and 25 °C, has been proposed as a thermochemical standard for checking the operation of calorimeters.
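A sketch of the calculation described above, in Python. Only the KCl heat of solution (+17.577 kJ/mol) comes from the text; the lattice enthalpy of +701 kJ/mol is an assumed, literature-style value used purely for illustration:

```python
# ΔH_sol = ΔH_latt + ΔH_hydr  ->  ΔH_hydr = ΔH_sol - ΔH_latt (all in kJ/mol)
dh_sol = 17.577   # heat of solution of KCl (thermochemical standard, from the text)
dh_latt = 701.0   # assumed: enthalpy of breaking the KCl lattice into gaseous ions

dh_hydr = dh_sol - dh_latt  # hydration enthalpy of the K+ and Cl- ions
print(round(dh_hydr, 1))    # -683.4: hydration releases heat, so the sign is negative
```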

The temperature dependence of the heats of dissolution, as well as the thermal effects of chemical reactions, obeys the Kirchhoff equation.

When the solute and the solvent are chemically similar and there are no complications associated with ionization or solvation during dissolution, the heat of solution can be considered approximately equal to the heat of fusion of the solute. This mainly refers to the dissolution of organic substances in non-polar solvents.

Entropy

Entropy is a measure of the disorder of a system, related to thermodynamic probability.

The dissipation of energy among the components of a system can be calculated by the methods of statistical thermodynamics, which leads to the statistical definition of entropy. Since the direction of spontaneous change corresponds to the direction of increasing thermodynamic probability, energy dissipation, and hence entropy, is related to it. This connection was proved in 1872 by L. Boltzmann and is expressed by the Boltzmann equation

S = k ln W , (3.1)

where k is the Boltzmann constant.

From the statistical point of view, entropy is a measure of disorder in a system. The more regions of a system in which the particles are spatially ordered or the energy is unevenly distributed (which is also regarded as energy ordering), the smaller the thermodynamic probability. With chaotic mixing of particles and a uniform distribution of energy, when the particles cannot be distinguished by their energy state, the thermodynamic probability, and consequently the entropy, increases.
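Equation (3.1) in code; the W values below are toy numbers chosen only to show the trend that more microstates means higher entropy:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def entropy(W: float) -> float:
    """S = k ln W (Boltzmann equation)."""
    return k * math.log(W)

print(entropy(1))                   # 0.0: a single microstate, perfect order
print(entropy(1e6) > entropy(1e3))  # True: more microstates, more entropy
```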

Second law of thermodynamics

The second law can be expressed in several different formulations, each of which complements the others:

1. Heat cannot be transferred spontaneously from a colder body to a hotter one.

2. Energy of various kinds tends to turn into heat, and heat tends to dissipate.

3. No set of processes can be reduced solely to the transfer of heat from a cold body to a hot one, whereas the transfer of heat from a hot body to a cold one can be the sole result of a set of processes (R. Clausius).

4. No set of processes can be reduced solely to the transformation of heat into work, whereas the transformation of work into heat can be the sole result of a set of processes (W. Thomson).

5. It is impossible to build a cyclic machine that would convert heat into work without producing any other changes in the surrounding bodies (a so-called perpetual motion machine of the second kind) (W. Ostwald).

For a reversible Carnot cycle one can write:

η = (Q1 - Q2)/Q1 = (T1 - T2)/T1

where Q1 is the amount of heat supplied to the system, Q2 the amount of heat remaining in the system after the process, T1 and T2 the initial and final temperatures of the system, and η the efficiency of the process.

This equality is a mathematical expression of the second law of thermodynamics.

Third law of thermodynamics. Planck's postulate.

Absolute entropy

Since entropy is an extensive quantity, its value for a substance at any given temperature T is the sum of the contributions corresponding to every temperature in the range from 0 K to T. If in equation (3.5) the lower limit of the integration interval is taken equal to absolute zero, then

S_T = S_0 + ∫0^T (C_p/T) dT
Therefore, knowing the value of entropy at absolute zero, using this equation it would be possible to obtain the value of entropy at any temperature.

Careful measurements carried out at the end of the 19th century showed that as the temperature approaches absolute zero the heat capacity C_p of any substance tends to zero:

lim (T → 0) C_p = 0.

This means that the quantity C_p/T is finite or zero, and consequently the difference S_T - S_0 is always positive or zero. On the basis of these considerations, M. Planck (1912) proposed the postulate:

At absolute zero temperature, the entropy of any substance in the form of an ideal crystal is zero.

This postulate of Planck is one of the formulations of the third law of thermodynamics. It can be explained on the basis of statistical physics: for a perfectly ordered crystal at absolute zero, when there is no thermal motion of the particles, the thermodynamic probability W is equal to 1, and hence, in accordance with the Boltzmann equation (3.1), the entropy is zero:

S 0 = k ln 1 = 0

From Planck's postulate it follows that the entropy of any substance at temperatures above absolute zero is finite and positive. Accordingly, entropy is the only thermodynamic state function whose absolute value can be determined, and not merely its change in some process, as with the other state functions (for example, internal energy and enthalpy).

From the above equations it also follows that, at temperatures approaching absolute zero, it becomes impossible to withdraw any amount of heat, however small, from a cooled body, because of the vanishingly small heat capacity. In other words,

a body cannot be cooled to absolute zero by any finite number of operations.

This statement is called the principle of the unattainability of absolute zero and, together with Planck's postulate, is one of the formulations of the third law of thermodynamics. (Note that experiments have now reached temperatures as low as 0.00001 K.)

The principle of the unattainability of absolute zero is also connected with the thermal theorem of W. Nernst (1906), according to which, on approaching absolute zero, the values of ΔH and ΔG (where G is the Gibbs energy, discussed below, and ΔG = ΔH - TΔS) approach each other; that is, at T = 0 the equality

ΔG = ΔH

must hold.

The entropy change in a chemical reaction, ΔS°_r, can be calculated as the difference between the sums of the entropies of the products and of the starting substances, taken with the appropriate stoichiometric coefficients. For standard conditions:

ΔS°_r = Σ(ν_i S°_i)prod - Σ(ν_i S°_i)react

(The calculation uses the absolute entropies of the individual substances, not their changes as in calculations with other thermodynamic functions. The reason for this is explained in connection with the third law of thermodynamics.)
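The same formula in code; the standard entropies S° (J/(mol·K)) used for N2, H2 and NH3 are commonly tabulated reference values, not numbers taken from the text:

```python
# ΔS°_r = Σ ν S°(products) - Σ ν S°(reactants); S° in J/(mol*K), assumed tabulated values.
S0 = {"N2(g)": 191.6, "H2(g)": 130.7, "NH3(g)": 192.8}

def reaction_entropy(products: dict, reactants: dict) -> float:
    """products/reactants map a species name to its stoichiometric coefficient."""
    return (sum(n * S0[s] for s, n in products.items())
            - sum(n * S0[s] for s, n in reactants.items()))

# N2(g) + 3 H2(g) = 2 NH3(g): the number of gas moles drops, so ΔS°_r is negative
print(round(reaction_entropy({"NH3(g)": 2}, {"N2(g)": 1, "H2(g)": 3}), 1))  # -198.1
```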

Chemical equilibrium

Chemical equilibrium is a thermodynamic equilibrium in a system in which direct and reverse chemical reactions are possible.

Under certain conditions, the activities of the reactants can be replaced by concentrations or partial pressures. In these cases the equilibrium constant, expressed through the equilibrium concentrations, Kc, or through the partial pressures, Kp, takes the form (for a reaction aA + bB ⇌ dD + eE):

Kc = (C_D^d · C_E^e)/(C_A^a · C_B^b) (4.11)

Kp = (p_D^d · p_E^e)/(p_A^a · p_B^b) (4.12)

Equations (4.11) and (4.12) are variants of the law of mass action for reversible reactions at equilibrium: at constant temperature, the ratio of the equilibrium concentrations (partial pressures) of the final products to the equilibrium concentrations (partial pressures) of the starting reagents, each raised to the power of its stoichiometric coefficient, is a constant.

For gaseous substances, Kp and Kc are related by Kp = Kc(RT)^Δn, where Δn is the difference between the number of moles of gaseous products and of gaseous starting reagents.
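The Kp/Kc relation as a small helper in Python. The sample reaction N2O4(g) = 2 NO2(g) with Δn = +1 and the Kc value are assumed for illustration; with concentrations in mol/m³, Kp comes out in pascals:

```python
# Kp = Kc * (R*T)**Δn, with Δn = gaseous product moles minus gaseous reactant moles.
R = 8.314  # J/(mol*K)

def kp_from_kc(kc: float, delta_n: int, t: float) -> float:
    return kc * (R * t) ** delta_n

print(kp_from_kc(1.0, 0, 500.0))   # Δn = 0: Kp equals Kc (and is dimensionless)
print(kp_from_kc(4.6, 1, 298.15))  # assumed Kc for N2O4 = 2 NO2; result in Pa
```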

The equilibrium constant is determined from known equilibrium concentrations of the reactants or from the known ΔG° of the chemical reaction.

An arbitrary reversible chemical reaction can be described by an equation of the form:

aA + bB ⇌ dD + eE

In accordance with the law of mass action, in the simplest case the rate of the forward reaction is related to the concentrations of the starting substances by

v_fwd = k_fwd · C_A^a · C_B^b,

and the rate of the reverse reaction to the concentrations of the products by

v_rev = k_rev · C_D^d · C_E^e.

When equilibrium is reached, these rates are equal to each other:

v_fwd = v_rev.

The ratio of the rate constants of the forward and reverse reactions is equal to the equilibrium constant:

K = k_fwd/k_rev = (C_D^d · C_E^e)/(C_A^a · C_B^b)
Since this expression takes into account the amounts of the reactants and reaction products, it is a mathematical expression of the law of mass action for reversible reactions.

The equilibrium constant expressed through the concentrations of the reacting substances is called the concentration constant and is denoted Kc. For a more rigorous treatment one should use, instead of concentrations, the thermodynamic activities of the substances, a = fC (where f is the activity coefficient). In this case one speaks of the thermodynamic equilibrium constant

Ka = (a_D^d · a_E^e)/(a_A^a · a_B^b)
At low concentrations, when the activity coefficients of the starting substances and products are close to unity, Kc and Ka are almost equal to each other.

The equilibrium constant of a reaction occurring in the gas phase can be expressed through the partial pressures p of the substances participating in the reaction:

Kp = (p_D^d · p_E^e)/(p_A^a · p_B^b)
Between Kp and Kc there is a relationship that can be derived as follows. Let us express the partial pressures of the substances through their concentrations using the Mendeleev-Clapeyron equation:

pV = nRT,

whence p = (n/V)RT = CRT.

The dimension of an equilibrium constant depends on the way the concentration (pressure) is expressed and on the stoichiometry of the reaction. This can cause confusion, for example [mol⁻¹·m³] for Kc and [Pa⁻¹] for Kp in the example considered, but there is nothing wrong with it. If the sums of the stoichiometric coefficients of the products and of the starting substances are equal, the equilibrium constant is dimensionless.
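As a sketch of such a calculation (Python), here is Kc computed from equilibrium concentrations for a reaction with equal coefficient sums, so the constant is dimensionless; the H2/I2 system and its concentrations are assumed for illustration:

```python
# Kc for aA + bB = dD + eE from equilibrium concentrations.
def equilibrium_constant(conc: dict, products: dict, reactants: dict) -> float:
    """conc maps species -> concentration; products/reactants map species -> coefficient."""
    num = 1.0
    for s, n in products.items():
        num *= conc[s] ** n
    den = 1.0
    for s, n in reactants.items():
        den *= conc[s] ** n
    return num / den

# H2 + I2 = 2 HI: coefficient sums are equal, so Kc is dimensionless.
# The equilibrium concentrations (mol/l) are assumed for illustration.
c = {"H2": 0.02, "I2": 0.02, "HI": 0.14}
print(round(equilibrium_constant(c, {"HI": 2}, {"H2": 1, "I2": 1}), 1))  # 49.0
```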

Phase equilibrium.

Phase equilibrium is the coexistence of thermodynamically equilibrium phases forming a heterogeneous system.

A phase F is the set of parts of a system that are identical in chemical composition and physical properties, are in thermodynamic equilibrium with one another, and are separated from the other parts by interfaces. Any homogeneous system is single-phase, i.e., it has no internal interfaces. A heterogeneous system contains several phases (at least two) and has internal interfaces (sometimes called interphase boundaries).

A component is an individual chemical substance that is part of the system. Only a substance that can, in principle, be isolated from the system and exist independently for a sufficiently long time is considered a component.

The number of independent components K of a system is the number of components sufficient to form the complete composition of the system. It is equal to the total number of components minus the number of chemical reactions occurring between them.

Phase transitions are transitions of a substance from one phase state to another upon a change in the parameters characterizing thermodynamic equilibrium.

The variance C of a system can be represented as the number of external conditions (temperature, pressure, concentration, etc.) that the experimenter can change without changing the number of phases in the system.

The phase rule, a consequence of the second law of thermodynamics, connects the number of phases in equilibrium, the number of independent components, and the number of parameters needed for a full description of the system:

The number of degrees of freedom (the variance) of a thermodynamic system at equilibrium, on which, among external factors, only pressure and temperature act, is equal to the number of independent components minus the number of phases plus two:

C = K - F + 2
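The phase rule as a tiny helper in Python; the one-component (water) examples in the comments are standard illustrations, not taken from the text:

```python
# Gibbs phase rule: C = K - F + 2 (external factors limited to T and p).
def degrees_of_freedom(components: int, phases: int) -> int:
    """Variance of an equilibrium system with the given component and phase counts."""
    return components - phases + 2

print(degrees_of_freedom(1, 1))  # 2: single-phase water, T and p both free
print(degrees_of_freedom(1, 2))  # 1: water + vapour along the boiling curve
print(degrees_of_freedom(1, 3))  # 0: triple point, invariant
```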

Phase diagrams.

Phase diagrams are studied through the dependence of the properties of a system on its composition and on the state parameters (temperature, pressure).

Thermodynamics as a science formally originated very long ago, in the Ancient East, and was then intensively developed in European countries. For a long period, the question of the relation between the part and the whole remained insufficiently studied in its postulates. As became clear in the mid-20th century, a single element can transform the whole in a completely unexpected way.

Figure 1. Zeroth law of thermodynamics.

It follows from classical thermodynamics that isolated systems, in accordance with the second law, undergo irreversible processes. The entropy of such a system increases until it reaches its maximum in the state of complete equilibrium. The growth of entropy is accompanied by a significant loss of information about the system itself.

With the discovery of the second law of thermodynamics, the question arose of how the growth of entropy in closed systems can be reconciled with the phenomena of self-organization in inanimate and living nature. For a long time physicists believed there was a significant contradiction between the second law of thermodynamics and the conclusions of Darwin's theory of evolution, according to which a process of self-organization driven by selection occurs in all organisms on the planet. From the resolution of this tension, through the development of nonlinear thermodynamics, a new scientific discipline emerged.


Synergetics is the science of the stability and self-organization of the structures of various complex non-equilibrium systems.

Such non-equilibrium systems may be:

  • physical;
  • chemical;
  • biological;
  • social.

The disagreements between the laws of thermodynamics and the examples of a highly organized world were resolved with the advent of the zeroth law of thermodynamics and the subsequent development of non-equilibrium nonlinear thermodynamics, also called the thermodynamics of open stable systems. P. Glansdorff, I. Prigogine and H. Haken made great contributions to this scientific direction. The Belgian researcher of Russian origin Prigogine received the Nobel Prize in 1977 for his work in this area.

Formulation of the zeroth law of thermodynamics

The zeroth law of thermodynamics, first formulated only about 50 years ago, is essentially a logical justification, obtained in hindsight, for introducing the concept of the temperature of physical bodies. Temperature is one of the most important and profound concepts of thermodynamics, and it plays just as important a role in thermodynamic systems as the processes themselves.

Note 1

For the first time, the zeroth law placed at the center of physics a completely abstract formulation. It replaced the concept of force introduced in Newton's time, which at first glance is more "tangible" and concrete, and which had moreover been successfully "mathematized" by scientists.

The zeroth law of thermodynamics received this name because it was presented and described after the first and second laws had already become well-established scientific concepts. According to this postulate, any isolated system spontaneously reaches a state of thermodynamic equilibrium over time and then remains in it as long as external conditions remain unchanged. The zeroth law is also called the general law of thermodynamics, since it assumes the simultaneous presence of mechanical, thermal and chemical equilibrium in a system.

Note that classical thermodynamics postulates only the existence of the state of equilibrium; it says nothing about the time needed to reach it.

The necessity and significance of the laws of thermodynamics stem from the fact that this branch of physics describes the macroscopic parameters of systems without specific assumptions about their microscopic structure. Questions of internal structure are the domain of statistical physics.

Thermodynamic systems under the zeroth law

To see what the zeroth law says about thermodynamic systems, consider two systems separated by a heat-conducting wall, so that they are in thermal contact. Because an equilibrium state exists, sooner or later a moment comes when both systems reach it and remain in it indefinitely. If the thermal contact is then broken and the systems are isolated, their states remain the same. Moreover, a third thermodynamic system that does not change its state upon thermal contact with one of them will not change its state even after prolonged contact with the other.

This means that the systems share a common property, associated not with any individual process but with the state of thermodynamic equilibrium itself. This characteristic is called temperature, and its quantitative value is determined by the value of some mechanical parameter, for example the volume of a chosen system. Such a system can then serve as a thermometer.
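The relaxation to a common temperature that underlies this definition can be illustrated with a minimal numerical sketch; all heat capacities, temperatures and coefficients below are hypothetical values chosen for illustration:

```python
# Two bodies exchange heat through a diathermic wall until their
# temperatures coincide; the shared final value is what the zeroth
# law allows us to call the temperature of the joint equilibrium state.
C1, T1 = 2.0, 350.0      # heat capacity (J/K) and temperature (K) of body A
C2, T2 = 3.0, 300.0      # heat capacity and temperature of body B
k, dt = 1.0, 0.01        # heat-transfer coefficient and time step

for _ in range(5000):
    q = k * (T1 - T2) * dt   # heat flows from the hotter body to the colder one
    T1 -= q / C1
    T2 += q / C2

# Energy balance C1*T1 + C2*T2 = const gives Tf = (C1*350 + C2*300)/(C1 + C2) = 320 K
print(round(T1, 3), round(T2, 3))
```

Whatever the initial temperatures, both bodies end at the same value, which a third body (a thermometer) brought into contact with either of them would also register.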

More broadly, the zeroth law can be understood as asserting the existence of objects in the surrounding world to which thermodynamics itself applies. Such a system can be neither too small nor too large: the number of particles forming it should be of the order of Avogadro's number.

Indeed, the parameters of very small systems are always subject to significant fluctuations. Huge systems, for example "half the Universe", either reach an equilibrium state over astronomically long times or never reach it at all. From the existence of thermodynamic systems there follows a concept central to all further study.

Note 2

We are talking about the possibility of introducing the concept of temperature for systems in thermodynamics.

Thermal equilibrium under the zeroth law of thermodynamics

In the modern literature, the zeroth law often includes postulates about the properties of thermal equilibrium. Thermal equilibrium can exist between systems separated by a fixed, heat-permeable partition, i.e. a partition that allows the systems to exchange internal energy but does not let matter pass through.

Definition 2

The proposition on the transitivity of thermal equilibrium states that if each of two bodies separated by a diathermic partition is in thermal equilibrium with a third body, then the two bodies are in thermal equilibrium with each other.

In other words, if two closed systems are brought into thermal contact, then after stable equilibrium is achieved all of their parts are in thermal equilibrium with one another, and each system is likewise in equilibrium within itself.

In foreign publications, the law of transitivity of thermal equilibrium is itself often called the zeroth law, while the statement about the attainment of equilibrium is sometimes called the "minus first" law. The significance of the transitivity postulate is that it allows one to introduce a function of the state of a system with the properties of an empirical temperature, and thus to create instruments for measuring temperature. The equality of the temperatures measured with such a device, a thermometer, is the key condition for the thermal equilibrium of systems.

INTRODUCTION

CHAPTER 1

BASIC CONCEPTS AND INITIAL POINTS OF THERMODYNAMICS

1.1. Closed and open thermodynamic systems.

1.2. Zeroth law of thermodynamics.

1.3. The first law of thermodynamics.

1.4. Second law of thermodynamics.

1.4.1. Reversible and irreversible processes.

1.4.2. Entropy.

1.5. Third law of thermodynamics.

CHAPTER 2

2.1. General characteristics of open systems.

2.1.1. Dissipative structures.

2.2. Self-organization of various systems and synergetics.

2.3. Examples of self-organization of various systems.

2.3.1. Physical systems.

2.3.2. Chemical systems.

2.3.3. Biological systems.

2.3.4. Social systems.

Formulation of the problem.

CHAPTER 3

ANALYTICAL AND NUMERICAL STUDIES OF SELF-ORGANIZATION OF VARIOUS SYSTEMS.

3.1. Benard cells.

3.2. Laser as a self-organized system.

3.3. Biological system.

3.3.1. Population dynamics. Ecology.

3.3.2. The "Predator - Prey" system.

CONCLUSION.

LITERATURE.

INTRODUCTION

Science originated long ago, in the Ancient East, and then developed intensively in Europe. In the scientific tradition, the question of the relationship between the whole and the part long remained insufficiently studied. As became clear in the middle of the 20th century, a part can transform the whole in radical and unexpected ways.

From classical thermodynamics it is known that in isolated thermodynamic systems, in accordance with the second law of thermodynamics, irreversible processes increase the entropy S of the system until it reaches its maximum value in the state of thermodynamic equilibrium. An increase in entropy is accompanied by a loss of information about the system.

With the discovery of the second law of thermodynamics, the question arose of how the increase of entropy with time in closed systems could be reconciled with the processes of self-organization in living and non-living nature. For a long time there seemed to be a contradiction between the conclusions of the second law and those of Darwin's evolutionary theory, according to which self-organization continuously occurs in living nature thanks to the principle of selection.

The contradiction between the second law of thermodynamics and examples of the highly organized world around us was resolved with the emergence, more than fifty years ago, and subsequent natural development of nonlinear nonequilibrium thermodynamics, also called the thermodynamics of open systems. A great contribution to the development of this new science was made by I. R. Prigogine, P. Glansdorff and H. Haken. The Belgian physicist of Russian origin Ilya Romanovich Prigogine was awarded the Nobel Prize in 1977 for his work in this area.

As a result of the development of nonlinear nonequilibrium thermodynamics, a completely new scientific discipline appeared: synergetics, the science of the self-organization and stability of structures in complex nonequilibrium systems of various kinds: physical, chemical, biological and social.

In this paper, the self-organization of various systems is studied using analytical and numerical methods.


CHAPTER 1

BASIC CONCEPTS AND ASPECTS

THERMODYNAMICS.

1.1. CLOSED AND OPEN THERMODYNAMIC

SYSTEMS.

Every material object, every body consisting of a large number of particles, is called a macroscopic system. The sizes of macroscopic systems are much larger than those of atoms and molecules. All macroscopic features characterizing such a system and its relation to surrounding bodies are called macroscopic parameters. These include, for example, density, volume, elasticity, concentration, polarization, magnetization, etc. Macroscopic parameters are divided into external and internal.

Quantities determined by the positions of external bodies not included in the system are called external parameters, for example the strength of a force field (it depends on the positions of the field sources, charges and currents, which are not part of the system) or the volume of the system (it is determined by the location of external bodies). External parameters are therefore functions of the coordinates of external bodies. Quantities determined by the aggregate motion and spatial distribution of the particles of the system are called internal parameters, for example energy, pressure, density, magnetization and polarization (their values depend on the motion and positions of the system's particles and the charges they carry).

A set of independent macroscopic parameters determines the state of the system, i.e. the form of its existence. Quantities that do not depend on the history of the system and are completely determined by its state at the given moment (i.e. by the set of independent parameters) are called state functions.

A state is called stationary if the parameters of the system do not change over time.

If, in addition, not only are all parameters constant in time but there are also no stationary flows maintained by external sources, then this state of the system is called an equilibrium state (a state of thermodynamic equilibrium). Thermodynamic systems, properly speaking, are only those macroscopic systems that are in thermodynamic equilibrium; similarly, thermodynamic parameters are those that characterize a system in thermodynamic equilibrium.

The internal parameters of a system are divided into intensive and extensive. Parameters that do not depend on the mass or number of particles in the system are called intensive (pressure, temperature, etc.). Parameters proportional to the mass or number of particles are called additive or extensive (energy, entropy, etc.). Extensive parameters characterize the system as a whole, while intensive parameters can take definite values at each point of the system.

According to the mode of exchange of energy, matter and information between the system under consideration and the environment, thermodynamic systems are classified as follows:

1. A closed (isolated) system is a system with no exchange of energy, matter (including radiation) or information with external bodies.

2. A closed system is a system that exchanges only energy with the environment.

3. An adiabatically isolated system is a system that does not exchange energy in the form of heat with the environment.

4. An open system is a system that exchanges energy, matter and information with the environment.

1.2. THE ZEROTH LAW OF THERMODYNAMICS.

The zeroth law of thermodynamics, formulated only about 50 years ago, is essentially a logical justification, obtained "retroactively", for introducing the concept of the temperature of physical bodies. Temperature is one of the most profound concepts of thermodynamics and plays as important a role in it as the processes themselves. For the first time, a completely abstract concept took center stage in physics; it replaced the concept of force introduced in Newton's time (17th century), which at first glance is more concrete and "tangible" and which, moreover, was successfully "mathematized" by Newton.

1.3. THE FIRST LAW OF THERMODYNAMICS.

The first law of thermodynamics establishes that the internal energy of a system is a single-valued function of its state and changes only under external influences.

Thermodynamics considers two types of external influence: influences associated with a change in the external parameters of the system (the system does work W), and influences not associated with a change in external parameters, caused instead by a change in internal parameters or temperature (a certain amount of heat Q is imparted to the system).

Therefore, according to the first law, the change in internal energy U2 - U1 of the system during its transition from the first state to the second under these influences equals the amount of heat Q received minus the work W done, which for a finite process is written as the equation

U2 - U1 = Q - W or Q = U2 - U1 + W (1.1)

The first law is formulated as a postulate and is a generalization of a large amount of experimental data.

For an elementary process, the equation of the first law reads:

dQ = dU + dW (1.2)

Here dQ and dW are not total differentials, since they depend on the path of the process.

The dependence of Q and W on the path is seen most simply in the expansion of a gas. The work done by the system in passing from state 1 to state 2 (Fig. 1) along path a is equal to the area under curve a:

W_a = ∫ p(V,T) dV (integral taken along path a);

and the work done along path b is equal to the area under curve b:

W_b = ∫ p(V,T) dV (integral taken along path b).

Fig. 1

Since the pressure depends not only on the volume but also on the temperature, different temperature variations along paths a and b in the transition from the same initial state (p1, V1) to the same final state (p2, V2) give different amounts of work. Hence, in a closed process (cycle) 1a2b1 the system performs nonzero work. The operation of all heat engines is based on this.
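This path dependence is easy to verify directly. The sketch below compares two hypothetical paths between the same pair of states: along path a the gas first expands at the constant pressure p1 and is then cooled at constant volume, while along path b it is first cooled at constant volume and then expands at the constant pressure p2 (all state values are illustrative):

```python
# Work done by the gas between the same initial and final states
# along two different paths; only the isobaric legs perform work.
p1, V1 = 2.0e5, 1.0e-3   # initial state: 200 kPa, 1 L
p2, V2 = 1.0e5, 2.0e-3   # final state: 100 kPa, 2 L

W_a = p1 * (V2 - V1)     # path a: isobaric expansion at p1, then isochoric cooling
W_b = p2 * (V2 - V1)     # path b: isochoric cooling, then isobaric expansion at p2

print(W_a, W_b)          # 200 J versus 100 J: the work depends on the path,
                         # and the cycle 1a2b1 performs W_a - W_b = 100 J
```

The difference W_a - W_b is exactly the area enclosed by the cycle in the (p, V) plane.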

From the first law of thermodynamics it follows that work can be done either at the expense of internal energy or by imparting heat to the system. If the process is circular, the initial and final states coincide, U2 - U1 = 0 and W = Q; that is, in a circular process the system can do work only by receiving heat from external bodies.

The first law can be formulated in several ways:

1. The creation or destruction of energy is impossible.

2. Any form of motion can and must be capable of transformation into any other form of motion.

3. Internal energy is a single-valued function of state.

4. A perpetual motion machine of the first kind is impossible.

5. An infinitesimal change in internal energy is a total differential.

6. The difference between the amount of heat and the work does not depend on the path of the process.

The first law of thermodynamics, in postulating the law of conservation of energy for a thermodynamic system, does not indicate the direction of the processes occurring in nature. That direction is established by the second law of thermodynamics.

1.4. SECOND LAW OF THERMODYNAMICS.

The second law of thermodynamics establishes the presence of a fundamental asymmetry in nature, i.e. the unidirectionality of all spontaneous processes occurring in it.

The second basic postulate of thermodynamics is also connected with other properties of thermodynamic equilibrium as a special kind of thermal motion. Experience shows that if two equilibrium systems A and B are brought into thermal contact, then, regardless of whether their external parameters differ or coincide, either they remain in their states of thermodynamic equilibrium, or their equilibrium is disturbed and after some time, in the course of heat exchange (energy exchange), both systems arrive at another equilibrium state. Furthermore, if there are three equilibrium systems A, B and C, and systems A and B are each separately in equilibrium with system C, then systems A and B are also in thermodynamic equilibrium with each other (the transitivity property of thermodynamic equilibrium).

Let there be two systems. To verify that they are in thermodynamic equilibrium, one would have to measure all the internal parameters of both systems independently and confirm that they are constant in time. This task is extremely difficult.

It turns out, however, that there is a physical quantity that allows the thermodynamic states of two systems, or of two parts of one system, to be compared without a detailed study of their internal parameters. This quantity, which expresses the state of internal motion of an equilibrium system, has the same value in all parts of a complex equilibrium system regardless of the number of particles in them, and is determined by the external parameters and the energy. It is called temperature.

Temperature is an intensive parameter and serves as a measure of the intensity of thermal movement of molecules.

The stated position on the existence of temperature as a special function of the state of an equilibrium system represents the second postulate of thermodynamics.

In other words, the state of thermodynamic equilibrium is determined by a combination of external parameters and temperature.

R. Fowler and E. Guggenheim called this statement the zeroth law: like the first and second laws, which establish the existence of certain state functions, it establishes the existence of temperature for equilibrium systems. This was mentioned above.

So, all internal parameters of an equilibrium system are functions of the external parameters and the temperature (the second postulate of thermodynamics).

Expressing the temperature through the external parameters and the energy, the second postulate can be formulated as follows: in thermodynamic equilibrium, all internal parameters are functions of the external parameters and the energy.

The second postulate makes it possible to determine a change in body temperature from the change of any of its parameters; the design of various thermometers is based on this.

1.4.1. REVERSIBLE AND IRREVERSIBLE PROCESSES.

The process of transition of a system from state 1 to state 2 is called reversible if the system can be returned from 2 to 1 without any changes in the surrounding external bodies.

The process of transition of a system from state 1 to state 2 is called irreversible if the reverse transition from 2 to 1 cannot be accomplished without changes in the surrounding bodies.

A measure of the irreversibility of a process in a closed system is the change of a new state function, the entropy, whose existence in an equilibrium system is established by the first proposition of the second law: the impossibility of a perpetual motion machine of the second kind. The single-valuedness of this state function means that every irreversible process is a nonequilibrium one.

From the second law it follows that S is a single-valued state function. This means that the integral of dQ/T over any circular equilibrium process is zero. If this did not hold, i.e. if entropy were a many-valued function of state, a perpetual motion machine of the second kind could be realized.

The proposition that every thermodynamic system possesses a single-valued state function, the entropy S, which does not change in adiabatic equilibrium processes, constitutes the content of the second law of thermodynamics for equilibrium processes.

Mathematically, the second law of thermodynamics for equilibrium processes is written by the equation:

dQ/T = dS or dQ = TdS (1.3)

The integral form of the second law for equilibrium circular processes is the Clausius equality:

∮ dQ/T = 0 (1.4)

For a nonequilibrium circular process the Clausius inequality holds:

∮ dQ/T < 0 (1.5)

Now we can write the basic equation of thermodynamics for the simplest system under uniform pressure:

TdS = dU + pdV (1.6)
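As a simple application of equation (1.6): for the isothermal expansion of an ideal gas dU = 0, so TdS = pdV = nRT dV/V, and integration gives ΔS = nR ln(V2/V1). A numerical sketch with illustrative values:

```python
import math

R = 8.314                 # universal gas constant, J/(mol*K)
n = 1.0                   # amount of gas, mol (hypothetical)
V1, V2 = 1.0, 2.0         # the volume doubles at constant temperature

# From TdS = dU + p dV with dU = 0 and p = nRT/V:
dS = n * R * math.log(V2 / V1)
print(round(dS, 3))       # about 5.763 J/K, a positive entropy change
```

The result depends only on the ratio V2/V1, as befits a state function: any path between the same two states at the same temperature gives the same ΔS.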

Let us discuss the question of the physical meaning of entropy.

1.4.2. ENTROPY.

The second law of thermodynamics postulates the existence of a state function called "entropy" (from the Greek word for "transformation"), which has the following properties:

a) The entropy of a system is an extensive property: if a system consists of several parts, the total entropy of the system is equal to the sum of the entropies of the parts.

b) The change in entropy dS consists of two parts. Denoting by d_eS the flow of entropy due to interaction with the environment, and by d_iS the part of the entropy change due to processes inside the system, we have

dS = d_eS + d_iS (1.7)

The entropy increment d_iS due to changes within the system is never negative: d_iS = 0 only when the system undergoes reversible changes, and d_iS > 0 whenever irreversible processes occur in the system.

Thus

d_iS = 0 (reversible processes) (1.8)

d_iS > 0 (irreversible processes) (1.9)

For an isolated system, the entropy flow is zero, and expressions (1.8) and (1.9) reduce to

dS = d_iS ≥ 0 (1.10)

(isolated system).

For an isolated system this relation is equivalent to the classical formulation that entropy can never decrease; in this case the properties of the entropy function provide a criterion for detecting the presence of irreversible processes. Similar criteria exist for some other particular cases.
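A standard example of this criterion, with hypothetical numbers: two identical bodies of heat capacity C at different temperatures are brought into contact inside an isolated system. They relax to the mean temperature, and the total entropy change, computed part by part as in property (a), is positive.

```python
import math

C = 10.0                  # heat capacity of each body, J/K (illustrative)
T1, T2 = 400.0, 200.0     # initial temperatures, K
Tf = (T1 + T2) / 2        # common final temperature of the isolated pair

# For a body with constant heat capacity, dS = C dT/T integrates to C ln(Tf/T).
dS_total = C * math.log(Tf / T1) + C * math.log(Tf / T2)
print(dS_total > 0)       # True: the irreversible equilibration produces entropy
```

The hotter body loses entropy, but the colder body gains more than is lost, so dS = d_iS > 0 for the isolated whole, consistent with (1.10).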

Suppose that the system, denoted by the symbol 1, is inside a larger system 2, and that the combined system consisting of systems 1 and 2 is isolated.

The classical formulation of the second law of thermodynamics then looks like:

dS = dS1 + dS2 ≥ 0 (1.11)

Applying equations (1.8) and (1.9) separately to each part of this expression, we require that d_iS1 ≥ 0 and d_iS2 ≥ 0.

A situation in which d_iS1 > 0 and d_iS2 < 0 while d(S1 + S2) > 0 is physically impossible. It can therefore be stated that a decrease in entropy in one part of the system, compensated by a sufficient increase of entropy in another part, is a forbidden process. From this formulation it follows that in any macroscopic part of the system the entropy production due to irreversible processes is positive. By a "macroscopic part" of the system is meant any part containing a number of molecules large enough for microscopic fluctuations to be negligible. Interaction of irreversible processes is possible only when the processes occur in the same parts of the system.

Such a formulation of the second law may be called a "local" formulation, in contrast to the "global" formulation of classical thermodynamics. Its significance is that it makes possible a much deeper analysis of irreversible processes.

1.5 THE THIRD LAW OF THERMODYNAMICS.

The discovery of the third law of thermodynamics is connected with the search for chemical affinity, the quantity characterizing the ability of substances to react chemically with one another. This quantity is determined by the work W of the chemical forces in a reaction. The first and second laws allow the chemical affinity W to be calculated only up to an undetermined function; to determine that function, new experimental data on the properties of bodies were needed in addition to the two laws. Nernst therefore undertook extensive experimental studies of the behavior of substances at low temperatures.

As a result of these studies, the third law of thermodynamics was formulated: as the temperature approaches 0 K, the entropy of any equilibrium system in isothermal processes ceases to depend on any thermodynamic state parameters and, in the limit (T = 0 K), takes the same universal constant value for all systems, which may be set equal to zero.

The generality of this statement lies, first, in the fact that it applies to any equilibrium system and, second, in the fact that as T tends to 0 K the entropy does not depend on the value of any parameter of the system. Thus, according to the third law,

lim [S(T, X2) - S(T, X1)] = 0 as T → 0 (1.12)

lim [dS/dX]T = 0 as T → 0 (1.13)

where X is any thermodynamic parameter (a_i or A_i).

Since the limiting value of the entropy is the same for all systems, it has no physical meaning and is set equal to zero (Planck's postulate). As statistical considerations show, entropy is defined only up to a constant (like, for example, the electrostatic potential of a system of charges at a point of a field). Thus there is no sense in introducing any "absolute entropy", as Planck and some other scientists did.

CHAPTER 2

BASIC CONCEPTS AND PROVISIONS OF SYNERGETICS.

SELF-ORGANIZATION OF VARIOUS SYSTEMS.

About 50 years ago, the development of thermodynamics gave rise to a new discipline: synergetics. As the science of the self-organization of the most diverse systems - physical, chemical, biological and social - synergetics shows that it is possible, at least partially, to remove interdisciplinary barriers not only within the natural sciences but also between the natural sciences and the humanities.

Synergetics studies systems consisting of many subsystems of very different natures, such as electrons, atoms, molecules, cells, neurons, mechanical elements, photons, organs, animals and even people.

When choosing a mathematical apparatus, it is necessary to keep in mind that it must be applicable to the problems faced by physicists, chemists, biologists, electrical engineers and mechanical engineers. It must operate no less flawlessly in the fields of economics, ecology and sociology.

In all these cases we have to consider systems consisting of very many subsystems, about which we may not have complete information. To describe such systems, approaches based on thermodynamics and information theory are often used.

In all systems of interest to synergetics, dynamics plays the decisive role. Which macroscopic states are formed, and how, is determined by the growth (or decay) rates of collective "modes". One may say that in a certain sense we arrive at a kind of generalized Darwinism, whose action extends not only to the organic but also to the inorganic world: macroscopic structures emerge through the birth of collective modes under the influence of fluctuations, their competition and, finally, the selection of the "fittest" mode or combination of modes.

It is clear that the parameter "time" plays a decisive role, so we must study the evolution of systems in time. That is why the equations of interest are sometimes called "evolution equations".

2.1. GENERAL CHARACTERISTICS OF OPEN SYSTEMS.

Open systems are thermodynamic systems that exchange matter, energy and momentum with surrounding bodies (the environment). If the deviation of an open system from equilibrium is small, the nonequilibrium state can be described by the same parameters (temperature, chemical potential, etc.) as the equilibrium state, but the deviations of these parameters from their equilibrium values are caused by flows of matter and energy in the system. Such transfer processes lead to the production of entropy. Examples of open systems are biological systems, including cells; information-processing systems in cybernetics; energy supply systems; and others. To maintain life in systems from the cell to the human being, a constant exchange of energy and matter with the environment is necessary, so living organisms are open systems. Prigogine formulated an extended version of thermodynamics for such systems in 1945.

In an open system, the change in entropy can be decomposed into the sum of two contributions:

dS = d_eS + d_iS (2.1)

Here d_eS is the flow of entropy due to the exchange of energy and matter with the environment, and d_iS is the production of entropy within the system (Fig. 2.1).

Fig. 2.1. Schematic representation of open systems: entropy production and flow.

X - set of characteristics:
C - composition of the system and the external environment;
P - pressure; T - temperature.

So, an open system differs from an isolated one by the presence, in the expression for the entropy change, of a term corresponding to exchange. The sign of the term d_eS can be arbitrary, unlike that of d_iS.

For a nonequilibrium state the entropy is lower than in equilibrium: the nonequilibrium state is more highly organized than the equilibrium state, in which the entropy is maximal. Thus, evolution toward a higher order can be represented as a process in which the system reaches a state with lower entropy than the initial one.

The fundamental theorem on entropy production in an open system with time-independent boundary conditions was formulated by Prigogine: in the linear regime, the system evolves to a stationary state characterized by the minimum entropy production compatible with the imposed boundary conditions.

Thus, the state of any linear open system with time-independent boundary conditions always changes in the direction of decreasing entropy production P = dS/dt until the state of current equilibrium is reached, in which the entropy production is minimal:

dP < 0 (evolution condition)

P = min, dP = 0 (condition of current equilibrium)

dP/dt < 0 (2.2)
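Prigogine's theorem can be illustrated with a crude finite-difference sketch of a heat-conducting rod whose end temperatures are held fixed (all constants below are arbitrary illustrative choices). The entropy production is computed bond by bond as the product of the heat flux J = k(T_i - T_{i+1}) and the thermodynamic force 1/T_{i+1} - 1/T_i, and it relaxes toward its stationary minimum:

```python
# Discrete model of a rod between two thermostats: a sketch of
# Prigogine's minimum-entropy-production principle in the linear regime.
N, k, dt = 12, 0.1, 0.1
T = [300.0] * N              # uniform initial temperature profile
T[0], T[-1] = 320.0, 280.0   # fixed boundary temperatures

def entropy_production(T):
    # sum over bonds of flux * force; every term is non-negative
    return sum(k * (T[i] - T[i+1]) * (1.0/T[i+1] - 1.0/T[i]) for i in range(N - 1))

P_history = [entropy_production(T)]
for _ in range(3000):
    # explicit relaxation step for the interior cells only
    T = [T[0]] + [T[i] + k*dt*(T[i-1] - 2*T[i] + T[i+1]) for i in range(1, N-1)] + [T[-1]]
    P_history.append(entropy_production(T))

print(P_history[0], P_history[-1])   # production decreases toward its minimum
```

As the profile approaches the stationary (nearly linear) one, P = dS/dt settles at the smallest value compatible with the fixed boundary temperatures.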

2.1.1. DISSIPATIVE STRUCTURES.

Each system consists of elements (subsystems) that are arranged in a certain order and connected by certain relations. The structure of a system is the organization of its elements and the nature of the connections between them.

Real physical systems have spatial and temporal structures.

The formation of structure is the emergence of new properties and relations in the set of elements of a system. The following concepts and principles play an important role in structure-formation processes:

1. Constant negative flow of entropy.

2. The state of the system is far from equilibrium.

3. Nonlinearity of equations describing processes.

4. Collective (cooperative) behavior of subsystems.

5. The Glansdorff - Prigogine universal evolution criterion.

The formation of structures during irreversible processes must be accompanied by a qualitative jump (a phase transition) when critical parameter values are reached in the system. In open systems, the external contribution to the entropy, d_eS in (2.1), can in principle be chosen arbitrarily by suitably changing the parameters of the system and the properties of the environment. In particular, the entropy of the system can decrease due to the export of entropy to the environment, i.e. when dS < 0. This can happen when the removal of entropy from the system per unit time exceeds the entropy production inside the system, that is,

dS/dt < 0 , if |d_eS/dt| > d_iS/dt > 0 (2.3)

For structure formation to begin, the export of entropy must exceed a certain critical value. In the strongly nonequilibrium regime, the system variables satisfy nonlinear equations.

Thus, two main classes of irreversible processes can be distinguished:

1. Destruction of the structure near the equilibrium position. This is a universal property of systems under arbitrary conditions.

2. The birth of a structure far from equilibrium in an open system under special critical external conditions and nonlinear internal dynamics. This property is not universal.

Spatial, temporal or spatiotemporal structures that can arise far from equilibrium in a nonlinear region at critical values ​​of system parameters are called dissipative structures.

Three aspects are interconnected in these structures:

1. State function expressed by equations.

2. Spatio-temporal structure arising due to instability.

3. Fluctuations responsible for instabilities.


Fig. 1. Three aspects of dissipative structures.

The interaction between these aspects leads to unexpected phenomena - the emergence of order through fluctuations, the formation of a highly organized structure from chaos.

Thus, in dissipative structures becoming arises out of being: what emerges is formed from what already exists.

2.2. SELF-ORGANIZATION OF VARIOUS SYSTEMS AND SYNERGETICS.

The transition from chaos to order, which occurs when parameter values ​​change from pre-critical to supercritical, changes the symmetry of the system. Therefore, such a transition is similar to thermodynamic phase transitions. Transitions in nonequilibrium processes are called kinetic phase transitions. In the vicinity of nonequilibrium phase transitions there is no consistent macroscopic description. Fluctuations are just as important as the average. For example, macroscopic fluctuations can lead to new types of instabilities.

So, far from equilibrium there is an unexpected connection between the chemical kinetics and the spatiotemporal structure of reacting systems. True, the interactions that determine the values of the rate constants and transfer coefficients are caused by short-range forces (valence forces, hydrogen bonds and van der Waals forces). However, the solutions of the corresponding equations also depend on global characteristics. For dissipative structures to appear, the dimensions of the system usually must exceed a certain critical value, a complex function of the parameters describing the reaction-diffusion processes. We can therefore assert that chemical instabilities determine the subsequent order, through which the system acts as a whole.

If diffusion is taken into account, the mathematical formulation of problems involving dissipative structures requires the study of partial differential equations. Indeed, the evolution of the concentration of component X_i over time is determined by an equation of the form

∂X_i/∂t = F_i({X_j}) + D_i ∂²X_i/∂r² (2.4)

where the first term gives the contribution of chemical reactions to the change in the concentration X_i and usually has a simple polynomial form, while the second term describes diffusion along the r axis.
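Equation (2.4) can be explored with a minimal finite-difference sketch. The cubic reaction term f(X) = X − X³ below is purely an illustrative assumption (the text does not specify the polynomial); it makes the uniform state X = 0 unstable, so a small perturbation grows into a spatial pattern of domains near the stable states ±1:

```python
import numpy as np

# Illustrative (assumed) reaction term: f(X) = X - X^3, stable states X = +/-1.
def react(X):
    return X - X**3

L, D, dt, steps = 100, 0.1, 0.05, 4000   # grid points, diffusion coeff., time step
X = 0.1 * np.sin(2 * np.pi * np.arange(L) / L)   # small initial perturbation

for _ in range(steps):
    lap = np.roll(X, 1) - 2 * X + np.roll(X, -1)  # periodic Laplacian, dx = 1
    X = X + dt * (react(X) + D * lap)

# The perturbation grows and saturates near the stable states +/-1,
# forming two spatial domains separated by diffusive fronts.
print(round(X.max(), 2), round(X.min(), 2))
```

The explicit scheme is stable here because dt is well below dx²/(2D); a stiffer reaction term would call for an implicit solver.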

It is truly amazing how many different phenomena are described by the reaction-diffusion equation (2.4), so it is interesting first to consider the basic solution corresponding to the thermodynamic branch. Other solutions can then be obtained through the successive instabilities that arise as the system moves away from the equilibrium state. Instabilities of this type are conveniently studied by the methods of bifurcation theory [Nicolis and Prigogine, 1977]. In principle, bifurcation is nothing other than the emergence, at a certain critical value of a parameter, of a new solution to the equations. Let us assume that we have a chemical reaction described by the kinetic equation [McLane and Wallis, 1974]

dX/dt = aX(X − R) (2.5)

It is clear that for R < 0 there exists only one time-independent solution, X = 0. At R = 0 a bifurcation occurs, and a new solution X = R appears.
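This behavior can be checked numerically. In the sketch below the sign a = −1 is assumed (the text does not fix it), so that X = 0 is the stable state for R < 0 and the new branch X = R is stable for R > 0, as described:

```python
# Integrate dX/dt = a*X*(X - R) by Euler's method and return the attractor reached.
def attractor(R, a=-1.0, X0=0.1, dt=0.01, steps=20000):
    X = X0
    for _ in range(steps):
        X += dt * a * X * (X - R)
    return X

# Below the bifurcation (R < 0) the only stable state is X = 0;
# above it (R > 0) the trajectory settles on the new branch X = R.
print(round(attractor(-0.5), 3), round(attractor(0.5), 3))
```

Linear stability confirms this: for a < 0 the derivative of aX(X − R) at X = 0 changes sign exactly at R = 0, which is the transcritical bifurcation of Fig. 2.3.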

Fig. 2.3. Bifurcation diagram for equation (2.5).

The solid line corresponds to the stable branch,

the dotted line to the unstable branch.

Stability analysis in the linear approximation shows that the solution X = 0 becomes unstable on passing through R = 0, while the solution X = R becomes stable. In general, as some characteristic parameter p increases, successive bifurcations occur. In Fig. 2.4 the solution is unique at p = p₁, but at p = p₂ uniqueness gives way to multiple solutions.

It is interesting to note that bifurcation, in a sense, introduces history into physics and chemistry - an element that was previously considered the prerogative of the sciences involved in the study of biological, social and cultural phenomena.

Fig. 2.4. Successive bifurcations:

A and A₁ are the points of primary bifurcation from the thermodynamic branch,

B and B₁ are the points of secondary bifurcation.

It is known that when the control parameters of a system change, various transitional phenomena are observed. Let us now extract from these observations certain common features that are characteristic of a large number of other transitions in physicochemical systems.

For this purpose, let us plot (Fig. 2.5) the dependence of the vertical component of the fluid flow velocity at a certain point on the external constraint, or, more generally, the dependence of the system state variable X (or x = X − X_s) on the control parameter λ. This gives us a graph known as a bifurcation diagram.

Fig. 2.5. Bifurcation diagram:

a is the stable part of the thermodynamic branch,

a₁ is the unstable part of the thermodynamic branch,

b₁, b₂ are dissipative structures born in the supercritical region.

For small values of λ only one solution is possible, corresponding to the state of rest in Bénard's experiment. It represents a direct extrapolation of thermodynamic equilibrium and, like equilibrium, is characterized by an important property, asymptotic stability, since in this region the system is able to damp internal fluctuations and external disturbances. For this reason we shall call this branch of states the thermodynamic branch. On passing the critical value of the parameter λ, denoted λ_c in Fig. 2.5, the states on this branch become unstable, since fluctuations and small external disturbances are no longer damped. Acting like an amplifier, the system deviates from the stationary state and moves to a new regime, which in the Bénard experiment corresponds to the state of stationary convection. The two regimes merge at λ = λ_c and differ for λ > λ_c. This phenomenon is called bifurcation. It is easy to understand why it should be associated with catastrophic change and conflict: at the decisive moment of transition (in the vicinity of λ = λ_c) the system must make a critical choice, which in the Bénard problem is associated with the appearance of right- or left-handed cells in a given region of space (Fig. 2.5, branches b₁ or b₂).

Near equilibrium the stationary state is asymptotically stable (by the theorem of minimum entropy production), so, by continuity, the thermodynamic branch extends throughout the entire subcritical region. When the critical value is reached, the thermodynamic branch can become unstable, so that any disturbance, however small, transfers the system from the thermodynamic branch to a new stable state, which may be ordered. Thus at a critical value of the parameter a bifurcation occurs and a new branch of solutions, and accordingly a new state, arises. In the critical region events therefore develop according to the following scheme:

Fluctuation → Bifurcation → Nonequilibrium phase transition → Birth of an ordered structure.

Bifurcation in the broad sense is the acquisition of a new quality by the motions of a dynamical system under a small change of its parameters (the appearance of a new solution of the equations at a certain critical value of a parameter). Note that at a bifurcation the choice of the next state is purely random, so that the transition from one necessary stable state to another necessary stable state passes through a random one (the dialectic of the necessary and the random). Any description of a system undergoing bifurcation includes both deterministic and probabilistic elements: from bifurcation to bifurcation the behavior of the system is deterministic, while in the vicinity of the bifurcation points the choice of the subsequent path is random. Drawing an analogy with biological evolution, we can say that mutations play the role of fluctuations, and the search for a new stability plays the role of natural selection. Bifurcation, in a sense, introduces an element of historicism into physics and chemistry: an analysis of the state b₁, for example, implies knowledge of the history of the system that has undergone the bifurcation.

The general theory of self-organization processes in open, strongly nonequilibrium systems is developed on the basis of the universal Prigogine–Glansdorff criterion of evolution. This criterion is a generalization of Prigogine's theorem on minimum entropy production. According to it, the rate of entropy production due to changes in the thermodynamic forces X obeys the condition

d_X P/dt ≤ 0 (2.6)

This inequality does not depend on any assumptions about the nature of the connections between fluxes and forces under conditions of local equilibrium and is therefore universal in character. In the linear domain inequality (2.6) reduces to Prigogine's theorem on minimum entropy production. Thus a nonequilibrium system evolves in such a way that the rate of entropy production decreases as the thermodynamic forces change (or is zero in the stationary state).
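Prigogine's theorem can be illustrated on a toy linear system (the Onsager coefficients below are assumed, illustrative values): for linear laws J = L·X the entropy production P = Σ J_i X_i, minimized over the free force X₂ at fixed X₁, reaches its minimum exactly where the conjugate flux J₂ vanishes:

```python
import numpy as np

# Assumed symmetric, positive-definite matrix of Onsager coefficients (toy values).
L = np.array([[2.0, 0.5],
              [0.5, 1.0]])

X1 = 1.0                                   # the fixed thermodynamic force
X2_grid = np.linspace(-2.0, 2.0, 4001)     # scan the free force X2

# Entropy production P = sum_i J_i X_i with the linear laws J = L X:
P = L[0, 0] * X1**2 + 2.0 * L[0, 1] * X1 * X2_grid + L[1, 1] * X2_grid**2
X2_min = X2_grid[np.argmin(P)]             # where P is minimal over X2

# Prigogine's theorem: the minimum lies where the conjugate flux vanishes,
# J2 = L21*X1 + L22*X2 = 0, i.e. X2 = -(L21/L22)*X1.
print(round(X2_min, 3), round(-L[1, 0] / L[1, 1] * X1, 3))   # prints: -0.5 -0.5
```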

Ordered structures that are born far from equilibrium, in accordance with criterion (2.6.), are dissipative structures.

The evolution of bifurcation and subsequent self-organization is thus determined by the corresponding non-equilibrium restrictions.

The evolution of the X variables will be described by the system of equations

∂X_i/∂t = F_i({X_j}; λ) (2.7)

where the functions F_i can depend in an arbitrarily complex way on the variables X themselves, on their spatial derivatives with respect to the coordinates r, and on time t. In addition, these functions depend on the control parameters, i.e. those adjustable characteristics that can greatly change the system. At first glance it seems obvious that the structure of the functions F_i will be strongly determined by the type of system under consideration. However, some basic universal features can be identified regardless of the type of system.

The solution of equation (2.7) in the absence of external constraints must correspond to equilibrium for any form of the function F. Since the equilibrium state is stationary,

F_i({X_eq}, λ_eq) = 0 (2.8)

In a more general case, for a nonequilibrium state, one can similarly write the condition

F_i({X}, λ) = 0 (2.9)

These conditions impose certain restrictions of a universal nature; for example, the laws of evolution of the system must be such that temperatures or chemical concentrations obtained as solutions of the corresponding equations remain positive.

Another universal feature is nonlinearity. Let, for example, some single characteristic X of the system satisfy the equation

dX/dt = λ − kX (2.10)

where k is some parameter and λ is the external control constraint. Then the stationary state is determined from the algebraic equation

λ − kX = 0 (2.11)

X_s = λ/k (2.12)

In the stationary state, therefore, the value of the characteristic (for example, a concentration) varies linearly with the value of the control constraint λ, and to each λ there corresponds a unique state X_s. The stationary value of X can be predicted for any λ given at least two experimental values of X(λ). The control parameter may, in particular, measure the degree to which the system is removed from equilibrium. The behavior of the system in this case remains very similar to equilibrium behavior, even in the presence of strongly nonequilibrium constraints.
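In the linear regime this prediction amounts to passing a straight line through two measured points (the measurement values below are made up for illustration):

```python
# Two hypothetical measurements (lambda, X_s) in the linear regime X_s = lambda / k.
(l1, X1), (l2, X2) = (1.0, 0.5), (3.0, 1.5)

slope = (X2 - X1) / (l2 - l1)   # equals 1/k for the linear law X_s = lambda / k
k = 1.0 / slope

# Predict the stationary state for any other value of the control constraint:
def predict(lam):
    return lam / k

print(k, predict(5.0))   # prints: 2.0 2.5
```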

Fig. 2.6. Illustration of the universal feature of nonlinearity in the self-organization of structures.

If the stationary value of the characteristic X depends nonlinearly on the control constraint, then for some values of λ there exist several different solutions. For example, under the constraints of Fig. 2.6c the system has three stationary solutions. This universal departure from linear behavior occurs when the control parameter reaches a certain critical value λ: a bifurcation appears. Moreover, in the nonlinear region a small increase in λ near the critical value can produce a disproportionately strong effect: the system can jump to another stable branch upon a small change of λ (Fig. 2.6c). In addition, transitions from states on the branch AB₁ to the branch A₁B (or vice versa) can occur even before the states B or A themselves are reached, if the disturbances imposed on the stationary state exceed the value corresponding to the intermediate branch. The disturbances can be either external influences or internal fluctuations in the system itself. Thus a system with multiple stationary states possesses the universally inherent properties of internal excitability and jump-like variability.
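Multiple stationary states of the kind shown in Fig. 2.6c arise, for example, for a cubic rate law dX/dt = λ + bX − X³ (an illustrative choice, not given in the text). Its stationary states are the roots of a cubic, and for suitable λ three of them coexist, two stable and one unstable:

```python
import numpy as np

b = 3.0      # assumed nonlinearity parameter
lam = 1.0    # control constraint chosen inside the multistable window

# Stationary states solve X^3 - b*X - lam = 0.
roots = np.roots([1.0, 0.0, -b, -lam])
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)

# A state X_s is stable when d/dX (lam + b*X - X^3) = b - 3*X^2 < 0 there.
stable = [x for x in real_roots if b - 3 * x**2 < 0]
print(len(real_roots), len(stable))   # prints: 3 2
```

The middle (unstable) root plays the role of the intermediate branch: a disturbance carrying the state past it triggers the jump between the two stable branches.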

The fulfillment of the theorem on minimum entropy production in the linear region and, as its generalization, the fulfillment of the universal criterion (2.6) in both the linear and nonlinear regions guarantee the stability of stationary nonequilibrium states. In the region of linearity of irreversible processes, entropy production plays the same role as the thermodynamic potentials in equilibrium thermodynamics. In the nonlinear region the quantity dP/dt possesses no general property, but the quantity d_X P/dt satisfies the general inequality (2.6), which generalizes the theorem on minimum entropy production.

2.3. EXAMPLES OF SELF-ORGANIZATION OF VARIOUS SYSTEMS.

As an illustration, let us consider some examples of self-organization of systems in physics, chemistry, biology and society.

2.3.1. PHYSICAL SYSTEMS.

In principle, even in thermodynamic equilibrium one can point to examples of self-organization as the result of collective behavior: all phase transitions in physical systems, such as the liquid-gas transition, the ferromagnetic transition, or the onset of superconductivity. In the nonequilibrium state one can name examples of high organization in hydrodynamics, in lasers of various types, and in solid-state physics: the Gunn oscillator, tunnel diodes, crystal growth.

In open systems, by changing the flow of matter and energy from outside, it is possible to control processes and direct the evolution of systems to states that are increasingly far from equilibrium. During nonequilibrium processes, at a certain critical value of the external flow, ordered states can arise from disordered and chaotic states due to the loss of their stability, and dissipative structures can be created.

2.3.1a. BENARD CELLS.

A classic example of the emergence of a structure from a completely chaotic phase is Benard convective cells. In 1900, an article by H. Benard was published with a photograph of a structure that looked like a honeycomb (Fig. 2.7).

Fig. 2.7. Bénard cells:

a) general view of the structure,

b) an individual cell.

This structure formed in mercury poured into a flat, wide vessel heated from below, once the temperature gradient exceeded a certain critical value. The entire layer of mercury (or another viscous liquid) broke up into identical vertical hexagonal prisms with a definite ratio of side to height (Bénard cells). In the central region of each prism the liquid rises, and near the vertical faces it descends. Between the lower and upper surfaces there is a temperature difference ΔT = T₂ − T₁ > 0. For small, subcritical differences ΔT < ΔT_cr the liquid remains at rest, and heat is transferred from bottom to top by conduction. When the heating temperature reaches the critical value T₂ = T_cr (correspondingly ΔT = ΔT_cr), convection begins: when the parameter ΔT reaches its critical value, a spatial dissipative structure is born. At equilibrium the temperatures are equal, T₂ = T₁, ΔT = 0. Under a brief heating of the lower plane, i.e. a brief external disturbance, the temperature quickly becomes uniform again and returns to its original value; the disturbance decays, and the state is asymptotically stable. Under prolonged but subcritical heating (ΔT < ΔT_cr) the system again settles into a simple, unique state in which heat is carried to the top surface and released to the external environment (thermal conduction), Fig. 2.8, section a. This state differs from equilibrium in that the temperature, density and pressure become nonuniform, varying approximately linearly from the warm region to the cold one.

Fig. 2.8. Heat flow in a thin layer of liquid.

An increase in the temperature difference ΔT, that is, a further deviation of the system from equilibrium, causes the state of the stationary heat-conducting liquid to become unstable (section b in Fig. 2.8). It is replaced by a stable state (section c in Fig. 2.8) characterized by the formation of cells. At large temperature differences a liquid at rest cannot transfer enough heat; the liquid is "forced" to move, and to do so in a cooperative, collectively coordinated manner.

2.3.1c. LASER AS A SELF-ORGANIZING SYSTEM.

So, as an example of a physical system whose ordering is a consequence of external influence, consider the laser.

In the roughest description, a laser is a kind of glass tube into which light from an incoherent source (an ordinary lamp) enters and out of which comes a narrowly directed coherent light beam, with a certain amount of heat released.


At low pump power the electromagnetic waves that the laser emits are uncorrelated, and the radiation is similar to that of an ordinary lamp. Such incoherent radiation is noise, chaos. When the external influence, the pumping, is increased to a threshold critical value, the incoherent noise is transformed into a "pure tone", a sine wave: the individual atoms behave in a strictly correlated manner and self-organize.

Lamp → Laser

Chaos → Order

Noise → Coherent radiation

In the supercritical region, the “ordinary lamp” mode turns out to be unstable, but the laser mode is stable, Figure 2.9.

Fig. 2.9. Laser radiation in the subcritical (a) and supercritical (b) regions.

It can be seen that the formation of structure in a liquid and in a laser is described formally in very similar ways. The analogy is due to the presence of the same types of bifurcations in the corresponding dynamical equations.

We will consider this issue in more detail in the practical part, in Chapter 3.

2.3.2. CHEMICAL SYSTEMS.

In this area synergetics focuses its attention on phenomena accompanied by the formation of macroscopic structures. Typically, if the reagents are allowed to react while the reaction mixture is vigorously stirred, the final product is homogeneous. In some reactions, however, temporal, spatial or mixed (spatiotemporal) structures may arise. The most famous example is the Belousov-Zhabotinsky reaction.

2.3.2a. BELOUSOV-ZHABOTINSKY REACTION.

Let us consider the Belousov-Zhabotinsky reaction. Ce₂(SO₄)₃, KBrO₃, CH₂(COOH)₂ and H₂SO₄ are poured into a flask in certain proportions, a few drops of the redox indicator ferroin are added, and the mixture is stirred. More specifically, what is studied are the oxidation-reduction reactions

Ce³⁺ → Ce⁴⁺ ; Ce⁴⁺ → Ce³⁺

in a solution of cerium sulfate, potassium bromate, malonic acid and sulfuric acid. The addition of ferroin makes it possible to follow the progress of the reaction through color changes (through the spectral absorption). At high concentrations of the reactants, exceeding the critical value of the affinity, unusual phenomena are observed.

With the composition

cerium sulfate – 0.12 mmol/l,

potassium bromate – 0.60 mmol/l,

malonic acid – 48 mmol/l,

3-normal sulfuric acid,

some ferroin,

at 60 °C the change in the concentration of cerium ions takes on the character of relaxation oscillations: the color of the solution periodically changes with time from red (with an excess of Ce³⁺) to blue (with an excess of Ce⁴⁺), Fig. 2.10a.


Fig. 2.10. Temporal (a) and spatial (b) periodic structures in the Belousov-Zhabotinsky reaction.

This system and effect are called a chemical clock. If a disturbance is applied to the Belousov-Zhabotinsky reaction, a concentration or temperature pulse (introducing several millimoles of potassium bromate, or touching the flask for a few seconds), then after a certain transient regime the oscillations resume with the same amplitude and period as before the disturbance. The dissipative structure of the Belousov-Zhabotinsky reaction is thus asymptotically stable. The birth and persistence of undamped oscillations in such a system indicates that its individual parts act in concert, maintaining definite phase relationships. With the composition

cerium sulfate – 4.0 mmol/l,

potassium bromate – 0.35 mmol/l,

malonic acid – 1.20 mol/l,

sulfuric acid – 1.50 mol/l,

some ferroin,

at 20 °C periodic color changes occur in the system with a period of about 4 minutes. After several such oscillations, concentration inhomogeneities arise spontaneously, and over some time (about 30 minutes), if no new substances are introduced, stable spatial structures form, Fig. 2.10b. If reagents are supplied continuously and the final products are removed, the structure persists indefinitely.
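The text gives no kinetic equations for the Belousov-Zhabotinsky system; a minimal sketch of chemical oscillations of the same type can instead use the Brusselator, Prigogine's two-variable model, whose steady state becomes unstable and gives way to a limit cycle when B > 1 + A²:

```python
# Brusselator: dx/dt = A - (B+1)*x + x^2*y,  dy/dt = B*x - x^2*y.
A, B = 1.0, 3.0            # B > 1 + A^2, so the steady state (A, B/A) is unstable
x, y = 1.2, 3.2            # start near the unstable steady state (1, 3)
dt, steps = 0.001, 60000   # Euler integration up to t = 60

xs = []
for _ in range(steps):
    dx = A - (B + 1.0) * x + x * x * y
    dy = B * x - x * x * y
    x, y = x + dt * dx, y + dt * dy
    xs.append(x)

# Sustained relaxation oscillations: x keeps swinging far from the steady state,
# just as the cerium-ion concentration does in the chemical clock.
tail = xs[len(xs) // 2:]
print(max(tail) > 2.0, min(tail) < 0.5)
```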

2.3.3. BIOLOGICAL SYSTEMS.

The animal kingdom exhibits many structures that are highly ordered and function superbly. An organism as a whole continuously receives flows of energy (solar energy, for example, in plants) and of substances (nutrients) and releases waste products into the environment; a living organism is an open system. At the same time, living systems function distinctly far from equilibrium. In biological systems, self-organization processes allow them to "transform" energy from the molecular level to the macroscopic level. Such processes manifest themselves, for example, in muscle contraction, which leads to all kinds of motion, in the formation of charge in electric fish, in pattern recognition, in speech, and in other processes in living systems. The most complex biological systems are among the principal objects of research in synergetics. Whether the features of biological systems, for example their evolution, can be fully explained with the concepts of open thermodynamic systems and synergetics is at present entirely unclear. Nevertheless, several examples of an explicit connection between the conceptual and mathematical apparatus of open systems and biological order can be pointed out.

We will look at biological systems more specifically in Chapter 3, considering the dynamics of a single-species population and the "prey-predator" system.

2.3.4. SOCIAL SYSTEMS.

A social system is a certain integral formation whose main elements are people and their norms and connections. As a whole, the system forms a new quality that cannot be reduced to the sum of the qualities of its elements. There is some analogy here with the change of properties in the transition from a small to a very large number of particles in statistical physics: the transition from dynamical to statistical laws. At the same time it is quite obvious that any analogies with physicochemical and biological systems are highly conditional, so drawing an analogy between a person and a molecule, or anything similar, would be an unacceptable delusion. Nevertheless, the conceptual and mathematical apparatus of nonlinear nonequilibrium thermodynamics and synergetics turns out to be useful in describing and analyzing elements of self-organization in human society.

Social self-organization is one of the manifestations of spontaneous or forced processes in society aimed at ordering the life of the social system, at greater self-regulation. A social system is an open system, capable of, indeed compelled to, exchange information, matter and energy with the outside world. Social self-organization arises as a result of the purposeful individual actions of its components.

Let us consider self-organization in a social system, taking an urbanization zone as an example. In analyzing the urbanization of geographic zones it can be assumed that the growth of the local population of a given territory is driven by the presence of jobs in the zone. Here, however, there is a mutual dependence: the state of the market determines the demand for goods and services and hence employment. This gives rise to a nonlinear feedback mechanism in the growth of population density. The problem is solved on the basis of a logistic equation in which the zone is characterized by its capacity N and by new economic functions S_ik, the k-th function in the local area i of the city. The logistic equation describing the evolution of the population n_i can then be represented as

dn_i/dt = K n_i (N + Σ_k R_k S_ik − n_i) − d n_i (2.13)

where R_k is the weight, the significance, of the k-th function. An economic function changes with population growth: it is determined by the demand for the k-th product in the i-th region, which depends on the growth of the population and on the competition of enterprises in other zones of the city. The appearance of a new economic function plays the role of a socio-economic fluctuation and disrupts the uniform distribution of population density. Numerical calculations with such logistic equations can be useful in forecasting many problems.
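A rough numerical sketch of equation (2.13) for two zones can look as follows (all parameter values are invented for illustration, and a single economic function per zone is assumed):

```python
# dn_i/dt = K*n_i*(N + sum_k R_k*S_ik - n_i) - d*n_i   (eq. 2.13, two zones)
K, N, d, dt, steps = 0.01, 50.0, 0.1, 0.01, 50000

R = [1.0]                 # assumed weight of the single economic function
S = [[30.0], [5.0]]       # zone 0 hosts a much stronger economic function
n = [10.0, 10.0]          # equal initial populations

for _ in range(steps):
    for i in range(2):
        capacity = N + sum(Rk * Sik for Rk, Sik in zip(R, S[i]))
        n[i] += dt * (K * n[i] * (capacity - n[i]) - d * n[i])

# Each zone tends to its stationary level n_s = capacity - d/K;
# the zone with the stronger economic function supports the larger population.
print(round(n[0]), round(n[1]))   # prints: 70 45
```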

FORMULATION OF THE PROBLEM.

For the examples considered above, the literature gives only general discussions and conclusions; specific analytical or numerical calculations are not provided.

The purpose of this thesis is an analytical and numerical study of the self-organization of various systems.

CHAPTER 3

ANALYTICAL AND NUMERICAL STUDIES OF THE SELF-ORGANIZATION OF VARIOUS SYSTEMS.

3.1. BENARD CELLS.

To study these structures experimentally, it is enough to have a frying pan, a little oil and some fine powder to make the motion of the liquid visible. Pour the oil with the powder mixed into it into the frying pan and heat it from below (Fig. 3.1).

Fig. 3.1. Bénard convective cells.

If the bottom of the frying pan is flat and heated uniformly, we may assume that constant temperatures are maintained at the bottom, T₁, and at the surface, T₂. As long as the temperature difference ΔT = T₁ − T₂ is small, the powder particles are motionless, and hence so is the liquid.

We now gradually increase the temperature T₁. As the temperature difference grows toward the value ΔT_c the same picture is observed, but when ΔT > ΔT_c the entire medium breaks up into regular hexagonal cells (see Fig. 3.1), in the center of each of which the liquid moves upward and along the edges downward. If you take another frying pan, you can verify that the size of the resulting cells is practically independent of its shape and size. This remarkable experiment was first carried out by Bénard at the beginning of our century, and the cells themselves were named Bénard cells.

An elementary qualitative explanation of the cause of the fluid motion is as follows. Owing to thermal expansion the liquid stratifies, and in the lower layer the density ρ₁ is less than the density ρ₂ in the upper layer. An inverse density gradient arises, directed opposite to the force of gravity. If an elementary volume V is displaced slightly upward by a disturbance, then in the neighboring layer the Archimedean (buoyancy) force becomes greater than gravity, since ρ₂ > ρ₁, and an ascending flow develops. In the upper part, a small volume moving downward enters a region of lower density, and the Archimedean force becomes less than gravity, F_A < F_T: a descending flow of liquid arises. The direction of the descending and ascending flows in a given cell is random, but once the directions in one cell have been chosen, the motion of the flows in the neighboring cells is determined. The total entropy flow across the boundaries of the system is negative, i.e. the system gives off entropy, and in the stationary state it gives off exactly as much entropy as is produced inside the system (through friction losses):

dS_e/dt = q/T₁ − q/T₂ = q·(T₂ − T₁)/(T₁·T₂) < 0 (3.1)
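The sign of the boundary entropy flow in (3.1) is easy to check numerically (the heat flux and temperatures below are illustrative values, with the heated bottom T₁ hotter than the top T₂):

```python
# Entropy exchange of the Benard layer with its surroundings, eq. (3.1).
q = 100.0                 # W, heat flux through the layer (assumed)
T1, T2 = 350.0, 300.0     # K, hot bottom and cold top (assumed), T1 > T2

# Entropy received at T1 minus entropy exported at T2:
dSe_dt = q / T1 - q / T2
print(dSe_dt < 0)         # the system gives off entropy: prints True
```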

The formation of the cellular structure is explained by the minimal energy expenditure required of the system to create precisely this form of spatial structure. In this case the liquid moves upward in the central part of a cell and downward at its periphery.

Further supercritical heating of the liquid leads to the destruction of the spatial structure - a chaotic turbulent regime arises.


Fig. 3.2. Illustration of the onset of thermal convection in a liquid.

This question is accompanied by a visual illustration of the occurrence of thermal convection in a liquid.

3.2 LASER AS A SELF-ORGANIZING SYSTEM.

We have already discussed this question in the second chapter. Here let us consider a simple laser model.

A laser is a device in which photons are generated by the process of stimulated emission.

The change in the number of photons n with time, in other words the rate of photon generation, is determined by an equation of the form:

dn / dt = "Gain" - "Loss" (3.2)

The gain is due to so-called stimulated emission. It is proportional to the number of photons already present and to the number of excited atoms N. Thus:

Gain = G N n (3.3)

Here G is the gain constant, which can be obtained from the microscopic theory. The loss term is due to the escape of photons through the ends of the laser. The only assumption we make is that the escape rate is proportional to the number of photons present. Hence,

Losses = 2cn (3.4)

2c = 1/t₀, where t₀ is the lifetime of a photon in the laser.

We should now take into account one important circumstance that makes (3.2) a nonlinear equation of the form:

dn/dt = G N n − 2cn (3.5)

The number of excited atoms decreases through the emission of photons. This decrease ΔN is proportional to the number of photons present in the laser, since the photons continually return atoms to the ground state:

ΔN = αn (3.6)

Thus, the number of excited atoms is equal to

N = N₀ − ΔN (3.7)

where N₀ is the number of excited atoms maintained by the external pumping in the absence of laser generation.

Substituting (3.3) - (3.7) into (3.2), we obtain the basic equation of our simplified laser model:

dn/dt = −kn − k₁n², with k₁ = Gα (3.8)

where the constant k is given by the expression:

k = 2c − GN₀ ≷ 0 (3.9)

If the number of excited atoms N₀ (created by the pumping) is small, then k is positive, while for sufficiently large N₀, k can become negative. The sign change occurs when

GN₀ = 2c (3.10)

This condition is the laser generation threshold condition.

From bifurcation theory it follows that for k > 0 there is no laser generation, while for k < 0 the laser emits photons.

Below or above the threshold, the laser operates in completely different modes.

Let us now solve equation (3.8), the equation of a single-mode laser, and analyze it analytically.

Let us write equation (3.8) in the form

dn/dt = −kn − k₁n²

Dividing the original equation by n²,

(1/n²)·dn/dt = −k/n − k₁

and introducing the new function Z = 1/n = n⁻¹, so that Z′ = −n⁻²·n′, the equation takes the form

−Z′ = −kZ − k₁

Dividing both sides of this equation by −1, we get

Z′ − kZ = k₁ (3.11)

Equation (3.11) is the linear equation obtained from the Bernoulli equation (3.8), so we make the substitution Z = U·V, where U and V are as yet unknown functions of t; then Z′ = U′V + UV′.

Equation (3.11), after changing variables, takes the form

U′V + UV′ − kUV = k₁

Transforming, we obtain

U′V + U(V′ − kV) = k₁ (3.12)

Let us solve equation (3.12). Setting

V′ − kV = 0 → dV/dt = kV,

we separate the variables: dV/V = k dt → ln V = kt,

whence V = e^(kt) (3.13)

Equation (3.12) then reduces to

U′e^(kt) = k₁,

that is, dU/dt = k₁e^(−kt), dU = k₁e^(−kt)dt. Integrating, we find

U = −(k₁/k)e^(−kt) + C (3.14)

In solving the Bernoulli equation we made the substitution Z = U·V. Substituting (3.13) and (3.14) into it, we obtain

Z = C·e^(kt) − k₁/k

Since the function Z = n⁻¹ was introduced earlier,

n(t) = 1/(C·e^(kt) − k₁/k) (3.15)

The initial condition n₀ = 1/(C − k₁/k) allows us to determine the constant C:

C = 1/n₀ + k₁/k

Substituting this constant into equation (3.15), we obtain

n(t) = 1/[(1/n₀ + k₁/k)·e^(kt) − k₁/k] (3.16)
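The closed-form solution (3.16) can be checked against a direct numerical integration of (3.8) (a sketch; the parameter values are arbitrary illustrative choices in the lasing regime k < 0):

```python
import math

k, k1, n0 = -1.0, 0.5, 0.2   # lasing regime: k < 0; arbitrary illustrative values

def n_exact(t):
    # Solution (3.16): n(t) = 1 / ((1/n0 + k1/k)*exp(k*t) - k1/k)
    return 1.0 / ((1.0 / n0 + k1 / k) * math.exp(k * t) - k1 / k)

# Euler integration of (3.8): dn/dt = -k*n - k1*n^2
n, dt = n0, 1e-4
for _ in range(100000):      # integrate up to t = 10
    n += dt * (-k * n - k1 * n * n)

# Both approach the stationary photon number n_s = -k/k1 = 2.
print(round(n, 3), round(n_exact(10.0), 3), -k / k1)
```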

Let us study the function (3.16) for k = 0, k < 0 and k > 0.

At k®0 ; e kt ® 0 ; (e kt - 1)®0, that is (e kt - 1)×k 1 /k®0×¥ (uncertainty), let's reveal this uncertainty using L'Hopital's rule. This uncertainty of the form 0×¥ should be reduced to the form . In this case, as always when applying L'Hopital's rule, as calculations proceed, it is recommended to simplify the resulting expressions as follows:

n(k) for k ® 0 ® 0 , therefore

Far below saturation n is small, so the nonlinear term in (3.8) may be neglected. Linearizing the equation, we get

dn/dt = −kn ⇒ ln n = −kt + c ⇒ n = n₀ e^(−kt).

Let us plot the solution for these cases.

Fig. 3.3. Toward self-organization in a single-mode laser:

curve 1: k < 0, lasing mode;

curve 2: k = 0, bifurcation point, threshold;

curve 3: k > 0, lamp mode.

For k = 0, equation (3.8) takes the form dn/dt = −k₁n²; solving it, we get

n(t) = n₀ / (1 + n₀k₁t).

As t → ∞ the solution tends to a constant: it approaches the stationary state regardless of the initial value n₀, but depending on the signs of k and k₁ (see Fig. 3.3).

Thus the solution of (3.8) tends to a stationary value.
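The two regimes can be verified numerically. Below is a minimal sketch, assuming the single-mode laser equation has the form dn/dt = −kn − k₁n² implied by the derivation above; the parameter values are illustrative. It compares the closed-form solution (3.16) with a direct Euler integration:

```python
import math

def n_analytic(t, n0, k, k1):
    """Closed-form solution (3.16) of dn/dt = -k*n - k1*n**2."""
    if k == 0:
        return n0 / (1 + n0 * k1 * t)  # limiting form found via L'Hopital's rule
    return 1.0 / ((1.0 / n0 + k1 / k) * math.exp(k * t) - k1 / k)

def n_euler(t, n0, k, k1, steps=100_000):
    """Direct Euler integration of dn/dt = -k*n - k1*n**2, as a check."""
    n, dt = n0, t / steps
    for _ in range(steps):
        n += dt * (-k * n - k1 * n * n)
    return n

# Lamp mode (k > 0): the photon number decays to zero.
# Lasing mode (k < 0): n tends to the stationary value -k/k1.
for k in (0.5, -0.5):
    assert abs(n_analytic(5.0, 1.0, k, 1.0) - n_euler(5.0, 1.0, k, 1.0)) < 1e-3
assert abs(n_analytic(50.0, 1.0, -0.5, 1.0) - 0.5) < 1e-6
```

The last assertion checks the stationary lasing value −k/k₁ = 0.5 for k = −0.5, k₁ = 1.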

3.3. POPULATION DYNAMICS.

Extensive information has been collected on the distribution and abundance of species. A macroscopic characteristic describing a population is the number of individuals in it; this number acts as an order parameter. If different species live on a common food resource, interspecific struggle begins, and Darwin's principle applies: the fittest species survives. (One cannot fail to note the strong analogy between the competition of laser modes and interspecific struggle.) If food resources of several different kinds are available, coexistence of species becomes possible. The numbers of the species may be subject to temporal fluctuations.

3.3.1. A SINGLE SPECIES.

Let us first consider a single population with n individuals. If food resources A are available, the individuals reproduce at the rate

(dn/dt)birth = kAn

and die at the rate

(dn/dt)death = −dn.

Here k and d are birth and death coefficients, which in general depend on the parameters of the environment. If food were unlimited, the evolution equation would be

dn/dt = (kA − d)n = an, where a = kA − d.

This equation is linear and describes unlimited exponential growth (for kA > d) or exponential extinction (for kA < d) of the population.

Fig. 3.4. Curve 1: exponential growth; a > 0, kA > d.

Curve 2: exponential extinction; a < 0, kA < d.

In general, however, food resources are limited and are consumed at a finite rate; at the same time they may be replenished at some rate. Here we consider the limiting case of conservation of the total quantity of organic matter:

A + n = N = const,

where N characterizes the ability of the habitat to support the population.

Then, taking A = N − n into account, we obtain the following evolution equation for a population of a single species (the Verhulst logistic equation):

dn/dt = (kN − d)n − kn² (3.17)

Let us solve equation (3.17) analytically. Rewrite it as

dn / [n(k₁ − kn)] = dt, where k₁ = kN − d.

Using the table integral ∫ dx/[x(a − bx)] = (1/a) ln[x/(a − bx)], we get

(1/k₁) ln [n/(k₁ − kn)] = t + c.

Exponentiating and solving for n,

n/(k₁ − kn) = C e^(k₁t),

hence

n(t) = k₁C e^(k₁t) / (1 + kC e^(k₁t)).

The initial condition n(0) = n₀ gives the constant C = n₀/(k₁ − kn₀). Substituting C and simplifying, we finally obtain the solution of equation (3.17):

n(t) = n₀k₁ / [kn₀ + (k₁ − kn₀) e^(−k₁t)].

So an analytical solution of the logistic equation has been obtained. It indicates that population growth stops at a certain finite stationary level

n₁ = k₁/k = (kN − d)/k = N − d/k,

that is, the parameter n₁ gives the height of the saturation plateau to which n(t) tends with time.

The parameter n₀ = n(t₀) is the initial size of the population. Indeed, as t → ∞, n(t) → n₁; that is, n₁ is the maximum number of individuals of the species in the given habitat. In other words, n₁ characterizes the capacity of the environment with respect to the population. Finally, the parameter k₁ = kN − d sets the slope of the initial growth.

Note that for a small initial number n₀ of individuals, the initial population growth is almost exponential: n(t) ≈ n₀ e^(k₁t).

Fig. 3.5. Logistic curve (evolution of a population of a single species).

The solution of equation (3.17) is represented by the logistic curve (Fig. 3.5). The evolution is completely determined: the population stops growing when the resource of the environment is exhausted.

This is self-organization under limited food resources: the system organizes itself, and the explosive growth of the population (Fig. 3.4, curve 1) is replaced by a curve with saturation.

We emphasize that the description of this biological system uses the conceptual and physico-mathematical apparatus of nonlinear nonequilibrium thermodynamics.
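As a numerical illustration, the following sketch compares the closed-form logistic solution with a direct integration of (3.17); the parameter values k, d and N are assumed for the example, not taken from the text:

```python
import math

# Illustrative parameters (assumed): birth coefficient k, death coefficient d,
# habitat capacity N. Then k1 = k*N - d and the plateau is n1 = k1/k = N - d/k.
k, d, N = 0.02, 0.1, 100.0
k1 = k * N - d          # 1.9
n1 = k1 / k             # 95.0

def n_logistic(t, n0):
    """Closed-form solution of the Verhulst equation dn/dt = k1*n - k*n**2."""
    return n0 * k1 / (k * n0 + (k1 - k * n0) * math.exp(-k1 * t))

def n_euler(t, n0, steps=200_000):
    """Direct Euler integration of the same equation, as a check."""
    n, dt = n0, t / steps
    for _ in range(steps):
        n += dt * (k1 * n - k * n * n)
    return n

assert abs(n_logistic(20.0, 2.0) - n1) < 1e-9   # growth saturates at n1
assert abs(n_logistic(1.0, 2.0) - n_euler(1.0, 2.0)) < 1e-2
```

Starting from n₀ = 2, the population climbs almost exponentially and then saturates at the carrying capacity n₁ = 95, reproducing the logistic curve of Fig. 3.5.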

It may happen, however, that as a result of events not controlled within the model, new species appear, characterized by different ecological parameters k, N and d. Such an ecological fluctuation raises the question of structural stability: the new species may either die out or displace the original inhabitants. Using linear stability analysis, it is not difficult to show that the new species displace the old ones only if their stationary level N − d/k is higher.

The sequence in which species fill an ecological niche is shown in Fig. 3.6.

Fig. 3.6. Successive filling of an ecological niche by different species.

This model allows us to give a precise quantitative meaning to the statement "survival of the fittest" within the framework of the problem of filling a given ecological niche.

3.3.2. THE "PREY - PREDATOR" SYSTEM.

Consider a system of two species, a "prey" and a "predator" (for example, rabbits and foxes); the evolution of such a system and its self-organization look different from the previous case.

Let the biological system contain two populations: the "prey", rabbits (K), and the "predators", foxes (L), with numbers K and L.

Let us now carry out an argument that will allow us to explain the existence of dissipative structures.

Rabbits (K) eat grass (T). Let us assume that the supply of grass is constant and inexhaustible. Then the simultaneous presence of grass and rabbits leads to unlimited growth of the rabbit population. This process can be represented symbolically as follows:

Rabbits + Grass → More rabbits

The fact that there is always plenty of grass in the rabbits' land is quite analogous to the continuous supply of thermal energy in the problem of Bénard cells; the process as a whole soon looks like a dissipative one (much like the Bénard process).

The "Rabbits + Grass" reaction proceeds spontaneously in the direction of increasing the rabbit population; this is a direct consequence of the second law of thermodynamics.

But now predatory foxes (L), for whom the rabbits are prey, creep into our picture, where the rabbits frolic peacefully. Just as the number of rabbits grows as they eat grass, the number of foxes grows through the consumption of rabbits:

Foxes + Rabbits → More foxes

In turn the foxes, like the rabbits, are themselves victims - this time of humans. That is, the process

Foxes → Furs

takes place.

The final product - Furs - plays no direct role in the further course of the process. It can, however, be regarded as a carrier of the energy removed from the system - energy that was initially supplied to it (for example, in the form of grass).

Thus, in an ecological system there is also a flow of energy - similar to what happens in a chemical test tube or in a biological cell.

It is clear that what actually happens is periodic oscillation of the rabbit and fox populations: a rise in the number of rabbits is followed by a rise in the number of foxes, which is replaced by a decline in the number of rabbits, accompanied by an equally sharp decline in the number of foxes, then by a renewed rise in the number of rabbits, and so on (Fig. 3.7).

Fig. 3.7. Changes in the rabbit and fox populations with time. The presence of periodicity means the emergence of an ecological structure.

Over time, the numbers of both populations change in accordance with the successive passage of points on the graph. After a time (whose specific value depends on the rate at which the foxes eat the rabbits, as well as on the reproduction rates of both species), the whole cycle begins again.

The behavior of the populations at different fertilities, and at different abilities to escape extermination, can be studied quantitatively using the program POPULATION (see the appendix).

This program solves the equations of the dissipative structure "rabbits - foxes" and plots the result. The system of differential equations being solved is

dK/dt = k₁TK − k₂KL

dL/dt = k₂KL − k₃L

Here the letters K, L, T denote, respectively, the numbers of rabbits and foxes and the amount of grass; the coefficients k₁, k₂, k₃ are, respectively, the birth rate of rabbits, the rate at which foxes eat rabbits, and the death rate of foxes.
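A minimal sketch of such a simulation is shown below; the coefficients and initial values are illustrative assumptions, not the defaults built into the POPULATION program. It counts the maxima of the rabbit population to confirm that the numbers oscillate periodically:

```python
# "Rabbits - foxes" dynamics with assumed coefficients; grass T is held
# constant, as in the text.
k1, k2, k3, T = 1.0, 1.0, 1.0, 1.0
K, L = 0.4, 0.4              # initial rabbit and fox populations
dt, steps = 0.001, 40_000    # integrate up to t = 40

peaks = 0                    # local maxima of the rabbit population
prev, curr = K, K
for _ in range(steps):
    dK = (k1 * T * K - k2 * K * L) * dt   # rabbits: birth minus predation
    dL = (k2 * K * L - k3 * L) * dt       # foxes: feeding minus death
    K, L = K + dK, L + dL
    if prev < curr > K:
        peaks += 1
    prev, curr = curr, K

# Several maxima over the run: the populations oscillate periodically.
assert peaks >= 3 and K > 0 and L > 0
```

The rabbit curve repeatedly rises and falls, with the fox curve lagging behind it, reproducing the periodic ecological structure of Fig. 3.7.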

The program will ask for the value of a ratio of the coefficients (approximately equal to 1), the constant amount of grass (also usually taken equal to 1), the initial values of the rabbit and fox populations (usually 0.4), the duration of the run (a typical value is 700) and the step along the time axis (usually equal to 1).

The output of the POPULATION program is a graph. It shows the behavior of the populations at different fertilities and at different abilities to escape extermination.

The periodic oscillations of the rabbit and fox populations described above - a rise in the number of rabbits followed by a rise in the number of foxes, then a decline of both, then a renewed rise of the rabbits, and so on - reproduce themselves in the model; that is, the system self-organizes.

The program listing is given in the appendix.

CONCLUSION.

We have seen that the irreversibility of time is closely related to instabilities in open systems. I.R. Prigogine defines two times. One is dynamical time, which specifies the description of the motion of a point in classical mechanics or the change of the wave function in quantum mechanics. The other is a new, internal time, which exists only for unstable dynamical systems; it characterizes the state of the system and is associated with entropy.

Processes of biological or social development do not have a final state. These processes are unlimited. Here, on the one hand, as we have seen, there is no contradiction with the second law of thermodynamics, and on the other hand, the progressive nature of development (progress) in an open system is clearly visible. Development is associated, generally speaking, with the deepening of disequilibrium, and therefore, in principle, with the improvement of structure. However, as the structure becomes more complex, the number and depth of instabilities and the probability of bifurcation increase.

The success in solving many problems has made it possible to identify common patterns in them, to introduce new concepts and, on this basis, to formulate a new system of views - synergetics. It studies questions of self-organization and should therefore give a picture of the development and of the principles of self-organization of complex systems, so that these can be applied in problems of control. This task is of great importance, and, in our opinion, success in this research will mean progress in solving global problems: controlled thermonuclear fusion, environmental problems, problems of control, and others.

We understand that all the examples given in this work are model problems, and many professionals working in the corresponding fields of science may find them too simple. They are right about one thing: the use of the ideas and concepts of synergetics must not replace a deep analysis of the specific situation. Finding out what the path from model problems and general principles to a real problem might be is a matter for specialists. Briefly, we can say this: if one most important process (or a small number of them) can be identified in the system under study, then synergetics will help to analyze it. It indicates the direction in which to move. And, apparently, that is already a great deal.

The study of most real nonlinear problems would be impossible without computational experiments and without the construction of approximate and qualitative models of the processes under study (synergetics plays an important role in their creation). The two approaches complement each other, and the effectiveness of one is often determined by the successful use of the other. The future of synergetics is therefore closely connected with the development and wide use of computational experiments.

The simplest nonlinear media, studied in recent years, have complex and interesting properties. Structures in such media can develop independently and be localized, can multiply and interact. These models may prove useful in studying a wide range of phenomena.

It is known that a certain disunity exists between the natural-scientific and humanitarian cultures. A rapprochement, and in the future perhaps a harmonious mutual enrichment, of these cultures can be built on the foundation of a new dialogue with nature, conducted in the language of the thermodynamics of open systems and synergetics.

LITERATURE :

1. Bazarov I.P. Thermodynamics. - M.: Higher School, 1991.

2. Glansdorff P., Prigogine I. Thermodynamic Theory of Structure, Stability and Fluctuations. - M.: Mir, 1973.

3. Careri G. Order and Disorder in the Structure of Matter. - M.: Mir, 1995.

4. Kurdyumov S.P., Malinetsky G.G. Synergetics: the Theory of Self-Organization. Ideas, Methods, Perspectives. - M.: Znanie, 1983.

5. Nicolis G., Prigogine I. Self-Organization in Nonequilibrium Systems. - M.: Mir, 1979.

6. Nicolis G., Prigogine I. Exploring Complexity. - M.: Mir, 1990.

7. Petrovsky I.G. Lectures on the Theory of Ordinary Differential Equations. - M.: MSU, 1980.

8. Popov D.E. Interdisciplinary Connections and Synergetics. - KSPU, 1996.

9. Prigogine I. Introduction to Thermodynamics of Irreversible Processes. - M.: Foreign Literature, 1960.

10. Prigogine I. From Being to Becoming. - M.: Nauka, 1985.

11. Synergetics: a Collection of Articles. - M.: Mir, 1984.

12. Haken H. Synergetics. - M.: Mir, 1980.

13. Haken H. Advanced Synergetics: Instability Hierarchies of Self-Organizing Systems and Devices. - M.: Mir, 1985.

14. Shelepin L.A. Far from Equilibrium. - M.: Znanie, 1987.

15. Eigen M., Schuster P. The Hypercycle: a Principle of Natural Self-Organization of Macromolecules. - M.: Mir, 1982.

16. Atkins P. Order and Disorder in Nature. - M.: Mir, 1987.

Thermodynamic equilibrium state - a state of a system that does not change with time and is not accompanied by transfer of matter or energy through the system. This state is characterized, first of all, by equality of temperature in all parts of the system. The existence of a single temperature for all parts of a system at equilibrium is sometimes called the zeroth law of thermodynamics. It can also be formulated as follows:

All parts of a system in thermodynamic equilibrium have the same temperature.

In accordance with this law, the equilibrium of several systems can be characterized by the following postulate: if system A is in thermodynamic equilibrium with system B and with system C, then systems B and C are also in equilibrium with each other.

First law of thermodynamics

This principle was first formulated by J.R. Mayer in 1842, and in 1845 it was tested experimentally by J.P. Joule, who established the equivalence of heat and work.

The first law (like the other laws of thermodynamics) is a postulate. Its validity is proven by the fact that none of the consequences to which it leads conflicts with experience. This principle is a universal law, and a number of its consequences are of great importance for physical chemistry and for solving various industrial problems.

In chemistry, the first law of thermodynamics is considered as the law of conservation of energy for chemical processes accompanied by thermal phenomena. It underlies most of the equations of chemical thermodynamics. This law corresponds to the mathematical expression

ΔU = Q − w,

which can be expressed as follows:

1. In any process, the change in internal energy ΔU = U₂ − U₁ of a system is equal to the amount of heat Q imparted to the system minus the amount of work w performed by the system.

(The symbol Δ denotes the difference between the final and initial values of state functions, whose changes do not depend on the path of the process; it is therefore not applicable to heat and work.) For infinitesimal changes, the mathematical expression of the first law should be written as follows:

dU = δQ − δw

(where d is the sign of the differential of a state function, and δ is the sign of an infinitesimal amount of a path-dependent quantity).

There are other formulations of the 1st law of thermodynamics, which have their own ways of writing mathematical expression. For chemistry, the most important of them are the following:

2. In any isolated system, the total energy supply remains constant .



That is, at Q = 0 and w = 0

U = const and ΔU = 0

3. If the system does not do work, then any change in internal energy occurs only due to the absorption or release of heat .

That is, at w = 0

ΔU = Q

It follows that the thermal effect of a process Q_V, measured at constant volume (for example, in a hermetically sealed calorimetric vessel that cannot expand), is numerically equal to the change in internal energy:

Q_V = ΔU

4. If the system does not receive or give off heat, then the work it does is produced only due to the loss of internal energy .

That is, at Q = 0

ΔU = −w, or w = −ΔU

It follows that it is impossible to create a perpetual motion machine of the first kind, that is, a mechanism that would do work indefinitely without an inflow of energy from outside.

Enthalpy

Most chemical processes - in nature, in the laboratory and in industry - occur not at constant volume but at constant pressure. In this case, often only one of the various kinds of work is performed - the work of expansion, equal to the product of the pressure and the change in volume of the system:

w = pΔV.

In this case the equation of the first law of thermodynamics can be written as

ΔU = Q_p − pΔV

Q_p = ΔU + pΔV

(the subscript p shows that the amount of heat is measured at constant pressure). Replacing the changes of the quantities by the corresponding differences, we get:

Q_p = U₂ − U₁ + p(V₂ − V₁)

Q_p = (U₂ + pV₂) − (U₁ + pV₁)

Q_p = (U + pV)₂ − (U + pV)₁ = H₂ − H₁

Since p and V are state parameters and U is a state function, the sum U + pV = H is also a state function. This function is called the enthalpy. Thus, the heat absorbed or released by a system in a process at constant pressure is equal to the change in enthalpy:

Q_p = ΔH

There is a relation between the change in enthalpy and the change in the internal energy of the system, expressed by the equations

ΔH = ΔU + ΔnRT or ΔU = ΔH − ΔnRT,

which can be obtained using the Mendeleev-Clapeyron equation

pV = nRT, whence pΔV = ΔnRT.

The quantities ΔH of various processes are relatively easily measured in calorimetric installations operating at constant pressure, so the enthalpy change is widely used in thermodynamic and thermochemical studies. The SI unit of (molar) enthalpy is J/mol.

It should be remembered that the absolute value of the enthalpy, like that of the internal energy, cannot be calculated from the equations of thermodynamics; chemical thermodynamics and thermochemistry, however, mainly require the changes of enthalpy in processes.
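As a worked example of the relation ΔH = ΔU + ΔnRT, here is a small sketch for a hypothetical gas-phase reaction; the values of ΔU and Δn are assumed purely for illustration:

```python
R = 8.314        # J/(mol*K), universal gas constant
T = 298.15       # K, standard temperature
dn = -1.0        # assumed change in moles of gas (e.g. 2 mol of gas -> 1 mol)
dU = -80_000.0   # J, assumed internal-energy change measured at constant volume

# At constant pressure part of the energy balance is expansion work
# p*dV = dn*R*T, so the measured heat effect is the enthalpy change:
dH = dU + dn * R * T
print(round(dH, 1))  # -82478.8
```

Since Δn is negative (the gas contracts), the surroundings do work on the system and the heat released at constant pressure exceeds |ΔU| by about 2.5 kJ.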


CHAPTER 2

So, any economic theory not based on physics is a utopia!

To understand what wealth is, one should not read economics books but study the basics of thermodynamics, which was born at about the same time as Marx's Capital.

Thermodynamics was born because people wanted to subjugate the "driving force of fire", for which an efficient steam engine had to be created. Therefore thermodynamics at first dealt with the study of heat.

However, over time thermodynamics expanded considerably and became a theory of the transformations of all forms of energy. In this form it exists to this day.

The value of thermodynamics turned out to be so great that the English writer, physicist and statesman Charles Percy Snow proposed a test of general culture, according to which ignorance of the second law of thermodynamics would be equivalent to ignorance of the works of Shakespeare.

Thermodynamics is based on a small number of statements that, in a condensed form, incorporate the vast experience of people in the study of energy.

These statements are called the laws, or principles, of thermodynamics. There are four of them.

The second law was the first to be formulated, and the zeroth law the last; the first and third laws of thermodynamics were established between them.

The zeroth law of thermodynamics was formulated only about a hundred years ago.

For progressors and for business, the zeroth law may be of even greater value than the much more famous second law, and here is why.

First, it says the following: regardless of the initial state of an isolated system, thermodynamic equilibrium will eventually be established in it.

It is this statement that opens the way to a scientific understanding of the nature of wealth.

Secondly, the zeroth law introduces the concept of temperature into the language of science.

And strange as it may sound, it is this very deep concept (temperature) that allows us to describe the conditions necessary for the generation of new wealth.

Although, if you forget about internal combustion engines and remember about the incubator, then nothing strange is observed here.

The zeroth law is formulated as follows:

If system A is in thermodynamic equilibrium with system B, and B, in turn, with system C, then system A is in equilibrium with C. Moreover, their temperatures are equal.

The first law of thermodynamics was formulated in the middle of the 19th century. Kelvin stated it briefly as follows: in any isolated system, the supply of energy remains constant.

Kelvin gave this formulation because it suited his religious views. He believed that at the moment of the creation of the Universe the Creator endowed it with a supply of energy, and that this divine gift would exist forever.

The irony of the situation is this: according to the theory of the expanding Universe, the total energy of the Universe is indeed constant, but at the same time equal to zero. The positive part of the energy of the Universe, equivalent to the mass of the particles existing in it, may be exactly compensated by the negative part of the energy due to the gravitational potential of the field of attraction.

Second law of thermodynamics states that spontaneous transfer of heat from a less heated body to a more heated body is impossible.

If we compare the first and second laws of thermodynamics, we can say this: the first law of thermodynamics prohibits the creation of a perpetual motion machine of the first kind, and the second law of thermodynamics prohibits the creation of a perpetual motion machine of the second kind.

A perpetual motion machine of the first kind is an engine that does work without drawing energy from any source. A perpetual motion machine of the second kind is an engine with an efficiency equal to unity, that is, an engine that converts one hundred percent of the heat supplied to it into work.

But according to Marx's theory, a hired worker is a mechanism with an efficiency greater than one. And Marx sees no problem in the fact that he has invented a super-perpetual motion machine. Well done, Marx! Nor do modern economists with the title of Doctor of Science see any problem in this! It is as if physics simply does not exist for them.

Third law of thermodynamics states that it is impossible to cool a substance to absolute zero in a finite number of steps.

In conclusion, one piece of advice: look up information about perpetual motion machines of the third kind on the Internet. First, it is interesting. And secondly, a progressor must understand that economists are precisely the people who are building a perpetual motion machine of the third kind.


