Statistical physics and thermodynamics

Classical and quantum statistical physics. Derivation of the Gibbs relation. Thermodynamic principles. Liouville's theorem and kinetic equations of Boltzmann and Ziegler. Methods of statistical physics in heterogeneous media.

1. Derivation of the Gibbs relation

Introductory Notes . The central place in the mechanics of heterogeneous media is occupied by the derivation of constitutive equations. It is the constitutive equations that contain the specification allowing one to distinguish between media with different mechanical properties. There are various ways to derive constitutive equations - both strict ones based on averaging methods, and heuristic ones. The most common method is a combination of thought experiments that take thermodynamic principles into account. Both of these approaches are phenomenological, although the thermodynamic method is deeply developed and based on fundamental physical laws. It is clear that the phenomenological derivation of the constitutive relations needs to be justified from general physical principles, in particular, by statistical methods.

Statistical physics studies systems consisting of a huge number of elements of the same or similar composition (atoms, molecules, ions, submolecular structures, etc.). In the mechanics of heterogeneous media, such elements are microinhomogeneities (pores, cracks, grains, etc.). Studying them by deterministic methods is practically impossible. At the same time, the huge number of these elements allows statistical patterns to manifest themselves and the system to be studied by statistical methods.

At the core of statistical methods are the concepts of the main system and the subsystem. The main system (thermostat) is much larger than the subsystem, but both are in a state of thermodynamic equilibrium. The object of study in statistical physics is precisely the subsystem, which in continuum mechanics is identified with an elementary volume, and in the mechanics of heterogeneous media with the volume of the phases in an elementary volume.

The Gibbs method in statistical physics is based on the concepts of phase space and of trajectories in phase space. Phase space is the topological product of the coordinate and momentum spaces of each particle that makes up the subsystem. Trajectories in phase space contain much unnecessary information, for example, initial values and information about boundary conditions where the trajectory reaches the boundary. When describing a single trajectory in phase space, the ergodic hypothesis is usually used (or some surrogate of it, which modifies it slightly but is amenable to rigorous proof). The subtleties of the proof of the ergodic hypothesis are not important here, and therefore we do not dwell on them. It allows one trajectory to be replaced by a whole ensemble of states. The equivalent description by an ensemble of states allows us to get rid of this unnecessary information, and it admits a simple and transparent interpretation: the ensemble can be imagined as a fictitious gas in phase space, described by a transport equation.

The statistical approach includes two levels of description - quantum and classical. Each microscopic inhomogeneity of a heterogeneous medium is described by continuum mechanics as some homogeneous body. It is assumed that quantum statistical physics has already been used in establishing the mechanical and thermodynamic properties of these inhomogeneities. When we perform averaging over random inhomogeneities in a heterogeneous medium, we treat these inhomogeneities as classical random objects. The line of reasoning in quantum and classical statistical physics is very similar, although there are differences. In quantum statistics the phase volume takes discrete values. However, this is not the only difference. In quantum statistics the fictitious gas is incompressible and undergoes only transport. In classical statistics the transport equation includes a term that describes dissipative processes at the molecular level. Formally it looks like a source. The divergence form of this source preserves the full mass of the fictitious gas while allowing its local disappearance and reappearance. This process resembles diffusion in the fictitious phase space.

Further, on the basis of classical statistics, thermodynamics itself is expounded, including the thermodynamics of irreversible processes. The concepts of thermodynamic functions are introduced, with the help of which the constitutive equations are derived. Poroelastic media include both conservative and dissipative processes. Reversible elastic deformations occur in the skeleton, which represents a conservative thermodynamic system, while dissipative processes occur in the fluid. In a porous-viscous medium, both phases (skeleton and fluid) are dissipative.

Microprocesses and macroprocesses . In heterogeneous media, a subsystem is an elementary volume that satisfies the postulates of heterogeneous media; in particular, it satisfies the conditions of local statistical homogeneity and local thermodynamic equilibrium. Accordingly, all objects and processes are divided by scale into microprocesses and macroprocesses. We will describe macroprocesses using generalized coordinates $x_i$ and generalized forces $X_i$. Here the subscripts denote not only vector and tensor indices but also different quantities (including quantities of different tensor dimension). When considering microprocesses we will use generalized coordinates $q_k$ and generalized velocities $\dot q_k$. These coordinates describe the motion of large molecules, their associations and inhomogeneities, which are treated as classical objects. The phase space of the subsystem is formed by the coordinates and velocities of all particles composing a given elementary volume.

It should be noted that in quantum mechanics the nature of the particles is strictly established. The number of particles is finite, and the laws of their motion are known and uniform for each type of particle. A completely different situation arises in the mechanics of heterogeneous media. As a rule, we have constitutive relations derived by phenomenological methods for each of the phases. General constitutive relations for the entire elementary volume at the macro level are usually the subject of research. For this reason, the interaction of micro-level elements in heterogeneous media does not lend itself to standard research methods.

In this regard, new methods and approaches are required, which have not yet been fully developed. One such approach is Ziegler's generalization of Gibbs' theory. Its essence lies in some modification of the Liouville equation. This approach will be described in more detail below. We first give a standard presentation of Gibbs's theory, and then present ideas that generalize it.

The system energy changes due to work at the macro level, which is expressed by the relation $\delta A = \sum_i X_i \, dx_i$. It also changes due to the influx of heat $\delta Q$ associated with the movement of molecules. Let us write down the first law of thermodynamics in differential form

$dE = \delta Q + \sum_i X_i \, dx_i$ .   (1.1)

We will describe microprocesses using the Lagrange equations

$\dfrac{d}{dt}\dfrac{\partial L}{\partial \dot q_k} - \dfrac{\partial L}{\partial q_k} = 0$ ,   (1.2)

where $L = K - U$ is the Lagrange function, $K$ the kinetic and $U$ the potential energy.

Gibbs' theory imposes the following restrictions. It is assumed that the potential energy depends on the microcoordinates and the macrocoordinates, $U = U(q, x)$, while the kinetic energy depends only on the microcoordinates and their velocities, $K = K(q, \dot q)$. Under such conditions the Lagrange function does not depend explicitly on time or on the macro-velocities:

$L = K(q, \dot q) - U(q, x), \qquad \dfrac{\partial L}{\partial t} = 0, \qquad \dfrac{\partial L}{\partial \dot x_i} = 0$ .

The approach based on the equations of motion in Lagrange form (1.2) can be replaced by the equivalent Hamiltonian formalism by introducing generalized momenta for the microcoordinates

$p_k = \dfrac{\partial L}{\partial \dot q_k}$ ,

and the Hamilton function

$H = \sum_k p_k \dot q_k - L$ ,

which has the meaning of the total energy of the particle. Let us write down the increment of the Hamilton function

$dH = \sum_k \left( \dot q_k \, dp_k + p_k \, d\dot q_k \right) - dL .$

Due to the definition of the momenta and the Lagrange equations of motion, this expression is transformed into

$dH = \sum_k \left( \dot q_k \, dp_k - \dot p_k \, dq_k \right) - \sum_i \dfrac{\partial L}{\partial x_i} \, dx_i$ ,   (1.2)

from which follow Hamilton's equations of motion

$\dot q_k = \dfrac{\partial H}{\partial p_k}$ ,
$\dot p_k = -\dfrac{\partial H}{\partial q_k}$ ,   (1.3a)

where $H$ has the meaning of the energy of the system, as well as the additional identity

$\dfrac{\partial H}{\partial x_i} = -\dfrac{\partial L}{\partial x_i}$ .   (1.3b)

It should be noted here that the Lagrange and Hamilton functions are expressed through different arguments. Therefore, the last identity has a not entirely trivial meaning. Let us write down the differential expression (1.2) for one particle along its trajectory:

$\dfrac{dH}{dt} = \sum_k \left( \dfrac{\partial H}{\partial q_k} \dot q_k + \dfrac{\partial H}{\partial p_k} \dot p_k \right) + \sum_i \dfrac{\partial H}{\partial x_i} \dot x_i .$

Using (1.3), we transform this expression:

$\dfrac{dH}{dt} = \sum_i \dfrac{\partial H}{\partial x_i} \dot x_i .$

Consequently, the particle energy depends only on the generalized macrocoordinates. If they do not change over time, then the energy is conserved.
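The conservation of $H$ at fixed macrocoordinates can be checked numerically. The following minimal sketch (not part of the original derivation; the oscillator and all parameter values are illustrative assumptions) integrates Hamilton's equations for a 1D harmonic oscillator with a symplectic leapfrog scheme and verifies that the energy drift stays tiny.

```python
import math

# Illustrative example: 1D harmonic oscillator H = p^2/(2m) + k_s*q^2/2
# with fixed macroparameters. Integrating Hamilton's equations
# q' = dH/dp, p' = -dH/dq with a leapfrog scheme keeps H nearly constant.
m, k_s = 1.0, 1.0
q, p = 1.0, 0.0
dt, steps = 1e-3, 10000

def energy(q, p):
    return p * p / (2 * m) + k_s * q * q / 2

E0 = energy(q, p)
for _ in range(steps):
    p -= 0.5 * dt * k_s * q      # half kick: p' = -dH/dq
    q += dt * p / m              # drift:     q' =  dH/dp
    p -= 0.5 * dt * k_s * q      # half kick
drift = abs(energy(q, p) - E0) / E0
print(drift)  # tiny relative energy drift
```

If the macrocoordinates were varied in time (e.g. a time-dependent spring constant), the same integration would show $H$ changing, in line with $dH/dt = \sum_i (\partial H/\partial x_i)\dot x_i$.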

Statistical method for describing a system . The lack of information about the initial conditions for system (1.3) and about its behavior at the boundary of the body can be overcome by using a statistical approach to the study of this system. Let this mechanical system have $n$ degrees of freedom associated with the microscopic variables. In other words, the position of all its points in ordinary three-dimensional space is characterized by $n$ generalized coordinates $q_k$ ($k = 1, \dots, n$). Let us consider the phase space of the larger number $2n$ of variables $(q_1, \dots, q_n, p_1, \dots, p_n)$. The phase state is characterized by a point with these coordinates in a $2n$-dimensional Euclidean space. In practice, we always study a specific object that is part of some large (compared to the given object) system (the external environment ). This object usually interacts with the external environment. Therefore, in what follows we will speak of a subsystem (which occupies part of the phase space) interacting with the system (which occupies the entire phase space).

When moving in the $2n$-dimensional space, a single trajectory gradually fills this entire phase space. Let us denote by $\Delta\Gamma$ that part of the phase-space volume in which a given subsystem spends “almost all the time.” Here we mean the time during which the subsystem is in a quasi-equilibrium state. Over a sufficiently long period of time, the phase trajectory passes through this section of phase space many times. Let us accept the ergodic hypothesis, according to which, instead of one moving point in phase space, we may consider many points forming a statistical ensemble. Passing to the infinitesimal elementary phase volume

$d\Gamma = dq_1 \cdots dq_n \, dp_1 \cdots dp_n$ ,

let us introduce a continuous distribution function using the ratio

$f = \dfrac{dN}{N \, d\Gamma}$ .

Here $dN$ is the number of points in the phase volume element $d\Gamma$, $N$ is the full number of points in the entire phase space, and the element $d\Gamma$ is measured in units of $h_0^n$, where $h_0$ is a certain normalization coefficient that has the dimension of action. It characterizes the statistical weight of the selected phase space volume element. The distribution function satisfies the normalization condition

$\sum \dfrac{dN}{N} = 1$ or $\displaystyle\int f \, d\Gamma = 1$ .   (1.4)

Let $\Delta t$ be the total time that the system spends within the elementary volume $\Delta\Gamma$, and $t$ the full time of motion of the phase point along its trajectory. In accordance with the ergodic hypothesis, we assume that

$\lim\limits_{t \to \infty} \dfrac{\Delta t}{t} = \dfrac{\Delta N}{N} = f \, \Delta\Gamma$ .   (1.5)
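The equality of time averages and ensemble (phase-measure) averages can be illustrated on the simplest trajectory. In this sketch (an illustrative assumption, not from the original text) we take a 1D oscillator coordinate $q(t) = \cos t$ and compare the fraction of time with $q > 1/2$ against the corresponding angle measure $\arccos(1/2)/\pi = 1/3$.

```python
import math

# Time average along one trajectory vs. phase measure (ergodic idea):
# for q(t) = cos(t), the fraction of time with q > 1/2 should equal
# the angular measure arccos(1/2)/pi = 1/3.
N = 100_000
T = 2 * math.pi * 100            # many full periods
count = sum(1 for i in range(N) if math.cos(T * i / N) > 0.5)
time_fraction = count / N
measure = math.acos(0.5) / math.pi
print(time_fraction, measure)    # both close to 1/3
```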

Reasoning purely formally, we can assume that there is some fictitious gas in the phase space, whose density $\rho$ is equal to the density of the number of points in phase space. The conservation of the number of fictitious gas molecules is expressed by a transport equation in phase space, similar to the law of conservation of mass in ordinary three-dimensional space. This conservation law is called Liouville's theorem:

$\dfrac{\partial \rho}{\partial t} + \sum_k \left[ \dfrac{\partial}{\partial q_k} (\rho \, \dot q_k) + \dfrac{\partial}{\partial p_k} (\rho \, \dot p_k) \right] = 0$ .   (1.6)

By virtue of Hamilton's equations, the condition for the incompressibility of the phase fluid follows:

$\sum_k \left( \dfrac{\partial \dot q_k}{\partial q_k} + \dfrac{\partial \dot p_k}{\partial p_k} \right) = 0 .$   (1.7)

Let us introduce the convective derivative

$\dfrac{d}{dt} = \dfrac{\partial}{\partial t} + \sum_k \left( \dot q_k \dfrac{\partial}{\partial q_k} + \dot p_k \dfrac{\partial}{\partial p_k} \right) .$

Combining (1.6) and (1.7), we obtain the phase fluid transport equation

$\dfrac{\partial \rho}{\partial t} + \sum_k \left( \dot q_k \dfrac{\partial \rho}{\partial q_k} + \dot p_k \dfrac{\partial \rho}{\partial p_k} \right) = 0$ or $\dfrac{d\rho}{dt} = 0$ .   (1.8)

By virtue of the ergodic hypothesis, the density of the number of particles in phase space is proportional to the probability density in the ensemble of states. Therefore, equation (1.8) can be represented as

$\dfrac{df}{dt} = 0$ .   (1.9)
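The incompressibility (1.7) of the phase flow can be made concrete with a small numerical check (an illustrative sketch, assuming the oscillator $H = (p^2 + q^2)/2$ and a symplectic step, neither of which appears in the original text): one integration step is a linear map of $(q, p)$, and its Jacobian determinant equals 1, so phase volume is preserved.

```python
# Phase-volume preservation (Liouville): one leapfrog step for
# H = (p^2 + q^2)/2 is a linear map of (q, p); its Jacobian
# determinant is 1, i.e. the phase flow is incompressible.
dt = 0.1

def step(q, p):
    p -= 0.5 * dt * q
    q += dt * p
    p -= 0.5 * dt * q
    return q, p

# Columns of the Jacobian from the map's action on basis vectors
q1, p1 = step(1.0, 0.0)
q2, p2 = step(0.0, 1.0)
det = q1 * p2 - q2 * p1
print(det)  # 1.0 up to rounding
```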

In a state of equilibrium with constant external parameters, the energy of the microsystem, represented by the Hamiltonian, is conserved along the trajectory in phase space. In the same way, due to (1.9), the probability density is preserved. It follows that the probability density is a function of energy.

$f = f(H)$ .   (1.10)

The dependence of $f$ on $H$ is easy to obtain if one notices that the energies of subsystems add, while the probabilities multiply. This condition is satisfied by the only form of functional dependence

$f = \exp \dfrac{F - H}{kT}$ .   (1.11)

This distribution is called canonical. Here $k$ is the Boltzmann constant; the quantities $F$ and $kT$ have the dimension of energy. The quantities $F$ and $T$ are called the free energy and the temperature.

Let us define the internal energy as the average value of the true energy:

$E = \displaystyle\int H f \, d\Gamma$ .   (1.12)

Substituting (1.11) here, we get

$E = \displaystyle\int H \exp \dfrac{F - H}{kT} \, d\Gamma$ .
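As a numerical illustration of (1.12) (a sketch under assumptions not in the original text: a classical 1D oscillator $H = (q^2 + p^2)/2$ in units with $k = 1$), the canonical average of $H$ reproduces the equipartition value $\langle H \rangle = T$, i.e. $T/2$ per quadratic degree of freedom.

```python
import math

# Internal energy as the canonical average of H for a classical 1D
# oscillator H = (q^2 + p^2)/2 (units k = m = omega = 1):
# equipartition gives <H> = T.
T = 0.7
L, n = 8.0, 400                      # integration box and grid
h = 2 * L / n
num = den = 0.0
for i in range(n):
    q = -L + (i + 0.5) * h
    for j in range(n):
        p = -L + (j + 0.5) * h
        H = 0.5 * (q * q + p * p)
        w = math.exp(-H / T)
        num += H * w
        den += w
avg_H = num / den
print(avg_H)  # close to T = 0.7
```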

Entropy is defined as

$S = -k \displaystyle\int f \ln f \, d\Gamma$ .   (1.13)

Relationship (1.13) introduces a new concept - entropy. The second law of thermodynamics states that as a system approaches the equilibrium state its entropy tends to increase, and in a state of thermodynamic equilibrium the entropy remains constant. Combining (1.12) and (1.13), we obtain

$F = E - TS$ .   (1.14)

Relationship (1.14) is the basis for deriving other thermodynamic functions that describe the equilibrium state of the subsystem.

Let us assume that inside the phase volume $\Delta\Gamma$ of a given subsystem the probability density is almost constant. In other words, this subsystem is weakly connected with the environment and is in a state of equilibrium. For it the relation

$f = A \, \delta(H - E)$   (1.15)

is valid. Here $\delta$ is the delta function.

This distribution is called microcanonical, in contrast to the canonical distribution (1.11). At first glance it seems that the two distributions are very different and even contradict each other. In fact, there is no contradiction between them. Let us introduce the radius in the multidimensional phase space with its very large number of dimensions. In a thin spherical layer, equidistant in energy, the number of points significantly exceeds the number of points inside the sphere. It is for this reason that distributions (1.11) and (1.15) differ little from each other.
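The thin-shell effect behind this argument is easy to quantify (an illustrative computation, not from the original text): in an $n$-dimensional ball, the fraction of the volume lying within a fixed 1% of the radius from the surface is $1 - 0.99^n$, which tends to 1 very quickly as $n$ grows.

```python
# In high dimension almost all of a ball's volume sits in a thin outer
# shell: fraction of a unit n-ball's volume within 1% of the surface.
for n in (3, 30, 300, 3000):
    print(n, 1 - 0.99 ** n)

shell_fraction = 1 - 0.99 ** 3000   # essentially all the volume
low_dim_fraction = 1 - 0.99 ** 3    # only ~3% in three dimensions
```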

In order to satisfy the last relation (1.4), it is necessary that this probability density be equal to

$f = \dfrac{\delta(H - E)}{\displaystyle\int \delta(H - E) \, d\Gamma}$ .   (1.16)

Let us substitute distribution (1.11) into the last relation (1.4),

$\displaystyle\int \exp \dfrac{F - H}{kT} \, d\Gamma = 1$ ,

and differentiate it. Considering that $H$ is a function of the macrocoordinates, we have

$\displaystyle\int \left( dF - \sum_i \dfrac{\partial H}{\partial x_i} \, dx_i - \dfrac{F - H}{T} \, dT \right) \exp \dfrac{F - H}{kT} \, \dfrac{d\Gamma}{kT} = 0$ ,

$dF = \sum_i \left\langle \dfrac{\partial H}{\partial x_i} \right\rangle dx_i - S \, dT$ .

Using (1.14), we transform this expression:

$dE = T \, dS + \sum_i X_i \, dx_i = \delta Q + \delta A$ .   (1.17a)

Here $\delta Q = T \, dS$ is the heat flow, and $\delta A = \sum_i X_i \, dx_i$ with $X_i = \langle \partial H / \partial x_i \rangle$ is the work of external forces. This relationship was first derived by Gibbs, and it bears his name. For a gas it has a particularly simple form:

$dE = T \, dS - p \, dV$ .   (1.17b)

Here $p$ is the pressure and $V$ is the volume.

At the phenomenological level a definition of temperature is also given. Note that the heat flow $\delta Q$ is not the differential of a thermodynamic function, while the entropy is such by definition. For this reason expression (1.17) contains the integrating factor $1/T$, which turns $\delta Q$ into the complete differential $dS$; the quantity $T$ is called the temperature. One can take some working fluid (water or mercury) and introduce a scale of temperature change. Such a body is called a thermometer. Let us write (1.17) in the form

$dS = \dfrac{\delta Q}{T} = \dfrac{1}{T} \left( dE - \sum_i X_i \, dx_i \right)$ .

The temperature in this relation is some intensive quantity.
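The Gibbs relation (1.17b) can be verified numerically for a concrete equation of state. In this sketch (the monatomic ideal gas and the constant $C$ are illustrative assumptions, not from the original text) the energy is taken in the Sackur-Tetrode-type form $E(S, V) \propto V^{-2/3} \exp[2S/(3Nk)]$; finite-difference derivatives then give $T = \partial E/\partial S$ and $p = -\partial E/\partial V$, which should satisfy $pV = NkT$.

```python
import math

# Gibbs relation dE = T dS - p dV checked for a monatomic ideal gas:
# with E(S, V) = C * V**(-2/3) * exp(2S/(3Nk)), the derivatives
# T = dE/dS and p = -dE/dV must reproduce the ideal-gas law p V = N k T.
N, k, C = 1.0e22, 1.380649e-23, 1.0e-10   # C is an arbitrary constant

def E(S, V):
    return C * V ** (-2.0 / 3.0) * math.exp(2.0 * S / (3.0 * N * k))

S0, V0 = 1.0, 1.0e-3
dS, dV = 1e-6, 1e-9
T = (E(S0 + dS, V0) - E(S0 - dS, V0)) / (2 * dS)
p = -(E(S0, V0 + dV) - E(S0, V0 - dV)) / (2 * dV)
ratio = p * V0 / (N * k * T)
print(ratio)  # close to 1: the ideal-gas law is recovered
```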

Generalized forces and displacements are thermodynamically conjugate quantities. Likewise, temperature and entropy are conjugate quantities, of which one is a generalized force and the other is a generalized displacement. From (1.17) it follows

$X_i = \dfrac{\partial E}{\partial x_i}\bigg|_S , \qquad T = \dfrac{\partial E}{\partial S}\bigg|_x$ .   (1.18)

By virtue of (1.14), for the free energy we have a similar differential expression:

$dF = -S \, dT + \sum_i X_i \, dx_i$ .   (1.19)

In this relation temperature and entropy, as conjugate quantities, change places, and expression (1.18) is modified:

$X_i = \dfrac{\partial F}{\partial x_i}\bigg|_T , \qquad S = -\dfrac{\partial F}{\partial T}\bigg|_x$ .   (1.20)

In order to use these relationships, it is necessary to specify independent defining parameters and expressions for the thermodynamic functions.

A more strict definition can be given for temperature. Let us consider, for example, a closed (isolated) system consisting of two bodies in a state of thermodynamic equilibrium. Energy and entropy are additive quantities:

$E = E_1 + E_2 , \qquad S = S_1 + S_2$ .

Note that entropy is a function of energy, so that $S = S_1(E_1) + S_2(E - E_1)$. At equilibrium the entropy has a stationary point with respect to the redistribution of energy between the two subsystems, i.e.

$\dfrac{dS}{dE_1} = \dfrac{dS_1}{dE_1} - \dfrac{dS_2}{dE_2} = 0$ .

From this it directly follows that

$\dfrac{dS_1}{dE_1} = \dfrac{dS_2}{dE_2}$ .   (1.21)

The inverse of the derivative of entropy with respect to energy, $T = (dS/dE)^{-1}$, is called the absolute temperature (or simply temperature ). This also follows directly from (1.17). Relationship (1.21) means something more: in a state of thermodynamic equilibrium the temperatures of the bodies are equal,

$T_1 = T_2$ .   (1.22)
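The stationarity argument above can be reproduced numerically. In this sketch (the monatomic ideal-gas entropies and all parameter values are illustrative assumptions) we maximize $S(E_1) = \tfrac{3}{2} N_1 k \ln E_1 + \tfrac{3}{2} N_2 k \ln(E - E_1)$ over the energy split and check that the maximum occurs where the two temperatures $T_i = 2E_i/(3N_i k)$ coincide.

```python
import math

# Entropy maximum for two subsystems exchanging energy: with
# S_i = (3/2) N_i k ln E_i + const, total S(E1) is maximal where
# dS1/dE1 = dS2/dE2, i.e. where the temperatures T = 2E/(3Nk) are equal.
k = 1.0
N1, N2, E_tot = 2.0, 3.0, 10.0

def S_total(E1):
    return 1.5 * N1 * k * math.log(E1) + 1.5 * N2 * k * math.log(E_tot - E1)

best_E1 = max((E_tot * i / 10000 for i in range(1, 10000)), key=S_total)
T1 = 2 * best_E1 / (3 * N1 * k)
T2 = 2 * (E_tot - best_E1) / (3 * N2 * k)
print(best_E1, T1, T2)  # maximum near E1 = 4, where T1 = T2
```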

STATISTICAL THERMODYNAMICS, a branch of statistical physics devoted to the substantiation of the laws of thermodynamics on the basis of the laws of interaction and motion of the particles that make up a system. For systems in an equilibrium state, statistical thermodynamics allows one to calculate thermodynamic potentials, write down equations of state, and determine the conditions of phase and chemical equilibria. Non-equilibrium statistical thermodynamics provides a justification for the relationships of the thermodynamics of irreversible processes (the equations of transfer of energy, momentum and mass, together with their boundary conditions) and allows one to calculate the kinetic coefficients entering the transfer equations. Statistical thermodynamics establishes a quantitative connection between the micro- and macro-properties of physical and chemical systems. Its calculation methods are used in all directions of modern theoretical chemistry.

Basic concepts. For the statistical description of macroscopic systems, J. Gibbs (1901) proposed to use the concepts of statistical ensemble and phase space, which makes it possible to apply the methods of probability theory to the solution of problems. A statistical ensemble is a collection of a very large number of identical many-particle systems (i.e., "copies" of the system under consideration) located in the same macrostate, which is determined by the parameters of state; the microstates of the systems may differ. The basic statistical ensembles are the microcanonical, canonical, grand canonical and isobaric-isothermal ensembles.

The microcanonical Gibbs ensemble is used when considering isolated systems (not exchanging energy E with the environment) having a constant volume V and number of identical particles N (E, V and N are the parameters of state of the system). The canonical Gibbs ensemble is used to describe systems of constant volume that are in thermal equilibrium with the environment (absolute temperature T) with a constant number of particles N (parameters of state V, T, N). The grand canonical Gibbs ensemble is used to describe open systems that are in thermal equilibrium with the environment (temperature T) and in material equilibrium with a reservoir of particles (particles of all kinds are exchanged through the "walls" surrounding the system of volume V); the parameters of state of such a system are V, T and m, the chemical potential of the particles. The isobaric-isothermal Gibbs ensemble is used to describe systems in thermal and mechanical equilibrium with the environment at constant pressure P (parameters of state T, P, N).

Phase space in statistical mechanics is a multidimensional space whose axes are all the generalized coordinates q i and the momenta p i (i = 1, 2, ..., M) conjugate to them, for a system with M degrees of freedom. For a system consisting of N atoms, q i and p i correspond to the Cartesian coordinates and momentum components (a = x, y, z) of a certain atom j, and M = 3N. The sets of coordinates and momenta are denoted by q and p, respectively. The state of the system is represented by a point in the phase space of dimension 2M, and the change of the state of the system in time by the movement of the point along a line called the phase trajectory. For the statistical description of the state of the system, the concepts of phase volume (an element of the volume of phase space) and of the distribution function f(p, q) are introduced; the latter characterizes the probability density of finding the point representing the state of the system in an element of phase space near the point with coordinates p, q. In quantum mechanics, instead of the phase volume, the concept of the discrete energy spectrum of a system of finite volume is used, because the state of an individual particle is determined not by momentum and coordinates but by a wave function, which in a stationary dynamical state of the system corresponds to the energy spectrum.

The distribution function of a classical system f(p, q) characterizes the probability density of the realization of a given microstate (p, q) in the volume element dГ N of its phase space. The probability of the N particles being in an infinitesimal volume of phase space is equal to

dw = f(p, q) dГ N ,

where dГ N is the element of the phase volume of the system in units of h 3N, h is Planck's constant; the divisor N! takes into account the fact that a rearrangement of identical particles does not change the state of the system. The distribution function satisfies the normalization condition ∫ f(p, q) dГ N = 1, because the system is reliably in some state. For quantum systems, the distribution function determines the probability w i,N of finding the system of N particles in the quantum state specified by the set of quantum numbers i, with energy E i,N , subject to the normalization Σ i w i,N = 1.

The average value at time t (i.e. over an infinitesimal time interval from t to t + dt) of any physical quantity A(p, q), which is a function of the coordinates and momenta of all particles of the system, is calculated with the distribution function according to the rule (valid for nonequilibrium processes as well):

⟨A⟩ = ∫ A(p, q) f(p, q, t) dГ N .

Integration over the coordinates is carried out over the entire volume of the system, and integration over the momenta from −∞ to +∞. The state of thermodynamic equilibrium should be considered as the limit t → ∞. For equilibrium states, the distribution functions are determined without solving the equations of motion of the particles composing the system. The form of these functions (the same for classical and quantum systems) was established by J. Gibbs (1901).

In the microcanonical Gibbs ensemble, all microstates with a given energy E are equally probable, and the distribution function for classical systems has the form

f(p, q) = A δ[H(p, q) − E],

where δ is Dirac's delta function, H(p, q) is the Hamilton function, the sum of the kinetic and potential energies of all particles; the constant A is determined from the normalization condition for f(p, q). For quantum systems, with an accuracy of specifying the state equal to the value ΔE, in accordance with the uncertainty relation between energy and time (and between momentum and coordinate of a particle), the function w(E k) = [g(E, N, V)] −1 if E ≤ E k ≤ E + ΔE, and w(E k) = 0 if E k < E or E k > E + ΔE. The value g(E, N, V) is the so-called statistical weight, equal to the number of quantum states in the energy layer ΔE. An important relationship of statistical thermodynamics is the connection between the entropy of the system and its statistical weight:

S(E, N, V) = k ln g(E, N, V), where k is the Boltzmann constant.

In the canonical Gibbs ensemble, the probability of the system being in a microstate determined by the coordinates and momenta of all N particles, or by the values of E i,N , has the form: f(p, q) = exp{[F − H(p, q)]/kT}; w i,N = exp[(F − E i,N)/kT], where F is the free (Helmholtz) energy, depending on the values of V, T, N:

F = −kT ln Z N ,

where Z N is the statistical sum (in the case of a quantum system) or the statistical integral (in the case of a classical system), determined from the condition of normalization of the functions w i,N or f(p, q):

Z N = Σ i exp(−E i,N /kT);

Z N = ∫ exp[−H(p, q)/kT] dp dq/(N! h 3N)

(the sum over i runs over all quantum states of the system, and the integration is carried out over the entire phase space).
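The chain "statistical sum → free energy → the other thermodynamic functions" can be checked on the smallest possible example. In this sketch (the two-level system with levels 0 and eps is an illustrative assumption, not from the encyclopedia text) the internal energy computed directly as a Boltzmann average matches U = F + TS with F = −kT ln Z and S = −dF/dT.

```python
import math

# Canonical ensemble for a two-level system (E0 = 0, E1 = eps):
# Z gives F = -kT ln Z, and U computed directly as the Boltzmann
# average must agree with U = F + T*S, where S = -dF/dT.
k, eps, T = 1.0, 1.0, 0.5

def Z(T):
    return 1.0 + math.exp(-eps / (k * T))

def F(T):
    return -k * T * math.log(Z(T))

U_direct = eps * math.exp(-eps / (k * T)) / Z(T)
dT = 1e-6
S = -(F(T + dT) - F(T - dT)) / (2 * dT)   # numerical -dF/dT
U_thermo = F(T) + T * S
print(U_direct, U_thermo)  # the two values coincide
```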

In the grand canonical Gibbs ensemble, the distribution function f(p, q, N) and the statistical sum Ξ, determined from the normalization condition, have the form:

f(p, q, N) = exp{[Ω + mN − H(p, q)]/kT},   Ξ = Σ N exp(mN/kT) Z N ,   Ω = −kT ln Ξ,

where Ω is the thermodynamic potential depending on the variables V, T, m (the summation runs over all non-negative integers N). In the isobaric-isothermal Gibbs ensemble, the distribution function and the statistical sum Q, determined from the normalization condition, have the form:

w i,N = exp{[G − E i,N − PV]/kT},   G = −kT ln Q,

where G is the Gibbs energy of the system (isobaric-isothermal potential, free enthalpy).

To calculate thermodynamic functions one may use any of the distributions: they are equivalent to one another and correspond to different physical conditions. The microcanonical Gibbs distribution is applied mainly in theoretical investigations. For the solution of specific problems one considers ensembles in which there is an exchange of energy with the environment (the canonical and isobaric-isothermal ensembles) or an exchange of both energy and particles (the grand canonical ensemble). The latter is especially convenient for studying phase and chemical equilibria. The statistical sums Z N and Q make it possible to determine the Helmholtz energy F, the Gibbs energy G, as well as the thermodynamic properties of the system obtained by differentiation of the statistical sum with respect to the corresponding parameters (per 1 mole of substance): the internal energy U = RT 2 (∂lnZ N /∂T) V , the enthalpy H = RT 2 (∂lnQ/∂T) P , the entropy S = RlnZ N + RT(∂lnZ N /∂T) V = RlnQ + RT(∂lnQ/∂T) P , the heat capacity at constant volume C V = 2RT(∂lnZ N /∂T) V + RT 2 (∂ 2 lnZ N /∂T 2 ) V , the heat capacity at constant pressure C P = 2RT(∂lnQ/∂T) P + RT 2 (∂ 2 lnQ/∂T 2 ) P , etc. Correspondingly, all these quantities acquire a statistical meaning. Thus, the internal energy is identified with the average energy of the system, which allows one to consider the first law of thermodynamics as the law of conservation of energy for the motion of the particles composing the system; the free energy is related to the statistical sum of the system, and the entropy to the number of microstates g in a given macrostate, i.e. to the statistical weight of the macrostate, and hence to its probability. The meaning of entropy as a measure of the probability of a state is preserved for arbitrary (non-equilibrium) states. In a state of equilibrium, the entropy of an isolated system has the maximum possible value under the given external conditions (E, V, N), i.e. the equilibrium state is the most probable state (with maximum statistical weight). Therefore, the transition from a non-equilibrium state to an equilibrium one is a process of transition from less probable states to more probable ones. This is the statistical meaning of the law of increase of entropy, according to which the entropy of a closed system can only increase.
At the temperature of absolute zero, any system is in its ground state, for which w 0 = 1 and S = 0. This statement constitutes the third law of thermodynamics. It is important that an unambiguous definition of entropy requires the quantum description, because in classical statistics the entropy can be defined only up to an arbitrary term.

Ideal systems. The calculation of the statistical sums of most systems is a difficult task. It is considerably simplified if the contribution of the potential energy to the total energy of the system can be neglected. In this case the complete distribution function f(p, q) for the N particles of an ideal system is expressed through the product of the one-particle distribution functions f 1 (p, q):

f(p, q) = Π j f 1 (p j , q j).

The distribution of the particles over microstates depends on their kinetic energy and on the quantum properties of the system dictated by the identity of the particles. All particles are divided into two classes: fermions and bosons. The type of statistics that the particles obey is uniquely related to their spin.

Fermi-Dirac statistics describes the distribution in a system of identical particles with half-integer spin 1/2, 3/2, ... in units ђ = h/2p. A particle (or quasiparticle) obeying this statistics is called a fermion. Fermions include electrons, protons, neutrons, nuclei with odd mass number, atoms with an odd difference between the mass number and the number of electrons, and quasiparticles (for example, holes in semiconductors), etc. This statistics was proposed by E. Fermi in 1926; in the same year P. Dirac discovered its quantum-mechanical meaning. The wave function of a system of fermions is antisymmetric, i.e. it changes sign upon a permutation of the coordinates and spins of any pair of identical particles. Each quantum state can contain no more than one particle (the Pauli principle). The average number n i of fermions in a state with energy E i is determined by the Fermi-Dirac distribution function:

n i = {1 + exp[(E i − m)/kT]} −1 ,

where i is the set of quantum numbers characterizing the state of the particle.

Bose-Einstein statistics describes systems of identical particles with zero or integer spin (0, ђ, 2ђ, ...). A particle or quasiparticle obeying this statistics is called a boson. This statistics was proposed by S. Bose (1924) for photons and developed by A. Einstein (1924) for molecules of an ideal gas, considered as composite particles made up of an even number of fermions, e.g. nuclei with an even total number of protons and neutrons (the deuteron, the 4 He nucleus, etc.). Bosons also include phonons in solids and in liquid 4 He, and excitons in semiconductors and dielectrics. The wave function of the system is symmetric with respect to the permutation of any pair of identical particles. The occupation numbers are not limited in any way, i.e. any number of particles can exist in one state. The average number n i of bosons in a state with energy E i is described by the Bose-Einstein distribution function:

n i = {exp[(E i − m)/kT] − 1} −1 .

Boltzmann statistics is a special case of quantum statistics, applicable when quantum effects can be neglected (high temperatures). It considers the distribution of particles over momenta and coordinates in the phase space of one particle, and not in the phase space of all particles, as in the Gibbs distributions. As the minimum unit of volume of the phase space, which has six dimensions (three coordinates and three projections of the particle momentum), in accordance with the quantum-mechanical uncertainty relation, one cannot choose a volume smaller than h 3 . The average number n i of particles in a state with energy E i is described by the Boltzmann distribution function:

n i = exp[(m − E i)/kT].
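The relationship between the three occupation-number formulas can be shown directly (a small illustrative computation, with x = (E − m)/kT as the dimensionless variable): in the non-degenerate limit x ≫ 1 the Fermi-Dirac and Bose-Einstein distributions both collapse onto the Boltzmann one, while at small x they differ strongly.

```python
import math

# Occupation numbers under the three statistics, as functions of
# x = (E - mu)/kT: FD and BE both approach the Boltzmann value e^{-x}
# when x >> 1 (the classical, non-degenerate limit).
def n_fd(x):
    return 1.0 / (math.exp(x) + 1.0)

def n_be(x):
    return 1.0 / (math.exp(x) - 1.0)

def n_b(x):
    return math.exp(-x)

x = 8.0
rel_gap = max(abs(n_fd(x) - n_b(x)), abs(n_be(x) - n_b(x))) / n_b(x)
print(n_fd(x), n_be(x), n_b(x), rel_gap)  # relative gap ~ e^{-x}
```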

For particles that move according to the laws of classical mechanics in an external potential field U(r), the statistically equilibrium distribution function f 1 (p, r) over the momenta p and coordinates r of the particles has the form: f 1 (p, r) = A exp{−[p 2 /2m + U(r)]/kT}. Here p 2 /2m is the kinetic energy of a particle of mass m, and the constant A is determined from the normalization condition. This expression is often called the Maxwell-Boltzmann distribution, while the Boltzmann distribution is the function

n(r) = n 0 exp[−U(r)/kT],

where n(r) = ∫ f 1 (p, r) dp is the density of the number of particles at the point r (n 0 is the density of the number of particles in the absence of the external field). The Boltzmann distribution describes the distribution of gas molecules in the gravitational field (the barometric formula), of molecules and highly dispersed particles in a field of centrifugal forces, and of electrons in non-degenerate semiconductors, and is also used to calculate the distribution of ions in dilute electrolyte solutions (in the bulk and at the boundary with an electrode), etc. At U(r) = 0, the Maxwell distribution follows from the Maxwell-Boltzmann distribution; it describes the distribution of velocities of particles in a state of statistical equilibrium (J. Maxwell, 1859). According to this distribution, the probable number of particles per unit volume whose velocity components lie in the intervals from u i to u i + du i (i = x, y, z) is determined by the function:

n(u x , u y , u z) du x du y du z = n (m/2p kT) 3/2 exp[−m(u x 2 + u y 2 + u z 2 )/2kT] du x du y du z .

The Maxwell distribution does not depend on the interaction between particles and is true not only for gases but also for liquids (if a classical description is possible for them), as well as for Brownian particles suspended in liquids and gases. It is used to count the number of collisions of molecules with each other in the course of chemical reactions and with a surface.
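The Maxwell speed distribution can be checked numerically (an illustrative sketch in reduced units m = k = T = 1, which are assumptions for the demo): integrating f(v) = 4πn(m/2πkT) 3/2 v 2 exp(−mv 2 /2kT) over all speeds gives 1, and the mean kinetic energy it predicts is (3/2)kT.

```python
import math

# Maxwell speed distribution in reduced units (m = k = T = 1):
# check normalization and the mean kinetic energy (3/2) k T.
m, k, T = 1.0, 1.0, 1.0
c = (m / (2 * math.pi * k * T)) ** 1.5

def f(v):
    return 4 * math.pi * c * v * v * math.exp(-m * v * v / (2 * k * T))

h, vmax = 1e-3, 12.0
n = int(vmax / h)
norm = sum(f((i + 0.5) * h) for i in range(n)) * h
mean_ke = sum(0.5 * m * ((i + 0.5) * h) ** 2 * f((i + 0.5) * h)
              for i in range(n)) * h
print(norm, mean_ke)  # ~1 and ~1.5
```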

Sum over states. The statistical sum in the canonical Gibbs ensemble is expressed through the sum over states of one molecule Q 1 :

Z N = Q 1 N /N!,   Q 1 = Σ i g i exp(−E i /kT),

where E i is the energy of the i-th quantum level of the molecule (i = 0 corresponds to the zero level) and g i is the statistical weight of the i-th level. In the general case the individual kinds of motion of the electrons, of the atoms and of groups of atoms in the molecule, as well as the motion of the molecule as a whole, are interconnected, but approximately they can be considered as independent. Then the sum over states can be represented in the form of a product of individual components associated with the translational motion (Q tr) and with the intramolecular motions (Q int):

Q 1 = Q tr · Q int ,   Q tr = l(V/N),

where l = (2p mkT/h 2 ) 3/2 . For atoms, Q int is the sum over the electronic and nuclear states; for molecules, the sum over the electronic, nuclear, vibrational and rotational states. In the temperature range from 10 to 10 3 K an approximate description is usually used, in which each of the indicated kinds of motion is considered independently: Q int = Q el · Q nucl · Q rot · Q vib /s, where s is the symmetry number, equal to the number of identical configurations arising upon rotation of a molecule consisting of identical atoms or groups of atoms.

The sum over the states of the electronic motion Q el is equal to the statistical weight P t of the ground electronic state. In many cases the ground level is non-degenerate and separated from the nearest excited level by a considerable energy, so that P t = 1. However, in some cases, e.g. for O 2 , P t = 3: in the ground state the angular momentum is non-zero and the level is degenerate, and excited states may have quite low energies. The sum over the nuclear states Q nucl , due to the degeneracy with respect to the nuclear spins, is equal to

Q nucl = Π i (2s i + 1),

where s i is the spin of nucleus i and the product is taken over all nuclei of the molecule. The sum over the states of the vibrational motion is Q vib = Π i [1 − exp(−hv i /kT)] −1 , where v i are the frequencies of the small vibrations and the product runs over the normal modes of the n-atomic molecule. The sum over the states of the rotational motion of a polyatomic molecule with large moments of inertia can be considered classically [the high-temperature approximation, T/q i >> 1, where q i = h 2 /8p 2 kI i (i = x, y, z) and I i is the principal moment of inertia of rotation about the axis i]: Q rot = (p T 3 /q x q y q z ) 1/2 . For linear molecules with moment of inertia I, the statistical sum is Q rot = T/q, where q = h 2 /8p 2 kI.

In calculations at temperatures above 10³ K it is necessary to take into account the anharmonicity of the vibrations, the interaction of the vibrational and rotational degrees of freedom, excited electronic states, the populations of excited levels, etc. At low temperatures (below 10 K) quantum effects must be taken into account (especially for diatomic molecules). Thus, the rotational motion of a heteronuclear molecule AB is described by the formula:

where l is the number of the rotational state; for homonuclear molecules A 2 (especially for H 2, D 2, T 2) the nuclear and rotational degrees of freedom interact with each other: Q nucl.rot ≠ Q nucl · Q rot.

Knowing the sum over states allows one to calculate the thermodynamic properties of gases, including chemical equilibrium constants, the equilibrium degree of ionization, etc. Important in the theory of absolute reaction rates is the possibility of calculating the constant for the formation of the activated complex (transition state), which is represented as a modified particle in which one of the vibrational degrees of freedom is replaced by a degree of freedom of translational motion.

Non-ideal systems. In gases the molecules interact with one another. In this case the sum over states of the ensemble does not reduce to the product of the sums over states of the individual molecules. If one assumes that the intermolecular interactions do not affect the internal states of the molecules, the statistical sum of the system in the classical approximation for a gas consisting of N identical particles takes the form:

Where

Here Q N is the configuration integral, which takes into account the interaction of the particles. Most often the potential energy U is considered as a sum of pair potentials: U = ΣU(r ij), where U(r ij) is the potential of central forces depending on the distance r ij between particles i and j. Many-particle contributions to the potential energy, orientational effects, etc., are also taken into account. The need to calculate the configuration integral arises when considering any condensed phases and phase boundaries. An exact solution of the many-body problem is practically impossible, so to calculate the statistical sum, and all the thermodynamic properties obtained from the statistical sum by differentiation with respect to the corresponding parameters, various approximate methods are used.

In the so-called method of group expansions, the state of the system is treated as a set of complexes (groups) consisting of different numbers of particles, and the configuration integral decomposes into a set of group integrals. This approach allows one to represent any thermodynamic function as a series in powers of the density. The most important relationship of this kind is the virial equation of state.
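The lowest-order coefficient of that density series, the second virial coefficient, can be obtained from a pair potential by a one-dimensional quadrature, B 2 = −2π∫(e^(−U/kT) − 1) r² dr. The sketch below uses the Lennard-Jones potential in reduced units as an assumption (the source does not name a specific potential):

```python
import math

def second_virial_LJ(T_red, r_max=10.0, n=20000):
    """Second virial coefficient in reduced Lennard-Jones units
    (length in sigma, temperature in epsilon/k):
    B2 = -2*pi * Integral[(exp(-U(r)/T) - 1) * r^2, dr],
    with U(r) = 4*(r**-12 - r**-6).  Trapezoidal quadrature."""
    dr = r_max / n
    s = 0.0
    for i in range(1, n + 1):
        r = i * dr
        U = 4.0 * (r**-12 - r**-6)
        w = 0.5 if i == n else 1.0
        s += w * (math.exp(-U / T_red) - 1.0) * r * r
    return -2.0 * math.pi * s * dr

# B2 changes sign at the Boyle temperature (T* ~ 3.4 for the LJ fluid)
print(second_virial_LJ(1.0))   # negative: attraction dominates at low T
print(second_virial_LJ(10.0))  # positive: repulsion dominates at high T
```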

For a theoretical description of the properties of dense gases, liquids, solutions of non-electrolytes, and of interfaces in these systems, the method of n-particle distribution functions is more convenient than direct calculation of the statistical sum. In this method, instead of counting the statistical weight of each state with a fixed energy, one uses the relationships between the distribution functions f n, which characterize the probability of finding n particles simultaneously at points in space with coordinates r 1,..., r n; for n = N the function coincides with the full momentum-averaged one, f N = ∫f(p, r)dp (here and below q i = r i). The single-particle function f 1 (r 1) (n = 1) characterizes the density distribution of the substance. For a crystal it is a periodic function with maxima at the sites of the crystal structure; for gases or liquids in the absence of an external field it is a constant equal to the macroscopic density of the substance. The two-particle distribution function (n = 2) characterizes the probability of finding two particles at points 1 and 2; it determines the so-called correlation function g(|r 1 − r 2|) = f 2 (r 1, r 2)/ρ², which characterizes the mutual correlation in the distribution of the particles. The corresponding experimental information on g is provided by X-ray and neutron scattering.

Distribution functions of dimensions n and n + 1 are connected by an infinite chain of coupled integro-differential Bogolyubov-Born-Green-Kirkwood-Yvon (BBGKY) equations, whose solution is extremely difficult; the effects of correlation between particles are therefore taken into account by introducing various approximations that determine how the function f n+1 is expressed through functions of lower dimension. Correspondingly, several approximate methods have been developed for calculating the functions f n, and through them all the thermodynamic characteristics of the system under consideration. The Percus-Yevick and hypernetted-chain approximations are used most widely.

Lattice models of the condensed state have found wide application in the thermodynamic treatment of almost all physicochemical problems. The entire volume of the system is divided into local regions with a characteristic size on the order of the molecular size u 0. In general, in different models the size of the local region may be either greater or less than u 0; in most cases they coincide. The transition to a discrete distribution of molecules in space significantly simplifies the enumeration of configurations. Lattice models take into account the interaction of molecules with one another; the interaction energy is described by energy parameters. In a number of cases lattice models admit exact solutions, which makes it possible to assess the nature of the approximations used. With their help it is possible to treat many-particle and specific interactions, orientational effects, etc. Lattice models are fundamental in the study of, and in applied calculations for, solutions and highly inhomogeneous systems.

Numerical methods for determining thermodynamic properties are becoming ever more important as computing technology develops. In the Monte Carlo method, multidimensional integrals are calculated directly, which allows one to obtain the statistical average of an observable A(r 1,..., r N) over any of the statistical ensembles (for example, A may be the energy of the system). Thus, in the canonical ensemble the thermodynamic average has the form:

This method is applicable to practically all systems; the average values obtained with it for limited volumes (N = 10²-10⁵) serve as a good approximation for describing macroscopic objects and may be regarded as essentially exact results.

In the method of molecular dynamics, the evolution of the state of the system is followed by numerical integration of Newton's equations of motion for each particle (N = 10²-10⁵) with given interparticle interaction potentials. The equilibrium characteristics of the system are obtained by averaging over phase trajectories (over velocities and coordinates) over long times, after the Maxwellian distribution of the particles over velocities has been established (the so-called thermalization period).
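The core of a molecular dynamics code is the integrator of Newton's equations. A minimal velocity-Verlet sketch, tested on a harmonic oscillator (an illustrative force law, not from the source):

```python
def velocity_verlet(x, v, force, dt, steps, m=1.0):
    """Velocity-Verlet integration of m*x'' = force(x); returns the trajectory."""
    a = force(x) / m
    traj = []
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt   # position update
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
        traj.append((x, v))
    return traj

# Harmonic oscillator F = -x: total energy must stay essentially constant
traj = velocity_verlet(1.0, 0.0, lambda x: -x, dt=0.01, steps=10000)
energies = [0.5 * v * v + 0.5 * x * x for x, v in traj]
drift = max(energies) - min(energies)
print(drift)   # tiny (well below 1e-4): the symplectic scheme conserves energy
```

Good long-time energy conservation is the reason velocity Verlet, rather than a generic ODE solver, is standard in molecular dynamics.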

Limitations on the use of numerical methods are determined mainly by computer capabilities. Special computational techniques make it possible to circumvent the difficulties associated with the fact that it is not a real system that is considered but a small volume; this is especially important when long-range interaction potentials, phase transitions, etc., are taken into account.

Physical kinetics is the branch of statistical physics that provides a justification for the relations describing the transfer of energy, momentum and mass, and the influence of external fields on these processes. The kinetic coefficients are macroscopic characteristics of a continuous medium that determine the dependence of the fluxes of physical quantities (heat, momentum, mass of components, etc.) on the gradients of temperature, concentration, hydrodynamic velocity, etc., that cause these fluxes. One must distinguish the Onsager coefficients, which enter the equations connecting the fluxes with the thermodynamic forces (the thermodynamic equations of motion), from the transfer coefficients (diffusivity, viscosity, thermal conductivity, etc.), which enter the transfer equations. The former can be expressed through the latter using relations between the macroscopic characteristics of the system, so in what follows only the transfer coefficients will be considered.

To calculate the macroscopic transfer coefficients, it is necessary to average over the probabilities of realization of elementary transfer events using a nonequilibrium distribution function. The main difficulty is that the analytic form of the distribution function f(p, q, t) (t is time) is unknown (in contrast to the equilibrium state of the system, which is described by the Gibbs distribution functions obtained in the limit t → ∞). One considers the n-particle distribution functions f n (r, q, t), which are obtained from the function f(p, q, t) by averaging over the coordinates and momenta of the remaining (N − n) particles:

For these functions a system of equations can be compiled that allows one to describe arbitrary nonequilibrium states. Solving this system of equations is very difficult. As a rule, in the kinetic theory of gases and of gaseous quasiparticles (fermions and bosons), only the equation for the single-particle distribution function f 1 is used. Under the assumption that there is no correlation between the states of any particles (the hypothesis of molecular chaos), the so-called kinetic Boltzmann equation was derived (L. Boltzmann, 1872). This equation takes into account the change in the distribution of particles under the action of external forces F(r, t) and of pair collisions between particles:

where f 1 (u, r, t) and f 1 (u 1, r, t) are the distribution functions of the particles before the collision, and f′ 1 (u′, r, t) and f′ 1 (u′ 1, r, t) are the distribution functions after the collision; u and u 1 are the velocities of the particles before the collision, u′ and u′ 1 are the velocities of the same particles after the collision, u = |u − u 1| is the modulus of the relative velocity of the colliding particles, θ is the angle between the relative velocity u − u 1 of the colliding particles and the line connecting their centers, and σ(u, θ)dΩ is the differential cross section for scattering of the particles into the solid angle dΩ in the laboratory coordinate system, which depends on the law of particle interaction. Within classical mechanics the differential cross section is expressed in terms of the collision parameters b and ε (the impact parameter and the azimuthal angle of the line of centers): σ dΩ = b db dε, and the particles are treated as centers of force with a potential depending on distance. In quantum mechanics the effective cross section is obtained from scattering theory, taking into account the influence of quantum effects on the probability of a collision.

If the system is in statistical equilibrium, the collision integral St f is equal to zero, and the solution of the kinetic Boltzmann equation is the Maxwell distribution. For nonequilibrium states, solutions of the kinetic Boltzmann equation are usually sought as an expansion of the function f 1 (u, r, t) in small parameters about the Maxwell distribution function. In the simplest (relaxation-time) approximation the collision integral is approximated as St f ≈ (f 1⁰ − f 1)/τ, where τ is the relaxation time and f 1⁰ is the equilibrium distribution function. For liquids the usual single-particle distribution function f 1 does not capture the specifics of the phenomena, and consideration of the two-particle distribution function f 2 is required. However, for sufficiently slow processes, and in cases where the scale of the spatial inhomogeneities is significantly larger than the correlation length between particles, one can use a locally equilibrium single-particle distribution function with the temperature, chemical potentials and hydrodynamic velocity that correspond to the small volume under consideration. This makes it possible to calculate the fluxes of momentum, energy and matter and to justify the Navier-Stokes equation, the heat-conduction equation and the diffusion equation. In this case the transfer coefficients turn out to be proportional to the space-time correlation functions of the fluxes of energy, momentum and matter of each component.

To describe matter in condensed phases and at interfaces, the lattice model of the condensed phase is widely used. The evolution of the state of the system is described by the fundamental kinetic master equation for the distribution function P(q, t):

where P(q, t) = ∫f(p, q, t)dp is the distribution function averaged over the momenta (velocities) of all N particles, which describes the distribution of the particles over the sites of the lattice structure (their number is N y, with N < N y), and q is the site number or its coordinate. In the "lattice gas" model a particle may be located at a site (the site is occupied) or absent (the site is free); W(q → q′) is the probability per unit time of the system's transition from state q, described by the complete set of particle coordinates, to another state q′. The first sum describes the contribution of all processes leading into the given state q, the second the departure from this state. In the case of an equilibrium particle distribution (t → ∞), P(q) = exp[−H(q)/kT]/Q, where Q is the statistical sum and H(q) is the energy of the system in state q. The transition probabilities satisfy the detailed-balance principle: W(q′ → q)exp[−H(q′)/kT] = W(q → q′)exp[−H(q)/kT]. On the basis of the equations for the functions P(q, t), kinetic equations are constructed for the n-particle distribution functions, which are obtained by averaging over the positions of all the other (N − n) particles. These equations are used to describe particle transfer near a boundary, crystal growth, phase transformations, etc. For interphase transfer, owing to the differences in the characteristic times of the elementary particle-migration processes, the type of boundary conditions at the phase boundaries plays an important role.
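A numerical illustration of the master equation and detailed balance in the simplest case of two states: when the rates satisfy detailed balance, integrating the equation relaxes P(q, t) to the Gibbs weights. All rates and energies below are illustrative, not from the source:

```python
import math

def relax_two_state(E0, E1, kT, w01, t_max=50.0, dt=0.001):
    """Integrate the master equation for a two-state system,
    dP0/dt = W(1->0)*P1 - W(0->1)*P0, with rates satisfying detailed
    balance: W(0->1)/W(1->0) = exp(-(E1 - E0)/kT).  Explicit Euler."""
    w10 = w01 * math.exp((E1 - E0) / kT)   # W(1->0) fixed by detailed balance
    p0 = 1.0                               # all probability starts in state 0
    for _ in range(int(t_max / dt)):
        p1 = 1.0 - p0
        p0 += (w10 * p1 - w01 * p0) * dt
    return p0

# Stationary solution must reproduce the Gibbs weight exp(-E0/kT)/Q
E0, E1, kT = 0.0, 1.0, 1.0
p0_gibbs = math.exp(-E0 / kT) / (math.exp(-E0 / kT) + math.exp(-E1 / kT))
p0_num = relax_two_state(E0, E1, kT, w01=1.0)
print(p0_num, p0_gibbs)   # both ~0.731
```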

For small systems (number of sites N y = 10²-10⁵) the system of equations for the function P(q, t) may be solved numerically by the Monte Carlo method. Following the relaxation of the system to the equilibrium state makes it possible to consider various transient processes in studying the kinetics of phase transformations, crystal growth, the kinetics of surface reactions, etc., and to determine their dynamic characteristics, including the transfer coefficients.

To calculate the transfer coefficients in the gaseous, liquid and solid phases, as well as at phase boundaries, various variants of the molecular dynamics method are actively used; they allow one to follow the evolution of a system in detail from times of ~10⁻¹⁵ s to ~10⁻¹⁰ s (at times of the order of 10⁻¹⁰-10⁻⁹ s and longer the so-called Langevin equation is used, i.e. Newton's equation of motion containing a stochastic term on the right-hand side).

For systems with chemical reactions, the character of the particle distribution is strongly influenced by the relationship between the characteristic transfer times and the times of chemical transformation. If the rate of chemical transformation is small, the particle distribution does not differ much from the case when there is no reaction. If the rate of reaction is high, its influence on the character of the particle distribution is large, and one cannot use mean particle densities (i.e. distribution functions with n = 1); the distribution must be described in more detail, using the distribution functions f n with n > 1. The boundary conditions are important in describing the fluxes of reacting particles to a surface and the reaction rates.

Lit.: Kubo R., Statistical Mechanics, transl. from English, Moscow, 1967; Zubarev D.N., Nonequilibrium Statistical Thermodynamics, Moscow, 1971; Ishihara A., Statistical Physics, transl. from English, Moscow, 1973; Landau L.D., Lifshits E.M.

Thermodynamics and statistical physics

Guidelines and control tasks for distance learning students

Shelkunova Z.V., Saneev E.L.

Methodological instructions and test assignments for distance-learning students of engineering and technological specialties. Contains the syllabus sections "Statistical Physics" and "Thermodynamics", examples of solving typical problems, and variants of test tasks.

Key words: Internal energy, heat, work; isoprocesses, entropy: distribution functions: Maxwell, Boltzmann, Bose – Einstein; Fermi – Dirac; Fermi energy, heat capacity, characteristic temperature of Einstein and Debye.

Editor T.Yu.Artyunina


RIO VSTU, Ulan-Ude, Klyuchevskaya, 40a

Printed on the rotaprint of VSTU, Ulan-Ude,

Klyuchevskaya, 42.

Federal Agency for Education

East Siberian State

university of technology

PHYSICS No. 4

(Thermodynamics and statistical physics)

Guidelines and control tasks

for distance learning students

Compiled by: Shelkunova Z.V.

Saneev E.L.

Publishing House VSTU

Ulan-Ude, 2009

Statistical physics and thermodynamics

Topic 1

Dynamic and statistical laws in physics. Thermodynamic and statistical methods. Elements of molecular kinetic theory. Macroscopic states. Physical quantities and states of physical systems. Macroscopic parameters as average values. Thermal equilibrium. The ideal gas model. Equation of state of an ideal gas. The concept of temperature.

Topic 2

Transport phenomena. Diffusion. Thermal conductivity. Diffusion coefficient. Thermal conductivity coefficient. Thermal diffusivity. Diffusion in gases, liquids and solids. Viscosity. Viscosity coefficients of gases and liquids.

Topic 3

Elements of thermodynamics. The first law of thermodynamics. Internal energy. Intensive and extensive parameters.

Topic 4

Reversible and irreversible processes. Entropy. Second law of thermodynamics. Thermodynamic potentials and equilibrium conditions. Chemical potential. Conditions of chemical equilibrium. Carnot cycle.

Topic 5

Distribution functions. Microscopic parameters. Probability and fluctuations. Maxwell distribution. Average kinetic energy of a particle. Boltzmann distribution. Heat capacity of polyatomic gases. Limitations of the classical theory of heat capacity.

Topic 6

Gibbs distribution. Model of the system in the thermostat. Canonical Gibbs distribution. Statistical meaning of thermodynamic potentials and temperature. The role of free energy.

Topic 7

Gibbs distribution for a system with a variable number of particles. Entropy and probability. Determination of the entropy of an equilibrium system through the statistical weight of a microstate.

Topic 8

Bose and Fermi distribution functions. Planck's formula for equilibrium thermal radiation. Order and disorder in nature. Entropy as a quantitative measure of chaos. The principle of increasing entropy. The transition from order to disorder as the approach to the state of thermal equilibrium.

Topic 9

Experimental methods for studying the vibrational spectrum of crystals. The concept of phonons. Dispersion laws for acoustic and optical phonons. Heat capacity of crystals at low and high temperatures. Electronic heat capacity and thermal conductivity.

Topic 10

Electrons in crystals. Approximation of strong and weak coupling. Free electron model. Fermi level. Elements of band theory of crystals. Bloch function. Band structure of the electron energy spectrum.

Topic 11

Fermi surface. The number and density of electronic states in a band. Filling of bands: metals, dielectrics and semiconductors. Electrical conductivity of semiconductors. The concept of hole conductivity. Intrinsic and impurity semiconductors. The concept of the p-n junction. The transistor.

Topic 12

Electrical conductivity of metals. Current carriers in metals. Insufficiency of the classical electronic theory. The electron Fermi gas in a metal. Current carriers as quasiparticles. The phenomenon of superconductivity. Cooper pairing of electrons. Tunnel contact. The Josephson effect and its applications. Trapping and quantization of magnetic flux. The concept of high-temperature superconductivity.

STATISTICAL PHYSICS. THERMODYNAMICS

Basic formulas

1. Amount of substance of a homogeneous gas (in moles):

ν = N/N A  or  ν = m/μ,

where N is the number of gas molecules; N A is Avogadro's number; m is the mass of the gas; μ is the molar mass of the gas.

If the system is a mixture of several gases, then the amount of substance in the system is

ν = ν 1 + ν 2 + ... + ν n ,

ν i = N i /N A = m i /μ i ,

where ν i , N i , m i , μ i are, respectively, the amount of substance, the number of molecules, the mass and the molar mass of the i-th component of the mixture.

2. Clapeyron-Mendeleev equation (equation of state of an ideal gas):

pV = (m/μ)RT = νRT,

where m is the mass of the gas; μ is its molar mass; R is the universal gas constant; ν = m/μ is the amount of substance; T is the thermodynamic temperature.
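A one-line check of the Clapeyron-Mendeleev equation in Python (the oxygen example values are illustrative):

```python
R = 8.314  # universal gas constant, J/(mol*K)

def ideal_gas_pressure(m, mu, T, V):
    """p = (m/mu)*R*T/V  (Clapeyron-Mendeleev equation), SI units."""
    return (m / mu) * R * T / V

# Illustrative example: 32 g of oxygen (mu = 0.032 kg/mol), 300 K, 10 litres
p = ideal_gas_pressure(0.032, 0.032, 300.0, 0.010)
print(p)   # ~2.49e5 Pa, i.e. about 2.5 atmospheres
```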

3. Experimental gas laws, which are special cases of the Clapeyron-Mendeleev equation for isoprocesses:

    Boyle-Mariotte law (isothermal process: T = const, m = const):

pV = const,

or for two states of the gas:

p 1 V 1 = p 2 V 2 ,

where p 1 and V 1 are the pressure and volume of the gas in the initial state; p 2 and V 2 are the same quantities in the final state;

    Gay-Lussac's law (isobaric process: p = const, m = const):

V/T = const,

or for two states:

V 1 /T 1 = V 2 /T 2 ,

where V 1 and T 1 are the volume and temperature of the gas in the initial state; V 2 and T 2 are the same quantities in the final state;

    Charles' law (isochoric process: V = const, m = const):

p/T = const,

or for two states:

p 1 /T 1 = p 2 /T 2 ,

where p 1 and T 1 are the pressure and temperature of the gas in the initial state; p 2 and T 2 are the same quantities in the final state;

    combined gas law (m = const):

pV/T = const,  or for two states:  p 1 V 1 /T 1 = p 2 V 2 /T 2 ,

where p 1 , V 1 , T 1 are the pressure, volume and temperature of the gas in the initial state; p 2 , V 2 , T 2 are the same quantities in the final state.

4. Dalton's law for the pressure of a gas mixture:

p = p 1 + p 2 + ... + p n ,

where p i are the partial pressures of the components of the mixture; n is the number of components in the mixture.

5. Molar mass of a mixture of gases:

μ = (m 1 + m 2 + ... + m n )/(ν 1 + ν 2 + ... + ν n ),

where m i is the mass of the i-th component of the mixture; ν i = m i /μ i is the amount of substance of the i-th component; n is the number of components in the mixture.

6. Mass fraction ω i of the i-th component of the gas mixture (in fractions of unity or in percent):

ω i = m i /m,

where m is the mass of the mixture.

7. Concentration of molecules (number of molecules per unit volume):

n = N/V = ρN A /μ,

where N is the number of molecules in the system; V is its volume; ρ is the density of the substance. The formula is valid not only for gases but for any state of aggregation of a substance.

8. Basic equation of the kinetic theory of gases:

p = (2/3) n ⟨ε⟩,

where ⟨ε⟩ is the average kinetic energy of the translational motion of a molecule.

9. Average kinetic energy of the translational motion of a molecule:

⟨ε⟩ = (3/2) kT,

where k is the Boltzmann constant.

10. Average total kinetic energy of a molecule:

⟨ε⟩ = (i/2) kT,

where i is the number of degrees of freedom of the molecule.

11. Dependence of gas pressure on the concentration of molecules and temperature:

p = nkT.

root-mean-square: v rms = √(3RT/μ) = √(3kT/m);

arithmetic mean: ⟨v⟩ = √(8RT/πμ) = √(8kT/πm);

most probable: v p = √(2RT/μ) = √(2kT/m),

where m is the mass of one molecule.
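The three characteristic speeds can be computed directly; the nitrogen values below are illustrative:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def speeds(T, mu):
    """Root-mean-square, arithmetic-mean and most probable molecular speeds
    for a gas of molar mass mu (kg/mol) at temperature T (K)."""
    v_rms  = math.sqrt(3.0 * R * T / mu)
    v_mean = math.sqrt(8.0 * R * T / (math.pi * mu))
    v_prob = math.sqrt(2.0 * R * T / mu)
    return v_rms, v_mean, v_prob

# Illustrative example: nitrogen (mu = 0.028 kg/mol) at 300 K
v_rms, v_mean, v_prob = speeds(300.0, 0.028)
print(v_rms, v_mean, v_prob)   # ~517, ~476, ~422 m/s
```

Note the fixed ordering v p < ⟨v⟩ < v rms, which follows from the shape of the Maxwell distribution.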


10. Basic postulates of statistical thermodynamics

When describing systems consisting of a large number of particles, two approaches can be used: microscopic and macroscopic. In the first approach, based on classical or quantum mechanics, the microstate of the system is characterized in detail, for example, by the coordinates and momenta of each particle at each moment in time. A microscopic description requires solving the classical or quantum equations of motion for a huge number of variables. Thus, each microstate of an ideal gas in classical mechanics is described by 6N variables (N is the number of particles): 3N coordinates and 3N momentum projections.

The macroscopic approach, which is used by classical thermodynamics, characterizes only the macrostates of the system and uses a small number of variables for this, for example, three: temperature, volume and number of particles. If a system is in an equilibrium state, then its macroscopic parameters are constant, while its microscopic parameters change with time. This means that for each macrostate there are several (in fact, infinitely many) microstates.

Statistical thermodynamics establishes a connection between these two approaches. The basic idea is this: if each macrostate has many microstates associated with it, then each of them contributes to the macrostate. Then the properties of the macrostate can be calculated as the average over all microstates, i.e. summing up their contributions taking into account statistical weights.

Averaging over microstates is carried out using the concept of a statistical ensemble. An ensemble is an infinite set of identical systems located in all possible microstates corresponding to one macrostate. Each system of the ensemble represents one microstate. The entire ensemble is described by a distribution function over coordinates and momenta, ρ(p, q, t), which is defined as follows:

ρ(p, q, t) dp dq is the probability that a system of the ensemble is located in the element of phase volume dp dq near the point (p, q) at time t.

The meaning of the distribution function is that it determines the statistical weight of each microstate in the macrostate.

From the definition follow the elementary properties of the distribution function:

1. Normalization:

∫ ρ(p, q, t) dp dq = 1. (10.1)

2. Non-negativity (positive definiteness):

ρ(p, q, t) ≥ 0. (10.2)

Many macroscopic properties of the system can be defined as the ensemble average of a function of the coordinates and momenta f(p, q):

⟨f⟩ = ∫ f(p, q) ρ(p, q, t) dp dq. (10.3)

For example, the internal energy is the average value of the Hamilton function H(p, q):

U = ⟨H(p, q)⟩ = ∫ H(p, q) ρ(p, q, t) dp dq. (10.4)

The existence of the distribution function is the essence of the basic postulate of classical statistical mechanics:

The macroscopic state of the system is completely specified by some distribution function that satisfies conditions (10.1) and (10.2).
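The averaging prescription can be illustrated numerically: for a classical 1D harmonic oscillator the canonical distribution is Gaussian in both x and p, and the ensemble average of the Hamilton function must equal kT (equipartition). A minimal sketch; the choice of units with m = ω = 1 is an assumption made for illustration:

```python
import math, random

def canonical_average_energy(kT, n_samples=200000, seed=7, m=1.0, omega=1.0):
    """Ensemble average of H(p, x) = p^2/(2m) + m*omega^2*x^2/2 over the
    canonical distribution rho ~ exp(-H/kT): x and p are then independent
    Gaussians, so the phase-space integral is done by direct sampling."""
    rng = random.Random(seed)
    sigma_p = math.sqrt(m * kT)                 # width of the momentum Gaussian
    sigma_x = math.sqrt(kT / (m * omega**2))    # width of the coordinate Gaussian
    total = 0.0
    for _ in range(n_samples):
        p = rng.gauss(0.0, sigma_p)
        x = rng.gauss(0.0, sigma_x)
        total += p * p / (2.0 * m) + 0.5 * m * omega**2 * x * x
    return total / n_samples

# Classical equipartition: <H> = kT/2 + kT/2 = kT
avg_E = canonical_average_energy(1.0)
print(avg_E)   # ~1.0
```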

For equilibrium systems and equilibrium ensembles the distribution function does not depend explicitly on time: ρ = ρ(p, q). The explicit form of the distribution function depends on the type of ensemble. There are three main types of ensembles:

1) The microcanonical ensemble describes isolated systems and is characterized by the variables E (energy), V (volume) and N (number of particles). In an isolated system all microstates are equally probable (the postulate of equal a priori probabilities):

ρ(p, q) = const for H(p, q) = E, and ρ = 0 otherwise. (10.5)

2) The canonical ensemble describes systems in thermal equilibrium with their environment. Thermal equilibrium is characterized by the temperature T; the distribution function therefore also depends on the temperature:

ρ(p, q) = C · exp[−H(p, q)/kT], (10.6)

where k = 1.38·10⁻²³ J/K is the Boltzmann constant. The value of the constant C is determined by the normalization condition (see (11.2)).

A special case of the canonical distribution (10.6) is the Maxwell distribution over speeds v, which is valid for gases:

φ(v) = 4πv² (m/2πkT)^3/2 exp(−mv²/2kT), (10.7)

where m is the mass of a gas molecule. The expression φ(v) dv gives the probability that a molecule has an absolute value of speed in the range from v to v + dv. The maximum of function (10.7) gives the most probable speed of the molecules, and the integral

⟨v⟩ = ∫ v φ(v) dv (10.8)

gives the average speed of the molecules.

If the system has discrete energy levels and is described quantum-mechanically, then instead of the Hamilton function H(p, q) one uses the Hamiltonian operator H, and instead of the distribution function the density-matrix operator ρ:

ρ = C · exp(−H/kT). (10.9)

The diagonal elements of the density matrix give the probability that the system is in the i-th energy state with energy E i :

ρ i = C · exp(−E i /kT). (10.10)

The value of the constant C is determined by the normalization condition Σ i ρ i = 1:

C = 1 / Σ i exp(−E i /kT). (10.11)

The denominator of this expression is called the sum over states (see Chapter 11). It is the key quantity for the statistical calculation of the thermodynamic properties of the system. From (10.10) and (10.11) one can find the number of particles N i having energy E i :

N i = N · exp(−E i /kT) / Σ j exp(−E j /kT), (10.12)

where N is the total number of particles. The distribution of particles (10.12) over energy levels is called the Boltzmann distribution, and the numerator of this distribution is the Boltzmann factor (multiplier). Sometimes this distribution is written in a different form: if there are several levels with the same energy E i , they are combined into one group by summing the Boltzmann factors:

N i = N · g i exp(−E i /kT) / Σ j g j exp(−E j /kT), (10.13)

where g i is the number of levels with energy E i (the statistical weight).

Many macroscopic parameters of a thermodynamic system can be calculated using the Boltzmann distribution. For example, the average energy is defined as the average over the energy levels taking into account their statistical weights:

⟨E⟩ = Σ i E i g i exp(−E i /kT) / Σ i g i exp(−E i /kT). (10.14)

3) The grand canonical ensemble describes open systems that are in thermal equilibrium and can exchange matter with the environment. Thermal equilibrium is characterized by the temperature T, and the equilibrium with respect to the number of particles by the chemical potential μ. The distribution function therefore depends on the temperature and the chemical potential. An explicit expression for the distribution function of the grand canonical ensemble will not be used here.

In statistical theory it is proven that for systems with a large number of particles (~10²³) all three types of ensembles are equivalent to one another. The use of any ensemble leads to the same thermodynamic properties, so the choice of a particular ensemble for describing a thermodynamic system is dictated only by the convenience of the mathematical treatment of the distribution functions.

EXAMPLES

Example 10-1. A molecule can occupy two levels with energies 0 and 300 cm⁻¹. What is the probability that the molecule will be in the upper level at 250 °C?

Solution. We apply the Boltzmann distribution; to convert the spectroscopic energy unit cm⁻¹ to joules, we use the factor hc (h = 6.63·10⁻³⁴ J·s, c = 3·10¹⁰ cm/s): 300 cm⁻¹ = 300 · 6.63·10⁻³⁴ · 3·10¹⁰ = 5.97·10⁻²¹ J. For a two-level system the probability of the upper level is

$$\frac{N_1}{N} = \frac{e^{-E/kT}}{1 + e^{-E/kT}} = \frac{e^{-5.97\cdot 10^{-21}/(1.38\cdot 10^{-23}\cdot 523)}}{1 + e^{-5.97\cdot 10^{-21}/(1.38\cdot 10^{-23}\cdot 523)}} = 0.304.$$

Answer. 0.304.
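The arithmetic of this example is easy to check numerically. The sketch below reproduces the answer using the same rounded constants as in the solution above.

```python
import math

# constants as used in the solution
k = 1.38e-23   # Boltzmann constant, J/K
h = 6.63e-34   # Planck constant, J*s
c = 3.0e10     # speed of light, cm/s

E = 300 * h * c            # 300 cm^-1 converted to joules
T = 250 + 273              # 250 C in kelvins
x = math.exp(-E / (k * T))
p_upper = x / (1 + x)      # two-level Boltzmann distribution
print(round(p_upper, 3))   # 0.304
```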

Example 10-2. A molecule can be at a level with energy 0 or at one of three levels with energy E. At what temperature will a) all molecules be at the lower level, b) the number of molecules at the lower level will be equal to the number of molecules at the upper levels, c) the number of molecules at the lower level will be three times less than the number of molecules at the upper levels?

Solution. Let's use the Boltzmann distribution (10.13):

a) N₀/N = 1: exp(−E/kT) = 0, hence T = 0. As the temperature decreases, the molecules accumulate at the lower levels.

b) N₀/N = 1/2: exp(−E/kT) = 1/3, hence T = E/[k ln 3].

c) N₀/N = 1/4: exp(−E/kT) = 1, hence T = ∞. At high temperatures the molecules are distributed evenly over the energy levels, because all the Boltzmann factors are almost the same and equal to 1.

Answer. a) T = 0; b) T = E/[k ln 3]; c) T = ∞.
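Case b) can be verified numerically: at T = E/(k ln 3) the ground level holds exactly half of the molecules. The energy gap in the sketch below is an arbitrary value chosen only for the check.

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
E = 1.0e-20       # level energy, J (arbitrary illustrative value)

T = E / (k * math.log(3))           # temperature from case b)
lower = 1.0                         # Boltzmann factor of the ground level
upper = 3 * math.exp(-E / (k * T))  # three degenerate levels with energy E
frac_lower = lower / (lower + upper)
print(round(frac_lower, 6))  # 0.5: half the molecules sit on the lower level
```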

Example 10-3. When any thermodynamic system is heated, the population of some levels increases and others decreases. Using Boltzmann's distribution law, determine what the energy of a level must be in order for its population to increase with increasing temperature.

Solution. The population is the fraction of molecules occupying a given energy level. By the condition of the problem, the derivative of this quantity with respect to temperature must be positive:

$$\frac{\partial}{\partial T}\Bigl(\frac{N_i}{N}\Bigr) = \frac{\partial}{\partial T}\,\frac{g_i\,e^{-E_i/kT}}{\sum_j g_j\,e^{-E_j/kT}} = \frac{N_i}{N}\cdot\frac{E_i - \langle E\rangle}{kT^2} > 0,$$

where the definition of the average energy (10.14) was used in the last step. Thus, the population increases with temperature for all levels lying above the average energy of the system.

Answer. E_i > ⟨E⟩.
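The criterion E_i > ⟨E⟩ can be checked numerically by comparing populations at two nearby temperatures. The three-level system below, with its arbitrary energies and temperatures, is assumed only for illustration.

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K

def populations(energies, T):
    """Boltzmann populations N_i/N for non-degenerate levels."""
    w = [math.exp(-E / (k * T)) for E in energies]
    Z = sum(w)
    return [x / Z for x in w]

energies = [0.0, 1.0e-20, 3.0e-20]  # arbitrary example levels, J
p_cold = populations(energies, 300.0)
p_hot = populations(energies, 310.0)
avg_E = sum(p * E for p, E in zip(p_cold, energies))

for E, a, b in zip(energies, p_cold, p_hot):
    # a level gains population on heating exactly when it lies above <E>
    print(E > avg_E, b > a)
```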

TASKS

10-1. A molecule can occupy two levels with energies 0 and 100 cm⁻¹. What is the probability that the molecule will be in the lowest level at 25 °C?

10-2. A molecule can occupy two levels with energies 0 and 600 cm⁻¹. At what temperature will there be twice as many molecules at the upper level as at the lower level?

10-3. A molecule can be at a level with energy 0 or at one of three levels with energy E. Find the average energy of molecules: a) at very low temperatures, b) at very high temperatures.

10-4. When any thermodynamic system cools, the population of some levels increases and others decreases. Using Boltzmann's distribution law, determine what the energy of a level must be in order for its population to increase with decreasing temperature.

10-5. Calculate the most probable speed of carbon dioxide molecules at a temperature of 300 K.

10-6. Calculate the average speed of helium atoms under normal conditions.

10-7. Calculate the most probable speed of ozone molecules at a temperature of −30 °C.

10-8. At what temperature is the average speed of oxygen molecules equal to 500 m/s?

10-9. Under some conditions, the average speed of oxygen molecules is 400 m/s. What is the average speed of hydrogen molecules under the same conditions?

10-10. What fraction of molecules of mass m has a speed above the average at temperature T? Does this fraction depend on the mass of the molecules and on the temperature?

10-11. Using the Maxwell distribution, calculate the average kinetic energy of motion of molecules of mass m at temperature T. Is this energy equal to the kinetic energy at the average speed?
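For averages of the kind asked in task 10-11, the Maxwell distribution can also be integrated numerically. The sketch below evaluates ⟨mv²/2⟩ on a simple grid and compares it with the kinetic energy computed at the average speed; the molecular mass and temperature are arbitrary illustrative values, not part of the task.

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
m = 4.65e-26      # mass of an N2-like molecule, kg (example value)
T = 300.0

def f(v):
    """Maxwell speed distribution for mass m at temperature T."""
    a = m / (2 * k * T)
    return 4 * math.pi * (a / math.pi) ** 1.5 * v * v * math.exp(-a * v * v)

# numerical average of the kinetic energy over the speed distribution
n, vmax = 200000, 5000.0
dv = vmax / n
avg_Ekin = sum(0.5 * m * (i * dv) ** 2 * f(i * dv) * dv for i in range(1, n))

print(avg_Ekin / (1.5 * k * T))         # close to 1: <m v^2/2> = 3kT/2
v_avg = math.sqrt(8 * k * T / (math.pi * m))
print(0.5 * m * v_avg ** 2 / avg_Ekin)  # ~0.85: not the same energy
```

The comparison shows that the kinetic energy at the average speed, m⟨v⟩²/2 = 4kT/π, underestimates the true average kinetic energy 3kT/2, because averaging v² is not the same as squaring the average of v.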


