Properties of the Fourier transform associated with differentiation. Keywords: Fourier transform, Fourier integral, complex form of the integral, cosine and sine transforms, amplitude and phase spectra, applications, properties.

As follows from the theory of Fourier series, that theory applies to periodic functions and to functions defined on a bounded interval of the independent variable (such an interval can be extended to the entire axis by periodic continuation). However, periodic functions are relatively rare in practice. This calls for a more general mathematical apparatus for handling non-periodic functions, namely the Fourier integral and, based on it, the Fourier transform.

Let us consider a non-periodic function f(t) as the limit of a periodic function with period T = 2l as l → ∞.

A periodic function with period 2l can be expanded in a Fourier series (we will use its complex form):

f(t) = \sum_{n=-\infty}^{\infty} c_n e^{i\omega_n t},    (1)

where the coefficients are given by

c_n = \frac{1}{2l} \int_{-l}^{l} f(t) e^{-i\omega_n t}\, dt.    (2)

Let us introduce the following notation for the frequencies:

\omega_n = \frac{\pi n}{l}, \quad n = 0, \pm 1, \pm 2, \ldots    (3)

Let us write the Fourier series expansion as a single formula, substituting into (1) the expression for the coefficients (2) and for the frequencies (3):

f(t) = \sum_{n=-\infty}^{\infty} \left( \frac{1}{2l} \int_{-l}^{l} f(\tau) e^{-i\omega_n \tau}\, d\tau \right) e^{i\omega_n t}.    (4)

(Figure: the discrete spectrum of a periodic function with period 2l.)

Let us denote by Δω the minimum distance between points of the spectrum, equal to the fundamental oscillation frequency:

\Delta\omega = \omega_{n+1} - \omega_n = \frac{\pi}{l},

and introduce this notation into (4):

f(t) = \frac{1}{2\pi} \sum_{n=-\infty}^{\infty} \left( \int_{-l}^{l} f(\tau) e^{-i\omega_n \tau}\, d\tau \right) e^{i\omega_n t}\, \Delta\omega.

In this notation, the Fourier series resembles a Riemann (integral) sum.

Passing to the limit T = 2l → ∞, i.e. to a non-periodic function, we find that the frequency interval becomes infinitesimal (we denote it dω) and the spectrum becomes continuous. Mathematically, this corresponds to replacing summation over a discrete set by integration over the corresponding variable with infinite limits.

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \left( \int_{-\infty}^{\infty} f(\tau) e^{-i\omega \tau}\, d\tau \right) e^{i\omega t}\, d\omega.    (5)

This expression is the Fourier integral formula.

2.2 Fourier transform formulas.

It is convenient to split the Fourier integral into a superposition of two formulas:

F(\omega) = \int_{-\infty}^{\infty} f(t) e^{-i\omega t}\, dt,    (6)

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega) e^{i\omega t}\, d\omega.    (7)

The function F(ω), assigned to f(t) by the first formula (6), is called its Fourier transform (Fourier image). The second formula (7), which recovers the original function from its image, is called the inverse Fourier transform. Note the symmetry of the formulas for the direct and inverse transforms, up to the constant factor 1/2π and the sign in the exponent.

Symbolically, we will denote the direct and inverse Fourier transform pair as f(t) ↔ F(ω).

Drawing an analogy with the trigonometric Fourier series, we conclude that the Fourier image (6) is the analogue of the Fourier coefficient (see (2)), and the inverse Fourier transform (7) is the analogue of the expansion of a function in a trigonometric Fourier series (see (1)).

Note that the factor 1/2π can instead be attached to the direct Fourier transform, or split symmetrically between the direct and inverse transforms (1/√(2π) in each). What matters is that the two transforms together reproduce the Fourier integral formula (5), i.e. the product of the constant factors of the direct and inverse transforms must equal 1/2π.

Note that for applied purposes it is often more convenient to use not the angular frequency ω but the frequency ν, related to it by ω = 2πν and measured in hertz (Hz). In terms of this frequency the Fourier transform formulas become:

F(\nu) = \int_{-\infty}^{\infty} f(t) e^{-2\pi i \nu t}\, dt, \qquad f(t) = \int_{-\infty}^{\infty} F(\nu) e^{2\pi i \nu t}\, d\nu.
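These ν-form formulas are easy to check numerically. Below is a minimal sketch (not part of the original lecture; it assumes Python with NumPy) that approximates the Fourier integral of the Gaussian f(t) = e^{−t²} by a Riemann sum and compares it with the known closed form F(ν) = √π e^{−π²ν²}:

```python
import numpy as np

# Numerical check of the nu-form Fourier transform on a Gaussian test
# function f(t) = exp(-t^2), whose transform is known in closed form:
# F(nu) = sqrt(pi) * exp(-pi^2 * nu^2).
t = np.linspace(-10, 10, 20001)          # integration grid
f = np.exp(-t**2)

def fourier(nu):
    # Trapezoidal approximation of the integral of f(t) e^{-2 pi i nu t} dt
    return np.trapz(f * np.exp(-2j * np.pi * nu * t), t)

for nu in (0.0, 0.3, 1.0):
    numeric = fourier(nu)
    analytic = np.sqrt(np.pi) * np.exp(-np.pi**2 * nu**2)
    assert abs(numeric - analytic) < 1e-6
```

The trapezoidal rule is very accurate here because the integrand decays rapidly at the ends of the grid.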

Let us formulate without proof sufficient conditions for the existence of the Fourier transform.

  • 1) f(t) is bounded on t ∈ (−∞, ∞);
  • 2) f(t) is absolutely integrable on (−∞, ∞);
  • 3) the number of discontinuities, maxima, and minima of f(t) is finite.

Another sufficient condition is that the function be square-integrable on the real axis, which physically corresponds to the requirement of finite signal energy.

Thus, the Fourier transform gives us two ways to represent a signal: the time representation f(t) and the frequency representation F(ω).

  • 2.3 Properties of the Fourier transform.
  • 1. Linearity.

If f(t) ↔ F(ω) and g(t) ↔ G(ω),

then af(t) + bg(t) ↔ aF(ω) + bG(ω).

The proof follows from the linearity of the integral.
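The discrete transform inherits this property exactly, which makes it easy to verify numerically. A small sketch (NumPy assumed, not part of the lecture):

```python
import numpy as np

# Linearity: the FFT of a linear combination equals the same linear
# combination of the FFTs.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
y = rng.standard_normal(256)
a, b = 2.0, -3.5

lhs = np.fft.fft(a * x + b * y)
rhs = a * np.fft.fft(x) + b * np.fft.fft(y)
assert np.allclose(lhs, rhs)
```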

  • 2. Parity.
  • 2.1 If f(t) is a real even function and f(t) ↔ F(ω), then F(ω) is also a real even function.

Proof:

Using definition (6) and Euler's formula, we obtain

F(\omega) = \int_{-\infty}^{\infty} f(t)(\cos\omega t - i\sin\omega t)\, dt = 2\int_{0}^{\infty} f(t)\cos\omega t\, dt,

an even real function (the sine term vanishes because f(t) sin ωt is odd).

  • 2.2 If f(t) is an odd real function, then F(ω) is an odd, purely imaginary function.

2.3 If f(t) is an arbitrary real function, then F(ω) has an even real part and an odd imaginary part.

Proof:


The parity properties in item 2 can be summarized by the formula F(−ω) = \overline{F(\omega)}, valid for any real f(t).

3. Similarity

If f(t) ↔ F(ω), then f(at) ↔ \frac{1}{|a|} F\!\left(\frac{\omega}{a}\right).

  • 4. Shift.
  • 4.1 If f(t) ↔ F(ω), then f(t − a) ↔ e^{-i\omega a} F(\omega).

That is, a time delay corresponds to multiplication by a complex exponential in the frequency domain.
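The discrete analogue is easy to check: a circular delay of k samples multiplies the DFT by e^{−2πikn/N} and leaves the amplitude spectrum untouched. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Shift property: delaying a signal by k samples multiplies its DFT by
# exp(-2*pi*i*k*n/N) and does not change the amplitude spectrum.
N, k = 128, 17
rng = np.random.default_rng(1)
x = rng.standard_normal(N)
X = np.fft.fft(x)
X_shifted = np.fft.fft(np.roll(x, k))     # circular delay by k samples

n = np.arange(N)
assert np.allclose(X_shifted, X * np.exp(-2j * np.pi * k * n / N))
assert np.allclose(np.abs(X_shifted), np.abs(X))   # amplitude spectrum intact
```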

4.2 If f(t) ↔ F(ω), then e^{i\omega_0 t} f(t) ↔ F(\omega - \omega_0).

That is, a frequency shift corresponds to multiplication by a complex exponential in the time domain.

  • 5. Differentiation. If f(t) ↔ F(ω), then
  • 5.1 f'(t) ↔ i\omega F(\omega), and in general f^{(n)}(t) ↔ (i\omega)^n F(\omega),

if f(t) has n continuous derivatives.

Proof:

  • 5.2 (-it)^n f(t) ↔ F^{(n)}(\omega), if F(ω) has n continuous derivatives.

Proof:
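Property 5.1 can be checked numerically on a smooth periodic signal: multiplying the DFT by iω and inverting recovers the derivative. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Spectral differentiation check of f'(t) <-> i*omega*F(omega)
# on the smooth 2*pi-periodic function f(t) = exp(sin t).
N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.exp(np.sin(t))
exact_derivative = np.cos(t) * np.exp(np.sin(t))

# Angular frequencies in FFT ordering: 0, 1, ..., N/2-1, -N/2, ..., -1
omega = np.fft.fftfreq(N, d=2 * np.pi / N) * 2 * np.pi
spectral_derivative = np.real(np.fft.ifft(1j * omega * np.fft.fft(f)))
assert np.allclose(spectral_derivative, exact_derivative, atol=1e-8)
```

For smooth periodic functions this "spectral derivative" is accurate to near machine precision.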

  • 2.4 The most important examples of finding the Fourier transform.

1. The rectangular pulse Π_T(t), equal to 1 for |t| ≤ T and 0 for |t| > T, has the Fourier transform

F(\omega) = \int_{-T}^{T} e^{-i\omega t}\, dt = \frac{2\sin\omega T}{\omega}.

2. For the Gaussian pulse f(t) = e^{-t^2}, completing the square in the exponent gives F(\omega) = \sqrt{\pi}\, e^{-\omega^2/4}. Here we took into account that \int_{-\infty}^{\infty} e^{-t^2}\, dt = \sqrt{\pi} is the Poisson integral.

The last integral can be explained as follows. After completing the square, the integration contour C is a straight line in the complex t-plane parallel to the real axis (its offset depends on the fixed value of ω). The integral of an analytic function over a closed contour is zero. We form a closed contour consisting of the line C and the real axis, closed at infinity. Since the integrand tends to zero at infinity, the integrals along the closing segments vanish. Hence the integral along the line C equals the integral along the real axis taken in the positive direction.

2.5 The uncertainty principle for the time-frequency representation of a signal.

Using the example of a rectangular pulse, we will demonstrate the uncertainty principle: it is impossible simultaneously to localize a pulse in time and to sharpen its frequency selectivity.

The width of the rectangular pulse in the time domain is ΔT = 2T. As the width of the Fourier image of the rectangular pulse we take the distance between the adjacent zeros of the central lobe in the frequency domain. The first zeros of F(ω) = 2 sin(ωT)/ω lie at ω = ±π/T, so Δω = 2π/T.

Thus we get

\Delta T \cdot \Delta\omega = 2T \cdot \frac{2\pi}{T} = 4\pi = \mathrm{const}.

Thus, the more a pulse is localized in time, the more its spectrum is smeared; conversely, to narrow the spectrum we are forced to stretch the pulse in time. This principle holds for a pulse of any shape and is universal.
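The constancy of the time-bandwidth product for the rectangular pulse can be confirmed numerically: measure the main-lobe width of F(ω) = 2 sin(ωT)/ω on a grid and multiply by ΔT = 2T. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Time-bandwidth product of the rectangular pulse: DT = 2T and
# Domega = 2*pi/T (first zeros of 2*sin(omega*T)/omega), so DT*Domega = 4*pi.
w = np.linspace(1e-6, 30.0, 300001)

def main_lobe_width(T):
    F = 2.0 * np.sin(w * T) / w
    first_zero = w[np.argmax(np.diff(np.sign(F)) != 0)]  # first sign change ~ pi/T
    return 2.0 * first_zero                              # lobe spans (-pi/T, pi/T)

for T in (0.5, 1.0, 2.0):
    product = 2 * T * main_lobe_width(T)                 # DT * Domega
    assert abs(product - 4 * np.pi) < 0.01
```

Whatever T we choose, the product stays at 4π: compressing the pulse widens the lobe and vice versa.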

2.6 Convolution and its properties.

Convolution is the basic operation in signal filtering.

We call h(t) the convolution of the non-periodic functions f(t) and g(t) if it is defined by the integral

h(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.

We denote this symbolically as h = f ∗ g.

The convolution operation has the following properties.

  • 1. Commutativity.

The proof of commutativity is obtained by the change of variable t − τ = τ'.

  • 2. Associativity: (f ∗ g) ∗ h = f ∗ (g ∗ h)

Proof:

  • 3. Distributivity: f ∗ (g + h) = f ∗ g + f ∗ h

The proof of this property follows directly from the linear properties of integrals.
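All three algebraic properties carry over to discrete sequences and can be verified directly. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Commutativity, associativity, and distributivity of discrete convolution.
rng = np.random.default_rng(2)
f = rng.standard_normal(40)
g = rng.standard_normal(50)
h = rng.standard_normal(60)

# Commutativity: f*g = g*f
assert np.allclose(np.convolve(f, g), np.convolve(g, f))
# Associativity: (f*g)*h = f*(g*h)
assert np.allclose(np.convolve(np.convolve(f, g), h),
                   np.convolve(f, np.convolve(g, h)))
# Distributivity: f*(g+h) = f*g + f*h (the summands must have equal length)
g2, h2 = rng.standard_normal(50), rng.standard_normal(50)
assert np.allclose(np.convolve(f, g2 + h2),
                   np.convolve(f, g2) + np.convolve(f, h2))
```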

For signal processing, the most important results of the Fourier method (after the transform formulas themselves) are the convolution theorems. We will use the frequency ν instead of ω, because in this representation the two convolution theorems take a mutually symmetric form.

2.7 Convolution theorems

First convolution theorem.

The Fourier transform of the pointwise product of functions equals the convolution of their transforms:

f(t)\, g(t) \leftrightarrow (F * G)(\nu).

Proof:

Let f(t) ↔ F(ν) and g(t) ↔ G(ν). Using the definition of the inverse Fourier transform and changing the order of integration, we obtain the statement of the theorem.

In terms of the angular frequency ω this theorem takes a less symmetric form, acquiring an extra factor of 1/2π.

Second convolution theorem.

The Fourier transform of the convolution of functions equals the pointwise product of their transforms:

(f * g)(t) \leftrightarrow F(\nu)\, G(\nu).
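For finite sequences this theorem holds exactly for circular convolution, which makes a quick numerical sanity check possible. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Discrete form of the second convolution theorem: the DFT of a circular
# convolution equals the pointwise product of the DFTs.
N = 64
rng = np.random.default_rng(3)
f = rng.standard_normal(N)
g = rng.standard_normal(N)

# Circular convolution computed directly: h[k] = sum_m f[m] * g[(k-m) mod N]
h = np.array([np.sum(f * np.roll(g[::-1], k + 1)) for k in range(N)])

assert np.allclose(np.fft.fft(h), np.fft.fft(f) * np.fft.fft(g))
```

This identity is exactly why FFT-based filtering is fast: convolution in time becomes multiplication in frequency.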

Proof: analogous to that of the first theorem — substitute the convolution into definition (6) and change the order of integration.
For example, consider the convolution of a rectangular pulse with itself.

By assumption, f(τ) = 0 for τ < −T and for τ > T. Similarly, f(t − τ) = 0 for t − τ < −T and for t − τ > T, i.e. for τ > t + T and for τ < t − T. The integrand is therefore nonzero only where the intervals (−T, T) and (t − T, t + T) overlap:

for −2T < t < 0 the overlap is (−T, t + T), so h(t) = \int_{-T}^{t+T} d\tau = t + 2T;

for 0 < t < 2T the overlap is (t − T, T), so h(t) = \int_{t-T}^{T} d\tau = 2T - t.

Combining both cases, we obtain the expression for the convolution:

h(t) = 2T - |t| \ \text{for} \ |t| < 2T, \qquad h(t) = 0 \ \text{otherwise}.

Thus, the convolution of a rectangular pulse with itself is a triangular pulse (this function is sometimes called the Λ-function).

Using the convolution theorem, we can easily obtain the Fourier transform of the Λ-function:

\Lambda(t) \leftrightarrow \left(\frac{2\sin\omega T}{\omega}\right)^{2} = \frac{4\sin^{2}\omega T}{\omega^{2}}.
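The rect-into-triangle calculation above is easy to reproduce on a grid. A sketch assuming NumPy (not part of the lecture), with T = 0.5 as a hypothetical example value:

```python
import numpy as np

# Convolving a rectangular pulse with itself yields a triangular (Lambda)
# pulse with peak 2T and support [-2T, 2T].
dt = 0.01
t = np.arange(-1.0, 1.0 + dt, dt)
T = 0.5
rect = np.where(np.abs(t) <= T, 1.0, 0.0)

tri = np.convolve(rect, rect) * dt        # dt approximates d-tau in the integral
t_conv = np.arange(len(tri)) * dt - 2.0   # time axis of the result: [-2, 2]

# Peak value 2T at t = 0; matches 2T - |t| inside the support, 0 outside
assert abs(tri.max() - 2 * T) < 0.02
assert np.allclose(tri, np.maximum(0.0, 2 * T - np.abs(t_conv)), atol=0.02)
```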

In practice, physical situations correspond to functions that vanish for t < 0. As a result, the infinite limits of integration are replaced by finite ones.

Let us find the convolution of functions f(t) and g(t) that vanish for negative argument:

h(t) = \int_{0}^{t} f(\tau)\, g(t - \tau)\, d\tau,

since f(τ) = 0 for τ < 0 and g(t − τ) = 0 for t − τ < 0, i.e. for τ > t.

Let us introduce the concept of the cross-correlation of two functions f(t) and g(t):

K_{fg}(\tau) = \int_{-\infty}^{\infty} f(t)\, g(t + \tau)\, dt,

where τ is a time shift that varies continuously in the interval (−∞, ∞).

An important concept is the correlation of a function with itself, which is called autocorrelation.

  • 2.8 Signal power and energy.

Let us now consider the concepts of signal power and energy. Their importance is explained by the fact that any transfer of information is in fact a transfer of energy.

Consider an arbitrary complex signal f(t).

The instantaneous signal power p(t) is defined by the equality

p(t) = f(t)\, \overline{f(t)} = |f(t)|^{2}.

The total energy equals the integral of the instantaneous power over the entire time of the signal's existence:

E = \int_{-\infty}^{\infty} |f(t)|^{2}\, dt.

Signal power can also be considered as a function of frequency; in this case the instantaneous power at frequency ω is |F(ω)|².

The total signal energy is then calculated by the formula

E = \frac{1}{2\pi} \int_{-\infty}^{\infty} |F(\omega)|^{2}\, d\omega.

The total signal energy must not depend on the chosen representation: the values calculated from the time and frequency representations must coincide. Equating the right-hand sides, we obtain the equality

\int_{-\infty}^{\infty} |f(t)|^{2}\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} |F(\omega)|^{2}\, d\omega.

This equality constitutes the content of Parseval's theorem for non-periodic signals. A rigorous proof of this theorem will be given when studying the topic “Generalized Functions”.
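The discrete counterpart of Parseval's theorem can be checked immediately: for the DFT, the factor 1/2π becomes 1/N. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Discrete Parseval identity: sum |x[n]|^2 = (1/N) * sum |X[k]|^2,
# the analogue of the 1/(2*pi) factor in the continuous formula.
rng = np.random.default_rng(4)
x = rng.standard_normal(512) + 1j * rng.standard_normal(512)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
assert np.allclose(energy_time, energy_freq)
```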

Similarly, expressing the interaction energy of two different signals f(t) and g(t) in the time and frequency representations, we obtain:

\int_{-\infty}^{\infty} f(t)\, \overline{g(t)}\, dt = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, \overline{G(\omega)}\, d\omega.

Let us find out the mathematical meaning of Parseval's theorem.

From a mathematical point of view, the integral \int f(t)\, \overline{g(t)}\, dt is the scalar product of the functions f(t) and g(t), denoted (f, g). The quantity \|f\| = \sqrt{(f, f)} is called the norm of the function f(t). Parseval's theorem therefore states that the scalar product is invariant under the Fourier transform (up to the factor 1/2π), i.e. (f, g) = \frac{1}{2\pi}(F, G).

The instantaneous signal power considered as a function of frequency, |F(ω)|², has another generally accepted name: the power spectrum. The power spectrum is the main mathematical tool of spectral analysis, which allows one to determine the frequency composition of a signal. Besides the power spectrum, in practice the amplitude and phase spectra are used, defined respectively as |F(ω)| and arg F(ω).

  • 2.9 Wiener-Khinchin theorem.

The power spectral density of a signal f(t) equals the Fourier transform of its autocorrelation function.

The cross-spectral density of the signals f(t) and g(t) equals the Fourier transform of their cross-correlation function.

Both statements can be combined into one: the spectral density equals the Fourier transform of the corresponding correlation function.
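In discrete form the Wiener-Khinchin theorem is exact for circular autocorrelation, which gives a quick numerical illustration ahead of the proof. A sketch assuming NumPy (not part of the lecture):

```python
import numpy as np

# Discrete Wiener-Khinchin check: the DFT of the circular autocorrelation
# equals the power spectrum |X[k]|^2.
N = 128
rng = np.random.default_rng(5)
x = rng.standard_normal(N)
X = np.fft.fft(x)

# Circular autocorrelation K[m] = sum_n x[n] * x[(n+m) mod N]
K = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)])

assert np.allclose(np.fft.fft(K), np.abs(X) ** 2)
```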

The proof will be given later after introducing the concept of a generalized function.

The Fourier transform is an operation that associates one function of a real variable with another function of a real variable. We perform it every time we perceive different sounds. The ear carries out an automatic "computation" that our conscious mind can perform only after studying the corresponding branch of higher mathematics. The human hearing organ builds a transform as a result of which sound (the oscillatory motion of particles of an elastic medium, propagating as waves in a solid, liquid, or gaseous medium) is presented as a spectrum of successive loudness levels of tones of different pitch. After this, the brain turns this information into a familiar sound.

Mathematical Fourier Transform

The transformation of sound waves or other oscillatory processes (from light radiation and ocean tides to cycles of stellar or solar activity) can also be carried out using mathematical methods. Thus, using these techniques, it is possible to expand functions by representing oscillatory processes as a set of sinusoidal components, that is, wavy curves that move from minimum to maximum, then back to minimum, like a sea wave. The Fourier transform is a transformation whose function describes the phase or amplitude of each sinusoid corresponding to a certain frequency. Phase represents the starting point of the curve, and amplitude represents its height.

The Fourier transform is a very powerful tool used in various fields of science. In some cases it serves as a means of solving rather complex equations describing dynamic processes that arise under the influence of light, thermal, or electrical energy. In other cases it makes it possible to identify the regular components of complex oscillatory signals, and thereby to interpret correctly various experimental observations in chemistry, medicine, and astronomy.

Historical background

The first person to use this method was the French mathematician Jean Baptiste Fourier. The transformation later named after him was originally used to describe the mechanism of thermal conductivity. Fourier spent his entire adult life studying the properties of heat. He made enormous contributions to the mathematical theory of determining the roots of algebraic equations. Fourier was a professor of analysis at the Polytechnic School, secretary of the Institute of Egyptology, and was in the imperial service, in which he distinguished himself during the construction of the road to Turin (under his leadership, more than 80 thousand square kilometers of malarial swamps were drained). However, all this vigorous activity did not prevent the scientist from engaging in mathematical analysis. In 1802, he derived an equation that describes the propagation of heat in solids. In 1807, the scientist discovered a method for solving this equation, which was called the “Fourier transform.”

Thermal conductivity analysis

The scientist used a mathematical method to describe the mechanism of heat conduction. A convenient example, free of computational difficulties, is the propagation of thermal energy along an iron ring, part of which is immersed in fire. For his experiments Fourier heated part of such a ring red-hot and buried it in fine sand, then took temperature measurements on the opposite side. Initially the heat distribution is irregular: part of the ring is cold and the other hot, with a sharp temperature gradient between these zones. As heat spreads over the whole surface of the metal, however, the distribution becomes more uniform and soon takes the form of a sinusoid: the graph rises smoothly and falls just as smoothly, exactly following the law of a sine or cosine function. The wave gradually flattens out, and as a result the temperature becomes the same over the entire surface of the ring.

The author of this method suggested that the initial irregular distribution can be decomposed into a series of elementary sinusoids, each with its own phase (initial position) and its own temperature maximum. Each such component changes from minimum to maximum and back an integer number of times in a full revolution around the ring. The component with one period was called the fundamental harmonic; components with two or more periods were called the second harmonic, and so on. The mathematical function describing the temperature maximum, phase, or position of each component is called the Fourier transform of the distribution function. In this way the scientist reduced a single distribution that is difficult to describe mathematically to a convenient tool: a series of cosine and sine functions that together give the original distribution.

The essence of the analysis

Applying this analysis to heat propagation through a solid object of ring shape, the mathematician reasoned that a sinusoidal component with a greater number of periods decays more rapidly. This is clearly seen by comparing the fundamental and the second harmonic: in the latter the temperature passes through its maximum and minimum twice in one revolution, in the former only once. It follows that the distance covered by heat in the second harmonic is half that in the fundamental, and its gradient is twice as steep. Since the more intense heat flow travels half the distance, the second harmonic decays four times faster than the fundamental as a function of time, and subsequent harmonics decay faster still. The mathematician believed that this method makes it possible to calculate how the initial temperature distribution evolves over time.

Challenge to contemporaries

The Fourier transform challenged the theoretical foundations of the mathematics of its time. At the beginning of the nineteenth century, most prominent scientists, including Lagrange, Laplace, Poisson, Legendre, and Biot, did not accept his claim that the initial temperature distribution decomposes into components in the form of a fundamental harmonic and higher frequencies. Nevertheless, the Academy of Sciences could not ignore the results the mathematician had obtained and awarded him a prize for his theory of the laws of heat conduction and its comparison with physical experiment. The main objection to Fourier's approach was that a discontinuous function was being represented by a sum of continuous sinusoidal functions. The scientist's contemporaries had never encountered a situation in which discontinuous functions, describing broken and curved lines, were represented by combinations of continuous ones such as quadratic, linear, sinusoidal, or exponential functions. If the mathematician was right, then the sum of an infinite trigonometric series should converge to an exact step function, and at the time such a claim seemed absurd. Despite these doubts, however, some researchers (for example Claude Navier and Sophie Germain) expanded the scope of the method and took it beyond the analysis of heat propagation, while mathematicians continued to wrestle with the question of whether a sum of sinusoidal functions can exactly represent a discontinuous one.

200 year history

This theory has developed over two centuries and is now fully formed. With its help, spatial or temporal functions are decomposed into sinusoidal components, each with its own frequency, phase, and amplitude. The transform is obtained by two different mathematical routes: the first is used when the original function is continuous, the second when it is represented by a set of discrete individual values. If the expression is built from values defined at discrete intervals, it can be decomposed into a set of sinusoidal expressions with discrete frequencies: the lowest (fundamental), then twice, three times, and so on above it. Such a sum is called a Fourier series. If the original expression is defined for every real number, it can be decomposed into sinusoids of all possible frequencies; this is called the Fourier integral, and the solution involves an integral transform of the function. Regardless of how the transform is obtained, two numbers must be specified for each frequency: the amplitude and the phase, which are combined into a single complex number. The theory of functions of a complex variable, together with the Fourier transform, made it possible to carry out calculations in the design of electrical circuits, the analysis of mechanical vibrations, the study of wave propagation, and more.

Fourier transform today

Nowadays the study of this process mainly comes down to finding efficient methods for passing from a function to its transformed form and back; these are called the direct and inverse Fourier transforms. To perform a direct Fourier transform one can use either analytical or numerical methods. Although certain difficulties arise when applying analytical methods in practice, most of the relevant integrals have already been evaluated and collected in mathematical reference books. Numerical methods allow one to transform expressions built from experimental data, as well as functions whose integrals are missing from the tables and are difficult to express in analytical form.

Before the advent of computing technology, such transforms were very tedious to calculate: they required the manual execution of a large number of arithmetic operations, whose count depended on the number of points describing the wave function. Today the calculations are made practical by special fast algorithms. In 1965, James Cooley and John Tukey published an algorithm that became known as the "fast Fourier transform" (FFT). It saves computation time by reducing the number of multiplications needed to analyze a curve. The method is based on recursively splitting the sampled curve into interleaved subsequences, so that halving the number of points roughly halves the work at each stage.
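The divide-and-conquer idea behind the Cooley-Tukey algorithm fits in a few lines. Below is a minimal recursive radix-2 sketch (not part of the original article; NumPy is assumed, and the input length must be a power of two), checked against NumPy's own FFT:

```python
import numpy as np

# Minimal recursive radix-2 Cooley-Tukey FFT sketch.
def fft_radix2(x):
    x = np.asarray(x, dtype=complex)
    n = len(x)
    if n == 1:
        return x
    even = fft_radix2(x[0::2])            # transform of even-indexed samples
    odd = fft_radix2(x[1::2])             # transform of odd-indexed samples
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    # Combine the two half-size transforms into the full transform
    return np.concatenate([even + twiddle * odd,
                           even - twiddle * odd])

rng = np.random.default_rng(6)
x = rng.standard_normal(256)
assert np.allclose(fft_radix2(x), np.fft.fft(x))
```

Each level of recursion does O(n) work and there are log2(n) levels, giving the familiar O(n log n) cost instead of O(n²) for the direct DFT.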

Applying the Fourier transform

This process is used in various fields of science: physics, signal processing, combinatorics, probability theory, cryptography, statistics, oceanology, optics, acoustics, geometry and others. The rich possibilities of its application are based on a number of useful features, which are called “properties of the Fourier transform.” Let's look at them.

1. The transform is a linear operator and, with appropriate normalization, unitary. This property is known as Parseval's theorem or, in the general case, Plancherel's theorem, or Pontryagin duality.

2. The transform is invertible, and the inverse transform has almost the same form as the direct one.

3. The sinusoidal basis functions are eigenfunctions of differentiation, which means that this representation turns linear differential equations with constant coefficients into ordinary algebraic ones.

4. By the convolution theorem, the transform turns the complicated operation of convolution into elementary pointwise multiplication.

5. The discrete Fourier transform can be quickly calculated on a computer using the "fast" method.

Varieties of Fourier transform

1. Most often the term denotes the continuous transform, which represents any square-integrable function as a sum (integral) of complex exponentials with specific angular frequencies and amplitudes. It exists in several forms differing in constant coefficients; tables of transforms can be found in mathematical reference books. A generalized case is the fractional transform, by means of which the process can be raised to a given real power.

2. The Fourier series, an earlier technique generalized by the continuous transform: it is defined for periodic functions, or functions occupying a bounded region, and represents them as series of sinusoids.

3. The discrete Fourier transform. This method is used in computer technology for scientific calculations and digital signal processing. It works with functions defined by individual samples on a discrete set, periodic or bounded, instead of continuous Fourier integrals. The transform again represents the signal as a sum of sinusoids, and the "fast" method makes such discrete solutions usable for any practical problem.

4. The windowed Fourier transform is a generalization of the classical method. Unlike the standard transform, which is taken over the entire range of the variable, here the frequency distribution is of interest only locally, while the original variable (time) is preserved.

5. Two-dimensional Fourier transform. This method is used to work with two-dimensional data arrays. In this case, the transformation is first performed in one direction, and then in the other.

Conclusion

Today the Fourier method is firmly established in many fields of science. For example, in 1962 the shape of the DNA double helix was discovered using Fourier analysis combined with X-ray diffraction of crystalline DNA fibers: the diffraction image was recorded on film, and this picture gave information about the amplitudes when the Fourier transform was applied to the crystal structure. Phase data were obtained by comparing the diffraction map of DNA with maps obtained from the analysis of similar chemical structures. As a result, biologists reconstructed the crystal structure, that is, the original function.

Fourier transforms play a huge role in space exploration, semiconductor and plasma physics, microwave acoustics, oceanography, radar, seismology and medical examinations.

I believe that everyone is generally aware of the existence of such a wonderful mathematical tool as the Fourier transform. However, for some reason it is taught so poorly in universities that relatively few people understand how this transformation works and how it should be used correctly. Meanwhile, the mathematics of this transformation is surprisingly beautiful, simple and elegant. I invite everyone to learn a little more about the Fourier transform and the related topic of how analog signals can be effectively converted into digital signals for computational processing.

Without using complex formulas and Matlab, I will try to answer the following questions:

  • FT, DFT, DTFT - what are the differences, and how do seemingly completely different formulas give such conceptually similar results?
  • How to Correctly Interpret Fast Fourier Transform (FFT) Results
  • What to do if you are given a signal of 179 samples and the FFT requires an input sequence of length equal to a power of two
  • Why, when trying to obtain the spectrum of a sinusoid using Fourier, instead of the expected single “stick”, a strange squiggle appears on the graph and what can be done about it
  • Why are analog filters placed before the ADC and after the DAC?
  • Is it possible to digitize with an ADC a signal whose frequency is above half the sampling frequency (the school answer is incorrect; the correct answer is: it is possible)
  • How to restore the original signal using a digital sequence

I will proceed from the assumption that the reader understands what an integral is, what a complex number is (along with its modulus and argument), and what convolution of functions is, plus at least a "hands-on" idea of what the Dirac delta function is. If you don't know, no problem: follow the links above. Throughout this text, by "product of functions" I will mean "pointwise multiplication".

We should probably start with the fact that the usual Fourier transform is, as its name suggests, a transform that turns one function into another: it associates each function of a real variable x(t) with its spectrum, or Fourier image, X(ω):

If we look for analogies, an example of a transformation similar in spirit is differentiation, which turns a function into its derivative. The Fourier transform is essentially the same kind of operation on a function as taking a derivative, and it is often denoted similarly, by drawing a triangular "cap" over the function. Unlike differentiation, however, which can also be defined over the reals, the Fourier transform always "works" with the more general complex numbers. Because of this, problems constantly arise in displaying its results: a complex number is determined not by one but by two coordinates on a graph of real values. The most convenient way, as a rule, is to represent complex numbers by their modulus and argument and draw them separately as two graphs:

The graph of the argument of the complex value is in this case often called the "phase spectrum", and the graph of the modulus the "amplitude spectrum". The amplitude spectrum is usually of much greater interest, so the "phase" part of the spectrum is often skipped. In this article we will also focus on "amplitude" matters, but we should not forget about the existence of the missing phase part of the graph. In addition, instead of the usual modulus of the complex value, its decimal logarithm multiplied by 10 is often drawn. The result is a logarithmic graph, whose values are expressed in decibels (dB).

Note that moderately negative numbers on the logarithmic graph (−20 dB or less) correspond to practically zero values on the "ordinary" graph. Because of this, the long, wide "tails" of various spectra on such graphs practically disappear when displayed in "ordinary" coordinates. The convenience of this seemingly strange representation comes from the fact that Fourier images of various functions often need to be multiplied together. In such pointwise multiplication of complex-valued Fourier images, their phase spectra add and their amplitude spectra multiply. The first is easy to do, the second relatively hard. However, the logarithms of amplitudes add when the amplitudes are multiplied, so logarithmic amplitude graphs can, like phase graphs, simply be added pointwise. In addition, in practical problems it is often more convenient to operate not with the "amplitude" of a signal but with its "power" (the square of the amplitude). On a logarithmic scale both graphs (amplitude and power) look identical and differ only by a coefficient: all values on the power graph are exactly twice as large as on the amplitude scale. Accordingly, to plot the power distribution over frequency (in decibels), one need not square anything: just compute the decimal logarithm and multiply it by 20.

Are you bored? Wait a little longer, we'll soon be done with the boring part of the article explaining how to interpret the graphs :). But before that, there is one extremely important thing to understand: although all the spectrum graphs above were drawn for some limited ranges of values (positive numbers in particular), all of these graphs actually continue to plus and minus infinity. The graphs simply depict the "most meaningful" part, which is usually mirrored for negative values of the parameter and often repeats periodically with a certain step when viewed on a larger scale.

Having decided what is drawn on the graphs, let's return to the Fourier transform itself and its properties. There are several different ways to define this transform, differing in small details (different normalizations). For example, in our universities, for some reason, they often use a normalization of the Fourier transform that defines the spectrum in terms of angular frequency (radians per second). I will use the more convenient Western formulation, which defines the spectrum in terms of ordinary frequency (hertz). The direct and inverse Fourier transforms in this case are given by

X(\nu) = \int_{-\infty}^{\infty} x(t)\, e^{-2\pi i \nu t}\, dt, \qquad x(t) = \int_{-\infty}^{\infty} X(\nu)\, e^{2\pi i \nu t}\, d\nu,

and the properties of this transform that we will need are described below.

The first of these properties is linearity. If we take some linear combination of functions, then the Fourier transform of this combination will be the same linear combination of the Fourier images of these functions. This property allows complex functions and their Fourier images to be reduced to simpler ones. For example, the Fourier transform of a sinusoidal function with frequency f and amplitude a is a combination of two delta functions located at points f and -f and with coefficient a/2:

If we take a function consisting of the sum of a set of sinusoids with different frequencies, then according to the property of linearity, the Fourier transform of this function will consist of a corresponding set of delta functions. This allows us to give a naive but visual interpretation of the spectrum according to the principle “if in the spectrum of a function frequency f corresponds to amplitude a, then the original function can be represented as a sum of sinusoids, one of which will be a sinusoid with frequency f and amplitude 2a.” Strictly speaking, this interpretation is incorrect, since the delta function and the point on the graph are completely different things, but as we will see later, for discrete Fourier transforms it will not be so far from the truth.
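This is easy to check numerically. In the numpy sketch below (the signal parameters are my own choice), the DFT of a sum of two sinusoids shows exactly two peaks, at the sinusoid frequencies and with their amplitudes:

```python
import numpy as np

fs = 1000                  # sampling frequency, Hz (assumed for the demo)
t = np.arange(fs) / fs     # 1 second of signal, so frequency resolution is 1 Hz
# Sum of two sinusoids: 50 Hz with amplitude 1.0, 120 Hz with amplitude 0.5
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(x) / len(x)    # one-sided DFT, normalized by N
amp = 2 * np.abs(spectrum)            # doubling accounts for the mirrored negative half
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# The two largest peaks sit exactly at the sinusoid frequencies:
peaks = np.sort(freqs[np.argsort(amp)[-2:]])
print(float(peaks[0]), float(peaks[1]))   # 50.0 120.0
```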

The second property of the Fourier transform is the independence of the amplitude spectrum from the time shift of the signal. If we move a function to the left or right along the x-axis, then only its phase spectrum will change.
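For the discrete transform this is easy to verify directly (a sketch; the signal and the shift amount are arbitrary, and the shift is circular so the identity holds exactly):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)      # arbitrary test signal
x_shifted = np.roll(x, 17)        # circular shift by 17 samples

X = np.fft.fft(x)
Xs = np.fft.fft(x_shifted)

# The amplitude spectra coincide; only the phase spectrum changes.
assert np.allclose(np.abs(X), np.abs(Xs))
print("amplitude spectra identical")
```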

The third property is that stretching (compressing) the original function along the time axis (x) proportionally compresses (stretches) its Fourier image along the frequency scale (ω). In particular, the spectrum of a signal of finite duration is always infinitely wide and, conversely, a spectrum of finite width always corresponds to a signal of unlimited duration.

The fourth and fifth properties are perhaps the most useful of all. They make it possible to reduce the convolution of functions to a pointwise multiplication of their Fourier images, and vice versa - the pointwise multiplication of functions to the convolution of their Fourier images. A little further I will show how convenient this is.
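Here is how this looks in numpy (a toy example of mine; note that the DFT gives *circular* convolution, so the sequences are zero-padded enough that the wrap-around never kicks in and the result matches ordinary convolution):

```python
import numpy as np

# Short sequences padded with zeros to length 8:
x = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0])

# Convolution via pointwise multiplication of Fourier images:
via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

# Ordinary convolution for comparison (truncated to the same length):
direct = np.convolve(x, y)[:len(x)]

assert np.allclose(via_fft, direct)
print("conv(x, y) == ifft(fft(x) * fft(y))")
```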

The sixth property speaks of the symmetry of Fourier images. In particular, it follows from this property that in the Fourier transform of a real-valued function (i.e., of any “real” signal) the amplitude spectrum is always an even function, and the phase spectrum (if brought to the range −π...π) is an odd one. It is for this reason that the negative part of the spectrum is almost never drawn on spectrum graphs: for real-valued signals it provides no new information (but, I repeat, it is not zero either).

Finally, the last, seventh property says that the Fourier transform preserves the “energy” of the signal. It is meaningful only for signals of finite energy, and it implies that the spectrum of such signals quickly approaches zero at infinity. It is precisely because of this property that spectrum graphs usually depict only the “main” part of the signal, which carries the lion's share of the energy: the rest of the graph simply tends to zero (but, again, is not zero).
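For the discrete transform this conservation of energy (Parseval's identity) is also easy to verify; note that numpy's unnormalized FFT convention introduces a 1/N factor on the frequency side:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)     # arbitrary finite-energy test signal

energy_time = np.sum(np.abs(x) ** 2)
# With numpy's unnormalized FFT, Parseval's identity picks up a 1/N factor:
energy_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)

assert np.allclose(energy_time, energy_freq)
print("energy preserved")
```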

Armed with these 7 properties, let's look at the mathematics of signal “digitization”, which allows us to convert a continuous signal into a sequence of numbers. To do this, we need to take a function known as the “Dirac comb”:

A Dirac comb is simply a periodic sequence of delta functions, with unit coefficients, starting at zero and proceeding with step T. For digitizing signals, T is chosen as small as possible, T ≪ 1. The Fourier image of this function is also a Dirac comb, only with a much larger step 1/T and a somewhat smaller coefficient (1/T). From the mathematical point of view, sampling a signal in time is simply the pointwise multiplication of the original signal by a Dirac comb. The value 1/T is called the sampling frequency:

Instead of a continuous function, after such multiplication, a sequence of delta pulses of a certain height is obtained. Moreover, according to property 5 of the Fourier transform, the spectrum of the resulting discrete signal is a convolution of the original spectrum with the corresponding Dirac comb. It is easy to understand that, based on the properties of convolution, the spectrum of the original signal is “copied” an infinite number of times along the frequency axis with a step of 1/T, and then summed.

Note that if the original spectrum had a finite width and we used a sufficiently high sampling frequency, the copies of the original spectrum will not overlap and therefore will not sum with each other. It is easy to see that the original spectrum can then be restored from such a “collapsed” spectrum: it is enough to simply take the spectral component in the region of zero, “cutting off” the extra copies going to infinity. The simplest way to do this is to multiply the spectrum by a rectangular function equal to T in the range −1/(2T)...1/(2T) and zero outside this range. This rectangular function corresponds in the time domain to the function sinc(t/T) (where sinc(x) = sin(πx)/(πx)), and by property 4 such a multiplication is equivalent to the convolution of the original sequence of delta functions with sinc(t/T).
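This reconstruction can be sketched in a few lines of numpy (the tone frequency, window size, and test points below are arbitrary choices of mine; truncating the infinite sum of sinc kernels makes the result approximate):

```python
import numpy as np

T = 0.01                          # sampling step: a 100 Hz sampling rate
n = np.arange(-50, 51)            # a finite window of samples (truncates the infinite sum)
samples = np.sin(2 * np.pi * 7 * n * T)   # a 7 Hz tone, well below the 50 Hz Nyquist limit

def reconstruct(t):
    """Whittaker-Shannon interpolation: a sum of sinc kernels centred on the samples."""
    return np.sum(samples * np.sinc((t - n * T) / T))

# Evaluate between the sample points, where no sample exists:
ts = [0.0042, 0.0137, -0.0251]
err = max(abs(reconstruct(ti) - np.sin(2 * np.pi * 7 * ti)) for ti in ts)
print("max reconstruction error:", float(err))
```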



That is, using the Fourier transform we obtain an easy way to reconstruct the original signal from a time-sampled one, provided that the sampling frequency is at least twice (because of the presence of negative frequencies in the spectrum) the maximum frequency present in the original signal. This result is widely known and is called the “Kotelnikov/Shannon-Nyquist theorem”. However, as is now easy to notice (with an understanding of the proof), this result, contrary to a widespread misconception, gives a sufficient, but not a necessary, condition for restoring the original signal. All we actually need is that the copies of the part of the spectrum that interests us do not overlap after sampling, and if the signal is sufficiently narrowband (has a small “width” of the non-zero part of the spectrum), this can often be achieved at a sampling frequency much lower than twice the maximum frequency of the signal. This technique is called “undersampling” (subsampling, bandpass sampling) and is quite widely used in processing all kinds of radio signals. For example, for an FM radio signal occupying the band from 88 to 108 MHz, an ADC running at only 43.5 MHz can be used for digitization instead of the 216 MHz assumed by Kotelnikov's theorem. In that case, however, a high-quality ADC and a good filter are needed.
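A small check (my own sketch, using the standard bandpass-sampling condition) that 43.5 MHz really works for the 88..108 MHz band: the whole band has to fit inside a single Nyquist zone [m·fs/2, (m+1)·fs/2], so that none of the spectral copies fold onto it.

```python
# Band edges of the FM broadcast band and the candidate sampling rate:
f_lo, f_hi = 88e6, 108e6
fs = 43.5e6

zone = int(f_lo // (fs / 2))           # Nyquist zone containing the lower band edge
fits = f_hi <= (zone + 1) * fs / 2     # the upper edge must stay in the same zone

print(zone, fits)                      # 4 True
```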

Let me note that the “masquerading” of high frequencies as lower ones (aliasing) is an inherent property of signal sampling that irreversibly “spoils” the result. Therefore, if the signal can in principle contain high frequencies (that is, almost always), an analog filter is placed in front of the ADC to “cut off” everything unnecessary directly in the original signal (after sampling it will be too late to do this). The characteristics of these filters, being analog devices, are not ideal, so some “damage” to the signal still occurs; in practice this means that the highest frequencies in the spectrum are, as a rule, unreliable. To mitigate this problem, the signal is often oversampled: the input analog filter is set to a lower bandwidth, and only the lower part of the theoretically available frequency range of the ADC is used.
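Aliasing itself is easy to demonstrate numerically (my example; the frequencies are chosen arbitrarily): at a sampling rate of 100 Hz, a 130 Hz tone produces exactly the same samples as a 30 Hz tone, so after sampling the two are indistinguishable.

```python
import numpy as np

fs = 100                                  # sampling rate, Hz
n = np.arange(200)                        # sample indices
# A 130 Hz tone sampled at 100 Hz is indistinguishable from a 30 Hz tone:
high = np.sin(2 * np.pi * 130 * n / fs)
low = np.sin(2 * np.pi * 30 * n / fs)

assert np.allclose(high, low)
print("130 Hz aliases to 30 Hz at fs = 100 Hz")
```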

Another common misconception, by the way, is drawing the signal at the DAC output as “steps”. “Steps” correspond to the convolution of the sampled signal sequence with a rectangular function of width T and height 1:

Under this transformation the signal spectrum is multiplied by the Fourier image of this rectangular function, which is again a sinc, “stretched” the more, the narrower the corresponding rectangle. The spectrum of the sampled signal with such a “DAC” is thus multiplied pointwise by this sinc: the unnecessary high frequencies with the “extra copies” of the spectrum are not cut off completely, while the upper part of the “useful” spectrum is, on the contrary, attenuated.
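The attenuation of the useful band by these “steps” (a zero-order hold) is easy to estimate. A sketch, with the frequency axis normalized to the sampling rate, showing the droop at DC, at half-Nyquist, and at the Nyquist frequency:

```python
import numpy as np

# A zero-order hold ("steps" of width T) multiplies the spectrum by
# sinc(f*T), where sinc(x) = sin(pi*x)/(pi*x) is numpy's convention.
T = 1.0                            # normalized sampling step, so f is in units of fs
f = np.array([0.0, 0.25, 0.5])     # DC, half-Nyquist, Nyquist

droop_db = 20 * np.log10(np.abs(np.sinc(f * T)))
print(np.round(droop_db, 2))       # attenuation grows towards the band edge
```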

In practice, of course, no one does this. There are many different approaches to constructing a DAC, but even in the weighting-type DACs closest to the scheme described, the rectangular pulses are, on the contrary, chosen to be as short as possible (approximating a real sequence of delta functions) in order to avoid excessive suppression of the useful part of the spectrum. The “extra” frequencies in the resulting broadband signal are almost always removed by passing the signal through an analog low-pass filter, so that there are no “digital steps” either “inside” the converter or, especially, at its output.

However, let's go back to the Fourier transform. The Fourier transform described above, applied to a pre-sampled signal sequence, is called the discrete-time Fourier transform (DTFT). The spectrum obtained by such a transformation is always 1/T-periodic, so the DTFT spectrum is completely determined by its values on any segment of length 1/T, for example [−1/(2T), 1/(2T)].

s(t) h(t) ↔ ∫ s(t) h(t) exp(-jωt) dt = ∫ s(t) [(1/2π) ∫ H(ω') exp(jω't) dω'] exp(-jωt) dt =

= (1/2π) ∫∫ s(t) H(ω') exp(-j(ω-ω')t) dω' dt =

= (1/2π) ∫ H(ω') dω' ∫ s(t) exp(-j(ω-ω')t) dt =

= (1/2π) ∫ H(ω') S(ω-ω') dω' = (1/2π) H(ω) * S(ω). (4.29)

Thus, a product of functions in the coordinate (time) domain maps in the frequency representation to the convolution of the Fourier images of these functions, with a normalizing factor (1/2π) that accounts for the asymmetry of the direct and inverse Fourier transforms of the functions s(t) and h(t) when angular frequencies are used.

9. Derivative of a convolution of two functions s(t) = x(t) * y(t): s'(t) = d[x(t) * y(t)]/dt.

Using expressions (4.26) and (4.28), we obtain:

s'(t) ↔ jω X(ω) Y(ω) = (jω X(ω)) Y(ω) = X(ω) (jω Y(ω)),

whence:

s'(t) = x'(t) * y(t) = x(t) * y'(t).

This expression allows you to calculate the derivative of a signal while simultaneously smoothing it with a weighting function that is the derivative of a smoothing function (for example, a Gaussian).
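A sketch of this trick in numpy (the test signal, the kernel width σ, and the error threshold are my own choices): a sine is differentiated by convolving it with the derivative of a Gaussian kernel, which smooths and differentiates in one pass.

```python
import numpy as np

# Smoothed differentiation: x'(t) * g(t) = x(t) * g'(t), so convolving the
# signal with the derivative of a Gaussian yields a smoothed derivative.
dt = 0.001
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * t)          # analytic derivative: 2*pi*cos(2*pi*t)

sigma = 0.01                       # width of the smoothing Gaussian
tau = np.arange(-5 * sigma, 5 * sigma + dt, dt)
gauss = np.exp(-tau ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))
dgauss = -tau / sigma ** 2 * gauss           # derivative of the Gaussian kernel

deriv = np.convolve(x, dgauss, mode="same") * dt   # dt approximates the integral

# Away from the edges the result matches the analytic derivative:
core = slice(100, -100)
err = np.max(np.abs(deriv[core] - 2 * np.pi * np.cos(2 * np.pi * t[core])))
print("max abs error:", float(err))
```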

10. Power spectra. The time function of signal power in general form is determined by the expression:

w(t) = s(t) s*(t) = |s(t)|².

The spectral power density, accordingly, is equal to the Fourier transform of the product s(t) s * (t), which will be displayed in the spectral representation by convolution of the Fourier images of these functions:

W(f) = S(f) * S*(f) = ∫ S(f) S*(f-v) dv. (4.30)

But for every current value of frequency f, the integral on the right-hand side of this expression equals the product S(f)·S*(f), since for all shift values v ≠ 0 the harmonics S(f) and S*(f-v) are orthogonal and the values of their products are zero. Hence:

W(f) = S(f) * S*(f) = |S(f)|². (4.31)

The power spectrum is a real, non-negative even function, which is often called the energy spectrum. The power spectrum, as the square of the modulus of the signal spectrum, does not contain phase information about the frequency components, and, therefore, reconstruction of the signal from the power spectrum is impossible. This also means that signals with different phase characteristics can have the same power spectra. In particular, the signal shift does not affect its power spectrum.

For the signal interaction power functions in the frequency domain, we have, accordingly, the signal interaction power frequency spectra:

W_xy(f) = X(f) Y*(f),

W_yx(f) = Y(f) X*(f),

W_xy(f) = W*_yx(f).

Signal interaction power functions are complex, even if both functions x(t) and y(t) are real, with Re being an even function and Im being an odd one. Hence, the total energy of signal interaction when integrating the interaction power functions is determined only by the real part of the spectrum:

∫ x(t) y(t) dt = ∫ Re[X(f) Y*(f)] df.

From Parseval's equality it follows that the scalar product and the norm of signals are invariant under the Fourier transform:

⟨x(t), y(t)⟩ = ⟨X(f), Y(f)⟩,   ||x(t)||² = ||X(f)||².

We should not forget that when spectra are represented in angular frequencies (in ω), the right-hand sides of the above equalities must contain the factor 1/2π.


