Background factor in experimental psychology. Experimental Psychology: Textbook

This design is used extremely rarely. Most experimental psychology textbooks do not even mention it, and Campbell claims that it has never actually been implemented.

Much more often than the “extravagant” designs described above, researchers use quasi-experimental designs known collectively as “discrete time series” designs. These designs can be classified on two grounds: whether the study involves 1) one group or several, and 2) a single treatment or a series of treatments. It should be noted that designs in which a series of homogeneous or heterogeneous treatments is applied, with testing after each treatment, have traditionally been called “formative experiments” in Soviet and Russian psychology. At their core they are, of course, quasi-experiments, with all the violations of external and internal validity inherent in such studies.

When using such designs, we must be aware from the outset that they lack controls for external validity. It is impossible to control the interaction of pretesting with the experimental treatment, to eliminate systematic confounding (the interaction of group composition and experimental treatment), to control the subjects’ reaction to the experiment, or to determine the effect of the interaction between different experimental treatments.

Quasi-experimental designs based on a single-group time series design are similar in structure to single-subject experimental designs.

The discrete time series design is most often used in developmental, educational, social, and clinical psychology. Its essence is as follows: the initial level of the dependent variable is first determined for a group of subjects using a series of sequential measurements; the researcher then applies the treatment to the subjects of the experimental group by varying the independent variable, and carries out a series of similar measurements. The levels, or trends, of the dependent variable before and after the intervention are compared. The outline of the design looks like this:

O1 O2 O3 X O4 O5 O6

The main disadvantage of a discrete time series design is that it does not allow one to separate the effect of the independent variable from the effect of background events that occur during the course of the study. To eliminate the “history” effect, it is recommended to use experimental isolation of subjects.

A modification of this design is another time series quasi-experiment, in which measurements preceded by a treatment alternate with measurements not preceded by one:

X O1 – O2 X O3 – O4 X O5

Alternation can be regular or random. This option is suitable only if the effect of the treatment is reversible. When processing the data obtained in the experiment, the series is divided into two sequences, and the results of measurements taken after a treatment are compared with the results of measurements taken without one. The data are compared using Student’s t-test with n – 2 degrees of freedom, where n is the total number of situations of the two types.
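The comparison can be sketched in code. A minimal illustration with invented measurements, computing the pooled two-sample Student’s t, whose degrees of freedom equal the total number of situations minus 2:

```python
import math

# Hypothetical illustration: outcome scores from the measurements that
# followed a treatment (X) and from the alternating no-treatment intervals.
exposure = [14.0, 15.2, 13.8, 15.5, 14.6]
no_exposure = [12.1, 12.8, 11.9, 13.0, 12.4]

def mean(xs):
    return sum(xs) / len(xs)

def pooled_t(a, b):
    """Two-sample Student's t with (len(a) + len(b) - 2) degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)          # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

t, df = pooled_t(exposure, no_exposure)
print(f"t = {t:.2f}, df = {df}")  # compare with the critical value for the chosen alpha
```

The resulting t is then compared with the tabulated critical value for the chosen significance level.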

Time series designs are often implemented in practice (as already noted, in Soviet educational psychology the formative experiment was considered almost the only option for evidence-based research). When they are implemented, the well-known “Hawthorne effect” is often observed. It was first described by Roethlisberger and Dickson in 1939, based on research conducted at the Western Electric Hawthorne Works near Chicago. The researchers assumed that changing the system of labor organization would increase productivity. Instead, as interviews with the workers revealed, participation in the experiment itself had increased their motivation: the subjects understood that the researchers took a personal interest in them, and began to work more productively. To control for this effect (which in time-series quasi-experiments is no different in nature from the placebo effect), a control group is used.

The time series design for two non-equivalent groups, one of which receives no treatment, looks like this:

O1 O2 O3 X O4 O5 X O6 O7 O8 O9 O10
O'1 O'2 O'3 O'4 O'5 O'6 O'7 O'8 O'9 O'10

This quasi-experiment makes it possible to control the effect of background events (the “history” effect). It is the design usually recommended to researchers conducting experiments with natural groups in kindergartens, schools, clinics, or workplaces; it can be called a formative experiment design with a control sample. This design is very difficult to implement, but if the groups can be randomized, it turns into the design of a “true formative experiment”.

A combination of this design and the previous one is possible, in which series with and without exposure alternate on the same sample.

5.2.3 Ex post facto designs

In conclusion, let us consider one more specific method often used in psychology. It goes by several names: the “referent” experiment, the ex post facto experiment, etc. It is frequently used in sociology and pedagogy, as well as in neuropsychology and clinical psychology, and was widely applied in sociological research in the 1930s and 1940s, when the sociologist F. S. Chapin introduced the name of the method and developed schemes for analyzing the data. In sociology and pedagogy, the strategy of its application is as follows. The experimenter does not influence the subjects himself; the “treatment” (a positive value of the independent variable) is some real event in their lives. A group of “subjects” who were exposed to the event and a group who were not are selected. Selection is based on data about the characteristics of the “subjects” before the exposure; this information may include personal recollections and autobiographies, archival materials, personnel records, medical records, and so on. The dependent variable is then measured in representatives of the “experimental” and control groups, the data obtained are compared, and a conclusion is drawn about the influence of the “natural” treatment on the subjects’ subsequent behavior. Thus the ex post facto design imitates a two-group experimental design with equalization of the groups (preferably by randomization) and post-exposure testing.

Group equivalence is achieved either by randomization or by pairwise matching, in which similar individuals are assigned to different groups. Randomization gives more reliable results, but it is applicable only when the sample from which the control and experimental groups are formed is sufficiently large.

This design is implemented in many modern studies. A typical example is the study of post-traumatic stress, which occurs in some people who find themselves in situations beyond ordinary life experience that threaten their health or life. Post-traumatic stress occurs in many (but not all) combat veterans, victims of violence, and witnesses and victims of natural and man-made disasters. The causes of post-traumatic stress are studied according to the following scheme: a sample of people who have experienced combat, a disaster, etc. is selected and tested for the presence of post-traumatic syndrome, and the results are compared with those of a control sample. The best strategy for forming the experimental and control samples is preliminary selection of “subjects” based on personal records, followed by randomization of the groups. In reality, however, diagnosis can often be carried out only on those who have experienced the traumatic factor and who themselves ask to be examined by psychologists or doctors. There is therefore a risk that the volunteer sample will differ substantially from the whole population of trauma survivors, above all in a higher incidence of post-traumatic stress syndrome, so that the effect of the traumatic factor on the population will be exaggerated. Nevertheless, the ex post facto experiment is the only possible way to conduct such research (the laboratory of the psychology of post-traumatic states at the Institute of Psychology of the Russian Academy of Sciences, headed by N. V. Tarabrina, works on these problems).

The ex post facto method is often used in neuropsychology: brain injuries and lesions of particular structures provide a unique opportunity to identify the localization of mental functions. Wartime injuries to the cerebral cortex (primarily in World War II) provided, however blasphemous it may sound, a wealth of material for neuropsychologists and neurophysiologists, including Russian ones (the work of Luria and his school).

5.3. Correlation study

The reader should refer to Chapter 6, which sets out the theory of psychological measurement in detail. A detailed description of the features of psychological measurement and testing is needed not only in its own right, but also in order to clarify the features of the most common scheme of modern psychological empirical research: the correlational study.

The theory of correlation research, based on ideas about measures of correlation, was developed by K. Pearson and is described in detail in textbooks on mathematical statistics. Only the methodological aspects of correlational psychological research are considered here.

The strategy for conducting a correlational study is similar to that of a quasi-experiment; the only difference is that there is no controlled treatment of the object. The design of a correlational study is simple. The researcher hypothesizes that a statistical connection exists between several mental properties of an individual, or between certain external variables and mental states. Assumptions about causal dependence are not discussed in this case.

A correlational study is one carried out to confirm or refute a hypothesis about a statistical relationship between several (two or more) variables. In psychology, mental properties, processes, states, etc. can act as variables.

“Correlation” literally means “co-relation.” If a change in one variable is accompanied by a change in another, we can speak of a correlation between these variables. The presence of a correlation between two variables says nothing about the cause-and-effect relations between them, but it does make it possible to put forward such a hypothesis; the absence of a correlation allows the hypothesis of a causal relationship between the variables to be rejected. There are several interpretations of the presence of a correlation between two measurements:

1. Direct correlation. The level of one variable directly corresponds to the level of another. An example is Hick's law: the speed of information processing is proportional to the logarithm of the number of alternatives. Another example: the correlation of high personal plasticity and the tendency to change social attitudes.

2. Correlation due to a third variable. Two variables (a, b) are related to each other through a third (c) that was not measured in the study. By the transitivity rule, if there is R(a, b) and R(b, c), then R(a, c). An example of such a correlation is the connection, established by US psychologists, between level of intelligence and level of income. If such a study were carried out in today’s Russia, the results would be different; obviously, the reason lies in the structure of society. Another example: the speed of image recognition under fast (tachistoscopic) presentation and subjects’ vocabulary are positively correlated; the latent variable behind this correlation is general intelligence.

3. Chance correlation, not due to any variable.

4. Correlation due to sample heterogeneity. Imagine that our sample consists of two homogeneous groups. For example, we want to find out whether gender is associated with level of extraversion. We assume that “measuring” gender presents no difficulty, and we measure extraversion with the Eysenck questionnaire ETI-1. Our two groups are male mathematicians and female journalists. It would not be surprising if we obtained a linear relationship between gender and extraversion-introversion: most of the men turn out to be introverts, most of the women extraverts.

Correlations vary in type. If an increase in the level of one variable is accompanied by an increase in the level of another, we speak of a positive correlation: the higher a person’s trait anxiety, the greater the risk of developing a stomach ulcer; an increase in the loudness of a sound is accompanied by a sensation of an increase in its pitch. If an increase in the level of one variable is accompanied by a decrease in the level of another, we are dealing with a negative correlation: according to Zajonc, the number of children in a family correlates negatively with their level of intelligence; the more timid an individual, the smaller his chances of occupying a dominant position in the group.

A correlation is called zero if there is no connection between the variables.

There are practically no examples of strictly linear relationships (positive or negative) in psychology; most relationships are nonlinear. A classic example of a nonlinear relationship is the Yerkes-Dodson law: an increase in motivation at first raises the effectiveness of learning, after which productivity declines (the “overmotivation” effect). Another example is the relationship between level of achievement motivation and the choice of tasks of varying difficulty: individuals motivated by hope of success prefer tasks in the middle range of difficulty, so the frequency of choices on the difficulty scale is described by a bell-shaped curve.

Pearson developed the mathematical theory of linear correlation. Its foundations and applications are presented in the relevant textbooks and reference books on mathematical statistics. Recall that the Pearson linear correlation coefficient r varies from -1 to +1. It is calculated by normalizing the covariance of the variables by the product of their standard deviations.

The significance of a correlation coefficient depends on the chosen significance level α and on the sample size. The larger the absolute value of the correlation coefficient, the closer the relationship between the variables is to a linear functional dependence.

5.3.1 Design of a correlation study

A correlational study can be regarded as a quasi-experimental design without any treatment applied to the object; more strictly, the groups under study must be in equivalent, unchanging conditions. In a correlational study all measured variables are dependent variables. The factor determining their relationship may be one of these variables or a latent, unmeasured variable.

A correlational study breaks down into a series of independent measurements on a group of subjects P. Correlational studies may be simple or comparative: in the first case the group of subjects is homogeneous; in the second, we have several randomized groups that differ on one or more criteria. In general, the design of such a study is described by a matrix of the form P × O (subjects × measurements), and its result is a correlation matrix. The data can be processed by comparing either the rows or the columns of the original matrix. By correlating the rows with each other, we compare the subjects with one another; the correlations are interpreted as coefficients of similarity or difference between people. Naturally, such P-correlations can be computed only if the data are reduced to a common scale, in particular by means of the Z-transformation:

z_ik = (x_ik - M_k) / s_k,

where M_k and s_k are the mean and standard deviation of the k-th measurement.
By correlating the columns with each other, we test hypotheses about the statistical relationship between the measured variables. In this case the scale dimension of the variables does not matter.


Such a study is called structural, since in the end we obtain a correlation matrix of measured variables, which reveals the structure of connections between them.

In research practice, the task often arises of identifying temporal correlations of parameters or detecting changes in the structure of parameter correlations over time. An example of such studies are longitudinal studies.

A longitudinal research design is a series of separate measurements of one or more variables at specified intervals. A longitudinal study is intermediate between a quasi-experiment and a correlational study, since the researcher interprets time as an independent variable that determines the level of the dependent variables (for example, personality traits).

The complete design of a correlational study is a parallelepiped P × O × T, whose faces are labeled “subjects”, “operations (measurements)”, and “time stages”.

The results of the study can be analyzed in different ways. In addition to computing P- and O-correlations, it becomes possible to compare the P × O matrices obtained at different times by calculating a multivariate correlation: the relationship of two variables with a third. The same applies to the P × T and T × O matrices.

More often, however, researchers limit themselves to another type of processing: testing hypotheses about changes in variables over time by analyzing the P × T matrices for individual measurements.

Let's consider the main types of correlation research.

1. Comparison of two groups. This design can be classified as correlational only conditionally. It is used to establish similarity or difference between two natural or randomized groups in the expression of some psychological property or state. Suppose you want to find out whether men and women differ in level of extraversion. You must form two representative samples, equalized on the other parameters that affect extraversion-introversion, and take measurements with the EPQ test. The mean results of the two groups are compared using Student’s t-test; if necessary, the variances of the extraversion scores are compared using the F-test.
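The variance comparison mentioned here can be sketched as follows (the scores are invented): the F statistic is simply the ratio of the larger sample variance to the smaller one, with (n1 - 1, n2 - 1) degrees of freedom; the t-test on the means is carried out in the same way as in the time series example earlier.

```python
import statistics

# Hypothetical EPQ extraversion scores for two groups.
men = [11, 14, 9, 12, 13, 10, 12, 11]
women = [15, 13, 16, 14, 17, 12, 15, 16]

# F statistic for comparing variances: larger sample variance over smaller.
var_m = statistics.variance(men)
var_w = statistics.variance(women)
F = max(var_m, var_w) / min(var_m, var_w)

print(f"F = {F:.2f}")  # compare against the critical F value for the chosen alpha
```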

The simplest comparison of two groups contains sources of a number of artifacts characteristic of correlational research. First, the problem of group composition arises: the groups must be clearly separated according to the chosen criterion. Second, the real measurements occur not simultaneously but at different times:

R' O1 –
R' – O2

Thirdly, it is good if testing within the group is carried out simultaneously. If individual subjects are tested at different times, the result may be affected by the influence of the time factor on the value of the variable.

A subject cannot change gender in the course of a study without special effort (surgery included), but one can move from one educational group to another, as well as from class to class.

If a researcher sets out to compare two educational groups in terms of academic performance, he must take care to ensure that they do not “mix” during the course of the study.

The effect of non-simultaneous measurement in two groups (if it is assumed that this factor is significant) could be “removed” by introducing two control groups, but they will also have to be tested at a different time. It is more convenient to divide the initial groups in half and test (if possible) according to the following plan:

R' O1 –
R' – O2
__________________
R' O3 –
R' – O4

Processing of the results to identify the sequence effect is carried out using two-way 2 × 2 analysis of variance. Natural (non-randomized) groups are compared according to the same design.

2. Univariate study of one group under different conditions. The design of this study is similar to the previous one, but in essence it is close to an experiment, since the conditions in which the group finds itself differ. In a correlational study we do not control the level of the independent variable, but only record the change in individuals’ behavior under the new conditions. An example is the change in children’s anxiety level during the transition from kindergarten to the first grade of school: the group is the same, but the conditions differ.

The main artifacts of this design are the accumulation of sequence and testing effects. In addition, the time factor (natural development effect) may have a distorting effect on the results.

The outline of this design looks very simple: A O1 B O2, where A and B are the different conditions. The subjects may be randomly selected from the general population or may constitute a natural group.

Data processing comes down to assessing the similarity of the test results under conditions A and B. To control for the sequence effect, counterbalancing can be used, which turns this into a correlational design for two groups:

A O1 B O2
B O3 A O4

In this case A and B can be regarded as treatments, and the design as a quasi-experiment.

3. Correlational study of pairwise equivalent groups. This design is used in twin studies based on intrapair correlations. Dizygotic or monozygotic twins are divided into two groups, each containing one twin from each pair. The mental parameters of interest to the researcher are measured in the twins of both groups, and then the correlations between the parameters (O-correlations) or between the twins (P-correlations) are computed. There are many more complex designs for psychogenetic twin studies.

4. A multivariate correlational study is carried out to test hypotheses about the statistical relationships among several variables characterizing behavior. It is implemented as follows: a group is selected that represents either the general population or a population of interest; tests checked for reliability and internal validity are chosen; and the group is then tested according to a specific scheme:

R A(O1) B(O2) C(O3) D(O4) ... N(On),

where A, B, C, ..., N are tests and Oi is the corresponding testing operation.

The research data are presented in the form of a raw data matrix of size T × P, where T is the number of subjects and P is the number of tests. The raw data matrix is processed to compute linear correlation coefficients, giving a matrix of the form P × P, whose cells contain the correlation coefficients; along its diagonal are ones (the correlation of each test with itself), and the matrix is symmetric about this diagonal. The correlations are assessed for statistical significance as follows: r is first converted into Z-scores, and Student’s t-test is then applied to compare the values of r. The significance of a correlation is assessed by comparing the computed value with the tabulated one: the hypothesis that the correlation differs significantly from a random one is accepted at the given significance level (α = 0.05 or α = 0.001). In some cases it becomes necessary to compute multiple correlations, partial correlations, or correlation ratios, or to reduce dimensionality, i.e., the number of parameters.

To reduce the number of measured parameters, various methods of latent variable analysis are used; many publications are devoted to their application in psychological research. The main source of artifacts in multivariate psychological testing is real physical time: when analyzing correlational data we abstract from the non-simultaneity of the measurements and assume that the result of each measurement does not depend on the previous one, i.e., that there is no transfer effect.

We list the main artifacts that arise during the application of this plan:

1. Sequence effect: prior performance of one test can affect the result of another (symmetric or asymmetric transfer).

2. Learning effect: over a series of different tests, the participant’s test-taking competence may increase.

3. Background events and “natural” development lead to uncontrolled dynamics in the subject’s state during the study.

4. Interaction of the testing procedure with group composition, which shows up when a heterogeneous group is studied: introverts take exams worse than extraverts, and “anxious” subjects perform worse on speeded intelligence tests. To control for sequence and transfer effects, the same technique should be used as in designing experiments, namely counterbalancing, except that the order of the tests, rather than of the treatments, is varied.

Table 5.14

For 3 tests, the complete correlation study design with counterbalancing is as follows:

1st group: A B C

2nd group: C A B

3rd group: B C A

where A, B, and C are different tests. However, I do not know of a single domestic correlational study in which testing and transfer effects were controlled.
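The rotation above is a Latin square: each test occupies each serial position exactly once across the groups. One way to generate such orders for an arbitrary number of tests is by cyclic shifts, as in this small sketch (the exact assignment of orders to groups may differ from the table above, but the counterbalancing property is the same):

```python
# Generate a cyclic Latin square: group i receives the tests rotated
# left by i positions, so each test occupies each serial position
# exactly once across the groups.
def counterbalanced_orders(tests):
    n = len(tests)
    return [tests[i:] + tests[:i] for i in range(n)]

orders = counterbalanced_orders(["A", "B", "C"])
for i, order in enumerate(orders, 1):
    print(f"group {i}: {' '.join(order)}")
```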

Let me give one example. We needed to find out how the type of task influences success on subsequent tasks, assuming that the order in which the tests are presented is not a matter of indifference to the subjects. Tasks for creativity (from the Torrance test) and for general intelligence (from the Eysenck test) were selected and given to the subjects in random order. It turned out that if the creativity task was performed first, the speed and accuracy of solving the intelligence task decreased; no reverse effect was observed. Without going into explanations of this phenomenon (it is a complex problem), we note that here we encounter the classic effect of asymmetric transfer.

5. Structural correlational study. This scheme differs from the previous ones in that the researcher is interested not in the presence or absence of significant correlations as such, but in differences in the level of significant correlations between the same indicators measured in representatives of different groups.

Let us illustrate this case with an example. Suppose we need to test the hypothesis that the gender of a parent and the gender of a child influence the similarity or difference of their personality traits, for example the level of neuroticism according to Eysenck. To do this we must study real groups, namely families. The correlation coefficients between the neuroticism levels of parents and children are then computed. There are four main correlation coefficients: 1) mother-daughter; 2) mother-son; 3) father-daughter; 4) father-son; and two additional ones: 5) son-daughter; 6) mother-father. If we are interested only in comparing the similarities and differences of the first four correlations, and not in studying assortativity, we construct a 2 × 2 table with four cells (Table 5.14).

The correlations are subjected to the Z-transformation and compared using Student’s t-test.
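In practice such a comparison of two independent correlations is often carried out through Fisher’s Z-transformation, z = arctanh(r): the difference of the transformed coefficients, divided by its standard error, is approximately standard normal. A sketch with invented values:

```python
import math

def fisher_z(r):
    """Fisher's Z-transformation of a correlation coefficient."""
    return math.atanh(r)

def compare_correlations(r1, n1, r2, n2):
    """Test the difference of two independent correlations.

    Returns the statistic (Z1 - Z2) / SE, where
    SE = sqrt(1/(n1 - 3) + 1/(n2 - 3)).
    """
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Invented example: mother-daughter r = 0.60 in 50 pairs
# versus father-son r = 0.30 in 50 pairs.
z = compare_correlations(0.60, 50, 0.30, 50)
print(f"z = {z:.2f}")  # compare with 1.96 for alpha = 0.05
```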

Here is a simple example of a structural correlation study. In research practice, more complex versions of structural correlation studies are encountered. Most often they are carried out in the psychology of individuality (B. G. Ananyev and his school), the psychology of work and training (V. D. Shadrikov), the psychophysiology of individual differences (B. M. Teplov, V. D. Nebylitsyn, V. M. Rusalov, etc.), psychosemantics (V.F. Petrenko, A.G. Shmelev, etc.).

6. Longitudinal correlational study. A longitudinal study is a variant of the quasi-experimental design: the psychologist conducting it treats time as the influencing variable. It is analogous to the design for testing one group under different conditions, except that the conditions are considered constant. The result of any temporal study (including a longitudinal one) is the construction of a time trend for the measured variables, which can be described analytically by some functional dependence.

A longitudinal correlational study uses a time series design with testing of the group at specified intervals. In addition to learning and sequence effects, the dropout effect must be taken into account: not all subjects who initially took part in the study can be examined again after a certain time. There may also be an interaction between dropout and the testing effect (for example, refusal to participate in a follow-up survey), and so on.

Structural longitudinal research differs from simple longitudinal research in that we are interested not so much in changes in the central tendency or dispersion of a variable as in changes in the relationships between variables. Such research is widespread in psychogenetics.

Factors that compromise the external validity, or representativeness, of an experiment include:

the reactive effect, or the effect of test interaction: a possible decrease or increase in the subjects’ sensitivity or susceptibility to the experimental treatment as a result of pretesting. The results of those who were pretested will not be representative of those who were not pretested, that is, of the population from which the subjects were drawn;

the effect of interaction between the selection factor and the experimental treatment;

the conditions of the experiment, which provoke a reaction of the subjects to the experiment and therefore do not allow the data obtained about the effect of the experimental variable to be generalized to people exposed to the same treatment under non-experimental conditions;

the mutual interference of experimental treatments, which often arises when the same subjects receive several treatments, since the effect of earlier treatments, as a rule, does not disappear. This applies especially to single-group experimental designs.

Let us consider two more designs as examples. A design with pretesting and posttesting on different randomized samples differs from a true experiment in that one group receives the pretest, while the posttest is given to an equivalent (randomized) group that received the treatment:

R O1
R X O2

This design is also called a “simulated design with initial and final testing.” Its main drawback is the impossibility of controlling the “history” factor: background events occurring alongside the treatment in the period between the first and second testing.

A more complex version is the design with control samples for pretesting and posttesting. It uses four randomized groups, but only two of them receive the treatment, and of these only one is tested after the treatment. The design looks like this:

If the randomization has been carried out successfully, that is, if the groups are truly equivalent, this design does not differ in quality from the designs of a “true experiment”. It has the best external validity, since it eliminates the influence of the main external variables that threaten it: the interaction of pretesting and treatment, the interaction of group composition and experimental treatment, and the subjects’ reaction to the experiment. Only the interaction of group composition with the factors of natural development and background cannot be excluded, since there is no way to compare the influence of preliminary and subsequent testing on the experimental and control groups. A peculiarity of the design is that each of the four groups is tested only once: either at the beginning or at the end of the study.

This design is used extremely rarely; Campbell even claims that it has never been implemented.

3.1.2 Discrete time series plans

Much more often than the above designs, quasi-experimental designs are used, which are generally called “discrete time series”. For the classification of these plans, two reasons can be distinguished: the study is carried out 1) with the participation of one group or several; 2) with one impact or a series. It should be noted that plans in which a series of homogeneous or heterogeneous influences are implemented with testing after each influence have traditionally been called “formative experiments” in Soviet and Russian psychological science. At their core, of course, they are quasi-experiments with all the inherent violations of external and internal validity in such studies.

When using such designs, we must be aware from the outset that they lack controls for external validity. It is impossible to control the interaction of pretesting and experimental treatment, to eliminate the effect of systematic mixing (the interaction of group composition and experimental treatment), to control the reaction of subjects to the experiment and to determine the effect of interaction between various experimental treatments.

Quasi-experimental designs based on a single-group time series design are similar in structure to single-subject experimental designs.

The discrete time series design is most often used in developmental, educational, social, and clinical psychology. Its essence is that the initial level of the dependent variable is first determined on a group of subjects using a series of sequential measurements. The researcher then applies the treatment to the subjects of the experimental group, varying the independent variable, and conducts a similar series of measurements. The levels, or trends, of the dependent variable before and after the intervention are compared. The outline of the design looks like this:

O1 O2 O3 X O4 O5 O6
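The before/after comparison this design calls for can be sketched in a few lines of code. This is a minimal illustration, not a prescribed analysis; all measurement values are hypothetical.

```python
# Minimal sketch of a single-group discrete time series analysis
# (O1 O2 O3 X O4 O5 O6): compare the level and the trend of the
# dependent variable before and after the intervention X.
# All measurement values below are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

def slope(ys):
    """Least-squares slope of ys against time points 0, 1, 2, ..."""
    xs = list(range(len(ys)))
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

pre  = [10.0, 11.0, 10.5]   # O1..O3, before the treatment
post = [14.0, 15.0, 16.0]   # O4..O6, after the treatment

level_shift  = mean(post) - mean(pre)     # change in level
trend_change = slope(post) - slope(pre)   # change in trend
print(level_shift, trend_change)
```

A clear jump in level or trend after X, relative to the pre-intervention series, is what the researcher then tries to attribute to the independent variable.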

The main disadvantage of a discrete time series design is that it does not allow one to separate the effect of the independent variable from the effect of background events that occur during the course of the study. To eliminate the “history” effect, it is recommended to use experimental isolation of subjects.

A modification of this design is another quasi-experiment in a time series format, in which exposure before measurement alternates with no exposure before measurement:

X O1 – O2 X O3 – O4 X O5

Alternation can be regular or random. This option is suitable only if the effect of the treatment is reversible. During processing, the series is divided into two sequences, and the results of the measurements taken after exposure are compared with the results of the measurements taken without exposure. To compare the data, Student’s t-test is used with n - 2 degrees of freedom (where n is the number of situations of the same type).
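The comparison of the two sequences can be sketched with a hand-computed pooled Student’s t statistic. This is a minimal illustration, not a full analysis; the measurement values are hypothetical.

```python
# Minimal sketch: split an alternating series into the measurements
# taken after exposure (X) and those taken without exposure (-), then
# compare them with a pooled two-sample Student's t statistic.
# All data are hypothetical.
import math

def t_statistic(a, b):
    """Pooled two-sample Student's t; df = len(a) + len(b) - 2."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((x - ma) ** 2 for x in a)   # sum of squared deviations, group a
    ssb = sum((x - mb) ** 2 for x in b)   # sum of squared deviations, group b
    pooled_var = (ssa + ssb) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se

with_exposure    = [12.0, 13.0, 12.5, 13.5]  # O's measured after X
without_exposure = [10.0, 10.5, 9.5, 10.0]   # O's measured without exposure

t = t_statistic(with_exposure, without_exposure)
df = len(with_exposure) + len(without_exposure) - 2
print(round(t, 3), df)
```

The resulting t value is then checked against the critical value for the chosen significance level at the given degrees of freedom.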

Time series designs are often implemented in practice.

The time series design for two non-equivalent groups, one of which receives no intervention, looks like this:

O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O'1 O'2 O'3 O'4 O'5 O'6 O'7 O'8 O'9 O'10

Such a quasi-experiment allows the researcher to control the effect of the background factor (the “history” effect). This is the design usually recommended to researchers conducting experiments with natural groups in kindergartens, schools, clinics, or workplaces. It can be called a formative experimental design with a control sample. This design is very difficult to implement, but if the groups can be randomized, it turns into a “true formative experiment” design.
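The control logic of the two-group design can be sketched numerically: the change in the experimental group minus the change in the non-equivalent control group estimates the treatment effect, with the shared background (“history”) component subtracted out. A minimal illustration; all measurement values are hypothetical.

```python
# Minimal sketch of the two-group time series comparison: the treatment
# effect is estimated as the change in the experimental group minus the
# change in the non-equivalent control group, which subtracts out the
# background ("history") component shared by both groups.
# All data are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

exp_pre,  exp_post  = [20.0, 21.0, 20.5], [26.0, 27.0, 26.5]  # experimental group, before/after X
ctrl_pre, ctrl_post = [19.0, 20.0, 19.5], [21.0, 22.0, 21.5]  # control group, same periods, no X

exp_change  = mean(exp_post)  - mean(exp_pre)    # treatment effect + background
ctrl_change = mean(ctrl_post) - mean(ctrl_pre)   # background only
effect = exp_change - ctrl_change                # estimated treatment effect
print(effect)
```

If a background event (an epidemic, a school reform, a season change) shifts both groups equally, it cancels out in the subtraction, which is exactly why the control series is added.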

A combination of this design and the previous one is possible, in which series with and without exposure alternate on the same sample.

3.2 Types

3.2.1 Quasi-experimental designs with special treatment arrangements

For many psychological experiments, the acceptable areas of generalization are obvious, and researchers’ willingness to transfer the results obtained to other situations, types of activity, and groups of people is justified. This allows experiments to be conducted with good external validity. At the same time, approximating natural or “field” conditions can itself limit the possible generalizations.

These are “field” experiments carried out in actually functioning educational groups. In them, the independent variable “teaching method” is specified within the complex realities of educational activity. But there may be no theoretical justification for the advantages of the new method. It is the mediating link of theory, that is, a theoretical understanding of the grounds of the established pattern, rather than a high assessment of external validity, that allows knowledge about the established effects of the independent variable to be transferred to other types of training and educational activity in other institutions of a similar type.

In educational research, a design with a non-equivalent control group (one of the quasi-experimental designs with reduced control over the arrangement of treatments) is common. If the experiment uses already established groups, the experimental and control conditions cannot be considered equal, since there may be differences between the groups that “overlay” the pattern being studied and lead to incorrect interpretations. D. Campbell gives the following example.

At the University of Annapolis (USA), the influence of teaching psychology on the personal development of students was studied. It was assumed that acquaintance with this course would have a positive impact on personal growth.

The experimental group consisted of all second-year students, who were taught a psychology course in accordance with the curriculum. After completing this course, the students were tested on their personality traits. The control group consisted of third-year students, whose life situation is more stable, since the most difficult adaptation processes occur precisely in the first two years of study at the university. Because of this, any superiority of the experimental group expected after the course could have a competing explanation in terms of the groups’ differing stages of adaptation.

The quasi-experimental design considered in this example included measuring the dependent variable in both groups not only after, but also before the period of experimental intervention. Both the final data of the two groups and the changes in test scores within each group could be compared. It turned out that, at the initial testing, the superiority of third-year students over second-year students, and the direction of changes in the indicators of the control and experimental groups, were of a different order than predicted by the competing hypothesis based on the leading role of the natural development factor.

The inclusion of a control group, albeit a non-equivalent one, makes it possible in a number of cases to reject the hypothesis about the role of the interaction between group composition and natural development factors. The validity of the conclusion about the effect of taking the psychology course was significantly higher than it would have been without a control group.

Most often, the design with a non-equivalent group closely approximates a true experiment, which is unattainable in higher-education research practice, where the experimental and control groups would have to be completely equivalent, provided there is no reason to suspect that selection into each of the existing “natural” groups was carried out in some special way. In particular, if one of the groups was formed on the principle of “volunteers”, it included people with a desire to undergo testing; here the conclusion about the role of the experimental treatment is threatened by the factor of “motivational inequality” of the groups.

3.2.2 Designs with non-equivalent groups

One of the ways quasi-experimental designs arise is the failure to fulfill randomization as the strategy for assigning subjects to groups. The intergroup design then resembles that of a true experiment, and the independent variable can be varied according to the standard schemes (comparison of experimental and control conditions).

The reasons for not fulfilling the randomization condition vary. Often it is the desire to experiment in real conditions, and therefore with already established groups, for example study groups or school classes in which an intra-group history has already developed. The main consequence of a psychologist working with already established groups is their non-equivalence and the confounding of group composition with background and development factors.

The next significant reduction in the equivalence of conditions arises from the motivation factor. Thus, the introduction of new teaching methods has shown a “desirability” effect of the experimental conditions: people “want to be exposed”, or, simply put, the majority want to learn by the new methods, assuming they are obviously better. This raises the problem of the “disguised experiment”: it would be better if no one knew about the research being carried out and the children were taught with their normal motivation.

Because the composition of the groups is non-equivalent, a psychologist can never attribute the obtained experimental effect solely to the change in the independent variable.

However, non-equivalent groups should not be confused with homogeneous ones. The latter usually imply the presence of an external criterion by which the experimental and control groups of subjects differ; this difference then acts as an analogue of the independent variable. In all other factors, or secondary variables, the groups are homogeneous. An example of such a design (also a correlational study) is given in the textbook by R. Gottsdanker (1982), where groups of children born first, second, third, fourth, and fifth in a family were selected. This design was needed to test the hypothesis that the order of a child’s birth in a family affects subsequent indicators of intelligence (IQ was measured as the dependent variable).

There are different strategies for selecting subjects into homogeneous groups. Let us give an example of the pairwise strategy: potential subjects are paired so that they are similar in everything except the factor being studied. The groups selected in this way are called homogeneous.

Often the strategy of matching, that is, selecting a control group for an already existing experimental group, is used.

A natural experiment is carried out only in natural, familiar working conditions for the subject, where his working day and work activities usually take place. This could be a desk in an office, a carriage compartment, a workshop, an institute auditorium, an office, a truck cabin, etc.
When using this method, the research subject may not be aware that some kind of research is currently being carried out. This is necessary for the “purity” of the experiment, because when a person does not know that he is being observed, he behaves naturally, relaxed and without embarrassment. It's like in a reality show: when you know that you are being filmed, you will never allow yourself to do what you could do without cameras (swearing, immoral behavior, etc.).
An example of a natural experiment would be an artificially created fire situation in a hospital, staged in order to observe and analyze the actions of the service personnel, i.e. the doctors, to correct their actions if necessary and point out mistakes, so that under real circumstances all hospital personnel know how to behave and are able to provide the necessary assistance. The advantage of this method is that all actions take place in a familiar working environment, and the results obtained can be used to solve practical problems. But this experimental method also has negative aspects: the presence of uncontrollable factors whose control is simply impossible, and the need to obtain the information as quickly as possible, since otherwise the production process will be disrupted.

Forms of the natural experiment
The natural experiment has many forms and different techniques. To collect primary information, the following are usually used. Introductory tasks: in its simplest form, the natural experiment is widely used in the form of introductory tasks. These can be set by the manager orally (“Something has happened; what will you do?”) or by introducing deviations into the employee’s work unnoticed by him. Mere observation of such a natural experiment provides valuable facts and allows the researcher to test one hypothesis or another.
Formative experiment: formative (training or educational) experiments are widely used in practical psychology; in them, the skills or qualities of an individual are studied in the process of their formation and development. Changes in operating conditions: a distinctive methodological technique is the purposeful change of the structure of professional activity. The idea is that, while a certain activity is being performed, individual analyzers are switched off according to a pre-planned scheme, the posture or “grip” of the control levers is changed, additional stimuli are introduced, the emotional background and the motives of the activity are changed, and so on. Recording the results of the activity under the various conditions allows the researcher to assess the role of particular factors in the structure of the activity being studied and the flexibility of the corresponding skills.
Modeling of the activity being studied: modeling is used in situations where studying the phenomenon of interest by simple observation, survey, test, or experiment is difficult or impossible because of its complexity or inaccessibility. In this case, an artificial model of the phenomenon is created, repeating its main parameters and expected properties, and this model is used to study the phenomenon in detail and draw conclusions about its nature.
In addition to the listed methods intended for collecting primary information, psychology widely uses various methods and techniques for processing these data, including their logical and mathematical analysis, to obtain secondary results, that is, facts and conclusions that follow from the interpretation of the processed primary information. For this purpose, various methods of mathematical statistics are used, without which it is often impossible to obtain reliable information about the phenomena being studied, as well as methods of qualitative analysis.

22. Formative experiment
A formative experiment involves a person or group of people participating in training organized by the experimenters and developing certain qualities and skills. If the result is formed, we do not need to guess what led to it: it was this technique that produced the result. Nor is there any need to guess the skill level of a particular person: he masters the skill to the extent that it was taught to him in the experiment, and if a more stable skill is wanted, its development is simply continued. Such an experiment usually involves two groups: an experimental group and a control group. Participants in the experimental group are offered a specific task which, in the experimenters’ opinion, will contribute to the formation of the given quality. The control group of subjects is not given this task. At the end of the experiment, the two groups are compared with each other to evaluate the results obtained. The formative psychological-pedagogical experiment as a method appeared thanks to the theory of activity (A.N. Leontiev, D.B. Elkonin, and others), which affirms the idea of the primacy of activity in relation to mental development. During a formative experiment, active actions are performed both by the subjects and by the experimenter. A high degree of intervention and of control over the main variables is required of the experimenter. This distinguishes the experiment from observation or assessment.

23. The relationship between the concepts of “ideal experiment”, “real experiment” and “full compliance experiment”.
An ideal experiment is an experiment designed in such a way that the experimenter changes only the independent variable, the dependent variable is controlled, and all other experimental conditions remain unchanged. An ideal experiment assumes the equivalence of all subjects, the invariance of their characteristics over time, and the absence of time itself. It can never be implemented in reality, since in life not only the parameters of interest to the researcher change, but a number of other conditions as well. The correspondence of a real experiment to the ideal one is expressed in the characteristic called internal validity. Internal validity indicates the reliability of the results that a real experiment provides compared to an ideal one. The more the changes in the dependent variable are influenced by conditions not controlled by the researcher, the lower the internal validity of the experiment and, consequently, the greater the likelihood that the facts discovered in the experiment are artifacts. High internal validity is the main sign of a well-conducted experiment. D. Campbell identifies the following factors that threaten the internal validity of an experiment: the background factor, the natural development factor, the testing factor, measurement error, statistical regression, non-random selection, and attrition. If they are not controlled, they lead to the appearance of the corresponding effects. The background (history) factor includes events that occur between the preliminary and final measurements and may cause changes in the dependent variable alongside the influence of the independent variable. The natural development factor is associated with the fact that changes in the level of the dependent variable may occur in connection with the natural development of the participants in the experiment (growing up, increasing fatigue, etc.). The testing factor is the influence of preliminary measurements on the results of subsequent ones.
The measurement error factor relates to imprecision in, or changes to, the procedure or method used to measure the experimental effect. The statistical regression factor appears if subjects with extreme scores on some assessment were selected to participate in the experiment. The non-random selection factor, accordingly, occurs when the sample was formed by selecting participants in a non-random manner. The attrition factor occurs when subjects drop out unevenly from the control and experimental groups. The experimenter must take into account and, if possible, limit the influence of factors that threaten the internal validity of the experiment. A full compliance experiment is an experimental study in which all conditions and their changes correspond to reality. The approximation of a real experiment to a full compliance experiment is expressed in external validity. The degree to which the experimental results can be transferred to reality depends on the level of external validity. External validity, as defined by R. Gottsdanker, affects the reliability of the conclusions that the results of a real experiment provide compared to a full compliance experiment. To achieve high external validity, the levels of the additional variables in the experiment must correspond to their levels in reality. An experiment that lacks external validity is considered invalid.
Factors that threaten external validity include the following: the reactive effect (a decrease or increase in the subjects’ susceptibility to the experimental treatment due to previous measurements); the interaction effect of selection and treatment (the experimental treatment is significant only for the participants of this particular experiment); the experimental conditions factor (the experimental effect may be observed only under these specially organized conditions); and the interference factor of treatments (manifested when a sequence of mutually exclusive treatments is presented to one group of subjects).
Researchers working in applied areas of psychology (clinical, pedagogical, organizational) are especially concerned about the external validity of experiments, since the results of an invalid study yield nothing when transferred to real conditions. An endless experiment involves an unlimited number of experiments and trials to obtain ever more accurate results. Increasing the number of trials in an experiment with one subject increases the reliability of the experimental results; in experiments with a group of subjects, reliability increases with the number of subjects.

24. The concept of validity. Construct and ecological validity.
Validity is one of the most important characteristics of psychodiagnostic methods and tests, and one of the main criteria of their quality. The concept is close to that of reliability, but not entirely identical. The problem of validity arises during the development and practical application of a test or technique, when it is necessary to establish a correspondence between the degree of expression of the personality property of interest and the method of measuring it. Validity indicates what a test or technique measures and how well it does so: the more valid they are, the better they reflect the quality (property) for which they were created. Quantitatively, validity can be expressed through correlations of the results obtained with a test or technique with other indicators, for example with success in performing the relevant activity. Validity can be justified in different ways, most often in combination. Additional concepts of conceptual, criterion, construct, and other types of validity are also used, each with its own ways of establishing its level. The requirement of validity is very important, and many complaints about tests or other psychodiagnostic techniques concern the doubtfulness of their validity. Different conceptions of intelligence require different compositions of tasks, so the issue of conceptual validity is important: the more the tasks correspond to the author’s conception of intelligence, the more confidently we can speak of the conceptual validity of the test. The correlation of a test with an empirical criterion indicates its possible validity with respect to that criterion. Determining the validity of a test always requires additional questions: validity for what? for what purpose? by what criterion? Thus the concept of validity refers not only to the test but also to the criterion for assessing its quality: the higher the correlation coefficient between test and criterion, the higher the validity.
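The quantitative expression of validity mentioned above, a correlation between test results and an external criterion, can be sketched with a Pearson coefficient. A minimal illustration; the test scores and performance ratings below are hypothetical.

```python
# Minimal sketch: criterion validity expressed as the Pearson correlation
# between test scores and an external criterion (e.g. success in the
# relevant activity). All scores below are hypothetical.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs) *
                    sum((y - my) ** 2 for y in ys))
    return num / den

test_scores = [55, 62, 70, 48, 66, 75]        # hypothetical test results
criterion   = [3.1, 3.4, 4.2, 2.8, 3.9, 4.5]  # hypothetical performance ratings

r = pearson_r(test_scores, criterion)
print(round(r, 3))
```

A coefficient close to 1 would suggest high criterion validity with respect to this particular criterion; a coefficient near 0 would suggest the test does not predict the criterion at all.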
The development of factor analysis has made it possible to create tests that are valid with respect to an identified factor. Only tests that have been checked for validity can be used in vocational guidance, professional selection, and scientific research. Construct validity (conceptual validity) is a special case of operational validity: the degree to which the method of interpreting experimental data is adequate to a theory, determined by the correct use of the terms of that theory. Construct validity, substantiated by L. Cronbach in 1955, is characterized by the ability of a test to measure a trait that has been theoretically justified (as a theoretical construct). When it is difficult to find an adequate pragmatic criterion, one can instead focus on hypotheses formulated on the basis of theoretical assumptions about the property being measured. Confirmation of these hypotheses indicates the theoretical validity of the technique. First, the construct that the test is intended to measure must be described as fully and meaningfully as possible. This is achieved by formulating hypotheses about it, prescribing what the construct should and should not correlate with. These hypotheses are then tested. This is the most effective method of validation for personality questionnaires, for which it is difficult to establish a single criterion of validity. Construct validity is the most comprehensive and complex type of validity: instead of one (primarily pragmatic) result, many results (most often properly psychological ones) must be taken into account. Construct validity concerns attempts to label aspects of an experiment; its dangers include mislabeling cause and effect using abstract terms, terms drawn from ordinary language, or formal theory. Ecological validity is the degree to which the experimental conditions correspond to the reality being studied.
For example, in Kurt Lewin’s famous experiment on leadership styles, the relationships in the groups of adolescents bore little resemblance to relationships in the state; hence ecological validity was violated.

25. Internal validity. Reasons for violation of internal validity.
Internal validity is the degree to which changes in the dependent variable can be attributed to the influence of the independent variable. Internal validity is higher the greater the likelihood that a change in the dependent variable is caused by the change in the independent variable (and not by something else). The concept can be considered interdisciplinary: it is widely used in experimental psychology as well as in other areas of science. Internal validity is the correspondence of a real study to an ideal one. In a study with internal validity, the researcher is confident that the results obtained by measuring the dependent variable are directly related to the independent variable and not to some other uncontrolled factor.
However, in fact, in science (especially in psychology) it is impossible to say with one hundred percent certainty that internal validity has been met. For example, it is impossible to study any mental process separately from the psyche as a whole. Therefore, in any psychological experiment, a scientist can only maximally (but not absolutely) remove or minimize various factors that threaten internal validity.
Factors threatening internal validity include:
Change over time (the dependence of the subjects and the environment on the time of day or season; changes in the person himself, such as aging, fatigue, and distraction during long studies; changes in the motivation of the subjects and of the experimenter; cf. natural development)
Sequence effect
Rosenthal (Pygmalion) effect
Hawthorne effect
Placebo effect
Audience effect
First impression effect
Barnum effect
Concomitant confounding
Sampling factors
Incorrect selection (non-equivalence of groups in composition, causing systematic error in the results)
Statistical regression
Experimental attrition (uneven dropout of subjects from compared groups, leading to non-equivalence of groups in composition)
Natural development (the general property of living beings to change; cf. ontogeny), etc.

26. External validity. Reasons for violation of external validity.
External validity is a type of validity that determines the extent to which the results of a particular study can be extended to the entire class of similar situations/phenomena/objects. This concept can be considered interdisciplinary: it is widely used in experimental psychology, as well as in other areas of science. External validity is the correspondence of actual research to the objective reality being studied. External validity determines the extent to which the results obtained in an experiment can correspond to the type of life situation that was studied, and the extent to which these results can be generalized to all similar life situations. For example, the criticism of experimental psychologists that they know a lot about sophomores and white rats, but very little about everything else, can be considered a criticism of external validity.
As with any other kind of validity, one can probably never say that external validity is absolutely satisfied in a study, only that it is violated to some degree. Absolute external validity would mean that the results of a study could be generalized to any population, under any conditions, and at any time; therefore scientists speak not of compliance or non-compliance with external validity, but of its degree.
Campbell names the main reasons for the violation of external validity:
1. The effect of testing: a decrease or increase in the subjects’ susceptibility to the experimental treatment under the influence of testing. For example, preliminary testing of students’ knowledge can increase their interest in the new educational material. Since the general population is not pretested, the results may not be representative of it.
2. Conditions of the study: they cause the subject’s reaction to the experiment, so its data cannot be transferred to individuals who did not take part in the experiment, that is, to the entire general population apart from the experimental sample.
3. Interaction between selection factors and the content of the experimental treatment: its consequences are artifacts (in experiments with volunteers or with subjects participating under duress).
4. Interference of experimental treatments: subjects have memory and the ability to learn, so if an experiment consists of several series, the first treatments do not pass without a trace and affect the appearance of effects from subsequent treatments.
Most of the reasons for the violation of external validity are associated with the characteristics of a psychological experiment conducted with human participants, which differentiate psychological research from an experiment carried out by specialists in other natural sciences.

27. The influence of the experimental situation on its results.
All psychologists recognize the importance of the influence of the experimental situation on its results. Thus, it was found that the experimental procedure has a greater impact on children than on adults. Explanations for this are found in the characteristics of the child’s psyche:
1. Children are more emotional when communicating with adults. For a child, an adult is always a psychologically significant figure: either useful, or dangerous, or likable and trustworthy, or unpleasant and to be avoided.
Consequently, children strive to please an unfamiliar adult or “hide” from contact with him. The relationship with the experimenter determines the attitude towards the experiment (and not vice versa).
2. The manifestation of personality traits in a child depends on the situation to a greater extent than in an adult. The situation is constructed in the course of communication: the child must communicate successfully with the experimenter and understand his questions and requirements. A child masters his native language by communicating with his immediate environment, mastering not the literary language but a dialect, a vernacular, a “slang”. An experimenter speaking literary scientific language will never be “emotionally one of his own” for the child, unless the child belongs to the same social stratum. A system of concepts and ways of communicating unfamiliar to the child (manner of speaking, facial expressions, pantomime, etc.) will be a powerful barrier to his involvement in the experiment.
3. The child has a more vivid imagination than the experimenter, and therefore can interpret the experimental situation differently, “fantastically,” than an adult. In particular, criticizing Piaget's experiments, some authors make the following arguments. The child may view the experiment as a game with “its own” laws. The experimenter pours water from one container to another and asks the child whether the amount of liquid has been preserved. The correct answer may seem banal and uninteresting to the child, and he will begin to play with the experimenter. He may imagine that he was invited to watch a trick with a magic glass or take part in a game where the laws of conservation of matter do not apply. But the child is unlikely to reveal the content of his fantasies. These arguments can only be speculations of Piaget's critics. After all, rational perception of an experimental situation is a symptom of a certain level of intelligence development. However, the problem remains unresolved, and experimenters are advised to pay attention to whether the child correctly understands the questions and requests addressed to him, and what he means by giving this or that answer.

28. Communication factors that can distort the results of the experiment
The founder of the study of the socio-psychological aspects of the psychological experiment was S. Rosenzweig. In 1933 he published an analytical review of this problem in which he identified the main communication factors that can distort the results of an experiment: 1) errors of “attitude toward the observed”, associated with the subject’s understanding of the decision criterion when choosing a reaction; 2) errors related to the subject’s motivation: the subject may be motivated by curiosity, pride, or vanity, and act not in accordance with the goals of the experimenter but in accordance with his own understanding of the goals and meaning of the experiment; 3) errors of personal influence, associated with the subject’s perception of the experimenter’s personality. Currently, these sources of artifacts are not classed as socio-psychological (with the exception of socio-psychological motivation).
The subject can participate in the experiment: either voluntarily or under duress. Participation in the experiment itself gives rise to a number of behavioral manifestations in the subjects, which are the causes of artifacts. Among the most famous are the “placebo effect”, “Hawthorne effect”, “audience effect”. The placebo effect was discovered by doctors: when subjects believe that the drug or the actions of the doctor contribute to their recovery, they experience an improvement in their condition. The effect is based on the mechanisms of suggestion and self-hypnosis. The Hawthorne effect manifested itself during socio-psychological studies in factories. Involvement in participation in the experiment, which was conducted by psychologists, was regarded by the subjects as a manifestation of attention to him personally. The study participants behaved as the experimenters expected them to. The Hawthorne effect can be avoided by not telling the subject the research hypothesis or by giving it a false one ("orthogonal"), and by presenting the instructions in as indifferent a tone as possible. The social reinforcement effect, or audience effect, was discovered by G. Zajonc. The presence of any external observer, in particular the experimenter and assistant, changes the behavior of the person performing this or that work. The effect is clearly visible in athletes during competitions: the difference in the results shown in public and in training.
Zajonc discovered* that during training, the presence of spectators confuses subjects and reduces their performance. When the activity is mastered or reduced to simple physical effort, the result improves. After additional research, such dependencies were established. 1. Not just any observer has an influence, but only a competent one who is significant to the performer and is able to give an assessment. The more competent and significant the observer, the more significant this effect. 2.The more difficult the task, the greater the influence. New skills and abilities, intellectual abilities are more susceptible to influence (towards a decrease in efficiency). On the contrary, old, simple, perceptual and sensorimotor skills are easier to demonstrate, and the productivity of their implementation in the presence of a significant observer increases. 3. Competition and joint activity, an increase in the number of observers enhances the effect (both positive and negative trends).
4. “Anxious” subjects experience greater difficulties when performing complex and new tasks that require intellectual effort than emotionally stable individuals. 5.The action of the “Zajonc effect” is well described by the Yerkes-Dodson law of optimum activation. The presence of an external observer (experimenter) increases the motivation of the subject. Accordingly, it can either improve productivity or lead to “overmotivation” and cause disruption.

29. Behavioral manifestations that are the causes of artifacts (“placebo effect”, “Hawthorne effect”, “audience effect”).
Manifestations of the placebo effect are associated with the patient’s unconscious expectation, his ability to be influenced, and the degree of trust in the psychologist. This effect is used to study the role of suggestion in drug-induced settings, where one group of subjects is given the real drug being tested and the other is given a placebo. If the drug really has a positive effect, then it should be greater than that from using a placebo. The typical rate of positive placebo effect in clinical trials is 5-10%. In studies, it is also easy to cause a negative nocebo effect, when 1-5% of subjects experience discomfort (allergy, nausea, cardiac dysfunction) from taking a “dummy”. Clinical observations indicate that nervous personnel produce nocebo effects, and prescribing anxiety-reducing medications to patients significantly reduces anxiety among doctors themselves. This phenomenon was called “placebo rebound.”
The Hawthorne effect is that the conditions of novelty and interest in the experiment, increased attention to the research itself lead to very positive results, which is a distortion and departure from the real state of affairs. According to the Hawthorne effect, study participants who are excited about their involvement in the study are “too conscientious” and therefore act differently than usual. This artifact manifests itself to the greatest extent in socio-psychological research. The effect was established by a group of researchers led by Elton Mayo during the Hawthorne Experiment (1927-1932). In particular, it has been proven that participation in the experiment itself influences workers in such a way that they behave exactly as the experimenters expect them to. The subjects considered their participation in the study as a manifestation of attention to themselves. To avoid the Hawthorne effect, the experimenter needs to behave calmly and take measures so that the participants do not recognize the hypothesis that is being tested.
audience effect - an effect manifested in psychological research, which consists in the fact that the presence of any external observer, in particular the experimenter and the assistant, changes the behavior of the person performing this or that work. The audience effect was discovered by G. Zajonc and is also called the Zajonc effect. This effect is clearly manifested in athletes in competitions, where the difference in the results shown in public differs significantly for the better from the results in training. Zajonc found that during the experiment, the presence of spectators embarrassed the subjects and reduced their performance.


A natural experiment is carried out only in natural working conditions familiar to the subject, where his working day and work activities usually take place. This could be a desk in an office, a train compartment, a workshop, an institute lecture hall, an office, a truck cab, etc.
When using this method, the research subject may not be aware that some kind of research is currently being carried out. This is necessary for the “purity” of the experiment, because when a person does not know that he is being observed, he behaves naturally, relaxed and without embarrassment. It's like in a reality show: when you know that you are being filmed, you will never allow yourself to do what you could do without cameras (swearing, immoral behavior, etc.).
An example of a natural experiment would be an artificially created fire situation in a hospital, staged in order to observe and analyze the actions of the service personnel, i.e., the doctors, to correct their actions if necessary and point out mistakes, so that under real circumstances every member of the hospital staff knows how to behave and is able to provide the necessary assistance. The advantage of this method is that all actions take place in a familiar working environment, and the results obtained can be used to solve practical problems. But this experimental method also has negative aspects: the presence of factors that simply cannot be controlled, and the need to obtain the information as quickly as possible, since otherwise the production process will be disrupted.
Forms of the natural experiment
The natural experiment has many forms and uses various techniques. To collect primary information, the following are usually used. Introductory tasks. In its simplest form, the method is widely applied as introductory problems. These tasks can be set by the manager orally (“Something has happened, what will you do?”) or by introducing deviations into the employee's work unnoticed by him. Even a single observation of such a natural experiment provides valuable facts and allows one to test a researcher's hypothesis.
Formative experiment. Formative (training or educational) experiments are widely used in practical psychology, in which the skills or qualities of an individual are studied in the process of their formation and development. Changes in operating conditions. A unique methodological technique is a purposeful change in the structure of professional activity. The meaning of this technique is that when performing a certain activity, individual analyzers are turned off according to a pre-thought-out plan, the posture or “grip” of the control levers changes, additional stimuli are introduced, the emotional background and motives of the activity change, etc. Accounting for the results of activities in various conditions allows us to assess the role of certain factors in the structure of the activity being studied and the flexibility of the corresponding skills.
Modeling of the activity being studied. Modeling as a method is used in situations where the study of a phenomenon of interest by simple observation, survey, test or experiment is difficult or impossible due to complexity or inaccessibility. In this case, they resort to creating an artificial model of the phenomenon being studied, repeating its main parameters and expected properties. This model is used to study this phenomenon in detail and draw conclusions about its nature.
In addition to the listed methods intended for collecting primary information, psychology widely uses various methods and techniques for processing this data, their logical and mathematical analysis to obtain secondary results, that is, facts and conclusions arising from the interpretation of processed primary information. For this purpose, in particular, various methods of mathematical statistics are used, without which it is often impossible to obtain reliable information about the phenomena being studied, as well as methods of qualitative analysis.

22. Formative experiment
A formative experiment involves a person or group of people taking part in training organized by the experimenters and developing certain qualities and skills. If the result is formed, we do not need to guess what led to it: it was this technique that produced the result. Nor is there any need to guess what the skill level of a particular person is: he masters the skill to exactly the extent that it was taught to him in the experiment, and if a more stable skill is wanted, its formation is simply continued. Such an experiment usually involves two groups: an experimental group and a control group. Participants in the experimental group are given a specific task which, in the experimenters' opinion, will contribute to the formation of the given quality. The control group of subjects is not given this task. At the end of the experiment, the two groups are compared with each other to evaluate the results obtained. The formative psychological-pedagogical experiment appeared as a method thanks to activity theory (A.N. Leontiev, D.B. Elkonin, etc.), which affirms the idea of the primacy of activity in relation to mental development. During a formative experiment, active actions are performed both by the subjects and by the experimenter, who must maintain a high degree of intervention and control over the main variables. This distinguishes the experiment from observation or examination.
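The logic of comparing the experimental group with the control group can be shown in a toy simulation. Everything below — group size, score scale, size of the training gain — is invented purely for illustration, not taken from any actual study:

```python
import random

def run_formative_experiment(n=30, training_gain=12.0, seed=0):
    """Toy simulation of a formative experiment. The experimental group
    receives training (modeled here as a fixed average gain plus noise);
    the control group does not. All numbers are illustrative."""
    rng = random.Random(seed)
    # Pre-test scores: both groups are drawn from the same population.
    experimental = [rng.gauss(50, 10) for _ in range(n)]
    control = [rng.gauss(50, 10) for _ in range(n)]
    # Post-test: only the experimental group receives the training effect.
    post_exp = [s + training_gain + rng.gauss(0, 3) for s in experimental]
    post_ctrl = [s + rng.gauss(0, 3) for s in control]
    gain_exp = sum(post_exp) / n - sum(experimental) / n
    gain_ctrl = sum(post_ctrl) / n - sum(control) / n
    return gain_exp, gain_ctrl

gain_exp, gain_ctrl = run_formative_experiment()
# The experimental group's mean gain clearly exceeds the control group's,
# which is what licenses the conclusion that the technique produced it.
```

The comparison of the two groups' gains, rather than the experimental group's gain alone, is what rules out natural development as an explanation.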

23. The relationship between the concepts of “ideal experiment”, “real experiment” and “full compliance experiment”.
An ideal experiment is an experiment designed in such a way that the experimenter changes only the independent variable, the dependent variable is recorded, and all other experimental conditions remain unchanged. An ideal experiment assumes the equivalence of all subjects, the invariance of their characteristics over time, and the absence of time itself. It can never be implemented in reality, since in life not only the parameters of interest to the researcher change, but also a number of other conditions. The correspondence of a real experiment to an ideal one is expressed in such a characteristic as internal validity. Internal validity indicates how reliable the results of a real experiment are compared to an ideal one. The more the changes in the dependent variable are influenced by conditions not controlled by the researcher, the lower the internal validity of the experiment and, consequently, the greater the likelihood that the facts discovered in the experiment are artifacts. High internal validity is the main sign of a well-conducted experiment. D. Campbell identifies the following factors that threaten the internal validity of an experiment: the background factor, the natural development factor, the testing factor, measurement error, statistical regression, non-random selection, and attrition. If they are not controlled, they lead to the appearance of corresponding effects. The background (history) factor covers events that occur between the preliminary and final measurements and may cause changes in the dependent variable along with the influence of the independent variable. The natural development factor is associated with the fact that the level of the dependent variable may change through the natural development of the participants in the experiment (growing up, increasing fatigue, etc.). The testing factor is the influence of preliminary measurements on the results of subsequent ones.
The measurement error factor relates to imprecision in, or changes to, the procedure or method used to measure the experimental effect. The statistical regression factor appears if subjects with extreme scores on some assessment were selected to participate in the experiment. The non-random selection factor, accordingly, occurs when the sample was formed in a non-random manner. The attrition factor occurs when subjects drop out unevenly from the control and experimental groups. The experimenter must take into account and, if possible, limit the influence of the factors that threaten the internal validity of the experiment. A full compliance experiment is an experimental study in which all conditions and their changes correspond to reality. The approximation of a real experiment to a full compliance experiment is expressed in its external validity. The degree to which the experimental results can be transferred to reality depends on the level of external validity. External validity, as defined by R. Gottsdanker, determines how reliable the conclusions from the results of a real experiment are compared to a full compliance experiment. To achieve high external validity, the levels of the additional variables in the experiment must correspond to their levels in reality. An experiment that lacks external validity is considered invalid.
Factors that threaten external validity include the following: the reactive effect (a decrease or increase in the subjects' susceptibility to the experimental influence due to previous measurements); the effect of the interaction of selection and influence (the experimental influence is significant only for the participants of this particular experiment); the factor of the experimental conditions (the experimental effect may be observable only under these specially organized conditions); the factor of interference of influences (manifests itself when a sequence of mutually exclusive influences is presented to one group of subjects).
Researchers working in applied areas of psychology - clinical, educational, organizational - are especially concerned about the external validity of experiments, since the results of an invalid study yield nothing when transferred to real conditions. An infinite experiment involves an unlimited number of experiments and trials aimed at obtaining increasingly accurate results. Increasing the number of trials in an experiment with a single subject increases the reliability of the experimental results; in experiments with a group of subjects, reliability increases with the number of subjects.
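The last point — reliability growing with the number of trials or subjects — can be checked numerically: the spread of the group mean across repeated experiments shrinks roughly as one over the square root of the sample size. The population parameters below (mean 100, standard deviation 15) are arbitrary illustrative choices:

```python
import random

def sd_of_group_means(n_subjects, n_experiments=2000, seed=1):
    """Standard deviation of the group mean across many simulated
    repetitions of the same experiment. A smaller value means a more
    reliable (more reproducible) group result."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_experiments):
        sample = [rng.gauss(100, 15) for _ in range(n_subjects)]
        means.append(sum(sample) / n_subjects)
    m = sum(means) / n_experiments
    var = sum((x - m) ** 2 for x in means) / n_experiments
    return var ** 0.5

spread_small = sd_of_group_means(10)    # roughly 15 / sqrt(10)
spread_large = sd_of_group_means(100)   # roughly 15 / sqrt(100)
```

Going from 10 to 100 subjects cuts the spread of the group mean by about a factor of three, which is why larger samples give more reliable results.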

24. The concept of validity. Construct and ecological validity.
Validity is one of the most important characteristics of psychodiagnostic methods and tests, and one of the main criteria of their quality. The concept is close to that of reliability, but not identical to it. The problem of validity arises during the development and practical application of a test or technique, when it is necessary to establish a correspondence between the degree of expression of the personality property of interest and the method of measuring it. Validity indicates what a test or technique measures and how well it does so: the more valid it is, the better it reflects the quality (property) for which it was created. Quantitatively, validity can be expressed through correlations of the results obtained with a test or technique with other indicators, for example with success in the relevant activity. Validity can be justified in different ways, most often in combination; additional concepts of conceptual, criterion, construct and other types of validity are used, each with its own way of establishing its level. The requirement of validity is very important, and many complaints about tests and other psychodiagnostic techniques concern the doubtfulness of their validity. Different conceptions of intelligence require a different composition of tasks, so the question of conceptual validity matters: the more the tasks correspond to the author's conception of intelligence, the more confidently one can speak of the conceptual validity of the test. The correlation of a test with an empirical criterion indicates its possible validity with respect to that criterion. Determining the validity of a test always requires additional questions: validity for what? for what purpose? by what criterion? Thus the concept of validity refers not only to the test but also to the criterion for assessing its quality: the higher the correlation coefficient between the test and the criterion, the higher the validity.
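The quantitative side of criterion validity — correlating test scores with an external criterion of success — can be shown with a minimal sketch. The test scores and the job-performance criterion below are hypothetical data, invented only to illustrate the calculation:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient, used here as a criterion-validity
    coefficient between test scores and an external success criterion."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: eight subjects' test scores and a rating of their
# success in the relevant activity.
test_scores = [12, 15, 9, 20, 17, 11, 18, 14]
criterion = [3.1, 3.8, 2.5, 4.6, 4.0, 2.9, 4.3, 3.4]
validity = pearson_r(test_scores, criterion)
# A coefficient near 1 means the test tracks the criterion closely;
# near 0 it would tell us nothing about the criterion.
```
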
The development of factor analysis has made it possible to create tests that are valid with respect to an identified factor. Only tests checked for validity can be used in vocational guidance, professional selection, and scientific research. Construct validity (conceptual validity) is a special case of operational validity: the degree to which a method of interpreting experimental data is adequate to a theory, determined by the correct use of the terms of that theory. Construct validity, substantiated by L. Cronbach and P. Meehl in 1955, characterizes the ability of a test to measure a trait that has been theoretically justified (as a theoretical construct). When it is difficult to find an adequate pragmatic criterion, one may instead rely on hypotheses formulated on the basis of theoretical assumptions about the property being measured. Confirmation of these hypotheses indicates the theoretical validity of the technique. First, the construct that the test is intended to measure must be described as fully and meaningfully as possible. This is achieved by formulating hypotheses about it, specifying what the given construct should and should not correlate with. These hypotheses are then tested. This is the most effective method of validation for personality questionnaires, for which establishing a single validity criterion is difficult. Construct validity is the most comprehensive and complex type of validity: instead of one (primarily pragmatic) result, many results (most often genuinely psychological ones) must be taken into account. Violations of construct validity arise from mislabeling cause and effect using abstract terms drawn from ordinary language or from formal theory. Ecological validity is the degree to which the experimental conditions correspond to the reality being studied.
For example, in Kurt Lewin's famous experiment on leadership styles, the relationships within the groups of adolescents bore little resemblance to relationships in the state as a whole, so ecological validity was violated.

25. Internal validity. Reasons for violation of internal validity.
Internal validity is a type of validity reflecting the degree to which changes in the dependent variable are caused by the independent variable. Internal validity is the higher, the greater the likelihood that a change in the dependent variable is caused by a change in the independent variable (and not by something else). The concept can be considered interdisciplinary: it is widely used in experimental psychology as well as in other fields of science. Internal validity is the correspondence of a real study to an ideal one. In a study with internal validity, the researcher is confident that the results obtained by measuring the dependent variable are directly related to the independent variable and not to some other uncontrolled factor.
However, in fact, in science (especially in psychology) it is impossible to say with one hundred percent certainty that internal validity has been met. For example, it is impossible to study any mental process separately from the psyche as a whole. Therefore, in any psychological experiment, a scientist can only maximally (but not absolutely) remove or minimize various factors that threaten internal validity.
Factors that threaten internal validity include:
Change over time (dependence of subjects and the environment on the time of day or season; changes in the person himself, such as aging, fatigue and distraction during long-term studies; changes in the motivation of the subjects and the experimenter, etc.; cf. natural development)
Sequence effect
Rosenthal (Pygmalion) effect
Hawthorne effect
Placebo effect
Audience effect
First impression effect
Barnum effect
Concomitant confounding
Sampling factors
Incorrect selection (non-equivalence of groups in composition, causing systematic error in the results)
Statistical regression
Experimental attrition (uneven dropout of subjects from compared groups, leading to non-equivalence of groups in composition)
Natural development (the general property of living beings to change; cf. ontogeny), etc.
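One of the listed factors, statistical regression, can be illustrated with a small simulation: if subjects are selected for extreme pretest scores (true ability plus measurement noise), their retest mean drifts back toward the population mean even though nothing was done to them. All distribution parameters below are illustrative assumptions:

```python
import random

def regression_to_mean(n=10000, cutoff=70.0, seed=2):
    """Select subjects whose pretest (ability + noise) is extreme and
    retest them. Their retest mean falls back toward the population
    mean of 50 without any intervention at all."""
    rng = random.Random(seed)
    ability = [rng.gauss(50, 10) for _ in range(n)]
    pretest = [a + rng.gauss(0, 10) for a in ability]
    # Select only the extreme high scorers, as in a poorly formed sample.
    selected = [i for i in range(n) if pretest[i] >= cutoff]
    retest = [ability[i] + rng.gauss(0, 10) for i in selected]
    pre_mean = sum(pretest[i] for i in selected) / len(selected)
    re_mean = sum(retest) / len(selected)
    return pre_mean, re_mean

pre_mean, re_mean = regression_to_mean()
# re_mean sits well below pre_mean: an "improvement" or "decline" in an
# extreme group needs no experimental treatment to explain it.
```

This is why an apparent treatment effect in a group selected for extreme scores may be an artifact.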

26. External validity. Reasons for violation of external validity.
External validity is a type of validity that determines the extent to which the results of a particular study can be extended to the entire class of similar situations/phenomena/objects. This concept can be considered interdisciplinary: it is widely used in experimental psychology, as well as in other areas of science. External validity is the correspondence of actual research to the objective reality being studied. External validity determines the extent to which the results obtained in an experiment can correspond to the type of life situation that was studied, and the extent to which these results can be generalized to all similar life situations. For example, the criticism of experimental psychologists that they know a lot about sophomores and white rats, but very little about everything else, can be considered a criticism of external validity.
As with any other kind of validity, one can never say that external validity is absolutely satisfied in a study, only that it is violated to one degree or another. Absolute external validity would mean that the results of a study generalize to any population, under any conditions and at any time, so scientists speak not of compliance or non-compliance with external validity but of the degree of its compliance.
Campbell names the main reasons for the violation of external validity:
1. The effect of testing: a decrease or increase in the susceptibility of subjects to the experimental influence under the influence of testing. For example, preliminary control of students' knowledge can increase their interest in new educational material. Since the population is not subjected to preliminary testing, the results for it may not be representative. 2. The conditions of the study. They cause the subject's reaction to the experiment, so its data cannot be transferred to individuals who did not take part in it, i.e. to the entire general population apart from the experimental sample. 3. The interaction of selection factors and the content of the experiment. Its consequences are artifacts (in experiments with volunteers or with subjects participating under duress). 4. The interference of experimental influences. Subjects have memory and the ability to learn: if an experiment consists of several series, the first influences do not pass without a trace and affect the effects of subsequent influences.
Most of the reasons for the violation of external validity are associated with the characteristics of a psychological experiment conducted with human participants, which differentiate psychological research from an experiment carried out by specialists in other natural sciences.

27. The influence of the experimental situation on its results.
All psychologists recognize the importance of the influence of the experimental situation on its results. Thus, it was found that the experimental procedure has a greater impact on children than on adults. Explanations for this are found in the characteristics of the child’s psyche:
1. Children are more emotional when communicating with adults. An adult is always a psychologically significant figure for a child: either useful or dangerous, either likable and worthy of trust or unpleasant and to be kept away from.
Consequently, children strive to please an unfamiliar adult or “hide” from contact with him. The relationship with the experimenter determines the attitude towards the experiment (and not vice versa).
2. The manifestation of personality traits in a child depends on the situation to a greater extent than in an adult. The situation is constructed in the course of communication: the child must successfully communicate with the experimenter and understand his questions and requirements. A child masters his native language by communicating with his immediate environment, acquiring not the literary language but a dialect, a local variety of speech, “slang.” An experimenter speaking literary-scientific language will never be “emotionally one of his own” for him, unless the child belongs to the same social stratum. A system of concepts and ways of communicating that are unusual for the child (manner of speaking, facial expressions, pantomime, etc.) will be a powerful barrier to his involvement in the experiment.
3. The child has a more vivid imagination than the experimenter, and therefore can interpret the experimental situation differently, “fantastically,” than an adult. In particular, criticizing Piaget's experiments, some authors make the following arguments. The child may view the experiment as a game with “its own” laws. The experimenter pours water from one container to another and asks the child whether the amount of liquid has been preserved. The correct answer may seem banal and uninteresting to the child, and he will begin to play with the experimenter. He may imagine that he was invited to watch a trick with a magic glass or take part in a game where the laws of conservation of matter do not apply. But the child is unlikely to reveal the content of his fantasies. These arguments can only be speculations of Piaget's critics. After all, rational perception of an experimental situation is a symptom of a certain level of intelligence development. However, the problem remains unresolved, and experimenters are advised to pay attention to whether the child correctly understands the questions and requests addressed to him, and what he means by giving this or that answer.

28. Communication factors that can distort the results of the experiment
The founder of the study of the socio-psychological aspects of the psychological experiment was S. Rosenzweig. In 1933 he published an analytical review of this problem in which he identified the main communication factors that can distort the results of an experiment: 1. Errors of “attitude toward the observed,” associated with the subject's understanding of the decision-making criterion when choosing a reaction. 2. Errors related to the motivation of the subject, who may be driven by curiosity, pride or vanity and act not in accordance with the goals of the experimenter but in accordance with his own understanding of the goals and meaning of the experiment. 3. Errors of personal influence, associated with the subject's perception of the experimenter's personality. Nowadays these sources of artifacts are not classed as socio-psychological (except for socio-psychological motivation).
The subject may participate in the experiment either voluntarily or under duress. Participation in the experiment itself gives rise to a number of behavioral manifestations in the subjects that are causes of artifacts. The best known are the “placebo effect,” the “Hawthorne effect” and the “audience effect.” The placebo effect was discovered by doctors: when subjects believe that a drug or a doctor's actions contribute to their recovery, their condition improves. The effect is based on the mechanisms of suggestion and self-suggestion. The Hawthorne effect appeared during socio-psychological studies in factories: being involved in an experiment conducted by psychologists was regarded by the subjects as a sign of attention to them personally, and the study participants behaved as the experimenters expected them to. The Hawthorne effect can be avoided by not telling the subject the research hypothesis, or by giving a false (“orthogonal”) one, and by presenting the instructions in as neutral a tone as possible. The social facilitation effect, or audience effect, was discovered by R. Zajonc: the presence of any external observer, in particular the experimenter or an assistant, changes the behavior of the person performing a given task. The effect is clearly visible in athletes during competitions, in the difference between the results shown in public and in training.
Zajonc found that while a skill is being learned, the presence of spectators confuses subjects and reduces their performance, whereas once the activity has been mastered, or comes down to simple physical effort, the result improves. Further research established the following dependencies. 1. It is not just any observer who has an influence, but only a competent one, significant to the performer and able to give an assessment; the more competent and significant the observer, the stronger the effect. 2. The more difficult the task, the greater the influence. New skills and abilities, and intellectual abilities, are more susceptible to the influence (in the direction of reduced efficiency), whereas old, simple, perceptual and sensorimotor skills are easier to demonstrate, and the productivity of their performance in the presence of a significant observer increases. 3. Competition and joint activity, as well as an increase in the number of observers, enhance the effect (in both its positive and negative tendencies). 4. “Anxious” subjects experience greater difficulties than emotionally stable individuals when performing complex, new tasks that require intellectual effort. 5. The action of the “Zajonc effect” is well described by the Yerkes-Dodson law of optimal activation: the presence of an external observer (the experimenter) increases the subject's motivation and, accordingly, can either improve productivity or lead to “overmotivation” and cause a breakdown.
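The Yerkes-Dodson relation mentioned in point 5 can be sketched as an inverted U whose optimum shifts with task difficulty. The quadratic curve and all the specific numbers below are assumptions made for the sketch, not the original formulation of the law:

```python
def performance(arousal, optimum):
    """Illustrative inverted-U: performance peaks at the task's optimal
    arousal level and falls off on either side. The quadratic shape is
    an assumption for this sketch, not the empirical curve."""
    return max(0.0, 1.0 - (arousal - optimum) ** 2)

# Assumption: harder tasks have a lower optimal arousal level.
EASY_OPT, HARD_OPT = 0.7, 0.3

# An observer raising arousal from 0.4 to 0.6 moves the easy task
# toward its optimum but pushes the hard task past its optimum.
easy_gain = performance(0.6, EASY_OPT) - performance(0.4, EASY_OPT)
hard_gain = performance(0.6, HARD_OPT) - performance(0.4, HARD_OPT)
```

The same increase in motivation thus improves productivity on a simple, well-mastered task and causes a breakdown on a complex, new one, which is exactly the pattern described above.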

29. Behavioral manifestations that are the causes of artifacts (“placebo effect”, “Hawthorne effect”, “audience effect”).
Manifestations of the placebo effect are associated with the patient's unconscious expectations, his suggestibility and his degree of trust in the psychologist. The effect is used to study the role of suggestion in the action of drugs: one group of subjects is given the real drug being tested and the other a placebo. If the drug really has a positive effect, it should be greater than the effect of the placebo. The typical rate of a positive placebo effect in clinical trials is 5-10%. In studies it is also easy to induce the opposite, nocebo effect, in which 1-5% of subjects experience discomfort (allergy, nausea, cardiac dysfunction) from taking the “dummy.” Clinical observations indicate that nervous staff induce nocebo effects, and prescribing anxiety-reducing medication to patients markedly reduces anxiety among the doctors themselves; this phenomenon has been called the “placebo rebound.”
The Hawthorne effect consists in the fact that novelty, interest in the experiment, and heightened attention to the research itself lead to inflated, overly favorable results that distort the real state of affairs. Participants who are excited about their involvement in a study act "too conscientiously" and therefore behave differently than usual. This artifact is most pronounced in socio-psychological research. The effect was established by a group of researchers led by Elton Mayo during the Hawthorne experiments (1927-1932). In particular, it was shown that participation in the experiment itself influenced the workers in such a way that they behaved exactly as the experimenters expected: the subjects regarded their participation in the study as a sign of attention to themselves. To avoid the Hawthorne effect, the experimenter should behave calmly and take measures to prevent participants from recognizing the hypothesis being tested.
The audience effect is an effect observed in psychological research whereby the presence of any external observer, in particular the experimenter or an assistant, changes the behavior of the person performing a task. The effect was discovered by R. Zajonc and is also called the Zajonc effect. It shows up clearly in athletes at competitions, where results achieved in public differ significantly for the better from training results. Zajonc found that during learning, the presence of spectators embarrassed subjects and reduced their performance.


5.1.1 Single independent variable designs

The design of a “true” experimental study differs from others in the following important ways:

1) using one of the strategies for creating equivalent groups, most often randomization;

2) the presence of an experimental and at least one control group;

3) completion of the experiment by testing and comparing the behavior of the group that received the experimental intervention (X1) with the group that did not receive the intervention X0.

The classic version is a design with two independent groups. In psychology, experimental design began to be used in the first decades of the 20th century.

There are three main versions of this plan. When describing them, we will use the symbolization proposed by Campbell.

Table 5.1

Here R is randomization, X is exposure, O1 is testing the first group, O2 is testing the second group.

1) Two randomized group design with post-exposure testing. Its author is the famous biologist and statistician R. A. Fisher. The structure of the plan is shown in Table 5.1.

Equivalence of the experimental and control groups is an absolutely necessary condition for applying this design. Most often, randomization is used to achieve it (see Chapter 4). This design is recommended when it is impossible or unnecessary to conduct preliminary testing of the subjects. If the randomization is carried out well, this design is the best: it controls most sources of artifacts, and various forms of analysis of variance are applicable to it.

After randomization or another procedure for equalizing groups, an experimental intervention is carried out. In the simplest version, only two gradations of the independent variable are used: there is an impact, there is no impact.

If more than one level of exposure must be used, designs with several experimental groups (one per exposure level) and one control group are employed.

If the influence of an additional variable must be controlled, a design with two control groups and one experimental group is used. Measuring behavior provides material for comparing the groups, and the data are processed with the traditional methods of mathematical statistics. Consider the case where measurement is on an interval scale: the difference in group means is assessed with Student's t-test, and differences in the variance of the measured parameter between the experimental and control groups with the F test. The corresponding procedures are discussed in detail in textbooks on mathematical statistics for psychologists.
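As a sketch of this processing step, both statistics can be computed directly with numpy (the group sizes, means, and spreads below are invented for illustration):

```python
import numpy as np

def student_t(x, y):
    """Pooled-variance Student t statistic for two independent groups."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    sp2 = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)  # pooled variance
    return (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1 / nx + 1 / ny))

def variance_ratio_f(x, y):
    """F statistic: ratio of the larger sample variance to the smaller."""
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    return max(vx, vy) / min(vx, vy)

# Simulated posttest scores (illustrative numbers only).
rng = np.random.default_rng(0)
experimental = rng.normal(105, 10, 30)  # group that received treatment X
control = rng.normal(100, 10, 30)       # group without treatment

t = student_t(experimental, control)
F = variance_ratio_f(experimental, control)
```

In practice the resulting t and F values are compared against the critical values for the corresponding degrees of freedom (for two groups of 30, roughly t ≈ 2.0 at α = .05).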

The use of a two-randomized-groups design with post-exposure testing allows one to control the main sources of internal validity (as defined by Campbell). Since there is no pretesting, both the testing effect itself and the interaction of the testing procedure with the experimental treatment are excluded. The design controls the influence of group composition, spontaneous dropout, background and natural development, and the interaction of group composition with other factors; randomization and comparison of the data from the experimental and control groups also eliminate the regression effect. However, in most pedagogical and socio-psychological experiments it is essential to strictly control the initial level of the dependent variable, be it intelligence, anxiety, knowledge, or an individual's status in a group. Randomization is the best available procedure, but it does not absolutely guarantee the correct outcome. When the results of randomization are in doubt, a pretest design is used.

Table 5.2

2) Design for two randomized groups with pretest and posttest. Consider its structure (Table 5.2).

The pretest design is popular among psychologists. Biologists have more confidence in the randomization procedure. The psychologist knows very well that each person is unique and different from others, and subconsciously strives to capture these differences with the help of tests, not trusting the mechanical randomization procedure. At the same time, the hypothesis of most psychological studies, especially in the field of developmental psychology (“formative experiment”), contains a forecast of a certain change in the personality of an individual under the influence of an external factor. Therefore, the test-exposure-retest design using randomization and a control group is very common.

In the absence of a procedure for equalizing the groups, the design turns into a quasi-experimental one (discussed in section 5.2).

The main source of artifacts undermining the external validity of the procedure is the interaction of testing with the experimental treatment. For example, testing the level of knowledge in a certain subject before an experiment on memorizing material may activate the initial knowledge and raise memorization productivity overall, by mobilizing mnemonic skills and creating a set for memorization.

At the same time, this design makes it possible to control other external variables. The "history" ("background") factor is controlled, since between the first and the second testing both groups are exposed to the same ("background") influences. However, Campbell notes the need to control "within-group events," as well as the effect of non-simultaneous testing of the two groups. In reality it is impossible to ensure that test and retest are carried out in both groups simultaneously; the design then becomes quasi-experimental.

Typically, non-simultaneous testing is controlled by having two experimenters test the two groups at the same time. The optimal procedure is to randomize the order of testing: members of the experimental and control groups are tested in random order. The same is done with the presentation or non-presentation of the experimental treatment. Such a procedure, of course, requires a considerable number of subjects in the experimental and control samples (at least 30-35 in each).


Natural history and testing effects are controlled by ensuring that the experimental and control groups are treated equally, and group composition and regression effects [Campbell, 1980] are controlled by randomization.

The results of applying the test-exposure-retest plan are presented in the table.

When processing the data, the parametric t and F tests are usually used (for interval-scale data). Three t values are calculated, comparing: 1) O1 and O2; 2) O3 and O4; 3) O2 and O4. The hypothesis of a significant influence of the independent variable on the dependent one can be accepted if two conditions are met: a) the differences between O1 and O2 are significant while those between O3 and O4 are not, and b) the differences between O2 and O4 are significant. It is often more convenient to compare not the absolute values but the magnitude of the gain from the first testing to the second, δ(i): δ(i12) and δ(i34) are computed and compared using Student's t-test. If the differences are significant, the experimental hypothesis about the influence of the independent variable on the dependent one is accepted (Table 5.3)
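The gain-score comparison can be sketched as follows (a minimal numpy illustration; the function name `gain_t` and all numbers are ours, not standard):

```python
import numpy as np

def gain_t(pre_e, post_e, pre_c, post_c):
    """Student t comparing gain scores: delta(i12) in the experimental
    group vs delta(i34) in the control group."""
    d_e = post_e - pre_e                     # O2 - O1 per subject
    d_c = post_c - pre_c                     # O4 - O3 per subject
    n_e, n_c = len(d_e), len(d_c)
    sp2 = ((n_e - 1) * d_e.var(ddof=1) + (n_c - 1) * d_c.var(ddof=1)) \
          / (n_e + n_c - 2)                  # pooled variance of the gains
    return (d_e.mean() - d_c.mean()) / np.sqrt(sp2 * (1 / n_e + 1 / n_c))

# Simulated pretest/posttest scores (illustrative numbers only).
rng = np.random.default_rng(0)
pre_e = rng.normal(100, 10, 30)
post_e = pre_e + rng.normal(8, 3, 30)        # treatment adds ~8 points
pre_c = rng.normal(100, 10, 30)
post_c = pre_c + rng.normal(1, 3, 30)        # control drifts by ~1 point

t = gain_t(pre_e, post_e, pre_c, post_c)
```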

It is also recommended to use Fisher's analysis of covariance. In that case pretest scores are taken as the additional variable, and subjects are divided into subgroups according to them, which yields the following table for processing the data by the MANOVA method (Table 5.4)

The use of a “test-exposure-retest” design allows you to control the influence of “side” variables that violate the internal validity of the experiment.

External validity refers to the transferability of data to a real-life situation. The main thing distinguishing the experimental situation from the real one is the introduction of preliminary testing. As already noted, the test-exposure-retest design does not control the interaction effect of testing and the experimental treatment: a pretested subject becomes "sensitized," more receptive to the treatment, since in the experiment we measure precisely the dependent variable we intend to influence by varying the independent variable.

Table 5.5

                      Impact (X)    No impact
Pretested                O2            O4
Not pretested            O5            O6

To control external validity, the design proposed by R. L. Solomon in 1949 is used.

3) The Solomon plan is used when conducting an experiment on four groups:

1. Experiment 1: R O1 X O2

2. Control 1: R O3 O4

3. Experiment 2: R X O5

4. Control 2: R O6

The design involves two experimental and two control groups and is essentially multi-group (of the 2 x 2 type), but for convenience of presentation it is discussed in this section.

Solomon's design is a combination of the two designs discussed above: the first, without pretesting, and the second, test-exposure-retest. The "first part" of the design makes it possible to control the interaction effect of the first test and the experimental treatment. Solomon's design reveals the effect of the experimental exposure in four ways: by comparing 1) O2 - O1; 2) O2 - O4; 3) O5 - O6; and 4) O5 - O3.

If you compare O6 with O1 and O3, you can identify the joint influence of the effects of natural development and “history” (background influences) on the dependent variable.

Campbell, criticizing the data processing schemes proposed by Solomon, suggests ignoring preliminary testing and reducing the data to a 2 x 2 scheme suitable for applying variance analysis (Table 5.5)

Comparison of column averages makes it possible to identify the effect of experimental influence - the influence of the independent variable on the dependent one. Row means show the pretest effect. Comparison of cell means characterizes the interaction of the testing effect and the experimental effect, which indicates the extent of the violation of external validity.
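Campbell's 2 x 2 reduction can be sketched as a hand-rolled two-way ANOVA for equal cell sizes (a simplified illustration with invented data, not a substitute for a statistics package):

```python
import numpy as np

def anova_2x2(cells):
    """Two-way ANOVA for a 2 x 2 table with equal cell sizes.
    cells[i][j]: 1-D array of posttest scores; rows = pretested yes/no,
    columns = treatment X yes/no (cells O2, O4, O5, O6)."""
    cells = [[np.asarray(c, float) for c in row] for row in cells]
    n = len(cells[0][0])                               # per-cell sample size
    grand = np.mean([c.mean() for row in cells for c in row])
    row_m = [np.mean([c.mean() for c in row]) for row in cells]
    col_m = [np.mean([cells[i][j].mean() for i in range(2)]) for j in range(2)]
    ss_rows = 2 * n * sum((m - grand) ** 2 for m in row_m)   # pretest effect
    ss_cols = 2 * n * sum((m - grand) ** 2 for m in col_m)   # treatment effect
    ss_cells = n * sum((cells[i][j].mean() - grand) ** 2
                       for i in range(2) for j in range(2))
    ss_inter = ss_cells - ss_rows - ss_cols            # testing x treatment
    ss_err = sum(((c - c.mean()) ** 2).sum() for row in cells for c in row)
    ms_err = ss_err / (4 * (n - 1))                    # error mean square
    return {"F_treatment": ss_cols / ms_err,           # each effect has df = 1
            "F_pretest": ss_rows / ms_err,
            "F_interaction": ss_inter / ms_err}
```

A large F_interaction signals the testing-by-treatment interaction that threatens external validity, exactly as the text describes.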

In the case where the effects of preliminary testing and interaction can be neglected, proceed to the comparison of O4 and O2 using the method of covariance analysis. As an additional variable, data from preliminary testing is taken according to the scheme given for the “test-exposure-retest” plan.

Finally, in some cases it is extremely important to check the persistence of the effect of the independent variable on the dependent variable over time: for example, to find out whether a new teaching method leads to long-term memorization of the material. For these purposes, the following plan is used:

1 Experiment 1 R O1 X O2

2 Control 1 R O3 O4

3 Experiment 2 R O5 X O6

4 Control 2 R O7 O8

5.1.2 Single independent variable and multiple group designs

Sometimes comparing two groups is not enough to confirm or refute an experimental hypothesis. This problem arises in two cases: a) when external variables must be controlled; b) when quantitative dependencies between two variables must be identified.

To control external variables, various versions of the factorial experimental design can be used. As for identifying a quantitative relationship between two variables, the need to establish it arises when testing an "exact" experimental hypothesis. In an experiment with two groups, at best the fact of a causal relationship between the independent and dependent variables can be established, but an infinite number of curves can be drawn through two points. To be sure that the relationship between two variables is linear, at least three points are needed, corresponding to three levels of the independent variable. Therefore the experimenter must select several randomized groups and place them in different experimental conditions. The simplest option is a design with three groups and three levels of the independent variable:

Experiment 1: R X1 O1

Experiment 2: R X2 O2

Control: R O3

The control group in this case is the third experimental group, for which the level of the variable X = 0.

With this design, each group is presented with only one level of the independent variable. The number of experimental groups can also be increased to match the number of levels of the independent variable. The data obtained with such a design are processed by the same statistical methods as listed above.
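The point about needing three levels can be illustrated numerically: two points always lie on a line, but with three group means one can check whether a straight line actually fits (the means below are hypothetical):

```python
import numpy as np

# Hypothetical group means of the dependent variable at three levels
# of the independent variable X (X = 0 is the control group).
x_levels = np.array([0.0, 1.0, 2.0])
group_means = np.array([10.0, 14.0, 18.0])   # illustrative values

# A quadratic through three points fits exactly; a near-zero quadratic
# coefficient indicates the relationship is (locally) linear.
quad_coef, slope, intercept = np.polyfit(x_levels, group_means, 2)
is_linear = abs(quad_coef) < 1e-9
```

With only two groups the quadratic (or any other curved) alternative could never be ruled out, which is why a third level is required.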

Surprisingly, simple "system" experimental designs are very rarely used in modern experimental research. Perhaps researchers are "embarrassed" to put forward simple hypotheses, mindful of the "complexity and multidimensionality" of mental reality. Yet the tendency to use designs with many independent variables, indeed to conduct multivariate experiments, does not necessarily contribute to a better explanation of the causes of human behavior. As the saying goes, "a smart person amazes with the depth of his idea, and a fool with the scope of his construction." A simple explanation is preferable to any complex one, even though regression equations in which everything equals everything and intricate correlation graphs may impress some dissertation committees.

5.1.3 Factorial designs

Factorial experiments are used when it is necessary to test complex hypotheses about the relationships between variables. The general form of such a hypothesis is: "If A1, A2, ..., An, then B." Such hypotheses are called complex, combined, etc. The independent variables may stand in various relationships to one another: conjunction, disjunction, linear independence, additive or multiplicative relations, etc. Factorial experiments are a special case of multivariate research, in which relationships between several independent and several dependent variables are established. In a factorial experiment, as a rule, two types of hypotheses are tested simultaneously:

1) hypotheses about the separate influence of each of the independent variables;

2) hypotheses about the interaction of variables, namely, how the presence of one of the independent variables affects the effect on the other.

A factorial experiment is based on a factorial design. Factorial design of an experiment means that all levels of independent variables are combined with each other. The number of experimental groups is equal to the number of combinations of levels of all independent variables.
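The rule "number of groups = number of level combinations" is easy to express directly (the factor names and levels below are invented for illustration):

```python
from itertools import product

# Hypothetical factors: each independent variable with its levels.
factors = {
    "reward": ["none", "low", "high"],     # 3 levels
    "observer": ["absent", "present"],     # 2 levels
}

# In a complete factorial design every combination of levels defines
# one experimental group (here a 3 x 2 design -> 6 groups).
groups = list(product(*factors.values()))
n_groups = len(groups)
```

Adding a third factor with k levels would multiply the group count by k, which is exactly why full factorial designs grow expensive so quickly.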

Today, factorial designs are the most common in psychology, since simple relationships between two variables practically do not occur in it.

There are many variants of factorial designs, but not all are used in practice. Most often, 2x2 designs are employed: two independent variables with two levels each. The principle of balancing is used to construct the plan. The 2x2 design identifies the effect of two independent variables on one dependent variable: the experimenter manipulates the possible combinations of variables and levels, and the data are given in a simple table (Table 5.6)

Four independent randomized groups are used. Fisher's analysis of variance is used to process the results.

Other variants of the factorial design, namely 3x2 and 3x3, are used less often. The 3x2 design is applied when it is necessary to establish the form of the dependence of the dependent variable on one independent variable while the second independent variable is dichotomous. An example is an experiment on the effect of external observation on success in solving intellectual problems: the first independent variable varies simply (observer present, observer absent), the second is the level of task difficulty. The result is a 3x2 design (Table 5.7)

The 3x3 variant is used when both independent variables have several levels and the forms of the relationship between the dependent variable and the independent ones are to be identified. Such a plan makes it possible, for example, to reveal the influence of reinforcement on success in completing tasks of varying difficulty (Table 5.8)

Table 5.6. [2 x 2 design: levels of the 1st variable x levels of the 2nd variable]

Table 5.7. [3 x 2 design: observer present / no observer x task difficulty levels]

Table 5.8. [3 x 3 design: task difficulty level x stimulation intensity]

In general, a design for two independent variables has the form N x M. The applicability of such designs is limited only by the need to recruit a large number of randomized groups: the amount of experimental work grows excessively with the addition of each level of any independent variable.

Designs for studying the effects of more than two independent variables are rarely used. For three variables they have the general form L x M x N.

Most often, 2x2x2 designs are used: "three independent variables, two levels each." Obviously, adding each new variable increases the number of groups: their total number is 2^n, where n is the number of variables, in the case of two intensity levels, and K^n in the case of K levels (assuming the number of levels is the same for all independent variables). An example of such a design is a development of the previous one. When we are interested in the success of an experimental series of tasks depending not only on general stimulation, carried out in the form of punishment (electric shock), but also on the ratio of reward and punishment, a 3x3x3 design is used.

Table 5.9

A simplification of the complete three-variable design of the form L x M x N is planning by the "Latin square" method. The Latin square is used when the simultaneous influence of three variables with two or more levels must be studied. The principle of the Latin square is that two levels of different variables occur in the experimental design only once. This greatly simplifies the procedure, not to mention freeing the experimenter from the need to work with huge samples.

We will assume that we have three independent variables with three levels each.

The design built by the Latin square method is presented in Table 5.9.

The same technique is used to control external variables (counterbalancing). It is easy to see that the levels of the third variable N (A, B, C) occur once in each row and each column. By combining results across rows, columns, and levels, one can identify the influence of each independent variable on the dependent variable, as well as the degree of pairwise interaction between the variables.
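A minimal sketch of constructing a Latin square and checking its defining property (the cyclic construction is one standard way of building such squares; others exist):

```python
def latin_square(levels):
    """Cyclic Latin square: each level appears exactly once
    in every row and in every column."""
    n = len(levels)
    return [[levels[(i + j) % n] for j in range(n)] for i in range(n)]

square = latin_square(["A", "B", "C"])

# Verify the defining property of the design: no level repeats
# within any row or any column.
ok = (all(len(set(row)) == 3 for row in square) and
      all(len(set(col)) == 3 for col in zip(*square)))
```

Each cell then marks which level of the third variable a given row-by-column combination of the first two variables receives, so 9 groups replace the 27 of a full 3x3x3 design.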

"Latin Square" allows you to significantly reduce the number of groups. In particular, the 2x2x2 plan turns into a simple table (Table 5.10)

The use of Latin letters in the cells to indicate the levels of the 3rd variable (A - yes, B - no) is traditional, which is why the method is called “Latin square”.

A more complex design, built by the "Greco-Latin square" method, is used very rarely. It serves to study the influence of four independent variables on the dependent variable. Its essence is as follows: to each Latin letter of the three-variable plan a Greek letter is attached, denoting the level of the fourth variable.

Consider an example. With four variables, each having three levels of intensity, the Greco-Latin square design takes the following form (Table 5.11)

Fisher's analysis of variance is used to process the data. The methods of the Latin and Greco-Latin squares came to psychology from agrobiology but have not been widely used; exceptions are certain experiments in psychophysics and the psychology of perception.

The main problem that can be solved in a factorial experiment and cannot be solved using several conventional experiments with one independent variable is determining the interaction of two variables.

Table 5.10. [2 x 2 table: 1st variable x 2nd variable; cells contain the levels of the 3rd variable (A, B)]

Table 5.11

Let us consider the possible outcomes of the simplest 2x2 factorial experiment from the standpoint of the interaction of variables. For this, the results must be presented on a graph with the values of the first independent variable on the abscissa and the values of the dependent variable on the ordinate. Each of the two straight lines connecting the values of the dependent variable at different values of the first independent variable (A) characterizes one level of the second independent variable (B). For simplicity, we use the results of a correlational rather than an experimental study. Suppose we have examined how a child's status in a group depends on the state of his health and his level of intelligence. Let us consider the possible variants of relationships between the variables.

First option: the lines are parallel - there is no interaction of variables (Fig. 5.1)


Sick children have lower status than healthy children, regardless of their level of intelligence, and intelligent children always have higher status (regardless of health).

Second option: physical health with a high level of intelligence increases the chance of getting a higher status in the group (Figure 5.2)


In this case, the effect of divergent interaction between two independent variables was obtained. The second variable enhances the influence of the first on the dependent variable.

Third option: convergent interaction - physical health reduces an intelligent child's chance of acquiring higher status in the group; the "health" variable weakens the influence of the "intelligence" variable on the dependent variable. There are other cases of this type of interaction: the variables may interact in such a way that an increase in the value of the first decreases the influence of the second, with a change in the sign of the dependence (Fig. 5.3)


Sick children with a high level of intelligence are less likely to receive a high status than sick children with low intelligence, while healthy children have a positive relationship between intelligence and status.

Note that it is theoretically possible to imagine that sick children would have a greater chance of achieving high status with a high level of intelligence than their healthy low-intelligence peers.

The last, fourth, possible variant of the relationships between independent variables observed in research: the case when there is an intersecting interaction between them, presented in the last graph (Fig. 5.4)

Thus, the following interactions of variables are possible: zero; divergent (with different signs of dependence); convergent; and intersecting.

The magnitude of the interaction is assessed by analysis of variance, while Student's t-test is used to assess the significance of differences between group means.

In all considered experimental design options, a balancing method is used: different groups of subjects are placed in different experimental conditions. The procedure for equalizing the composition of groups allows for comparison of results.


Moreover, in many cases it is necessary to plan an experiment so that all its participants receive all options for the influence of independent variables. Then the counterbalancing technique comes to the rescue.

McCall calls plans that implement the “all subjects - all influences” strategy “rotation experiments,” and Campbell calls them “balanced plans.” To avoid confusion between the concepts of “balancing” and “counter-balancing”, we will use the term “rotation plan”.

Rotation plans are constructed using the “Latin square” method, but, unlike the example discussed above, the rows indicate groups of subjects, not the levels of the variable, the columns indicate the levels of influence of the first independent variable (or variables), and the cells of the table indicate the levels of influence of the second independent variable.

An example of an experimental design for 3 groups (A, B, C) and 2 independent variables (X,Y) with 3 intensity levels (1st, 2nd, 3rd) is given below. It is easy to see that this plan can be rewritten so that the cells contain the levels of the Y variable (Table 5.12)

Campbell includes this design as a quasi-experimental design on the basis that it is not known whether it controls for external validity. Indeed, it is unlikely that in real life a subject can receive a series of such influences as in the experiment.

As for the interaction of group composition with other external variables, sources of artifacts, randomization of groups, according to Campbell, should minimize the influence of this factor.

Column sums in a rotation design indicate differences in effect size for the different values of one independent variable (X or Y), while row sums indicate differences between groups. If the groups have been successfully randomized, there should be no between-group differences; if group composition is an additional variable, it becomes possible to control it. The counterbalancing scheme does not eliminate the practice effect, although data from the numerous experiments using the Latin square do not permit such a conclusion.

Table 5.12. [Rotation design: groups of subjects (rows) x levels of the 1st variable (columns)]

Summarizing the consideration of various options for experimental plans, we propose their classification. Experimental designs differ on the following grounds:

1. Number of independent variables: one or more. Depending on their number, either a simple or factorial design is used.

2. The number of levels of independent variables: with 2 levels we are talking about establishing a qualitative connection, with 3 or more - a quantitative connection.

3. Who gets the impact. If the scheme “each group gets its own combination” is used, then we are talking about an intergroup plan. If the “all groups - all influences” scheme is used, then we are talking about a rotation plan. Gottsdanker calls it cross-individual comparison.

The experimental design scheme can be homogeneous or heterogeneous, depending on whether or not the number of independent variables is equal to the number of their levels.

5.1.4 Experimental designs for one subject

Experiments on samples with control of variables have been widely used in psychology since the 1910s-1920s. Experimental studies on equalized groups became especially widespread after the outstanding biologist and mathematician R. A. Fisher created the theory of experimental design and of processing experimental results (analysis of variance and covariance). But psychologists used experiments long before the theory of sample-based research design appeared. The first experimental studies were carried out with a single subject - the experimenter himself or his assistant. Beginning with G. Fechner (1860), experimentation entered psychology to test theoretical quantitative hypotheses.

A classic experimental study of a single subject was carried out by H. Ebbinghaus (1885). Ebbinghaus investigated forgetting by memorizing nonsense syllables (which he invented): he memorized series of syllables and then tried to reproduce them after a certain time. The result was the classic forgetting curve: the dependence of the volume of retained material on the time elapsed since memorization (Fig. 5.5)


In empirical scientific psychology, three research paradigms interact and contend. Representatives of one of them, deriving from the natural-science experiment, consider the only reliable knowledge to be that obtained in experiments on equivalent and representative samples. Their main argument is the need to control external variables and level out individual differences in order to find general patterns.

Representatives of the methodology of "experimental analysis of behavior" criticize the supporters of statistical analysis and sample-based experimental design. In their view, research should be conducted with a single subject, applying particular strategies that reduce the sources of artifacts during the experiment. Proponents of this methodology include such well-known researchers as B. F. Skinner, G. A. Murray, and others.

Finally, classical idiographic research is contrasted with both single-subject experiments and designs that study behavior in representative samples. Idiographic research involves the study of individual cases: biographies or behavioral characteristics of individual people. An example would be Luria’s wonderful works “The Lost and Returned World” and “A Little Book of a Big Memory.”

In many cases single-subject studies are the only option. The methodology of single-subject research was developed in the 1970s-1980s by many authors: A. Kazdin, T. Kratochwill, B. F. Skinner, F. J. McGuigan, and others.

Two sources of artifacts arise in such an experiment: a) errors in the planning strategy and in the conduct of the study; b) individual differences.

If a "correct" strategy for conducting the single-subject experiment is created, the whole problem reduces to accounting for individual differences. An experiment with one subject is possible when: a) individual differences can be neglected with respect to the variables studied, so that all subjects are considered equivalent and the data can be transferred to every member of the population; b) the subject is unique, and the problem of direct data transfer is irrelevant.

The single-subject experimentation strategy was developed by Skinner for the study of learning. Data are presented as "learning curves" in the coordinate system "time" - "total number of responses" (cumulative curve). The learning curve is first analyzed visually, considering its changes over time. If the function describing the curve changes when condition A is replaced by condition B, this may indicate a causal dependence of behavior on the external conditions (A or B).


Single-subject research is also called time series design.
The main indicator of the influence of the independent variable on the dependent variable in such a design is the change in the nature of the subject’s responses under the impact of changing experimental conditions over time. There are several basic schemes for applying this paradigm. The simplest strategy is the A-B scheme. The subject first performs the activity under conditions A, and then under conditions B (see Fig. 5.8).

When using this design, a natural question arises: would the response curve have retained its previous shape had there been no impact? In other words, this design does not control for the placebo effect. Moreover, it is unclear what produced the effect: perhaps it was not variable B but some other variable not accounted for in the experiment.

Therefore, another scheme is used more often: A-B-A. First the subject’s behavior is recorded under conditions A, then the conditions change (B), and at the third stage the previous conditions (A) return. The change in the functional connection between the independent and dependent variables is studied. If, when conditions change at the third stage, the previous type of functional relationship between the independent and dependent variables is restored, the independent variable is considered a cause capable of modifying the subject’s behavior (Fig. 5.9).


At the same time, neither the first nor the second time-series design allows the cumulation of impacts to be taken into account. It is possible that it is the combination, the sequence of conditions (A and B), that produces the effect. Nor is it obvious that, upon a return to conditions B, the curve will take the same form as when conditions B were first presented.

An example of a design that reproduces the same experimental effect twice is the A-B-A-B design. If, during the second transition from conditions A to conditions B, the change in the functional dependence of the subject’s responses on time is reproduced, this becomes evidence for the experimental hypothesis: the independent variable (A, B) influences the subject’s behavior.
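The logic of evaluating an A-B-A-B series can be illustrated with a short sketch. The phase labels and measurements below are invented for the example; in a real study each phase would contain a series of observations of the dependent variable:

```python
# Comparing mean response levels across the phases of an A-B-A-B series.
phases = {
    "A1": [4, 5, 4, 5],    # baseline
    "B1": [8, 9, 9, 10],   # intervention
    "A2": [5, 4, 5, 4],    # return to baseline
    "B2": [9, 10, 9, 10],  # intervention repeated
}

means = {name: sum(vals) / len(vals) for name, vals in phases.items()}

# The effect counts as reproduced if behavior in both B phases
# exceeds behavior in both A phases.
reproduced = min(means["B1"], means["B2"]) > max(means["A1"], means["A2"])
print(means, reproduced)
```

Here the level rises in both B phases and falls back in the second A phase, so the change in functional dependence is reproduced and the intervention is treated as the cause of the change.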

Let us consider the simplest case. We take the student’s overall knowledge as the dependent variable, and morning physical exercise (for example, wushu gymnastics) as the independent variable. We proceed from the assumption that the wushu routine has a beneficial effect on the student’s general mental state and promotes better memorization (Fig. 5.10).


It is quite clear that gymnastics had a beneficial effect on learning ability.

There are various ways of planning with the time-series method: schemes of regular alternation of series (A-B-A-B), series of stochastic sequences, and positional-adjustment schemes (for example, A-B-B-A). Modifications of the A-B-A-B scheme are the A-B-A-B-A scheme or the longer A-B-A-B-A-B-A.

Using longer series increases the likelihood of detecting an effect, but leads to subject fatigue and other cumulative effects.

Moreover, the A-B-A-B design and its various modifications do not solve three major problems:

1. What would have happened to the subject had there been no impact (the placebo effect)?

2. Might the sequence of influences A-B itself be yet another influence (an extraneous variable)?

3. What cause led to the effect: would the effect be repeated if a different influence were applied in place of B?

To control for the placebo effect, conditions that “simulate” either exposure A or exposure B are included in the A-B-A-B series. Let us consider the solution to the last problem. First, let us analyze this case: suppose a student practices wushu regularly, but from time to time a pretty girl (simply a spectator) is present at the stadium or in the gym — impact B. The A-B-A-B design revealed an increase in the effectiveness of the student’s studies during the periods when variable B appeared. What is the cause: the presence of a spectator as such, or of this particular pretty girl? To test the hypothesis of a specific cause, the experiment is structured according to the scheme A-B-A-C-A. For example, in the fourth time period another girl, or a bored pensioner, comes to the stadium. If the effectiveness of the lessons decreases significantly (different motivation), this will indicate a specific cause of the deterioration in learning. It is also possible to test the impact of condition A (wushu lessons without spectators); for this, the design A-B-C-B should be applied. Let the student stop practicing for a while in the girl’s absence. If her repeated appearance at the stadium leads to the same effect as the first time, then the cause of the improved performance lies in her, and not only in the wushu lessons (Fig. 5.11).

Please do not take this example seriously. In reality, the opposite more often happens: infatuation with girls sharply reduces students’ academic performance.

There are many techniques for conducting single-subject studies. One development of the A-B design is the “alternating impact design.” Exposures A and B are randomly distributed over time, for example by days of the week, if we are talking about two different methods of quitting smoking. Then all the moments when impact A occurred are identified, and a curve is drawn connecting the corresponding consecutive points; likewise, all the moments of the “alternative” impact B are identified, connected in temporal order, and a second curve is drawn. The two curves are then compared to determine which impact is more effective. Effectiveness is determined by the magnitude of the rise or fall of the curve (Fig. 5.12).
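The alternating-impact procedure described above can be sketched as follows. The schedule length, outcome values, and the assumption that B is somewhat more effective are all invented for the illustration:

```python
import random

# Randomly distribute exposures A and B over ten days,
# then collect the measurements taken under each condition
# into separate series for comparison.
random.seed(1)
schedule = ["A", "B"] * 5        # five days under each condition
random.shuffle(schedule)          # random assignment of conditions to days

# Invented outcomes: condition B is assumed slightly more effective here.
outcome = [7 if c == "B" else 5 for c in schedule]

series_a = [y for c, y in zip(schedule, outcome) if c == "A"]
series_b = [y for c, y in zip(schedule, outcome) if c == "B"]
mean_a = sum(series_a) / len(series_a)
mean_b = sum(series_b) / len(series_b)
print(mean_a, mean_b)  # → 5.0 7.0
```

In a real study the two series would be plotted as the two curves the text describes, and the comparison would be made visually as well as numerically.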


Synonyms for the term “alternating impact design” are “series comparison design,” “synchronized impact design,” “multiple schedule design,” etc.

Another option is the reversal design, used to study two alternative forms of behavior. First, a baseline level of both forms of behavior is recorded. The first form can be actualized with the help of a specific influence, while the second, incompatible with it, is simultaneously provoked by another type of influence. The effect of the two influences is assessed. After a certain time, the combination of influences is reversed: the first form of behavior now receives the influence that initiated the second form, and the second receives the influence relevant to the first. This design is used, for example, in studying the behavior of young children (Fig. 5.13)

In the psychology of learning, the method of changing criteria, or the “increasing-criterion design,” is used. Its essence is that a change in the subject’s behavior is recorded in response to a stepwise increase in the influence. An increase in the registered behavioral parameter is recorded, and the next influence is applied only after the subject reaches the specified criterion level. Once performance has stabilized, the next gradation of the influence is presented. The curve of a successful experiment (one that confirms the hypothesis) resembles a staircase, where the beginning of each step coincides with the onset of a new level of influence and its end with the subject’s reaching the next criterion.

A way to level out the sequence effect is to invert the order of influences: the A-B-B-A design. Sequence effects (also called order effects, or transfer effects) are associated with the influence of a preceding exposure on a subsequent one. Transfer can be positive or negative, symmetric or asymmetric. The sequence A-B-B-A is called a positionally equalized scheme. As Gottsdanker notes, the effects of variables A and B are subject to early or late carryover: exposure A is associated with late transfer, and B with early transfer. Moreover, if a cumulative effect is present, two consecutive exposures B can affect the subject as a single cumulative exposure. An experiment can succeed only if these effects are insignificant. The variants of designs with regular alternation or random sequences discussed above are usually very long, which makes them difficult to implement.
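The positional equalization idea can be sketched as a small generator. The mirroring rule is an assumption about how the A-B-B-A block extends to longer sequences, chosen so that each condition occupies early and late positions equally often:

```python
# Build a positionally counterbalanced sequence by following each
# presentation of a block with its mirror image (A-B then B-A).
def counterbalanced(block, repeats):
    seq = []
    for _ in range(repeats):
        seq.extend(block)            # forward order:  A, B
        seq.extend(reversed(block))  # mirrored order: B, A
    return seq

seq = counterbalanced(["A", "B"], 2)
print(seq)  # → ['A', 'B', 'B', 'A', 'A', 'B', 'B', 'A']
```

In the resulting sequence each condition appears the same number of times in the first and second halves of every block pair, which is what balances early against late transfer.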

To summarize briefly: schemes for presenting the influence are chosen depending on the capabilities available to the experimenter.

A random sequence of influences is obtained by randomizing the tasks. It is used in experiments requiring a large number of trials. Random alternation of influences guarantees against the manifestation of sequence effects.

For a small number of trials, a regular alternation scheme of the A-B-A-B type is recommended. Attention should be paid to the periodicity of background influences, which may coincide with the action of the independent variable. For example, if one intelligence test is always given in the morning and the second always in the evening, then, under the influence of fatigue, performance on the second test will decline.

A positionally equalized sequence is suitable only when the number of influences (tasks) is small and the effects of early and late transfer are insignificant.

But no scheme excludes the manifestation of differentiated asymmetric transfer, when the influence of a preceding exposure A on the effect of exposure B is greater than the influence of a preceding exposure B on the effect of exposure A (or vice versa).

The variety of single-subject designs was summarized by D. Barlow and M. Hersen in the monograph “Single Case Experimental Designs” (1984) (Table 5.13).

Table 5.13


Major artifacts in single-subject studies are virtually unavoidable. It is hard to imagine how the effects associated with the irreversibility of events could be eliminated. And while order effects and interactions of variables are to some extent controllable, the already mentioned effect of asymmetry (differentiated transfer) cannot be eliminated.

No fewer problems arise in establishing the initial level of intensity of the recorded behavior (the level of the dependent variable). The initial level of aggressiveness recorded in a child in a laboratory experiment may be atypical for him, since it may have been caused by recent events, for example, a quarrel in the family or suppression of his activity by peers or kindergarten teachers.

The main problem is the possibility of transferring the results of a study of one subject to each representative of the population, that is, of taking into account the individual differences that are significant for the study. A theoretically possible move is to present individual data in “dimensionless” form: individual parameter values are normalized to a value equal to the spread of values in the population.


Let us consider an example. In the early 1960s, a problem arose in B.M. Teplov’s laboratory: why do the graphs describing the dependence of reaction time on stimulus intensity differ across subjects? V.D. Nebylitsyn [Nebylitsyn V.D., 1966] proposed presenting subjects with a signal varying not in units of physical intensity but in units of a previously measured individual absolute threshold (“one threshold,” “two thresholds,” etc.). The results of the experiment brilliantly confirmed Nebylitsyn’s hypothesis: the curves of the dependence of reaction time on the level of impact, measured in units of the individual absolute threshold, turned out to be identical for all subjects.

A similar scheme is used in interpreting data. At the Institute of Psychology of the Russian Academy of Sciences, A.V. Drynkov studied the formation of simple artificial concepts. The learning curves showed the dependence of success on time, and they turned out to differ across subjects: each was described by a power function. Drynkov suggested that normalizing the individual indicators to the value of the initial level of training (along the Y axis) and to the individual time needed to reach the criterion (along the X axis) would yield a functional dependence of success on time that is the same for all subjects. This was confirmed: the subjects’ individual results, presented in “dimensionless” form, obeyed a quadratic power law.
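The normalization step described for Drynkov's data can be sketched as follows. The subject names, times, and scores are invented; the point is only the rescaling rule: divide each subject's times by their own time to criterion and their scores by their own initial level:

```python
# Two hypothetical subjects whose raw learning curves differ
# in scale but share the same underlying shape.
subjects = {
    "s1": {"times": [0, 5, 10], "scores": [2.0, 4.0, 8.0]},
    "s2": {"times": [0, 10, 20], "scores": [1.0, 2.0, 4.0]},
}

normalized = {}
for name, d in subjects.items():
    t_max = d["times"][-1]  # individual time to reach the criterion (X axis)
    y0 = d["scores"][0]     # individual initial level of training (Y axis)
    normalized[name] = {
        "t": [t / t_max for t in d["times"]],
        "y": [y / y0 for y in d["scores"]],
    }

# After normalization the two subjects fall on the same dimensionless curve.
print(normalized["s1"] == normalized["s2"])  # → True
```

This is the sense in which individual differences are "leveled out": once each curve is expressed in the subject's own units, a common functional dependence can emerge.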

Consequently, the problem of identifying a general pattern while leveling out individual differences is solved each time on the basis of a meaningful hypothesis about the influence of an additional variable on interindividual variation in the experimental results.

Let us dwell once more on one feature of single-subject experiments. Their results depend heavily on the experimenter’s biases and on the relationship that develops between experimenter and subject. In a long series of sequential influences, the experimenter may, consciously or unconsciously, act so that the subject actualizes behavior confirming the experimental hypothesis. That is why “blind” and “double-blind” experiments are recommended in this kind of research. In the blind design, the experimenter knows, but the subject does not, when the latter receives the placebo and when the treatment. In the double-blind design, the experiment is conducted by a researcher who is unfamiliar with the hypothesis and does not know when the subject receives the placebo or the treatment.


Single-subject experiments play an important role in psychophysiology, psychophysics, the psychology of learning, and cognitive psychology. Their methodology has penetrated the psychology of programmed instruction and social management, as well as clinical psychology, especially behavior therapy, whose main promoter is Eysenck [Eysenck G.Yu., 1999].


