2. Methods for solving homogeneous and inhomogeneous SLAEs. Homogeneous systems of linear algebraic equations

A system of linear homogeneous equations has the form ∑ a_ki·x_i = 0, k = 1, …, m. A homogeneous system of linear equations is always consistent, since rank A = rank B (the rank of the coefficient matrix always equals the rank of the augmented matrix B). It obviously has the all-zero solution, which is called trivial.

Properties of systems of linear homogeneous equations

In order for the system to have non-trivial solutions, it is necessary and sufficient that the rank of its matrix be less than the number of unknowns.

Theorem. A system in the case m=n has a nontrivial solution if and only if the determinant of this system is equal to zero.

Theorem. Any linear combination of solutions to a system is also a solution to that system.
Definition. A set of solutions of a system of linear homogeneous equations is called a fundamental system of solutions if it consists of linearly independent solutions and every solution of the system is a linear combination of them.

Theorem. If the rank r of the system matrix is less than the number n of unknowns, then there exists a fundamental system of solutions consisting of (n - r) solutions.
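These two statements are easy to check numerically; below is a small sketch with numpy (the matrix here is an arbitrary illustration, not one taken from the text):

```python
import numpy as np

# Hypothetical coefficient matrix of a homogeneous system A x = 0
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # a multiple of the first row
              [1.0, 0.0, 1.0]])

n = A.shape[1]                   # number of unknowns
r = np.linalg.matrix_rank(A)     # rank of the coefficient matrix

print("rank =", r, "unknowns =", n)
print("non-trivial solutions exist:", r < n)
print("size of a fundamental system of solutions:", n - r)
```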

Algorithm for solving systems of linear homogeneous equations

  1. Find the rank of the matrix.
  2. Select a basis minor. Separate the unknowns into dependent (basic) and free ones.
  3. Cross out those equations of the system whose coefficients do not enter the basis minor: they are consequences of the others (by the basis minor theorem).
  4. Move the terms of the equations containing the free unknowns to the right-hand side. The result is a system of r equations in r unknowns, equivalent to the given one, whose determinant is nonzero.
  5. Solve the resulting system by eliminating unknowns, obtaining relations that express the dependent variables through the free ones.
  6. If the rank of the matrix is not equal to the number of variables, find the fundamental system of solutions.
  7. In the case rank A = n there is only the trivial solution. (A code sketch of this algorithm is given after the list.)
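A minimal sketch of steps 1-7 in Python, assuming the sympy library: `Matrix.rref()` performs the elimination of steps 2-5 and `nullspace()` returns a fundamental system of solutions. The matrix below is a hypothetical illustration, not an example from the text.

```python
from sympy import Matrix

# Hypothetical homogeneous system A x = 0
A = Matrix([[1, 2, -1, 3],
            [2, 4, -2, 6],   # a consequence of the first equation
            [1, 0,  1, 1]])

r = A.rank()          # step 1
n = A.cols
print("rank =", r, "unknowns =", n)

# rref() realizes steps 2-5: the pivot columns mark the basic unknowns,
# the remaining columns correspond to the free unknowns
reduced, pivot_columns = A.rref()
print("basic unknowns:", [f"x{j + 1}" for j in pivot_columns])

if r < n:
    # step 6: a fundamental system of n - r linearly independent solutions
    for k, v in enumerate(A.nullspace(), start=1):
        print(f"E{k} =", list(v))
else:
    print("only the trivial solution")   # step 7
```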

Example. Find a basis of the system of vectors (a1, a2, …, am), determine its rank and express the remaining vectors through the basis, given a1 = (0, 0, 1, -1), a2 = (1, 1, 2, 0), a3 = (1, 1, 1, 1), a4 = (3, 2, 1, 4), a5 = (2, 1, 0, 3).
Let's write down the main matrix of the system, whose rows are the vectors a1, …, a5:

0  0  1 -1
1  1  2  0
1  1  1  1
3  2  1  4
2  1  0  3

Multiply the 3rd row by (-3) and add the 4th row to the 3rd:
0  0  1 -1
0  0 -1  1
0 -1 -2  1
3  2  1  4
2  1  0  3

Multiply the 4th row by (-2), multiply the 5th row by 3, and add the 5th row to the 4th:
Add the 2nd row to the 1st:
Let's find the rank of the matrix.
The system with the coefficients of this matrix is equivalent to the original system and has the form:
-x3 = -x4
-x2 - 2x3 = -x4
2x1 + x2 = -3x4
Using the method of eliminating unknowns, we find a nontrivial solution. We obtain relations expressing the dependent variables x1, x2, x3 through the free unknown x4, that is, the general solution:
x3 = x4
x2 = -x4
x1 = -x4
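This general solution can be verified directly; here is a sketch with sympy, assuming (as in the matrix written out above) that the vectors a1, …, a5 are the rows of the main matrix:

```python
from sympy import Matrix

# Rows are the vectors a1..a5 from the example
A = Matrix([[0, 0, 1, -1],
            [1, 1, 2,  0],
            [1, 1, 1,  1],
            [3, 2, 1,  4],
            [2, 1, 0,  3]])

print("rank =", A.rank())   # expected: 3, so n - r = 4 - 3 = 1

# One fundamental solution; x4 is the free unknown,
# the basis vector is proportional to (-1, -1, 1, 1)
print(A.nullspace())

# Direct check of x1 = -x4, x2 = -x4, x3 = x4 with x4 = 1
x = Matrix([-1, -1, 1, 1])
print(A * x)                # the zero column
```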

A system of linear equations in which all free terms are equal to zero is called homogeneous:

Any homogeneous system is always consistent, since it always has the zero (trivial) solution. The question arises: under what conditions does a homogeneous system have a nontrivial solution?

Theorem 5.2. A homogeneous system has a nontrivial solution if and only if the rank of its main matrix is less than the number of its unknowns.

Corollary. A square homogeneous system has a nontrivial solution if and only if the determinant of the main matrix of the system is equal to zero.

Example 5.6. Determine the values of the parameter λ for which the system has nontrivial solutions, and find these solutions:

Solution. This system has a non-trivial solution when the determinant of the main matrix is equal to zero:

Thus, the system has nontrivial solutions when λ = 3 or λ = 2. For λ = 3 the rank of the main matrix of the system equals 1. Then, leaving only one equation and setting y = a and z = b, we get x = b - a, i.e.

For λ = 2 the rank of the main matrix of the system equals 2. Then, choosing the basis minor:

we get a simplified system

From here we find that x = z/4, y = z/2. Setting z = 4a, we get
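The same technique is easy to script. Below is a sketch with sympy; since the matrix of Example 5.6 is not reproduced above, the parameter-dependent matrix here is only a stand-in that illustrates the procedure (find the roots of det A = 0, then the rank and the nontrivial solutions for each root):

```python
from sympy import Matrix, symbols, solve

lam = symbols('lambda')

# Stand-in 3x3 matrix of a square homogeneous system depending on a parameter
A = Matrix([[lam, 1, 1],
            [1, lam, 1],
            [1, 1, lam]])

# Non-trivial solutions exist exactly when det A = 0
roots = solve(A.det(), lam)
print("parameter values:", roots)   # lambda = 1 and lambda = -2

for value in roots:
    Av = A.subs(lam, value)
    print("lambda =", value,
          "rank =", Av.rank(),
          "fundamental solutions:", Av.nullspace())
```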

The set of all solutions of a homogeneous system has a very important linear property: if the columns X1 and X2 are solutions of the homogeneous system AX = 0, then any linear combination of them, αX1 + βX2, is also a solution of this system. Indeed, since AX1 = 0 and AX2 = 0, we have A(αX1 + βX2) = αAX1 + βAX2 = α·0 + β·0 = 0. It is because of this property that if a linear system has more than one solution, it has infinitely many solutions.

Linearly independent columns E1, E2, …, Ek that are solutions of a homogeneous system are called a fundamental system of solutions of the homogeneous system of linear equations if the general solution of this system can be written as a linear combination of these columns:

If a homogeneous system has n variables and the rank of the main matrix of the system equals r, then k = n - r.

Example 5.7. Find the fundamental system of solutions to the following system of linear equations:

Solution. Let's find the rank of the main matrix of the system:

Thus, the set of solutions of this system of equations forms a linear subspace of dimension n - r = 5 - 2 = 3. Let's choose as basis the minor

Then, leaving only the basic equations (the rest are linear combinations of them) and the basic variables (the remaining, so-called free variables are moved to the right-hand side), we obtain a simplified system of equations:

Setting x3 = a, x4 = b, x5 = c, we find


Setting a = 1, b = c = 0, we obtain the first basis solution; setting b = 1, a = c = 0, we obtain the second basis solution; setting c = 1, a = b = 0, we obtain the third basis solution. As a result, the normal fundamental system of solutions takes the form

Using the fundamental system, the general solution of a homogeneous system can be written as

X = aE1 + bE2 + cE3.

Let us note some properties of solutions to an inhomogeneous system of linear equations AX=B and their relationship with the corresponding homogeneous system of equations AX = 0.

The general solution of an inhomogeneous system is equal to the sum of the general solution of the corresponding homogeneous system AX = 0 and an arbitrary particular solution of the inhomogeneous system. Indeed, let Y0 be an arbitrary particular solution of the inhomogeneous system, i.e. AY0 = B, and let Y be the general solution of the inhomogeneous system, i.e. AY = B. Subtracting one equality from the other, we get
A(Y - Y0) = 0, i.e. Y - Y0 is a solution of the corresponding homogeneous system AX = 0. Hence Y - Y0 = X, or Y = Y0 + X. Q.E.D.

Let the inhomogeneous system have the form AX = B1 + B2. Then the general solution of such a system can be written as X = X1 + X2, where AX1 = B1 and AX2 = B2. This expresses a universal property of linear systems of any kind (algebraic, differential, functional, etc.). In physics this property is called the superposition principle; in electrical and radio engineering, the principle of superposition. For example, in the theory of linear electrical circuits the current in any circuit can be obtained as the algebraic sum of the currents produced by each energy source separately.
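A numerical sketch of the particular-plus-homogeneous decomposition described above (numpy; the system is a hypothetical illustration):

```python
import numpy as np

# Hypothetical consistent inhomogeneous system A x = b with infinitely many solutions
A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0]])
b = np.array([6.0, 14.0])

# A particular solution Y0 (least squares returns one exactly, since the system is consistent)
Y0, *_ = np.linalg.lstsq(A, b, rcond=None)

# A non-trivial solution X of the homogeneous system A x = 0
X = np.array([1.0, -2.0, 1.0])
print(np.allclose(A @ X, 0))                  # True

# Any Y0 + t*X also solves A x = b (superposition of solutions)
for t in (0.0, 1.0, -3.5):
    print(np.allclose(A @ (Y0 + t * X), b))   # True for every t
```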

6.3. HOMOGENEOUS SYSTEMS OF LINEAR EQUATIONS

Suppose now that in system (6.1) all the free terms are equal to zero.

A homogeneous system is always consistent. The solution x1 = x2 = … = xn = 0 is called the zero, or trivial, solution.

A homogeneous system (6.1) has a nonzero solution if and only if its rank r is less than the number of unknowns. In particular, a homogeneous system in which the number of equations equals the number of unknowns has a nonzero solution if and only if its determinant is zero.

Since this time all the free terms are zero, instead of formulas (6.6) we obtain the following:

(6.7)

Formulas (6.7) contain any solution of the homogeneous system (6.1).

1. The set of all solutions to the homogeneous system of linear equations (6.1) forms a linear space.

2. The linear space R of all solutions of the homogeneous system of linear equations (6.1) with n unknowns and a main matrix of rank r has dimension n - r.

Any set of (n - r) linearly independent solutions of the homogeneous system (6.1) forms a basis in the space R of all solutions. It is called a fundamental set of solutions of the homogeneous system of equations (6.1). Special emphasis is placed on the "normal" fundamental set of solutions of the homogeneous system (6.1):




(6.8)

By the definition of a basis, any solution X of the homogeneous system (6.1) can be represented in the form

(6.9)

where the coefficients of the combination are arbitrary constants.

Since formula (6.9) contains every solution of the homogeneous system (6.1), it gives the general solution of this system.

Example.

A homogeneous system of linear equations AX = 0 is always consistent. It has non-trivial (non-zero) solutions if r = rank A < n.

For homogeneous systems, the basic variables (the coefficients of which form the basic minor) are expressed through free variables by relations of the form:

Then the n - r linearly independent vector solutions will be:

and any other solution is a linear combination of them. Vector solutions form a normalized fundamental system.

In a linear space, the set of solutions of a homogeneous system of linear equations forms a subspace of dimension n - r; these vector solutions form a basis of this subspace.

A system of m linear equations with n unknowns (or a linear system) is a system of equations of the form

Here x1, x2, …, xn are the unknowns to be determined, a11, a12, …, amn are the coefficients of the system, and b1, b2, …, bm are the free terms; the coefficients and free terms are assumed known. The indices of a coefficient (aij) denote the number of the equation (i) and of the unknown (j) at which it stands.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0), and inhomogeneous otherwise.

System (1) is called square if the number m of equations equals the number n of unknowns.

A solution of system (1) is a set of n numbers c1, c2, …, cn such that substituting each ci for xi turns all the equations of system (1) into identities.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system of the form (1) are called distinct if at least one of the equalities

c1(1) = c1(2), c2(1) = c2(2), …, cn(1) = cn(2)

is violated.

A consistent system of the form (1) is called determinate if it has a unique solution and indeterminate if it has at least two distinct solutions. If it has more equations than unknowns, it is called overdetermined.

Solving systems of linear equations

Solving matrix equations ~ Gauss method

Methods for solving systems of linear equations are divided into two groups:

1. exact methods, which are finite algorithms for computing the roots of a system (solution via the inverse matrix, Cramer's rule, the Gauss method, etc.),

2. iterative methods, which make it possible to obtain a solution to the system with a given accuracy through convergent iterative processes (iteration method, Seidel method, etc.).

Due to inevitable rounding, the results of even exact methods are approximate. When using iterative methods, in addition, the error of the method is added.

The effective use of iterative methods significantly depends on the successful choice of the initial approximation and the speed of convergence of the process.

Solving matrix equations

Consider a system of n linear algebraic equations in n unknowns x1, x2, …, xn:

. (15)

The matrix A, whose columns contain the coefficients of the corresponding unknowns and whose rows contain the coefficients of the unknowns in the corresponding equation, is called the matrix of the system; the column matrix b, whose elements are the right-hand sides of the equations of the system, is called the right-hand side of the system. The column matrix X, whose elements are the unknowns, is called the solution of the system.

If the matrix A is non-singular, that is, det A is not equal to 0, then system (13), or the matrix equation (14) equivalent to it, has a unique solution.

Indeed, if det A is not equal to 0, the inverse matrix A⁻¹ exists. Multiplying both sides of equation (14) on the left by A⁻¹, we get:

X = A⁻¹b. (16)

Formula (16) gives the solution of equation (14), and this solution is unique.

It is convenient to solve systems of linear equations using the function lsolve.

lsolve(A, b)

Returns the solution vector x such that Ax = b.

Arguments:

A is a square, non-singular matrix.

b is a vector having the same number of rows as the matrix A.

Figure 8 shows the solution to a system of three linear equations in three unknowns.
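Outside Mathcad the same call is available in many libraries; here is a sketch of an equivalent in Python with numpy.linalg.solve (the 3×3 system is only an illustration, since Figure 8 is not reproduced here):

```python
import numpy as np

# Illustrative square, non-singular system A x = b of three equations
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

x = np.linalg.solve(A, b)      # analogue of lsolve(A, b)
print(x)                       # [ 2.  3. -1.]
print(np.allclose(A @ x, b))   # True
```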

Gauss method

The Gaussian method, also called the method of Gaussian elimination, consists in reducing system (13), by successive elimination of unknowns, to an equivalent system with a triangular matrix:

In matrix notation this means that first (the forward pass of the Gaussian method) the augmented matrix of the system is reduced to echelon form by elementary row operations:

and then (the backward pass of the Gaussian method) this echelon matrix is transformed so that the first n columns form an identity matrix:


The last, (n + 1)-th, column of this matrix contains the solution of system (13).

In Mathcad, the forward and backward passes of the Gaussian method are performed by the function rref(A).

Figure 9 shows the solution of a system of linear equations using the Gaussian method, which uses the following functions:

rref(A)

Returns the reduced echelon form of the matrix A.

augment(A, B)

Returns the array formed by placing A and B side by side. The arrays A and B must have the same number of rows.

submatrix(A, ir, jr, ic, jc)

Returns the submatrix consisting of all elements in rows ir through jr and columns ic through jc. Make sure that ir ≤ jr and ic ≤ jc, otherwise the order of the rows and/or columns will be reversed.

Figure 9.
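The same pipeline (augment the matrix, reduce it to echelon form, take the last column) can be sketched in Python with sympy; the function names differ from Mathcad, but the steps match:

```python
from sympy import Matrix

# Illustrative system A x = b with a unique solution
A = Matrix([[2, 1, -1],
            [-3, -1, 2],
            [-2, 1, 2]])
b = Matrix([8, -11, -3])

augmented = A.row_join(b)           # analogue of augment(A, b)
reduced, pivots = augmented.rref()  # forward and backward passes of Gauss

# analogue of extracting a submatrix: the last column holds the solution
solution = reduced[:, -1]
print(list(solution))               # [2, 3, -1]
```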

Description of the method (Cramer's rule)

For a system of n linear equations with n unknowns (over an arbitrary field)

with the determinant Δ of the system matrix different from zero, the solution is written as

x_i = Δ_i / Δ

(Δ_i is the determinant of the matrix obtained from the system matrix by replacing its i-th column with the column of free terms).
In another form, Cramer’s rule is formulated as follows: for any coefficients c1, c2, ..., cn the following equality holds:

In this form, Cramer's formula is valid without the assumption that Δ is different from zero; it is not even necessary that the coefficients of the system be elements of an integral domain (the determinant of the system may even be a zero divisor in the coefficient ring). One may also assume that either the sets b1, b2, …, bn and x1, x2, …, xn, or the set c1, c2, …, cn, consist not of elements of the coefficient ring of the system but of some module over this ring. In this form, Cramer's formula is used, for example, in the proof of the formula for the Gram determinant and of Nakayama's lemma.
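A direct implementation of Cramer's rule as stated above (a sketch in Python with numpy; the numerical system is only an illustration):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("determinant is zero: Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b                    # replace the i-th column by the free terms
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(cramer(A, b))                      # approximately [ 2.  3. -1.]
```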

35) Kronecker-Capelli theorem
In order for a system of m inhomogeneous linear equations in n unknowns to be consistent, it is necessary and sufficient that the rank of its main matrix equal the rank of its extended matrix.

Proof of necessity. Let system (1.13) be consistent, that is, there exist numbers x1 = α1, x2 = α2, …, xn = αn satisfying (1.15). Subtract from the last column of the extended matrix its first column multiplied by α1, the second multiplied by α2, …, the n-th multiplied by αn; that is, from the last column of matrix (1.14) we subtract the left-hand sides of the equalities (1.15). We obtain a matrix whose rank is unchanged by these elementary transformations, and the required equality of ranks follows.

Proof of sufficiency. Let the ranks be equal and, for definiteness, let a non-zero minor of order r be located in the upper left corner of the matrix. Then the remaining rows of the matrix can be obtained as linear combinations of the first r rows, that is, the m - r remaining rows can be represented as sums of the first r rows multiplied by some numbers. But then the first r equations of system (1.13) are independent, and the rest are their consequences: a solution of the system of the first r equations is automatically a solution of the remaining equations. Two cases are possible.

1. r = n. Then the system consisting of the first r equations has the same number of equations and unknowns; it is consistent, and its solution is unique.

2. r < n. We obtain the shortened system (1.16). The "free" unknowns xr+1, xr+2, …, xn can be given arbitrary values; the unknowns x1, x2, …, xr then take the corresponding values. In this case system (1.13) is consistent but indeterminate.

Comment. A nonzero minor of order r is called a basis minor; the unknowns x1, x2, …, xr whose coefficients enter it are called basic, and the rest are free. System (1.16) is called the shortened system. If the free unknowns are denoted xr+1 = c1, xr+2 = c2, …, xn = cn-r, then the basic unknowns depend on them, and the solution of a system of m equations with n unknowns has the form X = (x1(c1, …, cn-r), x2(c1, …, cn-r), …, xr(c1, …, cn-r), c1, c2, …, cn-r)^T, where T denotes transposition. This solution of the system is called the general solution.

36) Determinate and indeterminate systems
A system of m linear equations with n unknowns (or a linear system) in linear algebra is a system of equations of the form

Here x1, x2, …, xn are the unknowns that need to be determined, a11, a12, …, amn are the coefficients of the system, and b1, b2, …, bm are the free terms; the coefficients and free terms are assumed to be known. The indices of a coefficient (aij) denote the number of the equation (i) and of the unknown (j) at which it stands, respectively.

System (1) is called homogeneous if all its free terms are equal to zero (b1 = b2 = … = bm = 0), and inhomogeneous otherwise.

System (1) is called consistent if it has at least one solution, and inconsistent if it has no solutions.

A consistent system of the form (1) may have one or many solutions.

Solutions c1(1), c2(1), …, cn(1) and c1(2), c2(2), …, cn(2) of a consistent system of the form (1) are called distinct if at least one of the equalities is violated:

c1(1) = c1(2), c2(1) = c2(2), …, cn(1) = cn(2).

A consistent system of the form (1) is called determinate if it has a unique solution; if it has at least two distinct solutions, it is called indeterminate.

37) Solving systems of linear equations using the Gauss method

Let the original system look like this

The matrix A is called the main matrix of the system, and b is the column of free terms.

Then, using the properties of elementary row transformations, the main matrix of this system can be reduced to echelon form (the same transformations being applied to the column of free terms):

The variables corresponding to the leading (pivot) coefficients are then called the basic variables. All others are called free.

Consistency condition

The above condition can be formulated as a necessary and sufficient condition for consistency:

Recall that the rank of a consistent system is the rank of its main matrix (or of its extended matrix, since they are equal).

Algorithm

Description

The algorithm for solving SLAEs using the Gaussian method is divided into two stages.

§ At the first stage, the so-called forward pass is carried out: by means of elementary row transformations the system is brought to an echelon or triangular form, or it is established that the system is inconsistent. Namely, among the elements of the first column of the matrix a non-zero one is selected and moved to the uppermost position by swapping rows; then the first row obtained after the swap is subtracted from each of the remaining rows, multiplied by the ratio of the first element of that row to the first element of the first row, thus zeroing the column below it. After these transformations are completed, the first row and first column are mentally crossed out and the process continues until a matrix of zero size remains. If at some iteration there is no non-zero element among the elements of the first column, we pass to the next column and perform a similar operation.

§ At the second stage, the so-called backward pass is carried out. Its essence is to express all the resulting basic variables in terms of the non-basic ones and construct a fundamental system of solutions or, if all the variables are basic, to express numerically the unique solution of the system of linear equations. This procedure begins with the last equation, from which the corresponding basic variable is expressed (there is only one) and substituted into the preceding equations, and so on, going up the "steps". Each row corresponds to exactly one basic variable, so at every step except the last (topmost) the situation exactly repeats the case of the last row.

The Gaussian method requires on the order of O(n³) operations.
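A compact sketch of both passes for a square non-singular system (Python with numpy; row pivoting is used in the forward pass, as described above):

```python
import numpy as np

def gauss_solve(A, b):
    """Gaussian elimination: forward pass to a triangular form with row
    swapping for a non-zero pivot, then the backward pass (back substitution).
    A sketch for a square, non-singular matrix A."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # forward pass: zero out the column below each pivot
    for k in range(n):
        pivot = k + int(np.argmax(np.abs(A[k:, k])))  # pick a non-zero pivot
        if pivot != k:                                # swap rows k and pivot
            A[[k, pivot]] = A[[pivot, k]]
            b[[k, pivot]] = b[[pivot, k]]
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]

    # backward pass: express each variable starting from the last equation
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))   # approximately [ 2.  3. -1.]
```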

This method relies on:

38) Kronecker-Capelli theorem.
A system is consistent if and only if the rank of its main matrix is equal to the rank of its extended matrix.
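A numerical check of this criterion (a sketch with numpy on two small illustrative systems):

```python
import numpy as np

def is_consistent(A, b):
    """Kronecker-Capelli: A x = b is consistent iff
    rank(A) equals the rank of the extended matrix [A | b]."""
    extended = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(extended)

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])
print(is_consistent(A, np.array([3.0, 6.0])))   # True: the ranks coincide
print(is_consistent(A, np.array([3.0, 7.0])))   # False: inconsistent system
```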


Now let us move on to the description of all the necessary steps; this will help you understand in more detail what a fundamental system of solutions is.

How to find the fundamental system of solutions of a system of linear equations?

Let's take for example the following system of linear equations:

Let's find the solution of this linear system of equations. To begin with, we need to write out the coefficient matrix of the system.

Let's transform this matrix to a triangular form. We rewrite the first row without changes, and all the elements under $a_{11}$ must be made zero. To put a zero in place of the element $a_{21}$, subtract the first row from the second and write the difference in the second row. To put a zero in place of $a_{31}$, subtract the first row from the third and write the difference in the third row. To put a zero in place of $a_{41}$, subtract the first row multiplied by 2 from the fourth and write the difference in the fourth row. To put a zero in place of $a_{51}$, subtract the first row multiplied by 2 from the fifth and write the difference in the fifth row.

We rewrite the first and second rows without changes, and all the elements under $a_{22}$ must be made zero. To put a zero in place of $a_{32}$, subtract the second row multiplied by 2 from the third and write the difference in the third row. To put a zero in place of $a_{42}$, subtract the second row multiplied by 2 from the fourth and write the difference in the fourth row. To put a zero in place of $a_{52}$, subtract the second row multiplied by 3 from the fifth and write the difference in the fifth row.

We see that the last three rows are the same, so if we subtract the third from the fourth and the fifth, they become zero.

From this matrix we write out a new system of equations.

We see that we have only three linearly independent equations and five unknowns, so the fundamental system of solutions will consist of two vectors. Thus, we need to move the last two unknowns to the right-hand side.

Now we begin to express the unknowns on the left side through those on the right side. We start with the last equation: first we express $x_3$, then we substitute the result into the second equation and express $x_2$, and then into the first equation, where we express $x_1$. Thus, we have expressed all the unknowns on the left side through the unknowns on the right side.

Then, instead of $x_4$ and $x_5$, we can substitute any numbers and find $x_1$, $x_2$ and $x_3$. Each such set of five numbers will be a solution of our original system of equations. To find the vectors included in the fundamental system of solutions (FSR), we substitute 1 for $x_4$ and 0 for $x_5$, find $x_1$, $x_2$ and $x_3$, and then do the reverse: $x_4=0$ and $x_5=1$.
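The described procedure is easy to reproduce with sympy. Since the coefficient matrix of this example is not reproduced above, the sketch below uses a stand-in system of three independent equations in five unknowns: the basic unknowns $x_1$, $x_2$, $x_3$ are expressed through the free ones $x_4$, $x_5$, and the two FSR vectors are obtained by the substitutions $x_4=1$, $x_5=0$ and $x_4=0$, $x_5=1$.

```python
from sympy import symbols, solve

x1, x2, x3, x4, x5 = symbols('x1 x2 x3 x4 x5')

# Stand-in system of rank 3 with five unknowns (only an illustration
# of the procedure; the article's own matrix is not reproduced here)
equations = [x1 + x2 + x3 + x4 + x5,
             x2 + x3 + 2*x4 + x5,
             x3 + x4 + 2*x5]

# Express the basic unknowns x1, x2, x3 through the free unknowns x4, x5
general = solve(equations, [x1, x2, x3])
print(general)   # e.g. {x1: x4, x2: -x4 + x5, x3: -x4 - 2*x5}

# First FSR vector: x4 = 1, x5 = 0; second FSR vector: x4 = 0, x5 = 1
for free_values in ({x4: 1, x5: 0}, {x4: 0, x5: 1}):
    basic = {v: expr.subs(free_values) for v, expr in general.items()}
    print({**basic, **free_values})
```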


