
Equivalent matrices. Solving arbitrary systems of linear equations. Elementary transformations of systems

Transition to a new basis.

Let $e_1, \dots, e_m$ (1) and $e'_1, \dots, e'_m$ (2) be two bases of the same m-dimensional linear space X.

Since (1) is a basis, the vectors of the second basis can be expanded over it:

$e'_j = \sum_{i=1}^{m} p_{ij} e_i, \quad j = 1, \dots, m. \qquad (3)$

From the coefficients $p_{ij}$ we compose the matrix

$P = (p_{ij}), \quad i, j = 1, \dots, m, \qquad (4)$

which is called the coordinate transformation matrix for the transition from basis (1) to basis (2).

Let $x$ be a vector; expanding it over each of the bases, we get $x = \sum_{i=1}^{m} \xi_i e_i$ (5) and $x = \sum_{j=1}^{m} \xi'_j e'_j$ (6).

Substituting (3) into (6) and comparing with (5), we obtain

$\xi_i = \sum_{j=1}^{m} p_{ij} \xi'_j, \quad i = 1, \dots, m. \qquad (7)$

Relation (7) means that, in matrix form, for the coordinate columns $\xi$ and $\xi'$

$\xi = P \xi'. \qquad (8)$

The matrix P is non-degenerate, since otherwise there would be a linear relationship between its columns, and then between the vectors $e'_1, \dots, e'_m$ of basis (2), which is impossible.

The converse is also true: any nondegenerate matrix is the coordinate transformation matrix of some transition, defined by formula (8). Since P is nondegenerate, it has an inverse $P^{-1}$. Multiplying both sides of (8) on the left by $P^{-1}$, we get

$\xi' = P^{-1} \xi. \qquad (9)$

Now let three bases be chosen in the linear space X: $\{e_i\}$ (10), $\{e'_i\}$ (11), $\{e''_i\}$ (12). Let P be the transformation matrix for the transition from (10) to (11), and Q for the transition from (11) to (12). Then by (8)

$\xi = P \xi', \quad \xi' = Q \xi'',$

whence

$\xi = (PQ)\, \xi'', \qquad (13)$

i.e. the matrix of the transition from (10) to (12) is PQ.

Thus, under successive transformations of coordinates, the matrix of the resulting transformation equals the product of the matrices of the component transformations.
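This multiplication rule is easy to check numerically. The following sketch (not part of the lecture; the variable names and the use of numpy are my own) stores three bases of R^3 as matrix columns and verifies that the transition matrix computed directly from basis (10) to basis (12) equals the product of the two intermediate transition matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

E0 = np.eye(3)                           # basis (10): the standard basis
E1 = rng.random((3, 3)) + 3 * np.eye(3)  # basis (11); columns = basis vectors
E2 = rng.random((3, 3)) + 3 * np.eye(3)  # basis (12); diagonal shift keeps it nondegenerate

def transition(Ea, Eb):
    """Transition matrix from basis Ea to basis Eb: its columns are the
    coordinates of the vectors of Eb expanded over Ea, as in formula (3)."""
    return np.linalg.solve(Ea, Eb)

P = transition(E0, E1)   # (10) -> (11)
Q = transition(E1, E2)   # (11) -> (12)
T = transition(E0, E2)   # (10) -> (12) directly

assert np.allclose(T, P @ Q)   # matrix of the composite transformation is PQ
```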

Now let $A: X \to Y$ be a linear operator, let a pair of bases (I) and (II) be chosen in X, and a pair of bases (III) and (IV) in Y.

In the pair of bases I, III the operator A corresponds to the equality $\eta = A_1 \xi$ (14), where $\xi$ and $\eta$ are the coordinate columns of a vector $x \in X$ and of its image $y = Ax \in Y$. In the pair of bases II, IV the same operator corresponds to the equality $\eta' = A_2 \xi'$ (15). Thus, for this operator A we have two matrices, $A_1$ and $A_2$; we want to establish the relationship between them.

Let P be the coordinate transformation matrix for the transition from I to II (in the space X).

Let Q be the coordinate transformation matrix for the transition from III to IV (in the space Y).

Then $\xi = P\xi'$ (16) and $\eta = Q\eta'$ (17). Substituting the expressions for $\xi$ and $\eta$ from (16) and (17) into (14), we get

$Q\eta' = A_1 P \xi', \quad \text{i.e.} \quad \eta' = (Q^{-1} A_1 P)\, \xi'. \qquad (18)$

Comparing this equality with (15), we get

$A_2 = Q^{-1} A_1 P. \qquad (19)$

Relation (19) connects the matrices of the same operator in different bases. In the case when the spaces X and Y coincide, the role of basis III is played by I and that of IV by II, so Q = P and relation (19) takes the form

$A_2 = P^{-1} A_1 P.$
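As a quick numerical check of (19) (an illustrative sketch; the sizes of the spaces and all names are chosen arbitrarily here), generate a matrix $A_1$ of an operator together with nondegenerate transition matrices P and Q, form $A_2$ by (19), and verify that the two matrices describe the same operator in the corresponding pairs of bases:

```python
import numpy as np

rng = np.random.default_rng(1)

A1 = rng.random((2, 3))                  # matrix of the operator in bases I, III
P = rng.random((3, 3)) + 3 * np.eye(3)   # transition I -> II in X (nondegenerate)
Q = rng.random((2, 2)) + 3 * np.eye(2)   # transition III -> IV in Y (nondegenerate)

A2 = np.linalg.inv(Q) @ A1 @ P           # relation (19)

xi_new = rng.random(3)                   # coordinates xi' of some x in basis II
xi_old = P @ xi_new                      # (16): xi = P xi'
eta_old = A1 @ xi_old                    # (14): eta = A1 xi
eta_new = np.linalg.inv(Q) @ eta_old     # from (17): eta' = Q^{-1} eta
assert np.allclose(eta_new, A2 @ xi_new) # (15): eta' = A2 xi'
```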


Lecture No. 16 (II semester)

Theme: A necessary and sufficient condition for the equivalence of matrices.

Two matrices A and B of the same size are called equivalent if there exist two nondegenerate matrices R and S such that

$B = R A S. \qquad (1)$

Example: two matrices corresponding to the same operator under different choices of bases in the linear spaces X and Y are equivalent.

It is clear that the relation defined on the set of all matrices of the same size by the above definition is an equivalence relation: it is reflexive (take R and S to be identity matrices), symmetric (if $B = RAS$, then $A = R^{-1} B S^{-1}$) and transitive (products of nondegenerate matrices are nondegenerate).



Theorem 8: In order for two rectangular matrices of the same size to be equivalent, it is necessary and sufficient that they be of the same rank.

Proof:

1. Let A and B be two matrices for which the product $C = AB$ makes sense. We first show that the rank of the product is not higher than the rank of either factor.

Writing the product element-wise,

$c_{ik} = \sum_{j} a_{ij} b_{jk}, \qquad (2)$

and fixing the index k, we obtain the system of equalities

$C_k = \sum_{j} b_{jk} A_j, \qquad (3)$

where $A_j$ and $C_k$ denote the columns of A and C. We see that the k-th column of C is a linear combination of the columns of A, and this is true for all columns of C, i.e. for all $k = 1, \dots, s$. Thus the linear span of the columns of C is a subspace of the linear span of the columns of A.

Since the dimension of a subspace is less than or equal to the dimension of the space, the rank of the matrix C is less than or equal to the rank of the matrix A.

Now let us fix the index i in equalities (2) and let k run through all values from 1 to s. We obtain a system of equalities similar to (3):

$C^i = \sum_{j} a_{ij} B^j, \qquad (4)$

where $B^j$ and $C^i$ denote the rows of B and C.

It is seen from equalities (4) that the i-th row of C is a linear combination of the rows of B, for every i. Hence the linear span of the rows of C is contained in the linear span of the rows of B, and its dimension does not exceed the dimension of the latter. Therefore the rank of the matrix C is less than or equal to the rank of the matrix B.
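This bound is easy to observe numerically (an illustrative numpy sketch; the shapes are arbitrary). Here B is assembled from a rank-2 factorization, so the rank of C = AB cannot exceed 2:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.random((4, 3))                       # generically of rank 3
B = rng.random((3, 2)) @ rng.random((2, 5))  # rank(B) <= 2 by construction

C = A @ B
rA, rB, rC = (np.linalg.matrix_rank(M) for M in (A, B, C))
print(rA, rB, rC)            # typically: 3 2 2
assert rC <= min(rA, rB)     # rank of the product <= rank of each factor
```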

2. The rank of the product of a matrix A, on the left or on the right, by a nondegenerate square matrix Q equals the rank of A. That is, if $C = QA$ (or $C = AQ$), then the rank of C equals the rank of A.

Proof: by item 1, the rank of C does not exceed the rank of A. Since the matrix Q is non-degenerate, $Q^{-1}$ exists, and $A = Q^{-1}C$ (respectively $A = CQ^{-1}$); applying item 1 again, the rank of A does not exceed the rank of C. Hence the two ranks are equal.

3. Let us prove that if matrices are equivalent, then they have the same rank. By definition, A and B are equivalent if there exist nondegenerate R and S such that $B = RAS$. Since multiplying A on the left by R and on the right by S yields a matrix of the same rank, as proved in item 2, the rank of A equals the rank of B.
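Numerically (again a sketch; adding a multiple of the identity merely guarantees that R and S are nondegenerate):

```python
import numpy as np

rng = np.random.default_rng(3)

A = rng.random((3, 2)) @ rng.random((2, 4))  # a 3x4 matrix of rank 2
R = rng.random((3, 3)) + 3 * np.eye(3)       # nondegenerate 3x3
S = rng.random((4, 4)) + 4 * np.eye(4)       # nondegenerate 4x4
B = R @ A @ S                                # B is equivalent to A

assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
```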

4. Now let the matrices A and B of the same size have the same rank r. Let us prove that they are equivalent. Consider the $m \times n$ matrix

$J = \begin{pmatrix} E_r & 0 \\ 0 & 0 \end{pmatrix},$

where $E_r$ is the identity matrix of order r.

Let X and Y be two linear spaces of dimensions n and m, in which bases $e_1, \dots, e_n$ (a basis of X) and $f_1, \dots, f_m$ (a basis of Y) are chosen. As is known, any $m \times n$ matrix defines some linear operator $A: X \to Y$.

Since r is the rank of the matrix A, among the vectors $Ae_1, \dots, Ae_n$ there are exactly r linearly independent ones. Without loss of generality we may assume that the first r of them, $Ae_1, \dots, Ae_r$, are linearly independent. Then all the others are expressed linearly through them, and we can write

$Ae_k = \sum_{j=1}^{r} c_{kj}\, Ae_j, \quad k = r+1, \dots, n.$

We define a new basis in the space X as follows:

$e'_j = e_j \ (j = 1, \dots, r); \qquad e'_k = e_k - \sum_{j=1}^{r} c_{kj} e_j \ (k = r+1, \dots, n). \qquad (7)$

Then, by the expansion above, $Ae'_k = Ae_k - \sum_{j=1}^{r} c_{kj} Ae_j = 0$ for $k = r+1, \dots, n$.

The new basis in the space Y is constructed as follows. Take first the vectors

$f'_j = Ae_j, \quad j = 1, \dots, r,$

which are linearly independent by construction, and supplement them with some vectors $f'_{r+1}, \dots, f'_m$ up to a basis of Y:

$f'_1, \dots, f'_r, f'_{r+1}, \dots, f'_m. \qquad (8)$

So (7) and (8) are two new bases of X and Y. Let us find the matrix of the operator A in these bases: $Ae'_j = f'_j$ for $j = 1, \dots, r$ and $Ae'_k = 0$ for $k = r+1, \dots, n$, so this matrix is exactly J.

So, in the new pair of bases, the matrix of the operator A is the matrix J. The matrix A was originally an arbitrary rectangular $m \times n$ matrix of rank r. Since matrices of the same operator in different bases are equivalent, this shows that any $m \times n$ matrix of rank r is equivalent to J. Since we are dealing with an equivalence relation, any two $m \times n$ matrices A and B of rank r, being equivalent to the matrix J, are equivalent to each other.
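The proof is constructive, and the construction can be imitated with elementary transformations: Gauss-Jordan elimination over the rows, then over the columns, while accumulating the row operations in R and the column operations in S, yields nondegenerate R and S with R A S = J. The helper below, `to_canonical`, is written for this note (it is not from the lecture) and sketches exactly that procedure:

```python
import numpy as np

def to_canonical(A, tol=1e-12):
    """Gauss-Jordan over the rows, then elimination over the columns.
    Returns nondegenerate R (m x m), S (n x n) and J = R @ A @ S, where
    J has an identity block of size rank(A) in its upper-left corner."""
    m, n = A.shape
    R, S = np.eye(m), np.eye(n)
    J = A.astype(float).copy()
    r = 0                                       # pivots found so far
    for j in range(n):                          # row phase: bring J to RREF
        if r == m:
            break
        p = r + int(np.argmax(np.abs(J[r:, j])))
        if abs(J[p, j]) < tol:
            continue                            # no pivot in this column
        J[[r, p]] = J[[p, r]]; R[[r, p]] = R[[p, r]]   # swap rows r and p
        piv = J[r, j]
        J[r] /= piv; R[r] /= piv                # scale the pivot to 1
        for i in range(m):                      # clear the rest of column j
            if i != r and abs(J[i, j]) > tol:
                f = J[i, j]
                J[i] -= f * J[r]; R[i] -= f * R[r]
        r += 1
    for i in range(r):                          # column phase
        j = int(np.flatnonzero(np.abs(J[i]) > tol)[0])  # row i's pivot column
        J[:, [i, j]] = J[:, [j, i]]; S[:, [i, j]] = S[:, [j, i]]
        for k in range(n):                      # clear the rest of row i
            if k != i and abs(J[i, k]) > tol:
                f = J[i, k]
                J[:, k] -= f * J[:, i]; S[:, k] -= f * S[:, i]
    return R, S, J

rng = np.random.default_rng(4)
A = rng.random((4, 2)) @ rng.random((2, 5))     # a 4x5 matrix of rank 2
R, S, J = to_canonical(A)
assert np.allclose(R @ A @ S, J)
print(J.round(6))                               # E_2 block, zeros elsewhere
```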


Lecture No. 17 (II semester)

Theme: Eigenvalues and eigenvectors. Eigenspaces. Examples.

Our immediate goal is to prove that any matrix can be reduced to some standard forms using elementary transformations. The language of equivalent matrices is useful along this path.

Let $A \in M_{m \times n}$. We will say that the matrix A is row-equivalent (column-equivalent, or equivalent) to the matrix B, and write $A \sim_r B$ ($A \sim_c B$, or $A \sim B$), if B can be obtained from A by a finite number of row (respectively, column, or row and column) elementary transformations. It is clear that row-equivalent and column-equivalent matrices are equivalent.

First we will show that any matrix can be reduced, by row transformations alone, to a special form called reduced.

Let $A = (a_{ij}) \in M_{m \times n}$. We say that a nonzero row $a^i$ of this matrix has reduced form if it contains an element $a_{ij} = 1$ such that all elements of the column $A_j$ other than $a_{ij}$ are equal to zero. This marked element will be called the leading element of the row and will be enclosed in a circle. In other words, a row of the matrix has reduced form if the matrix contains a column of the form

$A_j = (0, \dots, 0, 1, 0, \dots, 0)^T,$

with the 1 standing in the i-th position.

For example, in the matrix

$\begin{pmatrix} 0 & 1 & 3 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & 4 & 0 \end{pmatrix}$

the first row has reduced form, since $a_{12} = 1$ and all other elements of the second column are zero. Note that in this example the element $a_{14}$ also claims to be the leading element of the row. In what follows, if a row contains several elements with the properties of a leading one, we will select only one of them, in an arbitrary way.

A matrix is said to have reduced form if each of its nonzero rows has reduced form. For example, the matrix

$\begin{pmatrix} 0 & 1 & 3 & 0 \\ 1 & 0 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$

has reduced form.

Proposition 1.3. For any matrix A there exists a row-equivalent matrix of reduced form.

Indeed, if the matrix $A = (a_{ij})$ has the form (1.1) and $a_{ij} \ne 0$, then after carrying out in it the elementary transformations

$r_i \to \frac{1}{a_{ij}}\, r_i, \qquad r_k \to r_k - a_{kj} r_i \ (k \ne i), \qquad (1.20)$

we get a matrix in which the i-th row has reduced form, with leading element 1 in the j-th column.

Secondly, if a row $a^s$ of the matrix was already reduced, then after carrying out the elementary transformations (1.20) it remains reduced. Indeed, since the row is reduced, there is a column $A_t$ such that

$a_{st} = 1, \qquad a_{kt} = 0 \ (k \ne s);$

in particular $a_{it} = 0$, and consequently the transformations (1.20) do not change the column $A_t$: each entry $a_{kt}$ is replaced by $a_{kt} - a_{kj} a_{it} = a_{kt}$. Therefore the row $a^s$ still has reduced form.

Now it is clear that, by transforming in this way each nonzero row of the matrix in turn, after a finite number of steps we get a matrix of reduced form. Since only row elementary transformations were used, it is row-equivalent to the original matrix. ∎
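Proposition 1.3 is an algorithm, and it is short to code. In the sketch below (the helper names are invented here; a floating-point tolerance stands in for exact arithmetic), `make_row_reduced` performs the transformations (1.20) for a chosen leading element, and `to_reduced_form` applies it to every nonzero row in turn:

```python
import numpy as np

def make_row_reduced(A, i, j):
    """Transformations (1.20): scale row i so that a_ij = 1, then subtract
    multiples of row i from the other rows so that column j becomes zero
    outside row i. Row i acquires the reduced form."""
    A[i] = A[i] / A[i, j]                    # r_i -> (1/a_ij) r_i
    for k in range(A.shape[0]):
        if k != i:
            A[k] = A[k] - A[k, j] * A[i]     # r_k -> r_k - a_kj r_i

def to_reduced_form(A, tol=1e-12):
    """Apply (1.20) to each nonzero row in turn; rows already reduced stay
    reduced, so the result has reduced form and is row-equivalent to A."""
    A = A.astype(float).copy()
    for i in range(A.shape[0]):
        nz = np.flatnonzero(np.abs(A[i]) > tol)
        if nz.size:                          # skip zero rows
            make_row_reduced(A, i, int(nz[0]))
    return A

A = np.array([[0., 2., 4., 2.],
              [1., 1., 1., 1.],
              [1., 3., 5., 3.]])
print(to_reduced_form(A))
# [[ 0.  1.  2.  1.]
#  [ 1.  0. -1.  0.]
#  [ 0.  0.  0.  0.]]
```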

Example 7. Construct a matrix of reduced form that is row-equivalent to a given matrix A.

Equivalent matrices

As mentioned above, a minor of order s of a matrix is the determinant of the matrix formed from the elements of the original matrix located at the intersection of any s selected rows and s selected columns.

Definition. In an $m \times n$ matrix, a minor of order r is called basic if it is nonzero and all minors of order r + 1 and higher are either equal to zero or do not exist at all (the latter when r equals the lesser of m and n).

The columns and rows of the matrix on which the basic minor is located are also called basic.

A matrix can have several different basic minors; they all have the same order.

Definition. The order of the basic minor of a matrix is called the rank of the matrix and is denoted Rg A.

A very important property of elementary matrix transformations is that they do not change the rank of the matrix.

Definition. Matrices obtained from one another by elementary transformations are called equivalent.

It should be noted that equal matrices and equivalent matrices are completely different concepts.

Theorem. The largest number of linearly independent columns in a matrix is equal to the largest number of linearly independent rows.

Since elementary transformations do not change the rank of a matrix, the process of finding the rank of a matrix can be significantly simplified.

Example 1. Determine the rank of the matrix.

Example 2. Determine the rank of the matrix.

If it is not possible, using elementary transformations, to find a matrix equivalent to the original one but of smaller size, then finding the rank of the matrix should begin with computing minors of the highest possible order. In the above examples these are minors of order 3: if at least one of them is nonzero, then the rank of the matrix is equal to the order of this minor.
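In code the simplification looks like this (an illustrative sketch; `rank_by_elimination` is a name invented here, and `np.linalg.matrix_rank` serves as an independent check): instead of enumerating minors, eliminate below the pivots and count the nonzero rows, which is legitimate precisely because elementary transformations preserve rank:

```python
import numpy as np

def rank_by_elimination(A, tol=1e-9):
    """Rank = number of nonzero rows left by Gaussian elimination."""
    A = A.astype(float).copy()
    m, n = A.shape
    r = 0                                   # rows already holding pivots
    for j in range(n):
        if r == m:
            break
        p = r + int(np.argmax(np.abs(A[r:, j])))
        if abs(A[p, j]) < tol:
            continue                        # no pivot in this column
        A[[r, p]] = A[[p, r]]               # swap rows r and p
        A[r+1:] -= np.outer(A[r+1:, j] / A[r, j], A[r])  # clear below pivot
        r += 1
    return r

A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # twice the first row
              [1., 0., 1.]])
print(rank_by_elimination(A), np.linalg.matrix_rank(A))   # 2 2
```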

Basic minor theorem.

Theorem. In an arbitrary matrix A, each column (row) is a linear combination of the columns (rows) in which the basic minor is located.

Thus, the rank of an arbitrary matrix A is equal to the maximum number of linearly independent rows (columns) in the matrix.

If A is a square matrix and det A = 0, then at least one of its columns is a linear combination of the other columns. The same is true for rows. This statement follows from the fact that a determinant equals zero if and only if its columns (rows) are linearly dependent.
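For instance (a made-up 3x3 example), in the matrix below the third column is the sum of the first two, the determinant vanishes, and a null-space vector exhibits the dependence between the columns explicitly:

```python
import numpy as np

A = np.array([[1., 2., 3.],     # column 3 = column 1 + column 2
              [4., 5., 9.],
              [7., 8., 15.]])
print(np.linalg.det(A))         # ~0: the columns are linearly dependent

_, _, Vt = np.linalg.svd(A)     # last right singular vector spans the null space
c = Vt[-1]
print(c / c[0])                 # proportional to (1, 1, -1): A_1 + A_2 - A_3 = 0
assert np.allclose(A @ c, 0, atol=1e-9)
```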

Solving arbitrary systems of linear equations

As mentioned above, the matrix method and Cramer's method are applicable only to systems of linear equations in which the number of unknowns equals the number of equations. Next we consider arbitrary systems of linear equations.

Definition. A system of m equations with n unknowns in general form is written as follows:

$\begin{cases} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1, \\ \dots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m, \end{cases} \qquad (1)$

where $a_{ij}$ are the coefficients and $b_i$ are the constant terms. A solution of the system is a set of n numbers which, when substituted into the system, turn each of its equations into an identity.

Definition. If a system has at least one solution, it is called consistent. If a system has no solutions, it is called inconsistent.

Definition. A system is called definite if it has exactly one solution, and indefinite if it has more than one.

Definition. For a system of linear equations, the matrix

$A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ \dots & \dots & \dots & \dots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix}$

is called the matrix of the system, and the matrix

$A^* = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n} & b_1 \\ \dots & \dots & \dots & \dots & \dots \\ a_{m1} & a_{m2} & \dots & a_{mn} & b_m \end{pmatrix}$

is called the extended matrix of the system.

Definition. If $b_1 = b_2 = \dots = b_m = 0$, the system is called homogeneous. A homogeneous system is always consistent, since it always has the zero solution.

Elementary system transformations

Elementary transformations include:

1) Adding to both sides of one equation the corresponding sides of another, multiplied by the same nonzero number.

2) Interchanging equations.

3) Removing from the system equations that are identities for all x.

The Kronecker-Capelli theorem (compatibility condition for a system).

(Leopold Kronecker (1823-1891), German mathematician)

Theorem: A system is consistent (has at least one solution) if and only if the rank of the system matrix is equal to the rank of the extended matrix.

It is obvious that system (1) can be written in matrix form as

$A \cdot X = B,$

where $X = (x_1, \dots, x_n)^T$ and $B = (b_1, \dots, b_m)^T$.
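The theorem translates directly into a consistency test (a minimal sketch; the helper name `is_consistent` is invented here): compare the rank of A with the rank of the extended matrix (A | b):

```python
import numpy as np

def is_consistent(A, b):
    """Kronecker-Capelli: A x = b is consistent iff
    rank(A) == rank of the extended matrix (A | b)."""
    A_ext = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A_ext)

A = np.array([[1., 1.],
              [2., 2.],
              [1., 0.]])
print(is_consistent(A, np.array([2., 4., 1.])))  # True:  x = (1, 1) works
print(is_consistent(A, np.array([2., 5., 1.])))  # False: first two equations conflict
```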