The inverse matrix. Finding the inverse matrix

The matrix $A^{-1}$ is called the inverse of the square matrix $A$ if $A^{-1}\cdot A=A\cdot A^{-1}=E$, where $E$ is the identity matrix whose order is equal to the order of the matrix $A$.

A non-singular matrix is a matrix whose determinant is not equal to zero. Accordingly, a singular (degenerate) matrix is one whose determinant is equal to zero.

The inverse matrix $A^{-1}$ exists if and only if the matrix $A$ is non-singular. If the inverse matrix $A^{-1}$ exists, it is unique.

There are several ways to find the inverse of a matrix, and we'll look at two of them. This page will discuss the adjoint matrix method, which is considered standard in most higher mathematics courses. The second way to find the inverse matrix (method of elementary transformations), which involves the use of the Gauss method or the Gauss-Jordan method, is considered in the second part.

Adjoint (union) matrix method

Let the matrix $A_{n\times n}$ be given. In order to find the inverse matrix $A^{-1}$, three steps are required:

  1. Find the determinant of the matrix $A$ and make sure that $\Delta A\neq 0$, i.e. that the matrix $A$ is non-singular.
  2. Compose the algebraic complements $A_{ij}$ of each element of the matrix $A$ and write down the matrix $A_{n\times n}^{*}=\left(A_{ij}\right)$ of the found algebraic complements.
  3. Write the inverse matrix using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$.

The matrix $(A^{*})^T$ is often called the adjoint (adjugate, union) matrix of $A$.
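To make the three steps concrete, here is a minimal sketch in Python (the names `det` and `inverse_adjoint` are illustrative, not from any particular library). It uses exact `Fraction` arithmetic so the result matches hand calculation:

```python
from fractions import Fraction

def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j in range(len(M)):
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

def inverse_adjoint(A):
    n = len(A)
    d = det(A)                       # step 1: determinant
    if d == 0:
        raise ValueError("Matrix is singular: no inverse exists")
    # Step 2: matrix of algebraic complements A*.
    cof = [[(-1) ** (i + j) * det([r[:j] + r[j+1:] for k, r in enumerate(A) if k != i])
            for j in range(n)] for i in range(n)]
    # Step 3: transpose A* and divide by the determinant.
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

A = [[-5, 7], [9, 8]]          # the matrix from Example #2 below
print(inverse_adjoint(A))      # [[-8/103, 7/103], [9/103, 5/103]]
```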

If the solution is carried out by hand, the first method is good only for matrices of relatively small orders: second ($2\times 2$), third ($3\times 3$), fourth ($4\times 4$). To find the inverse of a higher-order matrix, other methods are used, for example the Gauss method, which is discussed in the second part.

Example #1

Find the matrix inverse to the matrix $A=\left(\begin{array}{cccc} 5 & -4 & 1 & 0 \\ 12 & -11 & 4 & 0 \\ -5 & 58 & 4 & 0 \\ 3 & -1 & -9 & 0 \end{array}\right)$.

Since all elements of the fourth column are equal to zero, $\Delta A=0$ (i.e. the matrix $A$ is singular). Since $\Delta A=0$, there is no matrix inverse to $A$.

Answer: the matrix $A^{-1}$ does not exist.
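If you want to confirm this conclusion numerically, a quick check with NumPy (assuming it is installed) shows the determinant is zero:

```python
import numpy as np

A = np.array([[5, -4, 1, 0],
              [12, -11, 4, 0],
              [-5, 58, 4, 0],
              [3, -1, -9, 0]])
print(np.linalg.det(A))  # 0.0 -- a zero column forces a zero determinant
```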

Example #2

Find the matrix inverse to the matrix $A=\left(\begin{array}{cc} -5 & 7 \\ 9 & 8 \end{array}\right)$. Check the result.

We use the adjoint matrix method. First, let's find the determinant of the given matrix $A$:

$$ \Delta A=\left| \begin{array}{cc} -5 & 7\\ 9 & 8 \end{array}\right|=-5\cdot 8-7\cdot 9=-103. $$

Since $\Delta A \neq 0$, the inverse matrix exists, so we continue the solution. We find the algebraic complements:

$$ \begin{aligned} & A_{11}=(-1)^2\cdot 8=8; \; A_{12}=(-1)^3\cdot 9=-9;\\ & A_{21}=(-1)^3\cdot 7=-7; \; A_{22}=(-1)^4\cdot (-5)=-5. \end{aligned} $$

Compose the matrix of algebraic complements: $A^{*}=\left(\begin{array}{cc} 8 & -9\\ -7 & -5 \end{array}\right)$.

Transpose the resulting matrix: $(A^{*})^T=\left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)$ (the resulting matrix is often called the adjoint or union matrix of $A$). Using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$, we have:

$$ A^{-1}=\frac{1}{-103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right) =\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right) $$

So the inverse matrix is found: $A^{-1}=\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$. To verify the result, it is enough to check one of the equalities: $A^{-1}\cdot A=E$ or $A\cdot A^{-1}=E$. Let's check the equality $A^{-1}\cdot A=E$. To work less with fractions, we substitute the matrix $A^{-1}$ not in the form $\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$ but as $-\frac{1}{103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)$:

$$ A^{-1}\cdot A =-\frac{1}{103}\cdot \left(\begin{array}{cc} 8 & -7\\ -9 & -5 \end{array}\right)\cdot\left(\begin{array}{cc} -5 & 7 \\ 9 & 8 \end{array}\right) =-\frac{1}{103}\cdot\left(\begin{array}{cc} -103 & 0 \\ 0 & -103 \end{array}\right) =\left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) =E $$

Answer: $A^{-1}=\left(\begin{array}{cc} -8/103 & 7/103\\ 9/103 & 5/103 \end{array}\right)$.
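The same check is easy to script. A small sketch using exact fractions (Python standard library only; the variable names are ours):

```python
from fractions import Fraction

A     = [[-5, 7], [9, 8]]
A_inv = [[Fraction(-8, 103), Fraction(7, 103)],
         [Fraction(9, 103), Fraction(5, 103)]]

# Multiply A_inv by A and confirm we get the identity matrix E.
product = [[sum(A_inv[i][k] * A[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
assert product == [[1, 0], [0, 1]]  # a Fraction compares equal to an int
print(product)
```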

Example #3

Find the inverse of the matrix $A=\left(\begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4 \\ 0 & 3 & 2\end{array}\right)$. Check the result.

Let's start by calculating the determinant of the matrix $A$:

$$ \Delta A=\left| \begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4 \\ 0 & 3 & 2\end{array}\right| = 18-36+56-12=26. $$

Since $\Delta A\neq 0$, the inverse matrix exists, so we continue the solution. We find the algebraic complement of each element of the given matrix:

$$ \begin{aligned} & A_{11}=(-1)^{2}\cdot\left|\begin{array}{cc} 9 & 4\\ 3 & 2\end{array}\right|=6;\; A_{12}=(-1)^{3}\cdot\left|\begin{array}{cc} -4 & 4 \\ 0 & 2\end{array}\right|=8;\; A_{13}=(-1)^{4}\cdot\left|\begin{array}{cc} -4 & 9\\ 0 & 3\end{array}\right|=-12;\\ & A_{21}=(-1)^{3}\cdot\left|\begin{array}{cc} 7 & 3\\ 3 & 2\end{array}\right|=-5;\; A_{22}=(-1)^{4}\cdot\left|\begin{array}{cc} 1 & 3\\ 0 & 2\end{array}\right|=2;\; A_{23}=(-1)^{5}\cdot\left|\begin{array}{cc} 1 & 7\\ 0 & 3\end{array}\right|=-3;\\ & A_{31}=(-1)^{4}\cdot\left|\begin{array}{cc} 7 & 3\\ 9 & 4\end{array}\right|=1;\; A_{32}=(-1)^{5}\cdot\left|\begin{array}{cc} 1 & 3\\ -4 & 4\end{array}\right|=-16;\; A_{33}=(-1)^{6}\cdot\left|\begin{array}{cc} 1 & 7\\ -4 & 9\end{array}\right|=37. \end{aligned} $$

We compose the matrix of algebraic complements and transpose it:

$$ A^*=\left(\begin{array}{ccc} 6 & 8 & -12 \\ -5 & 2 & -3 \\ 1 & -16 & 37\end{array}\right); \; (A^*)^T=\left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array}\right). $$

Using the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$, we get:

$$ A^{-1}=\frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array}\right)= \left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array}\right) $$

So $A^{-1}=\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array}\right)$. To verify the result, it is enough to check one of the equalities: $A^{-1}\cdot A=E$ or $A\cdot A^{-1}=E$. Let's check the equality $A\cdot A^{-1}=E$. To work less with fractions, we substitute the matrix $A^{-1}$ not in the form $\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array}\right)$, but as $\frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array}\right)$:

$$ A\cdot A^{-1} =\left(\begin{array}{ccc} 1 & 7 & 3 \\ -4 & 9 & 4\\ 0 & 3 & 2\end{array}\right)\cdot \frac{1}{26}\cdot \left(\begin{array}{ccc} 6 & -5 & 1 \\ 8 & 2 & -16 \\ -12 & -3 & 37\end{array}\right) =\frac{1}{26}\cdot\left(\begin{array}{ccc} 26 & 0 & 0 \\ 0 & 26 & 0 \\ 0 & 0 & 26\end{array}\right) =\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{array}\right) =E $$

The check was passed successfully; the inverse matrix $A^{-1}$ was found correctly.

Answer: $A^{-1}=\left(\begin{array}{ccc} 3/13 & -5/26 & 1/26 \\ 4/13 & 1/13 & -8/13 \\ -6/13 & -3/26 & 37/26 \end{array}\right)$.

Example #4

Find the matrix inverse to the matrix $A=\left(\begin{array}{cccc} 6 & -5 & 8 & 4\\ 9 & 7 & 5 & 2 \\ 7 & 5 & 3 & 7\\ -4 & 8 & -8 & -3 \end{array}\right)$.

For a fourth-order matrix, finding the inverse using algebraic complements is rather laborious. Nevertheless, such examples do occur in assignments.

To find the inverse matrix, you first need to calculate the determinant of the matrix $A$. The best way to do this here is to expand the determinant along a row (or column). We select any row or column and find the algebraic complement of each element of the selected row or column.

For example, for the first row we get:

$$ A_{11}=\left|\begin{array}{ccc} 7 & 5 & 2\\ 5 & 3 & 7\\ 8 & -8 & -3 \end{array}\right|=556; \; A_{12}=-\left|\begin{array}{ccc} 9 & 5 & 2\\ 7 & 3 & 7 \\ -4 & -8 & -3 \end{array}\right|=-300; $$ $$ A_{13}=\left|\begin{array}{ccc} 9 & 7 & 2\\ 7 & 5 & 7\\ -4 & 8 & -3 \end{array}\right|=-536;\; A_{14}=-\left|\begin{array}{ccc} 9 & 7 & 5\\ 7 & 5 & 3\\ -4 & 8 & -8 \end{array}\right|=-112. $$

The determinant of the matrix $A$ is calculated by the following formula:

$$ \Delta A=a_{11}\cdot A_{11}+a_{12}\cdot A_{12}+a_{13}\cdot A_{13}+a_{14}\cdot A_{14}=6\cdot 556+(-5)\cdot(-300)+8\cdot(-536)+4\cdot(-112)=100. $$
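This first-row expansion is easy to reproduce programmatically. A short sketch (plain Python, no external libraries; `det3` is an illustrative helper name) that mirrors the computation above:

```python
def det3(M):
    # Determinant of a 3x3 matrix by the triangle (Sarrus) rule.
    return (M[0][0]*M[1][1]*M[2][2] + M[0][1]*M[1][2]*M[2][0]
          + M[0][2]*M[1][0]*M[2][1] - M[0][2]*M[1][1]*M[2][0]
          - M[0][1]*M[1][0]*M[2][2] - M[0][0]*M[1][2]*M[2][1])

A = [[6, -5, 8, 4], [9, 7, 5, 2], [7, 5, 3, 7], [-4, 8, -8, -3]]

# Expand along the first row: delta = sum over j of a_1j * (-1)^(1+j) * M_1j.
delta = 0
for j in range(4):
    minor = [row[:j] + row[j+1:] for row in A[1:]]
    delta += A[0][j] * (-1) ** j * det3(minor)  # 0-based j gives the same sign
print(delta)  # 100
```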

The algebraic complements of the remaining elements are found in the same way:

$$ \begin{aligned} & A_{21}=-77;\; A_{22}=50;\; A_{23}=87;\; A_{24}=4;\\ & A_{31}=-93;\; A_{32}=50;\; A_{33}=83;\; A_{34}=36;\\ & A_{41}=473;\; A_{42}=-250;\; A_{43}=-463;\; A_{44}=-96. \end{aligned} $$

Matrix of algebraic complements: $A^*=\left(\begin{array}{cccc} 556 & -300 & -536 & -112\\ -77 & 50 & 87 & 4 \\ -93 & 50 & 83 & 36\\ 473 & -250 & -463 & -96\end{array}\right)$.

Adjoint matrix: $(A^*)^T=\left(\begin{array}{cccc} 556 & -77 & -93 & 473\\ -300 & 50 & 50 & -250 \\ -536 & 87 & 83 & -463\\ -112 & 4 & 36 & -96\end{array}\right)$.

Inverse matrix:

$$ A^{-1}=\frac{1}{100}\cdot \left(\begin{array}{cccc} 556 & -77 & -93 & 473\\ -300 & 50 & 50 & -250 \\ -536 & 87 & 83 & -463\\ -112 & 4 & 36 & -96 \end{array}\right)= \left(\begin{array}{cccc} 139/25 & -77/100 & -93/100 & 473/100 \\ -3 & 1/2 & 1/2 & -5/2 \\ -134/25 & 87/100 & 83/100 & -463/100 \\ -28/25 & 1/25 & 9/25 & -24/25 \end{array}\right) $$

Checking, if desired, can be done in the same way as in the previous examples.

Answer: $A^{-1}=\left(\begin{array}{cccc} 139/25 & -77/100 & -93/100 & 473/100 \\ -3 & 1/2 & 1/2 & -5/2 \\ -134/25 & 87/100 & 83/100 & -463/100 \\ -28/25 & 1/25 & 9/25 & -24/25 \end{array}\right)$.
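For a matrix of this size the check from the previous examples is tedious by hand, so a numerical sketch with NumPy (assuming it is available) is a convenient substitute:

```python
import numpy as np

A = np.array([[6, -5, 8, 4], [9, 7, 5, 2], [7, 5, 3, 7], [-4, 8, -8, -3]])
A_inv = np.array([[556, -77, -93, 473],
                  [-300, 50, 50, -250],
                  [-536, 87, 83, -463],
                  [-112, 4, 36, -96]]) / 100  # (A*)^T divided by the determinant

print(np.allclose(A @ A_inv, np.eye(4)))  # True
```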

In the second part, another way of finding the inverse matrix will be considered, which involves the use of transformations of the Gauss method or the Gauss-Jordan method.

We continue talking about operations with matrices. Namely, in the course of this lecture you will learn how to find the inverse matrix. You will learn it even if math is a struggle for you.

What is an inverse matrix? Here we can draw an analogy with reciprocals: consider, for example, the optimistic number 5 and its reciprocal $\frac{1}{5}$. The product of these numbers is equal to one: $5\cdot \frac{1}{5}=1$. It's the same with matrices! The product of a matrix and its inverse is $E$, the identity matrix, which is the matrix analogue of the numerical unit. However, first things first: we will deal with an important practical question, namely, we will learn how to find this very inverse matrix.

What do you need to know to be able to find the inverse matrix? You must be able to compute determinants. You must understand what a matrix is and be able to perform some operations with matrices.

There are two main methods for finding the inverse matrix:
using algebraic complements and using elementary transformations.

Today we will study the first, easier way.

Let's start with the most terrible and incomprehensible part. Consider a square matrix $A$. The inverse matrix can be found using the formula

$$A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T,$$

where $\Delta A$ is the determinant of the matrix $A$, and $(A^{*})^T$ is the transposed matrix of algebraic complements of the corresponding elements of the matrix $A$.

The concept of an inverse matrix exists only for square matrices: "two by two", "three by three", etc.

Notation: as you probably already noticed, the inverse of a matrix is denoted by the superscript $-1$: we write $A^{-1}$.

Let's start with the simplest case - a two-by-two matrix. Most often, of course, "three by three" is required, but, nevertheless, I strongly recommend studying a simpler task in order to learn the general principle of the solution.

Example:

Find the inverse of a matrix

Let's solve it. The sequence of actions is conveniently broken down into steps.

1) First we find the determinant of the matrix.

If your understanding of this operation is shaky, read the article How to calculate the determinant?

Important! If the determinant of the matrix is ZERO, the inverse matrix DOES NOT EXIST.

In the example under consideration the determinant turned out to be nonzero, which means that everything is in order.

2) Find the matrix of minors.

To solve our problem, it is not necessary to know what a minor is, however, it is advisable to read the article How to calculate the determinant.

The matrix of minors has the same dimensions as the matrix $A$, that is, in this case $2\times 2$.
The task is small: it remains to find four numbers and put them in place of the asterisks.

Back to our matrix.
Let's look at the top left element first:

How do we find its minor?
It is done like this: MENTALLY cross out the row and column in which this element is located:

The remaining number is the minor of the given element, which we write into our matrix of minors:

Consider the following matrix element:

Mentally cross out the row and column in which this element is located:

What remains is the minor of this element, which we write into our matrix:

Similarly, we consider the elements of the second row and find their minors:


Done.

3) Find the matrix of algebraic complements. It's simple: in the matrix of minors we need to CHANGE THE SIGNS of two numbers:

It is these numbers that I have circled!

This is the matrix of algebraic complements of the corresponding elements of the matrix $A$.

And that's all there is to it…

4) Find the transposed matrix of algebraic complements.

This is the transposed matrix of algebraic complements of the corresponding elements of the matrix $A$.

5) Answer.

Remember our formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$.
Everything needed has been found!

So the inverse matrix is:

It's best to leave the answer in this form. There is NO NEED to divide each element of the matrix by 2, since this would produce fractional numbers. This nuance is discussed in more detail in the article Actions with matrices.

How to check the solution?

We must perform the matrix multiplication $A^{-1}\cdot A$ or $A\cdot A^{-1}$ and obtain the identity matrix.

Check:

The identity matrix, already mentioned above, is a matrix with ones on the main diagonal and zeros elsewhere.

Thus, the inverse matrix is found correctly.

If you perform the multiplication in the other order, the result will also be the identity matrix. This is one of the few cases where matrix multiplication is permutable; more information can be found in the article Properties of operations on matrices. Matrix expressions. Also note that during the check, the constant (the fraction) is brought out in front and processed at the very end, after the matrix multiplication. This is a standard technique.

Let's move on to a more common case in practice - the three-by-three matrix:

Example:

Find the inverse of a matrix

The algorithm is exactly the same as for the two-by-two case.

We find the inverse matrix by the formula $A^{-1}=\frac{1}{\Delta A}\cdot (A^{*})^T$, where $(A^{*})^T$ is the transposed matrix of algebraic complements of the corresponding elements of the matrix $A$.

1) Find the matrix determinant.


Here the determinant is expanded along the first row.

Also, do not forget that the determinant is nonzero, which means that everything is fine: the inverse matrix exists.

2) Find the matrix of minors.

The matrix of minors has the dimension "three by three", and we need to find nine numbers.

I'll take a look at a couple of minors in detail:

Consider the following matrix element:

MENTALLY cross out the row and column in which this element is located:

The remaining four numbers form a "two by two" determinant.

This two-by-two determinant is exactly the minor of the given element. It needs to be calculated:


That's it, the minor is found; we write it into our matrix of minors:

As you may have guessed, there are nine two-by-two determinants to calculate. The process, of course, is dreary, but the case is not the most difficult, it can be worse.

Well, to consolidate the skill, here is another minor found in pictures:

Try to calculate the rest of the minors yourself.

Final result:
This is the matrix of minors of the corresponding elements of the matrix $A$.

The fact that all the minors turned out to be negative is pure coincidence.

3) Find the matrix of algebraic complements.

In the matrix of minors, it is necessary to CHANGE THE SIGNS of strictly the following elements:

In this case:

Finding the inverse of a "four by four" matrix is not considered, since only a sadistic teacher would give such a task (the student would have to calculate one "four by four" determinant and 16 "three by three" determinants). In my practice, there was only one such case, and the customer of the assignment paid quite dearly for my torment =).

In a number of textbooks and manuals you can find a slightly different approach to finding the inverse matrix, but I recommend using the solution algorithm described above. Why? Because the probability of getting confused in calculations and signs is much lower.

This topic is one of the most hated among students. Probably only determinants are hated more.

The trick is that the very concept of the inverse element (and I'm not just talking about matrices now) refers us to the operation of multiplication. Even in the school curriculum multiplication is considered a complex operation, and matrix multiplication is a separate topic altogether, to which I have devoted a whole section and a video lesson.

Today we will not go into the details of matrix calculations. Just recall how matrices are denoted, how they are multiplied, and what follows from this.

Review: Matrix Multiplication

First of all, let's agree on notation. A matrix $A$ of size $\left[ m\times n \right]$ is simply a table of numbers with exactly $m$ rows and $n$ columns:

\[A=\underbrace{\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \\\end{matrix} \right]}_{n}\]

In order not to accidentally mix up rows and columns (believe me, in an exam you can confuse a one with a two, let alone rows and columns), just take a look at the picture:

Determination of indexes for matrix cells

What's happening? If we place the standard coordinate system $OXY$ in the upper left corner and direct the axes so that they cover the entire matrix, then each cell of this matrix can be uniquely associated with the coordinates $\left(x;y \right)$ - this will be the row number and column number.

Why is the coordinate system placed exactly in the upper left corner? Yes, because it is from there that we begin to read any texts. It's very easy to remember.

Why is the $x$ axis pointing down and not to the right? Again, it's simple: take the standard coordinate system (the $x$ axis goes to the right, the $y$ axis goes up) and rotate it so that it encloses the matrix. This is a 90 degree clockwise rotation - we see its result in the picture.

In general, we figured out how to determine the indices of the matrix elements. Now let's deal with multiplication.

Definition. The matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$, when the number of columns in the first matches the number of rows in the second, are called consistent.

And it is precisely in that order. One can put it more precisely and say that the matrices $A$ and $B$ form an ordered pair $\left(A;B \right)$: if they are consistent in this order, then it is not at all necessary that $B$ and $A$, i.e. the pair $\left(B;A \right)$, are also consistent.

Only consistent matrices can be multiplied.

Definition. The product of consistent matrices $A=\left[ m\times n \right]$ and $B=\left[ n\times k \right]$ is the new matrix $C=\left[ m\times k \right]$, whose elements $c_{ij}$ are calculated by the formula:

\[c_{ij}=\sum\limits_{k=1}^{n}a_{ik}\cdot b_{kj}\]

In other words: to get the element $c_{ij}$ of the matrix $C=A\cdot B$, you take the $i$-th row of the first matrix and the $j$-th column of the second matrix, multiply the elements of this row and column pairwise, and then add up the results.
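As a sketch of this definition in code (plain Python; the function name `matmul` is just illustrative, and the summation index is called `s` to avoid clashing with the size `k`):

```python
def matmul(A, B):
    # C[i][j] = sum over s of A[i][s] * B[s][j]; requires cols(A) == rows(B).
    m, n, k = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "matrices are not consistent"
    return [[sum(A[i][s] * B[s][j] for s in range(n)) for j in range(k)]
            for i in range(m)]

# A 2x3 matrix times a 3x2 matrix gives a 2x2 matrix:
print(matmul([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]]))
# [[58, 64], [139, 154]]
```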

Yes, that's a harsh definition. Several facts immediately follow from it:

  1. Matrix multiplication is, generally speaking, non-commutative: $A\cdot B\ne B\cdot A$;
  2. However, multiplication is associative: $\left(A\cdot B \right)\cdot C=A\cdot \left(B\cdot C \right)$;
  3. And even distributive: $\left(A+B \right)\cdot C=A\cdot C+B\cdot C$;
  4. And distributive again: $A\cdot \left(B+C \right)=A\cdot B+A\cdot C$.

The distributivity of multiplication had to be described separately for the left and right multiplier-sum just because of the non-commutativity of the multiplication operation.

If, nevertheless, it turns out that $A\cdot B=B\cdot A$, such matrices are called permutable.

Among all the matrices that can be multiplied, there are special ones: those that, when multiplied by any matrix $A$, again give $A$:

Definition. A matrix $E$ is called the identity matrix if $A\cdot E=A$ or $E\cdot A=A$. In the case of a square matrix $A$ we can write: \[A\cdot E=E\cdot A=A\]

The identity matrix is a frequent guest in solving matrix equations. And in general, a frequent guest in the world of matrices. :)

And it is because of this $E$ that the whole game described next was invented.

What is an inverse matrix

Since matrix multiplication is a rather time-consuming operation (you have to multiply a bunch of rows and columns), the concept of an inverse matrix is also not the most trivial one. And it needs some explanation.

Key Definition

Well, it's time to know the truth.

Definition. The matrix $B$ is called the inverse of the matrix $A$ if \[A\cdot B=B\cdot A=E\]

The inverse matrix is denoted by $A^{-1}$ (not to be confused with an exponent!), so the definition can be rewritten like this: \[A\cdot A^{-1}=A^{-1}\cdot A=E\]

It would seem that everything is extremely simple and clear. But when analyzing such a definition, several questions immediately arise:

  1. Does an inverse matrix always exist? And if not always, then how do we determine when it exists and when it does not?
  2. And who said that such a matrix is unique? What if for some original matrix $A$ there is a whole crowd of inverses?
  3. What do all these "inverses" look like? And how does one actually compute them?

As for the calculation algorithms - we will talk about this a little later. But we will answer the rest of the questions right now. Let us arrange them in the form of separate assertions-lemmas.

Basic properties

Let's start with what the matrix $A$ should look like in order for $A^{-1}$ to exist. Now we will make sure that both of these matrices must be square, and of the same size: $\left[ n\times n \right]$.

Lemma 1. Given a matrix $A$ and its inverse $A^{-1}$. Then both of these matrices are square and have the same order $n$.

Proof. Everything is simple. Let the matrix $A=\left[ m\times n \right]$, $A^{-1}=\left[ a\times b \right]$. Since the product $A\cdot A^{-1}=E$ exists by definition, the matrices $A$ and $A^{-1}$ are consistent in that order:

\[\begin{align} & \left[ m\times n \right]\cdot \left[ a\times b \right]=\left[ m\times b \right] \\ & n=a \end{align}\]

This is a direct consequence of the matrix multiplication algorithm: the coefficients $n$ and $a$ are "transit" and must be equal.

At the same time, the multiplication in the other order is also defined: $A^{-1}\cdot A=E$, so the matrices $A^{-1}$ and $A$ are also consistent in that order:

\[\begin{align} & \left[ a\times b \right]\cdot \left[ m\times n \right]=\left[ a\times n \right] \\ & b=m \end{align}\]

Thus, without loss of generality, we can assume that $A=\left[ m\times n \right]$, $A^{-1}=\left[ n\times m \right]$. However, by definition $A\cdot A^{-1}=A^{-1}\cdot A$, so the dimensions of the matrices are exactly the same:

\[\begin{align} & \left[ m\times n \right]=\left[ n\times m \right] \\ & m=n \end{align}\]

So it turns out that all three matrices $A$, $A^{-1}$ and $E$ are square of size $\left[ n\times n \right]$. The lemma is proven.

Well, that's already good. We see that only square matrices are invertible. Now let's make sure that the inverse matrix is always unique.

Lemma 2. Given a matrix $A$ and its inverse $A^{-1}$. Then this inverse matrix is unique.

Proof. Let's argue by contradiction: let the matrix $A$ have at least two inverses, $B$ and $C$. Then, according to the definition, the following equalities hold:

\[\begin{align} & A\cdot B=B\cdot A=E; \\ & A\cdot C=C\cdot A=E. \\ \end{align}\]

From Lemma 1 we conclude that all four matrices $A$, $B$, $C$ and $E$ are square of the same order $\left[ n\times n \right]$. Therefore, the product $B\cdot A\cdot C$ is defined:

Since matrix multiplication is associative (but not commutative!), we can write:

\[\begin{align} & B\cdot A\cdot C=\left(B\cdot A \right)\cdot C=E\cdot C=C; \\ & B\cdot A\cdot C=B\cdot \left(A\cdot C \right)=B\cdot E=B; \\ & B\cdot A\cdot C=C=B\Rightarrow B=C. \\ \end{align}\]

We got the only possible option: two copies of the inverse matrix are equal. The lemma is proven.

The above reasoning almost verbatim repeats the proof of the uniqueness of the inverse element for all real numbers $b\ne 0$. The only significant addition is taking into account the dimension of matrices.

However, we still do not know whether every square matrix is invertible. Here the determinant comes to our aid: it is a key characteristic of all square matrices.

Lemma 3. Given a matrix $A$. If the inverse matrix $A^{-1}$ exists, then the determinant of the original matrix is nonzero:

\[\left| A \right|\ne 0\]

Proof. We already know that $A$ and $A^{-1}$ are square matrices of size $\left[ n\times n \right]$. Therefore, for each of them we can calculate the determinant: $\left| A \right|$ and $\left| A^{-1} \right|$. However, the determinant of a product is equal to the product of the determinants:

\[\left| A\cdot B \right|=\left| A \right|\cdot \left| B \right|\Rightarrow \left| A\cdot A^{-1} \right|=\left| A \right|\cdot \left| A^{-1} \right|\]

But by definition $A\cdot A^{-1}=E$, and the determinant of $E$ is always equal to 1, so

\[\begin{align} & A\cdot A^{-1}=E; \\ & \left| A\cdot A^{-1} \right|=\left| E \right|; \\ & \left| A \right|\cdot \left| A^{-1} \right|=1. \\ \end{align}\]

The product of two numbers is equal to one only if each of these numbers is different from zero:

\[\left| A \right|\ne 0;\quad \left| A^{-1} \right|\ne 0.\]

So it turns out that $\left| A \right|\ne 0$. The lemma is proven.
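A quick numerical illustration of this lemma (NumPy assumed available; the matrix is the one from the first task below):

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
A_inv = np.linalg.inv(A)

# det(A) * det(A^{-1}) must equal det(E) = 1, so neither factor can be zero.
print(np.linalg.det(A) * np.linalg.det(A_inv))  # 1.0 (up to rounding)
```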

In fact, this requirement is quite logical. Now we will analyze the algorithm for finding the inverse matrix - and it will become completely clear why, in principle, no inverse matrix can exist with a zero determinant.

But first, let's formulate an "auxiliary" definition:

Definition. A degenerate matrix is a square matrix of size $\left[ n\times n \right]$ whose determinant is zero.

Thus, we can assert that every invertible matrix is non-degenerate.

How to find the inverse matrix

Now we will consider a universal algorithm for finding inverse matrices. In general, there are two generally accepted algorithms; today we will consider the second one as well.

The one that will be considered now is very efficient for matrices of size $\left[ 2\times 2 \right]$ and - in part - of size $\left[ 3\times 3 \right]$. But starting from the size $\left[ 4\times 4 \right]$ it is better not to use it. Why - now you will understand everything.

Algebraic complements

Get ready. Now there will be pain. No, don't worry: a beautiful nurse in a skirt and lace stockings will not come to give you an injection. Everything is much more prosaic: algebraic complements and Her Majesty the "union matrix" are coming to you.

Let's start with the main thing. Let there be a square matrix $A=\left[ n\times n \right]$ whose elements are denoted $a_{ij}$. Then for each such element one can define an algebraic complement:

Definition. The algebraic complement $A_{ij}$ to the element $a_{ij}$ in the $i$-th row and $j$-th column of the matrix $A=\left[ n\times n \right]$ is a construction of the form

\[A_{ij}=(-1)^{i+j}\cdot M_{ij}^{*}\]

where $M_{ij}^{*}$ is the determinant of the matrix obtained from the original $A$ by deleting the same $i$-th row and $j$-th column.

Again. The algebraic complement to the matrix element with coordinates $\left(i;j \right)$ is denoted $A_{ij}$ and is calculated according to the scheme:

  1. First, we delete the $i$-th row and the $j$-th column from the original matrix. We get a new square matrix, and we denote its determinant by $M_{ij}^{*}$.
  2. Then we multiply this determinant by $(-1)^{i+j}$ - at first this expression may seem mind-blowing, but in fact we just work out the sign in front of $M_{ij}^{*}$.
  3. We count and get a specific number. I.e. the algebraic complement is just a number, not some new matrix, and so on.

The determinant $M_{ij}^{*}$ itself is called the complementary minor to the element $a_{ij}$. And in this sense, the above definition of an algebraic complement is a special case of a more complex definition, the one that we considered in the lesson about the determinant.
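As a small sketch of this scheme (plain Python; `algebraic_complement` is an illustrative name, not a library function; indices here are 0-based, which gives the same sign parity as $i+j$ in 1-based notation):

```python
def det(M):
    # Determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

def algebraic_complement(A, i, j):
    # Step 1: delete row i and column j; step 2: apply the sign (-1)^(i+j).
    minor = [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]
    return (-1) ** (i + j) * det(minor)

# The element a_11 of [[1, 2], [3, 4]] has complement (+1) * det([[4]]) = 4:
print(algebraic_complement([[1, 2], [3, 4]], 0, 0))  # 4
```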

Important note. Actually, in "adult" mathematics, algebraic complements are defined as follows:

  1. We take $k$ rows and $k$ columns of a square matrix. At their intersection we get a matrix of size $\left[ k\times k \right]$ - its determinant is called a minor of order $k$ and is denoted $M_{k}$.
  2. Then we cross out these "selected" $k$ rows and $k$ columns. Once again we get a square matrix - its determinant is called the complementary minor and is denoted $M_{k}^{*}$.
  3. Multiply $M_{k}^{*}$ by $(-1)^{t}$, where $t$ is (attention now!) the sum of the numbers of all selected rows and columns. This will be the algebraic complement.

Take a look at the third step: there is actually a sum of $2k$ terms! Another thing is that for $k=1$ we get only 2 terms - these will be the same $i+j$, the "coordinates" of the element $a_{ij}$ for which we are looking for the algebraic complement.

So today we use a slightly simplified definition. But as we will see later, it will be more than enough. Much more important is the following:

Definition. The union matrix $S$ of the square matrix $A=\left[ n\times n \right]$ is a new matrix of size $\left[ n\times n \right]$, which is obtained from $A$ by replacing the elements $a_{ij}$ by the algebraic complements $A_{ij}$:

\[A=\left[ \begin{matrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} \\\end{matrix} \right]\Rightarrow S=\left[ \begin{matrix} A_{11} & A_{12} & \ldots & A_{1n} \\ A_{21} & A_{22} & \ldots & A_{2n} \\ \ldots & \ldots & \ldots & \ldots \\ A_{n1} & A_{n2} & \ldots & A_{nn} \\\end{matrix} \right]\]

The first thought that arises at the moment of realizing this definition is “this is how much you have to count in total!” Relax: you have to count, but not so much. :)

Well, all this is very nice, but why is it necessary? Here's why.

Main theorem

Let's go back a little. Remember, Lemma 3 stated that an invertible matrix $A$ is always non-singular (that is, its determinant is nonzero: $\left| A \right|\ne 0$).

So, the converse is also true: if the matrix $A$ is not degenerate, then it is always invertible. And there is even a scheme for finding $A^{-1}$. Check it out:

Inverse matrix theorem. Let a square matrix $A=\left[ n\times n \right]$ be given, and let its determinant be nonzero: $\left| A \right|\ne 0$. Then the inverse matrix $A^{-1}$ exists and is calculated by the formula:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}\]

And now - all the same, but in legible handwriting. To find the inverse matrix, you need:

  1. Calculate the determinant $\left| A \right|$ and make sure it is nonzero.
  2. Compile the union matrix $S$, i.e. count 100500 algebraic complements $A_{ij}$ and put each in place of $a_{ij}$.
  3. Transpose the matrix $S$ and then multiply it by the number $q=1/\left| A \right|$.
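A compact sketch of these steps in NumPy (assuming it is available; the helper name `union_matrix` is ours, not a library function). It builds $S$ from algebraic complements and compares $\frac{1}{\left| A \right|}S^{T}$ with NumPy's built-in inverse:

```python
import numpy as np

def union_matrix(A):
    # S[i][j] = algebraic complement of a_ij: (-1)^(i+j) * minor determinant.
    n = A.shape[0]
    S = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            S[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return S

A = np.array([[1.0, -1.0, 2.0], [0.0, 2.0, -1.0], [1.0, 0.0, 1.0]])
A_inv = union_matrix(A).T / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))  # True
```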

And that's it! The inverse matrix $A^{-1}$ is found. Let's look at examples:

A task. Find the inverse matrix:

\[\left[ \begin{matrix} 3 & 1 \\ 5 & 2 \\\end{matrix} \right]\]

Solution. Let's check invertibility. We calculate the determinant:

\[\left| A \right|=\left| \begin{matrix} 3 & 1 \\ 5 & 2 \\\end{matrix} \right|=3\cdot 2-1\cdot 5=6-5=1\]

The determinant is different from zero, so the matrix is invertible. Let's compose the union matrix:

Let's calculate the algebraic complements:

\[\begin{align} & A_{11}=(-1)^{1+1}\cdot \left| 2 \right|=2; \\ & A_{12}=(-1)^{1+2}\cdot \left| 5 \right|=-5; \\ & A_{21}=(-1)^{2+1}\cdot \left| 1 \right|=-1; \\ & A_{22}=(-1)^{2+2}\cdot \left| 3 \right|=3. \\ \end{align}\]

Pay attention: the determinants |2|, |5|, |1| and |3| are determinants of matrices of size $\left[ 1\times 1 \right]$, not absolute values. I.e. if there were negative numbers in these determinants, the "minus" should not be removed.

In total, our union matrix looks like this:

\[A^{-1}=\frac{1}{\left| A \right|}\cdot S^{T}=\frac{1}{1}\cdot {\left[ \begin{array}{rr} 2 & -5 \\ -1 & 3 \\\end{array} \right]}^{T}=\left[ \begin{array}{rr} 2 & -1 \\ -5 & 3 \\\end{array} \right]\]

That's all there is to it. Problem solved.

Answer. $\left[ \begin{array}{rr} 2 & -1 \\ -5 & 3 \\\end{array} \right]$

A task. Find the inverse matrix:

\[\left[ \begin{array}{rrr} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end{array} \right]\]

Solution. Again, we consider the determinant:

\[\begin{align} \left| \begin{array}{rrr} 1 & -1 & 2 \\ 0 & 2 & -1 \\ 1 & 0 & 1 \\\end{array} \right| & =\left(1\cdot 2\cdot 1+\left(-1 \right)\cdot \left(-1 \right)\cdot 1+2\cdot 0\cdot 0 \right)-\left(2\cdot 2\cdot 1+\left(-1 \right)\cdot 0\cdot 1+1\cdot \left(-1 \right)\cdot 0 \right)= \\ & =\left(2+1+0 \right)-\left(4+0+0 \right)=-1\ne 0. \\ \end{align}\]

The determinant is different from zero, so the matrix is invertible. But now comes the most tedious part: we have to count as many as 9 (nine, damn it!) algebraic complements. And each of them contains a $\left[ 2\times 2 \right]$ determinant. Off we go:

\[\begin{matrix} A_{11}=(-1)^{1+1}\cdot \left| \begin{matrix} 2 & -1 \\ 0 & 1 \\\end{matrix} \right|=2; \\ A_{12}=(-1)^{1+2}\cdot \left| \begin{matrix} 0 & -1 \\ 1 & 1 \\\end{matrix} \right|=-1; \\ A_{13}=(-1)^{1+3}\cdot \left| \begin{matrix} 0 & 2 \\ 1 & 0 \\\end{matrix} \right|=-2; \\ ... \\ A_{33}=(-1)^{3+3}\cdot \left| \begin{matrix} 1 & -1 \\ 0 & 2 \\\end{matrix} \right|=2; \\ \end{matrix}\]

In short, the union matrix will look like this:

\[S=\left[ \begin{matrix} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\\end{matrix} \right]\]

Therefore, the inverse matrix will be:

\[A^{-1}=\frac{1}{-1}\cdot {\left[ \begin{matrix} 2 & -1 & -2 \\ 1 & -1 & -1 \\ -3 & 1 & 2 \\\end{matrix} \right]}^{T}=\left[ \begin{array}{rrr} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end{array} \right]\]

Well, that's all. Here is the answer.

Answer. $\left[ \begin{array}{rrr} -2 & -1 & 3 \\ 1 & 1 & -1 \\ 2 & 1 & -2 \\\end{array} \right]$

As you can see, at the end of each example, we performed a check. In this regard, an important note:

Don't be lazy to check. Multiply the original matrix by the found inverse - you should get $E$.

It is much easier and faster to perform this check than to look for an error in further calculations, when, for example, you solve a matrix equation.

Alternative way

As I said, the inverse matrix theorem works fine for the sizes $\left[ 2\times 2 \right]$ and $\left[ 3\times 3 \right]$ (in the latter case it is already not so "beautiful"), but for large matrices sadness begins.

But don't worry: there is an alternative algorithm that can be used to calmly find the inverse even for the $\left[ 10\times 10 \right]$ matrix. But, as is often the case, to consider this algorithm, we need a little theoretical background.

Elementary transformations

Among the various transformations of the matrix, there are several special ones - they are called elementary. There are exactly three such transformations:

  1. Multiplication. You can take the $i$-th row (column) and multiply it by any number $k\ne 0$;
  2. Addition. Add to the $i$-th row (column) any other $j$-th row (column) multiplied by any number $k\ne 0$ (of course, $k=0$ is also possible, but what's the point of that? Nothing will change);
  3. Permutation. Take the $i$-th and $j$-th rows (columns) and swap them.

Why these transformations are called elementary (for large matrices they do not look so elementary) and why there are only three of them - these questions are beyond the scope of today's lesson. Therefore, we will not go into details.

Another thing is important: we have to perform all these perversions on the associated matrix. Yes, yes, you heard right. Now there will be one more definition - the last one in today's lesson.

Associated Matrix

Surely in school you solved systems of equations using the addition method. Well, there, subtract another from one line, multiply some line by a number - that's all.

So: now everything will be the same, but already “in an adult way”. Ready?

Definition. Let the matrix $A=\left[ n\times n \right]$ and the identity matrix $E$ of the same size $n$ be given. Then the associated matrix $\left[ A\left| E \right. \right]$ is a new matrix of size $\left[ n\times 2n \right]$ that looks like this:

\[\left[ A\left| E \right. \right]=\left[ \begin{array}{rrrr|rrrr} a_{11} & a_{12} & \ldots & a_{1n} & 1 & 0 & \ldots & 0 \\ a_{21} & a_{22} & \ldots & a_{2n} & 0 & 1 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \\ a_{n1} & a_{n2} & \ldots & a_{nn} & 0 & 0 & \ldots & 1 \\\end{array} \right]\]

In short, we take the matrix $A$, append to it on the right the identity matrix $E$ of the required size, and separate them with a vertical bar for beauty - here's the associated matrix. :)

What's the catch? And here's what:

Theorem. Let the matrix $A$ be invertible. Consider the associated matrix $\left[ A\left| E \right. \right]$. If, using elementary row transformations, we bring it to the form $\left[ E\left| B \right. \right]$, i.e. by multiplying, subtracting and rearranging rows we obtain the matrix $E$ on the left in place of $A$, then the matrix $B$ obtained on the right is the inverse of $A$:

\[\left[ A\left| E \right. \right]\to \left[ E\left| B \right. \right]\Rightarrow B=A^{-1}\]

It's that simple! In short, the algorithm for finding the inverse matrix looks like this:

  1. Write the associated matrix $\left[ A\left| E \right. \right]$;
  2. Perform elementary row transformations until $E$ appears on the left in place of $A$;
  3. Of course, something will also appear on the right: a certain matrix $B$. This will be the inverse;
  4. PROFIT! :)
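Here is a minimal sketch of this algorithm in Python (exact `Fraction` arithmetic, with row swaps when a pivot is zero; `gauss_jordan_inverse` is an illustrative name):

```python
from fractions import Fraction

def gauss_jordan_inverse(A):
    n = len(A)
    # Associated matrix [A | E], stored with exact fractions.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Permutation: find a row with a nonzero pivot and swap it up.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            raise ValueError("Matrix is singular")
        M[col], M[pivot] = M[pivot], M[col]
        # Multiplication: scale the pivot row so the pivot becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Addition: subtract multiples of the pivot row from all other rows.
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right half is A^{-1}

A = [[1, 5, 1], [3, 2, 1], [6, -2, 1]]  # the matrix from the first task below
print(gauss_jordan_inverse(A))  # equals [[4, -7, 3], [3, -5, 2], [-18, 32, -13]]
```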

Of course, much easier said than done. So let's look at a couple of examples: for the sizes $\left[ 3\times 3 \right]$ and $\left[ 4\times 4 \right]$.

A task. Find the inverse matrix:

\[\left[ \begin{array}{rrr} 1 & 5 & 1 \\ 3 & 2 & 1 \\ 6 & -2 & 1 \\\end{array} \right]\]

Solution. We compose the associated matrix:

\[\left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\]

Since the last column of the original matrix is filled with ones, subtract the first row from the rest:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 & 1 & 0 \\ 6 & -2 & 1 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

There are no more ones in the third column, except in the first row. But we do not touch that row, otherwise the freshly removed ones will begin to "multiply" in the third column.

But we can subtract the second row twice from the last one: we get a one in the lower left corner:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 5 & -7 & 0 & -1 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \downarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Now we can subtract the last row from the first and twice from the second - in this way we will “zero out” the first column:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 1 & 5 & 1 & 1 & 0 & 0 \\ 2 & -3 & 0 & -1 & 1 & 0 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -1 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right] \\ \end{align}\]

Multiply the second row by −1 and then subtract it 6 times from the first and add 1 time to the last:

\[\begin{align} & \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & -1 & 0 & -3 & 5 & -2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 6 & 1 & 0 & 2 & -1 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & -1 & 0 & 1 & -2 & 1 \\\end{array} \right]\begin{matrix} -6 \\ \updownarrow \\ +1 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrr|rrr} 0 & 0 & 1 & -18 & 32 & -13 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 1 & 0 & 0 & 4 & -7 & 3 \\\end{array} \right] \\ \end{align}\]

It remains only to swap lines 1 and 3:

\[\left[ \begin{array}{rrr|rrr} 1 & 0 & 0 & 4 & -7 & 3 \\ 0 & 1 & 0 & 3 & -5 & 2 \\ 0 & 0 & 1 & -18 & 32 & -13 \\\end{array} \right]\]

Ready! On the right is the required inverse matrix.

Answer. $\left[ \begin{array}{rrr} 4 & -7 & 3 \\ 3 & -5 & 2 \\ -18 & 32 & -13 \\\end{array} \right]$

A task. Find the inverse matrix:

\[\left[ \begin{matrix} 1 & 4 & 2 & 3 \\ 1 & -2 & 1 & -2 \\ 1 & -1 & 1 & 1 \\ 0 & -10 & -2 & -5 \\\end{matrix} \right]\]

Solution. Again we compose the associated matrix:

\[\left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\]

Let's grieve a little about how much we will have to count now… and start counting. To begin with, we "zero out" the first column by subtracting row 1 from rows 2 and 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 1 & -2 & 1 & -2 & 0 & 1 & 0 & 0 \\ 1 & -1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \downarrow \\ -1 \\ -1 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right] \\ \end{align}\]

We observe too many "minuses" in rows 2-4. Multiply all three rows by −1, and then "burn out" the third column by subtracting row 3 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & -6 & -1 & -5 & -1 & 1 & 0 & 0 \\ 0 & -5 & -1 & -2 & -1 & 0 & 1 & 0 \\ 0 & -10 & -2 & -5 & 0 & 0 & 0 & 1 \\\end{array} \right]\begin{matrix} \ \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\ \left| \cdot \left(-1 \right) \right. \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 4 & 2 & 3 & 1 & 0 & 0 & 0 \\ 0 & 6 & 1 & 5 & 1 & -1 & 0 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 10 & 2 & 5 & 0 & 0 & 0 & -1 \\\end{array} \right]\begin{matrix} -2 \\ -1 \\ \updownarrow \\ -2 \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

Now it's time to "fry" the last column of the original matrix: subtract row 4 from the rest:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & -1 & -1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 3 & 0 & -1 & 1 & 0 \\ 0 & 5 & 1 & 2 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} +1 \\ -3 \\ -2 \\ \uparrow \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

The final touch: "burn out" the second column by adding 6 times row 2 to row 1 and subtracting 5 times row 2 from row 3:

\[\begin{align} & \left[ \begin{array}{rrrr|rrrr} 1 & -6 & 0 & 0 & -3 & 0 & 4 & -1 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 5 & 1 & 0 & 5 & 0 & -5 & 2 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right]\begin{matrix} +6 \\ \updownarrow \\ -5 \\ \ \\\end{matrix}\to \\ & \to \left[ \begin{array}{rrrr|rrrr} 1 & 0 & 0 & 0 & 33 & -6 & -26 & 17 \\ 0 & 1 & 0 & 0 & 6 & -1 & -5 & 3 \\ 0 & 0 & 1 & 0 & -25 & 5 & 20 & -13 \\ 0 & 0 & 0 & 1 & -2 & 0 & 2 & -1 \\\end{array} \right] \\ \end{align}\]

And again, the identity matrix on the left, so the inverse on the right. :)

Answer. $\left[ \begin{matrix} 33 & -6 & -26 & 17 \\ 6 & -1 & -5 & 3 \\ -25 & 5 & 20 & -13 \\ -2 & 0 & 2 & -1 \\\end{matrix} \right]$

That's all. Do the check yourself - I'm worn out. :)

Consider a square matrix $A$. Denote by $\Delta=\det A$ its determinant. A square matrix $B$ is the inverse of a square matrix $A$ of the same order if their product $A\cdot B=B\cdot A=E$, where $E$ is the identity matrix of the same order as $A$ and $B$.

A square matrix $A$ is called non-degenerate, or non-singular, if its determinant is nonzero, and degenerate, or singular, if $\Delta=0$.

Theorem. In order for $A$ to have an inverse, it is necessary and sufficient that its determinant be different from zero.

The inverse matrix of $A$ is denoted $A^{-1}$, so that $B=A^{-1}$, and it is calculated by the formula

$$A^{-1}=\frac{1}{\Delta}\cdot \left(A_{ij}\right)^{T}, \qquad (1)$$

where $A_{ij}$ are the algebraic complements of the elements $a_{ij}$, and $\Delta=\det A$.

Calculating $A^{-1}$ by formula (1) for matrices of high order is very laborious, so in practice it is convenient to find $A^{-1}$ using the method of elementary transformations (ET). Any non-singular matrix $A$ can be reduced to the identity matrix $E$ by elementary transformations of only columns (or only rows). If the transformations performed on the matrix $A$ are applied in the same order to the identity matrix $E$, the result will be $A^{-1}$. It is convenient to perform the transformations on $A$ and $E$ simultaneously, writing both side by side as $A|E$. If you want to find $A^{-1}$, you should use only rows or only columns in your transformations.

Finding the Inverse Matrix Using Algebraic Complements

Example 1. For the given matrix, find $A^{-1}$.

Solution. We first find the determinant of $A$: it is nonzero,
hence the inverse matrix exists, and we can find it by formula (1), where $A_{ij}$ ($i,j=1,2,3$) are the algebraic complements of the elements $a_{ij}$ of the original matrix $A$.

The algebraic complement of the element $a_{ij}$ is built from the minor $M_{ij}$, the determinant obtained by deleting row $i$ and column $j$. The minor is then multiplied by $(-1)^{i+j}$, i.e. $A_{ij}=(-1)^{i+j}M_{ij}$.


Finding the inverse matrix using elementary transformations

Example 2. Using the method of elementary transformations, find $A^{-1}$ for the given matrix $A$.

Solution. We append to the original matrix $A$ on the right an identity matrix of the same order: $A|E$. With the help of elementary column transformations, we reduce the left "half" to the identity matrix, simultaneously performing exactly the same transformations on the right "half".
To do this, swap the first and second columns. Add the first column to the third, and the first multiplied by -2 to the second. From the first column subtract the doubled second, and from the third the second multiplied by 6. Add the third column to the first and second. Multiply the last column by -1.
The square table obtained to the right of the vertical bar is the inverse matrix $A^{-1}$.

The inverse of a given matrix is a matrix such that multiplying the original matrix by it yields the identity matrix. A necessary and sufficient condition for the existence of an inverse matrix is that the determinant of the original matrix be nonzero (which in turn implies that the matrix must be square). If the determinant of a matrix is equal to zero, it is called singular and has no inverse. In higher mathematics inverse matrices are important and are used to solve a number of problems. For example, the matrix method for solving systems of equations is built on finding the inverse matrix. Our service site allows you to calculate the inverse matrix online by two methods: the Gauss-Jordan method and the method of algebraic complements. The first involves a large number of elementary transformations within the matrix, the second the calculation of the determinant and of the algebraic complements of all elements. To calculate the determinant of a matrix online, you can use our other service, Calculating the determinant of a matrix online.


Find the inverse matrix on the site

The website allows you to find the inverse matrix online, quickly and free of charge. The calculations are performed by our service, and the result is displayed with a detailed solution for finding the inverse matrix. The server always gives only an exact and correct answer. In problems on finding an inverse matrix online, it is necessary that the determinant of the matrix be nonzero; otherwise the website will report that the inverse matrix cannot be found because the determinant of the original matrix is equal to zero. The task of finding the inverse matrix appears in many branches of mathematics, being one of the most basic concepts of algebra and a mathematical tool in applied problems. Finding the inverse matrix on your own requires considerable effort, a lot of time, calculations and great care in order not to make a slip or a small error. Therefore, our service for finding the inverse matrix online will greatly facilitate your task and will become an indispensable tool for solving mathematical problems. Even if you find the inverse matrix yourself, we recommend checking your solution on our server. Enter your original matrix on our Calculate Inverse Matrix Online page and check your answer. Our system never makes a mistake and finds the inverse matrix of the given dimension online instantly! Symbolic entries are allowed in the elements of matrices; in this case the inverse matrix online will be presented in general symbolic form.