Linear Algebra
Matrices
A matrix is a rectangular array of numbers, and is often used to represent the coefficients of a set of simultaneous equations. Two or more equations are simultaneous if each time a variable appears in any of the equations, it represents the same quantity. For example, suppose the following relationship exists between the ages of a brother and two sisters: Jack is three years older than his sister Mary, and eleven years older than his sister Nancy, who is half as old as Mary. There are three separate statements here, each of which can be translated into mathematical notation, as follows:
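    j = m + 3     (1)
    j = n + 11    (2)
    n = m/2       (3)

where j, m, and n stand for the ages of Jack, Mary, and Nancy, respectively.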
This is a system of three simultaneous equations in three unknowns. Each unknown age is represented by a variable. Each time a particular variable appears in an equation, it stands for the same quantity. In order to see how the concept of a matrix enters, rewrite the above equations, using the standard rules of algebra, as:
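    j - m = 3     (1')
    j - n = 11    (2')
    m - 2n = 0    (3')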
Since a matrix is a rectangular array of numbers, the coefficients of equations (1'), (2'), and (3') can be written in the form of a matrix, A, called the matrix of coefficients, by letting each column contain the coefficients of a given variable (j, m, and n from left to right) and each row contain the coefficients of a single equation (equations (1'), (2'), and (3') from top to bottom). That is,
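        [ 1  -1   0 ]
    A = [ 1   0  -1 ]
        [ 0   1  -2 ]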
Matrix multiplication is carried out by multiplying each row of the left matrix by each column of the right matrix. Thinking of the left matrix as containing a number of "row vectors" and the right matrix as containing a number of "column vectors," matrix multiplication consists of a series of vector dot products: row 1 times column 1 produces the entry in row 1, column 1 of the product matrix; row 2 times column 1 produces the entry in row 2, column 1; and so on, until each row has been multiplied by each column. The product matrix has the same number of rows as the left matrix and the same number of columns as the right matrix. For two matrices to be compatible for multiplication, the right matrix must have the same number of rows as the left matrix has columns.

The matrix with ones on the diagonal (the diagonal of a matrix begins in the upper left corner and ends in the lower right corner) and zeros everywhere else is the identity element for matrix multiplication, usually denoted by I. The inverse of a matrix A is the matrix A^-1 such that AA^-1 = I. Not every matrix has an inverse; however, if a square matrix does have an inverse, then A^-1A = AA^-1 = I. That is, multiplication of a square matrix by its inverse is commutative.
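To make the row-times-column rule concrete, here is a minimal sketch in Python (the code and names are illustrative conveniences, not part of the original example):

```python
def mat_mul(left, right):
    # Compatibility: the right matrix must have as many rows
    # as the left matrix has columns.
    assert len(right) == len(left[0]), "incompatible dimensions"
    # Entry (i, j) of the product is the dot product of row i
    # of the left matrix with column j of the right matrix.
    return [[sum(left[i][k] * right[k][j] for k in range(len(right)))
             for j in range(len(right[0]))]
            for i in range(len(left))]

# The 3 x 3 identity matrix I: ones on the diagonal, zeros elsewhere.
I = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1]]
# The coefficient matrix from equations (1'), (2'), and (3').
A = [[1, -1, 0],
     [1, 0, -1],
     [0, 1, -2]]
print(mat_mul(A, I) == A)  # True: multiplying by I leaves A unchanged
```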
Just as a matrix can be thought of as a collection of vectors, a vector can be thought of as a one-column, or one-row, matrix. Thus, multiplication of a vector by a matrix follows the rules of matrix multiplication. For example, let the variables in the previous example be collected into the column vector x = (j, m, n). Then the product of the coefficient matrix, A, times the vector, x, is a three-row, one-column matrix whose entries correspond to the left-hand sides of equations (1'), (2'), and (3').
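Continuing the sketch in Python (the trial ages and the use of NumPy are assumptions for illustration), multiplying A by a column of trial values evaluates the left-hand side of each equation:

```python
import numpy as np

A = np.array([[1, -1, 0],
              [1, 0, -1],
              [0, 1, -2]])
x = np.array([10, 4, 2])  # trial values for (j, m, n)

# Each entry of A @ x is one left-hand side: j - m, j - n, m - 2n.
print(A @ x)  # [6 8 0]
```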
Finally, by expressing the constants on the right-hand sides of those equations as a constant column vector, c = (3, 11, 0), the three equations can be written as the single matrix equation Ax = c. This equation can be solved using the inverse of the matrix A. That is, multiplying both sides of the equation by the inverse of A provides the solution: x = A^-1c. One general method for finding the inverse of a matrix, and hence the solution to a system of equations, is given by Cramer's rule.
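A minimal sketch of this solution in Python with NumPy (the library choice is an assumption; the article itself names no tool):

```python
import numpy as np

A = np.array([[1, -1, 0],
              [1, 0, -1],
              [0, 1, -2]], dtype=float)
c = np.array([3, 11, 0], dtype=float)

x = np.linalg.inv(A) @ c  # x = A^-1 c
print(x)  # [19. 16.  8.] -> Jack is 19, Mary is 16, Nancy is 8
```

In practice, np.linalg.solve(A, c) is preferred over forming the inverse explicitly, but the inverse form mirrors the equation x = A^-1c directly.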