LinearAlgebra

Eigenvalues and eigenvectors

The prefix eigen- comes from the German word for "own", as in "self-" or "characteristic of".

An eigenvector of a square matrix $A$ is any non-zero vector $v$ such that $Av=\lambda v$ for some scalar $\lambda$; that $\lambda$ is called the eigenvalue corresponding to $v$.

For a transformation matrix $A$, any of its eigenvectors $v$ keeps the same direction under the transformation represented by $A$ (or has its direction reversed if the eigenvalue is negative), while being stretched by its corresponding eigenvalue. This is a way of visualising the equation $Av = \lambda v$.

An example of calculating eigenvalues and eigenvectors

How do we calculate the eigenvalues and eigenvectors? Suppose we have a square matrix $A$ with $Av=\lambda v$. Subtracting the right-hand side from both sides gives $(A-\lambda I)v=0$ (eqn 1). Since $v$ is not the zero vector, $A-\lambda I$ must be singular (otherwise $v=(A-\lambda I)^{-1}0=0$), so $\det(A-\lambda I)=0$.

For example, for the matrix $A = \left[ \begin{array}{cc} 2 & 1\\ 1 & 2 \end{array} \right]$, the equation $\det(A-\lambda I)=0$ becomes $\left| \begin{array}{cc} 2-\lambda & 1\\ 1 & 2-\lambda \end{array} \right| = 0$, which is equivalent to $4-4\lambda+\lambda^2-1=0$, or $\lambda^2-4\lambda+3=0$. Recall that the solutions of the quadratic equation in one unknown $ax^2+bx+c=0$ are $\frac{-b \pm \sqrt{b^2-4ac}}{2a}$, so the two eigenvalues are $\frac{4 \pm \sqrt{16-12}}{2} = 2 \pm 1$, i.e. $3$ and $1$.

Substituting these eigenvalues back into equation (1) gives $\left[ \begin{array}{cc} 2-3 & 1\\ 1 & 2-3 \end{array} \right]v = 0$ and $\left[ \begin{array}{cc} 2-1 & 1\\ 1 & 2-1 \end{array} \right]v = 0$. Writing $v = [a, b]^T$, the first system $\left\{ \begin{array}{c} -a+b=0 \\ a-b=0 \end{array} \right.$ gives the vectors $[a, a]^T$, corresponding to eigenvalue $3$, and the second system $\left\{ \begin{array}{c} a+b=0 \\ a+b=0 \end{array} \right.$ gives the vectors $[a, -a]^T$, corresponding to eigenvalue $1$. Therefore, the matrix is a linear operator that stretches all vectors parallel to the line $y=x$ by a factor of $3$, while keeping the perpendicular vectors (those parallel to $y=-x$) unchanged.
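As a quick sanity check of the calculation above, here is a short numerical sketch (it assumes numpy is available; the variable names are just illustrative):

```python
import numpy as np

# The matrix from the worked example above.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding (normalised) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)    # approximately [3., 1.] (order may differ)
print(eigenvectors)   # columns proportional to [1, 1] and [1, -1]

# Check A v = lambda v for every eigenpair.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that the returned eigenvectors are normalised and may have flipped signs, but they are still scalar multiples of $[1, 1]^T$ and $[1, -1]^T$.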

Trivial cases

If $A$ is a diagonal matrix (with $A_{ij}=0$ whenever $i \neq j$), its diagonal elements are exactly its eigenvalues.
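A minimal check of this claim, assuming numpy (the particular diagonal entries are arbitrary):

```python
import numpy as np

# The eigenvalues of a diagonal matrix are exactly its diagonal entries.
D = np.diag([5.0, -2.0, 0.5])
assert np.allclose(sorted(np.linalg.eigvals(D)), sorted(np.diag(D)))
```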

The notion can be extended to any vector space; when the vectors are functions, an eigenvector is called an eigenfunction. For example, the exponential function $f(x)=e^{\lambda x}$ is an eigenfunction of the derivative operator $\frac{d}{dx}$ with eigenvalue $\lambda$, since $f'(x)=\lambda e^{\lambda x} = \lambda f(x)$.
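The eigenfunction claim can be checked symbolically, for instance with sympy (using sympy here is an assumption of this sketch):

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
f = sp.exp(lam * x)

# d/dx e^(lambda x) = lambda * e^(lambda x), i.e. f is an eigenfunction
# of the derivative operator with eigenvalue lambda.
assert sp.simplify(sp.diff(f, x) - lam * f) == 0
```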

There are some related notions for a matrix (or linear operator):

Hermitian matrix

A Hermitian matrix is the complex analogue of a symmetric real matrix. A symmetric real matrix $A$ satisfies $A^T=A$. A Hermitian matrix $A$ satisfies $A^*=A$, where $A^*$ (sometimes written $A^H$) denotes the conjugate transpose: first conjugate every element, then transpose. $A^*=A$ can also be written as $a_{ij}=\overline{a_{ji}}$.
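A small illustration of the definition, assuming numpy (the matrix entries are an arbitrary example):

```python
import numpy as np

H = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])

# Conjugate transpose: conjugate every element, then transpose.
assert np.allclose(H, H.conj().T)   # H satisfies A* = A, so it is Hermitian

# For comparison, a real symmetric matrix satisfies the same identity,
# since conjugation does nothing to real entries.
S = np.array([[2.0, 1.0],
              [1.0, 3.0]])
assert np.allclose(S, S.conj().T)
```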

Properties:

Positive definite matrix

A symmetric real matrix $M$ is said to be positive definite $\iff$ $v^TMv > 0$ for all non-zero column vectors $v$. Similarly, a Hermitian matrix $M$ is positive definite $\iff$ $v^*Mv > 0$ for all non-zero $v$.
General definition: an $n \times n$ matrix $M$ is positive definite if $\operatorname{Re}[x^*Mx]>0 \; \forall x \neq 0$.
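Here is a sketch of how the symmetric/Hermitian case can be checked numerically, assuming numpy and using the equivalent criterion (for such matrices) that all eigenvalues are strictly positive; the function name is just illustrative:

```python
import numpy as np

def is_positive_definite(M, tol=1e-12):
    # For a symmetric real or Hermitian M, v*Mv > 0 for all v != 0 is
    # equivalent to all eigenvalues of M being strictly positive.
    return bool(np.all(np.linalg.eigvalsh(M) > tol))

print(is_positive_definite(np.array([[2.0, 1.0],
                                     [1.0, 2.0]])))    # True  (eigenvalues 3 and 1)
print(is_positive_definite(np.array([[1.0, 2.0],
                                     [2.0, 1.0]])))    # False (eigenvalues 3 and -1)
```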

A symmetric matrix is always square, since $M^T=M$ implies that the number of rows equals the number of columns. But why symmetric?

From Wolfram MathWorld, a positive definite real matrix doesn't need to be symmetric. It states that a real square matrix $A$ is positive definite $\iff$ its symmetric part $A_s=\frac{1}{2}(A^T+A)$ is positive definite. Could you provide an example of such a non-symmetric matrix?
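One candidate answer to the question above (an illustrative sketch, assuming numpy; the particular matrix is my own choice, not from the notes): the matrix below is not symmetric, yet $x^TAx = x_1^2 + x_2^2 > 0$ for every non-zero real $x$, because its symmetric part is the identity.

```python
import numpy as np

A = np.array([[ 1.0, 1.0],
              [-1.0, 1.0]])           # not symmetric

A_sym = 0.5 * (A + A.T)               # symmetric part = identity matrix
assert np.all(np.linalg.eigvalsh(A_sym) > 0)   # symmetric part is positive definite

# Spot-check x^T A x > 0 on random non-zero vectors.
rng = np.random.default_rng(0)
xs = rng.normal(size=(1000, 2))
assert np.all(np.einsum('ni,ij,nj->n', xs, A, xs) > 0)
```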

Orthogonal basis

From Wikipedia: The concept of an orthogonal (but not of an orthonormal) basis is applicable to a vector space $V$ (over any field) equipped with a symmetric bilinear form $\langle\cdot,\cdot\rangle$, where orthogonality of two vectors $v$ and $w$ means $\langle v, w\rangle = 0$. For an orthogonal basis $\{\mathbf{e}_k\}$: $$ \langle\mathbf{e}_j,\mathbf{e}_k\rangle = \left\{\begin{array}{ll}q(\mathbf{e}_k) & j = k \\ 0 & j \ne k \end{array}\right.\quad, $$ where $q$ is the quadratic form associated with $\langle\cdot,\cdot\rangle$: $q(\mathbf{v}) = \langle\mathbf{v}, \mathbf{v}\rangle$ (in an inner product space $q(\mathbf{v}) = \|\mathbf{v}\|^2$). Hence, $$ \langle\mathbf{v},\mathbf{w}\rangle = \sum\limits_{k} q(\mathbf{e}_k) v^k w^k\ , $$ where $v^k$ and $w^k$ are the components of $\mathbf{v}$ and $\mathbf{w}$ in the basis $\{\mathbf{e}_k\}$.
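A numerical illustration of the component formula above, assuming numpy and using the standard inner product on $\mathbb{R}^2$ with the orthogonal (but not orthonormal) basis $\{[1,1]^T, [1,-1]^T\}$ (my own choice of example):

```python
import numpy as np

e1 = np.array([1.0,  1.0])   # q(e1) = <e1, e1> = 2
e2 = np.array([1.0, -1.0])   # q(e2) = <e2, e2> = 2

v = np.array([3.0, 1.0])
w = np.array([0.0, 2.0])

# Components of v and w in the basis {e1, e2}.
B = np.column_stack([e1, e2])
v_comp = np.linalg.solve(B, v)   # [2., 1.]
w_comp = np.linalg.solve(B, w)   # [1., -1.]

q = np.array([e1 @ e1, e2 @ e2])            # q(e_k)
direct = v @ w                              # <v, w> computed directly
via_basis = np.sum(q * v_comp * w_comp)     # sum_k q(e_k) v^k w^k
assert np.isclose(direct, via_basis)        # both equal 2.0
```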

Vector field

References: Wolfram.