# Eigenvalues and eigenvectors

The prefix *eigen-* comes from the German word for "own" or "characteristic of".

An eigenvector of a *square* matrix $A$ is any *non-zero* vector $v$ satisfying $Av=\lambda v$ for some scalar $\lambda$; that scalar $\lambda$ is called the eigenvalue corresponding to $v$.

For a transformation matrix $A$, each of its eigenvectors $v$ keeps its direction under the transformation represented by $A$: it is merely stretched by the corresponding eigenvalue (a negative eigenvalue reverses the direction). This is one way of visualising the equation $Av = \lambda v$.

#### A worked example: calculating eigenvalues and eigenvectors

How do we calculate the eigenvalues and eigenvectors? Suppose $A$ is a square matrix with $Av=\lambda v$. Subtracting $\lambda v$ from both sides gives $(A-\lambda I)v=0$ (equation 1). Since $v$ is not the zero vector, the matrix $A-\lambda I$ must be singular, i.e. $\det(A-\lambda I)=0$.

For example, for the matrix $A = \left[ \begin{array}{cc} 2 & 1\\ 1 & 2 \end{array} \right]$, the equation $\det(A-\lambda I)=0$ becomes $\left| \begin{array}{cc} 2-\lambda & 1\\ 1 & 2-\lambda \end{array} \right| = 0$, which expands to $4-4\lambda+\lambda^2-1=0$, or $\lambda^2-4\lambda+3=0$. Recall that the solutions of the quadratic equation $ax^2+bx+c=0$ are $\frac{-b \pm \sqrt{b^2-4ac}}{2a}$, so the two eigenvalues are $\frac{4 \pm \sqrt{16-12}}{2} = 2 \pm 1$, i.e. $3$ and $1$.

Substituting each eigenvalue back into equation (1) gives $\left[ \begin{array}{cc} 2-3 & 1\\ 1 & 2-3 \end{array} \right]v = 0$ and $\left[ \begin{array}{cc} 2-1 & 1\\ 1 & 2-1 \end{array} \right]v = 0$. Writing $v = [a, b]^T$, the first system $\left\{ \begin{array}{c} -a+b=0 \\ a-b=0 \end{array} \right.$ gives the eigenvectors $[a, a]^T$ (any $a \neq 0$) corresponding to eigenvalue $3$, and the second system $\left\{ \begin{array}{c} a+b=0 \\ a+b=0 \end{array} \right.$ gives the eigenvectors $[a, -a]^T$ corresponding to eigenvalue $1$.

Therefore, as a linear operator the matrix stretches all vectors parallel to the line $y=x$ by a factor of $3$, while leaving the perpendicular vectors (parallel to $y=-x$) unchanged.
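The hand computation above can be checked numerically, for instance with NumPy (assumed available):

```python
import numpy as np

# The matrix from the worked example above.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and, as columns, unit-length eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The eigenvalues are 3 and 1 (the order returned by eig is not guaranteed).
print(np.sort(eigenvalues))  # [1. 3.]

# Each eigenvector v satisfies A @ v == lambda * v up to floating-point error.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that the library normalises each eigenvector to unit length; any non-zero multiple, such as $[a, a]^T$, would work equally well.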

#### Trivial cases

If $A$ is a diagonal matrix (with $A_{ij}=0$ whenever $i \neq j$), its diagonal entries are exactly its eigenvalues, and the standard basis vectors are corresponding eigenvectors.
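A quick NumPy sketch of this fact, using an arbitrary example diagonal:

```python
import numpy as np

# Build a diagonal matrix from an arbitrary choice of diagonal entries.
D = np.diag([4.0, -2.0, 7.0])

# Its eigenvalues are exactly the diagonal entries (compared here after sorting,
# since the order returned by eig is not guaranteed).
eigenvalues, eigenvectors = np.linalg.eig(D)
assert np.allclose(np.sort(eigenvalues), np.sort(np.diag(D)))
```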

The notion extends to any vector space; when the vectors are functions, an eigenvector is called an *eigenfunction*. For example, the exponential function $f(x)=e^{\lambda x}$ is an eigenfunction of the derivative operator $\frac{d}{dx}$ with eigenvalue $\lambda$, since $f'(x)=\lambda e^{\lambda x} = \lambda f(x)$.
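This eigenfunction property can be verified numerically by approximating the derivative with a central finite difference (the value of $\lambda$ and the sample points below are arbitrary choices for illustration):

```python
import numpy as np

lam = 2.5
f = lambda x: np.exp(lam * x)

# Sample points and a small step for the central finite difference.
x = np.linspace(-1.0, 1.0, 5)
h = 1e-6
derivative = (f(x + h) - f(x - h)) / (2 * h)

# f'(x) agrees with lam * f(x) at every sample point (up to discretisation error).
assert np.allclose(derivative, lam * f(x))
```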

There are some related notions:

For a matrix (or linear operator),

- eigensystem: the set of all eigenvectors, each paired with its corresponding eigenvalue.
- eigenspace: the set of all eigenvectors sharing a given eigenvalue, together with the zero vector. This set is a subspace. Its dimension can exceed one, i.e. an eigenvalue can have linearly independent eigenvectors: for the identity matrix, every non-zero vector is an eigenvector with eigenvalue $1$.
- eigenbasis: a basis of the whole space consisting of eigenvectors. Not every matrix has an eigenbasis, but every real symmetric matrix does (by the spectral theorem).
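For a symmetric matrix, NumPy's `np.linalg.eigh` exploits the symmetry and returns an orthonormal eigenbasis; a sketch using the matrix from the worked example:

```python
import numpy as np

# The symmetric matrix from the worked example.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns eigenvalues in ascending order and eigenvectors as columns of V.
w, V = np.linalg.eigh(S)

# The columns of V form an orthonormal eigenbasis: V.T @ V is the identity,
# and S diagonalises as V @ diag(w) @ V.T.
assert np.allclose(V.T @ V, np.eye(2))
assert np.allclose(V @ np.diag(w) @ V.T, S)
```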