Projections and Orthogonality
Dot Products
Dot products are rather simple. Let's say you have a vector v with elements v1,v2,...,vn, and a vector u with elements u1,u2,...,un. Then the dot product is as follows.
u⋅v=u1v1+u2v2+...+unvn
Here are some properties of dot products.
u⋅v=v⋅u
c(u⋅v)=(cu)⋅v=u⋅(cv)
(u+v)⋅p=u⋅p+v⋅p
It's a somewhat uncommon way of expressing it, but you can also express the length (or norm) of a vector using dot products.
‖v‖=√(v⋅v), or equivalently ‖v‖²=v⋅v
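To make the notation concrete, here's a quick sketch in Python using numpy (the example vectors are arbitrary):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 2.0])

# Dot product: u1*v1 + u2*v2 + ... + un*vn
print(np.dot(u, v))           # 8.0

# The norm of v is the square root of v . v
print(np.sqrt(np.dot(v, v)))  # 4.5825...
print(np.linalg.norm(v))      # same value, using the built-in norm
```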
Orthogonal Vectors
Two vectors are orthogonal if their dot product equals zero.
The above concept is an important one to understand. It can be proven with the law of cosines (a zero dot product corresponds to a 90-degree angle between the vectors), which I can't be bothered to format in LaTeX, so if you really want the proof, here it is.
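As a tiny sanity check (vectors picked arbitrarily), here are two vectors in R3 whose dot product works out to zero:

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([-2.0, 1.0, 5.0])

# u . v = (1)(-2) + (2)(1) + (0)(5) = 0, so u and v are orthogonal
print(np.dot(u, v))   # 0.0
```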
Let's talk about vector spaces W and W⊥.
We're gonna assume that W is a subspace of Rn.
W⊥ is the vector space made up of every vector v that is orthogonal to every vector in W.
W⊥ is a subspace of Rn.
W⊥ is referred to as the orthogonal complement of W.
There are a few important orthogonal complements to take note of, based on the subspaces of a matrix that we have learned.
(RowA)⊥=NulA
(ColA)⊥=NulA⊺
In words, the first statement means that every vector in Row A (that is, every row of A) is orthogonal to every vector in Nul A. The second statement says that every vector in Col A (every column of A) is orthogonal to every vector in Nul A⊺.
The proof for the first is simply that the definition of the null space of A is Ax=0, meaning that the dot product of each row of A with x is 0, fitting the definition of orthogonal vectors. Applying the same argument to A⊺ gives the second statement.
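Here's a quick numerical check of (RowA)⊥=NulA, with an arbitrary (square, rank-deficient) matrix; the null space basis is read off the SVD:

```python
import numpy as np

# An arbitrary rank-deficient matrix so that Nul A is nontrivial
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

# For a square A, the rows of V^T in the SVD that correspond to
# (near-)zero singular values form a basis for Nul A.
_, s, vt = np.linalg.svd(A)
null_basis = vt[s < 1e-10]

# Every row of A is orthogonal to every null space vector,
# so all of these dot products come out (numerically) zero.
print(A @ null_basis.T)
```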
Orthogonal Sets
An orthogonal set is a set of vectors in which each vector in the set is orthogonal to every other vector in the set. Note that if the set consists entirely of nonzero vectors, the vectors in the set are linearly independent.
An orthogonal basis for a subspace W in Rn is exactly what it sounds like: a basis for W that is also an orthogonal set.
Here is a theorem about orthogonal sets. Assuming {u1,u2,...,up} is an orthogonal basis for W, each vector y in W can be written as the linear combination y=c1u1+c2u2+...+cpup, where the weights are cj=(y⋅uj)/(uj⋅uj).
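Here's a small sketch of that weight formula in Python (the orthogonal basis and the vector y are arbitrary):

```python
import numpy as np

# An arbitrary orthogonal basis {u1, u2} for a plane W in R^3
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])

# A vector that lies in W (built as a known combination of u1 and u2)
y = 3.0 * u1 - 2.0 * u2

# Weights recovered straight from the theorem: c_j = (y . u_j) / (u_j . u_j)
c1 = np.dot(y, u1) / np.dot(u1, u1)
c2 = np.dot(y, u2) / np.dot(u2, u2)
print(c1, c2)   # 3.0 -2.0
```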
Decomposition and Projection
Let's say we had a vector y in Rn. We can decompose, or break up, that vector into the sum of two vectors: one a multiple of some nonzero vector u, and the other orthogonal to u. In other words, observe the diagram.

Here is the notation for that bottom vector: it is written as y^=βu. In accordance with our original goal of decomposing the vector y into the sum of two vectors, we find that y=y^+z. However, this form is useless insofar as we don't know the constant β. A more useful form, which doesn't involve calculating the constant at all, is the following.
y^=((y⋅u)/(u⋅u))u
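Here's that projection-onto-a-line formula as a quick Python sketch (y and u are arbitrary):

```python
import numpy as np

# Project an arbitrary y onto the line spanned by an arbitrary u
y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])

y_hat = (np.dot(y, u) / np.dot(u, u)) * u   # the multiple of u
z = y - y_hat                               # the part orthogonal to u

print(y_hat)          # [8. 4.]
print(z)              # [-1.  2.]
print(np.dot(z, u))   # 0.0, confirming z is orthogonal to u
```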
The Orthogonal Decomposition Theorem
Let W be a subspace of Rn. Then each y in Rn can be expressed uniquely in the following form.
y=y^+z
where y^ is in W and z is in W⊥. Keeping in mind that {u1,u2,...,up} is an orthogonal basis for W, the following holds true.
y^=((y⋅u1)/(u1⋅u1))u1+((y⋅u2)/(u2⋅u2))u2+...+((y⋅up)/(up⋅up))up
And of course, z=y−y^.
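Here's the theorem worked out numerically in Python (the orthogonal basis {u1, u2} and the vector y are arbitrary):

```python
import numpy as np

# An arbitrary orthogonal basis {u1, u2} for a subspace W of R^3
u1 = np.array([2.0, 5.0, -1.0])
u2 = np.array([-2.0, 1.0, 1.0])
y  = np.array([1.0, 2.0, 3.0])

# y_hat = (y.u1 / u1.u1) u1 + (y.u2 / u2.u2) u2
y_hat = (np.dot(y, u1) / np.dot(u1, u1)) * u1 \
      + (np.dot(y, u2) / np.dot(u2, u2)) * u2
z = y - y_hat

print(y_hat)                          # the orthogonal projection of y onto W
print(np.dot(z, u1), np.dot(z, u2))   # both 0, so z is in W-perp
```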
The Best Approximation Theorem
Given that W is a subspace of Rn, y is any vector in Rn, and y^ is the orthogonal projection of y onto W, y^ is the closest point in W to y. That is, ‖y−y^‖<‖y−v‖ for every v in W other than y^.
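A quick numerical spot check (reusing the projection from the previous example and comparing against a few arbitrary points in W):

```python
import numpy as np

u1 = np.array([2.0, 5.0, -1.0])
u2 = np.array([-2.0, 1.0, 1.0])
y  = np.array([1.0, 2.0, 3.0])

y_hat = (np.dot(y, u1) / np.dot(u1, u1)) * u1 \
      + (np.dot(y, u2) / np.dot(u2, u2)) * u2

# Every other point v in W is at least as far from y as y_hat is
for a, b in [(1.0, 0.0), (0.0, 1.0), (0.3, 0.6), (-2.0, 1.5)]:
    v = a * u1 + b * u2
    print(np.linalg.norm(y - y_hat) <= np.linalg.norm(y - v))   # True
```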
Properties of Orthonormal Matrices
An orthonormal matrix is a matrix whose columns form an orthogonal basis in Rn and are all unit vectors (meaning that each vector's magnitude is one). Let's look at some theorems about orthonormal bases, given that U is a matrix with orthonormal columns.
U⊺U=I
‖Ux‖=‖x‖
(Ux)⋅(Uy)=x⋅y
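Here's a small check of those properties in Python (U is an arbitrary 3x2 matrix with orthonormal columns):

```python
import numpy as np

# An arbitrary matrix U whose two columns are orthonormal
U = np.array([[1/np.sqrt(2),  1/np.sqrt(2)],
              [1/np.sqrt(2), -1/np.sqrt(2)],
              [0.0,           0.0        ]])

x = np.array([3.0, -4.0])

print(U.T @ U)                 # ~ the 2x2 identity matrix
print(np.linalg.norm(U @ x))   # 5.0
print(np.linalg.norm(x))       # 5.0, so multiplying by U preserves length
```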
The Gram-Schmidt Process
The Gram-Schmidt process is a painful method of finding an orthonormal basis for a subspace. Given any basis {x1,x2,...,xp} for a nonzero subspace W of Rn, the following holds true.
v1=x1
v2=x2−((x2⋅v1)/(v1⋅v1))v1
v3=x3−((x3⋅v1)/(v1⋅v1))v1−((x3⋅v2)/(v2⋅v2))v2
...
vp=xp−((xp⋅v1)/(v1⋅v1))v1−...−((xp⋅vp−1)/(vp−1⋅vp−1))vp−1
Then {v1,v2,...,vp} is an orthogonal basis for W, and dividing each vj by its length gives an orthonormal basis.
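Here's a minimal sketch of the process in Python (classical Gram-Schmidt; gram_schmidt is a made-up helper name, and there's no handling of nearly dependent inputs):

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X are a basis; returns an orthonormal basis as columns."""
    basis = []
    for x in X.T:                       # walk through the columns of X
        v = x.copy()
        for u in basis:                 # subtract the projection of x onto
            v -= (np.dot(x, u) / np.dot(u, u)) * u   # each earlier v_j
        basis.append(v)
    # Normalizing each orthogonal vector gives an orthonormal basis
    return np.column_stack([v / np.linalg.norm(v) for v in basis])

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
Q = gram_schmidt(X)
print(Q.T @ Q)   # ~ the identity matrix, so the columns are orthonormal
```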
QR Decomposition
A matrix A (with linearly independent columns) can be decomposed into a product of an orthonormal matrix Q and an upper triangular matrix R.
A=QR
The columns of Q can be found with the Gram-Schmidt process and then turned into unit vectors.
Well, we know Q is orthonormal, so we can rewrite this. Remember a property mentioned earlier, that U⊺U=I where U is orthonormal. Therefore, multiplying both sides of A=QR on the left by Q⊺, we can rewrite the above equation like so.
Q⊺A=Q⊺QR
Which simplifies to
R=Q⊺A
so once Q is known, R is just a matrix product away.
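As a quick check, here's the factorization done with numpy's built-in QR routine (A is arbitrary; numpy may flip some signs relative to a hand-run Gram-Schmidt, but the identities still hold):

```python
import numpy as np

# An arbitrary matrix with linearly independent columns
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])

Q, R = np.linalg.qr(A)   # reduced form: Q is 3x2, R is 2x2 upper triangular

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: Q has orthonormal columns
print(np.allclose(Q @ R, A))             # True: A = QR
print(np.allclose(Q.T @ A, R))           # True: R = Q^T A
```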