## Vector–matrix multiplication: What matrices do to vectors

Posted by Andy on May 29, 2009

This is the first post in a series on perspective projection, but it’s useful on its own, too.

In order to understand how a projection matrix works, first you have to understand what a matrix actually does to a vector when you multiply them. It can seem a little mysterious if you don’t look too closely, but once you look, it’s almost intuitive.

First, a refresher on dot products. Remember that you get the dot product of two vectors by multiplying matching components and adding the products, like so: **a**·**b** = **ax**·**bx** + **ay**·**by** + **az**·**bz** + **aw**·**bw**.
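As a quick sketch, the component-wise definition looks like this in Python (the function name `dot` is just my label, not anything from the original post):

```python
# Dot product: multiply matching components, then add the products.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# 1*4 + 2*5 + 3*6 + 1*1 = 4 + 10 + 18 + 1
print(dot([1, 2, 3, 1], [4, 5, 6, 1]))  # 33
```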

You can look at vector/matrix multiplication as taking the dot products of the vector with each row of the matrix. Each dot product becomes a component of the result vector. Here’s how I like to think about this: *all* of the components of the original vector can potentially contribute to *each* component of the result vector. Each row of the matrix determines the contributions for one component of the result vector. So, in the result vector, **x** is determined by **row 1** in the matrix, **y** is determined by **row 2**, and so on. Let’s see how this works for the simplest case: the identity matrix. After this, it should make sense why the identity matrix has ones on the diagonal.

Each component of the original vector can contribute additively to any component in the result vector, and the weightings of the contributions are determined by the corresponding row in the matrix. So, **P’x** is determined by **row 1**. You can see that in the identity matrix, row 1 has a one in the **x** position, and zeros in all other positions. So **Px** from the original vector is preserved intact by the one in that row, while **Py**, **Pz**, and **Pw** are suppressed by the zeros. In the second row, there is a one in the **y** position, and the rest are zeros. Since row 2 determines the contribution weightings for **P’y**, the contribution from **Py** is preserved, while **Px**, **Pz**, and **Pw** get zeroed.
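Here is a minimal sketch of that row-by-row view, using the identity matrix as the example (the helper names `dot` and `mat_mul_vec` are mine, not from the post):

```python
# Dot product: multiply matching components, then add the products.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Vector/matrix multiplication as row dot products: each row of the
# matrix produces exactly one component of the result vector.
def mat_mul_vec(m, v):
    return [dot(row, v) for row in m]

identity = [
    [1, 0, 0, 0],  # row 1: P'x = 1*Px + 0*Py + 0*Pz + 0*Pw = Px
    [0, 1, 0, 0],  # row 2: P'y = Py
    [0, 0, 1, 0],  # row 3: P'z = Pz
    [0, 0, 0, 1],  # row 4: P'w = Pw
]

p = [2.0, 3.0, 4.0, 1.0]
print(mat_mul_vec(identity, p))  # [2.0, 3.0, 4.0, 1.0] -- unchanged
```

The ones pick out one source component per row, and the zeros suppress the rest, so the vector passes through unchanged.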

Hopefully that makes sense after going over it once or twice. The trick is to see each row in the matrix as *selecting* the way that the original vector’s components combine to produce each component of the result. The tools used to do this are multiplication (to scale or cancel a source component) and addition (to combine those components into a single result component).

Of course, most matrices are more interesting than the identity matrix. Try applying this to some of the transformation matrices. Notice how the rows of a scaling matrix only affect one vector component at a time, just like the identity matrix, since **x** scaled is always a function of just **x**. In contrast, notice how the rows of the rotation matrices affect multiple components at once, since **x** rotated can be a function of both **x and y**, or **x and z**, depending on the axis of rotation. Once you absorb this, you’ll be able to see intuitively what a given matrix is doing to a vector, and make up your own matrices to do interesting things to vectors.
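A quick sketch of both cases, under the same row-dot-product view (the particular scale factors, the choice of the z axis, and the helper names are my own illustration):

```python
import math

# Dot product and row-by-row vector/matrix multiply, as before.
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def mat_mul_vec(m, v):
    return [dot(row, v) for row in m]

# Scaling matrix: like the identity, each row touches only one
# component -- scaled x depends only on x, scaled y only on y, etc.
scale = [
    [2, 0, 0, 0],
    [0, 3, 0, 0],
    [0, 0, 4, 0],
    [0, 0, 0, 1],
]

# Rotation about the z axis: rows 1 and 2 each mix x and y,
# so rotated x is a function of both x and y.
t = math.pi / 2  # 90 degrees
rot_z = [
    [math.cos(t), -math.sin(t), 0, 0],
    [math.sin(t),  math.cos(t), 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]

print(mat_mul_vec(scale, [1, 1, 1, 1]))  # [2, 3, 4, 1]

# Rotating the x axis 90 degrees about z lands it on the y axis.
rotated = mat_mul_vec(rot_z, [1.0, 0.0, 0.0, 1.0])
print([round(c, 6) for c in rotated])  # [0.0, 1.0, 0.0, 1.0]
```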
