Simply 3D

Math for computer graphics, made simple

In pursuit of a good 3D model representation

Posted by Andy on June 13, 2009

One of the most basic foundations of any 3d graphics system is its native model representation.  Although the concepts involved are pretty simple and well understood, devising a data representation that is convenient, flexible, and efficient can be tricky in practice.  This article is concerned with a general model representation that can easily be converted to the data formats required by 3D graphics hardware, but it is not directly concerned with those hardware-ready formats.

Let’s start with the basics.  At minimum, a polygonal model consists of:

  • a set of points in 3d space, called vertices
  • relationships or connections between those points, which organize them into polygons, called faces

[Figure: mesh1]

The faces are what the system actually renders.  The vertices provide spatial information about the surface, and determine the ultimate locations and shapes of the faces.  So far, so good.  It seems clear that we need:

  • a list of vertices.  Each vertex can be represented by a 3d vector that gives its position in space, relative to the origin <0, 0, 0>
  • a list of faces.  Each face should somehow indicate which vertices belong to it.  For example, a face could be a list of indices into the list of vertices, as in the sketch below
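
In code, this first cut might look something like the following minimal sketch (C++ here and throughout; all type and field names are illustrative, not taken from any particular library):

    #include <vector>

    // A point or direction in 3d space.
    struct Vec3 {
        float x, y, z;
    };

    // A face refers to its vertices by index into the vertex list.
    struct Face {
        std::vector<int> vertexIndices;
    };

    // The minimal model: positions plus connectivity.
    struct Mesh {
        std::vector<Vec3> vertices;  // positions relative to the origin
        std::vector<Face> faces;
    };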

This does indeed give us enough information to render the model in some way.  But as our rendering techniques become more sophisticated, we soon find that we need more information than this.  Surface shading models generally depend on surface normals to calculate how light reflects or scatters in different directions.  It is convenient to associate these normals with the vertices – either the lighting calculations will happen at the vertices, or else the normals themselves will be interpolated across the face so the lighting calculations can be performed at each pixel.  A vertex normal is calculated by averaging the normals of the faces that share the vertex.
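
As a rough sketch of that averaging step, assuming the Mesh types from the sketch above plus a few small vector helpers (all illustrative):

    #include <cmath>

    Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }

    Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return { v.x / len, v.y / len, v.z / len };
    }

    // The normal of a (planar) face, computed from its first three vertices.
    Vec3 faceNormal(const Mesh& mesh, const Face& face) {
        const Vec3& p0 = mesh.vertices[face.vertexIndices[0]];
        const Vec3& p1 = mesh.vertices[face.vertexIndices[1]];
        const Vec3& p2 = mesh.vertices[face.vertexIndices[2]];
        return normalize(cross(sub(p1, p0), sub(p2, p0)));
    }

    // A vertex normal: the normalized average of the normals of all
    // faces that share the given vertex.
    Vec3 vertexNormal(const Mesh& mesh, int vertexIndex) {
        Vec3 sum = { 0, 0, 0 };
        for (const Face& face : mesh.faces) {
            for (int i : face.vertexIndices) {
                if (i == vertexIndex) {
                    sum = add(sum, faceNormal(mesh, face));
                    break;
                }
            }
        }
        return normalize(sum);
    }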

As soon as we try to associate a normal with a vertex, and to figure out which face normals should contribute to its calculation, we immediately encounter an interesting problem of model representation, which I will call the hard-edge/soft-edge problem.  Consider the following variations in shading on our model:

[Figure: mesh2]

In the version on the left, the edges of these square faces are hard, or sharp.  This gives the surface a faceted look – it is made of distinct, flat faces.  Hard edges are useful for models of things like cubes and robots.  In the version on the right, these faces are just part of a continuous, smooth surface.  The edges are soft – in fact, we wish to downplay the presence of any edges altogether.  Soft edges are good for models of things like spheres and human faces.  Clearly these are very different effects.  But what is the difference, exactly?

The difference is that the hard faces have all their normals pointing one way.  The surface normals do not vary across the faces.  In the smooth model, the surface normals do vary continuously across the faces.  So what is the difference in terms of data representation?  Well, given that we are (at least conceptually) interpolating the normals at the vertices across the faces, soft edges are actually the default effect.  For hard edges, the only way the normals will not vary is if they are the same at every vertex for a given face.  In particular, the normal at each vertex for that face should be the face normal itself.  Look at the normals in the following illustration:

[Figure: mesh3]

In the smooth surface, each vertex has just one normal, so each normal is shared between adjacent faces.  In the hard-edged surface, it’s as if the normals are split at each vertex, so that each adjacent face gets its own normal.  Unfortunately, this seems rather at odds with the notion of a vertex normal.  These split normals are no longer really associated with a vertex.  Rather, each normal is now associated with a vertex and a face – i.e. with a vertex-face pair.

How should we represent this in our model data?  For the smooth case, it seems like we really do want to store the normals with the vertices.  But for the hard-edge case, it seems like the normals belong with the faces.  We could have two different representations, but there are lots of objects, like cylinders, that have both hard and smooth edges.  For this reason, it would be much better if we could find a common representation that would accommodate both.  We could actually make the more general case the standard – that is, always store the normals per-face.  For the smooth case, this isn’t really necessary, but we can get the smooth effect by ensuring that adjacent faces have the same normals associated with shared vertices.  That would look like this:

[Figure: mesh4]

Here, the two cases have the same basic representation.  And it isn’t awful…  But there are a couple of things wrong with doing this in the smooth model.  First, it duplicates data needlessly.  Second, we’ve actually lost an interesting piece of information – the fact that those faces share normals at those vertices is no longer explicit in the model.  The best we could do is compare the normals numerically in order to guess that, which is not very reliable.
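
For reference, a sketch of what that per-face standard would look like (replacing the earlier Face; again, the names are illustrative):

    // Normals stored per face, one per corner.  For hard edges, every
    // corner normal is simply the face normal.  For the smooth effect,
    // adjacent faces must be given numerically identical normals at
    // their shared vertices.
    struct Face {
        std::vector<int>  vertexIndices;
        std::vector<Vec3> cornerNormals;  // parallel to vertexIndices
    };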

What we need to do is to approach this like the relational data-modeling problem that it is.  We need to get more precise about what the relationship between vertices, faces, and so-called vertex normals really is.  One face has many vertices, and multiple faces may share the same vertex.  For shared vertices, the normals may be shared as well, but they might just as well not be shared.  That last part tells us we have to decouple the normals from the vertices, but still be able to relate faces to both.  Another illustration should make this clearer:

[Figure: mesh5]

The red dots are vertex positions.  There is a fixed number of vertex positions, regardless of how we share normals or connect faces.  The purple dots represent the vertex normals.  The blue lines are faces.  For hard edges (left), normals are not shared among faces.  For soft edges (right), they are.  Both of the objections raised two paragraphs ago are solved.  In the smooth model, normals are not duplicated.  Even better, we have explicit information about which faces share which normals.  This has at least one specific and important benefit: each normal is associated with exactly the set of faces whose face normals contribute to the calculation of the vertex normal.  This makes calculation of the vertex normals from the model very convenient.  Notice that the faces are no longer directly related to the vertex positions – instead, they are related to them via the vertex normals.  This might seem a little funny at first, but it is just as convenient in practice as a direct relation, and it is a more accurate representation of what we are trying to model.

So far, our model consists of (a sketch in code follows the list):

  • vertex positions – 3d vectors representing points in 3d space
  • vertex normals – not so much values in themselves as links that relate faces to vertices, while supporting hard and soft edges.  Each contains a reference to the underlying vertex position, and a list of references to the faces that share it.  The actual value, also a 3d vector, can be calculated easily from this face list.
  • faces – a face is now a list of vertex normals.  Indirectly, it can still be considered a list of vertex positions.
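
Sketched in code, with indices standing in for the references (one possible arrangement, not a definitive design):

    // A vertex normal is a link: it ties one vertex position to the
    // faces that share it, which are exactly the faces whose normals
    // contribute to its value.
    struct VertexNormal {
        int positionIndex;             // the underlying vertex position
        std::vector<int> faceIndices;  // the faces that share this normal
    };

    // A face is now a list of vertex normals; vertex positions are
    // reached indirectly, through the normals.
    struct Face {
        std::vector<int> vertexNormalIndices;
    };

    struct Mesh {
        std::vector<Vec3>         positions;
        std::vector<VertexNormal> vertexNormals;
        std::vector<Face>         faces;
    };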

That’s concise, and works beautifully.  So we’re done, right?

Unfortunately, as soon as we introduce the next common piece of per-vertex data, we immediately run into a similar problem yet again.  I’m talking about UV coordinates.  UV coordinates are coordinates in texture space, which map a polygonal region of the texture to a face.  Like normals, they are basically related to a face-vertex pair.  UV coordinates may be shared by faces at a vertex, or each face might have its own.  Note the similarity to the hard-edge/soft-edge problem:

[Figure: uv1]

The case on the left, like the hard-edge case for normals, requires each face to have its own set of UV coordinates.  This is because the UV coordinates needed at the inside edges will be discontinuous in texture space.  E.g. the U coordinate will go from zero at the left edge to one at the middle edge, and then from zero at the middle edge to one at the right edge.  The case on the right, like the soft-edge case for normals, is better represented by sharing UV coordinates among adjacent faces.
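
To make the discontinuity concrete, here are hypothetical UVs for those two quads in the left case, each mapping the full texture:

    struct Vec2 { float u, v; };  // a point in texture space

    // Corners listed counter-clockwise from the bottom-left.  Along the
    // shared middle edge, the left face needs u = 1 while the right
    // face needs u = 0, so those UVs cannot be shared.
    Vec2 leftFaceUVs[4]  = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };
    Vec2 rightFaceUVs[4] = { {0, 0}, {1, 0}, {1, 1}, {0, 1} };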

Since the problem is so similar, can we just piggyback the UV coordinates onto the normals?  Not quite.  That would enable the correct relation between faces, UVs, and vertices, but not between UVs and normals.  We can have shared normals where UVs are not shared, and shared UVs where normals are not shared.  The cases are similar, but independent of one another.  Therefore, we need another layer of indirection that works in the same way.  Here’s what that looks like:

[Figure: mesh6]

Whoa.  What’s that all about?  The green dots are a new level of indirection that I’ll call face vertices.  They give us a way to group data that belongs to these face-vertex pairs that we’ve been touching on.  When we were only considering vertex normals, we could afford to sort of lump the concepts of face vertices and vertex normals together.  Now that we’ve introduced a new kind of per-face, per-vertex data, we need to make this concept formal so that we can distinguish between the kinds of data.  The yellow dots are that new kind of data – the UV coordinates.  The vertex normals can be shared, or not, as before.  The UVs can also be shared, or not.  Notice, though, that the face vertices themselves can only be shared if all the data associated with them is shared.  That’s the bottom-right case in the illustration.  In the bottom-left case, even though the UVs are shared, the face vertices cannot be shared because they must be able to refer uniquely to the vertex normals, which are not shared.  We could remove this asymmetry by never sharing face vertices – and you might prefer that.  Sharing them in the special case at the bottom-right is just a small efficiency gain.

With the addition of face vertices, any new kind of per-face, per-vertex data can be shared, or not.  This begins to look like a general solution.  Our model now consists of (again, a sketch in code follows the list):

  • vertex positions – 3d vectors representing points in 3d space
  • vertex normals – links that relate face vertices to vertex positions, while supporting hard and soft edges.  Each contains a reference to the underlying vertex position, and a list of references to the faces that share it.  The actual value, also a 3d vector, can be calculated easily from this face list.
  • UV coordinates – vectors representing points in texture space
  • face vertices – links that relate faces to per-face, per-vertex data, such as vertex normals, UV coordinates, skin weights, tangents, and so on.  Each contains references to this data, and those references can be to shared or separate referents.
  • faces – each face is a list of face vertices.  Indirectly, via the face vertices, it can also be considered a list of vertex positions, vertex normals, UV coordinates, and other per-face, per-vertex data.  A face typically also contains a reference to the material to be applied to that face.
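
Here is the whole thing in one sketch (one possible arrangement; the field names and the material reference are illustrative):

    struct Vec2 { float u, v; };  // a point in texture space

    struct VertexNormal {
        int positionIndex;             // the underlying vertex position
        std::vector<int> faceIndices;  // faces that share this normal
    };

    // A face vertex groups the per-face, per-vertex data.  Two faces may
    // share a face vertex only if they share all of the data it refers to.
    struct FaceVertex {
        int vertexNormalIndex;
        int uvIndex;
        // further per-face, per-vertex data (skin weights, tangents, ...)
        // would be referenced here as well
    };

    struct Face {
        std::vector<int> faceVertexIndices;
        int materialIndex;  // the material applied to this face
    };

    struct Mesh {
        std::vector<Vec3>         positions;
        std::vector<VertexNormal> vertexNormals;
        std::vector<Vec2>         uvs;
        std::vector<FaceVertex>   faceVertices;
        std::vector<Face>         faces;
    };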

This gives us a general polygonal model representation, which can be extended by adding new kinds of per-face, per-vertex data.  Because the data is organized by face, and then by face vertex, this representation can be readily converted to the vertex/index stream formats expected by 3d graphics hardware.  The considerations that drive all the little details of that conversion would be a good topic for a follow-up article…
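
As a small preview of that conversion, the face vertices map naturally onto hardware vertices: walk the faces and emit one hardware vertex per distinct face vertex, reusing its stream index whenever the same face vertex comes up again.  A rough sketch, assuming the Mesh from the previous sketch, faces that have already been triangulated, and a hypothetical computeVertexNormal helper that averages the face normals in a vertex normal’s face list:

    #include <unordered_map>

    struct HardwareVertex {
        Vec3 position;
        Vec3 normal;
        Vec2 uv;
    };

    // Hypothetical helper: averages the face normals in vn.faceIndices.
    Vec3 computeVertexNormal(const Mesh& mesh, const VertexNormal& vn);

    void buildStreams(const Mesh& mesh,
                      std::vector<HardwareVertex>& vertexStream,
                      std::vector<int>& indexStream) {
        // Maps a face vertex index to its slot in the vertex stream, so
        // shared face vertices are emitted only once.
        std::unordered_map<int, int> emitted;
        for (const Face& face : mesh.faces) {
            for (int fv : face.faceVertexIndices) {
                auto it = emitted.find(fv);
                if (it == emitted.end()) {
                    const FaceVertex& faceVertex = mesh.faceVertices[fv];
                    const VertexNormal& vn =
                        mesh.vertexNormals[faceVertex.vertexNormalIndex];
                    HardwareVertex hw;
                    hw.position = mesh.positions[vn.positionIndex];
                    hw.normal   = computeVertexNormal(mesh, vn);
                    hw.uv       = mesh.uvs[faceVertex.uvIndex];
                    it = emitted.emplace(fv, (int)vertexStream.size()).first;
                    vertexStream.push_back(hw);
                }
                indexStream.push_back(it->second);
            }
        }
    }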

Posted in Fundamentals, Polygonal Models