In a previous blog post, I wrote about algorithmically generating images with Core Image. Computer-generated art is a fascinating subject, and things get even more interesting when we enter the third dimension. In fact, mathematical equations can be used to parameterize 3D surfaces such as the one pictured below:
The seashell surface, shown above, can be parameterized by a set of equations in two variables.
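One representative parameterization of this kind describes a circle that both sweeps along and grows with a logarithmic spiral. The specific coefficients below are illustrative stand-ins, and tweaking them changes the tightness and growth rate of the shell:

$$
\begin{aligned}
x(u, v) &= e^{v/4} \, (2 + \cos u) \cos v \\
y(u, v) &= e^{v/4} \, (2 + \cos u) \sin v \\
z(u, v) &= e^{v/4} \, (\sin u + 2)
\end{aligned}
$$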
Note that $ x, y, z $ are functions of $ u $ and $ v $, where $ 0 \leq u \leq 2 \pi $ and $ -2 \pi \leq v \leq 2 \pi $. In other words, any point in 2D $ uv $-space maps to a 3D point in our model. If we were to generate these points at sufficiently small intervals, we would get a “point cloud” that resembles the seashell rendered above. That’s all for the math; now let’s translate this to code.
In OS X “Mountain Lion”, Apple introduced SceneKit, a new Objective-C framework for rendering 3D graphics. While SceneKit is technically a wrapper over OpenGL, it provides an object-oriented (rather than procedural) programming model. More specifically, a scene in SceneKit consists of a hierarchy of nodes to which we can attach geometry, lights, cameras, and so on. SceneKit’s node hierarchy will feel familiar to anyone who has worked with Core Animation layers. In fact, many of the concepts and much of the terminology from Core Animation carry over to SceneKit, making for a gentle learning curve once you get used to thinking in 3D space. I won’t cover the basics of SceneKit here; Apple’s 2012 and 2013 WWDC sessions are good starting points.
Like any good framework, SceneKit makes common tasks easy and complex tasks possible. We’ll focus on the latter here. Going beyond SceneKit’s primitive geometries – cubes, cylinders, spheres – we’ll explore how we can programmatically construct the geometry for the seashell pictured above.
The three building blocks of geometry in SceneKit are vertices, vertex normals, and triangles. As you probably guessed, the $ x, y, z $ equations listed above give us our model’s vertices. Unfortunately, we can’t pass these equations along to SceneKit directly. Rather, we need to come up with a set of discrete vertices corresponding to points in our $ uv $-space. We can do this by subdividing our $ uv $-space into an evenly-spaced grid of points. As the number of subdivisions grows larger, the distances between the vertices get smaller, ultimately resulting in a smoother surface.
The code below does just this.
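A minimal sketch of that step follows. The `seashellPoint` helper encodes the illustrative parameterization from earlier; if you prefer a different shell, this is the only function that needs to change:

```objc
#import <SceneKit/SceneKit.h>
#include <math.h>
#include <stdlib.h>

// A vertex stores a position (x, y, z) and a normal (nx, ny, nz).
typedef struct {
    float x, y, z;
    float nx, ny, nz;
} Vertex;

// Number of grid cells along each axis of uv-space.
#define SUBDIVISIONS 128
#define VERTEX_COUNT ((SUBDIVISIONS + 1) * (SUBDIVISIONS + 1))

// Evaluate the (illustrative) seashell parameterization at (u, v).
static SCNVector3 seashellPoint(float u, float v)
{
    float r = expf(v / 4.0f);
    return SCNVector3Make(r * (2.0f + cosf(u)) * cosf(v),
                          r * (2.0f + cosf(u)) * sinf(v),
                          r * (sinf(u) + 2.0f));
}

// Build an evenly-spaced grid of vertices over 0 <= u <= 2π, -2π <= v <= 2π.
static Vertex *generateVertices(void)
{
    Vertex *vertices = malloc(sizeof(Vertex) * VERTEX_COUNT);
    int index = 0;
    for (int i = 0; i <= SUBDIVISIONS; i++) {
        for (int j = 0; j <= SUBDIVISIONS; j++) {
            float u = 2.0f * M_PI * i / SUBDIVISIONS;
            float v = -2.0f * M_PI + 4.0f * M_PI * j / SUBDIVISIONS;
            SCNVector3 p = seashellPoint(u, v);
            vertices[index].x = p.x;
            vertices[index].y = p.y;
            vertices[index].z = p.z;
            index++;
        }
    }
    return vertices;
}
```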
First, we define a custom Vertex type that stores $ x, y, z $ coordinates and $ nx, ny, nz $ normal coordinates. Then we generate a 2D matrix of points in $ uv $-space. You can tweak the SUBDIVISIONS constant to strike a good balance between rendering performance and detail.
Next, we need to compute a vertex normal for each vertex. Among other things, vertex normals ensure that light reflects properly off the model during rendering. We can compute the normal vector at a vertex by taking the cross product of the partial derivatives $ \frac{\partial r}{\partial u} $ and $ \frac{\partial r}{\partial v} $ of the parameterization above. The resulting expressions, obtained with Mathematica, are quite complex.
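Written out component-wise, before substituting the seashell partials, the (unnormalized) normal vector is:

$$
n = \frac{\partial r}{\partial u} \times \frac{\partial r}{\partial v} = \left( \frac{\partial y}{\partial u}\frac{\partial z}{\partial v} - \frac{\partial z}{\partial u}\frac{\partial y}{\partial v},\; \frac{\partial z}{\partial u}\frac{\partial x}{\partial v} - \frac{\partial x}{\partial u}\frac{\partial z}{\partial v},\; \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial y}{\partial u}\frac{\partial x}{\partial v} \right)
$$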
Add the following lines of code below the comment that reads `// STEP 2: add normal equations here`:
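If you would rather not transcribe those long closed-form expressions, here is an alternative sketch that approximates the two partial derivatives numerically with central differences and then takes their cross product; for a smooth surface, the result matches the analytic normals up to a small discretization error:

```objc
// Approximate the unit surface normal at (u, v) using central differences.
// The 1/(2h) factors in the difference quotients are omitted because they
// cancel when the cross product is normalized.
static SCNVector3 seashellNormal(float u, float v)
{
    const float h = 1e-3f;
    SCNVector3 u0 = seashellPoint(u - h, v), u1 = seashellPoint(u + h, v);
    SCNVector3 v0 = seashellPoint(u, v - h), v1 = seashellPoint(u, v + h);

    // Finite-difference approximations of dr/du and dr/dv.
    float ax = u1.x - u0.x, ay = u1.y - u0.y, az = u1.z - u0.z;
    float bx = v1.x - v0.x, by = v1.y - v0.y, bz = v1.z - v0.z;

    // Cross product (dr/du) x (dr/dv).
    float nx = ay * bz - az * by;
    float ny = az * bx - ax * bz;
    float nz = ax * by - ay * bx;

    // Normalize to unit length.
    float len = sqrtf(nx * nx + ny * ny + nz * nz);
    if (len > 0.0f) { nx /= len; ny /= len; nz /= len; }
    return SCNVector3Make(nx, ny, nz);
}

// Fill in the normal fields of the vertex grid generated earlier.
static void generateNormals(Vertex *vertices)
{
    int index = 0;
    for (int i = 0; i <= SUBDIVISIONS; i++) {
        for (int j = 0; j <= SUBDIVISIONS; j++) {
            float u = 2.0f * M_PI * i / SUBDIVISIONS;
            float v = -2.0f * M_PI + 4.0f * M_PI * j / SUBDIVISIONS;
            SCNVector3 n = seashellNormal(u, v);
            vertices[index].nx = n.x;
            vertices[index].ny = n.y;
            vertices[index].nz = n.z;
            index++;
        }
    }
}
```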
Lastly, we need to combine triplets of vertices into triangles. We can do that by walking the grid one square of four neighboring vertices at a time and creating two triangles for each square.
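Concretely, with the row-major grid layout used above, each square contributes two triangles. A sketch (if the shell renders inside-out, flip the winding order of each triangle):

```objc
// Two triangles per grid square.
#define TRIANGLE_COUNT (SUBDIVISIONS * SUBDIVISIONS * 2)

// Build an index buffer describing the triangles.
static int *generateIndices(void)
{
    int *indices = malloc(sizeof(int) * TRIANGLE_COUNT * 3);
    int t = 0;
    for (int i = 0; i < SUBDIVISIONS; i++) {
        for (int j = 0; j < SUBDIVISIONS; j++) {
            int topLeft     = i * (SUBDIVISIONS + 1) + j;
            int topRight    = topLeft + 1;
            int bottomLeft  = topLeft + (SUBDIVISIONS + 1);
            int bottomRight = bottomLeft + 1;

            // First triangle of the square.
            indices[t++] = topLeft;
            indices[t++] = bottomLeft;
            indices[t++] = topRight;

            // Second triangle of the square.
            indices[t++] = topRight;
            indices[t++] = bottomLeft;
            indices[t++] = bottomRight;
        }
    }
    return indices;
}
```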
Finally, we create and return the geometry.
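Here is a sketch of that final step. Both geometry sources point into the same interleaved vertex buffer; `dataOffset` and `dataStride` tell SceneKit where each attribute lives inside a `Vertex`:

```objc
#include <stddef.h> // for offsetof

static SCNGeometry *seashellGeometry(void)
{
    Vertex *vertices = generateVertices();
    generateNormals(vertices);
    int *indices = generateIndices();

    NSData *vertexData = [NSData dataWithBytes:vertices
                                        length:sizeof(Vertex) * VERTEX_COUNT];
    NSData *indexData = [NSData dataWithBytes:indices
                                       length:sizeof(int) * TRIANGLE_COUNT * 3];
    free(vertices);
    free(indices);

    SCNGeometrySource *positionSource =
        [SCNGeometrySource geometrySourceWithData:vertexData
                                         semantic:SCNGeometrySourceSemanticVertex
                                      vectorCount:VERTEX_COUNT
                                  floatComponents:YES
                              componentsPerVector:3
                                bytesPerComponent:sizeof(float)
                                       dataOffset:offsetof(Vertex, x)
                                       dataStride:sizeof(Vertex)];
    SCNGeometrySource *normalSource =
        [SCNGeometrySource geometrySourceWithData:vertexData
                                         semantic:SCNGeometrySourceSemanticNormal
                                      vectorCount:VERTEX_COUNT
                                  floatComponents:YES
                              componentsPerVector:3
                                bytesPerComponent:sizeof(float)
                                       dataOffset:offsetof(Vertex, nx)
                                       dataStride:sizeof(Vertex)];

    SCNGeometryElement *element =
        [SCNGeometryElement geometryElementWithData:indexData
                                      primitiveType:SCNGeometryPrimitiveTypeTriangles
                                     primitiveCount:TRIANGLE_COUNT
                                      bytesPerIndex:sizeof(int)];

    return [SCNGeometry geometryWithSources:@[ positionSource, normalSource ]
                                   elements:@[ element ]];
}
```

A node can then display the result with `[SCNNode nodeWithGeometry:seashellGeometry()]`.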
You can find a sample program that renders the geometry on GitHub.