Creating the Scene (Days 13 and 14)

I saved it for last, but that doesn’t mean it’s anywhere near the least important part of the project. How much an environment matters varies with the intended application or purpose of a particular model, of course, but in this case, I thought it was only fair to put as much effort into creating an environment for the rabbit as I did into creating the rabbit itself.

Plane Manipulation

While I didn’t intend to recreate the rabbit’s actual habitat, I decided the environment needed to consist of something more than a flat, featureless plane. I’m pretty sure I neglected to mention it previously, but during the first attempt at creating the rabbit, I created a plane as a placeholder and put it on a separate layer so I could work on it when the time came without worrying about anything else getting in the way. There were other placeholders as well, but the plane was the only one that didn’t get scrapped when I made my second attempt. Instead, it was slightly modified through proportional editing: I subdivided the plane and then raised and lowered some of the resulting faces to form basic hills and valleys. I then assigned a brown material to it to simulate soil.
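For reference, here’s a minimal sketch of those steps done through Blender’s Python API rather than the interface. The cut count and soil colour are placeholder values, and some arguments (e.g. the plane’s size parameter and the material colour) differ between Blender versions.

    import bpy

    # Add a ground plane and subdivide it so there are faces to raise and lower.
    bpy.ops.mesh.primitive_plane_add(size=20)   # "radius" instead of "size" in 2.7x
    ground = bpy.context.active_object

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.subdivide(number_cuts=20)      # placeholder density
    bpy.ops.object.mode_set(mode='OBJECT')
    # The hills and valleys themselves were shaped by hand with proportional editing.

    # A plain brown material to stand in for soil.
    soil = bpy.data.materials.new(name="Soil")
    soil.diffuse_color = (0.25, 0.15, 0.08, 1.0)   # RGBA in 2.8+; RGB in 2.7x
    ground.data.materials.append(soil)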

Making Grass

Importing and Transparency

Anyone who’s ever been outdoors knows that grass tends not to be uniform in size, shape, or distribution anywhere it grows without external manipulation and maintenance. To reflect that in Blender, I could have found a bunch of grass blades, imported them as reference images, and then modeled them. I also could have created a hair-based particle system and simply manipulated it. However, I was made aware of a third option: importing images as planes and working with them directly. Enabled through an add-on in Blender’s user preferences, this option creates a plane with the image mapped onto its face. By manipulating the plane, it’s possible to change the shape and appearance of the image in ways an image editor can’t (e.g. “bending” the image to make it a three-dimensional object). It’s also possible to do things in Blender that an image editor can, such as making an image’s background transparent with the Node Editor.
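As a rough sketch, enabling the add-on and importing an image from Python looks something like the following; the file path and name are placeholders, and the operator is the one from the 2.7x-era add-on, so it may differ in other versions.

    import bpy
    import addon_utils

    # Enable the bundled "Import Images as Planes" add-on.
    addon_utils.enable("io_import_images_as_planes", default_set=True)

    # Import a grass image as a textured plane (placeholder path and filename).
    bpy.ops.import_image.to_plane(
        files=[{"name": "grass_blade_01.png"}],
        directory="//textures/",
    )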

[Image: NodeTransparency]

The above grass textures were imported as planes with black backgrounds. Each has an alpha channel: a per-pixel indicator of how transparent the image is when blended with another. This matters because the above node setup (i.e. a diffuse shader and a transparent shader combined with a Mix shader)* uses an image’s alpha channel to make it transparent. Connecting the texture node’s Alpha output to the Mix shader’s Factor input restricts the transparency to the background of the image rather than applying it to the whole image. The alpha channels exist in this case because the images’ backgrounds were made transparent in a separate image editor** prior to importing.
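If you’d rather build this material in Python than drag noodles around, a sketch of the same node setup might look like this (the image path is a placeholder):

    import bpy

    mat = bpy.data.materials.new("GrassBlade")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links
    nodes.clear()

    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//textures/grass_blade_01.png")  # placeholder path

    diffuse = nodes.new("ShaderNodeBsdfDiffuse")
    transparent = nodes.new("ShaderNodeBsdfTransparent")
    mix = nodes.new("ShaderNodeMixShader")
    output = nodes.new("ShaderNodeOutputMaterial")

    # Colour feeds the diffuse shader; Alpha feeds the mix factor, so only the
    # background pixels (alpha = 0) end up using the transparent shader.
    links.new(tex.outputs["Color"], diffuse.inputs["Color"])
    links.new(tex.outputs["Alpha"], mix.inputs["Fac"])
    links.new(transparent.outputs["BSDF"], mix.inputs[1])   # used where alpha is 0
    links.new(diffuse.outputs["BSDF"], mix.inputs[2])       # used where alpha is 1
    links.new(mix.outputs["Shader"], output.inputs["Surface"])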

If an image doesn’t have an alpha channel, one can be created in Blender with an additional node, specifically the Math node. The Math node performs one of several selectable operations and outputs a numerical value. To create transparency, the operation needed is either Greater Than or Less Than, and the node receives the image’s color output as its input. Consider that the color of each pixel in a digital image is created by combining different amounts of red, green, and blue; changing the proportion of any of these changes the resulting color. In Blender, this resulting color can be represented on a numerical scale ranging from zero to one, where zero is 0% and one is 100% (of that color). The idea, then, is to apply transparency to pixels with color values greater than or less than a value specified in the Math node. That value depends on the specific image used and therefore requires experimentation to figure out. In this project, I created an alpha channel for the grass blade depicted on the far right in the image below using the Less Than operation and a value of 0.800, since it had a white background (a scripted sketch of this follows the image):

[Image: GrassTextures]
The node setup depicted here is a simplification of the earlier one, created by grouping three nodes together into a single node that can be inserted and manipulated like any other. The two “grass plants” were created by duplicating, scaling, and combining the textures into a single object.
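Continuing the material sketch above, the Math-node approach for the white-background blade might be scripted as follows; it reuses the nodes, links, tex, and mix variables from that sketch and assumes the image has no usable alpha channel.

    # The 0.800 threshold is the experimentally chosen value mentioned above.
    math_node = nodes.new("ShaderNodeMath")
    math_node.operation = 'LESS_THAN'            # white background, so "less than"
    math_node.inputs[1].default_value = 0.800

    # Dark (grass) pixels fall below the threshold and output 1 (opaque);
    # the bright background outputs 0 and takes the transparent shader.
    links.new(tex.outputs["Color"], math_node.inputs[0])
    links.new(math_node.outputs["Value"], mix.inputs["Fac"])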

*I didn’t have any specific knowledge of which nodes did what. I just played around with them until I got the effect I wanted. Come to think of it, that could describe anything I do in Blender.

**The images were created by Michael Bridges and included in an asset pack of nature images that I downloaded.

Application and Distribution

As it turns out, I technically did create and manipulate a hair-based particle system to populate the plane with grass. Unlike when I first covered the rabbit in fur, however, I immediately created a number of others: one for each individual texture I wanted to use as well as one for each of the two grass plants. I reduced the number of particles from the default of 1000 to under 50 before setting their length to one. How did I replace these hairs with the grass textures? In the Render section of a particle system’s properties, there are buttons that determine how particles are rendered, if at all. These options are None, Path, Object, and Group. I didn’t experiment with the others, so I couldn’t tell you exactly what they do (I assume the other two that render particles deal with how they’re arranged), but Object replaces each particle with a specified object. This option has a size component, which I also set to one so that, by default, each instance would appear at the size at which it was created.
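In script form, one of these per-texture systems might be set up roughly as follows; the object names are placeholders, and the instance property went by a different name in the Blender version of that era.

    import bpy

    ground = bpy.data.objects["GroundPlane"]    # placeholder names
    blade = bpy.data.objects["GrassBlade01"]

    mod = ground.modifiers.new("GrassBlades01", type='PARTICLE_SYSTEM')
    settings = mod.particle_system.settings
    settings.type = 'HAIR'
    settings.count = 40                  # well under the default of 1000
    settings.hair_length = 1.0
    settings.render_type = 'OBJECT'      # render each particle as an object...
    settings.particle_size = 1.0         # ...at the size it was modelled
    settings.instance_object = blade     # called "dupli_object" in 2.7x builds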

[Image: plane_WP]
The weight distribution for the initial particle system (grass base).

Aside from this change, the procedure for handling the distribution of grass particles was exactly the same as it was for the fur. I created vertex groups and defined them with weight painting. I then adjusted the number of particles and manipulated their children until I achieved an effect with which I was satisfied. This took a number of tests, so to save time, I previewed renders at a low sample rate, something else I may have neglected to mention in earlier entries. The purpose of a test is to get a rough (rather than exact) idea of how the final product will look. Increasing the sample rate increases the quality of the render but also substantially increases the render time, so when quality is a particular consideration, it makes sense to do only one high-quality render at the end.
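The distribution and preview settings can be sketched the same way; the object and vertex group names are placeholders, and the child count and sample rate are just example values.

    import bpy

    ground = bpy.data.objects["GroundPlane"]              # placeholder name
    psys = ground.particle_systems["GrassBlades01"]

    # Restrict emission to a weight-painted vertex group.
    psys.vertex_group_density = "GrassDensity"            # placeholder group name

    # Children multiply the visible blades without adding real particles.
    psys.settings.child_type = 'INTERPOLATED'
    psys.settings.rendered_child_count = 20               # example value

    # Keep test renders cheap; raise this again for the final render.
    bpy.context.scene.cycles.samples = 32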

Sapling and the Final Render (Day 15)

Given how little of the result is visible in the final render of the scene, I’d be lying if I said I intended to mention Sapling when I started writing this entry. However, after maybe ten minutes of thought, I decided to give it at least a passing mention, since it could easily be a part of, and have a larger role in, future projects. Sapling is another add-on in Blender’s user preferences, and it enables the quick creation of trees. A tree generated this way is a curve, not a mesh object, so it needs to be converted to one before taking its place in a scene that will be rendered. Before that, though, it has several groups of settings that can be adjusted to fit a project’s needs: Animation, Armature, Leaves, Pruning, Branch Growth, Branch Splitting, Branch Radius, and Geometry. Each of these contains further settings that refine the appearance of the tree.

I used Geometry, Branch Splitting, and Leaves. In the first, I applied the Bevel setting, which gave the curve thickness, and loaded a preset. These presets correspond to actual trees. I wasn’t planning to use one, but I coincidentally stumbled upon a Japanese maple, which seemed appropriate. Still in the Geometry settings, I scaled the tree down slightly. In Branch Splitting, I increased the number of levels (recursive branches) from the default of two to three. In the Leaves section, I clicked Show Leaves, and leaves appeared at the ends of the branches. Increasing the levels resulted in more branches and therefore more leaves, and I didn’t have to manually adjust the number of either. When this was done, I converted the maple to a mesh object and then applied materials to the trunk, branches, and leaves before moving the entire thing into the layer that held everything else (i.e. the grass-covered plane and the rabbit).
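For completeness, here’s a sketch of generating and converting such a tree from Python; the parameter names are from memory of the 2.7x Sapling add-on and may differ in other versions.

    import bpy
    import addon_utils

    # Sapling ships with Blender as the "Sapling Tree Gen" add-on.
    addon_utils.enable("add_curve_sapling", default_set=True)

    # Generate a tree curve with bevelled (solid) branches, three levels of
    # branching, and leaves, then convert the curve to a mesh.
    bpy.ops.curve.tree_add(bevel=True, levels=3, showLeaves=True)
    tree = bpy.context.active_object
    bpy.ops.object.convert(target='MESH')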

All that remained at this point was positioning, lighting, and of course, rendering. Well, almost. Not content to simply change the color of the background as I had when I rendered the chess set (which occupies the site’s banner, or whatever that is, at the time of this writing), I added a skybox. The Amami black rabbit is nocturnal, so I chose another pre-made asset, a moonscape. It’s ironic that it was called that, though, since the way I arranged the scene makes it impossible to see the Moon. Anyway, to apply it, I went into the World tab and changed the “color” of the background to an environment texture. Its projection can be set to either equirectangular or mirror ball: equirectangular images like the moonscape use a latitude-longitude projection, while mirror ball images capture the environment as reflected in a mirrored sphere. The image cloaked the entire scene in a blue light, the intensity of which I adjusted to look more natural. I then scaled the rabbit down before positioning it and the tree in the scene. Test renders followed. Once I found a position I liked, I added a UV sphere to the scene and applied a white emission material to it. This turned it into a light source, which was necessary because the light from Blender’s default lamps behaves differently in Cycles Render than it does in Blender Render. I performed a few more tests before increasing the sample rate and rendering the final image. Almost nineteen hours* later, the render finished, and I considered the project done.

*I rendered it at 200 samples on my CPU. I could have used the GPU but didn’t remember until several hours had passed, so I can’t say how much of a difference it would have made. The scene consisted of 237,967 vertices, 196,606 faces, 388,430 triangles, and tens of thousands of grass blades, so it would have taken a while regardless.
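A rough Python sketch of that final setup follows; the environment image path, background strength, and sphere position are placeholders, while the 200-sample count is the one mentioned above.

    import bpy

    scene = bpy.context.scene
    world = scene.world
    world.use_nodes = True
    nodes, links = world.node_tree.nodes, world.node_tree.links

    # Replace the flat background colour with the moonscape environment texture.
    env = nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load("//textures/moonscape.hdr")   # placeholder path
    env.projection = 'EQUIRECTANGULAR'
    background = nodes["Background"]
    background.inputs["Strength"].default_value = 0.4   # placeholder; tones down the blue light
    links.new(env.outputs["Color"], background.inputs["Color"])

    # A UV sphere with a white emission material serves as the light source in Cycles.
    bpy.ops.mesh.primitive_uv_sphere_add(location=(4.0, -4.0, 6.0))  # placeholder position
    lamp = bpy.context.active_object
    glow = bpy.data.materials.new("Moonlight")
    glow.use_nodes = True
    glow.node_tree.nodes.clear()
    emit = glow.node_tree.nodes.new("ShaderNodeEmission")
    out = glow.node_tree.nodes.new("ShaderNodeOutputMaterial")
    glow.node_tree.links.new(emit.outputs["Emission"], out.inputs["Surface"])
    lamp.data.materials.append(glow)

    # Final-quality render settings.
    scene.cycles.samples = 200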

Project Reflections

I must admit that I was a little intimidated when I started this project. The chess set I mentioned earlier was the most complex thing I had made in Blender up to this point, and I was going straight from low-poly modeling to high-poly organic modeling. It had also been months since I had created anything in Blender, period, so I had to spend part of the process getting reacquainted while applying the new skills I was learning. Once I got past the metaball stage, though, I got into a groove and relished every day I was working on this. The most difficult parts were applying the fur (especially on the ears) and creating the eyes, both of which took a lot more time than I expected. Rendering the final image was somewhat stressful, since I wasn’t sure if my laptop was going to overheat from being on all night (it didn’t, thankfully). While I could probably devote paragraphs to picking it apart and analyzing what I could have done better, I’m happy with how the render turned out. The rabbit in particular turned out a lot better than I could have hoped, and I think I did the real thing justice. If not, then I can be satisfied knowing that I gave it a hell of a shot.
