Pixel Practice: More Tools, More Pixels_part1.png

The following images were each created using a canvas of 32×32 pixels, a notable step up from the 8×8 dimensions of the first set:

pattern

This first image is a geometric pattern created using GIMP’s pencil tool. Its main feature and the focus behind its design is the incorporation of straight lines. I started with the X-shape (six lines), modified how they intersected, then worked outward one quadrant at a time.

These four images are the result of an exercise in using layers. The first image is the base layer, and the other three are modifications created in separate layers of the same file. When exporting the images, I toggled the visibility of the layers so that only two were visible at a time, i.e. the base layer and one of the others.

These four images mark a departure from the pencil tool in favor of the paintbrush and airbrush tools. Unlike the pencil’s “hard edge,” these tools have “soft edges.” Their colors are a mix of two or three hues, and areas of color painted with them blend together rather than appear visually distinct. You can understand, then, how useful they are for creating something like fire, the theme of these images.

The hardest part of the first, second, and fourth images was capturing the shape of fire. Even after studying several reference images, it took me a number of attempts before I was satisfied. The left-most image was made using the brush and was the first of these I created. It was also the most frustrating as a result. Beginning with the second image (my favorite), I learned my lesson and used more than one tool. It was not only easier, but the final results looked objectively better. In this image, a candle, I used the airbrush to create the flame and the brush to create the base. That the result ended up looking semi-photo-realistic was a happy accident.

For the third image, I decided to take a more abstract approach. I knew I wanted to create lava, but I wasn’t sure how well I’d actually be able to do that, so I opted instead to use the brush to create streaks over the background to simulate rock, and then I changed the color to create rivulets of lava, thus dividing the rock into “plates.” To make them look more like lava, I added small amounts of yellow in some areas before going over it all with white using the airbrush, which I thought would help simulate a glow.

The last image is somewhat macabre, but at this point, I was out of ideas for fire-related imagery. I used the brush to create a “stake,” used the pencil to create a stick figure “victim” and a rudimentary restraint (I couldn’t add detail without compromising the appearance of the figure), then switched to the airbrush and painted fire over them, starting with red and moving to yellow before finishing with white. The effect, I hope, emulates looking through the flames to the stake and victim within.

Pixel Practice.png

I used to be able to draw fairly well. Next to reading books and writing stories, it was my favorite way to treat the chronic disease of ennui that plagued me throughout my formative years. My ability to draw things that existed in the world around me was average at best, but I was more drawn (sorry) to the myriad of objects and creatures that swirled around in my head. Many, I called or associated with demons, vile things that pulsed and slithered across sheets of notebook paper, their domains bordering the word oceans that were my class notes. By the time they occupied sheets of their own, techniques like highlighting and shading made them seem more real, although their monochromatic skin kept them in the realm of fantasy. I used a pencil exclusively, believing that introducing color to my creations would ruin them somehow. The pencil thus became a subconscious self-imposed limitation that, looking back, I think made each piece I drew better because conveying certain effects and features required more thought and inspired more creativity.

First Foray

It would then surprise no one to know that I have long been fascinated by pixel art and the games that use it. Pixel art has been periodically criticized as being overused, particularly in “indie” games, but that criticism’s about as valuable to developers and the industry as that directed toward the game engine in which a game is made (i.e. not at all, especially if it comes from people who have no desire to make their own games). The specific tools used should not matter if those tools are used well. They should only matter to consumers if and when that use inspires them to experiment with and explore their own ideas.

On that note, I thought GIMP (GNU Image Manipulation Program) was as good an option as any for learning how to make pixel art. It helped that it was already on my computer. The impetus for actually doing so came rather recently in the form of a course taught by Michael Bridges, the instructor whose Blender course gave me some direction in creating the Amami black rabbit. I had always considered pixel art, like 3D modeling a year ago, daunting for various reasons, but with the course, I was confident I could change my perspective in relatively short order.

Initially, two restrictions were placed on created images: they had to be eight pixels in height and width and only use black and white. Then, the concepts of hue, saturation, and value were introduced, and the images progressed to grayscale. The use of color was last.

Rather than a bunch of unrelated icons, this initial set, like most that followed, was considered a “visual story”: each icon represented a word or set of words that together formed a coherent idea or adhered to a theme. This one was “Person Loves Bowling.”

While creating the grayscale set (“Man Walks Dog in Park”), I realized that if the images were placed side by side, they could form a single image as long as their backgrounds were the same color, so I exploited this in the creation of each by making a note of pixel placement and color in progressive images.

The color set (“Astronaut Travels in Rocket to Outer Space”) focused on using lighter shades in certain areas of the image to create the impression of light shining onto those areas. Although employed in the first, second, and fourth images, I think it’s more evident in the first and fourth than it is in the second. In the third image, which depicts the Earth, my intent was to create a gradient sort of effect that depicted the atmosphere’s distinct layers.

Fruits of My Labor

For this final set, I was given no direction other than to make 16 and to use what had been learned so far. I think it’s clear enough, but in case it isn’t, I spent more time on some of these than I did on others. It’s more difficult than it sounds to make 16 8×8 images that resemble real-world objects. I’m not using that as an excuse for them taking two days (although I could), since I became interested in finding out just how much detail I could cram into 64 pixels. That’s how I ended up with the first image in the first row and the first and third images in the third row. For those unable to discern the objects represented in these images, I have courteously listed them below:

First Row

  • Aerial view of a step pyramid
  • Bananas
  • Baby Bottle
  • Cactus

Second Row

  • Candle
  • Diamond
  • Cover of Pink Floyd’s Dark Side of the Moon album
  • GameDev.TV logo

Third Row

  • Hamburger (grayscale)
  • Pencil
  • Final render of that rabbit project I mentioned – Fun fact: it was the first image I made in what would be this set.
  • The Eye of Sauron – I haven’t seen the movies and am only so far as book two of The Fellowship of the Ring as of this post, so I don’t actually know who he is yet or what the significance is of this giant flaming eye. I’ve read The Hobbit, though, so it’s not like I’m not trying to get familiar with Middle-earth.

Fourth Row

  • Screw – I’ve been on a Castlevania kick since the Netflix series debuted, and I started making pixel art just before finishing Castlevania III: Dracula’s Curse. I mention that only because I originally intended to make the Oak Stake from Castlevania II: Simon’s Quest, but I discovered that the sprites in that game are 9 by 9 rather than 8 by 8, so I just used the stake as “inspiration.”
  • Scythe
  • Toothbrush and Toothpaste
  • Umbrella

The Perils of Pentalagus furnessi_part4_final.blend

Creating the Scene (Days 13 and 14)

I saved it for last, but that doesn’t mean it’s anywhere near the least important part of the project. This varies depending on the intended application(s) or purpose(s) of a particular model, of course, but in this case, I thought it was only fair to put as much effort into creating an environment for the rabbit as I did into creating the rabbit itself.

Plane Manipulation

While I didn’t intend to create the rabbit’s actual habitat, I decided the environment needed to consist of something more than a flat, featureless plane. I’m pretty sure I neglected to mention it previously, but during the first attempt at creating the rabbit, I created a plane as a placeholder and included it in a separate layer so I could work on it when the time came without worrying about anything else getting in the way. There were other placeholders as well, but the plane was the only one that didn’t get scrapped when I made my second attempt. Instead, it was slightly modified through proportional editing. I subdivided the plane and then raised and lowered some of the resulting faces to form basic hills and valleys. I then assigned a brown material to it to simulate soil.

Making Grass

Importing and Transparency

Anyone who’s ever been outdoors knows that grass tends not to be uniform in size, shape, or distribution anywhere it grows without external manipulation and maintenance. To reflect that in Blender, I could have found a bunch of grass blades, imported them as reference images, and then modeled them. I also could have created a hair-based particle system and then simply manipulated it. However, I was made aware of a third option: importing images as planes and then using them that way. Enabled through an add-on in Blender’s user preferences, it creates a plane that uses an image on its face. By manipulating the plane, it’s possible to change the shape and appearance of the image in ways an image editor can’t (e.g. “bending” the image to make it a three-dimensional object). It’s also possible to do things in Blender that an image editor can, such as making an image’s background transparent with the Node Editor.
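
For anyone curious about the scripting side, enabling that add-on and importing an image can be sketched in Python. The module and operator names below are what I believe the bundled add-on uses in 2.7x-era Blender, and the file name and directory are placeholders:

```python
import bpy

# Enable the bundled Import Images as Planes add-on.
# (Module name per 2.7x-era Blender; newer versions use bpy.ops.preferences.addon_enable.)
bpy.ops.wm.addon_enable(module="io_import_images_as_planes")

# Import a grass texture as a plane; the file name and directory are placeholders.
bpy.ops.import_image.to_plane(
    files=[{"name": "grass_blade.png"}],
    directory="//textures/",
)
```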

NodeTransparency

The above grass textures were imported as planes with black backgrounds. Each has an alpha channel, an indicator of how transparent any given pixel in an image is when blended with another. This matters because the above node setup (i.e. a diffuse shader and a transparency shader combined with a Mix shader)* uses an image’s alpha channel to make it transparent. Connecting the texture node’s alpha output to the Mix shader’s Factor input restricts the transparency to the background of the image rather than applying it to the whole image. That works here because the images’ backgrounds were made transparent in a separate image editor** prior to importing.
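
As a rough Python (bpy) sketch of that node setup, assuming Cycles and a placeholder image path (node and socket names follow the 2.7x-era API):

```python
import bpy

mat = bpy.data.materials.new(name="GrassBlade")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//textures/grass_blade.png")  # placeholder path

diffuse = nodes.new("ShaderNodeBsdfDiffuse")
transparent = nodes.new("ShaderNodeBsdfTransparent")
mix = nodes.new("ShaderNodeMixShader")
output = nodes.new("ShaderNodeOutputMaterial")

# The image's alpha drives the mix: transparent where alpha is 0 (background),
# diffuse where alpha is 1 (the grass blade itself).
links.new(tex.outputs["Color"], diffuse.inputs["Color"])
links.new(tex.outputs["Alpha"], mix.inputs["Fac"])
links.new(transparent.outputs["BSDF"], mix.inputs[1])  # used when Fac = 0
links.new(diffuse.outputs["BSDF"], mix.inputs[2])      # used when Fac = 1
links.new(mix.outputs["Shader"], output.inputs["Surface"])
```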

If an image doesn’t have an alpha channel, one can be created in Blender with an additional node, specifically the Math node. The Math node has various properties that correspond to different operations and outputs a numerical value. To create transparency, the property needed is either Greater Than or Less Than, and the node receives the image’s color channel as an input. Consider that the colors of pixels in a digital image can each be created by combining different amounts of red, green, and blue. Changing the percentage of any of these changes the resulting color. In Blender, this resulting color can be represented on a numerical scale ranging from zero to one, where zero is 0%, and one is 100% (of that color). The idea here, then, is to apply transparency to pixels with color values greater than or less than a value specified in the Math node. That value depends on the specific image used and therefore requires experimentation to figure out. In this project, I created an alpha channel for the grass blade depicted on the far right in the image below using the Less Than property and a value of .800 (it had a white background):

GrassTextures
The node setup depicted here is a simplification of the earlier one, created by grouping three nodes together into a single node that can be inserted and manipulated like any other. The two “grass plants” were created by duplicating, scaling, and combining the textures into a single object.
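
Continuing the sketch from before, the Math-node variant for an image without an alpha channel might look like this (the 0.8 threshold is the value mentioned above for the white-background blade, and the node variables are the ones defined in the earlier sketch):

```python
# Fake an alpha channel for an image that lacks one (white background assumed).
math = nodes.new("ShaderNodeMath")
math.operation = 'LESS_THAN'        # keeps pixels darker than the threshold
math.inputs[1].default_value = 0.8  # threshold found by experimentation

links.new(tex.outputs["Color"], math.inputs[0])
links.new(math.outputs["Value"], mix.inputs["Fac"])  # replaces the alpha connection
```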

*I didn’t have any specific knowledge of which nodes did what. I just played around with them until I got the effect I wanted. Come to think of it, that could describe anything I do in Blender.

**The images were created by Michael Bridges and included in an asset pack of nature images that I downloaded.

Application and Distribution

As it turns out, I technically did create and manipulate a hair-based particle system to populate the plane with grass. Unlike when I first covered the rabbit in fur, however, I immediately created a number of others, one for each individual texture I wanted to use as well as one for each of the two grass plants. I reduced the number of particles from the default of 1,000 to under 50 before setting their length to one. How did I replace these hairs with the grass textures? In the Render section of a particle system’s properties, there are buttons that determine how particles are rendered, if at all. These options are None, Path, Object, and Group. I didn’t experiment with the others, so I couldn’t tell you exactly what they do (I assume the two that render them deal with particle arrangement), but Object replaces each particle with a specified object. This option has a size component, which I also set to one. Doing this helped ensure that by default, their size would be identical to the size at which they were created.
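
A rough bpy sketch of one such system, assuming the ground plane and an imported grass plane already exist under placeholder names (property names per the 2.7x-era API):

```python
import bpy

ground = bpy.data.objects["Plane"]        # the subdivided ground plane (assumed name)
grass = bpy.data.objects["GrassBlade01"]  # one imported grass plane (assumed name)

ground.modifiers.new(name="GrassBlade01", type='PARTICLE_SYSTEM')
settings = ground.particle_systems[-1].settings

settings.type = 'HAIR'
settings.count = 40              # well under the default 1,000
settings.hair_length = 1.0
settings.render_type = 'OBJECT'  # replace each particle with an object...
settings.dupli_object = grass    # ...namely this plane (2.7x name; later versions use instance_object)
settings.particle_size = 1.0     # the "size component," so duplicates keep their original size
```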

plane_WP
The weight distribution for the initial particle system (grass base).

Aside from this change, the procedure for handling the distribution of grass particles was exactly the same. I created vertex groups and defined them with weight painting. I then adjusted the number of particles and manipulated their children until I achieved an effect with which I was satisfied. This took a number of tests, so to save time, I previewed renders at a low sample rate, something else I may have neglected to mention in earlier entries. The purpose of tests is to get a rough (rather than actual) idea of how the final product will look. Increasing the sample rate increases the quality of the final product but also substantially increases the render time, so it just makes sense to only have to do one high-quality render if quality is a particular consideration.
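
The sample-rate juggling amounts to two Cycles settings; something like this, with the preview value being illustrative:

```python
import bpy

scene = bpy.context.scene
scene.cycles.preview_samples = 16  # quick, noisy previews for test renders (illustrative value)
scene.cycles.samples = 200         # the full quality used for the final render
```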

Sapling and the Final Render (Day 15)

I’d be lying if I said that I intended to mention Sapling when I started writing this entry, given how little of the result is visible in the final render of the scene. However, after maybe ten minutes of thought, I decided to give it at least a passing mention, since it could easily be a part of and have a larger role in future projects. Sapling is another add-on in Blender’s user preferences, and it enables the quick creation of trees. A tree generated in this way is a curve, not a mesh object, so it needs to be converted to one before placing it in a scene that will be rendered. Before that, though, it has several settings that can be adjusted to fit a project’s needs: Animation, Armature, Leaves, Pruning, Branch Growth, Branch Splitting, Branch Radius, and Geometry. Each of these has various other settings that further refine the appearance of the tree.

I used Geometry, Branch Splitting, and Leaves. In the first, I applied the Bevel setting, which made the tree solid, and loaded a preset. These presets correspond to actual trees. I wasn’t going to use one, but I coincidentally stumbled upon a Japanese maple, so I thought it was appropriate. Still in the Geometry settings, I scaled the tree down slightly. In Branch Splitting, I increased the number of levels (recursive branches) from the default two to three. In the Leaves section, I clicked Show Leaves, and leaves appeared at the ends of the branches. Increasing the levels resulted in more branches and therefore more leaves, and I didn’t have to manually adjust the number of either. When this was done, I converted the maple to a mesh object and then applied materials to the trunk, branches, and leaves before moving the entire thing into the layer that held everything else (i.e. the grass-covered plane and the rabbit).
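
The scripted equivalent of that workflow is short. The add-on module and operator names below are my best recollection for 2.7x-era Blender, so treat them as assumptions; the per-tree settings are normally tweaked in the operator panel rather than passed as arguments:

```python
import bpy

# Enable the Sapling add-on (module name assumed for 2.7x-era Blender).
bpy.ops.wm.addon_enable(module="add_curve_sapling")

# Generate a tree with default settings; Bevel, presets, levels, and leaves
# are adjusted in the operator panel after running this.
bpy.ops.curve.tree_add()

# The result is a curve object; convert the active object to a mesh before final placement.
bpy.ops.object.convert(target='MESH')
```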

All that remained at this point was positioning, lighting, and of course, rendering. Well, almost. Not content to simply change the color of the background as I had when I rendered the chess set (which occupies the site’s banner or whatever that is at the time of this writing), I added a skybox. The Amami black rabbit is nocturnal, so I chose another pre-made asset, a moonscape. It’s ironic that it was called that, though, since the way I arranged the scene makes it impossible to see the Moon. Anyway, to apply it, I went into the World tab and changed the “color” of the background to an environment texture. I could specify its type as either equirectangular or a mirror ball. Equirectangular images like the moonscape are latitude-longitude projections that wrap around the whole scene, while mirror ball images capture the surroundings as reflected in a mirrored sphere. The image cloaked the entire scene in a blue light, the intensity of which I adjusted to look more natural. I then scaled the rabbit down before positioning it and the tree in the scene. Test renders followed. Once I found a position I liked, I added a UV sphere to the scene and applied a white emission material to it. This turned it into a light source, which was necessary because the light from Blender’s default lamps behaves differently in Cycles Render than it does in Blender Render. I performed a few more tests before increasing the sample rate and rendering the final image. Almost nineteen hours* later, the render finished, and I considered the project done.

*I rendered it at 200 samples with my CPU. I could have done it with the GPU but didn’t remember until several hours had passed. I’m not able to speculate about how much of an improvement it would have made. The scene consisted of 237,967 vertices, 196,606 faces, 388,430 triangles, and tens of thousands of grass blades, so it would have taken a while, regardless.
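
A sketch of the World setup and the emission “lamp” described above; the image path, strength values, and sphere location are all placeholders:

```python
import bpy

# World background: an equirectangular environment texture instead of a flat color.
world = bpy.context.scene.world
world.use_nodes = True
wnodes, wlinks = world.node_tree.nodes, world.node_tree.links

env = wnodes.new("ShaderNodeTexEnvironment")
env.projection = 'EQUIRECTANGULAR'
env.image = bpy.data.images.load("//textures/moonscape.png")  # placeholder path

background = wnodes["Background"]
background.inputs["Strength"].default_value = 0.5  # tone the blue light down (illustrative)
wlinks.new(env.outputs["Color"], background.inputs["Color"])

# Light source: a UV sphere with a white emission material (Cycles).
bpy.ops.mesh.primitive_uv_sphere_add(location=(0.0, 0.0, 5.0))
lamp = bpy.context.object
mat = bpy.data.materials.new(name="Moonlight")
mat.use_nodes = True
emission = mat.node_tree.nodes.new("ShaderNodeEmission")
emission.inputs["Strength"].default_value = 5.0  # illustrative
out = mat.node_tree.nodes["Material Output"]
mat.node_tree.links.new(emission.outputs["Emission"], out.inputs["Surface"])
lamp.data.materials.append(mat)
```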

Project Reflections

I must admit that I was a little intimidated when I started this project. The chess set I mentioned earlier was the most complex thing I had made in Blender up to this point, and I was going straight from low-poly modeling to high-poly organic modeling. It had also been months since I had created anything in Blender, period, so I had to spend part of the process getting reacquainted while applying the new skills I was learning. Once I got past the metaball stage, though, I got into a groove and relished every day I was working on this. The most difficult parts were applying the fur (especially on the ears) and creating the eyes, both of which took a lot more time than I expected. Rendering the final image was somewhat stressful, since I wasn’t sure if my laptop was going to overheat from being on all night (it didn’t, thankfully). While I could probably devote paragraphs to picking it apart and analyzing what I could have done better, I’m happy with how the render turned out. The rabbit in particular turned out a lot better than I could have hoped, and I think I did the real thing justice. If not, then I can be satisfied knowing that I gave it a hell of a shot.

The Perils of Pentalagus furnessi_part3.blend

Embattled with Ears No Longer (Day 8)

I thought I’d start with this to relieve any anxiety anyone might be feeling as a result of the previous post, which ended with uncertainty regarding what I was going to do about the rabbit’s ears. It wasn’t long after publishing that post before I discovered a solution through child manipulation. For the benefit of anyone who may be reading this without having read that post, I’m referring to the children of the hairs, not human children. Anyway, there are six basic options for manipulating children: Clump, Length, Minimum, Maximum, Parting, and Roughness. Four of these are self-explanatory. Minimum and Maximum refer to the minimum and maximum angles of hairs from root to tip, respectively. Each of these has at least one option that determines the extent to which the manipulation occurs. For example, Clump has an option called Shape that determines, well, the shape of any clumping added, while Roughness can have degrees of uniformity or randomness. Any or all of these only need to be manipulated slightly to have a significant impact on the appearance of fur. Whether that impact is beneficial or not is best determined by experimenting with each as I did.
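
For reference, a few of those child settings as they appear in the Python API; the object and particle-system names are assumptions, and the values are illustrative rather than the ones I used:

```python
import bpy

settings = bpy.data.objects["Rabbit"].particle_systems["Ears"].settings  # assumed names

settings.child_type = 'INTERPOLATED'
settings.clump_factor = 0.4          # Clump: how strongly children pull toward their parent
settings.clump_shape = 0.2           # Clump > Shape
settings.child_length = 0.9          # Length
settings.child_parting_factor = 0.3  # Parting
settings.child_parting_min = 0.0     # Minimum root-to-tip angle
settings.child_parting_max = 20.0    # Maximum root-to-tip angle
settings.roughness_1 = 0.05          # Roughness (uniform)
settings.roughness_2 = 0.02          # Roughness (random)
```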

touched_up_render
I manipulated children on the body, head, and ears, leaving the mouth area alone aside from adding whiskers, which I had forgotten to do earlier.
BDO_wide_shot
I added another camera to the scene and gave it a focal length of 18mm to create the “wide” shot seen here. The result of manipulating the ear fur was more accidental than intentional, but I have no qualms with it now.

The Eyes Have It (Days 9, 10, 11, and 12)

Replacement and Iris Creation

It eventually occurred to me that creating the eyes as I had would make it more difficult to do the work needed to give them a semblance of realism, so I deleted them both after making a note of one’s location and scale in order to save myself time positioning and resizing them. I only made a note of one because I technically only needed one, thanks to the Mirror modifier. This modifier mirrors an object along a specified axis or axes and uses the object’s origin as a reference point by default. Any modifications done to the object are also done on the mirrored object automatically, which is convenient for making something like a pair of eyes. This is true only as long as the modifier isn’t actually applied. Once it is, then the original object and the mirrored version can be edited independently from one another, which could be desirable in some cases, but it wasn’t in this case for me.
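
A minimal sketch of that Mirror-modifier setup, assuming the eye object’s name and an origin on the head’s center line (axis flags per the 2.7x-era API):

```python
import bpy

eye = bpy.data.objects["Eye"]  # assumed name; its origin sits on the head's center line

mirror = eye.modifiers.new(name="Mirror", type='MIRROR')
mirror.use_x = True   # mirror across the X axis (2.7x name; newer versions use use_axis)
mirror.use_y = False
mirror.use_z = False

# Leaving the modifier unapplied keeps both eyes in sync; applying it
# makes the mirrored copy independently editable, which I didn't want here.
```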

iris_alt
The completed “iris”

 

 

To create an iris, I opened a new Blender file and added a Circle mesh to the scene. I then filled it with a triangle fan (in Edit Mode, it resembled a pie or pizza), subdivided it, then increased the Fractal value, which, per the Blender Manual, displaces the vertices in random directions. Here, it created the jagged/folded pattern seen in the image on the left. I then extruded a couple of faces downward, creating the ring-like impressions in the center. I made its alpha (essentially the background) transparent, changed its color, rendered the circle in Blender Render, then saved the final result as a PNG image. The idea was to use it as a texture rather than create it on the eye itself because the latter option would have added too much geometry.
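
The mesh side of that, minus the extrusions, could be sketched like this; the vertex count, number of cuts, and Fractal amount are illustrative:

```python
import bpy

# A filled circle (triangle fan), then a randomized subdivision.
bpy.ops.mesh.primitive_circle_add(vertices=32, radius=1.0, fill_type='TRIFAN')
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.subdivide(number_cuts=3, fractal=0.4)  # Fractal displaces vertices randomly
bpy.ops.object.mode_set(mode='OBJECT')
```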

Combining Shaders with the Node Editor

NodeEditor1

Returning to the eyes themselves, the next step was to assign materials to them. I created three: one called Eye, one called Iris, and one called Pupil. The Eye material was composed of a white Diffuse BSDF shader and a white Glossy BSDF shader. I added these into the Node Editor the same way I would a mesh into Blender’s default scene. As nodes, they could be manipulated to create complex materials with effects that would otherwise have to be reproduced by making and assigning separate materials to the eyes. In this case, by combining the two shaders with a Mix shader, I created a material that was both diffuse and glossy, which I then duplicated and altered to create the Iris and Pupil materials. These two were assigned to groups of faces I selected manually, overwriting the existing material on those parts of the eyes:

NewEyesAlt
I found it unnerving to look at the whole rabbit at this point, so this is an “eye cam” with a focal length of 85mm. Pictured is the placeholder iris material created in the Node Editor.
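
Assigning materials to hand-picked faces boils down to extra material slots plus per-face material indices; a rough sketch with assumed names and placeholder face indices:

```python
import bpy

eye = bpy.data.objects["Eye"]  # assumed name
mesh = eye.data

# Three slots: 0 = Eye (white), 1 = Iris, 2 = Pupil (materials assumed to exist already).
for name in ("Eye", "Iris", "Pupil"):
    mesh.materials.append(bpy.data.materials[name])

# Overwrite the material on hand-picked faces (indices here are placeholders).
iris_faces = {10, 11, 12, 13}
pupil_faces = {14, 15}
for poly in mesh.polygons:
    if poly.index in pupil_faces:
        poly.material_index = 2
    elif poly.index in iris_faces:
        poly.material_index = 1
```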

Iris Replacement

The eyes were coming along, but it was time to give them each the iris I created earlier. Unlike with the placeholder, this was a three-step process. The first step involved adding a texture to the Node Editor and then opening the image to represent that texture. Textures act and can be manipulated like any other node. I connected it to both the diffuse and glossy shaders, which overwrote their color properties. The second step involved opening the image again in the Image Editor. The third and final step involved selecting the parts of the eye mesh to which the iris was assigned and then unwrapping them. Basically, this mapped the mesh to the texture, which then made it visible (the selected parts of the mesh otherwise look black). In the Image Editor, the parts I selected overlaid the iris texture. By scaling inward and outward, I could decide how much or how little of the texture made up the iris material proper:

NewEyesAlt2
As with child manipulation, this result was more due to experimentation than anything else. 
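
The unwrap step behind this is a single operator call once the right faces are selected in Edit Mode; scaling the resulting UVs over the texture then happens in the UV/Image Editor:

```python
import bpy

# With the iris/pupil faces selected in Edit Mode, unwrap them onto the texture.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```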

Adding a Lens

If the rabbit were less realistic, I probably would have left the eyes alone at this point, but since I was aiming for accuracy, they each needed a lens. This ended up simply being the parts of the eye mesh that comprised the iris and pupil, duplicated and separated into their own object. The three materials assigned to this new object were deleted and replaced with a single Glass BSDF shader. When rendered, as I discovered, this shader is not only reflective, but it also magnifies whatever is behind it. The exception to this is when it is directly on top of something else. Then, it looks like this:

cloudy_eyes

The solution was, I thought, relatively simple. First, I created a loop cut on the editable eye and moved it onto the edge bordering the iris. This allowed me to select the entire section and move it backward, without stretching or compressing the nearby parts of the eye, until it was just behind the lens. This magnified the pupil, making the entire visible half of the eye appear black. To solve this, I selected the center-most portion of the front of the lens and then enabled proportional editing* to move part of the lens away from the eye. This took me two days to do correctly because I had to figure out how far to move the eye backward as well as how much of the lens I had to move forward so that the iris and pupil were both visible. I then had to render the eye as an image every time I made an adjustment because it was the only way I could see how reflective it was. Eventually, my inner perfectionist was satisfied, and I rendered the result seen in the featured image. The lighting isn’t the best, but that will be remedied at the project’s end.

*Unselected parts of the mesh are affected by the movement of the selected part(s). The effect is greater on parts closer to the selection than it is on those farther away. 

 

The Perils of Pentalagus furnessi_part2.blend

Fun (?) with Fur (Day 4)

fur_render_initial

With the rabbit more or less fully formed (for the purposes of this project), the next logical step was to give it fur. In Blender, this is achieved by creating a particle system and assigning it to the object. There are many different types of particles, but the obvious choice for a rabbit’s fur is a system of hair particles. My approach to this was the same as it is for everything related to development: get it done first, then worry about efficiency and appearance. Pictured above is therefore the result of a single interpolated* system composed of at least twenty thousand hairs, and I only edited their length. The appearance of the ears led me to christen the rabbit Splotchy the Eldritch Hare. Little did I know at the time how well the name would fit.

*In a particle system, particles can have sub-particles known as children that mimic their appearance and behavior. Instead of creating a system that emits 10,000 particles, for example, you can achieve the same result with a system that emits 1,000 particles with ten children each. Children can either be simple or interpolated. The practical difference here is that simple children can be generated in the space around the rabbit, while interpolated children come only out of the “skin.”
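
That first, single system could be sketched roughly like this; the object name is an assumption, the length is illustrative, and the counts follow the footnote’s example rather than my actual numbers:

```python
import bpy

rabbit = bpy.data.objects["Rabbit"]  # assumed name

rabbit.modifiers.new(name="Fur", type='PARTICLE_SYSTEM')
settings = rabbit.particle_systems[-1].settings

settings.type = 'HAIR'
settings.count = 1000                 # parent hairs
settings.hair_length = 0.05           # illustrative; length was the only thing edited at first
settings.child_type = 'INTERPOLATED'  # children stay on the "skin" instead of floating around it
settings.child_nbr = 10               # children per parent shown in the viewport
settings.rendered_child_count = 20    # children per parent generated at render time
```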

Style, Render, Repeat (Day 5)

The result of Day 4’s work was more or less just to give me an idea of how the fur looked before I did anything to it. That’s what I tell myself now, anyway. The first actual order of business was to assign materials to both the skin and fur so that Splotchy wouldn’t be gray when rendered. I gave the former a Diffuse BSDF (bidirectional scattering distribution function) shader, and the latter a Hair BSDF shader. These affect how the materials look when rendered in Cycles as opposed to in Blender’s internal renderer. Without going into too much detail, Cycles is better for photorealistic models and scenes, the diffuse shader helps to determine the skin’s color, and the hair shader helps the fur look like, well, fur. I made the skin a light pink, and I made the fur black. Initially, the rabbit looked like this:

fur_render_color1

There were two main problems. I saw and addressed one almost immediately. The other went unnoticed for several hours despite it staring me in the face, but more on that later. It was at this point in the process when I started using Particle Edit mode for the first time. Like sculpting, particle editing provides several different brush options. I used three: Comb, Add, and Cut. Their functions, I think, are more or less self-explanatory, but the mode itself is not because it limits both what can be edited and how it can be edited. First, the view when using the mode is not the particles themselves but general representations of them (see below). This means that it’s not possible to see exactly how actions performed on them affect their appearance without switching into another mode.

BDOParticleEditShot

Second, once in Particle Edit mode, it’s no longer possible to edit the number of particles emitted without switching back to Object Mode; only the number of children each has can be changed, since the Add brush performs a similar function. If any of the brushes has been used, then the only way to edit that number at all is to select an option in Particle Edit mode called “Free Edit,” which undoes everything done in the mode. These limitations made progress more incremental than it had been up to this point because achieving something that looked at least passable required a lot of minor changes and toggling between modes. I was also still editing a single particle system for the entire rabbit, so I ended up creating more problems than I solved:

fur_render_color2
The dark patches indicate areas where the fur has not been combed. It’s still in the default position.
fur_render_color3
The area on the back is related to the second problem I mentioned (no, it’s not leprosy).
fur_render_color4
Splotchy’s covered in far too many hairs, but he’s not sentient enough to mind.

The Joy of Weight Painting (Day 6)

Eventually, the proverbial clouds parted, and I saw sense. A single particle system for an animal’s fur was impractical, somewhat unwieldy, and definitely excessive. I could instead use multiple particle systems, one for each part of the rabbit. They could be manipulated independently from one another and cut down on the total number of hairs, so it was a win-win. All I had to do was create vertex groups that I could then assign and define by painting different areas of the model. By default, the entire mesh has a weight of zero and a blue color. Weight is on a scale from zero to one and on a spectrum from dark blue to bright red. In the case of particles, it determines the percentage of them that will appear on a vertex group. I started with the body, which would have the most fur, then moved to the head and the mouth area (see below).  In so doing, I also fixed that second problem with the initial render. I most likely didn’t assign the skin material to the skin after creating it, so the material it had was that of the fur. The distribution of the fur in the second-to-last image in the previous section was too thin on the rabbit’s back, so the skin was visible. I just didn’t notice because it wasn’t the light pink I expected it to be.

BDOWeightPaintingBody
That patchiness on what I guess would be the shoulder bothered me more than it should have.
BDOWPB2
I didn’t proceed to the next vertex group until I was satisfied with the one I was working on.
BDOWPBH
I think it looks much better than the initial attempt, but that’s not saying much.
BDOWPB3
Between this and the final image (i.e. the featured one), I spontaneously decided to make the mouth fur lighter.
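
The vertex-group side of what’s shown above can be sketched in Python as well; weight painting does the same thing interactively, and the names and vertex selection below are placeholders:

```python
import bpy

rabbit = bpy.data.objects["Rabbit"]  # assumed name

# A vertex group defines where one particle system's fur appears.
body = rabbit.vertex_groups.new(name="Body")
body_verts = [v.index for v in rabbit.data.vertices if v.co.z < 0.5]  # placeholder selection
body.add(body_verts, 1.0, 'REPLACE')  # weight 1.0 = full density, 0.0 = none

# Point the corresponding particle system's density at the group.
psys = rabbit.particle_systems["BodyFur"]  # assumed name
psys.vertex_group_density = "Body"
```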

Embattled with Ears Again (Day 7)

I didn’t plan this, I swear. When I painted the ears in Weight Paint mode and then styled the fur in Particle Edit mode, I kept getting bald patches on either the tops or backs of both no matter what adjustments I made (I apologize for the lack of visual aids), so I manually selected their vertices, made that selection a vertex group, then assigned it a weight of one without actually painting anything. The result, after combing the fur, was what’s in the featured image. It’s the best I was able to achieve after a couple hours of work, and I thought it was good enough for the time being. It may be good enough, period, depending on whether I can make the ears look better without remodeling them. That would be a last resort given how long they took to make compared to everything else thus far, but either way, I’ll have made a decision by the time the next entry in this series is written.

The Perils of Pentalagus furnessi_part1.blend

Conception

Every Blender project begins when two decisions are made. The first concerns what to model, and the second concerns how to begin modeling it. In this case, both of these decisions were made for me as a student of Ben Tristem’s and Michael Bridges’s Complete Blender Creator Course. The model was a rabbit, serving as an introduction to organic modeling, and the starting method involved combining metaballs. These fluid-like shapes can be manipulated like other objects in Blender but combine with one another similarly to drops of water. The result is not necessarily the neatest in terms of the number of vertices, edges, and faces (generally, the goal is to reduce these as much as possible to make the object more efficient, on the same principle as image compression), but it is a quick and effective starting point for something like this.

This “something” for me was Pentalagus furnessi, or the Amami black rabbit, which is only found on two small islands in Japan. I don’t recall how exactly I stumbled upon it, but it’s something I never expected to see in a search result, and it certainly didn’t fit my mental image of how a rabbit looks. It did, however, pique my creative curiosity, and I relished the modeling challenge it would present. I had no idea how long it would take or how well I would do, but with Blender, that’s part of the fun.
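
A toy sketch of the metaball starting point described above; the positions and scale are placeholders, and the selection calls are version-specific, as noted in the comments:

```python
import bpy

# Two overlapping metaballs blend into one blob, a quick body-and-head starting point.
bpy.ops.object.metaball_add(type='BALL', location=(0.0, 0.0, 0.0))
body = bpy.context.object
body.scale = (1.6, 1.0, 1.0)  # stretch into a rough body

bpy.ops.object.metaball_add(type='BALL', location=(1.8, 0.0, 0.4))  # head

# Once the rough shape is right, make the first ("basis") metaball active and selected,
# then convert it to a mesh so sculpting and modifiers become available.
body.select = True                       # 2.7x; newer versions use body.select_set(True)
bpy.context.scene.objects.active = body  # 2.7x; newer versions use view_layer.objects.active
bpy.ops.object.convert(target='MESH')
```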

First Attempt (Day 1)

pentalagus_furnessi_large

The second decision made at the start of any Blender project is influenced by two auxiliary considerations: the intent behind the model and, if applicable, which reference material(s) to use. Intent determines whether and to what extent the model is a realistic depiction or a caricature of an object or organism, and reference material helps ensure that the model is as accurate as desired or needed. I wanted my rabbit to be as realistic as my skills could make it, so I chose the above reference image. Normally, I would import the image into Blender. Doing this helps with object positioning, shape, and scale. For whatever reason, when I started combining and manipulating metaballs to create a rough head and body, I didn’t import the image. I instead had the image open in a browser tab and alternated between it and Blender. This was the result:

OGMetaballBunny

For what it was, it wasn’t bad, but it also wasn’t what I wanted. Since I had already converted the metaballs into a single mesh object (a mesh object is anything with faces, edges, and vertices) to perform more complex modifications, it would have been more difficult and time-consuming to fix the issues I had with the model than it was to simply scrap it and try again.

Second Attempt (Day 2)

After I imported the reference image, I achieved the below result within a few minutes (the black circles identify individual metaballs because they hadn’t been converted into a mesh yet):

DoOverMetaballBunnypng

I proceeded to use a sculpting brush to add detail to the rabbit’s head. Using my imported reference image and a few others, I approximated the size and positions of the nose and mouth. It took several methodical attempts to define their shapes. Ensuring they were visible was easy. Ensuring that they didn’t appear forcefully chiseled into the rabbit’s flesh was not. When I was satisfied, I used the same brush to bore holes into the sides of the rabbit’s head. These were filled with appropriately scaled spheres that served as eyes. I applied a Boolean modifier to each and performed the difference operation (one of three the modifier allows), which deleted the portions of the spheres that were inside the head and left only what was visible from the outside.
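
A sketch of the Boolean-difference step on one eye sphere; the object names are assumptions, and the modifier is applied afterward from the modifier panel:

```python
import bpy

head = bpy.data.objects["Head"]      # assumed name
eye = bpy.data.objects["EyeSphere"]  # assumed name

boolean = eye.modifiers.new(name="TrimEye", type='BOOLEAN')
boolean.operation = 'DIFFERENCE'  # the other two operations are UNION and INTERSECT
boolean.object = head             # subtract the head's volume, leaving only the exposed part of the sphere
```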

Embattled with Ears (Day 3)

At Michael’s suggestion, I modeled the ears as separate objects rather than attempting to sculpt them. I chose a cube as a starting point and ended up with a passable ear shape after a lot of scaling, moving, deletion, addition, merging, and subdivision. I then duplicated it to create the other ear. This other ear got deleted (and not for the first time) when I suddenly decided to improve the original. When the needs and guidelines I’m trying to meet are my own, modeling is a somewhat spontaneous process. It’s incredibly easy to lose track of time and to go beyond the original scope of the model because of how quickly and easily changes can be made, so it helps to know when to leave well enough alone, even temporarily. For me, that point was after I inset the outer face slightly and extruded it inward. I beveled the outermost edges of the ear, and then I scaled the inset face down, which created the pattern depicted in the picture below. Finally, I duplicated and rotated it before applying a Boolean modifier to both and performing the same operation I had performed on the spheres.

BunnyEar