The Thing

We talked to Image Engine, the main vendor for visual effects on The Thing, which worked on both its horrifically beautiful creatures and its environments.

Image Engine provides world-class visual effects for feature films. The company is based in Vancouver, BC and was formed back in 1995 to provide high-end visual effects for television. After accumulating numerous Emmy, Gemini and Visual Effects Society nominations and awards, Image Engine went on to seize the growing opportunities in feature film visual effects work. Its first feature film project was “Slither”, and since then the company has worked on high-profile productions including the Academy Award®-nominated “District 9”; “2012”; “The Twilight Saga: Eclipse/Breaking Dawn Part 1”; “Immortals”; “The Thing” and “Battleship”.

Image Engine is considered to be highly specialized in creature/character work and digital environments. The company is currently working on “R.I.P.D.” (Universal Pictures, 2013) and Neill Blomkamp’s upcoming second feature “Elysium” (Media Rights Capital and Sony Pictures Entertainment, 2013), amongst others.

Image Engine started using ZBrush many years ago, back when the company was working on TV projects including Stargate SG-1. While it was used on many of its first feature film projects, including Slither, Mr. Magorium's Wonder Emporium and The Incredible Hulk, District 9 was the first major show to fully exploit ZBrush's potential.

We talked with Marco Menco, 3D Creature Artist.

The Thing was awarded to Image Engine partly on the strength of District 9, which proved that the company could provide creature animation of the highest quality. Also, Visual Effects Executive Producer Steve Garrad had previously worked with Jennifer Bell (Universal) on Hellboy 2 at Double Negative.

Image Engine provided over 500 visual effects shots for The Thing. Jesper Kjolsrud helmed the project as the production Visual Effects Supervisor. The team worked closely with Director Matthijs van Heijningen Jr., Producers Mark Abraham and Eric Newman, Visual Effects Producer Petra Holtorf and Universal Pictures to realize a new vision for the 2011 film.

Image Engine's contribution spanned over a year, from pre-planning and on-set supervision through to post-production, with a crew of over 100. The majority of the work involved creating computer-generated creature transformations and digital environments. In all, there were 167 creature shots that were 100% computer generated.

Image Engine was the main vendor for the visual effects on The Thing project. We worked on both creatures and environments, building models that varied from characters, props and vehicles to vast set extensions. We developed the character transformations including fully computer generated creatures, with effects like blood and slime. We also created full CG sets with snow and fire.

Of course the original The Thing was a huge inspiration during the creation of the creatures. We approached this film with the intent to mesmerize the audience, holding people's attention as the creatures moved across the screen. As Director Matthijs van Heijningen Jr. would often say, the creatures had to look "horrifically beautiful."

We used ZBrush heavily during the concepting stage as well as when sculpting details. It was particularly useful because we very often had to change the design of the creatures in order to make them work in the shots. We started with digital scans of clay maquettes from ADI (Amalgamated Dynamics), which gave us a good starting point and great inspiration to get going. ADI did an amazing job on the concept sculptures.

Initially we were supposed to enhance the practical effects with CGI; however, in the end most of the creatures were entirely computer generated. This was partly due to how efficiently designs and animation could be changed in CG.

The initial concepting that we did revolved around the transformations and the many limbs. We used ZBrush to create as many variations as we could for the stages of the transformations, and also for the many possible looks the limbs could have as they grew out of the human parts.

It was incredibly fast to sculpt the anatomy and the skin details in ZBrush on some simple geometry and then immediately render them over the plates for concepting. Sometimes we would even simply use MatCaps without needing to render the sculpts. The fast turnaround between sculpting and rendering allowed us, the artists, to keep the ideas coming. It gave VFX Supervisor Jesper Kjolsrud and Director Matthijs van Heijningen Jr. a great selection of concepts to work with.

After deciding to replace most of the practical effects with CG, the client started to request many more variations on the creature concepts. So ZBrush was, again, the best tool for the job.

We constantly used Project All when we had to change our geometries. It let us adapt to the new concepts while keeping all the sculpting we had previously done. We didn't have DynaMesh during concepting, so we used ReMesh All a lot, in combination with Project All.

ZSpheres were used to quickly create and place things like tentacles and fangs.

What really stood out, in my opinion, is how amazingly responsive the brushes are in the sculpting phase and how versatile ZBrush proved in general whenever we had to solve problems.

Another thing we would use a lot was ZBrush's Turntable feature. Every time we had a new sculpt we would render a turntable of it and send it to the client for immediate feedback. It was quite similar to having a clay maquette on hand. The client was able to look at the sculpt from every angle and point at things he wanted to change.

We would use the concept sculpt as a base for retopology in combination with the scans. As I said before, we used Project All to retain volumes and details when we had new geometries following a new concept. Then in general there was a lot of back and forth between Maya and ZBrush when modeling.

We used SpotLight to lay out reference images on the screen while sculpting. We also used it to give the models a fast PolyPaint pass before texturing. This was mainly done using ZAppLink and Photoshop.

Once we had models and textures, we would bake 32-bit displacement maps using Multi Map Exporter and then render the creatures with 3Delight.
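On the render side, the hookup can be pictured with a short, hypothetical Maya sketch (the node names and path below are invented; this is not Image Engine's pipeline code): the baked EXR feeds Maya's standard displacement connection, which a RenderMan-compliant renderer like 3Delight can pick up.

```python
# A minimal sketch: wire a baked 32-bit displacement map into a Maya
# shading group. Names and path are hypothetical.
import maya.cmds as cmds

disp_file = cmds.shadingNode('file', asTexture=True, name='edvard_dispMap')
cmds.setAttr(disp_file + '.fileTextureName',
             '/jobs/theThing/maps/edvard_disp.exr', type='string')

disp_shader = cmds.shadingNode('displacementShader', asShader=True)
# 32-bit float maps are zero-centered, so no midpoint offset is applied.
cmds.connectAttr(disp_file + '.outAlpha', disp_shader + '.displacement')
cmds.connectAttr(disp_shader + '.displacement',
                 'edvard_SG.displacementShader')  # existing shading group
```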

We sculpted everything we could using ZBrush.

The main volumes were poly-modeled, but everything from the mid-frequency to the high-frequency details was sculpted in ZBrush. Also, all the blend shapes and the animated displacement maps were done in ZBrush.

I personally sculpted the EdvardThing and Edvard/AdamThing creatures and their blend shapes. I used the Layer system and worked through the shapes using the subdivision levels. This allowed me to rapidly export all my shapes at the first subdivision level and then bake every layer into maps that we would animate using the rig and the shader. ZBrush was essential.

The standout features were definitely Layers and Project All. All the work done on the faces and the muscle deformations really benefited from being able to sculpt and store the blend shapes using ZBrush's brushes.

If I had done all that work in Maya it would have taken three times as long, and it would also have been much more frustrating. Using ZBrush made the work artistic. I was able to move vertices around with the Move Topological brush, and also the Nudge brush, when I was sculpting blend shapes. Then I could subdivide those shapes and sculpt the corresponding details, all in the same application. This way, when animating the facial expressions, a blend shape's value was always the same as the magnitude of its displacement maps, because they came from the same sculpt.

Project All was used most often when we had to transfer our sculpted details from one geometry to another. We also used it initially to transfer the scanned data from the maquettes onto our geometries. But my favorite way to use it was while sculpting the blend shapes.

One example: very often we had to add or remove limbs. We also had to increase the polygon density of the Edvard/Adam faces when the client decided to go full CG. Or, as with the Jonas Thing, we had to pierce the skin of his face and add holes to the topology.

In every one of these cases we initially had a concept model which, after a number of variations, the client approved. Then, every time we made a geometry change, we were able to re-project the details from the concept sculpt onto the new geometry and, thanks to Project All, use those details as a start for the final ones.

We used ZBrush for our texturing as well, alongside Photoshop. Our texture artists would start the color map by PolyPainting, either using SpotLight or simply "hand" painting in ZBrush. They would then export that map as a base color. In Photoshop they would use high-definition images to paint in 2D, and then go back to ZBrush to fix the texture seams or to add new layers of detail painting using ZAppLink.

ZAppLink was actually the main tool for the job. We also used straight PolyPaint to paint masks directly in 3D.

SpotLight was really cool for laying down our base colors. We used it to project photos onto the models in 3D and get fast feedback on how the final map might look. Sometimes, while sculpting bump maps, we would load images over the canvas with SpotLight and place them around the edges to use as reference.

I'd say that we spent 80% of our time working in ZBrush. Just thinking about our concepts, all the variations, then the sculpting, the blend shapes and the painting, the turntables... it was a lot of time! In my opinion, the real benefit of spending our time in ZBrush was the fast turnaround.

I also want to add the pleasure of doing the work to the list of benefits. It was simply more enjoyable, more artistic, to sculpt the creatures. We could really focus on making them look terrifying, and as an artist I couldn't ask for more.

For the facial expressions, as I mentioned before, I found Project All to be extremely useful, for example to re-project the volumes onto the landmarks of the Edvard/Adam faces. I usually approach blend shape sculpting by creating a layer, then moving the bigger volumes, and then subdividing and adding details.

So when I moved the bigger volumes, let's say when creating the shape derived from the flexing of the risorius muscle, I would make my brush radius as big as the falloff of the skin movement. Then I would pull the skin in the direction of the muscle, and I would usually run into problems such as gaining or losing volume in areas where that should not happen, where the skin should just slide over the bones.

So I would duplicate the head in its neutral pose and store it as a SubTool. For the more bony areas, such as the landmarks of a face (the cheek, brow and jaw bones), I would use Project All to re-project the volumes from the duplicate head. This process sounds more convoluted written down than it is in practice; it's actually really fast. In fact, it was an immediate success: the resulting effect was a realistic sliding of the skin over the bones.

The way I sculpted the shapes in ZBrush was the same way I would do it in Maya. The only (huge) difference was that in ZBrush it was ten times faster, because I could use its brushes. The vertex movement at the lowest subdivision was already good enough to be used as a blend shape. So the only thing I had to do was export every single layer at its lowest subdivision as an OBJ to then be used in Maya.
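As a rough illustration of that hand-off (this is not the studio's actual tool; the paths and node names below are hypothetical), a few lines of Maya Python could import each exported layer OBJ and stack them all onto one blendShape node:

```python
# A minimal sketch, assuming each ZBrush layer was exported at its lowest
# subdivision as an OBJ sharing the neutral head's topology.
import os
import maya.cmds as cmds

SHAPE_DIR = '/jobs/theThing/edvard/blendshapes'  # hypothetical path
NEUTRAL = 'edvard_head_neutral'                  # hypothetical mesh name

targets = []
for obj in sorted(os.listdir(SHAPE_DIR)):
    if not obj.endswith('.obj'):
        continue
    # Import the OBJ and keep track of the nodes it creates.
    nodes = cmds.file(os.path.join(SHAPE_DIR, obj), i=True,
                      type='OBJ', returnNewNodes=True)
    mesh = cmds.ls(nodes, type='transform')[0]
    targets.append(cmds.rename(mesh, os.path.splitext(obj)[0]))

# One blendShape node holding every sculpted layer as a target.
cmds.blendShape(targets + [NEUTRAL], name='edvard_faceShapes')
```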

The tricky part was being able to use the other subdivisions as displacement maps. We initially tried creating a displacement map for each blend shape, but it was hard to manage. So we chose to group the details into fewer maps: three. We had one displacement map with all the details coming from layers containing shapes with positive motion, shapes that would move in one direction (up, for instance). Another map was for the opposite direction, the negative motion. And finally there was a third map containing the details coming from all the remaining shapes.

This way, we were sure to never have to deal with double displacement because if, for example, the upper lip was going up, then it would trigger the map with positive motion. But the same lip would never go up and down at the same time, so the details would only appear where they would naturally occur.
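A hedged sketch of that grouping logic (the axis choice, tolerance and function name are assumptions for illustration, not the production code): classify each layer by the dominant direction its vertices move, then bake each group into its own map.

```python
# Sketch only: sort sculpted layers into the three displacement groups
# described above. Axis and tolerance are illustrative assumptions.
import numpy as np

def classify_layer(neutral_pts, shape_pts, axis=1, tol=1e-4):
    """Return 'positive', 'negative' or 'remaining' for one layer.

    neutral_pts, shape_pts: (N, 3) arrays of matching vertex positions.
    axis: world axis used to judge the motion direction (1 = up).
    """
    deltas = np.asarray(shape_pts) - np.asarray(neutral_pts)
    moved = deltas[np.linalg.norm(deltas, axis=1) > tol]
    if moved.size == 0:
        return 'remaining'
    mean_motion = moved[:, axis].mean()
    if mean_motion > tol:
        return 'positive'    # e.g. the upper lip going up
    if mean_motion < -tol:
        return 'negative'    # shapes moving the opposite way
    return 'remaining'       # everything else goes in the third map
```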

We revealed those details using masks multiplied over the displacement opacity; there were as many masks as there were shapes, and they were much lighter than the 8k 32-bit displacement maps. We then had our R&D team create a tool that would connect the blend shapes to the shader, so animators would animate the shapes and their values would drive the opacity of the masks. This naturally revealed the details coming from the displacement maps. All the masks were generated automatically by a script (again thanks to R&D) that compared the models in their neutral poses against all the shapes. Every difference in vertex position between the neutral pose and a shape was automatically painted into the mask.
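The core of that setup can be sketched in a few lines (a simplified stand-in with hypothetical names; the real tool also baked the per-vertex values out to UV-space textures, which is omitted here): any vertex that moves between the neutral pose and a shape is painted white in that shape's mask, and the blendShape weight is wired straight into the mask's opacity.

```python
# A simplified stand-in for the R&D tool described above; all node and
# attribute names are hypothetical, and the UV-space bake is omitted.
import numpy as np
import maya.cmds as cmds

def shape_mask(neutral_pts, shape_pts, tol=1e-4):
    """Per-vertex mask: 1.0 wherever the shape moved a vertex, else 0.0."""
    deltas = np.linalg.norm(np.asarray(shape_pts) - np.asarray(neutral_pts),
                            axis=1)
    return (deltas > tol).astype(np.float32)

# Drive each mask's opacity from its animated blendShape weight, so the
# sculpted details appear exactly when the face moves.
cmds.connectAttr('edvard_faceShapes.browUp',  # blendShape weight
                 'browUp_mask.alphaGain')     # mask file node's opacity
```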

The only remaining thing for me to do was to make an action in Photoshop that would blur the edges of those masks. They would then be ready to be used.
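An equivalent batch step, sketched here with Pillow rather than the Photoshop action described above (the path and blur radius are assumptions), would soften every mask the same way:

```python
# Soften the hard 0/1 mask edges so the displacement details fade in
# smoothly. Path and radius are illustrative assumptions.
import glob
from PIL import Image, ImageFilter

for path in glob.glob('/jobs/theThing/masks/*.tif'):
    mask = Image.open(path).convert('L')
    mask.filter(ImageFilter.GaussianBlur(radius=4)).save(path)
```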

The benefit was a lighter approach, given the relatively high number of shapes we had to deal with and the weight of our displacement maps. Also, visually the details occurred at the same time the face moved, because they came from the same sculpted layers. A value of 0.75 on the blend shape would be the same on the layer, and thus the same on the displacement of the details.

Turntables in ZBrush were particularly useful throughout the whole creative process. We used them in our dailies sessions, to review the models and to show material to the clients. This again was one of those things that sped up the feedback loop. The fact that we could sculpt and then, a couple of minutes later, have a turntable from the same software was huge. It really helped to keep the rhythm up and not be interrupted by technicalities.

TimeLine was also used when presenting variations of concepts. I personally also used it to present the blend shapes by animating the layers. It was certainly faster than having to set up the shader with all the connections from the shapes to the maps. It has a lot of potential, and it already works great for showing different views or poses.