Kerry Logistics’ Video Wall

This project is not classified as a commercial. It was meant to be privately displayed inside the reception area of Kerry Logistics. It’s a couple of metres high, and six times that across. Pretty big, if you ask me, though I’m sure there’s always some guy who can pee higher. There always is.

Anyway, that’s the reason that I can’t show, at least in its final form, what it was. But I do have pictures — pretty pictures — of the stuff that I got to contribute.

Mocap data, to geo in Maya, then LW for point rendering.
LW nodal displacements with the help of Denis Pontonnier’s tools.

Particles made easy by LW’s nodal displacement fancy-footwork.
Additive particle morphing in LW using nodal displacements.
RealFlow HYBRIDO sim. LW for instance rendering.

Mass transformations using Denis Pontonnier’s toolkit and LW instancing.
Nixed scene. LW displacements, scene and render.
Motion graphic shenanigans in LW using 2-point polygons and instancing.

The keyword in this project was ‘repetition’. Now, a guy in the studio kept using the word ‘iterative process’. But no: this wasn’t iteration. Iteration means:

…repetition of a mathematical or computational procedure applied to the result of a previous application, typically as a means of obtaining successively closer approximations to the solution of a problem.

The operative phrase is ‘successively closer approximations to the solution of a problem’. If iteration actually existed in the so-called creative industry (and my experience says it’s more of an anomaly than a rule), it would follow that some end goal could be discerned at the beginning. This was not the case here. It began with an idea, which was then killed, reincarnated into a new form, killed again, rose from the ashes, ad nauseam. Indeed, nausea is actually a good word for it. Isn’t it better just to say repetition, to be truthful? Instead, we are encouraged to think it’s iterative, so as to regard each ‘iteration’ not as the pointless exercise it actually was.

Here, I also encountered the novel concept of ‘not second-guessing’ the client. What this actually meant was that ‘the client doesn’t know what they want, but we do.’ Basically, a Jedi mind trick. The hilarity of it all is that we’re not Jedi. No indeedy. Hence, the bulk of the setbacks were clients totally rejecting the concept and, despite the studio’s assumptive airs, us taking it on the chin — what choice did we really have? They were the ones with the money — and re-doing it again and again and again. Joke’s on us. Actually, joke’s on me, because I was at the bottom of that food chain. As I say, things like money and wealth may be too dense to trickle down. But work, overtime, and frustration — those things don’t sit at the top for too long.

At the end of the day the repetition stopped. Where we got to is for Kerry Logistics employees and guests to see. Where we had come from is, as they say, history.

 

Commercial: Tip Top Popsicle Smoothie

This one was a long, drawn-out schedule. To accommodate its numerically-challenged budget, the schedule was extended to several months so that we could fit other jobs in between this one. Near the end, though, we picked it up again in earnest and, ironically, finished at a quick tempo. Over-extended jobs like these invariably turn out to be rush jobs in the end. This wasn’t a case of procrastination but of limiting the hours that could be spent on it. Consequently, we were set to do other jobs, and in the end it could be argued that we spent more time on it due to the inefficiency of going back and forth between different things.

My contribution to this ad: I helped think out the render strategy, helped with the render layer setups, modelled the base of the blender (yay!), troubleshot character models as they related to the rig, did the white mist effect using Maya fluids, did a breaking-ice simulation that was replaced by another simulation, helped shade a few of the elements, and assisted in the initial comping.

The render strategy was thought out early; we saw the character design as final art for a poster, and Terry and I worked out the booby-traps in making this character work in 3d. It all boiled down to the refractive properties of the characters, especially around the cavities (eg eyes, mouth). If modelled literally, the character would look wrong (and slightly horrific) from many angles, since the glass would refract the dark cavities.

The solution was to render the glass as though the smoothie content was unbroken by the mouth or eye cavities. This way, the refraction was as seamless as possible.

main_v010_BTY_blenderGlass.0039

Then it was a simple matter of comping in the mouth cavity, eyes, and the rest of the limbs using masks.

main_v010_BTY_blenderBodyParts.0039

 

Apart from my usual responsibilities maintaining scene integrity throughout the whole project, one of my other main contributions was rigging the characters, for which I used AdvancedSkeleton. The rig went through a lot of iterations over its life, mainly to accommodate the render elements that changed as we moved forward. Towards the end, collision objects were added into the rig to affect the ice that broke apart around the characters.

Because these two characters were identical in many respects, yet had considerable differences too, I opted to create a generic rig which featured elements from both characters. The most notable feature is the fruit sitting on top of each character. A simple boolean switch handled the change between the ‘pink’ and ‘purple’ characters.

In keeping with the Sandline workflow, the rigs had to be uniquely named, so I wrote a simple variant export. When a rig change had to be made, the generic rig was modified and the variant export was run again.
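The variant-export idea can be sketched in plain Python. This is a hypothetical illustration, not the actual tool: the node names and the ‘isPurple’ switch attribute are made up, and the real export would of course operate on Maya nodes rather than dictionaries.

```python
# Hypothetical sketch: one generic rig is the single source of truth,
# and uniquely-named per-character variants are generated from it.
# Node names and the 'isPurple' switch are illustrative only.

def export_variant(generic_nodes, character, switch_value):
    """Prefix generic rig nodes with a unique character namespace and
    record the boolean switch that selects the character's features."""
    variant = {f"{character}:{name}": attrs.copy()
               for name, attrs in generic_nodes.items()}
    # In this sketch, the switch lives on the rig's top node.
    top = f"{character}:rig_top"
    if top in variant:
        variant[top]["isPurple"] = switch_value
    return variant

generic = {
    "rig_top": {"isPurple": False},
    "fruit_GRP": {},
    "body_GEO": {},
}

pink = export_variant(generic, "PINK", False)
purple = export_variant(generic, "PURPLE", True)
```

The point of the design is that a fix only ever happens in the generic rig; re-running the export regenerates every uniquely-named variant.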

One of the workflow strategies we developed on the Mother Earth Pingos ad was rigging low-resolution meshes into lattices in the rig itself, whose lattice points would be exported as vertex animation. Then we used the same lattice setup in the models file and put the high-resolution meshes there. This is the approach we used for controlling the high-resolution fruit meshes on top of the character (though, on reflection, a lower resolution would have sufficed).

2016-02-10 12_40_21-Autodesk Maya 2015_ r__3d_2015_07_TipTop_Popsicle_Smoothie_3d_scenes_rigs_PINK_r

 

 

Commercial: China Southern Air

china_southern_air_thumb_1
Click to play video.

I worked on this with Dominic Taylor, who set up the comps and cameras and worked with the clients on the direction. I mainly did the flipboard effect.

This was quite a challenging effect to do in Maya. The main driver of the rotations was expressions; the expressions took their values from samples of textures generated in AE. The main difficulty lay in the fact that it was slow, and it needed to be baked out before it could be sent to the farm, because setAttr was the mandatory method of applying the motions.
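The expression logic can be sketched outside Maya like this. Everything here is an assumption made for illustration: the real scene sampled actual texture pixels, and the 180-degree flip step and flip counts are stand-ins.

```python
# Minimal sketch of the flipboard expression: each flap's rotation is
# driven by a value sampled from a control texture (a plain list here
# stands in for the AE-generated image).

def flap_rotation(sample, flips=3):
    """Map a 0..1 texture sample to a rotation in degrees.
    A sample of 1.0 means the flap has completed all its flips."""
    return sample * flips * 180.0

def bake(samples_per_frame, flips=3):
    """Pre-compute (frame, rotation) pairs -- the equivalent of baking
    the expression to keyframes before sending the scene to the farm."""
    return [(frame, flap_rotation(s, flips))
            for frame, s in enumerate(samples_per_frame)]

baked = bake([0.0, 0.25, 0.5, 1.0])
```

Baking up front trades scene interactivity for farm safety: once the values are keyframed, the render nodes never need to evaluate the slow expression.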

The flipboard effect was not simply a flip from image A to image B; it goes through a series of photographs before resolving into the final image, and designing the mechanics of the scene took some tries before getting it right.

In retrospect, LW’s nodal displacement in conjunction with Denis Pontonnier’s Part Move nodes is a superior method. Where it took me about a week to get all the shots set up in Maya, I think I could have done the same in LW in less than half the time.

 

Commercial: Paper Plus

Ah. Paper. Lots of paper. I contributed some scenes to this ad when I worked as a freelancer. This was a mixed bag, indeed: some scenes were in LightWave, some in Maya, and it was rendered in Maxwell Render. As a freelancer, I worked as a TD, too, helping troubleshoot Maya and LW scenes together. I ran cloth sims in LW and helped render using Maxwell, though I hadn’t used it before.

I miss the days of working as a freelancer, when I knew that any trouble would only last for the duration of the job’s schedule.

Commercial: Spark Light Box

lightbox_thumb_1
This was a joint effort by Dan McKay and me, with him driving the project from After Effects. My part lay mostly in populating the screens with footage. But there was a particular problem: the clients did not/could not sign off on all the footage. Knowing that in advance, I thought of making the footage generation procedural, so that Dan could render his bits and I could create the screen footage separately, thus working in parallel.

The main technique was UV mapping (STMapping in Nuke) plus floating-point ID mattes. UV mapping alone wasn’t enough: even though I could easily map footage onto a screen, there were hundreds of screens that needed semi-random footage running on them. Using RGB mattes was out of the question, since I would still end up with too many mattes to manage. I decided that I needed to mark these screens by ID, and so approached it as a numbers-based AOV render.

This was done by first UV mapping all the screens into one UV map, then creating two ramps (one U and the other V) with gradients and precise multiply calculations which enabled colour values way past 1.0. The idea is that the first screen would have a surface value of 0, the next 1.0, the next 2.0, and so on. When rendered from Maya as an .exr, every screen looks white, but when colour-picked in Nuke, the floating-point values are recognised.
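The arithmetic behind the ID scheme can be sketched as follows. The grid size and the exact quantisation are assumptions for illustration; the essential idea is that the U ramp contributes a column index, the V ramp a row offset, and a multiply pushes the combined value well past 1.0 so every screen gets a distinct integer.

```python
# Sketch of the ID scheme: screens laid out on one UV map receive
# integer surface values 0, 1, 2, ... These survive a float .exr and
# can be colour-picked in Nuke even though every screen "looks white".

def screen_id(u, v, cols, rows):
    """Derive an integer ID from a screen's UV-tile position."""
    col = int(u * cols)       # U ramp quantised to a column index
    row = int(v * rows)       # V ramp quantised to a row index
    return row * cols + col  # the multiply lifts values past 1.0

# A hypothetical 10x10 grid of screens, sampled at each tile centre:
ids = [screen_id((c + 0.5) / 10, (r + 0.5) / 10, 10, 10)
       for r in range(10) for c in range(10)]
```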

In Nuke, I created a setup which took any number of footage variations and randomly assigned them to ID mattes, which were subsequently piped into STMaps. The result was a ‘rig’ in which I could swap any footage for another, replace any one screen with a particular piece of footage if I wanted to, and/or change the randomisation of the generic screens at any time.
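The assignment logic of that ‘rig’ might look something like this sketch. The clip names are invented, and the real version lived inside Nuke rather than plain Python, but the shape of it is the same: repeatable randomisation plus per-screen overrides.

```python
import random

# Sketch of the Nuke-side assignment: any number of footage variants,
# randomly mapped to screen IDs, with the option to pin a specific
# screen to a specific clip. Clip names are made up for illustration.

def assign_footage(screen_ids, variants, overrides=None, seed=7):
    """Return {screen_id: footage} with repeatable randomisation.
    'overrides' pins chosen screens to chosen clips."""
    rng = random.Random(seed)  # fixed seed: re-renders stay consistent
    table = {sid: rng.choice(variants) for sid in screen_ids}
    table.update(overrides or {})
    return table

clips = ["clip_A", "clip_B", "clip_C"]
table = assign_footage(range(100), clips, overrides={0: "hero_clip"})
```

Seeding the randomisation matters: when the client swaps one clip, only that clip changes and the other ninety-nine screens keep their previous footage.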

It was a technical challenge that I found satisfying, all the more so because the client did the predictable thing and started changing stuff around. But we were ready.

Watching the video, one would never guess the lengths artists go to in order to account for things that seem out of scope for a commercial. Most people just think about colours, sound, motion, effects, and all the stuff that’s in front of them; but as cg artists, we have to think about the framework behind all that in order to accommodate eventualities known as client feedback.

Sandline

As a CG supervisor in a small CG group, I find it part of my job to think of new ways to improve the workflow beyond the scope of any single job. Yes, I technically supervise a job, but who technically supervises the group? Indeed, introducing small improvements after every job is one of the main ideas of what it means to be supervising.

This requires some chin-rubbing. The company I work for retains only a very small core group — fewer than the fingers of either hand — so it is used to hiring freelances for any conceivable job. Part of the problem with freelances is that when the job is finished, you don’t keep the experience they’ve gained from working on the project. Another problem is that no one can guarantee that any given freelance will be hired for the next job. This makes it difficult to implement an efficient pipeline when, most of the time, most of the crew needs to be indoctrinated into it at the start of the project.

Freelance artists have various ways of working. They can be required to adhere to certain procedures, but depending on whether or not you’ve worked with an artist before, this is a time-consuming task, characterised by many user errors and frustrations that persist throughout the entire project, culminating in freelances concluding their contracts — and leaving — just when they have finally gotten to grips with the method. And when a new job begins, you may have to do it all over again.

It is easy enough to suggest scripting tools to user-proof known issues. But covering the multitude of possible variances coming from unknown future artists is hard to improve upon when the next job comes along: the same ‘mistake’ is not always made the same way. Fighting fires is part of the work of a TD, but when looking for a workable pipeline, you don’t want to depend on it.

Simplicity was my goal: the more generic the structure, the easier it is to understand. Perhaps the structure, methods, and protocols should mimic already-established conventions. Perhaps they should be incorporated into the host app’s GUI so it feels more natural to get into.

The shot workflow we now use was first developed through a collaboration between me and Louis Desrochers, who was, appropriately enough, a freelance who had at the time been working with us on a commercial. Later, my colleague Terry Nghe and I would extend this workflow.

I called this workflow and the tools that support it Sandline.

 

SHOT

There are several facets, but one of them is the simple concept of the shot:

  • A shot is represented by a folder, and contains all things unique to that shot; the folder’s name is the shot name
  • A shot contains ‘scene’ folders such as ‘anim’, ‘layout’, ‘render’, and others — it is open-ended
  • A shot contains a special cache folder to store vertex-cache data, point clouds, meshes, etc.
  • A shot contains a special image plane folder
  • A shot can be considered a ‘sub-shot’ if the shot folder is nested in the directory structure
  • A shot has a definition file which defines its frame range, renderer, resolution, render client min/max frame settings, and a description of the scene
  • A shot’s definition sits on top of a global project definition
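That last point — the shot definition sitting on top of the project definition — amounts to a simple merge. A minimal sketch, assuming hypothetical field names (frameRange, renderer, and so on are illustrative, not Sandline’s actual keys):

```python
# The global project definition supplies the defaults; a shot's own
# definition file only stores what differs from it.

PROJECT_DEF = {
    "renderer": "vray",
    "resolution": [1920, 1080],
    "frameRange": [1, 100],
    "clientFrames": [1, 1],
}

def resolve_shot(project_def, shot_def):
    """Shot settings sit on top of project settings: any key the shot
    defines wins, everything else falls through to the project."""
    merged = dict(project_def)
    merged.update(shot_def)
    return merged

shot = resolve_shot(PROJECT_DEF, {"frameRange": [101, 260],
                                  "description": "hero close-up"})
```

Changing a project-wide default (say, the renderer) then propagates to every shot that never overrode it.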

One of the reasons the shot folder came into being was our experience with cloud rendering. We had used the default Maya workspace behaviour, in which cache files are written to the project root’s cache data directory. When it was time to upload the data to the cloud service, we would sometimes forget to upload some of the cache files or scene files, because they had to be gathered from their two different and respective places.

So why not move all cache files into the same folder since they are only relevant for that shot?

While that solution was an answer to a very specific workflow problem — we no longer use FTP-based cloud services when we can help it — the logic behind it was sound: we would have convenient access to all data related to a specific shot.

 

CACHING

The original Sandline development centred around automating vertex caching. It works like this:

  • Meshes to be cached are tagged by grouping them in a specially-named node, or by applying custom attributes to nodes
  • Namespaces in meshes are treated like directory structures
  • Vertex caches are versioned according to the animation scene’s own version
  • Any version of the vertex cache can be applied to a scene sans cache nodes; it does this based on name-matching and tagging — the same way it saved the cache
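The namespaces-as-directories idea is the crux of the name-matching. A sketch, assuming a hypothetical ‘.mc’ extension and version-folder layout (the real tool’s paths may differ):

```python
# Sketch of the name-matching idea: a namespaced mesh such as
# 'charA:body_GEO' maps to a versioned cache path 'v003/charA/body_GEO.mc',
# and the same mapping runs in reverse to re-apply a cache to a scene
# that has no cache nodes at all.

def mesh_to_cache_path(mesh_name, version="v001"):
    """Treat namespace separators like directory separators."""
    parts = mesh_name.split(":")
    return "/".join([version] + parts) + ".mc"

def cache_path_to_mesh(path):
    """Reverse mapping, used when applying a cache by name-matching."""
    trimmed = path.rsplit(".", 1)[0]   # drop the extension
    parts = trimmed.split("/")[1:]     # drop the version folder
    return ":".join(parts)

p = mesh_to_cache_path("charA:body_GEO", "v003")
```

Because the mapping is purely name-based, the cache can be written from the rigged scene and applied to the shaded models scene without either file knowing about the other.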

 

MODELS

An adjunct to caching is models, which refers to a scene file that contains plain geometry and its shading. The idea behind models is to have geometry with the same point order as the rig’s. When the cache is saved off the rig, it is applied to the shaded models version. In this way, it is possible to totally separate the pipeline between animators, riggers, modellers, and shaders.

The models folder is a global folder, which means it can be used by any shot. It also has a ‘versions’ folder where versioned working scenes live. When models are published, they are promoted to the models directory — appropriately renamed and stripped of their version number — to be used directly by any scene.
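The publish step is mostly a rename. A minimal sketch, assuming a hypothetical ‘_v###’ naming convention (the actual Sandline convention may differ):

```python
import re

# Sketch of publishing: a working file such as 'blender_model_v014.ma'
# in the versions folder is promoted to the models folder as
# 'blender_model.ma', so consuming scenes never reference a version.

def publish_name(versioned_filename):
    """Strip the version token to produce the published filename."""
    return re.sub(r"_v\d+(?=\.)", "", versioned_filename)

published = publish_name("blender_model_v014.ma")
```

The value of stripping the version is that every referencing scene always points at the same stable filename; promoting a new version changes the content, not the path.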

 

RIGS

Rigs are very much attached to the same idea as models, in that the resulting geometry can come from either one, but they must contain the same geometry if the project involves vertex caching (not all projects do). If a rig has been built around a production mesh and the mesh is modified, the model must be imported back in. Likewise, if, due to the technical requirements of the rig, the model needs to be modified, those changes must be exported out to a models file to be sorted out by the modeller and shader to conform with the rig file.

Like models, rigs are publishable: they have a separate ‘versions’ folder where versions of rigs are stored. When published, the version number is stripped and the rig promoted to the rigs folder.

 

MAYA INTEGRATION

I took some pains to integrate the functions, as much as I could, directly into Maya’s interface.

2015-05-16 21_26_19-Autodesk Maya 2013 x64_ untitled_

The ANIM, LAYOUT, and RENDER menus are references to the subfolders of each shot. But instead of listing each shot on the menu, they appear underneath scene folders:

2015-05-16 21_29_21-Autodesk Maya 2013 x64_ untitled_

ROLE-CENTRIC

This might appear odd to most people, because you’d normally expect to traverse to your desired scene the same way you traverse a directory structure. But what’s happening here is that I tried to arrange it from the point of view of what is interesting to the artist in a specific capacity. Indeed, the role of a freelance is general, but it is always specific at any particular time. If you were an animator, you would typically only be concerned with the ANIM folder. If you were responsible for scene assembly or layout, you would cast your attention on the LAYOUT menu. If you were setting the scene up for render, the RENDER menu (and some others). In other words, the menus are arranged according to use, according to role.

And the most important thing about Sandline is that the project leads make up the roles on a per-project basis: sometimes the LAYOUT role is not relevant, or the LAYOUT folder is used as an FX scene. The name of the folder is only a general term, and it is by no means restricted to the roles that have been named by default.

 

FLEXIBILITY

I work in commercials, which means that almost every project is going to be different from the last one. This means that our workflow — and not least of all our mindset — must be pliable enough to adapt to practical requirements.

For instance, when is it a good time to use the cache system? When there is major mesh deformation happening on high poly counts, or if the scene’s combined complexity — say, multiple characters — complicates render scenes, then a cache system will surely be considered. But when a shot only involves transformations, or if visibility settings are animating (eg attributes that do not propagate in cache data), how much practical benefit would you really get from using caches? Or perhaps we’re animating high-poly-count meshes using transforms (eg mechanical objects); caching those verts to represent transformations, instead of using the transformations themselves, is a waste of storage and a waste of time.

Also, not all freelances are going to adhere to procedure. More often than not, regardless of how skilled a freelance is, they will do things their way before they do anything else. And there comes a point when they have progressed too far into the scene to repair a dubious workflow — such as forgetting, or refusing, to reference scenes in certain situations. It has happened too many times. What happens then?

Well, the answer is not in the tool, per se. Of course, I could come up with several ideas for tools that fix certain issues, and if time allowed, I would have written them. But the main point of Sandline is its ability to accept those inconsistencies and ignore them; or rather, to focus on the basic workflow, to encourage its use. So when a freelance forgets or refuses to reference, the tools don’t hinge on that improper use.

I’ve seen other systems which are rigid, and they were rightly rigid given the straightforwardness of their projects: a strict flow of data and protocols of usage. In a commercials environment, this doesn’t work, and no one will get away with playing that sort of tyranny on freelances; you won’t get the project moving, unless it’s a 4-month post-production schedule, which doesn’t happen any more.

 

So, this has been just an introduction to the idea behind Sandline, which is named after Sandline International, a now-defunct private military company. The play is on the idea of ‘freelance’, and this tool was created with that in mind.

That said, freelances who have had the opportunity to use the system, while saying it was ‘cool’, still fell back on their own workflows. Again, this is natural, almost expected, and not really that bad. However, in time, I hope to improve Sandline to the point where using it is more seamless; the measure of its success will be how quickly freelances can take it in and see its workflow benefits immediately, even if they haven’t used the system before.