The cg aspect of this (falling blueberries onto yoghurt in a pot) was another solo job for me. The final comp was a Flame job, though I always try to get 3d renders as close to the actual colours as possible.
There was no pre-production for me, as the job landed on me all of a sudden; I would have liked lighting information from the set and reference plates. Basically, what we had, which was not a whole lot, was all I could work with. Thankfully, the edit included a close-up shot of the pot, which I projected back onto my cg pot model as a texture. This let me get graded colours directly onto the 3d render.
The viscous yoghurt fluid sim was done in Realflow and rendered in V-Ray, where the sub-surface shading was easy to get right. The rest of the elements were rendered in LightWave, which gave me the most control over how the colours came out. This mattered because I had also taken a piece of reference footage showing how the pot looked under lighting similar to that of the cut. LightWave's nodal shading system made it easier for me to control the shading of local areas.
Only one episode of this aborted series seems to be out on the Internet: the video you see above. In fact, three episodes were commissioned and finished; the other two are lurking on someone's hard drive, and although I could get copies of them, I don't think I'm legally authorised to upload them, regretfully.
All of the character animation in those three episodes was done by Brett Tunnicliff, save the titles, which I did. Terry and I were responsible for modelling and texturing the characters. I rigged, rendered, composited, and cut the episodes myself. The boss directed the first two episodes, but by the third, prospects for the series' continuation had waned, as did interest in it, and I was given the honour of finishing off the series oddment as a quasi-director.
Predictably, the third was my favourite, as I felt a bit freer to experiment; by this time no one really cared enough to put their two cents in. So I tightened the storyboard to make the cuts fit together better, began with a beauty-pass timing animatic that I got the boss to sign off on, and things proceeded smoothly from there. Brett commented that he particularly enjoyed the flow of the third episode, which was a nice thing to hear, as I enjoyed running my own small project.
Hardly anyone will ever see that work, unfortunately. And sure, the end result might look dodgy to some, but remember one of the reasons I post these things: many jobs come with disadvantageous circumstances that affect the outcome, and if people knew just how much work was put in, they'd be surprised anything came out of it at all. Most people appreciate only the bells and whistles, the polished gold trim, but I'm here to say that there is a hidden engine powering all creative endeavours, and it should be recognised on equal footing.
Ah. Paper. Lots of paper. I contributed some scenes to this ad when I was working as a freelancer. It was a mixed bag, indeed: everything was rendered in Maxwell Render, but some scenes were built in LightWave and some in Maya. As a freelancer I also worked as a TD, helping troubleshoot both the Maya and LW scenes. I ran cloth sims in LW and helped render in Maxwell, though I hadn't used it before.
I miss the days of working as a freelancer, when I knew that any trouble would only last as long as the job's schedule.
This job was won on the strength of another win, Toyotown: the director was rather pleased with our abilities and wanted to work with us again. The Toyotown job had been given to me to lead, and I wanted to prove that a good workflow makes all the difference between a good-looking product and a bad one. It gave the group an opportunity to prove ourselves without the legacy workflow encumbrances we would otherwise have carried.
But it was a disappointment to find out that the team wouldn't get another go at it: I was asked to revert to my titular role of 'cg supervisor', a euphemism for high-level cg minion, and everyone knew we would be going back to the same wretched workflow we had been trying to change. I don't know why, but perhaps, now that the job had been won, it was status quo ante bellum, and all that.
My contribution was mainly the Realflow water sims and particle effects, the rigging of Gachapin (the character), scene layout, matchmoving, pipeline wrangling, and custom development.
We had been R&D'ing Hybrido sims during pre-production on the (wrong) assumption that the sweet spot for the water sim would be when the character is almost upright. Hybrido did this well, but in the middle of the schedule we were informed that depicting something rising out of the water was a no-no in Japan (something to do with the population's sensitivity towards tsunamis, we were told): Gachapin's rise had to be slow, yet still depicted as powerful. Thus we had to throw out weeks' worth of R&D, and ended up fudging and cheating a powerful effect while the character barely rises from the surface. I chucked Hybrido and used a combination of Realwave sims, splash emitters, and Maya particles.
The job offered more surprises as conflicting intentions kept surfacing (pun intended). At the last minute we were asked to come up with a fur solution, and we scrambled to set up Seekscale for cloud rendering to help manage the unexpectedly heavy renders, but still needed to push the schedule back a week.
Of my shots, the one I like best is the water spray shot. It was also the last to be approved because it kept coming back: I couldn't get it right, for some reason. Then years of experience shouted inside my brain: cheat the shit out of it. So I took old water renders, which weren't even properly tracked to the newest character renders, put in multiple layers of Trapcode Particular particles, fudged stuff around, and voila: approved!
When I said this job was a difficult road, it was not the work I was referring to, but the knowledge that I hadn't made a difference. Making a difference lies in constantly effecting change; but that becomes impossible when simple opportunities are stunted by the constant retreat to the status quo.
This was a joint effort by Dan McKay and me, with him driving the project from After Effects. My part lay mostly in populating the screens with footage. But there was a particular problem: the clients did not (or could not) sign off on all the footage. Knowing that in advance, I decided to make the footage generation procedural, so that Dan could render his bits while I created the screen footage separately, working in parallel.
The main technique was UV mapping (STMapping in Nuke) plus floating-point ID mattes. UV mapping alone wasn't enough: even though I could easily map footage onto a screen, there were hundreds of screens that needed semi-random footage running on them. RGB mattes were out of the question, since I would still end up with too many mattes to manage. I decided I needed to mark the screens by ID, and so approached it as a numbers-based AOV render.
This was done by first UV mapping all the screens into one UV map, then creating two ramps (one for U, the other for V) with gradients and precise multiply calculations that pushed colour values way past 1.0. The idea is that the first screen has a surface value of 0, the next 1.0, the next 2.0, and so on. Rendered from Maya as an .exr, every screen looks white, but when colour-picked in Nuke the floating-point values are recognised.
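For illustration, here is a minimal Maya Python sketch of the same integer-per-screen idea. Where my actual setup multiplied U and V ramps together, this sketch swaps in per-screen constant surfaceShaders, which yields the same result; the screen_* naming and node names are hypothetical.

```python
# Minimal sketch (Maya Python) of the integer-ID encoding. Assumption:
# per-screen constant shaders stand in for the original ramp-multiply
# setup; either way, screen 0 renders as 0.0, screen 1 as 1.0, and so on,
# and the values survive intact in a floating-point .exr.
import maya.cmds as cmds

screens = sorted(cmds.ls("screen_*", type="transform"))  # hypothetical naming

for i, screen in enumerate(screens):
    shader = cmds.shadingNode("surfaceShader", asShader=True,
                              name="idMatte{:03d}_SHD".format(i))
    # Flat colour equal to the integer ID.
    cmds.setAttr(shader + ".outColor", i, i, i, type="double3")
    sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True,
                   name=shader + "SG")
    cmds.connectAttr(shader + ".outColor", sg + ".surfaceShader")
    cmds.sets(screen, edit=True, forceElement=sg)
```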
In Nuke, I created a setup that took any number of footage variations and randomly assigned them to ID mattes, which were subsequently piped into STMaps. The result was a 'rig' in which I could swap any footage for another, replace any one screen with a particular clip if I wanted to, and/or change the randomisation of the generic screens at any time.
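A rough sketch of how such a rig could be wired in Nuke Python, assuming the float IDs sit in the red channel of the ID pass; the file paths, screen count, and Keymix-based merge are hypothetical stand-ins, not the actual script.

```python
# Rough Nuke Python sketch of the randomised screen rig. Assumptions:
# float IDs live in the red channel of the ID pass, a UV pass drives the
# STMap warp, and paths/counts are placeholders.
import random
import nuke

NUM_SCREENS = 120                                # hypothetical screen count
CLIPS = ["clipA.mov", "clipB.mov", "clipC.mov"]  # approved footage pool

id_pass = nuke.nodes.Read(file="renders/screens_id.####.exr")  # float IDs
uv_pass = nuke.nodes.Read(file="renders/screens_uv.####.exr")  # UV/ST map
comp = nuke.nodes.Read(file="renders/bg_plate.####.exr")       # base branch

random.seed(7)   # fixed seed keeps the 'random' assignment repeatable

for sid in range(NUM_SCREENS):
    clip = nuke.nodes.Read(file=random.choice(CLIPS))

    # Matte for this screen: 1.0 where red is within +/-0.5 of the ID.
    matte = nuke.nodes.Expression(
        expr0="abs(r - {}) < 0.5 ? 1 : 0".format(sid),
        inputs=[id_pass])

    # Warp the flat clip into the screen's position via the UV pass
    # (STMap inputs: source image first, then the map).
    warped = nuke.nodes.STMap(uv="rgb", inputs=[clip, uv_pass])

    # Composite this screen over the running comp through its matte
    # (assumed Keymix input order: B, A, mask).
    comp = nuke.nodes.Keymix(inputs=[comp, warped, matte])
```

Swapping a clip, pinning one screen to a specific clip, or reseeding the randomisation then just means re-running the build, which is the point of keeping it procedural.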
It was a technical challenge I found satisfying, all the more so because the client did the predictable thing and started changing stuff around. But we were ready.
Watching the video, one would never guess the lengths artists go to in order to account for things that seem outside the scope of a commercial. Most people just think about colours, sound, motion, effects, and all the stuff that's in front of them; but as cg artists, we have to think about the framework behind all that in order to accommodate eventualities such as client feedback.
I rarely get solo projects, and when I do, it's often some retail job that either involves the simplest form of motion graphics, the kind a 10-year-old could do, or some CG-ish product shot covering the same old ground I've been treading for 13 years now.
Well, although this ad is of the second kind (CG-ish product shot), the fact that it was a solo project is something that always fills me with delight: I feel freest when I work alone, settling into the pace that suits me best.
For what it is, a product ad, I think it's visually OK. Obviously it breaks no barriers, but I had fun doing it. I learned a bit more about LightWave's instancing, added some features and fixed bugs in Janus, and found it relaxing to do something on my own, based on my own tastes.
Remember that this work thread is about the fact that cg projects are rarely straightforward; one artist might look like he's not doing anything when, in fact, he's doing everything, and vice versa.
My contribution to this commercial is tangled up with the fact that someone else had animated it and set up its look to be rendered in Octane, a GPU-based unbiased renderer, presumably to make rendering faster and more beautiful. However, it wasn't faster, and it wasn't that much more beautiful, as the look was mainly AO-like. Time was running out. We have a renderfarm that can render mental ray, V-Ray, LightWave, and After Effects, but not Octane. So it was passed on to me so I could render and comp it in time. Not surprisingly, it wasn't just a matter of hitting the render button in LightWave.
I took the original assets, replaced everything with LightWave shaders, tweaked the shadows and diffuse shading to match the original Octane test renders as closely as possible, and fixed many of the errors present in the scene, as well as a number of broken models. I strategised how much needed rendering based on the animatics; I used Janus, the ultimate LW ass-saver, to break out the many necessary animated elements and mattes, and rendered them; Richard and I did the motion graphics, and I comped everything for the final product in After Effects.
This hot-potato workflow, in which a project is tossed whole to another person to be rescued, is out of my control; I simply have to do it. The main problem I have with it is that few recognise it for what it is: a pawning-off of accountability while keeping the full credit (as such, I'm not credited). And again, this is why this work thread is being written: a project like this would not have seen the light of day if someone hadn't objectively dealt with the details required to actually finish the job to the client's standards. What you see isn't what went on.
There are hard facts about professional workflows that some people are in denial of. Workflows that fly in the face of simple reason will not get them where they want to go, no matter how much they cuss or growl at the monitor. There is a lesson to be learnt here, but will those who need to learn it actually get it?
I find it odd looking at the movie poster for this. I think we were the sub-sub-contractor, and we did practically one sequence from the movie: the escape to the helicopter, for which we created the dust effects.
This is one of those many commercials of mine that best serve as an example of why I started posting these works up. I'll explain. The commercial is Mother Earth – Pingos, and if we go by the usual way these sorts of works are credited in the commercials industry, I would be considered as having had nothing to do with the project whatsoever. The animatic was done by the animation director, and from there everything else was done by Terry. The main character was primarily animated by Paul, working as a contractor. Terry set up, lit, shaded, and rendered all of this in V-Ray, and then comped it as well.
My part was a thing called Sandline, our budding shot/cache automation system, which has grown a lot since then. I wasn't the one operating it, however; I was just the one developing it. Sandline was one of the things that helped Terry along the way; conversely, Terry helped refine Sandline by spotting workflow issues as they related to this job.
Oh, I lied; I just remembered I did actually do something in the ad: the cool-looking ribbon logo animation at the end. There! Can I have my award now?