Iteration

Iteration is the creative process of improving the work in incremental steps. I don’t know if it’s truly a buzzword, but from where I’m standing, it’s always buzzing around. But I think that iteration means something different depending on where you’re standing.

In an advertising agency, for example, the creative team goes through its own rounds of iteration: brainstorming ideas, solidifying them visually through thumbs for internal meetings, then a concept board (if it's a TVC) to be cleared with the client, then, upon feedback, working the process up to a storyboard. The creative process is completely internal in that they have full control over their workflow, with the client giving feedback. Ask the creative director what iteration means for her workflow, and she'll tell you: "it's working up the Idea in small steps, making sure, all the while, that the client is kept in the loop and giving appropriate feedback, which we then apply, advancing the Idea into a final storyboard to be produced." So far, so good.

In the post-production shop, the process is much the same, only a bit more complex, naturally; we deal with lots of technical elements. So while an agency might have a single pipeline, we have at least four going almost concurrently, and those pipelines intermingle with each other. We have models to be made, rigging to be applied to models, animation to be applied to the rig, models to be shaded, shaded models to be lit, whole scenes to render, renders to be comped, effects to be designed and comped, and so on. And that's a standard bread-and-butter job. Let's not get into things like simulations, matchmoving, rotoscoping, and the like.

Now imagine the same creative director working with a post-production shop to produce the TVC. Ask the same question, "What is iteration?" She'll answer, not verbatim but in effect, with this expectation: "I want to see the final product very soon, and iterate on that until it becomes better."

Because the post-production process is unknown to her, she doesn't realise that we have many final 'products' to iterate over: models have their own iteration cycle, distinct from the animation iteration cycle; so does look development, so does effects development; and these come together in a 'master' development pipeline with a separate iteration cycle of its own. She doesn't think to apply her own iterative workflow principle to the post-production side because she is uninformed. And because creative teams prefer not to know, they remain at arm's length from the post-production group, as distanced as they themselves are from their clients, who are equally indifferent to their process. The indifference is passed down from client to agency, and from agency to post-production, generally speaking.

Now, all this time, I've been using the agency's creative director as my example. This is not a fair emphasis, by the way, though it surely makes the point clear, and many agencies do relate to post-production houses this way. But you will also find that directors, be they art directors, TVC directors, or anyone calling the 'creative shots', are just as guilty of this indifference. Worst of all, it should be noted, is that the indifference occurs within a post-production group itself, where some of the upper crust only pay lip-service to the very technical nature of their own operation. Though I began with the ignorance of an agency creative director, she is the least guilty of them all.

The post-production upper crust would have done well to learn the internal creative process of the agency. But I think they condescended, assuming they could be nothing but the client, and thus distanced themselves from their own post-production group. Perhaps by assuming the superior client role, they thought they could eke something creative out of the 'headphone-hooded geeks'.

The agency enjoys a creative process that they themselves have built and enforce to serve their own purposes, because doing so yields a better product for the client and for themselves. Yet the post-production group gets served a plate of uninformed demands by uninformed folks, left undefended by an upper crust who are just as uninformed; and it would have yielded poor results if not for talent and lots of unnecessary personal sacrifices. But even sacrifices have their limits.

Anyone who demands, “I want you to go hard out so you can get me the final product tomorrow, so I can iterate/nitpick/pixel-fuck that until it becomes better” does not know what iteration means and lacks the discipline of imagination necessary to mix the creative aesthetic with the highly technical processes, which is what this industry is about.

 

Janus Macros – Problem of Permutations

Long live Janus. ;-)

 

I've been doing some custom development recently for an architectural visualisation company in Australia called 3DVIZ. During Janus's commercial phase, 3DVIZ bought a licence. It is usually the case that I don't know what Janus users actually do with Janus, technically speaking. Few users make feature requests, and fewer still explain their workflow and how Janus might better improve it. That's mainly why Janus developed the way it did: out of my own production needs and my curiosity about proceduralism.

It is the proceduralism that seems to have drawn 3DVIZ to Janus. When I wrote the FOR loop constructs in Janus, it was mostly out of admiration for Houdini and curiosity about what proceduralism could look like in Janus. No one had asked for it; I only had an inkling that I might, just possibly, need it if I ever got the chance to work on a large-scale project in LightWave. But, if I'm honest, I've never actually used FOR loops in any of my commercial projects; none of them were ever big enough to warrant the advantages of the feature.

When 3DVIZ contacted me for support, I realised that they were using it in a far more advanced way than I personally used it as a TD in commercial work. It was gratifying to see that someone actually had some need to proceduralise their scene management and rendering to that level, and that Janus’s FOR loops actually made all the difference.

3DVIZ approached me again with a permutation problem. From the little I know about it so far, their asset hierarchy is well-organised (ie rigid). And this is because they need to render variants upon variants: from a house, they render parts of the house in varying configurations, each with its own types of materials, and so on and so forth.

Part of 3DVIZ's own request, since they know their problem better than I do, is to enable them to automate scene loading and Janus operations from a more 'global' script; as FOR loops handle the breakouts for any given scene, they now want to expand that capability across scenes. The concept is similar to LightWave's own Render-Q script, where a script resides 'persistently' and orders LightWave to perform tasks.

The most obvious way to automate Janus is to allow it to accept macro commands. A 'controller' script handles the scene loading, then writes a macro to a file containing commands to break out such-and-such render pass; it then signals Janus to receive the macro; when Janus completes the macro, the 'controller' loads the next scene in its queue, writes another macro, and repeats the procedure.
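As a rough sketch of that controller loop, here in Python for brevity: the macro command names (`LoadScene`, `Breakout`, `SignalDone`) and the file-based handshake are my own invented stand-ins, not Janus's actual macro syntax.

```python
import os
import time

def build_macro(scene_path, passes):
    """Compose a macro file's text: load a scene, then break out
    each requested render pass. Command names are hypothetical."""
    lines = ["LoadScene %s" % scene_path]
    for render_pass in passes:
        lines.append("Breakout %s" % render_pass)
    lines.append("SignalDone")
    return "\n".join(lines)

def run_queue(queue, macro_path, done_flag):
    """For each queued (scene, passes) pair: write a macro file,
    which signals Janus, then poll for a 'done' flag file before
    moving on to the next scene in the queue."""
    for scene_path, passes in queue:
        with open(macro_path, "w") as f:
            f.write(build_macro(scene_path, passes))
        while not os.path.exists(done_flag):  # hypothetical handshake
            time.sleep(1.0)
        os.remove(done_flag)
```

The point of the handshake is only that the controller stays 'persistent', Render-Q style, while Janus consumes one macro at a time.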

Thanks to the years put into Janus, the implementation of macros was clean, and with some limited testing, the concept works as well as imagined.

However, my main obstacle would be their expansive asset hierarchy. The real challenge is to make sense of it in my head, and design a ‘controller’ script that creates sensible macros that solve 3DVIZ’s particular problem of permutations.

 

Over Time

I think that when I was younger all that mattered was doing a good job. As I grew older, I wondered if I was missing something. I don’t mean promotions or salary raises. I thought of Time, that forever deal that no one gets to turn away from. I lie to my side at night and, before I go to sleep, I hear my heart beating under me. I wonder why I feel it more keenly now. I think of the day I finally stop trying to sleep and die. I wonder if everything I’ve done since would have been worth it, even just for me.

The strife of overtime is more than just about money, or boredom, or even health, or all the bad reasons why we burn Time this way. Rising onto the surface is waste: Life wasted on things I don’t love; on vanity, on mediocrity, on lusts, on fear.

I love differently, I love different things, as I bear Time. Now, the world has become unintelligible and malicious, and I feel as though I am being born yet again unto myself, coming out of a mystical womb with hysterical infantile cries that I myself don’t hear. The pain of a rebellious newborn — never known — is now remembered; dissidence grows desperately, yearning never to die with an old heart.

If ever I run free, in the present I will live, and all my moments will be as aeons are: more Time than I can ever hope to ask for.

 

 

 

Cold Light

The Cold was a short poem I wrote in front of my workstation one night. Fittingly, I wrote it in my code editor.

The poem has given me much to remember, and through remembrance it holds me to account for all my present moments. It is where this blog’s name comes from, although at the time, I didn’t really consider it beyond the poem’s literal imagery.

The poem talks about dying in the middle of any conceivable night, when the world has gone to bed, except you (me) and the city lights, the office fluorescents still wave-pulsating, droning a tiny sound. When death comes, there is no noise above the silence, so that all is silent, and no one hears you, or sees you, depart.

I remember that one cold night. I was surrounded by the darkness, which I preferred when I worked late. The air was air-conditioned cold against the skin of my night body — a body that loses heat in expectation of sleep. I looked to my right and saw the dead streets, wet after the rain, yellow-orange under lights. I looked down to my hand resting on the keyboard. I saw the monitor drape its light over me. Underneath the office table, lit blue by the computer’s power light, was my sleeping bag.

I have cause to remember this poem, because I always come to the moment of wanting to write it again. Reading it, I find nothing needs to be added, nothing needs to be trimmed. It says everything I need to feel at the moment. To read about a quiet death in a quiet room filled with computer fans humming fills me with an alarm that sounds at the back of my heart. I can hear a humungous gong, a devil screaming in another plane. But I see no vision except the physical sparkles of particles and aura around my eyes, which streak back and forth causing me to turn: is someone there?

The devil is screaming. Or is it my voice I’m hearing?

If I go on like this, I will die much like how I describe it myself. No one will close the lights before my eyes shutter themselves from knowledge of them. I will inherit this sadness in passing — forever. This cold light is the sky of a poor life. Only in leaving this room can there be hope of better chapters.

 

Sandline

As a CG supervisor in a small CG group, I find it part of my job to think of new ways to improve the workflow beyond the scope of the job. Yes, I technically supervise a job, but who technically supervises the group? Indeed, to introduce small improvements after every job is one of the main ideas of what it means to be supervising.

This requires some chin-rubbing. The company I work for retains only a very small core group, fewer than the fingers of one hand, so it is used to hiring freelances for any conceivable job. Part of the problem with freelances is that when the job is finished, you don't keep the experience they've gained from working on the project. Another problem is that no one can guarantee that any given freelance will be hired for the next job. These make it difficult to implement an efficient pipeline when, most of the time, most of the crew needs to be indoctrinated into it at the start of the project.

Freelance artists have various ways of working, and they can be required to adhere to certain procedures; but depending on whether or not you've worked with an artist before, this is a time-consuming task, characterised by many user errors and frustrations that persist throughout the entire project, culminating in freelances finally concluding their contracts, and leaving, just when they have finally gotten to grips with the method. And when a new job begins, you may have to do it all over again.

It is easy enough to suggest scripting tools to user-proof known issues. But covering the multitude of possible variances coming from unknown future artists is hard, because when the next job comes along, the same 'mistake' is not always made the same way. Fighting fires is part of the work of a TD, but when looking for a workable pipeline, you don't want to depend on it.

Simplicity was my goal: the more generic the structure, the easier it is to understand. Perhaps the structure, methods, and protocols could mimic already-established conventions. Perhaps it could be incorporated into the host app's GUI so it feels more natural to get into.

The shot workflow we now use was first developed through a collaboration between me and Louis Desrochers, who was, appropriately enough, a freelance who had at the time been working with us on a commercial. Later, my colleague Terry Nghe and I would extend this workflow.

I called this workflow and the tools that support it Sandline.

 

SHOT

There are several facets, but one of them is the simple concept of the shot:

  • A shot is represented by a folder, and contains all things unique to that shot; the folder’s name is the shot name
  • A shot contains ‘scene’ folders such as ‘anim’, ‘layout’, ‘render’, and others — it is open-ended
  • A shot contains a special cache folder to store vertex-cache data, point clouds, meshes, etc.
  • A shot contains a special image plane folder
  • A shot can be considered a ‘sub-shot’ if the shot folder is nested in the directory structure
  • A shot has a definition file which defines its frame range, renderer, resolution, render client min/max frame settings, and a description of the scene
  • A shot’s definition sits on top of a global project definition
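To sketch that last point, a shot definition sitting on top of a global project definition can be as simple as a key-wise merge. The keys and values below are hypothetical stand-ins for illustration, not Sandline's actual definition format.

```python
def resolve_definition(project_def, shot_def):
    """A shot's definition sits on top of the project definition:
    any key the shot specifies overrides the project-wide default."""
    resolved = dict(project_def)
    resolved.update(shot_def)
    return resolved

# Hypothetical definitions for illustration
project_def = {
    "frame_range": (1, 100),
    "renderer": "mentalray",
    "resolution": (1920, 1080),
    "render_client_frames": (1, 4),
}
shot_def = {
    "frame_range": (101, 148),  # only the shot-specific overrides
    "description": "hero walks into frame",
}

resolved = resolve_definition(project_def, shot_def)
```

The shot file only states what differs; everything else (renderer, resolution, render client settings) falls through from the project.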

One of the reasons the shot folder came into being is our experience with cloud rendering. We had used the default Maya workspace behaviour, in which cache files were written to the project root's cache data directory. When it was time to upload the data to the cloud service, we would sometimes forget to upload some of the cache files or the scene files, because they had to be gathered from two different places.

So why not move all cache files into the same folder since they are only relevant for that shot?

While that solution answered a very specific workflow problem (we no longer use FTP-based cloud services when we can help it), the logic behind it was sound: we would have convenient access to all data related to a specific shot.

 

CACHING

The original Sandline development centered around automating vertex caching. It works this way:

  • Meshes to be cached are tagged by grouping them in a specially-named node, or by applying custom attributes to nodes
  • Namespaces in meshes are treated like directory structures
  • Vertex caches are versioned according to the animation scene’s own version
  • Any version of the vertex cache can be applied to a scene sans cache nodes, based on the same name-matching and tagging used to save the cache
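A minimal sketch of the name-matching idea: namespaces treated as directory levels, and the cache versioned by the animation scene's version. The naming scheme and file extension here are my assumptions, not Sandline's exact convention.

```python
def cache_path(cache_root, anim_version, mesh_name):
    """Map a namespaced mesh name to a versioned cache file path.
    'charA:body:mesh' under version 3 becomes
    '<cache_root>/v003/charA/body/mesh.mc' (hypothetical scheme)."""
    rel = mesh_name.replace(":", "/")
    return "%s/v%03d/%s.mc" % (cache_root, anim_version, rel)

def match_cache(cache_paths, mesh_name, anim_version, cache_root):
    """Apply-side of the same convention: a cache exists for a mesh
    if the path derived from its name and version is present."""
    return cache_path(cache_root, anim_version, mesh_name) in cache_paths
```

Because the path is derived the same way on save and on apply, no cache nodes need to survive in the scene; the names carry the mapping.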

 

MODELS

An adjunct to caching is models, which refers to a scene file that contains plain geometry and its shading. The idea behind models is to have geometry with the same point order as the rig. When the cache is saved off the rig, it is applied to the shaded models version. In this way, it is possible to totally separate the pipelines of animators, riggers, modellers, and shaders.

The models folder is a global folder, which means it can be used by any shot. It also has a 'versions' folder where versioned working scenes are worked on. When the models are published, they are promoted to the models directory, appropriately renamed and stripped of their version number, to be used directly by any scene.
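The promotion step can be sketched as stripping the version suffix and copying the file up into the models directory. The `_v###` naming convention is an assumption on my part, not necessarily Sandline's actual one.

```python
import os
import re
import shutil

# Matches a '_v012'-style token just before the file extension (assumed convention)
VERSION_RE = re.compile(r"_v\d+(?=\.[^.]+$)")

def published_name(versioned_filename):
    """'teapot_v012.mb' -> 'teapot.mb': drop the version token."""
    return VERSION_RE.sub("", versioned_filename)

def publish(versioned_path, models_dir):
    """Promote a working version into the models directory under
    its stripped, canonical name, so any scene can reference it."""
    target = os.path.join(models_dir,
                          published_name(os.path.basename(versioned_path)))
    shutil.copy2(versioned_path, target)
    return target
```

Scenes then reference only the canonical name, so re-publishing a new version never breaks their references.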

 

RIGS

Rigs are very much attached to the same idea as models in that the resulting geometry can come from either one, but the two must contain the same geometry if the project involves vertex caching (not all projects do). If a rig has been built around a production mesh, and the mesh was modified, the model must be imported back in. Likewise, if, by technical requirements of the rig, the model needed to be modified, those changes must be exported out to a models file to be sorted out by the modeller and shader to conform with the rig file.

Like models, rigs are publishable: they have a separate 'versions' folder where versions of rigs are stored. When published, the version number is stripped and the rig is promoted to the rigs folder.

 

MAYA INTEGRATION

I took some pains to integrate the functions, as much as I could, directly into Maya's interface.

[Screenshot: Sandline menus in Autodesk Maya 2013]

The ANIM, LAYOUT, RENDER menus are references to the subfolders of each shot. But instead of listing each shot on the menu, the shots appear underneath the scene folders:

[Screenshot: shots listed under the scene-folder menus in Autodesk Maya 2013]

ROLE-CENTRIC

This might appear odd to most people, because you'd normally expect to traverse to your desired scene the same way you traverse a directory structure. But what's happening here is that I tried to arrange it from the point of view of what is interesting to the artist in a specific capacity. Indeed, the role of a freelance is general, but it is always specific at a particular time. If you were an animator, you would typically be concerned only with the ANIM folder. If you were responsible for scene assembly or layout, you would cast your attention on the LAYOUT menu. If you were setting the scene up for render, the RENDER menu (and some others). In other words, the menus are arranged according to use, according to role.

And the most important thing about Sandline is that the project leads make up the roles on a per-project basis: sometimes the LAYOUT role is not relevant, or the LAYOUT role is used as an FX scene. The name of the folder is only a general term, and it is by no means restricted to the roles that have been named as defaults.

 

FLEXIBILITY

I work in commercials, which means that almost every project is going to be different from the last one. This means that our workflow — and not least of all our mindset — must be pliable enough to adapt to practical requirements.

For instance, when is it a good time to use the cache system? When there is major mesh deformation happening on high poly counts, or when the scene's combined complexity (say, multiple characters) complicates render scenes, then a cache system will surely be considered. But when a shot only involves transformations, or when visibility settings are animating (eg attributes that do not propagate into cache data), how much practical benefit would you really get from caches? Or perhaps we're animating high-poly-count meshes using transforms (eg mechanical objects); caching those verts to represent transformations, instead of using the transformations themselves, is a waste of storage and a waste of time.

Also, not all freelances are going to adhere to procedure. More often than not, regardless of how skilled a freelance is, they will do things their way before they do anything else. And there comes a point when they have progressed too far ahead in the scene to repair a dubious workflow, such as forgetting or refusing to reference scenes in certain situations. It has happened too many times. What happens here?

Well, the answer is not in the tool, per se. Of course, I can tell you now that I can come up with several ideas for tools that fix certain issues, and if time allowed, I would have written those tools. Sure: but the main point of Sandline is the ability to accept those inconsistencies and ignore them; or rather, to focus on the basic workflow, to encourage its use. So when a freelance forgets or refuses to reference, the tools don't hinge on that improper use.

I've seen other systems that are rigid, and they were appropriately rigid given the straightforwardness of their actual projects: there is a strict flow of data, and protocols of usage. In a commercials environment, this doesn't work, and no one will get away with playing that sort of tyranny on freelances; you won't get the project moving, unless it's a 4-month-long post-production schedule, which doesn't happen any more.

 

So, this has been just an introduction to the idea behind Sandline, which is named after Sandline International, a now-defunct private military company. The play is on the idea of ‘freelance’, and this tool was created with that in mind.

That said, freelances who have had the opportunity to use the system, while saying it was 'cool', still fell back to their own workflows. Again, this is natural, almost expected, and not really that bad. However, in time, I hope to improve Sandline to the point where using it is more seamless; the measure of its success will be how quickly freelances take it in and see its workflow benefits immediately, even if they have never used the system before.

Python and SQLite – First Steps

The company I work for uses LTO to archive data. A few years ago, the key IT personnel left, and along with them went the full working knowledge of the archiving system (TBS), which was heavily based around Linux scripts (I've spotted some Perl).

The retrieval system is composed of a user-friendly HTML browser front-end that communicates with the archiving daemon, which publishes search results and then generates code to handle retrieval requests.

The archiving/storing system, on the other hand, is entirely another matter: it seems to be all Linux shell scripting, and no one knew how to use it. We could retrieve stuff, but we couldn’t archive anything back to tape.

The company consulted third-party IT pros and companies about it, and the decision was made to buy PreRollPost as our archiving system. I haven't really used PRP, and I don't have any opinion on it. But note: TBS allowed view access, search queries, and retrieval requests to the database from any networked computer. PRP can only be operated locally, so all actions and operations must be done on the one computer it is installed on.

In the past, when producers came to me asking to pull an archive, or to check for its existence, I would do a search in TBS from my own workstation. Now that wouldn't be possible, and an archive operator role (ARCOP) needed to be created. In an effort to get back to the same efficient workflow, one of the guys in charge of the new PRP system decided he'd create a Google spreadsheet and enter the database information by hand. It's incredibly tedious and, I reckon, prone to mistakes. Due to the company's slow adoption of the new archive system, he's also backlogged with stuff that needs archiving.

I'm not in the IT department; I'm a CG technical supervisor, and I hesitate to assist in these things because I'm expected, first and foremost, to be a CG operator. The producers would sooner throw me a motion graphics job than any workflow project for the archive system. Actual jobs are visible, and thus get more attention. But they won't pay anyone to fix things that aren't readily visible (they don't deem them really that important) until they become the cause of visible disruptions. And so launching into a coding project always risks not being able to finish it; we're not a software company, either.

When I saw the spreadsheet, I first thought I should make a simple helper script for the ARCOP: retrieve the relevant information directly from PRP's SQLite database, put it into a buffer or file as CSV-formatted text, and it would be a copy-paste matter after that. I began the journey by converting the .sqlite to .csv using DB Browser for SQLite, which was indispensable both in analysing PRP's database structure and in learning and troubleshooting SQLite commands (more on that later). When I realised that I didn't want the ARCOP to deal with converting .sqlite to .csv every time he needed to update the spreadsheet, I reworked the code to use the sqlite3 Python module and access the .sqlite directly.
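The shape of that helper, sketched here: the table and column names match the subset quoted in the SQL snippet further down, but the query and CSV layout are my own illustration, not the script's actual code.

```python
import csv
import io
import sqlite3

def rows_to_csv(con, like):
    """Query node names matching a keyword against an open sqlite3
    connection and return CSV text that can be pasted straight into
    the ARCOP's spreadsheet."""
    cur = con.execute(
        "SELECT Z_PK, ZPARENTNODE, ZNAME FROM ZPRPFILESYSTEMNODE "
        "WHERE ZNAME LIKE ?", ("%" + like + "%",))
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["Z_PK", "ZPARENTNODE", "ZNAME"])
    writer.writerows(cur.fetchall())
    return buf.getvalue()
```

With `sqlite3.connect()` pointed at a local copy of the .sqlite file, the conversion step disappears entirely.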

All was going fine; I copied the .sqlite file from the PRP computer to my dev workstation as a local copy. Here, I saw that the PRP database stored full source paths, from which I could extrapolate some details: the project title, the tape number, whether or not an entry should be filtered out based on its relevance. But a surprise came when I re-checked the PRP database and found out I had been using an older version. The main difference was huge: full paths were no longer stored; nodes were checked in with their own unique node numbers, and each also had a parent node. This meant that if I wanted the full path to a particular node, or the full path a node belongs to, I would have to recurse through the hierarchy myself using SQL commands.

It was at this time that an assistant producer rang me asking if a certain project existed. I looked at the ARCOP's spreadsheet, and it wasn't there. I was getting a bit better with DB Browser, so I decided to look directly into the database. I put the keywords into one of the columns' filters and found it; tracing its parent nodes one by one until I got to the root, I finally concatenated the PRP location on a piece of paper and sent it to the ARCOP to retrieve.

It was then that I decided that a helper script for inputting data into the spreadsheet was an obsolete idea: there's no need to recreate a database in a spreadsheet when I already had the ability to access the database programmatically. The issue was that I wasn't familiar with SQLite at all. And, up until recently, I hadn't really touched GUI programming in Python. These were the major knowledge hurdles for me, but there seemed to be no other way, unless I simply dropped the endeavour, which didn't appeal to me.
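The parent-tracing I did by hand on paper can also be sketched in a few lines of Python against the same schema subset used in the SQL snippet in this post, as an alternative to doing the recursion purely in SQL:

```python
import sqlite3

def full_path(con, z_pk):
    """Reconstruct a node's full path by walking ZPARENTNODE links
    up to the root, the same traversal I traced by hand."""
    parts = []
    row = con.execute(
        "SELECT ZPARENTNODE, ZNAME FROM ZPRPFILESYSTEMNODE WHERE Z_PK=?",
        (z_pk,)).fetchone()
    while row is not None:
        parent, name = row
        parts.append(name)
        row = con.execute(
            "SELECT ZPARENTNODE, ZNAME FROM ZPRPFILESYSTEMNODE WHERE Z_PK=?",
            (parent,)).fetchone()
    return "/".join(reversed(parts))
```

One query per hop is slower than a recursive CTE, but for a single archive-pull lookup it is more than fast enough, and much easier to reason about.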

The idea now changed to creating a prog that provides a simple search query mechanism over the database. It would have a 'search all keywords' function, and it would need to filter out most nodes except .tar files (which are used as the compression container for most projects) and directories. Then it needs to present the search results in a list and a string field so that the user can copy-paste the location into an email when requesting an archive pull.

After some brief research I decided to use wxPython for my GUI. I spotted a helper prog called wxFormBuilder that helped me immensely in quickly understanding, through its generated code, what wxPython was doing. Through many articles on Stack Overflow, I also learned how to create a multi-column list to be populated with search results.

I think the hardest part of the project was SQLite, with which I had no previous experience. The main difficulty lay in understanding the flow: whether there were actual concepts of variables, what Common Table Expressions were conceptually, and just how much difference there is between programming languages and SQL commands.

WITH RECURSIVE
    find_parent(Z_PK,ZPARENTNODE,ZNAME,ZISDIRECTORY,ZBYTECOUNT,ZNOTES,ZBACKUPDATABASE,ZMODIFICATIONDATE,ZCREATIONDATE) AS ( 
        SELECT Z_PK,ZPARENTNODE,ZNAME,ZISDIRECTORY,ZBYTECOUNT,ZNOTES,ZBACKUPDATABASE,ZMODIFICATIONDATE,ZCREATIONDATE FROM ZPRPFILESYSTEMNODE WHERE ZNAME LIKE '%fsp%'
        UNION 
        SELECT ZPRPFILESYSTEMNODE.Z_PK, ZPRPFILESYSTEMNODE.ZPARENTNODE, ZPRPFILESYSTEMNODE.ZNAME, ZPRPFILESYSTEMNODE.ZISDIRECTORY, ZPRPFILESYSTEMNODE.ZBYTECOUNT, ZPRPFILESYSTEMNODE.ZNOTES, ZPRPFILESYSTEMNODE.ZBACKUPDATABASE, ZPRPFILESYSTEMNODE.ZMODIFICATIONDATE, ZPRPFILESYSTEMNODE.ZCREATIONDATE FROM ZPRPFILESYSTEMNODE, find_parent WHERE find_parent.ZPARENTNODE=ZPRPFILESYSTEMNODE.Z_PK
        ),
    find_child(Z_PK,ZPARENTNODE,ZNAME,ZISDIRECTORY,ZBYTECOUNT,ZNOTES,ZBACKUPDATABASE,ZMODIFICATIONDATE,ZCREATIONDATE) AS ( 
        SELECT Z_PK,ZPARENTNODE,ZNAME,ZISDIRECTORY,ZBYTECOUNT,ZNOTES,ZBACKUPDATABASE,ZMODIFICATIONDATE,ZCREATIONDATE FROM ZPRPFILESYSTEMNODE WHERE ZNAME LIKE '%fsp%'
        UNION  
        SELECT ZPRPFILESYSTEMNODE.Z_PK, ZPRPFILESYSTEMNODE.ZPARENTNODE, ZPRPFILESYSTEMNODE.ZNAME, ZPRPFILESYSTEMNODE.ZISDIRECTORY, ZPRPFILESYSTEMNODE.ZBYTECOUNT, ZPRPFILESYSTEMNODE.ZNOTES, ZPRPFILESYSTEMNODE.ZBACKUPDATABASE, ZPRPFILESYSTEMNODE.ZMODIFICATIONDATE, ZPRPFILESYSTEMNODE.ZCREATIONDATE FROM ZPRPFILESYSTEMNODE, find_child
        WHERE find_child.Z_PK=ZPRPFILESYSTEMNODE.ZPARENTNODE
        )

 

The snippet above is one of the test SQL commands I was debugging under DB Browser. It is the template I used to get a node's parent and child nodes. I admit there is still a haze between my brain and SQL commands. I think time will tell whether I need to delve into it a bit more.

Over the weekend, I struggled with many aspects of the search function and search results. One of the main issues was how much filtering I should or could do using fast SQL commands versus when I should let Python's convenience functions do the work. In the end, the SQL command returns all traversed trees containing any of the keywords, and Python adds another level of filtering based on the user's options.
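That division of labour might look like this on the Python side: SQL hands back every node in the traversed trees, and Python keeps only what the user's options allow (here, .tar files and directories). The row shape and option names are illustrative assumptions, not the prog's actual code.

```python
def filter_results(rows, tars_only=True, include_dirs=True):
    """rows: (ZNAME, ZISDIRECTORY) pairs from the recursive query.
    Keep .tar archives, and directories if requested."""
    kept = []
    for name, is_dir in rows:
        if is_dir:
            if include_dirs:
                kept.append((name, is_dir))
        elif not tars_only or name.lower().endswith(".tar"):
            kept.append((name, is_dir))
    return kept
```

Keeping this layer in Python means the user options never have to be re-expressed as SQL, which stays a fixed, fast template.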

There are still things to do in the prog itself, such as fixing the sortable columns (probably the itemDataMap attribute is not being populated properly), adding the ability to choose a database to connect to, etc. I'd also need to decide whether the prog should run as a script or as a compiled .app/.exe. Back at the office, the PRP database needs to be accessible to networked computers.

I'm pretty happy that I got relatively deeper in a few days of development than I thought I would. But this weekend was necessary for the push, since there are too many distractions at work to really absorb technical information, especially on a topic that's new to me.

 

Retirement and Retrospect: Janus EOL

After some thought last year, weighing what I want to do in the future, I decided that I should 'clean my closet' first. And one of the things that stood out in that closet was Janus development. There have been no new sales of Janus for quite a long time now, unsurprisingly, because I never made any respectable effort at marketing it. Apart from a few clients, its userbase has been equally quiet. And so I've decided to retire Janus from commercial development, the main reason being that I can't see myself guaranteeing, for free, the same kind of support that Janus users have enjoyed through the years.

It feels like a nominal thing to say that Janus is no longer being developed, because Janus dev hasn't been very active anyway, and I'm pretty sure only a very few are concerned with its development. If anything, announcing the fact will simply get me off that spectral hook of 'developer obligation' for EOL products. At least that's what I hope.

I've always said in the past that Janus was never meant to be a mainstream tool. But over the course of the years, I learned one major reason why: many LWers never tried it. Part of me comprehends the rationale: the Janus video tutorials described a workflow they didn't like, or found confusing. But this is what I couldn't understand: despite the complaints about the lack of a render layer system in LW, people wouldn't even give it a free try in the hope of making something out of it.

Then there was the price point of 200 bucks (later 100 bucks), which might have made it totally incompatible with their idea of what a layering system should cost, no matter how well (or badly) designed. I kept hearing their demands for NT to put a layering system in there as part of their LW upgrade path, avoiding the need to invest in a 100-200 dollar plugin. Apparently, they’ve been waiting for a long time: at least 7 years.

What strikes me ironically, in retrospect, is that if they had invested in Janus from the beginning, it would have been a free Janus upgrade path from then on. Of course, I can’t say that I would have guaranteed it, though it seems likely since it was the case despite the minimal community support behind it. It would have been a very small price to pay to have such a tool that early on in LW9.5 — LW 2015 still does not have a functional equivalent of Janus or a render layer system. I say that in retrospect; a few Janus users have been saying it for years.

Most LWers have lived without a proper layering system, because the need is not truly pressing for most of them; I think despite their complaints, they can wait 7 more years if they have to.

Whether or not I would have stopped development regardless of its popularity is something I will never know myself. To me, Janus, as a commercial product, has run its course. I think it does a lot more than what was advertised, which is a good thing for me as a developer; I’m proud of what I’ve accomplished, but more grateful for the things I’ve learned developing this tool. Janus is still available to be bought, but no support will be given (unless I can actually afford to), and I will try my darnedest to ignore bug fix requests: it’s easy to get obsessed with them, and they eat up lots of my time.

This is not to say that I’m through with Janus: I use it daily, and I will continue to code it to solve problems that I encounter myself. I may yet support Janus in the context of a company as a technical consultant, which is a better use of my time, and one for which I can actually be recompensed. I may fork it, or create a derivative for other progs like Maya — who knows, really? The future is unknown, and I’d rather not try to plan or predict it. I’ve focused on tools for the most part of my vfx career, and left creative pursuits largely untravelled. And that’s where I’m headed next.

“Having no limitation as limitation”

That Bruce Lee quote is great, and I apply it to almost every other thing in my life, especially when thinking about laying on the hurt. But in animation production, it doesn’t apply. And being such a cg elder, I would propose a re-phrasing of Lee’s indelible words.

“Having limitation is not limitation.”

I’ve recently completed two works of two different styles. Both were borne of the meditation that I may die in the middle of a commercial and, in dismay, realise that I have not created anything worthwhile. It’s pretty grim stuff: it’s the only stuff that promotes serious obligation. Much can be said about it, and such will be said … in due time.

But the point here is one of limitation. The works I have completed had serious limitations imposed on them; so serious that, at one point, I actually had to disagree to agree with my wife’s sensitive sensibilities: despite its merits, despite even my own opinion of it, the limitation could not help it out of its limbo.

 

Presently, I present Quiet Time:

Quiet Time’s look is sparse, isn’t it? I don’t find it particularly eyeball-busting or eye-candy-sweet. I like the colours I chose for it, though it could have gone through a few more passes of second-thoughts and fresh-looks. But the key point in understanding this small production is that, from the outset, the predicted practical rigours of doing almost everything by myself had to contribute significantly to the design choices if I were to realistically get it done; though no particular deadline was set at the beginning, I knew that if more than six months elapsed after I began, it would likely remain unfinished. Therefore, I imposed rational limitations that I would adhere to, despite the fits of suppressed artistry boiling in the intestines that would dog me down the schedule.

Some of those limitations were: a.) a fixed general position of the camera, b.) no camera movements, c.) 12 frames per second, d.) no fur fx, e.) stylised/non-photorealistic rendering, f.) no more than 20 shots.

Fixing the camera into a general position and not moving it offered one obvious technical advantage: it was possible to render the backdrop image (or even paint it) only once for each viewpoint, thus saving me production render time. But the more poignant effect of a fixed camera is that it changes the very nature of the storytelling. And this is the aspect I enjoyed, more than any other, about the short. By limiting the point of view, I’m forced to tell the story only by zooming and panning. Of course, I actually break this rule in the short for two shots, but that is not to say the limitation didn’t work. In fact, the limitation set the rhythm by which breaking it served as the useful counterpoint to everything else: peaks, plains, and valleys, as it were.

Deciding on 12 frames per second was also necessary. The obvious technical advantage was that I had fewer frames to render! It also meant that the performance had more give: follow-through animation could have less subtlety because the mind fills it in. The creative impact was that this changed the performance of the character; it changed the timing of the performance, and the edit that cut itself around it: I could afford to hold longer in certain shots without looking odd. It also meant that the performance itself had to be distinct and unambiguous.

Rejecting fur was a fair argument: on the one hand, I wasn’t happy with the look of the sheep’s ‘fur’; on the other, omitting fur was not only for the sake of the sheep character, but for the whole look of the frame; if fur was on the sheep, then why not pick on the grass, or the shepherd’s hair, or the tree’s leaves, or the bird’s feathers to be of a similar detail? And if there is one thing I learned in my fine arts college that echoes to this day, it is the principle of echo: that things inside the frame echo other things within the frame, or what may be abstracted to belong in the frame, even if it be outside the frame. The echo principle can be understood as a ‘totality principle’, or a ‘context principle’, or a ‘holistic principle’, or an ‘“I’ll see you and raise you” principle’. Even though the sheep’s ‘fur’ leaves much to be desired, its sacrifice served the greater good, which is the short and its weird story.

The very omission of fur consequently drove the final look toward that generic and ambiguous term called non-photorealistic rendering (NPR). I did not decide on a cartoony look, but instead on a subtly abused, modified Lambertian shading. Some NPR techniques are actually complicated, and the easily-controllable Lambertian approach reaffirmed the very rationale for why I had imposed this limitation.

And lastly, my arbitrary 20-shot limitation (I had 15 shots all up) was based on recent experience doing an episodic animation; I figured that I had started to feel unhinged at 40 shots, so I halved it; thus it became ‘arbitrary’.

 

Now I move on to a more recent completion. I present to you Poleis – White War (Stairwell Scene).

This began, and ended, quite simply as what we industry vets coolly term an “environment” piece. The first and biggest limitation I set up was a.) that the camera was going to be fixed, so I would have to render only one frame of the environment; I imposed this limitation because I wanted to focus on dressing the set with details, setting a mood, applying a photographic touch; the second was b.) that no character was going to be involved, as it was truly just an environment piece.

Quiet Time’s six etched regulations were perfectly obeyed; but as seen in the videos, Poleis’s Stairwell Scene had only two limitations, and both were clearly broken. For my sins I paid dearly enough. But the sin I suffered greatest was the compromise of having to reject the very sensible suggestion to re-frame the composition to suit the animation of the character. For this did not suit it as an environment piece, and having spent copious amounts of time on the composition of the shot, I could not, for sanity’s sake, come to terms with that good comment.

But the question is why I broke the limitations, and what lessons, good and bad, I discerned.

I broke the camera movement limitation due to directorial ambiguity: is this an environment, which is better served with movement (ie presence of parallax), or is this an environment piece trying to be a piece of a larger — yet non-existing — ‘story’? I chose to remain faithful to the initial purpose of ‘environment’ and rejected ‘story’. I am not sure what I learned from my decision, but I think I understand myself more: my initial intentions are the catalysts — the muse, if you will — of these small ideas. And by keeping to the original idea, or even the lowly purpose, I feel I have stuck to my guns, and thus corroborated, supported, and encouraged the instinct, the intuition.

The character’s presence breaks the other limitation, but its rationale is easier to impart: the environment is certainly better equipped with a breathing being or, failing that, simply a worthy subject matter. Its animation was kept purposely simple (some liked it very much, while some found it lacking), its composition distant so as not to draw attention to itself for the specialness it intrinsically is, for the environment is the thing to be absorbed primarily, even if the eye is looking at the character. If anything, the addition of the character affirms the basic principle of photography of having a subject matter, no matter how subjective, to photograph. If I had failed to produce a character in the middle of the scene, I would have resorted to rain or a hard wind rushing in through the window to disrupt the interior: the wind would have been my character, my subject matter.


One of the other realisations coming from these projects is the honesty of what a story is. My rejection of Poleis as a ‘story’ versus an environment piece was revealing to me, and encouraged me to look at things more honestly. I had to admit that telling a story is not the only thing we ought to be doing, nor is it the only thing we can do.

In my line of work, I encounter a lot of so-called ‘creative individuals’ with great pretensions to ‘story’, as though everything they touch, however menial, is elevated to the expectation and glory of ‘story’. A self-admittedly meaningless series of disconnected, disembodied moving pixels is deviously described as a ‘story’; a ‘brand’ production company harps that brands tell a story when, in fact, brands are simply messages, and many false ones at that. To these posers, ‘story’ is nothing but a buzzword; they’d sooner sell you a can of Coke and call it an epic because they licensed Wagner for their soundtrack; a cynical attempt at a counterfeit human connection for the sake of advertising dollars.


Having completed the Stairwell Scene just before the new year, following closely at the heels of Quiet Time, gives me a sense of completeness of, if nothing else, having created something. As unromantic as that may sound, I realise these are small steps, but let’s not err in belittling these steps: for one day I may yet be free. Are the works substantial? No, but I thank God that that is not the point. The works have substance enough to say they are there and they have been done: finished — for now. But not only that: they create substance in my brain, in my spirit. After the conclusion of the Quiet Time project, I secretly wondered if that would be my last. I shuddered at the thought that I would die with a legacy called Quiet Time, ‘cute’ as it was, and be forgotten, or worse, be faintly remembered as that guy who did that ‘cute stuff’. I suppose that because the works themselves have little personal substance — more works of craft than art — I am all the more eager to press on urgently. For me, there is no choice; unless going mad is a choice.

 

The Tool and the Toy

When I first started in this vfx/CG industry over a decade ago, software innovations were not as numerous as they are today. But in relative terms, the innovation has continued, and in my opinion, is not necessarily superior; it has accomplished its goals at a certain level of sophistication that is just expected, given the fact that we develop on top of another’s innovation. We couldn’t have got to where we are unless someone had done all that research and development from scratch. The software packages back then were fewer, but those that existed were used in production, and they gave rise to the production techniques around which today’s new software bases its design.

But the main difference between yesterday and today is the relative power of a piece of software to do its job versus creative expectations. I’ll use renderers as my example. Not only were yesterday’s creative expectations lower, such that a renderer capable of generating shiny raytraced reflections, coloured shadows, and post-effect lens flares amazed audiences, it was also more difficult to achieve those results, with clunky interfaces and technical limitations. Today’s landscape is different: renderers are not only fast, but they provide photorealistic results — the predominant requirement — for anyone smart enough to install the software and have a sketchy understanding of xyz.

I exaggerate. Still, it is not hard to argue how easily you can achieve good-looking renders with a few clicks. For me, it started when global illumination — physically-based rendering — was introduced, and from there, unbiased rendering for the masses. It continued with real-time (GPU-accelerated) rendering, which is still developing and making its mark — its overall contribution still unknown, yet very promising.

But this isn’t a history lesson as much as it is an observation of the present. In the same way the idea of blogs gave rise to the idea that written vomit is publishable, the development of better and better photorealistic renderers is giving some people the notion that they should be immediately plugged into production.

If this suggestion were given with professional caveats, or qualified proposals, I would be very eager to listen. But in my world, the opposite often happens: software is pushed into the pipe without regard for how it is to be implemented as a tool. This software is great — it’s a cool new toy. And it would be a fine proposition if the software were first used in seclusion. It would be better if the proposition were partnered with a real plan of implementation, but I would be satisfied with sober feedback on how the tool performed in a small project. Implementing it into a pipeline requires more consideration than “hey, this is cool!” It’s nice to have new toys, but not all new toys are tools. They have the potential to be tools, but they remain toys until you clearly understand their limitations, until you see in what way they contribute to the workflow, until they are technically implemented in the pipe in consideration of the creative workflow.

I sometimes wonder why these things never occur to people who keep banging their head on the wall. Maybe they’re not banging their own head: maybe it’s someone else’s head they’re banging, and they don’t get hurt; poor decisions are of no consequence because it’s the lower rung that bears them, so I suppose it doesn’t matter to them. But it matters to me: it’s the lack of thinking things through, the lack of planning, the lack of learning from past mistakes that frustrates; it takes me nowhere: I end up where I started.

I’ve worked long enough in the industry to know what production means. It isn’t just a word thrown about, as some would like to do, to make it appear you have worked on a paying job and were thus granted production experience. A dumb-ass could work in a production, but he’s still a dumb-ass with production experience. The problem I encounter is that for some people, professionalism in production is just for show — for the benefit of the boss who doesn’t know better, for the benefit of a client who doesn’t understand. Look under the hood and you see a teetering structure of unintelligible hacks; you see cowboys shooting from the hip, wormtongues conjuring up smoke-and-mirrors consisting of embarrassingly inappropriate jargon. Professionalism in production is not just about producing something professional-looking; it’s about being professional in how you do production: it’s acknowledging and learning from mistakes, it’s learning how to use your brain before you move your first vertex.

And that’s the difference between a professional working with tools, and a child playing with toys.

 

Disable “Taskbar Always On Top” in Windows 7

Just thought I’d share in the worldwide protest against Windows 7’s stupid and deliberate omission of allowing other windows to go over the taskbar.

There have been numerous efforts to do this, but few of them I particularly liked. So I decided to script my own AHK solution, and this method, so far, mimics Windows XP the closest.


; On every left-click, briefly force the active window above the
; taskbar, then push the taskbar to the bottom of the z-order.
~LButton::
WinGetActiveTitle, curWindow              ; remember which window is active
WinSet, AlwaysOnTop, On, %curWindow%      ; force it above everything, including the taskbar
WinSet, Bottom, , ahk_class Shell_TrayWnd ; send the taskbar to the bottom of the z-order
WinSet, AlwaysOnTop, Off, %curWindow%     ; restore the window's normal z-order behaviour
return

 

EDIT: after working with it a bit more, I found that the script stole focus from some menus, making it impossible to select a menu item unless the mouse button was held down whilst choosing, and then released over the entry. I modified the script — and am glad it worked — so that the Top parameter replaces AlwaysOnTop, making the script less focus-stealing.

 

; Revised version: Top instead of AlwaysOnTop, so menus keep their focus.
~LButton::
WinGetActiveTitle, curWindow              ; remember which window is active
WinSet, Top, , %curWindow%                ; bring it to the top of the z-order (no always-on-top)
WinSet, Bottom, , ahk_class Shell_TrayWnd ; push the taskbar beneath it
WinSet, Top, , %curWindow%                ; make sure the window ends up on top
return