Python and SQLite – First Steps

The company I work for uses LTO to archive data. A few years ago, the key IT personnel left, and with them went the full working knowledge of the archiving system (TBS), which was heavily based on Linux scripts (I've spotted some Perl).

The retrieval system is composed of a user-friendly HTML browser front-end that communicates with the archiving daemon, which publishes search results and generates the code to handle retrieval requests.

The archiving/storing system, on the other hand, is entirely another matter: it seems to be all Linux shell scripting, and no one knew how to use it. We could retrieve stuff, but we couldn’t archive anything back to tape.

The company consulted third-party IT pros and firms about it, and the decision was made to buy PreRollPost (PRP) as our archiving system. I haven't really used PRP, and I don't have any opinion on it. The catch: TBS allowed view access, search queries, and retrieval requests to the database from any networked computer, whereas PRP can only be operated locally, so all actions and operations must be done on the one computer it is installed on.

In the past, when producers came to me asking to pull an archive, or to check whether one existed, I would do a search in TBS from my own workstation. Now that isn't possible, and an archive operator role (ARCOP) had to be created. In an effort to get back to the same efficient workflow, one of the guys in charge of the new PRP system decided he'd create a Google spreadsheet and enter the database information by hand. It's incredibly tedious and, I reckon, prone to mistakes. Due to the company's slow adoption of the new archive system, he's also backlogged with stuff that needs archiving.

I'm not in the IT department; I'm a CG technical supervisor, and I hesitate to assist in these things because I'm expected, first and foremost, to be a CG operator. The producers would sooner throw me a motion graphics job than any workflow project for the archive system. Actual jobs are visible, and thus get more attention. But they won't pay anyone to fix things that aren't readily visible — they don't deem them that important — until they become the cause of visible disruptions. And so launching into a coding project always risks not being able to finish it; we're not a software company, after all.

When I saw the spreadsheet, my first thought was to make a simple helper script for the ARCOP: pull the relevant information directly from PRP's SQLite database, write it out as CSV-formatted text to a buffer or file, and leave copy-pasting as the only manual step. I began by converting the .sqlite to .csv using DB Browser for SQLite, which was indispensable both in analysing PRP's database structure and in learning and troubleshooting SQLite commands (more on that later). When I realised that I didn't want the ARCOP to have to convert .sqlite to .csv every time he needed to update the spreadsheet, I reworked the code to use Python's sqlite3 module and access the .sqlite file directly.
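
Something along these lines is what I had in mind: a minimal sketch only, assuming Python's sqlite3 and csv modules. The table and column names are the ones from PRP's ZPRPFILESYSTEMNODE table shown further below; which columns are worth exporting is my own guess.

# A minimal sketch of the helper-script idea: read PRP's SQLite file directly
# and dump rows as CSV for pasting into the spreadsheet. The column selection
# is an assumption, not the prog's actual export.
import csv
import sqlite3

def export_nodes(db_path, csv_path):
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT Z_PK, ZPARENTNODE, ZNAME, ZISDIRECTORY, ZBYTECOUNT "
            "FROM ZPRPFILESYSTEMNODE"
        ).fetchall()
    finally:
        con.close()
    with open(csv_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Z_PK", "ZPARENTNODE", "ZNAME", "ZISDIRECTORY", "ZBYTECOUNT"])
        writer.writerows(rows)

if __name__ == "__main__":
    export_nodes("prp_copy.sqlite", "prp_nodes.csv")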

All was going fine; I copied the .sqlite file from the PRP computer onto my dev workstation as a local copy. There I saw that the PRP database stored full source paths, which meant I could extrapolate some details: the project title, the tape number, and whether or not an entry should be filtered out as irrelevant. But a surprise came when I re-checked the PRP database and found out I had been working from an older version. The main difference was huge: full paths were no longer stored; nodes were now checked in with their own unique node numbers, each with a parent node. This meant that if I wanted the full path to a particular node, or the full tree that a node belongs to, I would have to recurse through the hierarchy myself using SQL commands.
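
To make the change concrete, here is a rough sketch of what the new schema implies, using the table and column names from the CTE snippet later in this post; how PRP marks a root node (a NULL or 0 parent) is my assumption.

# With only (node, parent) pairs stored, a full path has to be rebuilt by
# walking up ZPARENTNODE until the root. Root detection here is an assumption.
import sqlite3

def full_path(con, node_pk):
    parts = []
    current = node_pk
    while current:
        row = con.execute(
            "SELECT ZNAME, ZPARENTNODE FROM ZPRPFILESYSTEMNODE WHERE Z_PK = ?",
            (current,),
        ).fetchone()
        if row is None:
            break
        name, parent = row
        parts.append(name)
        current = parent  # NULL/0 at the root ends the loop
    return "/".join(reversed(parts))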

It was around this time that an assistant producer rang me asking if a certain project existed. I looked at the ARCOP's spreadsheet, and it wasn't there. I was getting a bit better with DB Browser, so I decided to look directly in the database. I put the keywords into one of the column filters and found it; tracing its parent nodes one by one until I got to the root, I concatenated the PRP location on a piece of paper and sent it to the ARCOP to retrieve.

It was then that I decided the helper script for inputting data into the spreadsheet was an obsolete idea: there's no need to recreate a database in a spreadsheet when I could already access the database programmatically. The issue was that I wasn't familiar with SQLite at all, and, up until recently, I hadn't really touched GUI programming in Python. These were the major knowledge hurdles for me, but there seemed to be no other way, unless I simply dropped the endeavour, which didn't appeal to me.

The idea now changed to creating a prog that provides a simple search mechanism over the database. It would have a 'search all keywords' function, and it would need to filter out most nodes except .tar files (the compression container used for most projects) and directories. Then it would present the search results in a list and a string field, so the user can copy-paste the location into an email when requesting an archive pull.

After some brief research I decided to use wxPython for my GUI. I spotted a helper prog called wxFormBuilder that helped me immensely in quickly understanding, through its generated code, what wxPython was doing. Through many Stack Overflow articles, I also learned how to create a multi-column list to be populated with search results.
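
The multi-column list boils down to a wx.ListCtrl in report mode; here is a bare-bones sketch (the column names and the sample row are placeholders, not PRP's actual fields).

# A bare-bones wxPython frame with a multi-column results list.
import wx

class ResultsFrame(wx.Frame):
    def __init__(self, results):
        super().__init__(None, title="Archive search")
        self.list = wx.ListCtrl(self, style=wx.LC_REPORT)
        self.list.InsertColumn(0, "Name", width=300)
        self.list.InsertColumn(1, "Location", width=400)
        for name, location in results:
            row = self.list.InsertItem(self.list.GetItemCount(), name)
            self.list.SetItem(row, 1, location)

if __name__ == "__main__":
    app = wx.App()
    ResultsFrame([("projectA.tar", "tape042/projectA.tar")]).Show()
    app.MainLoop()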

I think the hardest part of the project was SQLite, with which I had no previous experience. The main difficulty lay in understanding the flow, whether there was any real concept of variables, what Common Table Expressions are conceptually, and just how much difference there is between programming languages and SQL commands.

WITH RECURSIVE
    -- walk upwards from every node matching the keyword to its ancestors
    find_parent(Z_PK, ZPARENTNODE, ZNAME, ZISDIRECTORY, ZBYTECOUNT, ZNOTES, ZBACKUPDATABASE, ZMODIFICATIONDATE, ZCREATIONDATE) AS (
        SELECT Z_PK, ZPARENTNODE, ZNAME, ZISDIRECTORY, ZBYTECOUNT, ZNOTES, ZBACKUPDATABASE, ZMODIFICATIONDATE, ZCREATIONDATE
        FROM ZPRPFILESYSTEMNODE WHERE ZNAME LIKE '%fsp%'
        UNION
        SELECT n.Z_PK, n.ZPARENTNODE, n.ZNAME, n.ZISDIRECTORY, n.ZBYTECOUNT, n.ZNOTES, n.ZBACKUPDATABASE, n.ZMODIFICATIONDATE, n.ZCREATIONDATE
        FROM ZPRPFILESYSTEMNODE AS n, find_parent WHERE find_parent.ZPARENTNODE = n.Z_PK
    ),
    -- walk downwards from every matching node to its descendants
    find_child(Z_PK, ZPARENTNODE, ZNAME, ZISDIRECTORY, ZBYTECOUNT, ZNOTES, ZBACKUPDATABASE, ZMODIFICATIONDATE, ZCREATIONDATE) AS (
        SELECT Z_PK, ZPARENTNODE, ZNAME, ZISDIRECTORY, ZBYTECOUNT, ZNOTES, ZBACKUPDATABASE, ZMODIFICATIONDATE, ZCREATIONDATE
        FROM ZPRPFILESYSTEMNODE WHERE ZNAME LIKE '%fsp%'
        UNION
        SELECT n.Z_PK, n.ZPARENTNODE, n.ZNAME, n.ZISDIRECTORY, n.ZBYTECOUNT, n.ZNOTES, n.ZBACKUPDATABASE, n.ZMODIFICATIONDATE, n.ZCREATIONDATE
        FROM ZPRPFILESYSTEMNODE AS n, find_child WHERE find_child.Z_PK = n.ZPARENTNODE
    )
-- the WITH clause needs a final query to actually run, e.g.:
SELECT * FROM find_parent UNION SELECT * FROM find_child;

 

The snippet above is one of the test SQL commands I was debugging in DB Browser. It is the template I used to get a node's parent and child nodes. I admit that there is still this haze between my brain and SQL commands; time will tell whether I need to delve into it a bit more.

Over the weekend, I struggled with many aspects of the search function and search results. One of the main issues was how much filtering I should (or could) do with fast SQL commands versus how much I should leave to Python's convenience functions. In the end, the SQL command returns all traversed trees containing any of the keywords, and Python adds another level of filtering based on the user's options.
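
The split looks roughly like this; a simplified sketch only, where a flat LIKE query stands in for the recursive CTE above, and the option names are mine, not the prog's.

# SQL casts a wide net over anything matching a keyword; Python then trims
# the results to .tar files and directories according to the user's options.
import sqlite3

def search(db_path, keywords, tars_only=True, include_dirs=True):
    if not keywords:
        return []
    where = " OR ".join("ZNAME LIKE ?" for _ in keywords)
    params = ["%{}%".format(k) for k in keywords]
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT Z_PK, ZNAME, ZISDIRECTORY FROM ZPRPFILESYSTEMNODE WHERE " + where,
            params,
        ).fetchall()
    finally:
        con.close()

    results = []
    for pk, name, is_dir in rows:
        if is_dir:
            if include_dirs:
                results.append((pk, name))
        elif not tars_only or name.lower().endswith(".tar"):
            results.append((pk, name))
    return results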

There are still things to do in the prog itself, such as the problems with the sortable columns (probably the itemDataMap attribute is not being populated properly), adding the ability to choose a database to connect to, etc. I'd also need to decide whether the prog runs as a script or as a compiled .app/.exe. Back at the office, the PRP database will also need to be made accessible to networked computers.
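
For the record, my current understanding of the sortable-columns issue: wxPython's ColumnSorterMixin sorts whatever is in itemDataMap, keyed by the value set with SetItemData on each row. Something like this is what I suspect is missing; a guess, not the prog's actual code.

# Assuming wx.lib.mixins.listctrl.ColumnSorterMixin: the mixin sorts on
# self.itemDataMap, keyed by the item data attached to each row.
import wx
from wx.lib.mixins.listctrl import ColumnSorterMixin

class SortableResults(wx.ListCtrl, ColumnSorterMixin):
    def __init__(self, parent, results):
        wx.ListCtrl.__init__(self, parent, style=wx.LC_REPORT)
        self.InsertColumn(0, "Name")
        self.InsertColumn(1, "Location")
        self.itemDataMap = {}
        for key, (name, location) in enumerate(results):
            row = self.InsertItem(self.GetItemCount(), name)
            self.SetItem(row, 1, location)
            self.SetItemData(row, key)                # links the row to its map key
            self.itemDataMap[key] = (name, location)  # the values the mixin sorts on
        ColumnSorterMixin.__init__(self, 2)           # number of sortable columns

    def GetListCtrl(self):  # required by the mixin
        return self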

I'm pretty happy that I got further than I expected in a few days of development. But the weekend was necessary for the push, since there are too many distractions at work to really absorb technical information, especially on a topic that's completely new to me.

 

Retirement and Retrospect: Janus EOL

After some thought last year, weighing up what I want to do in the future, I decided that I should 'clean my closet' first. And one of the things that stood out in that closet was Janus development. There have been no new sales of Janus for quite a long time now, unsurprisingly, because I never really made any respectable effort at marketing it. Apart from a few clients, its user base has been equally quiet. And so I've decided to retire Janus from commercial development, the main reason being that I can't see myself guaranteeing, for free, the same kind of support that Janus users have enjoyed through the years.

It feels like a nominal thing to say that Janus is no longer being developed because Janus dev hasn’t been as active, and I’m pretty sure only a very few are concerned with its development anyway. If anything, announcing the fact will simply get me off that spectral hook of ‘developer obligation’ for EOL products. At least that’s what I hope.

I've always said in the past that Janus was never meant to be a mainstream tool. But over the course of the years, I learned one major reason why: many LWers simply didn't try it. Part of me understands the rationale that the Janus video tutorials described a workflow they didn't like, or found confusing. But this is what I couldn't understand: despite the complaints about the lack of a render layer system in LW, why people wouldn't even give it a free try in the hope of making something out of it.

Then there was the price point of 200 bucks (later 100 bucks) which might have made it totally incompatible with their idea of a layering system, no matter how well (or badly) designed. I kept hearing their demands to NT to put a layering system in there as part of their LW upgrade path, avoiding the need to invest in a 100-200 dollar plugin. Apparently, they’ve been waiting for a long time: at least 7 years.

What strikes me as ironic, in retrospect, is that if they had invested in Janus from the beginning, it would have been a free Janus upgrade path from then on. Of course, I can't say that I would have guaranteed it, though it seems likely, since that was the case despite the minimal community support behind it. It would have been a very small price to pay to have such a tool that early on in LW 9.5 — LW 2015 still does not have a functional equivalent of Janus or a render layer system. I say that in retrospect; a few Janus users have been saying it for years.

Most LWers have lived without a proper layering system, because the need is not truly pressing for most of them; I think despite their complaints, they can wait 7 more years if they have to.

Whether or not I would have stopped development regardless of its popularity is something I will never know myself. To me, Janus, as a commercial product, has run its course. I think it does a lot more than what was advertised, which is a good thing for me as a developer; I’m proud of what I’ve accomplished, but more grateful for the things I’ve learned developing this tool. Janus is still available to be bought, but no support will be given (unless I can actually afford to), and I will try my darnedest to ignore bug fix requests: it’s easy to get obsessed with them, and they eat up lots of my time.

This is not to say that I'm through with Janus: I use it daily, and I will continue to code it to solve problems that I encounter myself. I may yet support Janus in the context of a company as a technical consultant, which is a better use of my time, and one where I can actually be recompensed for my work. I may fork it, or create a derivative for other progs like Maya — who knows, really? The future is unknown, and I'd rather not try to plan or predict it. I've focused on tools for most of my vfx career, and left creative pursuits largely untravelled. And that's where I'm headed next.

“Having no limitation as limitation”

That Bruce Lee quote is great, and I apply it to almost every other thing in my life, especially when thinking about laying on the hurt. But in animation production, it doesn't apply. And being such a cg elder, I would propose a re-phrasing of Lee's indelible words.

“Having limitation is not limitation.”

I've recently completed two works in two different styles. Both were born of the meditation that I may die in the middle of a commercial and, in dismay, realise that I have not created anything worthwhile. It's pretty grim stuff: it's the only stuff that promotes serious obligation. Much can be said about it, and such will be said… in due time.

But the point here is one of limitation. The works I have completed had serious limitations imposed on them; so serious that, at one point, I actually had to disagree to agree with my wife's sensitive sensibilities; despite its merits, despite even my own opinion of it, the limitation could help it not out of its limbo.

 

Presently, I present Quiet Time:

Quiet Time's look is sparse, isn't it? I don't find it particularly eyeball-busting or eye-candy-sweet. I like the colours I chose for it, though it could have gone through a few more passes of second thoughts and fresh looks. But the key to understanding this small production is that, from the outset, the predicted practical rigours of doing almost everything by myself had to contribute significantly to the design choices if I were to realistically get it done; though no particular deadline was set at the beginning, I knew that if more than six months elapsed after I began, it would likely remain unfinished. Therefore, I imposed rational limitations that I would adhere to, despite the fits of suppressed artistry boiling in the intestines that would dog me down the schedule.

Some of those limitations were: a.) a fixed general position of the camera, b.) no camera movements, c.) 12 frames per second, d.) no fur fx, e.) stylised/non-photorealistic rendering, f.) no more than 20 shots.

Fixing the camera in a general position and not moving it offered one obvious technical advantage: I could render the backdrop image (or even paint it) only once for each viewpoint, saving production render time. But the more poignant effect of a fixed camera is that it changes the very nature of the storytelling, and this is the aspect of the short that I enjoyed more than any other. By limiting the point of view, I'm forced to tell the story only by zooming and panning. Of course, I actually break this rule in two shots, but that doesn't mean the limitation didn't work. In fact, the limitation set the rhythm by which breaking it served as the useful counterpoint to everything else: peaks, plains, and valleys, as it were.

Deciding on 12 frames per second was also necessary. The obvious technical advantage was that I had fewer frames to render! It also meant that the performance had more give: follow-through animation could have less subtlety because the mind fills it in. The creative impact was that this changed the performance of the character; it changed the timing of the performance, and the edit that cut itself around it: I could afford to hold longer in certain shots without it looking odd. It also meant that the performance itself had to be distinct and unambiguous.

Rejecting fur was a fair call; on the one hand, I wasn't happy with the look of the sheep's 'fur'. On the other, omitting fur was not only for the sake of the sheep character but for the whole look of the frame; if fur were on the sheep, then why not pick on the grass, or the shepherd's hair, or the tree's leaves, or the bird's feathers to be of a similar detail? And if there is one thing I learned in my fine arts college that echoes to this day, it is the principle of echo: that things inside the frame echo things within the frame, or what may be abstracted to belong in the frame, even if it be outside the frame. The echo principle can be understood as a 'totality principle', a 'context principle', a 'holistic principle', or an '"I'll see you and raise you" principle'. Even though the sheep's 'fur' leaves much to be desired, its sacrifice served the greater good, which is the short and its weird story.

The very omission of fur consequently drove the final look toward that generic and ambiguous term, non-photorealistic rendering (NPR). I did not settle on a cartoony look, but on a subtly, even abusively, modified Lambertian shading. Some NPR techniques are actually complicated, and the easily controllable Lambertian approach reaffirmed the very rationale for which I had imposed this limitation.

And lastly, my arbitrary 20-shot limit (I ended up with 15 shots all up) was based on recent experience doing an episodic animation; I figured that I started to feel unhinged at around 40 shots, so I halved it; thus the 'arbitrary'.

 

Now I move on to a more recent completion. I present to you Poleis – White War (Stairwell Scene).

This began, and ended, quite simply as what we industry vets coolly term an "environment" piece. The first and biggest limitation I set was a.) that the camera was going to be fixed, and I would have to render only one frame of the environment; I imposed this because I wanted to focus on dressing the set with details, setting a mood, and applying a photographic touch; and b.) that no character was going to be involved, as it was truly just an environment piece.

Quiet Time's six etched regulations were perfectly obeyed; as seen in the videos, Poleis's Stairwell Scene had only two limitations, and both were clearly broken. For my sins I paid dearly enough. But the sin I suffered greatest was the compromise of having to reject the very sensible opinion to re-frame the composition to suit the animation of the character. That would not have suited it as an environment piece, and having spent a copious amount of time on the composition of the shot, I could not, for sanity's sake, come to terms with that good comment.

But the question is why I broke the limitations, and what lessons, good and bad, I discerned.

I broke the camera movement limitation due to directorial ambiguity: is this an environment, which is better served with movement (i.e. the presence of parallax), or is this an environment piece trying to be a piece of a larger — yet non-existent — 'story'? I chose to remain faithful to the initial purpose of 'environment' and rejected 'story'. I am not sure what I learned from my decision, but I think I understand myself more: my initial intentions are the catalysts — the muse, if you will — of these small ideas. And by keeping to the original idea, or even the lowly purpose, I feel I have stuck to my guns, and thus corroborated, supported, and encouraged the instinct, the intuition.

The character's presence breaks the other limitation, but the rationale is easier to impart: the environment is certainly better equipped with a breathing being or, failing that, simply a worthy subject. Its animation was kept purposely simple (some liked it very much, while some found it lacking), its composition distant so as not to draw attention to itself for the specialness it intrinsically is, for the environment is the thing to be absorbed primarily, even if the eye is looking at the character. If anything, the addition of the character affirms the basic principle of photography of having a subject, however subjective, to photograph. If I had failed to produce a character in the middle of the scene, I would have resorted to rain or hard wind rushing in through the window to disrupt the interior: the wind would have been my character, my subject matter.


One of the other realisations coming from these projects is honesty about what a story is. My rejection of Poleis as a 'story' versus an environment piece was revealing to me, and encouraged me to look at things more honestly. I had to admit that telling a story is not the only thing we ought to be doing, nor is it the only thing we can do.

In my line of work, I encounter a lot of so-called 'creative individuals' with great pretensions to 'story', as though everything they touch, however menial, is elevated to the expectation and glory of 'story'. A self-admitted meaningless series of disconnected, disembodied moving pixels is deviously described as a 'story'; a 'brand' production company harps that brands tell a story, when in fact brands are simply messages, and many false ones at that. To these posers, 'story' is nothing but a buzzword; they'd sooner sell you a can of Coke and call it an epic because they licensed Wagner for the soundtrack: a cynical attempt at a counterfeit human connection for the sake of advertising dollars.


Completing the Stairwell Scene just before the new year, following closely on the heels of Quiet Time, gives me a sense of, if nothing else, having created something. As unromantic as that may sound, I realise these are small steps, but let's not make the error of belittling them: one day I may yet be free. Are the works substantial? No, but I thank God that that is not the point. The works have substance enough to say they are there and they have been done: finished — for now. But not only that: they create substance in my brain, in my spirit. After the conclusion of the Quiet Time project, I secretly wondered if it would be my last. I shuddered at the thought that I would die with a legacy called Quiet Time, 'cute' as it was, and be forgotten, or worse, be faintly remembered as that guy who did that 'cute stuff'. I suppose that because the works themselves have little personal substance — more works of craft than art — I am all the more eager to press on urgently. For me, there is no choice; unless going mad is a choice.

 

The Tool and the Toy

When I first started in this vfx/CG industry over a decade ago, software innovations were not as numerous as they are today. But in relative terms, the innovation has continued, and, in my opinion, it is not necessarily superior; it has accomplished its goals at a certain level of sophistication that is simply expected, given that we develop on top of one another's innovations. We couldn't have got to where we are if all that research and development had to be done from scratch. The software back then was fewer in number, but what existed was used in production, and it gave rise to the production techniques around which today's new software bases its design.

But the main difference between yesterday and today is the relative power of software to do its job versus creative expectations. I'll use renderers as my example. Not only were yesterday's creative expectations lower (a renderer capable of generating shiny raytraced reflections, coloured shadows, and post-effect lens flares amazed audiences), those results were also more difficult to achieve, with clunky interfaces and technical limitations. Today's landscape is different: renderers are not only fast, but they provide photorealistic results — the predominant requirement — for anyone smart enough to install the software and have a sketchy understanding of xyz.

I exaggerate. Still, it is not hard to argue how easily you can achieve good-looking renders with a few clicks. For me, it started when global illumination — physically-based rendering — was introduced, and, from then on, unbiased rendering for the masses. It continued with real-time (GPU-accelerated) rendering, which is still developing and making its mark — its overall contribution unknown as yet, but very promising.

But this isn't a history lesson as much as it is an observation of the present. In the same way that blogs gave rise to the idea that written vomit is publishable, the development of better and better photorealistic renderers is giving some people the notion that they should be immediately plugged into production.

If this suggestion were given with professional caveats, or as a qualified proposal, I would be very eager to listen. But in my world, the opposite often happens: software is pushed into the pipe with no regard for how it is to be implemented as a tool. This software is great — it's a cool new toy. It would be a fine proposition to trial the software in seclusion first. It would be better still if the proposition were partnered with a real plan of implementation, but I would settle for sober feedback on how the tool performed in a small project. Implementing it in a pipeline requires more consideration than "hey, this is cool!" It's nice to have new toys, but not all new toys are tools. They have the potential to be tools, but they remain toys until you clearly understand their limitations, until you see how they contribute to the workflow, until they are technically implemented in the pipe with the creative workflow in mind.

I sometimes wonder why these things never occur to people who keep banging their heads against the wall. Then again, maybe they're not banging their own heads: maybe it's someone else's head they're banging, and they don't get hurt; poor decisions are of no consequence because it's the lower rung that bears them, so I suppose it doesn't matter to them. But it matters to me, because it's the lack of thinking things through, the lack of planning, the lack of learning from past mistakes that frustrates; it takes me nowhere: I end up where I started.

I've worked in the industry long enough to know what production means. It isn't just a word to be thrown about, as some would like to do, to make it appear that you have worked on a paying job and are thus granted production experience. A dumb-ass can work in a production, but he's still a dumb-ass with production experience. The problem I encounter is that, for some people, professionalism in production is just for show — for the benefit of the boss who doesn't know better, for the benefit of a client who doesn't understand. Look under the hood and you see a teetering structure of unintelligible hacks; you see cowboys shooting from the hip, wormtongues conjuring up smoke and mirrors out of embarrassingly inappropriate jargon. Professionalism in production is not just about producing something professional-looking; it's about being professional in how you do production: acknowledging and learning from mistakes, and learning to use your brain before you move your first vertex.

And that’s the difference between a professional working with tools, and a child playing with toys.

 

Disable “Taskbar Always On Top” in Windows 7

Just thought I'd share my contribution to the worldwide protest against Windows 7's stupid and deliberate omission of the option to let other windows sit over the taskbar.

There have been numerous efforts to do this, but few of them I particularly liked. So I decided to script my own AutoHotkey (AHK) solution, and this method, so far, mimics Windows XP the closest.


~LButton::                                   ; fire on every left-click (the ~ passes the click through)
WinGetActiveTitle, curWindow                 ; remember which window is currently active
WinSet, AlwaysOnTop, on, %curWindow%         ; briefly force it above everything
WinSet, bottom, , ahk_class Shell_TrayWnd    ; push the taskbar to the bottom of the z-order
WinSet, AlwaysOnTop, off, %curWindow%        ; release the always-on-top flag
return

 

EDIT: after working with it a bit more, I found that the script stole the focus from some menus, making it impossible to select a menu item unless the mouse button was held down while choosing and then released over the entry. I modified the script — glad it worked — so that the Top parameter replaces AlwaysOnTop, making the script less attention-grabbing.

 

~LButton::
WinGetActiveTitle, curWindow
WinSet, Top, , %curWindow%                   ; Top raises the window without forcing always-on-top
WinSet, bottom, , ahk_class Shell_TrayWnd
WinSet, Top, , %curWindow%
return

User functions FTW

Here I am, a few days later, talking about user functions in Janus again. Why not? They've been giving me results and they're fun to play with — as a TD, I mean.

Back at work I wanted to output an RGB matte for multiple instanced objects. LW doesn't have a way to do a scene-based surface override on an object (perhaps shaderMeister could, though I don't know whether the Spot Instance node would work with it). Basically, I wanted an easy way of assigning RGB colours to items in a particular group.

Before, Janus's item processing was strictly group-oriented: group settings were simply propagated to the items. There was no way to individually and programmatically affect certain items within the group. So in the latest rev, I changed that. The important bit is that I tied it (again) to user functions, and I embedded a new context — the group item context — by populating two constant variables holding the group item's index and name. This way, a user function can be written that references the group item's index and, based on that value, changes the command parameters for that particular item as it is being processed. It's basically a 'last minute' change to the render pass settings just before the parent group's settings are applied to the group item.

Since user functions are basically string replacements within the cmd line, there's a lot of flexibility (though it can also end up a heaping mess if the Janus user doesn't watch it), and I can dynamically change various bits of settings; I'm not limited to adding one parameter, for example, because I can just concatenate new subcommands in the return string; as long as the final resulting cmd line is syntactically valid, Janus will grok it.
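
To illustrate the mechanism, and only the mechanism: this is a Python sketch, not Janus's actual LScript-side syntax, and the subcommand name is made up.

# A user function receives the group item's index and name (the two 'constant
# variables') and returns extra subcommands that get spliced into the cmd line.
def rgb_matte_user_func(item_index, item_name):
    # cycle a pure R, G or B matte colour per instanced item
    colour = [(255, 0, 0), (0, 255, 0), (0, 0, 255)][item_index % 3]
    return " -surfcolor %d %d %d" % colour  # hypothetical subcommand

def build_cmd_lines(base_cmd, items, user_func):
    cmds = []
    for index, name in enumerate(items):
        # the user function's return string is simply concatenated onto the command
        cmds.append("%s %s%s" % (base_cmd, name, user_func(index, name)))
    return cmds

print(build_cmd_lines("renderpass -item", ["cube_A", "cube_B", "cube_C"], rgb_matte_user_func))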

Janus for Mac

It's a bit too early to shout it out, so I'm whispering here in my little blog about Janus for Mac. I've restarted development for the Mac. It basically just took a weekend to hunker down and sort out what was going on with it.

The biggest issue I came up against was the use of LScript's store() and recall() functions, which on Windows use the registry to store their information. On the Mac, I'm not sure where the data ends up, but apparently calling those functions many times seemed to put LW in a state where it could no longer stream more files into memory. It was as if the I/O was full.

I avoided this by redirecting the functions; this was made easier by the fact that they were already housed in custom functions, and all I had to do was make everything cohesive. I also cleaned up the code so that the array Janus uses to store settings is more consistent, and so that the PC version remains unaffected by the change.

At this point the Mac seems to be very workable, and I’m probably going to do a few iterations of tests in the coming weeks. And who knows? — it may come out officially soon after that.

User functions revisited

User functions have come back, and what seemed to me at the time an excuse to implement a structure that’s both ambitious (because it wishes to liken itself to Houdini) and hacky (because it ain’t Houdini), seems now to have been vindicated.

Talking about user functions is like talking about… well, nothing analogous I can pick out: no one relates to it, not because the concept is uncommon — cg ops use it all the time in other progs — but because it's in Janus, and thus LW, and among the LW crowd, it's weird.

But then this person contacts me asking for some scripting help; he's made his assets and scenes, and needs a way to break out passes. I learn what he particularly wants, and think Janus fits the bill, albeit with a bit of modification to get the exact workflow. I ponder how much modification is needed. Then I realise that I already have a system for user customisation in Janus: user functions. But user functions, prior to this new development, lived only in the realm of the broader render pass settings (eg subcommands, and the cmd line). What was needed was to extend the same functionality to partial surface overrides, which are contained in text files.

This modification — user functions implemented in partial surface overrides — was done in one night, because there wasn't much fudging of code; the functions were all there, and all I needed to do was get the timing right so that the parsing functions didn't fall on top of one another, and so that the proper variables were populated before being passed on to the user function. So now I'm using the very system I made for Janus users to get unique results, instead of hard-coding things in.

I'm very grateful, and again feel vindicated, that by not taking the easy route, by sticking to the vision I had for Janus, I am seeing the results of that flexibility, which I had hoped would be more evident to LW users. While I still inwardly bemoan the fact that Janus is not a very popular tool, it's hard to feel sorry for it when it's proving itself up to the task in more and more situations.

The tool is not the thing

Tools, and I mean software tools for 3D, continue to change, and they're very impressive indeed: yet another toolset, this time for texture artists. There are already impressive ones in the killing fields, there are bound to be more, and they are bound to keep improving. Perhaps you can relate to this sentiment: you can never have enough new software capabilities.

I find myself, like many others, in awe of the amazing technologies that come out. It's like this never-ending creation and re-creation of stuff. Industry lessons are learned, and these new methodologies are passed on to these new tools. And they will always impress. That's because they will always be geared towards solving today's so-called problems.

But what are those problems, really? I would like to point out that they are essentially technical problems. Technical problems birthed by past technical problems that were solved. “Now you can do this” is the cyclic tagline of many a software product.

I'd like to phrase the problem this way: imagine yourself imagining your work. This is the impression I have when I see new software, or improvements to it. I get so enamoured of the capabilities of the software that I imagine what I could do if I had it. And that, to me, is partly an advantage, but largely a curse. It is an advantage to those who can tune it out, because they'll reap the benefits minus the curse. And the curse is, I think, that we can never sit still enough to ignore the fanfare and get on with it.

I am putting this in the context of a creative artist, not a production artist. In a production, in an industry, the creativity lies in many nooks and corners, dark hallways, or wide-open spaces – depending on the group you work with. But as a rule, tools are meant for streamlining methods, making people or systems more efficient. And not everything that is efficient for production is efficient creatively. Whatever efficiency an individual artist gains by adopting a new tool every other month must be weighed against the distraction of a new feature he or she has just got to have, in order to achieve a look that wasn't even sought before the feature came around.

Lastly, I’d like to add that, at some significant point, we become more creative by limiting our tools. And I quote Bruce Lee: “It’s not the daily increase but daily decrease. Hack away at the unessential.”

In modern-speak: Cut shit out.