2011-11-27

AM Class 4 Shot 2, refining


My current shot is coming together. I just submitted it.
It's our first dialogue shot at AM.

I'm discovering a curious thing about details:
The more subtle a motion is, the more detail you can put into it.
It's definitely a matter of "less allows more".


Reflecting on "refining":

You go from "wings of eagles" to "running, then walking," but not as a matter of growing weary.
Rather, it really is a natural progression of maturation: refinement of motion, refinement of intent, refinement of purpose.
This will end up being my most subtle AM shot so far.

2011-11-18

Smear frame methods in 3D, examples from 2011 Warner Brothers short


A link to this upcoming Warner Brothers short was posted on the AM Facebook page recently.
The full preview can be seen here. A handful of AM mentors and alumni worked on this movie.

The use of smear frames in 3D has been a matter of interest for me, so I thought I'd share two fleeting examples extracted from this short. Because of persistence of vision, combining multiple frames of motion into a single frame like this does not look wrong at full speed; it's an effective technique for achieving the appearance of very fast motion. In the top example, Sylvester's bat goes from fully up to fully down with just this one single inbetween frame, a very snappy action.
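As a rough illustration of the "merging multiple frames into one" idea, here's a minimal sketch in Python with NumPy. This is purely hypothetical pixel math of my own devising, not how the short was actually made; real productions would build smears with stretched geometry or post-render artwork. It simply keeps the darkest pixel from each frame so the moving object's silhouette survives from every position:

```python
import numpy as np

def merge_frames(frames):
    """Combine several grayscale frames into one naive 'smear' frame
    by taking the per-pixel darkest value, so the dark moving object's
    silhouette from every input frame survives in the composite."""
    stack = np.stack(frames, axis=0)   # shape: (num_frames, height, width)
    return stack.min(axis=0)           # darkest value wins at each pixel

# Three tiny 4x4 white frames, each with a dark "object"
# occupying a different column (i.e., the object moving left to right).
frames = [np.full((4, 4), 255, dtype=np.uint8) for _ in range(3)]
for i, frame in enumerate(frames):
    frame[:, i] = 0  # the object at successive positions

smear = merge_frames(frames)
# The composite now shows the object's trace across all three positions at once.
```

At full playback speed, persistence of vision reads such a composite as one very fast motion rather than three overlapping shapes.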

It's great to see a 2d-style method being applied in 3D, instead of the more common approach of motion blur via rendering. Because such 1-3 frame fast motions are only perceived subconsciously, an animator arguably has a greater degree of abstraction to take liberties with in these contexts. At the same time, using such a method does seem to remain something of a special case, an exceptional treatment.

I wonder whether those blur lines were volumetric or were applied as vector art after rendering?

2011-11-15

Visions of fully integrated 3D GUI; input devices

In my previous posting, I said
In many ways I think pencil testing will be more reasonably bypassed when next generation 3D input devices and 3D displays become common. The task of working in 3D still remains a bit of an abstraction as things are currently done, and a certain amount of planning is a matter of coping with this.

I thought I’d share a few thoughts on this to clarify.

A more sculptural approach to posing in 3D should become more natural once more direct 3D interfacing becomes common.

Having spent a certain amount of time as a clay sculptor in the past, I can attest that the existing 3D posing workflows, using SRT controls, are still abstractly removed from intuition and reality. The mouse is a clumsy old beast, and the stylus is an improvement, but both are still primarily 2d input devices. We're in a Graphical User Interface era that has resolved many 2d display and input issues, but which has yet to fully resolve 3D display and input. The technologies exist, but few of them have yet gained common usage.
This 2d GUI era is still far better than the number crunching that Brad Bird had to do in his earliest pre-GUI days at Pixar (what might arguably be deemed the initial "1d" era of computing). But all of this is still less than ideal, and it seems to me we remain only halfway to where we need to go. For 3D work, the GUI is badly in need of next-generation methods that still more closely emulate 3D reality. I am impatient for the day when I will be able to "physically" grab a rig and manipulate it in a naturally three-dimensional manner. Aren't we "supposed" to have already achieved these levels of technological development? Moore's law may be impressive, but technological progress is still a slow beast in light of visionary perspectives.

I'm glad to say that there are already early signs of this “fully integrated 3D GUI” future.

In terms of input devices, here are two examples of work being done to address these issues:
Kinect for motion capture (Softimage at Siggraph 2012), which is potentially useful for pose capture as well, and the Japanese QUMA humanoid posing input device. Regarding QUMA, here's a short review, and a company link. The Softimage example may be more relevant to motion capture work, while QUMA perhaps shows greater potential for general character animation work. Either way, I'm happy to see this sort of innovation taking place, and leaders in innovation are worth celebrating.

Upon reflection, humanoid posing easily runs into various probable snags. Proportions won't always match, it's likely of no use for IK or non-viewport inputs, non-humanoid animation would be excluded, and even humanoid rigs would have to be relatively simple, limited to the input parameters of the device.

I see a more ideal solution involving Kinect-like motion capture of just the animator's hands, virtualized into the 3D environment, with software then permitting direct interaction between the virtualized hands and the rig and/or rig controllers. This would permit interaction with any sort of rig in a 3D environment, and could also permit tactile sculpting for polygonal modeling in sculpting applications akin to ZBrush and Mudbox.

Integrate this with any given (already existing) perceptually-3D viewport setup, and you complete the creative artist's feedback loop. I expect something of this nature could speed up the animation workflow severalfold; I'm guessing work could be done at least twice as fast, maybe up to five times faster, and with better sculptural quality. This is all the more relevant for the more truly sculptural animation needs of the gaming world.

A tactile feedback glove is another more commonly considered option. This technology, which has existed for at least two decades already, has still failed to go mainstream. Once again, a Kinect-like approach has an advantage over this older idea, because it would be primarily software rather than hardware driven, making it friendlier to innovation and development, with faster feedback and iteration cycles.

Multitouch is an improvement on the mouse, to be sure. But it is still fundamentally 2d in nature.

There is also the maturing context of 3D displays, many of which are already on the market. They're still expensive, and I'm sure there are issues with drivers and such even once obtained. The two-view approach is certainly achievable now and represents the bulk of what is currently referred to as "3D display." Truly volumetric 3D displays remain a long way off, but technologies are slowly being developed for them as well.

But it’s great to see that side of things gradually emerging.

It took the computer industry a long time to make (2d) GUI the norm among users, and it's still taking a long time for these next steps to be achieved. I look forward to the day computer interfacing becomes as truly intuitive and straightforward in 3D as it has become in 2d, and I think a lot more discussion and effort remain ahead in the process of raising up a fully integrated 3D GUI.

To pencil test or not to pencil test?

I haven't been showing much of my planning work on this blog yet, maybe in part because it's a rather more chaotic process, maybe in part due to the natural tendency of any artist to want to show only finished work. So here's an effort to share a bit of that side of things.
This is the first blocking pass on my second shot of Class 4. We are using the famous AM Bishop rig for the first time this week. It's nicely integrated with the previous rigs, with control names and methods remaining consistent to minimize confusion. He's still got a massively big head and awfully narrow shoulders, and dealing with his big hands is a challenge in its own right. Unlike Stewie, there may not be a practical or straightforward way to shrink his head. Ya just gotta run with it.
Prior to this was a process of simply selecting an appropriate audio clip (which I mulled over through the course of a few weeks: good to look ahead and get an early start on this, something that my mentor, Wayne, also encouraged the class to do). This was followed by the filming of video reference (I filmed about an hour's worth, then narrowed it down to 10 seconds for each of four finalist ideas). Of these, with feedback from Wayne and the class I selected one and ran with it. So there's a lot of work prior to this stage in this particular shot. In a studio environment it may be that a lot of this would be "served up" for the animator. But this is where, as a student, you are still in many ways both director and animator.

After finalizing video reference, I then did a pencil test study pass of this, followed by the in-Maya blocking. So the 3D animation you see here is blocking, in splines. In this case, about 32 starting key poses in a 240 frame timespan.
Based on this week's critique, a lot of things will already need to be changed. I planned for a clothing-tug gesture, which I think will just have to be ditched because there's no practical way of going there at this stage and in this context. The background I chose led me to use a wide-angle lens, about 20 mm; this needs to be revised up to perhaps 70 mm instead. A lot of what needs revision is simple pragmatism.
One thing that has surprised me about Wayne is that he doesn't seem to show interest in pencil testing. This contrasts with Pete, my previous mentor, who considered it a valuable step in the planning process. I included a pencil test planning step in my last 3 assignments, and I feel it does yield stronger results. At the same time, I notice that opinions are divided regarding the value of this pass. It seems that the majority of students don't pencil test at all, and that many of the mentors don't either, though many also do.
The matter was brought up explicitly again in a school-wide Q&A last night, when a student asked Drew if he used pencil tests and rotoscoping in his workflow. The answer was no. However, he emphasized pushing poses when blocking and distributing motion through many joints: being very careful to avoid making changes between poses that are isolated to just one joint (particularly in the spine), which quickly produces lifeless, robotic results, as well as broken, inorganic poses. Essentially, building strong poses via a more sculptural way of thinking.
I'm still trying to figure this out.
In many ways I think pencil testing will be more reasonably bypassed when next generation 3D input devices and 3D displays become common. The task of working in 3D still remains a bit of an abstraction as things are currently done, and a certain amount of planning is a matter of coping with this.
I put quite a lot of work into my pencil test this week, but I still ran into some balance problems in the figure. I'll attend to this, but I think that doing the pencil test actually put me a bit behind in my workflow.
This leads me to the lingering question of whether the industry supports the extra time taken by pencil testing. With the tight deadlines in the industry, our personal animation workflow, as aspiring professional animators, needs to become efficient while still producing good results.
Pencil testing is not an explicit part of the prescribed workflow at AM. You're free to do it, sometimes encouraged to do it, but not required to do it, and no special time is allocated for it. Though I should point out that pencil planning IS required, and this is not too different in nature. A fundamental difference between pencil planning and pencil testing is that pencil testing also involves more precise timing and spacing consideration, being done directly within a 2d animation environment. This has its own ups and downs. For example, I did 2d planning in perspective this week, which led to me wrongly applying a wide-angle perspective from my planning to my 3D scene. I also followed my spacing planning a bit too closely, in that I didn't balance the rig correctly within the 3D environment during this first blocking pass. Timing and spacing can be resolved somewhat on either side of this fence, but once you're in the 3D environment, they certainly must be revisited for this "reality" (along with all the other principles of animation, all continuing iteratively through various passes until the eventual point of completion).

To help resolve the conundrum I think I need to back off of pencil testing a bit next time around in my own personal workflow, and see how well I can plan directly within the 3D environment. It may be that timing and spacing are better passed over to the 3D environment at an earlier stage. The pencil work, in turn, can still remain valuable for the broadest strokes of planning.

AM accommodates people from non-visual-artist backgrounds, and this is part of what I'm observing here. A visual artist and a non-visual-artist will likely approach their work differently. But it is interesting to see how both can achieve success in 3D animation. Would there be a noticeable style difference between the two? Probably. But it is interesting, and for many surely encouraging, that this area of creative work really is open for people of many sorts and from many types of backgrounds.