2011-11-15

Visions of fully integrated 3D GUI; input devices

In my previous posting, I said:

“In many ways I think pencil testing will be more reasonably bypassed when next generation 3D input devices and 3D displays become common. The task of working in 3D still remains a bit of an abstraction as things are currently done, and a certain amount of planning is a matter of coping with this.”

I thought I’d share a few thoughts on this to clarify.

A more sculptural approach to posing in 3D should become natural once more direct 3D interfacing becomes common.

Having spent a certain amount of time as a clay sculptor in the past, I can attest that the existing 3D posing workflows, built on SRT (scale, rotate, translate) controls, are still an abstraction removed from intuition and physical reality. The mouse is a clumsy old beast, and the stylus is an improvement, but both are still primarily 2D input devices. We’re in a Graphical User Interface era that has resolved a lot of 2D display and input issues, but which has yet to fully resolve 3D display and input. The technologies are there, but few of them have yet gained common usage.
This 2D GUI era is still far better than the number crunching Brad Bird had to do in his earliest pre-GUI days at Pixar (what might arguably be deemed the initial “1D” era of computing). But all of this is still less than ideal, and it seems to me we remain only halfway to where we need to go. For 3D work, the GUI is badly in need of next-generation methods that still more closely emulate 3D reality. I am impatient for the day when I will be able to “physically” grab a rig and manipulate it in a naturally three-dimensional manner. Aren’t we “supposed” to have already achieved these levels of technological development? Moore’s law may be impressive, but technological progress is still a slow beast when measured against visionary perspectives.
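
To make the SRT abstraction concrete, here is a rough sketch of what posing through those channels amounts to in a typical DCC package’s scripting layer. The joint names and the dictionary standing in for a scene are hypothetical; the point is simply that you drive named numeric channels one value at a time rather than grasping the limb itself.

    # A hypothetical stand-in for a scene: a pose is just named SRT channels,
    # each one keyed as a tuple of numbers rather than grasped as a limb.
    pose = {
        "L_shoulder.rotateXYZ": (12.0, -35.5, 4.0),   # degrees, Euler order XYZ
        "L_elbow.rotateXYZ":    (0.0,   62.0, 0.0),
        "L_wrist.translateXYZ": (0.0,    0.1, 0.02),  # scene units
    }
    keyframes = {48: pose}  # frame number -> channel values at that frame
    print(keyframes[48]["L_elbow.rotateXYZ"])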

I'm glad to say that there are already early signs of this “fully integrated 3D GUI” future.

In terms of input devices, here are two examples of work being done to address these issues:
Kinect for motion capture (Softimage at Siggraph 2012), which is potentially useful for pose capture as well, and the Japanese QUMA humanoid posing input device. Regarding QUMA, here's a short review, and a company link. The Softimage example may be more relevant to motion-capture work, while QUMA perhaps shows greater potential for general character-animation work. Either way, I'm happy to see this sort of innovation taking place, and leaders in innovation are worth celebrating.

Upon reflection, humanoid posing devices easily run into some probable snags. Proportions won't always match between device and rig; the device is likely of no use for IK controls or other non-viewport inputs; non-humanoid animation would be excluded; and even humanoid rigs would have to stay relatively simple, limited to the input parameters of the device.
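
To ground the proportions snag: a direct transfer of joint positions breaks as soon as the device's limb lengths differ from the rig's, which is why retargeting schemes typically copy joint rotations instead. Below is a minimal sketch of that idea; the joint names are hypothetical, and a real implementation would also have to account for differing rest poses.

    # Minimal rotation-based retargeting: bone lengths never enter into it,
    # so mismatched proportions don't distort the transferred pose.
    # Joint names are hypothetical; a real map comes from the device's SDK.
    DEVICE_TO_RIG = {
        "devShoulderL": "L_shoulder",
        "devElbowL":    "L_elbow",
        "devKneeR":     "R_knee",
    }

    def retarget(device_rotations, rig_pose):
        """Copy (rx, ry, rz) Euler rotations, in degrees, from device to rig.
        Simplification: assumes both skeletons share the same rest pose."""
        for dev_joint, rotation in device_rotations.items():
            rig_joint = DEVICE_TO_RIG.get(dev_joint)
            if rig_joint is not None:   # joints without a mapping are skipped
                rig_pose[rig_joint] = rotation

    rig_pose = {"L_shoulder": (0, 0, 0), "L_elbow": (0, 0, 0), "R_knee": (0, 0, 0)}
    retarget({"devElbowL": (0.0, 62.0, 0.0)}, rig_pose)
    print(rig_pose["L_elbow"])  # -> (0.0, 62.0, 0.0)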

I see a more ideal solution in Kinect-like motion capture of just the animator's hands. The captured hands would be virtualized into the 3D environment, with software then permitting direct interaction between those virtualized hands and the rig and/or its controllers. This would allow interaction with any sort of rig in a 3D environment, and could also permit tactile sculpting for polygonal modeling or sculpting applications akin to ZBrush and Mudbox.
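
To make the idea a little more tangible, the core interaction might reduce to a per-frame update in which a pinch gesture grabs the nearest rig controller and drags it. Everything below is a hypothetical sketch: in practice the hand data would come from a depth-camera SDK and the controllers from whatever rig the scene holds.

    import math
    from dataclasses import dataclass

    @dataclass
    class Vec3:
        x: float
        y: float
        z: float
        def dist(self, other):
            return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))

    @dataclass
    class HandFrame:         # one frame of (hypothetical) hand-tracking data
        pinch_point: Vec3    # midpoint of the thumb tip and index tip
        pinching: bool       # True while the fingers are closed

    @dataclass
    class Controller:        # a rig controller exposed in the viewport
        name: str
        position: Vec3

    class GrabInteraction:
        """Pinch to grab the nearest controller, move it, open to release."""
        def __init__(self, controllers, grab_radius=0.05):  # radius in scene units
            self.controllers = controllers
            self.grab_radius = grab_radius
            self.held = None

        def update(self, hand):
            if not hand.pinching:
                self.held = None                       # an open hand releases
                return
            if self.held is None:
                nearest = min(self.controllers,
                              key=lambda c: c.position.dist(hand.pinch_point))
                if nearest.position.dist(hand.pinch_point) <= self.grab_radius:
                    self.held = nearest                # grab only within reach
            if self.held is not None:
                self.held.position = hand.pinch_point  # drag with the hand

    rig = [Controller("L_wrist_ctl", Vec3(0.0, 1.0, 0.2))]
    grab = GrabInteraction(rig)
    grab.update(HandFrame(Vec3(0.01, 1.0, 0.2), pinching=True))
    print(grab.held.name)  # -> L_wrist_ctl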

Integrate this with any of the perceptually-3D viewport setups that already exist, and you close the creative artist's feedback loop. I expect something of this nature could speed up the animation workflow severalfold; I'm guessing work could be done at least twice as fast, maybe up to about five times faster, and with better sculptural quality. This is all the more relevant for the more truly sculptural animation needs of the gaming world.

A tactile-feedback glove is another, more commonly considered option. That technology has existed for at least two decades and has still failed to go mainstream. Here again, the Kinect-like approach has an advantage over this older idea: it would be driven primarily by software rather than hardware, making it friendlier to innovation and development, with faster feedback and iteration cycles.

Multitouch is an improvement on the mouse, to be sure. But it is still fundamentally 2D in nature.

There is also the maturing context of 3D displays, many of which are already on the market. They are still expensive, and I'm sure there are issues with drivers and such even once one is obtained. The two-view (stereoscopic) approach is certainly achievable now and represents the bulk of what is currently referred to as “3D display.” Truly volumetric 3D displays remain a long way off, but technologies are slowly being developed for them as well.
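
For the curious, the two-view approach boils down to rendering the scene twice from horizontally offset eye positions, each with an asymmetric (“off-axis”) frustum so that both views share the same screen plane. Here is a minimal sketch of the frustum math, with illustrative numbers assumed for the screen size and eye separation:

    def off_axis_frustum(eye_offset, screen_w, screen_h, screen_dist, near, far):
        """Near-plane frustum bounds for one eye, all distances in the same units.
        eye_offset: -half the eye separation for the left eye, +half for the right.
        """
        scale = near / screen_dist   # project the screen edges onto the near plane
        return ((-screen_w / 2 - eye_offset) * scale,   # left
                ( screen_w / 2 - eye_offset) * scale,   # right
                (-screen_h / 2) * scale,                # bottom
                ( screen_h / 2) * scale,                # top
                near, far)

    EYE_SEP = 0.064  # a typical interpupillary distance, in metres
    left_eye  = off_axis_frustum(-EYE_SEP / 2, 0.52, 0.32, 0.70, 0.1, 100.0)
    right_eye = off_axis_frustum(+EYE_SEP / 2, 0.52, 0.32, 0.70, 0.1, 100.0)
    # Each tuple maps onto a glFrustum-style projection; each eye's camera is
    # also translated by its eye_offset along the view-space x axis.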

But it’s great to see that side of things gradually emerging.

It took the computer industry a long time to make the (2D) GUI the norm among users, and it's still taking a long time for these next steps to be achieved. I look forward to the day when computer interfacing is as truly intuitive and straightforward in 3D as it has become in 2D, and I think a lot more discussion and effort remain ahead in raising up a fully integrated 3D GUI.
