Techniques, VR

Production VR

After a long day trying to get the Oculus working, I have reluctantly come to the conclusion that we simply don't have the graphical power to run it at the university, despite my numerous attempts to don the headset and will it to life through the mystical power of imagination and pleading.

It’s very easy to find numerous articles online about the creation of virtual reality animations and films, where the finished film is consumed through the headset. I am, however, struggling to find much information about the same hardware being used for production purposes. There are thousands of very short pieces of animation (5-20 seconds) made with Quill, and rumours of slightly bigger productions more akin to short films. There is the possibility of using the software to create and model characters within three-dimensional space, letting the creator fully observe the depth and proportion of their creations while immersed among them. Directors could walk through these worlds, interact with the props and characters, analyse the scenes and shot choices, and better judge the movement and actions of the characters through direct interaction. The ability is there to mould virtual characters by direct means, to influence the world through gesture rather than mouse and keyboard. Yet I struggle to find a good example in which I can scrutinise the process and assess its viability.

Fujita and others like him are showing great examples of short tests with wonderful results, merging stop-frame animation with 3D CG animation in packages such as Maya. They are even transferring their work from Quill to Maya, but without testing the methods myself I cannot see the limitations and shortcomings of these techniques. Perhaps there are heavy limits on the length and scale of the scene that can be created, which restricts animators to these shorter clips. In which case, would it be possible to create a longer work from multiple shorter clips, a segmented whole? If the results of the modelling are not quite up to industry standard, can Quill be used as a base to start from and continually refined? Or is this just counter-productive in creating the final piece, a forced and unnecessary intervention in the standard pipeline?
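
Since I haven’t been able to test the transfer myself, here is a minimal sketch of how I would expect that Quill-to-Maya step to look, assuming Quill’s Alembic (.abc) export and Maya’s bundled AbcImport plug-in; the file path is hypothetical and would need to point at a real export.

```python
# A minimal sketch of importing a Quill Alembic export into Maya.
# Run inside Maya's Script Editor. The path below is a placeholder.
import maya.cmds as cmds

# Make sure the Alembic importer plug-in is loaded before calling it.
if not cmds.pluginInfo("AbcImport", query=True, loaded=True):
    cmds.loadPlugin("AbcImport")

# "import" merges the Quill geometry into the current scene
# rather than replacing whatever is already open.
cmds.AbcImport("C:/quill_exports/market_scene.abc", mode="import")
```

From there the imported strokes arrive as ordinary geometry, so in principle the work could be re-lit, re-dressed or refined with Maya’s standard tools, which is exactly the part of the pipeline I want to stress-test.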

Yet I have seen works with entirely unique stylisations born from these processes, creative and innovative responses to the limitations and triumphs of what this technology offers. Carlos Felipe Leon, an artist working for Pixar, created this beautiful yet slightly haunting image of a woman in a market.


The longer, more narrative-driven animations created through this process so far have also been designed to be viewed in VR, either as a solitary or a shared user experience. Whilst these projects have found success, I can’t shake the feeling that had they been designed for the screen, the results would have been better directed, more resolved and much more successful.

Dear Angelica is a series of painted memories unfolding before our eyes. It premiered at the Sundance Film Festival in 2017, was produced by the Oculus Story Studio entirely in Quill and Houdini, and played in the Unreal Engine in real-time VR. With an emotionally gripping story and a unique aesthetic, each character, prop and image was realised by a single artist.