Pre-rendered and Mobile: more relevant for now?

So before I get stuck into this I have a few admissions to make. I have only been developing VR content for about 8 months. Prior to this I had an unopened Oculus DK1 under my desk for almost 2 years (thanks Ben). I also had an Oculus DK2 for over a month before I put it on, and it was actually unboxed by someone else with much more initial VR enthusiasm than me (Neil or Jimmy – can’t remember). So I guess I have been harboring a healthy amount of skepticism about virtual reality technology.

In 2004 (yes, I’m quite old) I was studying for an MSc in Computing and Design, and we learned all about virtual reality, augmented reality, cognitive computing and so on, and it really felt like these technologies were just about to explode into the commercial world. I made many attempts to revisit virtual and augmented reality in the subsequent years, and each time I tried there was literally nothing there to revisit. So when the new crop of VR tech started to appear in 2013, I had become quite the jaded skeptic.

Add to this the fact that I had spent over 8 years producing 3D visualizations, and during this time the effort needed to produce high-quality photorealistic content had reduced dramatically. High-res photorealism has become the expected base level of quality across the architectural industry, and only creative post-production really sets good work apart from the standard. In my opinion, any VR experience that was going to be useful as an architectural visualization tool needed to build upon the high-quality photorealistic baseline that had been set for imagery and animation. For me, the idea of compromising on visual quality to achieve experiential presence and scale is simply not an option in this industry.

While I was honing my unrealistically high expectations for VR, some architects at BVN had taken the DK2 and actually gone to the bother of setting it up. We are a Revit-based practice, so the designers started using a nifty little plugin called Fuzer that promised one-click, instant real-time VR on the DK2. Hurrah! Well, sort of. While it proved kind of useful for some designers to experience their WIP Revit models at scale, the visual quality was nowhere near good enough to use as a communication tool. I’m sure with a bit of extra effort this could be improved, but that’s not really what a designer wants to hear when they are hurtling towards yet another deadline. Lots of people had a try, but the novelty soon wore off, and within a few weeks there were maybe a couple of people still using it (of course it didn’t help that the DK2 was a developer kit headset and we only had 2 for over 200 people). Gary from BVN had a brief fling with Autodesk’s 360 cloud rendering along with a Google Cardboard headset. The overall feedback was more positive, as the visual quality was much better, but the novelty factor eventually won out and the technique fell by the wayside. However, our studio was now aware of and excited about the potential of VR, and I began to pay attention (this also coincided with Neil from BVN starting a campaign to berate me daily for not doing more in VR – it worked).

If I was going to get involved in VR, I saw two distinct options: start getting serious about real-time game engines and try to push the visual quality to an acceptable standard, or have a go at 360 rendering in 3ds Max and V-Ray and try it with Google Cardboard. Architectural visualization artists have been trying to produce decent real-time experiences for years, but the visual quality has never been great. Last year, however, an artist called Koola started showcasing some stunning results using the Unreal Engine. A few months later Ronen Bekerman launched his competition, “The Vineyard Challenge”, utilizing the same engine, and the visual standard was fantastic. I downloaded the Unreal Engine, and within a couple of weeks I was getting decent visual results. However, as soon as I tried to view it on the DK2, everything looked awful. The experience was flickery and nauseating and barely even ran. It was at this point that I actually started researching VR development in general. It didn’t take long to build up a decent picture of the litany of requirements and limitations involved in making enjoyable VR experiences (60 fps minimum – WTF!!!!!).

After I got over my initial rage/disappointment, I decided to give spherical rendering a shot. I knew the functionality existed in V-Ray, but in the late 2000s I had done quite a bit of work with QTVRs, which allowed viewers to look around a 3D space in all directions by dragging their mouse around on the screen. I absolutely hated these, as they made everything look distorted and post-production was very difficult. Nonetheless, I did a quick bit of research on resolution and pumped out an 8K spherical panorama render of an existing scene. Of course, I completely ignored the stereo helper in V-Ray, as I despise 3D movies at the cinema and just assumed it was the same crap.

After downloading way too many apps in my attempt to get my render to work with Cardboard, I discovered that I could add my own content to the Google Street View app – yay – great success! I stuck my iPhone into Gary’s now-dusty Google Cardboard and, despite the giant grease stain from previous excursions, I jammed it onto my face. This was good – this was really good. The detail was incredible. Everything felt a little small, but I got a decent idea of the scale of the project, and after 20 seconds of grinning like an idiot I was done. I showed it to Jimmy, who had wandered over after seeing my reaction and suggested that I try it on the DK2. I had somehow completely missed the fact that the Oculus headset could still be used for pre-rendered VR. I downloaded Kolor Eyes on Jimmy’s recommendation, dragged and dropped the render in, and popped on the DK2. I was blown away! I was experiencing both presence and scale (even though I had no idea of the relevance of these terms at the time), and I was pulling a full-on VR face (that idiotic giant open-mouthed grin that people can’t help pulling when they are enjoying a VR experience) without any idea that I was even pulling it. Watching people pull a cracking VR face can be just as enjoyable as the experience itself. I was hooked!

For the first time I was convinced that this could have a major impact on architectural visualization. Why look at an image when you can be inside it? It was The Matrix meets Mary Poppins (no, I won’t take it back, because that’s how it felt). I wanted to work on this technique, to develop it further and see how far we could push it. I needed buy-in from the principals if I was going to get to spend a decent amount of time on this, so I managed to convince James Grose (BVN’s CEO) to have a look. He was reluctant at first: he had tried VR before and been less than impressed, and like so many people who have tried bad VR, he had already pegged it as a novelty that would soon go away (bad VR is worse than no VR). He loved it – he pulled a VR face, and VR face never lies.

Now I had support for investigating VR further, but the fact that it was tethered to a powerful PC just seemed wrong. It was pre-rendered VR! I had heard about the Samsung Gear VR, but I didn’t have a suitable Samsung phone. I didn’t really want to pay over AUD $1000 for a headset billed as a cheap alternative to the AUD $500 DK2, but I didn’t really see an alternative. I headed off to Autodesk University in Vegas to see what other people were doing in VR, while Jimmy jumped on Gumtree and tried to source a second-hand Gear VR (Australia was completely sold out of Innovator Editions, and the commercial release wouldn’t be available there for months). After a week in Vegas experiencing disappointing architectural VR demos, but listening to some really inspiring talks by VR content producers, I headed back to Sydney pretty pumped by the fact that the industry was really just getting started and we still had an opportunity to be a part of something new. In the meantime, Jimmy had risked life and limb sourcing a Gear VR on the mean streets of Sydney, and we had bitten the bullet and ordered a $1000 Samsung S6 phone that would never make a single phone call.

When I got back to the office, Neil finally convinced me (read: suggested threateningly) to give a stereo render a go. Reluctantly, I doubled my render size and time for what seemed like a ridiculously small offset, which made me angry. I eventually figured out how to load the stereo render into Oculus 360 Photos (seriously, nothing is straightforward in VR) and tried the Gear VR for the first time. Oh my fucking God – this was what I was looking for. This flat rendered image was alive. I finally understood what presence really meant. I really felt like I was there. I was standing in a conceptual design for a lost competition that would never exist, and yet I felt like I was there.

The people in Vegas had told me that pre-rendered VR wasn’t even real VR because you couldn’t walk around, and yet I was feeling all the things that they described as requirements for good VR experiences. Fuck them and their walking around – their experiences looked like shit and this looked bloody real. I walked (read: ran like a giddy goat) over to James Grose with the headset in my hand and asked him to have a try. After about two minutes without saying anything, James took the headset off, but he looked really disappointed. He was disappointed because we had lost the competition and it was a great scheme. He wasn’t concerned with whether the VR experience was good or bad, or whether what we had produced was impressive or not. He had actually stood in the space that he had designed, felt what it would be like to be there, and was disappointed that they didn’t choose our scheme.
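As an aside, the render-size doubling that annoyed me so much is simply how stereo spherical panoramas work: each eye gets its own complete equirectangular image, usually stacked in an over/under layout. Here is a minimal sketch of that arithmetic; the function name and the example resolutions are mine for illustration, not our actual pipeline settings.

```python
# Sketch of why a stereo 360 render doubles the output size.
# Assumes an over/under (top/bottom) stereo layout, the common
# format for stereo equirectangular panoramas; resolutions are
# illustrative examples, not production settings.

def stereo_panorama_size(mono_width, mono_height, layout="over_under"):
    """Return (width, height) of a stereo spherical panorama built
    from one mono equirectangular render of mono_width x mono_height."""
    if layout == "over_under":      # left eye on top, right eye below
        return mono_width, mono_height * 2
    elif layout == "side_by_side":  # left eye left, right eye right
        return mono_width * 2, mono_height
    raise ValueError(f"unknown layout: {layout}")

# An 8K mono equirectangular panorama (2:1 aspect)...
mono = (8192, 4096)
# ...becomes a square 8192 x 8192 image in over/under stereo.
print(stereo_panorama_size(*mono))  # (8192, 8192)
```

Twice the pixels means roughly twice the render time, all to shift the camera by an interocular offset of a few centimetres – which is exactly why it felt like a ridiculous cost until I put the headset on.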

Pre-rendered mobile VR had instantly become the obvious way forward for us – for the time being at least.





One thought on “Pre-rendered and Mobile: more relevant for now?”


  1. I totally agree. Pre-rendered has drawbacks, but it is seriously underrated. I’m experimenting with the Arnold renderer’s VR camera, and whenever I get stuck and google something, all I find is real-time VR.

    I’m also very curious about light fields but can’t wrap my head around them yet. I think developers of light fields should be more open and share more.

