So it’s been over a year since I last wrote on this blog, and that’s because I started experimenting with real-time VR shortly after the last post. Real-time is hard! Much harder than I had expected, and a large part of the last year has been spent getting up to speed with the technology.
If, however, you are willing to put in the effort, the end result of the steep learning curve may just open your eyes to the true potential of the technology.
Previously on this blog I discussed the many reasons why mobile VR was more relevant to the architectural visualisation field, and many of those reasons still stand today. However, after a short period experimenting with real-time VR we switched our focus entirely to this approach and we have no intention of going back. There are a few key reasons for this pivot, which unfolded over the course of the last year, so let’s go back to where we left off.
When we finished development on our Unity mobile platform we were pretty happy with ourselves. Our 3D artists could generate the rendered 360 stereo environments with a few extra clicks on top of their normal Vray workflow without ever needing to touch a game engine. When our clients put on the Samsung GearVR for the first time they were able to navigate through the pre-rendered journey without any external instructions and the feedback was that they had a comfortable and engaging experience. We added multiplayer functionality that included voice chat and a visual mechanism for knowing what the virtual tour leader was looking at so that they could personally guide clients through a space.
Overall the platform was a great success for what we had set out to achieve, and it gave viewers a much better impression of our future spaces than an image or an animation.
With the mobile platform development completed it seemed like a good time to give real-time VR a push and see exactly what we could get out of it. I started testing Unity’s real-time capabilities on a greenhouse project that we had developed for a competition entry (we lost). A couple of significant issues with the pre-rendered mobile VR approach were immediately reinforced. Scale and presence, the two major selling points of VR in architecture in general, are fundamentally flawed with this approach.
I already knew that the representation of scale was inaccurate (unless you happen to be the exact height of the camera that the scene was rendered from) but it didn’t seem like a huge issue as we were producing VR experiences for selling the vision of the project and not necessarily as a design tool.
Presence, on the other hand, presented a much bigger problem. With pre-rendered VR environments, when you move your head in any direction the environment comes with you. This happens because pre-rendered content supports only 3 degrees of freedom (the rotational head movements of pitch, yaw, and roll), so any translational movement of your head has no effect on your viewpoint.
It’s a strange and uncomfortable sensation, but most people will only attempt to move beyond the supported 3 movement types once, and they quickly learn to stick to just rotating around to view the environment from the fixed viewport. It had always seemed like a necessary tradeoff between having a higher visual quality and fewer degrees of freedom to move around. However, after just a few weeks of testing the real-time approach with the HTC Vive, the mobile experiences started feeling more and more uncomfortable. My brain quickly got used to having 6 degrees of freedom and really didn’t cope well when it attempted to return to just 3.
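For readers who haven’t felt the difference, the 3DOF-vs-6DOF distinction can be sketched in a few lines of code. This is a purely illustrative toy (the names `HeadPose`, `view_params_3dof`, and `view_params_6dof` are my own inventions, not any real SDK’s API): a pre-rendered 360 viewer can only honour the rotational part of a head pose, because the environment was captured from a single fixed point, whereas a real-time renderer re-positions the camera every frame.

```python
# Hypothetical sketch: why pre-rendered 360 content is limited to 3
# degrees of freedom while real-time rendering supports all 6.
from dataclasses import dataclass

@dataclass
class HeadPose:
    yaw: float        # rotation, degrees
    pitch: float
    roll: float
    x: float = 0.0    # translation, metres
    y: float = 0.0
    z: float = 0.0

def view_params_3dof(pose: HeadPose):
    # Pre-rendered 360 stereo: only rotation can be honoured.
    # The translation components are silently dropped, so when you
    # lean sideways the environment "comes with you".
    return (pose.yaw, pose.pitch, pose.roll)

def view_params_6dof(pose: HeadPose):
    # Real-time rendering: the virtual camera tracks both rotation
    # and translation, so leaning actually moves your viewpoint.
    return (pose.yaw, pose.pitch, pose.roll, pose.x, pose.y, pose.z)

# Turn your head 90 degrees and lean 30 cm to the side:
pose = HeadPose(yaw=90.0, pitch=0.0, roll=0.0, x=0.3)
print(view_params_3dof(pose))  # the 0.3 m lean simply vanishes
print(view_params_6dof(pose))  # the lean is reflected in the view
```

The discarded translation is exactly what the brain objects to: it commands a movement, the vestibular system reports it, but the image does not change accordingly.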
At this point the real-time experience had only a few basic materials and pretty default lighting, but it already felt more natural and comfortable. The pre-rendered experiences now felt like viewing tools, and any sense of being present in those spaces that I had felt previously was gone.
This increased level of immersion in a real-time experience was further enhanced by having the HTC Vive’s tracked controllers. This is arguably as important for presence as the ability to look around at scale. My brain very quickly got used to the sensation of having a functional use for my hands in VR and returning to a pre-rendered experience came with a sense of loss that felt frustrating and a little bit sad.
The case for a real-time approach became even stronger when Shanny showed me the work she had been doing in UE4 (Shanny had already been working on real-time for a year). While Unity had been a fantastic tool for me for developing a mobile platform, and for testing out real-time VR concepts quickly (mainly thanks to the free VRTK plugin), the visual end result was lacking the richness of the pre-rendered content. The materials and lighting in UE4, however, provided a much superior visual quality that was more in line with the visualisation quality that our clients have become accustomed to.
So with an opportunity to utilise a VR approach that gave us real presence, real scale, and excellent visuals, there was surely no decent reason to go back to pre-rendered mobile experiences. Well no, but the main advantage of the mobile approach was and still is very relevant – it’s mobile! Clients can throw it in a bag and take it to a presentation. The bespoke mobile platform that we built also allows clients to run the experience themselves, provided they can turn on a mobile phone and click on an app.
The pre-rendered nature of this approach however means that it will only ever be a stepping stone into a real-time VR scenario. It’s a gateway drug that makes clients feel like they are using cutting edge technology, until you allow them to experience what real VR feels like. The illusion that they were somehow ahead of the curve by using VR on their projects quickly disappears, and is replaced by confusion and disillusionment about the cost, time constraints, and physical requirements of utilising the fully fledged version of the tech.
Real-time development is expensive and time consuming right now because, as I mentioned at the beginning, it’s difficult. For anyone coming from an architectural visualisation background, it is a completely different approach and workflow, and it can really feel like you are starting from scratch. There is a litany of new concepts that need to be learned, such as levels of detail (LODs), lightmapping, UV unwrapping, model optimisation, scene optimisation and many more gaming methodologies that I had never heard of. Add to this the fact that the approach to materials and lighting is completely different to that of ray-traced offline render engines. But at least these real-time concepts and methodologies are well established in the game industry and can be learned if you are willing to put in the time and effort.
Developing specifically for VR experiences, however, churns up a whole heap of new problems. A VR headset is not just a viewing device for a real-time experience. It is an entirely new way of experiencing a 3D environment that, when done correctly, allows your brain to trick you into feeling like you are in another world. This becomes immediately obvious as soon as you move beyond just looking around in VR, and start to interact with the environment. Things that you thought would work perfectly in a virtual environment suddenly feel unintuitive and frustrating. Interaction methods that are well established within the gaming and user interface/experience (UI/UX) industries, such as moving around, picking things up, reading text, using menus, and activating functionality, simply don’t work in VR. The design of VR experiences, beyond the environments that they occur in, is a brand new field and is just starting to take off. If you are in Sydney I would highly recommend the VR Design course at Academy XI.
The hardest thing for visualisation artists to accept is that they cannot just create a beautiful photo-realistic experience and expect people to enjoy it. In VR, an experience that has a bad environment, but a good user interface and intuitive interaction, is still a good experience. A beautiful environment that has poorly executed user interface elements will provide a bad user experience, and people will not spend enough time in your environment to appreciate the beauty that you spent all your time creating.
So once you get past all these development hurdles and create an experience that people actually like being immersed in, you still have the issue of actually presenting the VR to your client’s audience. If you have a local client and they are willing to come to your office to view the experience then it’s not a big deal. However, even if they are happy to do this for reviewing your work-in-progress, it’s quite unlikely that they will want to present the VR to their audience in your office. At some point you are going to have to demo off site, which means lugging a powerful computer along with all the other equipment required for the HTC Vive.
When you have actually set all of this up you can’t exactly walk away and let your client take things from here. Creating a good user experience requires a physical on-boarding process that generally needs to be carried out by someone practiced in the process and familiar with the VR experience and its potential issues (things always go wrong during demos). While this can work fine if there is an event where you can deliver the VR, it’s not ideal for a client who needs to present the experience to various audiences at different locations at specific times.
On top of this, over the last two years VR has been over-hyped and oversold, and everyone bought into it initially. Clients wanted VR so they could be seen to be using VR, and the experience itself was almost irrelevant. The reality is that the actual content was generally quite crap. Now that the hype has died down, people only remember the bad content and the lack of real value that it provided. Because of this a lot of people are uninterested in trying any more VR experiences. We are now having to convince these disillusioned people to have another crack at the tech and give us an opportunity to demonstrate VR that provides real value to their desired outcomes.
Now, however, after a hard year of fighting our way through these transitory difficulties, we finally feel like we have managed to develop real-time VR prototypes that provide experiences of actual value. The reactions of our clients and their audiences to date have been extremely positive, and they are finally starting to consider substantial VR experiences as part of their communication requirements. This response has reinforced our position that real-time VR is the future of architectural visualisation and that all the time and effort is worth it.
We are now investing all of the energy of our small team into developing intuitive, engaging experiences that provide enough real value to our clients to outweigh the difficulties inherent in using the technology. For us this feels like the genuine start of the VR visualisation industry, and we believe there is no going back from here.