VR and Architectural Visualisation: The Big Issues! (Part 1)

Using Virtual Reality (VR) for architectural visualisation (arch vis) seems like a complete no-brainer. Rather than looking at an image or a short movie, you have the opportunity to be completely immersed in an unbuilt space and experience it for yourself at real-life scale. Well, that’s certainly what we believed when we started working on our first large-scale VR prototype experience early last year.

Over the course of the following few months we had all of our preconceived ideas about the value of VR put to the test, as we looped through a seemingly endless develop/user-test cycle. Every “no-brainer” use case for this revolutionary new tech seemed to fall flat on its face, and we struggled even to engage people in our “immersive” experience. As the frustration levels grew we began to question those preconceived ideas and re-evaluate what the real value of VR might be within the arch vis industry. We desperately needed to find out what the fundamental issues were that were creating an invisible wall between our intended users and their engagement in our experience.

Problem 1: What is the value of VR?

So before we can look at this question meaningfully, we first need to establish the difference between arch vis as an industry and what we call design visualisation. Design vis is how architects and designers communicate their work-in-progress designs to their teams, their clients, and themselves. It is a fundamental part of the design process that enables designs to evolve and decisions to get made. For design vis, VR and real-time rendering have actually delivered on their promise. Architects, designers, and their clients can use tools like Enscape to experience their designs in VR with little to no prior knowledge of the technology. It’s literally a one-click solution to instant VR, and it is already having a genuinely transformative impact on the design process. We certainly don’t need to question the value of VR within design vis, as the results in terms of speed of decision making and increased design quality are clear to see.

For the arch vis industry, however, design communication is not really the primary concern. While it’s certainly inherent in all good vis output, the primary purpose of arch vis is to sell the vision of a project. Whether it’s a still image capturing a moment in time, or an animation bringing the future space to life, arch vis output has a quantifiable value beyond design communication. Architects, developers, and everyone in between are willing to pay a decent chunk of cash for these products because they need them to win competitions, sell apartments, raise capital, create public awareness, or simply impress a client. If VR is going to be widely adopted within arch vis, it needs to provide a similar or superior value proposition to these established outputs, or nobody is going to be willing to pay for it.

[Image: Traditional arch vis content]

Finding value beyond architectural communication and traditional arch vis became the primary goal of our VR prototype development. However, our starting point was heavily based on our preconceived notion that to create a valuable VR experience we had to achieve presence. ‘Presence’ is the term used within the VR industry to describe the feeling of being truly immersed in a VR experience. In other words, you actually feel like you are physically in that space. We had heard from industry experts that if you can create this feeling of presence, you will have a great VR experience.

The first thing we did to achieve presence was to create a rich, photorealistic environment. What better way to achieve a sense of presence than an accurate representation of the unbuilt space?

 

We used Unreal Engine 4 for this as it gave us the best visual quality we could get from the available game engines. We felt that the visual quality alone would give us a clear advantage over the current one-click design communication VR options.

Secondly, we added spatial audio, which really deepens the sense of immersion in the virtual space. Audio is hugely important in a VR experience, and it would be impossible to achieve a sense of presence without surrounding your viewers in ambient and active 3D sounds.

[Image: Spatial audio in UE4]
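For anyone curious what that looks like under the hood, here is a minimal sketch (not our production code, and with made-up class and property names) of a spatialised ambient source in UE4 C++: a small actor that plays a looping sound cue through an attenuation asset, so the sound falls off and pans naturally as you move around it.

```cpp
// AmbientSoundSource.h -- hypothetical actor: a looping, spatialised ambient sound
// (birdsong, room tone, a hum from the plant room) placed somewhere in the level.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/AudioComponent.h"
#include "Sound/SoundBase.h"
#include "Sound/SoundAttenuation.h"
#include "AmbientSoundSource.generated.h"

UCLASS()
class AAmbientSoundSource : public AActor
{
    GENERATED_BODY()

public:
    AAmbientSoundSource()
    {
        AudioComponent = CreateDefaultSubobject<UAudioComponent>(TEXT("AmbientAudio"));
        RootComponent = AudioComponent;
        AudioComponent->bAutoActivate = false; // start it ourselves once configured
    }

    // Looping ambience, assigned per instance in the editor.
    UPROPERTY(EditAnywhere, Category = "Audio")
    USoundBase* AmbientLoop = nullptr;

    // Shared attenuation/spatialisation settings asset.
    UPROPERTY(EditAnywhere, Category = "Audio")
    USoundAttenuation* Attenuation = nullptr;

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        if (AmbientLoop)
        {
            AudioComponent->SetSound(AmbientLoop);
            // The attenuation asset gives the sound 3D falloff and spatialisation,
            // so it genuinely reads as coming from this spot in the space.
            AudioComponent->AttenuationSettings = Attenuation;
            AudioComponent->Play();
        }
    }

private:
    UPROPERTY()
    UAudioComponent* AudioComponent = nullptr;
};
```

Scattering a handful of these around the space, plus one-shot “active” sounds triggered by events, goes a long way towards making the environment feel inhabited rather than rendered.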

Thirdly, we added the ability to teleport around the space. Teleportation was our preferred navigation method as it was the least likely to make our test users feel sick, and it’s a convenient and efficient way to get from A to B. We often hear people comment that they would like to be able to “walk” through the space, but to be honest, if those same people had the ability to teleport around in real life I can guarantee you they would use it.

[Image: Teleporting in UE4]
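The core of a teleport mechanic is surprisingly small. The sketch below is a simplified illustration with made-up class names, skipping the arc projectile, screen fade, and navmesh checks a real implementation would want: trace a ray from the motion controller and move the pawn to the hit point if the surface is roughly horizontal.

```cpp
// VRTourPawn.h -- hypothetical VR pawn; only the teleport-relevant part is shown.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "MotionControllerComponent.h"
#include "VRTourPawn.generated.h"

UCLASS()
class AVRTourPawn : public APawn
{
    GENERATED_BODY()

public:
    // Bound to the trigger release on the motion controller.
    void TryTeleport(UMotionControllerComponent* Controller);
};

// VRTourPawn.cpp
void AVRTourPawn::TryTeleport(UMotionControllerComponent* Controller)
{
    const FVector Start = Controller->GetComponentLocation();
    const FVector End = Start + Controller->GetForwardVector() * 1000.f; // ~10 m reach

    FHitResult Hit;
    FCollisionQueryParams Params(FName(TEXT("Teleport")), /*bTraceComplex=*/false, this);

    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility, Params))
    {
        // Only accept roughly horizontal surfaces so users can't teleport onto walls.
        if (Hit.ImpactNormal.Z > 0.7f)
        {
            FVector Destination = Hit.ImpactPoint;
            Destination.Z += GetDefaultHalfHeight(); // keep the HMD at eye level
            SetActorLocation(Destination);
        }
    }
}
```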

After implementing these three things, we felt we had a pretty solid way of creating a real sense of presence in our experience. However, when we put some people in there, we got some unexpected results.

[Video]

As you can see from the video, when users started the experience they would teleport from one end of the space to the other, and then take off the headset. Having spent almost a month developing the first iteration of our prototype, this was not exactly the feeling of “presence” we were hoping for. Clients would come out of the headset after literally 30 seconds and exclaim “that was great!”. We were naturally quite confused, and pretty dejected at the response to the experience at this point. We were really happy with what we had developed, and as far as we were concerned we were doing everything we were supposed to do to create good VR. We even had a really good onboarding system at this point which taught users how to teleport, and most people got it straight away. After a lot more testing and a hell of a lot of frustration, we finally stumbled upon the reason for the lack of engagement from our users.

Problem 2: What do I do?

During our demos we noticed that as soon as we handed the Vive controllers to our users, they would generally all ask the same question – ‘what do I do?’. This seemingly straightforward and reasonable question quickly became the root of all of our engagement problems. It turns out that having controllers in our hands automatically translates in our brains to a need to “do” something. Naturally, if the only thing you know how to do in VR is teleport, then teleporting becomes your mission. When someone feels like they have teleported sufficiently within the available space, they suffer a moment of awkward panic when they don’t know what they are supposed to do next. The only thing left to do after this is escape the virtual world.

[Image: Interactive book about the history of the library]

So if the overriding issue here is a lack of user engagement in the experience, and the cause of the problem is that users run out of things to do, then the solution should naturally be to give our users more things to do. And we did. We filled our experience with interactive objects that gave users a sense of doing something. As our prototype was a library, we added lots of interactive books, and in the outdoor community space we added some deck chairs that could be picked up and placed somewhere else.

In our next round of testing, the time spent in the experience went up dramatically, by up to a few minutes. We had also added dynamic, reactive elements such as falling leaves triggered by the user’s gaze, and butterflies that flew away when the user collided with them.
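The gaze-triggered elements are simpler than they might sound: a per-frame line trace from the HMD camera tells you what the user is looking at, and anything hit can react. The sketch below is an illustrative, made-up example (reusing the hypothetical AVRTourPawn from earlier, where Camera is the pawn’s UCameraComponent tracking the headset) of a leaves emitter that activates the first time it is gazed at; the butterflies worked on the same idea, just triggered by overlap instead of gaze.

```cpp
// GazeReactiveLeaves.h -- hypothetical actor: the first time the user's gaze ray
// hits it, a falling-leaves particle effect starts.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Particles/ParticleSystemComponent.h"
#include "GazeReactiveLeaves.generated.h"

UCLASS()
class AGazeReactiveLeaves : public AActor
{
    GENERATED_BODY()

public:
    AGazeReactiveLeaves()
    {
        GazeTarget = CreateDefaultSubobject<UBoxComponent>(TEXT("GazeTarget"));
        RootComponent = GazeTarget;

        Leaves = CreateDefaultSubobject<UParticleSystemComponent>(TEXT("Leaves"));
        Leaves->SetupAttachment(RootComponent);
        Leaves->bAutoActivate = false; // dormant until the user looks this way
    }

    // Called by the pawn when its gaze trace hits this actor.
    void OnGazed()
    {
        if (!bTriggered)
        {
            bTriggered = true;
            Leaves->Activate();
        }
    }

private:
    UPROPERTY(VisibleAnywhere) UBoxComponent* GazeTarget = nullptr;
    UPROPERTY(VisibleAnywhere) UParticleSystemComponent* Leaves = nullptr;
    bool bTriggered = false;
};

// In VRTourPawn.cpp: a per-frame trace from the HMD camera finds whatever the
// user is currently looking at and notifies it.
void AVRTourPawn::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    const FVector Start = Camera->GetComponentLocation();
    const FVector End = Start + Camera->GetForwardVector() * 2000.f; // 20 m gaze ray

    FHitResult Hit;
    if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
    {
        if (AGazeReactiveLeaves* LookedAt = Cast<AGazeReactiveLeaves>(Hit.GetActor()))
        {
            LookedAt->OnGazed();
        }
    }
}
```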

 

Giving users more things to do did exactly what we wanted it to do – people became more engaged in the experience and spent more time immersed in our space. However, the user surveys we were conducting after each test turned up some further unexpected results.

Problem 3: What building?

One of our key survey questions asked users what they remembered most from the experience. The vast majority of answers related to the richness of the environment or the interactivity (a lot of comments about the butterflies), but very few mentioned anything about the building itself or its design. Considering that the whole point of using VR as an arch vis medium is to sell the vision of the project, this was not an ideal finding. As happy as we were that users were enjoying the experience and spending more time in it, if we couldn’t get them to notice the building and appreciate the design itself, then there was no real value in developing the VR in the first place. And there would certainly not be enough benefit for our clients to pay for it.

This one-step-forward, two-steps-back routine was getting a bit old at this point, and while we were not going to give up on VR just yet, the notion that it was the no-brainer future of arch vis had well and truly vanished down the rabbit hole. During these challenging times we observed the architects and designers using VR for design vis, and they didn’t appear to suffer from our “what building?” situation. This made sense, as the designers know what they are going into VR to do. They usually already know how to use the controllers, and they are in VR to look at a specific design solution, review it, and get back out. They tend not to spend much time immersed, but they get exactly what they need out of it (they also tend to make themselves sick by insisting on using the Xbox controller to walk around, but that’s a whole other issue).

Likewise, when we put the designers into our prototype experience to review the materials etc., we tended not to give them the controllers, as they were in there to check out a specific area and didn’t need to move around. Without the controllers the “what do I do?” question disappeared, as there was nothing else to do except look around. We knew we needed to somehow give users a task in the experience that required them to look around at the design. If we could remove the ability to move around while they were doing this task, then maybe we could start to control how people experienced the building itself.

While trying to find a solution to this quandary, I went on my first architectural tour. Considering that I had been working in an architectural studio for almost seven years, this was long overdue, but really I just wanted to get away from VR for a morning. From the moment the head designer started introducing the audience to the building, I started feeling very dumb.

The second the designer started talking, the crowd stopped what they were doing, looked in the designer’s direction, and then followed his gaze to the design features he was describing. When he finished talking, they would follow him to the next area of interest and repeat the process. They were completely engaged in what the designer was saying and genuinely interested in looking at and learning about the design he was describing.

[Image: A real building tour]

What we had been doing in VR was the real-life equivalent of dumping someone at the front door of a brand new building and telling them to go explore on their own.

The value of the design in architecture generally comes from gaining an understanding of why a certain thing was designed in a certain way. Aside from pure aesthetics, most people are not trained to interpret and understand architectural design on their own.

So, we pretty much lifted the format of the real-life tour and added it to our prototype. We sat with the head designer and the design team, established the key areas of interest, and then recorded the head designer talking about the design features of these areas. We set a predetermined path through the experience and placed points of interest along the way. As soon as the user reaches one of these points, the ability to navigate and interact with objects is removed.

[Image: A point of interest area]

Then we played the voiceover recording and watched as our VR explorers started to follow the voice and look around the experience at the design features that were being described.
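Mechanically, a point of interest doesn’t need to be much more than a trigger volume, a voiceover audio component, and a couple of calls into the pawn. The sketch below is illustrative only, assuming hypothetical SetNavigationEnabled and SetInteractionEnabled helpers on the same made-up AVRTourPawn as before: lock navigation and interaction when the visitor arrives, play the recorded narration, and hand control back when the audio finishes.

```cpp
// PointOfInterest.h -- hypothetical tour stop: when the visitor arrives, navigation
// and object interaction are locked, the designer's recorded voiceover plays, and
// control is handed back once the narration finishes.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SphereComponent.h"
#include "Components/AudioComponent.h"
#include "VRTourPawn.h" // hypothetical pawn exposing SetNavigationEnabled / SetInteractionEnabled
#include "PointOfInterest.generated.h"

UCLASS()
class APointOfInterest : public AActor
{
    GENERATED_BODY()

public:
    APointOfInterest()
    {
        ArrivalZone = CreateDefaultSubobject<USphereComponent>(TEXT("ArrivalZone"));
        RootComponent = ArrivalZone;

        Voiceover = CreateDefaultSubobject<UAudioComponent>(TEXT("Voiceover"));
        Voiceover->SetupAttachment(RootComponent);
        Voiceover->bAutoActivate = false; // plays only when the visitor arrives
    }

protected:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        ArrivalZone->OnComponentBeginOverlap.AddDynamic(this, &APointOfInterest::OnVisitorArrived);
        Voiceover->OnAudioFinished.AddDynamic(this, &APointOfInterest::OnVoiceoverFinished);
    }

    UFUNCTION()
    void OnVisitorArrived(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                          UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                          bool bFromSweep, const FHitResult& SweepResult)
    {
        if (AVRTourPawn* Pawn = Cast<AVRTourPawn>(OtherActor))
        {
            Visitor = Pawn;
            Pawn->SetNavigationEnabled(false);  // no teleporting while the designer talks
            Pawn->SetInteractionEnabled(false); // looking and listening is the thing "to do"
            Voiceover->Play();                  // the recorded narration for this spot
        }
    }

    UFUNCTION()
    void OnVoiceoverFinished()
    {
        if (Visitor.IsValid())
        {
            // Narration over: hand control back so the visitor can head to the next stop.
            Visitor->SetNavigationEnabled(true);
            Visitor->SetInteractionEnabled(true);
        }
    }

    UPROPERTY(VisibleAnywhere) USphereComponent* ArrivalZone = nullptr;
    UPROPERTY(VisibleAnywhere) UAudioComponent* Voiceover = nullptr;
    TWeakObjectPtr<AVRTourPawn> Visitor;
};
```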

 

Finally, we had a meaningful breakthrough. We were actually able to control what people were doing in the experience, while they still felt like they had autonomy over their journey. It wasn’t just that people were looking at what was being described; when they were listening to the voiceover they felt like they were actually doing something. As long as there was a voice to listen to, we didn’t need to try to engage users with other methods like interactivity.

VR Journeys

By simply copying reality we had created a VR prototype that we felt provided substantial value to arch vis beyond the traditional methods. Linking the key points of interest into a journey through the project highlights allowed us to really sell the vision of the project while the viewer was fully immersed within this future space.

[Image: VR Journeys points of interest]

We still give users lots to interact with to keep them engaged, but we build this interaction up over the course of the journey and make it relevant to the voiceover highlights that came before. We also found that as soon as the voiceover stops and the navigation nodes are re-enabled, teleporting becomes the thing “to do” again, and users take off sprinting towards the next point of interest. But as soon as they get there and the next voiceover starts, they relax, listen, and truly appreciate the design that surrounds them.

In the next post we will take a look at what we did after journeys, and how multi-user VR experiences remove almost all of the other issues that we spent so long trying to fix.

Barry
