
My first four weeks with Google Daydream VR

 

I started working on a new project at the beginning of January and found myself experimenting with a form of Virtual Reality (VR) headset that I had never worked with before. Having worked with the HTC Vive, I was aware of a few of the problems associated with VR, the most common being motion sickness. But working with Daydream introduced some new limitations alongside some great opportunities.

 

I am going to dive into some fascinating playtesting results that made me change my perspective on how to design for a VR environment.

 

 

Basic Interaction:

 

A game or an experience consists of an environment that contains a set of objects that the player can interact with.

 

Google’s Daydream controller offers 3 DOF (Degrees of Freedom) of rotational tracking and does not support positional tracking.

 

I would like to explain the problem caused by the absence of positional tracking with an example: imagine an object placed a little too far away from you, so that you have to lean forward to grab it. With no positional tracking, no matter how much you extend your arm, the headset cannot detect the change in position of the controller, and the controller will appear fixed in the headset view.
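To make that concrete, here is a minimal sketch of what a 3 DOF pose amounts to in code. This is an illustration only, not Daydream SDK code; the types and the arm-anchor offset are assumptions of mine. The point is simply that only the orientation comes from the controller’s sensors, while the position is pinned to a fixed arm-model offset, so reaching or leaning never moves the virtual controller.

```kotlin
// Illustrative sketch only (assumed types and values, not Daydream SDK code).
data class Vec3(val x: Float, val y: Float, val z: Float)
data class Quat(val w: Float, val x: Float, val y: Float, val z: Float)

// Hypothetical anchor roughly where a relaxed right hand sits, relative to the head.
val armAnchor = Vec3(0.2f, -0.3f, -0.5f)

// Each frame the sensors give us an orientation and nothing else: the virtual
// controller's position stays pinned to the arm anchor no matter how far the
// player leans or reaches, which is exactly the limitation described above.
fun controllerPose(sensorOrientation: Quat): Pair<Vec3, Quat> =
    armAnchor to sensorOrientation

fun main() {
    // Whatever orientation comes in, the position component is always armAnchor.
    val (position, _) = controllerPose(Quat(1f, 0f, 0f, 0f))
    println(position)   // -> Vec3(x=0.2, y=-0.3, z=-0.5)
}
```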

 

So it’s important to find a way around this and take advantage of the other features the controller has to offer, because this hardware is capable of reaching the masses, unlike most other VR hardware.

 

The first milestone was to play with the controller interactions and make them resemble real-world interactions as much as we could, so we replaced the controller model with a hand. The cursor was kept small and subtle so that it wouldn’t look unrealistic (we don’t see cursors in the real world!). We also introduced a hand-pointing animation to indicate that the player has selected an object as the target and can now click on it to interact with it.
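Under the hood, the target selection behind that feedback is essentially a laser-pointer test: find the interactable closest to the controller’s forward ray and arm the feedback for it. The sketch below is only an illustration under names I have made up (it is not our production code or the Daydream SDK); it approximates interactables as points and uses a small angular threshold.

```kotlin
// Illustrative sketch only: assumed names and types, not actual project/SDK code.
import kotlin.math.acos
import kotlin.math.sqrt

data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun minus(o: Vec3) = Vec3(x - o.x, y - o.y, z - o.z)
    fun dot(o: Vec3) = x * o.x + y * o.y + z * o.z
    fun normalized(): Vec3 { val l = sqrt(dot(this)); return Vec3(x / l, y / l, z / l) }
}

data class Interactable(val name: String, val position: Vec3)

// Returns whatever the pointer is aimed at, or null if nothing is within ~5 degrees.
fun pickTarget(origin: Vec3, forward: Vec3, objects: List<Interactable>): Interactable? =
    objects
        .map { it to acos((it.position - origin).normalized().dot(forward.normalized()).coerceIn(-1f, 1f)) }
        .filter { (_, angle) -> angle < Math.toRadians(5.0).toFloat() }
        .minByOrNull { (_, angle) -> angle }
        ?.first

fun main() {
    val scene = listOf(
        Interactable("mug", Vec3(0.1f, 0f, -2f)),
        Interactable("lamp", Vec3(1.5f, 0.5f, -2f)),
    )
    val target = pickTarget(Vec3(0f, 0f, 0f), Vec3(0f, 0f, -1f), scene)
    // This is where the hand-pointing animation was triggered - and, as the
    // playtests below showed, where a highlight later took its place.
    println("Pointing at: ${target?.name}")   // -> Pointing at: mug
}
```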

 

How does the player interact with the objects in the room?

 

There were two ways to do it:

 

  1. Bringing the object to your hand

        The traditional point-and-click interaction where clicking on an object will teleport it to your hand. You can then hold it and rotate it around to inspect it.


  2. Moving your hand towards the object

      This meant your virtual hand would move towards the object, grab it, and bring it back to you. (A rough sketch of both approaches follows this list.)
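Here is that sketch, under assumed names rather than the project’s actual code: approach 1 snaps the object straight to a hand anchor on click, while approach 2 animates the hand out to the object and brings it back with the object attached.

```kotlin
// Illustrative sketch only (assumed names, not the project's actual code).
data class Vec3(val x: Float, val y: Float, val z: Float)

fun lerp(a: Vec3, b: Vec3, t: Float) =
    Vec3(a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t)

class Movable(var position: Vec3)

// Approach 1: "bringing the object to your hand" - a point-and-click teleport.
fun teleportToHand(obj: Movable, handAnchor: Vec3) {
    obj.position = handAnchor
}

// Approach 2: "moving your hand towards the object". Called once per frame with
// progress t in [0, 1]: the first half is the reach, the second half the return
// trip with the object riding along in the hand.
fun moveHandTowardsObject(hand: Movable, obj: Movable, handAnchor: Vec3, objStart: Vec3, t: Float) {
    if (t < 0.5f) {
        hand.position = lerp(handAnchor, objStart, t * 2f)           // reaching out
    } else {
        hand.position = lerp(objStart, handAnchor, (t - 0.5f) * 2f)  // coming back
        obj.position = hand.position                                  // object is grabbed
    }
}

fun main() {
    val handAnchor = Vec3(0.2f, -0.3f, -0.5f)
    val mug = Movable(Vec3(0f, 0f, -2f))
    teleportToHand(mug, handAnchor)   // approach 1: instant
    println(mug.position)             // -> Vec3(x=0.2, y=-0.3, z=-0.5)
}
```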


In both cases, the object’s rotation was limited by the range of the player’s wrist rotation, since nobody rotates the controller while keeping their hand fixed in place. They rotate their wrist, because the brain wants to map real-world hand movements onto the virtual hand’s movements.
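A minimal sketch of that mapping, with quaternion orientations and names I have made up for illustration (this is not the project’s actual code): an offset is captured at grab time so the object doesn’t snap, and afterwards the object’s orientation simply follows the live controller orientation, which is why its range of motion is bounded by how far the wrist can comfortably twist.

```kotlin
// Illustrative sketch only (assumed names; unit quaternions for orientation).
data class Quat(val w: Float, val x: Float, val y: Float, val z: Float) {
    // Hamilton product: compose this rotation with another.
    operator fun times(o: Quat) = Quat(
        w * o.w - x * o.x - y * o.y - z * o.z,
        w * o.x + x * o.w + y * o.z - z * o.y,
        w * o.y - x * o.z + y * o.w + z * o.x,
        w * o.z + x * o.y - y * o.x + z * o.w
    )
    fun conjugate() = Quat(w, -x, -y, -z)   // inverse of a unit quaternion
}

class HeldObject(var orientation: Quat)

// Captured once, at the moment the object is grabbed, so it doesn't snap.
fun grabOffset(controller: Quat, obj: HeldObject): Quat =
    controller.conjugate() * obj.orientation

// Applied every frame while the object is held: the object follows the wrist.
fun updateHeldRotation(controller: Quat, offset: Quat, obj: HeldObject) {
    obj.orientation = controller * offset
}

fun main() {
    val obj = HeldObject(Quat(1f, 0f, 0f, 0f))
    val controllerAtGrab = Quat(0.707f, 0f, 0.707f, 0f)   // some wrist pose at grab time
    val offset = grabOffset(controllerAtGrab, obj)
    updateHeldRotation(controllerAtGrab, offset, obj)     // same pose, so the object is unchanged
    println(obj.orientation)                              // ~ Quat(w=1.0, x=0.0, y=0.0, z=0.0)
}
```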

 

Our playtest produced some unexpected results.

 

  1. Players did not notice the hand-pointing animation

        Players complained about a lack of feedback when they pointed the controller at an object; they didn’t know whether their hand was pointing at an object or not. When asked if the hand animation helped with any of that, they said they didn’t remember seeing any hand animation at all and had been completely oblivious to it.

It reminded me of the concept of introducing a virtual nose in VR to reduce motion sickness. There is an article that talks about how having a fixed reference point in frame tends to reduce feelings of simulator sickness [1]. But what’s astonishing is that the participants were baffled when asked how the big honking nose felt, because they hadn’t even perceived it. The researchers attribute this to a perceptual phenomenon that allows our perceptual system to ignore objects we see over and over again.

That’s probably what was happening in our case: the player tunes the hand out after a while and becomes completely indifferent to it. So we reverted to the traditional approach of highlighting objects when the cursor hovers over them, rather than relying on a fancy hand animation.
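The replacement feedback is about as simple as it sounds. Here is a minimal sketch of the idea under names I have assumed (the real version would swap a material or emissive colour rather than print): whatever the cursor is currently over gets highlighted, and the highlight is cleared the moment the cursor moves off it, so it is the change of state that catches the eye.

```kotlin
// Illustrative sketch only (assumed names, not actual engine/SDK code).
class Highlightable(val name: String) {
    fun setHighlight(on: Boolean) {
        // In the real scene this would swap a material or emissive colour.
        println("$name highlight ${if (on) "on" else "off"}")
    }
}

class HoverHighlighter {
    private var current: Highlightable? = null

    // Called every frame with whatever the pointer ray currently hits (or null).
    fun onHoverTarget(target: Highlightable?) {
        if (target === current) return   // nothing changed, nothing to do
        current?.setHighlight(false)     // leaving the old object
        target?.setHighlight(true)       // entering the new one
        current = target
    }
}

fun main() {
    val mug = Highlightable("mug")
    val lamp = Highlightable("lamp")
    val highlighter = HoverHighlighter()
    highlighter.onHoverTarget(mug)    // mug highlight on
    highlighter.onHoverTarget(lamp)   // mug off, lamp on
    highlighter.onHoverTarget(null)   // lamp off
}
```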


  2. Players were upset when they realised they couldn’t physically reach out

       Placing a hand in the scene makes you believe that your real-world hand movement is mapped onto the controller, which is not the case. It sets up the expectation that the virtual hand can function like a hand in the real world. This assumption led a lot of players to eagerly extend their arm, hoping they could reach objects in the virtual world, and the limitation came as an unwelcome surprise to players who weren’t aware of it.

 

  3. Players preferred the “bringing the object to your hand” approach

      Most of the playtesters preferred this approach and said it felt more natural to them. One would think a point-and-click interaction would break the immersion. Let’s take a step back here. Games like Zork used text and command-line input as the interaction. The industry felt the need for a more direct way of manipulating the environment, which led to the point-and-click mechanism, and that has been around for so long now, not just in games but in almost anything that uses a point-and-click interface. So much so that it has established a firm pattern of neurons in our brains, to the point that we believe it’s the most normal thing to do, even in VR.

 

Point-and-click turned out to be the dominant style, and it didn’t matter that it didn’t necessarily echo a real-world interaction.

 

A lot of the time, we as developers try to find ways to mimic real-world interactions to make things look realistic, and in the process lose focus on the main goal of our product. Virtual Reality is still developing, and the hardware keeps going through a never-ending upgrade cycle aimed at overcoming its shortcomings. So it doesn’t make much sense to put all your energy into working around a hardware shortcoming when the manufacturer will likely solve it for you in a few months anyway.

 

So instead of forcing your ideas onto the platform, learning its capabilities and using its potential to your advantage can be truly rewarding. This project has also taught me a very important lesson: adapt and iterate continuously on design decisions, by observing playtesting sessions, to build a product that does not break immersion.

 

So how does one curb motion sickness when moving is inevitable?

If you’re anything like me, the slightest post-processing effect [2] or motion in Virtual Reality can be disorienting and cause motion sickness. That is not exactly convenient when you’re a designer and programmer who has to constantly check iterations on the platform, and it’s worse when the audience experiences it. The internet is filled with articles on how to manage this particular problem, so I won’t go into detail; instead, I’ll put forth an observation from an experience I played a few days back as part of my research (I took it up as a challenge to find and play as many experiences and games as I could, until I had exhausted the resources on the internet, to understand how design decisions were made to tackle limitations in VR).

 

The experience is called “The Turning Forest” and it includes a sequence where you sail in the ocean on the back of a fantastical creature.

 

According to my understanding, when you move through a virtual world, your eyes perceive motion while your vestibular system (the motion sensors in the inner ear) does not, and this disagreement between the senses leads the brain to suspect you have ingested a toxin, triggering nausea as a defence mechanism [3]. When sailing on a boat or standing on a ship, you are advised to look at the horizon because it is relatively stable.

 

In “The Turning Forest”, the creature acts as a fixed reference that you can look at, along with the horizon line. You are also seated on the creature rather than standing, so your feet are not on a floor that you expect to shift or move beneath you, which helps further with nausea. Some would argue that reading a book (a fixed reference) while travelling in a car with your feet on the floorboard still induces motion sickness, and hence that the theory doesn’t hold. The advantage in VR is that you don’t feel the bumps and turns of the car, which are usually what trigger the sensors that make you feel sick. That doesn’t eliminate the problem of sickness completely, but it is one of the helping factors.


Source: https://www.youtube.com/watch?v=7g4uFXR-pRQ

 

That means riding a fantastical creature that acts as a fixed reference, with the horizon as an additional aid to look at, minimizes the chance of feeling sick. The precautions don’t stop there: the experience also draws the user’s attention to birds flying and diving into the sea and to fish swimming alongside. These cues indicate motion, provide a calming effect, and offer the user interaction as a source of distraction.
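As a rough illustration of the fixed-reference idea (the virtual nose mentioned earlier, a cockpit, or the creature you are riding), here is a small sketch under names I have assumed, not code from any of these experiences: the comfort anchor is repositioned every frame relative to the rig that carries the camera, so it stays stable in the rider’s view even while the rest of the world rushes past.

```kotlin
// Illustrative sketch only (assumed names and types, not code from any real title).
data class Vec3(val x: Float, val y: Float, val z: Float) {
    operator fun plus(o: Vec3) = Vec3(x + o.x, y + o.y, z + o.z)
    operator fun times(s: Float) = Vec3(x * s, y * s, z * s)
    fun cross(o: Vec3) = Vec3(y * o.z - z * o.y, z * o.x - x * o.z, x * o.y - y * o.x)
}

data class Quat(val w: Float, val x: Float, val y: Float, val z: Float) {
    // Rotate a vector by this unit quaternion: v' = v + w*t + u x t, where t = 2(u x v).
    fun rotate(v: Vec3): Vec3 {
        val u = Vec3(x, y, z)
        val t = u.cross(v) * 2f
        return v + t * w + u.cross(t)
    }
}

// A stable object (creature's head, cockpit frame, virtual nose) kept at a fixed
// offset from the moving rig, so the player always has something steady to rest
// their eyes on while the rest of the scene moves.
class ComfortAnchor(val localOffset: Vec3) {
    var position = Vec3(0f, 0f, 0f)
        private set

    // Called every frame with the current pose of the rig carrying the camera.
    fun follow(rigPosition: Vec3, rigOrientation: Quat) {
        position = rigPosition + rigOrientation.rotate(localOffset)
    }
}

fun main() {
    val nose = ComfortAnchor(localOffset = Vec3(0f, -0.05f, -0.1f))
    nose.follow(rigPosition = Vec3(10f, 2f, -30f), rigOrientation = Quat(1f, 0f, 0f, 0f))
    println(nose.position)   // -> Vec3(x=10.0, y=1.95, z=-30.1)
}
```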

 

So what does this convey to me?

If there is a known issue with an interaction that is essential to your experience, research it as much as you can and implement as many precautions as possible to reduce its effect. At the end of the day, you don’t want your experience to be unintentionally stomach-turning for the player.

 

Finally, this list is in no way comprehensive, as the field of VR is volatile and continuously evolving. I hope these few examples encourage you to research more in this space and demonstrate how important playtesting is, especially when working with less explored platforms.

 

If you wish to discuss more on this topic, you can contact me at tsurve@andrew.cmu.edu.

 
