So I finally found some time to get creative with some VR UX to improve how held objects behave when the VR player puts their hands somewhere they shouldn’t.
It’s pretty normal in VR (and in any game using a physics engine) to expect that an object acting under gravity will be prevented from intersecting unrealistically with other objects and the virtual environment by physics engine collisions. But in current-generation VR the player can put their hands wherever they want, unconstrained by the virtual environment, and the behavior of a held object in that situation can’t really be resolved by the physics engine (as shown by the gif below). How to handle it isn’t totally established in VR as a whole yet.
One thing I spotted while playing “I Expect You To Die” (mostly in the office area, because in the levels I was too absorbed to do any analysis) was how collisions behaved differently when an object was held: a held object was allowed to pass through the environment just as the hand does. By setting it up that way, the virtual hand and my real hand were always in sync, which just felt right.
I went through a few iterations working this into BeeBeeQ and ended up with a neat solution. Now that it’s finished, it feels like it should have been obvious… but this is gamedev, so I’m not kicking myself too hard.
So my first thought was simply to disable all collisions between the environment and the tool while it’s held, which is basically what “I Expect You To Die” does (though they disable all collisions, with some extra speed-based behavior as described in this far more interesting blog post). That worked for them, but in BeeBeeQ it would have let the VR player take a swing at Bee players through walls. To keep things fair, a held object that’s intersecting a wall (or any part of the environment) should not be able to hit the bees.
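For reference, the naive approach can be sketched like this. This is my own illustration, not BeeBeeQ’s actual code; the “HeldTool” layer name and the grab/drop callbacks are assumptions:

```csharp
using UnityEngine;

// Naive version: while held, move the tool to a layer that the physics
// collision matrix (Project Settings > Physics) is set to ignore
// against the environment.
public class NaiveHeldTool : MonoBehaviour
{
    int defaultLayer;

    void Awake() => defaultLayer = gameObject.layer;

    // Called by whatever grab system picks the tool up (assumed hook).
    public void OnGrabbed()
    {
        // Note: gameObject.layer only affects this object, so child
        // colliders would need the same treatment.
        gameObject.layer = LayerMask.NameToLayer("HeldTool");
    }

    public void OnDropped() => gameObject.layer = defaultLayer;
}
```

The problem described above is exactly what this sketch doesn’t solve: while on the “HeldTool” layer the tool still collides with bees, even when it’s buried inside a wall.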
After experimenting with changing layers in OnCollisionEnter (and then being unable to detect OnCollisionExit), then with switching the colliders to triggers in OnCollisionEnter and back in OnTriggerExit (OnTriggerExit was not reliably called), I eventually found a solution I feel works.
In Awake of an interactive object I duplicate all the colliders that make it up (we keep colliders and render objects separate, so this doesn’t result in any extra geometry) and set the duplicates to be triggers. When the object is held, the non-trigger colliders are moved to a layer that doesn’t collide with the environment; the triggers still do. In OnTriggerEnter with an environment (static) collider, I store that static collider in a list and change the non-trigger colliders’ layer again, this time to one that collides with neither the environment nor the bees. In OnTriggerExit I remove the static collider from the list, and if the list is empty I re-enable collisions with the bees. Finally, when the object is dropped, I re-enable collisions with the environment.
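The steps above can be sketched roughly as follows. This is a minimal illustration under my own assumptions (the layer names, the grab/drop hooks, and colliders living on their own child objects), not the game’s actual code:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class HeldObjectCollision : MonoBehaviour
{
    int defaultLayer;      // normal physics while dropped
    int heldLayer;         // collides with bees, not the environment
    int intersectingLayer; // collides with neither (placeholder names)

    readonly List<Collider> overlappingStatics = new List<Collider>();
    Collider[] solidColliders;

    void Awake()
    {
        defaultLayer = gameObject.layer;
        heldLayer = LayerMask.NameToLayer("Held");
        intersectingLayer = LayerMask.NameToLayer("HeldIntersecting");

        solidColliders = GetComponentsInChildren<Collider>();
        // Duplicate each solid collider as a trigger so environment
        // overlaps are still detected once the solids stop colliding
        // with the environment. Assumes colliders sit on dedicated
        // child objects, so cloning them doesn't clone anything else.
        foreach (var col in solidColliders)
        {
            var dup = Instantiate(col, col.transform.parent);
            dup.isTrigger = true;
        }
    }

    public void OnGrabbed() => SetSolidLayer(heldLayer);
    public void OnDropped() => SetSolidLayer(defaultLayer);

    void OnTriggerEnter(Collider other)
    {
        if (!other.gameObject.isStatic) return;
        overlappingStatics.Add(other);
        SetSolidLayer(intersectingLayer); // inside a wall: no hitting bees
    }

    void OnTriggerExit(Collider other)
    {
        overlappingStatics.Remove(other);
        if (overlappingStatics.Count == 0)
            SetSolidLayer(heldLayer);     // clear of walls: bees are fair game
    }

    void SetSolidLayer(int layer)
    {
        foreach (var col in solidColliders)
            col.gameObject.layer = layer;
    }
}
```

The list (rather than a simple flag) matters because the object can overlap several static colliders at once, and bee collisions should only come back when it has exited all of them.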
The flow actually works very well, and though I don’t like needing double colliders, I can live with that.
The last thing to do was provide some visual feedback to let the VR player know they are performing an illegal move. This is done by changing the object’s material to a two-pass Fresnel shader: one pass renders the un-intersected area with ZTest LEqual, and the second pass renders the intersected area slightly more transparently with ZTest Greater. This shows the object through the environment but still makes it easy to see where the intersection is happening.
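The two-pass structure looks roughly like this in ShaderLab. This is only a skeleton of the depth-test setup described above, with the actual Fresnel fragment shaders elided; the shader name and everything beyond the ZTest states are my assumptions:

```shaderlab
Shader "BeeBeeQ/IntersectionFresnel"
{
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        // Pass 1: the un-intersected part of the object, drawn only
        // where it is in front of (or at) the existing depth buffer.
        Pass
        {
            ZTest LEqual
            // ... Fresnel fragment shader at normal alpha ...
        }

        // Pass 2: the intersected part, drawn only where the object is
        // behind the depth buffer (i.e. inside the environment), at
        // reduced alpha so it reads as "illegal".
        Pass
        {
            ZTest Greater
            ZWrite Off
            // ... same Fresnel shader, more transparent ...
        }
    }
}
```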
On a lucky day I get two hours to work on BeeBeeQ, so it matters that all of this happens pretty much automatically for any new interactable we add, which is exactly what I needed. Whether all this makes it into the final game will depend on some intensive play testing, but so far I like this behavior a whole lot better than watching the physics engine struggle to maintain order!
Plus it makes it much easier to flip burgers.