Lessons learned while developing TurboTuscany

Below I sum up a few points that we learned while developing the TurboTuscany demo. Some of our findings are noteworthy, while others will be common knowledge to anyone who has developed for the Razer Hydra, Kinect, or PS Move before.

[Image: turbogranny5]

Latencies of the devices we used, smallest first:
Oculus Rift < Razer Hydra < PS Move < Kinect

Body tracking with Kinect has an easily noticeable lag, has plenty of jitter, and the tracking fails often. Nevertheless, Kinect adds a lot to the immersion and is fun to play around with.

Of all the positional head tracking methods available in our TurboTuscany demo, PS Move is the best compromise: it has a big tracking volume (almost as big as Kinect’s) and accurate tracking (though not as accurate as the Razer Hydra’s). Therefore the best experience of our demo is achieved with Oculus Rift + Kinect + PS Move. Occlusion of the Move controller from the PS Eye’s view is a problem for positional tracking, however (not for rotational).

The second best head tracking is achieved with the combination of Oculus Rift, Kinect, and Razer Hydra. This comes with the added cumbersomeness of having to wear the Hydra around the waist.

My personal opinion is that VR systems with a virtual body should track the user’s head, hands, and forward direction (chest/waist) separately. This way the user can look in a different direction than the one where they are pointing a hand-held tool or weapon, while walking in yet a third direction. In the TurboTuscany demo we achieve this with the combination of Oculus Rift, Kinect, and Hydra/Move.
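As a rough illustration of that decoupling, the walk direction can be driven by the chest/waist yaw (e.g. from Kinect) instead of the head yaw, so looking around never steers locomotion. This is a minimal sketch; the function and parameter names are made up for illustration and are not RUIS code:

```python
import math

def walk_velocity(chest_yaw_deg, stick_x, stick_y, speed=1.5):
    """Map 2D joystick input into a world-space walk velocity using the
    chest/waist forward direction, not the head's view direction, so the
    user can look one way while walking another."""
    a = math.radians(chest_yaw_deg)
    # Rotate the stick vector by the chest yaw.
    wx = stick_x * math.cos(a) + stick_y * math.sin(a)
    wz = -stick_x * math.sin(a) + stick_y * math.cos(a)
    return (wx * speed, wz * speed)
```

The head yaw (Rift) and the aim yaw (Hydra/Move) simply never enter this function; each sensor owns one direction.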

Latency requirements for positional head tracking

The latency of the Razer Hydra’s position tracking is relatively low, and should be low enough for many HMD use cases. When viewing nearby objects, however, the Hydra’s latency becomes apparent as you move your head. Unless STEM has some new optimization tricks, it will most likely have a different (higher?) latency than the Hydra because it’s wireless.

If head position tracking latency is less than or equal to that of the Oculus Rift’s rotational tracking, it should be good enough for most HMD applications. Since this is not a scientific paper, I won’t cite the earlier research that suggests latency requirements in milliseconds.

Because we had positional head tracking set up to track the point between the eyes, we first set the Oculus Rift’s “Eye Center Position” to (0,0,0); this parameter determines a small translation that follows the orientation of the Rift. But we found that the latency of our positional head tracking was apparent when moving the head close (under 0.5 meters) to objects, even with the Razer Hydra. Therefore we ended up setting “Eye Center Position” to the default (0, 0.15, 0.09), and viewing close objects while moving became much more natural. Thus, our positional head tracking has a “virtual” component that follows the Rift’s orientation.
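The “virtual” component amounts to this: the rendered eye point is the (laggy) tracked head position plus the eye-center offset rotated by the Rift’s low-latency orientation. A minimal sketch in Python, simplified to yaw only; this is illustrative, not our actual Unity code:

```python
import math

# Default "Eye Center Position" offset (meters, local space): the point
# between the eyes relative to the tracked head point.
EYE_CENTER_OFFSET = (0.0, 0.15, 0.09)

def yaw_rotate(v, yaw_deg):
    """Rotate vector v around the vertical (y) axis by yaw_deg degrees."""
    a = math.radians(yaw_deg)
    x, y, z = v
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def eye_position(tracked_head_pos, rift_yaw_deg):
    """Tracked head position (higher latency) plus a 'virtual' offset that
    follows the Rift's low-latency orientation."""
    off = yaw_rotate(EYE_CENTER_OFFSET, rift_yaw_deg)
    return tuple(p + o for p, o in zip(tracked_head_pos, off))
```

Because the offset follows the Rift’s orientation rather than the external tracker, small head rotations near objects feel immediate even when the positional tracker lags.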

And now for something completely different… We had lots of bugs in grandma when implementing the Kinect-controlled avatar for the TurboTuscany demo; below are some of the results 🙂

[Images: HorizontalLevitation, turbogranny2resized, turbogranny, turbogranny4, turbogranny3]

This entry was posted in RUIS.

2 Responses to Lessons learned while developing TurboTuscany

  1. Sean Hall says:

Great work. I am curious whether you have ever experienced tracking issues with the Oculus Rift due to magnetic interference from the Razer Hydra? I am having intermittent tracking issues with the Rift on a USB repeater/extender with the Hydra base attached to my chest. I am about to update the firmware and such, but I just wanted to exchange some information. I have some ideas on how to overcome occlusion issues of the PSMove and still track the full body without the Kinect 🙂

    • Takala says:

      Thanks for your comment Sean.
I can’t say that I have noticed any interference from the Razer Hydra, but it is possible. As far as I can tell, the Rift’s magnetometer is only used for yaw drift correction. I have noticed that the Rift’s pitch and roll can gain systematic error even when no Hydra is present; that’s related to the accelerometer though. We have been using Oculus SDK v0.2.4.
      I’m interested to know how you would overcome the occlusion issue with PSMove. The Sony team responsible for PSMove did a great job already with the sensor fusion, and the accelerometer+gyro data alone just isn’t enough to reliably update the 3D position.
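(To illustrate the point about accelerometer+gyro data: double-integrating even a small constant accelerometer bias makes position error grow quadratically with time. A toy sketch, not PS Move code; the 0.05 m/s² bias is an assumed example value:)

```python
def drift_from_bias(bias=0.05, dt=0.01, seconds=5.0):
    """Double-integrate a constant accelerometer bias (m/s^2) to show how
    quickly position drifts without an external reference like the PS Eye."""
    v = p = 0.0
    for _ in range(int(seconds / dt)):
        v += bias * dt  # integrate acceleration -> velocity
        p += v * dt     # integrate velocity -> position
    return p  # ~0.63 m of drift after just 5 seconds
```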
