Lessons learned while developing TurboTuscany

Below I sum up a few points that we learned while developing the TurboTuscany demo. Some of our findings may be new to you, while others are common knowledge if you have developed for Razer Hydra, Kinect, or PS Move before.

Latencies of the devices we used, smallest first:
Oculus Rift < Razer Hydra < PS Move < Kinect

Body tracking with Kinect has easily noticeable lag and plenty of jitter, and the tracking fails often. Nevertheless, Kinect adds a lot to the immersion and is fun to play around with.
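
If you have to live with the raw Kinect data, filtering is the usual remedy, at the cost of yet more lag. Below is a minimal Unity/C# illustration of that jitter/latency trade-off; it is a sketch, not the filtering that RUIS actually uses.

    using UnityEngine;

    // Sketch of the jitter/latency trade-off: exponential smoothing of the
    // raw Kinect head position. Heavier smoothing means less jitter but more
    // lag. Illustrative only; not the filter that RUIS actually uses.
    public class KinectSmoothingSketch : MonoBehaviour
    {
        [Range(1f, 30f)]
        public float responsiveness = 10f; // higher = less lag, more jitter
        private Vector3 filtered;

        public Vector3 Filter(Vector3 rawHeadPosition)
        {
            // Frame-rate independent exponential moving average.
            float alpha = 1f - Mathf.Exp(-responsiveness * Time.deltaTime);
            filtered = Vector3.Lerp(filtered, rawHeadPosition, alpha);
            return filtered;
        }
    }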

Of all the positional head tracking methods available in our TurboTuscany demo, PS Move is the best compromise: a big tracking volume (almost as big as Kinect's) and accurate tracking (though not as accurate as Razer Hydra's). The best experience of our demo is therefore achieved with Oculus Rift + Kinect + PS Move. Occlusion of the Move controller from the PS Eye's view is a problem for positional tracking though (not for rotational).

The second best head tracking is achieved with the combination of Oculus Rift, Kinect, and Razer Hydra. This comes with the added cumbersomeness of having to wear a Hydra controller around the waist.

My personal opinion is that VR systems with a virtual body should track the user's head, hands, and forward direction (chest/waist) separately. This way the user can look in a different direction than where they are pointing a hand-held tool or weapon, while walking in yet a third direction. In the TurboTuscany demo we achieve this with the combination of Oculus Rift, Kinect, and Hydra/Move.
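
As a rough sketch of what this decoupling looks like in code (illustrative Unity/C#; the three source transforms and their names are hypothetical, not actual RUIS components):

    using UnityEngine;

    // Illustrative sketch of decoupled look/aim/walk directions. The three
    // source transforms would be driven by the Rift (head), Hydra/Move (hand)
    // and Kinect torso tracking; the names are hypothetical, not RUIS's.
    public class DecoupledControlSketch : MonoBehaviour
    {
        public Transform head;   // Rift orientation -> camera look direction
        public Transform hand;   // Hydra/Move -> tool/weapon aim direction
        public Transform torso;  // Kinect chest/waist -> walking direction
        public float walkSpeed = 2f;

        void Update()
        {
            // Walk along the torso's forward direction, flattened onto the
            // ground plane, so the user can look and aim elsewhere while moving.
            Vector3 walkDir = Vector3.ProjectOnPlane(torso.forward, Vector3.up).normalized;
            float input = Input.GetAxis("Vertical"); // e.g. a gamepad stick
            transform.position += walkDir * input * walkSpeed * Time.deltaTime;
        }
    }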

Latency requirements for positional head tracking

The relatively low latency of Razer Hydra's position tracking should be low enough for many HMD use cases. When viewing objects up close, however, the Hydra's latency becomes apparent as you move your head. Unless STEM has some new optimization tricks, it will most likely have a different (higher?) latency than the Hydra, because it is wireless.

If head position tracking latency is less than or equal to that of Oculus Rift's rotational tracking, it should be good enough for most HMD applications. Since this is not a scientific paper, I won't cite the earlier research that suggests latency requirements in milliseconds.

Because our positional head tracking was set up to track the point between the eyes, we first set Oculus Rift's "Eye Center Position" to (0,0,0); this parameter defines a small translation that follows the Rift's orientation. We found, however, that the latency of our positional head tracking was apparent when moving the head close (under 0.5 meters) to objects, even with Razer Hydra. We therefore ended up setting "Eye Center Position" back to the default (0, 0.15, 0.09), and viewing close objects while moving became much more natural. Thus, our positional head tracking has a "virtual" component that follows the Rift's orientation.
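
In code the idea looks roughly like this (a minimal Unity/C# sketch assuming a single tracked point between the eyes; the field names are illustrative, not the RUIS API):

    using UnityEngine;

    // Sketch: combine an external position tracker (Hydra/Move/Kinect) with
    // the Rift's orientation, applying an eye center offset that rotates with
    // the head. Field names are illustrative, not the actual RUIS API.
    public class HeadTrackingSketch : MonoBehaviour
    {
        // Offset from the tracked point between the eyes to the eye center,
        // in head-local coordinates; (0, 0.15, 0.09) is the default mentioned above.
        public Vector3 eyeCenterPosition = new Vector3(0f, 0.15f, 0.09f);

        public Vector3 trackedHeadPosition;                      // from Hydra/Move/Kinect
        public Quaternion riftOrientation = Quaternion.identity; // from the Rift sensor

        void LateUpdate()
        {
            // Because the offset follows the Rift's orientation, small head
            // rotations translate the eyes with the Rift's low latency, even
            // when the external position tracker lags behind.
            transform.position = trackedHeadPosition + riftOrientation * eyeCenterPosition;
            transform.rotation = riftOrientation;
        }
    }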

And now for something completely different… We had lots of bugs in our grandma avatar when implementing the Kinect-controlled avatar for the TurboTuscany demo; below are some of the results 🙂

(Images: the grandma avatar levitating horizontally and glitching into various other buggy poses.)


Oculus Rift + Kinect + Razer Hydra + PS Move demo released!

Over the past months we've been adding new features to RUIS for Unity, including Oculus Rift and Razer Hydra support. Our just-released TurboTuscany demo showcases the new capabilities of RUIS:

Video part 2:

TurboTuscany features a first-person view with a Kinect-controlled full-body avatar and four methods for six-degrees-of-freedom (6DOF) head tracking:

  • Oculus Rift + Kinect
  • Oculus Rift + Razer Hydra
  • Oculus Rift + Razer Hydra + Kinect
  • Oculus Rift + PlayStation Move (+ Kinect)

Head tracking with Razer Hydra

Seeing your own body in the virtual world makes a difference, adding to the sense of presence. Those of you with Kinect: try pushing and kicking stuff, or even climbing the ladder with your hands. You can take steps freely inside Kinect's range, and when you need to go further, just use a wireless controller to walk or run like you would in any normal game. We blend your Kinect-captured pose with Mecanim animation, so while you're running and your feet follow a run animation clip, you can still flail your hands and upper body around as you like (a sketch of the idea follows below).
(Kinect users will need to install the 32-bit Windows version of OpenNI 1.5.4.0. See the readme that comes with the demo for other details.)
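
Roughly, the blending works like this (an illustrative Unity/C# sketch with a placeholder where the Kinect skeleton would be read; not our actual implementation):

    using UnityEngine;

    // Sketch of blending a Kinect-captured pose over a Mecanim animation:
    // Mecanim poses the whole body first, then the upper-body bones are
    // overwritten with Kinect rotations in LateUpdate, which runs after the
    // Animator. Illustrative only; not the actual RUIS implementation.
    public class KinectMecanimBlendSketch : MonoBehaviour
    {
        public Transform[] upperBodyBones; // spine, shoulders, arms, head...
        [Range(0f, 1f)]
        public float kinectWeight = 1f;    // 0 = pure Mecanim, 1 = pure Kinect

        void LateUpdate()
        {
            foreach (Transform bone in upperBodyBones)
            {
                Quaternion kinectRotation = GetKinectRotation(bone);
                bone.rotation = Quaternion.Slerp(bone.rotation, kinectRotation, kinectWeight);
            }
        }

        Quaternion GetKinectRotation(Transform bone)
        {
            // Placeholder so the sketch compiles: a real implementation would
            // map this bone to the corresponding Kinect skeleton joint.
            return bone.rotation;
        }
    }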

Positional head tracking with Kinect alone is quite rough, so try the Razer Hydra + Kinect or PS Move option if you can.

Minimum requirements

  • Oculus Rift
  • Windows operating system: Vista, Windows 7, or Windows 8 (see the comment section for details)

Supported input devices

  • Razer Hydra
  • ASUS Xtion Pro, Kinect for Xbox, (Kinect for Windows?)
  • PlayStation Move and PS Navigation controllers (Move.me software and PS3 required)
  • Gamepad (any Unity compatible gamepad or joystick)
  • Mouse and keyboard

This demo should be pretty fun to try out even with just a mouse and keyboard. There are several physics-based activities, and we've hidden a bunch of Easter eggs in the scene.

Download links:
https://drive.google.com/file/d/0B0dcx4DSNNn0UEF0U2RMblNjelU/edit?usp=sharing
https://dl.dropboxusercontent.com/u/8247026/TurboTuscany101.zip
http://www.mediafire.com/?77ardrkg4okwhua
1080p version:
https://drive.google.com/file/d/0B0dcx4DSNNn0T2NaRFdNekkwUXM/edit?usp=sharing

Within a month or two we will release a new version of RUIS for Unity with all the features that we used to create the TurboTuscany demo.


Soon

In the near future we will release an Oculus Rift demo that can be used with Kinect, Razer Hydra, or PlayStation Move controllers (or any combination of those). This will be followed by a new version of RUIS for Unity, which lets you easily create your own applications that use the aforementioned devices.


Video of virtual reality games by students

This spring we organized our virtual reality course at Aalto University, for the third time using RUIS. We asked the students to create virtual reality applications that use Kinect and PlayStation Move controllers together. This resulted in games that would not be possible with either device alone:

The student projects seen in the video were created with RUIS for Unity in Aalto University's 2013 virtual reality course. The majority of the students had no prior experience in virtual reality development.


RUIS team receives Oculus Rift!

We finally received our Oculus Rift development kit! As far as we know, this is the third Oculus Rift kit in Finland (Unity's office in Finland has one, and another went to an early Finnish Oculus Rift Kickstarter backer).

We will soon add Oculus Rift support to RUIS for Unity. Stay tuned!


Oculus Rift and other highlights from VR2013 conference

This year's IEEE Virtual Reality 2013 conference was held at Disney World's Swan Hotel in Orlando, Florida. Leading VR researchers and practitioners (most of you will recognize at least Palmer Luckey of Oculus VR) gathered to present the latest academic research and commercial technology, and to discuss different aspects of VR. I was there to participate in the annual 3DUI contest with my immersive 3D user interface for Blender (which received the "Best low-cost solution" prize).

Above is a video of WorldViz's Pit demo in all its glory. During the conference I tried it myself, tracked by a high-end motion tracker and wearing the nVisor SX111, a $30,000 HMD from NVIS. It was so immersive that my legs were shaking, just from walking on a narrow plank high above the pit. The presenter encouraged me to jump down the 10-meter drop, which I hesitated over for a few seconds, even though I was fully aware of the virtual nature of the fall. No game that I have played on an external display has ever scared me like that! While the 1.3 kg weight of the HMD was distracting when tilting my head up or down, the 1280×1024 resolution, crisp picture quality, and 102-degree horizontal field of view were simply amazing.

Palmer Luckey trying on nVisor SX111.

Sixense Entertainment is a company creating software for Razer Hydra, a six-degrees-of-freedom (6DOF) controller that reminds me of PlayStation Move. Their booth had an intuitive 3D modeling application called MakeVR running with Oculus Rift. It was not the final Rift version that starts shipping at the end of this month, yet it was a very recent prototype, judging from the fact that it was not held together by duct tape. In the two photos below you can see Palmer experiencing Sixense's MakeVR software with Oculus Rift.

Yours truly wearing Oculus Rift and controlling MakeVR with Razer Hydra.

I have worn several HMDs in the past, ranging from professional $50,000 devices to Sony's HMZ-T1. Putting on the Oculus Rift for the first time was an unforgettable experience. What struck me first was the low resolution and image blurriness (probably due to cheap optics). Before using the MakeVR application with the Rift, I had used MakeVR on a normal 2D display without problems, but now I was completely lost. The MakeVR interface (a beta version) was not really optimized for the Rift, and the low resolution did not help the experience. This is an important reminder for VR developers: porting your application to an HMD requires plenty of user interface consideration, and even that does not guarantee that the application becomes easier or more efficient to use (while it might become more "immersive"). The resolution of the developer kit version of the Rift was so low that I could only read text elements of a floating 3D widget by bringing it so close that it covered everything else. Good luck to snipers in first-person shooter (FPS) games: just try to spot an enemy between these Duplo-sized pixels. You can try it yourself by playing a game at a resolution of 640×800 and sticking your face so close to the display that it fills your field of view. Do note that I'm purposely comparing the Rift with high-end HMDs, because they set the bar when it comes to visual quality.

It would have been interesting to try the Pit demo with the Rift in order to compare how much its lower quality optics and lower resolution impact the level of immersion. Unfortunately the Pit demo was not available for Oculus Rift.

While my first experience with Oculus Rift was a rude awakening, I have a very positive outlook for Oculus' future. The Rift weighs very little, is very affordable, and the promised API support for UDK and Unity engines is just what developers need. The low resolution of the Rift will be addressed in the consumer version (slated for release in 2014): the Oculus team is aiming for at least a 1080p display (960×1080 resolution per eye). Hopefully the optics will be adjustable, to minimize any blurriness. I'm slightly concerned that all the Oculus Rift hype that has been building up since August 2012 will turn into a backlash of public disappointment with the developer kit version, and that backers and the general audience will lose interest in the future consumer version. That would be a shame, as the consumer version of the Rift seems very promising. My opinion is that the developer kit version of the Rift will not deliver the VR bliss that some have been expecting. Instead, it will give developers the chance to refine their VR game and application concepts before the launch of the consumer version.

One of the most interesting panel discussions of the conference was the Consumer VR panel with Jason Jerald from NextGen Interactions, Palmer Luckey, Sébastien Kuntz from I'm in VR, David A. Smith from Lockheed Martin GTL, and Amir Rubin from Sixense. Palmer revealed that Oculus VR is well prepared to handle any liability issues that might result from users injuring themselves while using the Rift. Palmer also mentioned that Team Fortress 2 will have seven (if I remember the number correctly) different control modes for Oculus Rift that the player can choose from; "the least annoying one" of the seven was chosen as the default. The plethora of control modes highlights the challenges of developing 3D user interfaces, for which there are no proper standards yet. This problem is pronounced when incorporating immersive technology into existing genres like FPS games, where we have strong preconceptions about how they should be controlled. Some panelists argued that entirely new games should be created from the ground up with VR in mind, instead of just porting existing games. It was also mentioned that in multiplayer games like TF2, players with the Rift will be at a disadvantage compared to other players, due to the head-tracked controls and low resolution.

I got the opportunity to try on Google Glass, as one conference participant had brought it with him. While taking photos of Glass was forbidden, I had the pleasure of playing around with it. My experience was brief: voice commands and the frame's touch interface seemed to work nicely, except that the device kept turning itself off, apparently because my messy hair kept brushing against the off button. Time to get a haircut…?

The VR2013 conference had several commercial vendors, one of them being zSpace Inc. I'm quite excited about their device (also named zSpace), as I have a background in digital painting and 3D modeling: the zSpace is a 24″ passive 3D display with four embedded IR cameras for tracking a 6DOF digital pen and the user's head. The head tracking allows rendering 3D content from the user's point of view, making zSpace a desktop fishbowl VR system meant for 3D artists. While the tracking and 3D manipulation capabilities of zSpace are wonderful, the current version (priced around $4,000) is of limited use unless someone comes up with a killer graphics app with an unprecedented 3D user interface workflow. I suggested to zSpace's representative that if they could make the screen work like Wacom's Cintiq while retaining the other features, that would be awesome: digital artists could work with the device just as they would with their normal drawing tablet, and then seamlessly switch to 6DOF manipulation by simply lifting the pen from the surface. That would be the best way to introduce 3D motion controls to artists: by extending the capabilities of the tools they already use.

As part of the conference, various Florida universities presented their VR-related projects. Most projects dealt with medical systems, but there were other applications as well, like the Kinect-controlled quadcopter seen below.

Sean is a robot portraying a 14-year-old boy straight from the uncanny valley (I'm thinking of Johnny Cab). Sean has an animated face that is rear-projected onto a head-shaped screen, making all kinds of awkward, lip-synced facial expressions. The wheelchair and Sean's neck motions were operated by an actress, who also performed Sean's voice live. She was nowhere to be seen in the exhibition hall, but she did a great job portraying the character using cameras mounted on Sean.

Virtual prostate examination; the future of VR games?

See you next year in VR2014!

P.S. If you're an aspiring VR game developer, you should go experience DisneyQuest in Walt Disney World. DisneyQuest has VR games that are pioneering in the sense that they have been used by large audiences. They are also not very fun (at least the ones I tried), which should give developers ideas about what works and what doesn't.


Controlling Blender with PS Move: Winner of best low-cost solution in 3DUI 2013 contest

A new version of RUIS for Processing has just become available! I used this version to create an immersive user interface for 3D modeling in Blender with stereo 3D, PlayStation Move controllers, and head-tracking (with PS Move or Kinect). Check out the video here:

This application won the prize for "The best low-cost / minimally-intrusive solution" in the 3DUI contest at the IEEE Symposium on 3D User Interfaces 2013, where I also tried Google Glass, Oculus Rift, and many other AR/VR products (more on that in my next post).

The Blender scene and scripts, as well as the associated Processing sketch, are included in the BlenderControl example of the new RUIS for Processing package. You can get it from our download page.


PlayStation 4 announced tonight

PlayStation 4 will be officially announced within one hour and I’m following the release closely:
http://www.techradar.com/news/gaming/consoles/ps4-release-date-news-and-features-937822 

PS Move controllers will certainly be supported by PS4, and some rumors suggest that they will be accompanied by full-body motion sensors:
http://www.techradar.com/news/gaming/consoles/ps4-promises-new-styles-of-play-more-motion-control-and-social-1130204

I hope that the rumors are true, as I've been preaching for over a year now about how Kinect's full-body tracking is complemented well by the accurate tracking of PS Move controllers (and their buttons for triggering actions). This is why we've made it possible to use Kinect and PS Move controllers in the same coordinate system in RUIS (a sketch of the idea below).
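
The gist of matching the two coordinate systems: the user holds a Move controller in one hand, the same physical point is sampled in both the PS Eye and Kinect frames, and a transform between the frames is solved from the sample pairs. Below is a minimal C# sketch assuming both devices sit roughly upright, so a yaw rotation plus a translation suffices; it illustrates the idea and is not the RUIS calibration code.

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical sketch of Kinect/Move coordinate calibration from paired
    // samples of the same physical point (the hand holding the controller).
    // Assumes both devices are roughly upright, so a yaw rotation plus a
    // translation aligns the frames. Not the actual RUIS calibration code.
    public class MoveKinectCalibrationSketch
    {
        private readonly List<Vector3> movePoints = new List<Vector3>();
        private readonly List<Vector3> kinectPoints = new List<Vector3>();

        public void AddSamplePair(Vector3 movePosition, Vector3 kinectHandPosition)
        {
            movePoints.Add(movePosition);
            kinectPoints.Add(kinectHandPosition);
        }

        // Returns a transform that maps Move-frame points into the Kinect
        // frame. Call only after collecting well-spread sample pairs.
        public Matrix4x4 Solve()
        {
            Vector3 moveCentroid = Centroid(movePoints);
            Vector3 kinectCentroid = Centroid(kinectPoints);

            // Average yaw between the centered sample pairs on the ground plane.
            float yawSum = 0f;
            for (int i = 0; i < movePoints.Count; i++)
            {
                Vector3 a = movePoints[i] - moveCentroid;
                Vector3 b = kinectPoints[i] - kinectCentroid;
                yawSum += Mathf.DeltaAngle(Mathf.Atan2(a.x, a.z) * Mathf.Rad2Deg,
                                           Mathf.Atan2(b.x, b.z) * Mathf.Rad2Deg);
            }
            Quaternion yaw = Quaternion.Euler(0f, yawSum / movePoints.Count, 0f);
            Vector3 translation = kinectCentroid - yaw * moveCentroid;
            return Matrix4x4.TRS(translation, yaw, Vector3.one);
        }

        private static Vector3 Centroid(List<Vector3> points)
        {
            Vector3 sum = Vector3.zero;
            foreach (Vector3 p in points) sum += p;
            return sum / points.Count;
        }
    }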

The above concept image illustrates how a Kinect-controlled gladiator character can grab and interact with multiple objects that are represented by PS Move controllers. This differs from current Kinect games, which do not use controllers that are both tangible and location-tracked. In a two-player game this allows a game mechanic where players must share several tangible tools (PS Move controllers) in order to advance. Different characters can also possess unique ways of using the tools; for example, a skilled character can use a hammer to build things, while a strong character can use it to break otherwise unbreakable doors.

Regardless of where Sony is going with PS4, we are going to explore the possibilities of using Kinect and PS Move together in virtual reality and game applications with RUIS, and encourage others to follow suit.

Update: PS4 is fully compatible with PS Move, and the DualShock 4 has a PS Move-like light bar, presumably for position tracking. The new PS Eye is actually a stereo camera (it has two RGB sensors), which can provide depth images similar to what Kinect outputs. Depth images generated from two RGB images can be less accurate though, depending on the texture of the objects in view.

P.S. Do tell me if you know where the original image of the gladiator is from, so I can give credit where credit is due. I photoshopped it a year ago but have subsequently forgotten where I got the original from.


RUIS for Unity released!

It’s out! Get RUIS for Unity from the download page.

RUIS (Reality-based User Interface System) gives hobbyists and seasoned developers easy access to state-of-the-art interaction devices, so they can bring their innovations into the field of virtual reality and motion-controlled applications. Currently RUIS for Unity includes a versatile display manager for handling several display devices, and supports the use of Kinect and PlayStation Move together in the same coordinate system. This means that avatars controlled by Kinect can interact with PS Move-controlled objects; for example, a player represented by a Kinect-controlled barbarian avatar can grab a PS Move controller that is rendered as a hammer within the application.

You can develop and test your own motion-controlled applications even if you only have a mouse and keyboard, because RUIS can emulate 3D input devices with them (see the sketch below). Learn more about RUIS from its readme file.
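
For instance, a mouse can stand in for a 6DOF wand along these lines (an illustrative Unity/C# sketch, not RUIS's actual emulation layer):

    using UnityEngine;

    // Sketch: emulating a 6DOF wand with mouse and keyboard. The mouse ray
    // places the emulated wand at an adjustable distance from the camera,
    // and mouse buttons stand in for controller buttons. Illustrative only;
    // RUIS's own emulation differs in its details.
    public class MouseWandEmulationSketch : MonoBehaviour
    {
        public Camera cam;
        public float distance = 2f;

        void Update()
        {
            // The scroll wheel pushes and pulls the wand along the cursor ray.
            distance = Mathf.Max(0.2f, distance + Input.GetAxis("Mouse ScrollWheel"));
            Ray ray = cam.ScreenPointToRay(Input.mousePosition);
            transform.position = ray.GetPoint(distance);
            transform.rotation = Quaternion.LookRotation(ray.direction);

            if (Input.GetMouseButtonDown(0))
            {
                // Stand-in for the wand's trigger: selection/grabbing goes here.
            }
        }
    }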


Upcoming RUIS for Unity and a new VR course

We've been working hard since summer to port RUIS to Unity, and we are close to the first release! A strong maybe on this January as far as the release schedule goes 🙂 Below is a screenshot of our test scene in Unity:

RUIS for Unity test scene

Kinect is used to control the characters, and a PS Move controller tracks the green watering can (it can shoot water ;-} ).

RUIS for Unity will have the same features as RUIS for Processing and more, including:

  • Support for Kinect and PlayStation Move devices
  • Calibration for matching PlayStation Move and Kinect coordinate systems
  • Easily modifiable 3D selection and manipulation routines
  • Multi-display stereo-rendering

Coming later:

  • Head-mounted-display support (including Oculus Rift)
  • Kinect- and PlayStation Move-based head tracking
  • Projector keystone correction
  • Leap Motion support (if it performs well enough)

In the near future we will also update RUIS for Processing, simplifying its API and fixing some of the many remaining bugs.

New Virtual Reality course

This January a new virtual reality course is also starting at Aalto University's Department of Media Technology. The course is mainly intended for Aalto's computer science students and for Media Lab students of Aalto ARTS.

The course includes lectures about VR, and the students will create their own virtual reality applications in teams. Available to the students are Kinect, PlayStation Move, and Wiimote devices, our own virtual environment (a 2-walled semi-CAVE), an Oculus Rift head-mounted display, and Leap Motion devices (as long as they are delivered by April).

The page below will be updated by the 15th of January with information about enrolling in the course:
https://noppa.aalto.fi/noppa/kurssi/t-111.5400/etusivu