We just received our Oculus Rift Development Kit 2 (DK2) head-mounted display and are thrilled to report our experiences with it.
Rift DK2 in front left, its position tracking camera in front center, and two Kinect 2 sensors behind them.
The DK2 comes bundled with an infrared webcam that tracks the Rift’s position (and most likely helps to correct yaw drift in orientation as well). My first question upon unboxing the DK2 was “Where the infrared LEDs at??”
So I pointed Kinect 2’s infrared camera at it and took the picture below:
The LEDs appear as overexposed white blobs in the infrared image.
It seems that the LEDs sit beneath the Rift’s exterior, which is made of (special?) plastic that lets infrared light through but absorbs visible light, hiding the nasty insides.
DK2 demo experiences
After solving the LED mystery, we tried the following demos:
Oculus demo scene (comes with the Oculus Configuration Utility)
Oculus demo scene is best for checking out the tracking and image quality, as the scene is peaceful and its 3D objects are simple and elegant. Cyber Space is a virtual amusement park ride for those of us who want to explore our cyber-sickness limits, Horse World Online is only for the most hard-core horsie fans, and Chilling Space has a calm atmosphere (we didn’t notice how positional tracking was employed though).
DK2 has been out for a relatively short time, and I’m not aware of any killer apps for it yet. Personally I’m looking forward to the DK2 version of the Senza Peso opera.
In many ways the Oculus Rift DK2 is superior to the DK1: head position tracking is responsive and accurate, which is integral to immersion and to minimizing nausea. While the screen door (pixel border) effect is still noticeable, it’s a minor nuisance given the major improvements in other areas. The DK2’s resolution is higher, its OLED display produces a better color range, and its image is crisp because motion blur from slow pixel response times has been reduced (except for blacks). Tracking latency remains low, as it should be, and the low persistence technique really seems to do the trick, considerably reducing cyber-sickness.
Meant for each other?
That’s it for now, we’ll get back to combining DK2 with Kinect 2! It’s wonderful stuff, keep your eyes on us!
I participated in the IEEE Virtual Reality 2014 conference, held from March 29th to April 2nd in Minneapolis. Eager beavers can jump straight to the link below for a list of the best papers and demos at the conference: http://ieeevr.org/2014/awards.html
VR works better with drugs (pain relief). Timothy Leary approves.
The biggest VR2014 highlight for me was trying out Tactical Haptics’ Reactive Grip prototype:
The sense of touch is one of the major senses and perhaps the most challenging one to provide with convincing virtual sensations. Currently haptic feedback is missing from most virtual reality applications. Reactive Grip could change that for many applications: it is a cheap and simple haptic technology that could be integrated into any number of modern game controller variations. The handle of Reactive Grip utilizes four sliding contactor plates whose movement conveys the sense of inertia from the virtual object. Examples include gun recoil (kickback), the struggle of a fish caught on a fishing rod, or the hit of a sword against another virtual object. I tried a bunch of demos that included those examples. Tactical Haptics has close ties with Sixense, and in the prototype motion tracking is handled via Razer Hydra controllers.
Reactive Grip has its limitations: because it is a game controller, the Tactical Haptics’ device can only approximate sensations from rigid, hand-held objects such as virtual gun grips, fishing rods, steering wheels, and other tools. For most games and applications this should be enough though. Reactive Grip is a mechanical device and I wonder how robust it can be. Traditionally haptic devices break easily.
The funny thing is that if you close your eyes when using the controllers, the haptic feedback alone doesn’t convey what you are doing in the virtual world, due to the vagueness and low fidelity of the haptic effect. But when combined with audiovisual cues, the different perceptions merge gracefully, providing more immersion than any of the cues alone. Most importantly, the haptic feedback doesn’t contradict the audiovisual cues but rather supports them.
Tactical Haptics ran a Kickstarter campaign last autumn that unfortunately didn’t reach its goal. People really need to try this controller to see its potential. Tactical Haptics’ invention could, for the first time, bring haptic feedback to the masses, especially if one of the major console manufacturers were to adopt it.
The acquisition of Oculus VR by Facebook was a big news topic throughout the conference. As such it was a pity that we didn’t get to see the Crystal Cove or DK2 prototypes of the Rift. Vicon was in talks with Oculus VR to bring the DK2 to the conference, but Oculus had canceled public demonstrations of the DK2 due to the Facebook buyout. That’s what I heard anyway. Palmer Luckey was also supposed to participate in the conference, but apparently the Facebook acquisition and the related death threats to Oculus staff got in the way.
Several times I witnessed Oculus’ HMD referred to as Facebook Rift and FaceRift. Perhaps there was slight bitterness in the air regarding the 2 billion dollar buyout? This is understandable, as traditionally VR hasn’t been a very lucrative business, and suddenly seasoned VR researchers and practitioners see a VR company go from zero to hero in less than two years.
I talked to a person who had tried Sony’s Morpheus, DK2, and Valve’s prototype. In his opinion DK2 and Morpheus were very close to each other performance-wise. He liked Valve’s prototype the best though, because of the wide positional tracking that was implemented with camera-based inside-out-tracking of fiducial markers. With Michael Abrash joining Oculus, hopefully the good features of Valve’s prototype will be transferred to future Oculus HMDs.
The University of Minnesota presented a bunch of their VR related projects to the conference audience. The most interesting one was a high-resolution, wide-FOV HMD built from an iPad mini and a 3D printed frame. In their demo up to 6-8 people wore the HMDs, inhabiting the same virtual place simultaneously, while being tracked over a large area using a commercial optical tracker.
The HMD utilized high-quality glass optics (~$40 apiece) to spread the iPad mini’s 2048-by-1536 resolution over a FOV similar to the Oculus Rift’s. Needless to say the image was much crisper than with the Rift, whereas the iPad’s orientation tracking was slightly less responsive than the Rift’s. Overall, I was very impressed with this HMD!
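As a rough back-of-the-envelope comparison (the FOV figure below is my own assumption, not an official spec), you can see why the iPad panel looks so much sharper: with roughly the same field of view per eye, it simply packs more pixels into each degree.

```python
# Rough angular pixel density comparison between the iPad mini HMD and
# the Rift DK1. Assumption: each eye sees about half the panel width
# spread over ~90 degrees of horizontal FOV (an estimate, not a spec).

def pixels_per_degree(panel_width_px, fov_horizontal_deg=90.0):
    """Horizontal pixels per degree for one eye (half the panel width)."""
    per_eye_px = panel_width_px / 2
    return per_eye_px / fov_horizontal_deg

ipad_ppd = pixels_per_degree(2048)  # iPad mini: 2048 px wide panel
dk1_ppd = pixels_per_degree(1280)   # Rift DK1: 1280 px wide panel

print(f"iPad mini HMD: {ipad_ppd:.1f} px/deg, Rift DK1: {dk1_ppd:.1f} px/deg")
```

Under these assumptions the iPad delivers around 11 pixels per degree versus the DK1’s roughly 7, which matches the subjective difference in crispness.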
After the conference it was time to get back to basics.
P.S. I also visited the Kinect 2 Developer Preview Program Breakfast that was co-organized with Microsoft’s Build conference in San Francisco. Microsoft hopes to start selling Kinect 2 for Windows in the summer, and those of us developing with the preview version should get a Kinect 2 Unity plugin even before that.
Last week Oculus VR was acquired by Facebook for 2 billion dollars, the biggest move the virtual reality industry has seen. I speculate that this was at least partially influenced by Sony finally getting serious about head-mounted displays with their Morpheus HMD.
Another piece of news of (almost) similar proportions is that the latest version of RUIS for Unity is out: the Oculus Rift package has been updated to version 0.2.5c and several bugs have been fixed. So what can you do with RUIS for Unity? Use RUIS’ Wand prefabs to easily bring interaction via input devices like Razer Hydra, Kinect, and PlayStation Move to your application, configure multiple mono or stereo displays in Unity through RUIS’ DisplayManager, or use the MecanimBlendedCharacter prefab to blend real-time Kinect body tracking with Mecanim animation of your 3D avatar.
Speaking of Oculus Rift, apparently some people experience 150 ms latency in certain applications built with Unity. Jason Jerald found out that this can be remedied by commenting out the following line inside the SetMaximumVisualQuality() function of the OVRCameraController.cs script:
//QualitySettings.vSyncCount = 1; //comment out this line.
The year 2014 looks very promising for virtual reality: a new version of Oculus Rift is coming out, along with a plethora of VR peripherals like the Sixense STEM and Virtuix Omni. Developing applications that use these devices means that middleware and software toolkits like RUIS will have an even more important role in the future, as developers want to combine different devices or develop at higher levels of abstraction.
Valve and Sony are working on their own head-mounted displays, and who knows what surprises this year has in store for us! [update: seems like Valve is not making their own HMD after all] VR gaming is far from becoming mainstream however, and I suspect it will be 2015 at the earliest before indie developers start to make serious profit with games that exclusively require VR peripherals. I don’t expect established game companies to develop big budget VR-only games in the near future. What about games that have both a traditional UI and a VR user interface then? I have my own reservations; getting two interfaces to work in one game while sharing gameplay mechanics etc. requires a lot of work and is likely to dilute both experiences, if not botch at least one of them altogether.
New virtual reality course
Starting this January, we will run our virtual reality course for the 4th time at Aalto University (we started organizing it in 2011). Student teams will develop virtual reality applications using Oculus Rift, Kinect, PS Move, and other peripherals. Check out the projects from the previous year. Any interested Aalto University students should keep an eye on the course homepage, and note the new course name: Experimental User Interfaces. I also have access to Kinect 2, which will be supported in some future version of RUIS for Unity.
And speaking of further developments of RUIS: since autumn we’ve been working at our own pace to improve RUIS for Unity with the aim of releasing it in the Unity Asset Store. We have been improving documentation, adding essential features, fixing bugs, and making RUIS easier to use. Work has been slower than we anticipated and we missed our planned release date, as I’m busy writing publications for my PhD and Mikael has been focusing on his Master’s thesis. It’s coming however, with all the features that we used to combine Oculus Rift with Kinect, PS Move, and Razer Hydra in our TurboTuscany demo. And then some.
Below I sum up a few points that we learned while developing the TurboTuscany demo. Some of our findings are consequential, while some are common knowledge if you have developed for Razer Hydra, Kinect, or PS Move before.
Latencies of used devices, smallest first:
Oculus Rift < Razer Hydra < PS Move < Kinect
Body tracking with Kinect has easily noticeable lag, plenty of jitter, and frequent tracking failures. Nevertheless, Kinect adds a lot to the immersion and is fun to play around with.
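One common way to tame that kind of jitter (an illustrative sketch, not necessarily what RUIS does internally) is simple exponential smoothing of the joint coordinates; the catch is that smoothing trades jitter for extra perceived latency, which is exactly the Kinect trade-off described above:

```python
# Illustrative sketch: exponential smoothing of a noisy joint coordinate.
# A lower alpha gives a smoother result but adds perceived latency.

def smooth(samples, alpha=0.3):
    """Exponentially smooth a sequence of 1D joint coordinates."""
    estimate = samples[0]
    filtered = []
    for s in samples:
        # Blend the new sample with the running estimate.
        estimate = alpha * s + (1 - alpha) * estimate
        filtered.append(estimate)
    return filtered

noisy = [0.0, 0.5, -0.2, 0.4, 0.1]  # jittery samples around a slowly moving joint
print(smooth(noisy))
```

In practice a fixed alpha is crude; adaptive filters that smooth more when the joint is still and less when it moves fast give a better latency/jitter balance.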
Of all the positional head tracking methods available in our TurboTuscany demo, PS Move is the best compromise: a big tracking volume (almost as big as Kinect’s) and accurate tracking (though not as accurate as Razer Hydra’s). Therefore the best experience of our demo is achieved with Oculus Rift + Kinect + PS Move. Occlusion of the Move controller from the PS Eye’s view is a problem for positional tracking though (not for rotational).
The second best head tracking is achieved with the combination of Oculus Rift, Kinect, and Razer Hydra. This comes with the added cumbersomeness of having to wear the Hydra around the waist.
My personal opinion is that VR systems with a virtual body should track the user’s head, hands, and forward direction (chest/waist) separately. This is so that the user can look in a different direction than where they are pointing a hand-held tool/weapon, while walking in yet a third direction. In the TurboTuscany demo we achieve this with the combination of Oculus Rift, Kinect, and Hydra/Move.
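The idea of decoupled directions can be sketched in a few lines (a hypothetical illustration, not our actual implementation; all names are made up): locomotion follows only the torso’s forward vector, so the head and hand are free to point elsewhere.

```python
# Hypothetical sketch of decoupled look/aim/walk directions.
# Vectors are (x, z) ground-plane directions; locomotion uses only
# the torso vector, ignoring where the head looks or the hand aims.

def step(position, torso_forward, speed, dt):
    """Advance the avatar along the torso's forward direction."""
    x, z = position
    fx, fz = torso_forward
    return (x + fx * speed * dt, z + fz * speed * dt)

head_dir = (1.0, 0.0)    # looking to the right (e.g. Rift orientation)
hand_dir = (0.0, -1.0)   # aiming behind (e.g. Hydra/Move controller)
torso_dir = (0.0, 1.0)   # chest facing forward (e.g. Kinect torso joint)

# Only torso_dir affects movement; gaze and aim stay independent.
print(step((0.0, 0.0), torso_dir, speed=2.0, dt=0.5))
```

The point of keeping the three sources separate is that collapsing any two of them (e.g. walking wherever you look) immediately breaks the natural behavior described above.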
Latency requirements for positional head tracking
The relatively low latency of Razer Hydra’s position tracking should be low enough for many HMD use cases. When viewing nearby objects, however, the Hydra’s latency becomes apparent as you move your head. Unless STEM has some new optimization tricks, it will most likely have a different (higher?) latency than the Hydra because it’s wireless.
If head position tracking latency is less than or equal to that of Oculus Rift’s rotational tracking, that should be good enough for most HMD applications. Since this is not a scientific paper that I’m writing here, I won’t cite the earlier research that suggests latency requirements in milliseconds.
Because we had positional head tracking set up to track the point between the eyes, we first set Oculus Rift’s “Eye Center Position” to (0,0,0); this parameter determines a small translation that follows the orientation of the Rift. But we found out that the latency of our positional head tracking was apparent when moving the head close (<0.5 meters) to objects, even with Razer Hydra. Therefore we ended up setting “Eye Center Position” to the default (0, 0.15, 0.09), and viewing close objects while moving became much more natural. Thus, our positional head tracking has a “virtual” component that follows the Rift’s orientation.
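A minimal sketch of that “virtual” component (my own simplification, not the actual Oculus SDK code): the render camera sits at the tracked head position plus an eye offset that rotates with the HMD’s orientation. Below only the pitch rotation in the vertical plane is shown, using the (y, z) part of the default “Eye Center Position”.

```python
import math

# Minimal sketch (not actual SDK code): rotate the eye offset by head
# pitch and add it to the tracked head position. The offset mimics the
# default "Eye Center Position" (y=0.15, z=0.09); x is omitted.

def eye_position(head_pos, pitch_rad, offset=(0.15, 0.09)):
    """Return the (y, z) eye position for a given head pitch."""
    oy, oz = offset
    c, s = math.cos(pitch_rad), math.sin(pitch_rad)
    # 2D rotation of the offset in the vertical (y, z) plane
    ry = c * oy - s * oz
    rz = s * oy + c * oz
    hy, hz = head_pos
    return (hy + ry, hz + rz)

print(eye_position((1.6, 0.0), pitch_rad=0.0))  # level head: offset unrotated
```

Because this offset is driven by the Rift’s low-latency orientation tracking, the small head motions that matter most when leaning toward nearby objects get rendered with the Rift’s latency rather than the position tracker’s.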
And now for something completely different... We had lots of bugs in grandma when implementing the Kinect controlled avatar for the TurboTuscany demo; below are some of the results:
In the past months we’ve been adding new features, as well as Oculus Rift and Razer Hydra support, to RUIS for Unity. Our just released TurboTuscany demo showcases the new capabilities of RUIS:
Video part 2:
TurboTuscany features a 1st person view with a Kinect controlled full-body avatar and 4 methods for six-degrees-of-freedom (6DOF) head tracking:
Oculus Rift + Kinect
Oculus Rift + Razer Hydra
Oculus Rift + Razer Hydra + Kinect
Oculus Rift + PlayStation Move (+ Kinect)
Head tracking with Razer Hydra
It makes a difference to see your own body in the virtual world, and it affects the sense of presence. Those of you with Kinect: try pushing and kicking stuff, or even climbing the ladder with your hands. You can take steps freely inside Kinect’s range, and when you need to go further, just use a wireless controller to walk or run like you would in any normal game. We blend your Kinect-captured pose with Mecanim animation, so while you’re running and your feet follow a run animation clip, you can still flail your hands and upper body as you like.
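The blending described above can be sketched as a per-joint weighted mix (an illustrative simplification, not the actual RUIS/Mecanim code; joint names and weights are made up): leg joints follow the animation clip while upper-body joints keep following the live Kinect pose.

```python
# Illustrative sketch of per-joint pose blending: weight 1.0 means the
# joint follows the live Kinect pose, 0.0 means it follows the
# animation clip. Poses are {joint: angle-in-degrees} dictionaries.

KINECT_WEIGHT = {"left_hand": 1.0, "right_hand": 1.0, "spine": 0.8,
                 "left_foot": 0.0, "right_foot": 0.0}

def blend_pose(kinect_pose, anim_pose):
    """Linearly blend two poses joint by joint using KINECT_WEIGHT."""
    return {joint: KINECT_WEIGHT[joint] * kinect_pose[joint]
                   + (1 - KINECT_WEIGHT[joint]) * anim_pose[joint]
            for joint in KINECT_WEIGHT}

kinect = {"left_hand": 30.0, "right_hand": -10.0, "spine": 5.0,
          "left_foot": 2.0, "right_foot": -2.0}
anim = {"left_hand": 0.0, "right_hand": 0.0, "spine": 0.0,
        "left_foot": 45.0, "right_foot": -45.0}

print(blend_pose(kinect, anim))  # hands from Kinect, feet from the run clip
```

Real skeletal blending operates on joint rotations (quaternions) rather than scalar angles, but the weighting idea is the same: the weights decide, joint by joint, which source drives the avatar.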
(Kinect users will need to install the 32-bit Windows version of OpenNI 22.214.171.124. See the readme that comes with the demo for other details.)
Positional head tracking with Kinect alone is quite rough, so try the Razer+Kinect or PS Move option if you can.
Windows operating system: Vista, Windows 7, Windows 8 (see comment section for details)
In the near future we will release an Oculus Rift demo that can be used with Kinect, Razer Hydra, or PlayStation Move controllers (or any combination of those). This will be followed by a new version of RUIS for Unity, which will allow you to easily create your own applications that use the aforementioned devices.
This spring we organized our virtual reality course at Aalto University for the third time using RUIS. We asked the students to create virtual reality applications that use Kinect and PlayStation Move controllers together. This resulted in games that would not be possible with either device alone:
The student projects seen in the video were created with RUIS for Unity in Aalto University’s virtual reality course in 2013. The majority of the students had no prior experience in virtual reality development.
We finally received our Oculus Rift development kit! As far as we know, this is the 3rd Oculus Rift kit in Finland (Unity’s office in Finland has one, and another went to an early Finnish Oculus Rift Kickstarter backer).
We will soon add Oculus Rift support to RUIS for Unity. Stay tuned!
This year’s IEEE Virtual Reality 2013 conference was held in Disney World’s Swan Hotel in Orlando, Florida. Leading VR researchers and practitioners (most of you might recognize at least one of them, Palmer Luckey of Oculus VR) gathered to present the latest academic research and commercial technology, and to discuss different aspects of VR. I was there to participate in the annual 3DUI contest with my immersive 3D user interface for Blender (which received the “Best low-cost solution” prize).
Above is a video of WorldViz’s Pit demo in all its glory. During the conference I tried it myself while being tracked by a high-end motion tracker and wearing the nVisor SX111, a $30,000 HMD from NVIS. It was so immersive that my legs were shaking, just from walking on a narrow plank high above the pit. The presenter encouraged me to jump down the 10 meter drop, something I hesitated over for a few seconds, even though I was fully aware of the virtual nature of the fall. No game that I have played on an external display has ever made me scared like that! While the 1.3 kg weight of the HMD was distracting when tilting my head up or down, the 1280×1024 resolution, crisp picture quality, and the 102-degree horizontal field of view were simply amazing.
Palmer Luckey trying on nVisor SX111.
Sixense Entertainment is a company creating software for the Razer Hydra, a 6 degrees-of-freedom (DOF) controller that reminds me of PlayStation Move. Their booth had an intuitive 3D modeling application called MakeVR running with Oculus Rift. It was not the final Rift version that will start shipping at the end of this month, yet it was a very recent prototype, judging from the fact that it was not held together by duct tape. In the two photos below you can see Palmer experiencing Sixense’s MakeVR software with Oculus Rift.
Yours truly wearing Oculus Rift and controlling MakeVR with Razer Hydra.
I have worn several HMDs in the past, ranging from professional $50,000 devices to Sony’s HMZ-T1. Putting on the Oculus Rift for the first time was an unforgettable experience. What struck me first was the low resolution and image blurriness (probably due to cheap optics). Before using the MakeVR application with the Rift, I had used MakeVR with a normal 2D display without problems, but now I was completely lost. The MakeVR interface (beta version) was not really optimized for the Rift, and the low resolution did not help the experience. This is an important reminder for VR developers that porting your application to an HMD requires plenty of user interface consideration, and even that does not guarantee that the application will be easier or more efficient to use (while it might become more “immersive”). The resolution in the developer kit version of the Rift was so low that I could only read the text elements of a floating 3D widget by bringing it so close that it covered everything else. Good luck to snipers in first-person-shooter (FPS) games: just try to spot the enemy between these Duplo-sized pixels. You can try it yourself by playing a game at a resolution of 640×800 and sticking your face so close to the display that it fills your field of view. Do note that I’m purposely comparing the Rift with high-end HMDs, because they set the bar when it comes to visual quality.
It would have been interesting to try the Pit demo with the Rift in order to compare how much its lower quality optics and lower resolution impact the level of immersion. Unfortunately the Pit demo was not available for Oculus Rift.
While my first experience with Oculus Rift was a rude awakening, I have a very positive outlook on Oculus’ future. The Rift weighs very little, is very affordable, and the promised API support for the UDK and Unity engines is just what developers need. The low resolution of the Rift will be addressed in the consumer version (slated for release in 2014); the Oculus team is aiming for at least a 1080p display (960×1080 resolution per eye). Hopefully the optics will be adjustable to minimize any blurriness. I’m slightly concerned that all the Oculus Rift hype that has been building up since August 2012 will backfire as a disappointed public reaction to the developer kit version, and that backers and the general audience will lose interest in the future consumer version. That would be a shame, as the consumer version of the Rift seems very promising. My opinion is that the developer kit version of the Rift will not deliver the VR bliss that some have been expecting. Instead, it will give developers the chance to refine their VR game and application concepts before the launch of the consumer version.
One of the most interesting panel discussions of the conference was the Consumer VR panel with Jason Jerald from NextGen Interactions, Palmer Luckey, Sébastien Kuntz from I’m in VR, David A. Smith from Lockheed Martin GTL, and Amir Rubin from Sixense. Palmer revealed that Oculus VR is well prepared to handle any liability issues that might result from users injuring themselves while using the Rift. Palmer also mentioned that Team Fortress 2 will have seven (if I remember the number correctly) different control modes for Oculus Rift that the player can choose from; “the least annoying one” of the seven was chosen as the default. The plethora of control modes highlights the challenges of developing 3D user interfaces, where there are no proper standards yet. This problem is pronounced when incorporating immersive technology into existing genres like FPS games, where we have strong preconceptions about how they should be controlled. Some panelists argued that entirely new games should be created from the ground up with VR in mind, instead of just porting existing games. It was also mentioned that in multiplayer games like TF2, players with the Rift will be at a disadvantage compared to other players due to the head-tracked controls and low resolution.
I got the opportunity to try on Google Glass, as one conference participant had brought it with him. While taking photos of Glass was forbidden, I had the pleasure of playing around with it. My experience was brief: Voice commands and the glass frame’s touch interface seemed to work nicely, with the exception that the device kept turning itself off, apparently because my messy hair kept brushing against the off button. Time to get a haircut..?
The VR2013 conference had several commercial vendors, one of them being zSpace Inc. I’m quite excited about their device (also named zSpace), as I have experience with digital painting and 3D modeling: the zSpace is a 24″ passive 3D display that has four embedded IR cameras for tracking a 6DOF digital pen and the user’s head. The head-tracking allows rendering 3D content from the user’s point of view, making zSpace a desktop fishbowl VR system meant for 3D artists. While the tracking and the 3D manipulation capabilities of zSpace are wonderful, the current version (priced around $4,000) is of limited use unless someone comes up with a killer graphics app that has an unprecedented workflow for a 3D user interface. I suggested to zSpace’s representative that if they could make the screen work like Wacom’s Cintiq while retaining the other features, that would be awesome: digital artists could work with the device just as they would with their normal drawing tablet, and then seamlessly start doing 6DOF manipulation by just lifting the pen from the surface. This would be the best way to introduce 3D motion controls to artists: by extending the capabilities of the tools that they already use.
As part of the conference, various Florida Universities presented their VR related projects. Most projects dealt with medical systems, but there were other applications as well, like the Kinect controlled quadcopter seen below.
Sean is a robot portraying a 14-year-old boy straight from the uncanny valley (I’m thinking of Johnny Cab). Sean has an animated face that is rear-projected onto a head-shaped screen, making all kinds of awkward, lip-synced facial expressions. The wheelchair and Sean’s neck motion were operated by an actress who also performed Sean’s voice live. She was nowhere to be seen in the exhibition hall, but she did a great job portraying the character using cameras mounted on Sean.
Virtual prostate examination; the future of VR games?
See you next year in VR2014!
P.S. If you’re an aspiring VR game developer, you should go experience DisneyQuest in Walt Disney World. DisneyQuest has VR games that are pioneering in the sense that they have been used by large audiences. They are also not very fun (at least the ones I tried), which gives developers ideas about what works and what doesn’t.