Comparing the seated Vive and Rift CV1 experience in Radial-G

Because my body was complaining a bit after 2 days of room-scale madness, I decided to test the “seated” experience in the Vive today. We all know that room-scale with tracked controllers is a transformative experience by now, but there are still plenty of games best played seated. To be honest, I expected it to be quite comparable to the seated experience in CV1, though a bit worse overall. The reality is not that straightforward.

I performed this comparison based on the game Radial-G. Why? For one, it’s a cockpit game, so quite the typical seated experience. Secondly, it natively supports both the SteamVR and Oculus APIs. And thirdly, I already own it and enjoyed it a lot on DK2.

Ergonomics

No two ways about it, the Rift CV1 is both easier to put on and take off and lighter on your head. I slightly prefer the actual face interface (heh) of the Vive – the material is a bit softer and more rounded – but it doesn’t make up for the other ergonomic advantages of the Rift.

There is one use case however in which the ergonomic advantage of the Rift doesn’t hold: glasses. At least with my head shape and glasses size, the Vive (with its cut-outs) is basically just as comfortable to use with glasses as without them, while the Rift is far more of a struggle.

Field of View

Field of view was a hot discussion topic just days ago, but seems to have settled down a bit now that we know all the measurements. In the actual game, the difference is not massive. I’ve looked closely at what I can see in each headset just moving my eyes from the default position (right after reset) and here’s what it looks like with lines indicating the borders of my vision in each HMD (excuse the crude drawing):

As you can see, the difference is there in almost every direction, but only really noticeable at the bottom of the FoV. I found this additional viewing area toward the bottom worked to stave off motion sickness in this fast-moving game, since you see more of your craft, but you can probably simply move back a bit to get a similar effect. All in all, I don’t believe the difference is as striking as a purely numerical comparison would indicate.

Tracking

Both of the HMDs tracked perfectly throughout all my testing in this game. Obviously a cockpit game is hardly a challenging tracking test, but I do have my lighthouses set up for room-scale rather than seated use, so I thought I’d remark on it anyway.

Image Quality

In terms of image quality, the only noticeable difference for me was that the contrast ratio on the Vive seemed higher. At first I thought that its black level was better, but someone pointed out to me that it could just as well be due to higher maximum brightness with the same black level. Since our eyes are relative measurement devices, the only way to be sure is to use some more specialized equipment.

Logically I know that the same number of pixels must be spread over a slightly larger relative area on the Vive, but I’d be lying if I said I noticed that in this game, even in a direct (well, a few minutes delayed) comparison.

Optics

Here we get to the single most surprising and noteworthy part, at least to me. Radial-G is a game with a lot of bright spots on dark backgrounds, so the optical artifacts introduced by the two HMDs’ Fresnel lenses gain prominence. Here’s an impression of what these look like:

As you can see, the Rift generates a straight, smooth blur from the center to the edge, while the Vive spreads the same amount of “light leakage” (or even slightly more) across concentric rings.

Now, just looking at the image comparison above in isolation, there’s a discussion to be had about which one is preferable. Clearly, on the Vive, a larger area is affected, but on the other hand that area is less severely affected. Personally, I’m not entirely sure which one I prefer.

Be that as it may, what I found is that this isolated comparison is both irrelevant and misleading. In a real game scenario, you very rarely have a flat black background with a single bright point source. There will usually be at least something going on in the background, and more than one bright spot around. And this changes the visual impression drastically.

Here’s the exact same image, but with some level of background noise added to simulate that we rarely have nothing going on in VR except for one dot on a flat background. I repeat, that’s the same strength of the respective artifacts as above. With some background in place, the more dispersed artifact on the Vive becomes almost completely unnoticeable, while the more focused glow on CV1 is still readily apparent.

This image comparison might look extreme or artificial, but it really does reflect exactly what I observed in the game. Given that, I find the type of artifacts produced by the Vive highly preferable in the general case.

C++11 chrono timers

I’m a pretty big proponent of C++ as a language, and particularly enthused about C++11 and how it makes the language even better. However, sadly, reality still lags a bit behind the specification in many areas.

One thing that was always troublesome in C++, particularly in high-performance or realtime programming, was that there was no standard, platform-independent way of getting a high performance timer. If you wanted cross-platform compatibility and fine timing granularity, you had to go with some external library, use OpenMP, or roll your own on each supported platform.

In C++11, the std::chrono namespace was introduced. It, at least in theory, provides everything you always wanted in terms of timing, right there in the standard library. Three different types of clocks are offered for different use cases: system_clock, steady_clock and high_resolution_clock.

Yesterday I wrote a small program to query and test these clocks in practice on different platforms. Here are the results:

So, sadly, things are not yet as great as they could be. For each platform, the first three blocks are the values reported for the clock, and the last block contains values determined by repeated measurements:

  • “period” is the tick period reported by each clock, in nanoseconds.
  • “unit” is the unit used by clock values, also in nanoseconds.
  • “steady” indicates whether the time between ticks is always constant for the given clock.
  • “time/iter, no clock” is the time per loop iteration for the measurement loop without the actual measurement. It’s just a reference value to better judge the overhead of the clock measurements.
  • “time/iter, clock” is the average time per iteration, with clock measurement.
  • “min time delta” is the minimum difference between two consecutive, non-identical time measurements.

On Linux with GCC 4.8.1, all clocks report a tick period of 1 nanosecond. There isn’t really a reason to doubt that, and it’s obviously a great granularity. However, the drawback is that it takes around 120 nanoseconds on average to get a clock measurement. This would be understandable for the system clock, but seems excessive in the other cases, and could cause significant perturbation when trying to measure/instrument small code areas.

On Windows with VS12, a clock period of 100 nanoseconds is reported, but the actual measured tick period is a whopping 1000000 ns (1 millisecond). That is obviously unusable for many of the use cases that would call for a “high resolution clock”. Windows is perfectly capable of supplying a true high resolution clock measurement, so this performance (or lack of it) is quite surprising. On the bright side, a measurement takes just 9 nanoseconds on average.

Clearly, both implementations tested here still have a way to go. If you want to test your own platform(s), here is the very simple program: